Accessing a File (Linux Kernel)
Accessing Files
Different Ways to Access a File
- Canonical Mode (O_SYNC and O_DIRECT cleared)
- Synchronous Mode (O_SYNC flag set)
- Memory Mapping Mode
- Direct I/O Mode (O_DIRECT flag set, user space <-> disk)
- Asynchronous Mode
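As a quick user-space illustration (not part of the original notes; the file name is made up), the access mode is selected by the flags passed to open(2). A minimal sketch:

#define _GNU_SOURCE          /* O_DIRECT requires _GNU_SOURCE with glibc */
#include <fcntl.h>
#include <unistd.h>

void open_modes(void)
{
    /* Canonical mode: O_SYNC and O_DIRECT both cleared. */
    int fd_canonical = open("data.bin", O_RDWR);

    /* Synchronous mode: write(2) returns only after the data reaches disk. */
    int fd_sync = open("data.bin", O_RDWR | O_SYNC);

    /* Direct I/O mode: transfers go straight between user space and disk. */
    int fd_direct = open("data.bin", O_RDWR | O_DIRECT);

    close(fd_canonical);
    close(fd_sync);
    close(fd_direct);
}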
Reading a file is page-based: the kernel always transfers whole pages of data at once.
Allocate a new page frame -> fill the page with the suitable portion of the file -> add the page to the page cache -> copy the requested bytes to the process address space
Writing to a file may involve disk space allocation because the file size may increase.
Reading from a File
/**
 * do_generic_file_read - generic file read routine
 * @filp: the file to read
 * @ppos: current file position
 * @desc: read_descriptor
 * @actor: read method
 *
 * This is a generic file read routine, and uses the
 * mapping->a_ops->readpage() function for the actual low-level stuff.
 *
 * This is really ugly. But the goto's actually try to clarify some
 * of the logic when it comes to error handling etc.
 */
static void do_generic_file_read(struct file *filp, loff_t *ppos,
        read_descriptor_t *desc, read_actor_t actor)
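From user space this path is reached by an ordinary read(2) in canonical mode. A minimal sketch (assumption: the helper and path are illustrative; for disk-based filesystems that use the generic file operations, each read ends up filling the page cache page by page and copying the requested bytes into the user buffer):

#include <fcntl.h>
#include <unistd.h>

long read_whole_file(const char *path)    /* hypothetical helper */
{
    char buf[4096];
    long total = 0;
    ssize_t n;

    int fd = open(path, O_RDONLY);         /* canonical mode */
    if (fd < 0)
        return -1;

    while ((n = read(fd, buf, sizeof(buf))) > 0)
        total += n;                        /* bytes are copied from the page cache */

    close(fd);
    return n < 0 ? -1 : total;
}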
Read-Ahead of Files
Many disk accesses are sequential; that is, many adjacent sectors on disk are likely to be fetched when handling a series of a process's read requests on the same file.
Read-ahead consists of reading several adjacent pages of data of a regular file or block device file before they are actually requested. In most cases this greatly improves system performance, because it lets the disk controller handle fewer, larger commands. The kernel reduces or even disables read-ahead when it detects that accesses to the file are random.
Natural language description -> design (data structure + algo) -> code
Description:
- Read-ahead may be gradually increased as long as the process keeps accessing the file sequentially.
- Read-ahead must be scaled down or even disabled when the current access is not sequential.
- Read-ahead should be stopped when the process keeps accessing the same pages over and over again or when almost all the pages of the file are already in the page cache.
Design:
Current window: a contiguous portion of the file consisting of pages being requested by the process
Ahead window: a contiguous portion of the file following the ones in the current window
/*
* Track a single file's readahead state
*/
struct file_ra_state {
    pgoff_t start;             /* where readahead started */
    unsigned int size;         /* # of readahead pages */
    unsigned int async_size;   /* do asynchronous readahead when
                                  there are only # of pages ahead */
    unsigned int ra_pages;     /* Maximum readahead window */
    unsigned int mmap_miss;    /* Cache miss stat for mmap accesses */
    loff_t prev_pos;           /* Cache last read() position */
};
struct file {
    …
    struct file_ra_state f_ra;
    …
};
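The fields above support a simple heuristic: an access is treated as sequential when it starts where the previous one ended, in which case the window grows; otherwise it shrinks. A rough user-space illustration (assumption: this is not the kernel's actual algorithm, only the idea behind prev_pos and ra_pages):

#include <stddef.h>

struct ra_state {
    long long prev_end;        /* end of the previous read (cf. prev_pos) */
    unsigned int window;       /* current read-ahead window, in pages */
    unsigned int max_window;   /* upper bound (cf. ra_pages) */
};

static void on_read(struct ra_state *ra, long long pos, size_t len)
{
    if (pos == ra->prev_end) {
        /* Sequential access: grow the window, up to the maximum. */
        ra->window = ra->window ? ra->window * 2 : 4;
        if (ra->window > ra->max_window)
            ra->window = ra->max_window;
    } else {
        /* Random access: scale the window back down. */
        ra->window = 0;
    }
    ra->prev_end = pos + (long long)len;
}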
When is the read-ahead algorithm executed?
1. Read pages of file data
2. Allocate a page for a file memory mapping
3. readahead(), posix_fadvise(), or madvise() is issued
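Read-ahead can also be triggered or tuned explicitly from user space with the calls in point 3. A hedged sketch (the path and the 1 MiB size are illustrative):

#define _GNU_SOURCE            /* for readahead(2) */
#include <fcntl.h>
#include <unistd.h>

int prefetch(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    /* Hint that the file will be read sequentially (enlarges read-ahead). */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

    /* Explicitly read the first 1 MiB into the page cache ahead of time. */
    readahead(fd, 0, 1024 * 1024);

    close(fd);
    return 0;
}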
Writing to a File
Deferred write: write(2) normally copies the data into the page cache and returns; the dirty pages are flushed to disk later by the kernel's writeback mechanism.
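Because writes are deferred, a process that needs its data on stable storage must force the flush itself. A minimal sketch (assumption: illustrative file name and message):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int write_durably(const char *path, const char *msg)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    write(fd, msg, strlen(msg));   /* lands in the page cache as a dirty page */
    fsync(fd);                     /* blocks until the page reaches the disk  */

    close(fd);
    return 0;
}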
Memory Mapping
- Shared Memory Mapping
- Private Memory Mapping
System call: mmap(), munmap(), msync()
mmap, munmap - map or unmap files or devices into memory
msync - synchronize a file with a memory map
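A hedged user-space sketch of these three calls working together (assumption: the path is illustrative and the file is at least one page long):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int patch_first_byte(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        close(fd);
        return -1;
    }

    p[0] = '#';                /* first touch faults the page in (demand paging) */
    msync(p, 4096, MS_SYNC);   /* write the dirty page back to the file */

    munmap(p, 4096);
    close(fd);
    return 0;
}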
The kernel offers several hooks to customize the memory mapping mechanism for every different filesystem. The core of memory mapping implementation is delegated to a file object’s method named mmap. For disk-based filesystems and for block devices, this method is implemented by a generic function called generic_file_mmap().
The memory mapping mechanism depends on the demand paging mechanism.
For reasons of efficiency, page frames are not assigned to a memory mapping right after it has been created, but at the last possible moment, that is, when the process tries to address one of its pages, thus causing a Page Fault exception.
Non-Linear Memory Mapping
The remap_file_pages() system call is used to create a non-linear mapping, that is, a mapping in which the pages of the file are mapped into a non-sequential order in memory. The advantage of using remap_file_pages() over repeated calls to mmap(2) is that the former approach does not require the kernel to create additional VMA (Virtual Memory Area) data structures.
To create a non-linear mapping we perform the following steps:
1. Use mmap(2) to create a mapping (which is initially linear). This mapping must be created with the MAP_SHARED flag.
2. Use one or more calls to remap_file_pages() to rearrange the correspondence between the pages of the mapping and the pages of the file. It is possible to map the same page of a file into multiple locations within the mapped region.
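A hedged sketch of the two steps (assumptions: the file name and sizes are illustrative, and note that remap_file_pages(2) is deprecated in recent kernels):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int nonlinear_map(const char *path)
{
    long pg = sysconf(_SC_PAGESIZE);
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;

    /* Step 1: an ordinary (linear) MAP_SHARED mapping of two pages. */
    char *base = mmap(NULL, 2 * pg, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) {
        close(fd);
        return -1;
    }

    /* Step 2: remap the second page of the mapping onto file page 0. */
    remap_file_pages(base + pg, pg, 0 /* prot must be 0 */, 0 /* pgoff */, 0);

    munmap(base, 2 * pg);
    close(fd);
    return 0;
}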
Direct I/O Transfer
There is no substantial difference between:
1. Accessing a regular file through the filesystem
2. Accessing it by referencing its blocks on the underlying block device file
3. Establishing a file memory mapping
However, some highly sophisticated programs (self-caching applications such as high-performance servers) would like to have full control of the I/O data transfer mechanism.
Linux offers a simple way to bypass the page cache: direct I/O transfer.
O_DIRECT
generic_file_direct_IO() -> __blockdev_direct_IO(); the latter does not return until all direct I/O data transfers have been completed.
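Direct I/O places alignment requirements on the user buffer, the file offset, and the transfer size (typically the device's logical block size). A hedged sketch (assumption: 4096-byte alignment is sufficient on the target device; the path is illustrative):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

ssize_t direct_read(const char *path)
{
    void *buf;
    ssize_t n = -1;

    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0)
        return -1;

    /* Aligned buffer: the transfer goes straight into user space. */
    if (posix_memalign(&buf, 4096, 4096) == 0) {
        n = read(fd, buf, 4096);   /* bypasses the page cache */
        free(buf);
    }
    close(fd);
    return n;
}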
Asynchronous I/O
“Asynchronous” essentially means that when a User Mode process invokes a library function to read or write a file, the function terminates as soon as the read or write operation has been enqueued, possibly even before the real I/O data transfer takes place. The calling process thus continues its execution while the data is being transferred.
aio_read(3), aio_cancel(3), aio_error(3), aio_fsync(3), aio_return(3), aio_suspend(3), aio_write(3)
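A small hedged sketch of this library interface (assumption: the file name is illustrative; on older systems, link with -lrt):

#include <aio.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

ssize_t async_read_first_block(const char *path)
{
    static char buf[4096];      /* must stay valid while the request is in flight */
    struct aiocb cb;

    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    aio_read(&cb);                       /* returns as soon as the request is enqueued */

    /* ... the process keeps running while the transfer is in flight ... */

    while (aio_error(&cb) == EINPROGRESS)
        usleep(1000);                    /* or aio_suspend(3) to wait properly */

    ssize_t n = aio_return(&cb);
    close(fd);
    return n;
}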
Asynchronous I/O Implementation
- User-level Implementation
- Kernel-level Implementation
User-level Implementation:
Clone the current process -> the child process issues synchronous I/O requests -> the aio_*() call returns in the parent process
Kernel-level Implementation:
io_setup(2), io_cancel(2), io_destroy(2), io_getevents(2), io_submit(2)
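These system calls have no glibc wrappers, so they are usually reached through libaio or syscall(2). A hedged raw-syscall sketch (assumptions: illustrative file name, error handling trimmed):

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/aio_abi.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

ssize_t kernel_aio_read(const char *path)
{
    static char buf[4096];
    aio_context_t ctx = 0;
    struct iocb cb;
    struct iocb *cbs[1] = { &cb };
    struct io_event ev;

    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    syscall(SYS_io_setup, 8, &ctx);           /* create an AIO context */

    memset(&cb, 0, sizeof(cb));
    cb.aio_lio_opcode = IOCB_CMD_PREAD;
    cb.aio_fildes     = fd;
    cb.aio_buf        = (unsigned long)buf;
    cb.aio_nbytes     = sizeof(buf);
    cb.aio_offset     = 0;

    syscall(SYS_io_submit, ctx, 1, cbs);              /* enqueue the read */
    syscall(SYS_io_getevents, ctx, 1, 1, &ev, NULL);  /* wait for completion */
    syscall(SYS_io_destroy, ctx);

    close(fd);
    return (ssize_t)ev.res;                   /* bytes read (or negative errno) */
}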
