fs: Add read_folio documentation

Convert all the ->readpage documentation to ->read_folio.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
@@ -1256,7 +1256,7 @@ inline encryption hardware will encrypt/decrypt the file contents.
 When inline encryption isn't used, filesystems must encrypt/decrypt
 the file contents themselves, as described below:

-For the read path (->readpage()) of regular files, filesystems can
+For the read path (->read_folio()) of regular files, filesystems can
 read the ciphertext into the page cache and decrypt it in-place. The
 page lock must be held until decryption has finished, to prevent the
 page from becoming visible to userspace prematurely.
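As an aside, the pattern this hunk documents can be sketched minimally as
below; myfs_read_block() is a hypothetical synchronous I/O helper, and the
page-based fscrypt_decrypt_pagecache_blocks() signature is the one current
at the time of this commit::

        #include <linux/fs.h>
        #include <linux/pagemap.h>
        #include <linux/fscrypt.h>

        static int myfs_read_folio(struct file *file, struct folio *folio)
        {
                struct inode *inode = folio->mapping->host;
                int err;

                /* Read the ciphertext into the still-locked folio. */
                err = myfs_read_block(inode, folio);    /* hypothetical */

                /* Decrypt in place before anyone can observe the folio. */
                if (!err && fscrypt_needs_contents_encryption(inode))
                        err = fscrypt_decrypt_pagecache_blocks(&folio->page,
                                                folio_size(folio), 0);
                if (!err)
                        folio_mark_uptodate(folio);
                folio_unlock(folio);    /* only now may userspace see it */
                return err;
        }

Keeping the folio locked across the decryption is what prevents a
concurrent fault or read from observing half-decrypted contents.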
......
@@ -548,7 +548,7 @@ already verified). Below, we describe how filesystems implement this.
 Pagecache
 ~~~~~~~~~

-For filesystems using Linux's pagecache, the ``->readpage()`` and
+For filesystems using Linux's pagecache, the ``->read_folio()`` and
 ``->readahead()`` methods must be modified to verify pages before they
 are marked Uptodate. Merely hooking ``->read_iter()`` would be
 insufficient, since ``->read_iter()`` is not used for memory maps.
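For example, the completion side of a ->read_folio() might verify before
marking Uptodate. A sketch, assuming the page-based fsverity_verify_page()
entry point of this era; the myfs_ names are hypothetical::

        #include <linux/fsverity.h>
        #include <linux/pagemap.h>

        static void myfs_read_done(struct folio *folio, int err)
        {
                struct inode *inode = folio->mapping->host;

                /* Never mark the folio Uptodate until it verifies. */
                if (!err && fsverity_active(inode) &&
                    !fsverity_verify_page(&folio->page))
                        err = -EBADMSG;

                if (!err)
                        folio_mark_uptodate(folio);
                folio_unlock(folio);
        }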
......
@@ -237,7 +237,7 @@ address_space_operations
 prototypes::

        int (*writepage)(struct page *page, struct writeback_control *wbc);
-       int (*readpage)(struct file *, struct page *);
+       int (*read_folio)(struct file *, struct folio *);
        int (*writepages)(struct address_space *, struct writeback_control *);
        bool (*dirty_folio)(struct address_space *, struct folio *folio);
        void (*readahead)(struct readahead_control *);
@@ -268,7 +268,7 @@ locking rules:
 ops                    PageLocked(page)         i_rwsem   invalidate_lock
 ====================== ======================== ========= ===============
 writepage:             yes, unlocks (see below)
-readpage:              yes, unlocks                       shared
+read_folio:            yes, unlocks                       shared
 writepages:
 dirty_folio            maybe
 readahead:             yes, unlocks                       shared
@@ -289,13 +289,13 @@ swap_activate: no
 swap_deactivate:       no
 ====================== ======================== ========= ===============

-->write_begin(), ->write_end() and ->readpage() may be called from
+->write_begin(), ->write_end() and ->read_folio() may be called from
 the request handler (/dev/loop).

-->readpage() unlocks the page, either synchronously or via I/O
+->read_folio() unlocks the folio, either synchronously or via I/O
 completion.

-->readahead() unlocks the pages that I/O is attempted on like ->readpage().
+->readahead() unlocks the folios that I/O is attempted on like ->read_folio().

 ->writepage() is used for two purposes: for "memory cleansing" and for
 "sync". These are quite different operations and the behaviour may differ
......
@@ -96,7 +96,7 @@ attached to an inode (or NULL if fscache is disabled)::
 Buffered Read Helpers
 =====================

-The library provides a set of read helpers that handle the ->readpage(),
+The library provides a set of read helpers that handle the ->read_folio(),
 ->readahead() and much of the ->write_begin() VM operations and translate them
 into a common call framework.
@@ -136,8 +136,8 @@ Read Helper Functions
 Three read helpers are provided::

        void netfs_readahead(struct readahead_control *ractl);
-       int netfs_readpage(struct file *file,
-                          struct page *page);
+       int netfs_read_folio(struct file *file,
+                            struct folio *folio);
        int netfs_write_begin(struct file *file,
                              struct address_space *mapping,
                              loff_t pos,
@@ -148,7 +148,7 @@ Three read helpers are provided::
 Each corresponds to a VM address space operation. These operations use the
 state in the per-inode context.

-For ->readahead() and ->readpage(), the network filesystem just point directly
+For ->readahead() and ->read_folio(), the network filesystem just point directly
 at the corresponding read helper; whereas for ->write_begin(), it may be a
 little more complicated as the network filesystem might want to flush
 conflicting writes or track dirty data and needs to put the acquired folio if
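Wiring the helpers up is then one line per op. A sketch of the read side
(the write_begin wrapper is omitted because, as noted above, it is
filesystem-specific)::

        #include <linux/fs.h>
        #include <linux/netfs.h>

        const struct address_space_operations myfs_aops = {
                .read_folio     = netfs_read_folio,
                .readahead      = netfs_readahead,
                /* .write_begin usually wraps netfs_write_begin() with
                 * filesystem-specific flushing of conflicting writes. */
        };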
......
@@ -656,7 +656,7 @@ by memory-mapping the page. Data is written into the address space by
 the application, and then written-back to storage typically in whole
 pages, however the address_space has finer control of write sizes.

-The read process essentially only requires 'readpage'. The write
+The read process essentially only requires 'read_folio'. The write
 process is more complicated and uses write_begin/write_end or
 dirty_folio to write data into the address_space, and writepage and
 writepages to writeback data to storage.
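Put together, a simple filesystem's operations table typically looks
something like this sketch; the myfs_ members are hypothetical, while
filemap_dirty_folio() is the generic helper many filesystems use::

        const struct address_space_operations myfs_aops = {
                .read_folio     = myfs_read_folio,
                .readahead      = myfs_readahead,
                .writepage      = myfs_writepage,
                .writepages     = myfs_writepages,
                .write_begin    = myfs_write_begin,
                .write_end      = myfs_write_end,
                .dirty_folio    = filemap_dirty_folio,
        };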
@@ -722,7 +722,7 @@ cache in your filesystem. The following members are defined:
        struct address_space_operations {
                int (*writepage)(struct page *page, struct writeback_control *wbc);
-               int (*readpage)(struct file *, struct page *);
+               int (*read_folio)(struct file *, struct folio *);
                int (*writepages)(struct address_space *, struct writeback_control *);
                bool (*dirty_folio)(struct address_space *, struct folio *);
                void (*readahead)(struct readahead_control *);
@@ -772,14 +772,14 @@ cache in your filesystem. The following members are defined:
        See the file "Locking" for more details.

-``readpage``
-       called by the VM to read a page from backing store. The page
-       will be Locked when readpage is called, and should be unlocked
-       and marked uptodate once the read completes. If ->readpage
-       discovers that it needs to unlock the page for some reason, it
-       can do so, and then return AOP_TRUNCATED_PAGE. In this case,
-       the page will be relocated, relocked and if that all succeeds,
-       ->readpage will be called again.
+``read_folio``
+       called by the VM to read a folio from backing store. The folio
+       will be locked when read_folio is called, and should be unlocked
+       and marked uptodate once the read completes. If ->read_folio
+       discovers that it cannot perform the I/O at this time, it can
+       unlock the folio and return AOP_TRUNCATED_PAGE. In this case,
+       the folio will be looked up again, relocked and if that all succeeds,
+       ->read_folio will be called again.

 ``writepages``
        called by the VM to write out pages associated with the
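The AOP_TRUNCATED_PAGE protocol described in the new text, as a sketch
with hypothetical myfs_ helpers::

        static int myfs_read_folio(struct file *file, struct folio *folio)
        {
                /* Suppose the I/O cannot be performed right now, e.g.
                 * because a lock that ranks above the folio lock must
                 * be taken first. */
                if (!myfs_can_read_now(folio)) {        /* hypothetical */
                        folio_unlock(folio);
                        /* The caller looks the folio up again, relocks
                         * it, and calls ->read_folio() once more. */
                        return AOP_TRUNCATED_PAGE;
                }
                return myfs_do_read(folio);     /* hypothetical */
        }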
......