fs: Remove ->readpages address space operation

All filesystems have now been converted to use ->readahead, so
remove the ->readpages operation and fix all the comments that
used to refer to it.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
parent ebf921a9
@@ -549,7 +549,7 @@ Pagecache
 ~~~~~~~~~
 For filesystems using Linux's pagecache, the ``->readpage()`` and
-``->readpages()`` methods must be modified to verify pages before they
+``->readahead()`` methods must be modified to verify pages before they
 are marked Uptodate.  Merely hooking ``->read_iter()`` would be
 insufficient, since ``->read_iter()`` is not used for memory maps.
@@ -611,7 +611,7 @@ workqueue, and then the workqueue work does the decryption or
 verification.  Finally, pages where no decryption or verity error
 occurred are marked Uptodate, and the pages are unlocked.
-Files on ext4 and f2fs may contain holes.  Normally, ``->readpages()``
+Files on ext4 and f2fs may contain holes.  Normally, ``->readahead()``
 simply zeroes holes and sets the corresponding pages Uptodate; no bios
 are issued.  To prevent this case from bypassing fs-verity, these
 filesystems use fsverity_verify_page() to verify hole pages.
@@ -778,7 +778,7 @@ weren't already directly answered in other parts of this document.
   - To prevent bypassing verification, pages must not be marked
     Uptodate until they've been verified.  Currently, each
     filesystem is responsible for marking pages Uptodate via
-    ``->readpages()``.  Therefore, currently it's not possible for
+    ``->readahead()``.  Therefore, currently it's not possible for
     the VFS to do the verification on its own.  Changing this would
     require significant changes to the VFS and all filesystems.
......
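The ordering rule above (never mark a page Uptodate before it passes verification) can be sketched as a standalone toy model; none of these names are kernel APIs, and `toy_verify()` is a hypothetical stand-in for fsverity_verify_page():

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model (not kernel code): a page becomes Uptodate only after the
 * verity check passes; a failed check leaves it !Uptodate, so readers
 * never observe unverified data. */
struct toy_page {
	bool uptodate;
	bool error;
	int data;
};

/* Hypothetical stand-in for fsverity_verify_page(): accept even data only. */
static bool toy_verify(const struct toy_page *p)
{
	return p->data % 2 == 0;
}

/* Mirrors the read-completion path described above: fill the page,
 * verify it, and only then mark it Uptodate. */
static void toy_read_complete(struct toy_page *p, int data)
{
	p->data = data;			/* I/O has filled the page */
	if (toy_verify(p))
		p->uptodate = true;	/* safe to expose to readers */
	else
		p->error = true;	/* stays !Uptodate; the read fails */
}
```

The point of the ordering is that Uptodate is the gate other readers check, so it must be the last flag set on the success path.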
@@ -241,8 +241,6 @@ prototypes::
 	int (*writepages)(struct address_space *, struct writeback_control *);
 	bool (*dirty_folio)(struct address_space *, struct folio *folio);
 	void (*readahead)(struct readahead_control *);
-	int (*readpages)(struct file *filp, struct address_space *mapping,
-			struct list_head *pages, unsigned nr_pages);
 	int (*write_begin)(struct file *, struct address_space *mapping,
 				loff_t pos, unsigned len, unsigned flags,
 				struct page **pagep, void **fsdata);
@@ -274,7 +272,6 @@ readpage:		yes, unlocks				shared
 writepages:
 dirty_folio:		maybe
 readahead:		yes, unlocks				shared
-readpages:		no					shared
 write_begin:		locks the page		 exclusive
 write_end:		yes, unlocks		 exclusive
 bmap:
@@ -300,9 +297,6 @@ completion.
 ->readahead() unlocks the pages that I/O is attempted on like ->readpage().
-->readpages() populates the pagecache with the passed pages and starts
-I/O against them.  They come unlocked upon I/O completion.
 ->writepage() is used for two purposes: for "memory cleansing" and for
 "sync".  These are quite different operations and the behaviour may differ
 depending upon the mode.
......
@@ -726,8 +726,6 @@ cache in your filesystem.  The following members are defined:
 	int (*writepages)(struct address_space *, struct writeback_control *);
 	bool (*dirty_folio)(struct address_space *, struct folio *);
 	void (*readahead)(struct readahead_control *);
-	int (*readpages)(struct file *filp, struct address_space *mapping,
-			struct list_head *pages, unsigned nr_pages);
 	int (*write_begin)(struct file *, struct address_space *mapping,
 				loff_t pos, unsigned len, unsigned flags,
 				struct page **pagep, void **fsdata);
@@ -817,15 +815,6 @@ cache in your filesystem.  The following members are defined:
 	completes successfully.  Setting PageError on any page will be
 	ignored; simply unlock the page if an I/O error occurs.
-``readpages``
-	called by the VM to read pages associated with the address_space
-	object.  This is essentially just a vector version of readpage.
-	Instead of just one page, several pages are requested.
-	readpages is only used for read-ahead, so read errors are
-	ignored.  If anything goes wrong, feel free to give up.
-	This interface is deprecated and will be removed by the end of
-	2020; implement readahead instead.
 ``write_begin``
 	Called by the generic buffered write code to ask the filesystem
 	to prepare to write len bytes at the given offset in the file.
......
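The documentation change above reflects the interface difference: ``->readahead()`` pulls pages one at a time from a readahead_control describing a contiguous range, rather than walking the list_head of pages that ``->readpages()`` received. A standalone toy model of that iteration pattern (all names here are illustrative stand-ins, not the kernel's structures):

```c
#include <assert.h>

/* Toy model (not the kernel API): the control structure describes a
 * contiguous batch of page indices to read ahead. */
struct toy_ractl {
	unsigned long index;	/* first page not yet taken */
	unsigned long nr_pages;	/* pages remaining in the batch */
};

/* Stand-in for readahead_page(): hand out the next page index, or -1
 * when the batch is exhausted. */
static long toy_readahead_page(struct toy_ractl *rac)
{
	if (rac->nr_pages == 0)
		return -1;
	rac->nr_pages--;
	return (long)rac->index++;
}

/* A minimal "filesystem" ->readahead(): consume every page in the
 * batch; a real implementation would start I/O, here we just count. */
static unsigned long toy_readahead(struct toy_ractl *rac)
{
	unsigned long started = 0;

	while (toy_readahead_page(rac) >= 0)
		started++;
	return started;
}
```

Because the range is contiguous by construction, a filesystem can also consume several pages at once to build larger I/Os, which the old list-based interface made awkward.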
@@ -645,7 +645,7 @@ static int btrfs_extent_same_range(struct inode *src, u64 loff, u64 len,
 	int ret;
 	/*
-	 * Lock destination range to serialize with concurrent readpages() and
+	 * Lock destination range to serialize with concurrent readahead() and
 	 * source range to serialize with relocation.
 	 */
 	btrfs_double_extent_lock(src, loff, dst, dst_loff, len);
@@ -739,7 +739,7 @@ static noinline int btrfs_clone_files(struct file *file, struct file *file_src,
 	}
 	/*
-	 * Lock destination range to serialize with concurrent readpages() and
+	 * Lock destination range to serialize with concurrent readahead() and
 	 * source range to serialize with relocation.
 	 */
 	btrfs_double_extent_lock(src, off, inode, destoff, len);
......
@@ -597,7 +597,7 @@ CIFSSMBNegotiate(const unsigned int xid,
 		set_credits(server, server->maxReq);
 		/* probably no need to store and check maxvcs */
 		server->maxBuf = le32_to_cpu(pSMBr->MaxBufferSize);
-		/* set up max_read for readpages check */
+		/* set up max_read for readahead check */
 		server->max_read = server->maxBuf;
 		server->max_rw = le32_to_cpu(pSMBr->MaxRawSize);
 		cifs_dbg(NOISY, "Max buf = %d\n", ses->server->maxBuf);
......
@@ -49,7 +49,7 @@ static void cifs_set_ops(struct inode *inode)
 			inode->i_fop = &cifs_file_ops;
 		}
-		/* check if server can support readpages */
+		/* check if server can support readahead */
 		if (cifs_sb_master_tcon(cifs_sb)->ses->server->max_read <
 				PAGE_SIZE + MAX_CIFS_HDR_SIZE)
 			inode->i_data.a_ops = &cifs_addr_ops_smallbuf;
......
@@ -248,7 +248,7 @@ EXPORT_SYMBOL(fscrypt_encrypt_block_inplace);
  * which must still be locked and not uptodate.  Normally, blocksize ==
  * PAGE_SIZE and the whole page is decrypted at once.
  *
- * This is for use by the filesystem's ->readpages() method.
+ * This is for use by the filesystem's ->readahead() method.
  *
  * Return: 0 on success; -errno on failure
  */
......
@@ -109,7 +109,7 @@ static void verity_work(struct work_struct *work)
 	struct bio *bio = ctx->bio;
 	/*
-	 * fsverity_verify_bio() may call readpages() again, and although verity
+	 * fsverity_verify_bio() may call readahead() again, and although verity
 	 * will be disabled for that, decryption may still be needed, causing
 	 * another bio_post_read_ctx to be allocated.  So to guarantee that
 	 * mempool_alloc() never deadlocks we must free the current ctx first.
......
@@ -164,7 +164,7 @@ static void f2fs_verify_bio(struct work_struct *work)
 	bool may_have_compressed_pages = (ctx->enabled_steps & STEP_DECOMPRESS);
 	/*
-	 * fsverity_verify_bio() may call readpages() again, and while verity
+	 * fsverity_verify_bio() may call readahead() again, and while verity
 	 * will be disabled for this, decryption and/or decompression may still
 	 * be needed, resulting in another bio_post_read_ctx being allocated.
 	 * So to prevent deadlocks we need to release the current ctx to the
@@ -2392,7 +2392,7 @@ static void f2fs_readahead(struct readahead_control *rac)
 	if (!f2fs_is_compress_backend_ready(inode))
 		return;
-	/* If the file has inline data, skip readpages */
+	/* If the file has inline data, skip readahead */
 	if (f2fs_has_inline_data(inode))
 		return;
......
@@ -627,7 +627,7 @@ struct fuse_conn {
 	/** Connection successful.  Only set in INIT */
 	unsigned conn_init:1;
-	/** Do readpages asynchronously?  Only set in INIT */
+	/** Do readahead asynchronously?  Only set in INIT */
 	unsigned async_read:1;
 	/** Return an unique read error after abort.  Only set in INIT */
......
 // SPDX-License-Identifier: GPL-2.0
 /*
- * Data verification functions, i.e. hooks for ->readpages()
+ * Data verification functions, i.e. hooks for ->readahead()
  *
  * Copyright 2019 Google LLC
  */
@@ -214,7 +214,7 @@ EXPORT_SYMBOL_GPL(fsverity_verify_page);
  * that fail verification are set to the Error state.  Verification is skipped
  * for pages already in the Error state, e.g. due to fscrypt decryption failure.
  *
- * This is a helper function for use by the ->readpages() method of filesystems
+ * This is a helper function for use by the ->readahead() method of filesystems
  * that issue bios to read data directly into the page cache.  Filesystems that
  * populate the page cache without issuing bios (e.g. non block-based
  * filesystems) must instead call fsverity_verify_page() directly on each page.
......
@@ -370,12 +370,6 @@ struct address_space_operations {
 	/* Mark a folio dirty.  Return true if this dirtied it */
 	bool (*dirty_folio)(struct address_space *, struct folio *);
-	/*
-	 * Reads in the requested pages. Unlike ->readpage(), this is
-	 * PURELY used for read-ahead!.
-	 */
-	int (*readpages)(struct file *filp, struct address_space *mapping,
-			struct list_head *pages, unsigned nr_pages);
 	void (*readahead)(struct readahead_control *);
 	int (*write_begin)(struct file *, struct address_space *mapping,
......
@@ -221,7 +221,7 @@ static inline void fsverity_enqueue_verify_work(struct work_struct *work)
  *
  * This checks whether ->i_verity_info has been set.
  *
- * Filesystems call this from ->readpages() to check whether the pages need to
+ * Filesystems call this from ->readahead() to check whether the pages need to
  * be verified or not.  Don't use IS_VERITY() for this purpose; it's subject to
  * a race condition where the file is being read concurrently with
  * FS_IOC_ENABLE_VERITY completing.  (S_VERITY is set before ->i_verity_info.)
......
@@ -2538,7 +2538,7 @@ static int filemap_create_folio(struct file *file,
 	 * the page cache as the locked folio would then be enough to
 	 * synchronize with hole punching. But there are code paths
 	 * such as filemap_update_page() filling in partially uptodate
-	 * pages or ->readpages() that need to hold invalidate_lock
+	 * pages or ->readahead() that need to hold invalidate_lock
 	 * while mapping blocks for IO so let's hold the lock here as
 	 * well to keep locking rules simple.
 	 */
......
@@ -170,13 +170,6 @@ static void read_pages(struct readahead_control *rac, struct list_head *pages,
 			unlock_page(page);
 			put_page(page);
 		}
-	} else if (aops->readpages) {
-		aops->readpages(rac->file, rac->mapping, pages,
-				readahead_count(rac));
-		/* Clean up the remaining pages */
-		put_pages_list(pages);
-		rac->_index += rac->_nr_pages;
-		rac->_nr_pages = 0;
 	} else {
 		while ((page = readahead_page(rac))) {
 			aops->readpage(rac->file, page);
@@ -253,10 +246,7 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 		folio = filemap_alloc_folio(gfp_mask, 0);
 		if (!folio)
 			break;
-		if (mapping->a_ops->readpages) {
-			folio->index = index + i;
-			list_add(&folio->lru, &page_pool);
-		} else if (filemap_add_folio(mapping, folio, index + i,
+		if (filemap_add_folio(mapping, folio, index + i,
 				gfp_mask) < 0) {
 			folio_put(folio);
 			read_pages(ractl, &page_pool, true);
@@ -318,8 +308,7 @@ void force_page_cache_ra(struct readahead_control *ractl,
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
 	unsigned long max_pages, index;
-	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
-		     !mapping->a_ops->readahead))
+	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readahead))
 		return;
 	/*
......
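After these mm/readahead.c hunks, read_pages() is left with just two dispatch paths: the batched ->readahead() when the filesystem provides one, otherwise a per-page ->readpage() loop. A standalone toy model of that simplified dispatch (the struct and function names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the trimmed read_pages(): with ->readpages gone, only a
 * batch path and a single-page fallback remain. */
struct toy_aops {
	int (*readahead)(unsigned long nr);	/* batch path, may be NULL */
	int (*readpage)(unsigned long idx);	/* single-page fallback */
};

static int batched_count;	/* pages handed to the batch path */
static int single_count;	/* pages read one at a time */

static int toy_batch(unsigned long nr) { batched_count += (int)nr; return 0; }
static int toy_single(unsigned long idx) { (void)idx; single_count++; return 0; }

/* Dispatch nr pages starting at idx, preferring the batch path. */
static void toy_read_pages(const struct toy_aops *aops,
			   unsigned long idx, unsigned long nr)
{
	if (aops->readahead) {
		aops->readahead(nr);
	} else {
		for (unsigned long i = 0; i < nr; i++)
			aops->readpage(idx + i);
	}
}
```

Dropping the middle branch is what lets page_cache_ra_unbounded() add every folio to the page cache up front, since no caller needs the old list-of-pages hand-off any more.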