Commit 69056ee6 authored by Dave Chinner, committed by Linus Torvalds

Revert "mm: don't reclaim inodes with many attached pages"

This reverts commit a76cf1a4 ("mm: don't reclaim inodes with many
attached pages").

This change seriously alters page cache and inode cache behaviour and
balance, resulting in major performance regressions when combining
workloads such as large file copies and kernel compiles.

  https://bugzilla.kernel.org/show_bug.cgi?id=202441

This change is a hack to work around the problems introduced by changing
how aggressive shrinkers are on small caches in commit 172b06c3 ("mm:
slowly shrink slabs with a relatively small number of objects").  It
creates more problems than it solves and wasn't adequately reviewed or
tested, so it needs to be reverted.
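
For context, the heuristic added by 172b06c3 forces a minimum amount of
scan pressure onto every shrinker, so even a cache holding only a handful
of objects (e.g. a small memcg's inode list) gets scanned.  The following
userspace sketch paraphrases that calculation; it is illustrative only,
not the actual mm/vmscan.c code, and the name scan_delta and the example
numbers are made up:

  #include <stdio.h>

  /*
   * Hypothetical sketch (paraphrased, not the exact mm/vmscan.c code) of
   * the scan-pressure calculation after commit 172b06c3: the proportional
   * term freeable >> priority is given a floor of min(freeable, batch),
   * so small caches always receive some reclaim pressure.
   */
  static unsigned long long scan_delta(unsigned long long freeable,
                                       int priority, int seeks,
                                       unsigned long long batch)
  {
          /* proportional pressure, scaled by the object's reclaim cost */
          unsigned long long delta = (freeable >> priority) * 4 / seeks;

          /* the 172b06c3 floor: scan at least min(freeable, batch) */
          unsigned long long floor = freeable < batch ? freeable : batch;

          return delta > floor ? delta : floor;
  }

  int main(void)
  {
          /*
           * 50 freeable inodes at default priority 12: the proportional
           * term rounds to 0, but the floor forces all 50 to be scanned.
           */
          printf("delta = %llu\n", scan_delta(50, 12, 2, 128));
          return 0;
  }

With that floor in place, small inode caches are shrunk far more
aggressively than before, which is the pressure a76cf1a4 attempted to
counteract by keeping inodes with attached page cache on the LRU.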

Link: http://lkml.kernel.org/r/20190130041707.27750-2-david@fromorbit.com
Fixes: a76cf1a4 ("mm: don't reclaim inodes with many attached pages")
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Cc: Wolfgang Walter <linux@stwm.de>
Cc: Roman Gushchin <guro@fb.com>
Cc: Spock <dairinin@gmail.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent aa0c38cf
fs/inode.c
@@ -730,11 +730,8 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
 		return LRU_REMOVED;
 	}
 
-	/*
-	 * Recently referenced inodes and inodes with many attached pages
-	 * get one more pass.
-	 */
-	if (inode->i_state & I_REFERENCED || inode->i_data.nrpages > 1) {
+	/* recently referenced inodes get one more pass */
+	if (inode->i_state & I_REFERENCED) {
 		inode->i_state &= ~I_REFERENCED;
 		spin_unlock(&inode->i_lock);
 		return LRU_ROTATE;
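
For readers unfamiliar with the list_lru machinery this hunk lives in:
inode_lru_isolate() is an isolate callback invoked for each inode during
a shrinker walk of the LRU.  Returning LRU_ROTATE moves the inode to the
tail of the list (one more pass before reclaim); LRU_REMOVED means it was
taken off the list and freed.  Below is a toy userspace model of that
rotate-vs-reclaim pattern; it is not the kernel's list_lru API, and all
names in it are illustrative:

  #include <stdio.h>

  /*
   * Toy userspace model (not the kernel's list_lru API) of the
   * rotate-vs-reclaim pattern used by inode_lru_isolate(): a referenced
   * entry has its flag cleared and is rotated to the tail for one more
   * pass; an unreferenced entry is reclaimed on sight.
   */
  struct entry {
          const char *name;
          int referenced;         /* stand-in for I_REFERENCED */
          struct entry *next;
  };

  static void shrink_one(struct entry **head, struct entry **tail)
  {
          struct entry *e = *head;

          if (!e)
                  return;
          *head = e->next;
          if (!*head)
                  *tail = NULL;
          e->next = NULL;

          if (e->referenced) {
                  /* stand-in for LRU_ROTATE: second chance at the tail */
                  e->referenced = 0;
                  if (*tail)
                          (*tail)->next = e;
                  else
                          *head = e;
                  *tail = e;
                  printf("rotated   %s\n", e->name);
          } else {
                  /* stand-in for LRU_REMOVED: reclaim the object */
                  printf("reclaimed %s\n", e->name);
          }
  }

  int main(void)
  {
          struct entry c = { "c", 0, NULL };
          struct entry b = { "b", 1, &c };
          struct entry a = { "a", 0, &b };
          struct entry *head = &a, *tail = &c;

          /* three passes: a is reclaimed, b is rotated, c is reclaimed */
          shrink_one(&head, &tail);
          shrink_one(&head, &tail);
          shrink_one(&head, &tail);
          return 0;
  }

The reverted hunk widened the rotation condition to
"inode->i_state & I_REFERENCED || inode->i_data.nrpages > 1", so any
inode with more than one page of attached page cache was also given
another pass; that is what skewed the page cache / inode cache balance
described above.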