1. 09 Aug, 2012 25 commits
  2. 01 Aug, 2012 15 commits
    • Linux 3.0.39 · f351a1d7
      Greg Kroah-Hartman authored
      f351a1d7
    • vmscan: fix initial shrinker size handling · 909e0a4e
      Konstantin Khlebnikov authored
      commit 635697c6 upstream.
      
      Stable note: The commit [acf92b48: vmscan: shrinker->nr updates race and
      	go wrong] aimed to reduce excessive reclaim of slab objects but
      	had a bug in how it treated shrinker functions that returned -1.
      
      A shrinker function can return -1, which means that it cannot do anything
      without a risk of deadlock.  For example, prune_super() does this if it
      cannot grab a superblock reference, even if nr_to_scan=0.  Currently we
      interpret this -1 as a ULONG_MAX-sized shrinker and evaluate `total_scan'
      accordingly, so the next time around this shrinker can cause really big
      pressure.  Let's skip such shrinkers instead.
      
      Also make total_scan signed, otherwise the check (total_scan < 0) below
      never works.
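      
      A minimal sketch of the intended handling in shrink_slab(), assuming the
      3.0-era do_shrinker_shrink() callback convention (illustrative, not the
      exact upstream hunk):
      
          long total_scan;        /* signed, so a (total_scan < 0) check works */
          long max_pass;
      
          max_pass = do_shrinker_shrink(shrinker, shrink, 0);
          if (max_pass <= 0)
                  continue;       /* shrinker said -1: skip it rather than
                                     treating it as a ULONG_MAX-sized cache */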
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      909e0a4e
    • mm/hugetlb: fix warning in alloc_huge_page/dequeue_huge_page_vma · ad04b9e9
      Konstantin Khlebnikov authored
      commit b1c12cbc upstream.
      
      Stable note: Not tracked in Bugzilla. [get|put]_mems_allowed() is extremely
      	expensive and severely impacted page allocator performance. This
      	is part of a series of patches that reduce page allocator overhead.
      
      Fix a gcc warning (and bug?) introduced in cc9a6c87 ("cpuset: mm: reduce
      large amounts of memory barrier related damage v3")
      
      Local variable "page" can be uninitialized if the nodemask from vma policy
      does not intersects with nodemask from cpuset.  Even if it doesn't happens
      it is better to initialize this variable explicitly than to introduce
      a kernel oops in a weird corner case.
      
      mm/hugetlb.c: In function `alloc_huge_page':
      mm/hugetlb.c:1135:5: warning: `page' may be used uninitialized in this function
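      
      The change itself is tiny; a hedged sketch of the idea in
      dequeue_huge_page_vma() (illustrative only):
      
          struct page *page = NULL;  /* explicit init: if the vma policy and
                                        cpuset nodemasks do not intersect, the
                                        zonelist walk finds nothing and we must
                                        return NULL, not stack garbage */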
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      ad04b9e9
    • cpuset: mm: reduce large amounts of memory barrier related damage v3 · 627c5c60
      Mel Gorman authored
      commit cc9a6c87 upstream.
      
      Stable note:  Not tracked in Bugzilla. [get|put]_mems_allowed() is extremely
      	expensive and severely impacted page allocator performance. This
      	is part of a series of patches that reduce page allocator overhead.
      
      Commit c0ff7453 ("cpuset,mm: fix no node to alloc memory when
      changing cpuset's mems") wins a super prize for the largest number of
      memory barriers entered into fast paths for one commit.
      
      [get|put]_mems_allowed is incredibly heavy with pairs of full memory
      barriers inserted into a number of hot paths.  This was detected while
      investigating a large page allocator slowdown introduced some time
      after 2.6.32.  The largest portion of this overhead was shown by
      oprofile to be at an mfence introduced by this commit into the page
      allocator hot path.
      
      For extra style points, the commit introduced the use of yield() in an
      implementation of what looks like a spinning mutex.
      
      This patch replaces the full memory barriers on both read and write
      sides with a sequence counter with just read barriers on the fast path
      side.  This is much cheaper on some architectures, including x86.  The
      main bulk of the patch is the retry logic if the nodemask changes in a
      manner that can cause a false failure.
      
      While updating the nodemask, a check is made to see if a false failure
      is a risk.  If it is, the sequence number gets bumped and parallel
      allocators will briefly stall while the nodemask update takes place.
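      
      A hedged sketch of the resulting read-side pattern in the allocator fast
      path (the cookie is a sequence count; helper names as upstream, but the
      surrounding code is condensed):
      
          unsigned int cpuset_mems_cookie;
          struct page *page;
      
      retry_cpuset:
          cpuset_mems_cookie = get_mems_allowed();   /* read_seqcount_begin() */
          page = __alloc_pages_nodemask(gfp, order, zonelist, nodemask);
          /*
           * A failure while the nodemask was changing may be a false failure;
           * retry with the updated mask instead of reporting it.
           */
          if (unlikely(!put_mems_allowed(cpuset_mems_cookie) && !page))
                  goto retry_cpuset;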
      
      In a page fault test microbenchmark, oprofile samples from
      __alloc_pages_nodemask went from 4.53% of all samples to 1.15%.  The
      actual results were
      
                                   3.3.0-rc3          3.3.0-rc3
                                   rc3-vanilla        nobarrier-v2r1
          Clients   1 UserTime       0.07 (  0.00%)   0.08 (-14.19%)
          Clients   2 UserTime       0.07 (  0.00%)   0.07 (  2.72%)
          Clients   4 UserTime       0.08 (  0.00%)   0.07 (  3.29%)
          Clients   1 SysTime        0.70 (  0.00%)   0.65 (  6.65%)
          Clients   2 SysTime        0.85 (  0.00%)   0.82 (  3.65%)
          Clients   4 SysTime        1.41 (  0.00%)   1.41 (  0.32%)
          Clients   1 WallTime       0.77 (  0.00%)   0.74 (  4.19%)
          Clients   2 WallTime       0.47 (  0.00%)   0.45 (  3.73%)
          Clients   4 WallTime       0.38 (  0.00%)   0.37 (  1.58%)
          Clients   1 Flt/sec/cpu  497620.28 (  0.00%) 520294.53 (  4.56%)
          Clients   2 Flt/sec/cpu  414639.05 (  0.00%) 429882.01 (  3.68%)
          Clients   4 Flt/sec/cpu  257959.16 (  0.00%) 258761.48 (  0.31%)
          Clients   1 Flt/sec      495161.39 (  0.00%) 517292.87 (  4.47%)
          Clients   2 Flt/sec      820325.95 (  0.00%) 850289.77 (  3.65%)
          Clients   4 Flt/sec      1020068.93 (  0.00%) 1022674.06 (  0.26%)
          MMTests Statistics: duration
          Sys Time Running Test (seconds)             135.68    132.17
          User+Sys Time Running Test (seconds)         164.2    160.13
          Total Elapsed Time (seconds)                123.46    120.87
      
      The overall improvement is small but the System CPU time is much
      improved and roughly in line with what oprofile reported (these
      performance figures are without profiling so skew is expected).  The
      actual number of page faults is noticeably improved.
      
      For benchmarks like kernel builds, the overall benefit is marginal but
      the system CPU time is slightly reduced.
      
      To test the actual bug the commit fixed I opened two terminals.  The
      first ran within a cpuset and continually ran a small program that
      faulted 100M of anonymous data.  In a second window, the nodemask of the
      cpuset was continually randomised in a loop.
      
      Without the commit, the program would fail every so often (usually
      within 10 seconds) and obviously with the commit everything worked fine.
      With this patch applied, it also worked fine so the fix should be
      functionally equivalent.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Miao Xie <miaox@cn.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
      627c5c60
    • cpusets: stall when updating mems_allowed for mempolicy or disjoint nodemask · ba204b54
      David Rientjes authored
      commit b246272e upstream.
      
      Stable note: Not tracked in Bugzilla. [get|put]_mems_allowed() is extremely
      	expensive and severely impacted page allocator performance. This is
      	part of a series of patches that reduce page allocator overhead.
      
      Kernels where MAX_NUMNODES > BITS_PER_LONG may temporarily see an empty
      nodemask in a tsk's mempolicy if its previous nodemask is remapped onto a
      new set of allowed cpuset nodes where the two nodemasks, as a result of
      the remap, are now disjoint.
      
      c0ff7453 ("cpuset,mm: fix no node to alloc memory when changing
      cpuset's mems") adds get_mems_allowed() to prevent the set of allowed
      nodes from changing for a thread.  This causes any update to a set of
      allowed nodes to stall until put_mems_allowed() is called.
      
      This stall is unnecessary, however, if at least one node remains unchanged
      in the update to the set of allowed nodes.  This was addressed by
      89e8a244 ("cpusets: avoid looping when storing to mems_allowed if one
      node remains set"), but it's still possible that an empty nodemask may be
      read from a mempolicy because the old nodemask may be remapped to the new
      nodemask during rebind.  To prevent this, only avoid the stall if there is
      no mempolicy for the thread being changed.
      
      This is a temporary solution until all reads from mempolicy nodemasks can
      be guaranteed to not be empty without the get_mems_allowed()
      synchronization.
      
      Also moves the check for nodemask intersection inside task_lock() so that
      tsk->mems_allowed cannot change.  This ensures that nothing can set this
      tsk's mems_allowed out from under us and also protects tsk->mempolicy.
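      
      A condensed sketch of that check (hedged; the surrounding argument handling
      is trimmed):
      
          task_lock(tsk);         /* tsk->mems_allowed and tsk->mempolicy are
                                     stable while we decide */
          need_loop = task_has_mempolicy(tsk) ||
                      !nodes_intersects(*newmems, tsk->mems_allowed);
          /* only when need_loop is true do we pay for the full stall */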
      Reported-by: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Paul Menage <paul@paulmenage.org>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      ba204b54
    • cpusets: avoid looping when storing to mems_allowed if one node remains set · 6b63ea81
      David Rientjes authored
      commit 89e8a244 upstream.
      
      Stable note: Not tracked in Bugzilla. [get|put]_mems_allowed() is
      	extremely expensive and severely impacted page allocator performance.
      	This is part of a series of patches that reduce page allocator
      	overhead.
      
      {get,put}_mems_allowed() exist so that general kernel code may locklessly
      access a task's set of allowable nodes without having the chance that a
      concurrent write will cause the nodemask to be empty on configurations
      where MAX_NUMNODES > BITS_PER_LONG.
      
      This could incur a significant delay, however, especially in low memory
      conditions because the page allocator is blocking and reclaim requires
      get_mems_allowed() itself.  It is not atypical to see writes to
      cpuset.mems take over 2 seconds to complete, for example.  In low memory
      conditions, this is problematic because it's one of the most important
      times to change cpuset.mems in the first place!
      
      The only way a task's set of allowable nodes may change is through cpusets,
      by writing to cpuset.mems or by attaching the task to a different cpuset.
      Today the update sets all the new nodes, waits until generic code is not
      reading the nodemask with get_mems_allowed(), and only then clears all the
      old nodes.  This prevents the possibility that a reader will see an empty
      nodemask at the same time the writer is storing a new nodemask.
      
      If at least one node remains unchanged, though, it's possible to simply
      set all new nodes and then clear all the old nodes.  Changing a task's
      nodemask is protected by cgroup_mutex so it's guaranteed that two threads
      are not changing the same task's nodemask at the same time, so the
      nodemask is guaranteed to be stored before another thread changes it and
      determines whether a node remains set or not.
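      
      A hedged sketch of the two-step store described above (condensed from
      cpuset_change_task_nodemask(); not the literal hunk):
      
          /* step 1: add the new nodes -- readers still see the old ones too */
          nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems);
      
          /* only if no node survives the change must we wait out concurrent
           * get_mems_allowed() readers before the next step */
      
          /* step 2: drop the old nodes; the mask was never observed empty */
          tsk->mems_allowed = *newmems;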
      Signed-off-by: David Rientjes <rientjes@google.com>
      Cc: Miao Xie <miaox@cn.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Paul Menage <paul@paulmenage.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      6b63ea81
    • mm: vmscan: convert global reclaim to per-memcg LRU lists · 4d01a2e3
      Johannes Weiner authored
      commit b95a2f2d upstream - WARNING: this is a substitute patch.
      
      Stable note: Not tracked in Bugzilla. This is a partial backport of an
      	upstream commit addressing a completely different issue
      	that accidentally contained an important fix. The workload
      	this patch helps is memcached when IO is started in the
      	background. memcached should stay resident but without this patch
      	it gets swapped. Sometimes this manifests as a drop in throughput
      	but mostly it was observed through /proc/vmstat.
      
      Commit [246e87a9: memcg: fix get_scan_count() for small targets] was meant
      to fix a problem whereby small scan targets on memcg were ignored, causing
      reclaim priority to rise too sharply. It forced scanning to take place if
      the target was small and the reclaim was for a memcg or from kswapd.
      
      From the time it was introduced it caused excessive reclaim by kswapd
      with workloads being pushed to swap that previously would have stayed
      resident. This was accidentally fixed in commit [b95a2f2d: mm: vmscan:
      convert global reclaim to per-memcg LRU lists] by making it harder for
      kswapd to force scan small targets but that patchset is not suitable for
      backporting. This was later changed again by commit [90126375: mm/vmscan:
      push lruvec pointer into get_scan_count()] into a format that looks
      like it would be a straight-forward backport but there is a subtle
      difference due to the use of lruvecs.
      
      The impact of the accidental fix is to make it harder for kswapd to force
      scan small targets by taking zone->all_unreclaimable into account. This
      patch is the closest equivalent available based on what is backported.
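      
      A hedged sketch of that equivalent in get_scan_count() (roughly what the
      substitute patch does; not the upstream hunk):
      
          /* kswapd only force-scans small targets once the zone has been
           * marked all_unreclaimable, instead of unconditionally */
          if (scanning_global_lru(sc) && current_is_kswapd() &&
              zone->all_unreclaimable)
                  force_scan = true;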
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      4d01a2e3
    • mm: test PageSwapBacked in lumpy reclaim · d2b02236
      Hugh Dickins authored
      commit 043bcbe5 upstream.
      
      Stable note: Not tracked in Bugzilla. There were reports of shared
      	mapped pages being unfairly reclaimed in comparison to older kernels.
      	This is being addressed over time. Even though the subject
      	refers to lumpy reclaim, it impacts compaction as well.
      
      Lumpy reclaim does well to stop at a PageAnon when there's no swap, but
      better is to stop at any PageSwapBacked, which includes shmem/tmpfs too.
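      
      A hedged sketch of the check in the lumpy-reclaim loop (condensed; the
      exact surrounding condition differs in the real code):
      
          /* without swap, taking swap-backed pages -- anon *and* shmem/tmpfs
           * -- is pointless unless they are already in swap cache */
          if (nr_swap_pages <= 0 && PageSwapBacked(cursor_page) &&
              !PageSwapCache(cursor_page))
                  break;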
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      d2b02236
    • mm/vmscan.c: consider swap space when deciding whether to continue reclaim · 503e973c
      Minchan Kim authored
      commit 86cfd3a4 upstream.
      
      Stable note: Not tracked in Bugzilla. This patch reduces kswapd CPU
      	usage on swapless systems with high anonymous memory usage.
      
      It's pointless to continue reclaiming when we have no swap space and lots
      of anon pages in the inactive list.
      
      Without this patch, it is possible when swap is disabled to continue
      trying to reclaim when there are only anonymous pages in the system even
      though that will not make any progress.
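      
      A hedged sketch of the adjusted check in should_continue_reclaim()
      (condensed; zone_nr_lru_pages() as in the 3.0-era code):
      
          /* anon pages only count as reclaim candidates if there is swap
           * left to put them in */
          inactive_lru_pages = zone_nr_lru_pages(zone, sc, LRU_INACTIVE_FILE);
          if (nr_swap_pages > 0)
                  inactive_lru_pages += zone_nr_lru_pages(zone, sc,
                                                          LRU_INACTIVE_ANON);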
      Signed-off-by: Minchan Kim <minchan@kernel.org>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      503e973c
    • vmscan: activate executable pages after first usage · 4391b5f4
      Konstantin Khlebnikov authored
      commit c909e993 upstream.
      
      Stable note: Not tracked in Bugzilla. There were reports of shared
      	mapped pages being unfairly reclaimed in comparison to older kernels.
      	This is being addressed over time.
      
      Logic added in commit 8cab4754 ("vmscan: make mapped executable pages
      the first class citizen") was noticeably weakened in commit
      64574746 ("vmscan: detect mapped file pages used only once").
      
      Currently these pages can become "first class citizens" only after their
      second usage.  After this patch, page_check_references() will activate them
      after the first usage, so executable code gets a better chance to stay in
      memory.
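      
      A hedged sketch of the added check in page_check_references() (condensed):
      
          SetPageReferenced(page);
          if (referenced_page)
                  return PAGEREF_ACTIVATE;
          /* activate file-backed executable pages after their first usage */
          if (vm_flags & VM_EXEC)
                  return PAGEREF_ACTIVATE;
          return PAGEREF_KEEP;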
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Shaohua Li <shaohua.li@intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      4391b5f4
    • vmscan: promote shared file mapped pages · 03722816
      Konstantin Khlebnikov authored
      commit 34dbc67a upstream.
      
      Stable note: Not tracked in Bugzilla. There were reports of shared
      	mapped pages being unfairly reclaimed in comparison to older kernels.
      	This is being addressed over time. The specific workload being
      	addressed here is described in paragraph four and, while paragraph
      	five says it did not help performance as such, it made a difference
      	to major page faults. I'm aware of at least one bug for a large
      	vendor that was due to increased major faults.
      
      Commit 64574746 ("vmscan: detect mapped file pages used only once")
      greatly decreases the lifetime of singly-used mapped file pages.
      Unfortunately it also decreases the lifetime of all shared mapped file
      pages, because after commit bf3f3bc5 ("mm: don't mark_page_accessed
      in fault path") the page-fault handler does not mark the page active or
      even referenced.
      
      Thus page_check_references() activates a file page only if it was used twice
      while it sat on the inactive list, whereas it activates anon pages after the
      first access.  The inactive list can be small enough that the reclaimer
      accidentally throws away a widely used page if it was not used twice within
      that short period.
      
      After this patch page_check_references() also activates a file-mapped page
      at the first inactive-list scan if the page is already mapped multiple times
      via several ptes.
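      
      A hedged sketch of the added condition in page_check_references()
      (condensed):
      
          /* a file page referenced via several ptes is shared between
           * mappings: promote it on the first inactive-list scan */
          if (referenced_page || referenced_ptes > 1)
                  return PAGEREF_ACTIVATE;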
      
      I found this while trying to fix a degradation in rhel6 (~2.6.32) relative
      to rhel5 (~2.6.18).  It is a complete mess with >100 web/mail/spam/ftp
      containers: they share all their files but have a lot of anonymous pages,
      roughly 500MB of shared file-mapped memory and 15-20GB of non-shared
      anonymous memory.  In this situation major page faults are very costly
      because all containers share the same page.  Under my load the kernel
      created disproportionate pressure on the file memory compared with the
      anonymous memory; they only evened out when I raised swappiness up to 150 =)
      
      These patches did not actually help my problem a lot, but I saw a noticeable
      (10-20x) reduction in the count and average time of major page faults in
      file-mapped areas.
      
      Both patches are really fixes for commit v2.6.33-5448-g64574746, because it
      was aimed at one scenario (singly-used pages) but breaks the logic in other
      scenarios (shared and/or executable pages).
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
      Acked-by: Pekka Enberg <penberg@kernel.org>
      Acked-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Nick Piggin <npiggin@kernel.dk>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Shaohua Li <shaohua.li@intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      03722816
    • mm: vmscan: check if reclaim should really abort even if compaction_ready() is true for one zone · 9cad5d6a
      Mel Gorman authored
      commit 0cee34fd upstream.
      
      Stable note: Not tracked in Bugzilla. THP and compaction were found to
      	aggressively reclaim pages and stall systems under various
      	situations; this was addressed piecemeal over time.
      
      If compaction can proceed for a given zone, shrink_zones() does not
      reclaim any more pages from it.  After commit [e0c23279: vmscan: abort
      reclaim/compaction if compaction can proceed], do_try_to_free_pages()
      tries to finish as soon as possible once one zone can compact.
      
      This was intended to prevent slabs being shrunk unnecessarily but there
      are side-effects.  One is that a small zone that is ready for compaction
      will abort reclaim even if the chances of successfully allocating a THP
      from that zone is small.  It also means that reclaim can return too early
      even though sc->nr_to_reclaim pages were not reclaimed.
      
      This partially reverts the commit until it is proven that slabs are really
      being shrunk unnecessarily but preserves the check to return 1 to avoid
      OOM if reclaim was aborted prematurely.
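      
      A hedged sketch of the resulting flow in do_try_to_free_pages()
      (condensed; variable names approximate):
      
          aborted_reclaim = shrink_zones(priority, zonelist, sc);
          /* normal reclaim of the remaining zones continues as before */
      
          /* reclaim stepped aside for compaction: report token progress so
           * the page allocator does not declare OOM */
          if (aborted_reclaim)
                  return 1;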
      
      [aarcange@redhat.com: This patch replaces a revert from Andrea]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Andy Isaacson <adi@hexapodia.org>
      Cc: Nai Xia <nai.xia@gmail.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      9cad5d6a
    • mm: vmscan: do not OOM if aborting reclaim to start compaction · da0dc52b
      Mel Gorman authored
      commit 7335084d upstream.
      
      Stable note: Not tracked in Bugzilla. This patch makes later patches
      	easier to apply but otherwise has little to justify it. The
      	problem it fixes was never observed but the source of the
      	theoretical problem did not exist for very long.
      
      During direct reclaim it is possible that reclaim will be aborted so that
      compaction can be attempted to satisfy a high-order allocation.  If this
      decision is made before any pages are reclaimed, it is possible that 0 is
      returned to the page allocator potentially triggering an OOM.  This has
      not been observed but it is a possibility so this patch addresses it.
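      
      For context, a hedged sketch of why a zero return matters in the page
      allocator slow path (arguments trimmed for brevity):
      
          page = __alloc_pages_direct_reclaim(gfp_mask, order, /* ... */,
                                              &did_some_progress);
          if (!page && !did_some_progress) {
                  /* zero progress is what opens the door to the OOM killer,
                   * which is wrong if reclaim merely stepped aside so that
                   * compaction could run */
                  page = __alloc_pages_may_oom(gfp_mask, order, /* ... */);
          }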
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Andy Isaacson <adi@hexapodia.org>
      Cc: Nai Xia <nai.xia@gmail.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      
      da0dc52b
    • mm: vmscan: when reclaiming for compaction, ensure there are sufficient free pages available · d50462a3
      Mel Gorman authored
      commit fe4b1b24 upstream.
      
      Stable note: Not tracked in Bugzilla. THP and compaction were found to
      	aggressively reclaim pages and stall systems under various
      	situations; this was addressed piecemeal over time. This patch
      	addresses a problem where the earlier fix regressed THP allocation
      	success rates.
      
      In commit e0887c19 ("vmscan: limit direct reclaim for higher order
      allocations"), Rik noted that reclaim was too aggressive when THP was
      enabled.  In his initial patch he used the number of free pages to decide
      if reclaim should abort for compaction.  My feedback was that reclaim and
      compaction should be using the same logic when deciding if reclaim should
      be aborted.
      
      Unfortunately, this had the effect of reducing THP success rates when the
      workload included something like streaming reads that continually
      allocated pages.  The window during which compaction could run and return
      a THP was too small.
      
      This patch combines Rik's two patches together.  compaction_suitable() is
      still used to decide if reclaim should be aborted to allow compaction.
      However, it will also ensure that there is a reasonable buffer of
      free pages available.  This improves upon the THP allocation success rates
      but bounds the number of pages that are freed for compaction.
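      
      A hedged sketch of the combined test in compaction_ready() (condensed;
      the balance_gap calculation is omitted):
      
          /* keep reclaiming until a buffer of free pages exists that gives
           * compaction a reasonable chance of allocating the huge page */
          watermark = high_wmark_pages(zone) + balance_gap + (2UL << sc->order);
          watermark_ok = zone_watermark_ok_safe(zone, 0, watermark, 0, 0);
      
          if (!compaction_suitable(zone, sc->order))
                  return false;           /* not ready yet: keep reclaiming */
          return watermark_ok;            /* abort only once the buffer exists */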
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Andy Isaacson <adi@hexapodia.org>
      Cc: Nai Xia <nai.xia@gmail.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      d50462a3
    • mm: compaction: introduce sync-light migration for use by compaction · f869774c
      Mel Gorman authored
      commit a6bc32b8 upstream.
      
      Stable note: Not tracked in Bugzilla. This was part of a series that
      	reduced interactivity stalls experienced when THP was enabled.
      	These stalls were particularly noticeable when copying data
      	to a USB stick, but the experience varied a lot between users.
      
      This patch adds a lightweight sync migrate operation MIGRATE_SYNC_LIGHT
      mode that avoids writing back pages to backing storage.  Async compaction
      maps to MIGRATE_ASYNC while sync compaction maps to MIGRATE_SYNC_LIGHT.
      For other migrate_pages users such as memory hotplug, MIGRATE_SYNC is
      used.
      
      This avoids sync compaction stalling for an excessive length of time,
      particularly when copying files to a USB stick where there might be a
      large number of dirty pages backed by a filesystem that does not support
      ->writepages.
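      
      The mode itself is a small enum; a sketch close to what the patch adds in
      include/linux/migrate_mode.h:
      
          enum migrate_mode {
                  MIGRATE_ASYNC,          /* never block */
                  MIGRATE_SYNC_LIGHT,     /* may block, but will not write
                                             dirty pages back to storage */
                  MIGRATE_SYNC,           /* may block and write back pages */
          };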
      
      [aarcange@redhat.com: This patch is heavily based on Andrea's work]
      [akpm@linux-foundation.org: fix fs/nfs/write.c build]
      [akpm@linux-foundation.org: fix fs/btrfs/disk-io.c build]
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Andy Isaacson <adi@hexapodia.org>
      Cc: Nai Xia <nai.xia@gmail.com>
      Cc: Johannes Weiner <jweiner@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      f869774c