1. 06 Nov, 2015 40 commits
    • kasan: accurately determine the type of the bad access · cdf6a273
      Andrey Konovalov authored
      Makes KASAN accurately determine the type of the bad access. If the shadow
      byte value is in the [0, KASAN_SHADOW_SCALE_SIZE) range we can look at
      the next shadow byte to determine the type of the access.
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cdf6a273
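      A minimal userspace sketch (not kernel code) of the check described in the commit above, assuming KASAN_SHADOW_SCALE_SIZE is 8 and using illustrative shadow poison values modelled on mm/kasan/kasan.h; the names and exact values here are assumptions for the sketch only:

      #include <stdio.h>

      #define KASAN_SHADOW_SCALE_SIZE 8   /* assumed: 1 << KASAN_SHADOW_SCALE_SHIFT */
      #define KASAN_FREE_PAGE        0xFF /* illustrative poison values */
      #define KASAN_KMALLOC_REDZONE  0xFC
      #define KASAN_KMALLOC_FREE     0xFB

      static const char *bug_type(const signed char *shadow)
      {
          signed char byte = shadow[0];

          /* Partial poisoning means the access ran off the end of a live
           * object, so the redzone byte that follows tells us what kind
           * of object it was. */
          if (byte >= 0 && byte < KASAN_SHADOW_SCALE_SIZE)
              byte = shadow[1];

          switch ((unsigned char)byte) {
          case KASAN_FREE_PAGE:
          case KASAN_KMALLOC_FREE:
              return "use-after-free";
          case KASAN_KMALLOC_REDZONE:
              return "slab-out-of-bounds";
          default:
              return "unknown-crash";
          }
      }

      int main(void)
      {
          signed char shadow[2] = { 0x03, (signed char)0xFC };

          printf("%s\n", bug_type(shadow));   /* slab-out-of-bounds */
          return 0;
      }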
    • kasan: update reported bug types for kernel memory accesses · 0952d87f
      Andrey Konovalov authored
      Update the names of the bad access types to better reflect the type of
      the access that happened, and make these error types "literals" that can
      be used for classification and deduplication in scripts.
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0952d87f
    • kasan: update reported bug types for not user nor kernel memory accesses · e9121076
      Andrey Konovalov authored
      Each access with an address lower than
      kasan_shadow_to_mem(KASAN_SHADOW_START) is currently reported as a
      user-memory-access.  This is not always true: the accessed address might
      not be in user space.  Fix this by reporting such accesses as
      null-ptr-derefs or wild-memory-accesses.
      
      There's another reason for this change.  For userspace ASan we have a
      bunch of systems that analyze error types for the purpose of
      classification and deduplication.  Sooner or later we will port them to
      KASAN as well.  Then clearly and explicitly stated error types will bring
      value.
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e9121076
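      A hedged userspace model of the classification described above; the PAGE_SIZE/TASK_SIZE-like values are illustrative assumptions, not the kernel's definitions:

      #include <stdio.h>

      #define MODEL_PAGE_SIZE 4096ULL
      #define MODEL_TASK_SIZE 0x7ffffffff000ULL   /* assumed user/kernel split */

      static const char *classify(unsigned long long addr)
      {
          if (addr < MODEL_PAGE_SIZE)
              return "null-ptr-deref";
          if (addr < MODEL_TASK_SIZE)
              return "user-memory-access";
          return "wild-memory-access";            /* neither shadowed nor user space */
      }

      int main(void)
      {
          printf("%s\n", classify(0x10ULL));                 /* null-ptr-deref */
          printf("%s\n", classify(0x400000ULL));             /* user-memory-access */
          printf("%s\n", classify(0xfff0000000000000ULL));   /* wild-memory-access */
          return 0;
      }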
    • mm/kasan: prevent deadlock in kasan reporting · fc5aeeaf
      Aneesh Kumar K.V authored
      When we end up calling kasan_report in real mode, our shadow mapping for
      the spinlock variable will show as poisoned.  This will result in us
      calling kasan_report_error with the report_lock spinlock held.  To prevent
      this, disable kasan reporting while we are printing a kasan error.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fc5aeeaf
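      A userspace analogue of the approach described above (the kernel patch uses a per-task depth counter; the names below are invented for this sketch): recursion into the report path is suppressed while a report is being printed, so a KASAN hit on the report's own data, such as the report spinlock's shadow, cannot deadlock.

      #include <stdio.h>

      static __thread int kasan_report_depth;   /* analogue of a per-task counter */

      static void kasan_report_disable(void) { kasan_report_depth++; }
      static void kasan_report_enable(void)  { kasan_report_depth--; }
      static int  kasan_report_enabled(void) { return kasan_report_depth == 0; }

      static void kasan_report_error(const char *what)
      {
          kasan_report_disable();               /* hits from here on are ignored */
          /* ...take the report lock, dump shadow memory, etc... */
          printf("BUG: KASAN: %s\n", what);
          kasan_report_enable();
      }

      static void kasan_report(const char *what)
      {
          if (!kasan_report_enabled())          /* re-entered from the report path */
              return;
          kasan_report_error(what);
      }

      int main(void)
      {
          kasan_report("use-after-free in example_function");
          return 0;
      }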
    • mm/kasan: don't use kasan shadow pointer in generic functions · f2377d4e
      Aneesh Kumar K.V authored
      We can't use generic functions like print_hex_dump to access the kasan
      shadow region.  That would require us to set up another kasan shadow
      region for the address passed (the kasan shadow address), and some
      architectures won't be able to do that.  Hence make a copy of the shadow
      region row and pass that to the generic functions.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f2377d4e
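      A standalone C sketch of the workaround: copy the shadow row into an ordinary buffer so the generic dumper (print_hex_dump in the kernel) never dereferences a shadow address itself.  The row width is an assumption.

      #include <stdio.h>
      #include <string.h>

      #define SHADOW_BYTES_PER_ROW 16   /* assumed row width for the sketch */

      static void generic_hex_dump(const unsigned char *buf, size_t len)
      {
          for (size_t i = 0; i < len; i++)
              printf("%02x%c", buf[i], i + 1 == len ? '\n' : ' ');
      }

      static void dump_shadow_row(const unsigned char *shadow_row)
      {
          unsigned char copy[SHADOW_BYTES_PER_ROW];

          /* In the kernel, this copy is what keeps generic code from
           * dereferencing a shadow address it may not be able to handle. */
          memcpy(copy, shadow_row, sizeof(copy));
          generic_hex_dump(copy, sizeof(copy));
      }

      int main(void)
      {
          unsigned char shadow[SHADOW_BYTES_PER_ROW] = { 0, 0, 0, 3, 0xfc, 0xfb };

          dump_shadow_row(shadow);
          return 0;
      }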
    • mm/kasan: rename kasan_enabled() to kasan_report_enabled() · 0ba8663c
      Aneesh Kumar K.V authored
      The function only disables/enables reporting.  A later patch will be
      adding a kasan early enable/disable.  Rename kasan_enabled() to properly
      reflect its function.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0ba8663c
    • mm: remove refresh_cpu_vm_stats() definition for !SMP kernel · 5ba97bf9
      Tetsuo Handa authored
      refresh_cpu_vm_stats(int cpu) is no longer referenced by !SMP kernel
      since Linux 3.12.
      Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5ba97bf9
    • Documentation/filesystems/proc.txt: a little tidying · a5be3563
      Hugh Dickins authored
      There's an odd line about "Locked" at the head of the description of
      /proc/meminfo: it seems to have strayed from /proc/PID/smaps, so lead it
      back there.  Move "Swap" and "SwapPss" descriptions down above it, to
      match the order in the file (though "PageSize"s still undescribed).
      
      The example of "Locked: 374 kB" (the same as Pss, neither Rss nor Size) is
      so unlikely as to be misleading: just make it 0, this is /bin/bash text;
      which would be "dw" (disabled write) not "de" (do not expand).
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a5be3563
    • tmpfs: avoid a little creat and stat slowdown · d0424c42
      Hugh Dickins authored
      LKP reports that v4.2 commit afa2db2f ("tmpfs: truncate prealloc
      blocks past i_size") causes a 14.5% slowdown in the AIM9 creat-clo
      benchmark.
      
      creat-clo does just what you'd expect from the name, and creat's O_TRUNC
      on a 0-length file does indeed incur more overhead now that
      shmem_setattr() tests "0 <= 0" instead of "0 < 0".
      
      I'm not sure how much we care, but I think it would not be too VW-like to
      add in a check for whether any pages (or swap) are allocated: if none are
      allocated, there's none to remove from the radix_tree.  At first I thought
      that check would be good enough for the unmaps too, but no: we should not
      skip the unlikely case of unmapping pages beyond the new EOF, which were
      COWed from holes which have now been reclaimed, leaving none.
      
      This gives me an 8.5% speedup: on Haswell instead of LKP's Westmere, and
      running a debug config before and after: I hope those account for the
      lesser speedup.
      
      And probably someone has a benchmark where a thousand threads keep on
      stat'ing the same file repeatedly: forestall that report by adjusting v4.3
      commit 44a30220 ("shmem: recalculate file inode when fstat") not to
      take the spinlock in shmem_getattr() when there's no work to do.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Reported-by: Ying Huang <ying.huang@linux.intel.com>
      Tested-by: Ying Huang <ying.huang@linux.intel.com>
      Cc: Josef Bacik <jbacik@fb.com>
      Cc: Yu Zhao <yuzhao@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0424c42
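      A hedged sketch of the check described above: on a truncate that does not shrink the file, skip the radix-tree work when nothing is allocated.  The struct and function names below are invented for illustration; the real logic lives in shmem_setattr().

      #include <stdbool.h>
      #include <stdio.h>

      struct shmem_info_model {        /* illustrative stand-in, not the kernel's */
          long long i_size;            /* current file size */
          long alloced;                /* pages allocated to the file */
          long swapped;                /* pages pushed out to swap */
      };

      static bool needs_prealloc_trim(const struct shmem_info_model *info,
                                      long long newsize)
      {
          if (newsize < info->i_size)
              return true;             /* real truncation: always do the work */

          /* newsize >= i_size: only undo preallocation beyond newsize if
           * there is actually something (pages or swap) to remove. */
          return info->alloced > 0 || info->swapped > 0;
      }

      int main(void)
      {
          struct shmem_info_model fresh = { 0, 0, 0 };

          /* creat()'s O_TRUNC on an empty file: nothing allocated, skip */
          printf("trim needed: %s\n",
                 needs_prealloc_trim(&fresh, 0) ? "yes" : "no");
          return 0;
      }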
    • mm, oom: add comment for why oom_adj exists · b72bdfa7
      David Rientjes authored
      /proc/pid/oom_adj exists solely to avoid breaking existing userspace
      binaries that write to the tunable.
      
      Add a comment in the only possible location within the kernel tree to
      describe the situation and motivation for keeping it around.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b72bdfa7
    • memcg: fix thresholds for 32b architectures. · c12176d3
      Michal Hocko authored
      Commit 424cdc14 ("memcg: convert threshold to bytes") has fixed a
      regression introduced by 3e32cb2e ("mm: memcontrol: lockless page
      counters") where thresholds were silently converted to use page units
      rather than bytes when interpreting the user input.
      
      The fix is not complete, though, as properly pointed out by Ben Hutchings
      during stable backport review.  The page count is converted to bytes but
      unsigned long is used to hold the value which would be obviously not
      sufficient for 32b systems with more than 4G thresholds.  The same applies
      to usage as taken from mem_cgroup_usage which might overflow.
      
      Let's remove this bytes vs. pages internal tracking difference and
      handle thresholds in page units internally.  Change mem_cgroup_usage() to
      return the value in page units and revert 424cdc14 because this should
      be sufficient for consistent handling.  mem_cgroup_read_u64, as the only
      user of mem_cgroup_usage outside of the threshold handling code, is
      converted to give the proper result in bytes.  It is doing that already
      for page_counter output, so this is more consistent as well.
      
      The value presented to the userspace is still in bytes units.
      
      Fixes: 424cdc14 ("memcg: convert threshold to bytes")
      Fixes: 3e32cb2e ("mm: memcontrol: lockless page counters")
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reported-by: Ben Hutchings <ben@decadent.org.uk>
      Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: <stable@vger.kernel.org>
      From: Michal Hocko <mhocko@kernel.org>
      Subject: memcg-fix-thresholds-for-32b-architectures-fix
      
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      From: Andrew Morton <akpm@linux-foundation.org>
      Subject: memcg-fix-thresholds-for-32b-architectures-fix-fix
      
      don't attempt to inline mem_cgroup_usage()
      
      The compiler ignores the inline anyway.  And __always_inlining it adds 600
      bytes of goop to the .o file.
      
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c12176d3
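      A worked illustration of the 32-bit overflow described above: a 5 GiB threshold does not fit in a 32-bit unsigned long when expressed in bytes, but fits comfortably when tracked in 4 KiB page units (uint32_t stands in for a 32-bit unsigned long):

      #include <stdio.h>
      #include <stdint.h>
      #include <inttypes.h>

      int main(void)
      {
          uint64_t threshold_bytes = 5ULL << 30;              /* 5 GiB */
          uint32_t truncated = (uint32_t)threshold_bytes;     /* what a 32-bit
                                                                 unsigned long keeps */
          uint64_t threshold_pages = threshold_bytes >> 12;   /* 4 KiB pages */

          printf("bytes:     %" PRIu64 "\n", threshold_bytes);
          printf("truncated: %" PRIu32 "\n", truncated);
          printf("pages:     %" PRIu64 "\n", threshold_pages);
          return 0;
      }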
    • mm: page_counter: let page_counter_try_charge() return bool · 6071ca52
      Johannes Weiner authored
      page_counter_try_charge() currently returns 0 on success and -ENOMEM on
      failure, which is surprising behavior given the function name.
      
      Make it follow the expected pattern of try_stuff() functions that return a
      boolean true to indicate success, or false for failure.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6071ca52
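      A reduced illustration of the new calling convention (the real page_counter_try_charge() also walks a counter hierarchy and reports the failing counter through an out parameter; the struct below is a sketch, not the kernel's):

      #include <stdbool.h>
      #include <stdio.h>

      struct page_counter_model {      /* reduced stand-in for struct page_counter */
          long count;
          long limit;
      };

      /* try_*() convention: true on success, false on failure. */
      static bool page_counter_try_charge(struct page_counter_model *c, long nr_pages)
      {
          if (c->count + nr_pages > c->limit)
              return false;            /* caller typically turns this into -ENOMEM */
          c->count += nr_pages;
          return true;
      }

      int main(void)
      {
          struct page_counter_model memory = { .count = 90, .limit = 100 };

          if (!page_counter_try_charge(&memory, 20))
              printf("charge failed, caller would return -ENOMEM\n");
          return 0;
      }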
    • mm: memcontrol: eliminate root memory.current · f5fc3c5d
      Johannes Weiner authored
      memory.current on the root level doesn't add anything that wouldn't be
      more accurate and detailed using system statistics.  It already doesn't
      include slabs, and it'll be a pain to keep in sync when further memory
      types are accounted in the memory controller.  Remove it.
      
      Note that this applies to the new unified hierarchy interface only.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f5fc3c5d
    • mm, hugetlbfs: optimize when NUMA=n · e0ec90ee
      Dave Hansen authored
      My recent patch "mm, hugetlb: use memory policy when available" added some
      bloat to hugetlb.o.  This patch aims to get some of the bloat back,
      especially when NUMA is not in play.
      
      It does this with an implicit #ifdef and marking some things static that
      should have been static in my first patch.  It also makes the warnings
      only VM_WARN_ON()s.  They were responsible for a pretty big chunk of the
      bloat.
      
      Doing this gets our NUMA=n text size back to a wee bit _below_ where we
      started before the original patch.
      
      It also shaves a bit of space off the NUMA=y case, but not much.
      Enforcing the mempolicy definitely takes some text and it's hard to avoid.
      
      size(1) output:
      
         text	   data	    bss	    dec	    hex	filename
        30745	   3433	   2492	  36670	   8f3e	hugetlb.o.nonuma.baseline
        31305	   3755	   2492	  37552	   92b0	hugetlb.o.nonuma.patch1
        30713	   3433	   2492	  36638	   8f1e	hugetlb.o.nonuma.patch2 (this patch)
        25235	    473	  41276	  66984	  105a8	hugetlb.o.numa.baseline
        25715	    475	  41276	  67466	  1078a	hugetlb.o.numa.patch1
        25491	    473	  41276	  67240	  106a8	hugetlb.o.numa.patch2 (this patch)
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e0ec90ee
    • mm, hugetlb: use memory policy when available · 099730d6
      Dave Hansen authored
      I have a hugetlbfs user which is never explicitly allocating huge pages
      with 'nr_hugepages'.  They only set 'nr_overcommit_hugepages' and then let
      the pages be allocated from the buddy allocator at fault time.
      
      This works, but they noticed that mbind() was not doing them any good and
      the pages were being allocated without respect for the policy they
      specified.
      
      The code in question is this:
      
      > struct page *alloc_huge_page(struct vm_area_struct *vma,
      ...
      >         page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
      >         if (!page) {
      >                 page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
      
      dequeue_huge_page_vma() is smart and will respect the VMA's memory policy.
       But, it only grabs _existing_ huge pages from the huge page pool.  If the
      pool is empty, we fall back to alloc_buddy_huge_page() which obviously
      can't do anything with the VMA's policy because it isn't even passed the
      VMA.
      
      Almost everybody preallocates huge pages.  That's probably why nobody has
      ever noticed this.  Looking back at the git history, I don't think this
      _ever_ worked from when alloc_buddy_huge_page() was introduced in
      7893d1d5, 8 years ago.
      
      The fix is to pass vma/addr down in to the places where we actually call
      in to the buddy allocator.  It's fairly straightforward plumbing.  This
      has been lightly tested.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
      Cc: David Rientjes <rientjes@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      099730d6
    • mm/hugetlb: make node_hstates array static · b4e289a6
      Alexander Kuleshov authored
      There are no users of the node_hstates array outside of mm/hugetlb.c,
      so let's make it static.
      Signed-off-by: Alexander Kuleshov <kuleshovmail@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b4e289a6
    • mm/maccess.c: actually return -EFAULT from strncpy_from_unsafe · 9dd861d5
      Rasmus Villemoes authored
      As far as I can tell, strncpy_from_unsafe never returns -EFAULT.  ret is
      the result of a __copy_from_user_inatomic(), which is 0 for success and
      positive (in this case necessarily 1) for access error - it is never
      negative.  So we were always returning the length of the, possibly
      truncated, destination string.
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9dd861d5
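      A standalone model of the return-value fix: the inner copy reports failure as a non-zero count of uncopied bytes (never a negative errno), so the wrapper has to translate that into -EFAULT itself instead of returning the string length regardless.  The "valid range" machinery below simulates a faulting source address and is purely illustrative.

      #include <stdio.h>
      #include <string.h>
      #include <errno.h>

      static const char *valid_start, *valid_end;   /* simulated "mapped" range */

      /* Stand-in for __copy_from_user_inatomic(): returns 0 on success and
       * the number of bytes NOT copied on failure - never a negative errno. */
      static long copy_inatomic(char *dst, const char *src, long n)
      {
          if (src < valid_start || src + n > valid_end)
              return n;
          memcpy(dst, src, n);
          return 0;
      }

      static long strncpy_from_unsafe_model(char *dst, const char *src, long count)
      {
          long ret, i = 0;

          do {
              ret = copy_inatomic(&dst[i], &src[i], 1);
              i++;
          } while (ret == 0 && dst[i - 1] && i < count);

          dst[i - 1] = '\0';
          return ret ? -EFAULT : i;     /* the fix: report -EFAULT, not the length */
      }

      int main(void)
      {
          char src[] = "hello";
          char buf[16];

          valid_start = src;
          valid_end = src + sizeof(src);
          printf("%ld\n", strncpy_from_unsafe_model(buf, src, sizeof(buf)));  /* 6 */

          valid_end = src;              /* simulate an unmapped source address */
          printf("%ld\n", strncpy_from_unsafe_model(buf, src, sizeof(buf)));  /* -EFAULT */
          return 0;
      }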
    • mm/cma.c: suppress warning · 3acaea68
      Andrew Morton authored
      mm/cma.c: In function 'cma_alloc':
      mm/cma.c:366: warning: 'pfn' may be used uninitialized in this function
      
      The patch actually improves the tracing a bit: if alloc_contig_range()
      fails, tracing will display the offending pfn rather than -1.
      
      Cc: Stefan Strogin <stefan.strogin@gmail.com>
      Cc: Michal Nazarewicz <mpn@google.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Cc: Thierry Reding <treding@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3acaea68
    • mm: migrate dirty page without clear_page_dirty_for_io etc · 42cb14b1
      Hugh Dickins authored
      clear_page_dirty_for_io() has accumulated writeback and memcg subtleties
      since v2.6.16 first introduced page migration; and the set_page_dirty()
      which completed its migration of PageDirty, later had to be moderated to
      __set_page_dirty_nobuffers(); then PageSwapBacked had to skip that too.
      
      No actual problems seen with this procedure recently, but if you look into
      what the clear_page_dirty_for_io(page)+set_page_dirty(newpage) is actually
      achieving, it turns out to be nothing more than moving the PageDirty flag,
      and its NR_FILE_DIRTY stat from one zone to another.
      
      It would be good to avoid a pile of irrelevant decrementations and
      incrementations, and improper event counting, and unnecessary descent of
      the radix_tree under tree_lock (to set the PAGECACHE_TAG_DIRTY which
      radix_tree_replace_slot() left in place anyway).
      
      Do the NR_FILE_DIRTY movement, like the other stats movements, while
      interrupts still disabled in migrate_page_move_mapping(); and don't even
      bother if the zone is the same.  Do the PageDirty movement there under
      tree_lock too, where old page is frozen and newpage not yet visible:
      bearing in mind that as soon as newpage becomes visible in radix_tree, an
      un-page-locked set_page_dirty() might interfere (or perhaps that's just
      not possible: anything doing so should already hold an additional
      reference to the old page, preventing its migration; but play safe).
      
      But we do still need to transfer PageDirty in migrate_page_copy(), for
      those who don't go the mapping route through migrate_page_move_mapping().
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      42cb14b1
    • mm: page migration avoid touching newpage until no going back · cf4b769a
      Hugh Dickins authored
      We have had trouble in the past from the way in which page migration's
      newpage is initialized in dribs and drabs - see commit 8bdd6380 ("mm:
      fix direct reclaim writeback regression") which proposed a cleanup.
      
      We have no actual problem now, but I think the procedure would be clearer
      (and alternative get_new_page pools safer to implement) if we assert that
      newpage is not touched until we are sure that it's going to be used -
      except for taking the trylock on it in __unmap_and_move().
      
      So shift the early initializations from move_to_new_page() into
      migrate_page_move_mapping(), mapping and NULL-mapping paths.  Similarly
      migrate_huge_page_move_mapping(), but its NULL-mapping path can just be
      deleted: you cannot reach hugetlbfs_migrate_page() with a NULL mapping.
      
      Adjust stages 3 to 8 in the Documentation file accordingly.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cf4b769a
    • mm: page migration use migration entry for swapcache too · 470f119f
      Hugh Dickins authored
      Hitherto page migration has avoided using a migration entry for a
      swapcache page mapped into userspace, apparently for historical reasons.
      So any page blessed with swapcache would entail a minor fault when it's
      next touched, which page migration otherwise tries to avoid.  Swapcache in
      an mlocked area is rare, so won't often matter, but still better fixed.
      
      Just rearrange the block in try_to_unmap_one(), to handle TTU_MIGRATION
      before checking PageAnon, that's all (apart from some reindenting).
      
      Well, no, that's not quite all: doesn't this by the way fix a soft_dirty
      bug, that page migration of a file page was forgetting to transfer the
      soft_dirty bit?  Probably not a serious bug: if I understand correctly,
      soft_dirty afficionados usually have to handle file pages separately
      anyway; but we publish the bit in /proc/<pid>/pagemap on file mappings as
      well as anonymous, so page migration ought not to perturb it.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      470f119f
    • mm: simplify page migration's anon_vma comment and flow · 03f15c86
      Hugh Dickins authored
      __unmap_and_move() contains a long stale comment on page_get_anon_vma()
      and PageSwapCache(), with an odd control flow that's hard to follow.
      Mostly this reflects our confusion about the lifetime of an anon_vma, in
      the early days of page migration, before we could take a reference to one.
       Nowadays this seems quite straightforward: cut it all down to essentials.
      
      I cannot see the relevance of swapcache here at all, so don't treat it any
      differently: I believe the old comment reflects in part our anon_vma
      confusions, and in part the original v2.6.16 page migration technique,
      which used actual swap to migrate anon instead of swap-like migration
      entries.  Why should a swapcache page not be migrated with the aid of
      migration entry ptes like everything else?  So lose that comment now, and
      enable migration entries for swapcache in the next patch.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      03f15c86
    • mm: page migration remove_migration_ptes at lock+unlock level · 5c3f9a67
      Hugh Dickins authored
      Clean up page migration a little more by calling remove_migration_ptes()
      from the same level, on success or on failure, from __unmap_and_move() or
      from unmap_and_move_huge_page().
      
      Don't reset page->mapping of a PageAnon old page in move_to_new_page(),
      leave that to when the page is freed.  Except for here in page migration,
      it has been an invariant that a PageAnon (bit set in page->mapping) page
      stays PageAnon until it is freed, and I think we're safer to keep to that.
      
      And with the above rearrangement, it's necessary because zap_pte_range()
      wants to identify whether a migration entry represents a file or an anon
      page, to update the appropriate rss stats without waiting on it.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5c3f9a67
    • mm: page migration trylock newpage at same level as oldpage · 7db7671f
      Hugh Dickins authored
      Clean up page migration a little by moving the trylock of newpage from
      move_to_new_page() into __unmap_and_move(), where the old page has been
      locked.  Adjust unmap_and_move_huge_page() and balloon_page_migrate()
      accordingly.
      
      But make one kind-of-functional change on the way: whereas trylock of
      newpage used to BUG() if it failed, now simply return -EAGAIN if so.
      Cutting out BUG()s is good, right?  But, to be honest, this is really to
      extend the usefulness of the custom put_new_page feature, allowing a pool
      of new pages to be shared perhaps with racing uses.
      
      Use an "else" instead of that "skip_unmap" label.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Rafael Aquini <aquini@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7db7671f
    • mm: page migration use the put_new_page whenever necessary · 2def7424
      Hugh Dickins authored
      I don't know of any problem from the way it's used in our current tree,
      but there is one defect in page migration's custom put_new_page feature.
      
      An unused newpage is expected to be released with the put_new_page(), but
      there was one MIGRATEPAGE_SUCCESS (0) path which released it with
      putback_lru_page(): which can be very wrong for a custom pool.
      
      Fixed more easily by resetting put_new_page once it won't be needed, than
      by adding a further flag to modify the rc test.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2def7424
    • mm: correct a couple of page migration comments · 14e0f9bc
      Hugh Dickins authored
      It's migrate.c not migration.c, and nowadays putback_movable_pages() not
      putback_lru_pages().
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Rafael Aquini <aquini@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      14e0f9bc
    • mm: rename mem_cgroup_migrate to mem_cgroup_replace_page · 45637bab
      Hugh Dickins authored
      After v4.3's commit 0610c25d ("memcg: fix dirty page migration")
      mem_cgroup_migrate() doesn't have much to offer in page migration: convert
      migrate_misplaced_transhuge_page() to set_page_memcg() instead.
      
      Then rename mem_cgroup_migrate() to mem_cgroup_replace_page(), since its
      remaining callers are replace_page_cache_page() and shmem_replace_page():
      both of whom passed lrucare true, so just eliminate that argument.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      45637bab
    • mm: page migration fix PageMlocked on migrated pages · 51afb12b
      Hugh Dickins authored
      Commit e6c509f8 ("mm: use clear_page_mlock() in page_remove_rmap()")
      in v3.7 inadvertently made mlock_migrate_page() impotent: page migration
      unmaps the page from userspace before migrating, and that commit clears
      PageMlocked on the final unmap, leaving mlock_migrate_page() with
      nothing to do.  Not a serious bug, the next attempt at reclaiming the
      page would fix it up; but a betrayal of page migration's intent - the
      new page ought to emerge as PageMlocked.
      
      I don't see how to fix it for mlock_migrate_page() itself; but easily
      fixed in remove_migration_pte(), by calling mlock_vma_page() when the vma
      is VM_LOCKED - under pte lock as in try_to_unmap_one().
      
      Delete mlock_migrate_page()?  Not quite, it does still serve a purpose for
      migrate_misplaced_transhuge_page(): where we could replace it by a test,
      clear_page_mlock(), mlock_vma_page() sequence; but would that be an
      improvement?  mlock_migrate_page() is fairly lean, and let's make it
      leaner by skipping the irq save/restore now clearly not needed.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      51afb12b
    • mm: rmap use pte lock not mmap_sem to set PageMlocked · b87537d9
      Hugh Dickins authored
      KernelThreadSanitizer (ktsan) has shown that the down_read_trylock() of
      mmap_sem in try_to_unmap_one() (when going to set PageMlocked on a page
      found mapped in a VM_LOCKED vma) is ineffective against races with
      exit_mmap()'s munlock_vma_pages_all(), because mmap_sem is not held when
      tearing down an mm.
      
      But that's okay, those races are benign; and although we've believed for
      years in that ugly down_read_trylock(), it's unsuitable for the job, and
      frustrates the good intention of setting PageMlocked when it fails.
      
      It just doesn't matter if here we read vm_flags an instant before or after
      a racing mlock() or munlock() or exit_mmap() sets or clears VM_LOCKED: the
      syscalls (or exit) work their way up the address space (taking pt locks
      after updating vm_flags) to establish the final state.
      
      We do still need to be careful never to mark a page Mlocked (hence
      unevictable) by any race that will not be corrected shortly after.  The
      page lock protects from many of the races, but not all (a page is not
      necessarily locked when it's unmapped).  But the pte lock we just dropped
      is good to cover the rest (and serializes even with
      munlock_vma_pages_all(), so no special barriers required): now hold on to
      the pte lock while calling mlock_vma_page().  Is that lock ordering safe?
      Yes, that's how follow_page_pte() calls it, and how page_remove_rmap()
      calls the complementary clear_page_mlock().
      
      This fixes the following case (though not a case which anyone has
      complained of), which mmap_sem did not: truncation's preliminary
      unmap_mapping_range() is supposed to remove even the anonymous COWs of
      filecache pages, and that might race with try_to_unmap_one() on a
      VM_LOCKED vma, so that mlock_vma_page() sets PageMlocked just after
      zap_pte_range() unmaps the page, causing "Bad page state (mlocked)" when
      freed.  The pte lock protects against this.
      
      You could say that it also protects against the more ordinary case, racing
      with the preliminary unmapping of a filecache page itself: but in our
      current tree, that's independently protected by i_mmap_rwsem; and that
      race would be why "Bad page state (mlocked)" was seen before commit
      48ec833b ("Revert mm/memory.c: share the i_mmap_rwsem").
      
      Vlastimil Babka points out another race which this patch protects against.
       try_to_unmap_one() might reach its mlock_vma_page() TestSetPageMlocked a
      moment after munlock_vma_pages_all() did its Phase 1 TestClearPageMlocked:
      leaving PageMlocked and unevictable when it should be evictable.  mmap_sem
      is ineffective because exit_mmap() does not hold it; page lock ineffective
      because __munlock_pagevec() only takes it afterwards, in Phase 2; pte lock
      is effective because __munlock_pagevec_fill() takes it to get the page,
      after VM_LOCKED was cleared from vm_flags, so visible to try_to_unmap_one.
      
      Kirill Shutemov points out that if the compiler chooses to implement a
      "vma->vm_flags &= VM_WHATEVER" or "vma->vm_flags |= VM_WHATEVER" operation
      with an intermediate store of unrelated bits set, since I'm here foregoing
      its usual protection by mmap_sem, try_to_unmap_one() might catch sight of
      a spurious VM_LOCKED in vm_flags, and make the wrong decision.  This does
      not appear to be an immediate problem, but we may want to define vm_flags
      accessors in future, to guard against such a possibility.
      
      While we're here, make a related optimization in try_to_unmap_one(): if
      it's doing TTU_MUNLOCK, then there's no point at all in descending the
      page tables and getting the pt lock, unless the vma is VM_LOCKED.  Yes,
      that can change racily, but it can change racily even without the
      optimization: it's not critical.  Far better not to waste time here.
      
      Stopped short of separating try_to_munlock_one() from try_to_unmap_one()
      on this occasion, but that's probably the sensible next step - with a
      rename, given that try_to_munlock()'s business is to try to set Mlocked.
      
      Updated the unevictable-lru Documentation, to remove its reference to mmap
      semaphore, but found a few more updates needed in just that area.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b87537d9
    • mm Documentation: undoc non-linear vmas · 7a14239a
      Hugh Dickins authored
      While updating some mm Documentation, I came across a few straggling
      references to the non-linear vmas which were happily removed in v4.0.
      Delete them.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7a14239a
    • mm: do not inc NR_PAGETABLE if ptlock_init failed · 706874e9
      Vladimir Davydov authored
      If ALLOC_SPLIT_PTLOCKS is defined, ptlock_init may fail, in which case we
      shouldn't increment NR_PAGETABLE.
      
      Since small allocations, such as ptlock, normally do not fail (currently
      they can fail if kmemcg is used though), this patch does not really fix
      anything and should be considered as a code cleanup.
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      706874e9
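      A small userspace model of the ordering the patch enforces (the _model names are invented; in the kernel the accounting happens in the page table constructor path): only account the page table page once ptlock_init() has succeeded.

      #include <stdbool.h>
      #include <stdio.h>

      static long nr_pagetable;                 /* stand-in for NR_PAGETABLE */

      static bool ptlock_init_model(bool alloc_ok)
      {
          /* with ALLOC_SPLIT_PTLOCKS the lock is a separate allocation
           * and can fail */
          return alloc_ok;
      }

      static bool pgtable_page_ctor_model(bool alloc_ok)
      {
          if (!ptlock_init_model(alloc_ok))
              return false;                     /* failed: do not bump the stat */
          nr_pagetable++;
          return true;
      }

      int main(void)
      {
          pgtable_page_ctor_model(false);
          printf("NR_PAGETABLE = %ld\n", nr_pagetable);   /* still 0 */
          return 0;
      }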
    • mm: clear_soft_dirty_pmd() requires THP · 5d3875a0
      Laurent Dufour authored
      Don't build clear_soft_dirty_pmd() if transparent huge pages are not
      enabled.
      Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5d3875a0
    • mm: clear pte in clear_soft_dirty() · 326c2597
      Laurent Dufour authored
      As mentioned in commit 56eecdb9 ("mm: Use ptep/pmdp_set_numa() for
      updating _PAGE_NUMA bit"), architectures like ppc64 don't do a tlb flush
      in the set_pte/pmd functions.
      
      So when dealing with an existing pte in clear_soft_dirty(), the pte must
      be cleared before being modified.
      Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
      Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: Pavel Emelyanov <xemul@parallels.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      326c2597
    • ksm: unstable_tree_search_insert error checking cleanup · c8f95ed1
      Andrea Arcangeli authored
      get_mergeable_page() can only return NULL (also in case of errors) or the
      pinned mergeable page.  It can't return an error other than NULL.
      This optimizes away the unnecessary error check.
      
      Add a return after the "out:" label in the callee to make it more
      readable.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Petr Holasek <pholasek@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c8f95ed1
    • ksm: use find_mergeable_vma in try_to_merge_with_ksm_page · 85c6e8dd
      Andrea Arcangeli authored
      Doing the VM_MERGEABLE check after the page == kpage check won't provide
      any meaningful benefit.  The !vma->anon_vma check of find_mergeable_vma is
      the only superfluous bit in using find_mergeable_vma because the !PageAnon
      check of try_to_merge_one_page() implicitly checks for that, but it still
      looks cleaner to share the same find_mergeable_vma().
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Petr Holasek <pholasek@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      85c6e8dd
    • ksm: use the helper method to do the hlist_empty check · 98666f8a
      Andrea Arcangeli authored
      This just uses the helper function to cleanup the assumption on the
      hlist_node internals.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Petr Holasek <pholasek@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      98666f8a
    • ksm: don't fail stable tree lookups if walking over stale stable_nodes · f2e5ff85
      Andrea Arcangeli authored
      The stable_nodes can become stale at any time if the underlying pages gets
      freed.  The stable_node gets collected and removed from the stable rbtree
      if that is detected during the rbtree lookups.
      
      Don't fail the lookup if running into stale stable_nodes, just restart the
      lookup after collecting the stale stable_nodes.  Otherwise the CPU spent
      in the preparation stage is wasted and the lookup must be repeated at the
      next loop potentially failing a second time in a second stale stable_node.
      
      If we don't prune aggressively we delay the merging of the unstable node
      candidates and at the same time we delay the freeing of the stale
      stable_nodes.  Keeping stale stable_nodes around wastes memory and it
      can't provide any benefit.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Petr Holasek <pholasek@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f2e5ff85
    • ksm: add cond_resched() to the rmap_walks · ad12695f
      Andrea Arcangeli authored
      While at it, add it to the file and anon walks too.
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Hugh Dickins <hughd@google.com>
      Cc: Petr Holasek <pholasek@redhat.com>
      Acked-by: Davidlohr Bueso <dbueso@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ad12695f
    • memcg: simplify and inline __mem_cgroup_from_kmem · df406551
      Vladimir Davydov authored
      Before the previous patch ("memcg: unify slab and other kmem pages
      charging"), __mem_cgroup_from_kmem had to handle two types of kmem - slab
      pages and pages allocated with alloc_kmem_pages - which keep the memcg
      pointer in different places.  Now we can unify it.  Since the function
      then becomes tiny, we can fold it into mem_cgroup_from_kmem.
      
      [hughd@google.com: move mem_cgroup_from_kmem into list_lru.c]
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      df406551