1. 06 Nov, 2015 40 commits
    • Eric B Munson's avatar
      mm: mlock: add mlock flags to enable VM_LOCKONFAULT usage · b0f205c2
      Eric B Munson authored
      The previous patch introduced a flag that specified pages in a VMA should
      be placed on the unevictable LRU, but they should not be made present when
      the area is created.  This patch adds the ability to set this state via
      the new mlock system calls.
      
      We add MLOCK_ONFAULT for mlock2 and MCL_ONFAULT for mlockall.
      MLOCK_ONFAULT will set the VM_LOCKONFAULT modifier for VM_LOCKED.
      MCL_ONFAULT should be used as a modifier to the two other mlockall flags.
      When used with MCL_CURRENT, all current mappings will be marked with
      VM_LOCKED | VM_LOCKONFAULT.  When used with MCL_FUTURE, the mm->def_flags
      will be marked with VM_LOCKED | VM_LOCKONFAULT.  When used with both
      MCL_CURRENT and MCL_FUTURE, all current mappings and mm->def_flags will be
      marked with VM_LOCKED | VM_LOCKONFAULT.
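      A minimal userspace sketch (illustration only, not part of the patch) of how the new flags are meant to be used; it assumes a libc that already exposes mlock2() and the MLOCK_ONFAULT/MCL_ONFAULT constants, otherwise the call has to go through syscall(2):

              #include <stdio.h>
              #include <sys/mman.h>

              int main(void)
              {
                      size_t len = 1UL << 30;    /* 1 GB demand-locked buffer */
                      void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                      if (buf == MAP_FAILED)
                              return 1;

                      /* lock only the pages of buf that actually get faulted in */
                      if (mlock2(buf, len, MLOCK_ONFAULT))
                              perror("mlock2");

                      /* lock-on-fault for all current and future mappings */
                      if (mlockall(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT))
                              perror("mlockall");
                      return 0;
              }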
      
      Prior to this patch, mlockall() would unconditionally clear the
      mm->def_flags any time it is called without MCL_FUTURE.  This behavior is
      maintained after adding MCL_ONFAULT.  If a call to mlockall(MCL_FUTURE) is
      followed by mlockall(MCL_CURRENT), the mm->def_flags will be cleared and
      new VMAs will be unlocked.  This remains true with or without MCL_ONFAULT
      in either mlockall() invocation.
      
      munlock() will unconditionally clear both VMA flags.  munlockall()
      unconditionally clears both VMA flags on all VMAs and in the mm->def_flags
      field.
      Signed-off-by: Eric B Munson <emunson@akamai.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Shuah Khan <shuahkh@osg.samsung.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b0f205c2
    • Eric B Munson's avatar
      mm: introduce VM_LOCKONFAULT · de60f5f1
      Eric B Munson authored
      The cost of faulting in all memory to be locked can be very high when
      working with large mappings.  If only portions of the mapping will be used
      this can incur a high penalty for locking.
      
      For the example of a large file, this is the usage pattern for a large
      statistical language model (probably applies to other statistical or
      graphical models as well).  For the security example, any application
      transacting in data that cannot be swapped out (credit card data, medical
      records, etc.).
      
      This patch introduces the ability to request that pages are not
      pre-faulted, but are placed on the unevictable LRU when they are finally
      faulted in.  The VM_LOCKONFAULT flag will be used together with VM_LOCKED
      and has no effect when set without VM_LOCKED.  Setting the VM_LOCKONFAULT
      flag for a VMA will cause pages faulted into that VMA to be added to the
      unevictable LRU when they are faulted or if they are already present, but
      will not cause any missing pages to be faulted in.
      
      Exposing this new lock state means that we cannot overload the meaning of
      the FOLL_POPULATE flag any longer.  Prior to this patch it was used to
      mean that the VMA for a fault was locked.  This means we need the new
      FOLL_MLOCK flag to communicate the locked state of a VMA.  FOLL_POPULATE
      will now only control if the VMA should be populated and in the case of
      VM_LOCKONFAULT, it will not be set.
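      An illustrative sketch (not the literal diff) of how the flags are intended to combine on the fault/populate path: FOLL_MLOCK says "the VMA is locked", while FOLL_POPULATE is only requested when the lock is not of the on-fault kind:

              /* gup_flags derivation, sketched; names as described above */
              if (vma->vm_flags & VM_LOCKED) {
                      gup_flags |= FOLL_MLOCK;
                      if (!(vma->vm_flags & VM_LOCKONFAULT))
                              gup_flags |= FOLL_POPULATE;
              }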
      Signed-off-by: Eric B Munson <emunson@akamai.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Shuah Khan <shuahkh@osg.samsung.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      de60f5f1
    • Eric B Munson's avatar
      mm: mlock: add new mlock system call · a8ca5d0e
      Eric B Munson authored
      With the refactored mlock code, introduce a new system call for mlock.
      The new call will allow the user to specify what lock states are being
      added.  mlock2 is trivial at the moment, but a follow-on patch will add a
      new mlock state making it useful.
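      Until the C library grows a wrapper, userspace would reach the new call through syscall(2); a minimal sketch, assuming the architecture's unistd headers already define __NR_mlock2 (my_mlock2 is just an illustrative name):

              #include <unistd.h>
              #include <sys/syscall.h>

              static int my_mlock2(const void *addr, size_t len, unsigned int flags)
              {
                      /* forwards directly to the new system call */
                      return syscall(__NR_mlock2, addr, len, flags);
              }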
      Signed-off-by: Eric B Munson <emunson@akamai.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Shuah Khan <shuahkh@osg.samsung.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a8ca5d0e
    • Eric B Munson's avatar
      mm: mlock: refactor mlock, munlock, and munlockall code · 1aab92ec
      Eric B Munson authored
      mlock() allows a user to control page out of program memory, but this
      comes at the cost of faulting in the entire mapping when it is allocated.
      For large mappings where the entire area is not necessary this is not
      ideal.  Instead of forcing all locked pages to be present when they are
      allocated, this set creates a middle ground.  Pages are marked to be
      placed on the unevictable LRU (locked) when they are first used, but they
      are not faulted in by the mlock call.
      
      This series introduces a new mlock() system call that takes a flags
      argument along with the start address and size.  This flags argument gives
      the caller the ability to request memory be locked in the traditional way,
      or to be locked after the page is faulted in.  A new MCL flag is added to
      mirror the lock on fault behavior from mlock() in mlockall().
      
      There are two main use cases that this set covers.  The first is the
      security-focused mlock case.  A buffer is needed that cannot be written
      to swap.  The maximum size is known, but on average the memory used is
      significantly less than this maximum.  With lock on fault, the buffer is
      guaranteed never to be paged out, without consuming the maximum size every
      time such a buffer is created.
      
      The second use case is focused on performance.  Portions of a large file
      are needed and we want to keep the used portions in memory once accessed.
      This is the case for large graphical models where the path through the
      graph is not known until run time.  The entire graph is unlikely to be
      used in a given invocation, but once a node has been used it needs to stay
      resident for further processing.  Given these constraints we have a number
      of options.  We can potentially waste a large amount of memory by mlocking
      the entire region (this can also cause a significant stall at startup as
      the entire file is read in).  We can mlock each page as we access it,
      without tracking whether the page is already resident, but this introduces
      large overhead for each access.  The third option is mapping the entire region
      with PROT_NONE and using a signal handler for SIGSEGV to
      mprotect(PROT_READ) and mlock() the needed page.  Doing this page at a
      time adds a significant performance penalty.  Batching can be used to
      mitigate this overhead, but in order to safely avoid trying to mprotect
      pages outside of the mapping, the boundaries of each mapping to be used in
      this way must be tracked and available to the signal handler.  This is
      precisely what the mm system in the kernel should already be doing.
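      For reference, a rough sketch of that PROT_NONE-plus-SIGSEGV scheme (illustrative only; BATCH and the handler set-up are assumptions, and nothing here tracks mapping boundaries, which is exactly the problem described above):

              #include <signal.h>
              #include <sys/mman.h>
              #include <unistd.h>

              #define BATCH 8                    /* pages unprotected per fault */

              static void segv_handler(int sig, siginfo_t *si, void *uc)
              {
                      long psz = sysconf(_SC_PAGESIZE);
                      char *page = (char *)((unsigned long)si->si_addr &
                                            ~(unsigned long)(psz - 1));
                      (void)sig; (void)uc;

                      /* step 1: the faulting page is known to be lockable */
                      mprotect(page, psz, PROT_READ);
                      mlock(page, psz);

                      /* step 2: speculatively open up the next BATCH - 1 pages;
                       * this can run past the end of the mapping unless the
                       * program tracks each mapping's boundaries */
                      mprotect(page + psz, (BATCH - 1) * psz, PROT_READ);
                      mlock(page + psz, (BATCH - 1) * psz);
              }

              static void install_handler(void)
              {
                      struct sigaction sa = { 0 };

                      sa.sa_sigaction = segv_handler;
                      sa.sa_flags = SA_SIGINFO;
                      sigaction(SIGSEGV, &sa, NULL);
              }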
      
      For mlock(MLOCK_ONFAULT) the user is charged against RLIMIT_MEMLOCK as if
      mlock(MLOCK_LOCKED) or mmap(MAP_LOCKED) was used, so when the VMA is
      created not when the pages are faulted in.  For mlockall(MCL_ONFAULT) the
      user is charged as if MCL_FUTURE was used.  This decision was made to keep
      the accounting checks out of the page fault path.
      
      To illustrate the benefit of this set I wrote a test program that mmaps a
      5 GB file filled with random data and then makes 15,000,000 accesses to
      random addresses in that mapping.  The test program was run 20 times for
      each setup.  Results are reported for two program portions, setup and
      execution.  The setup phase is calling mmap and optionally mlock on the
      entire region.  For most experiments this is trivial, but it highlights
      the cost of faulting in the entire region.  Results are averages across
      the 20 runs in milliseconds.
      
      mmap with mlock(MLOCK_LOCKED) on entire range:
      Setup avg:      8228.666
      Processing avg: 8274.257
      
      mmap with mlock(MLOCK_LOCKED) before each access:
      Setup avg:      0.113
      Processing avg: 90993.552
      
      mmap with PROT_NONE and signal handler and batch size of 1 page:
      With the default value of max_map_count, this gets ENOMEM as I attempt
      to change the permissions; after upping the sysctl significantly I get:
      Setup avg:      0.058
      Processing avg: 69488.073

      mmap with PROT_NONE and signal handler and batch size of 8 pages:
      Setup avg:      0.068
      Processing avg: 38204.116
      
      mmap with PROT_NONE and signal handler and batch size of 16 pages:
      Setup avg:      0.044
      Processing avg: 29671.180
      
      mmap with mlock(MLOCK_ONFAULT) on entire range:
      Setup avg:      0.189
      Processing avg: 17904.899
      
      The signal handler in the batch cases faulted in memory in two steps to
      avoid having to know the start and end of the faulting mapping.  The first
      step covers the page that caused the fault as we know that it will be
      possible to lock.  The second step speculatively tries to mlock and
      mprotect the batch size - 1 pages that follow.  There may be a clever way
      to avoid this without having the program track each mapping to be covered
      by this handler in a globally accessible structure, but I could not find
      it.  It should be noted that with a large enough batch size this two step
      fault handler can still cause the program to crash if it reaches far
      beyond the end of the mapping.
      
      These results show that if the developer knows that a majority of the
      mapping will be used, it is better to try and fault it in at once,
      otherwise mlock(MLOCK_ONFAULT) is significantly faster.
      
      The performance cost of these patches are minimal on the two benchmarks I
      have tested (stream and kernbench).  The following are the average values
      across 20 runs of stream and 10 runs of kernbench after a warmup run whose
      results were discarded.
      
      Avg throughput in MB/s from stream using 1000000 element arrays
      Test     4.2-rc1      4.2-rc1+lock-on-fault
      Copy:    10,566.5     10,421
      Scale:   10,685       10,503.5
      Add:     12,044.1     11,814.2
      Triad:   12,064.8     11,846.3
      
      Kernbench optimal load
                       4.2-rc1  4.2-rc1+lock-on-fault
      Elapsed Time     78.453   78.991
      User Time        64.2395  65.2355
      System Time      9.7335   9.7085
      Context Switches 22211.5  22412.1
      Sleeps           14965.3  14956.1
      
      This patch (of 6):
      
      Extending the mlock system call is very difficult because it currently
      does not take a flags argument.  A later patch in this set will extend
      mlock to support a middle ground between pages that are locked and faulted
      in immediately and unlocked pages.  To pave the way for the new system
      call, the code needs some reorganization so that all the actual entry
      point does is check the input and translate it to VMA flags.
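      After the refactor each entry point is reduced to a thin translation into VMA flags, roughly (a sketch consistent with the description above):

              /* the syscall entry point just translates to VMA flags */
              SYSCALL_DEFINE2(mlock, unsigned long, start, size_t, len)
              {
                      return do_mlock(start, len, VM_LOCKED);
              }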
      Signed-off-by: Eric B Munson <emunson@akamai.com>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Shuah Khan <shuahkh@osg.samsung.com>
      Cc: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1aab92ec
    • Andrey Ryabinin's avatar
      kasan: always taint kernel on report · eb06f43f
      Andrey Ryabinin authored
      Currently we already taint the kernel in some cases.  E.g. if we hit some
      bug in slub memory we call object_err(), which taints the kernel with the
      TAINT_BAD_PAGE flag.  But for other kinds of bugs the kernel is left
      untainted.

      Always taint with TAINT_BAD_PAGE if KASAN finds a bug.  This is useful
      for automated testing.
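      Conceptually the change amounts to something like the following sketch (the lockdep argument is an assumption here, not quoted from the patch):

              static void kasan_report_error(struct kasan_access_info *info)
              {
                      /* ... print the usual KASAN report ... */

                      /* taint so automated test rigs notice that a bug fired */
                      add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
              }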
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Alexander Potapenko <glider@google.com>
      Reviewed-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      eb06f43f
    • Andrey Ryabinin's avatar
      mm, slub, kasan: enable user tracking by default with KASAN=y · 89d3c87e
      Andrey Ryabinin authored
      It's recommended to have slub's user tracking enabled with CONFIG_KASAN,
      because:
      
      a) User tracking disables slab merging which improves
          detecting out-of-bounds accesses.
      b) User tracking metadata acts as redzone which also improves
          detecting out-of-bounds accesses.
      c) User tracking provides additional information about the object.
          This information helps to understand bugs.

      Currently it is not enabled by default.  Besides recompiling the kernel
      with KASAN and reinstalling it, the user also has to change the boot
      cmdline, which is not very handy.
      
      Enable slub user tracking by default with KASAN=y, since there is no good
      reason to not do this.
      
      [akpm@linux-foundation.org: little fixes, per David]
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      89d3c87e
    • Xishi Qiu's avatar
      kasan: use IS_ALIGNED in memory_is_poisoned_8() · 10f70262
      Xishi Qiu authored
      Use IS_ALIGNED() to determine whether the shadow spans two bytes.  It
      generates less code and is more readable.  Also add some comments to the
      shadow check functions.
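      A simplified sketch of the idea (not the verbatim kernel code): each shadow byte covers KASAN_SHADOW_SCALE_SIZE (8) bytes, so an 8-byte access spans two shadow bytes only when its address is unaligned:

              static __always_inline bool memory_is_poisoned_8(unsigned long addr)
              {
                      u8 *shadow_addr = (u8 *)kasan_mem_to_shadow((void *)addr);

                      if (memory_is_poisoned_1(addr + 7))
                              return true;

                      /* aligned: the access is covered by one shadow byte,
                       * already validated by the check above */
                      if (likely(IS_ALIGNED(addr, KASAN_SHADOW_SCALE_SIZE)))
                              return false;

                      /* unaligned: the first granule must be fully accessible */
                      return unlikely(*shadow_addr != 0);
              }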
      Signed-off-by: Xishi Qiu <qiuxishi@huawei.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andrey Konovalov <adech.fo@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      10f70262
    • Wang Long's avatar
      kasan: Fix a type conversion error · e0d57714
      Wang Long authored
      The current KASAN code can not find the following out-of-bounds bugs:
      
              char *ptr;
              ptr = kmalloc(8, GFP_KERNEL);
              memset(ptr+7, 0, 2);
      
      The cause of the problem is a type conversion error in the
      memory_is_poisoned_n() function.  This patch fixes that.
      Signed-off-by: Wang Long <long.wanglong@huawei.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e0d57714
    • Wang Long's avatar
      lib: test_kasan: add some testcases · f523e737
      Wang Long authored
      Add some out-of-bounds test cases to the test_kasan module.
      Signed-off-by: Wang Long <long.wanglong@huawei.com>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f523e737
    • Andrey Konovalov's avatar
      kasan: update reference to kasan prototype repo · 5d0926ef
      Andrey Konovalov authored
      Update the reference to the kasan prototype repository on github, since it
      was renamed.
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5d0926ef
    • Andrey Konovalov's avatar
      kasan: move KASAN_SANITIZE in arch/x86/boot/Makefile · c63f06dd
      Andrey Konovalov authored
      Move KASAN_SANITIZE in arch/x86/boot/Makefile above the comment
      related to SVGA_MODE, since the comment refers to 'the next line'.
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c63f06dd
    • Andrey Konovalov's avatar
      kasan: various fixes in documentation · 0295fd5d
      Andrey Konovalov authored
      [akpm@linux-foundation.org: coding-style fixes]
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0295fd5d
    • Andrey Konovalov's avatar
      kasan: update log messages · 25add7ec
      Andrey Konovalov authored
      We decided to use KASAN as the short name of the tool and
      KernelAddressSanitizer as the full one.  Update log messages according to
      that.
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      25add7ec
    • Andrey Konovalov's avatar
      kasan: accurately determine the type of the bad access · cdf6a273
      Andrey Konovalov authored
      Makes KASAN accurately determine the type of the bad access. If the shadow
      byte value is in the [0, KASAN_SHADOW_SCALE_SIZE) range we can look at
      the next shadow byte to determine the type of the access.
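      A sketch of that logic (illustrative names and a simplified mapping, not the exact kernel helpers):

              const char *shadow_bug_type(const s8 *shadow)
              {
                      s8 val = *shadow;

                      /* 1..KASAN_SHADOW_SCALE_SIZE-1: the granule is only partly
                       * accessible, so the real culprit is described by the
                       * following shadow byte */
                      if (val > 0 && val < KASAN_SHADOW_SCALE_SIZE)
                              val = *(++shadow);

                      switch (val) {
                      case KASAN_KMALLOC_FREE:
                              return "use-after-free";
                      case KASAN_KMALLOC_REDZONE:
                              return "slab-out-of-bounds";
                      default:
                              return "unknown-crash";
                      }
              }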
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cdf6a273
    • Andrey Konovalov's avatar
      kasan: update reported bug types for kernel memory accesses · 0952d87f
      Andrey Konovalov authored
      Update the names of the bad access types to better reflect the type of
      the access that happened and make these error types "literals" that can
      be used for classification and deduplication in scripts.
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0952d87f
    • Andrey Konovalov's avatar
      kasan: update reported bug types for not user nor kernel memory accesses · e9121076
      Andrey Konovalov authored
      Each access with address lower than
      kasan_shadow_to_mem(KASAN_SHADOW_START) is reported as user-memory-access.
      This is not always true, the accessed address might not be in user space.
      Fix this by reporting such accesses as null-ptr-derefs or
      wild-memory-accesses.
      
      There's another reason for this change.  For userspace ASan we have a
      bunch of systems that analyze error types for the purpose of
      classification and deduplication.  Sooner or later we will write them to
      KASAN as well.  Then clearly and explicitly stated error types will bring
      value.
      Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Konstantin Serebryany <kcc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e9121076
    • Aneesh Kumar K.V's avatar
      mm/kasan: prevent deadlock in kasan reporting · fc5aeeaf
      Aneesh Kumar K.V authored
      When we end up calling kasan_report() in real mode, our shadow mapping
      for the spinlock variable will show poisoned.  This will result in us
      calling kasan_report_error() with the lock_report spinlock held.  To
      prevent this, disable KASAN reporting while we are printing a KASAN
      error report.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fc5aeeaf
    • Aneesh Kumar K.V's avatar
      mm/kasan: don't use kasan shadow pointer in generic functions · f2377d4e
      Aneesh Kumar K.V authored
      We can't use generic functions like print_hex_dump() to access the kasan
      shadow region.  That would require us to set up another kasan shadow
      region for the address passed (the kasan shadow address itself).  Some
      architectures won't be able to do that.  Hence make a copy of the shadow
      region row and pass that to the generic functions.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f2377d4e
    • Aneesh Kumar K.V's avatar
      mm/kasan: rename kasan_enabled() to kasan_report_enabled() · 0ba8663c
      Aneesh Kumar K.V authored
      The function only disables/enables reporting.  A later patch will add an
      early KASAN enable/disable.  Rename kasan_enabled() to properly reflect
      its function.
      Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Reviewed-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0ba8663c
    • Tetsuo Handa's avatar
      mm: remove refresh_cpu_vm_stats() definition for !SMP kernel · 5ba97bf9
      Tetsuo Handa authored
      refresh_cpu_vm_stats(int cpu) has not been referenced by the !SMP kernel
      since Linux 3.12.
      Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5ba97bf9
    • Hugh Dickins's avatar
      Documentation/filesystems/proc.txt: a little tidying · a5be3563
      Hugh Dickins authored
      There's an odd line about "Locked" at the head of the description of
      /proc/meminfo: it seems to have strayed from /proc/PID/smaps, so lead it
      back there.  Move "Swap" and "SwapPss" descriptions down above it, to
      match the order in the file (though "PageSize"s still undescribed).
      
      The example of "Locked: 374 kB" (the same as Pss, neither Rss nor Size) is
      so unlikely as to be misleading: just make it 0, this is /bin/bash text;
      which would be "dw" (disabled write) not "de" (do not expand).
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a5be3563
    • Hugh Dickins's avatar
      tmpfs: avoid a little creat and stat slowdown · d0424c42
      Hugh Dickins authored
      LKP reports that v4.2 commit afa2db2f ("tmpfs: truncate prealloc
      blocks past i_size") causes a 14.5% slowdown in the AIM9 creat-clo
      benchmark.
      
      creat-clo does just what you'd expect from the name, and creat's O_TRUNC
      on a 0-length file does indeed incur more overhead now that shmem_setattr()
      tests "0 <= 0" instead of "0 < 0".
      
      I'm not sure how much we care, but I think it would not be too VW-like to
      add in a check for whether any pages (or swap) are allocated: if none are
      allocated, there's none to remove from the radix_tree.  At first I thought
      that check would be good enough for the unmaps too, but no: we should not
      skip the unlikely case of unmapping pages beyond the new EOF, which were
      COWed from holes which have now been reclaimed, leaving none.
      
      This gives me an 8.5% speedup: on Haswell instead of LKP's Westmere, and
      running a debug config before and after: I hope those account for the
      lesser speedup.
      
      And probably someone has a benchmark where a thousand threads keep on
      stat'ing the same file repeatedly: forestall that report by adjusting v4.3
      commit 44a30220 ("shmem: recalculate file inode when fstat") not to
      take the spinlock in shmem_getattr() when there's no work to do.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Reported-by: Ying Huang <ying.huang@linux.intel.com>
      Tested-by: Ying Huang <ying.huang@linux.intel.com>
      Cc: Josef Bacik <jbacik@fb.com>
      Cc: Yu Zhao <yuzhao@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d0424c42
    • David Rientjes's avatar
      mm, oom: add comment for why oom_adj exists · b72bdfa7
      David Rientjes authored
      /proc/pid/oom_adj exists solely to avoid breaking existing userspace
      binaries that write to the tunable.
      
      Add a comment in the only possible location within the kernel tree to
      describe the situation and motivation for keeping it around.
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b72bdfa7
    • Michal Hocko's avatar
      memcg: fix thresholds for 32b architectures. · c12176d3
      Michal Hocko authored
      Commit 424cdc14 ("memcg: convert threshold to bytes") has fixed a
      regression introduced by 3e32cb2e ("mm: memcontrol: lockless page
      counters") where thresholds were silently converted to use page units
      rather than bytes when interpreting the user input.
      
      The fix is not complete, though, as properly pointed out by Ben Hutchings
      during stable backport review.  The page count is converted to bytes but
      unsigned long is used to hold the value which would be obviously not
      sufficient for 32b systems with more than 4G thresholds.  The same applies
      to usage as taken from mem_cgroup_usage which might overflow.
      
      Let's remove this bytes vs. pages internal tracking difference and
      handle thresholds in page units internally.  Change mem_cgroup_usage() to
      return the value in page units and revert 424cdc14 because this should
      be sufficient for the consistent handling.  mem_cgroup_read_u64(), as the
      only user of mem_cgroup_usage() outside of the threshold handling code,
      is converted to give the proper result in bytes.  It is doing that already
      for page_counter output so this is more consistent as well.
      
      The value presented to the userspace is still in bytes units.
      
      Fixes: 424cdc14 ("memcg: convert threshold to bytes")
      Fixes: 3e32cb2e ("mm: memcontrol: lockless page counters")
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Reported-by: Ben Hutchings <ben@decadent.org.uk>
      Reviewed-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: <stable@vger.kernel.org>
      From: Michal Hocko <mhocko@kernel.org>
      Subject: memcg-fix-thresholds-for-32b-architectures-fix
      
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      From: Andrew Morton <akpm@linux-foundation.org>
      Subject: memcg-fix-thresholds-for-32b-architectures-fix-fix
      
      don't attempt to inline mem_cgroup_usage()
      
      The compiler ignores the inline anyway.  And __always_inlining it adds 600
      bytes of goop to the .o file.
      
      Cc: Ben Hutchings <ben@decadent.org.uk>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c12176d3
    • Johannes Weiner's avatar
      mm: page_counter: let page_counter_try_charge() return bool · 6071ca52
      Johannes Weiner authored
      page_counter_try_charge() currently returns 0 on success and -ENOMEM on
      failure, which is surprising behavior given the function name.
      
      Make it follow the expected pattern of try_stuff() functions that return a
      boolean true to indicate success, or false for failure.
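      Callers then read the way the name suggests; a before/after sketch with an illustrative caller (memcg's counter is only used as an example here):

              struct page_counter *fail;

              /* before: 0 on success, -ENOMEM on failure */
              if (page_counter_try_charge(&memcg->memory, nr_pages, &fail))
                      goto charge_failed;

              /* after: true on success, false on failure */
              if (!page_counter_try_charge(&memcg->memory, nr_pages, &fail))
                      goto charge_failed;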
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6071ca52
    • Johannes Weiner's avatar
      mm: memcontrol: eliminate root memory.current · f5fc3c5d
      Johannes Weiner authored
      memory.current on the root level doesn't add anything that wouldn't be
      more accurate and detailed using system statistics.  It already doesn't
      include slabs, and it'll be a pain to keep in sync when further memory
      types are accounted in the memory controller.  Remove it.
      
      Note that this applies to the new unified hierarchy interface only.
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f5fc3c5d
    • Dave Hansen's avatar
      mm, hugetlbfs: optimize when NUMA=n · e0ec90ee
      Dave Hansen authored
      My recent patch "mm, hugetlb: use memory policy when available" added some
      bloat to hugetlb.o.  This patch aims to get some of the bloat back,
      especially when NUMA is not in play.
      
      It does this with an implicit #ifdef and marking some things static that
      should have been static in my first patch.  It also makes the warnings
      only VM_WARN_ON()s.  They were responsible for a pretty big chunk of the
      bloat.
      
      Doing this gets our NUMA=n text size back to a wee bit _below_ where we
      started before the original patch.
      
      It also shaves a bit of space off the NUMA=y case, but not much.
      Enforcing the mempolicy definitely takes some text and it's hard to avoid.
      
      size(1) output:
      
         text	   data	    bss	    dec	    hex	filename
        30745	   3433	   2492	  36670	   8f3e	hugetlb.o.nonuma.baseline
        31305	   3755	   2492	  37552	   92b0	hugetlb.o.nonuma.patch1
        30713	   3433	   2492	  36638	   8f1e	hugetlb.o.nonuma.patch2 (this patch)
        25235	    473	  41276	  66984	  105a8	hugetlb.o.numa.baseline
        25715	    475	  41276	  67466	  1078a	hugetlb.o.numa.patch1
        25491	    473	  41276	  67240	  106a8	hugetlb.o.numa.patch2 (this patch)
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e0ec90ee
    • Dave Hansen's avatar
      mm, hugetlb: use memory policy when available · 099730d6
      Dave Hansen authored
      I have a hugetlbfs user which is never explicitly allocating huge pages
      with 'nr_hugepages'.  They only set 'nr_overcommit_hugepages' and then let
      the pages be allocated from the buddy allocator at fault time.
      
      This works, but they noticed that mbind() was not doing them any good and
      the pages were being allocated without respect for the policy they
      specified.
      
      The code in question is this:
      
      > struct page *alloc_huge_page(struct vm_area_struct *vma,
      ...
      >         page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
      >         if (!page) {
      >                 page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
      
      dequeue_huge_page_vma() is smart and will respect the VMA's memory policy.
       But, it only grabs _existing_ huge pages from the huge page pool.  If the
      pool is empty, we fall back to alloc_buddy_huge_page() which obviously
      can't do anything with the VMA's policy because it isn't even passed the
      VMA.
      
      Almost everybody preallocates huge pages.  That's probably why nobody has
      ever noticed this.  Looking back at the git history, I don't think this
      _ever_ worked from when alloc_buddy_huge_page() was introduced in
      7893d1d5, 8 years ago.
      
      The fix is to pass vma/addr down in to the places where we actually call
      in to the buddy allocator.  It's fairly straightforward plumbing.  This
      has been lightly tested.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
      Cc: David Rientjes <rientjes@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      099730d6
    • Alexander Kuleshov's avatar
      mm/hugetlb: make node_hstates array static · b4e289a6
      Alexander Kuleshov authored
      There are no users of the node_hstates array outside of mm/hugetlb.c,
      so let's make it static.
      Signed-off-by: Alexander Kuleshov <kuleshovmail@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b4e289a6
    • Rasmus Villemoes's avatar
      mm/maccess.c: actually return -EFAULT from strncpy_from_unsafe · 9dd861d5
      Rasmus Villemoes authored
      As far as I can tell, strncpy_from_unsafe never returns -EFAULT.  ret is
      the result of a __copy_from_user_inatomic(), which is 0 for success and
      positive (in this case necessarily 1) for access error - it is never
      negative.  So we were always returning the length of the, possibly
      truncated, destination string.
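      In other words the tail of the copy loop needs to look roughly like this (a sketch of the intent, not the exact diff):

              /* __copy_from_user_inatomic() returns the number of bytes NOT
               * copied (0 on success), never a negative errno */
              do {
                      ret = __copy_from_user_inatomic(dst++, src++, 1);
              } while (dst[-1] && ret == 0 && src - unsafe_addr < count);

              dst[-1] = '\0';
              return ret ? -EFAULT : src - unsafe_addr;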
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9dd861d5
    • Andrew Morton's avatar
      mm/cma.c: suppress warning · 3acaea68
      Andrew Morton authored
      mm/cma.c: In function 'cma_alloc':
      mm/cma.c:366: warning: 'pfn' may be used uninitialized in this function
      
      The patch actually improves the tracing a bit: if alloc_contig_range()
      fails, tracing will display the offending pfn rather than -1.
      
      Cc: Stefan Strogin <stefan.strogin@gmail.com>
      Cc: Michal Nazarewicz <mpn@google.com>
      Cc: Marek Szyprowski <m.szyprowski@samsung.com>
      Cc: Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
      Cc: Thierry Reding <treding@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3acaea68
    • Hugh Dickins's avatar
      mm: migrate dirty page without clear_page_dirty_for_io etc · 42cb14b1
      Hugh Dickins authored
      clear_page_dirty_for_io() has accumulated writeback and memcg subtleties
      since v2.6.16 first introduced page migration; and the set_page_dirty()
      which completed its migration of PageDirty, later had to be moderated to
      __set_page_dirty_nobuffers(); then PageSwapBacked had to skip that too.
      
      No actual problems seen with this procedure recently, but if you look into
      what the clear_page_dirty_for_io(page)+set_page_dirty(newpage) is actually
      achieving, it turns out to be nothing more than moving the PageDirty flag,
      and its NR_FILE_DIRTY stat from one zone to another.
      
      It would be good to avoid a pile of irrelevant decrementations and
      incrementations, and improper event counting, and unnecessary descent of
      the radix_tree under tree_lock (to set the PAGECACHE_TAG_DIRTY which
      radix_tree_replace_slot() left in place anyway).
      
      Do the NR_FILE_DIRTY movement, like the other stats movements, while
      interrupts are still disabled in migrate_page_move_mapping(); and don't even
      bother if the zone is the same.  Do the PageDirty movement there under
      tree_lock too, where old page is frozen and newpage not yet visible:
      bearing in mind that as soon as newpage becomes visible in radix_tree, an
      un-page-locked set_page_dirty() might interfere (or perhaps that's just
      not possible: anything doing so should already hold an additional
      reference to the old page, preventing its migration; but play safe).
      
      But we do still need to transfer PageDirty in migrate_page_copy(), for
      those who don't go the mapping route through migrate_page_move_mapping().
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      42cb14b1
    • Hugh Dickins's avatar
      mm: page migration avoid touching newpage until no going back · cf4b769a
      Hugh Dickins authored
      We have had trouble in the past from the way in which page migration's
      newpage is initialized in dribs and drabs - see commit 8bdd6380 ("mm:
      fix direct reclaim writeback regression") which proposed a cleanup.
      
      We have no actual problem now, but I think the procedure would be clearer
      (and alternative get_new_page pools safer to implement) if we assert that
      newpage is not touched until we are sure that it's going to be used -
      except for taking the trylock on it in __unmap_and_move().
      
      So shift the early initializations from move_to_new_page() into
      migrate_page_move_mapping(), mapping and NULL-mapping paths.  Similarly
      migrate_huge_page_move_mapping(), but its NULL-mapping path can just be
      deleted: you cannot reach hugetlbfs_migrate_page() with a NULL mapping.
      
      Adjust stages 3 to 8 in the Documentation file accordingly.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cf4b769a
    • Hugh Dickins's avatar
      mm: page migration use migration entry for swapcache too · 470f119f
      Hugh Dickins authored
      Hitherto page migration has avoided using a migration entry for a
      swapcache page mapped into userspace, apparently for historical reasons.
      So any page blessed with swapcache would entail a minor fault when it's
      next touched, which page migration otherwise tries to avoid.  Swapcache in
      an mlocked area is rare, so won't often matter, but still better fixed.
      
      Just rearrange the block in try_to_unmap_one(), to handle TTU_MIGRATION
      before checking PageAnon, that's all (apart from some reindenting).
      
      Well, no, that's not quite all: doesn't this by the way fix a soft_dirty
      bug, that page migration of a file page was forgetting to transfer the
      soft_dirty bit?  Probably not a serious bug: if I understand correctly,
      soft_dirty afficionados usually have to handle file pages separately
      anyway; but we publish the bit in /proc/<pid>/pagemap on file mappings as
      well as anonymous, so page migration ought not to perturb it.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Reviewed-by: Cyrill Gorcunov <gorcunov@openvz.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      470f119f
    • Hugh Dickins's avatar
      mm: simplify page migration's anon_vma comment and flow · 03f15c86
      Hugh Dickins authored
      __unmap_and_move() contains a long stale comment on page_get_anon_vma()
      and PageSwapCache(), with an odd control flow that's hard to follow.
      Mostly this reflects our confusion about the lifetime of an anon_vma, in
      the early days of page migration, before we could take a reference to one.
      Nowadays this seems quite straightforward: cut it all down to essentials.
      
      I cannot see the relevance of swapcache here at all, so don't treat it any
      differently: I believe the old comment reflects in part our anon_vma
      confusions, and in part the original v2.6.16 page migration technique,
      which used actual swap to migrate anon instead of swap-like migration
      entries.  Why should a swapcache page not be migrated with the aid of
      migration entry ptes like everything else?  So lose that comment now, and
      enable migration entries for swapcache in the next patch.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      03f15c86
    • Hugh Dickins's avatar
      mm: page migration remove_migration_ptes at lock+unlock level · 5c3f9a67
      Hugh Dickins authored
      Clean up page migration a little more by calling remove_migration_ptes()
      from the same level, on success or on failure, from __unmap_and_move() or
      from unmap_and_move_huge_page().
      
      Don't reset page->mapping of a PageAnon old page in move_to_new_page(),
      leave that to when the page is freed.  Except for here in page migration,
      it has been an invariant that a PageAnon (bit set in page->mapping) page
      stays PageAnon until it is freed, and I think we're safer to keep to that.
      
      And with the above rearrangement, it's necessary because zap_pte_range()
      wants to identify whether a migration entry represents a file or an anon
      page, to update the appropriate rss stats without waiting on it.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5c3f9a67
    • Hugh Dickins's avatar
      mm: page migration trylock newpage at same level as oldpage · 7db7671f
      Hugh Dickins authored
      Clean up page migration a little by moving the trylock of newpage from
      move_to_new_page() into __unmap_and_move(), where the old page has been
      locked.  Adjust unmap_and_move_huge_page() and balloon_page_migrate()
      accordingly.
      
      But make one kind-of-functional change on the way: whereas trylock of
      newpage used to BUG() if it failed, now simply return -EAGAIN if so.
      Cutting out BUG()s is good, right?  But, to be honest, this is really to
      extend the usefulness of the custom put_new_page feature, allowing a pool
      of new pages to be shared perhaps with racing uses.
      
      Use an "else" instead of that "skip_unmap" label.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Rafael Aquini <aquini@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7db7671f
    • Hugh Dickins's avatar
      mm: page migration use the put_new_page whenever necessary · 2def7424
      Hugh Dickins authored
      I don't know of any problem from the way it's used in our current tree,
      but there is one defect in page migration's custom put_new_page feature.
      
      An unused newpage is expected to be released with the put_new_page(), but
      there was one MIGRATEPAGE_SUCCESS (0) path which released it with
      putback_lru_page(): which can be very wrong for a custom pool.
      
      Fixed more easily by resetting put_new_page once it won't be needed, than
      by adding a further flag to modify the rc test.
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2def7424
    • Hugh Dickins's avatar
      mm: correct a couple of page migration comments · 14e0f9bc
      Hugh Dickins authored
      It's migrate.c not migration,c, and nowadays putback_movable_pages() not
      putback_lru_pages().
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: Rafael Aquini <aquini@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      14e0f9bc