1. 26 Oct, 2018 40 commits
    • mm/memory_hotplug.c: simplify node_states_check_changes_online · 8efe33f4
      Oscar Salvador authored
      While looking at node_states_check_changes_online, I stumbled upon some
      confusing things.
      
      Right after entering the function, we find this:
      
      if (N_MEMORY == N_NORMAL_MEMORY)
              zone_last = ZONE_MOVABLE;
      
      This is wrong: N_MEMORY can never be equal to N_NORMAL_MEMORY.
      My guess is that this was meant to be something like:
      
      if (N_NORMAL_MEMORY == N_HIGH_MEMORY)
      
      to check if we have CONFIG_HIGHMEM.
      
      Later on, in the CONFIG_HIGHMEM block, we have:
      
      if (N_MEMORY == N_HIGH_MEMORY)
              zone_last = ZONE_MOVABLE;
      
      Again, this is wrong and will never evaluate to true.
      
      Besides removing these wrong if statements, I simplified the function a
      bit.
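
      For reference, a simplified sketch of the node state enum (based on
      include/linux/nodemask.h; details may vary by kernel version) shows why:
      N_HIGH_MEMORY only aliases N_NORMAL_MEMORY when CONFIG_HIGHMEM is off,
      while N_MEMORY is always a distinct state, so comparing N_MEMORY against
      either of the other two can never be true:

      /* simplified sketch, not the verbatim header */
      enum node_states {
              N_POSSIBLE,             /* The node could become online at some point */
              N_ONLINE,               /* The node is online */
              N_NORMAL_MEMORY,        /* The node has regular memory */
      #ifdef CONFIG_HIGHMEM
              N_HIGH_MEMORY,          /* The node has regular or high memory */
      #else
              N_HIGH_MEMORY = N_NORMAL_MEMORY,
      #endif
              N_MEMORY,               /* The node has memory (regular, high, movable) */
              N_CPU,                  /* The node has one or more cpus */
              NR_NODE_STATES
      };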
      
      [osalvador@suse.de: address feedback from Pavel]
        Link: http://lkml.kernel.org/r/20180921132634.10103-4-osalvador@techadventures.net
      Link: http://lkml.kernel.org/r/20180919100819.25518-5-osalvador@techadventures.net
      Signed-off-by: Oscar Salvador <osalvador@suse.de>
      Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Mathieu Malaterre <malat@debian.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: <yasu.isimatu@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memory_hotplug.c: tidy up node_states_clear_node() · cf01f6f5
      Oscar Salvador authored
      node_states_clear has the following if statements:
      
      if ((N_MEMORY != N_NORMAL_MEMORY) &&
          (arg->status_change_nid_high >= 0))
              ...
      
      if ((N_MEMORY != N_HIGH_MEMORY) &&
          (arg->status_change_nid >= 0))
              ...
      
      N_MEMORY can never be equal to either N_NORMAL_MEMORY or
      N_HIGH_MEMORY.
      
      A similar problem was found in [1].
      Since these checks are wrong, let us get rid of them.
      
      [1] https://patchwork.kernel.org/patch/10579155/
      
      Link: http://lkml.kernel.org/r/20180919100819.25518-4-osalvador@techadventures.net
      Signed-off-by: Oscar Salvador <osalvador@suse.de>
      Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Mathieu Malaterre <malat@debian.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: <yasu.isimatu@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memory_hotplug.c: spare unnecessary calls to node_set_state · 83d83612
      Oscar Salvador authored
      In node_states_check_changes_online, we check if the node will have to be
      set for any of the N_*_MEMORY states after the pages have been onlined.
      
      Later on, we perform the activation in node_states_set_node.  Currently,
      in node_states_set_node we set the node to N_MEMORY unconditionally.
      
      This means that we call node_set_state for N_MEMORY every time pages go
      online, but we only need to do it if the node has not yet been set for
      N_MEMORY.
      
      Fix this by checking status_change_nid.
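
      A minimal sketch of the resulting conditional activation (simplified; the
      actual mm/memory_hotplug.c helper may differ in details):

      static void node_states_set_node(int node, struct memory_notify *arg)
      {
              if (arg->status_change_nid_normal >= 0)
                      node_set_state(node, N_NORMAL_MEMORY);

              if (arg->status_change_nid_high >= 0)
                      node_set_state(node, N_HIGH_MEMORY);

              /* only touch N_MEMORY when the node was not yet set for it */
              if (arg->status_change_nid >= 0)
                      node_set_state(node, N_MEMORY);
      }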
      
      Link: http://lkml.kernel.org/r/20180919100819.25518-2-osalvador@techadventures.net
      Signed-off-by: Oscar Salvador <osalvador@suse.de>
      Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: <yasu.isimatu@gmail.com>
      Cc: Mathieu Malaterre <malat@debian.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/filemap.c: Use existing variable · 3cb7b121
      haiqing.shq authored
      Use the existing variable write_len instead of calling iov_iter_count(from) again.
      
      Link: http://lkml.kernel.org/r/1537375855-2088-1-git-send-email-leviathan0992@gmail.com
      Signed-off-by: haiqing.shq <leviathan0992@gmail.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: unmap VM_PFNMAP mappings with optimized path · cb492249
      Yang Shi authored
      When unmapping VM_PFNMAP mappings, vm flags need to be updated.  Since the
      vmas have already been detached, it should be safe to update vm flags with
      read mmap_sem.
      
      Link: http://lkml.kernel.org/r/1537376621-51150-4-git-send-email-yang.shi@linux.alibaba.com
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Reviewed-by: Matthew Wilcox <willy@infradead.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: unmap VM_HUGETLB mappings with optimized path · b4cefb36
      Yang Shi authored
      When unmapping VM_HUGETLB mappings, vm flags need to be updated.  Since
      the vmas have already been detached, it should be safe to update vm flags
      with read mmap_sem.
      
      Link: http://lkml.kernel.org/r/1537376621-51150-3-git-send-email-yang.shi@linux.alibaba.com
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Reviewed-by: Matthew Wilcox <willy@infradead.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: mmap: zap pages with read mmap_sem in munmap · dd2283f2
      Yang Shi authored
      Patch series "mm: zap pages with read mmap_sem in munmap for large
      mapping", v11.
      
      Background:
      Recently, when we ran some vm scalability tests on machines with large memory,
      we ran into a couple of mmap_sem scalability issues when unmapping large memory
      space; please refer to https://lkml.org/lkml/2017/12/14/733 and
      https://lkml.org/lkml/2018/2/20/576.
      
      History:
      akpm then suggested unmapping large mappings section by section, dropping
      mmap_sem in between, to mitigate it (see https://lkml.org/lkml/2018/3/6/784).
      
      The v1 patch series was submitted to the mailing list per Andrew's
      suggestion (see https://lkml.org/lkml/2018/3/20/786).  I then received a
      lot of great feedback and suggestions.
      
      This topic was then discussed at the LSFMM summit 2018, where Michal Hocko
      suggested (as he also did in the v1 review) trying a "two phases" approach:
      zap pages with read mmap_sem, then do the cleanup with write mmap_sem (for
      discussion details, see https://lwn.net/Articles/753269/).
      
      Approach:
      Zapping pages is the most time-consuming part.  According to the suggestion
      from Michal Hocko [1], zapping pages can be done while holding read mmap_sem,
      like what MADV_DONTNEED does.  Then re-acquire write mmap_sem to clean up
      vmas.
      
      But, we can't call MADV_DONTNEED directly, since there are two major drawbacks:
        * The unexpected state from PF if it wins the race in the middle of munmap.
          It may return zero page, instead of the content or SIGSEGV.
        * Can't handle VM_LOCKED | VM_HUGETLB | VM_PFNMAP and uprobe mappings, which
          is a showstopper from akpm
      
      But, some part may need write mmap_sem, for example, vma splitting. So,
      the design is as follows:
              acquire write mmap_sem
              lookup vmas (find and split vmas)
              deal with special mappings
              detach vmas
              downgrade_write
      
              zap pages
              free page tables
              release mmap_sem
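
      The design above corresponds roughly to the following sketch (simplified;
      helper names such as detach_vmas_to_be_unmapped(), unmap_region() and
      remove_vma_list() follow mm/mmap.c, but this is not the literal patch):

              if (down_write_killable(&mm->mmap_sem))
                      return -EINTR;

              /* find and split vmas, handle special mappings, unlink vmas */
              detach_vmas_to_be_unmapped(mm, vma, prev, end);

              downgrade_write(&mm->mmap_sem);   /* exclusive -> shared, never dropped */

              unmap_region(mm, vma, prev, start, end); /* zap pages, free page tables */
              remove_vma_list(mm, vma);                /* final cleanup of detached vmas */

              up_read(&mm->mmap_sem);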
      
      The vm events that take mmap_sem for read may come in during page zapping,
      but since the vmas have been detached before, they (page faults, gup, etc.)
      will not be able to find a valid vma and will just return SIGSEGV or
      -EFAULT as expected.
      
      If the vma has VM_HUGETLB | VM_PFNMAP, it is considered a special
      mapping.  Such mappings are handled by falling back to regular do_munmap()
      with exclusive mmap_sem held in this patch since they may update vm flags.
      
      But, with the "detach vmas first" approach, the vmas have already been
      detached by the time vm flags are updated, so it should be safe to update
      vm flags with read mmap_sem for this specific case.  So, VM_HUGETLB and
      VM_PFNMAP will be switched to the optimized path in the following separate
      patches, for bisectability's sake.
      
      Unmapping uprobe areas may need to update mm flags (MMF_RECALC_UPROBES).
      However, it is fine to have a false-positive MMF_RECALC_UPROBES according
      to the uprobes developer.  So, uprobe unmap will not be handled by the
      regular path.
      
      With the "detach vmas first" approach we don't have to re-acquire mmap_sem
      again to clean up vmas to avoid race window which might get the address
      space changed since downgrade_write() doesn't release the lock to lead
      regression, which simply downgrades to read lock.
      
      And, since the lock acquire/release cost is managed to the minimum and
      almost as same as before, the optimization could be extended to any size
      of mapping without incurring significant penalty to small mappings.
      
      For the time being, just do this in munmap syscall path.  Other
      vm_munmap() or do_munmap() call sites (i.e mmap, mremap, etc) remain
      intact due to some implementation difficulties since they acquire write
      mmap_sem from very beginning and hold it until the end, do_munmap() might
      be called in the middle.  But, the optimized do_munmap would like to be
      called without mmap_sem held so that we can do the optimization.  So, if
      we want to do the similar optimization for mmap/mremap path, I'm afraid we
      would have to redesign them.  mremap might be called on very large area
      depending on the usecases, the optimization to it will be considered in
      the future.
      
      This patch (of 3):
      
      When running some mmap/munmap scalability tests with large memory (i.e.
      > 300GB), the below hung task issue may happen occasionally.
      
      INFO: task ps:14018 blocked for more than 120 seconds.
             Tainted: G            E 4.9.79-009.ali3000.alios7.x86_64 #1
       "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
      message.
       ps              D    0 14018      1 0x00000004
        ffff885582f84000 ffff885e8682f000 ffff880972943000 ffff885ebf499bc0
        ffff8828ee120000 ffffc900349bfca8 ffffffff817154d0 0000000000000040
        00ffffff812f872a ffff885ebf499bc0 024000d000948300 ffff880972943000
       Call Trace:
        [<ffffffff817154d0>] ? __schedule+0x250/0x730
        [<ffffffff817159e6>] schedule+0x36/0x80
        [<ffffffff81718560>] rwsem_down_read_failed+0xf0/0x150
        [<ffffffff81390a28>] call_rwsem_down_read_failed+0x18/0x30
        [<ffffffff81717db0>] down_read+0x20/0x40
        [<ffffffff812b9439>] proc_pid_cmdline_read+0xd9/0x4e0
        [<ffffffff81253c95>] ? do_filp_open+0xa5/0x100
        [<ffffffff81241d87>] __vfs_read+0x37/0x150
        [<ffffffff812f824b>] ? security_file_permission+0x9b/0xc0
        [<ffffffff81242266>] vfs_read+0x96/0x130
        [<ffffffff812437b5>] SyS_read+0x55/0xc0
        [<ffffffff8171a6da>] entry_SYSCALL_64_fastpath+0x1a/0xc5
      
      It is because munmap holds mmap_sem exclusively from the very beginning all
      the way to the end, and doesn't release it in the middle.  When unmapping a
      large mapping, it may take a long time (~18 seconds to unmap a 320GB
      mapping with every single page mapped, on an idle machine).
      
      Zapping pages is the most time-consuming part.  According to the suggestion
      from Michal Hocko [1], zapping pages can be done while holding read
      mmap_sem, like what MADV_DONTNEED does.  Then re-acquire write mmap_sem to
      clean up vmas.
      
      But, some part may need write mmap_sem, for example, vma splitting. So,
      the design is as follows:
              acquire write mmap_sem
              lookup vmas (find and split vmas)
              deal with special mappings
              detach vmas
              downgrade_write
      
              zap pages
              free page tables
              release mmap_sem
      
      The vm events that take mmap_sem for read may come in during page zapping,
      but since the vmas have been detached before, they (page faults, gup, etc.)
      will not be able to find a valid vma and will just return SIGSEGV or
      -EFAULT as expected.
      
      If the vma has VM_HUGETLB | VM_PFNMAP, it is considered a special
      mapping.  Such mappings are handled without downgrading mmap_sem in this
      patch since they may update vm flags.
      
      But, with the "detach vmas first" approach, the vmas have been detached
      when vm flags are updated, so it sounds safe to update vm flags with read
      mmap_sem for this specific case.  So, VM_HUGETLB and VM_PFNMAP will be
      handled by using the optimized path in the following separate patches for
      bisectable sake.
      
      Unmapping uprobe areas may need update mm flags (MMF_RECALC_UPROBES).
      However it is fine to have false-positive MMF_RECALC_UPROBES according to
      uprobes developer.
      
      With the "detach vmas first" approach we don't have to re-acquire mmap_sem
      again to clean up vmas to avoid race window which might get the address
      space changed since downgrade_write() doesn't release the lock to lead
      regression, which simply downgrades to read lock.
      
      And, since the lock acquire/release cost is managed to the minimum and
      almost as same as before, the optimization could be extended to any size
      of mapping without incurring significant penalty to small mappings.
      
      For the time being, just do this in the munmap syscall path.  Other
      vm_munmap() or do_munmap() call sites (i.e. mmap, mremap, etc) remain
      intact due to some implementation difficulties: they acquire write
      mmap_sem from the very beginning and hold it until the end, and do_munmap()
      might be called in the middle.  But the optimized do_munmap() would need to
      be called without mmap_sem held so that the optimization can be applied.
      So, if we want to do a similar optimization for the mmap/mremap path, I'm
      afraid we would have to redesign them.  mremap might be called on a very
      large area depending on the use cases; optimizing it will be considered in
      the future.
      
      With the patches, the exclusive mmap_sem hold time when munmapping an 80GB
      address space on a machine with 32 cores of E5-2680 @ 2.70GHz dropped from
      seconds to the microsecond level.
      
      munmap_test-15002 [008]   594.380138: funcgraph_entry: |  __vm_munmap() {
      munmap_test-15002 [008]   594.380146: funcgraph_entry: !2485684 us |    unmap_region();
      munmap_test-15002 [008]   596.865836: funcgraph_exit:  !2485692 us |  }
      
      Here the execution time of unmap_region() is used to evaluate the time spent
      holding read mmap_sem; the remaining time is spent holding the exclusive
      lock.
      
      [1] https://lwn.net/Articles/753269/
      
      Link: http://lkml.kernel.org/r/1537376621-51150-2-git-send-email-yang.shi@linux.alibaba.com
      Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
      Suggested-by: Michal Hocko <mhocko@kernel.org>
      Suggested-by: Kirill A. Shutemov <kirill@shutemov.name>
      Suggested-by: Matthew Wilcox <willy@infradead.org>
      Reviewed-by: Matthew Wilcox <willy@infradead.org>
      Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Laurent Dufour <ldufour@linux.vnet.ibm.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vfree: add debug might_sleep() · a8dda165
      Andrey Ryabinin authored
      Add might_sleep() call to vfree() to catch potential sleep-in-atomic bugs
      earlier.
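
      A minimal sketch of the idea (assuming the check sits near the top of
      vfree(); the real mm/vmalloc.c code also defers the free when called from
      interrupt context):

      void vfree(const void *addr)
      {
              /* warn (under CONFIG_DEBUG_ATOMIC_SLEEP) if a potentially
               * sleeping vfree() is called from atomic context */
              might_sleep_if(!in_interrupt());

              if (!addr)
                      return;
              __vunmap(addr, 1);      /* simplified */
      }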
      
      [aryabinin@virtuozzo.com: drop might_sleep_if() from kvfree()]
        Link: http://lkml.kernel.org/r/7e19e4df-b1a6-29bd-9ae7-0266d50bef1d@virtuozzo.com
      Link: http://lkml.kernel.org/r/20180914130512.10394-3-aryabinin@virtuozzo.com
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/vmalloc.c: improve vfree() kerneldoc · 3ca4ea3a
      Andrey Ryabinin authored
      vfree() might sleep if not called from interrupt context.  Explain that in
      the comment.
      
      Link: http://lkml.kernel.org/r/20180914130512.10394-2-aryabinin@virtuozzo.com
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kvfree(): fix misleading comment · 52414d33
      Andrey Ryabinin authored
      vfree() might sleep if not called from interrupt context, and so does
      kvfree().  Fix kvfree()'s misleading comment about the allowed context.
      
      Link: http://lkml.kernel.org/r/20180914130512.10394-1-aryabinin@virtuozzo.com
      Fixes: 04b8e946 ("mm/util.c: improve kvfree() kerneldoc")
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/mempolicy.c: use match_string() helper to simplify the code · dedf2c73
      zhong jiang authored
      match_string() returns the index of the array entry that matches the given
      string; use it instead of the open-coded implementation.
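
      As an illustration, the open-coded lookup and its match_string()
      replacement look roughly like this (sketch based on the policy_modes[]
      table in mm/mempolicy.c; details may differ from the actual patch):

      /* before: open-coded linear search */
      for (mode = 0; mode < MPOL_MAX; mode++)
              if (!strcmp(str, policy_modes[mode]))
                      break;
      if (mode >= MPOL_MAX)
              goto out;

      /* after: match_string() returns the index, or -EINVAL if not found */
      mode = match_string(policy_modes, MPOL_MAX, str);
      if (mode < 0)
              goto out;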
      
      Link: http://lkml.kernel.org/r/1536988365-50310-1-git-send-email-zhongjiang@huawei.com
      Signed-off-by: zhong jiang <zhongjiang@huawei.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, page_alloc: drop should_suppress_show_mem · 2c029a1e
      Michal Hocko authored
      should_suppress_show_mem() was introduced to reduce the overhead of
      show_mem on large NUMA systems.  Things have changed since then though.
      Namely c78e9363 ("mm: do not walk all of system memory during
      show_mem") has reduced the overhead considerably.
      
      Moreover warn_alloc_show_mem clears SHOW_MEM_FILTER_NODES when called from
      the IRQ context already so we are not printing per node stats.
      
      Remove should_suppress_show_mem because we are losing potentially
      interesting information about allocation failures.  We have seen a bug
      report where the system gets unresponsive under memory pressure and the
      only output is
      
      kernel: [2032243.696888] qlge 0000:8b:00.1 ql1: Could not get a page chunk, i=8, clean_idx =200 .
      kernel: [2032243.710725] swapper/7: page allocation failure: order:1, mode:0x1084120(GFP_ATOMIC|__GFP_COLD|__GFP_COMP)
      
      without any additional information for debugging.  It would be great to see
      the state of the page allocator at that moment.
      
      Link: http://lkml.kernel.org/r/20180907114334.7088-1-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memcontrol.c: fix memory.stat item ordering · e9b257ed
      Johannes Weiner authored
      The refault stats go better with the page fault stats, and are of
      higher interest than the stats on LRU operations. In fact they used to
      be grouped together; when the LRU operation stats were added later on,
      they were wedged in between.
      
      Move them back together. Documentation/admin-guide/cgroup-v2.rst
      already lists them in the right order.
      
      Link: http://lkml.kernel.org/r/20181010140239.GA2527@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: zero-seek shrinkers · 4b85afbd
      Johannes Weiner authored
      The page cache and most shrinkable slab caches hold data that has been
      read from disk, but there are some caches that only cache CPU work, such
      as the dentry and inode caches of procfs and sysfs, as well as the subset
      of radix tree nodes that track non-resident page cache.
      
      Currently, all these are shrunk at the same rate: using DEFAULT_SEEKS for
      the shrinker's seeks setting tells the reclaim algorithm that for every
      two page cache pages scanned it should scan one slab object.
      
      This is a bogus setting.  A virtual inode that required no IO to create is
      not twice as valuable as a page cache page; shadow cache entries with
      eviction distances beyond the size of memory aren't either.
      
      In most cases, the behavior in practice is still fine.  Such virtual
      caches don't tend to grow and assert themselves aggressively, and usually
      get picked up before they cause problems.  But there are scenarios where
      that's not true.
      
      Our database workloads suffer from two of those.  For one, their file
      workingset is several times bigger than available memory, which has the
      kernel aggressively create shadow page cache entries for the non-resident
      parts of it.  The workingset code does tell the VM that most of these are
      expendable, but the VM ends up balancing them 2:1 to cache pages as per
      the seeks setting.  This is a huge waste of memory.
      
      These workloads also deal with tens of thousands of open files and use
      /proc for introspection, which ends up growing the proc_inode_cache to
      absurdly large sizes - again at the cost of valuable cache space, which
      isn't a reasonable trade-off, given that proc inodes can be re-created
      without involving the disk.
      
      This patch implements a "zero-seek" setting for shrinkers that results in
      a target ratio of 0:1 between their objects and IO-backed caches.  This
      allows such virtual caches to grow when memory is available (they do
      cache/avoid CPU work after all), but effectively disables them as soon as
      IO-backed objects are under pressure.
      
      It then switches the shrinkers for procfs and sysfs metadata, as well as
      excess page cache shadow nodes, to the new zero-seek setting.
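
      A hedged sketch of what a zero-seek shrinker registration looks like
      (field values are illustrative; the actual workingset shrinker definition
      may differ):

      static struct shrinker workingset_shadow_shrinker = {
              .count_objects = count_shadow_nodes,
              .scan_objects  = scan_shadow_nodes,
              .seeks         = 0,     /* zero-seek: give way to IO-backed caches */
              .flags         = SHRINKER_NUMA_AWARE | SHRINKER_MEMCG_AWARE,
      };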
      
      Link: http://lkml.kernel.org/r/20181009184732.762-5-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reported-by: Domas Mituzas <dmituzas@fb.com>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Rik van Riel <riel@surriel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: workingset: add vmstat counter for shadow nodes · 68d48e6a
      Johannes Weiner authored
      Make it easier to catch bugs in the shadow node shrinker by adding a
      counter for the shadow nodes in circulation.
      
      [akpm@linux-foundation.org: assert that irqs are disabled, for __inc_lruvec_page_state()]
      [akpm@linux-foundation.org: s/WARN_ON_ONCE/VM_WARN_ON_ONCE/, per Johannes]
      Link: http://lkml.kernel.org/r/20181009184732.762-4-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: workingset: use cheaper __inc_lruvec_state in irqsafe node reclaim · 505802a5
      Johannes Weiner authored
      No need to use the preemption-safe lruvec state function inside the
      reclaim region that has irqs disabled.
      
      Link: http://lkml.kernel.org/r/20181009184732.762-3-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Reviewed-by: Rik van Riel <riel@surriel.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • psi: cgroup support · 2ce7135a
      Johannes Weiner authored
      On a system that executes multiple cgrouped jobs and independent
      workloads, we don't just care about the health of the overall system, but
      also that of individual jobs, so that we can ensure individual job health,
      fairness between jobs, or prioritize some jobs over others.
      
      This patch implements pressure stall tracking for cgroups.  In kernels
      with CONFIG_PSI=y, cgroup2 groups will have cpu.pressure, memory.pressure,
      and io.pressure files that track aggregate pressure stall times for only
      the tasks inside the cgroup.
      
      Link: http://lkml.kernel.org/r/20180828172258.3185-10-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Tested-by: Daniel Drake <drake@endlessm.com>
      Tested-by: Suren Baghdasaryan <surenb@google.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Johannes Weiner <jweiner@fb.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Enderborg <peter.enderborg@sony.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • psi: pressure stall information for CPU, memory, and IO · eb414681
      Johannes Weiner authored
      When systems are overcommitted and resources become contended, it's hard
      to tell exactly the impact this has on workload productivity, or how close
      the system is to lockups and OOM kills.  In particular, when machines work
      multiple jobs concurrently, the impact of overcommit in terms of latency
      and throughput on the individual job can be enormous.
      
      In order to maximize hardware utilization without sacrificing individual
      job health or risk complete machine lockups, this patch implements a way
      to quantify resource pressure in the system.
      
      A kernel built with CONFIG_PSI=y creates files in /proc/pressure/ that
      expose the percentage of time the system is stalled on CPU, memory, or IO,
      respectively.  Stall states are aggregate versions of the per-task delay
      accounting delays:
      
             cpu: some tasks are runnable but not executing on a CPU
             memory: tasks are reclaiming, or waiting for swapin or thrashing cache
             io: tasks are waiting for io completions
      
      These percentages of walltime can be thought of as pressure percentages,
      and they give a general sense of system health and productivity loss
      incurred by resource overcommit.  They can also indicate when the system
      is approaching lockup scenarios and OOMs.
      
      To do this, psi keeps track of the task states associated with each CPU
      and samples the time they spend in stall states.  Every 2 seconds, the
      samples are averaged across CPUs - weighted by the CPUs' non-idle time to
      eliminate artifacts from unused CPUs - and translated into percentages of
      walltime.  A running average of those percentages is maintained over 10s,
      1m, and 5m periods (similar to the loadaverage).
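
      As a usage illustration only (userspace sketch, not part of the patch; it
      assumes the "some avg10=... avg60=... avg300=... total=..." line format
      shown in the series cover letter):

      #include <stdio.h>

      int main(void)
      {
              float avg10, avg60, avg300;
              unsigned long long total;
              FILE *f = fopen("/proc/pressure/memory", "r");

              if (!f)
                      return 1;
              if (fscanf(f, "some avg10=%f avg60=%f avg300=%f total=%llu",
                         &avg10, &avg60, &avg300, &total) == 4)
                      printf("memory some avg10: %.2f%%\n", avg10);
              fclose(f);
              return 0;
      }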
      
      [hannes@cmpxchg.org: doc fixlet, per Randy]
        Link: http://lkml.kernel.org/r/20180828205625.GA14030@cmpxchg.org
      [hannes@cmpxchg.org: code optimization]
        Link: http://lkml.kernel.org/r/20180907175015.GA8479@cmpxchg.org
      [hannes@cmpxchg.org: rename psi_clock() to psi_update_work(), per Peter]
        Link: http://lkml.kernel.org/r/20180907145404.GB11088@cmpxchg.org
      [hannes@cmpxchg.org: fix build]
        Link: http://lkml.kernel.org/r/20180913014222.GA2370@cmpxchg.org
      Link: http://lkml.kernel.org/r/20180828172258.3185-9-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Tested-by: Daniel Drake <drake@endlessm.com>
      Tested-by: Suren Baghdasaryan <surenb@google.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Johannes Weiner <jweiner@fb.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Enderborg <peter.enderborg@sony.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • sched: introduce this_rq_lock_irq() · 246b3b33
      Johannes Weiner authored
      do_sched_yield() disables IRQs, looks up this_rq() and locks it.  The next
      patch is adding another site with the same pattern, so provide a
      convenience function for it.
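
      The helper is essentially the factored-out pattern below (sketch; the
      actual definition lives in kernel/sched/sched.h):

      static inline struct rq *this_rq_lock_irq(struct rq_flags *rf)
              __acquires(rq->lock)
      {
              struct rq *rq;

              local_irq_disable();
              rq = this_rq();
              rq_lock(rq, rf);

              return rq;
      }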
      
      Link: http://lkml.kernel.org/r/20180828172258.3185-8-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Tested-by: Suren Baghdasaryan <surenb@google.com>
      Tested-by: Daniel Drake <drake@endlessm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Johannes Weiner <jweiner@fb.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Enderborg <peter.enderborg@sony.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • sched: sched.h: make rq locking and clock functions available in stats.h · 1f351d7f
      Johannes Weiner authored
      kernel/sched/sched.h includes "stats.h" half-way through the file.  The
      next patch introduces users of sched.h's rq locking functions and
      update_rq_clock() in kernel/sched/stats.h.  Move those definitions up in
      the file so they are available in stats.h.
      
      Link: http://lkml.kernel.org/r/20180828172258.3185-7-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Tested-by: Suren Baghdasaryan <surenb@google.com>
      Tested-by: Daniel Drake <drake@endlessm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Johannes Weiner <jweiner@fb.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Enderborg <peter.enderborg@sony.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • sched: loadavg: make calc_load_n() public · 5c54f5b9
      Johannes Weiner authored
      It's going to be used in a later patch. Keep the churn separate.
      
      Link: http://lkml.kernel.org/r/20180828172258.3185-6-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Tested-by: Suren Baghdasaryan <surenb@google.com>
      Tested-by: Daniel Drake <drake@endlessm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Johannes Weiner <jweiner@fb.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Enderborg <peter.enderborg@sony.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • sched: loadavg: consolidate LOAD_INT, LOAD_FRAC, CALC_LOAD · 8508cf3f
      Johannes Weiner authored
      There are several definitions of those functions/macros in places that
      mess with fixed-point load averages.  Provide an official version.
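
      For reference, the consolidated fixed-point helpers are roughly as follows
      (sketch; the official versions live in include/linux/sched/loadavg.h):

      #define FSHIFT          11              /* nr of bits of precision */
      #define FIXED_1         (1 << FSHIFT)   /* 1.0 as fixed point */
      #define LOAD_INT(x)     ((x) >> FSHIFT)
      #define LOAD_FRAC(x)    LOAD_INT(((x) & (FIXED_1 - 1)) * 100)

      /* CALC_LOAD call sites use the existing calc_load() helper, roughly: */
      static inline unsigned long
      calc_load(unsigned long load, unsigned long exp, unsigned long active)
      {
              unsigned long newload;

              newload = load * exp + active * (FIXED_1 - exp);
              if (active >= load)
                      newload += FIXED_1 - 1;

              return newload / FIXED_1;
      }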
      
      [akpm@linux-foundation.org: fix missed conversion in block/blk-iolatency.c]
      Link: http://lkml.kernel.org/r/20180828172258.3185-5-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Tested-by: Suren Baghdasaryan <surenb@google.com>
      Tested-by: Daniel Drake <drake@endlessm.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Johannes Weiner <jweiner@fb.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Enderborg <peter.enderborg@sony.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • delayacct: track delays from thrashing cache pages · b1d29ba8
      Johannes Weiner authored
      Delay accounting already measures the time a task spends in direct reclaim
      and waiting for swapin, but in low memory situations tasks can spend a
      significant amount of their time waiting on thrashing page cache.  This
      isn't tracked right now.
      
      To know the full impact of memory contention on an individual task,
      measure the delay when waiting for a recently evicted active cache page to
      read back into memory.
      
      Also update tools/accounting/getdelays.c:
      
           [hannes@computer accounting]$ sudo ./getdelays -d -p 1
           print delayacct stats ON
           PID     1
      
           CPU             count     real total  virtual total    delay total  delay average
                           50318      745000000      847346785      400533713          0.008ms
           IO              count    delay total  delay average
                             435      122601218              0ms
           SWAP            count    delay total  delay average
                               0              0              0ms
           RECLAIM         count    delay total  delay average
                               0              0              0ms
           THRASHING       count    delay total  delay average
                              19       12621439              0ms
      
      Link: http://lkml.kernel.org/r/20180828172258.3185-4-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Tested-by: Daniel Drake <drake@endlessm.com>
      Tested-by: Suren Baghdasaryan <surenb@google.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Johannes Weiner <jweiner@fb.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Enderborg <peter.enderborg@sony.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: workingset: tell cache transitions from workingset thrashing · 1899ad18
      Johannes Weiner authored
      Refaults happen during transitions between workingsets as well as in-place
      thrashing.  Knowing the difference between the two has a range of
      applications, including measuring the impact of memory shortage on the
      system performance, as well as the ability to smarter balance pressure
      between the filesystem cache and the swap-backed workingset.
      
      During workingset transitions, inactive cache refaults and pushes out
      established active cache.  When that active cache isn't stale, however,
      and also ends up refaulting, that's bonafide thrashing.
      
      Introduce a new page flag that tells on eviction whether the page has been
      active or not in its lifetime.  This bit is then stored in the shadow
      entry, to classify refaults as transitioning or thrashing.
      
      How many page->flags does this leave us with on 32-bit?
      
      	20 bits are always page flags
      
      	21 if you have an MMU
      
      	23 with the zone bits for DMA, Normal, HighMem, Movable
      
      	29 with the sparsemem section bits
      
      	30 if PAE is enabled
      
      	31 with this patch.
      
      So on 32-bit PAE, that leaves 1 bit for distinguishing two NUMA nodes.  If
      that's not enough, the system can switch to discontigmem and re-gain the 6
      or 7 sparsemem section bits.
      
      Link: http://lkml.kernel.org/r/20180828172258.3185-3-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Tested-by: Daniel Drake <drake@endlessm.com>
      Tested-by: Suren Baghdasaryan <surenb@google.com>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Johannes Weiner <jweiner@fb.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Enderborg <peter.enderborg@sony.com>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: workingset: don't drop refault information prematurely · 95f9ab2d
      Johannes Weiner authored
      Patch series "psi: pressure stall information for CPU, memory, and IO", v4.
      
      		Overview
      
      PSI reports the overall wallclock time in which the tasks in a system (or
      cgroup) wait for (contended) hardware resources.
      
      This helps users understand the resource pressure their workloads are
      under, which allows them to rootcause and fix throughput and latency
      problems caused by overcommitting, underprovisioning, suboptimal job
      placement in a grid; as well as anticipate major disruptions like OOM.
      
      		Real-world applications
      
      We're using the data collected by PSI (and its previous incarnation,
      memdelay) quite extensively at Facebook, and with several success stories.
      
      One usecase is avoiding OOM hangs/livelocks.  The reason these happen is
      because the OOM killer is triggered by reclaim not being able to free
      pages, but with fast flash devices there is *always* some clean and
      uptodate cache to reclaim; the OOM killer never kicks in, even as tasks
      spend 90% of the time thrashing the cache pages of their own executables.
      There is no situation where this ever makes sense in practice.  We wrote a
      <100 line POC python script to monitor memory pressure and kill stuff way
      before such pathological thrashing leads to full system losses that would
      require forcible hard resets.
      
      We've since extended and deployed this code into other places to guarantee
      latency and throughput SLAs, since they're usually violated way before the
      kernel OOM killer would ever kick in.
      
      It is available here: https://github.com/facebookincubator/oomd
      
      Eventually we probably want to trigger the in-kernel OOM killer based on
      extreme sustained pressure as well, so that Linux can avoid memory
      livelocks - which technically aren't deadlocks, but to the user
      indistinguishable from them - out of the box.  We'd continue using OOMD as
      the first line of defense to ensure workload health and implement complex
      kill policies that are beyond the scope of the kernel.
      
      We also use PSI memory pressure for loadshedding.  Our batch job
      infrastructure used to use heuristics based on various VM stats to
      anticipate OOM situations, with lackluster success.  We switched it to PSI
      and managed to anticipate and avoid OOM kills and lockups fairly reliably.
      The reduction of OOM outages in the worker pool raised the pool's
      aggregate productivity, and we were able to switch that service to smaller
      machines.
      
      Lastly, we use cgroups to isolate a machine's main workload from
      maintenance crap like package upgrades, logging, configuration, as well as
      to prevent multiple workloads on a machine from stepping on each others'
      toes.  We were not able to configure this properly without the pressure
      metrics; we would see latency or bandwidth drops, but it would often be
      hard to impossible to rootcause it post-mortem.
      
      We now log and graph pressure for the containers in our fleet and can
      trivially link latency spikes and throughput drops to shortages of
      specific resources after the fact, and fix the job config/scheduling.
      
      PSI has also received testing, feedback, and feature requests from Android
      and EndlessOS for the purpose of low-latency OOM killing, to intervene in
      pressure situations before the UI starts hanging.
      
      		How do you use this feature?
      
      A kernel with CONFIG_PSI=y will create a /proc/pressure directory with 3
      files: cpu, memory, and io.  If using cgroup2, cgroups will also have
      cpu.pressure, memory.pressure and io.pressure files, which simply
      aggregate task stalls at the cgroup level instead of system-wide.
      
      The cpu file contains one line:
      
      	some avg10=2.04 avg60=0.75 avg300=0.40 total=157656722
      
      The averages give the percentage of walltime in which one or more tasks
      are delayed on the runqueue while another task has the CPU.  They're
      recent averages over 10s, 1m, 5m windows, so you can tell short term
      trends from long term ones, similarly to the load average.
      
      The total= value gives the absolute stall time in microseconds.  This
      allows detecting latency spikes that might be too short to sway the
      running averages.  It also allows custom time averaging in case the
      10s/1m/5m windows aren't adequate for the usecase (or are too coarse with
      future hardware).
      
      What to make of this "some" metric?  If CPU utilization is at 100% and CPU
      pressure is 0, it means the system is perfectly utilized, with one
      runnable thread per CPU and nobody waiting.  At two or more runnable tasks
      per CPU, the system is 100% overcommitted and the pressure average will
      indicate as much.  From a utilization perspective this is a great state of
      course: no CPU cycles are being wasted, even when 50% of the threads were
      to go idle (as most workloads do vary).  From the perspective of the
      individual job it's not great, however, and they would do better with more
      resources.  Depending on what your priority and options are, raised "some"
      numbers may or may not require action.
      
      The memory file contains two lines:
      
      some avg10=70.24 avg60=68.52 avg300=69.91 total=3559632828
      full avg10=57.59 avg60=58.06 avg300=60.38 total=3300487258
      
      The some line is the same as for cpu, the time in which at least one task
      is stalled on the resource.  In the case of memory, this includes waiting
      on swap-in, page cache refaults and page reclaim.
      
      The full line, however, indicates time in which *nobody* is using the CPU
      productively due to pressure: all non-idle tasks are waiting for memory in
      one form or another.  Significant time spent in there is a good trigger
      for killing things, moving jobs to other machines, or dropping incoming
      requests, since neither the jobs nor the machine overall are making too
      much headway.
      
      The io file is similar to memory.  Because the block layer doesn't have a
      concept of hardware contention right now (how much longer is my IO request
      taking due to other tasks?), it reports CPU potential lost on all IO
      delays, not just the potential lost due to competition.
      
      		FAQ
      
      Q: How is PSI's CPU component different from the load average?
      
      A: There are several quirks in the load average that make it hard to
         impossible to tell how overcommitted the CPU really is.
      
         1. The load average is reported as a raw number of active tasks.
            You need to know how many CPUs there are in the system, how many
            CPUs the workload is allowed to use, then think about what the
            proportion between load and the number of CPUs mean for the
            tasks trying to run.
      
            PSI reports the percentage of wallclock time in which tasks are
            waiting for a CPU to run on. It doesn't matter how many CPUs are
            present or usable. The number always tells the quality of life
            of tasks in the system or in a particular cgroup.
      
         2. The shortest averaging window is 1m, which is extremely coarse,
            and it's sampled in 5s intervals. A *lot* can happen on a CPU in
            5 seconds. This *may* be able to identify persistent long-term
            trends and very clear and obvious overloads, but it's unusable
            for latency spikes and more subtle overutilization.
      
            PSI's shortest window is 10s. It also exports the cumulative
            stall times (in microseconds) of synchronously recorded events.
      
         3. On Linux, the load average for historical reasons includes all
            TASK_UNINTERRUPTIBLE tasks. This gives a broader sense of how
            busy the system is, but on the flipside it doesn't distinguish
            whether tasks are likely to contend over the CPU or IO - which
            obviously requires very different interventions from a sys admin
            or a job scheduler.
      
            PSI reports independent metrics for CPU and IO. You can tell
            which resource is making the tasks wait, but in conjunction
            still see how overloaded the system is overall.
      
      Q: What's the cost / performance impact of this feature?
      
      A: PSI's primary cost is in the scheduler, in particular task wakeups
         and sleeps.
      
         I benchmarked this code using Facebook's two most scheduling
         sensitive workloads: memcache and webserver. They handle a ton of
         small requests - lots of wakeups and sleeps with little actual work
         in between - so they tend to be canaries for scheduler regressions.
      
         In the tests, the boxes were handling live traffic over the course
         of several hours. Half the machines, the control, ran with
         CONFIG_PSI=n.
      
         For memcache I used eight machines total. They're 2-socket, 14
         core, 56 thread boxes. The test runs for half the test period,
         flips the test and control kernels on the hardware to rule out HW
         factors, DC location etc., then runs the other half of the test.
      
         For the webservers, I used 32 machines total. They're single
         socket, 16 core, 32 thread machines.
      
         During the memcache test, CPU load was nopsi=78.05% psi=78.98% in
         the first half and nopsi=77.52% psi=78.25%, so PSI added between
         0.7 and 0.9 percentage points to the CPU load, a difference of
         about 1%.
      
         UPDATE: I re-ran this test with the v3 version of this patch set
         and the CPU utilization was equivalent between test and control.
      
         UPDATE: v4 is on par with v3.
      
         As far as end-to-end request latency from the client perspective
         goes, we don't sample those finely enough to capture the requests
         going to those particular machines during the test, but we know the
         p50 turnaround time in this workload is 54us, and perf bench sched
         pipe on those machines show nopsi=5.232666 us/op and psi=5.587347
         us/op, so this doesn't add much here either.
      
         The profile for the pipe benchmark shows:
      
              0.87%  sched-pipe  [kernel.vmlinux]    [k] psi_group_change
              0.83%  perf.real   [kernel.vmlinux]    [k] psi_group_change
              0.82%  perf.real   [kernel.vmlinux]    [k] psi_task_change
              0.58%  sched-pipe  [kernel.vmlinux]    [k] psi_task_change
      
         The webserver load is running inside 4 nested cgroup levels. The
         CPU load with both nopsi and psi kernels was indistinguishable at
         81%.
      
         For comparison, we had to disable the cgroup cpu controller on the
         webservers because it added 4 percentage points to the CPU% during
         this same exact test.
      
         Versions of this accounting code now run on 80% of our fleet. None
         of our workloads have reported regressions during the rollout.
      
      Daniel Drake said:
      
      : I just retested the latest version at
      : http://git.cmpxchg.org/cgit.cgi/linux-psi.git (Linux 4.18) and the results
      : are great.
      :
      : Test setup:
      : Endless OS
      : GeminiLake N4200 low end laptop
      : 2GB RAM
      : swap (and zram swap) disabled
      :
      : Baseline test: open a handful of large-ish apps and several website
      : tabs in Google Chrome.
      :
      : Results: after a couple of minutes, system is excessively thrashing, mouse
      : cursor can barely be moved, UI is not responding to mouse clicks, so it's
      : impractical to recover from this situation as an ordinary user
      :
      : Add my simple killer:
      : https://gist.github.com/dsd/a8988bf0b81a6163475988120fe8d9cd
      :
      : Results: when the thrashing causes the UI to become sluggish, the killer
      : steps in and kills something (usually a chrome tab), and the system
      : remains usable.  I repeatedly opened more apps and more websites over a 15
      : minute period but I wasn't able to get the system to a point of UI
      : unresponsiveness.
      
      Suren said:
      
      : Backported to 4.9 and retested on ARMv8 8 core system running Android.
      : Signals behave as expected reacting to memory pressure, no jumps in
      : "total" counters that would indicate an overflow/underflow issues.  Nicely
      : done!
      
      This patch (of 9):
      
      If we keep just enough refault information to match the *current* page
      cache during reclaim time, we could lose a lot of events when there is
      only a temporary spike in non-cache memory consumption that pushes out all
      the cache.  Once cache comes back, we won't see those refaults.  They
      might not be actionable for LRU aging, but we want to know about them for
      measuring memory pressure.
      
      [hannes@cmpxchg.org: switch to NUMA-aware lru and slab counters]
        Link: http://lkml.kernel.org/r/20181009184732.762-2-hannes@cmpxchg.org
      Link: http://lkml.kernel.org/r/20180828172258.3185-2-hannes@cmpxchg.org
      Signed-off-by: default avatarJohannes Weiner <jweiner@fb.com>
      Acked-by: default avatarPeter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: default avatarRik van Riel <riel@surriel.com>
      Tested-by: default avatarDaniel Drake <drake@endlessm.com>
      Tested-by: default avatarSuren Baghdasaryan <surenb@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vinayak Menon <vinmenon@codeaurora.org>
      Cc: Christopher Lameter <cl@linux.com>
      Cc: Peter Enderborg <peter.enderborg@sony.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      95f9ab2d
    • Vlastimil Babka's avatar
      mm, slab: shorten kmalloc cache names for large sizes · f0d77874
      Vlastimil Babka authored
      Kmalloc cache names can get quite long for large object sizes, when the
      sizes are expressed in bytes.  Use 'k' and 'M' prefixes to make the names
      as short as possible e.g.  in /proc/slabinfo.  This works, as we mostly
      use power-of-two sizes, with exceptions only below 1k.
      
      Example: 'kmalloc-4194304' becomes 'kmalloc-4M'
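
      A sketch of the formatting rule (an illustrative helper, not the exact
      in-tree code), assuming power-of-two sizes at and above 1k:

          #include <stdio.h>

          /* Build the shortened cache name for a given object size. */
          static void kmalloc_name(char *buf, size_t len, unsigned int size)
          {
                  if (size >= 1024 * 1024)
                          snprintf(buf, len, "kmalloc-%uM", size / (1024 * 1024));
                  else if (size >= 1024)
                          snprintf(buf, len, "kmalloc-%uk", size / 1024);
                  else
                          snprintf(buf, len, "kmalloc-%u", size);
          }

          int main(void)
          {
                  char name[32];

                  kmalloc_name(name, sizeof(name), 4194304);
                  printf("%s\n", name);   /* prints "kmalloc-4M" */
                  return 0;
          }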
      
      Link: http://lkml.kernel.org/r/20180731090649.16028-7-vbabka@suse.cz
      Suggested-by: default avatarMatthew Wilcox <willy@infradead.org>
      Signed-off-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Acked-by: default avatarMel Gorman <mgorman@techsingularity.net>
      Acked-by: default avatarChristoph Lameter <cl@linux.com>
      Acked-by: default avatarRoman Gushchin <guro@fb.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      f0d77874
    • Vlastimil Babka's avatar
      mm, proc: add KReclaimable to /proc/meminfo · 61f94e18
      Vlastimil Babka authored
      The vmstat NR_KERNEL_MISC_RECLAIMABLE counter is for kernel non-slab
      allocations that can be reclaimed via a shrinker.  In /proc/meminfo, we
      can show the sum of all reclaimable kernel allocations (including slab)
      as "KReclaimable".  Add the same counter also to the per-node meminfo
      under /sys.
      
      With this counter, users will have more complete information about kernel
      memory usage.  Non-slab reclaimable pages (currently just the ION
      allocator) will no longer be missing from /proc/meminfo, which used to
      leave users wondering where part of their memory went.  More precisely,
      they already appear in
      MemAvailable, but without the new counter, it's not obvious why the value
      in MemAvailable doesn't fully correspond with the sum of other counters
      participating in it.
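
      For instance, a monitoring tool can now derive the non-slab reclaimable
      portion directly as KReclaimable - SReclaimable; a rough userspace
      sketch, assuming the counter semantics described above:

          #include <stdio.h>

          /* Report how much of KReclaimable is not slab-reclaimable. */
          int main(void)
          {
                  char line[256];
                  long val, krecl = -1, srecl = -1;
                  FILE *f = fopen("/proc/meminfo", "r");

                  if (!f)
                          return 1;
                  while (fgets(line, sizeof(line), f)) {
                          if (sscanf(line, "KReclaimable: %ld kB", &val) == 1)
                                  krecl = val;
                          else if (sscanf(line, "SReclaimable: %ld kB", &val) == 1)
                                  srecl = val;
                  }
                  fclose(f);
                  if (krecl >= 0 && srecl >= 0)
                          printf("non-slab reclaimable: %ld kB\n", krecl - srecl);
                  return 0;
          }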
      
      Link: http://lkml.kernel.org/r/20180731090649.16028-6-vbabka@suse.cz
      Signed-off-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Acked-by: default avatarRoman Gushchin <guro@fb.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      61f94e18
    • Vlastimil Babka's avatar
      mm: rename and change semantics of nr_indirectly_reclaimable_bytes · b29940c1
      Vlastimil Babka authored
      The vmstat counter NR_INDIRECTLY_RECLAIMABLE_BYTES was introduced by
      commit eb592546 ("mm: introduce NR_INDIRECTLY_RECLAIMABLE_BYTES") with
      the goal of accounting objects that can be reclaimed, but cannot be
      allocated via a SLAB_RECLAIM_ACCOUNT cache.  This is now possible via
      kmalloc() with the __GFP_RECLAIMABLE flag, and the dcache external names
      user has been converted.
      
      The counter is however still useful for accounting direct page allocations
      (i.e.  not slab) with a shrinker, such as the ION page pool.  So keep it,
      and:
      
      - change granularity to pages to be more like other counters; sub-page
        allocations should be able to use kmalloc (a caller-side sketch
        follows this list)
      - rename the counter to NR_KERNEL_MISC_RECLAIMABLE
      - expose the counter again in vmstat as "nr_kernel_misc_reclaimable"; we can
        again remove the check for not printing "hidden" counters
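
      With the counter in page units, a caller that maintains a reclaimable
      page pool (the ION page pool is the in-tree example mentioned above)
      accounts along these lines; a simplified sketch, not the literal driver
      code:

          #include <linux/mm.h>
          #include <linux/vmstat.h>

          /* Account pages entering (add=true) or leaving a reclaimable pool. */
          static void pool_account(struct page *page, unsigned int order, bool add)
          {
                  long nr = add ? (1L << order) : -(1L << order);

                  mod_node_page_state(page_pgdat(page),
                                      NR_KERNEL_MISC_RECLAIMABLE, nr);
          }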
      
      Link: http://lkml.kernel.org/r/20180731090649.16028-5-vbabka@suse.cz
      Signed-off-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Acked-by: default avatarChristoph Lameter <cl@linux.com>
      Acked-by: default avatarRoman Gushchin <guro@fb.com>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      b29940c1
    • Vlastimil Babka's avatar
      dcache: allocate external names from reclaimable kmalloc caches · 2e03b4bc
      Vlastimil Babka authored
      We can use the newly introduced kmalloc-reclaimable-X caches, to allocate
      external names in dcache, which will take care of the proper accounting
      automatically, and also improve anti-fragmentation page grouping.
      
      This effectively reverts commit f1782c9b ("dcache: account external
      names as indirectly reclaimable memory") and instead passes
      __GFP_RECLAIMABLE to kmalloc().  The accounting thus moves from
      NR_INDIRECTLY_RECLAIMABLE_BYTES to NR_SLAB_RECLAIMABLE, which is also
      considered in MemAvailable calculation and overcommit decisions.
      
      Link: http://lkml.kernel.org/r/20180731090649.16028-4-vbabka@suse.cz
      Signed-off-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Acked-by: default avatarMel Gorman <mgorman@techsingularity.net>
      Acked-by: default avatarRoman Gushchin <guro@fb.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      2e03b4bc
    • Vlastimil Babka's avatar
      mm, slab/slub: introduce kmalloc-reclaimable caches · 1291523f
      Vlastimil Babka authored
      Kmem caches can be created with a SLAB_RECLAIM_ACCOUNT flag, which
      indicates they contain objects which can be reclaimed under memory
      pressure (typically through a shrinker).  This makes the slab pages
      accounted as NR_SLAB_RECLAIMABLE in vmstat, which is also reflected in
      the MemAvailable meminfo counter and in overcommit decisions.  The slab pages
      are also allocated with __GFP_RECLAIMABLE, which is good for
      anti-fragmentation through grouping pages by mobility.
      
      The generic kmalloc-X caches are created without this flag, but are
      sometimes also used for objects that can be reclaimed, which due to
      their varying size cannot have a dedicated kmem cache with the
      SLAB_RECLAIM_ACCOUNT flag.  A prominent example is dcache external
      names, which prompted the creation
      of a new, manually managed vmstat counter NR_INDIRECTLY_RECLAIMABLE_BYTES
      in commit f1782c9b ("dcache: account external names as indirectly
      reclaimable memory").
      
      To better handle this and any other similar cases, this patch introduces
      SLAB_RECLAIM_ACCOUNT variants of kmalloc caches, named kmalloc-rcl-X.
      They are used whenever the kmalloc() call passes __GFP_RECLAIMABLE among
      gfp flags.  They are added to the kmalloc_caches array as a new type.
      Allocations with both __GFP_DMA and __GFP_RECLAIMABLE will use a dma type
      cache.
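
      The resulting type selection from the gfp flags boils down to something
      like this (a simplified sketch of the rule described above, not
      necessarily the exact in-tree helper):

          #include <linux/gfp.h>

          enum kmalloc_cache_type {
                  KMALLOC_NORMAL = 0,
                  KMALLOC_RECLAIM,
                  KMALLOC_DMA,
                  NR_KMALLOC_TYPES
          };

          /* Pick the kmalloc cache "type" for an allocation's gfp flags. */
          static enum kmalloc_cache_type kmalloc_type(gfp_t flags)
          {
                  if (flags & __GFP_DMA)
                          return KMALLOC_DMA;     /* wins even with __GFP_RECLAIMABLE */
                  if (flags & __GFP_RECLAIMABLE)
                          return KMALLOC_RECLAIM;
                  return KMALLOC_NORMAL;
          }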
      
      This change only applies to SLAB and SLUB, not SLOB.  This is fine, since
      SLOB targets tiny systems, and this patch does add some overhead in the
      form of kmem management objects.
      
      Link: http://lkml.kernel.org/r/20180731090649.16028-3-vbabka@suse.cz
      Signed-off-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Acked-by: default avatarMel Gorman <mgorman@techsingularity.net>
      Acked-by: default avatarChristoph Lameter <cl@linux.com>
      Acked-by: default avatarRoman Gushchin <guro@fb.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      1291523f
    • Vlastimil Babka's avatar
      mm, slab: combine kmalloc_caches and kmalloc_dma_caches · cc252eae
      Vlastimil Babka authored
      Patch series "kmalloc-reclaimable caches", v4.
      
      As discussed at LSF/MM [1] here's a patchset that introduces
      kmalloc-reclaimable caches (more details in the second patch) and uses
      them for dcache external names.  That allows us to repurpose the
      NR_INDIRECTLY_RECLAIMABLE_BYTES counter later in the series.
      
      With patch 3/6, dcache external names are allocated from kmalloc-rcl-*
      caches, eliminating the need for manual accounting.  More importantly, it
      also ensures the reclaimable kmalloc allocations are grouped in pages
      separate from the regular kmalloc allocations.  The work on proper
      accounting of dcache external names has shown that it's easy for a
      misbehaving process to allocate lots of them, causing premature OOMs.
      Without the added grouping, it's likely that a similar workload can
      interleave the dcache external name allocations with regular kmalloc
      allocations (note: I haven't searched myself for an example of such a
      regular kmalloc allocation, but I would be very surprised if there
      wasn't one).  A pathological case would be e.g. one 64-byte regular
      allocation sharing a page with 63 external dcache names (64x64=4096),
      which means the page is not freed even after all the dcache names are
      reclaimed, and the process can thus "steal" the whole page with a
      single 64-byte allocation.
      
      If other kmalloc users similar to dcache external names become identified,
      they can also benefit from the new functionality simply by adding
      __GFP_RECLAIMABLE to the kmalloc calls.
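
      For example, a hypothetical caller whose objects are freeable under
      memory pressure would simply do:

          #include <linux/slab.h>

          /* Objects can be freed by our shrinker, so group them as reclaimable. */
          static void *alloc_reclaimable_blob(size_t size)
          {
                  return kmalloc(size, GFP_KERNEL | __GFP_RECLAIMABLE);
          }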
      
      Side benefits of the patchset (that could also be merged separately)
      include a removed branch for detecting __GFP_DMA kmalloc(), and shortened
      kmalloc cache names in /proc/slabinfo output.  The latter is potentially
      an ABI break in case there are tools parsing the names and expecting the
      values to be in bytes.
      
      This is how /proc/slabinfo looks after booting in virtme:
      
      ...
      kmalloc-rcl-4M         0      0 4194304    1 1024 : tunables    1    1    0 : slabdata      0      0      0
      ...
      kmalloc-rcl-96         7     32    128   32    1 : tunables  120   60    8 : slabdata      1      1      0
      kmalloc-rcl-64        25    128     64   64    1 : tunables  120   60    8 : slabdata      2      2      0
      kmalloc-rcl-32         0      0     32  124    1 : tunables  120   60    8 : slabdata      0      0      0
      kmalloc-4M             0      0 4194304    1 1024 : tunables    1    1    0 : slabdata      0      0      0
      kmalloc-2M             0      0 2097152    1  512 : tunables    1    1    0 : slabdata      0      0      0
      kmalloc-1M             0      0 1048576    1  256 : tunables    1    1    0 : slabdata      0      0      0
      ...
      
      /proc/vmstat with renamed nr_indirectly_reclaimable_bytes counter:
      
      ...
      nr_slab_reclaimable 2817
      nr_slab_unreclaimable 1781
      ...
      nr_kernel_misc_reclaimable 0
      ...
      
      /proc/meminfo with new KReclaimable counter:
      
      ...
      Shmem:               564 kB
      KReclaimable:      11260 kB
      Slab:              18368 kB
      SReclaimable:      11260 kB
      SUnreclaim:         7108 kB
      KernelStack:        1248 kB
      ...
      
      This patch (of 6):
      
      The kmalloc caches currently maintain a separate (optional) array,
      kmalloc_dma_caches, for __GFP_DMA allocations.  There are tests for
      __GFP_DMA in the allocation hotpaths.  We can avoid the branches by
      combining kmalloc_caches and kmalloc_dma_caches into a single
      two-dimensional array where the outer dimension is the cache "type".
      This will also allow adding kmalloc-reclaimable caches as a third type.
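
      Schematically, the combined layout and lookup become the following
      (constants and the size-class computation elided; kmalloc_type() is the
      flag-based selector sketched earlier above, and size_to_index() is a
      stand-in for the existing size lookup):

          /* One outer slot per cache "type", one inner slot per size class. */
          struct kmem_cache *
          kmalloc_caches[NR_KMALLOC_TYPES][KMALLOC_SHIFT_HIGH + 1];

          static struct kmem_cache *pick_kmalloc_cache(size_t size, gfp_t flags)
          {
                  return kmalloc_caches[kmalloc_type(flags)][size_to_index(size)];
          }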
      
      Link: http://lkml.kernel.org/r/20180731090649.16028-2-vbabka@suse.cz
      Signed-off-by: default avatarVlastimil Babka <vbabka@suse.cz>
      Acked-by: default avatarMel Gorman <mgorman@techsingularity.net>
      Acked-by: default avatarChristoph Lameter <cl@linux.com>
      Acked-by: default avatarRoman Gushchin <guro@fb.com>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Sumit Semwal <sumit.semwal@linaro.org>
      Cc: Vijayanand Jitta <vjitta@codeaurora.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      cc252eae
    • Andrea Arcangeli's avatar
      userfaultfd: allow get_mempolicy(MPOL_F_NODE|MPOL_F_ADDR) to trigger userfaults · 3b9aadf7
      Andrea Arcangeli authored
      get_mempolicy(MPOL_F_NODE|MPOL_F_ADDR) called get_user_pages() in a way
      that would not wait for userfaults before failing, so it would hit a
      SIGBUS instead.  Using get_user_pages_locked/unlocked instead allows
      get_mempolicy() to let userfaults resolve the fault and fill the hole
      before grabbing the node id of the page.
      
      If the user calls get_mempolicy() with MPOL_F_ADDR | MPOL_F_NODE for an
      address inside an area managed by uffd and there is no page at that
      address, the page allocation from within get_mempolicy() will fail
      because get_user_pages() does not allow for page fault retry required
      for uffd; the user will get SIGBUS.
      
      With this patch, the page fault will be resolved by the uffd and the
      get_mempolicy() will continue normally.
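
      The retry-friendly GUP pattern that makes this possible looks roughly
      like the following (a simplified sketch of the lookup_node() change;
      details in the real patch differ):

          #include <linux/mm.h>
          #include <linux/sched.h>
          #include <linux/errno.h>

          static int node_of_addr(unsigned long addr)
          {
                  struct mm_struct *mm = current->mm;
                  struct page *page;
                  int nid, locked = 1;
                  long got;

                  down_read(&mm->mmap_sem);
                  /*
                   * get_user_pages_locked() may drop mmap_sem while the fault
                   * is handled (here: by the userfaultfd monitor) and retry;
                   * "locked" reports whether mmap_sem is still held on return.
                   */
                  got = get_user_pages_locked(addr & PAGE_MASK, 1, 0, &page, &locked);
                  if (locked)
                          up_read(&mm->mmap_sem);
                  if (got <= 0)
                          return -EFAULT;

                  nid = page_to_nid(page);
                  put_page(page);
                  return nid;
          }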
      
      Background:
      
      Per code review: previously the syscall would have returned -EFAULT
      (vm_fault_to_errno); now it will block and wait for a userfault (if it's
      woken before the fault is resolved, it will still return -EFAULT).
      
      This way get_mempolicy will give a chance to an "unaware" app to be
      compliant with userfaults.
      
      The reason this change is visible is that becoming "userfault compliant"
      cannot regress anything: all other syscalls including read(2)/write(2)
      had to become "userfault compliant" a long time ago (that's one of the
      things userfaultfd can do that PROT_NONE and trapping segfaults can't).
      
      So this is just one more syscall becoming "userfault compliant", like
      all the other major ones already are.
      
      This has been happening with a virtio-bridge dpdk process which just
      called get_mempolicy() on the guest space post live migration, but
      before the memory had a chance to be migrated to the destination.
      
      I didn't run an strace to be able to show the -EFAULT going away, but I
      have confirmation of the below debug-aid information (only visible with
      CONFIG_DEBUG_VM=y) going away with the patch:
      
          [20116.371461] FAULT_FLAG_ALLOW_RETRY missing 0
          [20116.371464] CPU: 1 PID: 13381 Comm: vhost-events Not tainted 4.17.12-200.fc28.x86_64 #1
          [20116.371465] Hardware name: LENOVO 20FAS2BN0A/20FAS2BN0A, BIOS N1CET54W (1.22 ) 02/10/2017
          [20116.371466] Call Trace:
          [20116.371473]  dump_stack+0x5c/0x80
          [20116.371476]  handle_userfault.cold.37+0x1b/0x22
          [20116.371479]  ? remove_wait_queue+0x20/0x60
          [20116.371481]  ? poll_freewait+0x45/0xa0
          [20116.371483]  ? do_sys_poll+0x31c/0x520
          [20116.371485]  ? radix_tree_lookup_slot+0x1e/0x50
          [20116.371488]  shmem_getpage_gfp+0xce7/0xe50
          [20116.371491]  ? page_add_file_rmap+0x1a/0x2c0
          [20116.371493]  shmem_fault+0x78/0x1e0
          [20116.371495]  ? filemap_map_pages+0x3a1/0x450
          [20116.371498]  __do_fault+0x1f/0xc0
          [20116.371500]  __handle_mm_fault+0xe2e/0x12f0
          [20116.371502]  handle_mm_fault+0xda/0x200
          [20116.371504]  __get_user_pages+0x238/0x790
          [20116.371506]  get_user_pages+0x3e/0x50
          [20116.371510]  kernel_get_mempolicy+0x40b/0x700
          [20116.371512]  ? vfs_write+0x170/0x1a0
          [20116.371515]  __x64_sys_get_mempolicy+0x21/0x30
          [20116.371517]  do_syscall_64+0x5b/0x160
          [20116.371520]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      The above harmless debug message (not a kernel crash, just a
      dump_stack()) is shown with CONFIG_DEBUG_VM=y to more quickly identify
      and improve kernel spots that may have to become "userfaultfd
      compliant" like this one (without having to run an strace and search
      for syscall misbehavior).  Spots like the above are closer to a kernel
      bug for the non-cooperative usages that Mike focuses on than for the
      dpdk qemu-cooperative usages that reproduced it, but it's still nicer
      to get this fixed for dpdk too.
      
      The only part of the patch that gave me pause is the implementation
      detail of mpol_get, but it looks like it should be safe no matter what
      kind of mempolicy structure it is (the default static policy also starts
      at 1, so it'll go to 2 and back to 1 without crashing everything at 0).
      
      [rppt@linux.vnet.ibm.com: changelog addition]
        http://lkml.kernel.org/r/20180904073718.GA26916@rapoport-lnx
      Link: http://lkml.kernel.org/r/20180831214848.23676-1-aarcange@redhat.com
      Signed-off-by: default avatarAndrea Arcangeli <aarcange@redhat.com>
      Reported-by: default avatarMaxime Coquelin <maxime.coquelin@redhat.com>
      Tested-by: default avatarDr. David Alan Gilbert <dgilbert@redhat.com>
      Reviewed-by: default avatarMike Rapoport <rppt@linux.vnet.ibm.com>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      3b9aadf7
    • Mike Rapoport's avatar
      alpha: switch to NO_BOOTMEM · 6471f52a
      Mike Rapoport authored
      Replace bootmem allocator with memblock and enable use of NO_BOOTMEM like
      on most other architectures.
      
      Alpha gets the description of the physical memory from the firmware as an
      array of memory clusters.  Each cluster that is not reserved by the
      firmware is added to memblock.memory.
      
      Once memblock.memory is set up, we reserve the kernel and initrd pages
      with memblock_reserve().
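
      In outline, the conversion follows the usual memblock pattern; a sketch
      with illustrative names (pa/size come from the firmware's cluster
      descriptors, initrd_pa/initrd_size from the boot loader), not the
      literal alpha code:

          #include <linux/init.h>
          #include <linux/mm.h>
          #include <linux/memblock.h>
          #include <asm/sections.h>

          /* Called for each memory cluster the firmware reports as usable. */
          static void __init add_cluster(phys_addr_t pa, phys_addr_t size)
          {
                  memblock_add(pa, size);
          }

          /* Called once: carve out what is already occupied. */
          static void __init reserve_kernel(phys_addr_t initrd_pa,
                                            phys_addr_t initrd_size)
          {
                  memblock_reserve(__pa(_text), _end - _text);   /* kernel image */
                  if (initrd_size)
                          memblock_reserve(initrd_pa, initrd_size);
          }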
      
      Since we don't need the bootmem bitmap anymore, the code that finds an
      appropriate place for it is removed.
      
      The conversion does not take care of NUMA support, which has been marked
      broken for more than 10 years now.
      
      Link: http://lkml.kernel.org/r/1535952894-10967-1-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: default avatarMike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      6471f52a
    • Mike Rapoport's avatar
      unicore32: switch to NO_BOOTMEM · e92d39cd
      Mike Rapoport authored
      The unicore32 architecture already supports memblock and uses it for some
      early memory reservations, e.g. the initrd and the page tables.
      
      At some point unicore32 allocates the bootmem bitmap from the memblock and
      then hands over the memory reservations from memblock to bootmem.
      
      This patch removes the bootmem initialization and leaves memblock as the
      only boot time memory manager for unicore32.
      
      Link: http://lkml.kernel.org/r/1533326330-31677-8-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: default avatarMike Rapoport <rppt@linux.vnet.ibm.com>
      Acked-by: default avatarGuan Xuetao <gxt@pku.edu.cn>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      e92d39cd
    • Mike Rapoport's avatar
      um: switch to NO_BOOTMEM · ddf63983
      Mike Rapoport authored
      Replace bootmem initialization with memblock_add and memblock_reserve calls
      and explicit initialization of {min,max}_low_pfn.
      
      Link: http://lkml.kernel.org/r/1533326330-31677-7-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: default avatarMike Rapoport <rppt@linux.vnet.ibm.com>
      Acked-by: default avatarRichard Weinberger <richard@nod.at>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      ddf63983
    • Mike Rapoport's avatar
      um: setup_physmem: stop using global variables · be6ec5b1
      Mike Rapoport authored
      The setup_physmem() function receives uml_physmem and uml_reserved as
      parameters, yet still uses these global variables.  Replace such usage
      with the local variables.
      
      Link: http://lkml.kernel.org/r/1533326330-31677-6-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: default avatarMike Rapoport <rppt@linux.vnet.ibm.com>
      Acked-by: default avatarRichard Weinberger <richard@nod.at>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      be6ec5b1
    • Mike Rapoport's avatar
      nios2: switch to NO_BOOTMEM · 00423792
      Mike Rapoport authored
      Remove bootmem bitmap initialization and replace reserve_bootmem() with
      memblock_reserve().
      
      Link: http://lkml.kernel.org/r/1533326330-31677-5-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: default avatarMike Rapoport <rppt@linux.vnet.ibm.com>
      Acked-by: default avatarLey Foon Tan <ley.foon.tan@intel.com>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      00423792
    • Mike Rapoport's avatar
      nios2: use generic early_init_dt_add_memory_arch · a811c05c
      Mike Rapoport authored
      All we have to do is enable memblock; the generic FDT code will take
      care of the rest.
      
      Link: http://lkml.kernel.org/r/1533326330-31677-4-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: default avatarMike Rapoport <rppt@linux.vnet.ibm.com>
      Acked-by: default avatarLey Foon Tan <ley.foon.tan@intel.com>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      a811c05c
    • Mike Rapoport's avatar
      of: ignore sub-page memory regions · 6072cf56
      Mike Rapoport authored
      The memory region size is rounded down to a page boundary; for a sub-page
      region it becomes 0, and there is no point in adding an empty region.
      Moreover, when the base is less than PAGE_SIZE we get a bogus size, as
      (base + size - 1) evaluates to -1.
      
      8cccffc5 ("of: check for size < 0 after rounding in
      early_init_dt_add_memory_arch") introduced a test for wrap around for the
      case when base is not page aligned, the same test can be used to ignore
      sub-page region sizes.
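
      The intended behaviour, roughly (a simplified sketch of the rounding in
      early_init_dt_add_memory_arch(), not the literal diff):

          #include <linux/init.h>
          #include <linux/mm.h>
          #include <linux/memblock.h>

          static void __init add_dt_memory(u64 base, u64 size)
          {
                  /* trim a partially covered first page, or give up if that is all */
                  if (!PAGE_ALIGNED(base)) {
                          if (size < PAGE_SIZE - (base & ~PAGE_MASK))
                                  return;         /* sub-page region: ignore */
                          size -= PAGE_SIZE - (base & ~PAGE_MASK);
                          base = PAGE_ALIGN(base);
                  }
                  size &= PAGE_MASK;              /* round size down to whole pages */
                  if (!size)
                          return;                 /* smaller than one page: ignore */

                  memblock_add(base, size);
          }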
      
      Link: http://lkml.kernel.org/r/1533326330-31677-3-git-send-email-rppt@linux.vnet.ibm.com
      Signed-off-by: default avatarMike Rapoport <rppt@linux.vnet.ibm.com>
      Reviewed-by: default avatarRob Herring <robh@kernel.org>
      Cc: Guan Xuetao <gxt@pku.edu.cn>
      Cc: Ley Foon Tan <ley.foon.tan@intel.com>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Michal Hocko <mhocko@kernel.org>
      Signed-off-by: default avatarAndrew Morton <akpm@linux-foundation.org>
      Signed-off-by: default avatarLinus Torvalds <torvalds@linux-foundation.org>
      6072cf56