1. 15 May, 2019 15 commits
    • mm: shuffle initial free memory to improve memory-side-cache utilization · e900a918
      Dan Williams authored
      Patch series "mm: Randomize free memory", v10.
      
      This patch (of 3):
      
      Randomization of the page allocator improves the average utilization of
      a direct-mapped memory-side-cache.  Memory side caching is a platform
      capability that Linux has been previously exposed to in HPC
      (high-performance computing) environments on specialty platforms.  In
      that instance it was a smaller pool of high-bandwidth-memory relative to
      higher-capacity / lower-bandwidth DRAM.  Now, this capability is going
      to be found on general purpose server platforms where DRAM is a cache in
      front of higher latency persistent memory [1].
      
      Robert offered an explanation of the state of the art of Linux
      interactions with memory-side-caches [2], and I copy it here:
      
          It's been a problem in the HPC space:
          http://www.nersc.gov/research-and-development/knl-cache-mode-performance-coe/
      
          A kernel module called zonesort is available to try to help:
          https://software.intel.com/en-us/articles/xeon-phi-software
      
          and this abandoned patch series proposed that for the kernel:
          https://lkml.kernel.org/r/20170823100205.17311-1-lukasz.daniluk@intel.com
      
          Dan's patch series doesn't attempt to ensure buffers won't conflict, but
          also reduces the chance that the buffers will. This will make performance
          more consistent, albeit slower than "optimal" (which is near impossible
          to attain in a general-purpose kernel).  That's better than forcing
          users to deploy remedies like:
              "To eliminate this gradual degradation, we have added a Stream
               measurement to the Node Health Check that follows each job;
               nodes are rebooted whenever their measured memory bandwidth
               falls below 300 GB/s."
      
      A replacement for zonesort was merged upstream in commit cc9aec03
      ("x86/numa_emulation: Introduce uniform split capability").  With this
      numa_emulation capability, memory can be split into cache sized
      ("near-memory" sized) numa nodes.  A bind operation to such a node, and
      disabling workloads on other nodes, enables full cache performance.
      However, once the workload exceeds the cache size then cache conflicts
      are unavoidable.  While HPC environments might be able to tolerate
      time-scheduling of cache sized workloads, for general purpose server
      platforms, the oversubscribed cache case will be the common case.
      
      The worst case scenario is that a server system owner benchmarks a
      workload at boot with an un-contended cache only to see that performance
      degrade over time, even below the average cache performance due to
      excessive conflicts.  Randomization clips the peaks and fills in the
      valleys of cache utilization to yield steady average performance.
      
      Here are some performance impact details of the patches:
      
       1/ An Intel internal synthetic memory bandwidth measurement tool saw a
          3X speedup in a contrived case that tries to force cache conflicts.
          The contrived case used the numa_emulation capability to force an
          instance of the benchmark to be run in two of the near-memory sized
          numa nodes.  If both instances were placed on the same emulated node
          they would fit and cause zero conflicts, while on separate emulated
          nodes without randomization they underutilized the cache and
          conflicted unnecessarily due to the in-order allocation per node.
      
      2/ A well known Java server application benchmark was run with a heap
         size that exceeded cache size by 3X.  The cache conflict rate was 8%
         for the first run and degraded to 21% after page allocator aging.  With
         randomization enabled the rate levelled out at 11%.
      
      3/ A MongoDB workload did not observe measurable difference in
         cache-conflict rates, but the overall throughput dropped by 7% with
         randomization in one case.
      
      4/ Mel Gorman ran his suite of performance workloads with randomization
         enabled on platforms without a memory-side-cache and saw a mix of some
         improvements and some losses [3].
      
       While there is a potentially significant improvement for applications
       that depend on low-latency access across a wide working set, the benefit
       may be negligible or even negative for other workloads.  For this reason
       the shuffle capability defaults to off unless a direct-mapped
       memory-side-cache is detected.  Even then, the page_alloc.shuffle=0
       parameter can be specified to disable the randomization on those systems.
      
       Outside of memory-side-cache utilization concerns there is a potential
       security benefit from randomization.  Some data exfiltration and
       return-oriented-programming attacks rely on the ability to infer the
       location of sensitive data objects.  The kernel page allocator, especially
       early in system boot, has predictable first-in-first-out behavior for
       physical pages.  Pages are freed in physical address order when first
       onlined.
      
      Quoting Kees:
          "While we already have a base-address randomization
           (CONFIG_RANDOMIZE_MEMORY), attacks against the same hardware and
           memory layouts would certainly be using the predictability of
           allocation ordering (i.e. for attacks where the base address isn't
           important: only the relative positions between allocated memory).
           This is common in lots of heap-style attacks. They try to gain
           control over ordering by spraying allocations, etc.
      
           I'd really like to see this because it gives us something similar
           to CONFIG_SLAB_FREELIST_RANDOM but for the page allocator."
      
       While SLAB_FREELIST_RANDOM reduces the predictability of some local slab
       caches, it leaves the vast bulk of memory to be allocated predictably, in
       order.  However, it should be noted that the concrete security benefits
       are hard to quantify, and no known CVE is mitigated by this
       randomization.
      
      Introduce shuffle_free_memory(), and its helper shuffle_zone(), to perform
      a Fisher-Yates shuffle of the page allocator 'free_area' lists when they
      are initially populated with free memory at boot and at hotplug time.  Do
      this based on either the presence of a page_alloc.shuffle=Y command line
      parameter, or autodetection of a memory-side-cache (to be added in a
      follow-on patch).
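
       As a rough illustration of the shuffling pattern (a sketch only; the
       kernel's shuffle_zone() walks the zone in shuffle-order-sized blocks of
       struct pages rather than a plain array), a classic Fisher-Yates pass
       over an array of free-block indices looks like this:

               #include <stdlib.h>

               /* Sketch: swap each element with a randomly chosen peer at or
                * below it.  rand() stands in for the kernel's random helpers.
                */
               static void shuffle_blocks(unsigned long *blocks, size_t nr)
               {
                       size_t i, j;
                       unsigned long tmp;

                       if (nr < 2)
                               return;
                       for (i = nr - 1; i > 0; i--) {
                               j = (size_t)rand() % (i + 1);
                               tmp = blocks[i];
                               blocks[i] = blocks[j];
                               blocks[j] = tmp;
                       }
               }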
      
       The shuffling is done in terms of CONFIG_SHUFFLE_PAGE_ORDER sized free
       pages, where the default CONFIG_SHUFFLE_PAGE_ORDER is MAX_ORDER-1, i.e.
       order 10 (4MB).  This trades off randomization granularity against time
       spent shuffling.  MAX_ORDER-1 was chosen to be minimally invasive to the
       page allocator while still showing memory-side-cache behavior
       improvements, with the expectation that the security implications of
       finer-granularity randomization are mitigated by
       CONFIG_SLAB_FREELIST_RANDOM.  The performance impact of the shuffling
       appears to be in the noise compared to other memory initialization work.
      
       This initial randomization can be undone over time, so a follow-on patch
       is introduced to inject entropy on page free decisions.  It is reasonable
       to ask whether the page-free entropy alone would be sufficient; it is
       not, due to the in-order initial freeing of pages.  At the start of that
       process, putting page1 in front of or behind page0 still keeps them close
       together; page2 is still near page1 and has a high chance of being
       adjacent.  As more pages are added, ordering diversity improves, but
       there is still high page locality for the low-address pages, and this
       leads to no significant impact on the cache conflict rate.
      
      [1]: https://itpeernetwork.intel.com/intel-optane-dc-persistent-memory-operating-modes/
      [2]: https://lkml.kernel.org/r/AT5PR8401MB1169D656C8B5E121752FC0F8AB120@AT5PR8401MB1169.NAMPRD84.PROD.OUTLOOK.COM
      [3]: https://lkml.org/lkml/2018/10/12/309
      
      [dan.j.williams@intel.com: fix shuffle enable]
        Link: http://lkml.kernel.org/r/154943713038.3858443.4125180191382062871.stgit@dwillia2-desk3.amr.corp.intel.com
      [cai@lca.pw: fix SHUFFLE_PAGE_ALLOCATOR help texts]
        Link: http://lkml.kernel.org/r/20190425201300.75650-1-cai@lca.pw
       Link: http://lkml.kernel.org/r/154899811738.3165233.12325692939590944259.stgit@dwillia2-desk3.amr.corp.intel.com
       Signed-off-by: Dan Williams <dan.j.williams@intel.com>
       Signed-off-by: Qian Cai <cai@lca.pw>
       Reviewed-by: Kees Cook <keescook@chromium.org>
       Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Keith Busch <keith.busch@intel.com>
      Cc: Robert Elliott <elliott@hpe.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e900a918
    • mm/vmalloc.c: convert vmap_lazy_nr to atomic_long_t · 4d36e6f8
      Uladzislau Rezki (Sony) authored
       The vmap_lazy_nr variable has type atomic_t, which is a 4-byte integer
       on both 32-bit and 64-bit systems.  lazy_max_pages() deals with
       "unsigned long", which is 8 bytes on 64-bit systems, thus vmap_lazy_nr
       should be 8 bytes on 64-bit as well.
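
       A minimal sketch of the type change (the call sites shown in comments
       are assumptions about the surrounding code, not quotes from the patch):

               /* before: a 4-byte counter even on 64-bit kernels
                *   static atomic_t vmap_lazy_nr = ATOMIC_INIT(0);
                * after: matches the unsigned long arithmetic in lazy_max_pages()
                */
               static atomic_long_t vmap_lazy_nr = ATOMIC_LONG_INIT(0);

               /* accesses change accordingly, e.g.:
                *   atomic_long_add(nr_lazy_pages, &vmap_lazy_nr);
                *   if (atomic_long_read(&vmap_lazy_nr) > lazy_max_pages()) ...
                */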
      
       Link: http://lkml.kernel.org/r/20190131162452.25879-1-urezki@gmail.com
       Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
       Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
       Reviewed-by: William Kucharski <william.kucharski@oracle.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
      Cc: Joel Fernandes <joelaf@google.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4d36e6f8
    • mm/vmalloc.c: add priority threshold to __purge_vmap_area_lazy() · 68571be9
      Uladzislau Rezki (Sony) authored
       Commit 763b218d ("mm: add preempt points into __purge_vmap_area_lazy()")
       introduced some preempt points; one of them prioritizes an allocation
       over the lazy freeing of vmap areas.
       
       Prioritizing an allocation over freeing does not work well all the time;
       it should rather be a compromise:
       
       1) The number of lazy pages directly influences the busy list length and
          thus operations like allocation, lookup, unmap, remove, etc.
       
       2) Under heavy stress of the vmalloc subsystem I ran into a situation
          where memory usage kept increasing, hitting the out_of_memory -> panic
          state, because the logic that frees vmap areas in
          __purge_vmap_area_lazy() was completely blocked.
       
       Establish a threshold beyond which freeing is prioritized back over
       allocation, creating a balance between the two.
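
       A minimal sketch of the idea, assuming the loop structure of
       __purge_vmap_area_lazy() at the time (the lock, list and variable names
       here are assumptions, not quotes from the patch):

               resched_threshold = lazy_max_pages() << 1;

               llist_for_each_entry_safe(va, n_va, valist, purge_list) {
                       unsigned long nr = (va->va_end - va->va_start) >> PAGE_SHIFT;

                       __free_vmap_area(va);
                       atomic_long_sub(nr, &vmap_lazy_nr);

                       /* Only yield to allocations while the lazy backlog is
                        * below the threshold; past it, keep freeing so the
                        * backlog cannot grow without bound. */
                       if (atomic_long_read(&vmap_lazy_nr) < resched_threshold)
                               cond_resched_lock(&vmap_area_lock);
               }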
      
       Using the vmalloc test driver in "stress mode", i.e. when all available
       test cases are run simultaneously on all online CPUs, applying pressure
       on the vmalloc subsystem, my HiKey 960 board runs out of memory because
       the __purge_vmap_area_lazy() logic simply is not able to free pages in
       time.
      
      How I run it:
      
      1) You should build your kernel with CONFIG_TEST_VMALLOC=m
      2) ./tools/testing/selftests/vm/test_vmalloc.sh stress
      
       During this test "vmap_lazy_nr" pages will go far beyond the acceptable
       lazy_max_pages() threshold, which will lead to an enormous busy list
       size and other problems, including increased allocation time.
      
       Link: http://lkml.kernel.org/r/20190124115648.9433-3-urezki@gmail.com
       Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
       Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Thomas Garnier <thgarnie@google.com>
      Cc: Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Joel Fernandes <joelaf@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Joel Fernandes <joel@joelfernandes.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      68571be9
    • kernel/sched/psi.c: expose pressure metrics on root cgroup · df5ba5be
      Dan Schatzberg authored
      Pressure metrics are already recorded and exposed in procfs for the
      entire system, but any tool which monitors cgroup pressure has to
      special case the root cgroup to read from procfs.  This patch exposes
      the already recorded pressure metrics on the root cgroup.
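
       A small userspace sketch of what this enables, assuming cgroup2 is
       mounted at /sys/fs/cgroup: a monitoring tool can read the root cgroup's
       memory.pressure file with the same code path it uses for any child
       cgroup, instead of special-casing /proc/pressure/memory.

               #include <stdio.h>

               int main(void)
               {
                       char line[256];
                       /* same "some ..." / "full ..." line format as
                        * /proc/pressure/memory */
                       FILE *f = fopen("/sys/fs/cgroup/memory.pressure", "r");

                       if (!f)
                               return 1;
                       while (fgets(line, sizeof(line), f))
                               fputs(line, stdout);
                       fclose(f);
                       return 0;
               }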
      
       Link: http://lkml.kernel.org/r/20190510174938.3361741-1-dschatzberg@fb.com
       Signed-off-by: Dan Schatzberg <dschatzberg@fb.com>
       Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      df5ba5be
    • psi: introduce psi monitor · 0e94682b
      Suren Baghdasaryan authored
       The psi monitor aims to provide a low-latency, short-term pressure
       detection mechanism configurable by users.  It allows users to monitor
       psi metric growth and trigger events whenever a metric rises above a
       user-defined threshold within a user-defined time window.
      
      Time window and threshold are both expressed in usecs.  Multiple psi
      resources with different thresholds and window sizes can be monitored
      concurrently.
      
       Psi monitors activate when the system enters a stall state for the
       monitored psi metric and deactivate upon exit from the stall state.
       While the system is in the stall state, psi signal growth is monitored
       at a rate of 10 times per tracking window.  The minimum window size is
       500ms, therefore the minimum monitoring interval is 50ms.  The maximum
       window size is 10s, with a monitoring interval of 1s.
       
       When activated, a psi monitor stays active for at least the duration of
       one tracking window to avoid repeated activations/deactivations when the
       psi signal is bouncing.
      
      Notifications to the users are rate-limited to one per tracking window.
      
       Link: http://lkml.kernel.org/r/20190319235619.260832-8-surenb@google.com
       Signed-off-by: Suren Baghdasaryan <surenb@google.com>
       Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0e94682b
    • include/: refactor headers to allow kthread.h inclusion in psi_types.h · 8af0c18a
      Suren Baghdasaryan authored
       kthread.h can't be included in psi_types.h because that would create a
       circular inclusion: kthread.h eventually includes psi_types.h, and the
       compiler then complains about kthread structures not being defined,
       because they are defined further down in kthread.h.  Resolve this by
       removing the psi_types.h inclusion from the headers included by
       kthread.h.
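
       An abstract illustration of the kind of cycle being broken (hypothetical
       headers, not the actual kernel ones): when the kthread-like header is
       included first, its include guard stops the re-entry, so the psi-like
       header sees none of the kthread definitions:

               /* kthread_like.h */
               #ifndef KTHREAD_LIKE_H
               #define KTHREAD_LIKE_H
               #include "psi_like.h"          /* pulled in before our own types */
               struct kthread_like { int worker; };
               #endif

               /* psi_like.h */
               #ifndef PSI_LIKE_H
               #define PSI_LIKE_H
               #include "kthread_like.h"      /* guard blocks re-entry, so
                                                 struct kthread_like is not yet
                                                 defined at this point */
               struct psi_like { struct kthread_like k; };   /* build error */
               #endif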
      
       Link: http://lkml.kernel.org/r/20190319235619.260832-7-surenb@google.com
       Signed-off-by: Suren Baghdasaryan <surenb@google.com>
       Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8af0c18a
    • psi: track changed states · 333f3017
      Suren Baghdasaryan authored
       Introduce a changed_states parameter into collect_percpu_times() to
       track the states that changed since the last update.
       
       This will be needed in the monitor patch to detect whether polled states
       have become active.
      
       Link: http://lkml.kernel.org/r/20190319235619.260832-6-surenb@google.com
       Signed-off-by: Suren Baghdasaryan <surenb@google.com>
       Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      333f3017
    • psi: split update_stats into parts · 7fc70a39
      Suren Baghdasaryan authored
       Split update_stats() into collect_percpu_times() and update_averages()
       so that collect_percpu_times() can be reused later inside the psi
       monitor.
      
       Link: http://lkml.kernel.org/r/20190319235619.260832-5-surenb@google.com
       Signed-off-by: Suren Baghdasaryan <surenb@google.com>
       Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7fc70a39
    • psi: rename psi fields in preparation for psi trigger addition · bcc78db6
      Suren Baghdasaryan authored
       Rename the psi_group structure member fields used for calculating psi
       totals and averages, to clearly distinguish them from the
       trigger-related fields that will be added by "psi: introduce psi
       monitor".
      
      [surenb@google.com: v6]
        Link: http://lkml.kernel.org/r/20190319235619.260832-4-surenb@google.com
       Link: http://lkml.kernel.org/r/20190124211518.244221-5-surenb@google.com
       Signed-off-by: Suren Baghdasaryan <surenb@google.com>
       Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bcc78db6
    • psi: introduce state_mask to represent stalled psi states · 33b2d630
      Suren Baghdasaryan authored
      Patch series "psi: pressure stall monitors", v6.
      
      This is a respin of:
        https://lwn.net/ml/linux-kernel/20190308184311.144521-1-surenb%40google.com/
      
      Android is adopting psi to detect and remedy memory pressure that
      results in stuttering and decreased responsiveness on mobile devices.
      
      Psi gives us the stall information, but because we're dealing with
      latencies in the millisecond range, periodically reading the pressure
      files to detect stalls in a timely fashion is not feasible.  Psi also
      doesn't aggregate its averages at a high-enough frequency right now.
      
      This patch series extends the psi interface such that users can
      configure sensitive latency thresholds and use poll() and friends to be
      notified when these are breached.
      
      As high-frequency aggregation is costly, it implements an aggregation
      method that is optimized for fast, short-interval averaging, and makes
      the aggregation frequency adaptive, such that high-frequency updates
      only happen while monitored stall events are actively occurring.
      
       With these patches applied, Android can monitor for, and ward off,
       mounting memory shortages before they cause problems for the user.  For
       example, using memory stall monitors in the userspace low memory killer
       daemon (lmkd) we can detect mounting pressure and kill less important
       processes before the device becomes visibly sluggish.  In our memory
       stress testing, psi memory monitors produce roughly 10x fewer false
       positives compared to vmpressure signals.  Having the ability to specify
       multiple triggers for the same psi metric allows other parts of the
       Android framework to monitor the memory state of the device and act
       accordingly.
      
       The new interface is straightforward.  The user opens one of the
       pressure files for writing and writes a trigger description into the
       file descriptor.  The trigger defines the stall state - some or full -
       and the maximum stall time over a given window of time.  E.g.:
      
               /* Signal when stall time exceeds 100ms of a 1s window */
               char trigger[] = "full 100000 1000000";
               struct pollfd fds;

               fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
               fds.events = POLLPRI;
               write(fds.fd, trigger, sizeof(trigger));
               while (poll(&fds, 1, -1) >= 0) {
                       ...
               }
               close(fds.fd);
      
      When the monitored stall state is entered, psi adapts its aggregation
      frequency according to what the configured time window requires in order
      to emit event signals in a timely fashion.  Once the stalling subsides,
      aggregation reverts back to normal.
      
      The trigger is associated with the open file descriptor.  To stop
      monitoring, the user only needs to close the file descriptor and the
      trigger is discarded.
      
      Patches 1-6 prepare the psi code for polling support.  Patch 7
      implements the adaptive polling logic, the pressure growth detection
      optimized for short intervals, and hooks up write() and poll() on the
      pressure files.
      
      The patches were developed in collaboration with Johannes Weiner.
      
      This patch (of 7):
      
      The psi monitoring patches will need to determine the same states as
      record_times().  To avoid calculating them twice, maintain a state mask
      that can be consulted cheaply.  Do this in a separate patch to keep the
      churn in the main feature patch at a minimum.
      
       This adds a 4-byte state_mask member to the psi_group_cpu struct, which
       results in its first cacheline-aligned part becoming 52 bytes long.  Add
       explicit values to the enumeration element counters that affect the
       psi_group_cpu struct size.
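
       A hedged sketch of the shape of the change (member names other than
       state_mask are paraphrased, not quoted from the patch):

               struct psi_group_cpu {
                       /* ... per-state task counts and time accounting
                        *     (first cacheline-aligned part) ... */

                       /* one bit per PSI state currently active on this CPU,
                        * so readers can test states without recomputing them */
                       u32 state_mask;
               };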
      
       Link: http://lkml.kernel.org/r/20190124211518.244221-4-surenb@google.com
       Signed-off-by: Suren Baghdasaryan <surenb@google.com>
       Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Dennis Zhou <dennis@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Li Zefan <lizefan@huawei.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <tj@kernel.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      33b2d630
    • mm: update references to page _refcount · 136ac591
      Baruch Siach authored
      Commit 0139aa7b ("mm: rename _count, field of the struct page, to
      _refcount") left out a couple of references to the old field name.  Fix
      that.
      
      Link: http://lkml.kernel.org/r/cedf87b02eb8a6b3eac57e8e91da53fb15c3c44c.1556537475.git.baruch@tkos.co.il
      Fixes: 0139aa7b ("mm: rename _count, field of the struct page, to _refcount")
       Signed-off-by: Baruch Siach <baruch@tkos.co.il>
       Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      136ac591
    • mm: change mm_update_next_owner() to update mm->owner with WRITE_ONCE · 987717e5
      Andrea Arcangeli authored
       The RCU reader uses rcu_dereference() inside rcu_read_lock critical
       sections, so the writer should use WRITE_ONCE().  This is just a
       cleanup; we still rely on gcc to emit atomic writes in other places.
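
       A minimal sketch of the pairing being described (the surrounding code in
       mm_update_next_owner() and its RCU readers is elided; 'c' stands for the
       chosen new owner task):

               /* writer side: a single, non-torn store visible to RCU readers */
               WRITE_ONCE(mm->owner, c);

               /* reader side, inside rcu_read_lock()/rcu_read_unlock() */
               struct task_struct *task = rcu_dereference(mm->owner);
               if (task)
                       /* ... use task ... */;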
      
       Link: http://lkml.kernel.org/r/20190325225636.11635-3-aarcange@redhat.com
       Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
       Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Jason Gunthorpe <jgg@mellanox.com>
      Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: zhong jiang <zhongjiang@huawei.com>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      987717e5
    • userfaultfd: use RCU to free the task struct when fork fails · c3f3ce04
      Andrea Arcangeli authored
      The task structure is freed while get_mem_cgroup_from_mm() holds
      rcu_read_lock() and dereferences mm->owner.
      
        get_mem_cgroup_from_mm()                failing fork()
        ----                                    ---
        task = mm->owner
                                                mm->owner = NULL;
                                                free(task)
        if (task) *task; /* use after free */
      
       The fix consists of freeing the task with RCU in the fork failure case
       as well, exactly as already happens for the regular exit(2) path.  That
       is enough to make the rcu_read_lock held in get_mem_cgroup_from_mm()
       (left side above) effective at avoiding a use-after-free when
       dereferencing the task structure.
       
       An alternate possible fix would be to defer the delivery of the
       userfaultfd contexts to the monitor until after fork() is guaranteed to
       succeed.  Such a change would require more changes because it would
       create a strict ordering dependency where the uffd methods would need to
       be called beyond the last potentially failing branch in order to be
       safe.  This solution, by contrast, only adds to common code the
       requirement to set mm->owner to NULL and to free the task struct that
       was pointed to by mm->owner with RCU, if fork ends up failing.  The
       userfaultfd methods can still be called anywhere during the fork runtime
       and the monitor will keep discarding orphaned "mm"s coming from failed
       forks in userland.
      
      This race condition couldn't trigger if CONFIG_MEMCG was set =n at build
      time.
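
       A hedged sketch of the shape of the fix in the fork-failure path (helper
       names here are illustrative; the actual patch may differ in detail):

               static void __delayed_free_task(struct rcu_head *rhp)
               {
                       struct task_struct *tsk = container_of(rhp, struct task_struct, rcu);

                       free_task(tsk);
               }

               static void delayed_free_task(struct task_struct *tsk)
               {
                       /* only the memcg mm->owner readers need the grace period */
                       if (IS_ENABLED(CONFIG_MEMCG))
                               call_rcu(&tsk->rcu, __delayed_free_task);
                       else
                               free_task(tsk);
               }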
      
      [aarcange@redhat.com: improve changelog, reduce #ifdefs per Michal]
        Link: http://lkml.kernel.org/r/20190429035752.4508-1-aarcange@redhat.com
      Link: http://lkml.kernel.org/r/20190325225636.11635-2-aarcange@redhat.com
      Fixes: 893e26e6 ("userfaultfd: non-cooperative: Add fork() event")
       Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
       Tested-by: zhong jiang <zhongjiang@huawei.com>
      Reported-by: syzbot+cbb52e396df3e565ab02@syzkaller.appspotmail.com
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Jason Gunthorpe <jgg@mellanox.com>
      Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: zhong jiang <zhongjiang@huawei.com>
      Cc: syzbot+cbb52e396df3e565ab02@syzkaller.appspotmail.com
      Cc: <stable@vger.kernel.org>
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c3f3ce04
    • kernel/Makefile: don't assume that kernel/gen_ikh_data.sh is executable · acb2ec3d
      Andrew Morton authored
      If the user downloads and applies patch-5.1.gz using patch(1), the x bit
      on kernel/gen_ikh_data.sh is not set.
      
        /bin/sh: 1: ./kernel/gen_ikh_data.sh: Permission denied
      
      Fix this by using CONFIG_SHELL.
       Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
       Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      acb2ec3d
  2. 14 May, 2019 25 commits