04 Sep, 2024 (40 commits)
    • mm: abstract vma_expand() to use vma_merge_struct · fc21959f
      Lorenzo Stoakes authored
      The purpose of the vmg is to thread merge state through functions and
      avoid egregious parameter lists.  We expand this to vma_expand(), which is
      used for a number of merge cases.
      
      Accordingly, adjust its callers, mmap_region() and relocate_vma_down(), to
      use a vmg.
      
      An added purpose of this change is the ability in a future commit to
      perform all new VMA range merging using vma_expand().
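
      To make the shape of this concrete, here is a minimal sketch of the
      interface change (a sketch under the series' naming, not the exact
      upstream diff):

      /* Before: merge state passed piecemeal to vma_expand(). */
      int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
                     unsigned long start, unsigned long end, pgoff_t pgoff,
                     struct vm_area_struct *next);

      /* After: callers such as mmap_region() populate a vmg and pass that. */
      int vma_expand(struct vma_merge_struct *vmg);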
      
      Link: https://lkml.kernel.org/r/4bc8c9dbc9ca52452ef8e587b28fe555854ceb38.1725040657.git.lorenzo.stoakes@oracle.com
      Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      fc21959f
    • mm: remove duplicated open-coded VMA policy check · 3e01310d
      Lorenzo Stoakes authored
      Both can_vma_merge_before() and can_vma_merge_after() are invoked after
      checking for compatible VMA NUMA policy, so we can simply move this check
      into is_mergeable_vma() and abstract it altogether.
      
      In mmap_region() we set vmg->policy to NULL, so the policy comparisons
      checked in can_vma_merge_before() and can_vma_merge_after() are exactly
      equivalent to !vma_policy(vmg.next) and !vma_policy(vmg.prev).
      
      Equally, in do_brk_flags(), vmg->policy is NULL, so the
      can_vma_merge_after() is checking !vma_policy(vma), as we set vmg.prev to
      vma.
      
      In vma_merge(), we compare prev and next policies with vmg->policy before
      checking can_vma_merge_after() and can_vma_merge_before() respectively,
      which this patch causes to be checked in precisely the same way.
      
      This therefore maintains precisely the same logic as before, only now
      abstracted into is_mergeable_vma().
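
      A sketch of the resulting helper (simplified; the real predicate also
      performs the existing flag, file and anon_vma compatibility checks):

      static bool is_mergeable_vma(struct vma_merge_struct *vmg, bool merge_next)
      {
              struct vm_area_struct *vma = merge_next ? vmg->next : vmg->prev;

              /* Previously open-coded by each caller before the predicate. */
              if (!mpol_equal(vmg->policy, vma_policy(vma)))
                      return false;

              /* ... remaining mergeability checks unchanged ... */
              return true;
      }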
      
      Link: https://lkml.kernel.org/r/0dbff286d9c4988333bc6f4ff3734cb95dd5410a.1725040657.git.lorenzo.stoakes@oracle.com
      Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      3e01310d
    • mm: introduce vma_merge_struct and abstract vma_merge(),vma_modify() · 2f1c6611
      Lorenzo Stoakes authored
      Rather than passing around huge numbers of parameters to numerous helper
      functions, abstract them into a single struct that we thread through the
      operation, the vma_merge_struct ('vmg').
      
      Adjust vma_merge() and vma_modify() to accept this parameter, as well as
      predicate functions can_vma_merge_before(), can_vma_merge_after(), and the
      vma_modify_...() helper functions.
      
      Also introduce VMG_STATE() and VMG_VMA_STATE() helper macros to allow for
      easy vmg declaration.
      
      We additionally remove the requirement that vma_merge() is passed a VMA
      object representing the candidate new VMA.  Previously it used this to
      obtain the mm_struct, file and anon_vma properties of the proposed range
      (a rather confusing state of affairs), which are now provided by the vmg
      directly.
      
      We also remove the pgoff calculation previously performed by vma_modify(),
      and instead calculate this in VMG_VMA_STATE() via the vma_pgoff_offset()
      helper.
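
      An abridged sketch of the struct and declaration macro (see mm/vma.h for
      the authoritative definitions; the field list here is not exhaustive):

      struct vma_merge_struct {
              struct mm_struct *mm;
              struct vma_iterator *vmi;
              pgoff_t pgoff;
              struct vm_area_struct *prev;
              struct vm_area_struct *vma;     /* vma being modified, if any */
              struct vm_area_struct *next;
              unsigned long start;
              unsigned long end;
              unsigned long flags;
              struct file *file;
              struct anon_vma *anon_vma;
              struct mempolicy *policy;
              /* ... uffd context, anon vma name, ... */
      };

      /* Declare a vmg for merging a new range into the address space. */
      #define VMG_STATE(name, mm_, vmi_, start_, end_, flags_, pgoff_)   \
              struct vma_merge_struct name = {                            \
                      .mm = mm_,                                          \
                      .vmi = vmi_,                                        \
                      .start = start_,                                    \
                      .end = end_,                                        \
                      .flags = flags_,                                    \
                      .pgoff = pgoff_,                                    \
              }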
      
      Link: https://lkml.kernel.org/r/a955aad09d81329f6fbeb636b2dd10cde7b73dab.1725040657.git.lorenzo.stoakes@oracle.com
      Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      2f1c6611
    • tools: add VMA merge tests · 955db396
      Lorenzo Stoakes authored
      Add a variety of VMA merge unit tests to assert that the behaviour of VMA
      merge is correct at an abstract level and VMAs are merged or not merged as
      expected.
      
      These are intentionally added _before_ we start refactoring vma_merge() in
      order that we can continually assert correctness throughout the rest of
      the series.
      
      In order to reduce churn going forward, we backport the vma_merge_struct
      data type to the test code which we introduce and use in a future commit,
      and add wrappers around the merge new and existing VMA cases.
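
      A sketch of the shape such a test takes (helper names follow the series'
      userland harness in tools/testing/vma/, but treat the details as
      illustrative):

      static bool test_merge_new_simple(void)
      {
              unsigned long flags = VM_READ | VM_WRITE | VM_MAYREAD | VM_MAYWRITE;
              struct mm_struct mm = {};
              VMA_ITERATOR(vmi, &mm, 0);
              struct vma_merge_struct vmg = {
                      .mm = &mm,
                      .vmi = &vmi,
                      .flags = flags,
              };
              struct vm_area_struct *vma;

              /* An existing anon VMA spanning [0x1000, 0x2000). */
              alloc_and_link_vma(&mm, 0x1000, 0x2000, 1, flags);

              /*
               * A compatible new range directly after it should merge,
               * expanding the existing VMA rather than creating another.
               */
              vmg.start = 0x2000;
              vmg.end = 0x3000;
              vmg.pgoff = 2;
              vma = merge_new(&vmg);
              ASSERT_NE(vma, NULL);
              ASSERT_EQ(vma->vm_start, 0x1000);
              ASSERT_EQ(vma->vm_end, 0x3000);

              return true;
      }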
      
      Link: https://lkml.kernel.org/r/1c7a0b43cfad2c511a6b1b52f3507696478ff51a.1725040657.git.lorenzo.stoakes@oracle.com
      Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      955db396
    • tools: improve vma test Makefile · 4e52a60a
      Lorenzo Stoakes authored
      Patch series "mm: remove vma_merge()", v3.
      
      The infamous vma_merge() function has been the cause of a great deal of
      pain, bugs and confusion for a very long time.
      
      It is subtle, contains many corner cases, tries to do far too much and is
      as a result very fragile.
      
      The fact that the function requires a numbering system to cover each
      possible eventuality, with the many branches of its implementation
      referring to those numbered cases to indicate which one you are looking
      at, speaks to all this.
      
      Some of this complexity is inherent - unfortunately there is no getting
      away from the need to figure out precisely how to execute the merge,
      whether we need to remove VMAs, whether it is safe to do so, what
      constitutes a mergeable VMA and so on.
      
      However, a lot of the complexity is not inherent but instead a product of
      the function's 'organic' development.
      
      Liam has gone to great lengths to improve the situation as a part of his
      maple tree implementation, greatly improving the readability of the code,
      and Vlastimil and myself have additionally gone to lengths to try to
      improve things further.
      
      However, with the availability of userland VMA testing, it now becomes
      possible to perform a rather more significant refactoring while
      maintaining confidence in its correct operation.
      
      An attempt was previously made by Vlastimil [0] to eliminate vma_merge(),
      however it was rather - brutal - and an astute reader might refer to the
      date of that patch for insight as to its intent.
      
      This series instead divides merge operations into two natural kinds -
      merges which occur when a NEW vma is being added to the address space, and
      merges which occur when a vma is being MODIFIED.
      
      Happily, the vma_expand() function introduced by Liam, which has the
      capacity for also deleting a subsequent VMA, covers each of the NEW vma
      cases.
      
      By abstracting the actual final commit of changes to a VMA into its own
      function, commit_merge(), and writing a wrapper around vma_expand() for
      the new VMA cases, vma_merge_new_range(), we can avoid having to use
      vma_merge() for these instances altogether.
      
      By doing so we are also able to then de-duplicate all existing merge logic
      in mmap_region() and do_brk_flags() and have everything invoke this new
      function, so we universally take the same approach to merging new VMAs.
      
      Having done so, we can then completely rework vma_merge() into
      vma_merge_existing_range() and use this for the instances where a merge is
      proposed for a region of an existing VMA.
      
      This eliminates vma_merge() and its numbered cases and instead divides
      things into logical cases - merge both, merge left, merge right (the
      latter 2 being either partial or full merges).
      
      The code is heavily annotated with ASCII diagrams and greatly simplified
      in comparison to the existing vma_merge() function.
      
      Having made this change, we take the opportunity to address an issue with
      merging VMAs possessing a vm_ops->close() hook - commit 714965ca
      ("mm/mmap: start distinguishing if vma can be removed in mergeability
      test") and commit fc0c8f90 ("mm, mmap: fix vma_merge() case 7 with
      vma_ops->close") make efforts to relax how we handle these, making
      assumptions about which VMAs might end up deleted (and thus, if possessing
      a vm_ops->close() hook, cannot be).
      
      This refactor means we do not need to guess, so instead explicitly only
      disallow merge in instances where a VMA with a vm_ops->close() hook would
      be deleted (and try a smaller merge in cases where this is possible).
      
      In addition to these changes, we introduce a new vma_merge_struct
      abstraction to allow VMA merge state to be threaded through the operation
      neatly.
      
      There is heavy unit testing provided for all merge functionality, added
      prior to the refactoring, allowing for before/after testing.
      
      The vm_ops->close() change also introduces exhaustive testing to
      demonstrate that this functions as expected, and in addition to this the
      reproduction code from commit fc0c8f90 ("mm, mmap: fix vma_merge()
      case 7 with vma_ops->close") was tested and confirmed passing.
      
      [0]: https://lore.kernel.org/linux-mm/20240401192623.18575-2-vbabka@suse.cz/
      
      
      This patch (of 10):
      
      Have vma.o depend on its source dependencies explicitly, as previously
      these were simply being ignored as existing object files were up to date.
      
      This now correctly re-triggers the build if mm/ source is changed as well
      as local source code.
      
      Also set clean as a phony rule.
      
      Link: https://lkml.kernel.org/r/cover.1725040657.git.lorenzo.stoakes@oracle.com
      Link: https://lkml.kernel.org/r/e3ea58f08364ae5432c9a074de0195a7c7e0b04a.1725040657.git.lorenzo.stoakes@oracle.com
      Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      4e52a60a
    • mm/vma.h: optimise vma_munmap_struct · 723e1e8b
      Liam R. Howlett authored
      The vma_munmap_struct has a 4-byte hole which pushes the struct to three
      cachelines.  Relocating the three booleans upwards allows the struct to
      use only two cachelines (as reported by pahole on amd64).
      
      Before:
      struct vma_munmap_struct {
              struct vma_iterator *      vmi;                  /*     0     8 */
              struct vm_area_struct *    vma;                  /*     8     8 */
              struct vm_area_struct *    prev;                 /*    16     8 */
              struct vm_area_struct *    next;                 /*    24     8 */
              struct list_head *         uf;                   /*    32     8 */
              long unsigned int          start;                /*    40     8 */
              long unsigned int          end;                  /*    48     8 */
              long unsigned int          unmap_start;          /*    56     8 */
              /* --- cacheline 1 boundary (64 bytes) --- */
              long unsigned int          unmap_end;            /*    64     8 */
              int                        vma_count;            /*    72     4 */
      
              /* XXX 4 bytes hole, try to pack */
      
              long unsigned int          nr_pages;             /*    80     8 */
              long unsigned int          locked_vm;            /*    88     8 */
              long unsigned int          nr_accounted;         /*    96     8 */
              long unsigned int          exec_vm;              /*   104     8 */
              long unsigned int          stack_vm;             /*   112     8 */
              long unsigned int          data_vm;              /*   120     8 */
              /* --- cacheline 2 boundary (128 bytes) --- */
              bool                       unlock;               /*   128     1 */
              bool                       clear_ptes;           /*   129     1 */
              bool                       closed_vm_ops;        /*   130     1 */
      
              /* size: 136, cachelines: 3, members: 19 */
              /* sum members: 127, holes: 1, sum holes: 4 */
              /* padding: 5 */
              /* last cacheline: 8 bytes */
      };
      
      After:
      struct vma_munmap_struct {
              struct vma_iterator *      vmi;                  /*     0     8 */
              struct vm_area_struct *    vma;                  /*     8     8 */
              struct vm_area_struct *    prev;                 /*    16     8 */
              struct vm_area_struct *    next;                 /*    24     8 */
              struct list_head *         uf;                   /*    32     8 */
              long unsigned int          start;                /*    40     8 */
              long unsigned int          end;                  /*    48     8 */
              long unsigned int          unmap_start;          /*    56     8 */
              /* --- cacheline 1 boundary (64 bytes) --- */
              long unsigned int          unmap_end;            /*    64     8 */
              int                        vma_count;            /*    72     4 */
              bool                       unlock;               /*    76     1 */
              bool                       clear_ptes;           /*    77     1 */
              bool                       closed_vm_ops;        /*    78     1 */
      
              /* XXX 1 byte hole, try to pack */
      
              long unsigned int          nr_pages;             /*    80     8 */
              long unsigned int          locked_vm;            /*    88     8 */
              long unsigned int          nr_accounted;         /*    96     8 */
              long unsigned int          exec_vm;              /*   104     8 */
              long unsigned int          stack_vm;             /*   112     8 */
              long unsigned int          data_vm;              /*   120     8 */
      
              /* size: 128, cachelines: 2, members: 19 */
              /* sum members: 127, holes: 1, sum holes: 1 */
      };
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-22-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      723e1e8b
    • mm/vma: drop incorrect comment from vms_gather_munmap_vmas() · 20831cd6
      Liam R. Howlett authored
      The comment has been outdated since 6b73cff2 ("mm: change munmap
      splitting order and move_vma()").  move_vma() has since been altered to
      fix the fragile state of the accounting.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-21-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      20831cd6
    • mm: move may_expand_vm() check in mmap_region() · 224c1c70
      Liam R. Howlett authored
      The may_expand_vm() check requires the count of the pages within the
      munmap range.  Since this is needed for accounting and obtained later,
      reordering may_expand_vm() to later in the call stack, after the vma
      munmap struct (vms) is initialised and the gather stage is potentially
      run, will allow for a single loop over the vmas.  The gather stage does
      not commit any work and so everything can be undone in the case of a
      failure.
      
      The MAP_FIXED page count is available after the vms_gather_munmap_vmas()
      call, so use it instead of looping over the vmas twice.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-20-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      224c1c70
    • ipc/shm, mm: drop do_vma_munmap() · 63fc66f5
      Liam R. Howlett authored
      The do_vma_munmap() wrapper existed for callers that didn't have a vma
      iterator and needed to check the vma mseal status prior to calling the
      underlying munmap().  All callers now use a vma iterator and since the
      mseal check has been moved to do_vmi_align_munmap() and the vmas are
      aligned, this function can just be called instead.
      
      do_vmi_align_munmap() can no longer be static as ipc/shm is using it and
      it is exported via the mm.h header.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-19-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      63fc66f5
    • mm/mmap: use vms accounted pages in mmap_region() · 13d77e01
      Liam R. Howlett authored
      Change from nr_pages variable to vms.nr_accounted for the charged pages
      calculation.  This is necessary for a future patch.
      
      This also avoids checking security_vm_enough_memory_mm() if the amount of
      memory won't change.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-18-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Kees Cook <kees@kernel.org>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Reviewed-by: Suren Baghdasaryan <surenb@google.com>
      Acked-by: Paul Moore <paul@paul-moore.com>	[LSM]
      Cc: Kees Cook <kees@kernel.org>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      13d77e01
    • mm/mmap: use PHYS_PFN in mmap_region() · 5972d97c
      Liam R. Howlett authored
      Instead of shifting the length by PAGE_SHIFT, use PHYS_PFN.  Also use the
      existing local variable everywhere instead of only some of the time.
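
      For reference, PHYS_PFN() is defined in include/linux/pfn.h as a
      PAGE_SHIFT right-shift, so the conversion is purely cosmetic:

      #define PHYS_PFN(x)     ((unsigned long)((x) >> PAGE_SHIFT))

      /* e.g. in mmap_region(), something along the lines of: */
      pglen = PHYS_PFN(len);          /* rather than: len >> PAGE_SHIFT */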
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-17-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Reviewed-by: Suren Baghdasaryan <surenb@google.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      5972d97c
    • mm: change failure of MAP_FIXED to restoring the gap on failure · 4f87153e
      Liam R. Howlett authored
      Prior to call_mmap(), the vmas that will be replaced need to clear the way
      for what may happen in the call_mmap().  This clean up work includes
      clearing the ptes and calling the close() vm_ops.  Some users do more
      setup than can be restored by calling the vm_ops open() function.  It is
      safer to store the gap in the vma tree in these cases.
      
      That is to say that the failure scenario that existed before the MAP_FIXED
      gap exposure is restored as it is safer than trying to undo a partial
      mapping.
      
      Since abort_munmap_vmas() is only reattaching vmas with this change, the
      function is renamed to reattach_vmas().
      
      There is also a secondary failure that may occur if there is not enough
      memory to store the gap.  In this case, the vmas are reattached and
      resources freed.  If the system cannot complete the call_mmap() and fails
      to allocate with GFP_KERNEL, then the system will print a warning about
      the failure.
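
      A condensed sketch of the resulting abort path (simplified from
      vms_abort_munmap_vmas() in the series; exact details may differ):

      static void vms_abort_munmap_vmas(struct vma_munmap_struct *vms,
                                        struct ma_state *mas_detach)
      {
              struct ma_state *mas = &vms->vmi->mas;

              if (!vms->nr_pages)
                      return;

              /* ptes not yet cleared: nothing destructive happened, put them back. */
              if (vms->clear_ptes)
                      return reattach_vmas(mas_detach);

              /*
               * ptes were cleared and/or close() was called; undoing the
               * partial mapping is unsafe, so store a gap over the range.
               */
              mas_set_range(mas, vms->start, vms->end - 1);
              if (mas_store_gfp(mas, NULL, GFP_KERNEL)) {
                      /* Secondary failure: no memory to store the gap. */
                      pr_warn_once("%s: (%d) Unable to abort munmap() operation\n",
                                   current->comm, current->pid);
                      reattach_vmas(mas_detach);
                      return;
              }

              /* Complete the removal of the gathered vmas. */
              vms_complete_munmap_vmas(vms, mas_detach);
      }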
      
      [lorenzo.stoakes@oracle.com: fix off-by-one error in vms_abort_munmap_vmas()]
        Link: https://lkml.kernel.org/r/52ee7eb3-955c-4ade-b5f0-28fed8ba3d0b@lucifer.local
      Link: https://lkml.kernel.org/r/20240830040101.822209-16-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      4f87153e
    • mm/mmap: avoid zeroing vma tree in mmap_region() · f8d112a4
      Liam R. Howlett authored
      Instead of zeroing the vma tree and then overwriting the area, let the
      area be overwritten and then clean up the gathered vmas using
      vms_complete_munmap_vmas().
      
      To ensure locking is downgraded correctly, the mm is set regardless of
      MAP_FIXED or not (NULL vma).
      
      If a driver is mapping over an existing vma, then clear the ptes before
      the call_mmap() invocation.  This is done using the vms_clean_up_area()
      helper.  If there is a close vm_ops, that must also be called to ensure
      any cleanup is done before mapping over the area.  This also means that
      calling open has been added to the abort of an unmap operation, for now.
      
      Since vm_ops->open() and vm_ops->close() do not always undo each other
      (state cleanup may exist in ->close() that is lost forever), the code
      cannot be left in this way, but that change has been isolated to another
      commit to make this point very obvious for traceability.
      
      Temporarily keep track of the number of pages that will be removed and
      reduce the charged amount.
      
      This also drops the validate_mm() call in the vma_expand() function.  It
      is necessary to drop the validate as it would fail since the mm map_count
      would be incorrect during a vma expansion, prior to the cleanup from
      vms_complete_munmap_vmas().
      
      Clean up the error handling of vms_gather_munmap_vmas() by calling the
      verification within the function.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-15-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      f8d112a4
    • mm: clean up unmap_region() argument list · 94f59ea5
      Liam R. Howlett authored
      With the only caller to unmap_region() being the error path of
      mmap_region(), the argument list can be significantly reduced.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-14-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      94f59ea5
    • mm/vma: track start and end for munmap in vma_munmap_struct · 9c3ebeda
      Liam R. Howlett authored
      Set the start and end address for munmap when the prev and next are
      gathered.  This is needed to avoid incorrect addresses being used during
      the vms_complete_munmap_vmas() function if the prev/next vma are expanded.
      
      Add a new helper vms_complete_pte_clear(), which is needed later and will
      avoid growing the argument list to unmap_region() beyond the 9 it already
      has.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-13-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      9c3ebeda
    • mm/mmap: reposition vma iterator in mmap_region() · d744f4ac
      Liam R. Howlett authored
      Instead of moving (or leaving) the vma iterator pointing at the previous
      vma, leave it pointing at the insert location.  Pointing the vma iterator
      at the insert location allows for a cleaner walk of the vma tree for
      MAP_FIXED and the no expansion cases.
      
      The vma_prev() call in the case of merging the previous vma is equivalent
      to vma_iter_prev_range(), since the vma iterator will be pointing to the
      location just before the previous vma.
      
      This change needs to export abort_munmap_vmas() from mm/vma.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-12-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      d744f4ac
    • mm/vma: support vma == NULL in init_vma_munmap() · 58e60f82
      Liam R. Howlett authored
      Adding support for a NULL vma means init_vma_munmap() can always be used
      to initialise the struct, making for a less error-prone process when
      calling vms_complete_munmap_vmas() later on.
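
      A sketch of what that looks like (condensed; assumes the field layout
      introduced earlier in this series):

      static void init_vma_munmap(struct vma_munmap_struct *vms,
                      struct vma_iterator *vmi, struct vm_area_struct *vma,
                      unsigned long start, unsigned long end,
                      struct list_head *uf, bool unlock)
      {
              vms->vmi = vmi;
              vms->vma = vma;
              if (vma) {
                      vms->start = start;
                      vms->end = end;
              } else {
                      /* No vma to unmap: a zeroed range marks a no-op. */
                      vms->start = vms->end = 0;
              }
              vms->unlock = unlock;
              vms->uf = uf;
              vms->vma_count = 0;
              vms->nr_pages = vms->locked_vm = vms->nr_accounted = 0;
              vms->exec_vm = vms->stack_vm = vms->data_vm = 0;
              vms->unmap_start = FIRST_USER_ADDRESS;
              vms->unmap_end = USER_PGTABLES_CEILING;
              vms->clear_ptes = false;
      }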
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-11-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      58e60f82
    • mm/vma: expand mmap_region() munmap call · 9014b230
      Liam R. Howlett authored
      Open code the do_vmi_align_munmap() call so that it can be broken up later
      in the series.
      
      This requires exposing a few more vma operations.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-10-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      9014b230
    • mm/vma: inline munmap operation in mmap_region() · c7c0c3c3
      Liam R. Howlett authored
      mmap_region is already passed sanitized addr and len, so change the call
      to do_vmi_munmap() to do_vmi_align_munmap() and inline the other checks.
      
      The inlining of the function and checks is an intermediate step in the
      series so future patches are easier to follow.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-9-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      c7c0c3c3
    • mm/vma: extract validate_mm() from vma_complete() · 89b2d2a5
      Liam R. Howlett authored
      vma_complete() will need to be called at a time when it is unsafe to call
      validate_mm().  Extract the call in all places now so that only one
      location needs to be modified in the next change.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-8-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      89b2d2a5
    • mm/vma: change munmap to use vma_munmap_struct() for accounting and surrounding vmas · 17f1ae9b
      Liam R. Howlett authored
      Clean up the code by changing the munmap operation to use a structure for
      the accounting and munmap variables.
      
      Since remove_mt() is only called in one location and its contents will be
      reduced to almost nothing, the remains of the function can be folded into
      vms_complete_munmap_vmas().
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-7-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Reviewed-by: Suren Baghdasaryan <surenb@google.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      17f1ae9b
    • mm/vma: introduce vma_munmap_struct for use in munmap operations · dba14840
      Liam R. Howlett authored
      Use a structure to pass along all the necessary information and counters
      involved in removing vmas from the mm_struct.
      
      Update vmi_ function names to vms_ to indicate the first argument type
      change.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-6-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Reviewed-by: Suren Baghdasaryan <surenb@google.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      dba14840
    • mm/vma: extract the gathering of vmas from do_vmi_align_munmap() · 6898c903
      Liam R. Howlett authored
      Create vmi_gather_munmap_vmas() to handle the gathering of vmas into a
      detached maple tree for removal later.  Part of the gathering is the
      splitting of vmas that span the boundary.
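
      A heavily condensed outline of the gathering step (error handling and
      special cases trimmed; illustrative only):

      static int vmi_gather_munmap_vmas(struct vma_iterator *vmi,
                      struct vm_area_struct *vma, unsigned long start,
                      unsigned long end, struct ma_state *mas_detach,
                      int *vma_count)
      {
              struct vm_area_struct *next = vma;
              int error;

              /* A vma straddling the start of the range is split first. */
              if (vma->vm_start < start) {
                      error = __split_vma(vmi, vma, start, 1);
                      if (error)
                              return error;
              }

              /* Each vma in the range moves into the detached tree... */
              for_each_vma_range(*vmi, next, end) {
                      /* ...splitting any vma straddling the end as well. */
                      if (next->vm_end > end) {
                              error = __split_vma(vmi, next, end, 0);
                              if (error)
                                      return error;
                      }
                      vma_start_write(next);
                      mas_set(mas_detach, (*vma_count)++);
                      error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
                      if (error)
                              return error;
              }
              return 0;
      }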
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-5-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      6898c903
    • mm/vma: introduce vmi_complete_munmap_vmas() · 01cf21e9
      Liam R. Howlett authored
      Extract all necessary operations that need to be completed after the vma
      maple tree is updated from a munmap() operation.  Extracting this makes
      the later patch in the series easier to understand.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-4-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Reviewed-by: Suren Baghdasaryan <surenb@google.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      01cf21e9
    • mm/vma: introduce abort_munmap_vmas() · 7e7b2370
      Liam R. Howlett authored
      Extract clean up of failed munmap() operations from do_vmi_align_munmap().
      This simplifies later patches in the series.
      
      It is worth noting that the mas_for_each() loop now has a different upper
      limit.  This should not change the number of vmas visited for reattaching
      to the main vma tree (mm_mt), as all vmas are reattached in both
      scenarios.
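
      The extracted helper is small; roughly (as introduced here, modulo
      detail):

      static inline void abort_munmap_vmas(struct ma_state *mas_detach)
      {
              struct vm_area_struct *vma;

              /* Re-attach every vma gathered in the detached tree... */
              mas_set(mas_detach, 0);
              mas_for_each(mas_detach, vma, ULONG_MAX)
                      vma_mark_detached(vma, false);

              /* ...then dispose of the side tree itself. */
              __mt_destroy(mas_detach->tree);
      }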
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-3-Liam.Howlett@oracle.com
      Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
      Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Reviewed-by: Suren Baghdasaryan <surenb@google.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      7e7b2370
    • mm/vma: correctly position vma_iterator in __split_vma() · b7012d51
      Liam R. Howlett authored
      Patch series "Avoid MAP_FIXED gap exposure", v8.
      
      It is now possible to walk the vma tree using the rcu read locks, and it
      is beneficial to do so to reduce lock contention.  Doing so while a MAP_FIXED
      mapping is executing means that a reader may see a gap in the vma tree
      that should never logically exist - and does not when using the mmap lock
      in read mode.  The temporal gap exists because mmap_region() calls
      munmap() prior to installing the new mapping.
      
      This patch set stops rcu readers from seeing the temporal gap by splitting
      up the munmap() function into two parts.  The first part prepares the vma
      tree for modifications by doing the necessary splits and tracks the vmas
      marked for removal in a side tree.  The second part completes the
      munmapping of the vmas after the vma tree has been overwritten (either by
      a MAP_FIXED replacement vma or by a NULL in the munmap() case).
      
      Please note that rcu walkers will still be able to see a temporary state
      of split vmas that may be in the process of being removed, but the
      temporal gap will not be exposed.  vma_start_write() is called on both
      parts of the split vma, so this state is detectable.
      
      If existing vmas have a vm_ops->close(), then they will be called prior to
      mapping the new vmas (and ptes are cleared out).  Without calling
      ->close(), hugetlbfs tests fail (hugemmap06 specifically) due to resources
      still being marked as 'busy'.  Unfortunately, calling the corresponding
      ->open() may not restore the state of the vmas, so it is safer to keep the
      existing failure scenario where a gap is inserted and never replaced.  The
      failure scenario is in its own patch (0015) for traceability.
      
      
      This patch (of 21):
      
      The vma iterator may be left pointing to the newly created vma.  This
      happens when inserting the new vma at the end of the old vma (!new_below).
      
      The incorrect position in the vma iterator is not exposed currently since
      the vma iterator is repositioned in the munmap path and is not reused in
      any of the other paths.
      
      This has limited impact in the current code, but is required for future
      changes.
      
      Link: https://lkml.kernel.org/r/20240830040101.822209-2-Liam.Howlett@oracle.com
      Fixes: b2b3b886 ("mm: don't use __vma_adjust() in __split_vma()")
      Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
      Reviewed-by: Suren Baghdasaryan <surenb@google.com>
      Reviewed-by: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Bert Karwatzki <spasswolf@web.de>
      Cc: Jeff Xu <jeffxu@chromium.org>
      Cc: Jiri Olsa <olsajiri@gmail.com>
      Cc: Kees Cook <kees@kernel.org>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: "Paul E. McKenney" <paulmck@kernel.org>
      Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
      Cc: Mark Brown <broonie@kernel.org>
      Cc: Paul Moore <paul@paul-moore.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      b7012d51
    • Documentation/cgroup-v2: clarify that zswap.writeback is ignored if zswap is disabled · 5a53623d
      Mike Yuan authored
      As discussed in [1], zswap-related settings naturally lose their effect
      when zswap is disabled, specifically zswap.writeback here.  Be explicit
      about this behavior.
      
      [1] https://lore.kernel.org/linux-kernel/CAKEwX=Mhbwhh-=xxCU-RjMXS_n=RpV3Gtznb2m_3JgL+jzz++g@mail.gmail.com/
      
      [akpm@linux-foundation.org: fix/simplify text]
      Link: https://lkml.kernel.org/r/20240823162506.12117-3-me@yhndnzj.com
      Signed-off-by: Mike Yuan <me@yhndnzj.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Michal Koutný <mkoutny@suse.com>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Nhat Pham <nphamcs@gmail.com>
      Cc: Roman Gushchin <roman.gushchin@linux.dev>
      Cc: Shakeel Butt <shakeel.butt@linux.dev>
      Cc: Yosry Ahmed <yosryahmed@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      5a53623d
    • selftests: test_zswap: add test for hierarchical zswap.writeback · fd06ce2c
      Mike Yuan authored
      Ensure that the zswap.writeback check goes up the cgroup tree, i.e.  is
      hierarchical.  Create a subcgroup which has zswap.writeback set to 1 and
      verify that the upper hierarchy's restrictions still apply.
      
      Link: https://lkml.kernel.org/r/20240823162506.12117-2-me@yhndnzj.com
      Signed-off-by: Mike Yuan <me@yhndnzj.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Michal Koutný <mkoutny@suse.com>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Nhat Pham <nphamcs@gmail.com>
      Cc: Roman Gushchin <roman.gushchin@linux.dev>
      Cc: Shakeel Butt <shakeel.butt@linux.dev>
      Cc: Yosry Ahmed <yosryahmed@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      fd06ce2c
    • mm: remove code to handle same filled pages · 20a5532f
      Usama Arif authored
      With an earlier commit to handle zero-filled pages in swap directly, and
      with only 1% of the same-filled pages being non-zero, zswap no longer
      needs to handle same-filled pages and can just work on compressed pages.
      
      Link: https://lkml.kernel.org/r/20240823190545.979059-3-usamaarif642@gmail.com
      Signed-off-by: Usama Arif <usamaarif642@gmail.com>
      Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
      Acked-by: Yosry Ahmed <yosryahmed@google.com>
      Reviewed-by: Nhat Pham <nphamcs@gmail.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Shakeel Butt <shakeel.butt@linux.dev>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      20a5532f
    • mm: store zero pages to be swapped out in a bitmap · 0ca0c24e
      Usama Arif authored
      Patch series "mm: store zero pages to be swapped out in a bitmap", v8.
      
      As shown in the patch series that introduced the zswap same-filled
      optimization [1], 10-20% of the pages stored in zswap are same-filled. 
      This is also observed across Meta's server fleet.  By using VM counters in
      swap_writepage (not included in this patch series) it was found that less
      than 1% of the same-filled pages to be swapped out are non-zero pages.
      
      For conventional swap setup (without zswap), rather than reading/writing
      these pages to flash resulting in increased I/O and flash wear, a bitmap
      can be used to mark these pages as zero at write time, and the pages can
      be filled at read time if the bit corresponding to the page is set.
      
      When using zswap with swap, this also means that a zswap_entry does not
      need to be allocated for zero filled pages resulting in memory savings
      which would offset the memory used for the bitmap.
      
      A similar attempt was made earlier in [2] where zswap would only track
      zero-filled pages instead of same-filled.  This patch series adds
      zero-filled pages optimization to swap (hence it can be used even if zswap
      is disabled) and removes the same-filled code from zswap (as only 1% of
      the same-filled pages are non-zero), simplifying code.
      
      [1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
      [2] https://lore.kernel.org/lkml/20240325235018.2028408-1-yosryahmed@google.com/
      
      
      This patch (of 2):
      
      Approximately 10-20% of pages to be swapped out are zero pages [1].
      Rather than reading/writing these pages to flash resulting
      in increased I/O and flash wear, a bitmap can be used to mark these
      pages as zero at write time, and the pages can be filled at
      read time if the bit corresponding to the page is set.
      With this patch, NVMe writes in Meta server fleet decreased
      by almost 10% with conventional swap setup (zswap disabled).
      
      [1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
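
      A conceptual sketch of the mechanism (function names follow the series,
      but treat the details as illustrative rather than the exact upstream
      code):

      /* Swap-out (in the swap_writepage() path): a zero-filled folio only
       * sets bits in the swap device's zeromap bitmap; no bio is submitted. */
      if (is_folio_zero_filled(folio)) {
              swap_zeromap_folio_set(folio);
              folio_unlock(folio);
              return 0;
      }
      /* Otherwise clear any stale bits left by a previous user of the slot. */
      swap_zeromap_folio_clear(folio);

      /* Swap-in (in the swap_read_folio() path): a set bit means the data
       * is all zeroes, so fill the folio instead of reading the device. */
      if (swap_zeromap_folio_test(folio)) {
              folio_zero_range(folio, 0, folio_size(folio));
              folio_mark_uptodate(folio);
              folio_unlock(folio);
              return;
      }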
      
      Link: https://lkml.kernel.org/r/20240823190545.979059-1-usamaarif642@gmail.com
      Link: https://lkml.kernel.org/r/20240823190545.979059-2-usamaarif642@gmail.com
      Signed-off-by: Usama Arif <usamaarif642@gmail.com>
      Reviewed-by: Chengming Zhou <chengming.zhou@linux.dev>
      Reviewed-by: Yosry Ahmed <yosryahmed@google.com>
      Reviewed-by: Nhat Pham <nphamcs@gmail.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Shakeel Butt <shakeel.butt@linux.dev>
      Cc: Usama Arif <usamaarif642@gmail.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      0ca0c24e
    • mm:page_alloc: fix the NULL ac->nodemask in __alloc_pages_slowpath() · 435b3894
      Zhongkun He authored
      should_reclaim_retry() is not ALLOC_CPUSET aware, which means that it
      considers the reclaimability of NUMA nodes which are outside of the
      cpuset.  If other nodes have a lot of reclaimable memory then
      should_reclaim_retry() would instruct the page allocator to retry even
      though there is no memory reclaimable on the cpuset nodemask.  This is
      not really a huge problem because the number of retries without any
      reclaim progress is bounded, but it could certainly be improved.  This is
      a cold path so this shouldn't really have a measurable impact on
      performance on most workloads.
      
      1. Test steps and the machine.
      ------------
      root@vm:/sys/fs/cgroup/test# numactl -H | grep size
      node 0 size: 9477 MB
      node 1 size: 10079 MB
      node 2 size: 10079 MB
      node 3 size: 10078 MB
      
      root@vm:/sys/fs/cgroup/test# cat cpuset.mems
          2
      
      root@vm:/sys/fs/cgroup/test# stress --vm 1 --vm-bytes 12g  --vm-keep
      stress: info: [33430] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
      stress: FAIL: [33430] (425) <-- worker 33431 got signal 9
      stress: WARN: [33430] (427) now reaping child worker processes
      stress: FAIL: [33430] (461) failed run completed in 2s
      
      2. reclaim_retry_zone info:
      
      We can only allocate pages from node=2, but reclaim_retry_zone checks
      node=0 and returns true.
      
      root@vm:/sys/kernel/debug/tracing# cat trace
      stress-33431   [001] ..... 13223.617311: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=1 wmark_check=1
      stress-33431   [001] ..... 13223.617682: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=2 wmark_check=1
      stress-33431   [001] ..... 13223.618103: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=3 wmark_check=1
      stress-33431   [001] ..... 13223.618454: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=4 wmark_check=1
      stress-33431   [001] ..... 13223.618770: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=5 wmark_check=1
      stress-33431   [001] ..... 13223.619150: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=6 wmark_check=1
      stress-33431   [001] ..... 13223.619510: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=7 wmark_check=1
      stress-33431   [001] ..... 13223.619850: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=8 wmark_check=1
      stress-33431   [001] ..... 13223.620171: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=9 wmark_check=1
      stress-33431   [001] ..... 13223.620533: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=10 wmark_check=1
      stress-33431   [001] ..... 13223.620894: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=11 wmark_check=1
      stress-33431   [001] ..... 13223.621224: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=12 wmark_check=1
      stress-33431   [001] ..... 13223.621551: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=13 wmark_check=1
      stress-33431   [001] ..... 13223.621847: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=14 wmark_check=1
      stress-33431   [001] ..... 13223.622200: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=15 wmark_check=1
      stress-33431   [001] ..... 13223.622580: reclaim_retry_zone: node=0 zone=Normal   order=0 reclaimable=4260 available=1772019 min_wmark=5962 no_progress_loops=16 wmark_check=1
      
      With this patch, the correct node is checked, and
      __alloc_pages_slowpath() performs fewer retries because there is
      nothing left to reclaim on the allowed nodes.
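      
      A hedged sketch of the idea (not the literal patch): when
      ac->nodemask is NULL, restrict the retry check to the cpuset's
      allowed nodes so that reclaimable memory on forbidden nodes is not
      counted:
      
        /*
         * Before deciding whether to retry, limit zonelist iteration to
         * the nodes the task may actually allocate from.
         */
        if (cpusets_enabled() && (alloc_flags & ALLOC_CPUSET) && !ac->nodemask)
                ac->nodemask = &cpuset_current_mems_allowed;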
      
      Link: https://lkml.kernel.org/r/20240822092612.3209286-1-hezhongkun.hzk@bytedance.com
      Signed-off-by: Zhongkun He <hezhongkun.hzk@bytedance.com>
      Suggested-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Zefan Li <lizefan.x@bytedance.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      435b3894
    • Johannes Weiner
      mm: swapfile: fix SSD detection with swapfile on btrfs · b843786b
      Johannes Weiner authored
      We've been noticing a trend of significant lock contention in the swap
      subsystem as core counts have been increasing in our fleet.  It turns out
      that our swapfiles on btrfs on flash were in fact using the old swap code
      for rotational storage.
      
      The culprit is a detection issue in the swapon sequence: btrfs sets
      si->bdev during swap activation, which currently happens *after*
      swapon's SSD detection and cluster setup.  Thus, neither the SSD
      optimizations nor the cluster lock splitting are enabled for btrfs
      swap.
      
      Rearrange the swapon sequence so that filesystem activation happens
      *before* determining swap behavior based on the backing device.
      
      Afterwards, the nonrotational drive is detected correctly:
      
      - Adding 2097148k swap on /mnt/swapfile.  Priority:-3 extents:1 across:2097148k
      + Adding 2097148k swap on /mnt/swapfile.  Priority:-3 extents:1 across:2097148k SS
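      
      Conceptually (a sketch of the reordering, not the literal diff;
      assume swap_activate() stands for the filesystem activation step and
      bdev_nonrot() for the rotational check), swapon now does:
      
        /* Activate the swap file first: btrfs sets si->bdev here. */
        error = swap_activate(si, swap_file, &span);
        if (error)
                goto bad_swap;
      
        /* Only now is the backing device known, so SSD detection and
         * per-CPU cluster setup see the right device. */
        if (si->bdev && bdev_nonrot(si->bdev)) {
                si->flags |= SWP_SOLIDSTATE;
                /* ... cluster setup ... */
        }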
      
      Link: https://lkml.kernel.org/r/20240822112707.351844-1-hannes@cmpxchg.org
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: "Huang, Ying" <ying.huang@intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      b843786b
    • Yuesong Li
      mm:page-writeback: use folio_next_index() helper in writeback_iter() · 0692fad5
      Yuesong Li authored
      Simplify the 'folio->index + folio_nr_pages(folio)' code pattern by
      using the existing helper folio_next_index().
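      
      The helper simply wraps the open-coded computation (a sketch of the
      equivalence the conversion relies on):
      
        /* before */
        next = folio->index + folio_nr_pages(folio);
        /* after: same value, clearer intent */
        next = folio_next_index(folio);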
      
      Link: https://lkml.kernel.org/r/20240821063112.4053157-1-liyuesong@vivo.com
      Signed-off-by: Yuesong Li <liyuesong@vivo.com>
      Cc: Matthew Wilcox <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      0692fad5
    • David Hildenbrand
      selftests/mm: fix charge_reserved_hugetlb.sh test · c41a701d
      David Hildenbrand authored
      Currently, when running the charge_reserved_hugetlb.sh selftest, we
      can sometimes observe something like:
      
        $ ./charge_reserved_hugetlb.sh -cgroup-v2
        ...
        write_result is 0
        After write:
        hugetlb_usage=0
        reserved_usage=10485760
        killing write_to_hugetlbfs
        Received 2.
        Deleting the memory
        Detach failure: Invalid argument
        umount: /mnt/huge: target is busy.
      
      Both messages indicate problems in the test itself.
      
      While the unmount error seems to be racy, it will make the test fail:
      	$ ./run_vmtests.sh -t hugetlb
      	...
      	# [FAIL]
      	not ok 10 charge_reserved_hugetlb.sh -cgroup-v2 # exit=32
      
      The issue is that we are not waiting for the write_to_hugetlbfs
      process to quit, so it might still have a hugetlbfs file open, which
      makes umount fail.  Fix that by making "killall" wait for the process
      to quit.
      
      The other error ("Detach failure: Invalid argument") does not seem to
      cause a test failure, but it is misleading.  It turns out that
      write_to_hugetlbfs.c unconditionally tries to clean up using shmdt(),
      even when we only mmap()'ed a hugetlb file.  Even worse, shmaddr is
      never even set for the SHM case.  Fix that as well.
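      
      A hedged sketch of the cleanup fix in write_to_hugetlbfs.c (variable
      names are illustrative): record the shmat() address and only detach
      what was actually attached:
      
        #include <err.h>
        #include <sys/shm.h>
      
        static void *shmaddr;   /* NULL unless we attached via shmat() */
      
        /* SHM path only: remember the attach address for cleanup. */
        shmaddr = shmat(shmid, NULL, 0);
        if (shmaddr == (void *)-1)
                err(1, "shmat");
      
        /* Cleanup: a plain mmap()'ed hugetlb file must not be shmdt()'ed. */
        if (shmaddr)
                shmdt(shmaddr);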
      
      With this change it seems to work as expected.
      
      Link: https://lkml.kernel.org/r/20240821123115.2068812-1-david@redhat.com
      Fixes: 29750f71 ("hugetlb_cgroup: add hugetlb_cgroup reservation tests")
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reported-by: Mario Casquero <mcasquer@redhat.com>
      Reviewed-by: Mina Almasry <almasrymina@google.com>
      Tested-by: Mario Casquero <mcasquer@redhat.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Muchun Song <muchun.song@linux.dev>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      c41a701d
    • Matthew Wilcox (Oracle)
      x86: remove PG_uncached · 7a87225a
      Matthew Wilcox (Oracle) authored
      Convert x86 to use PG_arch_2 instead of PG_uncached and remove
      PG_uncached.
      
      Link: https://lkml.kernel.org/r/20240821193445.2294269-11-willy@infradead.org
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      7a87225a
    • Matthew Wilcox (Oracle)
      mm: rename PG_mappedtodisk to PG_owner_2 · 02e1960a
      Matthew Wilcox (Oracle) authored
      This flag has similar constraints to PG_owner_priv_1 -- it is ignored
      by core code, and is entirely for the use of the code which allocated
      the folio.  Since the pagecache does not use it, individual
      filesystems are free to use it.  The bufferhead code does use it,
      however, so filesystems that rely on the buffer cache must not
      repurpose it.
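      
      Conceptually (a hypothetical sketch, not the literal patch), the
      rename can keep the existing buffer-cache user working through an
      enum alias, mirroring how PG_owner_priv_1 already backs several
      aliases:
      
        enum pageflags {
                /* ... */
                PG_owner_2,     /* owned by whoever allocated the folio */
                /* ... */
                /* alias kept for the buffer-cache user: */
                PG_mappedtodisk = PG_owner_2,
        };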
      
      Link: https://lkml.kernel.org/r/20240821193445.2294269-10-willy@infradead.org
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      02e1960a
    • Matthew Wilcox (Oracle)
      mm: remove page_has_private() · 6dc15138
      Matthew Wilcox (Oracle) authored
      This function has no more callers, except folio_has_private().  Combine
      the two functions.
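      
      The folded helper presumably ends up along these lines (a sketch
      based on the existing PAGE_FLAGS_PRIVATE mask, not verified against
      the final patch):
      
        static inline bool folio_has_private(struct folio *folio)
        {
                /* true if PG_private or PG_private_2 is set */
                return folio->flags & PAGE_FLAGS_PRIVATE;
        }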
      
      Link: https://lkml.kernel.org/r/20240821193445.2294269-9-willy@infradead.org
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      6dc15138
    • Matthew Wilcox (Oracle)
      mm: remove PageOwnerPriv1 · 3026bc1e
      Matthew Wilcox (Oracle) authored
      While there are many aliases for this flag, nobody actually uses the
      *PageOwnerPriv1() or the folio_*_owner_priv_1() accessors.  Remove
      them.
      
      Link: https://lkml.kernel.org/r/20240821193445.2294269-8-willy@infradead.org
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      3026bc1e
    • Matthew Wilcox (Oracle)
      mm: remove PageMlocked · 99f86bbd
      Matthew Wilcox (Oracle) authored
      This flag is now only used on folios, so we can remove all the page
      accessors.
      
      Link: https://lkml.kernel.org/r/20240821193445.2294269-7-willy@infradead.org
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      99f86bbd
    • Matthew Wilcox (Oracle)
      mm: remove PageUnevictable · cb29e794
      Matthew Wilcox (Oracle) authored
      There is only one caller of PageUnevictable() left; convert it to call
      folio_test_unevictable() and remove all the page accessors.
      
      Link: https://lkml.kernel.org/r/20240821193445.2294269-6-willy@infradead.org
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      cb29e794