1. 06 May, 2024 10 commits
    • mm/rmap: always inline anon/file rmap duplication of a single PTE · c2e65ebc
      David Hildenbrand authored
      As we grow the code, the compiler might make stupid decisions and
      unnecessarily degrade fork() performance.  Let's make sure to always
      inline functions that operate on a single PTE so the compiler will always
      optimize out the loop and avoid a function call.
      
      This is a preparation for maintaining a total mapcount for large folios.
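
      A minimal sketch of the intent, with hypothetical demo_* names standing
      in for the real rmap helpers:

      /* Range variant, also always inlined so constant nr_pages folds away. */
      static __always_inline void __demo_dup_rmap_ptes(atomic_t *per_page_mapcount,
                                                       int nr_pages)
      {
              int i;

              for (i = 0; i < nr_pages; i++)
                      atomic_inc(&per_page_mapcount[i]);
      }

      /*
       * Single-PTE variant: __always_inline guarantees the nr_pages == 1
       * loop above is folded into one atomic_inc() and no function call
       * remains on the fork() fast path.
       */
      static __always_inline void demo_dup_rmap_pte(atomic_t *per_page_mapcount)
      {
              __demo_dup_rmap_ptes(per_page_mapcount, 1);
      }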
      
      Link: https://lkml.kernel.org/r/20240409192301.907377-3-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Richard Chang <richardycc@google.com>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Ryan Roberts <ryan.roberts@arm.com>
      Cc: Yang Shi <shy828301@gmail.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Zi Yan <ziy@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: allow for detecting underflows with page_mapcount() again · 02faa73f
      David Hildenbrand authored
      Patch series "mm: mapcount for large folios + page_mapcount() cleanups".
      
      This series tracks the mapcount of large folios in a single value, so it
      can be read efficiently and atomically, just like the mapcount of small
      folios.
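
      Conceptually (demo name; the real accessor is folio_mapcount()), reading
      the mapcount of a large folio then becomes a single atomic read instead
      of a sum over every page:

      static inline int demo_folio_total_mapcount(const atomic_t *total_mapcount)
      {
              /* Stored off by one in this sketch, mirroring _mapcount's convention. */
              return atomic_read(total_mapcount) + 1;
      }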
      
      folio_mapcount() is then used in a couple more places, most notably to
      reduce false negatives in folio_likely_mapped_shared(), and many users of
      page_mapcount() are cleaned up (that's maybe why you got CCed on the full
      series, sorry sh+xtensa folks!  :) ).
      
      The remaining s390x user and one KSM user of page_mapcount() are getting
      removed separately on the list right now.  I have patches to handle the
      other KSM one, the khugepaged one and the kpagecount one; as they are not
      as "obvious", I will send them out separately in the future.  Once that is
      all in place, I'm planning on moving page_mapcount() into
      fs/proc/task_mmu.c, the remaining user for the time being (and we can
      discuss at LSF/MM details on that :) ).
      
      I originally proposed the mapcount for large folios (previously called
      total mapcount) as part of [1] and later included it in [2], where it is
      a requirement.  In the meantime, I changed the patch a bit, so I dropped
      all RBs.  During the discussion of [1], Peter Xu correctly raised that
      this additional tracking might affect performance when PMD->PTE-remapping
      THPs.  I have since addressed that by batching RMAP operations during
      fork(), unmap/zap, and when PMD->PTE-remapping THPs.
      
      Running some of my micro-benchmarks [3] (fork,munmap,cow-byte,remap) on 1
      GiB of memory backed by folios with the same order, I observe the
      following on an Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz tuned for
      reproducible results as much as possible:
      
      Standard deviation is mostly < 1%, except for order-9, where it's < 2% for
      fork() and munmap().
      
      (1) Small folios are not affected (< 1%) in all 4 microbenchmarks.
      (2) Order-4 folios are not affected (< 1%) in all 4 microbenchmarks. A bit
          weird compared to the other orders ...
      (3) PMD->PTE remapping of order-9 THPs is not affected (< 1%)
      (4) COW-byte (COWing a single page by writing a single byte) is not
          affected for any order (< 1 %). The page copy_fault overhead dominates
          everything.
      (5) fork() is mostly not affected (< 1%), except order-2, where we have
          a slowdown of ~4%. Already for order-3 folios, we're down to a slowdown
          of < 1%.
      (6) munmap() sees a slowdown by < 3% for some orders (order-5,
          order-6, order-9), but less for others (< 1% for order-4 and order-8,
          < 2% for order-2, order-3, order-7).
      
      Especially the fork() and munmap() benchmarks are sensitive to each added
      instruction and other system noise, so I suspect some of the change and
      observed weirdness (order-4) is due to code layout changes and other
      factors, but not really due to the added atomics.
      
      So in the common case where we can batch, the added atomics don't really
      make a big difference, especially in light of the improvements for large
      folios that we recently gained due to batching.  Surprisingly, for
      some cases where we cannot batch (e.g., COW), the added atomics don't seem
      to matter, because other overhead dominates.
      
      My fork and munmap micro-benchmarks don't cover cases where we cannot
      batch-process bigger parts of large folios.  As this is not the common
      case, I'm not worrying about that right now.
      
      Future work is batching RMAP operations during swapout and folio
      migration.
      
      [1] https://lore.kernel.org/all/20230809083256.699513-1-david@redhat.com/
      [2] https://lore.kernel.org/all/20231124132626.235350-1-david@redhat.com/
      [3] https://gitlab.com/davidhildenbrand/scratchspace/-/raw/main/pte-mapped-folio-benchmarks.c?ref_type=heads
      
      
      This patch (of 18):
      
      Commit 53277bcf126d ("mm: support page_mapcount() on page_has_type()
      pages") made it impossible to detect mapcount underflows by treating any
      negative raw mapcount value as a mapcount of 0.
      
      We perform such underflow checks in zap_present_folio_ptes() and
      zap_huge_pmd(), which would currently no longer trigger.
      
      Let's check against PAGE_MAPCOUNT_RESERVE instead by using
      page_type_has_type(), like page_has_type() would, so we can still catch
      some underflows.
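
      Illustrative sketch (demo_* names are hypothetical; the -128 reserve
      value is this sketch's assumption, not necessarily the kernel's exact
      definition of PAGE_MAPCOUNT_RESERVE):

      #define DEMO_PAGE_MAPCOUNT_RESERVE      (-128)

      /* Only raw values below the reserve are treated as page-type markers. */
      static inline bool demo_page_type_has_type(int raw_mapcount)
      {
              return raw_mapcount < DEMO_PAGE_MAPCOUNT_RESERVE;
      }

      static inline int demo_page_mapcount(int raw_mapcount)
      {
              /* page_has_type()-style pages still read as mapcount 0 ... */
              if (demo_page_type_has_type(raw_mapcount))
                      return 0;
              /* ... but an underflow (e.g. raw == -2 -> -1) stays visible. */
              return raw_mapcount + 1;
      }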
      
      [david@redhat.com: make page_mapcount() slightly more efficient]
        Link: https://lkml.kernel.org/r/1af4fd61-7926-47c8-be45-833c0dbec08b@redhat.com
      Link: https://lkml.kernel.org/r/20240409192301.907377-1-david@redhat.com
      Link: https://lkml.kernel.org/r/20240409192301.907377-2-david@redhat.com
      Fixes: 53277bcf126d ("mm: support page_mapcount() on page_has_type() pages")
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Richard Chang <richardycc@google.com>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Ryan Roberts <ryan.roberts@arm.com>
      Cc: Yang Shi <shy828301@gmail.com>
      Cc: Yin Fengwei <fengwei.yin@intel.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Cc: Zi Yan <ziy@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: follow_pte() improvements · c5541ba3
      David Hildenbrand authored
      follow_pte() is now our main function to look up PTEs in VM_PFNMAP/VM_IO
      VMAs.  Let's perform some more sanity checks to make this exported
      function harder to abuse.
      
      Further, extend the doc a bit; it still focuses on the KVM use case with
      MMU notifiers.  Drop the KVM+follow_pfn() comment: follow_pfn() is no
      more, and we have other users nowadays.
      
      Also extend the doc regarding refcounted pages and the interaction with
      MMU notifiers.
      
      KVM is one example that uses MMU notifiers and can deal with refcounted
      pages properly.  VFIO is one example that doesn't use MMU notifiers, and
      to prevent use-after-free, rejects refcounted pages: pfn_valid(pfn) &&
      !PageReserved(pfn_to_page(pfn)).  Protection changes are less of a concern
      for users like VFIO: the behavior is similar to longterm-pinning a page,
      and getting the PTE protection changed afterwards.
      
      The primary concern with refcounted pages is use-after-free, which callers
      should be aware of.
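
      A sketch of that VFIO-style policy (hypothetical helper name; VFIO's
      actual code is structured differently):

      /* A valid, non-reserved PFN maps a refcounted "normal" page ... */
      static bool demo_pfn_is_refcounted_page(unsigned long pfn)
      {
              return pfn_valid(pfn) && !PageReserved(pfn_to_page(pfn));
      }
      /* ... which a user without MMU notifiers must reject to avoid UAF. */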
      
      Link: https://lkml.kernel.org/r/20240410155527.474777-4-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Cc: Alex Williamson <alex.williamson@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Fei Li <fei1.li@intel.com>
      Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Yonghua Huang <yonghua.huang@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: pass VMA instead of MM to follow_pte() · 29ae7d96
      David Hildenbrand authored
      ... and centralize the VM_IO/VM_PFNMAP sanity check in there. We'll
      now also perform these sanity checks for direct follow_pte()
      invocations.
      
      For generic_access_phys(), we might now check multiple times: nothing to
      worry about, really.
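
      Rough sketch of the new shape (not the verbatim mm/memory.c code; the
      page-table walk is elided behind a hypothetical demo_pte_walk() helper):

      int follow_pte(struct vm_area_struct *vma, unsigned long address,
                     pte_t **ptepp, spinlock_t **ptlp)
      {
              /* Centralized sanity check: only VM_IO/VM_PFNMAP VMAs are sane here. */
              if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
                      return -EINVAL;

              /* demo_pte_walk() stands in for the actual lookup on vma->vm_mm. */
              return demo_pte_walk(vma->vm_mm, address, ptepp, ptlp);
      }

      Callers then pass the VMA directly; some, like generic_access_phys(),
      may end up checking the flags twice, which is harmless.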
      
      Link: https://lkml.kernel.org/r/20240410155527.474777-3-david@redhat.com
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Acked-by: Sean Christopherson <seanjc@google.com>	[KVM]
      Cc: Alex Williamson <alex.williamson@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Fei Li <fei1.li@intel.com>
      Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Yonghua Huang <yonghua.huang@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • drivers/virt/acrn: fix PFNMAP PTE checks in acrn_vm_ram_map() · 3d658600
      David Hildenbrand authored
      Patch series "mm: follow_pte() improvements and acrn follow_pte() fixes".
      
      Patch #1 fixes a bunch of issues I spotted in the acrn driver.  It
      compiles, that's all I know.  I'd appreciate some review and testing from
      acrn folks.
      
      Patch #2+#3 improve follow_pte(), passing a VMA instead of the MM, adding
      more sanity checks, and improving the documentation.  Gave it a quick test
      on x86-64 using VM_PAT that ends up using follow_pte().
      
      
      This patch (of 3):
      
      We currently miss handling various cases, resulting in dangerous
      follow_pte() (previously follow_pfn()) usage.
      
      (1) We're not checking PTE write permissions.
      
      Maybe we should simply always require pte_write() like we do for
      pin_user_pages_fast(FOLL_WRITE)? Hard to tell, so let's check for
      ACRN_MEM_ACCESS_WRITE for now.
      
      (2) We're not rejecting refcounted pages.
      
      As we are not using MMU notifiers, messing with refcounted pages is
      dangerous and can result in use-after-free. Let's make sure to reject them.
      
      (3) We are only looking at the first PTE of a bigger range.
      
      We only look up a single PTE, but memmap->len may span a larger area.
      Let's loop over all involved PTEs and make sure the PFN range is
      actually contiguous.  Reject everything else: it couldn't have worked
      either way, and rather made us access PFNs we shouldn't be accessing.
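
      A hedged sketch of the combined checks, using the VMA-based follow_pte()
      from the follow-up patches (hypothetical demo helper; the real
      acrn_vm_ram_map() differs in structure and error handling):

      static int demo_check_pfnmap_range(struct vm_area_struct *vma,
                                         unsigned long vaddr, unsigned long npages,
                                         bool need_write, unsigned long *first_pfn)
      {
              unsigned long i, pfn, prev_pfn = 0;
              spinlock_t *ptl;
              pte_t *ptep;
              int ret;

              for (i = 0; i < npages; i++, vaddr += PAGE_SIZE) {
                      ret = follow_pte(vma, vaddr, &ptep, &ptl);
                      if (ret)
                              return ret;

                      /* (1) honor the requested access mode */
                      if (need_write && !pte_write(ptep_get(ptep))) {
                              pte_unmap_unlock(ptep, ptl);
                              return -EFAULT;
                      }
                      pfn = pte_pfn(ptep_get(ptep));
                      pte_unmap_unlock(ptep, ptl);

                      /* (2) no MMU notifiers here: refuse refcounted pages */
                      if (pfn_valid(pfn) && !PageReserved(pfn_to_page(pfn)))
                              return -EFAULT;

                      /* (3) the whole range must be PFN-contiguous */
                      if (i == 0)
                              *first_pfn = pfn;
                      else if (pfn != prev_pfn + 1)
                              return -EFAULT;
                      prev_pfn = pfn;
              }
              return 0;
      }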
      
      Link: https://lkml.kernel.org/r/20240410155527.474777-1-david@redhat.com
      Link: https://lkml.kernel.org/r/20240410155527.474777-2-david@redhat.com
      Fixes: 8a6e85f7 ("virt: acrn: obtain pa from VMA with PFNMAP flag")
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Cc: Alex Williamson <alex.williamson@redhat.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Fei Li <fei1.li@intel.com>
      Cc: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
      Cc: Heiko Carstens <hca@linux.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Yonghua Huang <yonghua.huang@intel.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm,swap: add document about RCU read lock and swapoff interaction · d4a34d7f
      Huang Ying authored
      While reviewing a patch to fix the race condition between
      free_swap_and_cache() and swapoff() [1], it was found that the
      documentation on how to prevent racing with swapoff isn't clear enough,
      especially on how the RCU read lock prevents swapoff from freeing data
      structures.  So, add that documentation as comments.
      
      [1] https://lore.kernel.org/linux-mm/c8fe62d0-78b8-527a-5bef-ee663ccdc37a@huawei.com/
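
      A generic illustration of the documented interaction (hypothetical
      demo_* code, not the swap implementation): readers dereference under
      rcu_read_lock(), while the swapoff-like teardown side unpublishes the
      pointer and waits for a grace period before freeing.

      struct demo_device {
              int data;
      };

      static struct demo_device __rcu *demo_dev;

      static int demo_reader(void)
      {
              struct demo_device *dev;
              int val = -1;

              rcu_read_lock();
              dev = rcu_dereference(demo_dev);
              if (dev)                        /* may already be unpublished */
                      val = dev->data;        /* safe: the free is deferred */
              rcu_read_unlock();
              return val;
      }

      static void demo_teardown(void)         /* the "swapoff()" side */
      {
              struct demo_device *dev = rcu_dereference_protected(demo_dev, true);

              rcu_assign_pointer(demo_dev, NULL);
              synchronize_rcu();              /* wait out all current readers */
              kfree(dev);
      }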
      
      Link: https://lkml.kernel.org/r/20240407065450.498821-1-ying.huang@intel.com
      Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
      Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm/mmap: make accountable_mapping return bool · 2bd9e6ee
      Hao Ge authored
      accountable_mapping() can return bool, so change it.
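
      The approximate shape of the change (simplified sketch; see mm/mmap.c
      for the real function) is just flipping the return type to match the
      yes/no logic:

      static bool accountable_mapping(struct file *file, vm_flags_t vm_flags)
      {
              /* hugetlb has its own accounting, separate from the core VM. */
              if (file && is_file_hugepages(file))
                      return false;

              return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE;
      }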
      
      Link: https://lkml.kernel.org/r/20240407063843.804274-1-gehao@kylinos.cn
      Signed-off-by: Hao Ge <gehao@kylinos.cn>
      Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm/mmap: make vma_wants_writenotify return bool · 38bc9c28
      Hao Ge authored
      vma_wants_writenotify() should return bool, so change it.
      
      Link: https://lkml.kernel.org/r/20240407062653.803142-1-gehao@kylinos.cn
      Signed-off-by: Hao Ge <gehao@kylinos.cn>
      Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Lorenzo Stoakes <lstoakes@gmail.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • memory tier: create CPUless memory tiers after obtaining HMAT info · cf93be18
      Ho-Ren (Jack) Chuang authored
      The current implementation treats emulated memory devices, such as CXL1.1
      type3 memory, as normal DRAM when they are emulated as normal memory
      (E820_TYPE_RAM).  However, these emulated devices have different
      characteristics than traditional DRAM, making it important to distinguish
      them.  Thus, we modify the tiered memory initialization process to
      introduce a delay specifically for CPUless NUMA nodes.  This delay ensures
      that the memory tier initialization for these nodes is deferred until HMAT
      information is obtained during the boot process.  Finally, demotion tables
      are recalculated at the end.
      
      * late_initcall(memory_tier_late_init);
        Some device drivers may have initialized memory tiers between
        `memory_tier_init()` and `memory_tier_late_init()`, potentially bringing
        online memory nodes and configuring memory tiers.  They should be
        excluded from the late init.
      
      * Handle cases where there is no HMAT when creating memory tiers
        There is a scenario where a CPUless node does not provide HMAT
        information.  If no HMAT is specified, it falls back to using the
        default DRAM tier.
      
      * Introduce another new lock, `default_dram_perf_lock`, for the adist
        calculation.  In the current implementation, iterating through CPUlist
        nodes requires holding the `memory_tier_lock`.  However,
        `mt_calc_adistance()` will end up trying to acquire the same lock,
        leading to a potential deadlock.  Therefore, we introduce a standalone
        `default_dram_perf_lock` to protect `default_dram_perf_*`.  This
        approach not only avoids the deadlock but also avoids holding a large
        lock for longer than necessary.
      
      * Upgrade `set_node_memory_tier` to support additional cases, including
        default DRAM, late CPUless, and hot-plugged initializations.  To cover
        hot-plugged memory nodes, `mt_calc_adistance()` and
        `mt_find_alloc_memory_type()` are moved into `set_node_memory_tier()` to
        handle cases where memtype is not initialized and where HMAT information
        is available.
      
      * Introduce `default_memory_types` for those memory types that are not
        initialized by device drivers.  Because late initialized memory and
        default DRAM memory need to be managed, a default memory type is created
        for storing all memory types that are not initialized by device drivers
        and as a fallback.
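
      A minimal sketch of the resulting late-init flow (simplified; helper and
      variable names approximate the mm/memory-tiers.c internals and may
      differ from the final code):

      static int __init demo_memory_tier_late_init(void)
      {
              int nid;

              mutex_lock(&memory_tier_lock);
              for_each_node_state(nid, N_MEMORY) {
                      /*
                       * Skip nodes with CPUs (tiered at memory_tier_init())
                       * and nodes a driver already tiered between the two
                       * initcalls.
                       */
                      if (node_state(nid, N_CPU) || node_memory_types[nid].memtype)
                              continue;

                      /* HMAT info is available now, or we fall back to DRAM. */
                      set_node_memory_tier(nid);
              }
              establish_demotion_targets();
              mutex_unlock(&memory_tier_lock);
              return 0;
      }
      late_initcall(demo_memory_tier_late_init);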
      
      Link: https://lkml.kernel.org/r/20240405000707.2670063-3-horenchuang@bytedance.com
      Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
      Signed-off-by: Hao Xiang <hao.xiang@bytedance.com>
      Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
      Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Alistair Popple <apopple@nvidia.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Jiang <dave.jiang@intel.com>
      Cc: Gregory Price <gourry.memverge@gmail.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Ravi Jonnalagadda <ravis.opensrc@micron.com>
      Cc: SeongJae Park <sj@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vishal Verma <vishal.l.verma@intel.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • memory tier: dax/kmem: introduce an abstract layer for finding, allocating,... · a72a30af
      Ho-Ren (Jack) Chuang authored
      memory tier: dax/kmem: introduce an abstract layer for finding, allocating, and putting memory types
      
      Patch series "Improved Memory Tier Creation for CPUless NUMA Nodes", v11.
      
      When a memory device, such as CXL1.1 type3 memory, is emulated as normal
      memory (E820_TYPE_RAM), the memory device is indistinguishable from normal
      DRAM in terms of memory tiering with the current implementation.  The
      current memory tiering assigns all detected normal memory nodes to the
      same DRAM tier.  This results in normal memory devices with different
      attributes not being assigned to the correct memory tier, making it
      impossible to migrate pages between different types of memory.
      https://lore.kernel.org/linux-mm/PH0PR08MB7955E9F08CCB64F23963B5C3A860A@PH0PR08MB7955.namprd08.prod.outlook.com/T/
      
      This patchset resolves these issues automatically.  It delays the
      initialization of memory tiers for CPUless NUMA nodes until HMAT
      information has been obtained and all devices have been initialized at
      boot time, eliminating the need for user intervention.  If no HMAT is
      specified, it falls back to using `default_dram_type`.
      
      Example usecase:
      We have CXL memory on the host, and we create VMs with a new system memory
      device backed by host CXL memory.  We inject CXL memory performance
      attributes through QEMU, and the guest now sees memory nodes with
      performance attributes in HMAT.  With this change, we enable the guest
      kernel to construct the correct memory tiering for the memory nodes.
      
      
      This patch (of 2):
      
      Since different memory devices require finding, allocating, and putting
      memory types, these common steps are abstracted in this patch, enhancing
      the scalability and conciseness of the code.
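
      A hedged sketch of the abstraction (demo_* types and names; the real
      mt_find_alloc_memory_type() and its "put" counterpart handle reference
      counting and locking that is omitted here):

      struct demo_memory_dev_type {
              struct list_head list;
              int adistance;
      };

      static struct demo_memory_dev_type *
      demo_find_alloc_memory_type(int adist, struct list_head *memory_types)
      {
              struct demo_memory_dev_type *mtype;

              list_for_each_entry(mtype, memory_types, list)
                      if (mtype->adistance == adist)
                              return mtype;   /* reuse the existing type */

              mtype = kzalloc(sizeof(*mtype), GFP_KERNEL);
              if (!mtype)
                      return ERR_PTR(-ENOMEM);
              mtype->adistance = adist;
              list_add(&mtype->list, memory_types);
              return mtype;
      }

      Callers such as dax/kmem keep one such list per driver and drop all
      entries again via the matching "put" helper when the driver goes away.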
      
      Link: https://lkml.kernel.org/r/20240405000707.2670063-1-horenchuang@bytedance.com
      Link: https://lkml.kernel.org/r/20240405000707.2670063-2-horenchuang@bytedance.com
      Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
      Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
      Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Alistair Popple <apopple@nvidia.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Jiang <dave.jiang@intel.com>
      Cc: Gregory Price <gourry.memverge@gmail.com>
      Cc: Hao Xiang <hao.xiang@bytedance.com>
      Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Ravi Jonnalagadda <ravis.opensrc@micron.com>
      Cc: SeongJae Park <sj@kernel.org>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Vishal Verma <vishal.l.verma@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  2. 26 Apr, 2024 30 commits