    mm/mprotect: fix dax pud handling · cb0f01be
    Peter Xu authored
    This is only relevant to the two archs that support PUD dax, namely x86_64
    and ppc64.  PUD THPs do not yet exist elsewhere, and hugetlb PUDs do not
    count in this case.
    
    DAX has had PUD mappings for years, but the change protection path never
    worked.  When that path is triggered in any form (a simple test program
    would be: call mprotect() on a 1G dev_dax mapping), the kernel reports
    "bad pud".  This patch should fix that.
    
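    For a rough idea of what such a reproducer could look like, here is a
    sketch (not the exact test; the device path and the assumption of a
    1G-aligned dev_dax instance are illustrative):

        /* Hypothetical reproducer: mprotect() a 1G dev_dax mapping. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>

        int main(void)
        {
                size_t len = 1UL << 30;                /* one 1G PUD */
                int fd = open("/dev/dax0.0", O_RDWR);  /* assumed device path */
                void *p;

                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                if (p == MAP_FAILED) {
                        perror("mmap");
                        return 1;
                }
                /* Fault in the huge mapping, then change its protection. */
                *(volatile char *)p = 1;
                if (mprotect(p, len, PROT_READ))
                        perror("mprotect");
                munmap(p, len);
                return 0;
        }

    On an unfixed kernel the mprotect() call is what trips the "bad pud"
    report; with this patch it should just succeed.
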
    The new change_huge_pud() tries to keep everything simple.  For example,
    it doesn't optimize the write bit, as that would need even more PUD
    helpers.  One extra write fault in the worst case isn't too bad for a 1G
    range, though it would matter more if paid per PAGE_SIZE.  Neither does it
    support userfault-wp bits, as there are no PUD mappings where those would
    apply; file mappings always need a split there.
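
    For orientation, the shape of the new helper is roughly the sketch below
    (a simplified illustration rather than the literal patch; the PUD-level
    helpers such as pudp_invalidate() and pud_modify() are assumed to be
    provided for the two supported archs):

        /*
         * Sketch only: returns 0 when the huge PUD changed under us and the
         * caller should retry, HPAGE_PUD_NR when one huge PUD was updated,
         * and a small positive value when there is nothing to do (NUMA or
         * uffd-wp requests, which don't apply to dax PUDs).
         */
        int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
                            pud_t *pudp, unsigned long addr, pgprot_t newprot,
                            unsigned long cp_flags)
        {
                struct mm_struct *mm = vma->vm_mm;
                pud_t oldpud, entry;
                spinlock_t *ptl;

                tlb_change_page_size(tlb, HPAGE_PUD_SIZE);

                /* NUMA balancing and uffd-wp don't apply to dax PUDs */
                if (cp_flags & (MM_CP_PROT_NUMA | MM_CP_UFFD_WP_ALL))
                        return 1;

                ptl = __pud_trans_huge_lock(pudp, vma);
                if (!ptl)
                        return 0;        /* no longer a huge pud: retry */

                /* Invalidate, apply the new protection, re-install */
                oldpud = pudp_invalidate(vma, addr, pudp);
                entry = pud_modify(oldpud, newprot);
                set_pud_at(mm, addr, pudp, entry);
                tlb_flush_pud_range(tlb, addr, HPAGE_PUD_SIZE);

                spin_unlock(ptl);
                return HPAGE_PUD_NR;
        }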
    
    The same goes for TLB shootdown: the pmd path (which was x86 only) has the
    trick of using the _ad() version of pmdp_invalidate*(), which can avoid
    one redundant TLB flush, but let's also leave that for later.  Again, the
    larger the mapping, the smaller that effect.
    
    There's a difference in how "retry" is handled for change_huge_pud()
    (where it can return 0): it isn't like change_huge_pmd(), which is safe
    because all the racy conditions are handled later in change_pte_range(),
    thanks to Hugh's new pte_offset_map_lock().  In short, change_pte_range()
    is simply smarter.  Because of that, change_pud_range() needs a proper
    retry if it races with something else and the huge PUD changes from under
    us.
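
    For illustration, the retry in change_pud_range() could look roughly like
    the following (again a sketch under the assumptions above, not the literal
    diff):

        /* Inside change_pud_range()'s per-pud loop (sketch) */
        again:
                /* ... re-read *pudp, handle pud_none() etc. as before ... */
                if (pud_leaf(pud)) {
                        if ((next - addr != PUD_SIZE) ||
                            pgtable_split_needed(vma, cp_flags)) {
                                __split_huge_pud(vma, pudp, addr);
                                goto again;        /* split, then rescan */
                        } else {
                                ret = change_huge_pud(tlb, vma, pudp, addr,
                                                      newprot, cp_flags);
                                if (ret == 0)
                                        goto again;  /* raced: pud changed */
                                /* huge pud handled; skip the pmd walk */
                                if (ret == HPAGE_PUD_NR)
                                        pages += HPAGE_PUD_NR;
                                continue;
                        }
                }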
    
    The last thing to mention: the PUD path currently ignores the huge pte
    NUMA counter (NUMA_HUGE_PTE_UPDATES), not only because DAX is not
    applicable to NUMA balancing, but also because it's ambiguous how a pud
    should be accounted there in the first place.  In an earlier version of
    this patchset I proposed removing the counter, as the accounting doesn't
    even look right as of now [1], but a further discussion [2] suggested
    leaving that for later, since it doesn't block this series if we simply
    ignore the counter.  That's what this patch does: ignore it.
    
    While at it, touch up the comment in pgtable_split_needed() to make it
    generic to both pmd and pud file THPs.
    
    [1] https://lore.kernel.org/all/20240715192142.3241557-3-peterx@redhat.com/
    [2] https://lore.kernel.org/r/added2d0-b8be-4108-82ca-1367a388d0b1@redhat.com
    
    Link: https://lkml.kernel.org/r/20240812181225.1360970-8-peterx@redhat.com
    Fixes: a00cc7d9 ("mm, x86: add support for PUD-sized transparent hugepages")
    Fixes: 27af67f3 ("powerpc/book3s64/mm: enable transparent pud hugepage")
    Signed-off-by: Peter Xu <peterx@redhat.com>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Dave Jiang <dave.jiang@intel.com>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: Kirill A. Shutemov <kirill@shutemov.name>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Dave Hansen <dave.hansen@linux.intel.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Cc: Oscar Salvador <osalvador@suse.de>
    Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
    Cc: David Hildenbrand <david@redhat.com>
    Cc: David Rientjes <rientjes@google.com>
    Cc: "Edgecombe, Rick P" <rick.p.edgecombe@intel.com>
    Cc: Nicholas Piggin <npiggin@gmail.com>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Rik van Riel <riel@surriel.com>
    Cc: Sean Christopherson <seanjc@google.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>