    mm: khugepaged: don't carry huge page to the next loop for !CONFIG_NUMA
    Patch series "mm: userspace hugepage collapse", v7.
    
    Introduction
    --------------------------------
    
    This series provides a mechanism for userspace to induce a collapse of
    eligible ranges of memory into transparent hugepages in process context,
    thus permitting users to more tightly control their own hugepage
    utilization policy at their own expense.
    
    This idea was introduced by David Rientjes[5].
    
    Interface
    --------------------------------
    
    The proposed interface adds a new madvise(2) mode, MADV_COLLAPSE, and
    leverages the new process_madvise(2) call.
    
    process_madvise(2)
    
    	Performs a synchronous collapse of the native pages
    	mapped by the list of iovecs into transparent hugepages.
    
    	This operation is independent of the system THP sysfs settings,
    	but attempts to collapse VMAs marked VM_NOHUGEPAGE will still fail.
    
    	THP allocation may enter direct reclaim and/or compaction.
    
    	When a range spans multiple VMAs, the semantics of the collapse
    	over each VMA are independent from the others.
    
    	Caller must have CAP_SYS_ADMIN if not acting on self.
    
    	Return value follows existing process_madvise(2) conventions.  A
    	“success” indicates that all hugepage-sized/aligned regions
    	covered by the provided range were either successfully
    	collapsed, or were already pmd-mapped THPs.
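
    	As a minimal sketch of a remote collapse (the helper name
    	collapse_remote() is hypothetical; glibc may not provide wrappers
    	for process_madvise(2) and pidfd_open(2), so raw syscalls are
    	used):

    		#define _GNU_SOURCE
    		#include <sys/mman.h>
    		#include <sys/syscall.h>
    		#include <sys/uio.h>
    		#include <unistd.h>

    		#ifndef MADV_COLLAPSE
    		#define MADV_COLLAPSE 25	/* absent from older uapi headers */
    		#endif

    		static long collapse_remote(pid_t pid, void *addr, size_t len)
    		{
    			struct iovec iov = { .iov_base = addr, .iov_len = len };
    			int pidfd = syscall(SYS_pidfd_open, pid, 0);
    			long ret;

    			if (pidfd < 0)
    				return -1;
    			/* flags must be 0; each iovec is collapsed independently */
    			ret = syscall(SYS_process_madvise, pidfd, &iov, 1,
    				      MADV_COLLAPSE, 0);
    			close(pidfd);
    			return ret;
    		}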
    
    madvise(2)
    
    	Equivalent to process_madvise(2) on self, with 0 returned on
    	“success”.
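
    	For a collapse on self, a minimal sketch (the helper name
    	collapse_self() and the 2MB pmd size are assumptions for x86-64;
    	other architectures use different hugepage sizes):

    		#include <stdint.h>
    		#include <string.h>
    		#include <sys/mman.h>

    		#ifndef MADV_COLLAPSE
    		#define MADV_COLLAPSE 25	/* absent from older uapi headers */
    		#endif

    		#define HPAGE_SIZE	(2UL << 20)

    		static int collapse_self(void)
    		{
    			/* over-allocate so a hugepage-aligned window exists */
    			char *buf = mmap(NULL, 2 * HPAGE_SIZE,
    					 PROT_READ | PROT_WRITE,
    					 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    			char *aligned;

    			if (buf == MAP_FAILED)
    				return -1;
    			aligned = (char *)(((uintptr_t)buf + HPAGE_SIZE - 1)
    					   & ~(HPAGE_SIZE - 1));
    			memset(aligned, 1, HPAGE_SIZE);	/* fault in native pages */
    			/* 0 on success, as with other madvise(2) advice values */
    			return madvise(aligned, HPAGE_SIZE, MADV_COLLAPSE);
    		}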
    
    Current Use-Cases
    --------------------------------
    
    (1)	Immediately back executable text by THPs.  The current support
    	provided by CONFIG_READ_ONLY_THP_FOR_FS may take a long time on a
    	large system, which can prevent services from serving at their full
    	rated load after (re)starting.  Tricks like mremap(2)'ing text onto
    	anonymous memory to immediately realize iTLB performance prevent
    	page sharing and demand paging, both of which increase steady-state
    	memory footprint.  With MADV_COLLAPSE, we get the best of both
    	worlds: peak upfront performance and a lower RAM footprint.  Note
    	that subsequent support for file-backed memory is required here.
    
    (2)	malloc() implementations that manage memory in hugepage-sized
    	chunks, but sometimes subrelease memory back to the system in
    	native-sized chunks via MADV_DONTNEED, zapping the pmd.  Later,
    	when the memory is hot, the implementation can
    	madvise(MADV_COLLAPSE) to re-back the memory with THPs to regain
    	hugepage coverage and dTLB performance (see the sketch after this
    	list).  TCMalloc is one implementation that could benefit from
    	this[6].  A prior study of Google-internal workloads during the
    	evaluation of Temeraire, a hugepage-aware enhancement to TCMalloc,
    	showed that nearly 20% of all cpu cycles were spent in dTLB
    	stalls, and that increasing hugepage coverage by even a small
    	amount can help with that[7].
    
    (3)	userfaultfd-based live migration of virtual machines satisfies
    	UFFD faults by fetching native-sized pages over the network (to
    	avoid the latency of transferring an entire hugepage).  However,
    	after guest memory has been fully copied to the new host,
    	MADV_COLLAPSE can be used to immediately increase guest
    	performance.  Note that subsequent support for file/shmem-backed
    	memory is required here.
    
    (4)	HugeTLB high-granularity mapping allows a HugeTLB page to be
    	mapped at different levels in the page tables[8].  As it's not
    	"transparent" like THP, HugeTLB high-granularity mappings require
    	an explicit user API.  It is intended that MADV_COLLAPSE be
    	co-opted for this use case[9].  Note that subsequent support for
    	HugeTLB memory is required here.
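
    As a sketch of the subrelease/re-collapse cycle from use-case (2)
    (the chunk layout and the helper names subrelease()/reback() are
    illustrative, not actual TCMalloc code):

    	#include <stddef.h>
    	#include <sys/mman.h>

    	#ifndef MADV_COLLAPSE
    	#define MADV_COLLAPSE 25	/* absent from older uapi headers */
    	#endif

    	#define HPAGE_SIZE	(2UL << 20)
    	#define NATIVE_PAGE	4096UL

    	/* Return one cold native page of a hugepage-backed chunk to the
    	 * system; this zaps the pmd, dropping the chunk to native pages. */
    	static void subrelease(char *chunk, size_t off)
    	{
    		madvise(chunk + off, NATIVE_PAGE, MADV_DONTNEED);
    	}

    	/* Once the chunk is hot again, re-back it with a THP. */
    	static int reback(char *chunk)
    	{
    		return madvise(chunk, HPAGE_SIZE, MADV_COLLAPSE);
    	}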
    
    Future work
    --------------------------------
    
    Only private anonymous memory is supported by this series. File and
    shmem memory support will be added later.
    
    One possible user of this functionality is a userspace agent that
    attempts to optimize THP utilization system-wide by allocating THPs
    based on, for example, task priority, task performance requirements, or
    heatmaps.  For the latter, one idea that has already surfaced is using
    DAMON to identify hot regions, and driving THP collapse through a new
    DAMOS_COLLAPSE scheme[10].
    
    
    This patch (of 17):
    
    khugepaged has an optimization that reduces huge page allocation calls
    for !CONFIG_NUMA by carrying an allocated, but failed-to-collapse, huge
    page over to the next loop iteration.  The CONFIG_NUMA build doesn't do
    so, since the next iteration may try to collapse a huge page from a
    different node, so carrying the page doesn't make much sense there.
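
    A condensed paraphrase of the pre-patch !CONFIG_NUMA flow (trimmed and
    simplified here, not the verbatim kernel code):

    	/* NUMA=n: *hpage is carried across loop iterations so that a page
    	 * that was allocated but failed to collapse can be reused. */
    	static bool khugepaged_prealloc_page(struct page **hpage, bool *wait)
    	{
    		/* drop a carried page that something else still references */
    		if (*hpage && page_count(*hpage) > 1) {
    			put_page(*hpage);
    			*hpage = NULL;
    		}

    		if (!*hpage)
    			*hpage = khugepaged_alloc_hugepage(wait);

    		return *hpage != NULL;
    	}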
    
    But when NUMA=n, the huge page is allocated by khugepaged_prealloc_page()
    before scanning the address space, which means a huge page may be
    allocated even though there is no suitable range to collapse.  The page
    is then simply freed if khugepaged has already made enough progress.
    This can make a NUMA=n run show 5 times as many thp_collapse_alloc
    events as a NUMA=y run.  The far greater number of pointless THP
    allocations actually makes things worse and defeats the purpose of the
    optimization.
    
    This could be fixed by carrying the huge page across scans, but that
    would complicate the code further, and the huge page might be carried
    indefinitely.  Taking a step back, the optimization itself no longer
    seems worth keeping, since:
    
      * Few users build NUMA=n kernels nowadays, even when the kernel
        actually runs on a non-NUMA machine.  Some small devices may run
        NUMA=n kernels, but they are unlikely to use THP.
      * Since commit 44042b44 ("mm/page_alloc: allow high-order pages to be
        stored on the per-cpu lists"), THPs can be cached on the per-cpu
        lists, which effectively does the job of the optimization.
    
    Link: https://lkml.kernel.org/r/20220706235936.2197195-1-zokeefe@google.com
    Link: https://lkml.kernel.org/r/20220706235936.2197195-3-zokeefe@google.com
    Signed-off-by: Yang Shi <shy828301@gmail.com>
    Signed-off-by: Zach O'Keefe <zokeefe@google.com>
    Co-developed-by: Peter Xu <peterx@redhat.com>
    Signed-off-by: Peter Xu <peterx@redhat.com>
    Cc: Hugh Dickins <hughd@google.com>
    Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
    Cc: Alex Shi <alex.shi@linux.alibaba.com>
    Cc: Andrea Arcangeli <aarcange@redhat.com>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Axel Rasmussen <axelrasmussen@google.com>
    Cc: Chris Kennelly <ckennelly@google.com>
    Cc: Chris Zankel <chris@zankel.net>
    Cc: David Hildenbrand <david@redhat.com>
    Cc: David Rientjes <rientjes@google.com>
    Cc: Helge Deller <deller@gmx.de>
    Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
    Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
    Cc: Jens Axboe <axboe@kernel.dk>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Matt Turner <mattst88@gmail.com>
    Cc: Max Filippov <jcmvbkbc@gmail.com>
    Cc: Miaohe Lin <linmiaohe@huawei.com>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: Minchan Kim <minchan@kernel.org>
    Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
    Cc: Pavel Begunkov <asml.silence@gmail.com>
    Cc: Rongwei Wang <rongwei.wang@linux.alibaba.com>
    Cc: SeongJae Park <sj@kernel.org>
    Cc: Song Liu <songliubraving@fb.com>
    Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Cc: Zi Yan <ziy@nvidia.com>
    Cc: Dan Carpenter <dan.carpenter@oracle.com>
    Cc: "Souptick Joarder (HPE)" <jrdr.linux@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>