    mm/page_alloc: place pages to tail in __putback_isolated_page() · 47b6a24a
    David Hildenbrand authored
    __putback_isolated_page() already documents that pages will be placed to
    the tail of the freelist - this is, however, not the case for "order >=
    MAX_ORDER - 2" (see buddy_merge_likely()), a range that covers all
    existing users.
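    The head/tail distinction can be illustrated with a minimal freelist
    model - a standalone sketch with hypothetical names, not the real
    mm/page_alloc.c code.  Allocation takes from the head, so pages placed
    to the tail are reused last:

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical stand-in for a buddy freelist: a circular doubly
     * linked list of pages with a dummy head node. */
    struct page { struct page *prev, *next; int id; };
    struct freelist { struct page head; };

    static void freelist_init(struct freelist *fl)
    {
        fl->head.prev = fl->head.next = &fl->head;
    }

    /* Ordinary free: the page is reused soon (cache-hot). */
    static void add_to_head(struct freelist *fl, struct page *p)
    {
        p->next = fl->head.next;
        p->prev = &fl->head;
        fl->head.next->prev = p;
        fl->head.next = p;
    }

    /* Putback of an untouched page: the page is reused last. */
    static void add_to_tail(struct freelist *fl, struct page *p)
    {
        p->prev = fl->head.prev;
        p->next = &fl->head;
        fl->head.prev->next = p;
        fl->head.prev = p;
    }

    /* Allocation always takes from the head of the list. */
    static struct page *alloc_from_head(struct freelist *fl)
    {
        struct page *p = fl->head.next;

        if (p == &fl->head)
            return NULL;
        p->prev->next = p->next;
        p->next->prev = p->prev;
        return p;
    }

    int main(void)
    {
        struct freelist fl;
        struct page a = { .id = 1 }, b = { .id = 2 }, c = { .id = 3 };

        freelist_init(&fl);
        add_to_head(&fl, &a);
        add_to_head(&fl, &b);
        add_to_tail(&fl, &c);    /* e.g. a putback after isolation */

        assert(alloc_from_head(&fl)->id == 2);    /* most recent free first */
        assert(alloc_from_head(&fl)->id == 1);
        assert(alloc_from_head(&fl)->id == 3);    /* tail-placed page last */
        return 0;
    }
    ```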
    
    This change affects two users:
    - free page reporting
    - page isolation, when undoing the isolation (including memory onlining).
    
    This behavior is desirable for pages that haven't really been touched
    lately, so exactly the two users that don't actually read/write page
    content, but rather move untouched pages.
    
    The new behavior is especially desirable for memory onlining, where we
    allow allocation of newly onlined pages via undo_isolate_page_range() in
    online_pages().  Right now, we always place them to the head of the
    freelist, resulting in undesirable behavior: Assume we add individual
    memory chunks via add_memory() and online them right away to the NORMAL
    zone.  We create a dependency chain of unmovable allocations e.g., via the
    memmap.  The memmap of the next chunk will be placed onto previous chunks
    - if the last block cannot get offlined+removed, all dependent ones cannot
    get offlined+removed.  While this can already be observed with individual
    DIMMs, it's more of an issue for virtio-mem (and I suspect also ppc
    DLPAR).
    
    Document that this should only be used for optimizations; no code
    should rely on this behavior for correctness (in case the order of the
    freelists ever changes).
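    
    The resulting head/tail decision can be sketched as a simplified,
    standalone model.  The FPI_TO_TAIL flag and the helper names follow the
    kernel sources, but the bodies below are stubs and MAX_ORDER is
    hardcoded, so this is an illustration rather than the actual patch:

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* Simplified model of the head/tail choice in __free_one_page().
     * FPI_TO_TAIL forces tail placement; otherwise shuffling or the
     * buddy-merge heuristic decides. */
    #define MAX_ORDER 11

    typedef unsigned int fpi_t;
    #define FPI_NONE    ((fpi_t)0)
    #define FPI_TO_TAIL ((fpi_t)(1 << 1))

    static bool higher_buddy_free; /* knob standing in for the real buddy check */

    static bool is_shuffle_order(unsigned int order)
    {
        return false; /* stub: shuffling disabled */
    }

    static bool buddy_merge_likely(unsigned int order)
    {
        /* Mirrors the cutoff: the heuristic never picks the tail for
         * "order >= MAX_ORDER - 2". */
        if (order >= MAX_ORDER - 2)
            return false;
        return higher_buddy_free;
    }

    static bool place_to_tail(unsigned int order, fpi_t fpi_flags)
    {
        if (fpi_flags & FPI_TO_TAIL)
            return true;
        if (is_shuffle_order(order))
            return true; /* shuffle_pick_tail() in the real code */
        return buddy_merge_likely(order);
    }

    int main(void)
    {
        /* __putback_isolated_page() passes FPI_TO_TAIL: tail, at any order. */
        assert(place_to_tail(MAX_ORDER - 1, FPI_TO_TAIL));

        /* Without the flag, pageblock-sized putbacks land at the head. */
        higher_buddy_free = true;
        assert(!place_to_tail(MAX_ORDER - 1, FPI_NONE));
        return 0;
    }
    ```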
    
    We won't care about page shuffling: memory onlining already properly
    shuffles after onlining.  free page reporting doesn't care about
    physically contiguous ranges, and there are already cases where page
    isolation will simply move (physically close) free pages to (currently)
    the head of the freelists via move_freepages_block() instead of shuffling.
    If this ever becomes relevant, we should shuffle the whole zone when
    undoing isolation of larger ranges, and after free_contig_range().
    
    Signed-off-by: David Hildenbrand <david@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
    Reviewed-by: Oscar Salvador <osalvador@suse.de>
    Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
    Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Cc: Mel Gorman <mgorman@techsingularity.net>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Cc: Mike Rapoport <rppt@kernel.org>
    Cc: Scott Cheloha <cheloha@linux.ibm.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Cc: Haiyang Zhang <haiyangz@microsoft.com>
    Cc: "K. Y. Srinivasan" <kys@microsoft.com>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Michal Hocko <mhocko@kernel.org>
    Cc: Stephen Hemminger <sthemmin@microsoft.com>
    Cc: Wei Liu <wei.liu@kernel.org>
    Link: https://lkml.kernel.org/r/20201005121534.15649-3-david@redhat.com
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>