    numa: fix /proc/<pid>/numa_maps for THP · 28093f9f
    Gerald Schaefer authored
    In gather_pte_stats() a THP pmd is cast into a pte, which is wrong
    because the layouts may differ depending on the architecture.  On s390
    this will lead to inaccurate numa_maps accounting in /proc because of
    misguided pte_present() and pte_dirty() checks on the fake pte.
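    
    For illustration, the problematic pattern in gather_pte_stats() looks
    roughly like the fragment below (a simplified sketch, not the exact
    upstream code): the pmd entry is reinterpreted as a pte and then handed
    to the pte-level helpers.
    
        ptl = pmd_trans_huge_lock(pmd, vma);
        if (ptl) {
            /* reinterpret the pmd as a pte -- layouts may differ, e.g. on s390 */
            pte_t huge_pte = *(pte_t *)pmd;
            struct page *page;
    
            /* pte_present()/pte_dirty() end up running on the fake pte */
            page = can_gather_numa_stats(huge_pte, vma, addr);
            if (page)
                gather_stats(page, md, pte_dirty(huge_pte),
                             HPAGE_PMD_SIZE / PAGE_SIZE);
            spin_unlock(ptl);
            return 0;
        }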
    
    On other architectures, pte_present() and pte_dirty() may happen to work,
    but there can still be a problem with direct-access (dax) mappings without
    underlying struct pages when HAVE_PTE_SPECIAL is set and THP is available.
    In vm_normal_page() the fake pte is checked with pte_special(), and since
    a pmd has no "special" bit, that check always returns false, so the
    VM_PFNMAP | VM_MIXEDMAP handling is skipped.  For dax mappings without
    struct pages, an invalid struct page pointer is then returned, which can
    crash the kernel.
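    
    A condensed sketch of the HAVE_PTE_SPECIAL path (the _sketch name is
    mine, and zero-page and error handling are omitted) shows why the fake
    pte sails through:
    
    struct page *vm_normal_page_sketch(struct vm_area_struct *vma,
                                       unsigned long addr, pte_t pte)
    {
        unsigned long pfn = pte_pfn(pte);
    
        if (unlikely(pte_special(pte))) {
            /* special mapping (VM_PFNMAP/VM_MIXEDMAP): no struct page */
            return NULL;
        }
        /*
         * A "pte" fabricated from a THP pmd never has the special bit,
         * so it always takes this path, and pfn_to_page() can yield a
         * bogus pointer for a dax pfn that has no struct page behind it.
         */
        return pfn_to_page(pfn);
    }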
    
    This patch fixes the numa_maps THP handling by introducing new "_pmd"
    variants of the can_gather_numa_stats() and vm_normal_page() functions.
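    
    A condensed sketch of what the new pmd-level helper can look like (close
    to, but not verbatim, the actual patch): it sticks to pmd_* accessors
    throughout and relies on vm_normal_page_pmd(), which, lacking a
    pmd_special() bit, replicates the !HAVE_PTE_SPECIAL checks of
    vm_normal_page() at the pmd level.
    
    #ifdef CONFIG_TRANSPARENT_HUGEPAGE
    static struct page *can_gather_numa_stats_pmd(pmd_t pmd,
                                                  struct vm_area_struct *vma,
                                                  unsigned long addr)
    {
        struct page *page;
        int nid;
    
        if (!pmd_present(pmd))
            return NULL;
    
        /* pmd-aware lookup: handles VM_PFNMAP/VM_MIXEDMAP without pte_special() */
        page = vm_normal_page_pmd(vma, addr, pmd);
        if (!page || PageReserved(page))
            return NULL;
    
        nid = page_to_nid(page);
        if (!node_isset(nid, node_states[N_MEMORY]))
            return NULL;
    
        return page;
    }
    #endif
    
    The THP branch in gather_pte_stats() can then call this helper and use
    pmd_dirty(*pmd) instead of running pte_dirty() on a fake pte.
    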
    Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
    Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
    Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
    Cc: Konstantin Khlebnikov <koct9i@gmail.com>
    Cc: Michal Hocko <mhocko@suse.com>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Cc: Jerome Marchand <jmarchan@redhat.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Dave Hansen <dave.hansen@intel.com>
    Cc: Mel Gorman <mgorman@suse.de>
    Cc: Dan Williams <dan.j.williams@intel.com>
    Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
    Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
    Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
    Cc: <stable@vger.kernel.org>	[4.3+]
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>