15 Mar, 2016 (40 commits)
    • mm: memcontrol: generalize locking for the page->mem_cgroup binding · 81f8c3a4
      Johannes Weiner authored
      These patches tag the page cache radix tree eviction entries with the
      memcg an evicted page belonged to, thus making per-cgroup LRU reclaim
      work properly and be as adaptive to new cache workingsets as global
      reclaim already is.
      
      This should have been part of the original thrash detection patch
      series, but was deferred due to the complexity of those patches.
      
      This patch (of 5):
      
      So far the only sites that needed to exclude charge migration to
      stabilize page->mem_cgroup have been per-cgroup page statistics, hence
      the name mem_cgroup_begin_page_stat().  But per-cgroup thrash detection
      will add another site that needs to ensure page->mem_cgroup lifetime.
      
      Rename these locking functions to the more generic lock_page_memcg() and
      unlock_page_memcg().  Since charge migration is a cgroup1 feature only,
      we might be able to delete it at some point, along with these now
      easy-to-identify locking sites.
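
      For illustration, a minimal sketch of the renamed pairing (signatures
      simplified; the real prototypes follow the begin/end_page_stat() pair
      being renamed here):

        struct mem_cgroup *memcg;

        memcg = lock_page_memcg(page);    /* was mem_cgroup_begin_page_stat() */
        /* ... page->mem_cgroup is stable here; update per-memcg state ... */
        unlock_page_memcg(memcg);         /* was mem_cgroup_end_page_stat() */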
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Suggested-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      81f8c3a4
    • mm, vmscan: make zone_reclaimable_pages more precise · 0db2cb8d
      Michal Hocko authored
      zone_reclaimable_pages() is used by should_reclaim_retry() to calculate
      the target for the watermark check, so precise numbers are important
      for the correct decision.  zone_reclaimable_pages() uses
      zone_page_state(), which can contain stale data with per-cpu diffs not
      synced yet (the last vmstat_update might have run 1s in the past).
      
      Use zone_page_state_snapshot() in zone_reclaimable_pages() instead.
      None of the current callers is in a hot path where getting the precise
      value (which involves per-cpu iteration) would cause an unreasonable
      overhead.
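
      For reference, a hedged paraphrase of the snapshot variant (simplified
      from include/linux/vmstat.h): it folds in the per-cpu diffs that the
      cached counter may still be missing.

        static unsigned long zone_page_state_snapshot(struct zone *zone,
                                                      enum zone_stat_item item)
        {
                long x = atomic_long_read(&zone->vm_stat[item]);
        #ifdef CONFIG_SMP
                int cpu;

                for_each_online_cpu(cpu)
                        x += per_cpu_ptr(zone->pageset, cpu)->vm_stat_diff[item];
                if (x < 0)
                        x = 0;
        #endif
                return x;
        }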
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Suggested-by: David Rientjes <rientjes@google.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0db2cb8d
    • mm/madvise: update comment on sys_madvise() · d7206a70
      Naoya Horiguchi authored
      Some new MADV_* advice values are not documented in the sys_madvise()
      comment, so let's update it.
      
      [akpm@linux-foundation.org: modifications suggested by Michal]
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
      Cc: Jason Baron <jbaron@redhat.com>
      Cc: Chen Gong <gong.chen@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d7206a70
    • mm: vmscan: do not clear SHRINKER_NUMA_AWARE if nr_node_ids == 1 · cecf257b
      Vladimir Davydov authored
      Currently, on shrinker registration we clear SHRINKER_NUMA_AWARE if
      there is only one NUMA node present.  The comment states that this will
      allow us to save some small loop time later.  It used to be true when
      this code was added (see commit 1d3d4437 ("vmscan: per-node
      deferred work")), but since commit 6b4f7799 ("mm: vmscan: invoke
      slab shrinkers from shrink_zone()") it doesn't make any difference.
      Anyway, running on non-NUMA machine shouldn't make a shrinker NUMA
      unaware, so zap this hunk.
      Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      cecf257b
    • xen_balloon: support memory auto onlining policy · 703fc13a
      Vitaly Kuznetsov authored
      Add support for the newly added kernel memory auto-onlining policy to
      the Xen balloon driver.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Suggested-by: Daniel Kiper <daniel.kiper@oracle.com>
      Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
      Acked-by: David Vrabel <david.vrabel@citrix.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Kay Sievers <kay@vrfy.org>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      703fc13a
    • memory-hotplug: add automatic onlining policy for the newly added memory · 31bc3858
      Vitaly Kuznetsov authored
      Currently, all newly added memory blocks remain in the 'offline' state
      unless someone onlines them; some Linux distributions carry special udev
      rules like:
      
        SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"
      
      to make this happen automatically.  This is not a great solution for
      virtual machines where memory hotplug is being used to address high
      memory pressure: such onlining is slow, and the userspace process doing
      it (udev) has a chance of being killed by the OOM killer because it will
      probably need to allocate some memory.
      
      Introduce a default policy for newly added memory blocks via the
      /sys/devices/system/memory/auto_online_blocks file, with two possible
      values: "offline", which preserves the current behavior, and "online",
      which causes all newly added memory blocks to go online as soon as
      they're added.  The default is "offline".
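
      For illustration, a minimal sketch of where such a policy plugs in when
      a new memory block is registered (identifiers are illustrative, not
      necessarily the exact mainline ones):

        if (memhp_auto_online)              /* "online" written to auto_online_blocks */
                device_online(&mem->dev);   /* same effect as the udev rule above */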
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Daniel Kiper <daniel.kiper@oracle.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: "K. Y. Srinivasan" <kys@microsoft.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Kay Sievers <kay@vrfy.org>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      31bc3858
    • mm/memory.c: make apply_to_page_range() more robust · 9cb65bc3
      Mika Penttilä authored
      Arm and arm64 used to trigger this BUG_ON() - this has now been fixed.
      
      But a WARN_ON() here is sufficient to catch future buggy callers.
      Signed-off-by: Mika Penttilä <mika.penttila@nextfour.com>
      Reviewed-by: Pekka Enberg <penberg@kernel.org>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9cb65bc3
    • mm/mempolicy.c: skip VM_HUGETLB and VM_MIXEDMAP VMA for lazy mbind · 4355c018
      Liang Chen authored
      VM_HUGETLB and VM_MIXEDMAP VMAs need to be excluded to avoid compound
      pages being marked for migration and unexpected COWs when handling
      hugetlb faults.
      
      Thanks to Naoya Horiguchi for reminding me on these checks.
      Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
      Signed-off-by: Gavin Guo <gavin.guo@canonical.com>
      Suggested-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: SeongJae Park <sj38.park@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4355c018
    • mm/memory-failure.c: remove useless "undef"s · 0b94f175
      Wang Xiaoqiang authored
      Remove the useless #undef, since the corresponding #define has already
      been removed.
      Signed-off-by: Wang Xiaoqiang <wangxq10@lzu.edu.cn>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0b94f175
    • mm/madvise: pass return code of memory_failure() to userspace · 23a003bf
      Naoya Horiguchi authored
      Currently the return value of memory_failure() is not passed to
      userspace when madvise(MADV_HWPOISON) is used.  This is inconvenient for
      test programs that want to know the result of error handling.  So let's
      return it to the caller as we already do in the MADV_SOFT_OFFLINE case.
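
      For illustration, a hedged user-space sketch of such a test program,
      now able to observe the propagated result (needs root and
      CONFIG_MEMORY_FAILURE; the page really is poisoned, so only run it in
      a throwaway VM):

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #ifndef MADV_HWPOISON
        #define MADV_HWPOISON 100
        #endif

        int main(void)
        {
                long psz = sysconf(_SC_PAGESIZE);
                char *p = mmap(NULL, psz, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

                if (p == MAP_FAILED)
                        return 1;
                p[0] = 1;       /* make sure the page is populated */

                if (madvise(p, psz, MADV_HWPOISON)) {
                        /* errno now carries memory_failure()'s result */
                        perror("madvise(MADV_HWPOISON)");
                        return 1;
                }
                puts("page poisoned as requested");
                return 0;
        }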
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Cc: Chen Gong <gong.chen@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      23a003bf
    • mm, sl[au]b: print gfp_flags as strings in slab_out_of_memory() · 5b3810e5
      Vlastimil Babka authored
      We can now print gfp_flags in a more human-readable form.  Make use of
      this in slab_out_of_memory() for SLUB and SLAB.  Also convert the SLAB
      variant to pr_warn() along the way.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5b3810e5
    • mm/page_poisoning.c: allow for zero poisoning · 1414c7f4
      Laura Abbott authored
      By default, page poisoning uses a poison value (0xaa) on free.  If this
      is changed to 0, the page is not only sanitized but zeroing on alloc
      with __GFP_ZERO can be skipped as well.  The tradeoff is that corruption
      from the poisoning is harder to detect.  This feature also cannot be
      used with hibernation since pages are not guaranteed to be zeroed after
      hibernation.
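
      A minimal sketch of the alloc-side shortcut this enables (helper and
      Kconfig names as proposed here; treat the snippet as illustrative):

        static inline bool free_pages_prezeroed(bool poisoned)
        {
                return IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
                        page_poisoning_enabled() && poisoned;
        }

        /* prep_new_page() can then skip clearing for __GFP_ZERO when true. */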
      
      Credit to Grsecurity/PaX team for inspiring this work
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Acked-by: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mathias Krause <minipli@googlemail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Jianyu Zhan <nasa4836@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1414c7f4
    • mm/page_poison.c: enable PAGE_POISONING as a separate option · 8823b1db
      Laura Abbott authored
      Page poisoning is currently set up as a feature if architectures don't
      have architecture debug page_alloc to allow unmapping of pages.  It has
      uses apart from that though.  Clearing the pages on free provides an
      increase in security as it helps to limit the risk of information leaks.
      Allow page poisoning to be enabled as a separate option independent of
      kernel_map_pages since the two features do separate work.  Because of
      how hibernation is implemented, the checks on alloc cannot occur if
      hibernation is enabled.  The runtime alloc checks can also be enabled
      with an option when !HIBERNATION.
      
      Credit to Grsecurity/PaX team for inspiring this work
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mathias Krause <minipli@googlemail.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Jianyu Zhan <nasa4836@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      8823b1db
    • mm, debug: move bad flags printing to bad_page() · ff8e8116
      Vlastimil Babka authored
      Since bad_page() is the only user of the badflags parameter of
      dump_page_badflags(), we can move the code to bad_page() and simplify a
      bit.
      
      The dump_page_badflags() function is renamed to __dump_page() and can
      still be called separately from dump_page() for temporary debug prints
      where page_owner info is not desired.
      
      The only user-visible change is that page->mem_cgroup is printed before
      the bad flags.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ff8e8116
    • mm, page_owner: dump page owner info from dump_page() · 4e462112
      Vlastimil Babka authored
      The page_owner mechanism is useful for dealing with memory leaks.  By
      reading /sys/kernel/debug/page_owner one can determine the stack traces
      leading to allocations of all pages, and find e.g.  a buggy driver.
      
      This information might also be useful for debugging, for example in the
      VM_BUG_ON_PAGE() calls to dump_page().  So let's print the stored info
      from dump_page().
      
      Example output:
      
        page:ffffea000292f1c0 count:1 mapcount:0 mapping:ffff8800b2f6cc18 index:0x91d
        flags: 0x1fffff8001002c(referenced|uptodate|lru|mappedtodisk)
        page dumped because: VM_BUG_ON_PAGE(1)
        page->mem_cgroup:ffff8801392c5000
        page allocated via order 0, migratetype Movable, gfp_mask 0x24213ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD|__GFP_NOWARN|__GFP_NORETRY)
         [<ffffffff811682c4>] __alloc_pages_nodemask+0x134/0x230
         [<ffffffff811b40c8>] alloc_pages_current+0x88/0x120
         [<ffffffff8115e386>] __page_cache_alloc+0xe6/0x120
         [<ffffffff8116ba6c>] __do_page_cache_readahead+0xdc/0x240
         [<ffffffff8116bd05>] ondemand_readahead+0x135/0x260
         [<ffffffff8116be9c>] page_cache_async_readahead+0x6c/0x70
         [<ffffffff811604c2>] generic_file_read_iter+0x3f2/0x760
         [<ffffffff811e0dc7>] __vfs_read+0xa7/0xd0
        page has been migrated, last migrate reason: compaction
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4e462112
    • mm, page_owner: track and print last migrate reason · 7cd12b4a
      Vlastimil Babka authored
      During migration, page_owner info is now copied with the rest of the
      page, so the stacktrace leading to free page allocation during migration
      is overwritten.  For debugging purposes, however, it might be useful to
      know that the page has been migrated since its initial allocation.  This
      might happen many times during the page's lifetime for different reasons,
      and fully tracking this, especially with stacktraces, would incur extra
      memory costs.  As a compromise, store and print the migrate_reason of
      the last migration that occurred to the page.  This is enough to
      distinguish compaction, numa balancing etc.
      
      Example page_owner entry after the patch:
      
        Page allocated via order 0, mask 0x24200ca(GFP_HIGHUSER_MOVABLE)
        PFN 628753 type Movable Block 1228 type Movable Flags 0x1fffff80040030(dirty|lru|swapbacked)
         [<ffffffff811682c4>] __alloc_pages_nodemask+0x134/0x230
         [<ffffffff811b6325>] alloc_pages_vma+0xb5/0x250
         [<ffffffff81177491>] shmem_alloc_page+0x61/0x90
         [<ffffffff8117a438>] shmem_getpage_gfp+0x678/0x960
         [<ffffffff8117c2b9>] shmem_fallocate+0x329/0x440
         [<ffffffff811de600>] vfs_fallocate+0x140/0x230
         [<ffffffff811df434>] SyS_fallocate+0x44/0x70
         [<ffffffff8158cc2e>] entry_SYSCALL_64_fastpath+0x12/0x71
        Page has been migrated, last migrate reason: compaction
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7cd12b4a
    • mm, page_owner: copy page owner info during migration · d435edca
      Vlastimil Babka authored
      The page_owner mechanism stores the gfp_flags of an allocation and the
      stack trace that led to it.  During page migration, the original
      information is practically replaced by that of the free page allocated
      as the migration target.  Arguably this is less useful and might lead to
      all the page_owner info for migratable pages gradually converging
      towards compaction or numa balancing migrations.  It has also led to
      inaccuracies such as the one fixed by commit e2cfc911 ("mm/page_owner:
      set correct gfp_mask on page_owner").
      
      This patch thus introduces copying the page_owner info during migration.
      However, since the fact that the page has been migrated from its
      original place might be useful for debugging, the next patch will
      introduce a way to track that information as well.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d435edca
    • mm, page_owner: convert page_owner_inited to static key · 7dd80b8a
      Vlastimil Babka authored
      CONFIG_PAGE_OWNER attempts to impose negligible runtime overhead when it
      is enabled at compile time but not actually enabled at runtime by the
      page_owner=on boot parameter.  This overhead can be further reduced
      using the static key mechanism, which is what this patch does.
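
      A minimal sketch of the pattern (simplified; the hook name mirrors the
      existing page_owner entry points):

        DEFINE_STATIC_KEY_FALSE(page_owner_inited);

        static inline void set_page_owner(struct page *page,
                                          unsigned int order, gfp_t gfp_mask)
        {
                /* patched-out branch unless page_owner=on was given at boot */
                if (static_branch_unlikely(&page_owner_inited))
                        __set_page_owner(page, order, gfp_mask);
        }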
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7dd80b8a
    • mm, page_owner: print migratetype of page and pageblock, symbolic flags · 60f30350
      Vlastimil Babka authored
      The information in /sys/kernel/debug/page_owner includes the migratetype
      of the pageblock the page belongs to.  This is also checked against the
      page's migratetype (as declared by gfp_flags during its allocation), and
      the page is reported as Fallback if its migratetype differs from the
      pageblock's.  This is somewhat misleading because in fact fallback
      allocation is not the only reason why these two can differ.  It also
      doesn't directly provide the page's migratetype, although it's possible
      to derive that from the gfp_flags.
      
      It's arguably better to print both page and pageblock's migratetype and
      leave the interpretation to the consumer than to suggest fallback
      allocation as the only possible reason.  While at it, we can print the
      migratetypes as strings the same way as /proc/pagetypeinfo does, as some
      of the numeric values depend on kernel configuration.  For that, this
      patch moves the migratetype_names array from #ifdef CONFIG_PROC_FS part
      of mm/vmstat.c to mm/page_alloc.c and exports it.
      
      With the new format strings for flags, we can now also provide symbolic
      page and gfp flags in the /sys/kernel/debug/page_owner file.  This
      replaces the positional printing of page flags as single letters, which
      might have looked nicer, but was limited to a subset of flags, and
      required the user to remember the letters.
      
      Example page_owner entry after the patch:
      
        Page allocated via order 0, mask 0x24213ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD|__GFP_NOWARN|__GFP_NORETRY)
        PFN 520 type Movable Block 1 type Movable Flags 0xfffff8001006c(referenced|uptodate|lru|active|mappedtodisk)
         [<ffffffff811682c4>] __alloc_pages_nodemask+0x134/0x230
         [<ffffffff811b4058>] alloc_pages_current+0x88/0x120
         [<ffffffff8115e386>] __page_cache_alloc+0xe6/0x120
         [<ffffffff8116ba6c>] __do_page_cache_readahead+0xdc/0x240
         [<ffffffff8116bd05>] ondemand_readahead+0x135/0x260
         [<ffffffff8116bfb1>] page_cache_sync_readahead+0x31/0x50
         [<ffffffff81160523>] generic_file_read_iter+0x453/0x760
         [<ffffffff811e0d57>] __vfs_read+0xa7/0xd0
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      60f30350
    • mm, oom: print symbolic gfp_flags in oom warning · a0795cd4
      Vlastimil Babka authored
      It would be useful to translate gfp_flags into string representation
      when printing in case of an OOM, especially as the flags have been
      undergoing some changes recently and the script ./scripts/gfp-translate
      needs a matching source version to be accurate.
      
      Example output:
      
        a.out invoked oom-killer: gfp_mask=0x24280ca(GFP_HIGHUSER_MOVABLE|GFP_ZERO), order=0, oom_score_adj=0
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a0795cd4
    • mm, page_alloc: print symbolic gfp_flags on allocation failure · c5c990e8
      Vlastimil Babka authored
      It would be useful to translate gfp_flags into string representation
      when printing in case of an allocation failure, especially as the flags
      have been undergoing some changes recently and the script
      ./scripts/gfp-translate needs a matching source version to be accurate.
      
      Example output:
      
        stapio: page allocation failure: order:9, mode:0x2080020(GFP_ATOMIC)
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c5c990e8
    • mm, debug: replace dump_flags() with the new printk formats · b8eceeb9
      Vlastimil Babka authored
      With the new printk format strings for flags, we can get rid of
      dump_flags() in mm/debug.c.
      
      This also fixes dump_vma() which used dump_flags() for printing vma
      flags.  However dump_flags() did a page-flags specific filtering of bits
      higher than NR_PAGEFLAGS in order to remove the zone id part.  For
      dump_vma() this resulted in removing several VM_* flags from the
      symbolic translation.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b8eceeb9
    • mm, printk: introduce new format string for flags · edf14cdb
      Vlastimil Babka authored
      In mm we use several kinds of flags bitfields that are sometimes printed
      for debugging purposes, or exported to userspace via sysfs.  To make
      them easier to interpret independently of kernel version and config, we
      also want to dump the symbolic flag names.  So far this has been done
      with repeated calls to pr_cont(), which is unreliable on SMP, and not
      usable for e.g. sysfs export.
      
      To get a more reliable and universal solution, this patch extends
      printk() format string for pointers to handle the page flags (%pGp),
      gfp_flags (%pGg) and vma flags (%pGv).  Existing users of
      dump_flag_names() are converted and simplified.
      
      It would be possible to pass flags by value instead of pointer, but the
      %p format string for pointers already has extensions for various kernel
      structures, so it's a good fit, and the extra indirection in a
      non-critical path is negligible.
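
      Hedged usage sketch of the new specifiers (note that the flags are
      passed by reference, as discussed above):

        pr_alert("page flags: %pGp\n", &page->flags);
        pr_alert("gfp_mask:   %pGg\n", &gfp_mask);
        pr_alert("vma flags:  %pGv\n", &vma->vm_flags);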
      
      [linux@rasmusvillemoes.dk: lots of good implementation suggestions]
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      edf14cdb
    • mm, tracing: unify mm flags handling in tracepoints and printk · 420adbe9
      Vlastimil Babka authored
      In tracepoints, it's possible to print gfp flags in a human-friendly
      format through a macro show_gfp_flags(), which defines a translation
      array and passes it to __print_flags().  Since the following patch will
      introduce support for gfp flags printing in printk(), it would be nice
      to reuse the array.  This is not straightforward, since __print_flags()
      can't simply reference an array defined in a .c file such as mm/debug.c
      - it has to be a macro to allow the macro magic to communicate the
      format to userspace tools such as trace-cmd.
      
      The solution is to create a macro __def_gfpflag_names which is used both
      in show_gfp_flags(), and to define the gfpflag_names[] array in
      mm/debug.c.
      
      On the other hand, mm/debug.c also defines translation tables for page
      flags and vma flags, and desire was expressed (but not implemented in
      this series) to use these also from tracepoints.  Thus, this patch also
      renames the events/gfpflags.h file to events/mmflags.h and moves the
      table definitions there, using the same macro approach as for gfpflags.
      This allows translating all three kinds of mm-specific flags both in
      tracepoints and printk.
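
      A condensed sketch of the shared-definition approach (entries
      abbreviated; the exact table layout may differ slightly):

        /* include/trace/events/mmflags.h */
        #define __def_gfpflag_names                                     \
                {(unsigned long)GFP_KERNEL,     "GFP_KERNEL"},          \
                {(unsigned long)GFP_ATOMIC,     "GFP_ATOMIC"},          \
                {(unsigned long)__GFP_HIGH,     "__GFP_HIGH"}

        #define show_gfp_flags(flags)                                   \
                (flags) ? __print_flags(flags, "|", __def_gfpflag_names) : "none"

        /* mm/debug.c: the same list backs printk's gfp translation table */
        const struct trace_print_flags gfpflag_names[] = {
                __def_gfpflag_names
        };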
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Michal Hocko <mhocko@suse.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      420adbe9
    • tools, perf: make gfp_compact_table up to date · 14e0a214
      Vlastimil Babka authored
      When updating tracing's show_gfp_flags() I have noticed that perf's
      gfp_compact_table is also outdated.  Fill in the missing flags and place
      a note in gfp.h to increase chance that future updates are synced.
      Convert the __GFP_X flags from "GFP_X" to "__GFP_X" strings in line with
      the previous patch.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      14e0a214
    • mm, tracing: make show_gfp_flags() up to date · 1f7866b4
      Vlastimil Babka authored
      The show_gfp_flags() macro provides human-friendly printing of gfp flags
      in tracepoints.  However, it is somewhat out of date and missing several
      flags.  This patch fills in the missing flags, and distinguishes
      properly between GFP_ATOMIC and __GFP_ATOMIC, which were both translated
      to "GFP_ATOMIC".  More generally, all __GFP_X flags which were
      previously printed as GFP_X are now printed as __GFP_X, since omitting
      the underscores results in output that doesn't actually match the source
      code, and can only lead to confusion.  Where both variants are defined
      equal (e.g. _DMA and _DMA32), the variant without underscores is
      preferred.
      
      Also add a note in gfp.h so hopefully future changes will be synced
      better.
      
      __GFP_MOVABLE is defined twice in include/linux/gfp.h with different
      comments.  Leave just the newer one, which was intended to replace the
      old one.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Michal Hocko <mhocko@suse.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1f7866b4
    • tracepoints: move trace_print_flags definitions to tracepoint-defs.h · 20f6e03a
      Vlastimil Babka authored
      The following patch will need to declare array of struct
      trace_print_flags in a header.  To prevent this header from pulling in
      all of RCU through trace_events.h, move the struct
      trace_print_flags{_64} definitions to the new lightweight
      tracepoint-defs.h header.
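
      The definitions being moved are tiny (as they appear in the tracing
      headers), which is what makes a lightweight header viable:

        struct trace_print_flags {
                unsigned long           mask;
                const char              *name;
        };

        struct trace_print_flags_u64 {
                unsigned long long      mask;
                const char              *name;
        };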
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      20f6e03a
    • mm: filemap: avoid unnecessary calls to lock_page when waiting for IO to complete during a read · ebded027
      Mel Gorman authored
      In the generic read paths the kernel looks up a page in the page cache
      and if it's up to date, it is used.  If not, the page lock is acquired
      to wait for IO to complete and then check the page.  If multiple
      processes are waiting on IO, they all serialise against the lock and
      duplicate the checks.  This is unnecessary.
      
      The page lock in itself does not give any guarantees to the callers
      about the page state as it can be immediately truncated or reclaimed
      after the page is unlocked.  It's sufficient to wait_on_page_locked and
      then continue if the page is up to date on wakeup.
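
      A minimal sketch of the reworked wait in the generic read path
      (simplified, error handling trimmed; labels as in do_generic_file_read):

        if (!PageUptodate(page)) {
                /* wait for whoever holds the lock for IO, without taking it */
                wait_on_page_locked(page);
                if (PageUptodate(page))
                        goto page_ok;
                /* still stale: fall back to the old lock_page() slow path */
                error = lock_page_killable(page);
                if (unlikely(error))
                        goto readpage_error;
        }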
      
      It is possible that a truncated but up-to-date page is returned but the
      reference taken during read prevents it disappearing underneath the
      caller and the data is still valid if PageUptodate.
      
      The overall impact is small as even if processes serialise on the lock,
      the lock section is tiny once the IO is complete.  Profiles indicated
      that unlock_page and friends are generally a tiny portion of a
      read-intensive workload.  An artificial test was created that had
      instances of dd access a cache-cold file on an ext4 filesystem and
      measure how long the read took.
      
      paralleldd
                                          4.4.0                 4.4.0
                                        vanilla             avoidlock
      Amean    Elapsd-1          5.28 (  0.00%)        5.15 (  2.50%)
      Amean    Elapsd-4          5.29 (  0.00%)        5.17 (  2.12%)
      Amean    Elapsd-7          5.28 (  0.00%)        5.18 (  1.78%)
      Amean    Elapsd-12         5.20 (  0.00%)        5.33 ( -2.50%)
      Amean    Elapsd-21         5.14 (  0.00%)        5.21 ( -1.41%)
      Amean    Elapsd-30         5.30 (  0.00%)        5.12 (  3.38%)
      Amean    Elapsd-48         5.78 (  0.00%)        5.42 (  6.21%)
      Amean    Elapsd-79         6.78 (  0.00%)        6.62 (  2.46%)
      Amean    Elapsd-110        9.09 (  0.00%)        8.99 (  1.15%)
      Amean    Elapsd-128       10.60 (  0.00%)       10.43 (  1.66%)
      
      The impact is small but intuitively, it makes sense to avoid unnecessary
      calls to lock_page.
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ebded027
    • mm: filemap: remove redundant code in do_read_cache_page · 32b63529
      Mel Gorman authored
      do_read_cache_page and __read_cache_page duplicate page filler code when
      filling the page for the first time.  This patch simply removes the
      duplicate logic.
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      32b63529
    • mm: fix two typos in comments for to_vmem_altmap() · 07061aab
      Andreas Ziegler authored
      Commit 4b94ffdc ("x86, mm: introduce vmem_altmap to augment
      vmemmap_populate()"), introduced the to_vmem_altmap() function.
      
      The comments in this function contain two typos (one misspelling of the
      Kconfig option CONFIG_SPARSEMEM_VMEMMAP, and one missing letter 'n');
      let's fix them up.
      Signed-off-by: Andreas Ziegler <andreas.ziegler@fau.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      07061aab
    • mm/debug_pagealloc: ask users for default setting of debug_pagealloc · ea6eabb0
      Christian Borntraeger authored
      Since commit 031bc574 ("mm/debug-pagealloc: make debug-pagealloc
      boottime configurable"), CONFIG_DEBUG_PAGEALLOC by default does not add
      any page debugging.
      
      This resulted in several unnoticed bugs, e.g.
      
          https://lkml.kernel.org/g/<569F5E29.3090107@de.ibm.com>
      or
          https://lkml.kernel.org/g/<56A20F30.4050705@de.ibm.com>
      
      as this behaviour change was not even documented in Kconfig.
      
      Let's provide a new Kconfig symbol that allows changing the default
      back to enabled, e.g. for debug kernels.  This also makes the change
      obvious to kernel packagers.
      
      Let's also change the Kconfig description for CONFIG_DEBUG_PAGEALLOC, to
      indicate that there are two stages of overhead.
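
      A minimal sketch of how such a default can feed the existing boot-time
      switch (illustrative; the debug_pagealloc= parameter still overrides it):

        bool _debug_pagealloc_enabled __read_mostly =
                        IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT);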
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ea6eabb0
    • mm/page-writeback: fix dirty_ratelimit calculation · d59b1087
      Andrey Ryabinin authored
      The calculation of dirty_ratelimit is sometimes incorrect.  E.g. the
      initial values dirty_ratelimit == INIT_BW and step == 0 lead to the
      following result:
      
         UBSAN: Undefined behaviour in ../mm/page-writeback.c:1286:7
         shift exponent 25600 is too large for 64-bit type 'long unsigned int'
      
      The fix is straightforward - make step 0 if the shift exponent is too
      big.
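
      A hedged sketch of the guard (not the literal diff; the surrounding
      ratelimit scaling is omitted):

        unsigned long shift = dirty_ratelimit / (2 * step + 1);

        if (shift >= BITS_PER_LONG)     /* shifting this far would be undefined */
                step = 0;
        else
                step >>= shift;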
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d59b1087
    • mm/page_alloc.c: rework code layout in memmap_init_zone() · b72d0ffb
      Andrew Morton authored
      This function is getting full of weird tricks to avoid word-wrapping.
      Use a goto to eliminate a tab stop, then use the new space.
      
      Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b72d0ffb
    • mm/page_alloc.c: introduce kernelcore=mirror option · 342332e6
      Taku Izumi authored
      This patch extends the existing "kernelcore" option and introduces a
      kernelcore=mirror option.  By specifying "mirror" instead of the amount
      of memory, the non-mirrored (non-reliable) region will be arranged into
      ZONE_MOVABLE.
      
      [akpm@linux-foundation.org: fix build with CONFIG_HAVE_MEMBLOCK_NODE_MAP=n]
      Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
      Tested-by: Sudeep Holla <sudeep.holla@arm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      342332e6
    • mm/page_alloc.c: calculate zone_start_pfn at zone_spanned_pages_in_node() · d91749c1
      Taku Izumi authored
      Xeon E7 v3 based systems support Address Range Mirroring, and a UEFI
      BIOS compliant with UEFI spec 2.5 can notify which ranges are mirrored
      (reliable) via the EFI memory map.  The Linux kernel now utilizes this
      information and allocates boot-time memory from the reliable region.
      
      My requirement is:
        - allocate kernel memory from mirrored region
        - allocate user memory from non-mirrored region
      
      In order to meet my requirement, ZONE_MOVABLE is useful.  By arranging
      the non-mirrored range into ZONE_MOVABLE, mirrored memory is used for
      kernel allocations.

      My idea is to extend the existing "kernelcore" option and introduce a
      kernelcore=mirror option.  By specifying "mirror" instead of the amount
      of memory, the non-mirrored region will be arranged into ZONE_MOVABLE.
      
      Earlier discussions are at:
       https://lkml.org/lkml/2015/10/9/24
       https://lkml.org/lkml/2015/10/15/9
       https://lkml.org/lkml/2015/11/27/18
       https://lkml.org/lkml/2015/12/8/836
      
      For example, suppose 2-nodes system with the following memory range:
      
        node 0 [mem 0x0000000000001000-0x000000109fffffff]
        node 1 [mem 0x00000010a0000000-0x000000209fffffff]
      and the following ranges are marked as reliable (mirrored):
        [0x0000000000000000-0x0000000100000000]
        [0x0000000100000000-0x0000000180000000]
        [0x0000000800000000-0x0000000880000000]
        [0x00000010a0000000-0x0000001120000000]
        [0x00000017a0000000-0x0000001820000000]
      
      If you specify kernelcore=mirror, ZONE_NORMAL and ZONE_MOVABLE are
      arranged as below:
      
       - node 0:
        ZONE_NORMAL : [0x0000000100000000-0x00000010a0000000]
        ZONE_MOVABLE: [0x0000000180000000-0x00000010a0000000]
       - node 1:
        ZONE_NORMAL : [0x00000010a0000000-0x00000020a0000000]
        ZONE_MOVABLE: [0x0000001120000000-0x00000020a0000000]
      
      In overlapped range, pages to be ZONE_MOVABLE in ZONE_NORMAL are treated
      as absent pages, and vice versa.
      
      This patch (of 2):
      
      Currently each zone's zone_start_pfn is calculated at
      free_area_init_core().  However, the zone's range is already fixed at
      the time zone_spanned_pages_in_node() is invoked.
      
      This patch changes how each zone->zone_start_pfn is calculated in
      zone_spanned_pages_in_node().
      Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Sudeep Holla <sudeep.holla@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d91749c1
    • mm/slub: support left redzone · d86bd1be
      Joonsoo Kim authored
      SLUB already has a redzone debugging feature.  But it is only positioned
      at the end of the object (aka right redzone), so it cannot catch a left
      OOB.  Although the current object's right redzone acts as the left
      redzone of the next object, the first object in a slab cannot take
      advantage of this effect.  This patch explicitly adds a left redzone to
      each object to detect left OOB more precisely.
      
      Background:
      
      Someone complained to me that a left OOB is not caught even if KASAN is
      enabled, which does page allocation debugging.  That page is out of our
      control, so it would be allocated when the left OOB happens and, in this
      case, we can't find the OOB.  Moreover, the SLUB debugging feature can
      be enabled without page allocator debugging and, in this case, we will
      miss that OOB.

      Before trying to implement this, I expected that the changes would be
      too complex, but it doesn't look that complex to me now.  Almost all
      changes are applied to debug-specific functions, so I feel okay.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d86bd1be
    • slub: relax CMPXCHG consistency restrictions · 149daaf3
      Laura Abbott authored
      When debug options are enabled, cmpxchg on the page is disabled.  This
      is because the page must be locked to ensure there are no false
      positives when performing consistency checks.  Some debug options, such
      as poisoning and red zoning, only act on the object itself, so there is
      no need to protect other CPUs from modification of only the object.
      Allow cmpxchg to happen when just poisoning and red zoning are set on a
      slab.
      
      Credit to Mathias Krause for the original work which inspired this
      series
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mathias Krause <minipli@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      149daaf3
    • slub: convert SLAB_DEBUG_FREE to SLAB_CONSISTENCY_CHECKS · becfda68
      Laura Abbott authored
      SLAB_DEBUG_FREE allows expensive consistency checks at free to be turned
      on or off.  Expand its use to be able to turn off all consistency
      checks.  This gives a nice speed up if you only want features such as
      poisoning or tracing.
      
      Credit to Mathias Krause for the original work which inspired this
      series
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mathias Krause <minipli@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      becfda68
    • slub: fix/clean free_debug_processing return paths · 804aa132
      Laura Abbott authored
      Since commit 19c7ff9e ("slub: Take node lock during object free
      checks") check_object has been incorrectly returning success as it
      follows the out label which just returns the node.
      
      Thanks to refactoring, the out and fail paths are now basically the
      same.  Combine the two into one and just use a single label.
      
      Credit to Mathias Krause for the original work which inspired this
      series
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mathias Krause <minipli@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      804aa132