1. 15 Mar, 2016 40 commits
    • mm, page_owner: convert page_owner_inited to static key · 7dd80b8a
      Vlastimil Babka authored
      CONFIG_PAGE_OWNER attempts to impose negligible runtime overhead when it
      is enabled at compile time but not actually enabled at runtime via the
      boot param page_owner=on.  This overhead can be further reduced using
      the static key mechanism, which is what this patch does.
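
      A minimal sketch of the static key pattern (assuming the standard
      jump-label API; the key name below matches the one this patch introduces):

        #include <linux/jump_label.h>

        /* false by default; flipped only when page_owner=on is passed at boot */
        DEFINE_STATIC_KEY_FALSE(page_owner_inited);

        static void __init init_page_owner(void)
        {
                static_branch_enable(&page_owner_inited);
        }

        void __set_page_owner(struct page *page, unsigned int order, gfp_t gfp_mask)
        {
                /* compiles to a no-op jump until the key is enabled */
                if (!static_branch_unlikely(&page_owner_inited))
                        return;
                /* ... record order, gfp_mask and the stack trace ... */
        }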
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7dd80b8a
    • mm, page_owner: print migratetype of page and pageblock, symbolic flags · 60f30350
      Vlastimil Babka authored
      The information in /sys/kernel/debug/page_owner includes the migratetype
      of the pageblock the page belongs to.  This is also checked against the
      page's migratetype (as declared by gfp_flags during its allocation), and
      the page is reported as Fallback if its migratetype differs from the
      pageblock's.  This is somewhat misleading because in fact fallback
      allocation is not the only reason why these two can differ.  It also
      doesn't directly provide the page's migratetype, although it's possible
      to derive that from the gfp_flags.

      It's arguably better to print both the page's and the pageblock's
      migratetype and leave the interpretation to the consumer than to suggest
      fallback allocation as the only possible reason.  While at it, we can
      print the migratetypes as strings the same way as /proc/pagetypeinfo
      does, as some of the numeric values depend on kernel configuration.  For
      that, this patch moves the migratetype_names array from the #ifdef
      CONFIG_PROC_FS part of mm/vmstat.c to mm/page_alloc.c and exports it.
      
      With the new format strings for flags, we can now also provide symbolic
      page and gfp flags in the /sys/kernel/debug/page_owner file.  This
      replaces the positional printing of page flags as single letters, which
      might have looked nicer, but was limited to a subset of flags, and
      required the user to remember the letters.
      
      Example page_owner entry after the patch:
      
        Page allocated via order 0, mask 0x24213ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD|__GFP_NOWARN|__GFP_NORETRY)
        PFN 520 type Movable Block 1 type Movable Flags 0xfffff8001006c(referenced|uptodate|lru|active|mappedtodisk)
         [<ffffffff811682c4>] __alloc_pages_nodemask+0x134/0x230
         [<ffffffff811b4058>] alloc_pages_current+0x88/0x120
         [<ffffffff8115e386>] __page_cache_alloc+0xe6/0x120
         [<ffffffff8116ba6c>] __do_page_cache_readahead+0xdc/0x240
         [<ffffffff8116bd05>] ondemand_readahead+0x135/0x260
         [<ffffffff8116bfb1>] page_cache_sync_readahead+0x31/0x50
         [<ffffffff81160523>] generic_file_read_iter+0x453/0x760
         [<ffffffff811e0d57>] __vfs_read+0xa7/0xd0
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      60f30350
    • mm, oom: print symbolic gfp_flags in oom warning · a0795cd4
      Vlastimil Babka authored
      It would be useful to translate gfp_flags into string representation
      when printing in case of an OOM, especially as the flags have been
      undergoing some changes recently and the script ./scripts/gfp-translate
      needs a matching source version to be accurate.
      
      Example output:
      
        a.out invoked oom-killer: gfp_mask=0x24280ca(GFP_HIGHUSER_MOVABLE|GFP_ZERO), order=0, oom_score_adj=0
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a0795cd4
    • mm, page_alloc: print symbolic gfp_flags on allocation failure · c5c990e8
      Vlastimil Babka authored
      It would be useful to translate gfp_flags into string representation
      when printing in case of an allocation failure, especially as the flags
      have been undergoing some changes recently and the script
      ./scripts/gfp-translate needs a matching source version to be accurate.
      
      Example output:
      
        stapio: page allocation failure: order:9, mode:0x2080020(GFP_ATOMIC)
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c5c990e8
    • mm, debug: replace dump_flags() with the new printk formats · b8eceeb9
      Vlastimil Babka authored
      With the new printk format strings for flags, we can get rid of
      dump_flags() in mm/debug.c.
      
      This also fixes dump_vma(), which used dump_flags() for printing vma
      flags.  However, dump_flags() did page-flags-specific filtering of bits
      higher than NR_PAGEFLAGS in order to remove the zone id part.  For
      dump_vma() this resulted in several VM_* flags being dropped from the
      symbolic translation.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b8eceeb9
    • mm, printk: introduce new format string for flags · edf14cdb
      Vlastimil Babka authored
      In mm we use several kinds of flags bitfields that are sometimes printed
      for debugging purposes, or exported to userspace via sysfs.  To make
      them easier to interpret independently of kernel version and config, we
      also want to dump the symbolic flag names.  So far this has been done
      with repeated calls to pr_cont(), which is unreliable on SMP, and not
      usable for e.g.  sysfs export.
      
      To get a more reliable and universal solution, this patch extends
      printk() format string for pointers to handle the page flags (%pGp),
      gfp_flags (%pGg) and vma flags (%pGv).  Existing users of
      dump_flag_names() are converted and simplified.
      
      It would be possible to pass flags by value instead of pointer, but the
      %p format string for pointers already has extensions for various kernel
      structures, so it's a good fit, and the extra indirection in a
      non-critical path is negligible.
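
      For illustration, usage looks roughly like this (the flags are passed by
      pointer, as described above; the variable names are just examples):

        pr_alert("page flags: %pGp\n", &page->flags);
        pr_alert("gfp_mask:   %pGg\n", &gfp_mask);
        pr_alert("vm_flags:   %pGv\n", &vma->vm_flags);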
      
      [linux@rasmusvillemoes.dk: lots of good implementation suggestions]
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      edf14cdb
    • mm, tracing: unify mm flags handling in tracepoints and printk · 420adbe9
      Vlastimil Babka authored
      In tracepoints, it's possible to print gfp flags in a human-friendly
      format through a macro show_gfp_flags(), which defines a translation
      array and passes it to __print_flags().  Since the following patch will
      introduce support for gfp flags printing in printk(), it would be nice
      to reuse the array.  This is not straightforward, since __print_flags()
      can't simply reference an array defined in a .c file such as mm/debug.c
      - it has to be a macro to allow the macro magic to communicate the
      format to userspace tools such as trace-cmd.
      
      The solution is to create a macro __def_gfpflag_names which is used both
      in show_gfp_flags(), and to define the gfpflag_names[] array in
      mm/debug.c.
      
      On the other hand, mm/debug.c also defines translation tables for page
      flags and vma flags, and desire was expressed (but not implemented in
      this series) to use these also from tracepoints.  Thus, this patch also
      renames the events/gfpflags.h file to events/mmflags.h and moves the
      table definitions there, using the same macro approach as for gfpflags.
      This allows translating all three kinds of mm-specific flags both in
      tracepoints and printk.
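
      A rough sketch of the shared-macro approach (the real flag list in
      include/trace/events/mmflags.h is much longer; the entries shown here
      are only illustrative):

        /* include/trace/events/mmflags.h */
        #define __def_gfpflag_names                                     \
                {(unsigned long)GFP_KERNEL,     "GFP_KERNEL"},          \
                {(unsigned long)GFP_ATOMIC,     "GFP_ATOMIC"},          \
                {(unsigned long)__GFP_HIGHMEM,  "__GFP_HIGHMEM"}

        #define show_gfp_flags(flags)                                   \
                (flags) ? __print_flags(flags, "|", __def_gfpflag_names) : "none"

        /* mm/debug.c */
        const struct trace_print_flags gfpflag_names[] = {
                __def_gfpflag_names,
                {0, NULL}
        };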
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Michal Hocko <mhocko@suse.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      420adbe9
    • tools, perf: make gfp_compact_table up to date · 14e0a214
      Vlastimil Babka authored
      When updating tracing's show_gfp_flags() I noticed that perf's
      gfp_compact_table is also outdated.  Fill in the missing flags and place
      a note in gfp.h to increase the chance that future updates are synced.
      Convert the __GFP_X flags from "GFP_X" to "__GFP_X" strings, in line
      with the previous patch.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      14e0a214
    • mm, tracing: make show_gfp_flags() up to date · 1f7866b4
      Vlastimil Babka authored
      The show_gfp_flags() macro provides human-friendly printing of gfp flags
      in tracepoints.  However, it is somewhat out of date and missing several
      flags.  This patch fills in the missing flags, and distinguishes
      properly between GFP_ATOMIC and __GFP_ATOMIC, which were both translated
      to "GFP_ATOMIC".  More generally, all __GFP_X flags which were
      previously printed as GFP_X are now printed as __GFP_X, since omitting
      the underscores results in output that doesn't actually match the source
      code, and can only lead to confusion.  Where both variants are defined
      equal (e.g.  _DMA and _DMA32), the variant without underscores is
      preferred.
      
      Also add a note in gfp.h so hopefully future changes will be synced
      better.
      
      __GFP_MOVABLE is defined twice in include/linux/gfp.h with different
      comments.  Leave just the newer one, which was intended to replace the
      old one.
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Michal Hocko <mhocko@suse.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1f7866b4
    • tracepoints: move trace_print_flags definitions to tracepoint-defs.h · 20f6e03a
      Vlastimil Babka authored
      The following patch will need to declare array of struct
      trace_print_flags in a header.  To prevent this header from pulling in
      all of RCU through trace_events.h, move the struct
      trace_print_flags{_64} definitions to the new lightweight
      tracepoint-defs.h header.
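
      The moved definitions are small; roughly (shape shown for orientation):

        /* include/linux/tracepoint-defs.h */
        struct trace_print_flags {
                unsigned long   mask;
                const char      *name;
        };

        struct trace_print_flags_u64 {
                unsigned long long      mask;
                const char              *name;
        };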
      Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      20f6e03a
    • mm: filemap: avoid unnecessary calls to lock_page when waiting for IO to complete during a read · ebded027
      Mel Gorman authored
      In the generic read paths the kernel looks up a page in the page cache
      and, if it's up to date, uses it.  If not, the page lock is acquired
      to wait for IO to complete, and the page is then re-checked.  If multiple
      processes are waiting on IO, they all serialise against the lock and
      duplicate the checks.  This is unnecessary.
      
      The page lock in itself does not give any guarantees to the callers
      about the page state as it can be immediately truncated or reclaimed
      after the page is unlocked.  It's sufficient to wait_on_page_locked and
      then continue if the page is up to date on wakeup.
      
      It is possible that a truncated but up-to-date page is returned but the
      reference taken during read prevents it disappearing underneath the
      caller and the data is still valid if PageUptodate.
      
      The overall impact is small as even if processes serialise on the lock,
      the lock section is tiny once the IO is complete.  Profiles indicated
      that unlock_page and friends are generally a tiny portion of a
      read-intensive workload.  An artificial test was created that had
      instances of dd access a cache-cold file on an ext4 filesystem and
      measure how long the read took.
      
      paralleldd
                                          4.4.0                 4.4.0
                                        vanilla             avoidlock
      Amean    Elapsd-1          5.28 (  0.00%)        5.15 (  2.50%)
      Amean    Elapsd-4          5.29 (  0.00%)        5.17 (  2.12%)
      Amean    Elapsd-7          5.28 (  0.00%)        5.18 (  1.78%)
      Amean    Elapsd-12         5.20 (  0.00%)        5.33 ( -2.50%)
      Amean    Elapsd-21         5.14 (  0.00%)        5.21 ( -1.41%)
      Amean    Elapsd-30         5.30 (  0.00%)        5.12 (  3.38%)
      Amean    Elapsd-48         5.78 (  0.00%)        5.42 (  6.21%)
      Amean    Elapsd-79         6.78 (  0.00%)        6.62 (  2.46%)
      Amean    Elapsd-110        9.09 (  0.00%)        8.99 (  1.15%)
      Amean    Elapsd-128       10.60 (  0.00%)       10.43 (  1.66%)
      
      The impact is small but intuitively, it makes sense to avoid unnecessary
      calls to lock_page.
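
      A sketch of the resulting pattern (simplified; the retry label and exact
      structure are illustrative, not the literal filemap.c code):

        page = find_get_page(mapping, index);
        if (page && !PageUptodate(page)) {
                /* wait for in-flight IO without taking the page lock ourselves */
                wait_on_page_locked(page);
                if (!PageUptodate(page)) {
                        put_page(page);
                        goto retry;     /* fall back to the locked slow path */
                }
        }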
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ebded027
    • mm: filemap: remove redundant code in do_read_cache_page · 32b63529
      Mel Gorman authored
      do_read_cache_page and __read_cache_page duplicate page filler code when
      filling the page for the first time.  This patch simply removes the
      duplicate logic.
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      32b63529
    • mm: fix two typos in comments for to_vmem_altmap() · 07061aab
      Andreas Ziegler authored
      Commit 4b94ffdc ("x86, mm: introduce vmem_altmap to augment
      vmemmap_populate()"), introduced the to_vmem_altmap() function.
      
      The comments in this function contain two typos (one misspelling of the
      Kconfig option CONFIG_SPARSEMEM_VMEMMAP, and one missing letter 'n'),
      let's fix them up.
      Signed-off-by: Andreas Ziegler <andreas.ziegler@fau.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      07061aab
    • mm/debug_pagealloc: ask users for default setting of debug_pagealloc · ea6eabb0
      Christian Borntraeger authored
      Since commit 031bc574 ("mm/debug-pagealloc: make debug-pagealloc
      boottime configurable") CONFIG_DEBUG_PAGEALLOC is by default not adding
      any page debugging.
      
      This resulted in several unnoticed bugs, e.g.
      
          https://lkml.kernel.org/g/<569F5E29.3090107@de.ibm.com>
      or
          https://lkml.kernel.org/g/<56A20F30.4050705@de.ibm.com>
      
      as this behaviour change was not even documented in Kconfig.
      
      Let's provide a new Kconfig symbol that allows changing the default
      back to enabled, e.g.  for debug kernels.  This also makes the change
      obvious to kernel packagers.
      
      Let's also change the Kconfig description for CONFIG_DEBUG_PAGEALLOC, to
      indicate that there are two stages of overhead.
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ea6eabb0
    • mm/page-writeback: fix dirty_ratelimit calculation · d59b1087
      Andrey Ryabinin authored
      The calculation of dirty_ratelimit is sometimes not correct.  E.g. the
      initial values of dirty_ratelimit == INIT_BW and step == 0 lead to the
      following result:
      
         UBSAN: Undefined behaviour in ../mm/page-writeback.c:1286:7
         shift exponent 25600 is too large for 64-bit type 'long unsigned int'
      
      The fix is straightforward - make step 0 if the shift exponent is too
      big.
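
      A sketch of the guard (variable names follow the description above; not
      the literal diff):

        unsigned long shift = dirty_ratelimit / (2 * step + 1);

        if (shift < BITS_PER_LONG)
                step >>= shift;
        else
                step = 0;       /* shifting by >= the word size is undefined */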
      Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d59b1087
    • mm/page_alloc.c: rework code layout in memmap_init_zone() · b72d0ffb
      Andrew Morton authored
      This function is getting full of weird tricks to avoid word-wrapping.
      Use a goto to eliminate a tab stop, then use the newly available space.
      
      Cc: Taku Izumi <izumi.taku@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b72d0ffb
    • mm/page_alloc.c: introduce kernelcore=mirror option · 342332e6
      Taku Izumi authored
      This patch extends the existing "kernelcore" option and introduces the
      kernelcore=mirror option.  By specifying "mirror" instead of an amount
      of memory, the non-mirrored (non-reliable) regions will be arranged
      into ZONE_MOVABLE.
      
      [akpm@linux-foundation.org: fix build with CONFIG_HAVE_MEMBLOCK_NODE_MAP=n]
      Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
      Tested-by: Sudeep Holla <sudeep.holla@arm.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      342332e6
    • mm/page_alloc.c: calculate zone_start_pfn at zone_spanned_pages_in_node() · d91749c1
      Taku Izumi authored
      Xeon E7 v3 based systems support Address Range Mirroring, and a UEFI
      BIOS compliant with UEFI spec 2.5 can notify which ranges are mirrored
      (reliable) via the EFI memory map.  The Linux kernel now utilizes this
      information and allocates boot-time memory from the reliable region.
      
      My requirement is:
        - allocate kernel memory from mirrored region
        - allocate user memory from non-mirrored region
      
      In order to meet my requirement, ZONE_MOVABLE is useful.  By arranging
      non-mirrored range into ZONE_MOVABLE, mirrored memory is used for kernel
      allocations.
      
      My idea is to extend the existing "kernelcore" option and introduce a
      kernelcore=mirror option.  By specifying "mirror" instead of an amount
      of memory, the non-mirrored regions will be arranged into ZONE_MOVABLE.
      
      Earlier discussions are at:
       https://lkml.org/lkml/2015/10/9/24
       https://lkml.org/lkml/2015/10/15/9
       https://lkml.org/lkml/2015/11/27/18
       https://lkml.org/lkml/2015/12/8/836
      
      For example, suppose a 2-node system with the following memory ranges:
      
        node 0 [mem 0x0000000000001000-0x000000109fffffff]
        node 1 [mem 0x00000010a0000000-0x000000209fffffff]
      and the following ranges are marked as reliable (mirrored):
        [0x0000000000000000-0x0000000100000000]
        [0x0000000100000000-0x0000000180000000]
        [0x0000000800000000-0x0000000880000000]
        [0x00000010a0000000-0x0000001120000000]
        [0x00000017a0000000-0x0000001820000000]
      
      If you specify kernelcore=mirror, ZONE_NORMAL and ZONE_MOVABLE are
      arranged as below:
      
       - node 0:
        ZONE_NORMAL : [0x0000000100000000-0x00000010a0000000]
        ZONE_MOVABLE: [0x0000000180000000-0x00000010a0000000]
       - node 1:
        ZONE_NORMAL : [0x00000010a0000000-0x00000020a0000000]
        ZONE_MOVABLE: [0x0000001120000000-0x00000020a0000000]
      
      In the overlapped ranges, pages destined for ZONE_MOVABLE are treated
      as absent pages in ZONE_NORMAL, and vice versa.
      
      This patch (of 2):
      
      Currently each zone's zone_start_pfn is calculated at
      free_area_init_core().  However, the zone's range is already fixed by
      the time zone_spanned_pages_in_node() is invoked.
      
      This patch changes how each zone->zone_start_pfn is calculated in
      zone_spanned_pages_in_node().
      Signed-off-by: Taku Izumi <izumi.taku@jp.fujitsu.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Xishi Qiu <qiuxishi@huawei.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Matt Fleming <matt@codeblueprint.co.uk>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Steve Capper <steve.capper@linaro.org>
      Cc: Sudeep Holla <sudeep.holla@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d91749c1
    • mm/slub: support left redzone · d86bd1be
      Joonsoo Kim authored
      SLUB already has a redzone debugging feature.  But it is only positioned
      at the end of the object (aka the right redzone), so it cannot catch a
      left oob.  Although the current object's right redzone acts as the left
      redzone of the next object, the first object in a slab cannot take
      advantage of this effect.  This patch explicitly adds a left red zone to
      each object to detect left oob more precisely.

      Background:

      Someone complained to me that a left OOB is not caught even if KASAN,
      which does page allocation debugging, is enabled.  That page is out of
      our control, so it might be allocated when the left OOB happens and, in
      this case, we can't find the OOB.  Moreover, the SLUB debugging feature
      can be enabled without page allocator debugging and, in that case, we
      would also miss the OOB.

      Before trying to implement this, I expected the changes to be too
      complex, but it doesn't look that complex to me now.  Almost all changes
      are applied to debug-specific functions, so I feel okay about it.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d86bd1be
    • slub: relax CMPXCHG consistency restrictions · 149daaf3
      Laura Abbott authored
      When debug options are enabled, cmpxchg on the page is disabled.  This
      is because the page must be locked to ensure there are no false
      positives when performing consistency checks.  Some debug options, such
      as poisoning and red zoning, only act on the object itself, so there is
      no need to protect other CPUs from modification of anything but the
      object.  Allow cmpxchg to happen when poisoning and red zoning are set
      on a slab.
      
      Credit to Mathias Krause for the original work which inspired this
      series
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mathias Krause <minipli@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      149daaf3
    • slub: convert SLAB_DEBUG_FREE to SLAB_CONSISTENCY_CHECKS · becfda68
      Laura Abbott authored
      SLAB_DEBUG_FREE allows expensive consistency checks at free to be turned
      on or off.  Expand its use to be able to turn off all consistency
      checks.  This gives a nice speed up if you only want features such as
      poisoning or tracing.
      
      Credit to Mathias Krause for the original work which inspired this
      series
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mathias Krause <minipli@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      becfda68
    • slub: fix/clean free_debug_processing return paths · 804aa132
      Laura Abbott authored
      Since commit 19c7ff9e ("slub: Take node lock during object free
      checks") check_object has been incorrectly returning success as it
      follows the out label which just returns the node.
      
      Thanks to refactoring, the out and fail paths are now basically the
      same.  Combine the two into one and just use a single label.
      
      Credit to Mathias Krause for the original work which inspired this
      series
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mathias Krause <minipli@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      804aa132
    • slub: drop lock at the end of free_debug_processing · 282acb43
      Laura Abbott authored
      This series takes the suggestion of Christoph Lameter and only focuses
      on optimizing the slow path where the debug processing runs.  The two
      main optimizations in this series are letting the consistency checks be
      skipped and relaxing the cmpxchg restrictions when we are not doing
      consistency checks.  With hackbench -g 20 -l 1000 averaged over 100
      runs:
      
      Before slub_debug=P
        mean 15.607
        variance .086
        stdev .294
      
      After slub_debug=P
        mean 10.836
        variance .155
        stdev .394
      
      This still isn't as fast as what is in grsecurity unfortunately so there's
      still work to be done.  Profiling ___slab_alloc shows that 25-50% of time
      is spent in deactivate_slab.  I haven't looked too closely to see if this
      is something that can be optimized.  My plan for now is to focus on
      getting all of this merged (if appropriate) before digging in to another
      task.
      
      This patch (of 4):
      
      Currently, free_debug_processing has a comment "Keep node_lock to preserve
      integrity until the object is actually freed".  In actuality, the lock is
      dropped immediately in __slab_free.  Rather than wait until __slab_free
      and potentially throw off the unlikely marking, just drop the lock at the
      end of free_debug_processing.  This also lets free_debug_processing take
      its own copy of the spinlock flags rather than trying to share the ones
      from __slab_free.  Since there is no use for the node afterwards, change
      the return type of free_debug_processing to return an int like
      alloc_debug_processing.
      
      Credit to Mathias Krause for the original work which inspired this series
      
      [akpm@linux-foundation.org: fix build]
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <js1304@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mathias Krause <minipli@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      282acb43
    • mm/slab: re-implement pfmemalloc support · f68f8ddd
      Joonsoo Kim authored
      The current implementation of pfmemalloc handling in SLAB has some
      problems.

      1) pfmemalloc_active is set to true when there are one or more
         pfmemalloc slabs in the system, but it is cleared when there is no
         pfmemalloc slab in one arbitrary kmem_cache.  So, pfmemalloc_active
         could be wrongly cleared.

      2) The partial and free lists are not searched when a non-pfmemalloc
         object is not found in the cpu cache.  Instead, a new slab is
         allocated, which is not optimal.

      3) Even after sk_memalloc_socks() is disabled, the cpu cache would keep
         pfmemalloc objects tagged with SLAB_OBJ_PFMEMALLOC.  The tag isn't
         cleared when sk_memalloc_socks() is disabled, so it could cause a
         problem.

      4) If the cpu cache is filled with pfmemalloc objects, it would slow
         down non-pfmemalloc allocations.

      To me, the current pointer tagging approach looks complex and fragile,
      so this patch re-implements the whole thing instead of fixing the
      problems one by one.
      
      The design principles for the new implementation are:

      1) Don't disrupt non-pfmemalloc allocations in the fast path even if
         sk_memalloc_socks() is enabled.  That is the more likely case than a
         pfmemalloc allocation.

      2) Ensure that a pfmemalloc slab is used only for pfmemalloc allocations.

      3) Don't consider the performance of pfmemalloc allocations in a
         memory-deficient state.

      As a result, all pfmemalloc allocs/frees in a memory-tight state will be
      handled in the slow path.  If there is a non-pfmemalloc free object, it
      will be returned first even to a pfmemalloc user in the fast path, so the
      performance of pfmemalloc users isn't affected in the normal case and
      pfmemalloc objects will be kept as long as possible.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Tested-by: Mel Gorman <mgorman@techsingularity.net>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f68f8ddd
    • mm/slab: avoid returning values by reference · 70f75067
      Joonsoo Kim authored
      Returning values by reference is bad practice.  Instead, just use the
      function return value.
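
      Purely for illustration (not the actual slab diff; the function name is
      illustrative):

        /* before: the result comes back through a pointer argument */
        static void get_first_slab(struct kmem_cache_node *n, struct page **page);

        /* after: just return it */
        static struct page *get_first_slab(struct kmem_cache_node *n);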
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Suggested-by: Christoph Lameter <cl@linux.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      70f75067
    • mm/slab: introduce new slab management type, OBJFREELIST_SLAB · b03a017b
      Joonsoo Kim authored
      SLAB needs an array to manage freed objects in a slab.  It is only used
      if some objects are freed, so we can use a freed object itself as this
      array.  This requires an additional branch in a somewhat critical lock
      path to check whether it is the first freed object or not, but that's
      all we need.  The benefit is that we can save the extra memory and some
      of the computational overhead otherwise spent on allocating a management
      array when a new slab is created.
      
      The code change is more complex than one might expect from the idea, in
      order to handle the debugging feature efficiently.  If you want to see
      the core idea only, please remove the '#if DEBUG' block in the patch.
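
      For orientation, the core idea looks roughly like this (helper names are
      illustrative, not the literal mm/slab.c code):

        /* free path, slow case: the first freed object of an OBJFREELIST slab
         * donates its own memory to hold the freelist index array */
        if (obj_freelist_slab(cachep) && !page->freelist) {
                page->freelist = objp + obj_offset(cachep);
                /* from now on, freed object indices are recorded in here */
        }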
      
      Although this idea could apply to all caches whose size is larger than
      the management array size, it isn't applied to caches which have a
      constructor.  If such a cache's object were used for the management
      array, the constructor would have to be called for it before that object
      is returned to the user.  I guess the overhead would overwhelm the
      benefit in that case, so this idea isn't applied to them, at least for
      now.
      
      In summary, from now on, the slab management type is determined by the
      following logic.

      1) if the management array size is smaller than the object size and
         there is no ctor, it becomes OBJFREELIST_SLAB.

      2) if the management array size is smaller than the leftover, it becomes
         NORMAL_SLAB, which uses the leftover as the array.

      3) if OFF_SLAB saves more memory than way 4), it becomes OFF_SLAB.
         It allocates a management array from another cache, so some memory
         waste happens.

      4) otherwise it becomes NORMAL_SLAB.  It uses dedicated internal memory
         in a slab as a management array, so it causes memory waste.
      
      In my system, without CONFIG_DEBUG_SLAB enabled, almost all caches become
      OBJFREELIST_SLAB or NORMAL_SLAB (using leftover), which don't waste
      memory.  The following is the number of caches with each specific slab
      management type.
      
      TOTAL = OBJFREELIST + NORMAL(leftover) + NORMAL + OFF
      
      /Before/
      126 = 0 + 60 + 25 + 41
      
      /After/
      126 = 97 + 12 + 15 + 2
      
      The result shows that the number of caches that don't waste memory
      increases from 60 to 109.

      I did some benchmarking and it looks like the benefit outweighs the loss.
      
      Kmalloc: Repeatedly allocate then free test
      
      /Before/
      [    0.286809] 1. Kmalloc: Repeatedly allocate then free test
      [    1.143674] 100000 times kmalloc(32) -> 116 cycles kfree -> 78 cycles
      [    1.441726] 100000 times kmalloc(64) -> 121 cycles kfree -> 80 cycles
      [    1.815734] 100000 times kmalloc(128) -> 168 cycles kfree -> 85 cycles
      [    2.380709] 100000 times kmalloc(256) -> 287 cycles kfree -> 95 cycles
      [    3.101153] 100000 times kmalloc(512) -> 370 cycles kfree -> 117 cycles
      [    3.942432] 100000 times kmalloc(1024) -> 413 cycles kfree -> 156 cycles
      [    5.227396] 100000 times kmalloc(2048) -> 622 cycles kfree -> 248 cycles
      [    7.519793] 100000 times kmalloc(4096) -> 1102 cycles kfree -> 452 cycles
      
      /After/
      [    1.205313] 100000 times kmalloc(32) -> 117 cycles kfree -> 78 cycles
      [    1.510526] 100000 times kmalloc(64) -> 124 cycles kfree -> 81 cycles
      [    1.827382] 100000 times kmalloc(128) -> 130 cycles kfree -> 84 cycles
      [    2.226073] 100000 times kmalloc(256) -> 177 cycles kfree -> 92 cycles
      [    2.814747] 100000 times kmalloc(512) -> 286 cycles kfree -> 112 cycles
      [    3.532952] 100000 times kmalloc(1024) -> 344 cycles kfree -> 141 cycles
      [    4.608777] 100000 times kmalloc(2048) -> 519 cycles kfree -> 210 cycles
      [    6.350105] 100000 times kmalloc(4096) -> 789 cycles kfree -> 391 cycles
      
      In fact, I tested another idea: implementing OBJFREELIST_SLAB with an
      extendable linked array through another freed object.  It can remove the
      memory waste completely, but it causes more computational overhead in the
      critical lock path and it seems that the overhead outweighs the benefit.
      So, this patch doesn't include it.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b03a017b
    • mm/slab: factor out debugging initialization in cache_init_objs() · 10b2e9e8
      Joonsoo Kim authored
      cache_init_objs() will be changed in the following patch and its current
      form doesn't fit well with that change.  So, before doing it, this patch
      separates out the debugging initialization.  This causes two loop
      iterations when debugging is enabled but, since this overhead seems very
      light compared to the debug feature itself, the effect may not be
      visible.  This patch will greatly simplify the changes to
      cache_init_objs() in the following patch.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      10b2e9e8
    • mm/slab: factor out slab list fixup code · d8410234
      Joonsoo Kim authored
      The slab list should be fixed up after an object is detached from the
      slab, and this happens in two places.  They do exactly the same thing and
      will be changed in the following patch, so, to reduce code duplication,
      this patch factors them out into a common function.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d8410234
    • mm/slab: make criteria for off slab determination robust and simple · 3217fd9b
      Joonsoo Kim authored
      To become off-slab, there are some constraints to avoid a bootstrapping
      problem and a recursive call.  These can be avoided differently, by
      simply checking that the corresponding kmalloc cache is ready and is not
      itself off-slab.  This is more robust because static size checking can
      be affected by cache size changes or architecture type, but dynamic
      checking isn't.

      One check, 'freelist_cache->size > cachep->size / 2', is added to verify
      the benefit of choosing off-slab, because there is now no size constraint
      which ensures enough advantage when selecting off-slab.
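
      Sketch of the new criteria (the size check is quoted from above; the
      surrounding code is illustrative):

        /* off-slab only if the kmalloc cache for the freelist is ready,
         * is not itself off-slab, and actually saves enough space */
        freelist_cache = kmalloc_slab(freelist_size, 0u);
        if (!freelist_cache)
                return false;
        if (OFF_SLAB(freelist_cache))
                return false;
        if (freelist_cache->size > cachep->size / 2)
                return false;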
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3217fd9b
    • mm/slab: do not change cache size if debug pagealloc isn't possible · f3a3c320
      Joonsoo Kim authored
      We can fail to set up off-slab in some conditions.  Even in this case,
      debug pagealloc increases the cache size to PAGE_SIZE in advance, and
      that is wasted because debug pagealloc cannot work for the cache when it
      isn't off-slab.  To improve this situation, this patch first checks
      whether the cache with the increased size is suitable for off-slab.  It
      only increases the cache size when it is suitable for off-slab, so the
      possible waste is removed.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      f3a3c320
    • mm/slab: clean up cache type determination · 158e319b
      Joonsoo Kim authored
      The current cache type determination code is open-coded and hard to
      understand.  The following patch will introduce one more cache type,
      which would make the code even more complex.  So, before that happens,
      this patch abstracts this code.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      158e319b
    • mm/slab: align cache size first before determination of OFF_SLAB candidate · 832a15d2
      Joonsoo Kim authored
      Finding a suitable OFF_SLAB candidate is more related to the aligned
      cache size than to the original size.  The same reasoning applies to the
      debug pagealloc candidate.  So, this patch moves the alignment fixup up
      to the proper position.  From that point on, the size is aligned, so we
      can remove some later alignment fixups.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      832a15d2
    • mm/slab: put the freelist at the end of slab page · 2e6b3602
      Joonsoo Kim authored
      Currently, the freelist is at the front of slab page.  This requires
      extra space to meet object alignment requirement.  If we put the
      freelist at the end of a slab page, objects could start at page boundary
      and will be at correct alignment.  This is possible because freelist has
      no alignment constraint itself.
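
      Sketched layout computation (illustrative, not the literal diff):

        /* objects now start right at the page boundary ... */
        void *objs = page_address(page);

        /* ... and the freelist index array sits at the very end of the slab */
        void *freelist = page_address(page) +
                         (PAGE_SIZE << cachep->gfporder) -
                         cachep->num * sizeof(freelist_idx_t);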
      
      This gives us two benefits: it removes the extra memory space needed for
      freelist alignment and removes a complex calculation from the cache
      initialization step.  I can't think of a notable drawback here.
      
      I mentioned that this would reduce extra memory space, but this benefit
      is rather theoretical because it applies to very few cases.  The
      following are example cache types that can get a benefit from this
      change.
      
        size align num before after
          32    8  124  4100  4092
          64    8   63  4103  4095
          88    8   46  4102  4094
         272    8   15  4103  4095
         408    8   10  4098  4090
          32   16  124  4108  4092
          64   16   63  4111  4095
          32   32  124  4124  4092
          64   32   63  4127  4095
          96   32   42  4106  4074
      
      'before' means the whole size for the objects and the aligned freelist
      before applying the patch, and 'after' shows the result of this patch.

      Since 'before' is more than 4096, the number of objects would have to
      decrease and memory waste would happen.

      Anyway, this patch removes a complex calculation, so it looks beneficial
      to me.
      
      [akpm@linux-foundation.org: fix kerneldoc]
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2e6b3602
    • mm/slab: remove object status buffer for DEBUG_SLAB_LEAK · 249247b6
      Joonsoo Kim authored
      Now we don't use the object status buffer in any setup.  Remove it.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      249247b6
    • mm/slab: alternative implementation for DEBUG_SLAB_LEAK · d31676df
      Joonsoo Kim authored
      DEBUG_SLAB_LEAK is a debug option.  Its current implementation requires
      a status buffer, so we need more memory to use it.  And it makes the
      kmem_cache initialization step more complex.

      To remove this extra memory usage and to simplify the initialization
      step, this patch implements the feature in another way.

      When a user requests slab object owner information, a mark is set to
      indicate that information gathering has started.  Then, all free objects
      in the caches are flushed to their corresponding slab pages.  Now we can
      distinguish all freed objects, and therefore all allocated objects, too.
      After collecting slab object owner information for the allocated
      objects, the mark is checked to make sure that no free happened during
      the processing.  If true, we can be sure that our information is
      correct, so it is returned to the user.

      Although this way is rather complex, it has the two important benefits
      mentioned above.  So, I think it is worth changing.

      There is one drawback: it takes more time to get slab object owner
      information, but it is just a debug option so it doesn't matter at all.

      To help review, this patch implements only the new way.  The following
      patch will remove the now-useless code.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      d31676df
    • mm/slab: clean up DEBUG_PAGEALLOC processing code · 40b44137
      Joonsoo Kim authored
      Currently, open-coded checks for a DEBUG_PAGEALLOC cache are spread
      across several sites.  This makes the code unreadable and hard to change.

      This patch cleans this code up.  The following patch will change the
      criteria for a DEBUG_PAGEALLOC cache, so this clean-up will help it, too.
      
      [akpm@linux-foundation.org: fix build with CONFIG_DEBUG_PAGEALLOC=n]
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      40b44137
    • mm/slab: use more appropriate condition check for debug_pagealloc · 40323278
      Joonsoo Kim authored
      debug_pagealloc debugging is related to SLAB_POISON flag rather than
      FORCED_DEBUG option, although FORCED_DEBUG option will enable
      SLAB_POISON.  Fix it.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      40323278
    • mm/slab: activate debug_pagealloc in SLAB when it is actually enabled · a307ebd4
      Joonsoo Kim authored
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      a307ebd4
    • mm/slab: remove the checks for slab implementation bug · 260b61dd
      Joonsoo Kim authored
      Some of "#if DEBUG" are for reporting slab implementation bug rather
      than user usecase bug.  It's not really needed because slab is stable
      for a quite long time and it makes code too dirty.  This patch remove
      it.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      260b61dd