12 Dec, 2012 (40 commits)
    • mm/vmscan.c: try_to_freeze() returns boolean · 6f6313d4
      Jeff Liu authored
      kswapd()->try_to_freeze() is defined to return a boolean, so it's better
      to use a bool to hold its return value.
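      The change itself is tiny; a simplified sketch of the affected loop in
      kswapd() (not the literal diff):

        bool ret;

        for ( ; ; ) {
                ret = try_to_freeze();
                if (kthread_should_stop())
                        break;
                /* Skip reclaim for this iteration if we were just thawed. */
                if (!ret)
                        balance_pgdat(pgdat, order, classzone_idx);
        }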
      Signed-off-by: Jie Liu <jeff.liu@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: introduce putback_movable_pages() · 5733c7d1
      Rafael Aquini authored
      The patch "mm: introduce compaction and migration for virtio ballooned pages"
      hacks around putback_lru_pages() in order to allow ballooned pages to be
      re-inserted on the balloon page list as if a ballooned page were an LRU page.

      As ballooned pages are not legitimate LRU pages, this patch introduces
      putback_movable_pages() to properly cope with cases where the isolated
      pageset contains both ballooned pages and LRU pages, thus fixing the
      aforementioned inelegant hack around putback_lru_pages().
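      A minimal sketch of the new helper, following the description above
      (balloon_page_movable() and balloon_page_putback() are the hooks added
      by the companion balloon-compaction patches in this series):

        /* Like putback_lru_pages(), but route ballooned pages back to the
         * balloon list instead of an LRU list. */
        void putback_movable_pages(struct list_head *l)
        {
                struct page *page;
                struct page *page2;

                list_for_each_entry_safe(page, page2, l, lru) {
                        list_del(&page->lru);
                        dec_zone_page_state(page, NR_ISOLATED_ANON +
                                            page_is_file_cache(page));
                        if (unlikely(balloon_page_movable(page)))
                                balloon_page_putback(page); /* balloon page */
                        else
                                putback_lru_page(page);     /* LRU page */
                }
        }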
      Signed-off-by: Rafael Aquini <aquini@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • virtio_balloon: introduce migration primitives to balloon pages · e2250429
      Rafael Aquini authored
      Memory fragmentation introduced by ballooning might significantly reduce
      the number of 2MB contiguous memory blocks that can be used within a
      guest, thus imposing performance penalties associated with the reduced
      number of transparent huge pages that could be used by the guest workload.
      
      Besides making balloon pages movable at allocation time and introducing
      the necessary primitives to perform balloon page migration/compaction,
      this patch also introduces the following locking scheme, in order to
      enhance the synchronization methods for accessing elements of struct
      virtio_balloon, thus providing protection against concurrent access
      introduced by parallel memory migration threads.
      
       - balloon_lock (mutex) : synchronizes the access demand to elements of
                                struct virtio_balloon and its queue operations;
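
      As an illustration, the inflate path then takes the mutex around both
      the page-list update and the virtqueue notification (a condensed
      sketch, not the exact driver code; balloon_page_enqueue() and
      tell_host() follow this patch series):

        static void fill_balloon(struct virtio_balloon *vb, size_t num)
        {
                /* balloon_lock serializes list updates and vq operations
                 * against the concurrent migration/compaction callbacks. */
                mutex_lock(&vb->balloon_lock);
                for (vb->num_pfns = 0; vb->num_pfns < num; vb->num_pfns++) {
                        struct page *page =
                                balloon_page_enqueue(&vb->vb_dev_info);

                        if (!page)
                                break;  /* OOM: inflate with what we got */
                        set_page_pfns(vb->pfns + vb->num_pfns, page);
                }
                if (vb->num_pfns != 0)
                        tell_host(vb, vb->inflate_vq);
                mutex_unlock(&vb->balloon_lock);
        }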
      
      [yongjun_wei@trendmicro.com.cn: fix missing unlock on error in fill_balloon()]
      [akpm@linux-foundation.org: avoid having multiple return points in fill_balloon()]
      [akpm@linux-foundation.org: fix printk warning]
      Signed-off-by: Rafael Aquini <aquini@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: introduce compaction and migration for ballooned pages · bf6bddf1
      Rafael Aquini authored
      Memory fragmentation introduced by ballooning might significantly reduce
      the number of 2MB contiguous memory blocks that can be used within a
      guest, thus imposing performance penalties associated with the reduced
      number of transparent huge pages that could be used by the guest workload.
      
      This patch introduces the helper functions as well as the necessary changes
      to teach compaction and migration bits how to cope with pages which are
      part of a guest memory balloon, in order to make them movable by memory
      compaction procedures.
      Signed-off-by: Rafael Aquini <aquini@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: introduce a common interface for balloon pages mobility · 18468d93
      Rafael Aquini authored
      Memory fragmentation introduced by ballooning might significantly reduce
      the number of 2MB contiguous memory blocks that can be used within a
      guest, thus imposing performance penalties associated with the reduced
      number of transparent huge pages that could be used by the guest workload.
      
      This patch introduces a common interface to help a balloon driver make
      its page set movable by compaction, thus allowing the system to better
      leverage compaction's efforts at memory defragmentation.
      
      [akpm@linux-foundation.org: use PAGE_FLAGS_CHECK_AT_PREP, s/__balloon_page_flags/page_flags_cleared/, small cleanups]
      [rientjes@google.com: allow balloon compaction for any system with memory compaction enabled, which is the defconfig]
      Signed-off-by: Rafael Aquini <aquini@redhat.com>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: redefine address_space.assoc_mapping · 252aa6f5
      Rafael Aquini authored
      Overhaul struct address_space.assoc_mapping, renaming it to
      address_space.private_data and redefining its type to void *.  This
      approach consistently names the .private_* elements of struct
      address_space and allows extended usage: an address_space can now be
      associated with other data structures through ->private_data.

      Also, all users of the old ->assoc_mapping element are converted to
      reflect its new name and type change (->private_data).
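      The resulting definition, abridged from include/linux/fs.h:

        struct address_space {
                /* ... other fields ... */
                spinlock_t              private_lock;
                struct list_head        private_list;
                void                    *private_data;  /* was assoc_mapping */
        };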
      Signed-off-by: Rafael Aquini <aquini@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: adjust address_space_operations.migratepage() return code · 78bd5209
      Rafael Aquini authored
      Memory fragmentation introduced by ballooning might significantly reduce
      the number of 2MB contiguous memory blocks that can be used within a
      guest, thus imposing performance penalties associated with the reduced
      number of transparent huge pages that could be used by the guest workload.
      
      This patch set follows the main idea discussed at the 2012 LSF/MM session
      "Ballooning for transparent huge pages" -- http://lwn.net/Articles/490114/ --
      introducing the required changes to the virtio_balloon driver, as well as
      the changes to the core compaction & migration bits, in order to make
      those subsystems aware of ballooned pages and to allow memory balloon
      pages to become movable within a guest, thus avoiding the aforementioned
      fragmentation issue.

      The following numbers demonstrate how this patch series allows compaction
      to be more effective on memory-ballooned guests.
      
      Results for the STRESS-HIGHALLOC benchmark, from Mel Gorman's mmtests
      suite, running on a 4GB RAM KVM guest which was ballooning 512MB of RAM
      in 64MB chunks every minute (inflating/deflating) while the test ran:
      
      ===BEGIN stress-highalloc
      
      STRESS-HIGHALLOC
                       highalloc-3.7     highalloc-3.7
                           rc4-clean         rc4-patch
      Pass 1          55.00 ( 0.00%)    62.00 ( 7.00%)
      Pass 2          54.00 ( 0.00%)    62.00 ( 8.00%)
      while Rested    75.00 ( 0.00%)    80.00 ( 5.00%)
      
      MMTests Statistics: duration
                       3.7         3.7
                 rc4-clean   rc4-patch
      User         1207.59     1207.46
      System       1300.55     1299.61
      Elapsed      2273.72     2157.06
      
      MMTests Statistics: vmstat
                                      3.7         3.7
                                rc4-clean   rc4-patch
      Page Ins                    3581516     2374368
      Page Outs                  11148692    10410332
      Swap Ins                         80          47
      Swap Outs                      3641         476
      Direct pages scanned          37978       33826
      Kswapd pages scanned        1828245     1342869
      Kswapd pages reclaimed      1710236     1304099
      Direct pages reclaimed        32207       31005
      Kswapd efficiency               93%         97%
      Kswapd velocity             804.077     622.546
      Direct efficiency               84%         91%
      Direct velocity              16.703      15.682
      Percentage direct scans          2%          2%
      Page writes by reclaim        79252        9704
      Page writes file              75611        9228
      Page writes anon               3641         476
      Page reclaim immediate        16764       11014
      Page rescued immediate            0           0
      Slabs scanned               2171904     2152448
      Direct inode steals             385        2261
      Kswapd inode steals          659137      609670
      Kswapd skipped wait               1          69
      THP fault alloc                 546         631
      THP collapse alloc              361         339
      THP splits                      259         263
      THP fault fallback               98          50
      THP collapse fail                20          17
      Compaction stalls               747         499
      Compaction success              244         145
      Compaction failures             503         354
      Compaction pages moved       370888      474837
      Compaction move failure       77378       65259
      
      ===END stress-highalloc
      
      This patch:
      
      Introduce MIGRATEPAGE_SUCCESS as the default return code for the
      address_space_operations.migratepage() method, and document the expected
      return codes for that method in failure cases.
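      Concretely, a migratepage implementation now reports success with the
      named constant instead of a bare 0.  A sketch (example_migratepage and
      can_migrate are hypothetical names; MIGRATEPAGE_SUCCESS itself is
      defined as 0 in include/linux/migrate.h):

        static int example_migratepage(struct address_space *mapping,
                                       struct page *newpage, struct page *page,
                                       enum migrate_mode mode)
        {
                if (!can_migrate(page))         /* hypothetical helper */
                        return -EAGAIN;         /* retry migration later */
                /* ... move page state and contents over to newpage ... */
                return MIGRATEPAGE_SUCCESS;
        }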
      Signed-off-by: Rafael Aquini <aquini@redhat.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • arch/sparc/kernel/sys_sparc_64.c: s/COLOUR/COLOR/ · 748ba883
      Andrew Morton authored
      Consistently spell this word across arch/sparc/mm and arch/sparc/kernel.
      Acked-by: David Miller <davem@davemloft.net>
      Cc: Michel Lespinasse <walken@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use vm_unmapped_area() in hugetlbfs on sparc64 architecture · 2aea28b9
      Michel Lespinasse authored
      Update the sparc64 hugetlb_get_unmapped_area function to make use of
      vm_unmapped_area() instead of implementing a brute force search.
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use vm_unmapped_area() on sparc64 architecture · bb64f550
      Michel Lespinasse authored
      Update the sparc64 arch_get_unmapped_area[_topdown] functions to make use
      of vm_unmapped_area() instead of implementing a brute force search.
      
      [akpm@linux-foundation.org: remove now-unused COLOUR_ALIGN_DOWN()]
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use vm_unmapped_area() in hugetlbfs on tile architecture · dd529596
      Michel Lespinasse authored
      Update the tile hugetlb_get_unmapped_area function to make use of
      vm_unmapped_area() instead of implementing a brute force search.
      
      [akpm@linux-foundation.org: fix build]
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use vm_unmapped_area() on sparc32 architecture · a046be3d
      Michel Lespinasse authored
      Update the sparc32 arch_get_unmapped_area function to make use of
      vm_unmapped_area() instead of implementing a brute force search.
      
      [akpm@linux-foundation.org: fix build]
      [akpm@linux-foundation.org: remove now-unused COLOUR_ALIGN()]
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Acked-by: default avatar"David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use vm_unmapped_area() on sh architecture · b4265f12
      Michel Lespinasse authored
      Update the sh arch_get_unmapped_area[_topdown] functions to make use of
      vm_unmapped_area() instead of implementing a brute force search.
      
      [akpm@linux-foundation.org: remove now-unused COLOUR_ALIGN_DOWN()]
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use vm_unmapped_area() on arm architecture · 394ef640
      Michel Lespinasse authored
      Update the arm arch_get_unmapped_area[_topdown] functions to make use of
      vm_unmapped_area() instead of implementing a brute force search.
      
      [akpm@linux-foundation.org: remove now-unused COLOUR_ALIGN_DOWN()]
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use vm_unmapped_area() on mips architecture · b6661861
      Michel Lespinasse authored
      Update the mips arch_get_unmapped_area[_topdown] functions to make use of
      vm_unmapped_area() instead of implementing a brute force search.
      
      [akpm@linux-foundation.org: remove now-unused COLOUR_ALIGN_DOWN()]
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use vm_unmapped_area() in hugetlbfs on i386 architecture · cdc17344
      Michel Lespinasse authored
      Update the i386 hugetlb_get_unmapped_area function to make use of
      vm_unmapped_area() instead of implementing a brute force search.
      
      [akpm@linux-foundation.org: fix build]
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use vm_unmapped_area() in hugetlbfs · 08659355
      Michel Lespinasse authored
      Update the hugetlb_get_unmapped_area function to make use of
      vm_unmapped_area() instead of implementing a brute force search.
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: fix cache coloring on x86_64 architecture · 7d025059
      Michel Lespinasse authored
      Fix the x86-64 cache alignment code to take pgoff into account.  Use the
      x86 and MIPS cache alignment code as the basis for a generic cache
      alignment function.
      
      The old x86 code will always align the mmap to aliasing boundaries,
      even if the program mmaps the file with a non-zero pgoff.
      
      If program A mmaps the file with pgoff 0 and program B mmaps the file
      with pgoff 1, the old code would align the mmaps, resulting in misaligned
      pages:
      
        A:  0123
        B:  123
      
      After this patch, they are aligned so the pages line up:
      
        A: 0123
        B:  123
      
      Proposed by Rik van Riel.
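      With the vm_unmapped_area() interface added by this series, the
      pgoff-aware coloring reduces to two request parameters; a sketch of how
      x86_64 expresses it (get_align_mask() is the arch helper for the
      aliasing boundary):

        info.align_mask = filp ? get_align_mask() : 0;  /* aliasing mask */
        info.align_offset = pgoff << PAGE_SHIFT; /* color follows offset */
        addr = vm_unmapped_area(&info);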
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: use vm_unmapped_area() on x86_64 architecture · f9902472
      Michel Lespinasse authored
      Update the x86_64 arch_get_unmapped_area[_topdown] functions to make use
      of vm_unmapped_area() instead of implementing a brute force search.
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: vm_unmapped_area() lookup function · db4fbfb9
      Michel Lespinasse authored
      Implement vm_unmapped_area() using the rb_subtree_gap and highest_vm_end
      information to look up suitable virtual address space gaps.
      
      struct vm_unmapped_area_info is used to define the desired allocation
      request:
       - lowest or highest possible address matching the remaining constraints
       - desired gap length
       - low/high address limits that the gap must fit into
       - alignment mask and offset
      
      Also update the generic arch_get_unmapped_area[_topdown] functions to make
      use of vm_unmapped_area() instead of implementing a brute force search.
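      A sketch of how a typical arch_get_unmapped_area() fills in the request
      (field names follow the struct described above):

        struct vm_unmapped_area_info info;

        info.flags = 0;                      /* or VM_UNMAPPED_AREA_TOPDOWN */
        info.length = len;                   /* desired gap length */
        info.low_limit = TASK_UNMAPPED_BASE; /* gap must fit above this... */
        info.high_limit = TASK_SIZE;         /* ...and below this */
        info.align_mask = 0;                 /* no extra alignment needed */
        info.align_offset = 0;
        addr = vm_unmapped_area(&info);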
      
      [akpm@linux-foundation.org: checkpatch fixes]
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: rearrange vm_area_struct for fewer cache misses · e4c6bfd2
      Rik van Riel authored
      The kernel walks the VMA rbtree in various places, including the page
      fault path.  However, the vm_rb node spanned two cache lines on 64-bit
      systems with 64-byte cache lines (most x86 systems).
      
      Rearrange vm_area_struct a little, so all the information we need to do a
      VMA tree walk is in the first cache line.
      
      [akpm@linux-foundation.org: checkpatch fixes]
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: check rb_subtree_gap correctness · 5a0768f6
      Michel Lespinasse authored
      When CONFIG_DEBUG_VM_RB is enabled, check that rb_subtree_gap is correctly
      set for every vma and that mm->highest_vm_end is also correct.
      
      Also add an explicit 'bug' variable to track if browse_rb() detected any
      invalid condition.
      
      [akpm@linux-foundation.org: repair innovative coding-style inventions]
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: augment vma rbtree with rb_subtree_gap · d3737187
      Michel Lespinasse authored
      Define vma->rb_subtree_gap as the largest gap between any vma in the
      subtree rooted at that vma and its predecessor.  Or, as a recursive
      definition, vma->rb_subtree_gap is the max of:
      
       - vma->vm_start - vma->vm_prev->vm_end
       - rb_subtree_gap fields of the vmas pointed to by vma->rb.rb_left and
         vma->rb.rb_right
      
      This will allow get_unmapped_area_* to find a free area of the right
      size in O(log(N)) time, instead of potentially having to do a linear
      walk across all the VMAs.
      
      Also define mm->highest_vm_end as the vm_end field of the highest vma,
      so that we can easily check if the following gap is suitable.
      
      This does have the potential to make unmapping VMAs more expensive,
      especially for processes with very large numbers of VMAs, where the VMA
      rbtree can grow quite deep.
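      The recursive definition above translates almost directly into the
      helper added to mm/mmap.c (lightly abridged):

        static unsigned long vma_compute_subtree_gap(struct vm_area_struct *vma)
        {
                unsigned long max, subtree_gap;

                /* Gap between this vma and its predecessor. */
                max = vma->vm_start;
                if (vma->vm_prev)
                        max -= vma->vm_prev->vm_end;

                /* Fold in the largest gap below each child subtree. */
                if (vma->vm_rb.rb_left) {
                        subtree_gap = rb_entry(vma->vm_rb.rb_left,
                                struct vm_area_struct, vm_rb)->rb_subtree_gap;
                        if (subtree_gap > max)
                                max = subtree_gap;
                }
                if (vma->vm_rb.rb_right) {
                        subtree_gap = rb_entry(vma->vm_rb.rb_right,
                                struct vm_area_struct, vm_rb)->rb_subtree_gap;
                        if (subtree_gap > max)
                                max = subtree_gap;
                }
                return max;
        }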
      Signed-off-by: Michel Lespinasse <walken@google.com>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • selftests: add a test program for variable huge page sizes in mmap/shmget · fcc1f2d5
      Andi Kleen authored
      Also remove -Wextra because gcc-4.6 emits lots of irritating
      signed/unsigned comparison warnings.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: support more pagesizes for MAP_HUGETLB/SHM_HUGETLB · 42d7395f
      Andi Kleen authored
      There was some desire in large applications using MAP_HUGETLB or
      SHM_HUGETLB to use 1GB huge pages on some mappings, and stay with 2MB on
      others.  This is useful together with NUMA policy: use 2MB interleaving
      on some mappings, but 1GB on local mappings.
      
      This patch extends the IPC/SHM syscall interfaces slightly to allow
      specifying the page size.
      
      It borrows some upper bits in the existing flag arguments and allows
      encoding the log of the desired page size in addition to the *_HUGETLB
      flag.  When 0 is specified the default size is used, this makes the
      change fully compatible.
      
      Extending the internal hugetlb code to handle this is straightforward.
      Instead of a single mount it just keeps an array of them and selects the
      right mount based on the specified page size.  When no page size is
      specified it uses the mount of the default page size.
      
      The change is not visible in /proc/mounts because internal mounts don't
      appear there.  It also has very little overhead: the additional mounts
      just consume a super block, but not more memory when not used.
      
      I also exported the new flags to the user headers (they were previously
      under __KERNEL__).  Right now only symbols for 1GB and 2MB are defined,
      for x86 and some other architectures.  The interface should already work
      for all other architectures, though.  Only architectures that define
      multiple hugetlb sizes actually need it (currently x86, tile, powerpc).
      However, tile and powerpc have user-configurable hugetlb sizes, so it's
      not easy to add defines.  A program on those architectures would need to
      query sysfs and use the appropriate log2.
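      For example, a program on x86 can request an explicit huge page size
      like this (the MAP_HUGE_* values encode the log2 of the page size,
      shifted by MAP_HUGE_SHIFT; the 2MB/1GB constants below are the x86
      encodings):

        #include <sys/mman.h>

        #ifndef MAP_HUGE_SHIFT
        #define MAP_HUGE_SHIFT  26
        #endif
        #define MAP_HUGE_2MB    (21 << MAP_HUGE_SHIFT)  /* log2(2MB) = 21 */
        #define MAP_HUGE_1GB    (30 << MAP_HUGE_SHIFT)  /* log2(1GB) = 30 */

        int main(void)
        {
                size_t len = 16UL << 21;        /* 16 x 2MB */
                void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
                               MAP_HUGE_2MB, -1, 0);

                return p == MAP_FAILED;
        }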
      
      [akpm@linux-foundation.org: cleanups]
      [rientjes@google.com: fix build]
      [akpm@linux-foundation.org: checkpatch fixes]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Cc: Michael Kerrisk <mtk.manpages@gmail.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Signed-off-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: hwpoison: fix action_result() to print out dirty/clean · ff604cf6
      Naoya Horiguchi authored
      action_result() fails to print out "dirty" even if an error occurred on
      a dirty pagecache page, because by the time we check PageDirty in
      action_result() the flag has already been cleared during page isolation,
      even if the page was dirty before error handling.  This can break
      applications that monitor this message, so it should be fixed.

      There are several callers of action_result() other than page_action(),
      but none of them are for LRU pages; they are for free pages or kernel
      pages, so we don't have to consider dirtiness for them.

      Note that PG_dirty can be set outside page locks as described in commit
      6746aff7 ("HWPOISON: shmem: call set_page_dirty() with locked
      page"), so this patch does not completely close the race window, but
      just narrows it.
      Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Reviewed-by: Andi Kleen <ak@linux.intel.com>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: "Jun'ichi Nomura" <j-nomura@ce.jp.nec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • dmapool: make DMAPOOL_DEBUG detect corruption of free marker · 5de55b26
      Matthieu CASTET authored
      This can help to catch the case where hardware is writing after dma free.
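      A sketch of the detection (condensed from the dma_pool_alloc() hunk;
      POOL_POISON_FREED is the existing DMAPOOL_DEBUG poison byte written
      into blocks at free time):

        u8 *data = retval;
        int i;

        /* The first sizeof(page->offset) bytes hold the free-list link;
         * every byte after that must still carry the poison value, or the
         * device kept writing after dma_pool_free(). */
        for (i = sizeof(page->offset); i < pool->size; i++) {
                if (data[i] == POOL_POISON_FREED)
                        continue;
                pr_err("dma_pool_alloc %s, %p (corrupted)\n",
                       pool->name, retval);
                break;
        }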
      
      [akpm@linux-foundation.org: tidy code, fix comment, use sizeof(page->offset), use pr_err()]
      Signed-off-by: Matthieu Castet <matthieu.castet@parrot.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, oom: allow exiting threads to have access to memory reserves · 9ff4868e
      David Rientjes authored
      Exiting threads, those with PF_EXITING set, can pagefault and require
      memory before they can make forward progress.  This happens, for instance,
      when a process must fault task->robust_list, a userspace structure, before
      detaching its memory.
      
      These threads also aren't guaranteed to get access to memory reserves
      unless they are oom killed or killed from userspace.  The oom killer
      won't grant memory reserves if threads other than current are also
      exiting and stalling at the same point.  This prevents needlessly
      killing processes when others are already exiting.

      Instead of special-casing all the possible situations between PF_EXITING
      getting set and a thread detaching its mm where it may allocate memory,
      which probably wouldn't get updated when a change is made to the exit
      path, the solution is to give all exiting threads access to memory
      reserves if they call the oom killer.  This allows them to quickly
      allocate memory, detach their mm, and free the memory it represents.
      
      Summary of Luigi's bug report:
      
      : He had an oom condition where threads were faulting on task->robust_list
      : and repeatedly called the oom killer but it would defer killing a thread
      : because it saw other PF_EXITING threads.  This can happen anytime we need
      : to allocate memory after setting PF_EXITING and before detaching our mm;
      : if there are other threads in the same state then the oom killer won't do
      : anything unless one of them happens to be killed from userspace.
      :
      : So instead of only deferring for PF_EXITING and !task->robust_list, it's
      : better to just give them access to memory reserves to prevent a potential
      : livelock so that any other faults that may be introduced in the future in
      : the exit path don't cause the same problem (and hopefully we don't allow
      : too many of those!).
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Tested-by: Luigi Semenzato <semenzato@google.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Documentation/cgroups/memory.txt: s/mem_cgroup_charge/mem_cgroup_charge_common/ · 348b4655
      Jeff Liu authored
      mem_cgroup_charge_common() is invoked as the entry point for cgroup
      limit charging rather than mem_cgroup_charge(), as the latter was
      removed years ago.  Update Documentation/cgroups/memory.txt to reflect
      this change.
      Signed-off-by: Jie Liu <jeff.liu@oracle.com>
      Cc: Ying Han <yinghan@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: thp: set the accessed flag for old pages on access fault · a1dd450b
      Will Deacon authored
      On x86 memory accesses to pages without the ACCESSED flag set result in
      the ACCESSED flag being set automatically.  With the ARM architecture a
      page access fault is raised instead (and it will continue to be raised
      until the ACCESSED flag is set for the appropriate PTE/PMD).
      
      For normal memory pages, handle_pte_fault will call pte_mkyoung
      (effectively setting the ACCESSED flag).  For transparent huge pages,
      pmd_mkyoung will only be called for a write fault.
      
      This patch ensures that faults on transparent hugepages which do not
      result in a CoW update the access flags for the faulting pmd.
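      A sketch of the new handler, lightly abridged from the patch:

        void huge_pmd_set_accessed(struct mm_struct *mm,
                                   struct vm_area_struct *vma,
                                   unsigned long address, pmd_t *pmd,
                                   pmd_t orig_pmd, int dirty)
        {
                pmd_t entry;
                unsigned long haddr;

                spin_lock(&mm->page_table_lock);
                if (unlikely(!pmd_same(*pmd, orig_pmd)))
                        goto unlock;    /* raced with another fault */

                entry = pmd_mkyoung(orig_pmd);  /* set the ACCESSED flag */
                haddr = address & HPAGE_PMD_MASK;
                if (pmdp_set_access_flags(vma, haddr, pmd, entry, dirty))
                        update_mmu_cache_pmd(vma, address, pmd);
        unlock:
                spin_unlock(&mm->page_table_lock);
        }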
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Ni zhan Chen <nizhan.chen@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, highmem: get virtual address of the page using PKMAP_ADDR() · eb2db439
      Joonsoo Kim authored
      In flush_all_zero_pkmaps(), we have the index of the pkmap associated
      with the page.  Using this index, we can simply get the virtual address
      of the page, so change the code to compute it that way.
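      For reference, the two pkmap helpers involved (as defined for
      highmem-capable architectures, e.g. arch/x86/include/asm/highmem.h);
      the PKMAP_NR() patch below makes the converse change:

        /* Map between a pkmap index and the virtual address it covers. */
        #define PKMAP_NR(virt)  ((virt - PKMAP_BASE) >> PAGE_SHIFT)
        #define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))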
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, highmem: remove page_address_pool list · a354e2c8
      Joonsoo Kim authored
      We can find a free page_address_map instance without the
      page_address_pool, so remove the pool.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, highmem: remove useless pool_lock · cc33a303
      Joonsoo Kim authored
      The pool_lock protects the page_address_pool from concurrent access.
      But access to the page_address_pool is already protected by kmap_lock,
      so remove pool_lock.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, highmem: use PKMAP_NR() to calculate an index of pkmap · 4de22c05
      Joonsoo Kim authored
      Using PKMAP_NR() to calculate the index of a pkmap entry is more
      understandable and maintainable, so change the code to use it.
      Signed-off-by: Joonsoo Kim <js1304@gmail.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Reviewed-by: Minchan Kim <minchan@kernel.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: do not call frontswap_init() during swapoff · 6555bc03
      Cesar Eduardo Barros authored
      The call to frontswap_init() was added within enable_swap_info(), which
      was called not only during sys_swapon, but also to reinsert the swap_info
      into the swap_list in case of failure of try_to_unuse() within
      sys_swapoff.  This means that frontswap_init() might be called more than
      once for the same swap area.
      
      While as far as I could see no frontswap implementation has any problem
      with it (and in fact, all the ones I found ignore the parameter passed to
      frontswap_init), this could change in the future.
      
      To prevent future problems, move the call to frontswap_init() to outside
      the code shared between sys_swapon and sys_swapoff.
      Signed-off-by: Cesar Eduardo Barros <cesarb@cesarb.net>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Acked-by: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: refactor reinsert of swap_info in sys_swapoff() · cf0cac0a
      Cesar Eduardo Barros authored
      The block within sys_swapoff() which re-inserts the swap_info into the
      swap_list in case of failure of try_to_unuse() reads a few values outside
      the swap_lock.  While this is safe at that point, it is subtle code.
      
      Simplify the code by moving the reading of these values to a separate
      function, refactoring it a bit so they are read from within the swap_lock.
      This is easier to understand, and better matches the way it worked before
      I unified the insertion of the swap_info from both sys_swapon and
      sys_swapoff.
      
      This change should make no functional difference.  The only real change is
      moving the read of two or three structure fields to within the lock
      (frontswap_map_get() is nothing more than a read of p->frontswap_map).
      Signed-off-by: Cesar Eduardo Barros <cesarb@cesarb.net>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm,vmscan: only evict file pages when we have plenty · e9868505
      Rik van Riel authored
      If we have more inactive file pages than active file pages, we skip
      scanning the active file pages altogether, with the idea that we do not
      want to evict the working set when there is plenty of streaming IO in the
      cache.
      
      However, the code forgot to also skip scanning anonymous pages in that
      situation.  That leads to the curious situation of keeping the active file
      pages protected from being paged out when there are lots of inactive file
      pages, while still scanning and evicting anonymous pages.
      
      This patch fixes that situation, by only evicting file pages when we have
      plenty of them and most are inactive.
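      A sketch of the check added to get_scan_count() (condensed;
      fraction[0]/fraction[1] are the anon/file scan weights):

        /* Plenty of inactive file cache: reclaim from the file LRU
         * lists only, and leave the anonymous working set alone. */
        if (!inactive_file_is_low(lruvec)) {
                fraction[0] = 0;        /* anon */
                fraction[1] = 1;        /* file */
                denominator = 1;
                goto out;
        }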
      
      [akpm@linux-foundation.org: adjust comment layout]
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: add comment on storage key dirty bit semantics · e749eb95
      Jan Kara authored
      Add comments noting that the dirty bit in the storage key gets set
      whenever page content is changed.  Hopefully anyone who uses this
      function will look at one of the two places where we comment on this.
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm/memory_hotplug.c: update start_pfn in zone and pg_data when spanned_pages == 0. · 712cd386
      Tang Chen authored
      If we hot-remove memory only and leave the cpus alive, the corresponding
      node will not be removed.  But the node_start_pfn and node_spanned_pages
      in pg_data will be reset to 0.  In this case, when we hot-add the memory
      back next time, the node_start_pfn will always be 0 because no pfn is less
      than 0.  After that, if we hot-remove the memory again, it will cause
      kernel panic in function find_biggest_section_pfn() when it tries to scan
      all the pfns.
      
      The zone will also have the same problem.
      
      This patch sets start_pfn to the start_pfn of the section being added when
      spanned_pages of the zone or pg_data is 0.
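      A condensed sketch of the fix as applied to the zone span (the pg_data
      case is analogous):

        /* grow_zone_span() sketch: an empty zone (spanned_pages == 0) has
         * no meaningful zone_start_pfn left, so take the start of the
         * section being added instead of min()-ing with a stale value. */
        if (start_pfn < zone->zone_start_pfn || !zone->spanned_pages)
                zone->zone_start_pfn = start_pfn;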
      
        ---How to reproduce---
      
      1. hot-add a container with some memory and cpus;
      2. hot-remove the container's memory, and leave cpus there;
      3. hot-add these memory again;
      4. hot-remove them again;
      
      then, the kernel will panic.
      
        ---Call trace---
      
        BUG: unable to handle kernel paging request at 00000fff82a8cc38
        IP: [<ffffffff811c0d55>] find_biggest_section_pfn+0xe5/0x180
        ......
        Call Trace:
         [<ffffffff811c1124>] __remove_zone+0x184/0x1b0
         [<ffffffff811c11dc>] __remove_section+0x8c/0xb0
         [<ffffffff811c12e7>] __remove_pages+0xe7/0x120
         [<ffffffff81654f7c>] arch_remove_memory+0x2c/0x80
         [<ffffffff81655bb6>] remove_memory+0x56/0x90
         [<ffffffff813da0c8>] acpi_memory_device_remove_memory+0x48/0x73
         [<ffffffff813da55a>] acpi_memory_device_notify+0x153/0x274
         [<ffffffff813b6786>] acpi_ev_notify_dispatch+0x41/0x5f
         [<ffffffff813a3867>] acpi_os_execute_deferred+0x27/0x34
         [<ffffffff81090589>] process_one_work+0x219/0x680
         [<ffffffff810923be>] worker_thread+0x12e/0x320
         [<ffffffff81098396>] kthread+0xc6/0xd0
         [<ffffffff8167c7c4>] kernel_thread_helper+0x4/0x10
        ......
        ---[ end trace 96d845dbf33fee11 ]---
      Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slub, hotplug: ignore unrelated node's hot-adding and hot-removing · b9d5ab25
      Lai Jiangshan authored
      SLUB only focuses on the nodes which have normal memory; it ignores
      memory hot-adding and hot-removing on other nodes.

      That is: if memory is onlined on a node that previously had no onlined
      memory, but the newly onlined memory is not normal memory (for example,
      highmem), we should not allocate a kmem_cache_node for SLUB.
      
      And if the last normal memory is offlined but the node still has memory,
      we should remove the kmem_cache_node for that node.  (The current code
      delays that until all of the node's memory is offlined.)

      So we only do something when marg->status_change_nid_normal > 0;
      marg->status_change_nid is not suitable here.
      
      The same problem doesn't exist in SLAB, because SLAB allocates a
      kmem_list3 for every node even if the node doesn't have normal memory;
      SLAB tolerates kmem_list3 on alien nodes.  SLUB only focuses on the
      nodes which have normal memory and doesn't tolerate alien
      kmem_cache_node.  This patch makes SLUB self-consistent and avoids
      WARNs and BUGs in rare conditions.
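      A sketch of the resulting check in SLUB's hotplug callback
      (status_change_nid_normal comes from the memory_notify argument):

        static int slab_mem_going_online_callback(void *arg)
        {
                struct memory_notify *marg = arg;
                int nid = marg->status_change_nid_normal;

                /* A negative nid means no node gained its first normal
                 * memory, so no kmem_cache_node is needed: ignore it. */
                if (nid < 0)
                        return 0;

                /* ... allocate a kmem_cache_node on nid for each cache ... */
                return 0;
        }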
      Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: Rob Landley <rob@landley.net>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jiang Liu <jiang.liu@huawei.com>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Cc: Greg Kroah-Hartman <gregkh@suse.de>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Acked-by: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>