1. 25 May, 2011 2 commits
    • mm: vmscan: correct use of pgdat_balanced in sleeping_prematurely · afc7e326
      Johannes Weiner authored
      There are a few reports of people experiencing hangs when copying large
      amounts of data with kswapd using a large amount of CPU which appear to be
      due to recent reclaim changes.  SLUB using high orders is the trigger but
      not the root cause as SLUB has been using high orders for a while.  The
      root cause was bugs introduced into reclaim which are addressed by the
      following two patches.
      
      Patch 1 corrects logic introduced by commit 1741c877 ("mm: kswapd:
              keep kswapd awake for high-order allocations until a percentage of
              the node is balanced") to allow kswapd to go to sleep when
              balanced for high orders.
      
      Patch 2 notes that it is possible for kswapd to miss every
              cond_resched() and updates shrink_slab() so it'll at least reach
              that scheduling point.
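
      The pattern behind the second patch is easy to sketch.  The toy
      program below is not the kernel's shrink_slab();
      cond_resched_sketch() and shrink_slab_sketch() are hypothetical
      stand-ins that only illustrate the "guaranteed scheduling point"
      idea:

          #include <sched.h>
          #include <stdio.h>

          /* Hypothetical stand-in for the kernel's cond_resched(). */
          static void cond_resched_sketch(void)
          {
                  sched_yield();  /* give up the CPU at a known-safe point */
          }

          /*
           * Shape of the problem: if the loop body never runs (no work),
           * or every per-item check bails out early, the conditional
           * scheduling point is never reached and the caller can hog the
           * CPU.  The fix pattern is an unconditional scheduling point
           * that is always reached on the way out.
           */
          static void shrink_slab_sketch(int nr_shrinkers)
          {
                  for (int i = 0; i < nr_shrinkers; i++) {
                          /* ... shrink one cache ... */
                          cond_resched_sketch();  /* skipped when nr_shrinkers == 0 */
                  }
                  cond_resched_sketch();  /* guaranteed scheduling point */
          }

          int main(void)
          {
                  shrink_slab_sketch(0);  /* still yields once despite no work */
                  puts("done");
                  return 0;
          }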
      
      Chris Wood reports that these two patches in isolation are sufficient to
      prevent the system hanging.  AFAIK, they should also resolve similar hangs
      experienced by James Bottomley.
      
      This patch:
      
      Johannes Weiner pointed out that the logic in commit 1741c877 ("mm: kswapd:
      keep kswapd awake for high-order allocations until a percentage of the
      node is balanced") is backwards.  Instead of allowing kswapd to go to
      sleep when the node is balanced for high-order allocations, it keeps
      kswapd running uselessly.
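
      A minimal sketch of the inversion, with the pgdat state reduced to a
      single hypothetical flag; pgdat_balanced_sketch() and
      sleeping_prematurely_sketch() stand in for the mm/vmscan.c functions
      and are not the actual kernel code:

          #include <stdbool.h>
          #include <stdio.h>

          /* Hypothetical pgdat state, reduced to a single flag. */
          struct pgdat_sketch { bool balanced_for_order; };

          /* Mimics pgdat_balanced(): true when enough of the node is balanced. */
          static bool pgdat_balanced_sketch(const struct pgdat_sketch *pgdat)
          {
                  return pgdat->balanced_for_order;
          }

          /*
           * Mimics sleeping_prematurely(): returns true when it is too
           * early for kswapd to sleep.  The buggy version returned the
           * predicate un-negated, i.e. it kept kswapd awake precisely
           * when the node was already balanced.
           */
          static bool sleeping_prematurely_sketch(const struct pgdat_sketch *pgdat,
                                                  int order)
          {
                  if (order)
                          return !pgdat_balanced_sketch(pgdat);  /* the fix: negate */
                  return false;  /* order-0 handling elided */
          }

          int main(void)
          {
                  struct pgdat_sketch node = { .balanced_for_order = true };
                  /* Balanced node, high order: not premature to sleep -> prints 0. */
                  printf("%d\n", sleeping_prematurely_sketch(&node, 1));
                  return 0;
          }
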
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
      Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
      Tested-by: Colin King <colin.king@canonical.com>
      Cc: Raghavendra D Prabhu <raghu.prabhu13@gmail.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Chris Mason <chris.mason@oracle.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Rik van Riel <riel@redhat.com>
      Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
      Reviewed-by: Wu Fengguang <fengguang.wu@intel.com>
      Cc: <stable@kernel.org>		[2.6.38+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • slub: Fix double bit unlock in debug mode · a71ae47a
      Christoph Lameter authored
      Commit 442b06bc ("slub: Remove node check in slab_free") added a
      call to deactivate_slab() in the debug case in __slab_alloc(), which
      unlocks the current slab used for allocation.  Going to the label
      'unlock_out' then does it again.
      
      Also, the debug case does not need the other processing that the
      'unlock_out' path does: we always fall back to the slow path in debug
      mode, so the tid update is useless.
      
      Similarly, ALLOC_SLOWPATH would just be incremented for all
      allocations, which is equally pointless.
      
      So simply restore irq flags and return the object.
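
      A toy model of the double unlock and the fix; the lock,
      deactivate_slab_sketch() and debug_alloc_sketch() below are
      hypothetical simplifications, not the actual SLUB code:

          #include <assert.h>
          #include <stdio.h>

          /* Toy lock that traps on double unlock, standing in for the slab lock. */
          struct toy_lock { int held; };

          static void toy_lock_acquire(struct toy_lock *l) { assert(!l->held); l->held = 1; }
          static void toy_lock_release(struct toy_lock *l) { assert(l->held);  l->held = 0; }

          /* Stand-in for deactivate_slab(): note it drops the slab lock itself. */
          static void deactivate_slab_sketch(struct toy_lock *slab)
          {
                  /* ... take the slab off the per-cpu list ... */
                  toy_lock_release(slab);
          }

          /*
           * Shape of the fixed debug path: after deactivate_slab() has
           * already unlocked the slab, return the object directly.  The
           * bug was jumping to an 'unlock_out' label that released the
           * lock a second time.
           */
          static void *debug_alloc_sketch(struct toy_lock *slab, void *object)
          {
                  toy_lock_acquire(slab);
                  /* ... debug checks, bump inuse, update the freelist ... */
                  deactivate_slab_sketch(slab);
                  /* no goto unlock_out; (in the kernel: restore irq flags) */
                  return object;
          }

          int main(void)
          {
                  struct toy_lock slab = { 0 };
                  int obj;
                  printf("allocated %p\n", debug_alloc_sketch(&slab, &obj));
                  return 0;
          }
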
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Reported-and-bisected-by: James Morris <jmorris@namei.org>
      Reported-by: Ingo Molnar <mingo@elte.hu>
      Reported-by: Jens Axboe <jaxboe@fusionio.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  2. 24 May, 2011 38 commits