1. 11 Dec, 2012 1 commit
  2. 10 Dec, 2012 4 commits
  3. 09 Dec, 2012 24 commits
  4. 08 Dec, 2012 5 commits
  5. 07 Dec, 2012 3 commits
    • net: gro: fix possible panic in skb_gro_receive() · c3c7c254
      Eric Dumazet authored
      commit 2e71a6f8 (net: gro: selective flush of packets) introduced
      a bug for skbs using frag_list. This part of the GRO stack is rarely
      used, as it requires skbs that do not use a page fragment for their
      skb->head.
      
      Most drivers do use a page fragment, but some of them use GFP_KERNEL
      allocations for the initial fill of their RX ring buffer.
      
      napi_gro_flush() overwrites skb->prev, which these skbs used to
      point to the last skb in frag_list.
      
      Fix this by using a separate field in struct napi_gro_cb to point
      to the last fragment.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c3c7c254
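      A minimal C sketch of the idea behind this fix, using stand-in names
      (struct skb, struct gro_cb, and gro_append() are illustrative, not the
      kernel's struct sk_buff, struct napi_gro_cb, or skb_gro_receive()):
      track the frag_list tail in a dedicated control-block field so nothing
      else can clobber it.

        /* illustrative only; simplified stand-ins for kernel types */
        #include <stdio.h>

        struct skb {
            struct skb *next;        /* simplified frag_list link */
        };

        struct gro_cb {
            struct skb *last;        /* dedicated tail pointer: the fix */
            int count;
        };

        /* Append skb to head's chain in O(1) via cb->last, instead of
         * reusing skb->prev, which the flush pass may overwrite. */
        static void gro_append(struct gro_cb *cb, struct skb *head,
                               struct skb *skb)
        {
            if (cb->last)
                cb->last->next = skb;
            else
                head->next = skb;
            cb->last = skb;
            cb->count++;
        }

        int main(void)
        {
            struct skb head = {0}, a = {0}, b = {0};
            struct gro_cb cb = {0};

            gro_append(&cb, &head, &a);
            gro_append(&cb, &head, &b);
            printf("chained %d skbs, tail correct: %d\n",
                   cb.count, cb.last == &b);
            return 0;
        }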
    • tcp: bug fix Fast Open client retransmission · 93b174ad
      Yuchung Cheng authored
      If the SYN-ACK only partially acks the SYN-data, the client
      retransmits the remaining data via tcp_retransmit_skb(). This
      increments loss recovery state variables like tp->retrans_out in the
      Open state. If loss recovery happens before the retransmission is
      acked, it triggers the WARN_ON check in tcp_fastretrans_alert(). For
      example: the client sends SYN-data, gets a SYN-ACK acking only the
      ISN, retransmits the data, sends another 4 data packets, and gets 3
      dupacks.
      
      Since the retransmission is not caused by a network drop, it should
      not update the recovery state variables. Further, the server may
      return a smaller MSS than the cached MSS used for the SYN-data, so the
      retransmission needs a loop. Otherwise some data will not be
      retransmitted until a timeout or another loss recovery event.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      93b174ad
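      A hedged sketch of the retransmission loop the message describes,
      with hypothetical names (retransmit_range() stands in for the
      tcp_retransmit_skb() path): re-segment the unacked SYN-data at the
      server's smaller MSS so every byte is retransmitted, without bumping
      recovery counters.

        #include <stdio.h>

        static void retransmit_range(unsigned int start, unsigned int end,
                                     unsigned int mss)
        {
            unsigned int seq = start;

            while (seq < end) {
                unsigned int len = end - seq;

                if (len > mss)
                    len = mss;
                /* printf stands in for (re)sending [seq, seq+len);
                 * note that no recovery counters (e.g. tp->retrans_out)
                 * are touched here, since this is not a loss event. */
                printf("retransmit seq=%u len=%u\n", seq, len);
                seq += len;
            }
        }

        int main(void)
        {
            /* e.g. 1000 bytes of SYN-data, only the ISN acked, and a
             * server MSS of 536 vs a cached MSS of 1460 */
            retransmit_range(1, 1001, 536);
            return 0;
        }

      Without the loop, a single tcp_retransmit_skb() call built for the
      larger cached MSS would leave the tail of the SYN-data unsent until a
      timeout or another loss recovery event, exactly as the message warns.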
    • Merge tag 'mmc-fixes-for-3.7' of git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc · 1afa4717
      Linus Torvalds authored
      Pull MMC fixes from Chris Ball:
       "Two small regression fixes:
      
         - sdhci-s3c: Fix runtime PM regression against 3.7-rc1
         - sh-mmcif: Fix oops against 3.6"
      
      * tag 'mmc-fixes-for-3.7' of git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc:
        mmc: sh-mmcif: avoid oops on spurious interrupts (second try)
        Revert misapplied "mmc: sh-mmcif: avoid oops on spurious interrupts"
        mmc: sdhci-s3c: fix missing clock for gpio card-detect
      1afa4717
  6. 06 Dec, 2012 3 commits
    • tmpfs: fix shared mempolicy leak · 18a2f371
      Mel Gorman authored
      This fixes a regression in 3.7-rc, which has since gone into stable.
      
      Commit 00442ad0 ("mempolicy: fix a memory corruption by refcount
      imbalance in alloc_pages_vma()") changed get_vma_policy() to raise the
      refcount on a shmem shared mempolicy; whereas shmem_alloc_page() went
      on expecting alloc_page_vma() to drop the refcount it had acquired.
      This deserves a rework, but for now fix the leak in shmem_alloc_page().
      
      Hugh: shmem_swapin() did not need a fix, but surely it's clearer to
      use the same refcounting there as in shmem_alloc_page(), and to delete
      its on-stack mempolicy along with the strange mpol_cond_copy() and
      __mpol_cond_copy(): those were invented to let swapin_readahead() make
      an unknown number of calls to alloc_pages_vma() with one mempolicy;
      but since 00442ad0, alloc_pages_vma() has kept the refcount in
      balance, so this is no longer a problem.
      Reported-and-tested-by: Tommi Rantala <tt.rantala@gmail.com>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      18a2f371
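      A toy model of the refcount contract involved, with illustrative names
      (policy_lookup() and alloc_page_with_policy() stand in for
      get_vma_policy() and shmem_alloc_page(); this is not the kernel
      source): the lookup raises the refcount, and the fixed allocation path
      drops exactly that reference.

        #include <assert.h>
        #include <stdio.h>

        struct mempolicy {
            int refcnt;
        };

        static void mpol_get(struct mempolicy *pol) { pol->refcnt++; }
        static void mpol_put(struct mempolicy *pol) { pol->refcnt--; }

        /* lookup raises the refcount, as get_vma_policy() does after
         * commit 00442ad0 */
        static struct mempolicy *policy_lookup(struct mempolicy *shared)
        {
            mpol_get(shared);
            return shared;
        }

        /* the fixed allocation path drops the reference it was handed */
        static void alloc_page_with_policy(struct mempolicy *shared)
        {
            struct mempolicy *pol = policy_lookup(shared);
            /* ... allocate a page according to pol ... */
            mpol_put(pol);        /* the fix: no leaked reference */
        }

        int main(void)
        {
            struct mempolicy shared = { .refcnt = 1 };

            for (int i = 0; i < 1000; i++)
                alloc_page_with_policy(&shared);
            assert(shared.refcnt == 1);   /* balanced: no leak */
            printf("refcnt=%d\n", shared.refcnt);
            return 0;
        }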
    • mm: vmscan: do not keep kswapd looping forever due to individual uncompactable zones · c702418f
      Johannes Weiner authored
      When a zone meets its high watermark and is compactable in case of
      higher order allocations, it contributes to the percentage of the node's
      memory that is considered balanced.
      
      This requirement, that a node be only partially balanced, came about
      when kswapd was desperately trying to balance tiny zones while all
      bigger zones in the node had plenty of free memory.  Arguably, the
      same should apply to compaction: if a significant part of the node is
      balanced enough to run compaction, do not get hung up on that tiny
      zone that might never get in shape.
      
      When the compaction logic in kswapd is reached, we know that at least
      25% of the node's memory is balanced properly for compaction (see
      zone_balanced and pgdat_balanced).  Remove the individual zone checks
      that restart the kswapd cycle.
      
      Otherwise, we may observe more endless looping in kswapd where the
      compaction code loops back to reclaim because of a single zone and
      reclaim does nothing because the node is considered balanced overall.
      
      See for example
      
        https://bugzilla.redhat.com/show_bug.cgi?id=866988
      Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
      Reported-and-tested-by: Thorsten Leemhuis <fedora@leemhuis.info>
      Reported-by: Jiri Slaby <jslaby@suse.cz>
      Tested-by: John Ellson <john.ellson@comcast.net>
      Tested-by: Zdenek Kabelac <zkabelac@redhat.com>
      Tested-by: Bruno Wolff III <bruno@wolff.to>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c702418f
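      A toy version of the node-balance test the message refers to,
      simplified from the zone_balanced()/pgdat_balanced() logic it names
      (field and function bodies here are illustrative, not the kernel
      source): the node counts as balanced once zones holding at least 25%
      of its pages meet their watermarks, so a single tiny zone can no
      longer stall kswapd.

        #include <stdbool.h>
        #include <stdio.h>

        struct zone {
            unsigned long present_pages;
            bool meets_watermark;    /* stand-in for zone_balanced() */
        };

        static bool pgdat_balanced(const struct zone *zones, int nr_zones)
        {
            unsigned long total = 0, balanced = 0;

            for (int i = 0; i < nr_zones; i++) {
                total += zones[i].present_pages;
                if (zones[i].meets_watermark)
                    balanced += zones[i].present_pages;
            }
            /* balanced if at least 25% of the node's pages sit in
             * zones that meet their watermarks */
            return balanced * 4 >= total;
        }

        int main(void)
        {
            struct zone zones[] = {
                { 16,     false },  /* tiny zone, never "in shape"     */
                { 100000, true  },  /* big zone with plenty of memory  */
            };

            printf("node balanced: %d\n", pgdat_balanced(zones, 2));
            return 0;
        }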
    • mm: compaction: validate pfn range passed to isolate_freepages_block · 60177d31
      Mel Gorman authored
      Commit 0bf380bc ("mm: compaction: check pfn_valid when entering a
      new MAX_ORDER_NR_PAGES block during isolation for migration") added a
      check for pfn_valid() when isolating pages for migration as the scanner
      does not necessarily start pageblock-aligned.
      
      Since commit c89511ab ("mm: compaction: Restart compaction from near
      where it left off"), the free scanner has the same problem.  This patch
      makes sure that the pfn range passed to isolate_freepages_block() stays
      within the same pageblock, so that pfn_valid() checks are unnecessary.
      
      In answer to Henrik's wondering why others have not reported this:
      reproducing it requires a large enough hole with the right alignment
      to have compaction walk into a PFN range with no memmap.  The size and
      alignment depend on the memory model - 4M for FLATMEM and 128M for
      SPARSEMEM on x86.  It needs a "lucky" machine.
      Reported-by: Henrik Rydberg <rydberg@euromail.se>
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      60177d31
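      A small sketch of the kind of clamping the patch describes, with
      illustrative constants and a hypothetical pageblock_end() helper (not
      a kernel API): cap the scan window at the end of the current pageblock
      so the scanner never crosses into a range whose memmap may not exist.

        #include <stdio.h>

        #define PAGEBLOCK_ORDER   9UL
        #define PAGEBLOCK_PAGES  (1UL << PAGEBLOCK_ORDER)

        /* first pfn past the pageblock containing pfn */
        static unsigned long pageblock_end(unsigned long pfn)
        {
            return (pfn & ~(PAGEBLOCK_PAGES - 1)) + PAGEBLOCK_PAGES;
        }

        int main(void)
        {
            unsigned long pfn = 1000;      /* not pageblock-aligned */
            unsigned long zone_end = 1500;
            unsigned long end = pageblock_end(pfn);

            if (end > zone_end)
                end = zone_end;
            /* a call like isolate_freepages_block(pfn, end, ...) now
             * stays inside one pageblock, so per-page pfn_valid()
             * checks are unnecessary */
            printf("scan [%lu, %lu)\n", pfn, end);
            return 0;
        }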