1. 30 Apr, 2013 12 commits
  2. 29 Apr, 2013 28 commits
    • memcg: take reference before releasing rcu_read_lock · ca0dde97
      Li Zefan authored
      The memcg is not referenced, so it can be destroyed at any time right
      after we exit the RCU read-side critical section, and it is therefore
      not safe to access it.
      
      To fix this, we call css_tryget() to get a reference while we're still
      inside the RCU read-side critical section.
      
      This also removes a bogus comment above __memcg_create_cache_enqueue().
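      A minimal sketch of the pattern this establishes, assuming the usual
      memcg helpers (the actual call site in mm/memcontrol.c differs in
      detail):

        rcu_read_lock();
        memcg = mem_cgroup_from_task(rcu_dereference(current->mm->owner));
        /* Pin the memcg before leaving the RCU read-side critical section. */
        if (!css_tryget(&memcg->css))
                memcg = NULL;   /* the memcg is on its way to destruction */
        rcu_read_unlock();

        if (memcg) {
                /* memcg can be dereferenced safely here */
                css_put(&memcg->css);   /* drop the reference when done */
        }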
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Acked-by: Glauber Costa <glommer@parallels.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ca0dde97
    • mem hotunplug: fix kfree() of bootmem memory · ebff7d8f
      Yasuaki Ishimatsu authored
      When hot-removing memory that was present at boot time, the following messages are shown:
      
        kernel BUG at mm/slub.c:3409!
        invalid opcode: 0000 [#1] SMP
        Modules linked in: ebtable_nat ebtables xt_CHECKSUM iptable_mangle bridge stp llc ipmi_devintf ipmi_msghandler sunrpc ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables binfmt_misc vfat fat dm_mirror dm_region_hash dm_log dm_mod vhost_net macvtap macvlan tun uinput iTCO_wdt iTCO_vendor_support coretemp kvm_intel kvm crc32c_intel ghash_clmulni_intel microcode pcspkr sg i2c_i801 lpc_ich mfd_core igb i2c_algo_bit i2c_core e1000e ptp pps_core tpm_infineon ioatdma dca sr_mod cdrom sd_mod crc_t10dif usb_storage megaraid_sas lpfc scsi_transport_fc scsi_tgt scsi_mod
        CPU 0
        Pid: 5091, comm: kworker/0:2 Tainted: G        W    3.9.0-rc6+ #15
        RIP: kfree+0x232/0x240
        Process kworker/0:2 (pid: 5091, threadinfo ffff88084678c000, task ffff88083928ca80)
        Call Trace:
          __release_region+0xd4/0xe0
          __remove_pages+0x52/0x110
          arch_remove_memory+0x89/0xd0
          remove_memory+0xc4/0x100
          acpi_memory_device_remove+0x6d/0xb1
          acpi_device_remove+0x89/0xab
          __device_release_driver+0x7c/0xf0
          device_release_driver+0x2f/0x50
          acpi_bus_device_detach+0x6c/0x70
          acpi_ns_walk_namespace+0x11a/0x250
          acpi_walk_namespace+0xee/0x137
          acpi_bus_trim+0x33/0x7a
          acpi_bus_hot_remove_device+0xc4/0x1a1
          acpi_os_execute_deferred+0x27/0x34
          process_one_work+0x1f7/0x590
          worker_thread+0x11a/0x370
          kthread+0xee/0x100
          ret_from_fork+0x7c/0xb0
        RIP  [<ffffffff811c41d2>] kfree+0x232/0x240
         RSP <ffff88084678d968>
      
      The messages appear because a resource structure that was allocated by
      bootmem is being released with kfree().  So when we release a resource
      structure, we should check whether it was allocated by bootmem or not.
      
      But even if we know a resource structure was allocated by bootmem, we
      cannot free it, since SLxB cannot handle it.  So, to allow such a
      resource structure to be reused, this patch remembers it via
      bootmem_resource as follows:
      
      When releasing a resource structure with free_resource(), free_resource()
      checks whether the resource structure was allocated by bootmem or not.
      If it was allocated by bootmem, free_resource() adds it to
      bootmem_resource.  If it was not allocated by bootmem, free_resource()
      releases it with kfree().
      
      And when a new resource structure is needed, get_resource() checks
      whether bootmem_resource holds any released resource structures.  If
      there is a released resource structure, get_resource() returns it.  If
      there is not, get_resource() returns a new resource structure allocated
      with kzalloc().
      
      [akpm@linux-foundation.org: s/get_resource/alloc_resource/]
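      A sketch of the scheme described above (the bootmem test, list handling
      and locking below are assumptions based on this changelog, not a
      verbatim copy of kernel/resource.c):

        static DEFINE_SPINLOCK(bootmem_resource_lock);
        static struct resource *bootmem_resource_free;  /* reuse list */

        static void free_resource(struct resource *res)
        {
                if (!res)
                        return;

                if (!PageSlab(virt_to_head_page(res))) {
                        /* Allocated by bootmem: SLxB can't free it, park it. */
                        spin_lock(&bootmem_resource_lock);
                        res->sibling = bootmem_resource_free;
                        bootmem_resource_free = res;
                        spin_unlock(&bootmem_resource_lock);
                } else {
                        kfree(res);
                }
        }

        static struct resource *alloc_resource(gfp_t flags)
        {
                struct resource *res = NULL;

                spin_lock(&bootmem_resource_lock);
                if (bootmem_resource_free) {
                        res = bootmem_resource_free;
                        bootmem_resource_free = res->sibling;
                }
                spin_unlock(&bootmem_resource_lock);

                if (res)
                        memset(res, 0, sizeof(struct resource));
                else
                        res = kzalloc(sizeof(struct resource), flags);

                return res;
        }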
      Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Reviewed-by: Toshi Kani <toshi.kani@hp.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Ram Pai <linuxram@us.ibm.com>
      Cc: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ebff7d8f
    • mm/Kconfig: add an option to disable bounce · 9ca24e2e
      Vinayak Menon authored
      There are times when HIGHMEM is enabled but we would rather not have
      CONFIG_BOUNCE enabled.  CONFIG_BOUNCE can reduce block device
      throughput, and this is not ideal for machines where we don't gain much
      by enabling it.  So provide an option to deselect CONFIG_BOUNCE.  The
      observation was made while measuring eMMC throughput using iozone on an
      ARM device with 1GB RAM.
      Signed-off-by: Vinayak Menon <vinayakm.list@gmail.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Randy Dunlap <rdunlap@infradead.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      9ca24e2e
    • mm, nobootmem: do memset() after memblock_reserve() · b476e295
      Joonsoo Kim authored
      Currently, we do the memset() before reserving the area.  This may not
      cause any problem, but it is somewhat odd, so change the execution order
      to reserve the area first and clear it afterwards.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Jiang Liu <liuj97@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b476e295
    • mm, nobootmem: clean-up of free_low_memory_core_early() · b4def350
      Joonsoo Kim authored
      Remove the unused argument and make the function static, because there
      are no users outside of nobootmem.c.
      Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Jiang Liu <liuj97@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b4def350
    • fs/buffer.c: remove unnecessary init operation after allocating buffer_head. · e7600409
      majianpeng authored
      Buffer head allocation uses kmem_cache_zalloc(), so we needn't call
      'init_buffer(bh, NULL, NULL)' or perform other zeroing operations.
      Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e7600409
    • numa, cpu hotplug: change links of CPU and node when changing node number by onlining CPU · 34640468
      Yasuaki Ishimatsu authored
      When booting an x86 system that contains a memoryless node, the node
      numbers of CPUs on the memoryless node are changed to the nearest online
      node number by init_cpu_to_node(), because the node is not online.

      In my system, the node numbers of cpu#30-44 and 75-89 were changed from
      2 to 0 as follows:
      
        $ numactl --hardware
        available: 2 nodes (0-1)
        node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 30 31 32 33 34 35 36 37 38 39 40
        41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 75 76 77 78 79 80 81 82
        83 84 85 86 87 88 89
        node 0 size: 32394 MB
        node 0 free: 27898 MB
        node 1 cpus: 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 60 61 62 63 64 65 66
        67 68 69 70 71 72 73 74
        node 1 size: 32768 MB
        node 1 free: 30335 MB
      
      If we hot-add memory to the memoryless node and offline/online all CPUs
      on the node, the node numbers of these CPUs are changed to the correct
      node numbers by srat_detect_node(), because the node becomes online.

      In this case, the node numbers of cpu#30-44 and 75-89 were changed from
      0 to 2 in my system as follows:
      
        $ numactl --hardware
        available: 3 nodes (0-2)
        node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 45 46 47 48 49 50 51 52 53 54 55
        56 57 58 59
        node 0 size: 32394 MB
        node 0 free: 27218 MB
        node 1 cpus: 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 60 61 62 63 64 65 66
        67 68 69 70 71 72 73 74
        node 1 size: 32768 MB
        node 1 free: 30014 MB
        node 2 cpus: 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 75 76 77 78 79 80 81
        82 83 84 85 86 87 88 89
        node 2 size: 16384 MB
        node 2 free: 16384 MB
      
      But the "cpu to node" and "node to cpu" links were not changed, as shown below:
      
        $ ls /sys/devices/system/cpu/cpu30/|grep node
        node0
        $ ls /sys/devices/system/node/node0/|grep cpu30
        cpu30
      
      "numactl --hardware" shows that cpu30 belongs to node 2.  But sysfs
      links do not change.
      
      This patch changes "cpu to node" and "node to cpu" links when node
      number is changed by onlining a CPU.
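      A hedged sketch of the idea (the helper name and per-CPU bookkeeping are
      illustrative, not the patch's identifiers; register_cpu_under_node() and
      unregister_cpu_under_node() are the existing drivers/base/node.c helpers
      that create and remove the symlink pair):

        static int cpu_registered_node[NR_CPUS];

        /* Called when a CPU comes online and its node may have changed. */
        static void fixup_cpu_node_links(unsigned int cpu)
        {
                int old_nid = cpu_registered_node[cpu];
                int new_nid = cpu_to_node(cpu);

                if (old_nid == new_nid)
                        return;

                /* Drop the stale nodeX/cpuN and cpuN/nodeX symlinks ... */
                unregister_cpu_under_node(cpu, old_nid);
                /* ... and recreate them under the node the CPU now uses. */
                register_cpu_under_node(cpu, new_nid);
                cpu_registered_node[cpu] = new_nid;
        }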
      Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
      Cc: Greg KH <greg@kroah.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      34640468
    • mm: fix memory_hotplug.c printk format warning · 349daa0f
      Randy Dunlap authored
      PFN_PHYS() returns a phys_addr_t, which can be u32 or u64.
      Fix the build warning when phys_addr_t is u32.
      
        mm/memory_hotplug.c: warning: format '%llx' expects argument of type 'long long unsigned int', but argument 2 has type 'unsigned int' [-Wformat]:  => 1685:3
        mm/memory_hotplug.c: warning: format '%llx' expects argument of type 'long long unsigned int', but argument 3 has type 'unsigned int' [-Wformat]:  => 1685:3
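      The usual fix for this class of warning is to keep the %llx format and
      cast the phys_addr_t value explicitly; a sketch (the exact printk in
      memory_hotplug.c may differ):

        printk(KERN_INFO "range [0x%llx-0x%llx]\n",
               (unsigned long long)PFN_PHYS(start_pfn),
               (unsigned long long)(PFN_PHYS(end_pfn) - 1));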
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      349daa0f
    • mm: swap: mark swap pages writeback before queueing for direct IO · 0cdc444a
      Mel Gorman authored
      As pointed out by Andrew Morton, the swap-over-NFS writeback is not
      setting PageWriteback before it is queued for direct IO.  While swap
      pages do not participate in BDI or process dirty accounting and the IO
      is synchronous, the writeback bit is still required and not setting it
      in this case was an oversight.  swapoff depends on the page writeback to
      synchronise all pending writes on a swap page before it is reused.
      Swapcache freeing and reuse depend on checking PageWriteback under the
      page lock to ensure the page is safe to reuse.
      
      Direct IO handlers and the direct IO handler for NFS do not deal with
      PageWriteback as they are synchronous writes.  In the case of NFS, it
      schedules pages (or a page in the case of swap) for IO and then waits
      synchronously for IO to complete in nfs_direct_write().  It is
      recognised that this is a slowdown from normal swap handling which is
      asynchronous and uses a completion handler.  Shoving PageWriteback
      handling down into direct IO handlers looks like a bad fit to handle the
      swap case although it may have to be dealt with some day if swap is
      converted to use direct IO in general and bmap is finally done away
      with.  At that point it will be necessary to refit asynchronous direct
      IO with completion handlers onto the swap subsystem.
      
      As swapcache currently depends on PageWriteback to protect against
      races, this patch sets PageWriteback under the page lock before queueing
      it for direct IO.  It is cleared when the direct IO handler returns.  IO
      errors are treated similarly to the direct-to-bio case, except that
      PageError is not set, because in the case of swap-over-NFS the error is
      likely to be transient.
      
      It was asked what prevents such a page being reclaimed in parallel.
      With this patch applied, such a page will now be skipped (most of the
      time) or blocked until the writeback completes.  Reclaim checks
      PageWriteback under the page lock before calling try_to_free_swap and
      the page lock should prevent the page being requeued for IO before it is
      freed.
      
      This and Jerome's related patch should be considered for -stable as far
      back as 3.6, when swap-over-NFS was introduced.
      
      [akpm@linux-foundation.org: use pr_err_ratelimited()]
      [akpm@linux-foundation.org: remove hopefully-unneeded cast in printk]
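      A minimal sketch of the ordering this establishes in the swap-over-file
      path (the kiocb/iovec marshalling for ->direct_IO is hidden behind an
      assumed helper here; see mm/page_io.c for the real code):

        set_page_writeback(page);       /* set while the page is still locked */
        unlock_page(page);
        ret = swap_write_via_direct_io(page);   /* assumed ->direct_IO wrapper */
        if (ret) {
                /* likely-transient error: redirty, but do not SetPageError */
                set_page_dirty(page);
                pr_err_ratelimited("Write error on dio swapfile (%llu)\n",
                                   (unsigned long long)page_file_offset(page));
        }
        end_page_writeback(page);       /* the synchronous write has returned */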
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: <stable@vger.kernel.org>	[3.6+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      0cdc444a
    • swap: redirty page if page write fails on swap file · 2d30d31e
      Jerome Marchand authored
      Since commit 62c230bc ("mm: add support for a filesystem to activate
      swap files and use direct_IO for writing swap pages"), swap_writepage()
      calls direct_IO on swap files.  However, in that case the page isn't
      redirtied if I/O fails, and is therefore handled afterwards as if it had
      been successfully written to the swap file, leading to memory corruption
      when the page is eventually swapped back in.
      
      This patch sets the page dirty when direct_IO() fails.  It fixes a
      memory corruption that happened while using swap-over-NFS.
      Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: <stable@vger.kernel.org>	[3.6+]
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2d30d31e
    • mm, memcg: give exiting processes access to memory reserves · 465adcf1
      David Rientjes authored
      A memcg may livelock when oom if the process that grabs the hierarchy's
      oom lock is never the first process with PF_EXITING set in the memcg's
      task iteration.
      
      The oom killer, both global and memcg, will defer if it finds an
      eligible process that is in the process of exiting and it is not being
      ptraced.  The idea is to allow it to exit without using memory reserves
      before needlessly killing another process.
      
      This normally works fine except in the memcg case with a large number of
      threads attached to the oom memcg.  In this case, the memcg oom killer
      only gets called for the process that grabs the hierarchy's oom lock;
      all others end up blocked on the memcg's oom waitqueue.  Thus, if the
      process that grabs the hierarchy's oom lock is never the first
      PF_EXITING process in the memcg's task iteration, the oom killer is
      constantly deferred without anything making progress.
      
      The fix is to give PF_EXITING processes access to memory reserves, just
      as if they had been marked as oom killed, without requiring the task
      iteration.  This allows __mem_cgroup_try_charge() to succeed so that the
      process may exit.  This makes the memcg oom killer exemption for
      TIF_MEMDIE tasks, now also granted immediately to processes with pending
      SIGKILLs and those in the exit path, equivalent to what is done for the
      global oom killer.
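      A sketch of the check this extends in __mem_cgroup_try_charge()
      (simplified; the existing bypass covered TIF_MEMDIE and pending
      SIGKILLs, and PF_EXITING is the new condition):

        if (unlikely(test_thread_flag(TIF_MEMDIE) ||
                     fatal_signal_pending(current) ||
                     current->flags & PF_EXITING))
                goto bypass;    /* charge against reserves so the task can exit */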
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      465adcf1
    • thp: fix huge zero page logic for page with pfn == 0 · 5918d10a
      Kirill A. Shutemov authored
      The current implementation of the huge zero page uses pfn value 0 to
      indicate that the page hasn't been allocated yet.  It assumes that the
      buddy page allocator can't return a page with pfn == 0.

      Let's rework the code to store the 'struct page *' of the huge zero
      page, not its pfn.  This way we can avoid the weak assumption.
      
      [akpm@linux-foundation.org: fix sparse warning]
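      A sketch of the new representation (simplified; NULL, rather than
      pfn 0, now means "not allocated yet"):

        static struct page *huge_zero_page __read_mostly;

        static inline bool is_huge_zero_page(struct page *page)
        {
                return ACCESS_ONCE(huge_zero_page) == page;
        }

        static inline bool is_huge_zero_pmd(pmd_t pmd)
        {
                return is_huge_zero_page(pmd_page(pmd));
        }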
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Reported-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5918d10a
    • memcg: avoid accessing memcg after releasing reference · fd0ccaf2
      Li Zefan authored
      Accessing the memcg after its reference has been released might cause a
      use-after-free bug.
      Signed-off-by: Li Zefan <lizefan@huawei.com>
      Cc: Glauber Costa <glommer@parallels.com>
      Acked-by: Michal Hocko <mhocko@suse.cz>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fd0ccaf2
    • fs: fix fsync() error reporting · 865ffef3
      Dmitry Monakhov authored
      There are two convenient ways to report errors to userspace:

      1) return the error to the original syscall, for example write(2)
      2) mark the mapping with an error flag and return it on a later fsync(2)
      
      The second one is broken if (mapping->nrpages == 0).  This is a
      real-life situation because after an error the pages are likely to be
      truncated or invalidated.

      We have to return an error regardless of the number of pages in the
      mapping.
      
      #Original testcase: git@github.com:dmonakhov/xfstests.git
      MOUNT_OPTIONS="-b1024"
      ./check shared/305
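      A sketch of the fix in mm/filemap.c, close to the shape of the change
      (filemap_check_errors() is the helper introduced to report the pending
      AS_EIO/AS_ENOSPC flags even when the mapping has no pages left):

        static int filemap_check_errors(struct address_space *mapping)
        {
                int ret = 0;

                if (test_and_clear_bit(AS_ENOSPC, &mapping->flags))
                        ret = -ENOSPC;
                if (test_and_clear_bit(AS_EIO, &mapping->flags))
                        ret = -EIO;
                return ret;
        }

        int filemap_write_and_wait(struct address_space *mapping)
        {
                int err = 0;

                if (mapping->nrpages) {
                        err = filemap_fdatawrite(mapping);
                        if (err != -EIO) {
                                int err2 = filemap_fdatawait(mapping);
                                if (!err)
                                        err = err2;
                        }
                } else {
                        /* no pages, but a write error may still be pending */
                        err = filemap_check_errors(mapping);
                }
                return err;
        }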
      Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      865ffef3
    • memblock: fix missing comment of memblock_insert_region() · 209ff86d
      Tang Chen authored
      There is no comment for the nid parameter of memblock_insert_region().
      This patch adds one.
      Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      209ff86d
    • mm: Remove unused parameter of pages_correctly_reserved() · 6056d619
      Tang Chen authored
      nr_pages is not used in pages_correctly_reserved().
      So remove it.
      Signed-off-by: Tang Chen <tangchen@cn.fujitsu.com>
      Reviewed-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
      Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
      Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      6056d619
    • firmware, memmap: fix firmware_map_entry leak · 7a6f93b0
      Yasuaki Ishimatsu authored
      When hot removing memory, the firmware_map_entry that covers the memory
      range is released by release_firmware_map_entry().  If the entry was
      allocated by bootmem, release_firmware_map_entry() adds the entry to the
      map_entries_bootmem list when firmware_map_find_entry() finds the entry
      in the map_entries list.  But firmware_map_find_entry() never finds the
      entry, since the map_entries list no longer contains it.  So the entry
      just leaks.
      
      Here are the steps that leak a firmware_map_entry:
      firmware_map_remove()
      -> firmware_map_find_entry()
         Find released entry from map_entries list
      -> firmware_map_remove_entry()
         Delete the entry from map_entries list
      -> remove_sysfs_fw_map_entry()
         ...
         -> release_firmware_map_entry()
            -> firmware_map_find_entry()
               The entry is looked up in the map_entries list, but it has
               already been deleted from that list, so it is not added to
               map_entries_bootmem.  Thus the entry leaks.
      
      release_firmware_map_entry() should not call firmware_map_find_entry(),
      since the released entry has already been deleted from the map_entries
      list.  So this patch removes the firmware_map_find_entry() call from
      release_firmware_map_entry().
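      A hedged sketch of the resulting release path (the bootmem test and the
      lock/list names follow this changelog and are assumptions, not a
      verbatim copy of drivers/firmware/memmap.c):

        static void release_firmware_map_entry(struct kobject *kobj)
        {
                struct firmware_map_entry *entry = to_map_entry(kobj);

                if (entry_allocated_by_bootmem(entry)) {    /* assumed test */
                        /*
                         * Bootmem entries cannot be kfree()d; park them for
                         * reuse directly, without re-looking them up in the
                         * map_entries list they were already removed from.
                         */
                        spin_lock(&map_entries_bootmem_lock);
                        list_add(&entry->list, &map_entries_bootmem);
                        spin_unlock(&map_entries_bootmem_lock);
                } else {
                        kfree(entry);
                }
        }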
      Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Reviewed-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Reviewed-by: Tang Chen <tangchen@cn.fujitsu.com>
      Acked-by: Toshi Kani <toshi.kani@hp.com>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Greg KH <greg@kroah.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7a6f93b0
    • mm: thp: add split tail pages to shrink page list in page reclaim · 5bc7b8ac
      Shaohua Li authored
      In page reclaim, a huge page is split.  split_huge_page() adds the tail
      pages to the LRU list.  Since we are reclaiming a huge page, it's better
      to reclaim all subpages of the huge page instead of just the head page.
      This patch adds the split tail pages to the shrink page list so that the
      tail pages can be reclaimed soon.
      
      Before this patch, running a swap workload:
        thp_fault_alloc 3492
        thp_fault_fallback 608
        thp_collapse_alloc 6
        thp_collapse_alloc_failed 0
        thp_split 916
      
      With this patch:
        thp_fault_alloc 4085
        thp_fault_fallback 16
        thp_collapse_alloc 90
        thp_collapse_alloc_failed 0
        thp_split 1272
      
      fallback allocation is reduced a lot.
      
      [akpm@linux-foundation.org: fix CONFIG_SWAP=n build]
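      A sketch of the interface change that makes this possible (per the
      changelog; simplified): split_huge_page() becomes a wrapper, and reclaim
      hands its local page list down so the tail pages land on it:

        int split_huge_page_to_list(struct page *page, struct list_head *list);

        static inline int split_huge_page(struct page *page)
        {
                /* existing callers keep the old behaviour: tails go to the LRU */
                return split_huge_page_to_list(page, NULL);
        }

        /* ... and in shrink_page_list(), add_to_swap() gains a list argument
         * so the split tail pages are queued for reclaim as well: */
        if (!add_to_swap(page, page_list))
                goto activate_locked;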
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Acked-by: Rik van Riel <riel@redhat.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Reviewed-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5bc7b8ac
    • mm: allow for outstanding swap writeback accounting · 1eec6702
      Seth Jennings authored
      To prevent flooding the swap device with writebacks, frontswap backends
      need to count and limit the number of outstanding writebacks.  The
      incrementing of the counter can be done before the call to
      __swap_writepage().  However, the caller must receive a notification
      when the writeback completes in order to decrement the counter.
      
      To achieve this functionality, this patch modifies __swap_writepage() to
      take the bio completion callback function as an argument.
      
      end_swap_bio_write(), the normal bio completion function, is also made
      non-static so that code doing the accounting can call it after the
      accounting is done.
      
      There should be no behavioural change to existing code.
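      A sketch of how a writeback-limiting backend might use the new hook (the
      counter and the zswap_end_swap_write() name are illustrative; the
      three-argument __swap_writepage() and the exported end_swap_bio_write()
      are what this patch provides):

        static atomic_t outstanding_writebacks = ATOMIC_INIT(0);

        /* Completion callback: normal handling first, then drop our count. */
        static void zswap_end_swap_write(struct bio *bio, int err)
        {
                end_swap_bio_write(bio, err);
                atomic_dec(&outstanding_writebacks);
        }

        /* In the backend's writeback path (page and wbc set up by the caller): */
        atomic_inc(&outstanding_writebacks);
        err = __swap_writepage(page, wbc, zswap_end_swap_write);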
      Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Signed-off-by: Bob Liu <bob.liu@oracle.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Reviewed-by: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      1eec6702
    • mm: break up swap_writepage() for frontswap backends · 2f772e6c
      Seth Jennings authored
      swap_writepage() is currently where frontswap hooks into the swap write
      path to capture pages with the frontswap_store() function.  However, if
      a frontswap backend wants to "resume" the writeback of a page to the
      swap device, it can't call swap_writepage() as the page will simply
      reenter the backend.
      
      This patch separates swap_writepage() into a top and bottom half, the
      bottom half named __swap_writepage() to allow a frontswap backend, like
      zswap, to resume writeback beyond the frontswap_store() hook.
      
      __add_to_swap_cache() is also made non-static so that the page for which
      writeback is to be resumed can be added to the swap cache.
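      A sketch of the resulting split (simplified from mm/page_io.c as of this
      change; the completion-callback argument shown in the previous entry is
      added separately):

        int swap_writepage(struct page *page, struct writeback_control *wbc)
        {
                int ret = 0;

                if (try_to_free_swap(page)) {
                        unlock_page(page);
                        goto out;
                }
                if (frontswap_store(page) == 0) {
                        /* the page was captured by the frontswap backend */
                        set_page_writeback(page);
                        unlock_page(page);
                        end_page_writeback(page);
                        goto out;
                }
                /* bottom half: the actual write to the swap device or file */
                ret = __swap_writepage(page, wbc);
        out:
                return ret;
        }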
      Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
      Signed-off-by: Bob Liu <bob.liu@oracle.com>
      Acked-by: Minchan Kim <minchan@kernel.org>
      Reviewed-by: Dan Magenheimer <dan.magenheimer@oracle.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      2f772e6c
    • mm/mmap: check for RLIMIT_AS before unmapping · e8420a8e
      Cyril Hrubis authored
      Fix a corner case for MAP_FIXED when the requested mapping length is
      larger than the rlimit for virtual memory.  In such a case any
      overlapping mappings are unmapped before we check for the limit and
      return ENOMEM.
      
      The check is moved before the loop that unmaps overlapping parts of
      existing mappings.  When we are about to hit the limit (currently mapped
      pages + len > limit) we scan for overlapping pages and check again
      accounting for them.
      
      This fixes the situation where a userspace program expects the previous
      mappings to be preserved after the mmap() syscall has returned with an
      error.  (POSIX clearly states that a successful mapping shall replace
      any previous mappings.)
      
      This corner case was found and can be tested with LTP testcase:
      
      testcases/open_posix_testsuite/conformance/interfaces/mmap/24-2.c
      
      In this case the mmap(), which is clearly over the current limit, unmaps
      dynamic libraries and the testcase segfaults right after returning to
      userspace.
      
      I've also looked at the second instance of the unmapping loop in
      do_brk().  do_brk() is called from the brk() syscall and from vm_brk().
      The brk() syscall checks for overlapping mappings and bails out when
      there are any (so it can't be triggered from the brk syscall).  vm_brk()
      is called only from binfmt handlers, so it shouldn't be triggered unless
      a binfmt handler created overlapping mappings.
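      A sketch of the reordered check in mmap_region(), close to the shape of
      the change (count_vma_pages_range() is the helper added to count pages
      in the overlaps that would be unmapped):

        /* Check against the address space limit before unmapping anything. */
        if (!may_expand_vm(mm, len >> PAGE_SHIFT)) {
                unsigned long nr_pages;

                /*
                 * MAP_FIXED may remove pages of mappings that overlap the
                 * requested mapping.  Account for the pages it would unmap.
                 */
                if (!(vm_flags & MAP_FIXED))
                        return -ENOMEM;

                nr_pages = count_vma_pages_range(mm, addr, addr + len);

                if (!may_expand_vm(mm, (len >> PAGE_SHIFT) - nr_pages))
                        return -ENOMEM;
        }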
      Signed-off-by: Cyril Hrubis <chrubis@suse.cz>
      Reviewed-by: Mel Gorman <mgorman@suse.de>
      Reviewed-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      e8420a8e
    • memcg: add memory.pressure_level events · 70ddf637
      Anton Vorontsov authored
      With this patch userland applications that want to maintain the
      interactivity/memory allocation cost can use the pressure level
      notifications.  The levels are defined like this:
      
      The "low" level means that the system is reclaiming memory for new
      allocations.  Monitoring this reclaiming activity might be useful for
      maintaining the cache level.  Upon notification, the program (typically
      "Activity Manager") might analyze vmstat and act in advance (i.e.
      prematurely shut down unimportant services).
      
      The "medium" level means that the system is experiencing medium memory
      pressure; the system might be swapping, paging out active file caches,
      etc.  Upon this event applications may decide to further analyze
      vmstat/zoneinfo/memcg or internal memory usage statistics and free any
      resources that can be easily reconstructed or re-read from a disk.
      
      The "critical" level means that the system is actively thrashing; it is
      about to run out of memory (OOM), or the in-kernel OOM killer is even on
      its way to trigger.  Applications should do whatever they can to help
      the system.  It might be too late to consult vmstat or any other
      statistics, so it's advisable to take immediate action.
      
      The events are propagated upward until the event is handled, i.e.  the
      events are not pass-through.  Here is what this means: for example you
      have three cgroups: A->B->C.  Now you set up an event listener on
      cgroups A, B and C, and suppose group C experiences some pressure.  In
      this situation, only group C will receive the notification, i.e.  groups
      A and B will not receive it.  This is done to avoid excessive
      "broadcasting" of messages, which disturbs the system and which is
      especially bad if we are low on memory or thrashing.  So, organize the
      cgroups wisely, or propagate the events manually (or ask us to implement
      pass-through events, explaining why you would need them).
      
      Performance-wise, the memory pressure notification feature itself is
      lightweight and does not require much bookkeeping, in contrast to the
      rest of the memcg features.  Unfortunately, in the current memcg
      implementation, page accounting is an inseparable part and cannot be
      turned off.  The good news is that there are some efforts[1] to improve
      the situation; plus, implementing the same, fully API-compatible[2]
      interface for the CONFIG_MEMCG=n case (e.g. embedded) is also a viable
      option, so it would not require any changes on the userland side.
      
      [1] http://permalink.gmane.org/gmane.linux.kernel.cgroups/6291
      [2] http://lkml.org/lkml/2013/2/21/454
      
      [akpm@linux-foundation.org: coding-style fixes]
      [akpm@linux-foundation.org: fix CONFIG_CGROPUPS=n warnings]
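      A userland sketch of registering for these notifications through eventfd
      and cgroup.event_control (the paths assume the memory controller is
      mounted at /sys/fs/cgroup/memory with a group named "A"; error handling
      is trimmed):

        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/eventfd.h>
        #include <unistd.h>

        int main(void)
        {
                int efd = eventfd(0, 0);
                int lfd = open("/sys/fs/cgroup/memory/A/memory.pressure_level",
                               O_RDONLY);
                int cfd = open("/sys/fs/cgroup/memory/A/cgroup.event_control",
                               O_WRONLY);
                char line[64];
                uint64_t count;

                /* "<event_fd> <pressure_level_fd> <level>" registers a listener. */
                snprintf(line, sizeof(line), "%d %d low", efd, lfd);
                write(cfd, line, strlen(line));

                for (;;) {
                        read(efd, &count, sizeof(count));  /* blocks until "low" fires */
                        printf("memory pressure: low (count=%llu)\n",
                               (unsigned long long)count);
                }
        }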
      Signed-off-by: Anton Vorontsov <anton.vorontsov@linaro.org>
      Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
      Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Glauber Costa <glommer@parallels.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Leonid Moiseichuk <leonid.moiseichuk@nokia.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@gmail.com>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      70ddf637
    • mm: madvise: complete input validation before taking lock · 84d96d89
      Rasmus Villemoes authored
      In madvise(), there doesn't seem to be any reason for taking the
      &current->mm->mmap_sem before start and len_in have been validated.
      Incidentally, this removes the need for the out: label.
      
      [akpm@linux-foundation.org: s/out_plug/out/, per David]
      Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
      Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      84d96d89
    • mm, hotplug: avoid compiling memory hotremove functions when disabled · 4edd7cef
      David Rientjes authored
      __remove_pages() is only necessary for CONFIG_MEMORY_HOTREMOVE.  PowerPC
      pseries will return -EOPNOTSUPP if unsupported.
      
      Adding an #ifdef causes several other functions it depends on to also
      become unnecessary, which saves .text space when disabled (it's disabled
      in most defconfigs besides powerpc, including x86).  remove_memory_block()
      becomes static since it is not referenced outside of
      drivers/base/memory.c.
      
      Build tested on x86 and powerpc with CONFIG_MEMORY_HOTREMOVE both enabled
      and disabled.
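      A sketch of the guard this adds (simplified; the full patch also moves
      the dependent helpers in mm/memory_hotplug.c under the same #ifdef):

        /* include/linux/memory_hotplug.h */
        #ifdef CONFIG_MEMORY_HOTREMOVE
        extern int __remove_pages(struct zone *zone, unsigned long start_pfn,
                                  unsigned long nr_pages);
        #endif /* CONFIG_MEMORY_HOTREMOVE */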
      Signed-off-by: David Rientjes <rientjes@google.com>
      Acked-by: Toshi Kani <toshi.kani@hp.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4edd7cef
    • mm: change __remove_pages() to call release_mem_region_adjustable() · fe74ebb1
      Toshi Kani authored
      Change __remove_pages() to call release_mem_region_adjustable().  This
      allows a requested memory range to be released from the iomem_resource
      table even if it does not exactly match a resource entry but still fits
      within one.  The resource entries initialized at bootup usually cover
      whole contiguous memory ranges and may not necessarily match the size of
      memory hot-delete requests.
      
      If release_mem_region_adjustable() fails, __remove_pages() emits a
      warning message and continues, as was the case with
      release_mem_region().  release_mem_region(), which is defined as
      __release_region(), emits a warning message and returns no error since
      it is a void function.
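      A sketch of the new call site in __remove_pages(), per this changelog
      (the exact message wording is illustrative):

        resource_size_t start = (resource_size_t)phys_start_pfn << PAGE_SHIFT;
        resource_size_t size = (resource_size_t)nr_pages << PAGE_SHIFT;
        int ret;

        ret = release_mem_region_adjustable(&iomem_resource, start, size);
        if (ret)
                pr_warn("Unable to release resource <%016llx-%016llx> (%d)\n",
                        (unsigned long long)start,
                        (unsigned long long)(start + size - 1), ret);
        /* keep going: the hot-remove itself proceeds as before */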
      Signed-off-by: Toshi Kani <toshi.kani@hp.com>
      Reviewed-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Ram Pai <linuxram@us.ibm.com>
      Cc: T Makphaibulchoke <tmac@hp.com>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Jiang Liu <jiang.liu@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      fe74ebb1
    • resource: add release_mem_region_adjustable() · 825f787b
      Toshi Kani authored
      Add release_mem_region_adjustable(), which releases a requested region
      from a currently busy memory resource.  This interface adjusts the
      matched memory resource accordingly even if the requested region does
      not match it exactly but still fits within it.
      
      This new interface is intended for memory hot-delete.  During bootup,
      memory resources are inserted from the boot descriptor table, such as
      the EFI Memory Table and e820.  Each memory resource entry usually
      covers a whole contiguous memory range.  A memory hot-delete request, on
      the other hand, may target a particular range of a memory resource, and
      its size can be much smaller than the whole contiguous memory.  Since
      the existing release interfaces like __release_region() require a
      requested region to exactly match a resource entry, they do not allow a
      partial resource to be released.
      
      This new interface is restrictive (i.e.  release under certain
      conditions), which is consistent with other release interfaces,
      __release_region() and __release_resource().  Additional release
      conditions, such as an overlapping region to a resource entry, can be
      supported after they are confirmed as valid cases.
      
      There is no change to the existing interfaces since their restriction is
      valid for I/O resources.
      
      [akpm@linux-foundation.org: use GFP_ATOMIC under write_lock()]
      [akpm@linux-foundation.org: switch back to GFP_KERNEL, less buggily]
      [akpm@linux-foundation.org: remove unneeded and wrong kfree(), per Toshi]
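      The prototype added by this patch, with its semantics summarized from
      the text above (a sketch; see kernel/resource.c for the full behaviour):

        /*
         * Release the busy range [start, start + size - 1] under @parent,
         * shrinking or splitting the matching resource entry as needed.
         * Returns 0 on success, or an error if the range does not fit an
         * existing busy entry under the supported conditions.
         */
        extern int release_mem_region_adjustable(struct resource *parent,
                                                 resource_size_t start,
                                                 resource_size_t size);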
      Signed-off-by: Toshi Kani <toshi.kani@hp.com>
      Reviewed-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Reviewed-by: Ram Pai <linuxram@us.ibm.com>
      Cc: T Makphaibulchoke <tmac@hp.com>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Jiang Liu <jiang.liu@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      825f787b
    • resource: add __adjust_resource() for internal use · ae8e3a91
      Toshi Kani authored
      Add __adjust_resource(), which is called by adjust_resource() internally
      after the resource_lock has been taken.  There is no interface change to
      adjust_resource().  This change allows other functions to call
      __adjust_resource() while the resource_lock is held.
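      A sketch of the refactoring (simplified from kernel/resource.c): the
      public entry point keeps the locking, and the new helper does the work
      so that callers already holding the lock can reuse it:

        int adjust_resource(struct resource *res, resource_size_t start,
                            resource_size_t size)
        {
                int result;

                write_lock(&resource_lock);
                result = __adjust_resource(res, start, size);
                write_unlock(&resource_lock);
                return result;
        }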
      Signed-off-by: Toshi Kani <toshi.kani@hp.com>
      Reviewed-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
      Acked-by: David Rientjes <rientjes@google.com>
      Cc: Ram Pai <linuxram@us.ibm.com>
      Cc: T Makphaibulchoke <tmac@hp.com>
      Cc: Wen Congyang <wency@cn.fujitsu.com>
      Cc: Tang Chen <tangchen@cn.fujitsu.com>
      Cc: Jiang Liu <jiang.liu@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ae8e3a91