24 Jul, 2008 (40 commits)
    • bootmem: clean up bootmem.c file header · 57cfc29e
      Johannes Weiner authored
      Change the description, move a misplaced comment about the allocator
      itself and add me to the list of copyright holders.
      Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • bootmem: reorder code to match new bootmem structure · 223e8dc9
      Johannes Weiner authored
      This only reorders functions so that further patches will be easier to
      read.  No code changed.
      Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: quota is not freed for unused reserved private huge pages · 7251ff78
      Adam Litke authored
      With shared reservations (and now also with private reservations), we reserve
      huge pages at mmap time.  We also account for the mapping against fs quota to
      prevent a reservation from being preempted by quota exhaustion.
      
      When testing with the libhugetlbfs test suite, I found a problem with quota
      accounting.  FS quota for allocated pages is handled correctly but we are not
      releasing quota for private pages that were reserved but never allocated.  Do
      this in hugetlb_vm_op_close() at the same time as unused page reservations are
      released.
      Signed-off-by: Adam Litke <agl@us.ibm.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Johannes Weiner <hannes@saeurebad.de>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Acked-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: fix a hugepage reservation check for MAP_SHARED · 7f09ca51
      Mel Gorman authored
      When removing a huge page from the hugepage pool for a fault the system checks
      to see if the mapping requires additional pages to be reserved, and if it does
      whether there are any unreserved pages remaining.  If not, the allocation
      fails without even attempting to get a page.  In order to determine whether to
      apply this check we call vma_has_private_reserves() which tells us if this vma
      is MAP_PRIVATE and is the owner.  This incorrectly triggers the remaining
      reservation test for MAP_SHARED mappings which prevents allocation of the
      final page in the pool even though it is reserved for this mapping.
      
      In reality we only want to check this for MAP_PRIVATE mappings where the
      process is not the original mapper.  Replace vma_has_private_reserves() with
      vma_has_reserves() which indicates whether further reserves are required, and
      update the caller.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Adam Litke <agl@us.ibm.com>
      Acked-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • powerpc: support multiple hugepage sizes · 0d9ea754
      Jon Tollefson authored
      Instead of using the variable mmu_huge_psize to keep track of the huge
      page size, we use an array of MMU_PAGE_* values.  For each supported huge
      page size we need to know the hugepte_shift value and have a
      pgtable_cache.  The hstate or an mmu_huge_psizes index is passed to
      functions so that they know which huge page size they should use.
      
      The hugepage sizes 16M and 64K are set up (if available on the hardware)
      so that they don't have to be specified on the boot command line in
      order to be used.  The number of 16G pages has to be specified at
      boot time though (e.g.  hugepagesz=16G hugepages=5).
      Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • fs: check for statfs overflow · f4a67cce
      Jon Tollefson authored
      Add a check for overflow in the filesystem size so that if someone calls
      statfs() on a hugetlbfs with a 16G blocksize from a 32-bit binary, it
      reports EOVERFLOW instead of a size of 0.
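      A hedged userspace sketch of the new behaviour (the hugetlbfs mount
      point /mnt/huge is an assumption): a 32-bit build whose struct statfs
      fields cannot hold the 16G-blocksize values now sees EOVERFLOW rather
      than a zero size.
      
      /* compile with -m32 against a 16G-blocksize hugetlbfs to observe
       * EOVERFLOW; the mount point is assumed */
      #include <errno.h>
      #include <stdio.h>
      #include <sys/vfs.h>
      
      int main(void)
      {
          struct statfs buf;
      
          if (statfs("/mnt/huge", &buf) < 0) {
              if (errno == EOVERFLOW)
                  fprintf(stderr, "statfs: size does not fit in 32 bits\n");
              else
                  perror("statfs");
              return 1;
          }
          printf("f_bsize=%lu f_blocks=%lu\n",
                 (unsigned long)buf.f_bsize, (unsigned long)buf.f_blocks);
          return 0;
      }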
      Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • powerpc: define support for 16G hugepages · 91224346
      Jon Tollefson authored
      The huge page size is defined for 16G pages.  If a hugepagesz of 16G is
      specified at boot-time then it becomes the huge page size instead of the
      default 16M.
      
      The change in pgtable-64K.h is to the macro pte_iterate_hashed_subpages:
      the increment to va (the 1 being shifted) is made a long so that it is
      not shifted down to 0.  Otherwise it would create an infinite loop when
      the shift value is for a 16G page (when the base page size is 64K).
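      The width bug is easy to reproduce in isolation.  A minimal standalone
      sketch (not the kernel macro itself) of why the constant's type matters,
      assuming a 64-bit build:
      
      #include <stdio.h>
      
      int main(void)
      {
          unsigned long va = 0;
          int shift = 34;            /* 16G page with a 64K base page size */
      
          /* Buggy form: "va += 1 << shift" shifts an int by 34 bits,
           * which is undefined and typically yields 0, so va never
           * advances and the iteration spins forever. */
      
          va += 1UL << shift;        /* fixed form: shift done in long */
          printf("va advanced by %lu bytes\n", va);
          return 0;
      }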
      Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • powerpc: scan device tree for gigantic pages · 658013e9
      Jon Tollefson authored
      The 16G huge pages have to be reserved in the HMC prior to boot.  The
      locations of the pages are placed in the device tree.  This patch adds
      code to scan the device tree during very early boot and save these page
      locations until hugetlbfs is ready for them.
      Acked-by: Adam Litke <agl@us.ibm.com>
      Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • powerpc: function to allocate gigantic hugepages · ec4b2c0c
      Jon Tollefson authored
      The 16G page locations have been saved during early boot in an array.  The
      alloc_bootmem_huge_page() function adds a page from here to the
      huge_boot_pages list.
      Acked-by: Adam Litke <agl@us.ibm.com>
      Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: allow arch overridden hugepage allocation · 53ba51d2
      Jon Tollefson authored
      Allow alloc_bootmem_huge_page() to be overridden by architectures that
      can't always use bootmem.  This requires huge_boot_pages to be available
      for use by this function.
      
      This is required for powerpc 16G pages, which have to be reserved prior
      to boot time.  The locations of these pages are indicated in the device
      tree.
      Acked-by: Adam Litke <agl@us.ibm.com>
      Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: override default huge page size · e11bfbfc
      Nick Piggin authored
      Allow configurations with a default huge page size that is different
      from the traditional HPAGE_SIZE.  The default huge page size is the one
      represented in the legacy /proc ABIs and SHM, and the one defaulted to
      when mounting hugetlbfs filesystems.
      
      This is implemented with a new kernel option default_hugepagesz=, which
      defaults to HPAGE_SIZE if not specified.
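      As a rough userspace illustration (not the kernel's parser), option
      values like these are parsed memparse()-style, scaling by K/M/G
      suffixes:
      
      #include <stdio.h>
      #include <stdlib.h>
      
      /* memparse()-style helper: "64K", "16M", "1G" -> bytes */
      static unsigned long long parse_size(const char *s)
      {
          char *end;
          unsigned long long v = strtoull(s, &end, 0);
      
          switch (*end) {
          case 'G': case 'g': v <<= 10; /* fall through */
          case 'M': case 'm': v <<= 10; /* fall through */
          case 'K': case 'k': v <<= 10;
          }
          return v;
      }
      
      int main(void)
      {
          printf("%llu\n", parse_size("2M"));   /* 2097152 */
          printf("%llu\n", parse_size("1G"));   /* 1073741824 */
          return 0;
      }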
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • x86: add hugepagesz option on 64-bit · b4718e62
      Andi Kleen authored
      Add a hugepagesz=...  option, similar to IA64, PPC etc., to x86-64.
      
      This finally allows selecting GB pages for hugetlbfs in x86, now that
      all the infrastructure is in place.
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • x86: support GB hugepages on 64-bit · 39c11e6c
      Andi Kleen authored
    • hugetlb: introduce pud_huge · ceb86879
      Andi Kleen authored
      Straightforward extensions for huge pages located in the PUD instead of
      PMDs.
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: printk cleanup · 4abd32db
      Andi Kleen authored
      - Reword sentence to clarify meaning with multiple options
      - Add support for using GB prefixes for the page size
      - Add extra printk to delayed > MAX_ORDER allocation code
      Acked-by: Adam Litke <agl@us.ibm.com>
      Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: support boot allocate different sizes · 8faa8b07
      Andi Kleen authored
      Make some infrastructure changes to allow boot-time allocation of
      hugepages of different sizes.
      
      - move all basic hstate initialisation into hugetlb_add_hstate
      - create a new function hugetlb_hstate_alloc_pages() to do the
        actual initial page allocations. Call this function early in
        order to allocate giant pages from bootmem.
      - Check for multiple hugepages= parameters
      Acked-by: Adam Litke <agl@us.ibm.com>
      Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Acked-by: Andrew Hastings <abh@cray.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: support larger than MAX_ORDER · aa888a74
      Andi Kleen authored
      This is needed on x86-64 to handle GB pages in hugetlbfs, because it is
      not practical to enlarge MAX_ORDER to 1GB.
      
      Instead the 1GB pages are only allocated at boot, using the bootmem
      allocator via the hugepages=...  option.
      
      These 1GB bootmem pages are never freed.  In theory it would be possible
      to implement that with some complications, but since it would be a
      one-way street (>= MAX_ORDER pages cannot be allocated later) I decided
      not to for now.
      
      The >= MAX_ORDER code is not ifdef'ed per architecture.  It is not very
      big and the ifdef ugliness did not seem worth it.
      
      Known problems: /proc/meminfo and "free" do not display the memory
      allocated for GB pages in "Total".  This is a little confusing for the
      user.
      Acked-by: Andrew Hastings <abh@cray.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: export prep_compound_page to mm · 01ad1c08
      Andi Kleen authored
      hugetlb will need to get compound pages from bootmem to handle the case of
      them being greater than or equal to MAX_ORDER.  Export the constructor
      function needed for this.
      Acked-by: Adam Litke <agl@us.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: introduce non panic alloc_bootmem · b54bbf7b
      Andi Kleen authored
      A straightforward variant of the existing __alloc_bootmem_node that does
      not panic on failure.  It is needed by a subsequent patch that allocates
      giant hugepages at boot -- we don't want to panic if we can't allocate
      as many as the user asked for.
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: abstract numa round robin selection · 5ced66c9
      Andi Kleen authored
      Need this as a separate function for a future patch.
      
      No behaviour change.
      Acked-by: Adam Litke <agl@us.ibm.com>
      Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: new sysfs interface · a3437870
      Nishanth Aravamudan authored
      Provide new hugepages user APIs that are more suited to multiple hstates
      in sysfs.  There is a new directory, /sys/kernel/hugepages.  Underneath
      that directory there will be a directory per-supported hugepage size,
      e.g.:
      
      /sys/kernel/hugepages/hugepages-64kB
      /sys/kernel/hugepages/hugepages-16384kB
      /sys/kernel/hugepages/hugepages-16777216kB
      
      corresponding to 64k, 16m and 16g respectively.  Within each
      hugepages-size directory there are a number of files, corresponding to the
      tracked counters in the hstate, e.g.:
      
      /sys/kernel/hugepages/hugepages-64kB/nr_hugepages
      /sys/kernel/hugepages/hugepages-64kB/nr_overcommit_hugepages
      /sys/kernel/hugepages/hugepages-64kB/free_hugepages
      /sys/kernel/hugepages/hugepages-64kB/resv_hugepages
      /sys/kernel/hugepages/hugepages-64kB/surplus_hugepages
      
      Of these files, the first two are read-write and the latter three are
      read-only.  The size of the hugepage being manipulated is trivially
      deducible from the enclosing directory and is always expressed in kB (to
      match meminfo).
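      A hedged userspace sketch of walking the new interface; per the fixup
      notes below, the directory ultimately lives under /sys/kernel/mm, so
      that path is assumed here:
      
      #include <dirent.h>
      #include <stdio.h>
      #include <string.h>
      
      int main(void)
      {
          const char *base = "/sys/kernel/mm/hugepages";
          char path[512], val[64];
          struct dirent *de;
          DIR *d = opendir(base);
          FILE *f;
      
          if (!d) { perror(base); return 1; }
          while ((de = readdir(d))) {
              if (strncmp(de->d_name, "hugepages-", 10))
                  continue;               /* skip "." and ".." etc. */
              snprintf(path, sizeof(path), "%s/%s/nr_hugepages",
                       base, de->d_name);
              f = fopen(path, "r");
              if (!f)
                  continue;
              if (fgets(val, sizeof(val), f))
                  printf("%s: %s", de->d_name, val);
              fclose(f);
          }
          closedir(d);
          return 0;
      }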
      
      [dave@linux.vnet.ibm.com: fix build]
      [nacc@us.ibm.com: hugetlb: hang off of /sys/kernel/mm rather than /sys/kernel]
      [nacc@us.ibm.com: hugetlb: remove CONFIG_SYSFS dependency]
      Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Dave Hansen <dave@linux.vnet.ibm.com>
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlbfs: per mount huge page sizes · a137e1cc
      Andi Kleen authored
      Add the ability to configure the hugetlb hstate used on a per mount basis.
      
      - Add a new pagesize= option to the hugetlbfs mount that allows setting
        the page size
      - This option causes the mount code to find the hstate corresponding to the
        specified size, and sets up a pointer to the hstate in the mount's
        superblock.
      - Change the hstate accessors to use this information rather than the
        global_hstate they were using (requires a slight change in mm/memory.c
        so we don't NULL deref in the error-unmap path -- see comments).
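      A hedged sketch of using the option from C via mount(2); the mount
      point and the 16M size are assumptions (the size must be one registered
      by the architecture, and this needs root):
      
      #include <stdio.h>
      #include <sys/mount.h>
      
      int main(void)
      {
          /* equivalent to:
           *   mount -t hugetlbfs -o pagesize=16M none /mnt/huge16m */
          if (mount("none", "/mnt/huge16m", "hugetlbfs", 0,
                    "pagesize=16M") < 0) {
              perror("mount");
              return 1;
          }
          return 0;
      }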
      
      [np: take hstate out of hugetlbfs inode and vma->vm_private_data]
      Acked-by: Adam Litke <agl@us.ibm.com>
      Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: multiple hstates for multiple page sizes · e5ff2159
      Andi Kleen authored
      Add basic support for more than one hstate in hugetlbfs.  This is the key
      to supporting multiple hugetlbfs page sizes at once.
      
      - Rather than a single hstate, we now have an array, with an iterator
      - default_hstate continues to be the struct hstate which we use by default
      - Add functions for architectures to register new hstates
      
      [akpm@linux-foundation.org: coding-style fixes]
      Acked-by: Adam Litke <agl@us.ibm.com>
      Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: modular state for hugetlb page size · a5516438
      Andi Kleen authored
      The goal of this patchset is to support multiple hugetlb page sizes.
      This is achieved by introducing a new struct hstate structure, which
      encapsulates the important hugetlb state and constants (e.g.  huge page
      size, number of huge pages currently allocated, etc).
      
      The hstate structure is then passed around to the code which requires
      these fields; it will do the right thing regardless of the exact hstate
      it is operating on.
      
      This patch adds the hstate structure, with a single global instance of it
      (default_hstate), and does the basic work of converting hugetlb to use the
      hstate.
      
      Future patches will add more hstate structures to allow for different
      hugetlbfs mounts to have different page sizes.
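      A compilable toy sketch of the idea (field names simplified; not the
      kernel's actual struct hstate definition):
      
      #include <stdio.h>
      
      #define PAGE_SHIFT 12
      
      /* toy hstate: all size-dependent state hangs off one object */
      struct hstate {
          unsigned int order;             /* huge page = 2^order base pages */
          unsigned long nr_huge_pages;
          unsigned long free_huge_pages;
      };
      
      static unsigned long huge_page_size(const struct hstate *h)
      {
          return 1UL << (h->order + PAGE_SHIFT);
      }
      
      int main(void)
      {
          struct hstate default_hstate = { .order = 9 };  /* 2M on x86 */
      
          printf("huge page size: %lu bytes\n",
                 huge_page_size(&default_hstate));
          return 0;
      }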
      
      [akpm@linux-foundation.org: coding-style fixes]
      Acked-by: Adam Litke <agl@us.ibm.com>
      Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: factor out prep_new_huge_page · b7ba30c6
      Andi Kleen authored
      Needed to avoid code duplication in follow up patches.
      Acked-by: Adam Litke <agl@us.ibm.com>
      Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: create /sys/kernel/mm · ff7ea79c
      Nishanth Aravamudan authored
      Add a kobject to create /sys/kernel/mm when sysfs is mounted.  The kobject
      will exist regardless.  This will allow for the hugepage related sysfs
      directories to exist under the mm "subsystem" directory.  Add an ABI file
      appropriately.
      
      [kosaki.motohiro@jp.fujitsu.com: fix build]
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove mm_init compilation dependency on CONFIG_DEBUG_MEMORY_INIT · 5e9426ab
      Nishanth Aravamudan authored
      Towards the end of putting all core mm initialization in mm_init.c, I
      plan on putting the creation of a mm kobject in a function in that file.
      However, the file is currently only compiled if CONFIG_DEBUG_MEMORY_INIT
      is set. Remove this dependency, but put the code under an #ifdef on the
      same config option. This should result in no functional changes.
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vmallocinfo: add NUMA information · a47a126a
      Eric Dumazet authored
      Christoph recently added the /proc/vmallocinfo file to get information
      about vmalloc allocations.
      
      This patch adds NUMA specific information, giving number of pages
      allocated on each memory node.
      
      This should help to check that vmalloc() is able to respect NUMA policies.
      
      Example of output on a four-node machine (one CPU per node):
      
      1) network hash tables are evenly spread over the four nodes (OK) (same
         point for the inode and dentry hash tables)
      
      2) iptables tables (x_tables) are correctly allocated on each cpu node
         (OK).
      
      3) sys_swapon() allocates its memory from one node only.
      
      4) each loaded module is using memory on one node.
      
      Sysadmins could tune their setup to change points 3) and 4) if necessary.
      
      grep "pages="  /proc/vmallocinfo
      0xffffc20000000000-0xffffc20000201000 2101248 alloc_large_system_hash+0x204/0x2c0 pages=512 vmalloc N0=128 N1=128 N2=128 N3=128
      0xffffc20000201000-0xffffc20000302000 1052672 alloc_large_system_hash+0x204/0x2c0 pages=256 vmalloc N0=64 N1=64 N2=64 N3=64
      0xffffc2000031a000-0xffffc2000031d000   12288 alloc_large_system_hash+0x204/0x2c0 pages=2 vmalloc N1=1 N2=1
      0xffffc2000031f000-0xffffc2000032b000   49152 cramfs_uncompress_init+0x2e/0x80 pages=11 vmalloc N0=3 N1=3 N2=2 N3=3
      0xffffc2000033e000-0xffffc20000341000   12288 sys_swapon+0x640/0xac0 pages=2 vmalloc N0=2
      0xffffc20000341000-0xffffc20000344000   12288 xt_alloc_table_info+0xfe/0x130 [x_tables] pages=2 vmalloc N0=2
      0xffffc20000344000-0xffffc20000347000   12288 xt_alloc_table_info+0xfe/0x130 [x_tables] pages=2 vmalloc N1=2
      0xffffc20000347000-0xffffc2000034a000   12288 xt_alloc_table_info+0xfe/0x130 [x_tables] pages=2 vmalloc N2=2
      0xffffc2000034a000-0xffffc2000034d000   12288 xt_alloc_table_info+0xfe/0x130 [x_tables] pages=2 vmalloc N3=2
      0xffffc20004381000-0xffffc20004402000  528384 alloc_large_system_hash+0x204/0x2c0 pages=128 vmalloc N0=32 N1=32 N2=32 N3=32
      0xffffc20004402000-0xffffc20004803000 4198400 alloc_large_system_hash+0x204/0x2c0 pages=1024 vmalloc vpages N0=256 N1=256 N2=256 N3=256
      0xffffc20004803000-0xffffc20004904000 1052672 alloc_large_system_hash+0x204/0x2c0 pages=256 vmalloc N0=64 N1=64 N2=64 N3=64
      0xffffc20004904000-0xffffc20004bec000 3047424 sys_swapon+0x640/0xac0 pages=743 vmalloc vpages N0=743
      0xffffffffa0000000-0xffffffffa000f000   61440 sys_init_module+0xc27/0x1d00 pages=14 vmalloc N1=14
      0xffffffffa000f000-0xffffffffa0014000   20480 sys_init_module+0xc27/0x1d00 pages=4 vmalloc N0=4
      0xffffffffa0014000-0xffffffffa0017000   12288 sys_init_module+0xc27/0x1d00 pages=2 vmalloc N0=2
      0xffffffffa0017000-0xffffffffa0022000   45056 sys_init_module+0xc27/0x1d00 pages=10 vmalloc N1=10
      0xffffffffa0022000-0xffffffffa0028000   24576 sys_init_module+0xc27/0x1d00 pages=5 vmalloc N3=5
      0xffffffffa0028000-0xffffffffa0050000  163840 sys_init_module+0xc27/0x1d00 pages=39 vmalloc N1=39
      0xffffffffa0050000-0xffffffffa0052000    8192 sys_init_module+0xc27/0x1d00 pages=1 vmalloc N1=1
      0xffffffffa0052000-0xffffffffa0056000   16384 sys_init_module+0xc27/0x1d00 pages=3 vmalloc N1=3
      0xffffffffa0056000-0xffffffffa0081000  176128 sys_init_module+0xc27/0x1d00 pages=42 vmalloc N3=42
      0xffffffffa0081000-0xffffffffa00ae000  184320 sys_init_module+0xc27/0x1d00 pages=44 vmalloc N3=44
      0xffffffffa00ae000-0xffffffffa00b1000   12288 sys_init_module+0xc27/0x1d00 pages=2 vmalloc N3=2
      0xffffffffa00b1000-0xffffffffa00b9000   32768 sys_init_module+0xc27/0x1d00 pages=7 vmalloc N0=7
      0xffffffffa00b9000-0xffffffffa00c4000   45056 sys_init_module+0xc27/0x1d00 pages=10 vmalloc N3=10
      0xffffffffa00c6000-0xffffffffa00e0000  106496 sys_init_module+0xc27/0x1d00 pages=25 vmalloc N2=25
      0xffffffffa00e0000-0xffffffffa00f1000   69632 sys_init_module+0xc27/0x1d00 pages=16 vmalloc N2=16
      0xffffffffa00f1000-0xffffffffa00f4000   12288 sys_init_module+0xc27/0x1d00 pages=2 vmalloc N3=2
      0xffffffffa00f4000-0xffffffffa00f7000   12288 sys_init_module+0xc27/0x1d00 pages=2 vmalloc N3=2
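      A small hedged parser for output like the above, summing the per-node
      N<id>= counts (the node count is capped arbitrarily at 64 here):
      
      #include <stdio.h>
      #include <string.h>
      
      #define MAX_NODES 64
      
      int main(void)
      {
          long node, pages, per_node[MAX_NODES] = { 0 };
          char line[512], *tok;
          FILE *f = fopen("/proc/vmallocinfo", "r");
      
          if (!f) { perror("/proc/vmallocinfo"); return 1; }
          while (fgets(line, sizeof(line), f))
              for (tok = strtok(line, " \n"); tok; tok = strtok(NULL, " \n"))
                  if (sscanf(tok, "N%ld=%ld", &node, &pages) == 2 &&
                      node >= 0 && node < MAX_NODES)
                      per_node[node] += pages;
          fclose(f);
          for (node = 0; node < MAX_NODES; node++)
              if (per_node[node])
                  printf("node %ld: %ld pages\n", node, per_node[node]);
          return 0;
      }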
      
      [akpm@linux-foundation.org: fix comment]
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • SYNC_FILE_RANGE_WRITE may and will block. Document that. · cce77081
      Pavel Machek authored
      [akpm@linux-foundation.org: fix comment text]
      Signed-off-by: Pavel Machek <pavel@suse.cz>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • tmpfs: support aio · bcd78e49
      Hugh Dickins authored
      We have a request for tmpfs to support the AIO interface: easily done, no
      more than replacing the old shmem_file_read by shmem_file_aio_read,
      cribbed from generic_file_aio_read.  (In 2.6.25 its write side was already
      changed to use generic_file_aio_write.)
      
      Incorporate cleanups from Andrew Morton and Harvey Harrison.
      
      Tests out fine with LTP's ltp-aiodio.sh, given hacks (not included) to
      support O_DIRECT.  tmpfs cannot honestly support O_DIRECT: its
      cache-avoiding-IO nature is at odds with direct IO-avoiding-cache.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Tested-by: Lawrence Greenfield <leg@google.com>
      Cc: Christoph Rohland <hans-christoph.rohland@sap.com>
      Cc: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Zach Brown <zach.brown@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • generic_file_aio_read() cleanups · 11fa977e
      Hugh Dickins authored
      As akpm points out, there's really no need for generic_file_aio_read to
      make a special case of count 0: just loop through nr_segs doing nothing.
      And as Harvey Harrison points out, there's no need to reset retval to 0
      where it's already 0.
      
      Setting count (or ocount) to 0 before calling generic_segment_checks is
      unnecessary too; but reluctantly I'll leave that removal to someone with a
      wider range of gcc versions to hand - 4.1.2 and 4.2.1 don't warn about it,
      but perhaps others do - I forget which are the warniest versions.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Tested-by: Lawrence Greenfield <leg@google.com>
      Cc: Christoph Rohland <hans-christoph.rohland@sap.com>
      Cc: Badari Pulavarty <pbadari@us.ibm.com>
      Cc: Zach Brown <zach.brown@oracle.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • vma_page_offset() has no callees: drop it · a858f7b2
      Johannes Weiner authored
      Hugh adds: vma_pagecache_offset() has a dangerously misleading name, since
      it's using hugepage units: rename it to vma_hugecache_offset().
      
      [apw@shadowen.org: restack onto fixed MAP_PRIVATE reservations]
      [akpm@linux-foundation.org: vma_split conversion]
      Signed-off-by: Johannes Weiner <hannes@saeurebad.de>
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: Adam Litke <agl@us.ibm.com>
      Cc: Nishanth Aravamudan <nacc@us.ibm.com>
      Cc: Andi Kleen <ak@suse.de>
      Cc: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb reservations: fix hugetlb MAP_PRIVATE reservations across vma splits · 84afd99b
      Andy Whitcroft authored
      When a hugetlb mapping with a reservation is split, a new VMA is cloned
      from the original.  This new VMA is a direct copy of the original,
      including the reservation count.  When this pair of VMAs are unmapped we
      will incorrectly double-account the unused reservation and the overall
      reservation count will be incorrect; in extreme cases it will wrap.
      
      The problem occurs when we split an existing VMA, say to unmap a page in
      the middle.  split_vma() will create a new VMA copying all fields from
      the original.  As we are storing our reservation count in
      vm_private_data this is also copied, endowing the new VMA with a
      duplicate of the original VMA's reservation.  Neither of the new VMAs
      can exhaust these reservations as they are too small, but when we unmap
      and close these VMAs we will incorrectly credit the remainder twice and
      resv_huge_pages will become out of sync.  This can lead to allocation
      failures on mappings with reservations, and even to resv_huge_pages
      wrapping, which prevents all subsequent hugepage allocations.
      
      The simple fix would be to correctly apportion the remaining reservation
      count when the split is made.  However the only hook we have,
      vm_ops->open, sees only the new VMA; we do not know the identity of the
      preceding VMA.  And even if we did have that VMA to hand, we would not
      know how much of the reservation was consumed on each side of the split.
      
      This patch therefore takes a different tack.  We know that any private
      mapping (which has a reservation) has a reservation over its whole size.
      Any present pages represent consumed reservation.  Therefore if we track
      the instantiated pages we can calculate the remaining reservation.
      
      This patch reuses the existing regions code to track the regions for
      which we have consumed reservation (i.e.  the instantiated pages); as
      each page is faulted in we record the consumption of reservation for the
      new page.  When we need to return unused reservations at unmap time we
      simply count the consumed reservation region, subtracting that from the
      whole of the map.  During a VMA split the newly opened VMA will point to
      the same region map; as this map is offset-oriented it remains valid for
      both of the split VMAs.  This map is reference counted so that it is
      removed when all VMAs which are part of the mmap are gone.
      
      Thanks to Adam Litke and Mel Gorman for their review feedback.
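      A toy model of the scheme (array-based, single-pass merge; the kernel's
      region code is more general), showing how tracking consumed [from, to)
      page ranges yields the unused reservation at unmap time:
      
      #include <stdio.h>
      
      #define MAX_REGIONS 64
      
      struct region { long from, to; };   /* consumed range, in pages */
      static struct region regs[MAX_REGIONS];
      static int nregs;
      
      /* record a consumed range, merging with one overlapping region */
      static void region_add(long from, long to)
      {
          int i;
      
          for (i = 0; i < nregs; i++)
              if (from <= regs[i].to && to >= regs[i].from) {
                  if (from < regs[i].from) regs[i].from = from;
                  if (to > regs[i].to)     regs[i].to = to;
                  return;
              }
          if (nregs < MAX_REGIONS) {
              regs[nregs].from = from;
              regs[nregs].to = to;
              nregs++;
          }
      }
      
      static long region_count(void)      /* total consumed pages */
      {
          long total = 0;
          int i;
      
          for (i = 0; i < nregs; i++)
              total += regs[i].to - regs[i].from;
          return total;
      }
      
      int main(void)
      {
          long map_pages = 10;            /* whole mapping, in pages */
      
          region_add(0, 1);               /* page 0 faulted in */
          region_add(3, 5);               /* pages 3 and 4 faulted in */
          printf("unused reservation: %ld pages\n",
                 map_pages - region_count());     /* prints 7 */
          return 0;
      }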
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Adam Litke <agl@us.ibm.com>
      Cc: Johannes Weiner <hannes@saeurebad.de>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
      Cc: Jon Tollefson <kniht@linux.vnet.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: allow huge page mappings to be created without reservations · c37f9fb1
      Andy Whitcroft authored
      By default all shared mappings and most private mappings now have
      reservations associated with them.  This improves semantics by providing
      allocation guarantees to the mapper.  However a small number of
      applications may attempt to make very large sparse mappings, and with
      these strict reservations the system will never be able to honour the
      mapping.
      
      This patch set brings MAP_NORESERVE support to hugetlb files.  This allows
      new mappings to be made to hugetlbfs files without an associated
      reservation, for both shared and private mappings.  This allows
      applications which want to create very sparse mappings to opt-out of the
      reservation system.  Obviously as there is no reservation they are liable
      to fault at runtime if the huge page pool becomes exhausted; buyer beware.
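      A hedged sketch of the opt-out from userspace (the hugetlbfs path is an
      assumption; a 1GB mapping stands in for the sparse case):
      
      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <unistd.h>
      
      int main(void)
      {
          size_t len = 1UL << 30;         /* large, intentionally sparse */
          int fd = open("/mnt/huge/sparse", O_CREAT | O_RDWR, 0600);
          void *p;
      
          if (fd < 0) { perror("open"); return 1; }
          /* MAP_NORESERVE: no huge pages are reserved up front, so later
           * faults may fail if the pool is exhausted -- buyer beware */
          p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_NORESERVE, fd, 0);
          if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }
          munmap(p, len);
          close(fd);
          return 0;
      }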
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Adam Litke <agl@us.ibm.com>
      Cc: Johannes Weiner <hannes@saeurebad.de>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: move reservation region support earlier · 96822904
      Andy Whitcroft authored
      The following patch will require use of the reservation regions support.
      Move this earlier in the file.  No changes have been made to this code.
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Adam Litke <agl@us.ibm.com>
      Cc: Johannes Weiner <hannes@saeurebad.de>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: record MAP_NORESERVE status on vmas and fix small page mprotect reservations · cdfd4325
      Andy Whitcroft authored
      With Mel's hugetlb private reservation support patches applied, strict
      overcommit semantics are applied to both shared and private huge page
      mappings.  This can be a problem if an application relied on unlimited
      overcommit semantics for private mappings.  An example of this would be
      an application which maps a huge area with the intention of using it
      very sparsely.  These applications would benefit from being able to opt
      out of the strict overcommit.  It should be noted that prior to hugetlb
      supporting demand faulting, all mappings were fully populated, so
      applications of this type should be rare.
      
      This patch stack implements the MAP_NORESERVE mmap() flag for huge page
      mappings.  This flag has the same meaning as for small page mappings,
      suppressing reservations for that mapping.
      
      Thanks to Mel Gorman for reviewing a number of early versions of these
      patches.
      
      This patch:
      
      When a small page mapping is created with mmap(), reservations are
      created by default for any memory pages required.  When the region is
      read/write the reservation is increased for every page; no reservation
      is needed for read-only regions (as they implicitly share the zero
      page).  Reservations are tracked via the VM_ACCOUNT vma flag, which is
      present when the region has a reservation backing it.  When we convert a
      region from read-only to read-write, new reservations are acquired and
      VM_ACCOUNT is set.  However, when a read-only map is created with
      MAP_NORESERVE it is indistinguishable from a normal mapping.  When we
      then convert that to read/write we are forced to incorrectly create
      reservations for it, as we have no record of the original MAP_NORESERVE.
      
      This patch introduces a new vma flag VM_NORESERVE which records the
      presence of the original MAP_NORESERVE flag.  This allows us to
      distinguish these two circumstances and correctly account the reserve.
      
      As well as fixing this FIXME in the code, this makes it much easier to
      introduce MAP_NORESERVE support for huge pages, as this flag is
      available consistently for the life of the mapping.  VM_ACCOUNT, on the
      other hand, is heavily used at the generic level in association with
      small pages.
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Adam Litke <agl@us.ibm.com>
      Cc: Johannes Weiner <hannes@saeurebad.de>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • huge page private reservation review cleanups · e7c4b0bf
      Andy Whitcroft authored
      Create some new accessors for vma private data to cut down on and contain
      the casts.  Encapsulates the huge and small page offset calculations.
      Also adds a couple of VM_BUG_ONs for consistency.
      
      [akpm@linux-foundation.org: Make things static]
      Signed-off-by: Andy Whitcroft <apw@shadowen.org>
      Acked-by: Mel Gorman <mel@csn.ul.ie>
      Cc: Adam Litke <agl@us.ibm.com>
      Cc: Johannes Weiner <hannes@saeurebad.de>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: guarantee that COW faults for a process that called mmap(MAP_PRIVATE) on hugetlbfs will succeed · 04f2cbe3
      Mel Gorman authored
      
      After patch 2 in this series, a process that successfully calls mmap() for
      a MAP_PRIVATE mapping will be guaranteed to successfully fault until a
      process calls fork().  At that point, the next write fault from the parent
      could fail due to COW if the child still has a reference.
      
      We only reserve pages for the parent, but a copy must be made to avoid
      leaking data from the parent to the child after fork().  Reserves could
      be taken for both parent and child at fork time to guarantee faults, but
      if the mapping is large it is highly likely we will not have sufficient
      pages for the reservation, and it is common to fork only to exec()
      immediately after.  A failure here would be very undesirable.
      
      Note that the current behaviour of mainline with MAP_PRIVATE pages is
      pretty bad.  The following situation is allowed to occur today.
      
      1. Process calls mmap(MAP_PRIVATE)
      2. Process calls mlock() to fault all pages and makes sure it succeeds
      3. Process forks()
      4. Process writes to MAP_PRIVATE mapping while child still exists
      5. If the COW fails at this point, the process gets SIGKILLed even though it
         had taken care to ensure the pages existed
      
      This patch improves the situation by guaranteeing the reliability of the
      process that successfully calls mmap().  When the parent performs COW, it
      will try to satisfy the allocation without using reserves.  If that fails
      the parent will steal the page leaving any children without a page.
      Faults from the child after that point will result in failure.  If the
      child COW happens first, an attempt will be made to allocate the page
      without reserves and the child will get SIGKILLed on failure.
      
      To summarise the new behaviour:
      
      1. If the original mapper performs COW on a private mapping with multiple
         references, it will attempt to allocate a hugepage from the pool or
         the buddy allocator without using the existing reserves. On fail, VMAs
         mapping the same area are traversed and the page being COW'd is unmapped
         where found. It will then steal the original page as the last mapper in
         the normal way.
      
      2. The VMAs the pages were unmapped from are flagged to note that pages
         with data no longer exist. Future no-page faults on those VMAs will
         terminate the process, as otherwise it would appear that data was
         corrupted. A warning is printed to the console that this situation
         occurred.
      
      3. If the child performs COW first, it will attempt to satisfy the COW
         from the pool if there are enough pages, or via the buddy allocator if
         overcommit is allowed and the buddy allocator can satisfy the request.
         If it fails, the child will be killed.
      
      If the pool is large enough, existing applications will not notice that
      the reserves were a factor.  Existing applications that depend on no
      reserves being taken are unlikely to exist, as for much of the history
      of hugetlbfs pages were prefaulted at mmap() time, allocating the pages
      at that point or failing the mmap().
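      The mainline scenario above, as a hedged userspace sketch (the path and
      the 2M page size are assumptions; reproducing the failure also needs a
      nearly empty pool):
      
      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <unistd.h>
      
      int main(void)
      {
          size_t len = 2UL << 20;         /* one 2M huge page, assumed size */
          int fd = open("/mnt/huge/file", O_CREAT | O_RDWR, 0600);
          char *p;
      
          if (fd < 0) { perror("open"); return 1; }
          p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE, fd, 0);   /* step 1 */
          if (p == MAP_FAILED) { perror("mmap"); return 1; }
          mlock(p, len);                  /* step 2: fault everything in */
          if (fork() == 0) {              /* step 3: child keeps a reference */
              sleep(60);
              _exit(0);
          }
          p[0] = 1;   /* steps 4/5: parent COW write; previously this could
                       * SIGKILL the parent if no huge page was free */
          return 0;
      }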
      
      [npiggin@suse.de: fix CONFIG_HUGETLB=n build]
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Adam Litke <agl@us.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: reserve huge pages for reliable MAP_PRIVATE hugetlbfs mappings until fork() · a1e78772
      Mel Gorman authored
      This patch reserves huge pages at mmap() time for MAP_PRIVATE mappings in
      a similar manner to the reservations taken for MAP_SHARED mappings.  The
      reserve count is accounted both globally and on a per-VMA basis for
      private mappings.  This guarantees that a process that successfully calls
      mmap() will successfully fault all pages in the future unless fork() is
      called.
      
      The characteristics of private mappings of hugetlbfs files after this
      patch are:
      
      1. The process calling mmap() is guaranteed to succeed all future faults until
         it forks().
      2. On fork(), the parent may die due to SIGKILL on writes to the private
         mapping if enough pages are not available for the COW. For reasonably
         reliable behaviour in the face of a small huge page pool, children of
         hugepage-aware processes should not reference the mappings; such as
         might occur when fork()ing to exec().
      3. On fork(), the child VMAs inherit no reserves. Reads on pages already
         faulted by the parent will succeed. Successful writes will depend on enough
         huge pages being free in the pool.
      4. Quotas of the hugetlbfs mount are checked at reserve time for the mapper
         and at fault time otherwise.
      
      Before this patch, all reads or writes in the child potentially need
      page allocations that can later lead to the death of the parent.  This
      applies to reads and writes of uninstantiated pages as well as COW.
      After the patch it is only a write to an instantiated page that causes
      problems.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Adam Litke <agl@us.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlb: move hugetlb_acct_memory() · fc1b8a73
      Mel Gorman authored
      This is a patchset to give reliable behaviour to a process that
      successfully calls mmap(MAP_PRIVATE) on a hugetlbfs file.  Currently, it
      is possible for the process to be killed due to a small hugepage pool size
      even if it calls mlock().
      
      MAP_SHARED mappings on hugetlbfs reserve huge pages at mmap() time.  This
      guarantees all future faults against the mapping will succeed.  This
      allows local allocations at first use improving NUMA locality whilst
      retaining reliability.
      
      MAP_PRIVATE mappings do not reserve pages.  This can result in an
      application being SIGKILLed later if a huge page is not available at
      fault time.  This makes huge page usage very ill-advised in some cases,
      as the unexpected application failure cannot be detected and handled as
      it is immediately fatal.  Although an application may force
      instantiation of the pages using mlock(), this may lead to poor memory
      placement and the process may still be killed when performing COW.
      
      This patchset introduces a reliability guarantee for the process which
      creates a private mapping, i.e.  the process that calls mmap() on a
      hugetlbfs file successfully.  The first patch of the set is purely
      mechanical code move to make later diffs easier to read.  The second patch
      will guarantee faults up until the process calls fork().  After patch two,
      as long as the child keeps the mappings, the parent is no longer
      guaranteed to be reliable.  Patch 3 guarantees that the parent will always
      successfully COW by unmapping the pages from the child in the event there
      are insufficient pages in the hugepage pool to allocate a new page, be it
      via a static or dynamic pool.
      
      Existing hugepage-aware applications are unlikely to be affected by this
      change.  For much of hugetlbfs's history, pages were pre-faulted at
      mmap() time or mmap() failed, which acts in a reserve-like manner.  If
      the pool is
      sized correctly already so that parent and child can fault reliably, the
      application will not even notice the reserves.  It's only when the pool is
      too small for the application to function perfectly reliably that the
      reserves come into play.
      
      Credit goes to Andy Whitcroft for cleaning up a number of mistakes during
      review before the patches were released.
      
      This patch:
      
      A later patch in this set needs to call hugetlb_acct_memory() before it is
      defined.  This patch moves the function without modification.  This makes
      later diffs easier to read.
      Signed-off-by: Mel Gorman <mel@csn.ul.ie>
      Acked-by: Adam Litke <agl@us.ibm.com>
      Cc: Andy Whitcroft <apw@shadowen.org>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>