Commit 248ac0e1 authored by Johannes Weiner, committed by Linus Torvalds

mm/vmalloc: remove guard page from between vmap blocks

The vmap allocator is used to, among other things, allocate per-cpu vmap
blocks, where each vmap block is naturally aligned to its own size.
Obviously, leaving a guard page after each vmap area forbids packing vmap
blocks efficiently and can make the kernel run out of possible vmap blocks
long before overall vmap space is exhausted.
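
To make the packing cost concrete, here is a small user-space sketch (illustrative only, not part of the patch; the 4 KiB page size, 4 MiB block size and 128 MiB window are made-up example values) that counts how many naturally aligned blocks fit when each one is followed by a guard page versus when they may sit back to back:

#include <stdio.h>

#define ALIGN(x, a)     (((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
        unsigned long page_size  = 4096UL;        /* assumed 4 KiB pages */
        unsigned long block_size = 4UL << 20;     /* assumed 4 MiB vmap block */
        unsigned long window     = 128UL << 20;   /* assumed 128 MiB of vmap space */
        unsigned long addr, blocks;

        /* Old behaviour: guard page after every area, then natural alignment. */
        for (addr = 0, blocks = 0; addr + block_size <= window; blocks++)
                addr = ALIGN(addr + block_size + page_size, block_size);
        printf("with guard page:    %lu blocks\n", blocks);   /* prints 16 */

        /* New behaviour: blocks can be packed back to back. */
        for (addr = 0, blocks = 0; addr + block_size <= window; blocks++)
                addr = ALIGN(addr + block_size, block_size);
        printf("without guard page: %lu blocks\n", blocks);   /* prints 32 */

        return 0;
}

Because each block is aligned to its own size, a single trailing guard page pushes the next candidate address past an entire alignment slot, so half of the possible block positions are lost.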

The new interface to map a user-supplied page array into linear vmalloc
space (vm_map_ram) insists on allocating from a vmap block (instead of
falling back to a custom area) when the area size is below a certain
threshold.  With heavy users of this interface (e.g.  XFS) and limited
vmalloc space on 32-bit, vmap block exhaustion is a real problem.
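
For reference, the usage pattern of that interface looks roughly like this; the helper names below are hypothetical, and the vm_map_ram() signature shown is the one the interface had around the time of this commit (the pgprot argument was removed from mainline much later):

#include <linux/mm.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical wrappers in the style of a heavy user such as XFS:
 * map a caller-supplied page array into vmalloc space, unmap it
 * again with the same page count.
 */
static void *map_buffer_pages(struct page **pages, unsigned int nr_pages)
{
        /*
         * Small requests are served from a per-cpu vmap block; only
         * larger ones fall back to a private vmap area.
         */
        return vm_map_ram(pages, nr_pages, -1, PAGE_KERNEL);
}

static void unmap_buffer_pages(void *addr, unsigned int nr_pages)
{
        /* Must be passed the same page count that was mapped. */
        vm_unmap_ram(addr, nr_pages);
}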

Remove the guard page from the core vmap allocator.  vmalloc and the old
vmap interface enforce a guard page on their own at a higher level.
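
Roughly, that higher-level guard amounts to the get_vm_area()/vmalloc path asking the core allocator for one extra page. The fragment below is a simplified paraphrase of that idea, not a verbatim quote of __get_vm_area_node():

        size = PAGE_ALIGN(size);

        /*
         * We always allocate a guard page at this level, so the core
         * vmap allocator no longer has to.
         */
        size += PAGE_SIZE;

        va = alloc_vmap_area(size, align, start, end, node, gfp_mask);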

Note that without this patch, we had accidental guard pages after those
vm_map_ram areas that happened to be at the end of a vmap block, but not
between every area.  This patch removes this accidental guard page only.

If we want guard pages after every vm_map_ram area, this should be done
separately.  And just like with vmalloc and the old interface on a
different level, not in the core allocator.

Mel pointed out: "If necessary, the guard page could be reintroduced as a
debugging-only option (CONFIG_DEBUG_PAGEALLOC?).  Otherwise it seems
reasonable."
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Dave Chinner <david@fromorbit.com>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Cc: Hugh Dickins <hughd@google.com>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 82d4b577
@@ -375,7 +375,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
        /* find starting point for our search */
        if (free_vmap_cache) {
                first = rb_entry(free_vmap_cache, struct vmap_area, rb_node);
-               addr = ALIGN(first->va_end + PAGE_SIZE, align);
+               addr = ALIGN(first->va_end, align);
                if (addr < vstart)
                        goto nocache;
                if (addr + size - 1 < addr)
@@ -406,10 +406,10 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
        }

        /* from the starting point, walk areas until a suitable hole is found */
-       while (addr + size >= first->va_start && addr + size <= vend) {
+       while (addr + size > first->va_start && addr + size <= vend) {
                if (addr + cached_hole_size < first->va_start)
                        cached_hole_size = first->va_start - addr;
-               addr = ALIGN(first->va_end + PAGE_SIZE, align);
+               addr = ALIGN(first->va_end, align);
                if (addr + size - 1 < addr)
                        goto overflow;
...
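
Note on the second hunk: besides dropping the PAGE_SIZE padding, the walk condition is relaxed from "addr + size >= first->va_start" to "addr + size > first->va_start". With the implicit guard page gone, a candidate area that ends exactly where an existing area begins no longer needs room for a trailing guard, so the search only has to skip ahead on a strict overlap.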