Commit 3a6be87f authored by David Howells, committed by Linus Torvalds

nommu: clamp zone_batchsize() to 0 under NOMMU conditions

Clamp zone_batchsize() to 0 under NOMMU conditions to stop
free_hot_cold_page() from queueing and batching frees.

The problem is that under NOMMU conditions it is really important to be
able to allocate large contiguous chunks of memory, but when munmap() or
exit_mmap() releases big stretches of memory, the return of that memory to
the buddy allocator can be deferred, and when it does finally happen, it
can be in small chunks.

Whilst the fragmentation this incurs isn't much of a problem under MMU
conditions, since userspace VM is glued together from individual pages with
the aid of the MMU, it is a real problem if there isn't an MMU.

By clamping the page-freeing queue size to 0, pages are returned to the
allocator immediately; the buddy detector is then more likely to be able to
glue them back together into large chunks straight away, so fragmentation
is less likely to occur.

With the batching of frees disabled, and with the trimming of excess space
during boot turned off, the Coldfire manages to boot.
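
The mechanism can be pictured with a small userspace model of the per-CPU
freeing queue. This is an editorial sketch, not kernel code: the names
pcp_model, model_setup() and model_free_page() are invented, and the
6 * batch watermark only approximates how setup_pageset() derives the drain
threshold from the batch size. It shows why a batch size of 0 sends every
freed page straight back to the buddy allocator, while a typical MMU batch
size lets pages sit on the queue first.

/*
 * Simplified userspace model of the per-CPU page-freeing queue (not the
 * real kernel code).  Pages accumulate on a per-CPU list and are only
 * handed back to the buddy allocator in bulk once a "high" watermark,
 * derived from the zone's batch size, is crossed.  With a batch size of
 * 0 the watermark is 0 as well, so every free drains immediately.
 */
#include <stdio.h>

struct pcp_model {			/* hypothetical stand-in for the per-CPU page set */
	int count;			/* pages currently queued on this CPU */
	int high;			/* drain threshold */
	int batch;			/* pages handed back per drain */
};

static void model_setup(struct pcp_model *pcp, int batch)
{
	pcp->count = 0;
	pcp->high  = 6 * batch;		/* roughly how setup_pageset() scales it */
	pcp->batch = batch > 1 ? batch : 1;
}

static void model_free_page(struct pcp_model *pcp)
{
	pcp->count++;			/* queue the page instead of freeing it */
	if (pcp->count >= pcp->high) {	/* watermark hit: return a batch */
		int n = pcp->count < pcp->batch ? pcp->count : pcp->batch;

		printf("returning %d page(s) to the buddy allocator\n", n);
		pcp->count -= n;
	}
}

int main(void)
{
	struct pcp_model mmu, nommu;
	int i;

	model_setup(&mmu, 31);		/* an illustrative MMU-side batch size */
	model_setup(&nommu, 0);		/* NOMMU: clamped to 0 by this patch */

	for (i = 0; i < 4; i++)
		model_free_page(&nommu);	/* each free drains at once */
	for (i = 0; i < 4; i++)
		model_free_page(&mmu);		/* nothing drains yet */

	printf("MMU-style queue still holds %d page(s)\n", mmu.count);
	return 0;
}

Run as-is, the NOMMU-style queue reports a bulk return for every single
free, while the MMU-style queue still holds all four pages: that held-back
memory is exactly the deferral the patch suppresses on NOMMU.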

Reported-by: Lanttor Guo <lanttor.guo@freescale.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Lanttor Guo <lanttor.guo@freescale.com>
Cc: Greg Ungerer <gerg@snapgear.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 9155203a
@@ -2681,6 +2681,7 @@ static void __meminit zone_init_free_lists(struct zone *zone)
 
 static int zone_batchsize(struct zone *zone)
 {
+#ifdef CONFIG_MMU
 	int batch;
 
 	/*
@@ -2709,6 +2710,23 @@ static int zone_batchsize(struct zone *zone)
 	batch = rounddown_pow_of_two(batch + batch/2) - 1;
 
 	return batch;
+
+#else
+	/* The deferral and batching of frees should be suppressed under NOMMU
+	 * conditions.
+	 *
+	 * The problem is that NOMMU needs to be able to allocate large chunks
+	 * of contiguous memory as there's no hardware page translation to
+	 * assemble apparent contiguous memory from discontiguous pages.
+	 *
+	 * Queueing large contiguous runs of pages for batching, however,
+	 * causes the pages to actually be freed in smaller chunks.  As there
+	 * can be a significant delay between the individual batches being
+	 * recycled, this leads to the once large chunks of space being
+	 * fragmented and becoming unavailable for high-order allocations.
+	 */
+	return 0;
+#endif
 }
 
 static void setup_pageset(struct per_cpu_pageset *p, unsigned long batch)
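
As an aside on the MMU branch that survives the patch, the final clamp
visible in the hunk above keeps the batch at one below a power of two
(a 2^n - 1 value). A minimal, self-contained illustration of that
arithmetic follows; the starting value of 32 is assumed purely for example,
and rounddown_pow_of_two() here is a local userspace stand-in for the
kernel helper of the same name.

#include <stdio.h>

/* userspace stand-in for the kernel's rounddown_pow_of_two() */
static unsigned long rounddown_pow_of_two(unsigned long n)
{
	unsigned long p = 1;

	while (p * 2 <= n)
		p *= 2;
	return p;
}

int main(void)
{
	unsigned long batch = 32;	/* assumed input, for illustration only */

	/* 32 + 16 = 48; rounddown_pow_of_two(48) = 32; 32 - 1 = 31 */
	batch = rounddown_pow_of_two(batch + batch / 2) - 1;
	printf("batch = %lu\n", batch);	/* prints: batch = 31 */
	return 0;
}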