Commit 602605a4 authored by Mel Gorman, committed by Linus Torvalds

mm: compaction: minimise the time IRQs are disabled while isolating free pages

compaction_alloc() isolates free pages to be used as migration targets.
While it is scanning, IRQs are disabled on the mistaken assumption that
the scanning should be short.  Analysis showed that IRQs were in fact
being disabled for a substantial time.  A simple test was run using
large anonymous mappings with transparent hugepage support enabled to
trigger frequent compactions (a rough sketch of such a trigger is shown
after the report link below).  A monitor sampled the worst IRQ-off
latencies and a post-processing tool found the following:

  Total sampled time IRQs off (not real total time): 22355
  Event compaction_alloc..compaction_alloc                 8409 us count 1
  Event compaction_alloc..compaction_alloc                 7341 us count 1
  Event compaction_alloc..compaction_alloc                 2463 us count 1
  Event compaction_alloc..compaction_alloc                 2054 us count 1
  Event shrink_inactive_list..shrink_zone                  1864 us count 1
  Event shrink_inactive_list..shrink_zone                    88 us count 1
  Event save_args..call_softirq                              36 us count 1
  Event save_args..call_softirq                              35 us count 2
  Event __make_request..__blk_run_queue                      24 us count 1
  Event __alloc_pages_nodemask..__alloc_pages_nodemask        6 us count 1

i.e.  compaction disables IRQs for a prolonged period of time - 8ms in
one instance.  The full report generated by the tool can be found at

 http://www.csn.ul.ie/~mel/postings/minfree-20110225/irqsoff-vanilla-micro.report
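
For reference, a trigger of the kind described above could look roughly
like the small program below.  This is only an illustrative sketch, not
the actual test program used for the reports (which is not included
here); the 1 GiB mapping size is an arbitrary assumption:

  #define _GNU_SOURCE
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
      size_t len = 1UL << 30;  /* 1 GiB anonymous mapping; size is arbitrary */
      char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      if (p == MAP_FAILED)
          return EXIT_FAILURE;

      /* Ask for transparent hugepages on this range */
      madvise(p, len, MADV_HUGEPAGE);

      /* Fault every page in; huge page allocations drive compaction */
      memset(p, 1, len);

      munmap(p, len);
      return EXIT_SUCCESS;
  }

Faulting the mapping in forces order-9 (2MB on x86-64) allocations for
the hugepages; under memory pressure those allocations fall back to
direct compaction, which is what exercises compaction_alloc().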

This patch reduces the time IRQs are disabled by simply disabling IRQs
at the last possible moment.  An updated IRQs-off summary report then
looks like:

  Total sampled time IRQs off (not real total time): 5493
  Event shrink_inactive_list..shrink_zone                  1596 us count 1
  Event shrink_inactive_list..shrink_zone                  1530 us count 1
  Event shrink_inactive_list..shrink_zone                   956 us count 1
  Event shrink_inactive_list..shrink_zone                   541 us count 1
  Event shrink_inactive_list..shrink_zone                   531 us count 1
  Event split_huge_page..add_to_swap                        232 us count 1
  Event save_args..call_softirq                              36 us count 1
  Event save_args..call_softirq                              35 us count 2
  Event __wake_up..__wake_up                                  1 us count 1

A full report is again available at

  http://www.csn.ul.ie/~mel/postings/minfree-20110225/irqsoff-minimiseirq-free-v1r4-micro.report

As should be obvious, IRQ-off latencies due to compaction are almost
eliminated for this particular test.

[aarcange@redhat.com: Fix initialisation of isolated]
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Acked-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Arthur Marsh <arthur.marsh@internode.on.net>
Cc: Clemens Ladisch <cladisch@googlemail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 5b280c0c
@@ -153,7 +153,6 @@ static void isolate_freepages(struct zone *zone,
 	 * pages on cc->migratepages. We stop searching if the migrate
 	 * and free page scanners meet or enough free pages are isolated.
 	 */
-	spin_lock_irqsave(&zone->lock, flags);
 	for (; pfn > low_pfn && cc->nr_migratepages > nr_freepages;
 					pfn -= pageblock_nr_pages) {
 		unsigned long isolated;
@@ -176,9 +175,19 @@ static void isolate_freepages(struct zone *zone,
 		if (!suitable_migration_target(page))
 			continue;
 
-		/* Found a block suitable for isolating free pages from */
-		isolated = isolate_freepages_block(zone, pfn, freelist);
-		nr_freepages += isolated;
+		/*
+		 * Found a block suitable for isolating free pages from. Now
+		 * we disabled interrupts, double check things are ok and
+		 * isolate the pages. This is to minimise the time IRQs
+		 * are disabled
+		 */
+		isolated = 0;
+		spin_lock_irqsave(&zone->lock, flags);
+		if (suitable_migration_target(page)) {
+			isolated = isolate_freepages_block(zone, pfn, freelist);
+			nr_freepages += isolated;
+		}
+		spin_unlock_irqrestore(&zone->lock, flags);
 
 		/*
 		 * Record the highest PFN we isolated pages from. When next
@@ -188,7 +197,6 @@ static void isolate_freepages(struct zone *zone,
 		if (isolated)
 			high_pfn = max(high_pfn, pfn);
 	}
-	spin_unlock_irqrestore(&zone->lock, flags);
 
 	/* split_free_page does not map the pages */
 	list_for_each_entry(page, freelist, lru) {
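
The structure of the change is worth spelling out: the block is
pre-checked without the lock so unsuitable blocks can be skipped
cheaply, the lock is taken only around the actual isolation, the
suitability test is repeated under the lock because it may have changed
while the lock was not held, and isolated is initialised to 0 because
the later if (isolated) test runs even when the locked re-check fails
(the "Fix initialisation of isolated" note above).  A small userspace
sketch of the same shape follows; a pthread mutex stands in for
zone->lock and every name below is invented purely for illustration:

  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* plays the role of zone->lock */
  static int block_suitable = 1;  /* plays the role of suitable_migration_target() */

  static int isolate_block(void)
  {
      return 4;  /* pretend four pages were isolated */
  }

  int main(void)
  {
      int isolated = 0;  /* must be initialised: read below even if the re-check fails */

      if (block_suitable) {            /* cheap pre-check without the lock */
          pthread_mutex_lock(&lock);
          if (block_suitable)          /* re-check now that the lock is held */
              isolated = isolate_block();
          pthread_mutex_unlock(&lock);
      }

      if (isolated)                    /* reached even when isolation was skipped */
          printf("isolated %d pages\n", isolated);
      return 0;
  }

In the kernel the lock is spin_lock_irqsave(), so the span between lock
and unlock is exactly the IRQs-off window the patch is shrinking.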