Commit a26b7c8a authored by Jin Xu, committed by Jaegeuk Kim

f2fs: optimize gc for better performance

This patch improves gc efficiency by optimizing the victim
selection policy. With this optimization, random re-write
performance can increase by up to 20%.

For f2fs, when the disk runs short of free space, gc selects
dirty segments and moves their valid blocks around to make more
space available. The gc cost of a segment is determined by the
number of valid blocks in it: the fewer the valid blocks, the
higher the efficiency. The ideal victim segment is the one with
the most garbage blocks.

Currently, it searches up to 20 dirty segments for a victim.
The selected victim is unlikely to be the best one when there
are many more dirty segments than that. Why not search more dirty
segments for a better victim? The cost of searching dirty segments
is negligible in comparison to moving blocks.

This patch enlarges MAX_VICTIM_SEARCH to 4096 to make the search
more aggressive, in the hope of finding a better victim. Since the
limit also applies to victim selection for SSR, it should improve
SSR efficiency as well.

The test case is simple. It creates files until the disk is full;
each file is 32KB in size. Then it writes 100000 records of 4KB
to random offsets of random files in sync mode. The test was done
on a 2GB partition of an SDHC card. Here are the results for f2fs
without and with the patch.

---------------------------------------
2GB partition, SDHC
create 52023 files of size 32768 bytes
random re-write 100000 records of 4KB
---------------------------------------
            file creation (s)  rewrite time (s)  gc count  gc garbage blocks
[no patch]        341               4227           1174         174840
[patched]         324               2958            645         106682

It's obvious that, with the patch, f2fs finishes the test in 20+%
less time than without it, and internally it performs far fewer gc
passes at higher efficiency than before.

Since the performance improvement is related to gc, it may be less
pronounced in tests that do not trigger gc as often as this one
(f2fs selects dirty segments for SSR most of the time when free
space is short). The well-known iozone benchmark was not used
because it does not seem to have a test case that performs random
re-writes on a full disk.

This patch is the revised version based on the suggestion from
Jaegeuk Kim.
Signed-off-by: Jin Xu <jinuxstyle@gmail.com>
[Jaegeuk Kim: suggested simpler solution]
Reviewed-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
Signed-off-by: Jaegeuk Kim <jaegeuk.kim@samsung.com>
parent 423e95cc
@@ -153,12 +153,18 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type,
 	if (p->alloc_mode == SSR) {
 		p->gc_mode = GC_GREEDY;
 		p->dirty_segmap = dirty_i->dirty_segmap[type];
+		p->max_search = dirty_i->nr_dirty[type];
 		p->ofs_unit = 1;
 	} else {
 		p->gc_mode = select_gc_type(sbi->gc_thread, gc_type);
 		p->dirty_segmap = dirty_i->dirty_segmap[DIRTY];
+		p->max_search = dirty_i->nr_dirty[DIRTY];
 		p->ofs_unit = sbi->segs_per_sec;
 	}
+
+	if (p->max_search > MAX_VICTIM_SEARCH)
+		p->max_search = MAX_VICTIM_SEARCH;
+
 	p->offset = sbi->last_victim[p->gc_mode];
 }

@@ -305,7 +311,7 @@ static int get_victim_by_default(struct f2fs_sb_info *sbi,
 		if (cost == max_cost)
 			continue;

-		if (nsearched++ >= MAX_VICTIM_SEARCH) {
+		if (nsearched++ >= p.max_search) {
 			sbi->last_victim[p.gc_mode] = segno;
 			break;
 		}

@@ -20,7 +20,7 @@
 #define LIMIT_FREE_BLOCK	40 /* percentage over invalid + free space */

 /* Search max. number of dirty segments to select a victim segment */
-#define MAX_VICTIM_SEARCH	20
+#define MAX_VICTIM_SEARCH	4096 /* covers 8GB */

 struct f2fs_gc_kthread {
 	struct task_struct *f2fs_gc_task;

@@ -142,6 +142,7 @@ struct victim_sel_policy {
 	int alloc_mode;			/* LFS or SSR */
 	int gc_mode;			/* GC_CB or GC_GREEDY */
 	unsigned long *dirty_segmap;	/* dirty segment bitmap */
+	unsigned int max_search;	/* maximum # of segments to search */
 	unsigned int offset;		/* last scanned bitmap offset */
 	unsigned int ofs_unit;		/* bitmap search unit */
 	unsigned int min_cost;		/* minimum cost */