Commit 26b28dce authored by Nick Terrell, committed by David Sterba

btrfs: Keep one more workspace around

find_workspace() allocates up to num_online_cpus() + 1 workspaces.
free_workspace() will only keep num_online_cpus() workspaces. When
(de)compressing we will allocate num_online_cpus() + 1 workspaces, then
free one, and repeat. Instead, we can just keep num_online_cpus() + 1
workspaces around, and never have to allocate/free another workspace in the
common case.

I tested on a Ubuntu 14.04 VM with 2 cores and 4 GiB of RAM. I mounted a
Btrfs partition with -o compress-force={lzo,zlib,zstd} and logged whenever
a workspace was allocated or freed. Then I copied vmlinux (527 MB) to the
partition. Before the patch, during the copy it would allocate and free 5-6
workspaces. After, it only allocated the initial 3. This held true for lzo,
zlib, and zstd. The time it took to execute cp vmlinux /mnt/btrfs && sync
dropped from 1.70s to 1.44s with lzo compression, and from 2.04s to 1.80s
for zstd compression.
Signed-off-by: Nick Terrell <terrelln@fb.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: David Sterba <dsterba@suse.com>
parent 913e1535
@@ -825,7 +825,7 @@ static void free_workspace(int type, struct list_head *workspace)
 	int *free_ws = &btrfs_comp_ws[idx].free_ws;
 
 	spin_lock(ws_lock);
-	if (*free_ws < num_online_cpus()) {
+	if (*free_ws <= num_online_cpus()) {
 		list_add(workspace, idle_ws);
 		(*free_ws)++;
 		spin_unlock(ws_lock);