Commit 5fda39e3 authored by Minchan Kim, committed by Greg Kroah-Hartman

zram: do not use copy_page with non-page aligned address

commit d72e9a7a upstream.

copy_page() is an optimized memcpy() for page-aligned addresses.  If it is
used with a non-page-aligned address, it can corrupt memory, which means
system corruption.  With zram, this can happen with

1. a 64K page-size architecture
2. partial IO
3. slub debug

Partial IO needs to allocate a page, and zram allocates it via kmalloc().
With slub debug, kmalloc(PAGE_SIZE) doesn't return a page-aligned address,
and copy_page(mem, cmem) then corrupts memory.

So, this patch changes it to memcpy.
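
For illustration, a minimal sketch of the failing pattern in the partial-IO
read path (variable names and the GFP flag are illustrative, not copied
verbatim from zram):

	/* bounce buffer for partial IO; with slub debug this may not be
	 * page aligned */
	char *mem = kmalloc(PAGE_SIZE, GFP_NOIO);

	/* copy_page(mem, cmem) assumes page alignment and can corrupt
	 * memory here; memcpy() is safe for any alignment */
	memcpy(mem, cmem, PAGE_SIZE);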

Actually, we don't need to change the zram_bvec_write part because zsmalloc
returns a page-aligned address for the PAGE_SIZE class, but it's not good
to rely on zsmalloc internals.
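
For reference, the write-side assumption the old code relied on can be
expressed as an alignment check; this is a hedged illustration only, since
zsmalloc's API does not promise page-aligned mappings:

	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_WO);
	/* copy_page(cmem, src) was only correct if this held for the
	 * PAGE_SIZE class */
	WARN_ON_ONCE(!IS_ALIGNED((unsigned long)cmem, PAGE_SIZE));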

Note:
 When this patch is merged to stable, clear_page() should be fixed, too.
 Unfortunately, recent zram removed it with the "same page merge" feature,
 so it's hard to backport this patch to the -stable tree.

I will handle it when I receive the mail from the stable tree maintainer
about backporting this patch.

Fixes: 42e99bd9 ("zram: optimize memory operations with clear_page()/copy_page()")
Link: http://lkml.kernel.org/r/1492042622-12074-2-git-send-email-minchan@kernel.org
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent de3596d3
@@ -431,13 +431,13 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)
 	if (!handle || zram_test_flag(meta, index, ZRAM_ZERO)) {
 		bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
-		clear_page(mem);
+		memset(mem, 0, PAGE_SIZE);
 		return 0;
 	}
 
 	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
 	if (size == PAGE_SIZE)
-		copy_page(mem, cmem);
+		memcpy(mem, cmem, PAGE_SIZE);
 	else
 		ret = zcomp_decompress(zram->comp, cmem, size, mem);
 	zs_unmap_object(meta->mem_pool, handle);
@@ -612,7 +612,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 	if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
 		src = kmap_atomic(page);
-		copy_page(cmem, src);
+		memcpy(cmem, src, PAGE_SIZE);
 		kunmap_atomic(src);
 	} else {
 		memcpy(cmem, src, clen);