Commit 8376efd3 authored by Ben Hutchings, committed by Dan Williams

x86, pmem: Fix cache flushing for iovec write < 8 bytes

Commit 11e63f6d added cache flushing for unaligned writes from an
iovec, covering the first and last cache line of a >= 8 byte write and
the first cache line of a < 8 byte write.  But an unaligned write of
2-7 bytes can still cover two cache lines, so make sure we flush both
in that case.

Cc: <stable@vger.kernel.org>
Fixes: 11e63f6d ("x86, pmem: fix broken __copy_user_nocache ...")
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
parent cf1e2289
@@ -98,7 +98,7 @@ static inline size_t arch_copy_from_iter_pmem(void *addr, size_t bytes,
 	if (bytes < 8) {
 		if (!IS_ALIGNED(dest, 4) || (bytes != 4))
-			arch_wb_cache_pmem(addr, 1);
+			arch_wb_cache_pmem(addr, bytes);
 	} else {
 		if (!IS_ALIGNED(dest, 8)) {
 			dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
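To illustrate why an unaligned write of 2-7 bytes can still straddle a cache-line boundary, here is a minimal user-space sketch. It assumes a 64-byte cache line; the addresses and the lines_touched() helper are illustrative only and are not part of the kernel change.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define CACHE_LINE_SIZE 64	/* assumed x86 cache-line / clflush granularity */

/* Count how many cache lines a write of 'bytes' starting at 'dest' touches. */
static unsigned long lines_touched(uintptr_t dest, size_t bytes)
{
	uintptr_t first = dest / CACHE_LINE_SIZE;
	uintptr_t last = (dest + bytes - 1) / CACHE_LINE_SIZE;

	return (unsigned long)(last - first + 1);
}

int main(void)
{
	/* 4-byte write that sits entirely inside one cache line. */
	printf("4 bytes at offset 60: %lu line(s)\n",
	       lines_touched(0x1000 + 60, 4));

	/*
	 * Unaligned 4-byte write starting 2 bytes before a line boundary:
	 * it spans two lines, so flushing only 1 byte at the destination
	 * would leave the second line dirty; flushing 'bytes' covers both.
	 */
	printf("4 bytes at offset 62: %lu line(s)\n",
	       lines_touched(0x1000 + 62, 4));

	return 0;
}

Built with any C compiler, this reports one line for the first case and two for the second, which is the situation the old one-byte flush missed.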