Commit e1f2750e authored by Linus Torvalds

x86: remove 'zerorest' argument from __copy_user_nocache()

Every caller passes in zero, meaning they don't want any partial copy to
zero the remainder of the destination buffer.

Which is just as well, because the implementation of that function
didn't actually look at that argument, and wasn't even aware it
existed, although some misleading comments still mentioned it.

The 'zerorest' thing is a historical artifact of how "copy_from_user()"
worked, in that it would zero the rest of the kernel buffer that it
copied into.

That zeroing still exists, but it's long since been moved to generic
code, and the raw architecture-specific code doesn't do it.  See
_copy_from_user() in lib/usercopy.c for all of this.
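
For reference, the tail zeroing now lives in that generic wrapper; a
simplified sketch of _copy_from_user() (instrumentation and
fault-injection hooks omitted, and details vary by kernel version)
looks roughly like this:

    unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n)
    {
            unsigned long res = n;

            might_fault();
            if (likely(access_ok(from, n)))
                    res = raw_copy_from_user(to, from, n);
            if (unlikely(res))
                    memset(to + (n - res), 0, res); /* zero the uncopied tail */
            return res;
    }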

However, while __copy_user_nocache() shares some history and superficial
other similarities with copy_from_user(), it is in many ways also very
different.

In particular, while the code makes it *look* similar to the generic
user copy functions that can copy both to and from user space, and take
faults on both reads and writes as a result, __copy_user_nocache() does
no such thing at all.

__copy_user_nocache() always copies to kernel space, and will never take
a page fault on the destination.  What *can* happen, though, is that the
non-temporal stores take a machine check because one of the use cases is
for writing to stable memory, and any memory errors would then take
synchronous faults.

So __copy_user_nocache() does look a lot like copy_from_user(), but has
faulting behavior that is more akin to our old copy_in_user() (which no
longer exists, but copied from user space to user space and could fault
on both source and destination).

And it very much does not have the "zero the end of the destination
buffer" semantics, since a problem with the destination buffer is very
possibly the source of the partial copy in the first place.
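
To make the calling convention concrete, here is a hypothetical caller
(not part of this change; 'pmem_write_example' is a made-up name):
__copy_user_nocache() returns the number of bytes it did not copy, and
on a short copy the tail of the kernel destination is simply left
untouched:

    /* Hypothetical sketch of a caller; not taken from this commit. */
    static ssize_t pmem_write_example(void *dst, const void __user *ubuf, size_t len)
    {
            long uncopied;

            stac();                         /* open the user-access window (SMAP) */
            uncopied = __copy_user_nocache(dst, ubuf, len);
            clac();

            if (uncopied)
                    return len - uncopied;  /* partial copy: tail of dst is NOT zeroed */
            return len;
    }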

So this whole thing was just a confusing historical artifact from having
shared some code with a completely different function with completely
different use cases.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent e046fe5a
@@ -52,9 +52,7 @@ raw_copy_to_user(void __user *dst, const void *src, unsigned long size)
         return copy_user_generic((__force void *)dst, src, size);
 }
 
-extern long __copy_user_nocache(void *dst, const void __user *src,
-                                unsigned size, int zerorest);
-
+extern long __copy_user_nocache(void *dst, const void __user *src, unsigned size);
 extern long __copy_user_flushcache(void *dst, const void __user *src, unsigned size);
 extern void memcpy_page_flushcache(char *to, struct page *page, size_t offset,
                 size_t len);
@@ -66,7 +64,7 @@ __copy_from_user_inatomic_nocache(void *dst, const void __user *src,
         long ret;
         kasan_check_write(dst, size);
         stac();
-        ret = __copy_user_nocache(dst, src, size, 0);
+        ret = __copy_user_nocache(dst, src, size);
         clac();
         return ret;
 }
@@ -290,7 +290,7 @@ SYM_FUNC_START(__copy_user_nocache)
         _ASM_EXTABLE_CPY(41b, .L_fixup_1b_copy)
 
 /*
- * Try to copy last bytes and clear the rest if needed.
+ * Try to copy last bytes.
  * Since protection fault in copy_from/to_user is not a normal situation,
  * it is not necessary to optimize tail handling.
  * Don't try to copy the tail if machine check happened
@@ -320,7 +320,7 @@ SYM_FUNC_START(__copy_user_nocache)
         _ASM_EXTABLE_CPY(1b, 2b)
 
 .Lcopy_user_handle_align:
-        addl %ecx,%edx          /* ecx is zerorest also */
+        addl %ecx,%edx
         jmp .Lcopy_user_handle_tail
 
 SYM_FUNC_END(__copy_user_nocache)
@@ -48,7 +48,7 @@ long __copy_user_flushcache(void *dst, const void __user *src, unsigned size)
         long rc;
 
         stac();
-        rc = __copy_user_nocache(dst, src, size, 0);
+        rc = __copy_user_nocache(dst, src, size);
         clac();
 
         /*
@@ -97,7 +97,7 @@ static void cacheless_memcpy(void *dst, void *src, size_t n)
          * there are no security issues. The extra fault recovery machinery
          * is not invoked.
          */
-        __copy_user_nocache(dst, (void __user *)src, n, 0);
+        __copy_user_nocache(dst, (void __user *)src, n);
 }
 
 void rvt_wss_exit(struct rvt_dev_info *rdi)