Commit 1d731e73 authored by Sebastian Andrzej Siewior, committed by Borislav Petkov

x86/fpu: Add a fastpath to __fpu__restore_sig()

The previous commits refactored the restoration of the FPU registers so
that they can be loaded from in-kernel memory. This comes at the cost of
first copying the signal frame into a kernel buffer, an overhead that can
be avoided if the load can be performed directly from user memory without
taking a page fault.

Attempt to restore the FPU registers directly from the user buffer by
invoking copy_user_to_fpregs_zeroing() with page faults disabled. If that
fails, fall back to the slowpath, which can handle page faults.
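
As a condensed sketch of the resulting flow (lifted from the hunk below and
stripped of the surrounding function, so it is illustrative rather than
buildable on its own):

        fpregs_lock();
        pagefault_disable();            /* a faulting user access now returns an error instead of being handled */
        ret = copy_user_to_fpregs_zeroing(buf_fx, xfeatures, fx_only);
        pagefault_enable();
        if (!ret) {
                fpregs_mark_activate(); /* the registers hold the task's FPU state again */
                fpregs_unlock();
                return 0;               /* fastpath succeeded, no kernel buffer needed */
        }
        fpregs_unlock();
        /* otherwise fall through to the slowpath, which goes through the kernel buffer */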

 [ bp: Add a comment over the fastpath to be able to find one's way
   around the function. ]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jann Horn <jannh@google.com>
Cc: "Jason A. Donenfeld" <Jason@zx2c4.com>
Cc: kvm ML <kvm@vger.kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190403164156.19645-25-bigeasy@linutronix.de
parent 5f409e20
@@ -242,10 +242,10 @@ sanitize_restored_xstate(union fpregs_state *state,
 /*
  * Restore the extended state if present. Otherwise, restore the FP/SSE state.
  */
-static inline int copy_user_to_fpregs_zeroing(void __user *buf, u64 xbv, int fx_only)
+static int copy_user_to_fpregs_zeroing(void __user *buf, u64 xbv, int fx_only)
 {
         if (use_xsave()) {
-                if ((unsigned long)buf % 64 || fx_only) {
+                if (fx_only) {
                         u64 init_bv = xfeatures_mask & ~XFEATURE_MASK_FPSSE;
                         copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
                         return copy_user_to_fxregs(buf);
@@ -327,8 +327,27 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
                 if (ret)
                         goto err_out;
                 envp = &env;
+        } else {
+                /*
+                 * Attempt to restore the FPU registers directly from user
+                 * memory. For that to succeed, the user access cannot cause
+                 * page faults. If it does, fall back to the slow path below,
+                 * going through the kernel buffer with the enabled pagefault
+                 * handler.
+                 */
+                fpregs_lock();
+                pagefault_disable();
+                ret = copy_user_to_fpregs_zeroing(buf_fx, xfeatures, fx_only);
+                pagefault_enable();
+                if (!ret) {
+                        fpregs_mark_activate();
+                        fpregs_unlock();
+                        return 0;
+                }
+                fpregs_unlock();
         }
 
         if (use_xsave() && !fx_only) {
                 u64 init_bv = xfeatures_mask & ~xfeatures;
......
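
For reference, the helper changed in the first hunk reads roughly as follows
at this point in the series. This is reconstructed from the kernel sources of
that era rather than from the diff above, so the branches not visible in the
hunk (use_fxsr(), copy_user_to_xregs(), copy_user_to_fregs()) should be
treated as an approximation:

        static int copy_user_to_fpregs_zeroing(void __user *buf, u64 xbv, int fx_only)
        {
                if (use_xsave()) {
                        if (fx_only) {
                                /* Only FX state in the user buffer: init everything else, then FXRSTOR from it */
                                u64 init_bv = xfeatures_mask & ~XFEATURE_MASK_FPSSE;

                                copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
                                return copy_user_to_fxregs(buf);
                        } else {
                                /* Init the features missing from the user buffer, XRSTOR the present ones from it */
                                u64 init_bv = xfeatures_mask & ~xbv;

                                if (unlikely(init_bv))
                                        copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
                                return copy_user_to_xregs(buf, xbv);
                        }
                } else if (use_fxsr()) {
                        return copy_user_to_fxregs(buf);
                } else {
                        return copy_user_to_fregs(buf);
                }
        }

A fault in any of these user accesses ends up in the exception fixup and is
reported as an error; with pagefault_disable() in effect that happens without
the fault being handled, which is what lets the new fastpath bail out and
retry via the kernel buffer.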