Commit 6d9256f0 authored by Andy Lutomirski, committed by Ingo Molnar

x86/espfix/64: Stop assuming that pt_regs is on the entry stack

When we start using an entry trampoline, a #GP from userspace will
be delivered on the entry stack, not on the task stack.  Fix the
espfix64 #DF fixup to set up #GP according to TSS.SP0, rather than
assuming that pt_regs + 1 == SP0.  This won't change anything
without an entry stack, but it will make the code continue to work
when an entry stack is added.

While we're at it, improve the comments to explain what's actually
going on.
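
To make the "pt_regs + 1 == SP0" assumption concrete, here is a minimal sketch (illustration only, not part of this patch; the helper name is made up) of the invariant the old fixup relied on, written with the same task_pt_regs() and this_cpu_read(cpu_tss.x86_tss.sp0) accessors the patch itself uses:

    /* Sketch only; assumes the usual x86 headers already pulled in by traps.c. */
    static inline bool pt_regs_is_just_below_sp0(void)
    {
            unsigned long sp0 = this_cpu_read(cpu_tss.x86_tss.sp0);

            /*
             * task_pt_regs(current) + 1 points just past the task's pt_regs,
             * i.e. at the top of the task stack.
             */
            return task_pt_regs(current) + 1 == (struct pt_regs *)sp0;
    }

Without an entry trampoline this holds, because TSS.sp0 points at the top of the task stack.  Once TSS.sp0 points at the entry stack instead, it no longer holds, which is why the fixup below derives the location of the fake #GP frame from TSS.sp0 directly.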
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bpetkov@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Laight <David.Laight@aculab.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Eduardo Valentin <eduval@amazon.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aliguori@amazon.com
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Link: https://lkml.kernel.org/r/20171204150606.130778051@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 9aaefe7b
@@ -348,9 +348,15 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
 
         /*
          * If IRET takes a non-IST fault on the espfix64 stack, then we
-         * end up promoting it to a doublefault.  In that case, modify
-         * the stack to make it look like we just entered the #GP
-         * handler from user space, similar to bad_iret.
+         * end up promoting it to a doublefault.  In that case, take
+         * advantage of the fact that we're not using the normal (TSS.sp0)
+         * stack right now.  We can write a fake #GP(0) frame at TSS.sp0
+         * and then modify our own IRET frame so that, when we return,
+         * we land directly at the #GP(0) vector with the stack already
+         * set up according to its expectations.
+         *
+         * The net result is that our #GP handler will think that we
+         * entered from usermode with the bad user context.
          *
          * No need for ist_enter here because we don't use RCU.
          */
@@ -358,13 +364,26 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
                 regs->cs == __KERNEL_CS &&
                 regs->ip == (unsigned long)native_irq_return_iret)
         {
-                struct pt_regs *normal_regs = task_pt_regs(current);
+                struct pt_regs *gpregs = (struct pt_regs *)this_cpu_read(cpu_tss.x86_tss.sp0) - 1;
 
-                /* Fake a #GP(0) from userspace. */
-                memmove(&normal_regs->ip, (void *)regs->sp, 5*8);
-                normal_regs->orig_ax = 0;  /* Missing (lost) #GP error code */
+                /*
+                 * regs->sp points to the failing IRET frame on the
+                 * ESPFIX64 stack.  Copy it to the entry stack.  This fills
+                 * in gpregs->ss through gpregs->ip.
+                 *
+                 */
+                memmove(&gpregs->ip, (void *)regs->sp, 5*8);
+                gpregs->orig_ax = 0;  /* Missing (lost) #GP error code */
+
+                /*
+                 * Adjust our frame so that we return straight to the #GP
+                 * vector with the expected RSP value.  This is safe because
+                 * we won't enable interrupts or schedule before we invoke
+                 * general_protection, so nothing will clobber the stack
+                 * frame we just set up.
+                 */
                 regs->ip = (unsigned long)general_protection;
-                regs->sp = (unsigned long)&normal_regs->orig_ax;
+                regs->sp = (unsigned long)&gpregs->orig_ax;
 
                 return;
         }
@@ -389,7 +408,7 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code)
          *
          *   Processors update CR2 whenever a page fault is detected. If a
          *   second page fault occurs while an earlier page fault is being
-         *   deliv- ered, the faulting linear address of the second fault will
+         *   delivered, the faulting linear address of the second fault will
          *   overwrite the contents of CR2 (replacing the previous
          *   address). These updates to CR2 occur even if the page fault
          *   results in a double fault or occurs during the delivery of a
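
As an aside on the layout the new code relies on (illustration only, not part of this patch; the helper below is hypothetical): memmove(&gpregs->ip, (void *)regs->sp, 5*8) works because the hardware IRET frame that regs->sp points at on the espfix64 stack is RIP, CS, RFLAGS, RSP, SS, five 8-byte words with RIP at the lowest address, and those are exactly the last five members of struct pt_regs.  Two compile-time checks capture the layout facts involved:

    /* Sketch only; assumes the x86 headers already used by traps.c. */
    static inline void espfix_fixup_layout_checks(void)
    {
            /*
             * ip, cs, flags, sp and ss are five consecutive unsigned longs at
             * the end of struct pt_regs, so copying 5*8 bytes starting at
             * &gpregs->ip fills gpregs->ip through gpregs->ss.
             */
            BUILD_BUG_ON(sizeof(struct pt_regs) -
                         offsetof(struct pt_regs, ip) != 5 * 8);

            /*
             * ss is the last member and there is no tail padding, so
             * (struct pt_regs *)sp0 - 1 is the pt_regs slot ending exactly
             * at TSS.sp0.
             */
            BUILD_BUG_ON(offsetof(struct pt_regs, ss) +
                         sizeof(unsigned long) != sizeof(struct pt_regs));
    }

With the fake frame in place, pointing regs->sp at &gpregs->orig_ax leaves RSP at the error-code slot, which matches what the general_protection entry stub expects to find after a real #GP.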