Commit 799de1ba authored by Joerg Roedel, committed by Borislav Petkov

x86/sev-es: Optimize __sev_es_ist_enter() for better readability

Reorganize the code and improve the comments to make the function more
readable and easier to understand.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lkml.kernel.org/r/20210303141716.29223-4-joro@8bytes.org
parent f15a0a73
@@ -137,29 +137,41 @@ static __always_inline bool on_vc_stack(struct pt_regs *regs)
 }
 
 /*
- * This function handles the case when an NMI is raised in the #VC exception
- * handler entry code. In this case, the IST entry for #VC must be adjusted, so
- * that any subsequent #VC exception will not overwrite the stack contents of the
- * interrupted #VC handler.
+ * This function handles the case when an NMI is raised in the #VC
+ * exception handler entry code, before the #VC handler has switched off
+ * its IST stack. In this case, the IST entry for #VC must be adjusted,
+ * so that any nested #VC exception will not overwrite the stack
+ * contents of the interrupted #VC handler.
  *
- * The IST entry is adjusted unconditionally so that it can be also be
- * unconditionally adjusted back in sev_es_ist_exit(). Otherwise a nested
- * sev_es_ist_exit() call may adjust back the IST entry too early.
+ * The IST entry is adjusted unconditionally so that it can be also be
+ * unconditionally adjusted back in __sev_es_ist_exit(). Otherwise a
+ * nested sev_es_ist_exit() call may adjust back the IST entry too
+ * early.
+ *
+ * The __sev_es_ist_enter() and __sev_es_ist_exit() functions always run
+ * on the NMI IST stack, as they are only called from NMI handling code
+ * right now.
  */
 void noinstr __sev_es_ist_enter(struct pt_regs *regs)
 {
 	unsigned long old_ist, new_ist;
 
 	/* Read old IST entry */
-	old_ist = __this_cpu_read(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC]);
+	new_ist = old_ist = __this_cpu_read(cpu_tss_rw.x86_tss.ist[IST_INDEX_VC]);
 
-	/* Make room on the IST stack */
+	/*
+	 * If NMI happened while on the #VC IST stack, set the new IST
+	 * value below regs->sp, so that the interrupted stack frame is
+	 * not overwritten by subsequent #VC exceptions.
+	 */
 	if (on_vc_stack(regs))
-		new_ist = ALIGN_DOWN(regs->sp, 8) - sizeof(old_ist);
-	else
-		new_ist = old_ist - sizeof(old_ist);
+		new_ist = regs->sp;
 
-	/* Store old IST entry */
+	/*
+	 * Reserve additional 8 bytes and store old IST value so this
+	 * adjustment can be unrolled in __sev_es_ist_exit().
+	 */
+	new_ist -= sizeof(old_ist);
 	*(unsigned long *)new_ist = old_ist;
 
 	/* Set new IST entry */
...