    KVM: VMX: Move VM-Enter + VM-Exit handling to non-inline sub-routines · 453eafbe
    Sean Christopherson authored
    Transitioning to/from a VMX guest requires KVM to manually save/load
    the bulk of CPU state that the guest is allowed to directly access,
    e.g. XSAVE state, CR2, GPRs, etc...  For obvious reasons, loading the
    guest's GPR snapshot prior to VM-Enter and saving the snapshot after
    VM-Exit is done via hand-coded assembly.  The assembly blob is written
    as inline asm so that it can easily access KVM-defined structs that
    are used to hold guest state, e.g. moving the blob to a standalone
    assembly file would require generating defines for struct offsets.
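    
    As a minimal sketch of that idiom (hypothetical struct and function
    names, not KVM's actual code), inline asm can pull a struct offset
    straight from C via offsetof() and an "i" (immediate) constraint;
    a standalone .S file would instead need the offset spelled out in a
    generated header, e.g. via the kernel's asm-offsets machinery:
    
      #include <stddef.h>
    
      /* Hypothetical stand-in for KVM's guest-state structs. */
      struct guest_regs {
              unsigned long rax;
              unsigned long rcx;
      };
    
      static void load_guest_rax(struct guest_regs *regs)
      {
              /* "%c" prints the immediate without the '$' prefix. */
              asm volatile("mov %c[rax](%0), %%rax"
                           :
                           : "r"(regs),
                             [rax] "i"(offsetof(struct guest_regs, rax))
                           : "rax");
      }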
    
    The other relevant aspect of VMX transitions in KVM is the handling of
    VM-Exits.  KVM doesn't employ a separate VM-Exit handler per se, but
    rather treats the VMX transition as a mega instruction (with many side
    effects), i.e. it sets VMCS.HOST_RIP to a label immediately following
    VMLAUNCH/VMRESUME.  The label is then exposed to C code via a global
    variable definition in the inline assembly.
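    
    As a hedged sketch of that pattern (illustrative names, not KVM's
    exact blob), the address of the instruction following VMLAUNCH or
    VMRESUME is stashed in a global .rodata variable, which C code then
    programs into VMCS.HOST_RIP:
    
      /* Holds the VM-Exit address; defined by the asm blob below. */
      extern const unsigned long vmx_return_sketch;
    
      static void run_guest_sketch(void)
      {
              asm volatile(
                      "vmlaunch\n\t"
                      "1:\n\t"                /* VM-Exit resumes here */
                      ".pushsection .rodata\n\t"
                      ".global vmx_return_sketch\n\t"
                      "vmx_return_sketch: .quad 1b\n\t"
                      ".popsection"
                      ::: "cc", "memory");
      }
    
    If the compiler emits two copies of that blob, vmx_return_sketch is
    defined twice and the build breaks, hence the anti-duplication
    gymnastics described next.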
    
    Because of the global variable, KVM takes steps to (attempt to) ensure
    only a single instance of the owning C function, e.g. vmx_vcpu_run, is
    generated by the compiler.  The earliest approach placed the inline
    assembly in a separate noinline function[1].  Later, the assembly was
    folded back into vmx_vcpu_run() and tagged with __noclone[2][3], which
    is still used today.
    
    After moving to __noclone, an edge case was encountered where GCC's
    -ftracer optimization resulted in the inline assembly blob being
    duplicated.  This was "fixed" by explicitly disabling -ftracer in the
    __noclone definition[4].
    
    Recently, it was found that disabling -ftracer causes build warnings
    for unsuspecting users of __noclone[5], and more importantly for KVM,
    prevents the compiler from properly optimizing vmx_vcpu_run()[6].  And
    perhaps most importantly of all, it was pointed out that there is no
    way to prevent duplication of a function with 100% reliability[7],
    i.e. more edge cases may be encountered in the future.
    
    So to summarize, the only way to prevent the compiler from duplicating
    the global variable definition is to move the variable out of inline
    assembly, which has been suggested several times over[1][7][8].
    
    Resolve the aforementioned issues by moving the VMLAUNCH+VMRESUME and
    VM-Exit "handler" to standalone assembly sub-routines.  Moving only
    the core VMX transition code allows the struct indexing to remain as
    inline assembly and also allows the sub-routines to be used by
    nested_vmx_check_vmentry_hw().  Reusing the sub-routines has a happy
    side-effect of eliminating two VMWRITEs in the nested_early_check path
    as there is no longer a need to dynamically change VMCS.HOST_RIP.
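    
    For reference, a simplified sketch of the standalone routines
    (hypothetical names; the real assembly also handles VMLAUNCH/VMRESUME
    faults).  VMCS.HOST_RIP can be set once, statically, to the address
    of the VM-Exit routine, whose lone RET "returns" from the VM-Enter
    routine back to the C caller:
    
      asm(
              ".pushsection .text\n"
              ".global my_vmx_vmenter\n"
              "my_vmx_vmenter:\n\t"
              "je 2f\n\t"             /* caller sets ZF if not yet launched */
              "vmresume\n\t"
              "ret\n"                 /* reached only if VMRESUME fails */
              "2: vmlaunch\n\t"
              "ret\n"                 /* reached only if VMLAUNCH fails */
              ".global my_vmx_vmexit\n"
              "my_vmx_vmexit:\n\t"
              "ret\n"                 /* VMCS.HOST_RIP points here */
              ".popsection"
      );
    
    Because my_vmx_vmexit is an ordinary linker symbol with a fixed
    address, there is nothing left for the compiler to duplicate and
    nothing to rewrite in VMCS.HOST_RIP at runtime.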
    
    Note that callers to vmx_vmenter() must account for the CALL modifying
    RSP, e.g. must subtract op-size from RSP when synchronizing RSP with
    VMCS.HOST_RSP and "restore" RSP prior to the CALL.  There are no great
    alternatives to fudging RSP.  Saving RSP in vmx_vmenter() is difficult
    because doing so requires a second register (VMWRITE does not provide
    an immediate encoding for the VMCS field and KVM supports Hyper-V's
    memory-based eVMCS ABI).  The other more drastic alternative would be
    to eschew VMCS.HOST_RSP and manually save/load RSP using a per-cpu
    variable (which can be encoded as e.g. gs:[imm]).  But because a valid
    stack is needed at the time of VM-Exit (NMIs aren't blocked and a user
    could theoretically insert INT3/INT1 (ICEBP) at the VM-Exit handler), a
    dedicated per-cpu VM-Exit stack would be required.  A dedicated stack
    isn't difficult to implement, but it would require at least one page
    per CPU and knowledge of the stack in the dumpstack routines.  And in
    most cases there is essentially zero overhead in dynamically updating
    VMCS.HOST_RSP, e.g. the VMWRITE can be avoided for all but the first
    VMLAUNCH unless nested_early_check=1, which is not a fast path.  In
    other words, avoiding the dynamic VMCS.HOST_RSP update via a dedicated stack
    would only make the code marginally less ugly while requiring at least
    one page per CPU and forcing the kernel to be aware (and approve) of
    the VM-Exit stack shenanigans.
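    
    A sketch of the resulting caller-side sequence, building on the
    hypothetical my_vmx_vmenter above (the real code also skips the
    VMWRITE entirely when HOST_RSP is unchanged):
    
      static unsigned char launched;  /* hypothetical "VMCS launched" flag */
    
      static void sync_rsp_and_enter(void)
      {
              asm volatile(
                      "sub %[ws], %%rsp\n\t"          /* mimic the CALL's push */
                      "vmwrite %%rsp, %[field]\n\t"   /* RSP as seen in the callee */
                      "add %[ws], %%rsp\n\t"          /* un-adjust before the CALL */
                      "cmpb $0, %[launched]\n\t"      /* CALL preserves RFLAGS.ZF */
                      "call my_vmx_vmenter"
                      :
                      : [ws] "i"(sizeof(unsigned long)),
                        [field] "r"(0x6c14UL),        /* VMCS encoding of HOST_RSP */
                        [launched] "m"(launched)
                      : "cc", "memory");
      }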
    
    [1] cea15c24ca39 ("KVM: Move KVM context switch into own function")
    [2] a3b5ba49 ("KVM: VMX: add the __noclone attribute to vmx_vcpu_run")
    [3] 104f226b ("KVM: VMX: Fold __vmx_vcpu_run() into vmx_vcpu_run()")
    [4] 95272c29 ("compiler-gcc: disable -ftracer for __noclone functions")
    [5] https://lkml.kernel.org/r/20181218140105.ajuiglkpvstt3qxs@treble
    [6] https://patchwork.kernel.org/patch/8707981/#21817015
    [7] https://lkml.kernel.org/r/ri6y38lo23g.fsf@suse.cz
    [8] https://lkml.kernel.org/r/20181218212042.GE25620@tassilo.jf.intel.com
    
    Suggested-by: Andi Kleen <ak@linux.intel.com>
    Suggested-by: Martin Jambor <mjambor@suse.cz>
    Cc: Paolo Bonzini <pbonzini@redhat.com>
    Cc: Nadav Amit <namit@vmware.com>
    Cc: Andi Kleen <ak@linux.intel.com>
    Cc: Josh Poimboeuf <jpoimboe@redhat.com>
    Cc: Martin Jambor <mjambor@suse.cz>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Cc: Miroslav Benes <mbenes@suse.cz>
    Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
    Reviewed-by: Andi Kleen <ak@linux.intel.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>