1. 14 Jul, 2016 17 commits
  2. 11 Jul, 2016 5 commits
    • Merge branch 'kvm-ppc-next' of... · 6d5315b3
      Paolo Bonzini authored
      Merge branch 'kvm-ppc-next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc into HEAD
    • KVM: VMX: introduce vm_{entry,exit}_control_reset_shadow · 8391ce44
      Paolo Bonzini authored
      There is no reason to read the entry/exit control fields of the
      VMCS and immediately write back the same value.
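
      To illustrate the idea behind the new *_control_reset_shadow helpers, here is a
      small stand-alone C sketch (the accessors and helper names are invented for the
      sketch and only modelled on the patch, not the kernel's actual code): resetting
      the cached shadow costs no VMCS access, and a write is only issued when the
      value really changes.

          #include <stdint.h>
          #include <stdio.h>

          /* Stand-ins for the VMCS accessors; illustrative only. */
          static uint32_t fake_vmcs_field;
          static unsigned int vmcs_accesses;

          static uint32_t fake_vmcs_read32(void)
          {
                  vmcs_accesses++;
                  return fake_vmcs_field;
          }

          static void fake_vmcs_write32(uint32_t val)
          {
                  vmcs_accesses++;
                  fake_vmcs_field = val;
          }

          /* Cached shadow of a control field, written back only on real changes. */
          struct ctrl_shadow { uint32_t value; };

          /* "Reset" just refreshes the cache: no VMCS read-then-write-back needed. */
          static void ctrl_shadow_reset(struct ctrl_shadow *s, uint32_t val)
          {
                  s->value = val;
          }

          static void ctrl_set_bits(struct ctrl_shadow *s, uint32_t bits)
          {
                  uint32_t next = s->value | bits;

                  if (next != s->value) {
                          s->value = next;
                          fake_vmcs_write32(next);
                  }
          }

          int main(void)
          {
                  struct ctrl_shadow entry;

                  ctrl_shadow_reset(&entry, fake_vmcs_read32());
                  ctrl_set_bits(&entry, 0);       /* no change, so no extra access */
                  printf("VMCS accesses: %u\n", vmcs_accesses);   /* prints 1 */
                  return 0;
          }
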
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nVMX: keep preemption timer enabled during L2 execution · 9314006d
      Paolo Bonzini authored
      Because the vmcs12 preemption timer is emulated through a separate hrtimer,
      we can keep on using the preemption timer in the vmcs02 to emulate L1's
      TSC deadline timer.
      
      However, the corresponding bit in the pin-based execution control field
      must be kept consistent between vmcs01 and vmcs02.  On vmentry we copy
      it into the vmcs02; on vmexit the preemption timer must be disabled in
      the vmcs01 if a preemption timer vmexit happened while in guest mode.
      
      The preemption timer value in the vmcs02 is set by vmx_vcpu_run, so it
      need not be considered in prepare_vmcs02.
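
      As a rough illustration of the vmcs01/vmcs02 bookkeeping described above, here
      is a stand-alone C sketch (the struct and function names are invented for the
      sketch; only the control bit is real, bit 6 of the pin-based controls per the
      SDM):

          #include <stdbool.h>
          #include <stdint.h>

          /* Bit 6 of the pin-based VM-execution controls: activate VMX-preemption timer. */
          #define PIN_BASED_VMX_PREEMPTION_TIMER  (1u << 6)

          /* Toy stand-ins for the two VMCSs used for a nested guest. */
          struct toy_vmcs { uint32_t pin_based_exec_ctrl; };

          /* On vmentry to L2, vmcs02 mirrors whatever vmcs01 currently has for the bit. */
          static void copy_timer_bit_to_vmcs02(const struct toy_vmcs *vmcs01,
                                               struct toy_vmcs *vmcs02)
          {
                  vmcs02->pin_based_exec_ctrl &= ~PIN_BASED_VMX_PREEMPTION_TIMER;
                  vmcs02->pin_based_exec_ctrl |=
                          vmcs01->pin_based_exec_ctrl & PIN_BASED_VMX_PREEMPTION_TIMER;
          }

          /* On vmexit from L2, keep vmcs01 consistent: if the timer fired while in
           * guest mode it has done its job, so turn it off in vmcs01 too. */
          static void sync_vmcs01_after_exit(struct toy_vmcs *vmcs01,
                                             bool preemption_timer_exit)
          {
                  if (preemption_timer_exit)
                          vmcs01->pin_based_exec_ctrl &= ~PIN_BASED_VMX_PREEMPTION_TIMER;
          }

          int main(void)
          {
                  struct toy_vmcs vmcs01 = { PIN_BASED_VMX_PREEMPTION_TIMER };
                  struct toy_vmcs vmcs02 = { 0 };

                  copy_timer_bit_to_vmcs02(&vmcs01, &vmcs02);
                  sync_vmcs01_after_exit(&vmcs01, true);
                  return !(vmcs02.pin_based_exec_ctrl & PIN_BASED_VMX_PREEMPTION_TIMER);
          }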
      
      Cc: Yunhong Jiang <yunhong.jiang@intel.com>
      Cc: Haozhong Zhang <haozhong.zhang@intel.com>
      Tested-by: Wanpeng Li <kernellwp@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nVMX: avoid incorrect preemption timer vmexit in nested guest · 55123e3c
      Wanpeng Li authored
      The preemption timer for nested VMX is emulated by an hrtimer which is started on L2
      entry, stopped on L2 exit and evaluated via the check_nested_events hook. However,
      nested_vmx_exit_handled always returns true for the preemption timer vmexit.  The
      L1 preemption timer vmexit is therefore captured and treated as an L2 preemption
      timer vmexit, causing NULL pointer dereferences or worse in the L1 guest's
      vmexit handler:
      
          BUG: unable to handle kernel NULL pointer dereference at           (null)
          IP: [<          (null)>]           (null)
          PGD 0
          Oops: 0010 [#1] SMP
          Call Trace:
           ? kvm_lapic_expired_hv_timer+0x47/0x90 [kvm]
           handle_preemption_timer+0xe/0x20 [kvm_intel]
           vmx_handle_exit+0x169/0x15a0 [kvm_intel]
           ? kvm_arch_vcpu_ioctl_run+0xd5d/0x19d0 [kvm]
           kvm_arch_vcpu_ioctl_run+0xdee/0x19d0 [kvm]
           ? kvm_arch_vcpu_ioctl_run+0xd5d/0x19d0 [kvm]
           ? vcpu_load+0x1c/0x60 [kvm]
           ? kvm_arch_vcpu_load+0x57/0x260 [kvm]
           kvm_vcpu_ioctl+0x2d3/0x7c0 [kvm]
           do_vfs_ioctl+0x96/0x6a0
           ? __fget_light+0x2a/0x90
           SyS_ioctl+0x79/0x90
           do_syscall_64+0x68/0x180
           entry_SYSCALL64_slow_path+0x25/0x25
          Code:  Bad RIP value.
          RIP  [<          (null)>]           (null)
           RSP <ffff8800b5263c48>
          CR2: 0000000000000000
          ---[ end trace 9c70c48b1a2bc66e ]---
      
      This can be readily reproduced with the preemption timer enabled on L0 and
      disabled on L1.
      
      Return false, since preemption timer vmexits must never be reflected to the L1
      hypervisor.
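
      A minimal sketch of the resulting decision (toy code, not KVM's
      nested_vmx_exit_handled itself; the exit-reason number is the architectural
      value 52):

          #include <stdbool.h>
          #include <stdio.h>

          #define EXIT_REASON_PREEMPTION_TIMER  52   /* VMX-preemption timer expired */

          /* Toy model: should this vmexit be forwarded to the L1 hypervisor?
           * Returning false means L0 handles it itself. */
          static bool toy_nested_exit_handled(unsigned int exit_reason)
          {
                  switch (exit_reason) {
                  case EXIT_REASON_PREEMPTION_TIMER:
                          /* The timer in vmcs02 implements L0's emulation of L1's
                           * timer, so this exit is never forwarded to L1. */
                          return false;
                  default:
                          return true;   /* other reasons are outside this sketch */
                  }
          }

          int main(void)
          {
                  printf("forward to L1? %s\n",
                         toy_nested_exit_handled(EXIT_REASON_PREEMPTION_TIMER) ? "yes" : "no");
                  return 0;
          }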
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Yunhong Jiang <yunhong.jiang@intel.com>
      Cc: Jan Kiszka <jan.kiszka@siemens.com>
      Cc: Haozhong Zhang <haozhong.zhang@intel.com>
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: VMX: reflect broken preemption timer in vmcs_config · 1c17c3e6
      Paolo Bonzini authored
      Simplify cpu_has_vmx_preemption_timer.  This is consistent with the
      rest of setup_vmcs_config and preparatory for the next patch.
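
      Roughly, the check then becomes a single bit test against the cached
      configuration; a hedged sketch of that shape (the struct here is a local
      stand-in for KVM's vmcs_config, which setup_vmcs_config fills in once at load
      time):

          #include <stdbool.h>
          #include <stdint.h>

          #define PIN_BASED_VMX_PREEMPTION_TIMER  (1u << 6)

          /* Local stand-in for the global vmcs_config filled by setup_vmcs_config().
           * If the preemption timer is broken or absent, the bit is simply never set
           * there, and the predicate below reflects that automatically. */
          static struct { uint32_t pin_based_exec_ctrl; } toy_vmcs_config;

          static bool toy_cpu_has_vmx_preemption_timer(void)
          {
                  return toy_vmcs_config.pin_based_exec_ctrl &
                         PIN_BASED_VMX_PREEMPTION_TIMER;
          }

          int main(void)
          {
                  return toy_cpu_has_vmx_preemption_timer() ? 0 : 1;
          }
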
      Tested-by: Wanpeng Li <kernellwp@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  3. 05 Jul, 2016 18 commits
    • MIPS: KVM: Emulate generic QEMU machine on r6 T&E · 84260972
      James Hogan authored
      Default the guest PRId register to represent a generic QEMU machine
      instead of a 24Kc on MIPSr6, since the 24Kc isn't supported by r6 Linux kernels.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Decode RDHWR more strictly · 8eeab81c
      James Hogan authored
      When KVM emulates the RDHWR instruction, decode the instruction more
      strictly. The rs field (bits 25:21) should be zero, as should bits 10:9.
      Bits 8:6 are the register select field in MIPSr6, so we aren't strict
      about those bits (no other operations should use that encoding space).
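
      The stricter decode can be pictured with a small stand-alone checker built only
      from the rules above and the standard MIPS field layout (opcode and function
      values as in the MIPS manuals; this is not KVM's actual decoder):

          #include <stdbool.h>
          #include <stdint.h>
          #include <stdio.h>

          #define OP_SPECIAL3   0x1f   /* opcode field, bits 31:26 */
          #define FUNC_RDHWR    0x3b   /* function field, bits 5:0 */

          /* Strict RDHWR check: rs (bits 25:21) must be 0 and bits 10:9 must be 0;
           * bits 8:6 are the MIPSr6 register-select field and are not checked. */
          static bool is_rdhwr(uint32_t insn)
          {
                  if ((insn >> 26) != OP_SPECIAL3)
                          return false;
                  if ((insn & 0x3f) != FUNC_RDHWR)
                          return false;
                  if ((insn >> 21) & 0x1f)          /* rs must be zero */
                          return false;
                  if ((insn >> 9) & 0x3)            /* bits 10:9 must be zero */
                          return false;
                  return true;
          }

          int main(void)
          {
                  /* rdhwr $3, $29: rt=3, rd=29 -- a typical TLS pointer read. */
                  uint32_t insn = (OP_SPECIAL3 << 26) | (3u << 16) | (29u << 11) | FUNC_RDHWR;

                  printf("%s\n", is_rdhwr(insn) ? "RDHWR" : "not RDHWR");
                  return 0;
          }
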
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Recognise r6 CACHE encoding · 5cc4aafc
      James Hogan authored
      Recognise the new MIPSr6 CACHE instruction encoding rather than the
      pre-r6 one when an r6 kernel is being built. Since MIPSr6, CACHE uses a
      SPECIAL3 opcode and its immediate field is reduced to 9 bits.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Support r6 compact branch emulation · 2e0badfa
      James Hogan authored
      Add support in KVM for emulation of instructions in the forbidden slot
      of MIPSr6 compact branches. If we hit an exception on the forbidden
      slot, then the branch must not have been taken, which makes calculation
      of the resume PC trivial.
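
      The "trivial" calculation can be sketched as follows, assuming (as for
      delay-slot faults) that the reported PC points at the compact branch itself and
      that both the branch and the forbidden-slot instruction are 4 bytes; this is a
      model of the arithmetic, not KVM's code:

          #include <stdint.h>
          #include <stdio.h>

          /* If an exception was raised in the forbidden slot, the compact branch was
           * not taken, so execution resumes immediately after the slot. */
          static uint64_t compact_branch_resume_pc(uint64_t branch_pc)
          {
                  return branch_pc + 8;   /* skip the branch and its forbidden slot */
          }

          int main(void)
          {
                  printf("resume at 0x%llx\n",
                         (unsigned long long)compact_branch_resume_pc(0xffffffff80001000ull));
                  return 0;
          }
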
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Don't save/restore lo/hi for r6 · 70e92c7e
      James Hogan authored
      MIPSr6 doesn't have lo/hi registers, so don't bother saving or
      restoring them, and don't expose them to userland with the KVM ioctl
      interface either.
      
      In fact the lo/hi registers aren't callee saved in the MIPS ABIs anyway,
      so there is no need to preserve the host lo/hi values at all when
      transitioning to and from the guest (which happens via a function call).
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Fix pre-r6 ll/sc instructions on r6 · d85ebff0
      James Hogan authored
      The atomic KVM register access macros in kvm_host.h (for the guest Cause
      register with KVM in trap & emulate mode) use ll/sc instructions,
      however they still use .set mips3, which causes pre-MIPSr6 instruction
      encodings to be emitted, even for a MIPSr6 build.
      
      Fix it to use MIPS_ISA_ARCH_LEVEL as other parts of arch/mips already
      do.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Fix fpu.S misassembly with r6 · d14740fe
      James Hogan authored
      __kvm_save_fpu and __kvm_restore_fpu use .set mips64r2 so that they can
      access the odd FPU registers as well as the even, however this causes
      misassembly of the return instruction on MIPSr6.
      
      Fix by replacing .set mips64r2 with .set fp=64, which doesn't change the
      architecture revision.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: inst.h: Rename cbcond{0,1}_op to pop{1,3}0_op · 1b492600
      Paul Burton authored
      The opcodes currently defined in inst.h as cbcond0_op & cbcond1_op are
      actually defined in the MIPS base instruction set manuals as pop10 &
      pop30 respectively. Rename them as such, for consistency with the
      documentation.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: inst.h: Rename b{eq,ne}zcji[al]c_op to pop{6,7}6_op · 1c66b79b
      Paul Burton authored
      The opcodes currently defined in inst.h as beqzcjic_op & bnezcjialc_op
      are actually defined in the MIPS base instruction set manuals as pop66 &
      pop76 respectively. Rename them as such, for consistency with the
      documentation.
      Signed-off-by: Paul Burton <paul.burton@imgtec.com>
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Save k0 straight into VCPU structure · eadfb501
      James Hogan authored
      Currently, on a guest exception, the guest's k0 register is saved to the
      scratch temp register and the guest's k1 is saved to the exception base
      address + 0x3000, using k0 to extract the Exception Base field of the
      EBase register and as the base operand to the store. Both are then
      copied into the VCPU structure after the other general purpose registers
      have been saved there.
      
      This bouncing to exception base + 0x3000 is not actually necessary as
      the VCPU pointer can be determined and written through just as easily
      with only a single spare register. The VCPU pointer is already needed in
      k1 for saving the other GP registers, so let's save the guest k0 register
      straight into the VCPU structure through k1, first saving k1 into the
      scratch temp register instead of k0.
      
      This could potentially pave the way for having a single exception base
      area for use by all guests.
      
      The ehb after saving the k register to the scratch temp register is also
      delayed until just before it needs to be read back.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Relative branch to common exit handler · 1f9ca62c
      James Hogan authored
      Use a relative branch to get from the individual exception vectors to
      the common guest exit handler, rather than loading the address of the
      exit handler and jumping to it.
      
      This is made easier by the fact that the entry code is now generated
      dynamically. This will also allow the exception code to be further
      reduced in future patches.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Dynamically choose scratch registers · 1e5217f5
      James Hogan authored
      Scratch cop0 registers are needed by KVM to be able to save/restore all
      the GPRs, including k0/k1, and for storing the VCPU pointer. However no
      registers are universally suitable for these purposes, so the decision
      should be made at runtime.
      
      Until now, we've used DDATA_LO to store the VCPU pointer, and ErrorEPC
      as a temporary. It could be argued that this is abuse of those
      registers, and DDATA_LO is known not to be usable on certain
      implementations (Cavium Octeon). If KScratch registers are present, use
      them instead.
      
      We save & restore the temporary register in addition to the VCPU pointer
      register when using a KScratch register for it, as it may be used for
      normal host TLB handling too.
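
      The runtime choice can be sketched like this (a toy allocator, not the kernel's
      code; the availability mask stands in for what the CPU advertises via
      Config4.KScrExist):

          #include <stdio.h>

          enum scratch_kind { SCRATCH_KSCRATCH, SCRATCH_LEGACY };

          struct scratch_regs {
                  enum scratch_kind kind;
                  int vcpu_ptr_reg;   /* holds the VCPU pointer */
                  int tmp_reg;        /* general temporary */
          };

          /* Prefer two KScratch registers if the CPU has them; otherwise fall back
           * to the legacy DDATA_LO / ErrorEPC pair (represented only symbolically). */
          static struct scratch_regs pick_scratch(unsigned int kscratch_mask)
          {
                  struct scratch_regs s = { SCRATCH_LEGACY, -1, -1 };
                  int regs[2], found = 0;

                  for (int i = 0; i < 8 && found < 2; i++)
                          if (kscratch_mask & (1u << i))
                                  regs[found++] = i;

                  if (found == 2) {
                          s.kind = SCRATCH_KSCRATCH;
                          s.vcpu_ptr_reg = regs[0];
                          s.tmp_reg = regs[1];
                  }
                  return s;
          }

          int main(void)
          {
                  struct scratch_regs s = pick_scratch(0x0c);  /* two KScratch regs available */

                  printf("kind=%d vcpu=%d tmp=%d\n", s.kind, s.vcpu_ptr_reg, s.tmp_reg);
                  return 0;
          }
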
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Drop redundant restore of DDATA_LO · 025014e3
      James Hogan authored
      On return from the exit handler to the host (without re-entering the
      guest) we restore the saved value of the DDATA_LO register which we use
      as a scratch register. However, we have already restored it ready for
      calling the exit handler, so there is no need to do it again; drop
      that code.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Check MSA presence at uasm time · 38ea7a71
      James Hogan authored
      Check for presence of MSA at uasm assembly time rather than at runtime
      in the generated KVM host entry code. This optimises the guest exit path
      by eliminating the MSA code entirely if not present, and eliminating the
      read of Config3.MSAP and conditional branch if MSA is present.
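
      The shape of the optimisation, as a toy code generator (emit() just counts
      instructions and the sizes are made up; none of this is real uasm): the MSA
      decision moves from the generated code into the generator.

          #include <stdbool.h>
          #include <stdio.h>

          static int emitted;

          static void emit(int insns)
          {
                  emitted += insns;
          }

          /* Old scheme: the generated exit path always contained a Config3.MSAP
           * read plus a conditional branch, followed by the MSA save sequence. */
          static void build_exit_path_runtime_check(void)
          {
                  emit(2);        /* read MSAP, branch around the sequence */
                  emit(8);        /* MSA save sequence */
          }

          /* New scheme: consult the host capability while generating the code. */
          static void build_exit_path_uasm_check(bool host_has_msa)
          {
                  if (host_has_msa)
                          emit(8);        /* sequence only, no runtime check */
                  /* else: nothing is generated for MSA at all */
          }

          int main(void)
          {
                  build_exit_path_runtime_check();
                  printf("old generated size: %d\n", emitted);

                  emitted = 0;
                  build_exit_path_uasm_check(false);
                  printf("new generated size without MSA: %d\n", emitted);
                  return 0;
          }
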
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Omit FPU handling entry code if possible · d37f4038
      James Hogan authored
      The FPU handling code on entry from guest is unnecessary if no FPU is
      present, so allow it to be dropped at uasm assembly time.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Drop now unused asm offsets · 9c988658
      James Hogan authored
      Now that locore.S is converted to uasm, remove a bunch of the assembly
      offset definitions created by asm-offsets.c, including the CPUINFO_ ones
      for reading the variable asid mask, and the non FPU/MSA related VCPU_
      definitions. KVM's fpu.S and msa.S still use the remaining definitions.
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Add dumping of generated entry code · d7b8f890
      James Hogan authored
      Dump the generated entry code with pr_debug(), similar to how it is done
      in tlbex.c, so it can be more easily debugged.
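
      A minimal user-space sketch of such a dump, in the spirit of the handler dump
      in arch/mips/mm/tlbex.c (printf stands in for pr_debug, and the two code words
      are just a jr/nop stub):

          #include <stdint.h>
          #include <stdio.h>

          /* Print the generated code word by word so it can be disassembled later. */
          static void dump_handler(const char *symbol,
                                   const uint32_t *start, const uint32_t *end)
          {
                  printf("LEAF(%s)\n", symbol);
                  for (const uint32_t *p = start; p < end; p++)
                          printf("\t.word\t0x%08x\n", *p);
                  printf("\tEND(%s)\n", symbol);
          }

          int main(void)
          {
                  uint32_t code[] = { 0x03e00008, 0x00000000 };   /* jr $ra; nop */

                  dump_handler("kvm_guest_entry_demo", code, code + 2);
                  return 0;
          }
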
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: Convert exception entry to uasm · 90e9311a
      James Hogan authored
      Convert the whole of locore.S (assembly to enter guest and handle
      exception entry) to be generated dynamically with uasm. This is done
      with minimal changes to the resulting code.
      
      The main changes are:
      - Some constants are generated by uasm using LUI+ADDIU instead of
        LUI+ORI (a worked example of the difference follows below).
      - Loading of lo and hi are swapped around in vcpu_run but not when
        resuming the guest after an exit. Both bits of logic are now generated
        by the same code.
      - Register MOVEs in uasm use a different ADDU operand ordering from
        GNU as, putting the zero register into rs instead of rt.
      - The JALR.HB to call the C exit handler is switched to JALR, since the
        hazard barrier would appear to be unnecessary.
      
      This will allow further optimisation in the future to dynamically handle
      the capabilities of the CPU.
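
      Worked example for the first bullet above: both instruction pairs can
      synthesise an arbitrary 32-bit constant, but ADDIU sign-extends its 16-bit
      immediate, so the LUI value must be biased when bit 15 of the constant is set.
      This models only the arithmetic; it is not uasm code.

          #include <stdint.h>
          #include <stdio.h>

          static uint32_t build_lui_ori(uint32_t x)
          {
                  uint32_t hi = x >> 16, lo = x & 0xffff;

                  return (hi << 16) | lo;             /* lui rt, hi; ori rt, rt, lo */
          }

          static uint32_t build_lui_addiu(uint32_t x)
          {
                  uint32_t hi = (x + 0x8000) >> 16;   /* bias for sign extension */
                  int32_t  lo = (int16_t)(x & 0xffff);

                  return (hi << 16) + (uint32_t)lo;   /* lui rt, hi; addiu rt, rt, lo */
          }

          int main(void)
          {
                  uint32_t x = 0x8000f123;            /* low half has bit 15 set */

                  printf("ori:   0x%08x\naddiu: 0x%08x\n",
                         build_lui_ori(x), build_lui_addiu(x));
                  return 0;
          }
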
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>