1. 30 May, 2014 8 commits
  2. 29 May, 2014 1 commit
  3. 27 May, 2014 4 commits
    • arm: Fix compile warning for psci · d6d7a95c
      Christoffer Dall authored
      Commit e71246a2 changes psci_init from a
      function returning void to one returning int, but does not change the
      non-CONFIG_ARM_PSCI implementation to return a value, which causes a
      compile warning.  Just return 0.
      
      Cc: Ashwin Chaugule <ashwin.chaugule@linaro.org>
      Cc: Shawn Guo <shawn.guo@freescale.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d6d7a95c
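
      A minimal sketch of the stub pattern the fix describes (the header layout is
      approximated here, not quoted from arch/arm/include/asm/psci.h):

        #ifdef CONFIG_ARM_PSCI
        int psci_init(void);
        #else
        /* non-CONFIG_ARM_PSCI build: the stub must now return a value too */
        static inline int psci_init(void) { return 0; }
        #endif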
    • Merge tag 'kvm-arm-for-3.16' of... · 04092204
      Paolo Bonzini authored
      Merge tag 'kvm-arm-for-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm-next
      
      Changes for the 3.16 merge window.
      
      This includes KVM support for PSCI v0.2 and also includes generic Linux
      support for PSCI v0.2 (on hosts that advertise that feature via their
      DT), since the latter depends on headers introduced by the former.
      
      Finally there's a small patch from Marc that enables Cortex-A53 support.
      04092204
    • KVM: x86: MOV CR/DR emulation should ignore mod · 9b88ae99
      Nadav Amit authored
      MOV CR/DR instructions ignore the mod field (in the ModR/M byte). As the SDM
      states: "The 2 bits in the mod field are ignored".  Accordingly, the second
      operand of these instructions is always a general purpose register.
      
      The current emulator implementation does not follow this rule: if the mod
      bits do not equal 3, it expects the second operand to be in memory.
      Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      9b88ae99
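
      A standalone illustration of the decoding rule (plain C, not the emulator
      code): whatever the mod bits are, the r/m field of ModR/M names a
      general-purpose register for these opcodes.

        #include <stdint.h>
        #include <stdio.h>

        static const char *gpr64[8] = { "rax", "rcx", "rdx", "rbx",
                                        "rsp", "rbp", "rsi", "rdi" };

        /* For MOV to/from CR/DR (0F 20-23), mod is ignored: always a register. */
        static const char *mov_cr_dr_operand(uint8_t modrm)
        {
                return gpr64[modrm & 7];   /* mod (bits 7:6) deliberately unused */
        }

        int main(void)
        {
                uint8_t modrm = 0x18;      /* mod=00, reg=3, rm=0 */
                printf("0f 20 %02x -> mov %s, cr%d\n",
                       modrm, mov_cr_dr_operand(modrm), (modrm >> 3) & 7);
                return 0;
        }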
    • KVM: lapic: sync highest ISR to hardware apic on EOI · fc57ac2c
      Paolo Bonzini authored
      When Hyper-V enlightenments are in effect, Windows prefers to signal an
      EOI with a Hyper-V MSR write rather than an x2apic MSR write.
      The Hyper-V MSR write is not handled by the processor, and besides
      being slower, this also causes bugs with APIC virtualization.  The
      reason is that on EOI the processor will modify the highest in-service
      interrupt (SVI) field of the VMCS, as explained in section 29.1.4 of
      the SDM; every other step in EOI virtualization is already done by
      apic_send_eoi or on VM entry, but this one is missing.
      
      We need to do the same, and be careful not to muck with the isr_count
      and highest_isr_cache fields that are unused when virtual interrupt
      delivery is enabled.
      
      Cc: stable@vger.kernel.org
      Reviewed-by: Yang Zhang <yang.z.zhang@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      fc57ac2c
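
      A rough sketch of the idea (kernel-internal helper names here are
      approximations and the fragment is not meant to compile standalone): when
      virtual interrupt delivery is in use, an ISR change made on the EOI path
      has to be pushed into the hardware SVI field, while the software-only
      shadow fields are left untouched.

        static void apic_clear_isr(int vec, struct kvm_lapic *apic)
        {
                if (!apic_test_and_clear_vector(vec, apic->regs + APIC_ISR))
                        return;

                if (kvm_x86_ops->hwapic_isr_update)
                        /* processor-side EOI virtualization: keep SVI in sync */
                        kvm_x86_ops->hwapic_isr_update(apic->vcpu,
                                                       apic_find_highest_isr(apic));
                else {
                        /* software path: the shadow fields are only valid here */
                        --apic->isr_count;
                        apic->highest_isr_cache = -1;
                }
        }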
  4. 25 May, 2014 1 commit
  5. 22 May, 2014 6 commits
    • KVM: vmx: DR7 masking on task switch emulation is wrong · 1f854112
      Nadav Amit authored
      The DR7 mask applied during task switch emulation should be a hexadecimal
      constant (clearing the local breakpoint enable bits 0, 2, 4 and 6).
      Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      1f854112
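
      A small arithmetic illustration of why the base matters (plain userspace C,
      not the kvm code): the local breakpoint enable bits are DR7 bits 0, 2, 4
      and 6, i.e. the mask 0x55, whereas the same digits read as decimal clear a
      different set of bits.

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
                uint32_t dr7 = 0xffffffff;          /* pretend every bit is set */

                /* 0x55 = 0101 0101b: clears exactly bits 0, 2, 4, 6 (L0-L3) */
                printf("dr7 & ~0x55 = %#010x\n", dr7 & ~0x55u);

                /* 55 = 0x37 = 0011 0111b: clears bits 0, 1, 2, 4, 5 instead */
                printf("dr7 & ~55   = %#010x\n", dr7 & ~55u);
                return 0;
        }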
    • x86: fix page fault tracing when KVM guest support enabled · 65a7f03f
      Dave Hansen authored
      I noticed on some of my systems that page fault tracing doesn't
      work:
      
      	cd /sys/kernel/debug/tracing
      	echo 1 > events/exceptions/enable
      	cat trace;
      	# nothing shows up
      
      I eventually traced it down to CONFIG_KVM_GUEST.  At least in a
      KVM VM, enabling that option breaks page fault tracing, and
      disabling fixes it.  I tried on some old kernels and this does
      not appear to be a regression: it never worked.
      
      There are two page-fault entry functions today.  One when tracing
      is on and another when it is off.  The KVM code calls do_page_fault()
      directly instead of calling the traced version:
      
      > dotraplinkage void __kprobes
      > do_async_page_fault(struct pt_regs *regs, unsigned long
      > error_code)
      > {
      >         enum ctx_state prev_state;
      >
      >         switch (kvm_read_and_reset_pf_reason()) {
      >         default:
      >                 do_page_fault(regs, error_code);
      >                 break;
      >         case KVM_PV_REASON_PAGE_NOT_PRESENT:
      
      I'm also having problems with the page fault tracing on bare
      metal (same symptom of no trace output).  I'm unsure if it's
      related.
      
      Steven had an alternative to this with zero overhead when tracing
      is off, since it includes the standard nops even when tracing is
      disabled.  I'm unconvinced that the extra complexity of his
      approach:
      
      	http://lkml.kernel.org/r/20140508194508.561ed220@gandalf.local.home
      
      is worth it, especially considering that the KVM code is already
      making page fault entry slower here.  This solution is
      dirt-simple.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86@kernel.org
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: kvm@vger.kernel.org
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: "H. Peter Anvin" <hpa@zytor.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      65a7f03f
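
      One straightforward shape for such a fix (a sketch under the assumption
      that the KVM async-PF default path is simply routed through the
      tracing-aware entry point; not necessarily the exact upstream diff):

        dotraplinkage void __kprobes
        do_async_page_fault(struct pt_regs *regs, unsigned long error_code)
        {
                switch (kvm_read_and_reset_pf_reason()) {
                default:
        #ifdef CONFIG_TRACING
                        /* go through the entry point that emits the tracepoint */
                        trace_do_page_fault(regs, error_code);
        #else
                        do_page_fault(regs, error_code);
        #endif
                        break;
                /* KVM_PV_REASON_PAGE_NOT_PRESENT / _READY cases unchanged */
                }
        }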
    • KVM: x86: get CPL from SS.DPL · ae9fedc7
      Paolo Bonzini authored
      CS.RPL is not equal to the CPL in the few instructions between
      setting CR0.PE and reloading CS.  And CS.DPL is also not equal
      to the CPL for conforming code segments.
      
      However, SS.DPL *is* always equal to the CPL except for the weird
      case of SYSRET on AMD processors, which sets SS.DPL=SS.RPL from the
      value in the STAR MSR, but forces CPL=3 (Intel instead forces
      SS.DPL=SS.RPL=CPL=3).
      
      So this patch:
      
      - modifies SVM to update the CPL from SS.DPL rather than CS.RPL;
      the above case with SYSRET is not broken further, and the way
      to fix it would be to pass the CPL to userspace and back
      
      - modifies VMX to always return the CPL from SS.DPL (except
      forcing it to 0 if we are emulating real mode via vm86 mode;
      in vm86 mode all DPLs have to be 3, but real mode does allow
      privileged instructions).  It also removes the CPL cache,
      which becomes a duplicate of the SS access rights cache.
      
      This fixes doing KVM_IOCTL_SET_SREGS exactly after setting
      CR0.PE=1 but before CS has been reloaded.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      ae9fedc7
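
      As a standalone illustration of where the value comes from (not the kvm
      code): in both the descriptor access byte and the VMX segment
      access-rights encoding, the DPL occupies bits 6:5, so deriving the CPL
      from SS amounts to reading that field.

        #include <stdio.h>
        #include <stdint.h>

        /* DPL is bits 6:5 of a segment's access byte / VMX AR field. */
        static unsigned int dpl_of(uint32_t access_rights)
        {
                return (access_rights >> 5) & 3;
        }

        int main(void)
        {
                uint32_t ss_ar = 0x93 | (3u << 5);  /* present data segment, DPL=3 */
                printf("CPL taken from SS.DPL = %u\n", dpl_of(ss_ar));
                return 0;
        }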
    • KVM: x86: check CS.DPL against RPL during task switch · 5045b468
      Paolo Bonzini authored
      Table 7-1 of the SDM mentions a check that the code segment's
      DPL must match the selector's RPL.  This was not done by KVM;
      fix it.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      5045b468
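
      In emulator terms the added check is tiny; a hypothetical fragment (not the
      actual emulate.c code) of what the condition looks like when a
      non-conforming code segment is loaded by a task switch:

        /* Table 7-1: for the new CS, DPL must equal the selector's RPL */
        if (seg_desc_dpl != (selector & 3))
                goto exception;         /* let the emulator raise the fault */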
    • KVM: x86: drop set_rflags callback · fb5e336b
      Paolo Bonzini authored
      Not needed anymore now that the CPL is computed directly
      during task switch.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      fb5e336b
    • KVM: x86: use new CS.RPL as CPL during task switch · 2356aaeb
      Paolo Bonzini authored
      During task switch, all of CS.DPL, CS.RPL, SS.DPL must match (in addition
      to all the other requirements) and will be the new CPL.  So far this
      worked by carefully setting the CS selector and flag before doing the
      task switch; setting CS.selector will already change the CPL.
      
      However, this will not work once we get the CPL from SS.DPL, because
      then you will have to set the full segment descriptor cache to change
      the CPL.  ctxt->ops->cpl(ctxt) will then return the old CPL during the
      task switch, and the check that SS.DPL == CPL will fail.
      
      Temporarily assume that the CPL comes from CS.RPL during task switch
      to a protected-mode task.  This is the same approach used in QEMU's
      emulation code, which (until version 2.0) manually tracks the CPL.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      2356aaeb
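
      A sketch of the mechanism (function and parameter names are approximations
      of the emulator internals, shown only to make the idea concrete): the
      prospective CPL is computed from the incoming CS selector's RPL and passed
      down to the segment loads, instead of asking for the current CPL.

        /* in the 32-bit task-switch path (sketch) */
        u8 cpl = tss->cs & 3;   /* the new CS.RPL becomes the CPL for the checks */

        ret = __load_segment_descriptor(ctxt, tss->ss, VCPU_SREG_SS, cpl, true);
        if (ret != X86EMUL_CONTINUE)
                return ret;
        ret = __load_segment_descriptor(ctxt, tss->cs, VCPU_SREG_CS, cpl, true);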
  6. 16 May, 2014 13 commits
  7. 15 May, 2014 3 commits
  8. 12 May, 2014 1 commit
  9. 08 May, 2014 1 commit
    • kvm: x86: emulate monitor and mwait instructions as nop · 87c00572
      Gabriel L. Somlo authored
      Treat monitor and mwait instructions as nop, which is architecturally
      correct (but inefficient) behavior. We do this to prevent misbehaving
      guests (e.g. OS X <= 10.7) from crashing after they fail to check for
      monitor/mwait availability via cpuid.
      
      Since mwait-based idle loops relying on these nop-emulated instructions
      would keep the host CPU pegged at 100%, do NOT advertise their presence
      via cpuid, to prevent compliant guests from using them inadvertently.
      Signed-off-by: Gabriel L. Somlo <somlo@cmu.edu>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      87c00572
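
      A rough sketch of what "emulate as nop" means for a VM exit handler
      (hypothetical and simplified from the vendor-specific handlers): treat the
      exit as if the instruction did nothing and move the guest past it.

        static int handle_mwait(struct kvm_vcpu *vcpu)
        {
                /* architecturally valid, just wasteful: do nothing and move on */
                skip_emulated_instruction(vcpu);  /* advance guest RIP past MWAIT */
                return 1;                         /* resume the guest */
        }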
  10. 07 May, 2014 2 commits
    • kvm/x86: implement hv EOI assist · b63cf42f
      Michael S. Tsirkin authored
      It seems that it's easy to implement the EOI assist
      on top of the PV EOI feature: simply convert the
      page address to the format expected by PV EOI.
      
      Notes:
      -"No EOI required" is set only if interrupt injected
       is edge triggered; this is true because level interrupts are going
       through IOAPIC which disables PV EOI.
       In any case, if guest triggers EOI the bit will get cleared on exit.
      -For migration, set of HV_X64_MSR_APIC_ASSIST_PAGE sets
       KVM_PV_EOI_EN internally, so restoring HV_X64_MSR_APIC_ASSIST_PAGE
       seems sufficient
       In any case, bit is cleared on exit so worst case it's never re-enabled
      -no handling of PV EOI data is performed at HV_X64_MSR_EOI write;
       HV_X64_MSR_EOI is a separate optimization - it's an X2APIC
       replacement that lets you do EOI with an MSR and not IO.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      b63cf42f
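
      A sketch of the conversion described above (simplified fragment; the
      constants and helpers are taken from the existing PV EOI / Hyper-V code,
      but the exact wiring shown here is an assumption): when the guest writes
      HV_X64_MSR_APIC_ASSIST_PAGE, the same guest page is handed to the PV EOI
      machinery.

        case HV_X64_MSR_APIC_ASSIST_PAGE: {
                u64 gfn = data >> HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_SHIFT;

                /* the "no EOI required" flag sits in the first byte of the
                 * assist page, which is the layout PV EOI already expects */
                if (kvm_lapic_enable_pv_eoi(vcpu,
                                            gfn_to_gpa(gfn) | KVM_MSR_ENABLED))
                        return 1;
                break;
        }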
    • KVM: x86: Mark bit 7 in long-mode PDPTE according to 1GB pages support · 5f7dde7b
      Nadav Amit authored
      In long mode, bit 7 of the PDPTE is reserved only when the CPU does not
      support 1GB pages.  Currently, KVM considers the bit to be always
      reserved.
      Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      5f7dde7b
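
      A standalone illustration of the rule (plain C, not the kvm mmu code):
      whether bit 7 of a long-mode PDPTE is treated as reserved depends on
      whether the (virtual) CPU advertises 1GB page support.

        #include <stdio.h>
        #include <stdint.h>
        #include <stdbool.h>

        /* Reserved-bit mask for a long-mode PDPTE, as far as bit 7 is concerned. */
        static uint64_t pdpte_bit7_rsvd(bool cpu_has_gbpages)
        {
                return cpu_has_gbpages ? 0 : (1ull << 7);
        }

        int main(void)
        {
                uint64_t pdpte = (1ull << 7) | 0x40000000ull | 0x63; /* 1GB mapping */

                printf("reserved-bit fault without gbpages: %s\n",
                       (pdpte & pdpte_bit7_rsvd(false)) ? "yes" : "no");
                printf("reserved-bit fault with gbpages:    %s\n",
                       (pdpte & pdpte_bit7_rsvd(true))  ? "yes" : "no");
                return 0;
        }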