1. 11 Mar, 2014 1 commit
    • KVM: nVMX: Rework interception of IRQs and NMIs · b6b8a145
      Jan Kiszka authored
      Move the check for leaving L2 on pending and intercepted IRQs or NMIs
      from the *_allowed handler into a dedicated callback. Invoke this
      callback at the relevant points before KVM checks if IRQs/NMIs can be
      injected. The callback's task is to switch from L2 to L1 if needed
      and to inject the proper vmexit events.
      
      The rework fixes L2 wakeups from HLT and provides the foundation for
      preemption timer emulation.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
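      A minimal, self-contained toy model of the ordering described above (plain C,
      not kernel code; every name and the boolean state below are invented for
      illustration): a dedicated check runs first and performs the L2-to-L1 switch
      when L1 intercepts the event, and only afterwards is the usual "can an
      IRQ/NMI be injected?" test applied.

      #include <stdbool.h>
      #include <stdio.h>

      struct vcpu {
              bool in_l2;              /* currently running the nested guest (L2) */
              bool l1_intercepts_irq;  /* L1 asked to intercept external IRQs */
              bool irq_pending;
      };

      /* Toy "dedicated callback": leave L2 when L1 intercepts the pending
       * event.  It runs before the injection test below. */
      static void check_nested_events(struct vcpu *v)
      {
              if (v->in_l2 && v->irq_pending && v->l1_intercepts_irq) {
                      v->in_l2 = false;
                      printf("emulated vmexit: IRQ intercepted by L1\n");
              }
      }

      static void inject_pending_event(struct vcpu *v)
      {
              check_nested_events(v);            /* switch L2 -> L1 first if needed */
              if (v->irq_pending && !v->in_l2) { /* then the *_allowed-style test */
                      v->irq_pending = false;
                      printf("IRQ injected into the level now running\n");
              }
      }

      int main(void)
      {
              struct vcpu v = { .in_l2 = true, .l1_intercepts_irq = true,
                                .irq_pending = true };
              inject_pending_event(&v);
              return 0;
      }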
  2. 06 Mar, 2014 2 commits
  3. 04 Mar, 2014 12 commits
  4. 03 Mar, 2014 13 commits
  5. 27 Feb, 2014 4 commits
    • kvm, vmx: Really fix lazy FPU on nested guest · 1b385cbd
      Paolo Bonzini authored
      Commit e504c909 (kvm, vmx: Fix lazy FPU on nested guest, 2013-11-13)
      highlighted a real problem, but the fix was subtly wrong.
      
      nested_read_cr0 is the CR0 as read by L2, but here we want to look at
      the CR0 value reflecting L1's setup.  In other words, L2 might think
      that TS=0 (so nested_read_cr0 has the bit clear); but if L1 is actually
      running it with TS=1, we should inject the fault into L1.
      
      The effective value of CR0 in L2 is contained in vmcs12->guest_cr0;
      use it.
      
      Fixes: e504c909
      Reported-by: Kashyap Chamarty <kchamart@redhat.com>
      Reported-by: Stefan Bader <stefan.bader@canonical.com>
      Tested-by: Kashyap Chamarty <kchamart@redhat.com>
      Tested-by: Anthoine Bourgeois <bourgeois@bertin.fr>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
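      A rough illustration of the distinction (a self-contained toy, not the
      actual diff; only vmcs12->guest_cr0 and the TS bit come from the message,
      the rest is invented): the decision keys off the CR0 value L1 actually runs
      L2 with, not the value L2 reads back.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define X86_CR0_TS (1UL << 3)    /* task-switched bit of CR0 */

      /* Toy vmcs12: guest_cr0 is the value L1 really runs L2 with;
       * cr0_read_shadow is what L2 observes when it reads CR0. */
      struct toy_vmcs12 {
              uint64_t guest_cr0;
              uint64_t cr0_read_shadow;
      };

      /* If the effective CR0.TS is set because of L1's setup, the lazy-FPU
       * fault (#NM) belongs to L1, whatever L2 thinks it reads. */
      static bool nm_goes_to_l1(const struct toy_vmcs12 *v)
      {
              return (v->guest_cr0 & X86_CR0_TS) != 0;
      }

      int main(void)
      {
              struct toy_vmcs12 v = { .guest_cr0 = X86_CR0_TS,
                                      .cr0_read_shadow = 0 };
              printf("inject #NM into %s\n", nm_goes_to_l1(&v) ? "L1" : "L2");
              return 0;
      }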
    • kvm: x86: fix emulator buffer overflow (CVE-2014-0049) · a08d3b3b
      Andrew Honig authored
      The problem occurs when the guest performs a pusha with a stack
      address that starts at an mmio address (or an invalid guest physical
      address) but then extends into an ordinary guest physical address.
      When doing repeated emulated pushes, emulator_read_write sets
      mmio_needed to 1 on the first one.  On a later push, when the stack
      points to regular memory, mmio_nr_fragments is set to 0, but
      mmio_needed is not cleared.
      
      As a result, KVM exits to userspace and then returns to
      complete_emulated_mmio, where vcpu->mmio_cur_fragment is incremented.
      The termination condition vcpu->mmio_cur_fragment ==
      vcpu->mmio_nr_fragments is never reached, so the code bounces back
      and forth to userspace, incrementing mmio_cur_fragment past its
      buffer.  If the guest does nothing else, this eventually leads to a
      crash on a memcpy from an invalid memory address.
      
      However, if guest code can cause the VM to be destroyed on another
      vcpu with excellent timing, then kvm_clear_async_pf_completion_queue
      can be used by the guest to control the data that is pointed to by
      the call to cancel_work_item, which can be used to gain execution.
      
      Fixes: f78146b0
      Signed-off-by: Andrew Honig <ahonig@google.com>
      Cc: stable@vger.kernel.org (3.5+)
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
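      A compact toy model of the failure mode described above (illustrative only;
      the field names mirror the message, but the surrounding code and the >=
      termination test are assumptions about how such a walk can be made safe,
      not necessarily the exact upstream fix).

      #include <stdio.h>

      struct toy_vcpu {
              int mmio_needed;
              unsigned int mmio_cur_fragment;
              unsigned int mmio_nr_fragments;
      };

      /* Models a complete_emulated_mmio()-style step: consume one fragment
       * per round trip to userspace; return 1 while more trips are needed. */
      static int complete_mmio(struct toy_vcpu *v)
      {
              v->mmio_cur_fragment++;
              /* With `==` this test never fires once mmio_nr_fragments was
               * reset to 0 while mmio_needed stayed set, and the index runs
               * away; `>=` (one defensive option) terminates the walk. */
              if (v->mmio_cur_fragment >= v->mmio_nr_fragments) {
                      v->mmio_needed = 0;
                      return 0;
              }
              return 1;
      }

      int main(void)
      {
              /* State after the mixed MMIO/RAM pusha described above. */
              struct toy_vcpu v = { .mmio_needed = 1,
                                    .mmio_cur_fragment = 0,
                                    .mmio_nr_fragments = 0 };
              while (v.mmio_needed && complete_mmio(&v))
                      ;
              printf("terminated at fragment %u\n", v.mmio_cur_fragment);
              return 0;
      }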
    • arm/arm64: KVM: detect CPU reset on CPU_PM_EXIT · b20c9f29
      Marc Zyngier authored
      Commit 1fcf7ce0 (arm: kvm: implement CPU PM notifier) added
      support for CPU power-management, using a cpu_notifier to re-init
      KVM on a CPU that entered CPU idle.
      
      The code assumed that a CPU entering idle would actually be powered
      off, losing its state entirely, and would then need to be
      reinitialized.  It turns out that this is not always the case: some
      hardware performs CPU PM without actually killing the core.  In this
      case, we try to reinitialize KVM while it is still live.  It ends up
      badly, as reported by Andre Przywara (using a Calxeda Midway):
      
      [    3.663897] Kernel panic - not syncing: unexpected prefetch abort in Hyp mode at: 0x685760
      [    3.663897] unexpected data abort in Hyp mode at: 0xc067d150
      [    3.663897] unexpected HVC/SVC trap in Hyp mode at: 0xc0901dd0
      
      The trick here is to detect if we've been through a full re-init or
      not by looking at HVBAR (VBAR_EL2 on arm64). This involves
      implementing the backend for __hyp_get_vectors in the main KVM HYP
      code (rather small), and checking the return value against the
      default one when the CPU notifier is called on CPU_PM_EXIT.
      Reported-by: Andre Przywara <osp@andrep.de>
      Tested-by: Andre Przywara <osp@andrep.de>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Rob Herring <rob.herring@linaro.org>
      Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
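      A hedged sketch of the detection (self-contained toy; the vector-read and
      Hyp-init helpers are stand-ins modelled on the message, everything else is
      invented): on CPU_PM_EXIT, only re-initialize when the Hyp vectors read
      back as the default/stub value, i.e. when the core really lost its state.

      #include <stdint.h>
      #include <stdio.h>

      #define HYP_STUB_VECTORS 0x0UL      /* placeholder "default" vectors */

      /* Stand-ins for __hyp_get_vectors() and the per-CPU Hyp init; in the
       * kernel the former reads HVBAR (VBAR_EL2 on arm64). */
      static uint64_t hyp_vectors = 0xc0900000;   /* pretend KVM is still live */

      static uint64_t hyp_get_vectors(void) { return hyp_vectors; }
      static void cpu_init_hyp_mode(void) { puts("re-initializing Hyp mode"); }

      static void kvm_cpu_pm_exit_notify(void)
      {
              /* Re-init only if the core really lost its Hyp state, i.e. the
               * vectors read back as the default/stub value. */
              if (hyp_get_vectors() == HYP_STUB_VECTORS)
                      cpu_init_hyp_mode();
              else
                      puts("Hyp state preserved, skipping re-init");
      }

      int main(void)
      {
              kvm_cpu_pm_exit_notify();
              return 0;
      }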
    • KVM: x86: Break kvm_for_each_vcpu loop after finding the VP_INDEX · 684851a1
      Takuya Yoshikawa authored
      No need to scan the entire VCPU array.
      Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
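      In sketch form (illustrative toy, not the kernel's kvm_for_each_vcpu macro
      or the actual matching criterion): stop the scan as soon as the matching
      vcpu is found instead of walking the whole array.

      #include <stddef.h>
      #include <stdio.h>

      struct toy_vcpu { int vcpu_id; };

      /* Return the first matching vcpu and stop scanning right away. */
      static struct toy_vcpu *find_vcpu(struct toy_vcpu *vcpus, size_t n, int id)
      {
              for (size_t i = 0; i < n; i++)
                      if (vcpus[i].vcpu_id == id)
                              return &vcpus[i];  /* the early exit the commit adds */
              return NULL;
      }

      int main(void)
      {
              struct toy_vcpu vcpus[] = { { 0 }, { 1 }, { 2 }, { 3 } };
              struct toy_vcpu *v = find_vcpu(vcpus, 4, 2);
              printf("found vcpu %d\n", v ? v->vcpu_id : -1);
              return 0;
      }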
  6. 26 Feb, 2014 5 commits
  7. 25 Feb, 2014 2 commits
  8. 24 Feb, 2014 1 commit