16 Jan, 2018 (15 commits)
    • KVM: VMX: introduce X2APIC_MSR macro · d7231e75
      Paolo Bonzini authored
      Remove duplicate expression in nested_vmx_prepare_msr_bitmap, and make
      the register names clearer in hardware_setup.
      Suggested-by: Jim Mattson <jmattson@google.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      [Resolved rebase conflict after removing Intel PT. - Radim]
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
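      The macro itself is tiny. A minimal sketch of the shape the title
      suggests, assuming the usual x2APIC layout in which MSR 0x800 +
      (offset >> 4) mirrors the xAPIC MMIO register at byte offset
      "offset":
      
      	/* Sketch: map an xAPIC MMIO register offset to its x2APIC
      	 * MSR number; e.g. X2APIC_MSR(APIC_TASKPRI), with
      	 * APIC_TASKPRI == 0x80, yields MSR 0x808 (the TPR). */
      	#define APIC_BASE_MSR	0x800
      	#define X2APIC_MSR(r)	(APIC_BASE_MSR + ((r) >> 4))
      
      hardware_setup can then write X2APIC_MSR(APIC_TASKPRI) instead of
      an opaque 0x808 literal.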
    • KVM: vmx: speed up MSR bitmap merge · c992384b
      Paolo Bonzini authored
      The bulk of the MSR bitmap is either immutable, or can be copied from
      the L1 bitmap.  By initializing it at VMXON time, and copying the mutable
      parts one long at a time on vmentry (rather than one bit), about 4000
      clock cycles (30%) can be saved on a nested VMLAUNCH/VMRESUME.
      
      The resulting for loop only has four iterations, so it is cheap enough
      to reinitialize the MSR write bitmaps on every iteration, and it makes
      the code simpler.
      Suggested-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
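      A sketch of the word-at-a-time merge, assuming the architectural
      MSR-bitmap layout (read-intercept bits for MSRs 0x800-0x8ff in the
      low quarter of the 4KiB page, the corresponding write-intercept
      bits 0x800 bytes further on); msr_bitmap_l0 and msr_bitmap_l1 are
      taken to point at the merged bitmap and at L1's bitmap:
      
      	unsigned long msr;
      
      	/* Four iterations on a 64-bit kernel: 0x100 MSRs at
      	 * BITS_PER_LONG per step. */
      	for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
      		unsigned int word = msr / BITS_PER_LONG;
      
      		/* Reads: inherit L1's intercept bits wholesale. */
      		msr_bitmap_l0[word] = msr_bitmap_l1[word];
      		/* Writes: reinitialize to all-intercepted; the few
      		 * MSRs L2 may write directly get re-opened after
      		 * the loop. */
      		msr_bitmap_l0[word + (0x800 / sizeof(long))] = ~0;
      	}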
    • KVM: vmx: simplify MSR bitmap setup · 1f6e5b25
      Paolo Bonzini authored
      The APICv-enabled MSR bitmap is a superset of the APICv-disabled bitmap.
      Make that obvious in vmx_disable_intercept_msr_x2apic.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      [Resolved rebase conflict after removing Intel PT. - Radim]
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
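      One hedged reading of "make that obvious" in code (the bitmap
      globals below follow the vmx.c naming of that era, but the exact
      shape of the change is an assumption): clear the APICv-enabled
      bitmaps unconditionally, so the superset relation is visible in
      the control flow instead of being implied by two parallel setups:
      
      	static void vmx_disable_intercept_msr_x2apic(u32 msr, int type,
      						     bool apicv_only)
      	{
      		/* Anything passed through without APICv is also
      		 * passed through with it. */
      		__vmx_disable_intercept_for_msr(
      			vmx_msr_bitmap_legacy_x2apic_apicv, msr, type);
      		__vmx_disable_intercept_for_msr(
      			vmx_msr_bitmap_longmode_x2apic_apicv, msr, type);
      		if (!apicv_only) {
      			__vmx_disable_intercept_for_msr(
      				vmx_msr_bitmap_legacy_x2apic, msr, type);
      			__vmx_disable_intercept_for_msr(
      				vmx_msr_bitmap_longmode_x2apic, msr, type);
      		}
      	}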
    • KVM: nVMX: remove unnecessary vmwrite from L2->L1 vmexit · 07f36616
      Paolo Bonzini authored
      The POSTED_INTR_NV field is constant (though it differs between the
      vmcs01 and vmcs02), so there is no need to reload it on vmexit to L1.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
    • KVM: nVMX: initialize more non-shadowed fields in prepare_vmcs02_full · 25a2e4fe
      Paolo Bonzini authored
      These fields are also simple copies of the data in the vmcs12 struct.
      For some of them, prepare_vmcs02 was skipping the copy when the field
      was unused.  In prepare_vmcs02_full, we copy them always as long as the
      field exists on the host, because the corresponding execution control
      might be one of the shadowed fields.
      
      Optimization opportunities remain for MSRs that, depending on the
      entry/exit controls, have to be copied from either the vmcs01 or
      the vmcs12: EFER (whose value is partly stored in the entry controls
      too), PAT, DEBUGCTL (and also DR7).  Before moving these three and
      the entry/exit controls to prepare_vmcs02_full, KVM would have to set
      dirty_vmcs12 on writes to the L1 MSRs.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
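      An illustrative excerpt of the kind of copy involved (the VMCS
      field encodings are real, but which fields this particular patch
      moved is not restated here, and the capability check is only an
      example of a host-support gate):
      
      	static void prepare_vmcs02_full(struct kvm_vcpu *vcpu,
      					struct vmcs12 *vmcs12)
      	{
      		/* Straight copies from the vmcs12 struct, done
      		 * whenever the host supports the field, even if the
      		 * execution control consuming it happens to be
      		 * clear. */
      		vmcs_write32(GUEST_SYSENTER_CS, vmcs12->guest_sysenter_cs);
      		vmcs_writel(GUEST_SYSENTER_ESP, vmcs12->guest_sysenter_esp);
      		vmcs_writel(GUEST_SYSENTER_EIP, vmcs12->guest_sysenter_eip);
      		if (cpu_has_vmx_xsaves())	/* host-support check */
      			vmcs_write64(XSS_EXIT_BITMAP, vmcs12->xss_exit_bitmap);
      	}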
    • KVM: nVMX: initialize descriptor cache fields in prepare_vmcs02_full · 8665c3f9
      Paolo Bonzini authored
      This part is separate for ease of review, because git prefers to move
      prepare_vmcs02 below the initial long sequence of vmcs_write* operations.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
    • KVM: nVMX: track dirty state of non-shadowed VMCS fields · 74a497fa
      Paolo Bonzini authored
      VMCS12 fields that are not handled through shadow VMCS are rarely
      written, and thus they are also almost constant in the vmcs02.  We can
      thus optimize prepare_vmcs02 by skipping all the work for non-shadowed
      fields in the common case.
      
      This patch introduces the (pretty simple) tracking infrastructure; the
      next patches will move work to prepare_vmcs02_full and save a few hundred
      clock cycles per VMRESUME on a Haswell Xeon E5 system:
      
      	                                before  after
      	cpuid                           14159   13869
      	vmcall                          15290   14951
      	inl_from_kernel                 17703   17447
      	outl_to_kernel                  16011   14692
      	self_ipi_sti_nop                16763   15825
      	self_ipi_tpr_sti_nop            17341   15935
      	wr_tsc_adjust_msr               14510   14264
      	rd_tsc_adjust_msr               15018   14311
      	mmio-wildcard-eventfd:pci-mem   16381   14947
      	mmio-datamatch-eventfd:pci-mem  18620   17858
      	portio-wildcard-eventfd:pci-io  15121   14769
      	portio-datamatch-eventfd:pci-io 15761   14831
      
      (average savings 748, stdev 460).
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
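      The infrastructure amounts to a dirty flag on the nested state;
      roughly (a sketch, with the placement of the checks simplified):
      
      	struct nested_vmx {
      		/* ... existing nested state ... */
      		bool dirty_vmcs12;  /* non-shadowed vmcs12 field changed */
      	};
      
      	/* Wherever KVM emulates a write to a non-shadowed field: */
      	vmx->nested.dirty_vmcs12 = true;
      
      	/* On nested vmentry, the expensive path runs only if needed: */
      	if (vmx->nested.dirty_vmcs12) {
      		prepare_vmcs02_full(vcpu, vmcs12);
      		vmx->nested.dirty_vmcs12 = false;
      	}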
    • KVM: VMX: split list of shadowed VMCS field to a separate file · c9e9deae
      Paolo Bonzini authored
      Prepare for multiple inclusions of the list.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
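      "Multiple inclusions" points at the usual X-macro arrangement:
      the list lives in its own header, and each includer defines the
      SHADOW_FIELD_* hooks to whatever it needs. A sketch under assumed
      file and macro names:
      
      	/* vmx_shadow_fields.h: nothing but the list; hooks that an
      	 * includer leaves undefined default to no-ops. */
      	#ifndef SHADOW_FIELD_RO
      	#define SHADOW_FIELD_RO(x)
      	#endif
      	#ifndef SHADOW_FIELD_RW
      	#define SHADOW_FIELD_RW(x)
      	#endif
      
      	SHADOW_FIELD_RO(VM_EXIT_REASON)
      	SHADOW_FIELD_RO(VM_EXIT_INTR_INFO)
      	SHADOW_FIELD_RW(GUEST_RIP)
      	SHADOW_FIELD_RW(GUEST_RSP)
      	/* ... */
      
      	#undef SHADOW_FIELD_RO
      	#undef SHADOW_FIELD_RW
      
      One includer can then build, say, the table of read/write fields:
      
      	static u16 shadow_read_write_fields[] = {
      	#define SHADOW_FIELD_RW(x) x,
      	#include "vmx_shadow_fields.h"
      	};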
    • kvm: vmx: Reduce size of vmcs_field_to_offset_table · 58e9ffae
      Jim Mattson authored
      The vmcs_field_to_offset_table was a rather sparse table of short
      integers with a maximum index of 0x6c16, amounting to 55342 bytes. Now
      that we are considering support for multiple VMCS12 formats, it would
      be unfortunate to replicate that large, sparse table. Rotating the
      field encoding (as a 16-bit integer) left by 6 reduces that table to
      5926 bytes.
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
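      The rotation works because the high bits of a field encoding
      (width in bits 14:13, type in bits 11:10) are what make the index
      space sparse; rotating them into the low-order bits packs the used
      encodings into a small range. A sketch of the mechanics:
      
      	#define ROL16(val, n) \
      		((u16)(((u16)(val) << (n)) | ((u16)(val) >> (16 - (n)))))
      
      	#define FIELD(number, name) \
      		[ROL16(number, 6)] = offsetof(struct vmcs12, name)
      
      	static const unsigned short vmcs_field_to_offset_table[] = {
      		FIELD(GUEST_RIP, guest_rip),
      		/* ... every supported vmcs12 field ... */
      	};
      
      	/* The lookup applies the same rotation to the encoding: */
      	static inline short vmcs_field_to_offset(unsigned long field)
      	{
      		u16 index = ROL16(field, 6);
      
      		if (index >= ARRAY_SIZE(vmcs_field_to_offset_table))
      			return -ENOENT;
      		return vmcs_field_to_offset_table[index];
      	}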
    • kvm: vmx: Change vmcs_field_type to vmcs_field_width · d37f4267
      Jim Mattson authored
      Per the SDM, "[VMCS] Fields are grouped by width (16-bit, 32-bit,
      etc.) and type (guest-state, host-state, etc.)." Previously, the width
      was indicated by vmcs_field_type. To avoid confusion when we start
      dealing with both field width and field type, change vmcs_field_type
      to vmcs_field_width.
      Signed-off-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
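      Per the SDM, the width lives in bits 14:13 of the encoding
      (0 = 16-bit, 1 = 64-bit, 2 = 32-bit, 3 = natural width), so the
      renamed helper plausibly looks like:
      
      	enum vmcs_field_width {
      		VMCS_FIELD_WIDTH_U16 = 0,
      		VMCS_FIELD_WIDTH_U64 = 1,
      		VMCS_FIELD_WIDTH_U32 = 2,
      		VMCS_FIELD_WIDTH_NATURAL_WIDTH = 3
      	};
      
      	static inline int vmcs_field_width(unsigned long field)
      	{
      		if (field & 0x1)  /* "HIGH" halves of 64-bit fields */
      			return VMCS_FIELD_WIDTH_U32;
      		return (field >> 13) & 0x3;
      	}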
    • kvm: vmx: Introduce VMCS12_MAX_FIELD_INDEX · 5b15706d
      Jim Mattson authored
      This is the highest index value used in any supported VMCS12 field
      encoding. It is used to populate the IA32_VMX_VMCS_ENUM MSR.
      Signed-off-by: Jim Mattson <jmattson@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
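      IA32_VMX_VMCS_ENUM reports the highest field index in bits 9:1 of
      the MSR, so populating it reduces to a shift (the constant's value
      and the variable name below are illustrative):
      
      	/* Highest index used in any supported VMCS12 field encoding. */
      	#define VMCS12_MAX_FIELD_INDEX	0x17
      
      	/* Bits 9:1 of IA32_VMX_VMCS_ENUM hold the highest index: */
      	vmx->nested.nested_vmx_vmcs_enum = VMCS12_MAX_FIELD_INDEX << 1;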
    • KVM: VMX: optimize shadow VMCS copying · 44900ba6
      Paolo Bonzini authored
      Because all fields can be read/written with a single vmread/vmwrite on
      64-bit kernels, the switch statements in copy_vmcs12_to_shadow and
      copy_shadow_to_vmcs12 are unnecessary.
      
      What I did in this patch is to copy the two parts of 64-bit fields
      separately on 32-bit kernels, to keep all complicated #ifdef-ery
      in init_vmcs_shadow_fields.  The disadvantage is that 64-bit fields
      have to be listed separately in shadow_read_only/read_write_fields,
      but those are few and we can validate the arrays when building the
      VMREAD and VMWRITE bitmaps.  This saves a few hundred clock cycles
      per nested vmexit.
      
      However there is still a "switch" in vmcs_read_any and vmcs_write_any.
      So, while at it, this patch reorders the fields by type, hoping that
      the branch predictor appreciates it.
      
      Cc: Jim Mattson <jmattson@google.com>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
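      With every field readable in one vmread on 64-bit kernels, the
      copy loop loses its per-width switch. A sketch of the simplified
      shadow-to-vmcs12 direction (helper names follow vmx.c of that era;
      details are assumed):
      
      	/* On 32-bit kernels, each 64-bit field is instead listed as
      	 * two entries (encoding and encoding + 1, i.e. its "HIGH"
      	 * half) in the shadow field arrays, which can be validated
      	 * while building the VMREAD/VMWRITE bitmaps. */
      	for (i = 0; i < max_shadow_read_write_fields; i++) {
      		field = shadow_read_write_fields[i];
      		field_value = __vmcs_readl(field);
      		vmcs12_write_any(&vmx->vcpu, field, field_value);
      	}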
    • KVM: vmx: shadow more fields that are read/written on every vmexits · c5d167b2
      Paolo Bonzini authored
      Compared to when VMCS shadowing was added to KVM, we are reading/writing
      a few more fields: the PML index, the interrupt status and the preemption
      timer value.  The first two are because we are exposing more features
      to nested guests, the preemption timer is simply because we have grown
      a new optimization.  Adding them to the shadow VMCS field lists reduces
      the cost of a vmexit by about 1000 clock cycles for each field that exists
      on bare metal.
      
      On the other hand, the guest BNDCFGS and TSC offset are not written on
      fast paths, so remove them.
      Suggested-by: Jim Mattson <jmattson@google.com>
      Cc: Jim Mattson <jmattson@google.com>
      Cc: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
    • Merge tag 'kvm-s390-next-4.16-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux · 7cd91804
      Radim Krčmář authored
      KVM: s390: Fixes and features for 4.16
      
      - add the virtio-ccw transport for kvmconfig
      - more debug tracing for cpu model
      - cleanups and fixes
    • KVM: nVMX: Fix races when sending nested PI while dest enters/leaves L2 · 6b697711
      Liran Alon authored
      Consider the following scenario:
      1. CPU A calls vmx_deliver_nested_posted_interrupt() to send an IPI
      to CPU B via virtual posted-interrupt mechanism.
      2. CPU B is currently executing L2 guest.
      3. vmx_deliver_nested_posted_interrupt() calls
      kvm_vcpu_trigger_posted_interrupt() which will note that
      vcpu->mode == IN_GUEST_MODE.
      4. Assume that before CPU A sends the physical POSTED_INTR_NESTED_VECTOR
      IPI, CPU B exits from L2 to L0 during event-delivery
      (valid IDT-vectoring-info).
      5. CPU A now sends the physical IPI. The IPI is received in the host,
      and its handler (smp_kvm_posted_intr_nested_ipi()) does nothing.
      6. Assume that before CPU A sets pi_pending=true and KVM_REQ_EVENT,
      CPU B continues to run in L0 and reaches vcpu_enter_guest(). As
      KVM_REQ_EVENT is not set yet, vcpu_enter_guest() will continue and
      resume the L2 guest.
      7. At this point, CPU A sets pi_pending=true and KVM_REQ_EVENT, but
      it's too late! CPU B has already entered L2, and KVM_REQ_EVENT will
      only be consumed at the next L2 entry!
      
      Another scenario to consider:
      1. CPU A calls vmx_deliver_nested_posted_interrupt() to send an IPI
      to CPU B via virtual posted-interrupt mechanism.
      2. Assume that before CPU A calls kvm_vcpu_trigger_posted_interrupt(),
      CPU B is at L0 and about to resume into L2. Further assume that it is
      in vcpu_enter_guest(), past the check for KVM_REQ_EVENT.
      3. At this point, CPU A calls kvm_vcpu_trigger_posted_interrupt(),
      which notes that vcpu->mode != IN_GUEST_MODE and therefore does
      nothing and returns false. CPU A then sets pi_pending=true and
      KVM_REQ_EVENT.
      4. Now CPU B continues and resumes into the L2 guest without
      processing the posted interrupt until the next L2 entry!
      
      To fix both issues, we just need to change
      vmx_deliver_nested_posted_interrupt() to set pi_pending=true and
      KVM_REQ_EVENT before calling kvm_vcpu_trigger_posted_interrupt().
      
      It will fix the first scenario by changing step (6): CPU B will now
      notice that KVM_REQ_EVENT is set and pi_pending=true, and therefore
      process the nested posted interrupt.
      
      It will fix the second scenario in one of two ways:
      1. If kvm_vcpu_trigger_posted_interrupt() is called after CPU B has
      changed vcpu->mode to IN_GUEST_MODE, a physical IPI will be sent and
      received when the CPU resumes into L2.
      2. If kvm_vcpu_trigger_posted_interrupt() is called while CPU B hasn't
      yet changed vcpu->mode to IN_GUEST_MODE, then after CPU B changes
      vcpu->mode it will call kvm_request_pending(), which will return true
      and force another round of vcpu_enter_guest(); that round will notice
      that KVM_REQ_EVENT is set and pi_pending=true and process the nested
      posted interrupt.
      
      Cc: stable@vger.kernel.org
      Fixes: 705699a1 ("KVM: nVMX: Enable nested posted interrupt processing")
      Signed-off-by: Liran Alon <liran.alon@oracle.com>
      Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com>
      Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
      [Add kvm_vcpu_kick to also handle the case where L1 doesn't intercept L2 HLT
       and L2 executes HLT instruction. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
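      The fixed ordering, sketched (the shape follows the description
      above; treat it as an illustration rather than the verbatim patch):
      
      	static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
      						       int vector)
      	{
      		struct vcpu_vmx *vmx = to_vmx(vcpu);
      
      		if (is_guest_mode(vcpu) &&
      		    vector == vmx->nested.posted_intr_nv) {
      			/* Publish the event *before* poking the target,
      			 * so that whichever side of the vcpu->mode check
      			 * the target is on, it sees either the IPI or
      			 * the request on its next vcpu_enter_guest(). */
      			vmx->nested.pi_pending = true;
      			kvm_make_request(KVM_REQ_EVENT, vcpu);
      			/* The kick covers the case where the target is
      			 * halted in L2 and L1 does not intercept HLT. */
      			if (!kvm_vcpu_trigger_posted_interrupt(vcpu, true))
      				kvm_vcpu_kick(vcpu);
      			return 0;
      		}
      		return -1;
      	}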