Commit 36c344f3 authored by Paolo Bonzini


Merge tag 'kvm-arm-for-v4.12-round2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

Second round of KVM/ARM changes for v4.12.

Changes include:
 - A fix related to the 32-bit idmap stub
 - A fix to the bitmask used to decode the operands of an AArch32 CP
   instruction
 - We have moved the files shared between arch/arm/kvm and
   arch/arm64/kvm to virt/kvm/arm
 - We add support for saving/restoring the virtual ITS state to
   userspace
parents 03efce6f a2b19e6e
@@ -32,7 +32,128 @@ Groups:
  KVM_DEV_ARM_VGIC_CTRL_INIT
    request the initialization of the ITS, no additional parameter in
    kvm_device_attr.addr.
KVM_DEV_ARM_ITS_SAVE_TABLES
save the ITS table data into guest RAM, at the location provisioned
by the guest in corresponding registers/table entries.
    The layout of the tables in guest memory defines an ABI. The entries
    are laid out in little endian format as described in the "ITS Table ABI
    REV0" section below.
KVM_DEV_ARM_ITS_RESTORE_TABLES
restore the ITS tables from guest RAM to ITS internal structures.
    The GICv3 must be restored before the ITS, and all ITS registers
    except GITS_CTLR must be restored before restoring the ITS tables.
    The GITS_IIDR read-only register must also be restored before
    calling KVM_DEV_ARM_ITS_RESTORE_TABLES, as the IIDR revision field
    encodes the ABI revision.
The expected ordering when restoring the GICv3/ITS is described in section
"ITS Restore Sequence".
  Errors:
    -ENXIO: ITS not properly configured as required prior to setting
            this attribute
    -ENOMEM: Memory shortage when allocating ITS internal data
-EINVAL: Inconsistent restored data
    -EFAULT: Invalid guest RAM access
    -EBUSY: One or more VCPUs are running
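  For illustration, a minimal userspace sketch of triggering the table
  save, assuming its_fd is the fd returned by KVM_CREATE_DEVICE for the
  ITS device and that the caller has already stopped all VCPUs
  (otherwise -EBUSY is returned):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>

    /* Save the ITS tables to guest RAM; returns 0 or -1 with errno set. */
    static int its_save_tables(int its_fd)
    {
            struct kvm_device_attr attr = {
                    .group = KVM_DEV_ARM_VGIC_GRP_CTRL,
                    .attr  = KVM_DEV_ARM_ITS_SAVE_TABLES,
                    /* no parameter: kvm_device_attr.addr is unused */
            };

            return ioctl(its_fd, KVM_SET_DEVICE_ATTR, &attr);
    }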
KVM_DEV_ARM_VGIC_GRP_ITS_REGS
Attributes:
The attr field of kvm_device_attr encodes the offset of the
ITS register, relative to the ITS control frame base address
(ITS_base).
kvm_device_attr.addr points to a __u64 value whatever the width
    of the addressed register (32/64 bits). 64-bit registers can only
    be accessed at their full length.
Writes to read-only registers are ignored by the kernel except for:
- GITS_CREADR. It must be restored otherwise commands in the queue
will be re-executed after restoring CWRITER. GITS_CREADR must be
restored before restoring the GITS_CTLR which is likely to enable the
ITS. Also it must be restored after GITS_CBASER since a write to
GITS_CBASER resets GITS_CREADR.
- GITS_IIDR. The Revision field encodes the table layout ABI revision.
In the future we might implement direct injection of virtual LPIs.
This will require an upgrade of the table layout and an evolution of
the ABI. GITS_IIDR must be restored before calling
KVM_DEV_ARM_ITS_RESTORE_TABLES.
For other registers, getting or setting a register has the same
effect as reading/writing the register on real hardware.
Errors:
-ENXIO: Offset does not correspond to any supported register
-EFAULT: Invalid user pointer for attr->addr
-EINVAL: Offset is not 64-bit aligned
    -EBUSY: One or more VCPUs are running
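  For illustration, a hedged sketch of this access pattern, assuming the
  helper below lives next to its_save_tables() from the previous
  example; offset 0x4 is GITS_IIDR relative to ITS_base:

    #include <stdbool.h>

    /* Read or write one ITS register; the value always travels
     * through a __u64, whatever the register's architectural width. */
    static int its_reg_access(int its_fd, unsigned long offset,
                              __u64 *val, bool set)
    {
            struct kvm_device_attr attr = {
                    .group = KVM_DEV_ARM_VGIC_GRP_ITS_REGS,
                    .attr  = offset,        /* offset from ITS_base */
                    .addr  = (__u64)(unsigned long)val,
            };

            return ioctl(its_fd, set ? KVM_SET_DEVICE_ATTR
                                     : KVM_GET_DEVICE_ATTR, &attr);
    }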
 ITS Restore Sequence:
 ---------------------
The following ordering must be followed when restoring the GIC and the ITS:
a) restore all guest memory and create vcpus
b) restore all redistributors
c) provide the its base address
(KVM_DEV_ARM_VGIC_GRP_ADDR)
d) restore the ITS in the following order:
1. Restore GITS_CBASER
2. Restore all other GITS_ registers, except GITS_CTLR!
3. Load the ITS table data (KVM_DEV_ARM_ITS_RESTORE_TABLES)
4. Restore GITS_CTLR
Then vcpus can be started.
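 For illustration, a sketch of step d) using the hypothetical
 its_reg_access() helper from above; the saved_its structure is ours,
 filled during an earlier save phase, and the offsets are the
 architectural ones (0x0 GITS_CTLR, 0x4 GITS_IIDR, 0x80 GITS_CBASER,
 0x88 GITS_CWRITER, 0x90 GITS_CREADR, 0x100 + 8*n GITS_BASER<n>):

    struct saved_its {          /* hypothetical container */
            __u64 ctlr, iidr, cbaser, cwriter, creadr, baser[8];
    };

    static int its_restore(int its_fd, struct saved_its *s)
    {
            struct kvm_device_attr tables = {
                    .group = KVM_DEV_ARM_VGIC_GRP_CTRL,
                    .attr  = KVM_DEV_ARM_ITS_RESTORE_TABLES,
            };
            int i, ret;

            /* 1. GITS_CBASER first: writing it resets GITS_CREADR */
            ret = its_reg_access(its_fd, 0x80, &s->cbaser, true);

            /* 2. all other registers, GITS_CTLR excluded; this
             *    includes GITS_IIDR (ABI revision) and GITS_CREADR */
            if (!ret)
                    ret = its_reg_access(its_fd, 0x04, &s->iidr, true);
            for (i = 0; !ret && i < 8; i++)
                    ret = its_reg_access(its_fd, 0x100 + 8 * i,
                                         &s->baser[i], true);
            if (!ret)
                    ret = its_reg_access(its_fd, 0x88, &s->cwriter, true);
            if (!ret)
                    ret = its_reg_access(its_fd, 0x90, &s->creadr, true);

            /* 3. the table data, then 4. GITS_CTLR last */
            if (!ret)
                    ret = ioctl(its_fd, KVM_SET_DEVICE_ATTR, &tables);
            if (!ret)
                    ret = its_reg_access(its_fd, 0x00, &s->ctlr, true);
            return ret;
    }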
ITS Table ABI REV0:
-------------------
 Revision 0 of the ABI only supports the features of a virtual GICv3, and does
 not support a virtual GICv4 with direct injection of virtual interrupts for
 nested hypervisors.
The device table and ITT are indexed by the DeviceID and EventID,
 respectively. The collection table is not indexed by CollectionID, and the
 entries in the collection table are listed in no particular order.
All entries are 8 bytes.
Device Table Entry (DTE):
bits: | 63| 62 ... 49 | 48 ... 5 | 4 ... 0 |
values: | V | next | ITT_addr | Size |
 where:
- V indicates whether the entry is valid. If not, other fields
are not meaningful.
 - next: equals 0 if this entry is the last one; otherwise it
   corresponds to the DeviceID offset to the next DTE, capped by
   2^14 - 1.
- ITT_addr matches bits [51:8] of the ITT address (256 Byte aligned).
- Size specifies the supported number of bits for the EventID,
minus one
Collection Table Entry (CTE):
 bits: | 63| 62 ... 52 | 51 ... 16 | 15 ... 0 |
values: | V | RES0 | RDBase | ICID |
where:
- V indicates whether the entry is valid. If not, other fields are
not meaningful.
- RES0: reserved field with Should-Be-Zero-or-Preserved behavior.
 - RDBase is the PE number (GICR_TYPER.Processor_Number semantic).
- ICID is the collection ID
Interrupt Translation Entry (ITE):
bits: | 63 ... 48 | 47 ... 16 | 15 ... 0 |
values: | next | pINTID | ICID |
where:
 - next: equals 0 if this entry is the last one; otherwise it corresponds
   to the EventID offset to the next ITE, capped by 2^16 - 1.
- pINTID is the physical LPI ID; if zero, it means the entry is not valid
and other fields are not meaningful.
- ICID is the collection ID
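 For illustration, a small decoder for a REV0 DTE, derived from the
 layout above (the struct and function names are ours, not kernel API;
 le64toh() converts the little-endian entry to host order first):

    #include <endian.h>
    #include <stdbool.h>
    #include <stdint.h>

    struct dte {
            bool     valid;     /* bit 63 */
            uint32_t next;      /* bits 62..49: DeviceID offset to next DTE */
            uint64_t itt_addr;  /* bits 48..5 hold ITT address bits [51:8] */
            uint8_t  size;      /* bits 4..0: EventID width minus one */
    };

    static struct dte dte_decode(uint64_t le_entry)
    {
            uint64_t v = le64toh(le_entry);
            struct dte e = {
                    .valid    = v >> 63,
                    .next     = (v >> 49) & 0x3fff,  /* 14-bit field */
                    .itt_addr = ((v >> 5) & ((1ULL << 44) - 1)) << 8,
                    .size     = v & 0x1f,
            };
            return e;
    }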
@@ -167,11 +167,17 @@ Groups:
  KVM_DEV_ARM_VGIC_CTRL_INIT
    request the initialization of the VGIC, no additional parameter in
    kvm_device_attr.addr.
KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES
save all LPI pending bits into guest RAM pending tables.
The first kB of the pending table is not altered by this operation.
  Errors:
    -ENXIO: VGIC not properly configured as required prior to calling
            this attribute
    -ENODEV: no online VCPU
    -ENOMEM: memory shortage when allocating vgic internal data
    -EFAULT: Invalid guest RAM access
    -EBUSY: One or more VCPUs are running
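  For illustration, the same kvm_device_attr pattern as for the ITS
  tables applies here, on the GICv3 device fd (vgic_fd below is assumed
  to come from KVM_CREATE_DEVICE for KVM_DEV_TYPE_ARM_VGIC_V3):

    /* Flush all LPI pending bits out to the guest RAM pending tables. */
    static int vgic_save_pending_tables(int vgic_fd)
    {
            struct kvm_device_attr attr = {
                    .group = KVM_DEV_ARM_VGIC_GRP_CTRL,
                    .attr  = KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES,
            };

            return ioctl(vgic_fd, KVM_SET_DEVICE_ATTR, &attr);
    }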
  KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO
@@ -196,6 +196,7 @@ struct kvm_arch_memory_slot {
#define KVM_DEV_ARM_VGIC_GRP_REDIST_REGS 5
#define KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS 6
#define KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO 7
+#define KVM_DEV_ARM_VGIC_GRP_ITS_REGS 8
#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT 10
#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK \
	(0x3fffffULL << KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT)
@@ -203,6 +204,9 @@ struct kvm_arch_memory_slot {
#define VGIC_LEVEL_INFO_LINE_LEVEL 0

#define KVM_DEV_ARM_VGIC_CTRL_INIT 0
+#define KVM_DEV_ARM_ITS_SAVE_TABLES 1
+#define KVM_DEV_ARM_ITS_RESTORE_TABLES 2
+#define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES 3

/* KVM_IRQ_LINE irq field index values */
#define KVM_ARM_IRQ_TYPE_SHIFT 24
@@ -18,9 +18,12 @@ KVM := ../../../virt/kvm
kvm-arm-y = $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o $(KVM)/vfio.o

obj-$(CONFIG_KVM_ARM_HOST) += hyp/
obj-y += kvm-arm.o init.o interrupts.o
-obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
-obj-y += coproc.o coproc_a15.o coproc_a7.o mmio.o psci.o perf.o vgic-v3-coproc.o
+obj-y += handle_exit.o guest.o emulate.o reset.o
+obj-y += coproc.o coproc_a15.o coproc_a7.o vgic-v3-coproc.o
+obj-y += $(KVM)/arm/arm.o $(KVM)/arm/mmu.o $(KVM)/arm/mmio.o
+obj-y += $(KVM)/arm/psci.o $(KVM)/arm/perf.o
obj-y += $(KVM)/arm/aarch32.o
obj-y += $(KVM)/arm/vgic/vgic.o
@@ -6,133 +6,6 @@
#undef TRACE_SYSTEM
#define TRACE_SYSTEM kvm
/*
* Tracepoints for entry/exit to guest
*/
TRACE_EVENT(kvm_entry,
TP_PROTO(unsigned long vcpu_pc),
TP_ARGS(vcpu_pc),
TP_STRUCT__entry(
__field( unsigned long, vcpu_pc )
),
TP_fast_assign(
__entry->vcpu_pc = vcpu_pc;
),
TP_printk("PC: 0x%08lx", __entry->vcpu_pc)
);
TRACE_EVENT(kvm_exit,
TP_PROTO(int idx, unsigned int exit_reason, unsigned long vcpu_pc),
TP_ARGS(idx, exit_reason, vcpu_pc),
TP_STRUCT__entry(
__field( int, idx )
__field( unsigned int, exit_reason )
__field( unsigned long, vcpu_pc )
),
TP_fast_assign(
__entry->idx = idx;
__entry->exit_reason = exit_reason;
__entry->vcpu_pc = vcpu_pc;
),
TP_printk("%s: HSR_EC: 0x%04x (%s), PC: 0x%08lx",
__print_symbolic(__entry->idx, kvm_arm_exception_type),
__entry->exit_reason,
__print_symbolic(__entry->exit_reason, kvm_arm_exception_class),
__entry->vcpu_pc)
);
TRACE_EVENT(kvm_guest_fault,
TP_PROTO(unsigned long vcpu_pc, unsigned long hsr,
unsigned long hxfar,
unsigned long long ipa),
TP_ARGS(vcpu_pc, hsr, hxfar, ipa),
TP_STRUCT__entry(
__field( unsigned long, vcpu_pc )
__field( unsigned long, hsr )
__field( unsigned long, hxfar )
__field( unsigned long long, ipa )
),
TP_fast_assign(
__entry->vcpu_pc = vcpu_pc;
__entry->hsr = hsr;
__entry->hxfar = hxfar;
__entry->ipa = ipa;
),
TP_printk("ipa %#llx, hsr %#08lx, hxfar %#08lx, pc %#08lx",
__entry->ipa, __entry->hsr,
__entry->hxfar, __entry->vcpu_pc)
);
TRACE_EVENT(kvm_access_fault,
TP_PROTO(unsigned long ipa),
TP_ARGS(ipa),
TP_STRUCT__entry(
__field( unsigned long, ipa )
),
TP_fast_assign(
__entry->ipa = ipa;
),
TP_printk("IPA: %lx", __entry->ipa)
);
TRACE_EVENT(kvm_irq_line,
TP_PROTO(unsigned int type, int vcpu_idx, int irq_num, int level),
TP_ARGS(type, vcpu_idx, irq_num, level),
TP_STRUCT__entry(
__field( unsigned int, type )
__field( int, vcpu_idx )
__field( int, irq_num )
__field( int, level )
),
TP_fast_assign(
__entry->type = type;
__entry->vcpu_idx = vcpu_idx;
__entry->irq_num = irq_num;
__entry->level = level;
),
TP_printk("Inject %s interrupt (%d), vcpu->idx: %d, num: %d, level: %d",
(__entry->type == KVM_ARM_IRQ_TYPE_CPU) ? "CPU" :
(__entry->type == KVM_ARM_IRQ_TYPE_PPI) ? "VGIC PPI" :
(__entry->type == KVM_ARM_IRQ_TYPE_SPI) ? "VGIC SPI" : "UNKNOWN",
__entry->type, __entry->vcpu_idx, __entry->irq_num, __entry->level)
);
TRACE_EVENT(kvm_mmio_emulate,
TP_PROTO(unsigned long vcpu_pc, unsigned long instr,
unsigned long cpsr),
TP_ARGS(vcpu_pc, instr, cpsr),
TP_STRUCT__entry(
__field( unsigned long, vcpu_pc )
__field( unsigned long, instr )
__field( unsigned long, cpsr )
),
TP_fast_assign(
__entry->vcpu_pc = vcpu_pc;
__entry->instr = instr;
__entry->cpsr = cpsr;
),
TP_printk("Emulate MMIO at: 0x%08lx (instr: %08lx, cpsr: %08lx)",
__entry->vcpu_pc, __entry->instr, __entry->cpsr)
);
/* Architecturally implementation defined CP15 register access */
TRACE_EVENT(kvm_emulate_cp15_imp,
	TP_PROTO(unsigned long Op1, unsigned long Rt1, unsigned long CRn,
@@ -181,87 +54,6 @@ TRACE_EVENT(kvm_wfx,
		__entry->is_wfe ? 'e' : 'i', __entry->vcpu_pc)
);
TRACE_EVENT(kvm_unmap_hva,
TP_PROTO(unsigned long hva),
TP_ARGS(hva),
TP_STRUCT__entry(
__field( unsigned long, hva )
),
TP_fast_assign(
__entry->hva = hva;
),
TP_printk("mmu notifier unmap hva: %#08lx", __entry->hva)
);
TRACE_EVENT(kvm_unmap_hva_range,
TP_PROTO(unsigned long start, unsigned long end),
TP_ARGS(start, end),
TP_STRUCT__entry(
__field( unsigned long, start )
__field( unsigned long, end )
),
TP_fast_assign(
__entry->start = start;
__entry->end = end;
),
TP_printk("mmu notifier unmap range: %#08lx -- %#08lx",
__entry->start, __entry->end)
);
TRACE_EVENT(kvm_set_spte_hva,
TP_PROTO(unsigned long hva),
TP_ARGS(hva),
TP_STRUCT__entry(
__field( unsigned long, hva )
),
TP_fast_assign(
__entry->hva = hva;
),
TP_printk("mmu notifier set pte hva: %#08lx", __entry->hva)
);
TRACE_EVENT(kvm_age_hva,
TP_PROTO(unsigned long start, unsigned long end),
TP_ARGS(start, end),
TP_STRUCT__entry(
__field( unsigned long, start )
__field( unsigned long, end )
),
TP_fast_assign(
__entry->start = start;
__entry->end = end;
),
TP_printk("mmu notifier age hva: %#08lx -- %#08lx",
__entry->start, __entry->end)
);
TRACE_EVENT(kvm_test_age_hva,
TP_PROTO(unsigned long hva),
TP_ARGS(hva),
TP_STRUCT__entry(
__field( unsigned long, hva )
),
TP_fast_assign(
__entry->hva = hva;
),
TP_printk("mmu notifier test age hva: %#08lx", __entry->hva)
);
TRACE_EVENT(kvm_hvc,
	TP_PROTO(unsigned long vcpu_pc, unsigned long r0, unsigned long imm),
	TP_ARGS(vcpu_pc, r0, imm),
@@ -282,45 +74,6 @@ TRACE_EVENT(kvm_hvc,
		__entry->vcpu_pc, __entry->r0, __entry->imm)
);
TRACE_EVENT(kvm_set_way_flush,
TP_PROTO(unsigned long vcpu_pc, bool cache),
TP_ARGS(vcpu_pc, cache),
TP_STRUCT__entry(
__field( unsigned long, vcpu_pc )
__field( bool, cache )
),
TP_fast_assign(
__entry->vcpu_pc = vcpu_pc;
__entry->cache = cache;
),
TP_printk("S/W flush at 0x%016lx (cache %s)",
__entry->vcpu_pc, __entry->cache ? "on" : "off")
);
TRACE_EVENT(kvm_toggle_cache,
TP_PROTO(unsigned long vcpu_pc, bool was, bool now),
TP_ARGS(vcpu_pc, was, now),
TP_STRUCT__entry(
__field( unsigned long, vcpu_pc )
__field( bool, was )
__field( bool, now )
),
TP_fast_assign(
__entry->vcpu_pc = vcpu_pc;
__entry->was = was;
__entry->now = now;
),
TP_printk("VM op at 0x%016lx (cache was %s, now %s)",
__entry->vcpu_pc, __entry->was ? "on" : "off",
__entry->now ? "on" : "off")
);
#endif /* _TRACE_KVM_H */

#undef TRACE_INCLUDE_PATH
@@ -240,6 +240,12 @@ static inline u8 kvm_vcpu_trap_get_fault_type(const struct kvm_vcpu *vcpu)
	return kvm_vcpu_get_hsr(vcpu) & ESR_ELx_FSC_TYPE;
}
static inline int kvm_vcpu_sys_get_rt(struct kvm_vcpu *vcpu)
{
u32 esr = kvm_vcpu_get_hsr(vcpu);
return (esr & ESR_ELx_SYS64_ISS_RT_MASK) >> ESR_ELx_SYS64_ISS_RT_SHIFT;
}
static inline unsigned long kvm_vcpu_get_mpidr_aff(struct kvm_vcpu *vcpu)
{
	return vcpu_sys_reg(vcpu, MPIDR_EL1) & MPIDR_HWID_BITMASK;
@@ -216,6 +216,7 @@ struct kvm_arch_memory_slot {
#define KVM_DEV_ARM_VGIC_GRP_REDIST_REGS 5
#define KVM_DEV_ARM_VGIC_GRP_CPU_SYSREGS 6
#define KVM_DEV_ARM_VGIC_GRP_LEVEL_INFO 7
+#define KVM_DEV_ARM_VGIC_GRP_ITS_REGS 8
#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT 10
#define KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_MASK \
	(0x3fffffULL << KVM_DEV_ARM_VGIC_LINE_LEVEL_INFO_SHIFT)
@@ -223,6 +224,9 @@ struct kvm_arch_memory_slot {
#define VGIC_LEVEL_INFO_LINE_LEVEL 0

#define KVM_DEV_ARM_VGIC_CTRL_INIT 0
+#define KVM_DEV_ARM_ITS_SAVE_TABLES 1
+#define KVM_DEV_ARM_ITS_RESTORE_TABLES 2
+#define KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES 3

/* Device Control API on vcpu fd */
#define KVM_ARM_VCPU_PMU_V3_CTRL 0
@@ -7,14 +7,13 @@ CFLAGS_arm.o := -I.
CFLAGS_mmu.o := -I.

KVM=../../../virt/kvm
-ARM=../../../arch/arm/kvm

obj-$(CONFIG_KVM_ARM_HOST) += kvm.o
obj-$(CONFIG_KVM_ARM_HOST) += hyp/

kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o $(KVM)/eventfd.o $(KVM)/vfio.o
-kvm-$(CONFIG_KVM_ARM_HOST) += $(ARM)/arm.o $(ARM)/mmu.o $(ARM)/mmio.o
-kvm-$(CONFIG_KVM_ARM_HOST) += $(ARM)/psci.o $(ARM)/perf.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arm.o $(KVM)/arm/mmu.o $(KVM)/arm/mmio.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/psci.o $(KVM)/arm/perf.o
kvm-$(CONFIG_KVM_ARM_HOST) += inject_fault.o regmap.o
kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o
@@ -1529,8 +1529,8 @@ static int kvm_handle_cp_64(struct kvm_vcpu *vcpu,
{
	struct sys_reg_params params;
	u32 hsr = kvm_vcpu_get_hsr(vcpu);
-	int Rt = (hsr >> 5) & 0xf;
-	int Rt2 = (hsr >> 10) & 0xf;
+	int Rt = kvm_vcpu_sys_get_rt(vcpu);
+	int Rt2 = (hsr >> 10) & 0x1f;

	params.is_aarch32 = true;
	params.is_32bit = false;
@@ -1586,7 +1586,7 @@ static int kvm_handle_cp_32(struct kvm_vcpu *vcpu,
{
	struct sys_reg_params params;
	u32 hsr = kvm_vcpu_get_hsr(vcpu);
-	int Rt = (hsr >> 5) & 0xf;
+	int Rt = kvm_vcpu_sys_get_rt(vcpu);

	params.is_aarch32 = true;
	params.is_32bit = true;
@@ -1688,7 +1688,7 @@ int kvm_handle_sys_reg(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
	struct sys_reg_params params;
	unsigned long esr = kvm_vcpu_get_hsr(vcpu);
-	int Rt = (esr >> 5) & 0x1f;
+	int Rt = kvm_vcpu_sys_get_rt(vcpu);
	int ret;

	trace_kvm_handle_sys_reg(esr);
@@ -148,7 +148,6 @@ struct vgic_its {
	gpa_t			vgic_its_base;

	bool			enabled;
-	bool			initialized;
	struct vgic_io_device	iodev;
	struct kvm_device	*dev;
@@ -162,6 +161,9 @@ struct vgic_its {
	u32			creadr;
	u32			cwriter;

+	/* migration ABI revision in use */
+	u32			abi_rev;

	/* Protects the device and collection lists */
	struct mutex		its_lock;
	struct list_head	device_list;
@@ -283,6 +285,7 @@ extern struct static_key_false vgic_v2_cpuif_trap;
int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write);
void kvm_vgic_early_init(struct kvm *kvm);
+int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu);
int kvm_vgic_create(struct kvm *kvm, u32 type);
void kvm_vgic_destroy(struct kvm *kvm);
void kvm_vgic_vcpu_early_init(struct kvm_vcpu *vcpu);
@@ -132,6 +132,9 @@
#define GIC_BASER_SHAREABILITY(reg, type) \
	(GIC_BASER_##type << reg##_SHAREABILITY_SHIFT)

+/* encode a size field of width @w containing @n - 1 units */
+#define GIC_ENCODE_SZ(n, w) (((unsigned long)(n) - 1) & GENMASK_ULL(((w) - 1), 0))

#define GICR_PROPBASER_SHAREABILITY_SHIFT	(10)
#define GICR_PROPBASER_INNER_CACHEABILITY_SHIFT	(7)
#define GICR_PROPBASER_OUTER_CACHEABILITY_SHIFT	(56)
@@ -156,6 +159,8 @@
#define GICR_PROPBASER_RaWaWb	GIC_BASER_CACHEABILITY(GICR_PROPBASER, INNER, RaWaWb)

#define GICR_PROPBASER_IDBITS_MASK	(0x1f)
+#define GICR_PROPBASER_ADDRESS(x)	((x) & GENMASK_ULL(51, 12))
+#define GICR_PENDBASER_ADDRESS(x)	((x) & GENMASK_ULL(51, 16))

#define GICR_PENDBASER_SHAREABILITY_SHIFT	(10)
#define GICR_PENDBASER_INNER_CACHEABILITY_SHIFT	(7)
@@ -232,12 +237,18 @@
#define GITS_CTLR_QUIESCENT	(1U << 31)

#define GITS_TYPER_PLPIS	(1UL << 0)
+#define GITS_TYPER_ITT_ENTRY_SIZE_SHIFT	4
#define GITS_TYPER_IDBITS_SHIFT	8
#define GITS_TYPER_DEVBITS_SHIFT	13
#define GITS_TYPER_DEVBITS(r)	((((r) >> GITS_TYPER_DEVBITS_SHIFT) & 0x1f) + 1)
#define GITS_TYPER_PTA		(1UL << 19)
#define GITS_TYPER_HWCOLLCNT_SHIFT	24

+#define GITS_IIDR_REV_SHIFT	12
+#define GITS_IIDR_REV_MASK	(0xf << GITS_IIDR_REV_SHIFT)
+#define GITS_IIDR_REV(r)	(((r) >> GITS_IIDR_REV_SHIFT) & 0xf)
+#define GITS_IIDR_PRODUCTID_SHIFT	24

#define GITS_CBASER_VALID	(1ULL << 63)
#define GITS_CBASER_SHAREABILITY_SHIFT	(10)
#define GITS_CBASER_INNER_CACHEABILITY_SHIFT	(59)
@@ -290,6 +301,7 @@
#define GITS_BASER_TYPE(r)	(((r) >> GITS_BASER_TYPE_SHIFT) & 7)
#define GITS_BASER_ENTRY_SIZE_SHIFT	(48)
#define GITS_BASER_ENTRY_SIZE(r)	((((r) >> GITS_BASER_ENTRY_SIZE_SHIFT) & 0x1f) + 1)
+#define GITS_BASER_ENTRY_SIZE_MASK	GENMASK_ULL(52, 48)
#define GITS_BASER_SHAREABILITY_SHIFT	(10)
#define GITS_BASER_InnerShareable \
	GIC_BASER_SHAREABILITY(GITS_BASER, InnerShareable)
@@ -337,9 +349,11 @@
#define E_ITS_INT_UNMAPPED_INTERRUPT		0x010307
#define E_ITS_CLEAR_UNMAPPED_INTERRUPT		0x010507
#define E_ITS_MAPD_DEVICE_OOR			0x010801
+#define E_ITS_MAPD_ITTSIZE_OOR			0x010802
#define E_ITS_MAPC_PROCNUM_OOR			0x010902
#define E_ITS_MAPC_COLLECTION_OOR		0x010903
#define E_ITS_MAPTI_UNMAPPED_DEVICE		0x010a04
+#define E_ITS_MAPTI_ID_OOR			0x010a05
#define E_ITS_MAPTI_PHYSICALID_OOR		0x010a06
#define E_ITS_INV_UNMAPPED_INTERRUPT		0x010c07
#define E_ITS_INVALL_UNMAPPED_COLLECTION	0x010d09
@@ -499,6 +499,17 @@ static inline struct kvm_vcpu *kvm_get_vcpu_by_id(struct kvm *kvm, int id)
	return NULL;
}
static inline int kvm_vcpu_get_idx(struct kvm_vcpu *vcpu)
{
struct kvm_vcpu *tmp;
int idx;
kvm_for_each_vcpu(idx, tmp, vcpu->kvm)
if (tmp == vcpu)
return idx;
BUG();
}
#define kvm_for_each_memslot(memslot, slots)	\
	for (memslot = &slots->memslots[0];	\
	      memslot < slots->memslots + KVM_MEM_SLOTS_NUM && memslot->npages;\
@@ -332,7 +332,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
	kvm_arm_reset_debug_ptr(vcpu);

-	return 0;
+	return kvm_vgic_vcpu_init(vcpu);
}

void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
@@ -7,26 +7,250 @@
#define TRACE_SYSTEM kvm

/*
- * Tracepoints for vgic
+ * Tracepoints for entry/exit to guest
 */
-TRACE_EVENT(vgic_update_irq_pending,
-	TP_PROTO(unsigned long vcpu_id, __u32 irq, bool level),
-	TP_ARGS(vcpu_id, irq, level),
+TRACE_EVENT(kvm_entry,
+	TP_PROTO(unsigned long vcpu_pc),
+	TP_ARGS(vcpu_pc),
	TP_STRUCT__entry(
-		__field( unsigned long, vcpu_id )
-		__field( __u32, irq )
-		__field( bool, level )
+		__field( unsigned long, vcpu_pc )
	),
	TP_fast_assign(
-		__entry->vcpu_id = vcpu_id;
-		__entry->irq = irq;
+		__entry->vcpu_pc = vcpu_pc;
	),
+	TP_printk("PC: 0x%08lx", __entry->vcpu_pc)
);
TRACE_EVENT(kvm_exit,
TP_PROTO(int idx, unsigned int exit_reason, unsigned long vcpu_pc),
TP_ARGS(idx, exit_reason, vcpu_pc),
TP_STRUCT__entry(
__field( int, idx )
__field( unsigned int, exit_reason )
__field( unsigned long, vcpu_pc )
),
TP_fast_assign(
__entry->idx = idx;
__entry->exit_reason = exit_reason;
__entry->vcpu_pc = vcpu_pc;
),
TP_printk("%s: HSR_EC: 0x%04x (%s), PC: 0x%08lx",
__print_symbolic(__entry->idx, kvm_arm_exception_type),
__entry->exit_reason,
__print_symbolic(__entry->exit_reason, kvm_arm_exception_class),
__entry->vcpu_pc)
);
TRACE_EVENT(kvm_guest_fault,
TP_PROTO(unsigned long vcpu_pc, unsigned long hsr,
unsigned long hxfar,
unsigned long long ipa),
TP_ARGS(vcpu_pc, hsr, hxfar, ipa),
TP_STRUCT__entry(
__field( unsigned long, vcpu_pc )
__field( unsigned long, hsr )
__field( unsigned long, hxfar )
__field( unsigned long long, ipa )
),
TP_fast_assign(
__entry->vcpu_pc = vcpu_pc;
__entry->hsr = hsr;
__entry->hxfar = hxfar;
__entry->ipa = ipa;
),
TP_printk("ipa %#llx, hsr %#08lx, hxfar %#08lx, pc %#08lx",
__entry->ipa, __entry->hsr,
__entry->hxfar, __entry->vcpu_pc)
);
TRACE_EVENT(kvm_access_fault,
TP_PROTO(unsigned long ipa),
TP_ARGS(ipa),
TP_STRUCT__entry(
__field( unsigned long, ipa )
),
TP_fast_assign(
__entry->ipa = ipa;
),
TP_printk("IPA: %lx", __entry->ipa)
);
TRACE_EVENT(kvm_irq_line,
TP_PROTO(unsigned int type, int vcpu_idx, int irq_num, int level),
TP_ARGS(type, vcpu_idx, irq_num, level),
TP_STRUCT__entry(
__field( unsigned int, type )
__field( int, vcpu_idx )
__field( int, irq_num )
__field( int, level )
),
TP_fast_assign(
__entry->type = type;
__entry->vcpu_idx = vcpu_idx;
__entry->irq_num = irq_num;
		__entry->level = level;
	),
-	TP_printk("VCPU: %ld, IRQ %d, level: %d",
-		__entry->vcpu_id, __entry->irq, __entry->level)
+	TP_printk("Inject %s interrupt (%d), vcpu->idx: %d, num: %d, level: %d",
+		(__entry->type == KVM_ARM_IRQ_TYPE_CPU) ? "CPU" :
(__entry->type == KVM_ARM_IRQ_TYPE_PPI) ? "VGIC PPI" :
(__entry->type == KVM_ARM_IRQ_TYPE_SPI) ? "VGIC SPI" : "UNKNOWN",
__entry->type, __entry->vcpu_idx, __entry->irq_num, __entry->level)
);
TRACE_EVENT(kvm_mmio_emulate,
TP_PROTO(unsigned long vcpu_pc, unsigned long instr,
unsigned long cpsr),
TP_ARGS(vcpu_pc, instr, cpsr),
TP_STRUCT__entry(
__field( unsigned long, vcpu_pc )
__field( unsigned long, instr )
__field( unsigned long, cpsr )
),
TP_fast_assign(
__entry->vcpu_pc = vcpu_pc;
__entry->instr = instr;
__entry->cpsr = cpsr;
),
TP_printk("Emulate MMIO at: 0x%08lx (instr: %08lx, cpsr: %08lx)",
__entry->vcpu_pc, __entry->instr, __entry->cpsr)
);
TRACE_EVENT(kvm_unmap_hva,
TP_PROTO(unsigned long hva),
TP_ARGS(hva),
TP_STRUCT__entry(
__field( unsigned long, hva )
),
TP_fast_assign(
__entry->hva = hva;
),
TP_printk("mmu notifier unmap hva: %#08lx", __entry->hva)
);
TRACE_EVENT(kvm_unmap_hva_range,
TP_PROTO(unsigned long start, unsigned long end),
TP_ARGS(start, end),
TP_STRUCT__entry(
__field( unsigned long, start )
__field( unsigned long, end )
),
TP_fast_assign(
__entry->start = start;
__entry->end = end;
),
TP_printk("mmu notifier unmap range: %#08lx -- %#08lx",
__entry->start, __entry->end)
);
TRACE_EVENT(kvm_set_spte_hva,
TP_PROTO(unsigned long hva),
TP_ARGS(hva),
TP_STRUCT__entry(
__field( unsigned long, hva )
),
TP_fast_assign(
__entry->hva = hva;
),
TP_printk("mmu notifier set pte hva: %#08lx", __entry->hva)
);
TRACE_EVENT(kvm_age_hva,
TP_PROTO(unsigned long start, unsigned long end),
TP_ARGS(start, end),
TP_STRUCT__entry(
__field( unsigned long, start )
__field( unsigned long, end )
),
TP_fast_assign(
__entry->start = start;
__entry->end = end;
),
TP_printk("mmu notifier age hva: %#08lx -- %#08lx",
__entry->start, __entry->end)
);
TRACE_EVENT(kvm_test_age_hva,
TP_PROTO(unsigned long hva),
TP_ARGS(hva),
TP_STRUCT__entry(
__field( unsigned long, hva )
),
TP_fast_assign(
__entry->hva = hva;
),
TP_printk("mmu notifier test age hva: %#08lx", __entry->hva)
);
TRACE_EVENT(kvm_set_way_flush,
TP_PROTO(unsigned long vcpu_pc, bool cache),
TP_ARGS(vcpu_pc, cache),
TP_STRUCT__entry(
__field( unsigned long, vcpu_pc )
__field( bool, cache )
),
TP_fast_assign(
__entry->vcpu_pc = vcpu_pc;
__entry->cache = cache;
),
TP_printk("S/W flush at 0x%016lx (cache %s)",
__entry->vcpu_pc, __entry->cache ? "on" : "off")
);
TRACE_EVENT(kvm_toggle_cache,
TP_PROTO(unsigned long vcpu_pc, bool was, bool now),
TP_ARGS(vcpu_pc, was, now),
TP_STRUCT__entry(
__field( unsigned long, vcpu_pc )
__field( bool, was )
__field( bool, now )
),
TP_fast_assign(
__entry->vcpu_pc = vcpu_pc;
__entry->was = was;
__entry->now = now;
),
TP_printk("VM op at 0x%016lx (cache was %s, now %s)",
__entry->vcpu_pc, __entry->was ? "on" : "off",
__entry->now ? "on" : "off")
);
/*
#if !defined(_TRACE_VGIC_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_VGIC_H
#include <linux/tracepoint.h>
#undef TRACE_SYSTEM
#define TRACE_SYSTEM kvm
TRACE_EVENT(vgic_update_irq_pending,
TP_PROTO(unsigned long vcpu_id, __u32 irq, bool level),
TP_ARGS(vcpu_id, irq, level),
TP_STRUCT__entry(
__field( unsigned long, vcpu_id )
__field( __u32, irq )
__field( bool, level )
),
TP_fast_assign(
__entry->vcpu_id = vcpu_id;
__entry->irq = irq;
__entry->level = level;
),
TP_printk("VCPU: %ld, IRQ %d, level: %d",
__entry->vcpu_id, __entry->irq, __entry->level)
);
#endif /* _TRACE_VGIC_H */
#undef TRACE_INCLUDE_PATH
#define TRACE_INCLUDE_PATH ../../../virt/kvm/arm/vgic
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_FILE trace
/* This part must be outside protection */
#include <trace/define_trace.h>
@@ -227,10 +227,27 @@ static int kvm_vgic_dist_init(struct kvm *kvm, unsigned int nr_spis)
}
/**
- * kvm_vgic_vcpu_init() - Enable the VCPU interface
- * @vcpu: the VCPU which's VGIC should be enabled
+ * kvm_vgic_vcpu_init() - Register VCPU-specific KVM iodevs
+ * @vcpu: pointer to the VCPU being created and initialized
 */
-static void kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
+int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu)
{
int ret = 0;
struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
if (!irqchip_in_kernel(vcpu->kvm))
return 0;
/*
* If we are creating a VCPU with a GICv3 we must also register the
* KVM io device for the redistributor that belongs to this VCPU.
*/
if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3)
ret = vgic_register_redist_iodev(vcpu);
return ret;
}
static void kvm_vgic_vcpu_enable(struct kvm_vcpu *vcpu)
{
	if (kvm_vgic_global_state.type == VGIC_V2)
		vgic_v2_enable(vcpu);
@@ -269,7 +286,7 @@ int vgic_init(struct kvm *kvm)
		dist->msis_require_devid = true;

	kvm_for_each_vcpu(i, vcpu, kvm)
-		kvm_vgic_vcpu_init(vcpu);
+		kvm_vgic_vcpu_enable(vcpu);

	ret = kvm_vgic_setup_default_irq_routing(kvm);
	if (ret)
This diff is collapsed.
@@ -37,6 +37,14 @@ int vgic_check_ioaddr(struct kvm *kvm, phys_addr_t *ioaddr,
	return 0;
}
static int vgic_check_type(struct kvm *kvm, int type_needed)
{
if (kvm->arch.vgic.vgic_model != type_needed)
return -ENODEV;
else
return 0;
}
/**
 * kvm_vgic_addr - set or get vgic VM base addresses
 * @kvm: pointer to the vm struct
@@ -57,40 +65,41 @@ int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write)
{
	int r = 0;
	struct vgic_dist *vgic = &kvm->arch.vgic;
-	int type_needed;
	phys_addr_t *addr_ptr, alignment;

	mutex_lock(&kvm->lock);
	switch (type) {
	case KVM_VGIC_V2_ADDR_TYPE_DIST:
-		type_needed = KVM_DEV_TYPE_ARM_VGIC_V2;
+		r = vgic_check_type(kvm, KVM_DEV_TYPE_ARM_VGIC_V2);
		addr_ptr = &vgic->vgic_dist_base;
		alignment = SZ_4K;
		break;
	case KVM_VGIC_V2_ADDR_TYPE_CPU:
-		type_needed = KVM_DEV_TYPE_ARM_VGIC_V2;
+		r = vgic_check_type(kvm, KVM_DEV_TYPE_ARM_VGIC_V2);
		addr_ptr = &vgic->vgic_cpu_base;
		alignment = SZ_4K;
		break;
	case KVM_VGIC_V3_ADDR_TYPE_DIST:
-		type_needed = KVM_DEV_TYPE_ARM_VGIC_V3;
+		r = vgic_check_type(kvm, KVM_DEV_TYPE_ARM_VGIC_V3);
		addr_ptr = &vgic->vgic_dist_base;
		alignment = SZ_64K;
		break;
	case KVM_VGIC_V3_ADDR_TYPE_REDIST:
-		type_needed = KVM_DEV_TYPE_ARM_VGIC_V3;
+		r = vgic_check_type(kvm, KVM_DEV_TYPE_ARM_VGIC_V3);
+		if (r)
+			break;
+		if (write) {
+			r = vgic_v3_set_redist_base(kvm, *addr);
+			goto out;
+		}
		addr_ptr = &vgic->vgic_redist_base;
-		alignment = SZ_64K;
		break;
	default:
		r = -ENODEV;
-		goto out;
	}

-	if (vgic->vgic_model != type_needed) {
-		r = -ENODEV;
+	if (r)
		goto out;
-	}

	if (write) {
		r = vgic_check_ioaddr(kvm, addr_ptr, *addr, alignment);
@@ -259,13 +268,13 @@ static void unlock_vcpus(struct kvm *kvm, int vcpu_lock_idx)
	}
}
-static void unlock_all_vcpus(struct kvm *kvm)
+void unlock_all_vcpus(struct kvm *kvm)
{
	unlock_vcpus(kvm, atomic_read(&kvm->online_vcpus) - 1);
}

/* Returns true if all vcpus were locked, false otherwise */
-static bool lock_all_vcpus(struct kvm *kvm)
+bool lock_all_vcpus(struct kvm *kvm)
{
	struct kvm_vcpu *tmp_vcpu;
	int c;
@@ -580,6 +589,24 @@ static int vgic_v3_set_attr(struct kvm_device *dev,
		reg = tmp32;
		return vgic_v3_attr_regs_access(dev, attr, &reg, true);
	}
case KVM_DEV_ARM_VGIC_GRP_CTRL: {
int ret;
switch (attr->attr) {
case KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES:
mutex_lock(&dev->kvm->lock);
if (!lock_all_vcpus(dev->kvm)) {
mutex_unlock(&dev->kvm->lock);
return -EBUSY;
}
ret = vgic_v3_save_pending_tables(dev->kvm);
unlock_all_vcpus(dev->kvm);
mutex_unlock(&dev->kvm->lock);
return ret;
}
break;
}
	}
	return -ENXIO;
}
@@ -658,6 +685,8 @@ static int vgic_v3_has_attr(struct kvm_device *dev,
		switch (attr->attr) {
		case KVM_DEV_ARM_VGIC_CTRL_INIT:
			return 0;
case KVM_DEV_ARM_VGIC_SAVE_PENDING_TABLES:
return 0;
		}
	}
	return -ENXIO;
@@ -556,16 +556,38 @@ unsigned int vgic_v3_init_dist_iodev(struct vgic_io_device *dev)
	return SZ_64K;
}
-int vgic_register_redist_iodevs(struct kvm *kvm, gpa_t redist_base_address)
+/**
+ * vgic_register_redist_iodev - register a single redist iodev
+ * @vcpu: The VCPU to which the redistributor belongs
+ *
+ * Register a KVM iodev for this VCPU's redistributor using the address
+ * provided.
+ *
+ * Return 0 on success, -ERRNO otherwise.
+ */
+int vgic_register_redist_iodev(struct kvm_vcpu *vcpu)
{
-	struct kvm_vcpu *vcpu;
-	int c, ret = 0;
-	kvm_for_each_vcpu(c, vcpu, kvm) {
-		gpa_t rd_base = redist_base_address + c * SZ_64K * 2;
-		gpa_t sgi_base = rd_base + SZ_64K;
+	struct kvm *kvm = vcpu->kvm;
+	struct vgic_dist *vgic = &kvm->arch.vgic;
	struct vgic_io_device *rd_dev = &vcpu->arch.vgic_cpu.rd_iodev;
	struct vgic_io_device *sgi_dev = &vcpu->arch.vgic_cpu.sgi_iodev;
+	gpa_t rd_base, sgi_base;
+	int ret;
/*
* We may be creating VCPUs before having set the base address for the
* redistributor region, in which case we will come back to this
* function for all VCPUs when the base address is set. Just return
* without doing any work for now.
*/
if (IS_VGIC_ADDR_UNDEF(vgic->vgic_redist_base))
return 0;
if (!vgic_v3_check_base(kvm))
return -EINVAL;
rd_base = vgic->vgic_redist_base + kvm_vcpu_get_idx(vcpu) * SZ_64K * 2;
sgi_base = rd_base + SZ_64K;
	kvm_iodevice_init(&rd_dev->dev, &kvm_io_gic_ops);
	rd_dev->base_addr = rd_base;
@@ -580,7 +602,7 @@ int vgic_register_redist_iodevs(struct kvm *kvm, gpa_t redist_base_address)
	mutex_unlock(&kvm->slots_lock);
	if (ret)
-		break;
+		return ret;
	kvm_iodevice_init(&sgi_dev->dev, &kvm_io_gic_ops);
	sgi_dev->base_addr = sgi_base;
@@ -593,30 +615,71 @@ int vgic_register_redist_iodevs(struct kvm *kvm, gpa_t redist_base_address)
	ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, sgi_base,
				      SZ_64K, &sgi_dev->dev);
	mutex_unlock(&kvm->slots_lock);
-	if (ret) {
+	if (ret)
		kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS,
					  &rd_dev->dev);

+	return ret;
}
static void vgic_unregister_redist_iodev(struct kvm_vcpu *vcpu)
{
struct vgic_io_device *rd_dev = &vcpu->arch.vgic_cpu.rd_iodev;
struct vgic_io_device *sgi_dev = &vcpu->arch.vgic_cpu.sgi_iodev;
kvm_io_bus_unregister_dev(vcpu->kvm, KVM_MMIO_BUS, &rd_dev->dev);
kvm_io_bus_unregister_dev(vcpu->kvm, KVM_MMIO_BUS, &sgi_dev->dev);
}
static int vgic_register_all_redist_iodevs(struct kvm *kvm)
{
struct kvm_vcpu *vcpu;
int c, ret = 0;
kvm_for_each_vcpu(c, vcpu, kvm) {
ret = vgic_register_redist_iodev(vcpu);
if (ret)
			break;
	}
	if (ret) {
		/* The current c failed, so we start with the previous one. */
		for (c--; c >= 0; c--) {
-			struct vgic_cpu *vgic_cpu;

			vcpu = kvm_get_vcpu(kvm, c);
-			vgic_cpu = &vcpu->arch.vgic_cpu;
-			kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS,
-						  &vgic_cpu->rd_iodev.dev);
-			kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS,
-						  &vgic_cpu->sgi_iodev.dev);
+			vgic_unregister_redist_iodev(vcpu);
		}
	}

	return ret;
}
int vgic_v3_set_redist_base(struct kvm *kvm, u64 addr)
{
struct vgic_dist *vgic = &kvm->arch.vgic;
int ret;
/* vgic_check_ioaddr makes sure we don't do this twice */
ret = vgic_check_ioaddr(kvm, &vgic->vgic_redist_base, addr, SZ_64K);
if (ret)
return ret;
vgic->vgic_redist_base = addr;
if (!vgic_v3_check_base(kvm)) {
vgic->vgic_redist_base = VGIC_ADDR_UNDEF;
return -EINVAL;
}
/*
* Register iodevs for each existing VCPU. Adding more VCPUs
* afterwards will register the iodevs when needed.
*/
ret = vgic_register_all_redist_iodevs(kvm);
if (ret)
return ret;
return 0;
}
int vgic_v3_has_attr_regs(struct kvm_device *dev, struct kvm_device_attr *attr)
{
	const struct vgic_register_region *region;
@@ -446,13 +446,12 @@ static int match_region(const void *key, const void *elt)
	return 0;
}

-/* Find the proper register handler entry given a certain address offset. */
-static const struct vgic_register_region *
-vgic_find_mmio_region(const struct vgic_register_region *region, int nr_regions,
-		      unsigned int offset)
+const struct vgic_register_region *
+vgic_find_mmio_region(const struct vgic_register_region *regions,
+		      int nr_regions, unsigned int offset)
{
-	return bsearch((void *)(uintptr_t)offset, region, nr_regions,
-		       sizeof(region[0]), match_region);
+	return bsearch((void *)(uintptr_t)offset, regions, nr_regions,
+		       sizeof(regions[0]), match_region);
}

void vgic_set_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcr)
@@ -36,8 +36,13 @@ struct vgic_register_region {
	};
	unsigned long (*uaccess_read)(struct kvm_vcpu *vcpu, gpa_t addr,
				      unsigned int len);
+	union {
		void (*uaccess_write)(struct kvm_vcpu *vcpu, gpa_t addr,
				      unsigned int len, unsigned long val);
+		int (*uaccess_its_write)(struct kvm *kvm, struct vgic_its *its,
+					 gpa_t addr, unsigned int len,
+					 unsigned long val);
+	};
};

extern struct kvm_io_device_ops kvm_io_gic_ops;
@@ -192,4 +197,9 @@ u64 vgic_sanitise_shareability(u64 reg);
u64 vgic_sanitise_field(u64 reg, u64 field_mask, int field_shift,
			u64 (*sanitise_fn)(u64));
/* Find the proper register handler entry given a certain address offset */
const struct vgic_register_region *
vgic_find_mmio_region(const struct vgic_register_region *regions,
int nr_regions, unsigned int offset);
#endif
@@ -234,19 +234,125 @@ void vgic_v3_enable(struct kvm_vcpu *vcpu)
	vgic_v3->vgic_hcr = ICH_HCR_EN;
}
-/* check for overlapping regions and for regions crossing the end of memory */
-static bool vgic_v3_check_base(struct kvm *kvm)
+int vgic_v3_lpi_sync_pending_status(struct kvm *kvm, struct vgic_irq *irq)
+{
struct kvm_vcpu *vcpu;
int byte_offset, bit_nr;
gpa_t pendbase, ptr;
bool status;
u8 val;
int ret;
retry:
vcpu = irq->target_vcpu;
if (!vcpu)
return 0;
pendbase = GICR_PENDBASER_ADDRESS(vcpu->arch.vgic_cpu.pendbaser);
byte_offset = irq->intid / BITS_PER_BYTE;
bit_nr = irq->intid % BITS_PER_BYTE;
ptr = pendbase + byte_offset;
ret = kvm_read_guest(kvm, ptr, &val, 1);
if (ret)
return ret;
status = val & (1 << bit_nr);
spin_lock(&irq->irq_lock);
if (irq->target_vcpu != vcpu) {
spin_unlock(&irq->irq_lock);
goto retry;
}
irq->pending_latch = status;
vgic_queue_irq_unlock(vcpu->kvm, irq);
if (status) {
/* clear consumed data */
val &= ~(1 << bit_nr);
ret = kvm_write_guest(kvm, ptr, &val, 1);
if (ret)
return ret;
}
return 0;
}
/**
* vgic_its_save_pending_tables - Save the pending tables into guest RAM
* kvm lock and all vcpu lock must be held
*/
int vgic_v3_save_pending_tables(struct kvm *kvm)
{
struct vgic_dist *dist = &kvm->arch.vgic;
int last_byte_offset = -1;
struct vgic_irq *irq;
int ret;
list_for_each_entry(irq, &dist->lpi_list_head, lpi_list) {
int byte_offset, bit_nr;
struct kvm_vcpu *vcpu;
gpa_t pendbase, ptr;
bool stored;
u8 val;
vcpu = irq->target_vcpu;
if (!vcpu)
continue;
pendbase = GICR_PENDBASER_ADDRESS(vcpu->arch.vgic_cpu.pendbaser);
byte_offset = irq->intid / BITS_PER_BYTE;
bit_nr = irq->intid % BITS_PER_BYTE;
ptr = pendbase + byte_offset;
if (byte_offset != last_byte_offset) {
ret = kvm_read_guest(kvm, ptr, &val, 1);
if (ret)
return ret;
last_byte_offset = byte_offset;
}
stored = val & (1U << bit_nr);
if (stored == irq->pending_latch)
continue;
if (irq->pending_latch)
val |= 1 << bit_nr;
else
val &= ~(1 << bit_nr);
ret = kvm_write_guest(kvm, ptr, &val, 1);
if (ret)
return ret;
}
return 0;
}
/*
* Check for overlapping regions and for regions crossing the end of memory
* for base addresses which have already been set.
*/
bool vgic_v3_check_base(struct kvm *kvm)
{
	struct vgic_dist *d = &kvm->arch.vgic;
	gpa_t redist_size = KVM_VGIC_V3_REDIST_SIZE;

	redist_size *= atomic_read(&kvm->online_vcpus);
-	if (d->vgic_dist_base + KVM_VGIC_V3_DIST_SIZE < d->vgic_dist_base)
+	if (!IS_VGIC_ADDR_UNDEF(d->vgic_dist_base) &&
+	    d->vgic_dist_base + KVM_VGIC_V3_DIST_SIZE < d->vgic_dist_base)
		return false;
-	if (d->vgic_redist_base + redist_size < d->vgic_redist_base)
+	if (!IS_VGIC_ADDR_UNDEF(d->vgic_redist_base) &&
+	    d->vgic_redist_base + redist_size < d->vgic_redist_base)
		return false;
/* Both base addresses must be set to check if they overlap */
if (IS_VGIC_ADDR_UNDEF(d->vgic_dist_base) ||
IS_VGIC_ADDR_UNDEF(d->vgic_redist_base))
return true;
	if (d->vgic_dist_base + KVM_VGIC_V3_DIST_SIZE <= d->vgic_redist_base)
		return true;

	if (d->vgic_redist_base + redist_size <= d->vgic_dist_base)
@@ -291,20 +397,6 @@ int vgic_v3_map_resources(struct kvm *kvm)
		goto out;
	}
ret = vgic_register_redist_iodevs(kvm, dist->vgic_redist_base);
if (ret) {
kvm_err("Unable to register VGICv3 redist MMIO regions\n");
goto out;
}
if (vgic_has_its(kvm)) {
ret = vgic_register_its_iodevs(kvm);
if (ret) {
kvm_err("Unable to register VGIC ITS MMIO regions\n");
goto out;
}
}
	dist->ready = true;

out:
@@ -21,7 +21,7 @@
#include "vgic.h"

#define CREATE_TRACE_POINTS
-#include "../trace.h"
+#include "trace.h"

#ifdef CONFIG_DEBUG_SPINLOCK
#define DEBUG_SPINLOCK_BUG_ON(p) BUG_ON(p)
@@ -73,6 +73,29 @@
				      KVM_REG_ARM_VGIC_SYSREG_CRM_MASK | \
				      KVM_REG_ARM_VGIC_SYSREG_OP2_MASK)
/*
* As per Documentation/virtual/kvm/devices/arm-vgic-its.txt,
* below macros are defined for ITS table entry encoding.
*/
#define KVM_ITS_CTE_VALID_SHIFT 63
#define KVM_ITS_CTE_VALID_MASK BIT_ULL(63)
#define KVM_ITS_CTE_RDBASE_SHIFT 16
#define KVM_ITS_CTE_ICID_MASK GENMASK_ULL(15, 0)
#define KVM_ITS_ITE_NEXT_SHIFT 48
#define KVM_ITS_ITE_PINTID_SHIFT 16
#define KVM_ITS_ITE_PINTID_MASK GENMASK_ULL(47, 16)
#define KVM_ITS_ITE_ICID_MASK GENMASK_ULL(15, 0)
#define KVM_ITS_DTE_VALID_SHIFT 63
#define KVM_ITS_DTE_VALID_MASK BIT_ULL(63)
#define KVM_ITS_DTE_NEXT_SHIFT 49
#define KVM_ITS_DTE_NEXT_MASK GENMASK_ULL(62, 49)
#define KVM_ITS_DTE_ITTADDR_SHIFT 5
#define KVM_ITS_DTE_ITTADDR_MASK GENMASK_ULL(48, 5)
#define KVM_ITS_DTE_SIZE_MASK GENMASK_ULL(4, 0)
#define KVM_ITS_L1E_VALID_MASK BIT_ULL(63)
/* we only support 64 kB translation table page size */
#define KVM_ITS_L1E_ADDR_MASK GENMASK_ULL(51, 16)
static inline bool irq_is_pending(struct vgic_irq *irq)
{
	if (irq->config == VGIC_CONFIG_EDGE)
@@ -157,12 +180,15 @@ void vgic_v3_get_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcr);
void vgic_v3_enable(struct kvm_vcpu *vcpu);
int vgic_v3_probe(const struct gic_kvm_info *info);
int vgic_v3_map_resources(struct kvm *kvm);
-int vgic_register_redist_iodevs(struct kvm *kvm, gpa_t dist_base_address);
+int vgic_v3_lpi_sync_pending_status(struct kvm *kvm, struct vgic_irq *irq);
+int vgic_v3_save_pending_tables(struct kvm *kvm);
+int vgic_v3_set_redist_base(struct kvm *kvm, u64 addr);
+int vgic_register_redist_iodev(struct kvm_vcpu *vcpu);
+bool vgic_v3_check_base(struct kvm *kvm);
void vgic_v3_load(struct kvm_vcpu *vcpu);
void vgic_v3_put(struct kvm_vcpu *vcpu);
-int vgic_register_its_iodevs(struct kvm *kvm);
bool vgic_has_its(struct kvm *kvm);
int kvm_vgic_register_its_device(void);
void vgic_enable_lpis(struct kvm_vcpu *vcpu);
@@ -187,4 +213,7 @@ int vgic_init(struct kvm *kvm);
int vgic_debug_init(struct kvm *kvm);
int vgic_debug_destroy(struct kvm *kvm);
bool lock_all_vcpus(struct kvm *kvm);
void unlock_all_vcpus(struct kvm *kvm);
#endif