Commit 6db167df authored by Linus Torvalds

Merge branch 'for-linus-2' of git://git.linaro.org/people/rmk/linux-arm

Pull ARM updates (part two) from Russell King:

 - breakpoint and perf updates from Will Deacon.

 - hypervisor boot mode updates from Will.

 - support for Power State Coordination Interface via the Hypervisor

 - core ARM support for KVM

* 'for-linus-2' of git://git.linaro.org/people/rmk/linux-arm: (32 commits)
  KVM: ARM: Add maintainer entry for KVM/ARM
  KVM: ARM: Power State Coordination Interface implementation
  KVM: ARM: Handle I/O aborts
  KVM: ARM: Handle guest faults in KVM
  KVM: ARM: VFP userspace interface
  KVM: ARM: Demux CCSIDR in the userspace API
  KVM: ARM: User space API for getting/setting co-proc registers
  KVM: ARM: Emulation framework and CP15 emulation
  KVM: ARM: World-switch implementation
  KVM: ARM: Inject IRQs and FIQs from userspace
  KVM: ARM: Memory virtualization setup
  KVM: ARM: Hypervisor initialization
  KVM: ARM: Initial skeleton to compile KVM support
  ARM: Section based HYP idmap
  ARM: Add page table and page defines needed by KVM
  ARM: perf: simplify __hw_perf_event_init err handling
  ARM: perf: remove unnecessary checks for idx < 0
  ARM: perf: handle armpmu_register failing
  ARM: perf: don't pretend to support counting of L1I writes
  ARM: perf: remove redundant NULL check on cpu_pmu
  ...
parents 32f9aab8 9cb54312
* Power State Coordination Interface (PSCI)
Firmware implementing the PSCI functions described in ARM document number
ARM DEN 0022A ("Power State Coordination Interface System Software on ARM
processors") can be used by Linux to initiate various CPU-centric power
operations.
Issue A of the specification describes functions for CPU suspend, hotplug
and migration of secure software.
Functions are invoked by trapping to the privilege level of the PSCI
firmware (specified as part of the binding below) and passing arguments
in a manner similar to that specified by AAPCS:
	 r0		=> 32-bit Function ID / return value
	{r1 - r3}	=> Parameters
Note that the immediate field of the trapping instruction must be set
to #0.
Main node required properties:
 - compatible    : Must be "arm,psci"

 - method        : The method of calling the PSCI firmware. Permitted
                   values are:

                   "smc" : SMC #0, with the register assignments specified
                           in this binding.

                   "hvc" : HVC #0, with the register assignments specified
                           in this binding.

Main node optional properties:

 - cpu_suspend   : Function ID for CPU_SUSPEND operation
 - cpu_off       : Function ID for CPU_OFF operation
 - cpu_on        : Function ID for CPU_ON operation
 - migrate       : Function ID for MIGRATE operation
Example:
	psci {
		compatible	= "arm,psci";
		method		= "smc";
		cpu_suspend	= <0x95c10000>;
		cpu_off		= <0x95c10001>;
		cpu_on		= <0x95c10002>;
		migrate		= <0x95c10003>;
	};
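For illustration only — a hypothetical helper, not part of this binding or of
Linux's PSCI client code — a caller using the "smc" method would marshal the
arguments per the convention above roughly like this:

#include <linux/types.h>

/* Sketch: invoke a PSCI function via SMC #0; function ID in r0,
 * up to three arguments in r1-r3, 32-bit result back in r0.
 * A real caller must honour the "method" property (smc vs. hvc). */
static u32 psci_smc_invoke(u32 function_id, u32 arg0, u32 arg1, u32 arg2)
{
	register u32 r0 asm("r0") = function_id;
	register u32 r1 asm("r1") = arg0;
	register u32 r2 asm("r2") = arg1;
	register u32 r3 asm("r3") = arg2;

	asm volatile("smc	#0"		/* immediate must be #0 */
		     : "+r" (r0)
		     : "r" (r1), "r" (r2), "r" (r3)
		     : "memory");

	return r0;
}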
@@ -293,7 +293,7 @@ kvm_run' (see below).

4.11 KVM_GET_REGS

Capability: basic
Architectures: all except ARM
Type: vcpu ioctl
Parameters: struct kvm_regs (out)
Returns: 0 on success, -1 on error
@@ -314,7 +314,7 @@ struct kvm_regs {

4.12 KVM_SET_REGS

Capability: basic
Architectures: all except ARM
Type: vcpu ioctl
Parameters: struct kvm_regs (in)
Returns: 0 on success, -1 on error
@@ -600,7 +600,7 @@ struct kvm_fpu {

4.24 KVM_CREATE_IRQCHIP

Capability: KVM_CAP_IRQCHIP
Architectures: x86, ia64, ARM
Type: vm ioctl
Parameters: none
Returns: 0 on success, -1 on error
@@ -608,21 +608,39 @@ Returns: 0 on success, -1 on error

Creates an interrupt controller model in the kernel. On x86, creates a virtual
ioapic, a virtual PIC (two PICs, nested), and sets up future vcpus to have a
local APIC. IRQ routing for GSIs 0-15 is set to both PIC and IOAPIC; GSI 16-23
only go to the IOAPIC. On ia64, an IOSAPIC is created. On ARM, a GIC is
created.
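As a userspace sketch (assuming vm_fd was obtained from KVM_CREATE_VM), the
irqchip is created with a plain vm ioctl taking no argument:

#include <sys/ioctl.h>
#include <linux/kvm.h>

static int create_in_kernel_irqchip(int vm_fd)
{
	/* On ARM this instantiates the in-kernel GIC model. */
	return ioctl(vm_fd, KVM_CREATE_IRQCHIP, 0);
}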
4.25 KVM_IRQ_LINE

Capability: KVM_CAP_IRQCHIP
Architectures: x86, ia64, arm
Type: vm ioctl
Parameters: struct kvm_irq_level
Returns: 0 on success, -1 on error

Sets the level of a GSI input to the interrupt controller model in the kernel.
On some architectures it is required that an interrupt controller model has
been previously created with KVM_CREATE_IRQCHIP. Note that edge-triggered
interrupts require the level to be set to 1 and then back to 0.
ARM can signal an interrupt either at the CPU level, or at the in-kernel irqchip
(GIC), and for in-kernel irqchip can tell the GIC to use PPIs designated for
specific cpus. The irq field is interpreted like this:
  bits:  | 31 ... 24 | 23  ... 16 | 15    ...    0 |
  field: | irq_type  | vcpu_index |     irq_id     |
The irq_type field has the following values:
- irq_type[0]: out-of-kernel GIC: irq_id 0 is IRQ, irq_id 1 is FIQ
- irq_type[1]: in-kernel GIC: SPI, irq_id between 32 and 1019 (incl.)
(the vcpu_index field is ignored)
- irq_type[2]: in-kernel GIC: PPI, irq_id between 16 and 31 (incl.)
(The irq_id field thus corresponds nicely to the IRQ ID in the ARM GIC specs)
In both cases, level is used to raise/lower the line.
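Putting the encoding together, a hypothetical userspace helper (the name is
illustrative, not from the kernel tree) that raises or lowers an SPI on the
in-kernel GIC could look like:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* level is 1 to raise the line, 0 to lower it */
static int kvm_arm_set_spi(int vm_fd, unsigned int irq_id, int level)
{
	struct kvm_irq_level irq = {
		/* irq_type = SPI; vcpu_index ignored; irq_id in bits 15..0 */
		.irq = (KVM_ARM_IRQ_TYPE_SPI << KVM_ARM_IRQ_TYPE_SHIFT) |
		       (irq_id << KVM_ARM_IRQ_NUM_SHIFT),
		.level = level,
	};

	return ioctl(vm_fd, KVM_IRQ_LINE, &irq);
}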
struct kvm_irq_level {
	union {

@@ -1775,6 +1793,27 @@ registers, find a list below:

  PPC   | KVM_REG_PPC_VPA_DTL   | 128
  PPC   | KVM_REG_PPC_EPCR      | 32
ARM registers are mapped using the lower 32 bits.  The upper 16 of that
is the register group type, or coprocessor number:

ARM core registers have the following id bit patterns:
  0x4002 0000 0010 <index into the kvm_regs struct:16>

ARM 32-bit CP15 registers have the following id bit patterns:
  0x4002 0000 000F <zero:1> <crn:4> <crm:4> <opc1:4> <opc2:3>

ARM 64-bit CP15 registers have the following id bit patterns:
  0x4003 0000 000F <zero:1> <zero:4> <crm:4> <opc1:4> <zero:3>

ARM CCSIDR registers are demultiplexed by CSSELR value:
  0x4002 0000 0011 00 <csselr:8>

ARM 32-bit VFP control registers have the following id bit patterns:
  0x4002 0000 0012 1 <regno:12>

ARM 64-bit FP registers have the following id bit patterns:
  0x4002 0000 0012 0 <regno:12>
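For instance, the id of the guest PC (a 32-bit core register) can be composed
from these patterns. A sketch, assuming the KVM_REG_ARM* constants from
<linux/kvm.h> and the KVM_REG_ARM_CORE_REG() helper defined later in this
series:

#include <stddef.h>
#include <linux/kvm.h>

/* Build the 0x4002 0000 0010 <index> id for usr_regs.ARM_pc. */
static __u64 arm_core_reg_pc(void)
{
	return KVM_REG_ARM | KVM_REG_SIZE_U32 | KVM_REG_ARM_CORE |
	       KVM_REG_ARM_CORE_REG(usr_regs.ARM_pc);
}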
4.69 KVM_GET_ONE_REG

Capability: KVM_CAP_ONE_REG

@@ -2127,6 +2166,50 @@ written, then `n_invalid' invalid entries, invalidating any previously
valid entries found.
4.77 KVM_ARM_VCPU_INIT

Capability: basic
Architectures: arm
Type: vcpu ioctl
Parameters: struct kvm_vcpu_init (in)
Returns: 0 on success; -1 on error
Errors:
  EINVAL:    the target is unknown, or the combination of features is invalid.
  ENOENT:    a features bit specified is unknown.
This tells KVM what type of CPU to present to the guest, and what
optional features it should have.  This will cause a reset of the cpu
registers to their initial values.  If this is not called, KVM_RUN will
return ENOEXEC for that vcpu.
Note that because some registers reflect machine topology, all vcpus
should be created before this ioctl is invoked.
Possible features:
- KVM_ARM_VCPU_POWER_OFF: Starts the CPU in a power-off state.
Depends on KVM_CAP_ARM_PSCI.
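A minimal usage sketch (hypothetical userspace helper; vcpu_fd comes from
KVM_CREATE_VCPU) targeting a Cortex-A15 with no optional features:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int vcpu_init_cortex_a15(int vcpu_fd)
{
	struct kvm_vcpu_init init;

	memset(&init, 0, sizeof(init));
	init.target = KVM_ARM_TARGET_CORTEX_A15;
	/* Optionally: init.features[0] |= 1 << KVM_ARM_VCPU_POWER_OFF; */

	return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}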
4.78 KVM_GET_REG_LIST

Capability: basic
Architectures: arm
Type: vcpu ioctl
Parameters: struct kvm_reg_list (in/out)
Returns: 0 on success; -1 on error
Errors:
  E2BIG:     the reg index list is too big to fit in the array specified by
             the user (the number required will be written into n).

struct kvm_reg_list {
	__u64 n; /* number of registers in reg[] */
	__u64 reg[0];
};
This ioctl returns the guest registers that are supported for the
KVM_GET_ONE_REG/KVM_SET_ONE_REG calls.
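The E2BIG contract leads to the usual two-call pattern; a sketch (hypothetical
helper, error handling trimmed):

#include <errno.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static struct kvm_reg_list *vcpu_get_reg_list(int vcpu_fd)
{
	struct kvm_reg_list probe = { .n = 0 }, *list;

	/* First call fails with E2BIG and fills in the required count. */
	if (ioctl(vcpu_fd, KVM_GET_REG_LIST, &probe) == 0 || errno != E2BIG)
		return NULL;

	list = malloc(sizeof(*list) + probe.n * sizeof(__u64));
	if (!list)
		return NULL;

	list->n = probe.n;
	if (ioctl(vcpu_fd, KVM_GET_REG_LIST, list) != 0) {
		free(list);
		return NULL;
	}
	return list;	/* caller frees */
}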
5. The kvm_run structure
------------------------

@@ -4488,6 +4488,15 @@ F: arch/s390/include/asm/kvm*
F:	arch/s390/kvm/
F:	drivers/s390/kvm/
KERNEL VIRTUAL MACHINE (KVM) FOR ARM
M:	Christoffer Dall <cdall@cs.columbia.edu>
L:	kvmarm@lists.cs.columbia.edu
W:	http://systems.cs.columbia.edu/projects/kvm-arm
S:	Maintained
F:	arch/arm/include/uapi/asm/kvm*
F:	arch/arm/include/asm/kvm*
F:	arch/arm/kvm/

KEXEC
M:	Eric Biederman <ebiederm@xmission.com>
W:	http://kernel.org/pub/linux/utils/kernel/kexec/
@@ -1619,6 +1619,16 @@ config HOTPLUG_CPU
	  Say Y here to experiment with turning CPUs off and on.  CPUs
	  can be controlled through /sys/devices/system/cpu.

config ARM_PSCI
	bool "Support for the ARM Power State Coordination Interface (PSCI)"
	depends on CPU_V7
	help
	  Say Y here if you want Linux to communicate with system firmware
	  implementing the PSCI specification for CPU-centric power
	  management operations described in ARM document number ARM DEN
	  0022A ("Power State Coordination Interface System Software on
	  ARM processors").

config LOCAL_TIMERS
	bool "Use local timer interrupts"
	depends on SMP

@@ -2324,3 +2334,5 @@ source "security/Kconfig"
source "crypto/Kconfig"
source "lib/Kconfig"

source "arch/arm/kvm/Kconfig"
@@ -252,6 +252,7 @@ core-$(CONFIG_FPE_NWFPE)	+= arch/arm/nwfpe/
core-$(CONFIG_FPE_FASTFPE)	+= $(FASTFPE_OBJ)
core-$(CONFIG_VFP)		+= arch/arm/vfp/
core-$(CONFIG_XEN)		+= arch/arm/xen/
core-$(CONFIG_KVM_ARM_HOST)	+= arch/arm/kvm/

# If we have a machine-specific directory, then include it in the build.
core-y				+= arch/arm/kernel/ arch/arm/mm/ arch/arm/common/
@@ -246,18 +246,14 @@
 *
 * This macro is intended for forcing the CPU into SVC mode at boot time.
 * you cannot return to the original mode.
 */
.macro safe_svcmode_maskall reg:req
#if __LINUX_ARM_ARCH__ >= 6
	mrs	\reg , cpsr
	eor	\reg, \reg, #HYP_MODE
	tst	\reg, #MODE_MASK
	bic	\reg , \reg , #MODE_MASK
	orr	\reg , \reg , #PSR_I_BIT | PSR_F_BIT | SVC_MODE
THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
	bne	1f
	orr	\reg, \reg, #PSR_A_BIT
@@ -64,6 +64,24 @@ extern unsigned int processor_id;
#define read_cpuid_ext(reg)	0
#endif

#define ARM_CPU_IMP_ARM			0x41
#define ARM_CPU_IMP_INTEL		0x69

#define ARM_CPU_PART_ARM1136		0xB360
#define ARM_CPU_PART_ARM1156		0xB560
#define ARM_CPU_PART_ARM1176		0xB760
#define ARM_CPU_PART_ARM11MPCORE	0xB020
#define ARM_CPU_PART_CORTEX_A8		0xC080
#define ARM_CPU_PART_CORTEX_A9		0xC090
#define ARM_CPU_PART_CORTEX_A5		0xC050
#define ARM_CPU_PART_CORTEX_A15		0xC0F0
#define ARM_CPU_PART_CORTEX_A7		0xC070

#define ARM_CPU_XSCALE_ARCH_MASK	0xe000
#define ARM_CPU_XSCALE_ARCH_V1		0x2000
#define ARM_CPU_XSCALE_ARCH_V2		0x4000
#define ARM_CPU_XSCALE_ARCH_V3		0x6000

/*
 * The CPU ID never changes at run time, so we might as well tell the
 * compiler that it's constant.  Use this function to read the CPU ID
@@ -74,6 +92,21 @@ static inline unsigned int __attribute_const__ read_cpuid_id(void)
	return read_cpuid(CPUID_ID);
}

static inline unsigned int __attribute_const__ read_cpuid_implementor(void)
{
	return (read_cpuid_id() & 0xFF000000) >> 24;
}

static inline unsigned int __attribute_const__ read_cpuid_part_number(void)
{
	return read_cpuid_id() & 0xFFF0;
}

static inline unsigned int __attribute_const__ xscale_cpu_arch_version(void)
{
	return read_cpuid_part_number() & ARM_CPU_XSCALE_ARCH_MASK;
}

static inline unsigned int __attribute_const__ read_cpuid_cachetype(void)
{
	return read_cpuid(CPUID_CACHETYPE);
@@ -2,6 +2,7 @@
#define __ASMARM_CTI_H

#include <asm/io.h>
#include <asm/hardware/coresight.h>

/* The registers' definition is from section 3.2 of
 * Embedded Cross Trigger Revision: r0p0
@@ -35,11 +36,6 @@
#define LOCKACCESS		0xFB0
#define LOCKSTATUS		0xFB4

/**
 * struct cti - cross trigger interface struct
 * @base: mapped virtual address for the cti base
@@ -146,7 +142,7 @@ static inline void cti_irq_ack(struct cti *cti)
 */
static inline void cti_unlock(struct cti *cti)
{
	__raw_writel(CS_LAR_KEY, cti->base + LOCKACCESS);
}

/**
@@ -158,6 +154,6 @@ static inline void cti_unlock(struct cti *cti)
 */
static inline void cti_lock(struct cti *cti)
{
	__raw_writel(~CS_LAR_KEY, cti->base + LOCKACCESS);
}
#endif
@@ -36,7 +36,7 @@

/* CoreSight Component Registers */
#define CSCR_CLASS	0xff4

#define CS_LAR_KEY	0xc5acce55

/* ETM control register, "ETM Architecture", 3.3.1 */
#define ETMR_CTRL	0
@@ -147,11 +147,11 @@

#define etm_lock(t) do { etm_writel((t), 0, CSMR_LOCKACCESS); } while (0)
#define etm_unlock(t) \
	do { etm_writel((t), CS_LAR_KEY, CSMR_LOCKACCESS); } while (0)

#define etb_lock(t) do { etb_writel((t), 0, CSMR_LOCKACCESS); } while (0)
#define etb_unlock(t) \
	do { etb_writel((t), CS_LAR_KEY, CSMR_LOCKACCESS); } while (0)

#endif /* __ASM_HARDWARE_CORESIGHT_H */
@@ -85,6 +85,9 @@ static inline void decode_ctrl_reg(u32 reg,
#define ARM_DSCR_HDBGEN		(1 << 14)
#define ARM_DSCR_MDBGEN		(1 << 15)

/* OSLSR os lock model bits */
#define ARM_OSLSR_OSLM0		(1 << 0)

/* opcode2 numbers for the co-processor instructions. */
#define ARM_OP2_BVR		4
#define ARM_OP2_BCR		5
@@ -8,6 +8,7 @@
#define __idmap __section(.idmap.text) noinline notrace

extern pgd_t *idmap_pgd;
extern pgd_t *hyp_pgd;

void setup_mm_for_reboot(void);
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef __ARM_KVM_ARM_H__
#define __ARM_KVM_ARM_H__
#include <linux/types.h>
/* Hyp Configuration Register (HCR) bits */
#define HCR_TGE (1 << 27)
#define HCR_TVM (1 << 26)
#define HCR_TTLB (1 << 25)
#define HCR_TPU (1 << 24)
#define HCR_TPC (1 << 23)
#define HCR_TSW (1 << 22)
#define HCR_TAC (1 << 21)
#define HCR_TIDCP (1 << 20)
#define HCR_TSC (1 << 19)
#define HCR_TID3 (1 << 18)
#define HCR_TID2 (1 << 17)
#define HCR_TID1 (1 << 16)
#define HCR_TID0 (1 << 15)
#define HCR_TWE (1 << 14)
#define HCR_TWI (1 << 13)
#define HCR_DC (1 << 12)
#define HCR_BSU (3 << 10)
#define HCR_BSU_IS (1 << 10)
#define HCR_FB (1 << 9)
#define HCR_VA (1 << 8)
#define HCR_VI (1 << 7)
#define HCR_VF (1 << 6)
#define HCR_AMO (1 << 5)
#define HCR_IMO (1 << 4)
#define HCR_FMO (1 << 3)
#define HCR_PTW (1 << 2)
#define HCR_SWIO (1 << 1)
#define HCR_VM 1
/*
* The bits we set in HCR:
* TAC: Trap ACTLR
* TSC: Trap SMC
* TSW: Trap cache operations by set/way
* TWI: Trap WFI
* TIDCP: Trap L2CTLR/L2ECTLR
* BSU_IS: Upgrade barriers to the inner shareable domain
 * FB: Force broadcast of all maintenance operations
* AMO: Override CPSR.A and enable signaling with VA
* IMO: Override CPSR.I and enable signaling with VI
* FMO: Override CPSR.F and enable signaling with VF
* SWIO: Turn set/way invalidates into set/way clean+invalidate
*/
#define HCR_GUEST_MASK (HCR_TSC | HCR_TSW | HCR_TWI | HCR_VM | HCR_BSU_IS | \
HCR_FB | HCR_TAC | HCR_AMO | HCR_IMO | HCR_FMO | \
HCR_SWIO | HCR_TIDCP)
#define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
/* System Control Register (SCTLR) bits */
#define SCTLR_TE (1 << 30)
#define SCTLR_EE (1 << 25)
#define SCTLR_V (1 << 13)
/* Hyp System Control Register (HSCTLR) bits */
#define HSCTLR_TE (1 << 30)
#define HSCTLR_EE (1 << 25)
#define HSCTLR_FI (1 << 21)
#define HSCTLR_WXN (1 << 19)
#define HSCTLR_I (1 << 12)
#define HSCTLR_C (1 << 2)
#define HSCTLR_A (1 << 1)
#define HSCTLR_M 1
#define HSCTLR_MASK (HSCTLR_M | HSCTLR_A | HSCTLR_C | HSCTLR_I | \
HSCTLR_WXN | HSCTLR_FI | HSCTLR_EE | HSCTLR_TE)
/* TTBCR and HTCR Registers bits */
#define TTBCR_EAE (1 << 31)
#define TTBCR_IMP (1 << 30)
#define TTBCR_SH1 (3 << 28)
#define TTBCR_ORGN1 (3 << 26)
#define TTBCR_IRGN1 (3 << 24)
#define TTBCR_EPD1 (1 << 23)
#define TTBCR_A1 (1 << 22)
#define TTBCR_T1SZ (3 << 16)
#define TTBCR_SH0 (3 << 12)
#define TTBCR_ORGN0 (3 << 10)
#define TTBCR_IRGN0 (3 << 8)
#define TTBCR_EPD0 (1 << 7)
#define TTBCR_T0SZ 3
#define HTCR_MASK (TTBCR_T0SZ | TTBCR_IRGN0 | TTBCR_ORGN0 | TTBCR_SH0)
/* Hyp System Trap Register */
#define HSTR_T(x) (1 << x)
#define HSTR_TTEE (1 << 16)
#define HSTR_TJDBX (1 << 17)
/* Hyp Coprocessor Trap Register */
#define HCPTR_TCP(x) (1 << x)
#define HCPTR_TCP_MASK (0x3fff)
#define HCPTR_TASE (1 << 15)
#define HCPTR_TTA (1 << 20)
#define HCPTR_TCPAC (1 << 31)
/* Hyp Debug Configuration Register bits */
#define HDCR_TDRA (1 << 11)
#define HDCR_TDOSA (1 << 10)
#define HDCR_TDA (1 << 9)
#define HDCR_TDE (1 << 8)
#define HDCR_HPME (1 << 7)
#define HDCR_TPM (1 << 6)
#define HDCR_TPMCR (1 << 5)
#define HDCR_HPMN_MASK (0x1F)
/*
* The architecture supports 40-bit IPA as input to the 2nd stage translations
* and PTRS_PER_S2_PGD becomes 1024, because each entry covers 1GB of address
* space.
*/
#define KVM_PHYS_SHIFT (40)
#define KVM_PHYS_SIZE (1ULL << KVM_PHYS_SHIFT)
#define KVM_PHYS_MASK (KVM_PHYS_SIZE - 1ULL)
#define PTRS_PER_S2_PGD (1ULL << (KVM_PHYS_SHIFT - 30))
#define S2_PGD_ORDER get_order(PTRS_PER_S2_PGD * sizeof(pgd_t))
#define S2_PGD_SIZE (1 << S2_PGD_ORDER)
/* Virtualization Translation Control Register (VTCR) bits */
#define VTCR_SH0 (3 << 12)
#define VTCR_ORGN0 (3 << 10)
#define VTCR_IRGN0 (3 << 8)
#define VTCR_SL0 (3 << 6)
#define VTCR_S (1 << 4)
#define VTCR_T0SZ (0xf)
#define VTCR_MASK (VTCR_SH0 | VTCR_ORGN0 | VTCR_IRGN0 | VTCR_SL0 | \
VTCR_S | VTCR_T0SZ)
#define VTCR_HTCR_SH (VTCR_SH0 | VTCR_ORGN0 | VTCR_IRGN0)
#define VTCR_SL_L2 (0 << 6) /* Starting-level: 2 */
#define VTCR_SL_L1 (1 << 6) /* Starting-level: 1 */
#define KVM_VTCR_SL0 VTCR_SL_L1
/* stage-2 input address range defined as 2^(32-T0SZ) */
#define KVM_T0SZ (32 - KVM_PHYS_SHIFT)
#define KVM_VTCR_T0SZ (KVM_T0SZ & VTCR_T0SZ)
#define KVM_VTCR_S ((KVM_VTCR_T0SZ << 1) & VTCR_S)
/* Virtualization Translation Table Base Register (VTTBR) bits */
#if KVM_VTCR_SL0 == VTCR_SL_L2 /* see ARM DDI 0406C: B4-1720 */
#define VTTBR_X (14 - KVM_T0SZ)
#else
#define VTTBR_X (5 - KVM_T0SZ)
#endif
#define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
#define VTTBR_BADDR_MASK (((1LLU << (40 - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
#define VTTBR_VMID_SHIFT (48LLU)
#define VTTBR_VMID_MASK (0xffLLU << VTTBR_VMID_SHIFT)
/* Hyp Syndrome Register (HSR) bits */
#define HSR_EC_SHIFT (26)
#define HSR_EC (0x3fU << HSR_EC_SHIFT)
#define HSR_IL (1U << 25)
#define HSR_ISS (HSR_IL - 1)
#define HSR_ISV_SHIFT (24)
#define HSR_ISV (1U << HSR_ISV_SHIFT)
#define HSR_SRT_SHIFT (16)
#define HSR_SRT_MASK (0xf << HSR_SRT_SHIFT)
#define HSR_FSC (0x3f)
#define HSR_FSC_TYPE (0x3c)
#define HSR_SSE (1 << 21)
#define HSR_WNR (1 << 6)
#define HSR_CV_SHIFT (24)
#define HSR_CV (1U << HSR_CV_SHIFT)
#define HSR_COND_SHIFT (20)
#define HSR_COND (0xfU << HSR_COND_SHIFT)
#define FSC_FAULT (0x04)
#define FSC_PERM (0x0c)
/* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
#define HPFAR_MASK (~0xf)
#define HSR_EC_UNKNOWN (0x00)
#define HSR_EC_WFI (0x01)
#define HSR_EC_CP15_32 (0x03)
#define HSR_EC_CP15_64 (0x04)
#define HSR_EC_CP14_MR (0x05)
#define HSR_EC_CP14_LS (0x06)
#define HSR_EC_CP_0_13 (0x07)
#define HSR_EC_CP10_ID (0x08)
#define HSR_EC_JAZELLE (0x09)
#define HSR_EC_BXJ (0x0A)
#define HSR_EC_CP14_64 (0x0C)
#define HSR_EC_SVC_HYP (0x11)
#define HSR_EC_HVC (0x12)
#define HSR_EC_SMC (0x13)
#define HSR_EC_IABT (0x20)
#define HSR_EC_IABT_HYP (0x21)
#define HSR_EC_DABT (0x24)
#define HSR_EC_DABT_HYP (0x25)
#define HSR_HVC_IMM_MASK ((1UL << 16) - 1)
#endif /* __ARM_KVM_ARM_H__ */
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef __ARM_KVM_ASM_H__
#define __ARM_KVM_ASM_H__
/* 0 is reserved as an invalid value. */
#define c0_MPIDR 1 /* MultiProcessor ID Register */
#define c0_CSSELR 2 /* Cache Size Selection Register */
#define c1_SCTLR 3 /* System Control Register */
#define c1_ACTLR 4 /* Auxiliary Control Register */
#define c1_CPACR 5 /* Coprocessor Access Control */
#define c2_TTBR0 6 /* Translation Table Base Register 0 */
#define c2_TTBR0_high 7 /* TTBR0 top 32 bits */
#define c2_TTBR1 8 /* Translation Table Base Register 1 */
#define c2_TTBR1_high 9 /* TTBR1 top 32 bits */
#define c2_TTBCR 10 /* Translation Table Base Control R. */
#define c3_DACR 11 /* Domain Access Control Register */
#define c5_DFSR 12 /* Data Fault Status Register */
#define c5_IFSR 13 /* Instruction Fault Status Register */
#define c5_ADFSR 14 /* Auxiliary Data Fault Status R */
#define c5_AIFSR 15 /* Auxiliary Instruction Fault Status R */
#define c6_DFAR 16 /* Data Fault Address Register */
#define c6_IFAR 17 /* Instruction Fault Address Register */
#define c9_L2CTLR 18 /* Cortex A15 L2 Control Register */
#define c10_PRRR 19 /* Primary Region Remap Register */
#define c10_NMRR 20 /* Normal Memory Remap Register */
#define c12_VBAR 21 /* Vector Base Address Register */
#define c13_CID 22 /* Context ID Register */
#define c13_TID_URW 23 /* Thread ID, User R/W */
#define c13_TID_URO 24 /* Thread ID, User R/O */
#define c13_TID_PRIV 25 /* Thread ID, Privileged */
#define NR_CP15_REGS 26 /* Number of regs (incl. invalid) */
#define ARM_EXCEPTION_RESET 0
#define ARM_EXCEPTION_UNDEFINED 1
#define ARM_EXCEPTION_SOFTWARE 2
#define ARM_EXCEPTION_PREF_ABORT 3
#define ARM_EXCEPTION_DATA_ABORT 4
#define ARM_EXCEPTION_IRQ 5
#define ARM_EXCEPTION_FIQ 6
#define ARM_EXCEPTION_HVC 7
#ifndef __ASSEMBLY__
struct kvm;
struct kvm_vcpu;
extern char __kvm_hyp_init[];
extern char __kvm_hyp_init_end[];
extern char __kvm_hyp_exit[];
extern char __kvm_hyp_exit_end[];
extern char __kvm_hyp_vector[];
extern char __kvm_hyp_code_start[];
extern char __kvm_hyp_code_end[];
extern void __kvm_flush_vm_context(void);
extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
extern int __kvm_vcpu_run(struct kvm_vcpu *vcpu);
#endif
#endif /* __ARM_KVM_ASM_H__ */
/*
* Copyright (C) 2012 Rusty Russell IBM Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef __ARM_KVM_COPROC_H__
#define __ARM_KVM_COPROC_H__
#include <linux/kvm_host.h>
void kvm_reset_coprocs(struct kvm_vcpu *vcpu);
struct kvm_coproc_target_table {
	unsigned target;
	const struct coproc_reg *table;
	size_t num;
};
void kvm_register_target_coproc_table(struct kvm_coproc_target_table *table);
int kvm_handle_cp10_id(struct kvm_vcpu *vcpu, struct kvm_run *run);
int kvm_handle_cp_0_13_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run);
int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
unsigned long kvm_arm_num_guest_msrs(struct kvm_vcpu *vcpu);
int kvm_arm_copy_msrindices(struct kvm_vcpu *vcpu, u64 __user *uindices);
void kvm_coproc_table_init(void);
struct kvm_one_reg;
int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
int kvm_arm_coproc_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
int kvm_arm_coproc_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *);
unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
#endif /* __ARM_KVM_COPROC_H__ */
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef __ARM_KVM_EMULATE_H__
#define __ARM_KVM_EMULATE_H__
#include <linux/kvm_host.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_mmio.h>
u32 *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num);
u32 *vcpu_spsr(struct kvm_vcpu *vcpu);
int kvm_handle_wfi(struct kvm_vcpu *vcpu, struct kvm_run *run);
void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr);
void kvm_inject_undefined(struct kvm_vcpu *vcpu);
void kvm_inject_dabt(struct kvm_vcpu *vcpu, unsigned long addr);
void kvm_inject_pabt(struct kvm_vcpu *vcpu, unsigned long addr);
static inline bool vcpu_mode_is_32bit(struct kvm_vcpu *vcpu)
{
	return true;
}

static inline u32 *vcpu_pc(struct kvm_vcpu *vcpu)
{
	return (u32 *)&vcpu->arch.regs.usr_regs.ARM_pc;
}

static inline u32 *vcpu_cpsr(struct kvm_vcpu *vcpu)
{
	return (u32 *)&vcpu->arch.regs.usr_regs.ARM_cpsr;
}

static inline void vcpu_set_thumb(struct kvm_vcpu *vcpu)
{
	*vcpu_cpsr(vcpu) |= PSR_T_BIT;
}

static inline bool mode_has_spsr(struct kvm_vcpu *vcpu)
{
	unsigned long cpsr_mode = vcpu->arch.regs.usr_regs.ARM_cpsr & MODE_MASK;

	return (cpsr_mode > USR_MODE && cpsr_mode < SYSTEM_MODE);
}

static inline bool vcpu_mode_priv(struct kvm_vcpu *vcpu)
{
	unsigned long cpsr_mode = vcpu->arch.regs.usr_regs.ARM_cpsr & MODE_MASK;

	return cpsr_mode > USR_MODE;
}

static inline bool kvm_vcpu_reg_is_pc(struct kvm_vcpu *vcpu, int reg)
{
	return reg == 15;
}
#endif /* __ARM_KVM_EMULATE_H__ */
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef __ARM_KVM_HOST_H__
#define __ARM_KVM_HOST_H__
#include <asm/kvm.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_mmio.h>
#include <asm/fpstate.h>
#define KVM_MAX_VCPUS CONFIG_KVM_ARM_MAX_VCPUS
#define KVM_MEMORY_SLOTS 32
#define KVM_PRIVATE_MEM_SLOTS 4
#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
#define KVM_HAVE_ONE_REG
#define KVM_VCPU_MAX_FEATURES 1
/* We don't currently support large pages. */
#define KVM_HPAGE_GFN_SHIFT(x) 0
#define KVM_NR_PAGE_SIZES 1
#define KVM_PAGES_PER_HPAGE(x) (1UL<<31)
struct kvm_vcpu;
u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
int kvm_target_cpu(void);
int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
void kvm_reset_coprocs(struct kvm_vcpu *vcpu);
struct kvm_arch {
	/* VTTBR value associated with below pgd and vmid */
	u64    vttbr;

	/*
	 * Anything that is not used directly from assembly code goes
	 * here.
	 */

	/* The VMID generation used for the virt. memory system */
	u64    vmid_gen;
	u32    vmid;

	/* Stage-2 page table */
	pgd_t *pgd;
};
#define KVM_NR_MEM_OBJS     40

/*
 * We don't want allocation failures within the mmu code, so we preallocate
 * enough memory for a single page fault in a cache.
 */
struct kvm_mmu_memory_cache {
	int nobjs;
	void *objects[KVM_NR_MEM_OBJS];
};
struct kvm_vcpu_arch {
	struct kvm_regs regs;

	int target; /* Processor target */
	DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);

	/* System control coprocessor (cp15) */
	u32 cp15[NR_CP15_REGS];

	/* The CPU type we expose to the VM */
	u32 midr;

	/* Exception Information */
	u32 hsr;		/* Hyp Syndrome Register */
	u32 hxfar;		/* Hyp Data/Inst Fault Address Register */
	u32 hpfar;		/* Hyp IPA Fault Address Register */

	/* Floating point registers (VFP and Advanced SIMD/NEON) */
	struct vfp_hard_struct vfp_guest;
	struct vfp_hard_struct *vfp_host;

	/*
	 * Anything that is not used directly from assembly code goes
	 * here.
	 */
	/* dcache set/way operation pending */
	int last_pcpu;
	cpumask_t require_dcache_flush;

	/* Don't run the guest on this vcpu */
	bool pause;

	/* IO related fields */
	struct kvm_decode mmio_decode;

	/* Interrupt related fields */
	u32 irq_lines;		/* IRQ and FIQ levels */

	/* Hyp exception information */
	u32 hyp_pc;		/* PC when exception was taken from Hyp mode */

	/* Cache some mmu pages needed inside spinlock regions */
	struct kvm_mmu_memory_cache mmu_page_cache;

	/* Detect first run of a vcpu */
	bool has_run_once;
};
struct kvm_vm_stat {
	u32 remote_tlb_flush;
};

struct kvm_vcpu_stat {
	u32 halt_wakeup;
};
struct kvm_vcpu_init;
int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
const struct kvm_vcpu_init *init);
unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
struct kvm_one_reg;
int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg);
u64 kvm_call_hyp(void *hypfn, ...);
void force_vm_exit(const cpumask_t *mask);
#define KVM_ARCH_WANT_MMU_NOTIFIER
struct kvm;
int kvm_unmap_hva(struct kvm *kvm, unsigned long hva);
int kvm_unmap_hva_range(struct kvm *kvm,
unsigned long start, unsigned long end);
void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
/* We do not have shadow page tables, hence the empty hooks */
static inline int kvm_age_hva(struct kvm *kvm, unsigned long hva)
{
	return 0;
}

static inline int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
{
	return 0;
}
#endif /* __ARM_KVM_HOST_H__ */
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef __ARM_KVM_MMIO_H__
#define __ARM_KVM_MMIO_H__
#include <linux/kvm_host.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_arm.h>
struct kvm_decode {
	unsigned long rt;
	bool sign_extend;
};

/*
 * The in-kernel MMIO emulation code wants to use a copy of run->mmio,
 * which is an anonymous type. Use our own type instead.
 */
struct kvm_exit_mmio {
	phys_addr_t	phys_addr;
	u8		data[8];
	u32		len;
	bool		is_write;
};
static inline void kvm_prepare_mmio(struct kvm_run *run,
				    struct kvm_exit_mmio *mmio)
{
	run->mmio.phys_addr	= mmio->phys_addr;
	run->mmio.len		= mmio->len;
	run->mmio.is_write	= mmio->is_write;
	memcpy(run->mmio.data, mmio->data, mmio->len);
	run->exit_reason	= KVM_EXIT_MMIO;
}
int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run);
int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
phys_addr_t fault_ipa);
#endif /* __ARM_KVM_MMIO_H__ */
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef __ARM_KVM_MMU_H__
#define __ARM_KVM_MMU_H__
int create_hyp_mappings(void *from, void *to);
int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
void free_hyp_pmds(void);
int kvm_alloc_stage2_pgd(struct kvm *kvm);
void kvm_free_stage2_pgd(struct kvm *kvm);
int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
phys_addr_t pa, unsigned long size);
int kvm_handle_guest_abort(struct kvm_vcpu *vcpu, struct kvm_run *run);
void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
phys_addr_t kvm_mmu_get_httbr(void);
int kvm_mmu_init(void);
void kvm_clear_hyp_idmap(void);
static inline bool kvm_is_write_fault(unsigned long hsr)
{
	unsigned long hsr_ec = hsr >> HSR_EC_SHIFT;

	if (hsr_ec == HSR_EC_IABT)
		return false;
	else if ((hsr & HSR_ISV) && !(hsr & HSR_WNR))
		return false;
	else
		return true;
}
#endif /* __ARM_KVM_MMU_H__ */
/*
* Copyright (C) 2012 - ARM Ltd
* Author: Marc Zyngier <marc.zyngier@arm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __ARM_KVM_PSCI_H__
#define __ARM_KVM_PSCI_H__
bool kvm_psci_call(struct kvm_vcpu *vcpu);
#endif /* __ARM_KVM_PSCI_H__ */
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* Copyright (C) 2012 ARM Limited
*/
#ifndef __ASM_ARM_OPCODES_SEC_H
#define __ASM_ARM_OPCODES_SEC_H
#include <asm/opcodes.h>
#define __SMC(imm4) __inst_arm_thumb32( \
0xE1600070 | (((imm4) & 0xF) << 0), \
0xF7F08000 | (((imm4) & 0xF) << 16) \
)
#endif /* __ASM_ARM_OPCODES_SEC_H */
@@ -10,6 +10,7 @@
#define __ASM_ARM_OPCODES_H

#ifndef __ASSEMBLY__
#include <linux/linkage.h>
extern asmlinkage unsigned int arm_check_condition(u32 opcode, u32 psr);
#endif
@@ -32,6 +32,9 @@
#define PMD_TYPE_SECT		(_AT(pmdval_t, 1) << 0)
#define PMD_BIT4		(_AT(pmdval_t, 0))
#define PMD_DOMAIN(x)		(_AT(pmdval_t, 0))
#define PMD_APTABLE_SHIFT	(61)
#define PMD_APTABLE		(_AT(pgdval_t, 3) << PMD_APTABLE_SHIFT)
#define PMD_PXNTABLE		(_AT(pgdval_t, 1) << 59)

/*
 * - section
@@ -41,9 +44,11 @@
#define PMD_SECT_S		(_AT(pmdval_t, 3) << 8)
#define PMD_SECT_AF		(_AT(pmdval_t, 1) << 10)
#define PMD_SECT_nG		(_AT(pmdval_t, 1) << 11)
#define PMD_SECT_PXN		(_AT(pmdval_t, 1) << 53)
#define PMD_SECT_XN		(_AT(pmdval_t, 1) << 54)
#define PMD_SECT_AP_WRITE	(_AT(pmdval_t, 0))
#define PMD_SECT_AP_READ	(_AT(pmdval_t, 0))
#define PMD_SECT_AP1		(_AT(pmdval_t, 1) << 6)
#define PMD_SECT_TEX(x)		(_AT(pmdval_t, 0))

/*
@@ -104,11 +104,29 @@
 */
#define L_PGD_SWAPPER		(_AT(pgdval_t, 1) << 55)	/* swapper_pg_dir entry */

/*
 * 2nd stage PTE definitions for LPAE.
 */
#define L_PTE_S2_MT_UNCACHED	 (_AT(pteval_t, 0x5) << 2) /* MemAttr[3:0] */
#define L_PTE_S2_MT_WRITETHROUGH (_AT(pteval_t, 0xa) << 2) /* MemAttr[3:0] */
#define L_PTE_S2_MT_WRITEBACK	 (_AT(pteval_t, 0xf) << 2) /* MemAttr[3:0] */
#define L_PTE_S2_RDONLY		 (_AT(pteval_t, 1) << 6)   /* HAP[1] */
#define L_PTE_S2_RDWR		 (_AT(pteval_t, 2) << 6)   /* HAP[2:1] */

/*
 * Hyp-mode PL2 PTE definitions for LPAE.
 */
#define L_PTE_HYP		L_PTE_USER

#ifndef __ASSEMBLY__

#define pud_none(pud)		(!pud_val(pud))
#define pud_bad(pud)		(!(pud_val(pud) & 2))
#define pud_present(pud)	(pud_val(pud))
#define pmd_table(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
						 PMD_TYPE_TABLE)
#define pmd_sect(pmd)		((pmd_val(pmd) & PMD_TYPE_MASK) == \
						 PMD_TYPE_SECT)

#define pud_clear(pudp)			\
	do {				\
@@ -70,6 +70,9 @@ extern void __pgd_error(const char *file, int line, pgd_t);

extern pgprot_t		pgprot_user;
extern pgprot_t		pgprot_kernel;
extern pgprot_t		pgprot_hyp_device;
extern pgprot_t		pgprot_s2;
extern pgprot_t		pgprot_s2_device;

#define _MOD_PROT(p, b)	__pgprot(pgprot_val(p) | (b))

@@ -82,6 +85,10 @@ extern pgprot_t pgprot_kernel;
#define PAGE_READONLY_EXEC	_MOD_PROT(pgprot_user, L_PTE_USER | L_PTE_RDONLY)
#define PAGE_KERNEL		_MOD_PROT(pgprot_kernel, L_PTE_XN)
#define PAGE_KERNEL_EXEC	pgprot_kernel
#define PAGE_HYP		_MOD_PROT(pgprot_kernel, L_PTE_HYP)
#define PAGE_HYP_DEVICE		_MOD_PROT(pgprot_hyp_device, L_PTE_HYP)
#define PAGE_S2			_MOD_PROT(pgprot_s2, L_PTE_S2_RDONLY)
#define PAGE_S2_DEVICE		_MOD_PROT(pgprot_s2_device, L_PTE_USER | L_PTE_S2_RDONLY)

#define __PAGE_NONE		__pgprot(_L_PTE_DEFAULT | L_PTE_RDONLY | L_PTE_XN | L_PTE_NONE)
#define __PAGE_SHARED		__pgprot(_L_PTE_DEFAULT | L_PTE_USER | L_PTE_XN)
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* Copyright (C) 2012 ARM Limited
*/
#ifndef __ASM_ARM_PSCI_H
#define __ASM_ARM_PSCI_H
#define PSCI_POWER_STATE_TYPE_STANDBY		0
#define PSCI_POWER_STATE_TYPE_POWER_DOWN	1

struct psci_power_state {
	u16	id;
	u8	type;
	u8	affinity_level;
};

struct psci_operations {
	int (*cpu_suspend)(struct psci_power_state state,
			   unsigned long entry_point);
	int (*cpu_off)(struct psci_power_state state);
	int (*cpu_on)(unsigned long cpuid, unsigned long entry_point);
	int (*migrate)(unsigned long cpuid);
};

extern struct psci_operations psci_ops;

#endif /* __ASM_ARM_PSCI_H */
@@ -24,9 +24,9 @@
/*
 * Flag indicating that the kernel was not entered in the same mode on every
 * CPU.  The zImage loader stashes this value in an SPSR, so we need an
 * architecturally defined flag bit here.
 */
#define BOOT_CPU_MODE_MISMATCH	PSR_N_BIT

#ifndef __ASSEMBLY__
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef __ARM_KVM_H__
#define __ARM_KVM_H__
#include <linux/types.h>
#include <asm/ptrace.h>
#define __KVM_HAVE_GUEST_DEBUG
#define __KVM_HAVE_IRQ_LINE
#define KVM_REG_SIZE(id) \
(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
/* Valid for svc_regs, abt_regs, und_regs, irq_regs in struct kvm_regs */
#define KVM_ARM_SVC_sp svc_regs[0]
#define KVM_ARM_SVC_lr svc_regs[1]
#define KVM_ARM_SVC_spsr svc_regs[2]
#define KVM_ARM_ABT_sp abt_regs[0]
#define KVM_ARM_ABT_lr abt_regs[1]
#define KVM_ARM_ABT_spsr abt_regs[2]
#define KVM_ARM_UND_sp und_regs[0]
#define KVM_ARM_UND_lr und_regs[1]
#define KVM_ARM_UND_spsr und_regs[2]
#define KVM_ARM_IRQ_sp irq_regs[0]
#define KVM_ARM_IRQ_lr irq_regs[1]
#define KVM_ARM_IRQ_spsr irq_regs[2]
/* Valid only for fiq_regs in struct kvm_regs */
#define KVM_ARM_FIQ_r8 fiq_regs[0]
#define KVM_ARM_FIQ_r9 fiq_regs[1]
#define KVM_ARM_FIQ_r10 fiq_regs[2]
#define KVM_ARM_FIQ_fp fiq_regs[3]
#define KVM_ARM_FIQ_ip fiq_regs[4]
#define KVM_ARM_FIQ_sp fiq_regs[5]
#define KVM_ARM_FIQ_lr fiq_regs[6]
#define KVM_ARM_FIQ_spsr fiq_regs[7]
struct kvm_regs {
	struct pt_regs usr_regs;	/* R0_usr - R14_usr, PC, CPSR */
	__u32 svc_regs[3];		/* SP_svc, LR_svc, SPSR_svc */
	__u32 abt_regs[3];		/* SP_abt, LR_abt, SPSR_abt */
	__u32 und_regs[3];		/* SP_und, LR_und, SPSR_und */
	__u32 irq_regs[3];		/* SP_irq, LR_irq, SPSR_irq */
	__u32 fiq_regs[8];		/* R8_fiq - R14_fiq, SPSR_fiq */
};
/* Supported Processor Types */
#define KVM_ARM_TARGET_CORTEX_A15 0
#define KVM_ARM_NUM_TARGETS 1
#define KVM_ARM_VCPU_POWER_OFF 0 /* CPU is started in OFF state */
struct kvm_vcpu_init {
	__u32 target;
	__u32 features[7];
};
struct kvm_sregs {
};
struct kvm_fpu {
};
struct kvm_guest_debug_arch {
};
struct kvm_debug_exit_arch {
};
struct kvm_sync_regs {
};
struct kvm_arch_memory_slot {
};
/* If you need to interpret the index values, here is the key: */
#define KVM_REG_ARM_COPROC_MASK 0x000000000FFF0000
#define KVM_REG_ARM_COPROC_SHIFT 16
#define KVM_REG_ARM_32_OPC2_MASK 0x0000000000000007
#define KVM_REG_ARM_32_OPC2_SHIFT 0
#define KVM_REG_ARM_OPC1_MASK 0x0000000000000078
#define KVM_REG_ARM_OPC1_SHIFT 3
#define KVM_REG_ARM_CRM_MASK 0x0000000000000780
#define KVM_REG_ARM_CRM_SHIFT 7
#define KVM_REG_ARM_32_CRN_MASK 0x0000000000007800
#define KVM_REG_ARM_32_CRN_SHIFT 11
/* Normal registers are mapped as coprocessor 16. */
#define KVM_REG_ARM_CORE (0x0010 << KVM_REG_ARM_COPROC_SHIFT)
#define KVM_REG_ARM_CORE_REG(name) (offsetof(struct kvm_regs, name) / 4)
/* Some registers need more space to represent values. */
#define KVM_REG_ARM_DEMUX (0x0011 << KVM_REG_ARM_COPROC_SHIFT)
#define KVM_REG_ARM_DEMUX_ID_MASK 0x000000000000FF00
#define KVM_REG_ARM_DEMUX_ID_SHIFT 8
#define KVM_REG_ARM_DEMUX_ID_CCSIDR (0x00 << KVM_REG_ARM_DEMUX_ID_SHIFT)
#define KVM_REG_ARM_DEMUX_VAL_MASK 0x00000000000000FF
#define KVM_REG_ARM_DEMUX_VAL_SHIFT 0
/* VFP registers: we could overload CP10 like ARM does, but that's ugly. */
#define KVM_REG_ARM_VFP (0x0012 << KVM_REG_ARM_COPROC_SHIFT)
#define KVM_REG_ARM_VFP_MASK 0x000000000000FFFF
#define KVM_REG_ARM_VFP_BASE_REG 0x0
#define KVM_REG_ARM_VFP_FPSID 0x1000
#define KVM_REG_ARM_VFP_FPSCR 0x1001
#define KVM_REG_ARM_VFP_MVFR1 0x1006
#define KVM_REG_ARM_VFP_MVFR0 0x1007
#define KVM_REG_ARM_VFP_FPEXC 0x1008
#define KVM_REG_ARM_VFP_FPINST 0x1009
#define KVM_REG_ARM_VFP_FPINST2 0x100A
/* KVM_IRQ_LINE irq field index values */
#define KVM_ARM_IRQ_TYPE_SHIFT 24
#define KVM_ARM_IRQ_TYPE_MASK 0xff
#define KVM_ARM_IRQ_VCPU_SHIFT 16
#define KVM_ARM_IRQ_VCPU_MASK 0xff
#define KVM_ARM_IRQ_NUM_SHIFT 0
#define KVM_ARM_IRQ_NUM_MASK 0xffff
/* irq_type field */
#define KVM_ARM_IRQ_TYPE_CPU 0
#define KVM_ARM_IRQ_TYPE_SPI 1
#define KVM_ARM_IRQ_TYPE_PPI 2
/* out-of-kernel GIC cpu interrupt injection irq_number field */
#define KVM_ARM_IRQ_CPU_IRQ 0
#define KVM_ARM_IRQ_CPU_FIQ 1
/* Highest supported SPI, from VGIC_NR_IRQS */
#define KVM_ARM_IRQ_GIC_MAX 127
/* PSCI interface */
#define KVM_PSCI_FN_BASE 0x95c1ba5e
#define KVM_PSCI_FN(n) (KVM_PSCI_FN_BASE + (n))
#define KVM_PSCI_FN_CPU_SUSPEND KVM_PSCI_FN(0)
#define KVM_PSCI_FN_CPU_OFF KVM_PSCI_FN(1)
#define KVM_PSCI_FN_CPU_ON KVM_PSCI_FN(2)
#define KVM_PSCI_FN_MIGRATE KVM_PSCI_FN(3)
#define KVM_PSCI_RET_SUCCESS 0
#define KVM_PSCI_RET_NI ((unsigned long)-1)
#define KVM_PSCI_RET_INVAL ((unsigned long)-2)
#define KVM_PSCI_RET_DENIED ((unsigned long)-3)
#endif /* __ARM_KVM_H__ */
@@ -82,5 +82,6 @@ obj-$(CONFIG_DEBUG_LL)		+= debug.o
obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o

obj-$(CONFIG_ARM_VIRT_EXT)	+= hyp-stub.o
obj-$(CONFIG_ARM_PSCI)		+= psci.o

extra-y := $(head-y) vmlinux.lds
@@ -13,6 +13,9 @@
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#ifdef CONFIG_KVM_ARM_HOST
#include <linux/kvm_host.h>
#endif
#include <asm/cacheflush.h>
#include <asm/glue-df.h>
#include <asm/glue-pf.h>
@@ -146,5 +149,27 @@ int main(void)
  DEFINE(DMA_BIDIRECTIONAL,	DMA_BIDIRECTIONAL);
  DEFINE(DMA_TO_DEVICE,		DMA_TO_DEVICE);
  DEFINE(DMA_FROM_DEVICE,	DMA_FROM_DEVICE);
#ifdef CONFIG_KVM_ARM_HOST
  DEFINE(VCPU_KVM,		offsetof(struct kvm_vcpu, kvm));
  DEFINE(VCPU_MIDR,		offsetof(struct kvm_vcpu, arch.midr));
  DEFINE(VCPU_CP15,		offsetof(struct kvm_vcpu, arch.cp15));
  DEFINE(VCPU_VFP_GUEST,	offsetof(struct kvm_vcpu, arch.vfp_guest));
  DEFINE(VCPU_VFP_HOST,		offsetof(struct kvm_vcpu, arch.vfp_host));
  DEFINE(VCPU_REGS,		offsetof(struct kvm_vcpu, arch.regs));
  DEFINE(VCPU_USR_REGS,		offsetof(struct kvm_vcpu, arch.regs.usr_regs));
  DEFINE(VCPU_SVC_REGS,		offsetof(struct kvm_vcpu, arch.regs.svc_regs));
  DEFINE(VCPU_ABT_REGS,		offsetof(struct kvm_vcpu, arch.regs.abt_regs));
  DEFINE(VCPU_UND_REGS,		offsetof(struct kvm_vcpu, arch.regs.und_regs));
  DEFINE(VCPU_IRQ_REGS,		offsetof(struct kvm_vcpu, arch.regs.irq_regs));
  DEFINE(VCPU_FIQ_REGS,		offsetof(struct kvm_vcpu, arch.regs.fiq_regs));
  DEFINE(VCPU_PC,		offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_pc));
  DEFINE(VCPU_CPSR,		offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_cpsr));
  DEFINE(VCPU_IRQ_LINES,	offsetof(struct kvm_vcpu, arch.irq_lines));
  DEFINE(VCPU_HSR,		offsetof(struct kvm_vcpu, arch.hsr));
  DEFINE(VCPU_HxFAR,		offsetof(struct kvm_vcpu, arch.hxfar));
  DEFINE(VCPU_HPFAR,		offsetof(struct kvm_vcpu, arch.hpfar));
  DEFINE(VCPU_HYP_PC,		offsetof(struct kvm_vcpu, arch.hyp_pc));
  DEFINE(KVM_VTTBR,		offsetof(struct kvm, arch.vttbr));
#endif
  return 0;
}
@@ -28,6 +28,7 @@
#include <linux/perf_event.h>
#include <linux/hw_breakpoint.h>
#include <linux/smp.h>
#include <linux/cpu_pm.h>

#include <asm/cacheflush.h>
#include <asm/cputype.h>
@@ -35,6 +36,7 @@
#include <asm/hw_breakpoint.h>
#include <asm/kdebug.h>
#include <asm/traps.h>
#include <asm/hardware/coresight.h>

/* Breakpoint currently in use for each BRP. */
static DEFINE_PER_CPU(struct perf_event *, bp_on_reg[ARM_MAX_BRP]);
@@ -49,6 +51,9 @@ static int core_num_wrps;

/* Debug architecture version. */
static u8 debug_arch;

/* Does debug architecture support OS Save and Restore? */
static bool has_ossr;

/* Maximum supported watchpoint length. */
static u8 max_watchpoint_len;

@@ -903,6 +908,23 @@ static struct undef_hook debug_reg_hook = {
	.fn		= debug_reg_trap,
};
/* Does this core support OS Save and Restore? */
static bool core_has_os_save_restore(void)
{
	u32 oslsr;

	switch (get_debug_arch()) {
	case ARM_DEBUG_ARCH_V7_1:
		return true;
	case ARM_DEBUG_ARCH_V7_ECP14:
		ARM_DBG_READ(c1, c1, 4, oslsr);
		if (oslsr & ARM_OSLSR_OSLM0)
			return true;
	default:
		return false;
	}
}
static void reset_ctrl_regs(void *unused)
{
	int i, raw_num_brps, err = 0, cpu = smp_processor_id();

@@ -930,11 +952,7 @@ static void reset_ctrl_regs(void *unused)
		if ((val & 0x1) == 0)
			err = -EPERM;

		if (!has_ossr)
			goto clear_vcr;
		break;
	case ARM_DEBUG_ARCH_V7_1:
@@ -955,9 +973,9 @@ static void reset_ctrl_regs(void *unused)

		/*
		 * Unconditionally clear the OS lock by writing a value
		 * other than CS_LAR_KEY to the access register.
		 */
		ARM_DBG_WRITE(c1, c0, 4, ~CS_LAR_KEY);
		isb();

		/*
@@ -1015,6 +1033,30 @@ static struct notifier_block __cpuinitdata dbg_reset_nb = {
	.notifier_call = dbg_reset_notify,
};
#ifdef CONFIG_CPU_PM
static int dbg_cpu_pm_notify(struct notifier_block *self, unsigned long action,
void *v)
{
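/* Debug register state is lost across deep sleep; reprogram on exit. */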
if (action == CPU_PM_EXIT)
reset_ctrl_regs(NULL);
return NOTIFY_OK;
}
static struct notifier_block __cpuinitdata dbg_cpu_pm_nb = {
.notifier_call = dbg_cpu_pm_notify,
};
static void __init pm_init(void)
{
cpu_pm_register_notifier(&dbg_cpu_pm_nb);
}
#else
static inline void pm_init(void)
{
}
#endif
static int __init arch_hw_breakpoint_init(void) static int __init arch_hw_breakpoint_init(void)
{ {
debug_arch = get_debug_arch(); debug_arch = get_debug_arch();
...@@ -1024,6 +1066,8 @@ static int __init arch_hw_breakpoint_init(void) ...@@ -1024,6 +1066,8 @@ static int __init arch_hw_breakpoint_init(void)
return 0; return 0;
} }
has_ossr = core_has_os_save_restore();
/* Determine how many BRPs/WRPs are available. */ /* Determine how many BRPs/WRPs are available. */
core_num_brps = get_num_brps(); core_num_brps = get_num_brps();
core_num_wrps = get_num_wrps(); core_num_wrps = get_num_wrps();
...@@ -1062,8 +1106,9 @@ static int __init arch_hw_breakpoint_init(void) ...@@ -1062,8 +1106,9 @@ static int __init arch_hw_breakpoint_init(void)
hook_ifault_code(FAULT_CODE_DEBUG, hw_breakpoint_pending, SIGTRAP, hook_ifault_code(FAULT_CODE_DEBUG, hw_breakpoint_pending, SIGTRAP,
TRAP_HWBKPT, "breakpoint debug exception"); TRAP_HWBKPT, "breakpoint debug exception");
/* Register hotplug notifier. */ /* Register hotplug and PM notifiers. */
register_cpu_notifier(&dbg_reset_nb); register_cpu_notifier(&dbg_reset_nb);
pm_init();
return 0; return 0;
} }
arch_initcall(arch_hw_breakpoint_init); arch_initcall(arch_hw_breakpoint_init);
......
...@@ -149,12 +149,6 @@ u64 armpmu_event_update(struct perf_event *event) ...@@ -149,12 +149,6 @@ u64 armpmu_event_update(struct perf_event *event)
static void static void
armpmu_read(struct perf_event *event) armpmu_read(struct perf_event *event)
{ {
struct hw_perf_event *hwc = &event->hw;
/* Don't read disabled counters! */
if (hwc->idx < 0)
return;
armpmu_event_update(event); armpmu_event_update(event);
} }
...@@ -207,8 +201,6 @@ armpmu_del(struct perf_event *event, int flags) ...@@ -207,8 +201,6 @@ armpmu_del(struct perf_event *event, int flags)
struct hw_perf_event *hwc = &event->hw; struct hw_perf_event *hwc = &event->hw;
int idx = hwc->idx; int idx = hwc->idx;
WARN_ON(idx < 0);
armpmu_stop(event, PERF_EF_UPDATE); armpmu_stop(event, PERF_EF_UPDATE);
hw_events->events[idx] = NULL; hw_events->events[idx] = NULL;
clear_bit(idx, hw_events->used_mask); clear_bit(idx, hw_events->used_mask);
...@@ -358,7 +350,7 @@ __hw_perf_event_init(struct perf_event *event) ...@@ -358,7 +350,7 @@ __hw_perf_event_init(struct perf_event *event)
{ {
struct arm_pmu *armpmu = to_arm_pmu(event->pmu); struct arm_pmu *armpmu = to_arm_pmu(event->pmu);
struct hw_perf_event *hwc = &event->hw; struct hw_perf_event *hwc = &event->hw;
int mapping, err; int mapping;
mapping = armpmu->map_event(event); mapping = armpmu->map_event(event);
...@@ -407,14 +399,12 @@ __hw_perf_event_init(struct perf_event *event) ...@@ -407,14 +399,12 @@ __hw_perf_event_init(struct perf_event *event)
local64_set(&hwc->period_left, hwc->sample_period); local64_set(&hwc->period_left, hwc->sample_period);
} }
err = 0;
if (event->group_leader != event) { if (event->group_leader != event) {
err = validate_group(event); if (validate_group(event) != 0)
if (err)
return -EINVAL; return -EINVAL;
} }
return err; return 0;
} }
static int armpmu_event_init(struct perf_event *event) static int armpmu_event_init(struct perf_event *event)
......
...@@ -147,7 +147,7 @@ static void cpu_pmu_init(struct arm_pmu *cpu_pmu) ...@@ -147,7 +147,7 @@ static void cpu_pmu_init(struct arm_pmu *cpu_pmu)
cpu_pmu->free_irq = cpu_pmu_free_irq; cpu_pmu->free_irq = cpu_pmu_free_irq;
/* Ensure the PMU has sane values out of reset. */ /* Ensure the PMU has sane values out of reset. */
if (cpu_pmu && cpu_pmu->reset) if (cpu_pmu->reset)
on_each_cpu(cpu_pmu->reset, cpu_pmu, 1); on_each_cpu(cpu_pmu->reset, cpu_pmu, 1);
} }
...@@ -201,48 +201,46 @@ static struct platform_device_id cpu_pmu_plat_device_ids[] = { ...@@ -201,48 +201,46 @@ static struct platform_device_id cpu_pmu_plat_device_ids[] = {
static int probe_current_pmu(struct arm_pmu *pmu) static int probe_current_pmu(struct arm_pmu *pmu)
{ {
int cpu = get_cpu(); int cpu = get_cpu();
unsigned long cpuid = read_cpuid_id(); unsigned long implementor = read_cpuid_implementor();
unsigned long implementor = (cpuid & 0xFF000000) >> 24; unsigned long part_number = read_cpuid_part_number();
unsigned long part_number = (cpuid & 0xFFF0);
int ret = -ENODEV; int ret = -ENODEV;
pr_info("probing PMU on CPU %d\n", cpu); pr_info("probing PMU on CPU %d\n", cpu);
/* ARM Ltd CPUs. */ /* ARM Ltd CPUs. */
if (0x41 == implementor) { if (implementor == ARM_CPU_IMP_ARM) {
switch (part_number) { switch (part_number) {
case 0xB360: /* ARM1136 */ case ARM_CPU_PART_ARM1136:
case 0xB560: /* ARM1156 */ case ARM_CPU_PART_ARM1156:
case 0xB760: /* ARM1176 */ case ARM_CPU_PART_ARM1176:
ret = armv6pmu_init(pmu); ret = armv6pmu_init(pmu);
break; break;
case 0xB020: /* ARM11mpcore */ case ARM_CPU_PART_ARM11MPCORE:
ret = armv6mpcore_pmu_init(pmu); ret = armv6mpcore_pmu_init(pmu);
break; break;
case 0xC080: /* Cortex-A8 */ case ARM_CPU_PART_CORTEX_A8:
ret = armv7_a8_pmu_init(pmu); ret = armv7_a8_pmu_init(pmu);
break; break;
case 0xC090: /* Cortex-A9 */ case ARM_CPU_PART_CORTEX_A9:
ret = armv7_a9_pmu_init(pmu); ret = armv7_a9_pmu_init(pmu);
break; break;
case 0xC050: /* Cortex-A5 */ case ARM_CPU_PART_CORTEX_A5:
ret = armv7_a5_pmu_init(pmu); ret = armv7_a5_pmu_init(pmu);
break; break;
case 0xC0F0: /* Cortex-A15 */ case ARM_CPU_PART_CORTEX_A15:
ret = armv7_a15_pmu_init(pmu); ret = armv7_a15_pmu_init(pmu);
break; break;
case 0xC070: /* Cortex-A7 */ case ARM_CPU_PART_CORTEX_A7:
ret = armv7_a7_pmu_init(pmu); ret = armv7_a7_pmu_init(pmu);
break; break;
} }
/* Intel CPUs [xscale]. */ /* Intel CPUs [xscale]. */
} else if (0x69 == implementor) { } else if (implementor == ARM_CPU_IMP_INTEL) {
part_number = (cpuid >> 13) & 0x7; switch (xscale_cpu_arch_version()) {
switch (part_number) { case ARM_CPU_XSCALE_ARCH_V1:
case 1:
ret = xscale1pmu_init(pmu); ret = xscale1pmu_init(pmu);
break; break;
case 2: case ARM_CPU_XSCALE_ARCH_V2:
ret = xscale2pmu_init(pmu); ret = xscale2pmu_init(pmu);
break; break;
} }
...@@ -279,17 +277,22 @@ static int cpu_pmu_device_probe(struct platform_device *pdev) ...@@ -279,17 +277,22 @@ static int cpu_pmu_device_probe(struct platform_device *pdev)
} }
if (ret) { if (ret) {
pr_info("failed to register PMU devices!"); pr_info("failed to probe PMU!");
kfree(pmu); goto out_free;
return ret;
} }
cpu_pmu = pmu; cpu_pmu = pmu;
cpu_pmu->plat_device = pdev; cpu_pmu->plat_device = pdev;
cpu_pmu_init(cpu_pmu); cpu_pmu_init(cpu_pmu);
armpmu_register(cpu_pmu, PERF_TYPE_RAW); ret = armpmu_register(cpu_pmu, PERF_TYPE_RAW);
if (!ret)
return 0; return 0;
out_free:
pr_info("failed to register PMU devices!");
kfree(pmu);
return ret;
} }
static struct platform_driver cpu_pmu_driver = { static struct platform_driver cpu_pmu_driver = {
......
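The rewritten probe_current_pmu() above replaces open-coded CPUID masks with named helpers from <asm/cputype.h>. A minimal sketch of those helpers, reconstructed from the masks visible on the old side of the hunk (the exact constant names and values in the header may differ):

static inline unsigned int read_cpuid_implementor(void)
{
	return (read_cpuid_id() & 0xFF000000) >> 24;	/* e.g. 0x41 == ARM_CPU_IMP_ARM */
}

static inline unsigned int read_cpuid_part_number(void)
{
	return read_cpuid_id() & 0xFFF0;		/* e.g. 0xC0F0 == ARM_CPU_PART_CORTEX_A15 */
}

static inline int xscale_cpu_arch_version(void)
{
	/* Assumed encoding: the old code read the arch version from CPUID bits [15:13]. */
	return read_cpuid_id() & 0xe000;
}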
...@@ -106,7 +106,7 @@ static const unsigned armv6_perf_cache_map[PERF_COUNT_HW_CACHE_MAX] ...@@ -106,7 +106,7 @@ static const unsigned armv6_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
}, },
[C(OP_WRITE)] = { [C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = ARMV6_PERFCTR_ICACHE_MISS, [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
}, },
[C(OP_PREFETCH)] = { [C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
...@@ -259,7 +259,7 @@ static const unsigned armv6mpcore_perf_cache_map[PERF_COUNT_HW_CACHE_MAX] ...@@ -259,7 +259,7 @@ static const unsigned armv6mpcore_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
}, },
[C(OP_WRITE)] = { [C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = ARMV6MPCORE_PERFCTR_ICACHE_MISS, [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
}, },
[C(OP_PREFETCH)] = { [C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
......
...@@ -157,8 +157,8 @@ static const unsigned armv7_a8_perf_cache_map[PERF_COUNT_HW_CACHE_MAX] ...@@ -157,8 +157,8 @@ static const unsigned armv7_a8_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[C(RESULT_MISS)] = ARMV7_PERFCTR_L1_ICACHE_REFILL, [C(RESULT_MISS)] = ARMV7_PERFCTR_L1_ICACHE_REFILL,
}, },
[C(OP_WRITE)] = { [C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = ARMV7_A8_PERFCTR_L1_ICACHE_ACCESS, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = ARMV7_PERFCTR_L1_ICACHE_REFILL, [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
}, },
[C(OP_PREFETCH)] = { [C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
...@@ -282,7 +282,7 @@ static const unsigned armv7_a9_perf_cache_map[PERF_COUNT_HW_CACHE_MAX] ...@@ -282,7 +282,7 @@ static const unsigned armv7_a9_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
}, },
[C(OP_WRITE)] = { [C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = ARMV7_PERFCTR_L1_ICACHE_REFILL, [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
}, },
[C(OP_PREFETCH)] = { [C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
...@@ -399,8 +399,8 @@ static const unsigned armv7_a5_perf_cache_map[PERF_COUNT_HW_CACHE_MAX] ...@@ -399,8 +399,8 @@ static const unsigned armv7_a5_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[C(RESULT_MISS)] = ARMV7_PERFCTR_L1_ICACHE_REFILL, [C(RESULT_MISS)] = ARMV7_PERFCTR_L1_ICACHE_REFILL,
}, },
[C(OP_WRITE)] = { [C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = ARMV7_PERFCTR_L1_ICACHE_ACCESS, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = ARMV7_PERFCTR_L1_ICACHE_REFILL, [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
}, },
/* /*
* The prefetch counters don't differentiate between the I * The prefetch counters don't differentiate between the I
...@@ -527,8 +527,8 @@ static const unsigned armv7_a15_perf_cache_map[PERF_COUNT_HW_CACHE_MAX] ...@@ -527,8 +527,8 @@ static const unsigned armv7_a15_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[C(RESULT_MISS)] = ARMV7_PERFCTR_L1_ICACHE_REFILL, [C(RESULT_MISS)] = ARMV7_PERFCTR_L1_ICACHE_REFILL,
}, },
[C(OP_WRITE)] = { [C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = ARMV7_PERFCTR_L1_ICACHE_ACCESS, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = ARMV7_PERFCTR_L1_ICACHE_REFILL, [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
}, },
[C(OP_PREFETCH)] = { [C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
...@@ -651,8 +651,8 @@ static const unsigned armv7_a7_perf_cache_map[PERF_COUNT_HW_CACHE_MAX] ...@@ -651,8 +651,8 @@ static const unsigned armv7_a7_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[C(RESULT_MISS)] = ARMV7_PERFCTR_L1_ICACHE_REFILL, [C(RESULT_MISS)] = ARMV7_PERFCTR_L1_ICACHE_REFILL,
}, },
[C(OP_WRITE)] = { [C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = ARMV7_PERFCTR_L1_ICACHE_ACCESS, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = ARMV7_PERFCTR_L1_ICACHE_REFILL, [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
}, },
[C(OP_PREFETCH)] = { [C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
......
...@@ -83,7 +83,7 @@ static const unsigned xscale_perf_cache_map[PERF_COUNT_HW_CACHE_MAX] ...@@ -83,7 +83,7 @@ static const unsigned xscale_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
}, },
[C(OP_WRITE)] = { [C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
[C(RESULT_MISS)] = XSCALE_PERFCTR_ICACHE_MISS, [C(RESULT_MISS)] = CACHE_OP_UNSUPPORTED,
}, },
[C(OP_PREFETCH)] = { [C(OP_PREFETCH)] = {
[C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED, [C(RESULT_ACCESS)] = CACHE_OP_UNSUPPORTED,
......
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* Copyright (C) 2012 ARM Limited
*
* Author: Will Deacon <will.deacon@arm.com>
*/
#define pr_fmt(fmt) "psci: " fmt
#include <linux/init.h>
#include <linux/of.h>
#include <asm/compiler.h>
#include <asm/errno.h>
#include <asm/opcodes-sec.h>
#include <asm/opcodes-virt.h>
#include <asm/psci.h>
struct psci_operations psci_ops;
static int (*invoke_psci_fn)(u32, u32, u32, u32);
enum psci_function {
PSCI_FN_CPU_SUSPEND,
PSCI_FN_CPU_ON,
PSCI_FN_CPU_OFF,
PSCI_FN_MIGRATE,
PSCI_FN_MAX,
};
static u32 psci_function_id[PSCI_FN_MAX];
#define PSCI_RET_SUCCESS 0
#define PSCI_RET_EOPNOTSUPP -1
#define PSCI_RET_EINVAL -2
#define PSCI_RET_EPERM -3
static int psci_to_linux_errno(int errno)
{
switch (errno) {
case PSCI_RET_SUCCESS:
return 0;
case PSCI_RET_EOPNOTSUPP:
return -EOPNOTSUPP;
case PSCI_RET_EINVAL:
return -EINVAL;
case PSCI_RET_EPERM:
return -EPERM;
}
return -EINVAL;
}
#define PSCI_POWER_STATE_ID_MASK 0xffff
#define PSCI_POWER_STATE_ID_SHIFT 0
#define PSCI_POWER_STATE_TYPE_MASK 0x1
#define PSCI_POWER_STATE_TYPE_SHIFT 16
#define PSCI_POWER_STATE_AFFL_MASK 0x3
#define PSCI_POWER_STATE_AFFL_SHIFT 24
static u32 psci_power_state_pack(struct psci_power_state state)
{
return ((state.id & PSCI_POWER_STATE_ID_MASK)
<< PSCI_POWER_STATE_ID_SHIFT) |
((state.type & PSCI_POWER_STATE_TYPE_MASK)
<< PSCI_POWER_STATE_TYPE_SHIFT) |
((state.affinity_level & PSCI_POWER_STATE_AFFL_MASK)
<< PSCI_POWER_STATE_AFFL_SHIFT);
}
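As a worked example of the packing above (the field values are illustrative, not mandated by the spec):

/*
 * struct psci_power_state state = { .id = 0, .type = 1, .affinity_level = 1 };
 *
 * psci_power_state_pack(state)
 *	== (0 << 0) | (1 << 16) | (1 << 24)
 *	== 0x01010000
 */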
/*
* The following two functions are invoked via the invoke_psci_fn pointer
* and will not be inlined, allowing us to piggyback on the AAPCS.
*/
static noinline int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1,
u32 arg2)
{
asm volatile(
__asmeq("%0", "r0")
__asmeq("%1", "r1")
__asmeq("%2", "r2")
__asmeq("%3", "r3")
__HVC(0)
: "+r" (function_id)
: "r" (arg0), "r" (arg1), "r" (arg2));
return function_id;
}
static noinline int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1,
u32 arg2)
{
asm volatile(
__asmeq("%0", "r0")
__asmeq("%1", "r1")
__asmeq("%2", "r2")
__asmeq("%3", "r3")
__SMC(0)
: "+r" (function_id)
: "r" (arg0), "r" (arg1), "r" (arg2));
return function_id;
}
static int psci_cpu_suspend(struct psci_power_state state,
unsigned long entry_point)
{
int err;
u32 fn, power_state;
fn = psci_function_id[PSCI_FN_CPU_SUSPEND];
power_state = psci_power_state_pack(state);
err = invoke_psci_fn(fn, power_state, entry_point, 0);
return psci_to_linux_errno(err);
}
static int psci_cpu_off(struct psci_power_state state)
{
int err;
u32 fn, power_state;
fn = psci_function_id[PSCI_FN_CPU_OFF];
power_state = psci_power_state_pack(state);
err = invoke_psci_fn(fn, power_state, 0, 0);
return psci_to_linux_errno(err);
}
static int psci_cpu_on(unsigned long cpuid, unsigned long entry_point)
{
int err;
u32 fn;
fn = psci_function_id[PSCI_FN_CPU_ON];
err = invoke_psci_fn(fn, cpuid, entry_point, 0);
return psci_to_linux_errno(err);
}
static int psci_migrate(unsigned long cpuid)
{
int err;
u32 fn;
fn = psci_function_id[PSCI_FN_MIGRATE];
err = invoke_psci_fn(fn, cpuid, 0, 0);
return psci_to_linux_errno(err);
}
static const struct of_device_id psci_of_match[] __initconst = {
{ .compatible = "arm,psci", },
{},
};
static int __init psci_init(void)
{
struct device_node *np;
const char *method;
u32 id;
np = of_find_matching_node(NULL, psci_of_match);
if (!np)
return 0;
pr_info("probing function IDs from device-tree\n");
if (of_property_read_string(np, "method", &method)) {
pr_warning("missing \"method\" property\n");
goto out_put_node;
}
if (!strcmp("hvc", method)) {
invoke_psci_fn = __invoke_psci_fn_hvc;
} else if (!strcmp("smc", method)) {
invoke_psci_fn = __invoke_psci_fn_smc;
} else {
pr_warning("invalid \"method\" property: %s\n", method);
goto out_put_node;
}
if (!of_property_read_u32(np, "cpu_suspend", &id)) {
psci_function_id[PSCI_FN_CPU_SUSPEND] = id;
psci_ops.cpu_suspend = psci_cpu_suspend;
}
if (!of_property_read_u32(np, "cpu_off", &id)) {
psci_function_id[PSCI_FN_CPU_OFF] = id;
psci_ops.cpu_off = psci_cpu_off;
}
if (!of_property_read_u32(np, "cpu_on", &id)) {
psci_function_id[PSCI_FN_CPU_ON] = id;
psci_ops.cpu_on = psci_cpu_on;
}
if (!of_property_read_u32(np, "migrate", &id)) {
psci_function_id[PSCI_FN_MIGRATE] = id;
psci_ops.migrate = psci_migrate;
}
out_put_node:
of_node_put(np);
return 0;
}
early_initcall(psci_init);
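A hedged sketch of how platform SMP code might consume psci_ops once psci_init() has run; virt_boot_secondary() is an illustrative name, not part of this patch:

static int virt_boot_secondary(unsigned int cpu, struct task_struct *idle)
{
	/* Firmware may not implement CPU_ON; psci_init() only sets this
	 * hook when a "cpu_on" function ID is present in the DT node. */
	if (!psci_ops.cpu_on)
		return -ENODEV;

	return psci_ops.cpu_on(cpu_logical_map(cpu),
			       virt_to_phys(secondary_startup));
}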
...@@ -19,7 +19,11 @@ ...@@ -19,7 +19,11 @@
ALIGN_FUNCTION(); \ ALIGN_FUNCTION(); \
VMLINUX_SYMBOL(__idmap_text_start) = .; \ VMLINUX_SYMBOL(__idmap_text_start) = .; \
*(.idmap.text) \ *(.idmap.text) \
VMLINUX_SYMBOL(__idmap_text_end) = .; VMLINUX_SYMBOL(__idmap_text_end) = .; \
ALIGN_FUNCTION(); \
VMLINUX_SYMBOL(__hyp_idmap_text_start) = .; \
*(.hyp.idmap.text) \
VMLINUX_SYMBOL(__hyp_idmap_text_end) = .;
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
#define ARM_CPU_DISCARD(x) #define ARM_CPU_DISCARD(x)
......
#
# KVM configuration
#
source "virt/kvm/Kconfig"
menuconfig VIRTUALIZATION
bool "Virtualization"
---help---
Say Y here to get to see options for using your Linux host to run
other operating systems inside virtual machines (guests).
This option alone does not add any kernel code.
If you say N, all options in this submenu will be skipped and
disabled.
if VIRTUALIZATION
config KVM
bool "Kernel-based Virtual Machine (KVM) support"
select PREEMPT_NOTIFIERS
select ANON_INODES
select KVM_MMIO
select KVM_ARM_HOST
depends on ARM_VIRT_EXT && ARM_LPAE
---help---
Support hosting virtualized guest machines. You will also
need to select one or more of the processor modules below.
This module provides access to the hardware capabilities through
a character device node named /dev/kvm.
If unsure, say N.
config KVM_ARM_HOST
bool "KVM host support for ARM cpus."
depends on KVM
depends on MMU
select MMU_NOTIFIER
---help---
Provides host support for ARM processors.
config KVM_ARM_MAX_VCPUS
int "Number maximum supported virtual CPUs per VM"
depends on KVM_ARM_HOST
default 4
help
Static number of max supported virtual CPUs per VM.
If you choose a high number, the vcpu structures will be quite
large, so only choose a reasonable number that you expect to
actually use.
source drivers/virtio/Kconfig
endif # VIRTUALIZATION
#
# Makefile for Kernel-based Virtual Machine module
#
plus_virt := $(call as-instr,.arch_extension virt,+virt)
ifeq ($(plus_virt),+virt)
plus_virt_def := -DREQUIRES_VIRT=1
endif
ccflags-y += -Ivirt/kvm -Iarch/arm/kvm
CFLAGS_arm.o := -I. $(plus_virt_def)
CFLAGS_mmu.o := -I.
AFLAGS_init.o := -Wa,-march=armv7-a$(plus_virt)
AFLAGS_interrupts.o := -Wa,-march=armv7-a$(plus_virt)
kvm-arm-y = $(addprefix ../../../virt/kvm/, kvm_main.o coalesced_mmio.o)
obj-y += kvm-arm.o init.o interrupts.o
obj-y += arm.o guest.o mmu.o emulate.o reset.o
obj-y += coproc.o coproc_a15.o mmio.o psci.o
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Authors: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef __ARM_KVM_COPROC_LOCAL_H__
#define __ARM_KVM_COPROC_LOCAL_H__
struct coproc_params {
unsigned long CRn;
unsigned long CRm;
unsigned long Op1;
unsigned long Op2;
unsigned long Rt1;
unsigned long Rt2;
bool is_64bit;
bool is_write;
};
struct coproc_reg {
/* MRC/MCR/MRRC/MCRR instruction which accesses it. */
unsigned long CRn;
unsigned long CRm;
unsigned long Op1;
unsigned long Op2;
bool is_64;
/* Trapped access from guest, if non-NULL. */
bool (*access)(struct kvm_vcpu *,
const struct coproc_params *,
const struct coproc_reg *);
/* Initialization for vcpu. */
void (*reset)(struct kvm_vcpu *, const struct coproc_reg *);
/* Index into vcpu->arch.cp15[], or 0 if we don't need to save it. */
unsigned long reg;
/* Value (usually reset value) */
u64 val;
};
static inline void print_cp_instr(const struct coproc_params *p)
{
/* Look, we even formatted it for you to paste into the table! */
if (p->is_64bit) {
kvm_pr_unimpl(" { CRm(%2lu), Op1(%2lu), is64, func_%s },\n",
p->CRm, p->Op1, p->is_write ? "write" : "read");
} else {
kvm_pr_unimpl(" { CRn(%2lu), CRm(%2lu), Op1(%2lu), Op2(%2lu), is32,"
" func_%s },\n",
p->CRn, p->CRm, p->Op1, p->Op2,
p->is_write ? "write" : "read");
}
}
static inline bool ignore_write(struct kvm_vcpu *vcpu,
const struct coproc_params *p)
{
return true;
}
static inline bool read_zero(struct kvm_vcpu *vcpu,
const struct coproc_params *p)
{
*vcpu_reg(vcpu, p->Rt1) = 0;
return true;
}
static inline bool write_to_read_only(struct kvm_vcpu *vcpu,
const struct coproc_params *params)
{
kvm_debug("CP15 write to read-only register at: %08x\n",
*vcpu_pc(vcpu));
print_cp_instr(params);
return false;
}
static inline bool read_from_write_only(struct kvm_vcpu *vcpu,
const struct coproc_params *params)
{
kvm_debug("CP15 read to write-only register at: %08x\n",
*vcpu_pc(vcpu));
print_cp_instr(params);
return false;
}
/* Reset functions */
static inline void reset_unknown(struct kvm_vcpu *vcpu,
const struct coproc_reg *r)
{
BUG_ON(!r->reg);
BUG_ON(r->reg >= ARRAY_SIZE(vcpu->arch.cp15));
vcpu->arch.cp15[r->reg] = 0xdecafbad;
}
static inline void reset_val(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
{
BUG_ON(!r->reg);
BUG_ON(r->reg >= ARRAY_SIZE(vcpu->arch.cp15));
vcpu->arch.cp15[r->reg] = r->val;
}
static inline void reset_unknown64(struct kvm_vcpu *vcpu,
const struct coproc_reg *r)
{
BUG_ON(!r->reg);
BUG_ON(r->reg + 1 >= ARRAY_SIZE(vcpu->arch.cp15));
vcpu->arch.cp15[r->reg] = 0xdecafbad;
vcpu->arch.cp15[r->reg+1] = 0xd0c0ffee;
}
static inline int cmp_reg(const struct coproc_reg *i1,
const struct coproc_reg *i2)
{
BUG_ON(i1 == i2);
if (!i1)
return 1;
else if (!i2)
return -1;
if (i1->CRn != i2->CRn)
return i1->CRn - i2->CRn;
if (i1->CRm != i2->CRm)
return i1->CRm - i2->CRm;
if (i1->Op1 != i2->Op1)
return i1->Op1 - i2->Op1;
return i1->Op2 - i2->Op2;
}
#define CRn(_x) .CRn = _x
#define CRm(_x) .CRm = _x
#define Op1(_x) .Op1 = _x
#define Op2(_x) .Op2 = _x
#define is64 .is_64 = true
#define is32 .is_64 = false
#endif /* __ARM_KVM_COPROC_LOCAL_H__ */
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Authors: Rusty Russell <rusty@rustcorp.com.au>
* Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <linux/kvm_host.h>
#include <asm/cputype.h>
#include <asm/kvm_arm.h>
#include <asm/kvm_host.h>
#include <asm/kvm_emulate.h>
#include <asm/kvm_coproc.h>
#include <linux/init.h>
static void reset_mpidr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
{
/*
* Compute guest MPIDR:
* (Even if we present only one VCPU to the guest on an SMP
* host we don't set the U bit in the MPIDR, or vice versa, as
* revealing the underlying hardware properties is likely to
* be the best choice).
*/
vcpu->arch.cp15[c0_MPIDR] = (read_cpuid_mpidr() & ~MPIDR_LEVEL_MASK)
| (vcpu->vcpu_id & MPIDR_LEVEL_MASK);
}
#include "coproc.h"
/* A15 TRM 4.3.28: RO WI */
static bool access_actlr(struct kvm_vcpu *vcpu,
const struct coproc_params *p,
const struct coproc_reg *r)
{
if (p->is_write)
return ignore_write(vcpu, p);
*vcpu_reg(vcpu, p->Rt1) = vcpu->arch.cp15[c1_ACTLR];
return true;
}
/* A15 TRM 4.3.60: R/O. */
static bool access_cbar(struct kvm_vcpu *vcpu,
const struct coproc_params *p,
const struct coproc_reg *r)
{
if (p->is_write)
return write_to_read_only(vcpu, p);
return read_zero(vcpu, p);
}
/* A15 TRM 4.3.48: R/O WI. */
static bool access_l2ctlr(struct kvm_vcpu *vcpu,
const struct coproc_params *p,
const struct coproc_reg *r)
{
if (p->is_write)
return ignore_write(vcpu, p);
*vcpu_reg(vcpu, p->Rt1) = vcpu->arch.cp15[c9_L2CTLR];
return true;
}
static void reset_l2ctlr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
{
u32 l2ctlr, ncores;
asm volatile("mrc p15, 1, %0, c9, c0, 2\n" : "=r" (l2ctlr));
l2ctlr &= ~(3 << 24);
ncores = atomic_read(&vcpu->kvm->online_vcpus) - 1;
l2ctlr |= (ncores & 3) << 24;
vcpu->arch.cp15[c9_L2CTLR] = l2ctlr;
}
static void reset_actlr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
{
u32 actlr;
/* ACTLR contains SMP bit: make sure you create all cpus first! */
asm volatile("mrc p15, 0, %0, c1, c0, 1\n" : "=r" (actlr));
/* Make the SMP bit consistent with the guest configuration */
if (atomic_read(&vcpu->kvm->online_vcpus) > 1)
actlr |= 1U << 6;
else
actlr &= ~(1U << 6);
vcpu->arch.cp15[c1_ACTLR] = actlr;
}
/* A15 TRM 4.3.49: R/O WI (even if NSACR.NS_L2ERR, a write of 1 is ignored). */
static bool access_l2ectlr(struct kvm_vcpu *vcpu,
const struct coproc_params *p,
const struct coproc_reg *r)
{
if (p->is_write)
return ignore_write(vcpu, p);
*vcpu_reg(vcpu, p->Rt1) = 0;
return true;
}
/*
* A15-specific CP15 registers.
* Important: Must be sorted ascending by CRn, CRM, Op1, Op2
*/
static const struct coproc_reg a15_regs[] = {
/* MPIDR: we use VMPIDR for guest access. */
{ CRn( 0), CRm( 0), Op1( 0), Op2( 5), is32,
NULL, reset_mpidr, c0_MPIDR },
/* SCTLR: swapped by interrupt.S. */
{ CRn( 1), CRm( 0), Op1( 0), Op2( 0), is32,
NULL, reset_val, c1_SCTLR, 0x00C50078 },
/* ACTLR: trapped by HCR.TAC bit. */
{ CRn( 1), CRm( 0), Op1( 0), Op2( 1), is32,
access_actlr, reset_actlr, c1_ACTLR },
/* CPACR: swapped by interrupt.S. */
{ CRn( 1), CRm( 0), Op1( 0), Op2( 2), is32,
NULL, reset_val, c1_CPACR, 0x00000000 },
/*
* L2CTLR access (guest wants to know #CPUs).
*/
{ CRn( 9), CRm( 0), Op1( 1), Op2( 2), is32,
access_l2ctlr, reset_l2ctlr, c9_L2CTLR },
{ CRn( 9), CRm( 0), Op1( 1), Op2( 3), is32, access_l2ectlr },
/* The Configuration Base Address Register. */
{ CRn(15), CRm( 0), Op1( 4), Op2( 0), is32, access_cbar },
};
static struct kvm_coproc_target_table a15_target_table = {
.target = KVM_ARM_TARGET_CORTEX_A15,
.table = a15_regs,
.num = ARRAY_SIZE(a15_regs),
};
static int __init coproc_a15_init(void)
{
unsigned int i;
for (i = 1; i < ARRAY_SIZE(a15_regs); i++)
BUG_ON(cmp_reg(&a15_regs[i-1],
&a15_regs[i]) >= 0);
kvm_register_target_coproc_table(&a15_target_table);
return 0;
}
late_initcall(coproc_a15_init);
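To illustrate the conventions this file relies on, a hypothetical extra handler built from the coproc.h helpers might look like the following; the register, its encoding and access_example() are invented for illustration, and a real entry would have to keep the table sorted by (CRn, CRm, Op1, Op2) or the BUG_ON above would fire:

/* Hypothetical read-as-zero, write-ignored register; not part of this patch. */
static bool access_example(struct kvm_vcpu *vcpu,
			   const struct coproc_params *p,
			   const struct coproc_reg *r)
{
	if (p->is_write)
		return ignore_write(vcpu, p);
	return read_zero(vcpu, p);
}

/* Table entry: { CRn(15), CRm( 0), Op1( 0), Op2( 1), is32, access_example }, */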
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <linux/linkage.h>
#include <asm/unified.h>
#include <asm/asm-offsets.h>
#include <asm/kvm_asm.h>
#include <asm/kvm_arm.h>
/********************************************************************
* Hypervisor initialization
* - should be called with:
* r0,r1 = Hypervisor pgd pointer
* r2 = top of Hyp stack (kernel VA)
* r3 = pointer to hyp vectors
*/
.text
.pushsection .hyp.idmap.text,"ax"
.align 5
__kvm_hyp_init:
.globl __kvm_hyp_init
@ Hyp-mode exception vector
W(b) .
W(b) .
W(b) .
W(b) .
W(b) .
W(b) __do_hyp_init
W(b) .
W(b) .
__do_hyp_init:
@ Set the HTTBR to point to the hypervisor PGD pointer passed
mcrr p15, 4, r0, r1, c2
@ Set the HTCR and VTCR to the same shareability and cacheability
@ settings as the non-secure TTBCR and with T0SZ == 0.
mrc p15, 4, r0, c2, c0, 2 @ HTCR
ldr r12, =HTCR_MASK
bic r0, r0, r12
mrc p15, 0, r1, c2, c0, 2 @ TTBCR
and r1, r1, #(HTCR_MASK & ~TTBCR_T0SZ)
orr r0, r0, r1
mcr p15, 4, r0, c2, c0, 2 @ HTCR
mrc p15, 4, r1, c2, c1, 2 @ VTCR
ldr r12, =VTCR_MASK
bic r1, r1, r12
bic r0, r0, #(~VTCR_HTCR_SH) @ clear non-reusable HTCR bits
orr r1, r0, r1
orr r1, r1, #(KVM_VTCR_SL0 | KVM_VTCR_T0SZ | KVM_VTCR_S)
mcr p15, 4, r1, c2, c1, 2 @ VTCR
@ Use the same memory attributes for hyp. accesses as the kernel
@ (copy MAIRx to HMAIRx).
mrc p15, 0, r0, c10, c2, 0
mcr p15, 4, r0, c10, c2, 0
mrc p15, 0, r0, c10, c2, 1
mcr p15, 4, r0, c10, c2, 1
@ Set the HSCTLR to:
@ - ARM/THUMB exceptions: Kernel config (Thumb-2 kernel)
@ - Endianness: Kernel config
@ - Fast Interrupt Features: Kernel config
@ - Write permission implies XN: disabled
@ - Instruction cache: enabled
@ - Data/Unified cache: enabled
@ - Memory alignment checks: enabled
@ - MMU: enabled (this code must be run from an identity mapping)
mrc p15, 4, r0, c1, c0, 0 @ HSCTLR
ldr r12, =HSCTLR_MASK
bic r0, r0, r12
mrc p15, 0, r1, c1, c0, 0 @ SCTLR
ldr r12, =(HSCTLR_EE | HSCTLR_FI | HSCTLR_I | HSCTLR_C)
and r1, r1, r12
ARM( ldr r12, =(HSCTLR_M | HSCTLR_A) )
THUMB( ldr r12, =(HSCTLR_M | HSCTLR_A | HSCTLR_TE) )
orr r1, r1, r12
orr r0, r0, r1
isb
mcr p15, 4, r0, c1, c0, 0 @ HSCTLR
isb
@ Set stack pointer and return to the kernel
mov sp, r2
@ Set HVBAR to point to the HYP vectors
mcr p15, 4, r3, c12, c0, 0 @ HVBAR
eret
.ltorg
.globl __kvm_hyp_init_end
__kvm_hyp_init_end:
.popsection
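A hedged host-side sketch of honouring the r0-r3 contract described in the header comment above. This is not the kernel's actual cpu_init_hyp_mode() path; it assumes HVBAR already points at the __kvm_hyp_init vector page and that LPAE (64-bit phys_addr_t) is in use, as the KVM Kconfig above requires:

static void init_hyp_mode_sketch(phys_addr_t pgd_ptr,
				 unsigned long stack_ptr,
				 unsigned long vector_ptr)
{
	register unsigned long r0 asm("r0") = (unsigned long)pgd_ptr;
	register unsigned long r1 asm("r1") = (unsigned long)(pgd_ptr >> 32);
	register unsigned long r2 asm("r2") = stack_ptr;
	register unsigned long r3 asm("r3") = vector_ptr;

	/* Trap to HYP; __do_hyp_init picks the arguments out of r0-r3. */
	asm volatile(__HVC(0) : : "r" (r0), "r" (r1), "r" (r2), "r" (r3));
}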
...@@ -629,8 +629,9 @@ config ARM_THUMBEE ...@@ -629,8 +629,9 @@ config ARM_THUMBEE
make use of it. Say N for code that can run on CPUs without ThumbEE. make use of it. Say N for code that can run on CPUs without ThumbEE.
config ARM_VIRT_EXT config ARM_VIRT_EXT
bool "Native support for the ARM Virtualization Extensions" bool
depends on MMU && CPU_V7 depends on MMU
default y if CPU_V7
help help
Enable the kernel to make use of the ARM Virtualization Enable the kernel to make use of the ARM Virtualization
Extensions to install hypervisors without run-time firmware Extensions to install hypervisors without run-time firmware
...@@ -640,11 +641,6 @@ config ARM_VIRT_EXT ...@@ -640,11 +641,6 @@ config ARM_VIRT_EXT
use of this feature. Refer to Documentation/arm/Booting for use of this feature. Refer to Documentation/arm/Booting for
details. details.
It is safe to enable this option even if the kernel may not be
booted in HYP mode, may not have support for the
virtualization extensions, or may be booted with a
non-compliant bootloader.
config SWP_EMULATE config SWP_EMULATE
bool "Emulate SWP/SWPB instructions" bool "Emulate SWP/SWPB instructions"
depends on !CPU_USE_DOMAINS && CPU_V7 depends on !CPU_USE_DOMAINS && CPU_V7
......
...@@ -115,6 +115,7 @@ struct kvm_irq_level { ...@@ -115,6 +115,7 @@ struct kvm_irq_level {
* ACPI gsi notion of irq. * ACPI gsi notion of irq.
* For IA-64 (APIC model) IOAPIC0: irq 0-23; IOAPIC1: irq 24-47.. * For IA-64 (APIC model) IOAPIC0: irq 0-23; IOAPIC1: irq 24-47..
* For X86 (standard AT mode) PIC0/1: irq 0-15. IOAPIC0: 0-23.. * For X86 (standard AT mode) PIC0/1: irq 0-15. IOAPIC0: 0-23..
* For ARM: See Documentation/virtual/kvm/api.txt
*/ */
union { union {
__u32 irq; __u32 irq;
...@@ -635,6 +636,7 @@ struct kvm_ppc_smmu_info { ...@@ -635,6 +636,7 @@ struct kvm_ppc_smmu_info {
#define KVM_CAP_IRQFD_RESAMPLE 82 #define KVM_CAP_IRQFD_RESAMPLE 82
#define KVM_CAP_PPC_BOOKE_WATCHDOG 83 #define KVM_CAP_PPC_BOOKE_WATCHDOG 83
#define KVM_CAP_PPC_HTAB_FD 84 #define KVM_CAP_PPC_HTAB_FD 84
#define KVM_CAP_ARM_PSCI 87
#ifdef KVM_CAP_IRQ_ROUTING #ifdef KVM_CAP_IRQ_ROUTING
...@@ -764,6 +766,11 @@ struct kvm_dirty_tlb { ...@@ -764,6 +766,11 @@ struct kvm_dirty_tlb {
#define KVM_REG_SIZE_U512 0x0060000000000000ULL #define KVM_REG_SIZE_U512 0x0060000000000000ULL
#define KVM_REG_SIZE_U1024 0x0070000000000000ULL #define KVM_REG_SIZE_U1024 0x0070000000000000ULL
struct kvm_reg_list {
__u64 n; /* number of regs */
__u64 reg[0];
};
struct kvm_one_reg { struct kvm_one_reg {
__u64 id; __u64 id;
__u64 addr; __u64 addr;
...@@ -932,6 +939,8 @@ struct kvm_s390_ucas_mapping { ...@@ -932,6 +939,8 @@ struct kvm_s390_ucas_mapping {
#define KVM_SET_ONE_REG _IOW(KVMIO, 0xac, struct kvm_one_reg) #define KVM_SET_ONE_REG _IOW(KVMIO, 0xac, struct kvm_one_reg)
/* VM is being stopped by host */ /* VM is being stopped by host */
#define KVM_KVMCLOCK_CTRL _IO(KVMIO, 0xad) #define KVM_KVMCLOCK_CTRL _IO(KVMIO, 0xad)
#define KVM_ARM_VCPU_INIT _IOW(KVMIO, 0xae, struct kvm_vcpu_init)
#define KVM_GET_REG_LIST _IOWR(KVMIO, 0xb0, struct kvm_reg_list)
#define KVM_DEV_ASSIGN_ENABLE_IOMMU (1 << 0) #define KVM_DEV_ASSIGN_ENABLE_IOMMU (1 << 0)
#define KVM_DEV_ASSIGN_PCI_2_3 (1 << 1) #define KVM_DEV_ASSIGN_PCI_2_3 (1 << 1)
......
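A hedged userspace sketch of the new KVM_GET_REG_LIST ioctl declared above; the probe-then-fetch pattern (a first call sized at zero to learn the count) is an assumption about typical usage, not quoted from api.txt:

#include <linux/kvm.h>
#include <stdlib.h>
#include <sys/ioctl.h>

static struct kvm_reg_list *get_reg_list(int vcpu_fd)
{
	struct kvm_reg_list probe = { .n = 0 }, *list;

	/* The undersized call is expected to fail but report the real count. */
	ioctl(vcpu_fd, KVM_GET_REG_LIST, &probe);

	list = calloc(1, sizeof(*list) + probe.n * sizeof(__u64));
	if (!list)
		return NULL;
	list->n = probe.n;

	if (ioctl(vcpu_fd, KVM_GET_REG_LIST, list) < 0) {
		free(list);
		return NULL;
	}
	return list;	/* list->reg[0..n-1] now hold KVM_REG_* ids */
}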