Commit 7e4dc77b authored by Linus Torvalds

Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull perf updates from Ingo Molnar:
 "With over 300 commits it's been a busy cycle - with most of the work
  concentrated on the tooling side (as it should).

  The main kernel side enhancements were:

   - Add per event callchain limit: Recently we introduced a sysctl to
     tune the max-stack for all events for which callchains were
     requested:

       $ sysctl kernel.perf_event_max_stack
       kernel.perf_event_max_stack = 127

     Now this patch introduces a way to configure this per event, i.e.
     this becomes possible:

       $ perf record -e sched:*/max-stack=2/ -e block:*/max-stack=10/ -a

     allowing finer tuning of how much buffer space callchains use.

     This uses a u16 from the reserved space at the end, leaving another
     u16 for future use.

     There has been interest in even finer tuning, namely controlling
     the max stack for kernel and userspace callchains separately.  That
     needs further discussion; we may, for instance, use the remaining
     u16 for it and, when it is present, assume that the
     sample_max_stack introduced in this patch applies to the kernel,
     with the u16 left limiting the userspace callchain (Arnaldo
     Carvalho de Melo).  A minimal usage sketch follows this list.

   - Optimize AUX event (hardware assisted side-band event) delivery
     (Kan Liang)

   - Rework Intel family name macro usage (this is partially x86 arch
     work) (Dave Hansen)

   - Refine and fix Intel LBR support (David Carrillo-Cisneros)

   - Add support for Intel 'TopDown' events (Andi Kleen)

   - Intel uncore PMU driver fixes and enhancements (Kan Liang)

   - ... other misc changes.
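
  As a minimal, hypothetical illustration of the new per-event cap (not
  code from this merge), sample_max_stack can be set directly when
  calling sys_perf_event_open(); per the changes below, a value above
  the kernel.perf_event_max_stack sysctl is rejected with -EOVERFLOW:

     /* Sketch only: open a software event whose callchains are capped
      * at 2 frames via the new perf_event_attr.sample_max_stack field. */
     #include <linux/perf_event.h>
     #include <sys/syscall.h>
     #include <unistd.h>
     #include <string.h>
     #include <stdio.h>

     int main(void)
     {
             struct perf_event_attr attr;
             int fd;

             memset(&attr, 0, sizeof(attr));
             attr.size             = sizeof(attr);
             attr.type             = PERF_TYPE_SOFTWARE;
             attr.config           = PERF_COUNT_SW_CPU_CLOCK;
             attr.sample_period    = 100000;
             attr.sample_type      = PERF_SAMPLE_IP | PERF_SAMPLE_CALLCHAIN;
             attr.sample_max_stack = 2;   /* per-event callchain cap */

             /* Monitor the current task on any CPU. */
             fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
             if (fd < 0)
                     perror("perf_event_open");
             else
                     close(fd);
             return 0;
     }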

  Here's an incomplete list of the tooling enhancements (but there's
  much more, see the shortlog and the git log for details):

   - Support cross unwinding, i.e. collecting '--call-graph dwarf'
     perf.data files on one machine and then doing the analysis on
     another machine with a different hardware architecture.  This makes
     it possible, for instance, to do:

       $ perf record -a --call-graph dwarf

     on an x86-32 or aarch64 system and then run 'perf report' on it on
     an x86_64 workstation (He Kuang)

   - Allow reading from a backward ring buffer (one set up via
     sys_perf_event_open() with perf_event_attr.write_backward = 1, as
     in the sketch after this list) (Wang Nan)

   - Finish merging initial SDT (Statically Defined Traces) support, see
     cset comments for details about how it all works (Masami Hiramatsu)

   - Support attaching eBPF programs to tracepoints (Wang Nan)

   - Add demangling of symbols in programs written in the Rust language
     (David Tolnay)

   - Add support for tracepoints in the python binding, including an
     example that sets up and parses sched:sched_switch events
     (tools/perf/python/tracepoint.py) (Jiri Olsa)

   - Introduce --stdio-color to set the color output mode in 'annotate'
     and 'report', allowing color escape sequences to be emitted even
     when the output of these tools is redirected (Arnaldo Carvalho de
     Melo)

   - Add 'callindent' option to 'perf script -F', to indent the Intel PT
     call stack, making this output more ftrace-like (Adrian Hunter,
     Andi Kleen)

   - Allow dumping the object files generated by llvm when processing
     eBPF scriptlet events (Wang Nan)

   - Add a stackcollapse.py script to help generate flame graphs (Paolo
     Bonzini)

   - Add an --ldlat option to 'perf mem' to specify the load latency for
     load events (e.g. cpu/mem-loads/) (Jiri Olsa)

   - Tooling support for Intel TopDown counters, recently added to the
     kernel (Andi Kleen)"
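
A minimal, hypothetical sketch of the backward ring buffer setup mentioned
above (not code from this merge): the event is opened with
perf_event_attr.write_backward = 1 and its ring buffer is mapped without
PROT_WRITE, after which the kernel writes records from the end of the
buffer backwards.  Record parsing is omitted and UAPI headers new enough
to expose the write_backward bit are assumed:

    /* Sketch only: set up (but do not parse) a backward ring buffer. */
    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
            struct perf_event_attr attr;
            size_t len = 9 * sysconf(_SC_PAGESIZE);   /* header + 8 data pages */
            void *rb;
            int fd;

            memset(&attr, 0, sizeof(attr));
            attr.size           = sizeof(attr);
            attr.type           = PERF_TYPE_SOFTWARE;
            attr.config         = PERF_COUNT_SW_CONTEXT_SWITCHES;
            attr.sample_period  = 1;
            attr.sample_type    = PERF_SAMPLE_TID | PERF_SAMPLE_TIME;
            attr.write_backward = 1;        /* kernel fills the buffer backward */

            fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
            if (fd < 0) {
                    perror("perf_event_open");
                    return 1;
            }

            /* Backward ring buffers must be mapped read-only. */
            rb = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
            if (rb == MAP_FAILED)
                    perror("mmap");

            close(fd);
            return 0;
    }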

* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (303 commits)
  perf tests: Add is_printable_array test
  perf tools: Make is_printable_array global
  perf script python: Fix string vs byte array resolving
  perf probe: Warn unmatched function filter correctly
  perf cpu_map: Add more helpers
  perf stat: Balance opening and reading events
  tools: Copy linux/{hash,poison}.h and check for drift
  perf tools: Remove include/linux/list.h from perf's MANIFEST
  tools: Copy the bitops files accessed from the kernel and check for drift
  Remove: kernel unistd*h files from perf's MANIFEST, not used
  perf tools: Remove tools/perf/util/include/linux/const.h
  perf tools: Remove tools/perf/util/include/asm/byteorder.h
  perf tools: Add missing linux/compiler.h include to perf-sys.h
  perf jit: Remove some no-op error handling
  perf jit: Add missing curly braces
  objtool: Initialize variable to silence old compiler
  objtool: Add -I$(srctree)/tools/arch/$(ARCH)/include/uapi
  perf record: Add --tail-synthesize option
  perf session: Don't warn about out of order event if write_backward is used
  perf tools: Enable overwrite settings
  ...
parents 89e7eb09 5048c2af
......@@ -1040,7 +1040,7 @@ ifdef CONFIG_STACK_VALIDATION
ifeq ($(has_libelf),1)
objtool_target := tools/objtool FORCE
else
$(warning "Cannot use CONFIG_STACK_VALIDATION, please install libelf-dev or elfutils-libelf-devel")
$(warning "Cannot use CONFIG_STACK_VALIDATION, please install libelf-dev, libelf-devel or elfutils-libelf-devel")
SKIP_STACK_VALIDATION := 1
export SKIP_STACK_VALIDATION
endif
......
......@@ -139,8 +139,8 @@ struct kvm_arch_memory_slot {
#define ARM_CP15_REG64(...) __ARM_CP15_REG64(__VA_ARGS__)
#define KVM_REG_ARM_TIMER_CTL ARM_CP15_REG32(0, 14, 3, 1)
#define KVM_REG_ARM_TIMER_CNT ARM_CP15_REG64(1, 14)
#define KVM_REG_ARM_TIMER_CVAL ARM_CP15_REG64(3, 14)
#define KVM_REG_ARM_TIMER_CNT ARM_CP15_REG64(1, 14)
#define KVM_REG_ARM_TIMER_CVAL ARM_CP15_REG64(3, 14)
/* Normal registers are mapped as coprocessor 16. */
#define KVM_REG_ARM_CORE (0x0010 << KVM_REG_ARM_COPROC_SHIFT)
......
......@@ -1622,6 +1622,29 @@ ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr, cha
}
EXPORT_SYMBOL_GPL(events_sysfs_show);
ssize_t events_ht_sysfs_show(struct device *dev, struct device_attribute *attr,
char *page)
{
struct perf_pmu_events_ht_attr *pmu_attr =
container_of(attr, struct perf_pmu_events_ht_attr, attr);
/*
* Report conditional events depending on Hyper-Threading.
*
* This is overly conservative as usually the HT special
* handling is not needed if the other CPU thread is idle.
*
* Note this does not (and cannot) handle the case when thread
* siblings are invisible, for example with virtualization
* if they are owned by some other guest. The user tool
* has to re-read when a thread sibling gets onlined later.
*/
return sprintf(page, "%s",
topology_max_smt_threads() > 1 ?
pmu_attr->event_str_ht :
pmu_attr->event_str_noht);
}
EVENT_ATTR(cpu-cycles, CPU_CYCLES );
EVENT_ATTR(instructions, INSTRUCTIONS );
EVENT_ATTR(cache-references, CACHE_REFERENCES );
......
......@@ -89,6 +89,7 @@
#include <linux/slab.h>
#include <linux/perf_event.h>
#include <asm/cpu_device_id.h>
#include <asm/intel-family.h>
#include "../perf_event.h"
MODULE_LICENSE("GPL");
......@@ -511,37 +512,37 @@ static const struct cstate_model slm_cstates __initconst = {
{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (unsigned long) &(states) }
static const struct x86_cpu_id intel_cstates_match[] __initconst = {
X86_CSTATES_MODEL(30, nhm_cstates), /* 45nm Nehalem */
X86_CSTATES_MODEL(26, nhm_cstates), /* 45nm Nehalem-EP */
X86_CSTATES_MODEL(46, nhm_cstates), /* 45nm Nehalem-EX */
X86_CSTATES_MODEL(INTEL_FAM6_NEHALEM, nhm_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_NEHALEM_EP, nhm_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_NEHALEM_EX, nhm_cstates),
X86_CSTATES_MODEL(37, nhm_cstates), /* 32nm Westmere */
X86_CSTATES_MODEL(44, nhm_cstates), /* 32nm Westmere-EP */
X86_CSTATES_MODEL(47, nhm_cstates), /* 32nm Westmere-EX */
X86_CSTATES_MODEL(INTEL_FAM6_WESTMERE, nhm_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_WESTMERE_EP, nhm_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_WESTMERE_EX, nhm_cstates),
X86_CSTATES_MODEL(42, snb_cstates), /* 32nm SandyBridge */
X86_CSTATES_MODEL(45, snb_cstates), /* 32nm SandyBridge-E/EN/EP */
X86_CSTATES_MODEL(INTEL_FAM6_SANDYBRIDGE, snb_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_SANDYBRIDGE_X, snb_cstates),
X86_CSTATES_MODEL(58, snb_cstates), /* 22nm IvyBridge */
X86_CSTATES_MODEL(62, snb_cstates), /* 22nm IvyBridge-EP/EX */
X86_CSTATES_MODEL(INTEL_FAM6_IVYBRIDGE, snb_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_IVYBRIDGE_X, snb_cstates),
X86_CSTATES_MODEL(60, snb_cstates), /* 22nm Haswell Core */
X86_CSTATES_MODEL(63, snb_cstates), /* 22nm Haswell Server */
X86_CSTATES_MODEL(70, snb_cstates), /* 22nm Haswell + GT3e */
X86_CSTATES_MODEL(INTEL_FAM6_HASWELL_CORE, snb_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_HASWELL_X, snb_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_HASWELL_GT3E, snb_cstates),
X86_CSTATES_MODEL(69, hswult_cstates), /* 22nm Haswell ULT */
X86_CSTATES_MODEL(INTEL_FAM6_HASWELL_ULT, hswult_cstates),
X86_CSTATES_MODEL(55, slm_cstates), /* 22nm Atom Silvermont */
X86_CSTATES_MODEL(77, slm_cstates), /* 22nm Atom Avoton/Rangely */
X86_CSTATES_MODEL(76, slm_cstates), /* 22nm Atom Airmont */
X86_CSTATES_MODEL(INTEL_FAM6_ATOM_SILVERMONT1, slm_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_ATOM_SILVERMONT2, slm_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_ATOM_AIRMONT, slm_cstates),
X86_CSTATES_MODEL(61, snb_cstates), /* 14nm Broadwell Core-M */
X86_CSTATES_MODEL(86, snb_cstates), /* 14nm Broadwell Xeon D */
X86_CSTATES_MODEL(71, snb_cstates), /* 14nm Broadwell + GT3e */
X86_CSTATES_MODEL(79, snb_cstates), /* 14nm Broadwell Server */
X86_CSTATES_MODEL(INTEL_FAM6_BROADWELL_CORE, snb_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_BROADWELL_XEON_D, snb_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_BROADWELL_GT3E, snb_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_BROADWELL_X, snb_cstates),
X86_CSTATES_MODEL(78, snb_cstates), /* 14nm Skylake Mobile */
X86_CSTATES_MODEL(94, snb_cstates), /* 14nm Skylake Desktop */
X86_CSTATES_MODEL(INTEL_FAM6_SKYLAKE_MOBILE, snb_cstates),
X86_CSTATES_MODEL(INTEL_FAM6_SKYLAKE_DESKTOP, snb_cstates),
{ },
};
MODULE_DEVICE_TABLE(x86cpu, intel_cstates_match);
......
......@@ -77,9 +77,11 @@ static enum {
LBR_IND_JMP |\
LBR_FAR)
#define LBR_FROM_FLAG_MISPRED (1ULL << 63)
#define LBR_FROM_FLAG_IN_TX (1ULL << 62)
#define LBR_FROM_FLAG_ABORT (1ULL << 61)
#define LBR_FROM_FLAG_MISPRED BIT_ULL(63)
#define LBR_FROM_FLAG_IN_TX BIT_ULL(62)
#define LBR_FROM_FLAG_ABORT BIT_ULL(61)
#define LBR_FROM_SIGNEXT_2MSB (BIT_ULL(60) | BIT_ULL(59))
/*
* x86control flow change classification
......@@ -235,6 +237,97 @@ enum {
LBR_VALID,
};
/*
* For formats with LBR_TSX flags (e.g. LBR_FORMAT_EIP_FLAGS2), bits 61:62 in
* MSR_LAST_BRANCH_FROM_x are the TSX flags when TSX is supported, but when
* TSX is not supported they have no consistent behavior:
*
* - For wrmsr(), bits 61:62 are considered part of the sign extension.
* - For HW updates (branch captures) bits 61:62 are always OFF and are not
* part of the sign extension.
*
* Therefore, if:
*
* 1) LBR has TSX format
* 2) CPU has no TSX support enabled
*
* ... then any value passed to wrmsr() must be sign extended to 63 bits and any
* value from rdmsr() must be converted to have a 61 bits sign extension,
* ignoring the TSX flags.
*/
static inline bool lbr_from_signext_quirk_needed(void)
{
int lbr_format = x86_pmu.intel_cap.lbr_format;
bool tsx_support = boot_cpu_has(X86_FEATURE_HLE) ||
boot_cpu_has(X86_FEATURE_RTM);
return !tsx_support && (lbr_desc[lbr_format] & LBR_TSX);
}
DEFINE_STATIC_KEY_FALSE(lbr_from_quirk_key);
/* If quirk is enabled, ensure sign extension is 63 bits: */
inline u64 lbr_from_signext_quirk_wr(u64 val)
{
if (static_branch_unlikely(&lbr_from_quirk_key)) {
/*
* Sign extend into bits 61:62 while preserving bit 63.
*
* Quirk is enabled when TSX is disabled. Therefore TSX bits
* in val are always OFF and must be changed to be sign
* extension bits. Since bits 59:60 are guaranteed to be
* part of the sign extension bits, we can just copy them
* to 61:62.
*/
val |= (LBR_FROM_SIGNEXT_2MSB & val) << 2;
}
return val;
}
/*
* If quirk is needed, ensure sign extension is 61 bits:
*/
u64 lbr_from_signext_quirk_rd(u64 val)
{
if (static_branch_unlikely(&lbr_from_quirk_key)) {
/*
* Quirk is on when TSX is not enabled. Therefore TSX
* flags must be read as OFF.
*/
val &= ~(LBR_FROM_FLAG_IN_TX | LBR_FROM_FLAG_ABORT);
}
return val;
}
static inline void wrlbr_from(unsigned int idx, u64 val)
{
val = lbr_from_signext_quirk_wr(val);
wrmsrl(x86_pmu.lbr_from + idx, val);
}
static inline void wrlbr_to(unsigned int idx, u64 val)
{
wrmsrl(x86_pmu.lbr_to + idx, val);
}
static inline u64 rdlbr_from(unsigned int idx)
{
u64 val;
rdmsrl(x86_pmu.lbr_from + idx, val);
return lbr_from_signext_quirk_rd(val);
}
static inline u64 rdlbr_to(unsigned int idx)
{
u64 val;
rdmsrl(x86_pmu.lbr_to + idx, val);
return val;
}
static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
{
int i;
......@@ -251,8 +344,9 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
tos = task_ctx->tos;
for (i = 0; i < tos; i++) {
lbr_idx = (tos - i) & mask;
wrmsrl(x86_pmu.lbr_from + lbr_idx, task_ctx->lbr_from[i]);
wrmsrl(x86_pmu.lbr_to + lbr_idx, task_ctx->lbr_to[i]);
wrlbr_from(lbr_idx, task_ctx->lbr_from[i]);
wrlbr_to (lbr_idx, task_ctx->lbr_to[i]);
if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
wrmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
}
......@@ -262,9 +356,9 @@ static void __intel_pmu_lbr_restore(struct x86_perf_task_context *task_ctx)
static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
{
int i;
unsigned lbr_idx, mask;
u64 tos;
int i;
if (task_ctx->lbr_callstack_users == 0) {
task_ctx->lbr_stack_state = LBR_NONE;
......@@ -275,8 +369,8 @@ static void __intel_pmu_lbr_save(struct x86_perf_task_context *task_ctx)
tos = intel_pmu_lbr_tos();
for (i = 0; i < tos; i++) {
lbr_idx = (tos - i) & mask;
rdmsrl(x86_pmu.lbr_from + lbr_idx, task_ctx->lbr_from[i]);
rdmsrl(x86_pmu.lbr_to + lbr_idx, task_ctx->lbr_to[i]);
task_ctx->lbr_from[i] = rdlbr_from(lbr_idx);
task_ctx->lbr_to[i] = rdlbr_to(lbr_idx);
if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_INFO)
rdmsrl(MSR_LBR_INFO_0 + lbr_idx, task_ctx->lbr_info[i]);
}
......@@ -452,8 +546,8 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
u16 cycles = 0;
int lbr_flags = lbr_desc[lbr_format];
rdmsrl(x86_pmu.lbr_from + lbr_idx, from);
rdmsrl(x86_pmu.lbr_to + lbr_idx, to);
from = rdlbr_from(lbr_idx);
to = rdlbr_to(lbr_idx);
if (lbr_format == LBR_FORMAT_INFO && need_info) {
u64 info;
......@@ -956,7 +1050,6 @@ void __init intel_pmu_lbr_init_core(void)
* SW branch filter usage:
* - compensate for lack of HW filter
*/
pr_cont("4-deep LBR, ");
}
/* nehalem/westmere */
......@@ -977,7 +1070,6 @@ void __init intel_pmu_lbr_init_nhm(void)
* That requires LBR_FAR but that means far
* jmp need to be filtered out
*/
pr_cont("16-deep LBR, ");
}
/* sandy bridge */
......@@ -997,7 +1089,6 @@ void __init intel_pmu_lbr_init_snb(void)
* That requires LBR_FAR but that means far
* jmp need to be filtered out
*/
pr_cont("16-deep LBR, ");
}
/* haswell */
......@@ -1011,7 +1102,8 @@ void intel_pmu_lbr_init_hsw(void)
x86_pmu.lbr_sel_mask = LBR_SEL_MASK;
x86_pmu.lbr_sel_map = hsw_lbr_sel_map;
pr_cont("16-deep LBR, ");
if (lbr_from_signext_quirk_needed())
static_branch_enable(&lbr_from_quirk_key);
}
/* skylake */
......@@ -1031,7 +1123,6 @@ __init void intel_pmu_lbr_init_skl(void)
* That requires LBR_FAR but that means far
* jmp need to be filtered out
*/
pr_cont("32-deep LBR, ");
}
/* atom */
......@@ -1057,7 +1148,6 @@ void __init intel_pmu_lbr_init_atom(void)
* SW branch filter usage:
* - compensate for lack of HW filter
*/
pr_cont("8-deep LBR, ");
}
/* slm */
......@@ -1088,6 +1178,4 @@ void intel_pmu_lbr_init_knl(void)
x86_pmu.lbr_sel_mask = LBR_SEL_MASK;
x86_pmu.lbr_sel_map = snb_lbr_sel_map;
pr_cont("8-deep LBR, ");
}
......@@ -55,6 +55,7 @@
#include <linux/slab.h>
#include <linux/perf_event.h>
#include <asm/cpu_device_id.h>
#include <asm/intel-family.h>
#include "../perf_event.h"
MODULE_LICENSE("GPL");
......@@ -786,26 +787,27 @@ static const struct intel_rapl_init_fun skl_rapl_init __initconst = {
};
static const struct x86_cpu_id rapl_cpu_match[] __initconst = {
X86_RAPL_MODEL_MATCH(42, snb_rapl_init), /* Sandy Bridge */
X86_RAPL_MODEL_MATCH(45, snbep_rapl_init), /* Sandy Bridge-EP */
X86_RAPL_MODEL_MATCH(INTEL_FAM6_SANDYBRIDGE, snb_rapl_init),
X86_RAPL_MODEL_MATCH(INTEL_FAM6_SANDYBRIDGE_X, snbep_rapl_init),
X86_RAPL_MODEL_MATCH(58, snb_rapl_init), /* Ivy Bridge */
X86_RAPL_MODEL_MATCH(62, snbep_rapl_init), /* IvyTown */
X86_RAPL_MODEL_MATCH(INTEL_FAM6_IVYBRIDGE, snb_rapl_init),
X86_RAPL_MODEL_MATCH(INTEL_FAM6_IVYBRIDGE_X, snbep_rapl_init),
X86_RAPL_MODEL_MATCH(60, hsw_rapl_init), /* Haswell */
X86_RAPL_MODEL_MATCH(63, hsx_rapl_init), /* Haswell-Server */
X86_RAPL_MODEL_MATCH(69, hsw_rapl_init), /* Haswell-Celeron */
X86_RAPL_MODEL_MATCH(70, hsw_rapl_init), /* Haswell GT3e */
X86_RAPL_MODEL_MATCH(INTEL_FAM6_HASWELL_CORE, hsw_rapl_init),
X86_RAPL_MODEL_MATCH(INTEL_FAM6_HASWELL_X, hsw_rapl_init),
X86_RAPL_MODEL_MATCH(INTEL_FAM6_HASWELL_ULT, hsw_rapl_init),
X86_RAPL_MODEL_MATCH(INTEL_FAM6_HASWELL_GT3E, hsw_rapl_init),
X86_RAPL_MODEL_MATCH(61, hsw_rapl_init), /* Broadwell */
X86_RAPL_MODEL_MATCH(71, hsw_rapl_init), /* Broadwell-H */
X86_RAPL_MODEL_MATCH(79, hsx_rapl_init), /* Broadwell-Server */
X86_RAPL_MODEL_MATCH(86, hsx_rapl_init), /* Broadwell Xeon D */
X86_RAPL_MODEL_MATCH(INTEL_FAM6_BROADWELL_CORE, hsw_rapl_init),
X86_RAPL_MODEL_MATCH(INTEL_FAM6_BROADWELL_GT3E, hsw_rapl_init),
X86_RAPL_MODEL_MATCH(INTEL_FAM6_BROADWELL_X, hsw_rapl_init),
X86_RAPL_MODEL_MATCH(INTEL_FAM6_BROADWELL_XEON_D, hsw_rapl_init),
X86_RAPL_MODEL_MATCH(87, knl_rapl_init), /* Knights Landing */
X86_RAPL_MODEL_MATCH(INTEL_FAM6_XEON_PHI_KNL, knl_rapl_init),
X86_RAPL_MODEL_MATCH(78, skl_rapl_init), /* Skylake */
X86_RAPL_MODEL_MATCH(94, skl_rapl_init), /* Skylake H/S */
X86_RAPL_MODEL_MATCH(INTEL_FAM6_SKYLAKE_MOBILE, skl_rapl_init),
X86_RAPL_MODEL_MATCH(INTEL_FAM6_SKYLAKE_DESKTOP, skl_rapl_init),
X86_RAPL_MODEL_MATCH(INTEL_FAM6_SKYLAKE_X, hsx_rapl_init),
{},
};
......
#include <asm/cpu_device_id.h>
#include <asm/intel-family.h>
#include "uncore.h"
static struct intel_uncore_type *empty_uncore[] = { NULL, };
......@@ -882,7 +883,7 @@ uncore_types_init(struct intel_uncore_type **types, bool setid)
static int uncore_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct intel_uncore_type *type;
struct intel_uncore_pmu *pmu;
struct intel_uncore_pmu *pmu = NULL;
struct intel_uncore_box *box;
int phys_id, pkg, ret;
......@@ -903,20 +904,37 @@ static int uncore_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id
}
type = uncore_pci_uncores[UNCORE_PCI_DEV_TYPE(id->driver_data)];
/*
* for performance monitoring unit with multiple boxes,
* each box has a different function id.
*/
pmu = &type->pmus[UNCORE_PCI_DEV_IDX(id->driver_data)];
/* Knights Landing uses a common PCI device ID for multiple instances of
* an uncore PMU device type. There is only one entry per device type in
* the knl_uncore_pci_ids table inspite of multiple devices present for
* some device types. Hence PCI device idx would be 0 for all devices.
* So increment pmu pointer to point to an unused array element.
* Some platforms, e.g. Knights Landing, use a common PCI device ID
* for multiple instances of an uncore PMU device type. We should check
* PCI slot and func to indicate the uncore box.
*/
if (boot_cpu_data.x86_model == 87) {
while (pmu->func_id >= 0)
pmu++;
if (id->driver_data & ~0xffff) {
struct pci_driver *pci_drv = pdev->driver;
const struct pci_device_id *ids = pci_drv->id_table;
unsigned int devfn;
while (ids && ids->vendor) {
if ((ids->vendor == pdev->vendor) &&
(ids->device == pdev->device)) {
devfn = PCI_DEVFN(UNCORE_PCI_DEV_DEV(ids->driver_data),
UNCORE_PCI_DEV_FUNC(ids->driver_data));
if (devfn == pdev->devfn) {
pmu = &type->pmus[UNCORE_PCI_DEV_IDX(ids->driver_data)];
break;
}
}
ids++;
}
if (pmu == NULL)
return -ENODEV;
} else {
/*
* for performance monitoring unit with multiple boxes,
* each box has a different function id.
*/
pmu = &type->pmus[UNCORE_PCI_DEV_IDX(id->driver_data)];
}
if (WARN_ON_ONCE(pmu->boxes[pkg] != NULL))
......@@ -956,7 +974,7 @@ static int uncore_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id
static void uncore_pci_remove(struct pci_dev *pdev)
{
struct intel_uncore_box *box = pci_get_drvdata(pdev);
struct intel_uncore_box *box;
struct intel_uncore_pmu *pmu;
int i, phys_id, pkg;
......@@ -1361,30 +1379,32 @@ static const struct intel_uncore_init_fun knl_uncore_init __initconst = {
};
static const struct intel_uncore_init_fun skl_uncore_init __initconst = {
.cpu_init = skl_uncore_cpu_init,
.pci_init = skl_uncore_pci_init,
};
static const struct x86_cpu_id intel_uncore_match[] __initconst = {
X86_UNCORE_MODEL_MATCH(26, nhm_uncore_init), /* Nehalem */
X86_UNCORE_MODEL_MATCH(30, nhm_uncore_init),
X86_UNCORE_MODEL_MATCH(37, nhm_uncore_init), /* Westmere */
X86_UNCORE_MODEL_MATCH(44, nhm_uncore_init),
X86_UNCORE_MODEL_MATCH(42, snb_uncore_init), /* Sandy Bridge */
X86_UNCORE_MODEL_MATCH(58, ivb_uncore_init), /* Ivy Bridge */
X86_UNCORE_MODEL_MATCH(60, hsw_uncore_init), /* Haswell */
X86_UNCORE_MODEL_MATCH(69, hsw_uncore_init), /* Haswell Celeron */
X86_UNCORE_MODEL_MATCH(70, hsw_uncore_init), /* Haswell */
X86_UNCORE_MODEL_MATCH(61, bdw_uncore_init), /* Broadwell */
X86_UNCORE_MODEL_MATCH(71, bdw_uncore_init), /* Broadwell */
X86_UNCORE_MODEL_MATCH(45, snbep_uncore_init), /* Sandy Bridge-EP */
X86_UNCORE_MODEL_MATCH(46, nhmex_uncore_init), /* Nehalem-EX */
X86_UNCORE_MODEL_MATCH(47, nhmex_uncore_init), /* Westmere-EX aka. Xeon E7 */
X86_UNCORE_MODEL_MATCH(62, ivbep_uncore_init), /* Ivy Bridge-EP */
X86_UNCORE_MODEL_MATCH(63, hswep_uncore_init), /* Haswell-EP */
X86_UNCORE_MODEL_MATCH(79, bdx_uncore_init), /* BDX-EP */
X86_UNCORE_MODEL_MATCH(86, bdx_uncore_init), /* BDX-DE */
X86_UNCORE_MODEL_MATCH(87, knl_uncore_init), /* Knights Landing */
X86_UNCORE_MODEL_MATCH(94, skl_uncore_init), /* SkyLake */
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_NEHALEM_EP, nhm_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_NEHALEM, nhm_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_WESTMERE, nhm_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_WESTMERE_EP, nhm_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_SANDYBRIDGE, snb_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_IVYBRIDGE, ivb_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_HASWELL_CORE, hsw_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_HASWELL_ULT, hsw_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_HASWELL_GT3E, hsw_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_BROADWELL_CORE, bdw_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_BROADWELL_GT3E, bdw_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_SANDYBRIDGE_X, snbep_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_NEHALEM_EX, nhmex_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_WESTMERE_EX, nhmex_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_IVYBRIDGE_X, ivbep_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_HASWELL_X, hswep_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_BROADWELL_X, bdx_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_BROADWELL_XEON_D, bdx_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_XEON_PHI_KNL, knl_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_SKYLAKE_DESKTOP,skl_uncore_init),
X86_UNCORE_MODEL_MATCH(INTEL_FAM6_SKYLAKE_MOBILE, skl_uncore_init),
{},
};
......
......@@ -15,7 +15,11 @@
#define UNCORE_PMC_IDX_FIXED UNCORE_PMC_IDX_MAX_GENERIC
#define UNCORE_PMC_IDX_MAX (UNCORE_PMC_IDX_FIXED + 1)
#define UNCORE_PCI_DEV_FULL_DATA(dev, func, type, idx) \
((dev << 24) | (func << 16) | (type << 8) | idx)
#define UNCORE_PCI_DEV_DATA(type, idx) ((type << 8) | idx)
#define UNCORE_PCI_DEV_DEV(data) ((data >> 24) & 0xff)
#define UNCORE_PCI_DEV_FUNC(data) ((data >> 16) & 0xff)
#define UNCORE_PCI_DEV_TYPE(data) ((data >> 8) & 0xff)
#define UNCORE_PCI_DEV_IDX(data) (data & 0xff)
#define UNCORE_EXTRA_PCI_DEV 0xff
......@@ -360,6 +364,7 @@ int bdw_uncore_pci_init(void);
int skl_uncore_pci_init(void);
void snb_uncore_cpu_init(void);
void nhm_uncore_cpu_init(void);
void skl_uncore_cpu_init(void);
int snb_pci2phy_map_init(int devid);
/* perf_event_intel_uncore_snbep.c */
......
/* Nehalem/SandBridge/Haswell uncore support */
/* Nehalem/SandBridge/Haswell/Broadwell/Skylake uncore support */
#include "uncore.h"
/* Uncore IMC PCI IDs */
......@@ -9,6 +9,7 @@
#define PCI_DEVICE_ID_INTEL_HSW_U_IMC 0x0a04
#define PCI_DEVICE_ID_INTEL_BDW_IMC 0x1604
#define PCI_DEVICE_ID_INTEL_SKL_IMC 0x191f
#define PCI_DEVICE_ID_INTEL_SKL_U_IMC 0x190c
/* SNB event control */
#define SNB_UNC_CTL_EV_SEL_MASK 0x000000ff
......@@ -64,6 +65,10 @@
#define NHM_UNC_PERFEVTSEL0 0x3c0
#define NHM_UNC_UNCORE_PMC0 0x3b0
/* SKL uncore global control */
#define SKL_UNC_PERF_GLOBAL_CTL 0xe01
#define SKL_UNC_GLOBAL_CTL_CORE_ALL ((1 << 5) - 1)
DEFINE_UNCORE_FORMAT_ATTR(event, event, "config:0-7");
DEFINE_UNCORE_FORMAT_ATTR(umask, umask, "config:8-15");
DEFINE_UNCORE_FORMAT_ATTR(edge, edge, "config:18");
......@@ -179,6 +184,60 @@ void snb_uncore_cpu_init(void)
snb_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
}
static void skl_uncore_msr_init_box(struct intel_uncore_box *box)
{
if (box->pmu->pmu_idx == 0) {
wrmsrl(SKL_UNC_PERF_GLOBAL_CTL,
SNB_UNC_GLOBAL_CTL_EN | SKL_UNC_GLOBAL_CTL_CORE_ALL);
}
}
static void skl_uncore_msr_exit_box(struct intel_uncore_box *box)
{
if (box->pmu->pmu_idx == 0)
wrmsrl(SKL_UNC_PERF_GLOBAL_CTL, 0);
}
static struct intel_uncore_ops skl_uncore_msr_ops = {
.init_box = skl_uncore_msr_init_box,
.exit_box = skl_uncore_msr_exit_box,
.disable_event = snb_uncore_msr_disable_event,
.enable_event = snb_uncore_msr_enable_event,
.read_counter = uncore_msr_read_counter,
};
static struct intel_uncore_type skl_uncore_cbox = {
.name = "cbox",
.num_counters = 4,
.num_boxes = 5,
.perf_ctr_bits = 44,
.fixed_ctr_bits = 48,
.perf_ctr = SNB_UNC_CBO_0_PER_CTR0,
.event_ctl = SNB_UNC_CBO_0_PERFEVTSEL0,
.fixed_ctr = SNB_UNC_FIXED_CTR,
.fixed_ctl = SNB_UNC_FIXED_CTR_CTRL,
.single_fixed = 1,
.event_mask = SNB_UNC_RAW_EVENT_MASK,
.msr_offset = SNB_UNC_CBO_MSR_OFFSET,
.ops = &skl_uncore_msr_ops,
.format_group = &snb_uncore_format_group,
.event_descs = snb_uncore_events,
};
static struct intel_uncore_type *skl_msr_uncores[] = {
&skl_uncore_cbox,
&snb_uncore_arb,
NULL,
};
void skl_uncore_cpu_init(void)
{
uncore_msr_uncores = skl_msr_uncores;
if (skl_uncore_cbox.num_boxes > boot_cpu_data.x86_max_cores)
skl_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
snb_uncore_arb.ops = &skl_uncore_msr_ops;
}
enum {
SNB_PCI_UNCORE_IMC,
};
......@@ -544,6 +603,11 @@ static const struct pci_device_id skl_uncore_pci_ids[] = {
PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_IMC),
.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
},
{ /* IMC */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SKL_U_IMC),
.driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0),
},
{ /* end: all zeroes */ },
};
......@@ -587,6 +651,7 @@ static const struct imc_uncore_pci_dev desktop_imc_pci_ids[] = {
IMC_DEV(HSW_U_IMC, &hsw_uncore_pci_driver), /* 4th Gen Core ULT Mobile Processor */
IMC_DEV(BDW_IMC, &bdw_uncore_pci_driver), /* 5th Gen Core U */
IMC_DEV(SKL_IMC, &skl_uncore_pci_driver), /* 6th Gen Core */
IMC_DEV(SKL_U_IMC, &skl_uncore_pci_driver), /* 6th Gen Core U */
{ /* end marker */ }
};
......
......@@ -2164,21 +2164,101 @@ static struct intel_uncore_type *knl_pci_uncores[] = {
*/
static const struct pci_device_id knl_uncore_pci_ids[] = {
{ /* MC UClk */
{ /* MC0 UClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7841),
.driver_data = UNCORE_PCI_DEV_DATA(KNL_PCI_UNCORE_MC_UCLK, 0),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(10, 0, KNL_PCI_UNCORE_MC_UCLK, 0),
},
{ /* MC DClk Channel */
{ /* MC1 UClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7841),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(11, 0, KNL_PCI_UNCORE_MC_UCLK, 1),
},
{ /* MC0 DClk CH 0 */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7843),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(8, 2, KNL_PCI_UNCORE_MC_DCLK, 0),
},
{ /* MC0 DClk CH 1 */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7843),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(8, 3, KNL_PCI_UNCORE_MC_DCLK, 1),
},
{ /* MC0 DClk CH 2 */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7843),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(8, 4, KNL_PCI_UNCORE_MC_DCLK, 2),
},
{ /* MC1 DClk CH 0 */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7843),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(9, 2, KNL_PCI_UNCORE_MC_DCLK, 3),
},
{ /* MC1 DClk CH 1 */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7843),
.driver_data = UNCORE_PCI_DEV_DATA(KNL_PCI_UNCORE_MC_DCLK, 0),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(9, 3, KNL_PCI_UNCORE_MC_DCLK, 4),
},
{ /* MC1 DClk CH 2 */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7843),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(9, 4, KNL_PCI_UNCORE_MC_DCLK, 5),
},
{ /* EDC0 UClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7833),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(15, 0, KNL_PCI_UNCORE_EDC_UCLK, 0),
},
{ /* EDC1 UClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7833),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(16, 0, KNL_PCI_UNCORE_EDC_UCLK, 1),
},
{ /* EDC2 UClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7833),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(17, 0, KNL_PCI_UNCORE_EDC_UCLK, 2),
},
{ /* EDC3 UClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7833),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(18, 0, KNL_PCI_UNCORE_EDC_UCLK, 3),
},
{ /* EDC UClk */
{ /* EDC4 UClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7833),
.driver_data = UNCORE_PCI_DEV_DATA(KNL_PCI_UNCORE_EDC_UCLK, 0),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(19, 0, KNL_PCI_UNCORE_EDC_UCLK, 4),
},
{ /* EDC5 UClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7833),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(20, 0, KNL_PCI_UNCORE_EDC_UCLK, 5),
},
{ /* EDC6 UClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7833),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(21, 0, KNL_PCI_UNCORE_EDC_UCLK, 6),
},
{ /* EDC7 UClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7833),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(22, 0, KNL_PCI_UNCORE_EDC_UCLK, 7),
},
{ /* EDC0 EClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7835),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(24, 2, KNL_PCI_UNCORE_EDC_ECLK, 0),
},
{ /* EDC1 EClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7835),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(25, 2, KNL_PCI_UNCORE_EDC_ECLK, 1),
},
{ /* EDC2 EClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7835),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(26, 2, KNL_PCI_UNCORE_EDC_ECLK, 2),
},
{ /* EDC3 EClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7835),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(27, 2, KNL_PCI_UNCORE_EDC_ECLK, 3),
},
{ /* EDC4 EClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7835),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(28, 2, KNL_PCI_UNCORE_EDC_ECLK, 4),
},
{ /* EDC5 EClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7835),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(29, 2, KNL_PCI_UNCORE_EDC_ECLK, 5),
},
{ /* EDC6 EClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7835),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(30, 2, KNL_PCI_UNCORE_EDC_ECLK, 6),
},
{ /* EDC EClk */
{ /* EDC7 EClk */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7835),
.driver_data = UNCORE_PCI_DEV_DATA(KNL_PCI_UNCORE_EDC_ECLK, 0),
.driver_data = UNCORE_PCI_DEV_FULL_DATA(31, 2, KNL_PCI_UNCORE_EDC_ECLK, 7),
},
{ /* M2PCIe */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7817),
......
#include <linux/perf_event.h>
#include <asm/intel-family.h>
enum perf_msr_id {
PERF_MSR_TSC = 0,
......@@ -34,39 +35,43 @@ static bool test_intel(int idx)
return false;
switch (boot_cpu_data.x86_model) {
case 30: /* 45nm Nehalem */
case 26: /* 45nm Nehalem-EP */
case 46: /* 45nm Nehalem-EX */
case 37: /* 32nm Westmere */
case 44: /* 32nm Westmere-EP */
case 47: /* 32nm Westmere-EX */
case 42: /* 32nm SandyBridge */
case 45: /* 32nm SandyBridge-E/EN/EP */
case 58: /* 22nm IvyBridge */
case 62: /* 22nm IvyBridge-EP/EX */
case 60: /* 22nm Haswell Core */
case 63: /* 22nm Haswell Server */
case 69: /* 22nm Haswell ULT */
case 70: /* 22nm Haswell + GT3e (Intel Iris Pro graphics) */
case 61: /* 14nm Broadwell Core-M */
case 86: /* 14nm Broadwell Xeon D */
case 71: /* 14nm Broadwell + GT3e (Intel Iris Pro graphics) */
case 79: /* 14nm Broadwell Server */
case 55: /* 22nm Atom "Silvermont" */
case 77: /* 22nm Atom "Silvermont Avoton/Rangely" */
case 76: /* 14nm Atom "Airmont" */
case INTEL_FAM6_NEHALEM:
case INTEL_FAM6_NEHALEM_EP:
case INTEL_FAM6_NEHALEM_EX:
case INTEL_FAM6_WESTMERE:
case INTEL_FAM6_WESTMERE2:
case INTEL_FAM6_WESTMERE_EP:
case INTEL_FAM6_WESTMERE_EX:
case INTEL_FAM6_SANDYBRIDGE:
case INTEL_FAM6_SANDYBRIDGE_X:
case INTEL_FAM6_IVYBRIDGE:
case INTEL_FAM6_IVYBRIDGE_X:
case INTEL_FAM6_HASWELL_CORE:
case INTEL_FAM6_HASWELL_X:
case INTEL_FAM6_HASWELL_ULT:
case INTEL_FAM6_HASWELL_GT3E:
case INTEL_FAM6_BROADWELL_CORE:
case INTEL_FAM6_BROADWELL_XEON_D:
case INTEL_FAM6_BROADWELL_GT3E:
case INTEL_FAM6_BROADWELL_X:
case INTEL_FAM6_ATOM_SILVERMONT1:
case INTEL_FAM6_ATOM_SILVERMONT2:
case INTEL_FAM6_ATOM_AIRMONT:
if (idx == PERF_MSR_SMI)
return true;
break;
case 78: /* 14nm Skylake Mobile */
case 94: /* 14nm Skylake Desktop */
case INTEL_FAM6_SKYLAKE_MOBILE:
case INTEL_FAM6_SKYLAKE_DESKTOP:
case INTEL_FAM6_SKYLAKE_X:
case INTEL_FAM6_KABYLAKE_MOBILE:
case INTEL_FAM6_KABYLAKE_DESKTOP:
if (idx == PERF_MSR_SMI || idx == PERF_MSR_PPERF)
return true;
break;
......
......@@ -668,6 +668,14 @@ static struct perf_pmu_events_attr event_attr_##v = { \
.event_str = str, \
};
#define EVENT_ATTR_STR_HT(_name, v, noht, ht) \
static struct perf_pmu_events_ht_attr event_attr_##v = { \
.attr = __ATTR(_name, 0444, events_ht_sysfs_show, NULL),\
.id = 0, \
.event_str_noht = noht, \
.event_str_ht = ht, \
}
extern struct x86_pmu x86_pmu __read_mostly;
static inline bool x86_pmu_has_lbr_callstack(void)
......@@ -803,6 +811,8 @@ struct attribute **merge_attr(struct attribute **a, struct attribute **b);
ssize_t events_sysfs_show(struct device *dev, struct device_attribute *attr,
char *page);
ssize_t events_ht_sysfs_show(struct device *dev, struct device_attribute *attr,
char *page);
#ifdef CONFIG_CPU_SUP_AMD
......@@ -892,6 +902,8 @@ void intel_ds_init(void);
void intel_pmu_lbr_sched_task(struct perf_event_context *ctx, bool sched_in);
u64 lbr_from_signext_quirk_wr(u64 val);
void intel_pmu_lbr_reset(void);
void intel_pmu_lbr_enable(struct perf_event *event);
......
......@@ -129,6 +129,14 @@ extern const struct cpumask *cpu_coregroup_mask(int cpu);
extern unsigned int __max_logical_packages;
#define topology_max_packages() (__max_logical_packages)
extern int __max_smt_threads;
static inline int topology_max_smt_threads(void)
{
return __max_smt_threads;
}
int topology_update_package_map(unsigned int apicid, unsigned int cpu);
extern int topology_phys_to_logical_pkg(unsigned int pkg);
#else
......@@ -136,6 +144,7 @@ extern int topology_phys_to_logical_pkg(unsigned int pkg);
static inline int
topology_update_package_map(unsigned int apicid, unsigned int cpu) { return 0; }
static inline int topology_phys_to_logical_pkg(unsigned int pkg) { return 0; }
static inline int topology_max_smt_threads(void) { return 1; }
#endif
static inline void arch_fix_phys_package_id(int num, u32 slot)
......
......@@ -105,6 +105,9 @@ static unsigned int max_physical_pkg_id __read_mostly;
unsigned int __max_logical_packages __read_mostly;
EXPORT_SYMBOL(__max_logical_packages);
/* Maximum number of SMT threads on any online core */
int __max_smt_threads __read_mostly;
static inline void smpboot_setup_warm_reset_vector(unsigned long start_eip)
{
unsigned long flags;
......@@ -493,7 +496,7 @@ void set_cpu_sibling_map(int cpu)
bool has_mp = has_smt || boot_cpu_data.x86_max_cores > 1;
struct cpuinfo_x86 *c = &cpu_data(cpu);
struct cpuinfo_x86 *o;
int i;
int i, threads;
cpumask_set_cpu(cpu, cpu_sibling_setup_mask);
......@@ -550,6 +553,10 @@ void set_cpu_sibling_map(int cpu)
if (match_die(c, o) && !topology_same_node(c, o))
primarily_use_numa_for_topology();
}
threads = cpumask_weight(topology_sibling_cpumask(cpu));
if (threads > __max_smt_threads)
__max_smt_threads = threads;
}
/* maps the cpu to the sched domain representing multi-core */
......@@ -1441,6 +1448,21 @@ __init void prefill_possible_map(void)
#ifdef CONFIG_HOTPLUG_CPU
/* Recompute SMT state for all CPUs on offline */
static void recompute_smt_state(void)
{
int max_threads, cpu;
max_threads = 0;
for_each_online_cpu (cpu) {
int threads = cpumask_weight(topology_sibling_cpumask(cpu));
if (threads > max_threads)
max_threads = threads;
}
__max_smt_threads = max_threads;
}
static void remove_siblinginfo(int cpu)
{
int sibling;
......@@ -1465,6 +1487,7 @@ static void remove_siblinginfo(int cpu)
c->phys_proc_id = 0;
c->cpu_core_id = 0;
cpumask_clear_cpu(cpu, cpu_sibling_setup_mask);
recompute_smt_state();
}
static void remove_cpu_from_maps(int cpu)
......
......@@ -26,6 +26,7 @@
#include <linux/seq_file.h>
#include <asm/cpu_device_id.h>
#include <asm/intel-family.h>
#include <asm/pmc_core.h>
#include "intel_pmc_core.h"
......@@ -138,10 +139,10 @@ static inline void pmc_core_dbgfs_unregister(struct pmc_dev *pmcdev)
#endif /* CONFIG_DEBUG_FS */
static const struct x86_cpu_id intel_pmc_core_ids[] = {
{ X86_VENDOR_INTEL, 6, 0x4e, X86_FEATURE_MWAIT,
(kernel_ulong_t)NULL}, /* Skylake CPUID Signature */
{ X86_VENDOR_INTEL, 6, 0x5e, X86_FEATURE_MWAIT,
(kernel_ulong_t)NULL}, /* Skylake CPUID Signature */
{ X86_VENDOR_INTEL, 6, INTEL_FAM6_SKYLAKE_MOBILE, X86_FEATURE_MWAIT,
(kernel_ulong_t)NULL},
{ X86_VENDOR_INTEL, 6, INTEL_FAM6_SKYLAKE_DESKTOP, X86_FEATURE_MWAIT,
(kernel_ulong_t)NULL},
{}
};
......
......@@ -517,6 +517,11 @@ struct swevent_hlist {
struct perf_cgroup;
struct ring_buffer;
struct pmu_event_list {
raw_spinlock_t lock;
struct list_head list;
};
/**
* struct perf_event - performance event kernel representation:
*/
......@@ -675,6 +680,7 @@ struct perf_event {
int cgrp_defer_enabled;
#endif
struct list_head sb_list;
#endif /* CONFIG_PERF_EVENTS */
};
......@@ -1074,7 +1080,7 @@ extern void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct
extern struct perf_callchain_entry *
get_perf_callchain(struct pt_regs *regs, u32 init_nr, bool kernel, bool user,
u32 max_stack, bool crosstask, bool add_mark);
extern int get_callchain_buffers(void);
extern int get_callchain_buffers(int max_stack);
extern void put_callchain_buffers(void);
extern int sysctl_perf_event_max_stack;
......@@ -1326,6 +1332,13 @@ struct perf_pmu_events_attr {
const char *event_str;
};
struct perf_pmu_events_ht_attr {
struct device_attribute attr;
u64 id;
const char *event_str_ht;
const char *event_str_noht;
};
ssize_t perf_event_sysfs_show(struct device *dev, struct device_attribute *attr,
char *page);
......
......@@ -276,6 +276,9 @@ enum perf_event_read_format {
/*
* Hardware event_id to monitor via a performance monitoring event:
*
* @sample_max_stack: Max number of frame pointers in a callchain,
* should be < /proc/sys/kernel/perf_event_max_stack
*/
struct perf_event_attr {
......@@ -385,7 +388,8 @@ struct perf_event_attr {
* Wakeup watermark for AUX area
*/
__u32 aux_watermark;
__u32 __reserved_2; /* align to __u64 */
__u16 sample_max_stack;
__u16 __reserved_2; /* align to __u64 */
};
#define perf_flags(attr) (*(&(attr)->read_format + 1))
......
......@@ -99,7 +99,7 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
if (err)
goto free_smap;
err = get_callchain_buffers();
err = get_callchain_buffers(sysctl_perf_event_max_stack);
if (err)
goto free_smap;
......
......@@ -104,7 +104,7 @@ static int alloc_callchain_buffers(void)
return -ENOMEM;
}
int get_callchain_buffers(void)
int get_callchain_buffers(int event_max_stack)
{
int err = 0;
int count;
......@@ -121,6 +121,15 @@ int get_callchain_buffers(void)
/* If the allocation failed, give up */
if (!callchain_cpus_entries)
err = -ENOMEM;
/*
* If requesting per event more than the global cap,
* return a different error to help userspace figure
* this out.
*
* And also do it here so that we have &callchain_mutex held.
*/
if (event_max_stack > sysctl_perf_event_max_stack)
err = -EOVERFLOW;
goto exit;
}
......@@ -174,11 +183,12 @@ perf_callchain(struct perf_event *event, struct pt_regs *regs)
bool user = !event->attr.exclude_callchain_user;
/* Disallow cross-task user callchains. */
bool crosstask = event->ctx->task && event->ctx->task != current;
const u32 max_stack = event->attr.sample_max_stack;
if (!kernel && !user)
return NULL;
return get_perf_callchain(regs, 0, kernel, user, sysctl_perf_event_max_stack, crosstask, true);
return get_perf_callchain(regs, 0, kernel, user, max_stack, crosstask, true);
}
struct perf_callchain_entry *
......
......@@ -335,6 +335,7 @@ static atomic_t perf_sched_count;
static DEFINE_PER_CPU(atomic_t, perf_cgroup_events);
static DEFINE_PER_CPU(int, perf_sched_cb_usages);
static DEFINE_PER_CPU(struct pmu_event_list, pmu_sb_events);
static atomic_t nr_mmap_events __read_mostly;
static atomic_t nr_comm_events __read_mostly;
......@@ -396,6 +397,13 @@ int perf_proc_update_handler(struct ctl_table *table, int write,
if (ret || !write)
return ret;
/*
* If throttling is disabled don't allow the write:
*/
if (sysctl_perf_cpu_time_max_percent == 100 ||
sysctl_perf_cpu_time_max_percent == 0)
return -EINVAL;
max_samples_per_tick = DIV_ROUND_UP(sysctl_perf_event_sample_rate, HZ);
perf_sample_period_ns = NSEC_PER_SEC / sysctl_perf_event_sample_rate;
update_perf_cpu_limits();
......@@ -3686,6 +3694,39 @@ static void free_event_rcu(struct rcu_head *head)
static void ring_buffer_attach(struct perf_event *event,
struct ring_buffer *rb);
static void detach_sb_event(struct perf_event *event)
{
struct pmu_event_list *pel = per_cpu_ptr(&pmu_sb_events, event->cpu);
raw_spin_lock(&pel->lock);
list_del_rcu(&event->sb_list);
raw_spin_unlock(&pel->lock);
}
static bool is_sb_event(struct perf_event *event)
{
struct perf_event_attr *attr = &event->attr;
if (event->parent)
return false;
if (event->attach_state & PERF_ATTACH_TASK)
return false;
if (attr->mmap || attr->mmap_data || attr->mmap2 ||
attr->comm || attr->comm_exec ||
attr->task ||
attr->context_switch)
return true;
return false;
}
static void unaccount_pmu_sb_event(struct perf_event *event)
{
if (is_sb_event(event))
detach_sb_event(event);
}
static void unaccount_event_cpu(struct perf_event *event, int cpu)
{
if (event->parent)
......@@ -3749,6 +3790,8 @@ static void unaccount_event(struct perf_event *event)
}
unaccount_event_cpu(event, event->cpu);
unaccount_pmu_sb_event(event);
}
static void perf_sched_delayed(struct work_struct *work)
......@@ -5875,11 +5918,11 @@ perf_event_read_event(struct perf_event *event,
perf_output_end(&handle);
}
typedef void (perf_event_aux_output_cb)(struct perf_event *event, void *data);
typedef void (perf_iterate_f)(struct perf_event *event, void *data);
static void
perf_event_aux_ctx(struct perf_event_context *ctx,
perf_event_aux_output_cb output,
perf_iterate_ctx(struct perf_event_context *ctx,
perf_iterate_f output,
void *data, bool all)
{
struct perf_event *event;
......@@ -5896,52 +5939,55 @@ perf_event_aux_ctx(struct perf_event_context *ctx,
}
}
static void
perf_event_aux_task_ctx(perf_event_aux_output_cb output, void *data,
struct perf_event_context *task_ctx)
static void perf_iterate_sb_cpu(perf_iterate_f output, void *data)
{
rcu_read_lock();
preempt_disable();
perf_event_aux_ctx(task_ctx, output, data, false);
preempt_enable();
rcu_read_unlock();
struct pmu_event_list *pel = this_cpu_ptr(&pmu_sb_events);
struct perf_event *event;
list_for_each_entry_rcu(event, &pel->list, sb_list) {
if (event->state < PERF_EVENT_STATE_INACTIVE)
continue;
if (!event_filter_match(event))
continue;
output(event, data);
}
}
/*
* Iterate all events that need to receive side-band events.
*
* For new callers; ensure that account_pmu_sb_event() includes
* your event, otherwise it might not get delivered.
*/
static void
perf_event_aux(perf_event_aux_output_cb output, void *data,
perf_iterate_sb(perf_iterate_f output, void *data,
struct perf_event_context *task_ctx)
{
struct perf_cpu_context *cpuctx;
struct perf_event_context *ctx;
struct pmu *pmu;
int ctxn;
rcu_read_lock();
preempt_disable();
/*
* If we have task_ctx != NULL we only notify
* the task context itself. The task_ctx is set
* only for EXIT events before releasing task
* If we have task_ctx != NULL we only notify the task context itself.
* The task_ctx is set only for EXIT events before releasing task
* context.
*/
if (task_ctx) {
perf_event_aux_task_ctx(output, data, task_ctx);
return;
perf_iterate_ctx(task_ctx, output, data, false);
goto done;
}
rcu_read_lock();
list_for_each_entry_rcu(pmu, &pmus, entry) {
cpuctx = get_cpu_ptr(pmu->pmu_cpu_context);
if (cpuctx->unique_pmu != pmu)
goto next;
perf_event_aux_ctx(&cpuctx->ctx, output, data, false);
ctxn = pmu->task_ctx_nr;
if (ctxn < 0)
goto next;
perf_iterate_sb_cpu(output, data);
for_each_task_context_nr(ctxn) {
ctx = rcu_dereference(current->perf_event_ctxp[ctxn]);
if (ctx)
perf_event_aux_ctx(ctx, output, data, false);
next:
put_cpu_ptr(pmu->pmu_cpu_context);
perf_iterate_ctx(ctx, output, data, false);
}
done:
preempt_enable();
rcu_read_unlock();
}
......@@ -5990,7 +6036,7 @@ void perf_event_exec(void)
perf_event_enable_on_exec(ctxn);
perf_event_aux_ctx(ctx, perf_event_addr_filters_exec, NULL,
perf_iterate_ctx(ctx, perf_event_addr_filters_exec, NULL,
true);
}
rcu_read_unlock();
......@@ -6034,9 +6080,9 @@ static int __perf_pmu_output_stop(void *info)
};
rcu_read_lock();
perf_event_aux_ctx(&cpuctx->ctx, __perf_event_output_stop, &ro, false);
perf_iterate_ctx(&cpuctx->ctx, __perf_event_output_stop, &ro, false);
if (cpuctx->task_ctx)
perf_event_aux_ctx(cpuctx->task_ctx, __perf_event_output_stop,
perf_iterate_ctx(cpuctx->task_ctx, __perf_event_output_stop,
&ro, false);
rcu_read_unlock();
......@@ -6165,7 +6211,7 @@ static void perf_event_task(struct task_struct *task,
},
};
perf_event_aux(perf_event_task_output,
perf_iterate_sb(perf_event_task_output,
&task_event,
task_ctx);
}
......@@ -6244,7 +6290,7 @@ static void perf_event_comm_event(struct perf_comm_event *comm_event)
comm_event->event_id.header.size = sizeof(comm_event->event_id) + size;
perf_event_aux(perf_event_comm_output,
perf_iterate_sb(perf_event_comm_output,
comm_event,
NULL);
}
......@@ -6475,7 +6521,7 @@ static void perf_event_mmap_event(struct perf_mmap_event *mmap_event)
mmap_event->event_id.header.size = sizeof(mmap_event->event_id) + size;
perf_event_aux(perf_event_mmap_output,
perf_iterate_sb(perf_event_mmap_output,
mmap_event,
NULL);
......@@ -6558,7 +6604,7 @@ static void perf_addr_filters_adjust(struct vm_area_struct *vma)
if (!ctx)
continue;
perf_event_aux_ctx(ctx, __perf_addr_filters_adjust, vma, true);
perf_iterate_ctx(ctx, __perf_addr_filters_adjust, vma, true);
}
rcu_read_unlock();
}
......@@ -6745,7 +6791,7 @@ static void perf_event_switch(struct task_struct *task,
},
};
perf_event_aux(perf_event_switch_output,
perf_iterate_sb(perf_event_switch_output,
&switch_event,
NULL);
}
......@@ -8667,6 +8713,28 @@ static struct pmu *perf_init_event(struct perf_event *event)
return pmu;
}
static void attach_sb_event(struct perf_event *event)
{
struct pmu_event_list *pel = per_cpu_ptr(&pmu_sb_events, event->cpu);
raw_spin_lock(&pel->lock);
list_add_rcu(&event->sb_list, &pel->list);
raw_spin_unlock(&pel->lock);
}
/*
* We keep a list of all !task (and therefore per-cpu) events
* that need to receive side-band records.
*
* This avoids having to scan all the various PMU per-cpu contexts
* looking for them.
*/
static void account_pmu_sb_event(struct perf_event *event)
{
if (is_sb_event(event))
attach_sb_event(event);
}
static void account_event_cpu(struct perf_event *event, int cpu)
{
if (event->parent)
......@@ -8747,6 +8815,8 @@ static void account_event(struct perf_event *event)
enabled:
account_event_cpu(event, event->cpu);
account_pmu_sb_event(event);
}
/*
......@@ -8895,7 +8965,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
if (!event->parent) {
if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) {
err = get_callchain_buffers();
err = get_callchain_buffers(attr->sample_max_stack);
if (err)
goto err_addr_filters;
}
......@@ -9217,6 +9287,9 @@ SYSCALL_DEFINE5(perf_event_open,
return -EINVAL;
}
if (!attr.sample_max_stack)
attr.sample_max_stack = sysctl_perf_event_max_stack;
/*
* In cgroup mode, the pid argument is used to pass the fd
* opened to the cgroup directory in cgroupfs. The cpu argument
......@@ -9290,7 +9363,7 @@ SYSCALL_DEFINE5(perf_event_open,
if (is_sampling_event(event)) {
if (event->pmu->capabilities & PERF_PMU_CAP_NO_INTERRUPT) {
err = -ENOTSUPP;
err = -EOPNOTSUPP;
goto err_alloc;
}
}
......@@ -10252,6 +10325,9 @@ static void __init perf_event_init_all_cpus(void)
swhash = &per_cpu(swevent_htable, cpu);
mutex_init(&swhash->hlist_mutex);
INIT_LIST_HEAD(&per_cpu(active_ctx_list, cpu));
INIT_LIST_HEAD(&per_cpu(pmu_sb_events.list, cpu));
raw_spin_lock_init(&per_cpu(pmu_sb_events.lock, cpu));
}
}
......
#ifndef __ASM_ALPHA_BITSPERLONG_H
#define __ASM_ALPHA_BITSPERLONG_H
#define __BITS_PER_LONG 64
#include <asm-generic/bitsperlong.h>
#endif /* __ASM_ALPHA_BITSPERLONG_H */
/*
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#ifndef __ARM_KVM_H__
#define __ARM_KVM_H__
#include <linux/types.h>
#include <linux/psci.h>
#include <asm/ptrace.h>
#define __KVM_HAVE_GUEST_DEBUG
#define __KVM_HAVE_IRQ_LINE
#define __KVM_HAVE_READONLY_MEM
#define KVM_REG_SIZE(id) \
(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
/* Valid for svc_regs, abt_regs, und_regs, irq_regs in struct kvm_regs */
#define KVM_ARM_SVC_sp svc_regs[0]
#define KVM_ARM_SVC_lr svc_regs[1]
#define KVM_ARM_SVC_spsr svc_regs[2]
#define KVM_ARM_ABT_sp abt_regs[0]
#define KVM_ARM_ABT_lr abt_regs[1]
#define KVM_ARM_ABT_spsr abt_regs[2]
#define KVM_ARM_UND_sp und_regs[0]
#define KVM_ARM_UND_lr und_regs[1]
#define KVM_ARM_UND_spsr und_regs[2]
#define KVM_ARM_IRQ_sp irq_regs[0]
#define KVM_ARM_IRQ_lr irq_regs[1]
#define KVM_ARM_IRQ_spsr irq_regs[2]
/* Valid only for fiq_regs in struct kvm_regs */
#define KVM_ARM_FIQ_r8 fiq_regs[0]
#define KVM_ARM_FIQ_r9 fiq_regs[1]
#define KVM_ARM_FIQ_r10 fiq_regs[2]
#define KVM_ARM_FIQ_fp fiq_regs[3]
#define KVM_ARM_FIQ_ip fiq_regs[4]
#define KVM_ARM_FIQ_sp fiq_regs[5]
#define KVM_ARM_FIQ_lr fiq_regs[6]
#define KVM_ARM_FIQ_spsr fiq_regs[7]
struct kvm_regs {
struct pt_regs usr_regs; /* R0_usr - R14_usr, PC, CPSR */
unsigned long svc_regs[3]; /* SP_svc, LR_svc, SPSR_svc */
unsigned long abt_regs[3]; /* SP_abt, LR_abt, SPSR_abt */
unsigned long und_regs[3]; /* SP_und, LR_und, SPSR_und */
unsigned long irq_regs[3]; /* SP_irq, LR_irq, SPSR_irq */
unsigned long fiq_regs[8]; /* R8_fiq - R14_fiq, SPSR_fiq */
};
/* Supported Processor Types */
#define KVM_ARM_TARGET_CORTEX_A15 0
#define KVM_ARM_TARGET_CORTEX_A7 1
#define KVM_ARM_NUM_TARGETS 2
/* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
#define KVM_ARM_DEVICE_TYPE_SHIFT 0
#define KVM_ARM_DEVICE_TYPE_MASK (0xffff << KVM_ARM_DEVICE_TYPE_SHIFT)
#define KVM_ARM_DEVICE_ID_SHIFT 16
#define KVM_ARM_DEVICE_ID_MASK (0xffff << KVM_ARM_DEVICE_ID_SHIFT)
/* Supported device IDs */
#define KVM_ARM_DEVICE_VGIC_V2 0
/* Supported VGIC address types */
#define KVM_VGIC_V2_ADDR_TYPE_DIST 0
#define KVM_VGIC_V2_ADDR_TYPE_CPU 1
#define KVM_VGIC_V2_DIST_SIZE 0x1000
#define KVM_VGIC_V2_CPU_SIZE 0x2000
#define KVM_ARM_VCPU_POWER_OFF 0 /* CPU is started in OFF state */
#define KVM_ARM_VCPU_PSCI_0_2 1 /* CPU uses PSCI v0.2 */
struct kvm_vcpu_init {
__u32 target;
__u32 features[7];
};
struct kvm_sregs {
};
struct kvm_fpu {
};
struct kvm_guest_debug_arch {
};
struct kvm_debug_exit_arch {
};
struct kvm_sync_regs {
};
struct kvm_arch_memory_slot {
};
/* If you need to interpret the index values, here is the key: */
#define KVM_REG_ARM_COPROC_MASK 0x000000000FFF0000
#define KVM_REG_ARM_COPROC_SHIFT 16
#define KVM_REG_ARM_32_OPC2_MASK 0x0000000000000007
#define KVM_REG_ARM_32_OPC2_SHIFT 0
#define KVM_REG_ARM_OPC1_MASK 0x0000000000000078
#define KVM_REG_ARM_OPC1_SHIFT 3
#define KVM_REG_ARM_CRM_MASK 0x0000000000000780
#define KVM_REG_ARM_CRM_SHIFT 7
#define KVM_REG_ARM_32_CRN_MASK 0x0000000000007800
#define KVM_REG_ARM_32_CRN_SHIFT 11
#define ARM_CP15_REG_SHIFT_MASK(x,n) \
(((x) << KVM_REG_ARM_ ## n ## _SHIFT) & KVM_REG_ARM_ ## n ## _MASK)
#define __ARM_CP15_REG(op1,crn,crm,op2) \
(KVM_REG_ARM | (15 << KVM_REG_ARM_COPROC_SHIFT) | \
ARM_CP15_REG_SHIFT_MASK(op1, OPC1) | \
ARM_CP15_REG_SHIFT_MASK(crn, 32_CRN) | \
ARM_CP15_REG_SHIFT_MASK(crm, CRM) | \
ARM_CP15_REG_SHIFT_MASK(op2, 32_OPC2))
#define ARM_CP15_REG32(...) (__ARM_CP15_REG(__VA_ARGS__) | KVM_REG_SIZE_U32)
#define __ARM_CP15_REG64(op1,crm) \
(__ARM_CP15_REG(op1, 0, crm, 0) | KVM_REG_SIZE_U64)
#define ARM_CP15_REG64(...) __ARM_CP15_REG64(__VA_ARGS__)
#define KVM_REG_ARM_TIMER_CTL ARM_CP15_REG32(0, 14, 3, 1)
#define KVM_REG_ARM_TIMER_CNT ARM_CP15_REG64(1, 14)
#define KVM_REG_ARM_TIMER_CVAL ARM_CP15_REG64(3, 14)
/* Normal registers are mapped as coprocessor 16. */
#define KVM_REG_ARM_CORE (0x0010 << KVM_REG_ARM_COPROC_SHIFT)
#define KVM_REG_ARM_CORE_REG(name) (offsetof(struct kvm_regs, name) / 4)
/* Some registers need more space to represent values. */
#define KVM_REG_ARM_DEMUX (0x0011 << KVM_REG_ARM_COPROC_SHIFT)
#define KVM_REG_ARM_DEMUX_ID_MASK 0x000000000000FF00
#define KVM_REG_ARM_DEMUX_ID_SHIFT 8
#define KVM_REG_ARM_DEMUX_ID_CCSIDR (0x00 << KVM_REG_ARM_DEMUX_ID_SHIFT)
#define KVM_REG_ARM_DEMUX_VAL_MASK 0x00000000000000FF
#define KVM_REG_ARM_DEMUX_VAL_SHIFT 0
/* VFP registers: we could overload CP10 like ARM does, but that's ugly. */
#define KVM_REG_ARM_VFP (0x0012 << KVM_REG_ARM_COPROC_SHIFT)
#define KVM_REG_ARM_VFP_MASK 0x000000000000FFFF
#define KVM_REG_ARM_VFP_BASE_REG 0x0
#define KVM_REG_ARM_VFP_FPSID 0x1000
#define KVM_REG_ARM_VFP_FPSCR 0x1001
#define KVM_REG_ARM_VFP_MVFR1 0x1006
#define KVM_REG_ARM_VFP_MVFR0 0x1007
#define KVM_REG_ARM_VFP_FPEXC 0x1008
#define KVM_REG_ARM_VFP_FPINST 0x1009
#define KVM_REG_ARM_VFP_FPINST2 0x100A
/* Device Control API: ARM VGIC */
#define KVM_DEV_ARM_VGIC_GRP_ADDR 0
#define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
#define KVM_DEV_ARM_VGIC_GRP_CPU_REGS 2
#define KVM_DEV_ARM_VGIC_CPUID_SHIFT 32
#define KVM_DEV_ARM_VGIC_CPUID_MASK (0xffULL << KVM_DEV_ARM_VGIC_CPUID_SHIFT)
#define KVM_DEV_ARM_VGIC_OFFSET_SHIFT 0
#define KVM_DEV_ARM_VGIC_OFFSET_MASK (0xffffffffULL << KVM_DEV_ARM_VGIC_OFFSET_SHIFT)
#define KVM_DEV_ARM_VGIC_GRP_NR_IRQS 3
#define KVM_DEV_ARM_VGIC_GRP_CTRL 4
#define KVM_DEV_ARM_VGIC_CTRL_INIT 0
/* KVM_IRQ_LINE irq field index values */
#define KVM_ARM_IRQ_TYPE_SHIFT 24
#define KVM_ARM_IRQ_TYPE_MASK 0xff
#define KVM_ARM_IRQ_VCPU_SHIFT 16
#define KVM_ARM_IRQ_VCPU_MASK 0xff
#define KVM_ARM_IRQ_NUM_SHIFT 0
#define KVM_ARM_IRQ_NUM_MASK 0xffff
/* irq_type field */
#define KVM_ARM_IRQ_TYPE_CPU 0
#define KVM_ARM_IRQ_TYPE_SPI 1
#define KVM_ARM_IRQ_TYPE_PPI 2
/* out-of-kernel GIC cpu interrupt injection irq_number field */
#define KVM_ARM_IRQ_CPU_IRQ 0
#define KVM_ARM_IRQ_CPU_FIQ 1
/*
* This used to hold the highest supported SPI, but it is now obsolete
* and only here to provide source code level compatibility with older
* userland. The highest SPI number can be set via KVM_DEV_ARM_VGIC_GRP_NR_IRQS.
*/
#ifndef __KERNEL__
#define KVM_ARM_IRQ_GIC_MAX 127
#endif
/* One single KVM irqchip, ie. the VGIC */
#define KVM_NR_IRQCHIPS 1
/* PSCI interface */
#define KVM_PSCI_FN_BASE 0x95c1ba5e
#define KVM_PSCI_FN(n) (KVM_PSCI_FN_BASE + (n))
#define KVM_PSCI_FN_CPU_SUSPEND KVM_PSCI_FN(0)
#define KVM_PSCI_FN_CPU_OFF KVM_PSCI_FN(1)
#define KVM_PSCI_FN_CPU_ON KVM_PSCI_FN(2)
#define KVM_PSCI_FN_MIGRATE KVM_PSCI_FN(3)
#define KVM_PSCI_RET_SUCCESS PSCI_RET_SUCCESS
#define KVM_PSCI_RET_NI PSCI_RET_NOT_SUPPORTED
#define KVM_PSCI_RET_INVAL PSCI_RET_INVALID_PARAMS
#define KVM_PSCI_RET_DENIED PSCI_RET_DENIED
#endif /* __ARM_KVM_H__ */
#ifndef _ASM_ARM_PERF_REGS_H
#define _ASM_ARM_PERF_REGS_H
enum perf_event_arm_regs {
PERF_REG_ARM_R0,
PERF_REG_ARM_R1,
PERF_REG_ARM_R2,
PERF_REG_ARM_R3,
PERF_REG_ARM_R4,
PERF_REG_ARM_R5,
PERF_REG_ARM_R6,
PERF_REG_ARM_R7,
PERF_REG_ARM_R8,
PERF_REG_ARM_R9,
PERF_REG_ARM_R10,
PERF_REG_ARM_FP,
PERF_REG_ARM_IP,
PERF_REG_ARM_SP,
PERF_REG_ARM_LR,
PERF_REG_ARM_PC,
PERF_REG_ARM_MAX,
};
#endif /* _ASM_ARM_PERF_REGS_H */
/*
* Copyright (C) 2012 ARM Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __ASM_BITSPERLONG_H
#define __ASM_BITSPERLONG_H
#define __BITS_PER_LONG 64
#include <asm-generic/bitsperlong.h>
#endif /* __ASM_BITSPERLONG_H */
/*
* Copyright (C) 2012,2013 - ARM Ltd
* Author: Marc Zyngier <marc.zyngier@arm.com>
*
* Derived from arch/arm/include/uapi/asm/kvm.h:
* Copyright (C) 2012 - Virtual Open Systems and Columbia University
* Author: Christoffer Dall <c.dall@virtualopensystems.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __ARM_KVM_H__
#define __ARM_KVM_H__
#define KVM_SPSR_EL1 0
#define KVM_SPSR_SVC KVM_SPSR_EL1
#define KVM_SPSR_ABT 1
#define KVM_SPSR_UND 2
#define KVM_SPSR_IRQ 3
#define KVM_SPSR_FIQ 4
#define KVM_NR_SPSR 5
#ifndef __ASSEMBLY__
#include <linux/psci.h>
#include <linux/types.h>
#include <asm/ptrace.h>
#define __KVM_HAVE_GUEST_DEBUG
#define __KVM_HAVE_IRQ_LINE
#define __KVM_HAVE_READONLY_MEM
#define KVM_REG_SIZE(id) \
(1U << (((id) & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT))
struct kvm_regs {
struct user_pt_regs regs; /* sp = sp_el0 */
__u64 sp_el1;
__u64 elr_el1;
__u64 spsr[KVM_NR_SPSR];
struct user_fpsimd_state fp_regs;
};
/*
* Supported CPU Targets - Adding a new target type is not recommended,
* unless there are some special registers not supported by the
* generic v8 sysreg table.
*/
#define KVM_ARM_TARGET_AEM_V8 0
#define KVM_ARM_TARGET_FOUNDATION_V8 1
#define KVM_ARM_TARGET_CORTEX_A57 2
#define KVM_ARM_TARGET_XGENE_POTENZA 3
#define KVM_ARM_TARGET_CORTEX_A53 4
/* Generic ARM v8 target */
#define KVM_ARM_TARGET_GENERIC_V8 5
#define KVM_ARM_NUM_TARGETS 6
/* KVM_ARM_SET_DEVICE_ADDR ioctl id encoding */
#define KVM_ARM_DEVICE_TYPE_SHIFT 0
#define KVM_ARM_DEVICE_TYPE_MASK (0xffff << KVM_ARM_DEVICE_TYPE_SHIFT)
#define KVM_ARM_DEVICE_ID_SHIFT 16
#define KVM_ARM_DEVICE_ID_MASK (0xffff << KVM_ARM_DEVICE_ID_SHIFT)
/* Supported device IDs */
#define KVM_ARM_DEVICE_VGIC_V2 0
/* Supported VGIC address types */
#define KVM_VGIC_V2_ADDR_TYPE_DIST 0
#define KVM_VGIC_V2_ADDR_TYPE_CPU 1
#define KVM_VGIC_V2_DIST_SIZE 0x1000
#define KVM_VGIC_V2_CPU_SIZE 0x2000
/* Supported VGICv3 address types */
#define KVM_VGIC_V3_ADDR_TYPE_DIST 2
#define KVM_VGIC_V3_ADDR_TYPE_REDIST 3
#define KVM_VGIC_V3_DIST_SIZE SZ_64K
#define KVM_VGIC_V3_REDIST_SIZE (2 * SZ_64K)
#define KVM_ARM_VCPU_POWER_OFF 0 /* CPU is started in OFF state */
#define KVM_ARM_VCPU_EL1_32BIT 1 /* CPU running a 32bit VM */
#define KVM_ARM_VCPU_PSCI_0_2 2 /* CPU uses PSCI v0.2 */
#define KVM_ARM_VCPU_PMU_V3 3 /* Support guest PMUv3 */
struct kvm_vcpu_init {
__u32 target;
__u32 features[7];
};
struct kvm_sregs {
};
struct kvm_fpu {
};
/*
* See v8 ARM ARM D7.3: Debug Registers
*
* The architectural limit is 16 debug registers of each type, although
* in practice there are usually fewer (see ID_AA64DFR0_EL1).
*
* Although the control registers are architecturally defined as 32
* bits wide, we use a 64-bit structure here to keep parity with
* KVM_GET/SET_ONE_REG behaviour, which treats all system registers as
* 64-bit values. It also allows for the possibility of the
* architecture expanding the control registers without having to
* change the userspace ABI.
*/
#define KVM_ARM_MAX_DBG_REGS 16
struct kvm_guest_debug_arch {
__u64 dbg_bcr[KVM_ARM_MAX_DBG_REGS];
__u64 dbg_bvr[KVM_ARM_MAX_DBG_REGS];
__u64 dbg_wcr[KVM_ARM_MAX_DBG_REGS];
__u64 dbg_wvr[KVM_ARM_MAX_DBG_REGS];
};
struct kvm_debug_exit_arch {
__u32 hsr;
__u64 far; /* used for watchpoints */
};
/*
* Architecture specific defines for kvm_guest_debug->control
*/
#define KVM_GUESTDBG_USE_SW_BP (1 << 16)
#define KVM_GUESTDBG_USE_HW (1 << 17)
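/*
 * Illustrative sketch, not part of this header: arm hardware breakpoint 0
 * for a vcpu using the structures above. 'vcpu_fd' is assumed to be an
 * already created KVM vcpu file descriptor, and 'bcr' is a breakpoint
 * control value the caller derives from the ARMv8 debug register layout;
 * struct kvm_guest_debug, KVM_GUESTDBG_ENABLE and KVM_SET_GUEST_DEBUG
 * come from <linux/kvm.h>.
 */
#include <string.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int set_hw_breakpoint(int vcpu_fd, uint64_t addr, uint64_t bcr)
{
        struct kvm_guest_debug dbg;

        memset(&dbg, 0, sizeof(dbg));
        dbg.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_USE_HW;
        dbg.arch.dbg_bvr[0] = addr;     /* breakpoint value register: address */
        dbg.arch.dbg_bcr[0] = bcr;      /* breakpoint control register */

        return ioctl(vcpu_fd, KVM_SET_GUEST_DEBUG, &dbg);
}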
struct kvm_sync_regs {
};
struct kvm_arch_memory_slot {
};
/* If you need to interpret the index values, here is the key: */
#define KVM_REG_ARM_COPROC_MASK 0x000000000FFF0000
#define KVM_REG_ARM_COPROC_SHIFT 16
/* Normal registers are mapped as coprocessor 16. */
#define KVM_REG_ARM_CORE (0x0010 << KVM_REG_ARM_COPROC_SHIFT)
#define KVM_REG_ARM_CORE_REG(name) (offsetof(struct kvm_regs, name) / sizeof(__u32))
/* Some registers need more space to represent values. */
#define KVM_REG_ARM_DEMUX (0x0011 << KVM_REG_ARM_COPROC_SHIFT)
#define KVM_REG_ARM_DEMUX_ID_MASK 0x000000000000FF00
#define KVM_REG_ARM_DEMUX_ID_SHIFT 8
#define KVM_REG_ARM_DEMUX_ID_CCSIDR (0x00 << KVM_REG_ARM_DEMUX_ID_SHIFT)
#define KVM_REG_ARM_DEMUX_VAL_MASK 0x00000000000000FF
#define KVM_REG_ARM_DEMUX_VAL_SHIFT 0
/* AArch64 system registers */
#define KVM_REG_ARM64_SYSREG (0x0013 << KVM_REG_ARM_COPROC_SHIFT)
#define KVM_REG_ARM64_SYSREG_OP0_MASK 0x000000000000c000
#define KVM_REG_ARM64_SYSREG_OP0_SHIFT 14
#define KVM_REG_ARM64_SYSREG_OP1_MASK 0x0000000000003800
#define KVM_REG_ARM64_SYSREG_OP1_SHIFT 11
#define KVM_REG_ARM64_SYSREG_CRN_MASK 0x0000000000000780
#define KVM_REG_ARM64_SYSREG_CRN_SHIFT 7
#define KVM_REG_ARM64_SYSREG_CRM_MASK 0x0000000000000078
#define KVM_REG_ARM64_SYSREG_CRM_SHIFT 3
#define KVM_REG_ARM64_SYSREG_OP2_MASK 0x0000000000000007
#define KVM_REG_ARM64_SYSREG_OP2_SHIFT 0
#define ARM64_SYS_REG_SHIFT_MASK(x,n) \
(((x) << KVM_REG_ARM64_SYSREG_ ## n ## _SHIFT) & \
KVM_REG_ARM64_SYSREG_ ## n ## _MASK)
#define __ARM64_SYS_REG(op0,op1,crn,crm,op2) \
(KVM_REG_ARM64 | KVM_REG_ARM64_SYSREG | \
ARM64_SYS_REG_SHIFT_MASK(op0, OP0) | \
ARM64_SYS_REG_SHIFT_MASK(op1, OP1) | \
ARM64_SYS_REG_SHIFT_MASK(crn, CRN) | \
ARM64_SYS_REG_SHIFT_MASK(crm, CRM) | \
ARM64_SYS_REG_SHIFT_MASK(op2, OP2))
#define ARM64_SYS_REG(...) (__ARM64_SYS_REG(__VA_ARGS__) | KVM_REG_SIZE_U64)
#define KVM_REG_ARM_TIMER_CTL ARM64_SYS_REG(3, 3, 14, 3, 1)
#define KVM_REG_ARM_TIMER_CNT ARM64_SYS_REG(3, 3, 14, 3, 2)
#define KVM_REG_ARM_TIMER_CVAL ARM64_SYS_REG(3, 3, 14, 0, 2)
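/*
 * Illustrative sketch, not part of this header: read the guest virtual
 * timer counter with KVM_GET_ONE_REG, using the id built by ARM64_SYS_REG()
 * above. 'vcpu_fd' is assumed to be an already created KVM vcpu file
 * descriptor; struct kvm_one_reg and KVM_GET_ONE_REG come from <linux/kvm.h>.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int read_guest_timer_cnt(int vcpu_fd, uint64_t *cnt)
{
        struct kvm_one_reg reg = {
                .id   = KVM_REG_ARM_TIMER_CNT,  /* ARM64_SYS_REG(3, 3, 14, 3, 2) */
                .addr = (uintptr_t)cnt,         /* userspace buffer for the value */
        };

        /* KVM copies KVM_REG_SIZE(reg.id) bytes, i.e. 8 here, into *cnt */
        return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}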
/* Device Control API: ARM VGIC */
#define KVM_DEV_ARM_VGIC_GRP_ADDR 0
#define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
#define KVM_DEV_ARM_VGIC_GRP_CPU_REGS 2
#define KVM_DEV_ARM_VGIC_CPUID_SHIFT 32
#define KVM_DEV_ARM_VGIC_CPUID_MASK (0xffULL << KVM_DEV_ARM_VGIC_CPUID_SHIFT)
#define KVM_DEV_ARM_VGIC_OFFSET_SHIFT 0
#define KVM_DEV_ARM_VGIC_OFFSET_MASK (0xffffffffULL << KVM_DEV_ARM_VGIC_OFFSET_SHIFT)
#define KVM_DEV_ARM_VGIC_GRP_NR_IRQS 3
#define KVM_DEV_ARM_VGIC_GRP_CTRL 4
#define KVM_DEV_ARM_VGIC_CTRL_INIT 0
/* Device Control API on vcpu fd */
#define KVM_ARM_VCPU_PMU_V3_CTRL 0
#define KVM_ARM_VCPU_PMU_V3_IRQ 0
#define KVM_ARM_VCPU_PMU_V3_INIT 1
/* KVM_IRQ_LINE irq field index values */
#define KVM_ARM_IRQ_TYPE_SHIFT 24
#define KVM_ARM_IRQ_TYPE_MASK 0xff
#define KVM_ARM_IRQ_VCPU_SHIFT 16
#define KVM_ARM_IRQ_VCPU_MASK 0xff
#define KVM_ARM_IRQ_NUM_SHIFT 0
#define KVM_ARM_IRQ_NUM_MASK 0xffff
/* irq_type field */
#define KVM_ARM_IRQ_TYPE_CPU 0
#define KVM_ARM_IRQ_TYPE_SPI 1
#define KVM_ARM_IRQ_TYPE_PPI 2
/* out-of-kernel GIC cpu interrupt injection irq_number field */
#define KVM_ARM_IRQ_CPU_IRQ 0
#define KVM_ARM_IRQ_CPU_FIQ 1
/*
* This used to hold the highest supported SPI, but it is now obsolete
* and only here to provide source code level compatibility with older
* userland. The highest SPI number can be set via KVM_DEV_ARM_VGIC_GRP_NR_IRQS.
*/
#ifndef __KERNEL__
#define KVM_ARM_IRQ_GIC_MAX 127
#endif
/* One single KVM irqchip, i.e. the VGIC */
#define KVM_NR_IRQCHIPS 1
/* PSCI interface */
#define KVM_PSCI_FN_BASE 0x95c1ba5e
#define KVM_PSCI_FN(n) (KVM_PSCI_FN_BASE + (n))
#define KVM_PSCI_FN_CPU_SUSPEND KVM_PSCI_FN(0)
#define KVM_PSCI_FN_CPU_OFF KVM_PSCI_FN(1)
#define KVM_PSCI_FN_CPU_ON KVM_PSCI_FN(2)
#define KVM_PSCI_FN_MIGRATE KVM_PSCI_FN(3)
#define KVM_PSCI_RET_SUCCESS PSCI_RET_SUCCESS
#define KVM_PSCI_RET_NI PSCI_RET_NOT_SUPPORTED
#define KVM_PSCI_RET_INVAL PSCI_RET_INVALID_PARAMS
#define KVM_PSCI_RET_DENIED PSCI_RET_DENIED
#endif
#endif /* __ARM_KVM_H__ */
#ifndef _ASM_ARM64_PERF_REGS_H
#define _ASM_ARM64_PERF_REGS_H
enum perf_event_arm_regs {
PERF_REG_ARM64_X0,
PERF_REG_ARM64_X1,
PERF_REG_ARM64_X2,
PERF_REG_ARM64_X3,
PERF_REG_ARM64_X4,
PERF_REG_ARM64_X5,
PERF_REG_ARM64_X6,
PERF_REG_ARM64_X7,
PERF_REG_ARM64_X8,
PERF_REG_ARM64_X9,
PERF_REG_ARM64_X10,
PERF_REG_ARM64_X11,
PERF_REG_ARM64_X12,
PERF_REG_ARM64_X13,
PERF_REG_ARM64_X14,
PERF_REG_ARM64_X15,
PERF_REG_ARM64_X16,
PERF_REG_ARM64_X17,
PERF_REG_ARM64_X18,
PERF_REG_ARM64_X19,
PERF_REG_ARM64_X20,
PERF_REG_ARM64_X21,
PERF_REG_ARM64_X22,
PERF_REG_ARM64_X23,
PERF_REG_ARM64_X24,
PERF_REG_ARM64_X25,
PERF_REG_ARM64_X26,
PERF_REG_ARM64_X27,
PERF_REG_ARM64_X28,
PERF_REG_ARM64_X29,
PERF_REG_ARM64_LR,
PERF_REG_ARM64_SP,
PERF_REG_ARM64_PC,
PERF_REG_ARM64_MAX,
};
#endif /* _ASM_ARM64_PERF_REGS_H */
#include <asm-generic/bitsperlong.h>
#ifndef __ASM_H8300_BITS_PER_LONG
#define __ASM_H8300_BITS_PER_LONG
#include <asm-generic/bitsperlong.h>
#if !defined(__ASSEMBLY__)
/* h8300-unknown-linux requires these to be long */
#define __kernel_size_t __kernel_size_t
typedef unsigned long __kernel_size_t;
typedef long __kernel_ssize_t;
typedef long __kernel_ptrdiff_t;
#endif
#endif /* __ASM_H8300_BITS_PER_LONG */
/*
* Copyright (c) 2010-2011, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
* 02110-1301, USA.
*/
#ifndef __ASM_HEXAGON_BITSPERLONG_H
#define __ASM_HEXAGON_BITSPERLONG_H
#define __BITS_PER_LONG 32
#include <asm-generic/bitsperlong.h>
#endif
#ifndef __ASM_IA64_BITSPERLONG_H
#define __ASM_IA64_BITSPERLONG_H
#define __BITS_PER_LONG 64
#include <asm-generic/bitsperlong.h>
#endif /* __ASM_IA64_BITSPERLONG_H */
#include <asm-generic/bitsperlong.h>
#include <asm-generic/bitsperlong.h>
#ifndef __ASM_MIPS_BITSPERLONG_H
#define __ASM_MIPS_BITSPERLONG_H
#define __BITS_PER_LONG _MIPS_SZLONG
#include <asm-generic/bitsperlong.h>
#endif /* __ASM_MIPS_BITSPERLONG_H */
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (C) 2012 MIPS Technologies, Inc. All rights reserved.
* Copyright (C) 2013 Cavium, Inc.
* Authors: Sanjay Lal <sanjayl@kymasys.com>
*/
#ifndef __LINUX_KVM_MIPS_H
#define __LINUX_KVM_MIPS_H
#include <linux/types.h>
/*
* KVM MIPS specific structures and definitions.
*
* Some parts derived from the x86 version of this file.
*/
/*
* for KVM_GET_REGS and KVM_SET_REGS
*
* If Config[AT] is zero (32-bit CPU), the register contents are
* stored in the lower 32-bits of the struct kvm_regs fields and sign
* extended to 64-bits.
*/
struct kvm_regs {
/* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
__u64 gpr[32];
__u64 hi;
__u64 lo;
__u64 pc;
};
/*
* for KVM_GET_FPU and KVM_SET_FPU
*/
struct kvm_fpu {
};
/*
* For MIPS, we use KVM_SET_ONE_REG and KVM_GET_ONE_REG to access various
* registers. The id field is broken down as follows:
*
* bits[63..52] - As per linux/kvm.h
* bits[51..32] - Must be zero.
* bits[31..16] - Register set.
*
* Register set = 0: GP registers from kvm_regs (see definitions below).
*
* Register set = 1: CP0 registers.
* bits[15..8] - Must be zero.
* bits[7..3] - Register 'rd' index.
* bits[2..0] - Register 'sel' index.
*
* Register set = 2: KVM specific registers (see definitions below).
*
* Register set = 3: FPU / MSA registers (see definitions below).
*
* Other register sets may be added in the future. Each set would
* have its own identifier in bits[31..16].
*/
#define KVM_REG_MIPS_GP (KVM_REG_MIPS | 0x0000000000000000ULL)
#define KVM_REG_MIPS_CP0 (KVM_REG_MIPS | 0x0000000000010000ULL)
#define KVM_REG_MIPS_KVM (KVM_REG_MIPS | 0x0000000000020000ULL)
#define KVM_REG_MIPS_FPU (KVM_REG_MIPS | 0x0000000000030000ULL)
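/*
 * Illustrative sketch, not part of this header: compose a CP0 register id
 * from its 'rd' and 'sel' indices, following the bit layout documented
 * above (bits[7..3] = rd, bits[2..0] = sel). The macro name is made up
 * for this example.
 */
#define MIPS_CP0_REG_U64(rd, sel) \
        (KVM_REG_MIPS_CP0 | KVM_REG_SIZE_U64 | ((rd) << 3) | (sel))

/* For instance, CP0_Count is coprocessor 0 register 9, select 0, so */
/* MIPS_CP0_REG_U64(9, 0) would name it for KVM_GET/SET_ONE_REG. */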
/*
* KVM_REG_MIPS_GP - General purpose registers from kvm_regs.
*/
#define KVM_REG_MIPS_R0 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 0)
#define KVM_REG_MIPS_R1 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 1)
#define KVM_REG_MIPS_R2 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 2)
#define KVM_REG_MIPS_R3 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 3)
#define KVM_REG_MIPS_R4 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 4)
#define KVM_REG_MIPS_R5 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 5)
#define KVM_REG_MIPS_R6 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 6)
#define KVM_REG_MIPS_R7 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 7)
#define KVM_REG_MIPS_R8 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 8)
#define KVM_REG_MIPS_R9 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 9)
#define KVM_REG_MIPS_R10 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 10)
#define KVM_REG_MIPS_R11 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 11)
#define KVM_REG_MIPS_R12 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 12)
#define KVM_REG_MIPS_R13 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 13)
#define KVM_REG_MIPS_R14 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 14)
#define KVM_REG_MIPS_R15 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 15)
#define KVM_REG_MIPS_R16 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 16)
#define KVM_REG_MIPS_R17 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 17)
#define KVM_REG_MIPS_R18 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 18)
#define KVM_REG_MIPS_R19 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 19)
#define KVM_REG_MIPS_R20 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 20)
#define KVM_REG_MIPS_R21 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 21)
#define KVM_REG_MIPS_R22 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 22)
#define KVM_REG_MIPS_R23 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 23)
#define KVM_REG_MIPS_R24 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 24)
#define KVM_REG_MIPS_R25 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 25)
#define KVM_REG_MIPS_R26 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 26)
#define KVM_REG_MIPS_R27 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 27)
#define KVM_REG_MIPS_R28 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 28)
#define KVM_REG_MIPS_R29 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 29)
#define KVM_REG_MIPS_R30 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 30)
#define KVM_REG_MIPS_R31 (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 31)
#define KVM_REG_MIPS_HI (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 32)
#define KVM_REG_MIPS_LO (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 33)
#define KVM_REG_MIPS_PC (KVM_REG_MIPS_GP | KVM_REG_SIZE_U64 | 34)
/*
* KVM_REG_MIPS_KVM - KVM specific control registers.
*/
/*
* CP0_Count control
* DC: Set 0: Master disable CP0_Count and set COUNT_RESUME to now
* Set 1: Master re-enable CP0_Count with unchanged bias, handling timer
* interrupts since COUNT_RESUME
* This can be used to freeze the timer to get a consistent snapshot of
* the CP0_Count and timer interrupt pending state, while also resuming
* safely without losing time or guest timer interrupts.
* Other: Reserved, do not change.
*/
#define KVM_REG_MIPS_COUNT_CTL (KVM_REG_MIPS_KVM | KVM_REG_SIZE_U64 | 0)
#define KVM_REG_MIPS_COUNT_CTL_DC 0x00000001
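/*
 * Illustrative sketch, not part of this header: write the DC bit of the
 * COUNT_CTL pseudo-register through KVM_SET_ONE_REG, with the semantics
 * described in the comment above (only the DC bit is touched here; the
 * reserved bits are left at zero). 'vcpu_fd' is assumed to be an already
 * created KVM vcpu file descriptor.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int mips_write_count_ctl_dc(int vcpu_fd, int dc)
{
        uint64_t ctl = dc ? KVM_REG_MIPS_COUNT_CTL_DC : 0;
        struct kvm_one_reg reg = {
                .id   = KVM_REG_MIPS_COUNT_CTL,
                .addr = (uintptr_t)&ctl,
        };

        return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
}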
/*
* CP0_Count resume monotonic nanoseconds
* The monotonic nanosecond time of the last set of COUNT_CTL.DC (master
* disable). Any reads and writes of Count related registers while
* COUNT_CTL.DC=1 will appear to occur at this time. When COUNT_CTL.DC is
* cleared again (master enable) any timer interrupts since this time will be
* emulated.
* Modifications to times in the future are rejected.
*/
#define KVM_REG_MIPS_COUNT_RESUME (KVM_REG_MIPS_KVM | KVM_REG_SIZE_U64 | 1)
/*
* CP0_Count rate in Hz
* Specifies the rate of the CP0_Count timer in Hz. Modifications occur without
* discontinuities in CP0_Count.
*/
#define KVM_REG_MIPS_COUNT_HZ (KVM_REG_MIPS_KVM | KVM_REG_SIZE_U64 | 2)
/*
* KVM_REG_MIPS_FPU - Floating Point and MIPS SIMD Architecture (MSA) registers.
*
* bits[15..8] - Register subset (see definitions below).
* bits[7..5] - Must be zero.
* bits[4..0] - Register number within register subset.
*/
#define KVM_REG_MIPS_FPR (KVM_REG_MIPS_FPU | 0x0000000000000000ULL)
#define KVM_REG_MIPS_FCR (KVM_REG_MIPS_FPU | 0x0000000000000100ULL)
#define KVM_REG_MIPS_MSACR (KVM_REG_MIPS_FPU | 0x0000000000000200ULL)
/*
* KVM_REG_MIPS_FPR - Floating point / Vector registers.
*/
#define KVM_REG_MIPS_FPR_32(n) (KVM_REG_MIPS_FPR | KVM_REG_SIZE_U32 | (n))
#define KVM_REG_MIPS_FPR_64(n) (KVM_REG_MIPS_FPR | KVM_REG_SIZE_U64 | (n))
#define KVM_REG_MIPS_VEC_128(n) (KVM_REG_MIPS_FPR | KVM_REG_SIZE_U128 | (n))
/*
* KVM_REG_MIPS_FCR - Floating point control registers.
*/
#define KVM_REG_MIPS_FCR_IR (KVM_REG_MIPS_FCR | KVM_REG_SIZE_U32 | 0)
#define KVM_REG_MIPS_FCR_CSR (KVM_REG_MIPS_FCR | KVM_REG_SIZE_U32 | 31)
/*
* KVM_REG_MIPS_MSACR - MIPS SIMD Architecture (MSA) control registers.
*/
#define KVM_REG_MIPS_MSA_IR (KVM_REG_MIPS_MSACR | KVM_REG_SIZE_U32 | 0)
#define KVM_REG_MIPS_MSA_CSR (KVM_REG_MIPS_MSACR | KVM_REG_SIZE_U32 | 1)
/*
* KVM MIPS specific structures and definitions
*
*/
struct kvm_debug_exit_arch {
__u64 epc;
};
/* for KVM_SET_GUEST_DEBUG */
struct kvm_guest_debug_arch {
};
/* definition of registers in kvm_run */
struct kvm_sync_regs {
};
/* dummy definition */
struct kvm_sregs {
};
struct kvm_mips_interrupt {
/* in */
__u32 cpu;
__u32 irq;
};
#endif /* __LINUX_KVM_MIPS_H */
#include <asm-generic/bitsperlong.h>
#ifndef __ASM_PARISC_BITSPERLONG_H
#define __ASM_PARISC_BITSPERLONG_H
#if defined(__LP64__)
#define __BITS_PER_LONG 64
#define SHIFT_PER_LONG 6
#else
#define __BITS_PER_LONG 32
#define SHIFT_PER_LONG 5
#endif
#include <asm-generic/bitsperlong.h>
#endif /* __ASM_PARISC_BITSPERLONG_H */
#ifndef __ASM_POWERPC_BITSPERLONG_H
#define __ASM_POWERPC_BITSPERLONG_H
#if defined(__powerpc64__)
# define __BITS_PER_LONG 64
#else
# define __BITS_PER_LONG 32
#endif
#include <asm-generic/bitsperlong.h>
#endif /* __ASM_POWERPC_BITSPERLONG_H */
#ifndef _UAPI_ASM_POWERPC_PERF_REGS_H
#define _UAPI_ASM_POWERPC_PERF_REGS_H
enum perf_event_powerpc_regs {
PERF_REG_POWERPC_R0,
PERF_REG_POWERPC_R1,
PERF_REG_POWERPC_R2,
PERF_REG_POWERPC_R3,
PERF_REG_POWERPC_R4,
PERF_REG_POWERPC_R5,
PERF_REG_POWERPC_R6,
PERF_REG_POWERPC_R7,
PERF_REG_POWERPC_R8,
PERF_REG_POWERPC_R9,
PERF_REG_POWERPC_R10,
PERF_REG_POWERPC_R11,
PERF_REG_POWERPC_R12,
PERF_REG_POWERPC_R13,
PERF_REG_POWERPC_R14,
PERF_REG_POWERPC_R15,
PERF_REG_POWERPC_R16,
PERF_REG_POWERPC_R17,
PERF_REG_POWERPC_R18,
PERF_REG_POWERPC_R19,
PERF_REG_POWERPC_R20,
PERF_REG_POWERPC_R21,
PERF_REG_POWERPC_R22,
PERF_REG_POWERPC_R23,
PERF_REG_POWERPC_R24,
PERF_REG_POWERPC_R25,
PERF_REG_POWERPC_R26,
PERF_REG_POWERPC_R27,
PERF_REG_POWERPC_R28,
PERF_REG_POWERPC_R29,
PERF_REG_POWERPC_R30,
PERF_REG_POWERPC_R31,
PERF_REG_POWERPC_NIP,
PERF_REG_POWERPC_MSR,
PERF_REG_POWERPC_ORIG_R3,
PERF_REG_POWERPC_CTR,
PERF_REG_POWERPC_LINK,
PERF_REG_POWERPC_XER,
PERF_REG_POWERPC_CCR,
PERF_REG_POWERPC_SOFTE,
PERF_REG_POWERPC_TRAP,
PERF_REG_POWERPC_DAR,
PERF_REG_POWERPC_DSISR,
PERF_REG_POWERPC_MAX,
};
#endif /* _UAPI_ASM_POWERPC_PERF_REGS_H */
#ifndef __ASM_S390_BITSPERLONG_H
#define __ASM_S390_BITSPERLONG_H
#ifndef __s390x__
#define __BITS_PER_LONG 32
#else
#define __BITS_PER_LONG 64
#endif
#include <asm-generic/bitsperlong.h>
#endif /* __ASM_S390_BITSPERLONG_H */
#ifndef __LINUX_KVM_S390_H
#define __LINUX_KVM_S390_H
/*
* KVM s390 specific structures and definitions
*
* Copyright IBM Corp. 2008
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License (version 2 only)
* as published by the Free Software Foundation.
*
* Author(s): Carsten Otte <cotte@de.ibm.com>
* Christian Borntraeger <borntraeger@de.ibm.com>
*/
#include <linux/types.h>
#define __KVM_S390
#define __KVM_HAVE_GUEST_DEBUG
/* Device control API: s390-specific devices */
#define KVM_DEV_FLIC_GET_ALL_IRQS 1
#define KVM_DEV_FLIC_ENQUEUE 2
#define KVM_DEV_FLIC_CLEAR_IRQS 3
#define KVM_DEV_FLIC_APF_ENABLE 4
#define KVM_DEV_FLIC_APF_DISABLE_WAIT 5
#define KVM_DEV_FLIC_ADAPTER_REGISTER 6
#define KVM_DEV_FLIC_ADAPTER_MODIFY 7
#define KVM_DEV_FLIC_CLEAR_IO_IRQ 8
/*
* We can have up to 4*64k pending subchannels + 8 adapter interrupts,
* as well as up to ASYNC_PF_PER_VCPU*KVM_MAX_VCPUS pfault done interrupts.
* There are also sclp and machine checks. This gives us
* sizeof(kvm_s390_irq)*(4*65536+8+64*64+1+1) = 72 * 266250 = 19170000
* Let's round up to 8192 pages (8192 * 4096 bytes = 0x2000000).
*/
#define KVM_S390_MAX_FLOAT_IRQS 266250
#define KVM_S390_FLIC_MAX_BUFFER 0x2000000
struct kvm_s390_io_adapter {
__u32 id;
__u8 isc;
__u8 maskable;
__u8 swap;
__u8 pad;
};
#define KVM_S390_IO_ADAPTER_MASK 1
#define KVM_S390_IO_ADAPTER_MAP 2
#define KVM_S390_IO_ADAPTER_UNMAP 3
struct kvm_s390_io_adapter_req {
__u32 id;
__u8 type;
__u8 mask;
__u16 pad0;
__u64 addr;
};
/* kvm attr_group on vm fd */
#define KVM_S390_VM_MEM_CTRL 0
#define KVM_S390_VM_TOD 1
#define KVM_S390_VM_CRYPTO 2
#define KVM_S390_VM_CPU_MODEL 3
/* kvm attributes for mem_ctrl */
#define KVM_S390_VM_MEM_ENABLE_CMMA 0
#define KVM_S390_VM_MEM_CLR_CMMA 1
#define KVM_S390_VM_MEM_LIMIT_SIZE 2
#define KVM_S390_NO_MEM_LIMIT U64_MAX
/* kvm attributes for KVM_S390_VM_TOD */
#define KVM_S390_VM_TOD_LOW 0
#define KVM_S390_VM_TOD_HIGH 1
/* kvm attributes for KVM_S390_VM_CPU_MODEL */
/* processor related attributes are r/w */
#define KVM_S390_VM_CPU_PROCESSOR 0
struct kvm_s390_vm_cpu_processor {
__u64 cpuid;
__u16 ibc;
__u8 pad[6];
__u64 fac_list[256];
};
/* machine related attributes are r/o */
#define KVM_S390_VM_CPU_MACHINE 1
struct kvm_s390_vm_cpu_machine {
__u64 cpuid;
__u32 ibc;
__u8 pad[4];
__u64 fac_mask[256];
__u64 fac_list[256];
};
/* kvm attributes for crypto */
#define KVM_S390_VM_CRYPTO_ENABLE_AES_KW 0
#define KVM_S390_VM_CRYPTO_ENABLE_DEA_KW 1
#define KVM_S390_VM_CRYPTO_DISABLE_AES_KW 2
#define KVM_S390_VM_CRYPTO_DISABLE_DEA_KW 3
/* for KVM_GET_REGS and KVM_SET_REGS */
struct kvm_regs {
/* general purpose regs for s390 */
__u64 gprs[16];
};
/* for KVM_GET_SREGS and KVM_SET_SREGS */
struct kvm_sregs {
__u32 acrs[16];
__u64 crs[16];
};
/* for KVM_GET_FPU and KVM_SET_FPU */
struct kvm_fpu {
__u32 fpc;
__u64 fprs[16];
};
#define KVM_GUESTDBG_USE_HW_BP 0x00010000
#define KVM_HW_BP 1
#define KVM_HW_WP_WRITE 2
#define KVM_SINGLESTEP 4
struct kvm_debug_exit_arch {
__u64 addr;
__u8 type;
__u8 pad[7]; /* Should be set to 0 */
};
struct kvm_hw_breakpoint {
__u64 addr;
__u64 phys_addr;
__u64 len;
__u8 type;
__u8 pad[7]; /* Should be set to 0 */
};
/* for KVM_SET_GUEST_DEBUG */
struct kvm_guest_debug_arch {
__u32 nr_hw_bp;
__u32 pad; /* Should be set to 0 */
struct kvm_hw_breakpoint __user *hw_bp;
};
/* for KVM_SYNC_PFAULT and KVM_REG_S390_PFTOKEN */
#define KVM_S390_PFAULT_TOKEN_INVALID 0xffffffffffffffffULL
#define KVM_SYNC_PREFIX (1UL << 0)
#define KVM_SYNC_GPRS (1UL << 1)
#define KVM_SYNC_ACRS (1UL << 2)
#define KVM_SYNC_CRS (1UL << 3)
#define KVM_SYNC_ARCH0 (1UL << 4)
#define KVM_SYNC_PFAULT (1UL << 5)
#define KVM_SYNC_VRS (1UL << 6)
#define KVM_SYNC_RICCB (1UL << 7)
#define KVM_SYNC_FPRS (1UL << 8)
/* definition of registers in kvm_run */
struct kvm_sync_regs {
__u64 prefix; /* prefix register */
__u64 gprs[16]; /* general purpose registers */
__u32 acrs[16]; /* access registers */
__u64 crs[16]; /* control registers */
__u64 todpr; /* tod programmable register [ARCH0] */
__u64 cputm; /* cpu timer [ARCH0] */
__u64 ckc; /* clock comparator [ARCH0] */
__u64 pp; /* program parameter [ARCH0] */
__u64 gbea; /* guest breaking-event address [ARCH0] */
__u64 pft; /* pfault token [PFAULT] */
__u64 pfs; /* pfault select [PFAULT] */
__u64 pfc; /* pfault compare [PFAULT] */
union {
__u64 vrs[32][2]; /* vector registers (KVM_SYNC_VRS) */
__u64 fprs[16]; /* fp registers (KVM_SYNC_FPRS) */
};
__u8 reserved[512]; /* for future vector expansion */
__u32 fpc; /* valid on KVM_SYNC_VRS or KVM_SYNC_FPRS */
__u8 padding[52]; /* riccb needs to be 64byte aligned */
__u8 riccb[64]; /* runtime instrumentation controls block */
};
#define KVM_REG_S390_TODPR (KVM_REG_S390 | KVM_REG_SIZE_U32 | 0x1)
#define KVM_REG_S390_EPOCHDIFF (KVM_REG_S390 | KVM_REG_SIZE_U64 | 0x2)
#define KVM_REG_S390_CPU_TIMER (KVM_REG_S390 | KVM_REG_SIZE_U64 | 0x3)
#define KVM_REG_S390_CLOCK_COMP (KVM_REG_S390 | KVM_REG_SIZE_U64 | 0x4)
#define KVM_REG_S390_PFTOKEN (KVM_REG_S390 | KVM_REG_SIZE_U64 | 0x5)
#define KVM_REG_S390_PFCOMPARE (KVM_REG_S390 | KVM_REG_SIZE_U64 | 0x6)
#define KVM_REG_S390_PFSELECT (KVM_REG_S390 | KVM_REG_SIZE_U64 | 0x7)
#define KVM_REG_S390_PP (KVM_REG_S390 | KVM_REG_SIZE_U64 | 0x8)
#define KVM_REG_S390_GBEA (KVM_REG_S390 | KVM_REG_SIZE_U64 | 0x9)
#endif
/*
* Definitions for perf-kvm on s390
*
* Copyright 2014 IBM Corp.
* Author(s): Alexander Yarygin <yarygin@linux.vnet.ibm.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License (version 2 only)
* as published by the Free Software Foundation.
*/
#ifndef __LINUX_KVM_PERF_S390_H
#define __LINUX_KVM_PERF_S390_H
#include <asm/sie.h>
#define DECODE_STR_LEN 40
#define VCPU_ID "id"
#define KVM_ENTRY_TRACE "kvm:kvm_s390_sie_enter"
#define KVM_EXIT_TRACE "kvm:kvm_s390_sie_exit"
#define KVM_EXIT_REASON "icptcode"
#endif
#ifndef _UAPI_ASM_S390_SIE_H
#define _UAPI_ASM_S390_SIE_H
#define diagnose_codes \
{ 0x10, "DIAG (0x10) release pages" }, \
{ 0x44, "DIAG (0x44) time slice end" }, \
{ 0x9c, "DIAG (0x9c) time slice end directed" }, \
{ 0x204, "DIAG (0x204) logical-cpu utilization" }, \
{ 0x258, "DIAG (0x258) page-reference services" }, \
{ 0x288, "DIAG (0x288) watchdog functions" }, \
{ 0x308, "DIAG (0x308) ipl functions" }, \
{ 0x500, "DIAG (0x500) KVM virtio functions" }, \
{ 0x501, "DIAG (0x501) KVM breakpoint" }
#define sigp_order_codes \
{ 0x01, "SIGP sense" }, \
{ 0x02, "SIGP external call" }, \
{ 0x03, "SIGP emergency signal" }, \
{ 0x04, "SIGP start" }, \
{ 0x05, "SIGP stop" }, \
{ 0x06, "SIGP restart" }, \
{ 0x09, "SIGP stop and store status" }, \
{ 0x0b, "SIGP initial cpu reset" }, \
{ 0x0c, "SIGP cpu reset" }, \
{ 0x0d, "SIGP set prefix" }, \
{ 0x0e, "SIGP store status at address" }, \
{ 0x12, "SIGP set architecture" }, \
{ 0x13, "SIGP conditional emergency signal" }, \
{ 0x15, "SIGP sense running" }, \
{ 0x16, "SIGP set multithreading"}, \
{ 0x17, "SIGP store additional status ait address"}
#define icpt_prog_codes \
{ 0x0001, "Prog Operation" }, \
{ 0x0002, "Prog Privileged Operation" }, \
{ 0x0003, "Prog Execute" }, \
{ 0x0004, "Prog Protection" }, \
{ 0x0005, "Prog Addressing" }, \
{ 0x0006, "Prog Specification" }, \
{ 0x0007, "Prog Data" }, \
{ 0x0008, "Prog Fixedpoint overflow" }, \
{ 0x0009, "Prog Fixedpoint divide" }, \
{ 0x000A, "Prog Decimal overflow" }, \
{ 0x000B, "Prog Decimal divide" }, \
{ 0x000C, "Prog HFP exponent overflow" }, \
{ 0x000D, "Prog HFP exponent underflow" }, \
{ 0x000E, "Prog HFP significance" }, \
{ 0x000F, "Prog HFP divide" }, \
{ 0x0010, "Prog Segment translation" }, \
{ 0x0011, "Prog Page translation" }, \
{ 0x0012, "Prog Translation specification" }, \
{ 0x0013, "Prog Special operation" }, \
{ 0x0015, "Prog Operand" }, \
{ 0x0016, "Prog Trace table" }, \
{ 0x0017, "Prog ASNtranslation specification" }, \
{ 0x001C, "Prog Spaceswitch event" }, \
{ 0x001D, "Prog HFP square root" }, \
{ 0x001F, "Prog PCtranslation specification" }, \
{ 0x0020, "Prog AFX translation" }, \
{ 0x0021, "Prog ASX translation" }, \
{ 0x0022, "Prog LX translation" }, \
{ 0x0023, "Prog EX translation" }, \
{ 0x0024, "Prog Primary authority" }, \
{ 0x0025, "Prog Secondary authority" }, \
{ 0x0026, "Prog LFXtranslation exception" }, \
{ 0x0027, "Prog LSXtranslation exception" }, \
{ 0x0028, "Prog ALET specification" }, \
{ 0x0029, "Prog ALEN translation" }, \
{ 0x002A, "Prog ALE sequence" }, \
{ 0x002B, "Prog ASTE validity" }, \
{ 0x002C, "Prog ASTE sequence" }, \
{ 0x002D, "Prog Extended authority" }, \
{ 0x002E, "Prog LSTE sequence" }, \
{ 0x002F, "Prog ASTE instance" }, \
{ 0x0030, "Prog Stack full" }, \
{ 0x0031, "Prog Stack empty" }, \
{ 0x0032, "Prog Stack specification" }, \
{ 0x0033, "Prog Stack type" }, \
{ 0x0034, "Prog Stack operation" }, \
{ 0x0039, "Prog Region first translation" }, \
{ 0x003A, "Prog Region second translation" }, \
{ 0x003B, "Prog Region third translation" }, \
{ 0x0040, "Prog Monitor event" }, \
{ 0x0080, "Prog PER event" }, \
{ 0x0119, "Prog Crypto operation" }
#define exit_code_ipa0(ipa0, opcode, mnemonic) \
{ (ipa0 << 8 | opcode), #ipa0 " " mnemonic }
#define exit_code(opcode, mnemonic) \
{ opcode, mnemonic }
#define icpt_insn_codes \
exit_code_ipa0(0x01, 0x01, "PR"), \
exit_code_ipa0(0x01, 0x04, "PTFF"), \
exit_code_ipa0(0x01, 0x07, "SCKPF"), \
exit_code_ipa0(0xAA, 0x00, "RINEXT"), \
exit_code_ipa0(0xAA, 0x01, "RION"), \
exit_code_ipa0(0xAA, 0x02, "TRIC"), \
exit_code_ipa0(0xAA, 0x03, "RIOFF"), \
exit_code_ipa0(0xAA, 0x04, "RIEMIT"), \
exit_code_ipa0(0xB2, 0x02, "STIDP"), \
exit_code_ipa0(0xB2, 0x04, "SCK"), \
exit_code_ipa0(0xB2, 0x05, "STCK"), \
exit_code_ipa0(0xB2, 0x06, "SCKC"), \
exit_code_ipa0(0xB2, 0x07, "STCKC"), \
exit_code_ipa0(0xB2, 0x08, "SPT"), \
exit_code_ipa0(0xB2, 0x09, "STPT"), \
exit_code_ipa0(0xB2, 0x0d, "PTLB"), \
exit_code_ipa0(0xB2, 0x10, "SPX"), \
exit_code_ipa0(0xB2, 0x11, "STPX"), \
exit_code_ipa0(0xB2, 0x12, "STAP"), \
exit_code_ipa0(0xB2, 0x14, "SIE"), \
exit_code_ipa0(0xB2, 0x16, "SETR"), \
exit_code_ipa0(0xB2, 0x17, "STETR"), \
exit_code_ipa0(0xB2, 0x18, "PC"), \
exit_code_ipa0(0xB2, 0x20, "SERVC"), \
exit_code_ipa0(0xB2, 0x21, "IPTE"), \
exit_code_ipa0(0xB2, 0x28, "PT"), \
exit_code_ipa0(0xB2, 0x29, "ISKE"), \
exit_code_ipa0(0xB2, 0x2a, "RRBE"), \
exit_code_ipa0(0xB2, 0x2b, "SSKE"), \
exit_code_ipa0(0xB2, 0x2c, "TB"), \
exit_code_ipa0(0xB2, 0x2e, "PGIN"), \
exit_code_ipa0(0xB2, 0x2f, "PGOUT"), \
exit_code_ipa0(0xB2, 0x30, "CSCH"), \
exit_code_ipa0(0xB2, 0x31, "HSCH"), \
exit_code_ipa0(0xB2, 0x32, "MSCH"), \
exit_code_ipa0(0xB2, 0x33, "SSCH"), \
exit_code_ipa0(0xB2, 0x34, "STSCH"), \
exit_code_ipa0(0xB2, 0x35, "TSCH"), \
exit_code_ipa0(0xB2, 0x36, "TPI"), \
exit_code_ipa0(0xB2, 0x37, "SAL"), \
exit_code_ipa0(0xB2, 0x38, "RSCH"), \
exit_code_ipa0(0xB2, 0x39, "STCRW"), \
exit_code_ipa0(0xB2, 0x3a, "STCPS"), \
exit_code_ipa0(0xB2, 0x3b, "RCHP"), \
exit_code_ipa0(0xB2, 0x3c, "SCHM"), \
exit_code_ipa0(0xB2, 0x40, "BAKR"), \
exit_code_ipa0(0xB2, 0x48, "PALB"), \
exit_code_ipa0(0xB2, 0x4c, "TAR"), \
exit_code_ipa0(0xB2, 0x50, "CSP"), \
exit_code_ipa0(0xB2, 0x54, "MVPG"), \
exit_code_ipa0(0xB2, 0x58, "BSG"), \
exit_code_ipa0(0xB2, 0x5a, "BSA"), \
exit_code_ipa0(0xB2, 0x5f, "CHSC"), \
exit_code_ipa0(0xB2, 0x74, "SIGA"), \
exit_code_ipa0(0xB2, 0x76, "XSCH"), \
exit_code_ipa0(0xB2, 0x78, "STCKE"), \
exit_code_ipa0(0xB2, 0x7c, "STCKF"), \
exit_code_ipa0(0xB2, 0x7d, "STSI"), \
exit_code_ipa0(0xB2, 0xb0, "STFLE"), \
exit_code_ipa0(0xB2, 0xb1, "STFL"), \
exit_code_ipa0(0xB2, 0xb2, "LPSWE"), \
exit_code_ipa0(0xB2, 0xf8, "TEND"), \
exit_code_ipa0(0xB2, 0xfc, "TABORT"), \
exit_code_ipa0(0xB9, 0x1e, "KMAC"), \
exit_code_ipa0(0xB9, 0x28, "PCKMO"), \
exit_code_ipa0(0xB9, 0x2a, "KMF"), \
exit_code_ipa0(0xB9, 0x2b, "KMO"), \
exit_code_ipa0(0xB9, 0x2d, "KMCTR"), \
exit_code_ipa0(0xB9, 0x2e, "KM"), \
exit_code_ipa0(0xB9, 0x2f, "KMC"), \
exit_code_ipa0(0xB9, 0x3e, "KIMD"), \
exit_code_ipa0(0xB9, 0x3f, "KLMD"), \
exit_code_ipa0(0xB9, 0x8a, "CSPG"), \
exit_code_ipa0(0xB9, 0x8d, "EPSW"), \
exit_code_ipa0(0xB9, 0x8e, "IDTE"), \
exit_code_ipa0(0xB9, 0x8f, "CRDTE"), \
exit_code_ipa0(0xB9, 0x9c, "EQBS"), \
exit_code_ipa0(0xB9, 0xa2, "PTF"), \
exit_code_ipa0(0xB9, 0xab, "ESSA"), \
exit_code_ipa0(0xB9, 0xae, "RRBM"), \
exit_code_ipa0(0xB9, 0xaf, "PFMF"), \
exit_code_ipa0(0xE3, 0x03, "LRAG"), \
exit_code_ipa0(0xE3, 0x13, "LRAY"), \
exit_code_ipa0(0xE3, 0x25, "NTSTG"), \
exit_code_ipa0(0xE5, 0x00, "LASP"), \
exit_code_ipa0(0xE5, 0x01, "TPROT"), \
exit_code_ipa0(0xE5, 0x60, "TBEGIN"), \
exit_code_ipa0(0xE5, 0x61, "TBEGINC"), \
exit_code_ipa0(0xEB, 0x25, "STCTG"), \
exit_code_ipa0(0xEB, 0x2f, "LCTLG"), \
exit_code_ipa0(0xEB, 0x60, "LRIC"), \
exit_code_ipa0(0xEB, 0x61, "STRIC"), \
exit_code_ipa0(0xEB, 0x62, "MRIC"), \
exit_code_ipa0(0xEB, 0x8a, "SQBS"), \
exit_code_ipa0(0xC8, 0x01, "ECTG"), \
exit_code(0x0a, "SVC"), \
exit_code(0x80, "SSM"), \
exit_code(0x82, "LPSW"), \
exit_code(0x83, "DIAG"), \
exit_code(0xae, "SIGP"), \
exit_code(0xac, "STNSM"), \
exit_code(0xad, "STOSM"), \
exit_code(0xb1, "LRA"), \
exit_code(0xb6, "STCTL"), \
exit_code(0xb7, "LCTL"), \
exit_code(0xee, "PLO")
#define sie_intercept_code \
{ 0x00, "Host interruption" }, \
{ 0x04, "Instruction" }, \
{ 0x08, "Program interruption" }, \
{ 0x0c, "Instruction and program interruption" }, \
{ 0x10, "External request" }, \
{ 0x14, "External interruption" }, \
{ 0x18, "I/O request" }, \
{ 0x1c, "Wait state" }, \
{ 0x20, "Validity" }, \
{ 0x28, "Stop request" }, \
{ 0x2c, "Operation exception" }, \
{ 0x38, "Partial-execution" }, \
{ 0x3c, "I/O interruption" }, \
{ 0x40, "I/O instruction" }, \
{ 0x48, "Timing subset" }
/*
* This is a simple decoder for intercepted instructions.
*
* It is exposed as a userspace interface and can be used in places
* that do not allow general decoder functions to be used,
* such as trace event declarations.
*
* Some userspace tools may want to parse this code
* and would be confused by switch(), if() and other statements,
* but they can understand the conditional operator.
*/
#define INSN_DECODE_IPA0(ipa0, insn, rshift, mask) \
(insn >> 56) == (ipa0) ? \
((ipa0 << 8) | ((insn >> rshift) & mask)) :
#define INSN_DECODE(insn) (insn >> 56)
/*
* The macro icpt_insn_decoder() takes an intercepted instruction
* and returns a key, which can be used to find a mnemonic name
* of the instruction in the icpt_insn_codes table.
*/
#define icpt_insn_decoder(insn) ( \
INSN_DECODE_IPA0(0x01, insn, 48, 0xff) \
INSN_DECODE_IPA0(0xaa, insn, 48, 0x0f) \
INSN_DECODE_IPA0(0xb2, insn, 48, 0xff) \
INSN_DECODE_IPA0(0xb9, insn, 48, 0xff) \
INSN_DECODE_IPA0(0xe3, insn, 48, 0xff) \
INSN_DECODE_IPA0(0xe5, insn, 48, 0xff) \
INSN_DECODE_IPA0(0xeb, insn, 16, 0xff) \
INSN_DECODE_IPA0(0xc8, insn, 48, 0x0f) \
INSN_DECODE(insn))
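/*
 * Illustrative sketch, not part of this header: pair the decoder with the
 * icpt_insn_codes table to resolve an intercepted instruction to its
 * mnemonic. The wrapper struct and lookup loop are assumptions made for
 * this example; the macros above are shaped as initializer lists for
 * symbolic-name tables.
 */
#include <stdint.h>
#include <stddef.h>

static const struct { uint64_t code; const char *name; } insn_codes[] = {
        icpt_insn_codes
};

static const char *icpt_insn_name(uint64_t insn)
{
        uint64_t key = icpt_insn_decoder(insn);
        size_t i;

        for (i = 0; i < sizeof(insn_codes) / sizeof(insn_codes[0]); i++)
                if (insn_codes[i].code == key)
                        return insn_codes[i].name;
        return "unknown";
}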
#endif /* _UAPI_ASM_S390_SIE_H */
#ifndef _ASM_SCORE_BITSPERLONG_H
#define _ASM_SCORE_BITSPERLONG_H
#include <asm-generic/bitsperlong.h>
#endif /* _ASM_SCORE_BITSPERLONG_H */
#ifndef __ASM_ALPHA_BITSPERLONG_H
#define __ASM_ALPHA_BITSPERLONG_H
#if defined(__sparc__) && defined(__arch64__)
#define __BITS_PER_LONG 64
#else
#define __BITS_PER_LONG 32
#endif
#include <asm-generic/bitsperlong.h>
#endif /* __ASM_ALPHA_BITSPERLONG_H */
/*
* Copyright 2010 Tilera Corporation. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation, version 2.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
* NON INFRINGEMENT. See the GNU General Public License for
* more details.
*/
#ifndef _ASM_TILE_BITSPERLONG_H
#define _ASM_TILE_BITSPERLONG_H
#ifdef __LP64__
# define __BITS_PER_LONG 64
#else
# define __BITS_PER_LONG 32
#endif
#include <asm-generic/bitsperlong.h>
#endif /* _ASM_TILE_BITSPERLONG_H */
#ifndef _ASM_X86_DISABLED_FEATURES_H
#define _ASM_X86_DISABLED_FEATURES_H
/* These features, although they might be available in a CPU,
* will not be used because the compile options to support
* them are not present.
*
* This code allows them to be checked and disabled at
* compile time without an explicit #ifdef. Use
* cpu_feature_enabled().
*/
#ifdef CONFIG_X86_INTEL_MPX
# define DISABLE_MPX 0
#else
# define DISABLE_MPX (1<<(X86_FEATURE_MPX & 31))
#endif
#ifdef CONFIG_X86_64
# define DISABLE_VME (1<<(X86_FEATURE_VME & 31))
# define DISABLE_K6_MTRR (1<<(X86_FEATURE_K6_MTRR & 31))
# define DISABLE_CYRIX_ARR (1<<(X86_FEATURE_CYRIX_ARR & 31))
# define DISABLE_CENTAUR_MCR (1<<(X86_FEATURE_CENTAUR_MCR & 31))
#else
# define DISABLE_VME 0
# define DISABLE_K6_MTRR 0
# define DISABLE_CYRIX_ARR 0
# define DISABLE_CENTAUR_MCR 0
#endif /* CONFIG_X86_64 */
#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
# define DISABLE_PKU 0
# define DISABLE_OSPKE 0
#else
# define DISABLE_PKU (1<<(X86_FEATURE_PKU & 31))
# define DISABLE_OSPKE (1<<(X86_FEATURE_OSPKE & 31))
#endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
/*
* Make sure to add features to the correct mask
*/
#define DISABLED_MASK0 (DISABLE_VME)
#define DISABLED_MASK1 0
#define DISABLED_MASK2 0
#define DISABLED_MASK3 (DISABLE_CYRIX_ARR|DISABLE_CENTAUR_MCR|DISABLE_K6_MTRR)
#define DISABLED_MASK4 0
#define DISABLED_MASK5 0
#define DISABLED_MASK6 0
#define DISABLED_MASK7 0
#define DISABLED_MASK8 0
#define DISABLED_MASK9 (DISABLE_MPX)
#define DISABLED_MASK10 0
#define DISABLED_MASK11 0
#define DISABLED_MASK12 0
#define DISABLED_MASK13 0
#define DISABLED_MASK14 0
#define DISABLED_MASK15 0
#define DISABLED_MASK16 (DISABLE_PKU|DISABLE_OSPKE)
#endif /* _ASM_X86_DISABLED_FEATURES_H */
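/*
 * Illustrative sketch, not part of this header: X86_FEATURE_* numbers are
 * "32 * word + bit", which is why each define above masks with '& 31' and
 * why there is one DISABLED_MASKn per 32-bit word. The helper below mimics
 * the membership test; its name and types are assumptions for this example.
 */
#include <stdbool.h>
#include <stdint.h>

static bool x86_feature_disabled(unsigned int feature,
                                 const uint32_t *disabled_masks)
{
        /* word index selects DISABLED_MASKn, the low 5 bits select the bit */
        return disabled_masks[feature / 32] & (1u << (feature & 31));
}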
#ifndef _ASM_X86_REQUIRED_FEATURES_H
#define _ASM_X86_REQUIRED_FEATURES_H
/* Define the minimum CPUID feature set for the kernel. These bits are
checked really early to actually display a visible error message before
the kernel dies. Make sure to assign features to the proper mask!
Some requirements that are not expressed in CPUID are also covered by
CONFIG_X86_MINIMUM_CPU_FAMILY, which is checked as well.
The real information is in arch/x86/Kconfig.cpu; this just converts
the CONFIGs into a bitmask */
#ifndef CONFIG_MATH_EMULATION
# define NEED_FPU (1<<(X86_FEATURE_FPU & 31))
#else
# define NEED_FPU 0
#endif
#if defined(CONFIG_X86_PAE) || defined(CONFIG_X86_64)
# define NEED_PAE (1<<(X86_FEATURE_PAE & 31))
#else
# define NEED_PAE 0
#endif
#ifdef CONFIG_X86_CMPXCHG64
# define NEED_CX8 (1<<(X86_FEATURE_CX8 & 31))
#else
# define NEED_CX8 0
#endif
#if defined(CONFIG_X86_CMOV) || defined(CONFIG_X86_64)
# define NEED_CMOV (1<<(X86_FEATURE_CMOV & 31))
#else
# define NEED_CMOV 0
#endif
#ifdef CONFIG_X86_USE_3DNOW
# define NEED_3DNOW (1<<(X86_FEATURE_3DNOW & 31))
#else
# define NEED_3DNOW 0
#endif
#if defined(CONFIG_X86_P6_NOP) || defined(CONFIG_X86_64)
# define NEED_NOPL (1<<(X86_FEATURE_NOPL & 31))
#else
# define NEED_NOPL 0
#endif
#ifdef CONFIG_MATOM
# define NEED_MOVBE (1<<(X86_FEATURE_MOVBE & 31))
#else
# define NEED_MOVBE 0
#endif
#ifdef CONFIG_X86_64
#ifdef CONFIG_PARAVIRT
/* Paravirtualized systems may not have PSE or PGE available */
#define NEED_PSE 0
#define NEED_PGE 0
#else
#define NEED_PSE (1<<(X86_FEATURE_PSE & 31))
#define NEED_PGE (1<<(X86_FEATURE_PGE & 31))
#endif
#define NEED_MSR (1<<(X86_FEATURE_MSR & 31))
#define NEED_FXSR (1<<(X86_FEATURE_FXSR & 31))
#define NEED_XMM (1<<(X86_FEATURE_XMM & 31))
#define NEED_XMM2 (1<<(X86_FEATURE_XMM2 & 31))
#define NEED_LM (1<<(X86_FEATURE_LM & 31))
#else
#define NEED_PSE 0
#define NEED_MSR 0
#define NEED_PGE 0
#define NEED_FXSR 0
#define NEED_XMM 0
#define NEED_XMM2 0
#define NEED_LM 0
#endif
#define REQUIRED_MASK0 (NEED_FPU|NEED_PSE|NEED_MSR|NEED_PAE|\
NEED_CX8|NEED_PGE|NEED_FXSR|NEED_CMOV|\
NEED_XMM|NEED_XMM2)
#define SSE_MASK (NEED_XMM|NEED_XMM2)
#define REQUIRED_MASK1 (NEED_LM|NEED_3DNOW)
#define REQUIRED_MASK2 0
#define REQUIRED_MASK3 (NEED_NOPL)
#define REQUIRED_MASK4 (NEED_MOVBE)
#define REQUIRED_MASK5 0
#define REQUIRED_MASK6 0
#define REQUIRED_MASK7 0
#define REQUIRED_MASK8 0
#define REQUIRED_MASK9 0
#define REQUIRED_MASK10 0
#define REQUIRED_MASK11 0
#define REQUIRED_MASK12 0
#define REQUIRED_MASK13 0
#define REQUIRED_MASK14 0
#define REQUIRED_MASK15 0
#define REQUIRED_MASK16 0
#endif /* _ASM_X86_REQUIRED_FEATURES_H */
#ifndef __NR_perf_event_open
# define __NR_perf_event_open 336
#endif
#ifndef __NR_futex
# define __NR_futex 240
#endif
#ifndef __NR_gettid
# define __NR_gettid 224
#endif
#ifndef __NR_getcpu
# define __NR_getcpu 318
#endif
#ifndef __NR_perf_event_open
# define __NR_perf_event_open 298
#endif
#ifndef __NR_futex
# define __NR_futex 202
#endif
#ifndef __NR_gettid
# define __NR_gettid 186
#endif
#ifndef __NR_getcpu
# define __NR_getcpu 309
#endif
#ifndef __ASM_X86_BITSPERLONG_H
#define __ASM_X86_BITSPERLONG_H
#if defined(__x86_64__) && !defined(__ILP32__)
# define __BITS_PER_LONG 64
#else
# define __BITS_PER_LONG 32
#endif
#include <asm-generic/bitsperlong.h>
#endif /* __ASM_X86_BITSPERLONG_H */
#ifndef _ASM_X86_KVM_H
#define _ASM_X86_KVM_H
/*
* KVM x86 specific structures and definitions
*
*/
#include <linux/types.h>
#include <linux/ioctl.h>
#define DE_VECTOR 0
#define DB_VECTOR 1
#define BP_VECTOR 3
#define OF_VECTOR 4
#define BR_VECTOR 5
#define UD_VECTOR 6
#define NM_VECTOR 7
#define DF_VECTOR 8
#define TS_VECTOR 10
#define NP_VECTOR 11
#define SS_VECTOR 12
#define GP_VECTOR 13
#define PF_VECTOR 14
#define MF_VECTOR 16
#define AC_VECTOR 17
#define MC_VECTOR 18
#define XM_VECTOR 19
#define VE_VECTOR 20
/* Select x86 specific features in <linux/kvm.h> */
#define __KVM_HAVE_PIT
#define __KVM_HAVE_IOAPIC
#define __KVM_HAVE_IRQ_LINE
#define __KVM_HAVE_MSI
#define __KVM_HAVE_USER_NMI
#define __KVM_HAVE_GUEST_DEBUG
#define __KVM_HAVE_MSIX
#define __KVM_HAVE_MCE
#define __KVM_HAVE_PIT_STATE2
#define __KVM_HAVE_XEN_HVM
#define __KVM_HAVE_VCPU_EVENTS
#define __KVM_HAVE_DEBUGREGS
#define __KVM_HAVE_XSAVE
#define __KVM_HAVE_XCRS
#define __KVM_HAVE_READONLY_MEM
/* Architectural interrupt line count. */
#define KVM_NR_INTERRUPTS 256
struct kvm_memory_alias {
__u32 slot; /* this has a different namespace than memory slots */
__u32 flags;
__u64 guest_phys_addr;
__u64 memory_size;
__u64 target_phys_addr;
};
/* for KVM_GET_IRQCHIP and KVM_SET_IRQCHIP */
struct kvm_pic_state {
__u8 last_irr; /* edge detection */
__u8 irr; /* interrupt request register */
__u8 imr; /* interrupt mask register */
__u8 isr; /* interrupt service register */
__u8 priority_add; /* highest irq priority */
__u8 irq_base;
__u8 read_reg_select;
__u8 poll;
__u8 special_mask;
__u8 init_state;
__u8 auto_eoi;
__u8 rotate_on_auto_eoi;
__u8 special_fully_nested_mode;
__u8 init4; /* true if 4 byte init */
__u8 elcr; /* PIIX edge/trigger selection */
__u8 elcr_mask;
};
#define KVM_IOAPIC_NUM_PINS 24
struct kvm_ioapic_state {
__u64 base_address;
__u32 ioregsel;
__u32 id;
__u32 irr;
__u32 pad;
union {
__u64 bits;
struct {
__u8 vector;
__u8 delivery_mode:3;
__u8 dest_mode:1;
__u8 delivery_status:1;
__u8 polarity:1;
__u8 remote_irr:1;
__u8 trig_mode:1;
__u8 mask:1;
__u8 reserve:7;
__u8 reserved[4];
__u8 dest_id;
} fields;
} redirtbl[KVM_IOAPIC_NUM_PINS];
};
#define KVM_IRQCHIP_PIC_MASTER 0
#define KVM_IRQCHIP_PIC_SLAVE 1
#define KVM_IRQCHIP_IOAPIC 2
#define KVM_NR_IRQCHIPS 3
#define KVM_RUN_X86_SMM (1 << 0)
/* for KVM_GET_REGS and KVM_SET_REGS */
struct kvm_regs {
/* out (KVM_GET_REGS) / in (KVM_SET_REGS) */
__u64 rax, rbx, rcx, rdx;
__u64 rsi, rdi, rsp, rbp;
__u64 r8, r9, r10, r11;
__u64 r12, r13, r14, r15;
__u64 rip, rflags;
};
/* for KVM_GET_LAPIC and KVM_SET_LAPIC */
#define KVM_APIC_REG_SIZE 0x400
struct kvm_lapic_state {
char regs[KVM_APIC_REG_SIZE];
};
struct kvm_segment {
__u64 base;
__u32 limit;
__u16 selector;
__u8 type;
__u8 present, dpl, db, s, l, g, avl;
__u8 unusable;
__u8 padding;
};
struct kvm_dtable {
__u64 base;
__u16 limit;
__u16 padding[3];
};
/* for KVM_GET_SREGS and KVM_SET_SREGS */
struct kvm_sregs {
/* out (KVM_GET_SREGS) / in (KVM_SET_SREGS) */
struct kvm_segment cs, ds, es, fs, gs, ss;
struct kvm_segment tr, ldt;
struct kvm_dtable gdt, idt;
__u64 cr0, cr2, cr3, cr4, cr8;
__u64 efer;
__u64 apic_base;
__u64 interrupt_bitmap[(KVM_NR_INTERRUPTS + 63) / 64];
};
/* for KVM_GET_FPU and KVM_SET_FPU */
struct kvm_fpu {
__u8 fpr[8][16];
__u16 fcw;
__u16 fsw;
__u8 ftwx; /* in fxsave format */
__u8 pad1;
__u16 last_opcode;
__u64 last_ip;
__u64 last_dp;
__u8 xmm[16][16];
__u32 mxcsr;
__u32 pad2;
};
struct kvm_msr_entry {
__u32 index;
__u32 reserved;
__u64 data;
};
/* for KVM_GET_MSRS and KVM_SET_MSRS */
struct kvm_msrs {
__u32 nmsrs; /* number of msrs in entries */
__u32 pad;
struct kvm_msr_entry entries[0];
};
/* for KVM_GET_MSR_INDEX_LIST */
struct kvm_msr_list {
__u32 nmsrs; /* number of msrs in entries */
__u32 indices[0];
};
struct kvm_cpuid_entry {
__u32 function;
__u32 eax;
__u32 ebx;
__u32 ecx;
__u32 edx;
__u32 padding;
};
/* for KVM_SET_CPUID */
struct kvm_cpuid {
__u32 nent;
__u32 padding;
struct kvm_cpuid_entry entries[0];
};
struct kvm_cpuid_entry2 {
__u32 function;
__u32 index;
__u32 flags;
__u32 eax;
__u32 ebx;
__u32 ecx;
__u32 edx;
__u32 padding[3];
};
#define KVM_CPUID_FLAG_SIGNIFCANT_INDEX (1 << 0)
#define KVM_CPUID_FLAG_STATEFUL_FUNC (1 << 1)
#define KVM_CPUID_FLAG_STATE_READ_NEXT (1 << 2)
/* for KVM_SET_CPUID2 */
struct kvm_cpuid2 {
__u32 nent;
__u32 padding;
struct kvm_cpuid_entry2 entries[0];
};
/* for KVM_GET_PIT and KVM_SET_PIT */
struct kvm_pit_channel_state {
__u32 count; /* can be 65536 */
__u16 latched_count;
__u8 count_latched;
__u8 status_latched;
__u8 status;
__u8 read_state;
__u8 write_state;
__u8 write_latch;
__u8 rw_mode;
__u8 mode;
__u8 bcd;
__u8 gate;
__s64 count_load_time;
};
struct kvm_debug_exit_arch {
__u32 exception;
__u32 pad;
__u64 pc;
__u64 dr6;
__u64 dr7;
};
#define KVM_GUESTDBG_USE_SW_BP 0x00010000
#define KVM_GUESTDBG_USE_HW_BP 0x00020000
#define KVM_GUESTDBG_INJECT_DB 0x00040000
#define KVM_GUESTDBG_INJECT_BP 0x00080000
/* for KVM_SET_GUEST_DEBUG */
struct kvm_guest_debug_arch {
__u64 debugreg[8];
};
struct kvm_pit_state {
struct kvm_pit_channel_state channels[3];
};
#define KVM_PIT_FLAGS_HPET_LEGACY 0x00000001
struct kvm_pit_state2 {
struct kvm_pit_channel_state channels[3];
__u32 flags;
__u32 reserved[9];
};
struct kvm_reinject_control {
__u8 pit_reinject;
__u8 reserved[31];
};
/* When set in flags, include corresponding fields on KVM_SET_VCPU_EVENTS */
#define KVM_VCPUEVENT_VALID_NMI_PENDING 0x00000001
#define KVM_VCPUEVENT_VALID_SIPI_VECTOR 0x00000002
#define KVM_VCPUEVENT_VALID_SHADOW 0x00000004
#define KVM_VCPUEVENT_VALID_SMM 0x00000008
/* Interrupt shadow states */
#define KVM_X86_SHADOW_INT_MOV_SS 0x01
#define KVM_X86_SHADOW_INT_STI 0x02
/* for KVM_GET/SET_VCPU_EVENTS */
struct kvm_vcpu_events {
struct {
__u8 injected;
__u8 nr;
__u8 has_error_code;
__u8 pad;
__u32 error_code;
} exception;
struct {
__u8 injected;
__u8 nr;
__u8 soft;
__u8 shadow;
} interrupt;
struct {
__u8 injected;
__u8 pending;
__u8 masked;
__u8 pad;
} nmi;
__u32 sipi_vector;
__u32 flags;
struct {
__u8 smm;
__u8 pending;
__u8 smm_inside_nmi;
__u8 latched_init;
} smi;
__u32 reserved[9];
};
/* for KVM_GET/SET_DEBUGREGS */
struct kvm_debugregs {
__u64 db[4];
__u64 dr6;
__u64 dr7;
__u64 flags;
__u64 reserved[9];
};
/* for KVM_CAP_XSAVE */
struct kvm_xsave {
__u32 region[1024];
};
#define KVM_MAX_XCRS 16
struct kvm_xcr {
__u32 xcr;
__u32 reserved;
__u64 value;
};
struct kvm_xcrs {
__u32 nr_xcrs;
__u32 flags;
struct kvm_xcr xcrs[KVM_MAX_XCRS];
__u64 padding[16];
};
/* definition of registers in kvm_run */
struct kvm_sync_regs {
};
#define KVM_X86_QUIRK_LINT0_REENABLED (1 << 0)
#define KVM_X86_QUIRK_CD_NW_CLEARED (1 << 1)
#endif /* _ASM_X86_KVM_H */
#ifndef _ASM_X86_KVM_PERF_H
#define _ASM_X86_KVM_PERF_H
#include <asm/svm.h>
#include <asm/vmx.h>
#include <asm/kvm.h>
#define DECODE_STR_LEN 20
#define VCPU_ID "vcpu_id"
#define KVM_ENTRY_TRACE "kvm:kvm_entry"
#define KVM_EXIT_TRACE "kvm:kvm_exit"
#define KVM_EXIT_REASON "exit_reason"
#endif /* _ASM_X86_KVM_PERF_H */
#ifndef _ASM_X86_PERF_REGS_H
#define _ASM_X86_PERF_REGS_H
enum perf_event_x86_regs {
PERF_REG_X86_AX,
PERF_REG_X86_BX,
PERF_REG_X86_CX,
PERF_REG_X86_DX,
PERF_REG_X86_SI,
PERF_REG_X86_DI,
PERF_REG_X86_BP,
PERF_REG_X86_SP,
PERF_REG_X86_IP,
PERF_REG_X86_FLAGS,
PERF_REG_X86_CS,
PERF_REG_X86_SS,
PERF_REG_X86_DS,
PERF_REG_X86_ES,
PERF_REG_X86_FS,
PERF_REG_X86_GS,
PERF_REG_X86_R8,
PERF_REG_X86_R9,
PERF_REG_X86_R10,
PERF_REG_X86_R11,
PERF_REG_X86_R12,
PERF_REG_X86_R13,
PERF_REG_X86_R14,
PERF_REG_X86_R15,
PERF_REG_X86_32_MAX = PERF_REG_X86_GS + 1,
PERF_REG_X86_64_MAX = PERF_REG_X86_R15 + 1,
};
#endif /* _ASM_X86_PERF_REGS_H */
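/*
 * Illustrative sketch, not part of this header: these enum values number
 * the bits of perf_event_attr.sample_regs_user when PERF_SAMPLE_REGS_USER
 * is requested. The helper below asks for the user-space IP, SP and BP;
 * the attribute is assumed to be set up elsewhere before the
 * perf_event_open() call.
 */
#include <linux/perf_event.h>
#include <asm/perf_regs.h>

static void request_x86_user_regs(struct perf_event_attr *attr)
{
        attr->sample_type     |= PERF_SAMPLE_REGS_USER;
        attr->sample_regs_user = (1ULL << PERF_REG_X86_IP) |
                                 (1ULL << PERF_REG_X86_SP) |
                                 (1ULL << PERF_REG_X86_BP);
}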
#ifndef _UAPI__SVM_H
#define _UAPI__SVM_H
#define SVM_EXIT_READ_CR0 0x000
#define SVM_EXIT_READ_CR2 0x002
#define SVM_EXIT_READ_CR3 0x003
#define SVM_EXIT_READ_CR4 0x004
#define SVM_EXIT_READ_CR8 0x008
#define SVM_EXIT_WRITE_CR0 0x010
#define SVM_EXIT_WRITE_CR2 0x012
#define SVM_EXIT_WRITE_CR3 0x013
#define SVM_EXIT_WRITE_CR4 0x014
#define SVM_EXIT_WRITE_CR8 0x018
#define SVM_EXIT_READ_DR0 0x020
#define SVM_EXIT_READ_DR1 0x021
#define SVM_EXIT_READ_DR2 0x022
#define SVM_EXIT_READ_DR3 0x023
#define SVM_EXIT_READ_DR4 0x024
#define SVM_EXIT_READ_DR5 0x025
#define SVM_EXIT_READ_DR6 0x026
#define SVM_EXIT_READ_DR7 0x027
#define SVM_EXIT_WRITE_DR0 0x030
#define SVM_EXIT_WRITE_DR1 0x031
#define SVM_EXIT_WRITE_DR2 0x032
#define SVM_EXIT_WRITE_DR3 0x033
#define SVM_EXIT_WRITE_DR4 0x034
#define SVM_EXIT_WRITE_DR5 0x035
#define SVM_EXIT_WRITE_DR6 0x036
#define SVM_EXIT_WRITE_DR7 0x037
#define SVM_EXIT_EXCP_BASE 0x040
#define SVM_EXIT_INTR 0x060
#define SVM_EXIT_NMI 0x061
#define SVM_EXIT_SMI 0x062
#define SVM_EXIT_INIT 0x063
#define SVM_EXIT_VINTR 0x064
#define SVM_EXIT_CR0_SEL_WRITE 0x065
#define SVM_EXIT_IDTR_READ 0x066
#define SVM_EXIT_GDTR_READ 0x067
#define SVM_EXIT_LDTR_READ 0x068
#define SVM_EXIT_TR_READ 0x069
#define SVM_EXIT_IDTR_WRITE 0x06a
#define SVM_EXIT_GDTR_WRITE 0x06b
#define SVM_EXIT_LDTR_WRITE 0x06c
#define SVM_EXIT_TR_WRITE 0x06d
#define SVM_EXIT_RDTSC 0x06e
#define SVM_EXIT_RDPMC 0x06f
#define SVM_EXIT_PUSHF 0x070
#define SVM_EXIT_POPF 0x071
#define SVM_EXIT_CPUID 0x072
#define SVM_EXIT_RSM 0x073
#define SVM_EXIT_IRET 0x074
#define SVM_EXIT_SWINT 0x075
#define SVM_EXIT_INVD 0x076
#define SVM_EXIT_PAUSE 0x077
#define SVM_EXIT_HLT 0x078
#define SVM_EXIT_INVLPG 0x079
#define SVM_EXIT_INVLPGA 0x07a
#define SVM_EXIT_IOIO 0x07b
#define SVM_EXIT_MSR 0x07c
#define SVM_EXIT_TASK_SWITCH 0x07d
#define SVM_EXIT_FERR_FREEZE 0x07e
#define SVM_EXIT_SHUTDOWN 0x07f
#define SVM_EXIT_VMRUN 0x080
#define SVM_EXIT_VMMCALL 0x081
#define SVM_EXIT_VMLOAD 0x082
#define SVM_EXIT_VMSAVE 0x083
#define SVM_EXIT_STGI 0x084
#define SVM_EXIT_CLGI 0x085
#define SVM_EXIT_SKINIT 0x086
#define SVM_EXIT_RDTSCP 0x087
#define SVM_EXIT_ICEBP 0x088
#define SVM_EXIT_WBINVD 0x089
#define SVM_EXIT_MONITOR 0x08a
#define SVM_EXIT_MWAIT 0x08b
#define SVM_EXIT_MWAIT_COND 0x08c
#define SVM_EXIT_XSETBV 0x08d
#define SVM_EXIT_NPF 0x400
#define SVM_EXIT_AVIC_INCOMPLETE_IPI 0x401
#define SVM_EXIT_AVIC_UNACCELERATED_ACCESS 0x402
#define SVM_EXIT_ERR -1
#define SVM_EXIT_REASONS \
{ SVM_EXIT_READ_CR0, "read_cr0" }, \
{ SVM_EXIT_READ_CR2, "read_cr2" }, \
{ SVM_EXIT_READ_CR3, "read_cr3" }, \
{ SVM_EXIT_READ_CR4, "read_cr4" }, \
{ SVM_EXIT_READ_CR8, "read_cr8" }, \
{ SVM_EXIT_WRITE_CR0, "write_cr0" }, \
{ SVM_EXIT_WRITE_CR2, "write_cr2" }, \
{ SVM_EXIT_WRITE_CR3, "write_cr3" }, \
{ SVM_EXIT_WRITE_CR4, "write_cr4" }, \
{ SVM_EXIT_WRITE_CR8, "write_cr8" }, \
{ SVM_EXIT_READ_DR0, "read_dr0" }, \
{ SVM_EXIT_READ_DR1, "read_dr1" }, \
{ SVM_EXIT_READ_DR2, "read_dr2" }, \
{ SVM_EXIT_READ_DR3, "read_dr3" }, \
{ SVM_EXIT_READ_DR4, "read_dr4" }, \
{ SVM_EXIT_READ_DR5, "read_dr5" }, \
{ SVM_EXIT_READ_DR6, "read_dr6" }, \
{ SVM_EXIT_READ_DR7, "read_dr7" }, \
{ SVM_EXIT_WRITE_DR0, "write_dr0" }, \
{ SVM_EXIT_WRITE_DR1, "write_dr1" }, \
{ SVM_EXIT_WRITE_DR2, "write_dr2" }, \
{ SVM_EXIT_WRITE_DR3, "write_dr3" }, \
{ SVM_EXIT_WRITE_DR4, "write_dr4" }, \
{ SVM_EXIT_WRITE_DR5, "write_dr5" }, \
{ SVM_EXIT_WRITE_DR6, "write_dr6" }, \
{ SVM_EXIT_WRITE_DR7, "write_dr7" }, \
{ SVM_EXIT_EXCP_BASE + DE_VECTOR, "DE excp" }, \
{ SVM_EXIT_EXCP_BASE + DB_VECTOR, "DB excp" }, \
{ SVM_EXIT_EXCP_BASE + BP_VECTOR, "BP excp" }, \
{ SVM_EXIT_EXCP_BASE + OF_VECTOR, "OF excp" }, \
{ SVM_EXIT_EXCP_BASE + BR_VECTOR, "BR excp" }, \
{ SVM_EXIT_EXCP_BASE + UD_VECTOR, "UD excp" }, \
{ SVM_EXIT_EXCP_BASE + NM_VECTOR, "NM excp" }, \
{ SVM_EXIT_EXCP_BASE + DF_VECTOR, "DF excp" }, \
{ SVM_EXIT_EXCP_BASE + TS_VECTOR, "TS excp" }, \
{ SVM_EXIT_EXCP_BASE + NP_VECTOR, "NP excp" }, \
{ SVM_EXIT_EXCP_BASE + SS_VECTOR, "SS excp" }, \
{ SVM_EXIT_EXCP_BASE + GP_VECTOR, "GP excp" }, \
{ SVM_EXIT_EXCP_BASE + PF_VECTOR, "PF excp" }, \
{ SVM_EXIT_EXCP_BASE + MF_VECTOR, "MF excp" }, \
{ SVM_EXIT_EXCP_BASE + AC_VECTOR, "AC excp" }, \
{ SVM_EXIT_EXCP_BASE + MC_VECTOR, "MC excp" }, \
{ SVM_EXIT_EXCP_BASE + XM_VECTOR, "XF excp" }, \
{ SVM_EXIT_INTR, "interrupt" }, \
{ SVM_EXIT_NMI, "nmi" }, \
{ SVM_EXIT_SMI, "smi" }, \
{ SVM_EXIT_INIT, "init" }, \
{ SVM_EXIT_VINTR, "vintr" }, \
{ SVM_EXIT_CR0_SEL_WRITE, "cr0_sel_write" }, \
{ SVM_EXIT_IDTR_READ, "read_idtr" }, \
{ SVM_EXIT_GDTR_READ, "read_gdtr" }, \
{ SVM_EXIT_LDTR_READ, "read_ldtr" }, \
{ SVM_EXIT_TR_READ, "read_rt" }, \
{ SVM_EXIT_IDTR_WRITE, "write_idtr" }, \
{ SVM_EXIT_GDTR_WRITE, "write_gdtr" }, \
{ SVM_EXIT_LDTR_WRITE, "write_ldtr" }, \
{ SVM_EXIT_TR_WRITE, "write_rt" }, \
{ SVM_EXIT_RDTSC, "rdtsc" }, \
{ SVM_EXIT_RDPMC, "rdpmc" }, \
{ SVM_EXIT_PUSHF, "pushf" }, \
{ SVM_EXIT_POPF, "popf" }, \
{ SVM_EXIT_CPUID, "cpuid" }, \
{ SVM_EXIT_RSM, "rsm" }, \
{ SVM_EXIT_IRET, "iret" }, \
{ SVM_EXIT_SWINT, "swint" }, \
{ SVM_EXIT_INVD, "invd" }, \
{ SVM_EXIT_PAUSE, "pause" }, \
{ SVM_EXIT_HLT, "hlt" }, \
{ SVM_EXIT_INVLPG, "invlpg" }, \
{ SVM_EXIT_INVLPGA, "invlpga" }, \
{ SVM_EXIT_IOIO, "io" }, \
{ SVM_EXIT_MSR, "msr" }, \
{ SVM_EXIT_TASK_SWITCH, "task_switch" }, \
{ SVM_EXIT_FERR_FREEZE, "ferr_freeze" }, \
{ SVM_EXIT_SHUTDOWN, "shutdown" }, \
{ SVM_EXIT_VMRUN, "vmrun" }, \
{ SVM_EXIT_VMMCALL, "hypercall" }, \
{ SVM_EXIT_VMLOAD, "vmload" }, \
{ SVM_EXIT_VMSAVE, "vmsave" }, \
{ SVM_EXIT_STGI, "stgi" }, \
{ SVM_EXIT_CLGI, "clgi" }, \
{ SVM_EXIT_SKINIT, "skinit" }, \
{ SVM_EXIT_RDTSCP, "rdtscp" }, \
{ SVM_EXIT_ICEBP, "icebp" }, \
{ SVM_EXIT_WBINVD, "wbinvd" }, \
{ SVM_EXIT_MONITOR, "monitor" }, \
{ SVM_EXIT_MWAIT, "mwait" }, \
{ SVM_EXIT_XSETBV, "xsetbv" }, \
{ SVM_EXIT_NPF, "npf" }, \
{ SVM_EXIT_AVIC_INCOMPLETE_IPI, "avic_incomplete_ipi" }, \
{ SVM_EXIT_AVIC_UNACCELERATED_ACCESS, "avic_unaccelerated_access" }, \
{ SVM_EXIT_ERR, "invalid_guest_state" }
#endif /* _UAPI__SVM_H */
/*
* vmx.h: VMX Architecture related definitions
* Copyright (c) 2004, Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*
* This program is distributed in the hope it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59 Temple
* Place - Suite 330, Boston, MA 02111-1307 USA.
*
* A few random additions are:
* Copyright (C) 2006 Qumranet
* Avi Kivity <avi@qumranet.com>
* Yaniv Kamay <yaniv@qumranet.com>
*
*/
#ifndef _UAPIVMX_H
#define _UAPIVMX_H
#define VMX_EXIT_REASONS_FAILED_VMENTRY 0x80000000
#define EXIT_REASON_EXCEPTION_NMI 0
#define EXIT_REASON_EXTERNAL_INTERRUPT 1
#define EXIT_REASON_TRIPLE_FAULT 2
#define EXIT_REASON_PENDING_INTERRUPT 7
#define EXIT_REASON_NMI_WINDOW 8
#define EXIT_REASON_TASK_SWITCH 9
#define EXIT_REASON_CPUID 10
#define EXIT_REASON_HLT 12
#define EXIT_REASON_INVD 13
#define EXIT_REASON_INVLPG 14
#define EXIT_REASON_RDPMC 15
#define EXIT_REASON_RDTSC 16
#define EXIT_REASON_VMCALL 18
#define EXIT_REASON_VMCLEAR 19
#define EXIT_REASON_VMLAUNCH 20
#define EXIT_REASON_VMPTRLD 21
#define EXIT_REASON_VMPTRST 22
#define EXIT_REASON_VMREAD 23
#define EXIT_REASON_VMRESUME 24
#define EXIT_REASON_VMWRITE 25
#define EXIT_REASON_VMOFF 26
#define EXIT_REASON_VMON 27
#define EXIT_REASON_CR_ACCESS 28
#define EXIT_REASON_DR_ACCESS 29
#define EXIT_REASON_IO_INSTRUCTION 30
#define EXIT_REASON_MSR_READ 31
#define EXIT_REASON_MSR_WRITE 32
#define EXIT_REASON_INVALID_STATE 33
#define EXIT_REASON_MSR_LOAD_FAIL 34
#define EXIT_REASON_MWAIT_INSTRUCTION 36
#define EXIT_REASON_MONITOR_TRAP_FLAG 37
#define EXIT_REASON_MONITOR_INSTRUCTION 39
#define EXIT_REASON_PAUSE_INSTRUCTION 40
#define EXIT_REASON_MCE_DURING_VMENTRY 41
#define EXIT_REASON_TPR_BELOW_THRESHOLD 43
#define EXIT_REASON_APIC_ACCESS 44
#define EXIT_REASON_EOI_INDUCED 45
#define EXIT_REASON_EPT_VIOLATION 48
#define EXIT_REASON_EPT_MISCONFIG 49
#define EXIT_REASON_INVEPT 50
#define EXIT_REASON_RDTSCP 51
#define EXIT_REASON_PREEMPTION_TIMER 52
#define EXIT_REASON_INVVPID 53
#define EXIT_REASON_WBINVD 54
#define EXIT_REASON_XSETBV 55
#define EXIT_REASON_APIC_WRITE 56
#define EXIT_REASON_INVPCID 58
#define EXIT_REASON_PML_FULL 62
#define EXIT_REASON_XSAVES 63
#define EXIT_REASON_XRSTORS 64
#define EXIT_REASON_PCOMMIT 65
#define VMX_EXIT_REASONS \
{ EXIT_REASON_EXCEPTION_NMI, "EXCEPTION_NMI" }, \
{ EXIT_REASON_EXTERNAL_INTERRUPT, "EXTERNAL_INTERRUPT" }, \
{ EXIT_REASON_TRIPLE_FAULT, "TRIPLE_FAULT" }, \
{ EXIT_REASON_PENDING_INTERRUPT, "PENDING_INTERRUPT" }, \
{ EXIT_REASON_NMI_WINDOW, "NMI_WINDOW" }, \
{ EXIT_REASON_TASK_SWITCH, "TASK_SWITCH" }, \
{ EXIT_REASON_CPUID, "CPUID" }, \
{ EXIT_REASON_HLT, "HLT" }, \
{ EXIT_REASON_INVLPG, "INVLPG" }, \
{ EXIT_REASON_RDPMC, "RDPMC" }, \
{ EXIT_REASON_RDTSC, "RDTSC" }, \
{ EXIT_REASON_VMCALL, "VMCALL" }, \
{ EXIT_REASON_VMCLEAR, "VMCLEAR" }, \
{ EXIT_REASON_VMLAUNCH, "VMLAUNCH" }, \
{ EXIT_REASON_VMPTRLD, "VMPTRLD" }, \
{ EXIT_REASON_VMPTRST, "VMPTRST" }, \
{ EXIT_REASON_VMREAD, "VMREAD" }, \
{ EXIT_REASON_VMRESUME, "VMRESUME" }, \
{ EXIT_REASON_VMWRITE, "VMWRITE" }, \
{ EXIT_REASON_VMOFF, "VMOFF" }, \
{ EXIT_REASON_VMON, "VMON" }, \
{ EXIT_REASON_CR_ACCESS, "CR_ACCESS" }, \
{ EXIT_REASON_DR_ACCESS, "DR_ACCESS" }, \
{ EXIT_REASON_IO_INSTRUCTION, "IO_INSTRUCTION" }, \
{ EXIT_REASON_MSR_READ, "MSR_READ" }, \
{ EXIT_REASON_MSR_WRITE, "MSR_WRITE" }, \
{ EXIT_REASON_MWAIT_INSTRUCTION, "MWAIT_INSTRUCTION" }, \
{ EXIT_REASON_MONITOR_TRAP_FLAG, "MONITOR_TRAP_FLAG" }, \
{ EXIT_REASON_MONITOR_INSTRUCTION, "MONITOR_INSTRUCTION" }, \
{ EXIT_REASON_PAUSE_INSTRUCTION, "PAUSE_INSTRUCTION" }, \
{ EXIT_REASON_MCE_DURING_VMENTRY, "MCE_DURING_VMENTRY" }, \
{ EXIT_REASON_TPR_BELOW_THRESHOLD, "TPR_BELOW_THRESHOLD" }, \
{ EXIT_REASON_APIC_ACCESS, "APIC_ACCESS" }, \
{ EXIT_REASON_EPT_VIOLATION, "EPT_VIOLATION" }, \
{ EXIT_REASON_EPT_MISCONFIG, "EPT_MISCONFIG" }, \
{ EXIT_REASON_INVEPT, "INVEPT" }, \
{ EXIT_REASON_PREEMPTION_TIMER, "PREEMPTION_TIMER" }, \
{ EXIT_REASON_WBINVD, "WBINVD" }, \
{ EXIT_REASON_APIC_WRITE, "APIC_WRITE" }, \
{ EXIT_REASON_EOI_INDUCED, "EOI_INDUCED" }, \
{ EXIT_REASON_INVALID_STATE, "INVALID_STATE" }, \
{ EXIT_REASON_MSR_LOAD_FAIL, "MSR_LOAD_FAIL" }, \
{ EXIT_REASON_INVD, "INVD" }, \
{ EXIT_REASON_INVVPID, "INVVPID" }, \
{ EXIT_REASON_INVPCID, "INVPCID" }, \
{ EXIT_REASON_XSAVES, "XSAVES" }, \
{ EXIT_REASON_XRSTORS, "XRSTORS" }, \
{ EXIT_REASON_PCOMMIT, "PCOMMIT" }
#define VMX_ABORT_SAVE_GUEST_MSR_FAIL 1
#define VMX_ABORT_LOAD_HOST_MSR_FAIL 4
#endif /* _UAPIVMX_H */
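For context, not part of the diff itself: the { value, "NAME" } pairs in VMX_EXIT_REASONS are laid out so the macro can expand directly into an initializer list, e.g. as the symbol table handed to a tracepoint's __print_symbolic() or to a plain userspace lookup table. A minimal, hypothetical C sketch of that pattern follows; the exit_reason struct, the vmx_exit_name() helper, the main() demo and the <asm/vmx.h> include path are illustrative assumptions, only VMX_EXIT_REASONS and EXIT_REASON_EPT_VIOLATION come from the header above.
/*
 * Hypothetical sketch, not part of the commit: turn a raw VM-exit code
 * into a readable name using the VMX_EXIT_REASONS table from the header
 * shown above. Struct, helper and include path are assumptions.
 */
#include <stdio.h>
#include <asm/vmx.h>		/* assumed install path of the uapi header above */

struct exit_reason {		/* hypothetical helper type, not a kernel API */
	unsigned int	code;
	const char	*name;
};

static const struct exit_reason vmx_exit_names[] = {
	VMX_EXIT_REASONS	/* expands to the { code, "NAME" } entries above */
};

static const char *vmx_exit_name(unsigned int code)
{
	unsigned int i;

	for (i = 0; i < sizeof(vmx_exit_names) / sizeof(vmx_exit_names[0]); i++)
		if (vmx_exit_names[i].code == code)
			return vmx_exit_names[i].name;
	return "UNKNOWN";
}

int main(void)
{
	/* Exit reason 48 resolves to "EPT_VIOLATION". */
	printf("%s\n", vmx_exit_name(EXIT_REASON_EPT_VIOLATION));
	return 0;
}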