Commit 2e5fc478 authored by Ingo Molnar

Merge branch 'x86/sev' into x86/boot, to resolve conflicts and to pick up dependent tree

We are going to queue up a number of patches that depend
on fresh changes in x86/sev - merge in that branch to
reduce the number of conflicts going forward.

Also resolve a current conflict with x86/sev.

Conflicts:
	arch/x86/include/asm/coco.h
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parents 29cd8555 ee8ff876
@@ -3320,9 +3320,7 @@
	mem_encrypt=	[X86-64] AMD Secure Memory Encryption (SME) control
			Valid arguments: on, off
-			Default (depends on kernel configuration option):
-			  on  (CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT=y)
-			  off (CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT=n)
+			Default: off
			mem_encrypt=on:		Activate SME
			mem_encrypt=off:	Do not activate SME
......
@@ -87,14 +87,14 @@ The state of SME in the Linux kernel can be documented as follows:
	  kernel is non-zero).
SME can also be enabled and activated in the BIOS. If SME is enabled and
-activated in the BIOS, then all memory accesses will be encrypted and it will
-not be necessary to activate the Linux memory encryption support. If the BIOS
-merely enables SME (sets bit 23 of the MSR_AMD64_SYSCFG), then Linux can activate
-memory encryption by default (CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT=y) or
-by supplying mem_encrypt=on on the kernel command line. However, if BIOS does
-not enable SME, then Linux will not be able to activate memory encryption, even
-if configured to do so by default or the mem_encrypt=on command line parameter
-is specified.
+activated in the BIOS, then all memory accesses will be encrypted and it
+will not be necessary to activate the Linux memory encryption support.
+
+If the BIOS merely enables SME (sets bit 23 of the MSR_AMD64_SYSCFG),
+then memory encryption can be enabled by supplying mem_encrypt=on on the
+kernel command line. However, if BIOS does not enable SME, then Linux
+will not be able to activate memory encryption, even if configured to do
+so by default or the mem_encrypt=on command line parameter is specified.
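As an aside (illustration only, not part of this patch set): the "BIOS merely
enables SME" case above corresponds to bit 23 of MSR_AMD64_SYSCFG being set.
A minimal in-kernel sketch of such a check, using the MSR definitions touched
by this series, could look like the following; the helper name is hypothetical,
the real detection lives in sme_enable() and early_detect_mem_encrypt()::

	#include <asm/msr.h>
	#include <asm/msr-index.h>

	static bool __init bios_enabled_sme(void)
	{
		u64 syscfg;

		/* MSR_AMD64_SYSCFG_MEM_ENCRYPT is bit 23 of MSR 0xc0010010 */
		rdmsrl(MSR_AMD64_SYSCFG, syscfg);
		return !!(syscfg & MSR_AMD64_SYSCFG_MEM_ENCRYPT);
	}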
Secure Nested Paging (SNP)
==========================
......
@@ -67,6 +67,23 @@ counter (e.g. counter overflow), then -EIO will be returned.
		};
	};
The host ioctls are issued to a file descriptor of the /dev/sev device.
The ioctl accepts the command ID/input structure documented below.
::
struct sev_issue_cmd {
/* Command ID */
__u32 cmd;
/* Command request structure */
__u64 data;
/* Firmware error code on failure (see psp-sev.h) */
__u32 error;
};
2.1 SNP_GET_REPORT
------------------
@@ -124,6 +141,41 @@ be updated with the expected value.
See GHCB specification for further detail on how to parse the certificate blob.
2.4 SNP_PLATFORM_STATUS
-----------------------
:Technology: sev-snp
:Type: hypervisor ioctl cmd
:Parameters (out): struct sev_user_data_snp_status
:Returns (out): 0 on success, -negative on error
The SNP_PLATFORM_STATUS command is used to query the SNP platform status. The
status includes API major, minor version and more. See the SEV-SNP
specification for further details.
2.5 SNP_COMMIT
--------------
:Technology: sev-snp
:Type: hypervisor ioctl cmd
:Returns (out): 0 on success, -negative on error
SNP_COMMIT is used to commit the currently installed firmware using the
SEV-SNP firmware SNP_COMMIT command. This prevents roll-back to a previously
committed firmware version. This will also update the reported TCB to match
that of the currently installed firmware.
2.6 SNP_SET_CONFIG
------------------
:Technology: sev-snp
:Type: hypervisor ioctl cmd
:Parameters (in): struct sev_user_data_snp_config
:Returns (out): 0 on success, -negative on error
SNP_SET_CONFIG is used to set the system-wide configuration such as
reported TCB version in the attestation report. The command is similar
to SNP_CONFIG command defined in the SEV-SNP spec. The current values of
the firmware parameters affected by this command can be queried via
SNP_PLATFORM_STATUS.
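For orientation only (not part of this patch set), a minimal userspace sketch of
issuing one of the commands above through the /dev/sev node, assuming the
updated <linux/psp-sev.h> UAPI header is installed, could look like::

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/psp-sev.h>

	int main(void)
	{
		struct sev_user_data_snp_status status = {};
		struct sev_issue_cmd cmd = {
			.cmd  = SNP_PLATFORM_STATUS,
			.data = (unsigned long)&status,
		};
		int fd = open("/dev/sev", O_RDWR);

		if (fd < 0 || ioctl(fd, SEV_ISSUE_CMD, &cmd) < 0) {
			/* cmd.error holds the firmware error code on failure */
			perror("SNP_PLATFORM_STATUS");
			return 1;
		}

		printf("SNP firmware API %u.%u, reported TCB %llx\n",
		       status.api_major, status.api_minor,
		       (unsigned long long)status.reported_tcb_version);
		return 0;
	}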
3. SEV-SNP CPUID Enforcement
============================
......
@@ -28,5 +28,7 @@ obj-y += net/
obj-$(CONFIG_KEXEC_FILE) += purgatory/
obj-y += virt/svm/
# for cleaning
subdir- += boot tools
@@ -1539,19 +1539,6 @@ config AMD_MEM_ENCRYPT
	  This requires an AMD processor that supports Secure Memory
	  Encryption (SME).
config AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT
bool "Activate AMD Secure Memory Encryption (SME) by default"
depends on AMD_MEM_ENCRYPT
help
Say yes to have system memory encrypted by default if running on
an AMD processor that supports Secure Memory Encryption (SME).
If set to Y, then the encryption of system memory can be
deactivated with the mem_encrypt=off command line option.
If set to N, then the encryption of system memory can be
activated with the mem_encrypt=on command line option.
# Common NUMA Features
config NUMA
	bool "NUMA Memory Allocation and Scheduler Support"
......
@@ -304,6 +304,10 @@ void do_boot_stage2_vc(struct pt_regs *regs, unsigned long exit_code)
	if (result != ES_OK)
		goto finish;
result = vc_check_opcode_bytes(&ctxt, exit_code);
if (result != ES_OK)
goto finish;
	switch (exit_code) {
	case SVM_EXIT_RDTSC:
	case SVM_EXIT_RDTSCP:
......
@@ -14,7 +14,7 @@
#include <asm/processor.h>
enum cc_vendor cc_vendor __ro_after_init = CC_VENDOR_NONE;
-static u64 cc_mask __ro_after_init;
+u64 cc_mask __ro_after_init;
static bool noinstr intel_cc_platform_has(enum cc_attr attr)
{
@@ -148,8 +148,3 @@ u64 cc_mkdec(u64 val)
	}
}
EXPORT_SYMBOL_GPL(cc_mkdec);
__init void cc_set_mask(u64 mask)
{
cc_mask = mask;
}
@@ -113,6 +113,20 @@
#endif
#ifndef __ASSEMBLY__
#ifndef __pic__
static __always_inline __pure void *rip_rel_ptr(void *p)
{
asm("leaq %c1(%%rip), %0" : "=r"(p) : "i"(p));
return p;
}
#define RIP_REL_REF(var) (*(typeof(&(var)))rip_rel_ptr(&(var)))
#else
#define RIP_REL_REF(var) (var)
#endif
#endif
/*
 * Macros to generate condition code outputs from inline assembly,
 * The output operand must be type "bool".
......
@@ -2,6 +2,7 @@
#ifndef _ASM_X86_COCO_H
#define _ASM_X86_COCO_H
#include <asm/asm.h>
#include <asm/types.h>
enum cc_vendor {
@@ -10,9 +11,14 @@ enum cc_vendor {
	CC_VENDOR_INTEL,
};
extern u64 cc_mask;
#ifdef CONFIG_ARCH_HAS_CC_PLATFORM
extern enum cc_vendor cc_vendor;
-void cc_set_mask(u64 mask);
+static inline void cc_set_mask(u64 mask)
{
RIP_REL_REF(cc_mask) = mask;
}
u64 cc_mkenc(u64 val);
u64 cc_mkdec(u64 val);
#else
......
@@ -440,6 +440,7 @@
#define X86_FEATURE_SEV			(19*32+ 1) /* AMD Secure Encrypted Virtualization */
#define X86_FEATURE_VM_PAGE_FLUSH	(19*32+ 2) /* "" VM Page Flush MSR is supported */
#define X86_FEATURE_SEV_ES		(19*32+ 3) /* AMD Secure Encrypted Virtualization - Encrypted State */
#define X86_FEATURE_SEV_SNP (19*32+ 4) /* AMD Secure Encrypted Virtualization - Secure Nested Paging */
#define X86_FEATURE_V_TSC_AUX		(19*32+ 9) /* "" Virtual TSC_AUX */
#define X86_FEATURE_SME_COHERENT	(19*32+10) /* "" AMD hardware-enforced cache coherency */
#define X86_FEATURE_DEBUG_SWAP		(19*32+14) /* AMD SEV-ES full debug state swap support */
......
@@ -117,6 +117,12 @@
#define DISABLE_IBT	(1 << (X86_FEATURE_IBT & 31))
#endif
#ifdef CONFIG_KVM_AMD_SEV
#define DISABLE_SEV_SNP 0
#else
#define DISABLE_SEV_SNP (1 << (X86_FEATURE_SEV_SNP & 31))
#endif
/*
 * Make sure to add features to the correct mask
 */
@@ -141,7 +147,7 @@
			 DISABLE_ENQCMD)
#define DISABLED_MASK17	0
#define DISABLED_MASK18	(DISABLE_IBT)
-#define DISABLED_MASK19	0
+#define DISABLED_MASK19	(DISABLE_SEV_SNP)
#define DISABLED_MASK20	0
#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 21)
......
@@ -10,6 +10,7 @@ extern int force_iommu, no_iommu;
extern int iommu_detected;
extern int iommu_merge;
extern int panic_on_overflow;
extern bool amd_iommu_snp_en;
#ifdef CONFIG_SWIOTLB
extern bool x86_swiotlb_enable;
......
@@ -138,6 +138,7 @@ KVM_X86_OP(complete_emulated_msr)
KVM_X86_OP(vcpu_deliver_sipi_vector)
KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
KVM_X86_OP_OPTIONAL(get_untagged_addr)
KVM_X86_OP_OPTIONAL(alloc_apic_backing_page)
#undef KVM_X86_OP
#undef KVM_X86_OP_OPTIONAL
......
@@ -1796,6 +1796,7 @@ struct kvm_x86_ops {
	unsigned long (*vcpu_get_apicv_inhibit_reasons)(struct kvm_vcpu *vcpu);
	gva_t (*get_untagged_addr)(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
void *(*alloc_apic_backing_page)(struct kvm_vcpu *vcpu);
};
struct kvm_x86_nested_ops {
......
@@ -15,7 +15,8 @@
#include <linux/init.h>
#include <linux/cc_platform.h>
-#include <asm/bootparam.h>
+#include <asm/asm.h>
struct boot_params;
#ifdef CONFIG_X86_MEM_ENCRYPT
void __init mem_encrypt_init(void);
@@ -58,6 +59,11 @@ void __init mem_encrypt_free_decrypted_mem(void);
void __init sev_es_init_vc_handling(void);
static inline u64 sme_get_me_mask(void)
{
return RIP_REL_REF(sme_me_mask);
}
#define __bss_decrypted __section(".bss..decrypted")
#else	/* !CONFIG_AMD_MEM_ENCRYPT */
@@ -89,6 +95,8 @@ early_set_mem_enc_dec_hypercall(unsigned long vaddr, unsigned long size, bool en
static inline void mem_encrypt_free_decrypted_mem(void) { }
static inline u64 sme_get_me_mask(void) { return 0; }
#define __bss_decrypted
#endif	/* CONFIG_AMD_MEM_ENCRYPT */
@@ -106,11 +114,6 @@ void add_encrypt_protection_map(void);
extern char __start_bss_decrypted[], __end_bss_decrypted[], __start_bss_decrypted_unused[];
static inline u64 sme_get_me_mask(void)
{
return sme_me_mask;
}
#endif	/* __ASSEMBLY__ */
#endif	/* __X86_MEM_ENCRYPT_H__ */
@@ -599,6 +599,8 @@
#define MSR_AMD64_SEV_ENABLED		BIT_ULL(MSR_AMD64_SEV_ENABLED_BIT)
#define MSR_AMD64_SEV_ES_ENABLED	BIT_ULL(MSR_AMD64_SEV_ES_ENABLED_BIT)
#define MSR_AMD64_SEV_SNP_ENABLED	BIT_ULL(MSR_AMD64_SEV_SNP_ENABLED_BIT)
#define MSR_AMD64_RMP_BASE 0xc0010132
#define MSR_AMD64_RMP_END 0xc0010133
/* SNP feature bits enabled by the hypervisor */
#define MSR_AMD64_SNP_VTOM		BIT_ULL(3)
@@ -708,8 +710,15 @@
#define MSR_K8_TOP_MEM1			0xc001001a
#define MSR_K8_TOP_MEM2			0xc001001d
#define MSR_AMD64_SYSCFG		0xc0010010
#define MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT 23
#define MSR_AMD64_SYSCFG_MEM_ENCRYPT	BIT_ULL(MSR_AMD64_SYSCFG_MEM_ENCRYPT_BIT)
#define MSR_AMD64_SYSCFG_SNP_EN_BIT 24
#define MSR_AMD64_SYSCFG_SNP_EN BIT_ULL(MSR_AMD64_SYSCFG_SNP_EN_BIT)
#define MSR_AMD64_SYSCFG_SNP_VMPL_EN_BIT 25
#define MSR_AMD64_SYSCFG_SNP_VMPL_EN BIT_ULL(MSR_AMD64_SYSCFG_SNP_VMPL_EN_BIT)
#define MSR_AMD64_SYSCFG_MFDM_BIT 19
#define MSR_AMD64_SYSCFG_MFDM BIT_ULL(MSR_AMD64_SYSCFG_MFDM_BIT)
#define MSR_K8_INT_PENDING_MSG		0xc0010055
/* C1E active bits in int pending message */
#define K8_INTP_C1E_ACTIVE_MASK		0x18000000
......
@@ -87,9 +87,23 @@ extern bool handle_vc_boot_ghcb(struct pt_regs *regs);
/* Software defined (when rFlags.CF = 1) */
#define PVALIDATE_FAIL_NOUPDATE		255
/* RMUPDATE detected 4K page and 2MB page overlap. */
#define RMPUPDATE_FAIL_OVERLAP 4
/* RMP page size */
#define RMP_PG_SIZE_4K			0
#define RMP_PG_SIZE_2M			1
#define RMP_TO_PG_LEVEL(level) (((level) == RMP_PG_SIZE_4K) ? PG_LEVEL_4K : PG_LEVEL_2M)
#define PG_LEVEL_TO_RMP(level) (((level) == PG_LEVEL_4K) ? RMP_PG_SIZE_4K : RMP_PG_SIZE_2M)
struct rmp_state {
u64 gpa;
u8 assigned;
u8 pagesize;
u8 immutable;
u8 rsvd;
u32 asid;
} __packed;
#define RMPADJUST_VMSA_PAGE_BIT		BIT(16)
@@ -213,6 +227,7 @@ int snp_issue_guest_request(u64 exit_code, struct snp_req_data *input, struct sn
void snp_accept_memory(phys_addr_t start, phys_addr_t end);
u64 snp_get_unsupported_features(u64 status);
u64 sev_get_status(void);
void kdump_sev_callback(void);
#else
static inline void sev_es_ist_enter(struct pt_regs *regs) { }
static inline void sev_es_ist_exit(void) { }
@@ -241,6 +256,29 @@ static inline int snp_issue_guest_request(u64 exit_code, struct snp_req_data *in
static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
static inline u64 snp_get_unsupported_features(u64 status) { return 0; }
static inline u64 sev_get_status(void) { return 0; }
static inline void kdump_sev_callback(void) { }
#endif
#ifdef CONFIG_KVM_AMD_SEV
bool snp_probe_rmptable_info(void);
int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level);
void snp_dump_hva_rmpentry(unsigned long address);
int psmash(u64 pfn);
int rmp_make_private(u64 pfn, u64 gpa, enum pg_level level, u32 asid, bool immutable);
int rmp_make_shared(u64 pfn, enum pg_level level);
void snp_leak_pages(u64 pfn, unsigned int npages);
#else
static inline bool snp_probe_rmptable_info(void) { return false; }
static inline int snp_lookup_rmpentry(u64 pfn, bool *assigned, int *level) { return -ENODEV; }
static inline void snp_dump_hva_rmpentry(unsigned long address) {}
static inline int psmash(u64 pfn) { return -ENODEV; }
static inline int rmp_make_private(u64 pfn, u64 gpa, enum pg_level level, u32 asid,
bool immutable)
{
return -ENODEV;
}
static inline int rmp_make_shared(u64 pfn, enum pg_level level) { return -ENODEV; }
static inline void snp_leak_pages(u64 pfn, unsigned int npages) {}
#endif
#endif
@@ -2,6 +2,8 @@
#ifndef _ASM_X86_TRAP_PF_H
#define _ASM_X86_TRAP_PF_H
+#include <linux/bits.h>
/*
 * Page fault error code bits:
 *
@@ -13,16 +15,18 @@
 *   bit 5 ==	1: protection keys block access
 *   bit 6 ==	1: shadow stack access fault
 *   bit 15 ==	1: SGX MMU page-fault
+ *   bit 31 ==	1: fault was due to RMP violation
 */
enum x86_pf_error_code {
-	X86_PF_PROT	=	1 << 0,
-	X86_PF_WRITE	=	1 << 1,
-	X86_PF_USER	=	1 << 2,
-	X86_PF_RSVD	=	1 << 3,
-	X86_PF_INSTR	=	1 << 4,
-	X86_PF_PK	=	1 << 5,
-	X86_PF_SHSTK	=	1 << 6,
-	X86_PF_SGX	=	1 << 15,
+	X86_PF_PROT	=	BIT(0),
+	X86_PF_WRITE	=	BIT(1),
+	X86_PF_USER	=	BIT(2),
+	X86_PF_RSVD	=	BIT(3),
+	X86_PF_INSTR	=	BIT(4),
+	X86_PF_PK	=	BIT(5),
+	X86_PF_SHSTK	=	BIT(6),
+	X86_PF_SGX	=	BIT(15),
+	X86_PF_RMP	=	BIT(31),
};
#endif /* _ASM_X86_TRAP_PF_H */
@@ -20,6 +20,7 @@
#include <asm/delay.h>
#include <asm/debugreg.h>
#include <asm/resctrl.h>
#include <asm/sev.h>
#ifdef CONFIG_X86_64
# include <asm/mmconfig.h>
@@ -587,6 +588,21 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
		break;
	}
if (cpu_has(c, X86_FEATURE_SEV_SNP)) {
/*
* RMP table entry format is not architectural and it can vary by processor
* and is defined by the per-processor PPR. Restrict SNP support on the
* known CPU model and family for which the RMP table entry format is
* currently defined for.
*/
if (!boot_cpu_has(X86_FEATURE_ZEN3) &&
!boot_cpu_has(X86_FEATURE_ZEN4) &&
!boot_cpu_has(X86_FEATURE_ZEN5))
setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
else if (!snp_probe_rmptable_info())
setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
}
	return;
warn:
@@ -605,8 +621,8 @@ static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
	 *	  SME feature (set in scattered.c).
	 *   If the kernel has not enabled SME via any means then
	 *   don't advertise the SME feature.
-	 *   For SEV: If BIOS has not enabled SEV then don't advertise the
-	 *	  SEV and SEV_ES feature (set in scattered.c).
+	 *   For SEV: If BIOS has not enabled SEV then don't advertise SEV and
+	 *	  any additional functionality based on it.
	 *
	 *   In all cases, since support for SME and SEV requires long mode,
	 *   don't advertise the feature under CONFIG_X86_32.
@@ -641,6 +657,7 @@ static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
clear_sev:
		setup_clear_cpu_cap(X86_FEATURE_SEV);
		setup_clear_cpu_cap(X86_FEATURE_SEV_ES);
setup_clear_cpu_cap(X86_FEATURE_SEV_SNP);
	}
}
......
@@ -1355,8 +1355,13 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
	/*
	 * AMD's AutoIBRS is equivalent to Intel's eIBRS - use the Intel feature
	 * flag and protect from vendor-specific bugs via the whitelist.
+	 *
+	 * Don't use AutoIBRS when SNP is enabled because it degrades host
+	 * userspace indirect branch performance.
	 */
-	if ((ia32_cap & ARCH_CAP_IBRS_ALL) || cpu_has(c, X86_FEATURE_AUTOIBRS)) {
+	if ((ia32_cap & ARCH_CAP_IBRS_ALL) ||
+	    (cpu_has(c, X86_FEATURE_AUTOIBRS) &&
+	     !cpu_feature_enabled(X86_FEATURE_SEV_SNP))) {
		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
		if (!cpu_matches(cpu_vuln_whitelist, NO_EIBRS_PBRSB) &&
		    !(ia32_cap & ARCH_CAP_PBRSB_NO))
......
@@ -108,6 +108,9 @@ static inline void k8_check_syscfg_dram_mod_en(void)
	      (boot_cpu_data.x86 >= 0x0f)))
		return;
if (cpu_feature_enabled(X86_FEATURE_SEV_SNP))
return;
	rdmsr(MSR_AMD64_SYSCFG, lo, hi);
	if (lo & K8_MTRRFIXRANGE_DRAM_MODIFY) {
		pr_err(FW_WARN "MTRR: CPU %u: SYSCFG[MtrrFixDramModEn]"
......
@@ -40,6 +40,7 @@
#include <asm/intel_pt.h>
#include <asm/crash.h>
#include <asm/cmdline.h>
#include <asm/sev.h>
/* Used while preparing memory map entries for second kernel */
struct crash_memmap_data {
@@ -59,6 +60,8 @@ static void kdump_nmi_callback(int cpu, struct pt_regs *regs)
	 */
	cpu_emergency_stop_pt();
kdump_sev_callback();
	disable_local_APIC();
}
......
@@ -10,11 +10,15 @@
 */
#ifndef __BOOT_COMPRESSED
#define error(v)	pr_err(v)
#define has_cpuflag(f)	boot_cpu_has(f)
+#define sev_printk(fmt, ...)		printk(fmt, ##__VA_ARGS__)
+#define sev_printk_rtl(fmt, ...)	printk_ratelimited(fmt, ##__VA_ARGS__)
#else
#undef WARN
#define WARN(condition, format...) (!!(condition))
+#define sev_printk(fmt, ...)
+#define sev_printk_rtl(fmt, ...)
#endif
/* I/O parameters for CPUID-related helpers */
@@ -556,9 +560,9 @@ static int snp_cpuid(struct ghcb *ghcb, struct es_em_ctxt *ctxt, struct cpuid_le
		leaf->eax = leaf->ebx = leaf->ecx = leaf->edx = 0;
		/* Skip post-processing for out-of-range zero leafs. */
-		if (!(leaf->fn <= cpuid_std_range_max ||
-		      (leaf->fn >= 0x40000000 && leaf->fn <= cpuid_hyp_range_max) ||
-		      (leaf->fn >= 0x80000000 && leaf->fn <= cpuid_ext_range_max)))
+		if (!(leaf->fn <= RIP_REL_REF(cpuid_std_range_max) ||
+		      (leaf->fn >= 0x40000000 && leaf->fn <= RIP_REL_REF(cpuid_hyp_range_max)) ||
+		      (leaf->fn >= 0x80000000 && leaf->fn <= RIP_REL_REF(cpuid_ext_range_max))))
			return 0;
	}
@@ -574,6 +578,7 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
{
	unsigned int subfn = lower_bits(regs->cx, 32);
	unsigned int fn = lower_bits(regs->ax, 32);
+	u16 opcode = *(unsigned short *)regs->ip;
	struct cpuid_leaf leaf;
	int ret;
@@ -581,6 +586,10 @@ void __init do_vc_no_ghcb(struct pt_regs *regs, unsigned long exit_code)
	if (exit_code != SVM_EXIT_CPUID)
		goto fail;
/* Is it really a CPUID insn? */
if (opcode != 0xa20f)
goto fail;
	leaf.fn = fn;
	leaf.subfn = subfn;
@@ -1063,11 +1072,11 @@ static void __init setup_cpuid_table(const struct cc_blob_sev_info *cc_info)
		const struct snp_cpuid_fn *fn = &cpuid_table->fn[i];
		if (fn->eax_in == 0x0)
-			cpuid_std_range_max = fn->eax;
+			RIP_REL_REF(cpuid_std_range_max) = fn->eax;
		else if (fn->eax_in == 0x40000000)
-			cpuid_hyp_range_max = fn->eax;
+			RIP_REL_REF(cpuid_hyp_range_max) = fn->eax;
		else if (fn->eax_in == 0x80000000)
-			cpuid_ext_range_max = fn->eax;
+			RIP_REL_REF(cpuid_ext_range_max) = fn->eax;
	}
}
@@ -1170,3 +1179,92 @@ static int vmgexit_psc(struct ghcb *ghcb, struct snp_psc_desc *desc)
out:
	return ret;
}
static enum es_result vc_check_opcode_bytes(struct es_em_ctxt *ctxt,
unsigned long exit_code)
{
unsigned int opcode = (unsigned int)ctxt->insn.opcode.value;
u8 modrm = ctxt->insn.modrm.value;
switch (exit_code) {
case SVM_EXIT_IOIO:
case SVM_EXIT_NPF:
/* handled separately */
return ES_OK;
case SVM_EXIT_CPUID:
if (opcode == 0xa20f)
return ES_OK;
break;
case SVM_EXIT_INVD:
if (opcode == 0x080f)
return ES_OK;
break;
case SVM_EXIT_MONITOR:
if (opcode == 0x010f && modrm == 0xc8)
return ES_OK;
break;
case SVM_EXIT_MWAIT:
if (opcode == 0x010f && modrm == 0xc9)
return ES_OK;
break;
case SVM_EXIT_MSR:
/* RDMSR */
if (opcode == 0x320f ||
/* WRMSR */
opcode == 0x300f)
return ES_OK;
break;
case SVM_EXIT_RDPMC:
if (opcode == 0x330f)
return ES_OK;
break;
case SVM_EXIT_RDTSC:
if (opcode == 0x310f)
return ES_OK;
break;
case SVM_EXIT_RDTSCP:
if (opcode == 0x010f && modrm == 0xf9)
return ES_OK;
break;
case SVM_EXIT_READ_DR7:
if (opcode == 0x210f &&
X86_MODRM_REG(ctxt->insn.modrm.value) == 7)
return ES_OK;
break;
case SVM_EXIT_VMMCALL:
if (opcode == 0x010f && modrm == 0xd9)
return ES_OK;
break;
case SVM_EXIT_WRITE_DR7:
if (opcode == 0x230f &&
X86_MODRM_REG(ctxt->insn.modrm.value) == 7)
return ES_OK;
break;
case SVM_EXIT_WBINVD:
if (opcode == 0x90f)
return ES_OK;
break;
default:
break;
}
sev_printk(KERN_ERR "Wrong/unhandled opcode bytes: 0x%x, exit_code: 0x%lx, rIP: 0x%lx\n",
opcode, exit_code, ctxt->regs->ip);
return ES_UNSUPPORTED;
}
@@ -748,7 +748,7 @@ void __init early_snp_set_memory_private(unsigned long vaddr, unsigned long padd
	 * This eliminates worries about jump tables or checking boot_cpu_data
	 * in the cc_platform_has() function.
	 */
-	if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+	if (!(RIP_REL_REF(sev_status) & MSR_AMD64_SEV_SNP_ENABLED))
		return;
	/*
@@ -767,7 +767,7 @@ void __init early_snp_set_memory_shared(unsigned long vaddr, unsigned long paddr
	 * This eliminates worries about jump tables or checking boot_cpu_data
	 * in the cc_platform_has() function.
	 */
-	if (!(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+	if (!(RIP_REL_REF(sev_status) & MSR_AMD64_SEV_SNP_ENABLED))
		return;
	/* Ask hypervisor to mark the memory pages shared in the RMP table. */
@@ -1752,7 +1752,10 @@ static enum es_result vc_handle_exitcode(struct es_em_ctxt *ctxt,
					 struct ghcb *ghcb,
					 unsigned long exit_code)
{
-	enum es_result result;
+	enum es_result result = vc_check_opcode_bytes(ctxt, exit_code);
+
+	if (result != ES_OK)
+		return result;
	switch (exit_code) {
	case SVM_EXIT_READ_DR7:
@@ -2262,3 +2265,13 @@ static int __init snp_init_platform_device(void)
	return 0;
}
device_initcall(snp_init_platform_device);
void kdump_sev_callback(void)
{
/*
* Do wbinvd() on remote CPUs when SNP is enabled in order to
* safely do SNP_SHUTDOWN on the local CPU.
*/
if (cpu_feature_enabled(X86_FEATURE_SEV_SNP))
wbinvd();
}
@@ -2815,7 +2815,10 @@ int kvm_create_lapic(struct kvm_vcpu *vcpu, int timer_advance_ns)
	vcpu->arch.apic = apic;
-	apic->regs = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
+	if (kvm_x86_ops.alloc_apic_backing_page)
+		apic->regs = static_call(kvm_x86_alloc_apic_backing_page)(vcpu);
+	else
+		apic->regs = (void *)get_zeroed_page(GFP_KERNEL_ACCOUNT);
	if (!apic->regs) {
		printk(KERN_ERR "malloc apic regs error for vcpu %x\n",
		       vcpu->vcpu_id);
......
@@ -1181,7 +1181,7 @@ int svm_allocate_nested(struct vcpu_svm *svm)
	if (svm->nested.initialized)
		return 0;
-	vmcb02_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+	vmcb02_page = snp_safe_alloc_page(&svm->vcpu);
	if (!vmcb02_page)
		return -ENOMEM;
	svm->nested.vmcb02.ptr = page_address(vmcb02_page);
......
@@ -246,6 +246,7 @@ static void sev_unbind_asid(struct kvm *kvm, unsigned int handle)
static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
{
	struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
+	struct sev_platform_init_args init_args = {0};
	int asid, ret;
	if (kvm->created_vcpus)
@@ -262,7 +263,8 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
		goto e_no_asid;
	sev->asid = asid;
-	ret = sev_platform_init(&argp->error);
+	init_args.probe = false;
+	ret = sev_platform_init(&init_args);
	if (ret)
		goto e_free;
@@ -274,6 +276,7 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp)
	return 0;
e_free:
+	argp->error = init_args.error;
	sev_asid_free(sev);
	sev->asid = 0;
e_no_asid:
@@ -3160,3 +3163,35 @@ void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
		ghcb_set_sw_exit_info_2(svm->sev_es.ghcb, 1);
}
struct page *snp_safe_alloc_page(struct kvm_vcpu *vcpu)
{
unsigned long pfn;
struct page *p;
if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
return alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
/*
* Allocate an SNP-safe page to workaround the SNP erratum where
* the CPU will incorrectly signal an RMP violation #PF if a
* hugepage (2MB or 1GB) collides with the RMP entry of a
* 2MB-aligned VMCB, VMSA, or AVIC backing page.
*
* Allocate one extra page, choose a page which is not
* 2MB-aligned, and free the other.
*/
p = alloc_pages(GFP_KERNEL_ACCOUNT | __GFP_ZERO, 1);
if (!p)
return NULL;
split_page(p, 1);
pfn = page_to_pfn(p);
if (IS_ALIGNED(pfn, PTRS_PER_PMD))
__free_page(p++);
else
__free_page(p + 1);
return p;
}
@@ -703,7 +703,7 @@ static int svm_cpu_init(int cpu)
	int ret = -ENOMEM;
	memset(sd, 0, sizeof(struct svm_cpu_data));
-	sd->save_area = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	sd->save_area = snp_safe_alloc_page(NULL);
	if (!sd->save_area)
		return ret;
@@ -1421,7 +1421,7 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
	svm = to_svm(vcpu);
	err = -ENOMEM;
-	vmcb01_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+	vmcb01_page = snp_safe_alloc_page(vcpu);
	if (!vmcb01_page)
		goto out;
@@ -1430,7 +1430,7 @@ static int svm_vcpu_create(struct kvm_vcpu *vcpu)
		 * SEV-ES guests require a separate VMSA page used to contain
		 * the encrypted register state of the guest.
		 */
-		vmsa_page = alloc_page(GFP_KERNEL_ACCOUNT | __GFP_ZERO);
+		vmsa_page = snp_safe_alloc_page(vcpu);
		if (!vmsa_page)
			goto error_free_vmcb_page;
@@ -4900,6 +4900,16 @@ static int svm_vm_init(struct kvm *kvm)
	return 0;
}
static void *svm_alloc_apic_backing_page(struct kvm_vcpu *vcpu)
{
struct page *page = snp_safe_alloc_page(vcpu);
if (!page)
return NULL;
return page_address(page);
}
static struct kvm_x86_ops svm_x86_ops __initdata = {
	.name = KBUILD_MODNAME,
@@ -5031,6 +5041,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
	.vcpu_deliver_sipi_vector = svm_vcpu_deliver_sipi_vector,
	.vcpu_get_apicv_inhibit_reasons = avic_vcpu_get_apicv_inhibit_reasons,
+	.alloc_apic_backing_page = svm_alloc_apic_backing_page,
};
/*
......
@@ -694,6 +694,7 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm);
void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector);
void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa);
void sev_es_unmap_ghcb(struct vcpu_svm *svm);
struct page *snp_safe_alloc_page(struct kvm_vcpu *vcpu);
/* vmenter.S */
......
@@ -34,6 +34,7 @@
#include <asm/kvm_para.h>		/* kvm_handle_async_pf */
#include <asm/vdso.h>			/* fixup_vdso_exception() */
#include <asm/irq_stack.h>
#include <asm/sev.h> /* snp_dump_hva_rmpentry() */
#define CREATE_TRACE_POINTS
#include <asm/trace/exceptions.h>
@@ -547,6 +548,7 @@ show_fault_oops(struct pt_regs *regs, unsigned long error_code, unsigned long ad
		 !(error_code & X86_PF_PROT) ? "not-present page" :
		 (error_code & X86_PF_RSVD) ? "reserved bit violation" :
		 (error_code & X86_PF_PK) ? "protection keys violation" :
+		 (error_code & X86_PF_RMP) ? "RMP violation" :
		 "permissions violation");
	if (!(error_code & X86_PF_USER) && user_mode(regs)) {
@@ -579,6 +581,9 @@ show_fault_oops(struct pt_regs *regs, unsigned long error_code, unsigned long ad
	}
	dump_pagetable(address);
if (error_code & X86_PF_RMP)
snp_dump_hva_rmpentry(address);
}
static noinline void
......
@@ -42,38 +42,42 @@ bool force_dma_unencrypted(struct device *dev)
static void print_mem_encrypt_feature_info(void)
{
-	pr_info("Memory Encryption Features active:");
-
-	if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) {
-		pr_cont(" Intel TDX\n");
-		return;
-	}
-
-	pr_cont(" AMD");
-
-	/* Secure Memory Encryption */
-	if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) {
-		/*
-		 * SME is mutually exclusive with any of the SEV
-		 * features below.
-		 */
-		pr_cont(" SME\n");
-		return;
-	}
-
-	/* Secure Encrypted Virtualization */
-	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
-		pr_cont(" SEV");
-
-	/* Encrypted Register State */
-	if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
-		pr_cont(" SEV-ES");
-
-	/* Secure Nested Paging */
-	if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
-		pr_cont(" SEV-SNP");
-
-	pr_cont("\n");
+	pr_info("Memory Encryption Features active: ");
+
+	switch (cc_vendor) {
+	case CC_VENDOR_INTEL:
+		pr_cont("Intel TDX\n");
+		break;
+	case CC_VENDOR_AMD:
+		pr_cont("AMD");
+
+		/* Secure Memory Encryption */
+		if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)) {
+			/*
+			 * SME is mutually exclusive with any of the SEV
+			 * features below.
+			 */
+			pr_cont(" SME\n");
+			return;
+		}
+
+		/* Secure Encrypted Virtualization */
+		if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
+			pr_cont(" SEV");
+
+		/* Encrypted Register State */
+		if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
+			pr_cont(" SEV-ES");
+
+		/* Secure Nested Paging */
+		if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP))
+			pr_cont(" SEV-SNP");
+
+		pr_cont("\n");
+		break;
+	default:
+		pr_cont("Unknown\n");
+	}
}
/* Architecture __weak replacement functions */
......
@@ -97,7 +97,6 @@ static char sme_workarea[2 * PMD_SIZE] __section(".init.scratch");
static char sme_cmdline_arg[] __initdata = "mem_encrypt";
static char sme_cmdline_on[]  __initdata = "on";
-static char sme_cmdline_off[] __initdata = "off";
static void __init sme_clear_pgd(struct sme_populate_pgd_data *ppd)
{
@@ -305,7 +304,8 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
	 * instrumentation or checking boot_cpu_data in the cc_platform_has()
	 * function.
	 */
-	if (!sme_get_me_mask() || sev_status & MSR_AMD64_SEV_ENABLED)
+	if (!sme_get_me_mask() ||
+	    RIP_REL_REF(sev_status) & MSR_AMD64_SEV_ENABLED)
		return;
	/*
@@ -504,10 +504,9 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
void __init sme_enable(struct boot_params *bp)
{
-	const char *cmdline_ptr, *cmdline_arg, *cmdline_on, *cmdline_off;
+	const char *cmdline_ptr, *cmdline_arg, *cmdline_on;
	unsigned int eax, ebx, ecx, edx;
	unsigned long feature_mask;
-	bool active_by_default;
	unsigned long me_mask;
	char buffer[16];
	bool snp;
@@ -543,11 +542,11 @@ void __init sme_enable(struct boot_params *bp)
	me_mask = 1UL << (ebx & 0x3f);
	/* Check the SEV MSR whether SEV or SME is enabled */
-	sev_status = __rdmsr(MSR_AMD64_SEV);
-	feature_mask = (sev_status & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
+	RIP_REL_REF(sev_status) = msr = __rdmsr(MSR_AMD64_SEV);
+	feature_mask = (msr & MSR_AMD64_SEV_ENABLED) ? AMD_SEV_BIT : AMD_SME_BIT;
	/* The SEV-SNP CC blob should never be present unless SEV-SNP is enabled. */
-	if (snp && !(sev_status & MSR_AMD64_SEV_SNP_ENABLED))
+	if (snp && !(msr & MSR_AMD64_SEV_SNP_ENABLED))
		snp_abort();
	/* Check if memory encryption is enabled */
@@ -573,7 +572,6 @@
		return;
	} else {
		/* SEV state cannot be controlled by a command line option */
-		sme_me_mask = me_mask;
		goto out;
	}
@@ -588,31 +586,17 @@ void __init sme_enable(struct boot_params *bp)
	asm ("lea sme_cmdline_on(%%rip), %0"
	     : "=r" (cmdline_on)
	     : "p" (sme_cmdline_on));
-	asm ("lea sme_cmdline_off(%%rip), %0"
-	     : "=r" (cmdline_off)
-	     : "p" (sme_cmdline_off));
-
-	if (IS_ENABLED(CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT))
-		active_by_default = true;
-	else
-		active_by_default = false;
	cmdline_ptr = (const char *)((u64)bp->hdr.cmd_line_ptr |
				     ((u64)bp->ext_cmd_line_ptr << 32));
-	if (cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer)) < 0)
+	if (cmdline_find_option(cmdline_ptr, cmdline_arg, buffer, sizeof(buffer)) < 0 ||
+	    strncmp(buffer, cmdline_on, sizeof(buffer)))
		return;
-	if (!strncmp(buffer, cmdline_on, sizeof(buffer)))
-		sme_me_mask = me_mask;
-	else if (!strncmp(buffer, cmdline_off, sizeof(buffer)))
-		sme_me_mask = 0;
-	else
-		sme_me_mask = active_by_default ? me_mask : 0;
out:
-	if (sme_me_mask) {
-		physical_mask &= ~sme_me_mask;
-		cc_vendor = CC_VENDOR_AMD;
-		cc_set_mask(sme_me_mask);
-	}
+	RIP_REL_REF(sme_me_mask) = me_mask;
+	physical_mask &= ~me_mask;
+	cc_vendor = CC_VENDOR_AMD;
+	cc_set_mask(me_mask);
}
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_KVM_AMD_SEV) += sev.o
@@ -38,7 +38,7 @@ config CRYPTO_DEV_CCP_CRYPTO
config CRYPTO_DEV_SP_PSP
	bool "Platform Security Processor (PSP) device"
	default y
-	depends on CRYPTO_DEV_CCP_DD && X86_64
+	depends on CRYPTO_DEV_CCP_DD && X86_64 && AMD_IOMMU
	help
	  Provide support for the AMD Platform Security Processor (PSP).
	  The PSP is a dedicated processor that provides support for key
......
@@ -52,6 +52,11 @@ struct sev_device {
	u8 build;
	void *cmd_buf;
void *cmd_buf_backup;
bool cmd_buf_active;
bool cmd_buf_backup_active;
bool snp_initialized;
};
int sev_dev_init(struct psp_device *psp);
......
@@ -164,5 +164,4 @@ void amd_iommu_domain_set_pgtable(struct protection_domain *domain,
				  u64 *root, int mode);
struct dev_table_entry *get_dev_table(struct amd_iommu *iommu);
-extern bool amd_iommu_snp_en;
#endif
@@ -30,6 +30,7 @@
#include <asm/io_apic.h>
#include <asm/irq_remapping.h>
#include <asm/set_memory.h>
+#include <asm/sev.h>
#include <linux/crash_dump.h>
@@ -3221,6 +3222,36 @@ static bool __init detect_ivrs(void)
	return true;
}
static void iommu_snp_enable(void)
{
#ifdef CONFIG_KVM_AMD_SEV
if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
return;
/*
* The SNP support requires that IOMMU must be enabled, and is
* not configured in the passthrough mode.
*/
if (no_iommu || iommu_default_passthrough()) {
pr_err("SNP: IOMMU disabled or configured in passthrough mode, SNP cannot be supported.\n");
return;
}
amd_iommu_snp_en = check_feature(FEATURE_SNP);
if (!amd_iommu_snp_en) {
pr_err("SNP: IOMMU SNP feature not enabled, SNP cannot be supported.\n");
return;
}
pr_info("IOMMU SNP support enabled.\n");
/* Enforce IOMMU v1 pagetable when SNP is enabled. */
if (amd_iommu_pgtable != AMD_IOMMU_V1) {
pr_warn("Forcing use of AMD IOMMU v1 page table due to SNP.\n");
amd_iommu_pgtable = AMD_IOMMU_V1;
}
#endif
}
/****************************************************************************
 *
 * AMD IOMMU Initialization State Machine
@@ -3256,6 +3287,7 @@ static int __init state_next(void)
		break;
	case IOMMU_ENABLED:
		register_syscore_ops(&amd_iommu_syscore_ops);
+		iommu_snp_enable();
		ret = amd_iommu_init_pci();
		init_state = ret ? IOMMU_INIT_ERROR : IOMMU_PCI_INIT;
		break;
@@ -3767,40 +3799,85 @@ int amd_iommu_pc_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr, u8 fxn, u64
	return iommu_pc_get_set_reg(iommu, bank, cntr, fxn, value, true);
}
-#ifdef CONFIG_AMD_MEM_ENCRYPT
-int amd_iommu_snp_enable(void)
-{
-	/*
-	 * The SNP support requires that IOMMU must be enabled, and is
-	 * not configured in the passthrough mode.
-	 */
-	if (no_iommu || iommu_default_passthrough()) {
-		pr_err("SNP: IOMMU is disabled or configured in passthrough mode, SNP cannot be supported");
-		return -EINVAL;
-	}
-	/*
-	 * Prevent enabling SNP after IOMMU_ENABLED state because this process
-	 * affect how IOMMU driver sets up data structures and configures
-	 * IOMMU hardware.
-	 */
-	if (init_state > IOMMU_ENABLED) {
-		pr_err("SNP: Too late to enable SNP for IOMMU.\n");
-		return -EINVAL;
-	}
-	amd_iommu_snp_en = check_feature(FEATURE_SNP);
-	if (!amd_iommu_snp_en)
-		return -EINVAL;
-	pr_info("SNP enabled\n");
-	/* Enforce IOMMU v1 pagetable when SNP is enabled. */
-	if (amd_iommu_pgtable != AMD_IOMMU_V1) {
-		pr_warn("Force to using AMD IOMMU v1 page table due to SNP\n");
-		amd_iommu_pgtable = AMD_IOMMU_V1;
-	}
-	return 0;
-}
+#ifdef CONFIG_KVM_AMD_SEV
+static int iommu_page_make_shared(void *page)
+{
+	unsigned long paddr, pfn;
+
+	paddr = iommu_virt_to_phys(page);
+	/* Cbit maybe set in the paddr */
+	pfn = __sme_clr(paddr) >> PAGE_SHIFT;
+
+	if (!(pfn % PTRS_PER_PMD)) {
+		int ret, level;
+		bool assigned;
+
+		ret = snp_lookup_rmpentry(pfn, &assigned, &level);
+		if (ret) {
+			pr_warn("IOMMU PFN %lx RMP lookup failed, ret %d\n", pfn, ret);
+			return ret;
+		}
+
+		if (!assigned) {
+			pr_warn("IOMMU PFN %lx not assigned in RMP table\n", pfn);
+			return -EINVAL;
+		}
+
+		if (level > PG_LEVEL_4K) {
+			ret = psmash(pfn);
+			if (!ret)
+				goto done;
+
+			pr_warn("PSMASH failed for IOMMU PFN %lx huge RMP entry, ret: %d, level: %d\n",
+				pfn, ret, level);
+			return ret;
+		}
+	}
+
+done:
+	return rmp_make_shared(pfn, PG_LEVEL_4K);
+}
+
+static int iommu_make_shared(void *va, size_t size)
+{
+	void *page;
+	int ret;
+
+	if (!va)
+		return 0;
+
+	for (page = va; page < (va + size); page += PAGE_SIZE) {
+		ret = iommu_page_make_shared(page);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+int amd_iommu_snp_disable(void)
+{
+	struct amd_iommu *iommu;
+	int ret;
+
+	if (!amd_iommu_snp_en)
+		return 0;
+
+	for_each_iommu(iommu) {
+		ret = iommu_make_shared(iommu->evt_buf, EVT_BUFFER_SIZE);
+		if (ret)
+			return ret;
+
+		ret = iommu_make_shared(iommu->ppr_log, PPR_LOG_SIZE);
+		if (ret)
+			return ret;
+
+		ret = iommu_make_shared((void *)iommu->cmd_sem, PAGE_SIZE);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(amd_iommu_snp_disable);
#endif
@@ -85,8 +85,10 @@ int amd_iommu_pc_get_reg(struct amd_iommu *iommu, u8 bank, u8 cntr, u8 fxn,
			 u64 *value);
struct amd_iommu *get_amd_iommu(unsigned int idx);
-#ifdef CONFIG_AMD_MEM_ENCRYPT
-int amd_iommu_snp_enable(void);
+#ifdef CONFIG_KVM_AMD_SEV
+int amd_iommu_snp_disable(void);
+#else
+static inline int amd_iommu_snp_disable(void) { return 0; }
#endif
#endif /* _ASM_X86_AMD_IOMMU_H */
@@ -28,6 +28,9 @@ enum {
	SEV_PEK_CERT_IMPORT,
	SEV_GET_ID,	/* This command is deprecated, use SEV_GET_ID2 */
	SEV_GET_ID2,
SNP_PLATFORM_STATUS,
SNP_COMMIT,
SNP_SET_CONFIG,
	SEV_MAX,
};
@@ -69,6 +72,12 @@ typedef enum {
	SEV_RET_RESOURCE_LIMIT,
	SEV_RET_SECURE_DATA_INVALID,
	SEV_RET_INVALID_KEY = 0x27,
SEV_RET_INVALID_PAGE_SIZE,
SEV_RET_INVALID_PAGE_STATE,
SEV_RET_INVALID_MDATA_ENTRY,
SEV_RET_INVALID_PAGE_OWNER,
SEV_RET_INVALID_PAGE_AEAD_OFLOW,
SEV_RET_RMP_INIT_REQUIRED,
	SEV_RET_MAX,
} sev_ret_code;
@@ -155,6 +164,56 @@ struct sev_user_data_get_id2 {
	__u32 length;				/* In/Out */
} __packed;
/**
* struct sev_user_data_snp_status - SNP status
*
* @api_major: API major version
* @api_minor: API minor version
* @state: current platform state
* @is_rmp_initialized: whether RMP is initialized or not
* @rsvd: reserved
* @build_id: firmware build id for the API version
* @mask_chip_id: whether chip id is present in attestation reports or not
* @mask_chip_key: whether attestation reports are signed or not
* @vlek_en: VLEK (Version Loaded Endorsement Key) hashstick is loaded
* @rsvd1: reserved
* @guest_count: the number of guest currently managed by the firmware
* @current_tcb_version: current TCB version
* @reported_tcb_version: reported TCB version
*/
struct sev_user_data_snp_status {
__u8 api_major; /* Out */
__u8 api_minor; /* Out */
__u8 state; /* Out */
__u8 is_rmp_initialized:1; /* Out */
__u8 rsvd:7;
__u32 build_id; /* Out */
__u32 mask_chip_id:1; /* Out */
__u32 mask_chip_key:1; /* Out */
__u32 vlek_en:1; /* Out */
__u32 rsvd1:29;
__u32 guest_count; /* Out */
__u64 current_tcb_version; /* Out */
__u64 reported_tcb_version; /* Out */
} __packed;
/**
* struct sev_user_data_snp_config - system wide configuration value for SNP.
*
* @reported_tcb: the TCB version to report in the guest attestation report.
* @mask_chip_id: whether chip id is present in attestation reports or not
* @mask_chip_key: whether attestation reports are signed or not
* @rsvd: reserved
* @rsvd1: reserved
*/
struct sev_user_data_snp_config {
__u64 reported_tcb ; /* In */
__u32 mask_chip_id:1; /* In */
__u32 mask_chip_key:1; /* In */
__u32 rsvd:30; /* In */
__u8 rsvd1[52];
} __packed;
/**
 * struct sev_issue_cmd - SEV ioctl parameters
 *
......
@@ -442,6 +442,7 @@
#define X86_FEATURE_SEV			(19*32+ 1) /* AMD Secure Encrypted Virtualization */
#define X86_FEATURE_VM_PAGE_FLUSH	(19*32+ 2) /* "" VM Page Flush MSR is supported */
#define X86_FEATURE_SEV_ES		(19*32+ 3) /* AMD Secure Encrypted Virtualization - Encrypted State */
#define X86_FEATURE_SEV_SNP (19*32+ 4) /* AMD Secure Encrypted Virtualization - Secure Nested Paging */
#define X86_FEATURE_V_TSC_AUX		(19*32+ 9) /* "" Virtual TSC_AUX */
#define X86_FEATURE_SME_COHERENT	(19*32+10) /* "" AMD hardware-enforced cache coherency */
#define X86_FEATURE_DEBUG_SWAP		(19*32+14) /* AMD SEV-ES full debug state swap support */
......