Commit 0a23fb26 authored by Linus Torvalds

Merge tag 'x86_microcode_for_v6.7_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 microcode loading updates from Borislav Petkov:
 "Major microcode loader restructuring, cleanup and improvements by
  Thomas Gleixner:

   - Restructure the code needed for it and add a temporary initrd
     mapping on 32-bit so that the loader can access the microcode
     blobs. This in itself is a preparation for the next major
     improvement:

   - Do not load microcode on 32-bit before paging has been enabled.

     Handling this has caused an endless stream of headaches, issues,
     ugly code and unnecessary hacks in the past. And there really
     wasn't any sensible reason to do that in the first place. So switch
     the 32-bit loading to happen after paging has been enabled and turn
     the loader code "real purrty" again

   - Drop mixed microcode steppings loading on Intel - there, a single
     patch loaded on the whole system is sufficient

   - Rework late loading to track which CPUs have updated microcode
     successfully and which haven't, act accordingly

   - Move late microcode loading on Intel in NMI context in order to
     guarantee concurrent loading on all threads

   - Make the late loading CPU-hotplug-safe and have the offlined
     threads be woken up for the purpose of the update

   - Add support for a minimum revision which determines whether late
     microcode loading is safe on a machine and the microcode does not
     change software visible features which the machine cannot use
     anyway since feature detection has happened already. Roughly, the
     minimum revision is the smallest revision number which must be
     loaded currently on the system so that late updates can be allowed

   - Other nice cleanups, fixes, etc all over the place"

* tag 'x86_microcode_for_v6.7_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (40 commits)
  x86/microcode/intel: Add a minimum required revision for late loading
  x86/microcode: Prepare for minimal revision check
  x86/microcode: Handle "offline" CPUs correctly
  x86/apic: Provide apic_force_nmi_on_cpu()
  x86/microcode: Protect against instrumentation
  x86/microcode: Rendezvous and load in NMI
  x86/microcode: Replace the all-in-one rendezvous handler
  x86/microcode: Provide new control functions
  x86/microcode: Add per CPU control field
  x86/microcode: Add per CPU result state
  x86/microcode: Sanitize __wait_for_cpus()
  x86/microcode: Clarify the late load logic
  x86/microcode: Handle "nosmt" correctly
  x86/microcode: Clean up mc_cpu_down_prep()
  x86/microcode: Get rid of the schedule work indirection
  x86/microcode: Mop up early loading leftovers
  x86/microcode/amd: Use cached microcode for AP load
  x86/microcode/amd: Cache builtin/initrd microcode early
  x86/microcode/amd: Cache builtin microcode too
  x86/microcode/amd: Use correct per CPU ucode_cpu_info
  ...
parents 5c5e048b cf5ab01c
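
For illustration only, before the diff (this is not code from the series): a minimal user-space sketch of the rendezvous-then-load idea described in the merge message, where every participant first checks in, then applies a mock update and records its own result so the caller can tell exactly which "CPUs" succeeded. The kernel does this with an NMI rendezvous and per-CPU state rather than pthreads; all names below are invented for the sketch. Build with cc -pthread.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4

enum result { RES_OK, RES_ERROR };        /* loosely modelled on enum ucode_state */

static atomic_int arrived;                /* rendezvous counter     */
static enum result results[NTHREADS];     /* per-"CPU" result state */

/* Stand-in for the actual microcode write; always succeeds here. */
static int apply_update(int cpu)
{
        (void)cpu;
        return 0;
}

static void *cpu_thread(void *arg)
{
        int cpu = (int)(long)arg;

        /* Rendezvous: nobody proceeds until every participant has arrived.
         * The kernel bounds this wait and reports a timeout result instead
         * of spinning forever. */
        atomic_fetch_add(&arrived, 1);
        while (atomic_load(&arrived) < NTHREADS)
                ;

        /* Each participant applies (or observes) the update and records
         * its own outcome so the control CPU can evaluate all of them. */
        results[cpu] = apply_update(cpu) ? RES_ERROR : RES_OK;
        return NULL;
}

int main(void)
{
        pthread_t tids[NTHREADS];

        for (long i = 0; i < NTHREADS; i++)
                pthread_create(&tids[i], NULL, cpu_thread, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
                pthread_join(tids[i], NULL);

        for (int i = 0; i < NTHREADS; i++)
                printf("cpu %d: %s\n", i, results[i] == RES_OK ? "updated" : "failed");
        return 0;
}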
......@@ -3333,6 +3333,11 @@
mga= [HW,DRM]
microcode.force_minrev= [X86]
Format: <bool>
Enable or disable the microcode minimal revision
enforcement for the runtime microcode loader.
min_addr=nn[KMG] [KNL,BOOT,IA-64] All physical memory below this
physical address is ignored.
......
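
As a usage illustration (not part of the documentation patch itself), the enforcement would be switched on at boot like any other boolean kernel parameter, for example:

        microcode.force_minrev=1

The accepted spellings follow the usual <bool> parameter parsing; the Kconfig option added below only selects the default.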
......@@ -1313,16 +1313,41 @@ config MICROCODE
def_bool y
depends on CPU_SUP_AMD || CPU_SUP_INTEL
config MICROCODE_INITRD32
def_bool y
depends on MICROCODE && X86_32 && BLK_DEV_INITRD
config MICROCODE_LATE_LOADING
bool "Late microcode loading (DANGEROUS)"
default n
depends on MICROCODE
depends on MICROCODE && SMP
help
Loading microcode late, when the system is up and executing instructions
is a tricky business and should be avoided if possible. Just the sequence
of synchronizing all cores and SMT threads is one fragile dance which does
not guarantee that cores might not softlock after the loading. Therefore,
use this at your own risk. Late loading taints the kernel too.
use this at your own risk. Late loading taints the kernel unless the
microcode header indicates that it is safe for late loading via the
minimal revision check. This minimal revision check can be enforced on
the kernel command line with "microcode.minrev=Y".
config MICROCODE_LATE_FORCE_MINREV
bool "Enforce late microcode loading minimal revision check"
default n
depends on MICROCODE_LATE_LOADING
help
To prevent that users load microcode late which modifies already
in use features, newer microcode patches have a minimum revision field
in the microcode header, which tells the kernel which minimum
revision must be active in the CPU to safely load that new microcode
late into the running system. If disabled the check will not
be enforced but the kernel will be tainted when the minimal
revision check fails.
This minimal revision check can also be controlled via the
"microcode.minrev" parameter on the kernel command line.
If unsure say Y.
config X86_MSR
tristate "/dev/cpu/*/msr - Model-specific register support"
......
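
For illustration only, not the kernel's implementation: a standalone sketch of the policy the two options above describe. The helper and field names (late_load_allowed, patch_hdr, cur_rev) are invented. The idea is that with enforcement on, a patch that carries no minimum-revision metadata is refused outright, while with enforcement off it may still be loaded but the kernel gets tainted.

#include <stdbool.h>
#include <stdio.h>

/* Mirrors CONFIG_MICROCODE_LATE_FORCE_MINREV / microcode.force_minrev. */
static bool force_minrev = true;

struct patch_hdr {
        unsigned int rev;          /* revision carried by the new patch        */
        unsigned int min_req_ver;  /* 0 means the patch has no minrev metadata */
};

/* Is a late load of 'hdr' considered safe when the CPU currently runs
 * microcode revision 'cur_rev'?  Illustrative logic only. */
static bool late_load_allowed(const struct patch_hdr *hdr, unsigned int cur_rev)
{
        if (!hdr->min_req_ver)
                return !force_minrev;   /* no metadata: only OK without enforcement */
        return cur_rev >= hdr->min_req_ver;
}

int main(void)
{
        struct patch_hdr hdr = { .rev = 0x2c, .min_req_ver = 0x2b };

        printf("current 0x2b -> %s\n", late_load_allowed(&hdr, 0x2b) ? "allow" : "refuse");
        printf("current 0x29 -> %s\n", late_load_allowed(&hdr, 0x29) ? "allow" : "refuse");
        return 0;
}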
......@@ -276,7 +276,8 @@ struct apic {
u32 disable_esr : 1,
dest_mode_logical : 1,
x2apic_set_max_apicid : 1;
x2apic_set_max_apicid : 1,
nmi_to_offline_cpu : 1;
u32 (*calc_dest_apicid)(unsigned int cpu);
......@@ -531,6 +532,8 @@ extern u32 apic_flat_calc_apicid(unsigned int cpu);
extern void default_ioapic_phys_id_map(physid_mask_t *phys_map, physid_mask_t *retmap);
extern u32 default_cpu_present_to_apicid(int mps_cpu);
void apic_send_nmi_to_offline_cpu(unsigned int cpu);
#else /* CONFIG_X86_LOCAL_APIC */
static inline u32 read_apic_id(void) { return 0; }
......
......@@ -71,26 +71,12 @@ static inline void init_ia32_feat_ctl(struct cpuinfo_x86 *c) {}
extern __noendbr void cet_disable(void);
struct ucode_cpu_info;
struct cpu_signature;
int intel_cpu_collect_info(struct ucode_cpu_info *uci);
static inline bool intel_cpu_signatures_match(unsigned int s1, unsigned int p1,
unsigned int s2, unsigned int p2)
{
if (s1 != s2)
return false;
/* Processor flags are either both 0 ... */
if (!p1 && !p2)
return true;
/* ... or they intersect. */
return p1 & p2;
}
void intel_collect_cpu_info(struct cpu_signature *sig);
extern u64 x86_read_arch_cap_msr(void);
int intel_find_matching_signature(void *mc, unsigned int csig, int cpf);
bool intel_find_matching_signature(void *mc, struct cpu_signature *sig);
int intel_microcode_sanity_check(void *mc, bool print_err, int hdr_type);
extern struct cpumask cpus_stop_mask;
......
......@@ -23,6 +23,8 @@ static inline void load_ucode_ap(void) { }
static inline void microcode_bsp_resume(void) { }
#endif
extern unsigned long initrd_start_early;
#ifdef CONFIG_CPU_SUP_INTEL
/* Intel specific microcode defines. Public for IFS */
struct microcode_header_intel {
......@@ -36,7 +38,8 @@ struct microcode_header_intel {
unsigned int datasize;
unsigned int totalsize;
unsigned int metasize;
unsigned int reserved[2];
unsigned int min_req_ver;
unsigned int reserved;
};
struct microcode_intel {
......@@ -68,11 +71,19 @@ static inline u32 intel_get_microcode_revision(void)
return rev;
}
#endif /* !CONFIG_CPU_SUP_INTEL */
void show_ucode_info_early(void);
bool microcode_nmi_handler(void);
void microcode_offline_nmi_handler(void);
#else /* CONFIG_CPU_SUP_INTEL */
static inline void show_ucode_info_early(void) { }
#endif /* !CONFIG_CPU_SUP_INTEL */
#ifdef CONFIG_MICROCODE_LATE_LOADING
DECLARE_STATIC_KEY_FALSE(microcode_nmi_handler_enable);
static __always_inline bool microcode_nmi_handler_enabled(void)
{
return static_branch_unlikely(&microcode_nmi_handler_enable);
}
#else
static __always_inline bool microcode_nmi_handler_enabled(void) { return false; }
#endif
#endif /* _ASM_X86_MICROCODE_H */
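
For illustration only: microcode_nmi_handler_enabled() above exists so the NMI hot path can bail out almost for free when no late load is in progress. A user-space analogue using a plain atomic flag (the kernel uses a static key, which patches the branch itself rather than testing a variable) could look like this; every name here is made up.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the static key: normally false, flipped only around a
 * late microcode load. */
static atomic_bool nmi_handler_enable;

static bool fake_microcode_nmi_handler(void)
{
        puts("loader rendezvous work would run here");
        return true;                    /* the NMI was ours, don't pass it on */
}

/* What the NMI entry path does: check the flag first, handle normally
 * otherwise. */
static void fake_nmi(void)
{
        if (atomic_load_explicit(&nmi_handler_enable, memory_order_acquire) &&
            fake_microcode_nmi_handler())
                return;
        puts("normal NMI handling");
}

int main(void)
{
        fake_nmi();                                             /* common case   */
        atomic_store_explicit(&nmi_handler_enable, true, memory_order_release);
        fake_nmi();                                             /* during a load */
        return 0;
}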
......@@ -126,6 +126,7 @@ void clear_bss(void);
#ifdef __i386__
asmlinkage void __init __noreturn i386_start_kernel(void);
void __init mk_early_pgtbl_32(void);
#else
asmlinkage void __init __noreturn x86_64_start_kernel(char *real_mode);
......
......@@ -16,6 +16,7 @@ CFLAGS_REMOVE_kvmclock.o = -pg
CFLAGS_REMOVE_ftrace.o = -pg
CFLAGS_REMOVE_early_printk.o = -pg
CFLAGS_REMOVE_head64.o = -pg
CFLAGS_REMOVE_head32.o = -pg
CFLAGS_REMOVE_sev.o = -pg
CFLAGS_REMOVE_rethook.o = -pg
endif
......
......@@ -103,6 +103,7 @@ static struct apic apic_flat __ro_after_init = {
.send_IPI_allbutself = default_send_IPI_allbutself,
.send_IPI_all = default_send_IPI_all,
.send_IPI_self = default_send_IPI_self,
.nmi_to_offline_cpu = true,
.read = native_apic_mem_read,
.write = native_apic_mem_write,
......@@ -173,6 +174,7 @@ static struct apic apic_physflat __ro_after_init = {
.send_IPI_allbutself = default_send_IPI_allbutself,
.send_IPI_all = default_send_IPI_all,
.send_IPI_self = default_send_IPI_self,
.nmi_to_offline_cpu = true,
.read = native_apic_mem_read,
.write = native_apic_mem_write,
......
......@@ -97,6 +97,14 @@ void native_send_call_func_ipi(const struct cpumask *mask)
__apic_send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
}
void apic_send_nmi_to_offline_cpu(unsigned int cpu)
{
if (WARN_ON_ONCE(!apic->nmi_to_offline_cpu))
return;
if (WARN_ON_ONCE(!cpumask_test_cpu(cpu, &cpus_booted_once_mask)))
return;
apic->send_IPI(cpu, NMI_VECTOR);
}
#endif /* CONFIG_SMP */
static inline int __prepare_ICR2(unsigned int mask)
......
......@@ -251,6 +251,7 @@ static struct apic apic_x2apic_cluster __ro_after_init = {
.send_IPI_allbutself = x2apic_send_IPI_allbutself,
.send_IPI_all = x2apic_send_IPI_all,
.send_IPI_self = x2apic_send_IPI_self,
.nmi_to_offline_cpu = true,
.read = native_apic_msr_read,
.write = native_apic_msr_write,
......
......@@ -166,6 +166,7 @@ static struct apic apic_x2apic_phys __ro_after_init = {
.send_IPI_allbutself = x2apic_send_IPI_allbutself,
.send_IPI_all = x2apic_send_IPI_all,
.send_IPI_self = x2apic_send_IPI_self,
.nmi_to_offline_cpu = true,
.read = native_apic_msr_read,
.write = native_apic_msr_write,
......
......@@ -2164,8 +2164,6 @@ static inline void setup_getcpu(int cpu)
}
#ifdef CONFIG_X86_64
static inline void ucode_cpu_init(int cpu) { }
static inline void tss_setup_ist(struct tss_struct *tss)
{
/* Set up the per-CPU TSS IST stacks */
......@@ -2176,16 +2174,8 @@ static inline void tss_setup_ist(struct tss_struct *tss)
/* Only mapped when SEV-ES is active */
tss->x86_tss.ist[IST_INDEX_VC] = __this_cpu_ist_top_va(VC);
}
#else /* CONFIG_X86_64 */
static inline void ucode_cpu_init(int cpu)
{
show_ucode_info_early();
}
static inline void tss_setup_ist(struct tss_struct *tss) { }
#endif /* !CONFIG_X86_64 */
static inline void tss_setup_io_bitmap(struct tss_struct *tss)
......@@ -2241,8 +2231,6 @@ void cpu_init(void)
struct task_struct *cur = current;
int cpu = raw_smp_processor_id();
ucode_cpu_init(cpu);
#ifdef CONFIG_NUMA
if (this_cpu_read(numa_node) == 0 &&
early_cpu_to_node(cpu) != NUMA_NO_NODE)
......
......@@ -8,43 +8,37 @@
#include <asm/cpu.h>
#include <asm/microcode.h>
struct ucode_patch {
struct list_head plist;
void *data; /* Intel uses only this one */
unsigned int size;
u32 patch_id;
u16 equiv_cpu;
};
extern struct list_head microcode_cache;
struct device;
enum ucode_state {
UCODE_OK = 0,
UCODE_NEW,
UCODE_NEW_SAFE,
UCODE_UPDATED,
UCODE_NFOUND,
UCODE_ERROR,
UCODE_TIMEOUT,
UCODE_OFFLINE,
};
struct microcode_ops {
enum ucode_state (*request_microcode_fw)(int cpu, struct device *dev);
void (*microcode_fini_cpu)(int cpu);
/*
* The generic 'microcode_core' part guarantees that
* the callbacks below run on a target cpu when they
* are being called.
* The generic 'microcode_core' part guarantees that the callbacks
* below run on a target CPU when they are being called.
* See also the "Synchronization" section in microcode_core.c.
*/
enum ucode_state (*apply_microcode)(int cpu);
int (*collect_cpu_info)(int cpu, struct cpu_signature *csig);
void (*finalize_late_load)(int result);
unsigned int nmi_safe : 1,
use_nmi : 1;
};
extern struct ucode_cpu_info ucode_cpu_info[];
struct cpio_data find_microcode_in_initrd(const char *path, bool use_pa);
struct cpio_data find_microcode_in_initrd(const char *path);
#define MAX_UCODE_COUNT 128
......@@ -94,12 +88,12 @@ static inline unsigned int x86_cpuid_family(void)
return x86_family(eax);
}
extern bool initrd_gone;
extern bool dis_ucode_ldr;
extern bool force_minrev;
#ifdef CONFIG_CPU_SUP_AMD
void load_ucode_amd_bsp(unsigned int family);
void load_ucode_amd_ap(unsigned int family);
void load_ucode_amd_early(unsigned int cpuid_1_eax);
int save_microcode_in_initrd_amd(unsigned int family);
void reload_ucode_amd(unsigned int cpu);
struct microcode_ops *init_amd_microcode(void);
......@@ -107,7 +101,6 @@ void exit_amd_microcode(void);
#else /* CONFIG_CPU_SUP_AMD */
static inline void load_ucode_amd_bsp(unsigned int family) { }
static inline void load_ucode_amd_ap(unsigned int family) { }
static inline void load_ucode_amd_early(unsigned int family) { }
static inline int save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; }
static inline void reload_ucode_amd(unsigned int cpu) { }
static inline struct microcode_ops *init_amd_microcode(void) { return NULL; }
......@@ -117,13 +110,11 @@ static inline void exit_amd_microcode(void) { }
#ifdef CONFIG_CPU_SUP_INTEL
void load_ucode_intel_bsp(void);
void load_ucode_intel_ap(void);
int save_microcode_in_initrd_intel(void);
void reload_ucode_intel(void);
struct microcode_ops *init_intel_microcode(void);
#else /* CONFIG_CPU_SUP_INTEL */
static inline void load_ucode_intel_bsp(void) { }
static inline void load_ucode_intel_ap(void) { }
static inline int save_microcode_in_initrd_intel(void) { return -EINVAL; }
static inline void reload_ucode_intel(void) { }
static inline struct microcode_ops *init_intel_microcode(void) { return NULL; }
#endif /* !CONFIG_CPU_SUP_INTEL */
......
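
For illustration only: the nmi_safe/use_nmi bits above let the generic loader ask a vendor driver how it wants to be invoked, alongside the guarantee that the callbacks run on the target CPU. A toy version of that ops-table-with-capability-bits shape, with invented names, is:

#include <stdio.h>

struct loader_ops {
        int  (*apply)(int cpu);
        unsigned int nmi_safe : 1,     /* apply() may run in NMI context    */
                     use_nmi  : 1;     /* loader should rendezvous via NMI  */
};

static int fake_apply(int cpu)
{
        printf("applying on cpu %d\n", cpu);
        return 0;
}

static const struct loader_ops intel_like_ops = {
        .apply    = fake_apply,
        .nmi_safe = 1,
        .use_nmi  = 1,
};

int main(void)
{
        if (intel_like_ops.use_nmi)
                printf("would rendezvous in NMI before calling apply()\n");
        return intel_like_ops.apply(0);
}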
......@@ -19,6 +19,7 @@
#include <asm/apic.h>
#include <asm/io_apic.h>
#include <asm/bios_ebda.h>
#include <asm/microcode.h>
#include <asm/tlbflush.h>
#include <asm/bootparam_utils.h>
......@@ -29,11 +30,33 @@ static void __init i386_default_early_setup(void)
x86_init.mpparse.setup_ioapic_ids = setup_ioapic_ids_from_mpc;
}
#ifdef CONFIG_MICROCODE_INITRD32
unsigned long __initdata initrd_start_early;
static pte_t __initdata *initrd_pl2p_start, *initrd_pl2p_end;
static void zap_early_initrd_mapping(void)
{
pte_t *pl2p = initrd_pl2p_start;
for (; pl2p < initrd_pl2p_end; pl2p++) {
*pl2p = (pte_t){ .pte = 0 };
if (!IS_ENABLED(CONFIG_X86_PAE))
*(pl2p + ((PAGE_OFFSET >> PGDIR_SHIFT))) = (pte_t) {.pte = 0};
}
}
#else
static inline void zap_early_initrd_mapping(void) { }
#endif
asmlinkage __visible void __init __noreturn i386_start_kernel(void)
{
/* Make sure IDT is set up before any exception happens */
idt_setup_early_handler();
load_ucode_bsp();
zap_early_initrd_mapping();
cr4_init_shadow();
sanitize_boot_params(&boot_params);
......@@ -69,52 +92,83 @@ asmlinkage __visible void __init __noreturn i386_start_kernel(void)
* to the first kernel PMD. Note the upper half of each PMD or PTE are
* always zero at this stage.
*/
void __init mk_early_pgtbl_32(void);
void __init mk_early_pgtbl_32(void)
{
#ifdef __pa
#undef __pa
#endif
#define __pa(x) ((unsigned long)(x) - PAGE_OFFSET)
pte_t pte, *ptep;
int i;
unsigned long *ptr;
/* Enough space to fit pagetables for the low memory linear map */
const unsigned long limit = __pa(_end) +
(PAGE_TABLE_SIZE(LOWMEM_PAGES) << PAGE_SHIFT);
#ifdef CONFIG_X86_PAE
pmd_t pl2, *pl2p = (pmd_t *)__pa(initial_pg_pmd);
#define SET_PL2(pl2, val) { (pl2).pmd = (val); }
typedef pmd_t pl2_t;
#define pl2_base initial_pg_pmd
#define SET_PL2(val) { .pmd = (val), }
#else
pgd_t pl2, *pl2p = (pgd_t *)__pa(initial_page_table);
#define SET_PL2(pl2, val) { (pl2).pgd = (val); }
typedef pgd_t pl2_t;
#define pl2_base initial_page_table
#define SET_PL2(val) { .pgd = (val), }
#endif
ptep = (pte_t *)__pa(__brk_base);
pte.pte = PTE_IDENT_ATTR;
static __init __no_stack_protector pte_t init_map(pte_t pte, pte_t **ptep, pl2_t **pl2p,
const unsigned long limit)
{
while ((pte.pte & PTE_PFN_MASK) < limit) {
pl2_t pl2 = SET_PL2((unsigned long)*ptep | PDE_IDENT_ATTR);
int i;
SET_PL2(pl2, (unsigned long)ptep | PDE_IDENT_ATTR);
*pl2p = pl2;
#ifndef CONFIG_X86_PAE
**pl2p = pl2;
if (!IS_ENABLED(CONFIG_X86_PAE)) {
/* Kernel PDE entry */
*(pl2p + ((PAGE_OFFSET >> PGDIR_SHIFT))) = pl2;
#endif
*(*pl2p + ((PAGE_OFFSET >> PGDIR_SHIFT))) = pl2;
}
for (i = 0; i < PTRS_PER_PTE; i++) {
*ptep = pte;
**ptep = pte;
pte.pte += PAGE_SIZE;
ptep++;
(*ptep)++;
}
pl2p++;
(*pl2p)++;
}
return pte;
}
void __init __no_stack_protector mk_early_pgtbl_32(void)
{
/* Enough space to fit pagetables for the low memory linear map */
unsigned long limit = __pa_nodebug(_end) + (PAGE_TABLE_SIZE(LOWMEM_PAGES) << PAGE_SHIFT);
pte_t pte, *ptep = (pte_t *)__pa_nodebug(__brk_base);
struct boot_params __maybe_unused *params;
pl2_t *pl2p = (pl2_t *)__pa_nodebug(pl2_base);
unsigned long *ptr;
pte.pte = PTE_IDENT_ATTR;
pte = init_map(pte, &ptep, &pl2p, limit);
ptr = (unsigned long *)__pa(&max_pfn_mapped);
ptr = (unsigned long *)__pa_nodebug(&max_pfn_mapped);
/* Can't use pte_pfn() since it's a call with CONFIG_PARAVIRT */
*ptr = (pte.pte & PTE_PFN_MASK) >> PAGE_SHIFT;
ptr = (unsigned long *)__pa(&_brk_end);
ptr = (unsigned long *)__pa_nodebug(&_brk_end);
*ptr = (unsigned long)ptep + PAGE_OFFSET;
}
#ifdef CONFIG_MICROCODE_INITRD32
/* Running on a hypervisor? */
if (native_cpuid_ecx(1) & BIT(31))
return;
params = (struct boot_params *)__pa_nodebug(&boot_params);
if (!params->hdr.ramdisk_size || !params->hdr.ramdisk_image)
return;
/* Save the virtual start address */
ptr = (unsigned long *)__pa_nodebug(&initrd_start_early);
*ptr = (pte.pte & PTE_PFN_MASK) + PAGE_OFFSET;
*ptr += ((unsigned long)params->hdr.ramdisk_image) & ~PAGE_MASK;
/* Save PLP2 for cleanup */
ptr = (unsigned long *)__pa_nodebug(&initrd_pl2p_start);
*ptr = (unsigned long)pl2p + PAGE_OFFSET;
limit = (unsigned long)params->hdr.ramdisk_image;
pte.pte = PTE_IDENT_ATTR | PFN_ALIGN(limit);
limit = (unsigned long)params->hdr.ramdisk_image + params->hdr.ramdisk_size;
init_map(pte, &ptep, &pl2p, limit);
ptr = (unsigned long *)__pa_nodebug(&initrd_pl2p_end);
*ptr = (unsigned long)pl2p + PAGE_OFFSET;
#endif
}
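
For illustration only: the early mapping above is sized with PAGE_TABLE_SIZE(LOWMEM_PAGES) plus, now, enough entries to also cover the initrd temporarily. A standalone back-of-the-envelope version of that sizing for the non-PAE 32-bit case (4 KiB pages, 1024 PTEs per page-table page), with invented example sizes:

#include <stdio.h>

#define PAGE_SHIFT      12
#define PAGE_SIZE       (1UL << PAGE_SHIFT)
#define PTRS_PER_PTE    1024UL                  /* non-PAE 32-bit */

/* Number of page-table pages needed to map 'bytes' of memory. */
static unsigned long pt_pages_needed(unsigned long bytes)
{
        unsigned long pages = (bytes + PAGE_SIZE - 1) / PAGE_SIZE;

        return (pages + PTRS_PER_PTE - 1) / PTRS_PER_PTE;
}

int main(void)
{
        /* e.g. the lowmem linear map (~896 MiB) plus a 64 MiB initrd */
        printf("lowmem: %lu page-table pages\n", pt_pages_needed(896UL << 20));
        printf("initrd: %lu page-table pages\n", pt_pages_needed(64UL << 20));
        return 0;
}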
......@@ -118,11 +118,6 @@ SYM_CODE_START(startup_32)
movl %eax, pa(olpc_ofw_pgd)
#endif
#ifdef CONFIG_MICROCODE
/* Early load ucode on BSP. */
call load_ucode_bsp
#endif
/* Create early pagetables. */
call mk_early_pgtbl_32
......@@ -157,11 +152,6 @@ SYM_FUNC_START(startup_32_smp)
movl %eax,%ss
leal -__PAGE_OFFSET(%ecx),%esp
#ifdef CONFIG_MICROCODE
/* Early load ucode on AP. */
call load_ucode_ap
#endif
.Ldefault_entry:
movl $(CR0_STATE & ~X86_CR0_PG),%eax
movl %eax,%cr0
......
......@@ -33,6 +33,7 @@
#include <asm/reboot.h>
#include <asm/cache.h>
#include <asm/nospec-branch.h>
#include <asm/microcode.h>
#include <asm/sev.h>
#define CREATE_TRACE_POINTS
......@@ -343,6 +344,9 @@ static noinstr void default_do_nmi(struct pt_regs *regs)
instrumentation_begin();
if (microcode_nmi_handler_enabled() && microcode_nmi_handler())
goto out;
handled = nmi_handle(NMI_LOCAL, regs);
__this_cpu_add(nmi_stats.normal, handled);
if (handled) {
......@@ -498,8 +502,11 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
if (IS_ENABLED(CONFIG_NMI_CHECK_CPU))
raw_atomic_long_inc(&nsp->idt_calls);
if (IS_ENABLED(CONFIG_SMP) && arch_cpu_is_offline(smp_processor_id()))
if (IS_ENABLED(CONFIG_SMP) && arch_cpu_is_offline(smp_processor_id())) {
if (microcode_nmi_handler_enabled())
microcode_offline_nmi_handler();
return;
}
if (this_cpu_read(nmi_state) != NMI_NOT_RUNNING) {
this_cpu_write(nmi_state, NMI_LATCHED);
......
......@@ -272,12 +272,9 @@ static void notrace start_secondary(void *unused)
cpu_init_exception_handling();
/*
* 32-bit systems load the microcode from the ASM startup code for
* historical reasons.
*
* On 64-bit systems load it before reaching the AP alive
* synchronization point below so it is not part of the full per
* CPU serialized bringup part when "parallel" bringup is enabled.
* Load the microcode before reaching the AP alive synchronization
* point below so it is not part of the full per CPU serialized
* bringup part when "parallel" bringup is enabled.
*
* That's even safe when hyperthreading is enabled in the CPU as
* the core code starts the primary threads first and leaves the
......@@ -290,7 +287,6 @@ static void notrace start_secondary(void *unused)
* CPUID, MSRs etc. must be strictly serialized to maintain
* software state correctness.
*/
if (IS_ENABLED(CONFIG_X86_64))
load_ucode_ap();
/*
......
......@@ -349,7 +349,7 @@ static int scan_chunks_sanity_check(struct device *dev)
static int image_sanity_check(struct device *dev, const struct microcode_header_intel *data)
{
struct ucode_cpu_info uci;
struct cpu_signature sig;
/* Provide a specific error message when loading an older/unsupported image */
if (data->hdrver != MC_HEADER_TYPE_IFS) {
......@@ -362,11 +362,9 @@ static int image_sanity_check(struct device *dev, const struct microcode_header_
return -EINVAL;
}
intel_cpu_collect_info(&uci);
intel_collect_cpu_info(&sig);
if (!intel_find_matching_signature((void *)data,
uci.cpu_sig.sig,
uci.cpu_sig.pf)) {
if (!intel_find_matching_signature((void *)data, &sig)) {
dev_err(dev, "cpu signature, processor flags not matching\n");
return -EINVAL;
}
......