Commit 8443516d authored by Linus Torvalds


Merge tag 'platform-drivers-x86-v5.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86

Pull x86 platform driver updates from Hans de Goede:
 "This includes some small changes to kernel/stop_machine.c and arch/x86
  which are deps of the new Intel IFS support.

  Highlights:

   - New drivers:
       - Intel "In Field Scan" (IFS) support
       - Winmate FM07/FM07P buttons
       - Mellanox SN2201 support

   -  AMD PMC driver enhancements

   -  Lots of various other small fixes and hardware-id additions"

* tag 'platform-drivers-x86-v5.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86: (54 commits)
  platform/x86/intel/ifs: Add CPU_SUP_INTEL dependency
  platform/x86: intel_cht_int33fe: Set driver data
  platform/x86: intel-hid: fix _DSM function index handling
  platform/x86: toshiba_acpi: use kobj_to_dev()
  platform/x86: samsung-laptop: use kobj_to_dev()
  platform/x86: gigabyte-wmi: Add support for Z490 AORUS ELITE AC and X570 AORUS ELITE WIFI
  tools/power/x86/intel-speed-select: Fix warning for perf_cap.cpu
  tools/power/x86/intel-speed-select: Display error on turbo mode disabled
  Documentation: In-Field Scan
  platform/x86/intel/ifs: add ABI documentation for IFS
  trace: platform/x86/intel/ifs: Add trace point to track Intel IFS operations
  platform/x86/intel/ifs: Add IFS sysfs interface
  platform/x86/intel/ifs: Add scan test support
  platform/x86/intel/ifs: Authenticate and copy to secured memory
  platform/x86/intel/ifs: Check IFS Image sanity
  platform/x86/intel/ifs: Read IFS firmware image
  platform/x86/intel/ifs: Add stub driver for In-Field Scan
  stop_machine: Add stop_core_cpuslocked() for per-core operations
  x86/msr-index: Define INTEGRITY_CAPABILITIES MSR
  x86/microcode/intel: Expose collect_cpu_info_early() for IFS
  ...
parents cfe1cb01 badb81a5
......@@ -467,3 +467,39 @@ Description: These files provide the maximum power required for line card
feeding and line card configuration Id.
The files are read only.
What: /sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/phy_reset
Date: May 2022
KernelVersion: 5.19
Contact: Vadim Pasternak <vadimp@mellanox.com>
Description: This file allows resetting the PHY 88E1548 when the attribute is
set to 0 due to some abnormal PHY behavior.
Expected behavior:
When phy_reset is written 1, all PHY 88E1548 devices are released
from the reset state; when written 0, they are held in the reset state.
The files are read/write.
What: /sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/mac_reset
Date: May 2022
KernelVersion: 5.19
Contact: Vadim Pasternak <vadimp@mellanox.com>
Description: This file allows resetting the ASIC MT52132 when the attribute is
set to 0 due to some abnormal ASIC behavior.
Expected behavior:
When mac_reset is written 1, the ASIC MT52132 is released
from the reset state; when written 0, it is held in the reset state.
The files are read/write.
What: /sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/qsfp_pwr_good
Date: May 2022
KernelVersion: 5.19
Contact: Vadim Pasternak <vadimp@mellanox.com>
Description: This file shows the QSFP ports power status. The value is set to 0
when at least one QSFP port is plugged in. The value is set to 1 when
no QSFP ports are plugged in.
The possible values are:
0 - Power good, 1 - Not power good.
The files are read only.
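
As a usage illustration for the reset attributes documented above, the following is a minimal userspace sketch, not part of this merge. The sysfs path comes from the "What:" entry for mac_reset (the hwmon* component is resolved at runtime with glob()), and the 0-then-1 write sequence follows the "Expected behavior" text: 0 holds the ASIC in reset, 1 releases it.

/*
 * Minimal sketch, not part of this merge: assert and release the ASIC
 * reset via the mac_reset attribute described above (0 = hold in reset,
 * 1 = release from reset).
 */
#include <glob.h>
#include <stdio.h>

int main(void)
{
	const char *pat =
		"/sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/mac_reset";
	glob_t g;
	FILE *f;

	if (glob(pat, 0, NULL, &g) || !g.gl_pathc)
		return 1;

	f = fopen(g.gl_pathv[0], "w");
	if (!f)
		return 1;
	fputs("0\n", f);	/* hold ASIC MT52132 in reset */
	fflush(f);
	fputs("1\n", f);	/* release it from reset */
	fclose(f);
	globfree(&g);
	return 0;
}
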
What: /sys/devices/virtual/misc/intel_ifs_<N>/run_test
Date: April 21 2022
KernelVersion: 5.19
Contact: "Jithu Joseph" <jithu.joseph@intel.com>
Description: Write <cpu#> to trigger IFS test for one online core.
Note that the test is per core. The cpu# can be
for any thread on the core. Running on one thread
completes the test for the core containing that thread.
Example: to test the core containing cpu5: echo 5 >
/sys/devices/virtual/misc/intel_ifs_<N>/run_test
What: /sys/devices/virtual/misc/intel_ifs_<N>/status
Date: April 21 2022
KernelVersion: 5.19
Contact: "Jithu Joseph" <jithu.joseph@intel.com>
Description: The status of the last test. It can be one of "pass", "fail"
or "untested".
What: /sys/devices/virtual/misc/intel_ifs_<N>/details
Date: April 21 2022
KernelVersion: 5.19
Contact: "Jithu Joseph" <jithu.joseph@intel.com>
Description: Additional information regarding the last test. The details file reports
the hex value of the SCAN_STATUS MSR. Note that the error_code field
may contain driver defined software code not defined in the Intel SDM.
What: /sys/devices/virtual/misc/intel_ifs_<N>/image_version
Date: April 21 2022
KernelVersion: 5.19
Contact: "Jithu Joseph" <jithu.joseph@intel.com>
Description: Version (hexadecimal) of the loaded IFS binary image. If no scan image
is loaded, reports "none".
What: /sys/devices/virtual/misc/intel_ifs_<N>/reload
Date: April 21 2022
KernelVersion: 5.19
Contact: "Jithu Joseph" <jithu.joseph@intel.com>
Description: Write "1" (or "y" or "Y") to reload the IFS image from
/lib/firmware/intel/ifs/ff-mm-ss.scan.
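
As a usage illustration for the run_test, status, details and reload attributes documented above, here is a minimal userspace sketch. It is not part of this merge: the sysfs paths come from the "What:" entries, the intel_ifs_0 name corresponds to the first device instance registered by the driver, and the helper functions are hypothetical with only minimal error handling.

/*
 * Minimal userspace sketch, not part of this merge: exercise the IFS
 * sysfs ABI documented above for instance intel_ifs_0.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define IFS_DIR "/sys/devices/virtual/misc/intel_ifs_0"

static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

static void read_str(const char *path, char *buf, int len)
{
	FILE *f = fopen(path, "r");

	buf[0] = '\0';
	if (f) {
		if (!fgets(buf, len, f))
			buf[0] = '\0';
		fclose(f);
	}
	buf[strcspn(buf, "\n")] = '\0';
}

int main(int argc, char **argv)
{
	const char *cpu = argc > 1 ? argv[1] : "5";
	char status[16], details[32];

	/* Optional: reload /lib/firmware/intel/ifs/ff-mm-ss.scan first */
	write_str(IFS_DIR "/reload", "1");

	/* Test the core containing this CPU; any sibling thread works */
	if (write_str(IFS_DIR "/run_test", cpu))
		return EXIT_FAILURE;

	read_str(IFS_DIR "/status", status, sizeof(status));	/* pass/fail/untested */
	read_str(IFS_DIR "/details", details, sizeof(details));	/* SCAN_STATUS MSR value */
	printf("cpu %s: status=%s details=%s\n", cpu, status, details);
	return EXIT_SUCCESS;
}

The write to run_test triggers the core test, so status and details can be read back immediately afterwards.
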
.. SPDX-License-Identifier: GPL-2.0
.. kernel-doc:: drivers/platform/x86/intel/ifs/ifs.h
......@@ -36,6 +36,7 @@ x86-specific Documentation
usb-legacy-support
i386/index
x86_64/index
ifs
sva
sgx
features
......
......@@ -9863,6 +9863,14 @@ B: https://bugzilla.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux.git
F: drivers/idle/intel_idle.c
INTEL IN FIELD SCAN (IFS) DEVICE
M: Jithu Joseph <jithu.joseph@intel.com>
R: Ashok Raj <ashok.raj@intel.com>
R: Tony Luck <tony.luck@intel.com>
S: Maintained
F: drivers/platform/x86/intel/ifs
F: include/trace/events/intel_ifs.h
INTEL INTEGRATED SENSOR HUB DRIVER
M: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
M: Jiri Kosina <jikos@kernel.org>
......
......@@ -76,4 +76,22 @@ static inline void init_ia32_feat_ctl(struct cpuinfo_x86 *c) {}
extern __noendbr void cet_disable(void);
struct ucode_cpu_info;
int intel_cpu_collect_info(struct ucode_cpu_info *uci);
static inline bool intel_cpu_signatures_match(unsigned int s1, unsigned int p1,
unsigned int s2, unsigned int p2)
{
if (s1 != s2)
return false;
/* Processor flags are either both 0 ... */
if (!p1 && !p2)
return true;
/* ... or they intersect. */
return p1 & p2;
}
#endif /* _ASM_X86_CPU_H */
......@@ -76,6 +76,8 @@
/* Abbreviated from Intel SDM name IA32_CORE_CAPABILITIES */
#define MSR_IA32_CORE_CAPS 0x000000cf
#define MSR_IA32_CORE_CAPS_INTEGRITY_CAPS_BIT 2
#define MSR_IA32_CORE_CAPS_INTEGRITY_CAPS BIT(MSR_IA32_CORE_CAPS_INTEGRITY_CAPS_BIT)
#define MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT_BIT 5
#define MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT BIT(MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT_BIT)
......@@ -154,6 +156,11 @@
#define MSR_IA32_POWER_CTL 0x000001fc
#define MSR_IA32_POWER_CTL_BIT_EE 19
/* Abbreviated from Intel SDM name IA32_INTEGRITY_CAPABILITIES */
#define MSR_INTEGRITY_CAPS 0x000002d9
#define MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT 4
#define MSR_INTEGRITY_CAPS_PERIODIC_BIST BIT(MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT)
#define MSR_LBR_NHM_FROM 0x00000680
#define MSR_LBR_NHM_TO 0x000006c0
#define MSR_LBR_CORE_FROM 0x00000040
......
......@@ -31,9 +31,22 @@ enum hsmp_message_ids {
HSMP_GET_CCLK_THROTTLE_LIMIT, /* 10h Get CCLK frequency limit in socket */
HSMP_GET_C0_PERCENT, /* 11h Get average C0 residency in socket */
HSMP_SET_NBIO_DPM_LEVEL, /* 12h Set max/min LCLK DPM Level for a given NBIO */
/* 13h Reserved */
HSMP_GET_DDR_BANDWIDTH = 0x14, /* 14h Get theoretical maximum and current DDR Bandwidth */
HSMP_GET_TEMP_MONITOR, /* 15h Get per-DIMM temperature and refresh rates */
HSMP_GET_NBIO_DPM_LEVEL, /* 13h Get LCLK DPM level min and max for a given NBIO */
HSMP_GET_DDR_BANDWIDTH, /* 14h Get theoretical maximum and current DDR Bandwidth */
HSMP_GET_TEMP_MONITOR, /* 15h Get socket temperature */
HSMP_GET_DIMM_TEMP_RANGE, /* 16h Get per-DIMM temperature range and refresh rate */
HSMP_GET_DIMM_POWER, /* 17h Get per-DIMM power consumption */
HSMP_GET_DIMM_THERMAL, /* 18h Get per-DIMM thermal sensors */
HSMP_GET_SOCKET_FREQ_LIMIT, /* 19h Get current active frequency per socket */
HSMP_GET_CCLK_CORE_LIMIT, /* 1Ah Get CCLK frequency limit per core */
HSMP_GET_RAILS_SVI, /* 1Bh Get SVI-based Telemetry for all rails */
HSMP_GET_SOCKET_FMAX_FMIN, /* 1Ch Get Fmax and Fmin per socket */
HSMP_GET_IOLINK_BANDWITH, /* 1Dh Get current bandwidth on IO Link */
HSMP_GET_XGMI_BANDWITH, /* 1Eh Get current bandwidth on xGMI Link */
HSMP_SET_GMI3_WIDTH, /* 1Fh Set max and min GMI3 Link width */
HSMP_SET_PCI_RATE, /* 20h Control link rate on PCIe devices */
HSMP_SET_POWER_MODE, /* 21h Select power efficiency profile policy */
HSMP_SET_PSTATE_MAX_MIN, /* 22h Set the max and min DF P-State */
HSMP_MSG_ID_MAX,
};
......@@ -175,8 +188,12 @@ static const struct hsmp_msg_desc hsmp_msg_desc_table[] = {
*/
{1, 0, HSMP_SET},
/* RESERVED message */
{0, 0, HSMP_RSVD},
/*
* HSMP_GET_NBIO_DPM_LEVEL, num_args = 1, response_sz = 1
* input: args[0] = nbioid[23:16]
* output: args[0] = max dpm level[15:8] + min dpm level[7:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_GET_DDR_BANDWIDTH, num_args = 0, response_sz = 1
......@@ -191,6 +208,93 @@ static const struct hsmp_msg_desc hsmp_msg_desc_table[] = {
* [7:5] fractional part
*/
{0, 1, HSMP_GET},
/*
* HSMP_GET_DIMM_TEMP_RANGE, num_args = 1, response_sz = 1
* input: args[0] = DIMM address[7:0]
* output: args[0] = refresh rate[3] + temperature range[2:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_GET_DIMM_POWER, num_args = 1, response_sz = 1
* input: args[0] = DIMM address[7:0]
* output: args[0] = DIMM power in mW[31:17] + update rate in ms[16:8] +
* DIMM address[7:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_GET_DIMM_THERMAL, num_args = 1, response_sz = 1
* input: args[0] = DIMM address[7:0]
* output: args[0] = temperature in degrees Celsius[31:21] + update rate in ms[16:8] +
* DIMM address[7:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_GET_SOCKET_FREQ_LIMIT, num_args = 0, response_sz = 1
* output: args[0] = frequency in MHz[31:16] + frequency source[15:0]
*/
{0, 1, HSMP_GET},
/*
* HSMP_GET_CCLK_CORE_LIMIT, num_args = 1, response_sz = 1
* input: args[0] = apic id [31:0]
* output: args[0] = frequency in MHz[31:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_GET_RAILS_SVI, num_args = 0, response_sz = 1
* output: args[0] = power in mW[31:0]
*/
{0, 1, HSMP_GET},
/*
* HSMP_GET_SOCKET_FMAX_FMIN, num_args = 0, response_sz = 1
* output: args[0] = fmax in MHz[31:16] + fmin in MHz[15:0]
*/
{0, 1, HSMP_GET},
/*
* HSMP_GET_IOLINK_BANDWITH, num_args = 1, response_sz = 1
* input: args[0] = link id[15:8] + bw type[2:0]
* output: args[0] = io bandwidth in Mbps[31:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_GET_XGMI_BANDWITH, num_args = 1, response_sz = 1
* input: args[0] = link id[15:8] + bw type[2:0]
* output: args[0] = xgmi bandwidth in Mbps[31:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_SET_GMI3_WIDTH, num_args = 1, response_sz = 0
* input: args[0] = min link width[15:8] + max link width[7:0]
*/
{1, 0, HSMP_SET},
/*
* HSMP_SET_PCI_RATE, num_args = 1, response_sz = 1
* input: args[0] = link rate control value
* output: args[0] = previous link rate control value
*/
{1, 1, HSMP_SET},
/*
* HSMP_SET_POWER_MODE, num_args = 1, response_sz = 0
* input: args[0] = power efficiency mode[2:0]
*/
{1, 0, HSMP_SET},
/*
* HSMP_SET_PSTATE_MAX_MIN, num_args = 1, response_sz = 0
* input: args[0] = min df pstate[15:8] + max df pstate[7:0]
*/
{1, 0, HSMP_SET},
};
/* Reset to default packing */
......
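
The comments in hsmp_msg_desc_table above document how each message packs its arguments and response into 32-bit words. As a concrete illustration, here is a hypothetical helper, not part of this merge, that unpacks the HSMP_GET_DIMM_POWER response according to the layout given in its comment (power in mW in bits 31:17, update rate in ms in bits 16:8, DIMM address in bits 7:0); a real response word would come back from the HSMP mailbox.

/*
 * Hypothetical helper, not part of this merge: decode the
 * HSMP_GET_DIMM_POWER response word using the bit layout documented in
 * hsmp_msg_desc_table (mW[31:17] + update rate ms[16:8] + DIMM addr[7:0]).
 */
#include <stdint.h>
#include <stdio.h>

struct dimm_power {
	unsigned int power_mw;	/* bits 31:17, 15 bits */
	unsigned int update_ms;	/* bits 16:8, 9 bits */
	unsigned int dimm_addr;	/* bits 7:0 */
};

static struct dimm_power decode_dimm_power(uint32_t resp)
{
	struct dimm_power p = {
		.power_mw  = (resp >> 17) & 0x7fff,
		.update_ms = (resp >> 8) & 0x1ff,
		.dimm_addr = resp & 0xff,
	};

	return p;
}

int main(void)
{
	/* Example response word; a real one comes from the HSMP mailbox. */
	struct dimm_power p = decode_dimm_power(0x002a0b50);

	printf("DIMM 0x%02x: %u mW, updated every %u ms\n",
	       p.dimm_addr, p.power_mw, p.update_ms);
	return 0;
}
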
......@@ -184,6 +184,38 @@ static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
return false;
}
int intel_cpu_collect_info(struct ucode_cpu_info *uci)
{
unsigned int val[2];
unsigned int family, model;
struct cpu_signature csig = { 0 };
unsigned int eax, ebx, ecx, edx;
memset(uci, 0, sizeof(*uci));
eax = 0x00000001;
ecx = 0;
native_cpuid(&eax, &ebx, &ecx, &edx);
csig.sig = eax;
family = x86_family(eax);
model = x86_model(eax);
if (model >= 5 || family > 6) {
/* get processor flags from MSR 0x17 */
native_rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
csig.pf = 1 << ((val[1] >> 18) & 7);
}
csig.rev = intel_get_microcode_revision();
uci->cpu_sig = csig;
uci->valid = 1;
return 0;
}
EXPORT_SYMBOL_GPL(intel_cpu_collect_info);
static void early_init_intel(struct cpuinfo_x86 *c)
{
u64 misc_enable;
......
......@@ -45,20 +45,6 @@ static struct microcode_intel *intel_ucode_patch;
/* last level cache size per core */
static int llc_size_per_core;
static inline bool cpu_signatures_match(unsigned int s1, unsigned int p1,
unsigned int s2, unsigned int p2)
{
if (s1 != s2)
return false;
/* Processor flags are either both 0 ... */
if (!p1 && !p2)
return true;
/* ... or they intersect. */
return p1 & p2;
}
/*
* Returns 1 if update has been found, 0 otherwise.
*/
......@@ -69,7 +55,7 @@ static int find_matching_signature(void *mc, unsigned int csig, int cpf)
struct extended_signature *ext_sig;
int i;
if (cpu_signatures_match(csig, cpf, mc_hdr->sig, mc_hdr->pf))
if (intel_cpu_signatures_match(csig, cpf, mc_hdr->sig, mc_hdr->pf))
return 1;
/* Look for ext. headers: */
......@@ -80,7 +66,7 @@ static int find_matching_signature(void *mc, unsigned int csig, int cpf)
ext_sig = (void *)ext_hdr + EXT_HEADER_SIZE;
for (i = 0; i < ext_hdr->count; i++) {
if (cpu_signatures_match(csig, cpf, ext_sig->sig, ext_sig->pf))
if (intel_cpu_signatures_match(csig, cpf, ext_sig->sig, ext_sig->pf))
return 1;
ext_sig++;
}
......@@ -342,37 +328,6 @@ scan_microcode(void *data, size_t size, struct ucode_cpu_info *uci, bool save)
return patch;
}
static int collect_cpu_info_early(struct ucode_cpu_info *uci)
{
unsigned int val[2];
unsigned int family, model;
struct cpu_signature csig = { 0 };
unsigned int eax, ebx, ecx, edx;
memset(uci, 0, sizeof(*uci));
eax = 0x00000001;
ecx = 0;
native_cpuid(&eax, &ebx, &ecx, &edx);
csig.sig = eax;
family = x86_family(eax);
model = x86_model(eax);
if ((model >= 5) || (family > 6)) {
/* get processor flags from MSR 0x17 */
native_rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
csig.pf = 1 << ((val[1] >> 18) & 7);
}
csig.rev = intel_get_microcode_revision();
uci->cpu_sig = csig;
uci->valid = 1;
return 0;
}
static void show_saved_mc(void)
{
#ifdef DEBUG
......@@ -386,7 +341,7 @@ static void show_saved_mc(void)
return;
}
collect_cpu_info_early(&uci);
intel_cpu_collect_info(&uci);
sig = uci.cpu_sig.sig;
pf = uci.cpu_sig.pf;
......@@ -502,7 +457,7 @@ void show_ucode_info_early(void)
struct ucode_cpu_info uci;
if (delay_ucode_info) {
collect_cpu_info_early(&uci);
intel_cpu_collect_info(&uci);
print_ucode_info(&uci, current_mc_date);
delay_ucode_info = 0;
}
......@@ -604,7 +559,7 @@ int __init save_microcode_in_initrd_intel(void)
if (!(cp.data && cp.size))
return 0;
collect_cpu_info_early(&uci);
intel_cpu_collect_info(&uci);
scan_microcode(cp.data, cp.size, &uci, true);
......@@ -637,7 +592,7 @@ static struct microcode_intel *__load_ucode_intel(struct ucode_cpu_info *uci)
if (!(cp.data && cp.size))
return NULL;
collect_cpu_info_early(uci);
intel_cpu_collect_info(uci);
return scan_microcode(cp.data, cp.size, uci, false);
}
......@@ -712,7 +667,7 @@ void reload_ucode_intel(void)
struct microcode_intel *p;
struct ucode_cpu_info uci;
collect_cpu_info_early(&uci);
intel_cpu_collect_info(&uci);
p = find_patch(&uci);
if (!p)
......
......@@ -78,4 +78,21 @@ config MLXBF_PMC
to performance monitoring counters within various blocks in the
Mellanox BlueField SoC via a sysfs interface.
config NVSW_SN2201
tristate "Nvidia SN2201 platform driver support"
depends on REGMAP
depends on HWMON
depends on I2C
depends on REGMAP_I2C
help
This driver provides support for the Nvidia SN2201 platform.
The SN2201 is a highly integrated one-rack-unit system with
L3 management switches. It has 48 x 1Gbps RJ45 + 4 x 100G QSFP28
ports in a compact 1RU form factor. The system also includes a
serial port (RS-232 interface), an OOB port (1G/100M MDI interface)
and USB ports for management functions.
The processor used on the SN2201 is an Intel Atom® Processor C Series
(C3338R), which is part of the Denverton product family.
The system is equipped with an Nvidia® Spectrum-1 32x100GbE
Ethernet switch.
endif # MELLANOX_PLATFORM
......@@ -9,3 +9,4 @@ obj-$(CONFIG_MLXBF_TMFIFO) += mlxbf-tmfifo.o
obj-$(CONFIG_MLXREG_HOTPLUG) += mlxreg-hotplug.o
obj-$(CONFIG_MLXREG_IO) += mlxreg-io.o
obj-$(CONFIG_MLXREG_LC) += mlxreg-lc.o
obj-$(CONFIG_NVSW_SN2201) += nvsw-sn2201.o
......@@ -1152,6 +1152,14 @@ config SIEMENS_SIMATIC_IPC
To compile this driver as a module, choose M here: the module
will be called simatic-ipc.
config WINMATE_FM07_KEYS
tristate "Winmate FM07/FM07P front-panel keys driver"
depends on INPUT
help
Winmate FM07 and FM07P in-vehicle computers have a row of five
buttons below the display. This module adds an input device
that delivers key events when these buttons are pressed.
endif # X86_PLATFORM_DEVICES
config PMC_ATOM
......
......@@ -130,3 +130,6 @@ obj-$(CONFIG_PMC_ATOM) += pmc_atom.o
# Siemens Simatic Industrial PCs
obj-$(CONFIG_SIEMENS_SIMATIC_IPC) += simatic-ipc.o
# Winmate
obj-$(CONFIG_WINMATE_FM07_KEYS) += winmate-fm07-keys.o
......@@ -192,26 +192,6 @@ struct smu_metrics {
u64 timecondition_notmet_totaltime[SOC_SUBSYSTEM_IP_MAX];
} __packed;
static int amd_pmc_get_smu_version(struct amd_pmc_dev *dev)
{
int rc;
u32 val;
rc = amd_pmc_send_cmd(dev, 0, &val, SMU_MSG_GETSMUVERSION, 1);
if (rc)
return rc;
dev->smu_program = (val >> 24) & GENMASK(7, 0);
dev->major = (val >> 16) & GENMASK(7, 0);
dev->minor = (val >> 8) & GENMASK(7, 0);
dev->rev = (val >> 0) & GENMASK(7, 0);
dev_dbg(dev->dev, "SMU program %u version is %u.%u.%u\n",
dev->smu_program, dev->major, dev->minor, dev->rev);
return 0;
}
static int amd_pmc_stb_debugfs_open(struct inode *inode, struct file *filp)
{
struct amd_pmc_dev *dev = filp->f_inode->i_private;
......@@ -294,6 +274,40 @@ static const struct file_operations amd_pmc_stb_debugfs_fops_v2 = {
.release = amd_pmc_stb_debugfs_release_v2,
};
#if defined(CONFIG_SUSPEND) || defined(CONFIG_DEBUG_FS)
static int amd_pmc_setup_smu_logging(struct amd_pmc_dev *dev)
{
if (dev->cpu_id == AMD_CPU_ID_PCO) {
dev_warn_once(dev->dev, "SMU debugging info not supported on this platform\n");
return -EINVAL;
}
/* Get Active devices list from SMU */
if (!dev->active_ips)
amd_pmc_send_cmd(dev, 0, &dev->active_ips, SMU_MSG_GET_SUP_CONSTRAINTS, 1);
/* Get dram address */
if (!dev->smu_virt_addr) {
u32 phys_addr_low, phys_addr_hi;
u64 smu_phys_addr;
amd_pmc_send_cmd(dev, 0, &phys_addr_low, SMU_MSG_LOG_GETDRAM_ADDR_LO, 1);
amd_pmc_send_cmd(dev, 0, &phys_addr_hi, SMU_MSG_LOG_GETDRAM_ADDR_HI, 1);
smu_phys_addr = ((u64)phys_addr_hi << 32 | phys_addr_low);
dev->smu_virt_addr = devm_ioremap(dev->dev, smu_phys_addr,
sizeof(struct smu_metrics));
if (!dev->smu_virt_addr)
return -ENOMEM;
}
/* Start the logging */
amd_pmc_send_cmd(dev, 0, NULL, SMU_MSG_LOG_RESET, 0);
amd_pmc_send_cmd(dev, 0, NULL, SMU_MSG_LOG_START, 0);
return 0;
}
static int amd_pmc_idlemask_read(struct amd_pmc_dev *pdev, struct device *dev,
struct seq_file *s)
{
......@@ -321,11 +335,19 @@ static int amd_pmc_idlemask_read(struct amd_pmc_dev *pdev, struct device *dev,
static int get_metrics_table(struct amd_pmc_dev *pdev, struct smu_metrics *table)
{
if (!pdev->smu_virt_addr) {
int ret = amd_pmc_setup_smu_logging(pdev);
if (ret)
return ret;
}
if (pdev->cpu_id == AMD_CPU_ID_PCO)
return -ENODEV;
memcpy_fromio(table, pdev->smu_virt_addr, sizeof(struct smu_metrics));
return 0;
}
#endif /* CONFIG_SUSPEND || CONFIG_DEBUG_FS */
#ifdef CONFIG_SUSPEND
static void amd_pmc_validate_deepest(struct amd_pmc_dev *pdev)
......@@ -379,6 +401,17 @@ static int s0ix_stats_show(struct seq_file *s, void *unused)
struct amd_pmc_dev *dev = s->private;
u64 entry_time, exit_time, residency;
/* Use FCH registers to get the S0ix stats */
if (!dev->fch_virt_addr) {
u32 base_addr_lo = FCH_BASE_PHY_ADDR_LOW;
u32 base_addr_hi = FCH_BASE_PHY_ADDR_HIGH;
u64 fch_phys_addr = ((u64)base_addr_hi << 32 | base_addr_lo);
dev->fch_virt_addr = devm_ioremap(dev->dev, fch_phys_addr, FCH_SSC_MAPPING_SIZE);
if (!dev->fch_virt_addr)
return -ENOMEM;
}
entry_time = ioread32(dev->fch_virt_addr + FCH_S0I3_ENTRY_TIME_H_OFFSET);
entry_time = entry_time << 32 | ioread32(dev->fch_virt_addr + FCH_S0I3_ENTRY_TIME_L_OFFSET);
......@@ -398,11 +431,38 @@ static int s0ix_stats_show(struct seq_file *s, void *unused)
}
DEFINE_SHOW_ATTRIBUTE(s0ix_stats);
static int amd_pmc_get_smu_version(struct amd_pmc_dev *dev)
{
int rc;
u32 val;
rc = amd_pmc_send_cmd(dev, 0, &val, SMU_MSG_GETSMUVERSION, 1);
if (rc)
return rc;
dev->smu_program = (val >> 24) & GENMASK(7, 0);
dev->major = (val >> 16) & GENMASK(7, 0);
dev->minor = (val >> 8) & GENMASK(7, 0);
dev->rev = (val >> 0) & GENMASK(7, 0);
dev_dbg(dev->dev, "SMU program %u version is %u.%u.%u\n",
dev->smu_program, dev->major, dev->minor, dev->rev);
return 0;
}
static int amd_pmc_idlemask_show(struct seq_file *s, void *unused)
{
struct amd_pmc_dev *dev = s->private;
int rc;
/* we haven't yet read SMU version */
if (!dev->major) {
rc = amd_pmc_get_smu_version(dev);
if (rc)
return rc;
}
if (dev->major > 56 || (dev->major >= 55 && dev->minor >= 37)) {
rc = amd_pmc_idlemask_read(dev, NULL, s);
if (rc)
......@@ -449,32 +509,6 @@ static inline void amd_pmc_dbgfs_unregister(struct amd_pmc_dev *dev)
}
#endif /* CONFIG_DEBUG_FS */
static int amd_pmc_setup_smu_logging(struct amd_pmc_dev *dev)
{
u32 phys_addr_low, phys_addr_hi;
u64 smu_phys_addr;
if (dev->cpu_id == AMD_CPU_ID_PCO)
return -EINVAL;
/* Get Active devices list from SMU */
amd_pmc_send_cmd(dev, 0, &dev->active_ips, SMU_MSG_GET_SUP_CONSTRAINTS, 1);
/* Get dram address */
amd_pmc_send_cmd(dev, 0, &phys_addr_low, SMU_MSG_LOG_GETDRAM_ADDR_LO, 1);
amd_pmc_send_cmd(dev, 0, &phys_addr_hi, SMU_MSG_LOG_GETDRAM_ADDR_HI, 1);
smu_phys_addr = ((u64)phys_addr_hi << 32 | phys_addr_low);
dev->smu_virt_addr = devm_ioremap(dev->dev, smu_phys_addr, sizeof(struct smu_metrics));
if (!dev->smu_virt_addr)
return -ENOMEM;
/* Start the logging */
amd_pmc_send_cmd(dev, 0, NULL, SMU_MSG_LOG_START, 0);
return 0;
}
static void amd_pmc_dump_registers(struct amd_pmc_dev *dev)
{
u32 value, message, argument, response;
......@@ -639,8 +673,7 @@ static void amd_pmc_s2idle_prepare(void)
u32 arg = 1;
/* Reset and Start SMU logging - to monitor the s0i3 stats */
amd_pmc_send_cmd(pdev, 0, NULL, SMU_MSG_LOG_RESET, 0);
amd_pmc_send_cmd(pdev, 0, NULL, SMU_MSG_LOG_START, 0);
amd_pmc_setup_smu_logging(pdev);
/* Activate CZN specific RTC functionality */
if (pdev->cpu_id == AMD_CPU_ID_CZN) {
......@@ -790,7 +823,7 @@ static int amd_pmc_probe(struct platform_device *pdev)
struct amd_pmc_dev *dev = &pmc;
struct pci_dev *rdev;
u32 base_addr_lo, base_addr_hi;
u64 base_addr, fch_phys_addr;
u64 base_addr;
int err;
u32 val;
......@@ -844,28 +877,12 @@ static int amd_pmc_probe(struct platform_device *pdev)
mutex_init(&dev->lock);
/* Use FCH registers to get the S0ix stats */
base_addr_lo = FCH_BASE_PHY_ADDR_LOW;
base_addr_hi = FCH_BASE_PHY_ADDR_HIGH;
fch_phys_addr = ((u64)base_addr_hi << 32 | base_addr_lo);
dev->fch_virt_addr = devm_ioremap(dev->dev, fch_phys_addr, FCH_SSC_MAPPING_SIZE);
if (!dev->fch_virt_addr) {
err = -ENOMEM;
goto err_pci_dev_put;
}
/* Use SMU to get the s0i3 debug stats */
err = amd_pmc_setup_smu_logging(dev);
if (err)
dev_err(dev->dev, "SMU debugging info not supported on this platform\n");
if (enable_stb && dev->cpu_id == AMD_CPU_ID_YC) {
err = amd_pmc_s2d_init(dev);
if (err)
return err;
}
amd_pmc_get_smu_version(dev);
platform_set_drvdata(pdev, dev);
#ifdef CONFIG_SUSPEND
err = acpi_register_lps0_dev(&amd_pmc_s2idle_dev_ops);
......
......@@ -553,6 +553,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
{ KE_KEY, 0x7D, { KEY_BLUETOOTH } }, /* Bluetooth Enable */
{ KE_KEY, 0x7E, { KEY_BLUETOOTH } }, /* Bluetooth Disable */
{ KE_KEY, 0x82, { KEY_CAMERA } },
{ KE_KEY, 0x86, { KEY_PROG1 } }, /* MyASUS Key */
{ KE_KEY, 0x88, { KEY_RFKILL } }, /* Radio Toggle Key */
{ KE_KEY, 0x8A, { KEY_PROG1 } }, /* Color enhancement mode */
{ KE_KEY, 0x8C, { KEY_SWITCHVIDEOMODE } }, /* SDSP DVI only */
......
......@@ -2534,7 +2534,7 @@ static struct attribute *asus_fan_curve_attr[] = {
static umode_t asus_fan_curve_is_visible(struct kobject *kobj,
struct attribute *attr, int idx)
{
struct device *dev = container_of(kobj, struct device, kobj);
struct device *dev = kobj_to_dev(kobj);
struct asus_wmi *asus = dev_get_drvdata(dev->parent);
/*
......@@ -3114,7 +3114,7 @@ static void asus_wmi_handle_event_code(int code, struct asus_wmi *asus)
if (!sparse_keymap_report_event(asus->inputdev, code,
key_value, autorelease))
pr_info("Unknown key %x pressed\n", code);
pr_info("Unknown key code 0x%x\n", code);
}
static void asus_wmi_notify(u32 value, void *context)
......
......@@ -40,13 +40,10 @@
static struct platform_device *dcdbas_pdev;
static u8 *smi_data_buf;
static dma_addr_t smi_data_buf_handle;
static unsigned long smi_data_buf_size;
static unsigned long max_smi_data_buf_size = MAX_SMI_DATA_BUF_SIZE;
static u32 smi_data_buf_phys_addr;
static DEFINE_MUTEX(smi_data_lock);
static u8 *bios_buffer;
static struct smi_buffer smi_buf;
static unsigned int host_control_action;
static unsigned int host_control_smi_type;
......@@ -54,23 +51,49 @@ static unsigned int host_control_on_shutdown;
static bool wsmt_enabled;
int dcdbas_smi_alloc(struct smi_buffer *smi_buffer, unsigned long size)
{
smi_buffer->virt = dma_alloc_coherent(&dcdbas_pdev->dev, size,
&smi_buffer->dma, GFP_KERNEL);
if (!smi_buffer->virt) {
dev_dbg(&dcdbas_pdev->dev,
"%s: failed to allocate memory size %lu\n",
__func__, size);
return -ENOMEM;
}
smi_buffer->size = size;
dev_dbg(&dcdbas_pdev->dev, "%s: phys: %x size: %lu\n",
__func__, (u32)smi_buffer->dma, smi_buffer->size);
return 0;
}
EXPORT_SYMBOL_GPL(dcdbas_smi_alloc);
void dcdbas_smi_free(struct smi_buffer *smi_buffer)
{
if (!smi_buffer->virt)
return;
dev_dbg(&dcdbas_pdev->dev, "%s: phys: %x size: %lu\n",
__func__, (u32)smi_buffer->dma, smi_buffer->size);
dma_free_coherent(&dcdbas_pdev->dev, smi_buffer->size,
smi_buffer->virt, smi_buffer->dma);
smi_buffer->virt = NULL;
smi_buffer->dma = 0;
smi_buffer->size = 0;
}
EXPORT_SYMBOL_GPL(dcdbas_smi_free);
/**
* smi_data_buf_free: free SMI data buffer
*/
static void smi_data_buf_free(void)
{
if (!smi_data_buf || wsmt_enabled)
if (!smi_buf.virt || wsmt_enabled)
return;
dev_dbg(&dcdbas_pdev->dev, "%s: phys: %x size: %lu\n",
__func__, smi_data_buf_phys_addr, smi_data_buf_size);
dma_free_coherent(&dcdbas_pdev->dev, smi_data_buf_size, smi_data_buf,
smi_data_buf_handle);
smi_data_buf = NULL;
smi_data_buf_handle = 0;
smi_data_buf_phys_addr = 0;
smi_data_buf_size = 0;
dcdbas_smi_free(&smi_buf);
}
/**
......@@ -78,39 +101,29 @@ static void smi_data_buf_free(void)
*/
static int smi_data_buf_realloc(unsigned long size)
{
void *buf;
dma_addr_t handle;
struct smi_buffer tmp;
int ret;
if (smi_data_buf_size >= size)
if (smi_buf.size >= size)
return 0;
if (size > max_smi_data_buf_size)
return -EINVAL;
/* new buffer is needed */
buf = dma_alloc_coherent(&dcdbas_pdev->dev, size, &handle, GFP_KERNEL);
if (!buf) {
dev_dbg(&dcdbas_pdev->dev,
"%s: failed to allocate memory size %lu\n",
__func__, size);
return -ENOMEM;
}
/* memory zeroed by dma_alloc_coherent */
ret = dcdbas_smi_alloc(&tmp, size);
if (ret)
return ret;
if (smi_data_buf)
memcpy(buf, smi_data_buf, smi_data_buf_size);
/* memory zeroed by dma_alloc_coherent */
if (smi_buf.virt)
memcpy(tmp.virt, smi_buf.virt, smi_buf.size);
/* free any existing buffer */
smi_data_buf_free();
/* set up new buffer for use */
smi_data_buf = buf;
smi_data_buf_handle = handle;
smi_data_buf_phys_addr = (u32) virt_to_phys(buf);
smi_data_buf_size = size;
dev_dbg(&dcdbas_pdev->dev, "%s: phys: %x size: %lu\n",
__func__, smi_data_buf_phys_addr, smi_data_buf_size);
smi_buf = tmp;
return 0;
}
......@@ -119,14 +132,14 @@ static ssize_t smi_data_buf_phys_addr_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%x\n", smi_data_buf_phys_addr);
return sprintf(buf, "%x\n", (u32)smi_buf.dma);
}
static ssize_t smi_data_buf_size_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
return sprintf(buf, "%lu\n", smi_data_buf_size);
return sprintf(buf, "%lu\n", smi_buf.size);
}
static ssize_t smi_data_buf_size_store(struct device *dev,
......@@ -155,8 +168,8 @@ static ssize_t smi_data_read(struct file *filp, struct kobject *kobj,
ssize_t ret;
mutex_lock(&smi_data_lock);
ret = memory_read_from_buffer(buf, count, &pos, smi_data_buf,
smi_data_buf_size);
ret = memory_read_from_buffer(buf, count, &pos, smi_buf.virt,
smi_buf.size);
mutex_unlock(&smi_data_lock);
return ret;
}
......@@ -176,7 +189,7 @@ static ssize_t smi_data_write(struct file *filp, struct kobject *kobj,
if (ret)
goto out;
memcpy(smi_data_buf + pos, buf, count);
memcpy(smi_buf.virt + pos, buf, count);
ret = count;
out:
mutex_unlock(&smi_data_lock);
......@@ -307,11 +320,11 @@ static ssize_t smi_request_store(struct device *dev,
mutex_lock(&smi_data_lock);
if (smi_data_buf_size < sizeof(struct smi_cmd)) {
if (smi_buf.size < sizeof(struct smi_cmd)) {
ret = -ENODEV;
goto out;
}
smi_cmd = (struct smi_cmd *)smi_data_buf;
smi_cmd = (struct smi_cmd *)smi_buf.virt;
switch (val) {
case 2:
......@@ -327,20 +340,20 @@ static ssize_t smi_request_store(struct device *dev,
* Provide physical address of command buffer field within
* the struct smi_cmd to BIOS.
*
* Because the address that smi_cmd (smi_data_buf) points to
* Because the address that smi_cmd (smi_buf.virt) points to
* will be from memremap() of a non-memory address if WSMT
* is present, we can't use virt_to_phys() on smi_cmd, so
* we have to use the physical address that was saved when
* the virtual address for smi_cmd was received.
*/
smi_cmd->ebx = smi_data_buf_phys_addr +
smi_cmd->ebx = (u32)smi_buf.dma +
offsetof(struct smi_cmd, command_buffer);
ret = dcdbas_smi_request(smi_cmd);
if (!ret)
ret = count;
break;
case 0:
memset(smi_data_buf, 0, smi_data_buf_size);
memset(smi_buf.virt, 0, smi_buf.size);
ret = count;
break;
default:
......@@ -356,7 +369,7 @@ static ssize_t smi_request_store(struct device *dev,
/**
* host_control_smi: generate host control SMI
*
* Caller must set up the host control command in smi_data_buf.
* Caller must set up the host control command in smi_buf.virt.
*/
static int host_control_smi(void)
{
......@@ -367,14 +380,14 @@ static int host_control_smi(void)
s8 cmd_status;
u8 index;
apm_cmd = (struct apm_cmd *)smi_data_buf;
apm_cmd = (struct apm_cmd *)smi_buf.virt;
apm_cmd->status = ESM_STATUS_CMD_UNSUCCESSFUL;
switch (host_control_smi_type) {
case HC_SMITYPE_TYPE1:
spin_lock_irqsave(&rtc_lock, flags);
/* write SMI data buffer physical address */
data = (u8 *)&smi_data_buf_phys_addr;
data = (u8 *)&smi_buf.dma;
for (index = PE1300_CMOS_CMD_STRUCT_PTR;
index < (PE1300_CMOS_CMD_STRUCT_PTR + 4);
index++, data++) {
......@@ -405,7 +418,7 @@ static int host_control_smi(void)
case HC_SMITYPE_TYPE3:
spin_lock_irqsave(&rtc_lock, flags);
/* write SMI data buffer physical address */
data = (u8 *)&smi_data_buf_phys_addr;
data = (u8 *)&smi_buf.dma;
for (index = PE1400_CMOS_CMD_STRUCT_PTR;
index < (PE1400_CMOS_CMD_STRUCT_PTR + 4);
index++, data++) {
......@@ -450,7 +463,7 @@ static int host_control_smi(void)
* This function is called by the driver after the system has
* finished shutting down if the user application specified a
* host control action to perform on shutdown. It is safe to
* use smi_data_buf at this point because the system has finished
* use smi_buf.virt at this point because the system has finished
* shutting down and no userspace apps are running.
*/
static void dcdbas_host_control(void)
......@@ -464,18 +477,18 @@ static void dcdbas_host_control(void)
action = host_control_action;
host_control_action = HC_ACTION_NONE;
if (!smi_data_buf) {
if (!smi_buf.virt) {
dev_dbg(&dcdbas_pdev->dev, "%s: no SMI buffer\n", __func__);
return;
}
if (smi_data_buf_size < sizeof(struct apm_cmd)) {
if (smi_buf.size < sizeof(struct apm_cmd)) {
dev_dbg(&dcdbas_pdev->dev, "%s: SMI buffer too small\n",
__func__);
return;
}
apm_cmd = (struct apm_cmd *)smi_data_buf;
apm_cmd = (struct apm_cmd *)smi_buf.virt;
/* power off takes precedence */
if (action & HC_ACTION_HOST_CONTROL_POWEROFF) {
......@@ -583,11 +596,11 @@ static int dcdbas_check_wsmt(void)
return -ENOMEM;
}
/* First 8 bytes is for a semaphore, not part of the smi_data_buf */
smi_data_buf_phys_addr = bios_buf_paddr + 8;
smi_data_buf = bios_buffer + 8;
smi_data_buf_size = remap_size - 8;
max_smi_data_buf_size = smi_data_buf_size;
/* First 8 bytes is for a semaphore, not part of the smi_buf.virt */
smi_buf.dma = bios_buf_paddr + 8;
smi_buf.virt = bios_buffer + 8;
smi_buf.size = remap_size - 8;
max_smi_data_buf_size = smi_buf.size;
wsmt_enabled = true;
dev_info(&dcdbas_pdev->dev,
"WSMT found, using firmware-provided SMI buffer.\n");
......
......@@ -105,5 +105,14 @@ struct smm_eps_table {
u64 num_of_4k_pages;
} __packed;
struct smi_buffer {
u8 *virt;
unsigned long size;
dma_addr_t dma;
};
int dcdbas_smi_alloc(struct smi_buffer *smi_buffer, unsigned long size);
void dcdbas_smi_free(struct smi_buffer *smi_buffer);
#endif /* _DCDBAS_H_ */
......@@ -20,6 +20,7 @@
static int da_command_address;
static int da_command_code;
static struct smi_buffer smi_buf;
static struct calling_interface_buffer *buffer;
static struct platform_device *platform_device;
static DEFINE_MUTEX(smm_mutex);
......@@ -57,7 +58,7 @@ static int dell_smbios_smm_call(struct calling_interface_buffer *input)
command.magic = SMI_CMD_MAGIC;
command.command_address = da_command_address;
command.command_code = da_command_code;
command.ebx = virt_to_phys(buffer);
command.ebx = smi_buf.dma;
command.ecx = 0x42534931;
mutex_lock(&smm_mutex);
......@@ -101,9 +102,10 @@ int init_dell_smbios_smm(void)
* Allocate buffer below 4GB for SMI data--only 32-bit physical addr
* is passed to SMI handler.
*/
buffer = (void *)__get_free_page(GFP_KERNEL | GFP_DMA32);
if (!buffer)
return -ENOMEM;
ret = dcdbas_smi_alloc(&smi_buf, PAGE_SIZE);
if (ret)
return ret;
buffer = (void *)smi_buf.virt;
dmi_walk(find_cmd_address, NULL);
......@@ -138,7 +140,7 @@ int init_dell_smbios_smm(void)
fail_wsmt:
fail_platform_device_alloc:
free_page((unsigned long)buffer);
dcdbas_smi_free(&smi_buf);
return ret;
}
......@@ -147,6 +149,6 @@ void exit_dell_smbios_smm(void)
if (platform_device) {
dell_smbios_unregister_device(&platform_device->dev);
platform_device_unregister(platform_device);
free_page((unsigned long)buffer);
dcdbas_smi_free(&smi_buf);
}
}
......@@ -150,7 +150,9 @@ static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = {
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M DS3H"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B660 GAMING X DDR4"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("Z390 I AORUS PRO WIFI-CF"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("Z490 AORUS ELITE AC"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 AORUS ELITE"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 AORUS ELITE WIFI"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 GAMING X"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 I AORUS PRO WIFI"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 UD"),
......
......@@ -605,6 +605,7 @@ static int hp_wmi_rfkill2_refresh(void)
for (i = 0; i < rfkill2_count; i++) {
int num = rfkill2[i].num;
struct bios_rfkill2_device_state *devstate;
devstate = &state.device[num];
if (num >= state.count ||
......@@ -625,6 +626,7 @@ static ssize_t display_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
int value = hp_wmi_read_int(HPWMI_DISPLAY_QUERY);
if (value < 0)
return value;
return sprintf(buf, "%d\n", value);
......@@ -634,6 +636,7 @@ static ssize_t hddtemp_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
int value = hp_wmi_read_int(HPWMI_HDDTEMP_QUERY);
if (value < 0)
return value;
return sprintf(buf, "%d\n", value);
......@@ -643,6 +646,7 @@ static ssize_t als_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
int value = hp_wmi_read_int(HPWMI_ALS_QUERY);
if (value < 0)
return value;
return sprintf(buf, "%d\n", value);
......@@ -652,6 +656,7 @@ static ssize_t dock_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
int value = hp_wmi_get_dock_state();
if (value < 0)
return value;
return sprintf(buf, "%d\n", value);
......@@ -661,6 +666,7 @@ static ssize_t tablet_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
int value = hp_wmi_get_tablet_mode();
if (value < 0)
return value;
return sprintf(buf, "%d\n", value);
......@@ -671,6 +677,7 @@ static ssize_t postcode_show(struct device *dev, struct device_attribute *attr,
{
/* Get the POST error code of previous boot failure. */
int value = hp_wmi_read_int(HPWMI_POSTCODEERROR_QUERY);
if (value < 0)
return value;
return sprintf(buf, "0x%x\n", value);
......@@ -1013,6 +1020,7 @@ static int __init hp_wmi_rfkill2_setup(struct platform_device *device)
struct rfkill *rfkill;
enum rfkill_type type;
char *name;
switch (state.device[i].radio_type) {
case HPWMI_WIFI:
type = RFKILL_TYPE_WLAN;
......
......@@ -4,6 +4,7 @@
#
source "drivers/platform/x86/intel/atomisp2/Kconfig"
source "drivers/platform/x86/intel/ifs/Kconfig"
source "drivers/platform/x86/intel/int1092/Kconfig"
source "drivers/platform/x86/intel/int3472/Kconfig"
source "drivers/platform/x86/intel/pmc/Kconfig"
......
......@@ -5,6 +5,7 @@
#
obj-$(CONFIG_INTEL_ATOMISP2_PDX86) += atomisp2/
obj-$(CONFIG_INTEL_IFS) += ifs/
obj-$(CONFIG_INTEL_SAR_INT1092) += int1092/
obj-$(CONFIG_INTEL_SKL_INT3472) += int3472/
obj-$(CONFIG_INTEL_PMC_CORE) += pmc/
......
......@@ -389,6 +389,8 @@ static int cht_int33fe_typec_probe(struct platform_device *pdev)
goto out_unregister_fusb302;
}
platform_set_drvdata(pdev, data);
return 0;
out_unregister_fusb302:
......
......@@ -238,7 +238,7 @@ static bool intel_hid_evaluate_method(acpi_handle handle,
method_name = (char *)intel_hid_dsm_fn_to_method[fn_index];
if (!(intel_hid_dsm_fn_mask & fn_index))
if (!(intel_hid_dsm_fn_mask & BIT(fn_index)))
goto skip_dsm_eval;
obj = acpi_evaluate_dsm_typed(handle, &intel_dsm_guid,
......
config INTEL_IFS
tristate "Intel In Field Scan"
depends on X86 && CPU_SUP_INTEL && 64BIT && SMP
select INTEL_IFS_DEVICE
help
Enable support for the In Field Scan capability in select
CPUs. The capability allows for running low level tests via
a scan image distributed by Intel via Github to validate CPU
operation beyond baseline RAS capabilities. To compile this
support as a module, choose M here. The module will be called
intel_ifs.
If unsure, say N.
obj-$(CONFIG_INTEL_IFS) += intel_ifs.o
intel_ifs-objs := core.o load.o runtest.o sysfs.o
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2022 Intel Corporation. */
#include <linux/module.h>
#include <linux/kdev_t.h>
#include <linux/semaphore.h>
#include <asm/cpu_device_id.h>
#include "ifs.h"
#define X86_MATCH(model) \
X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, \
INTEL_FAM6_##model, X86_FEATURE_CORE_CAPABILITIES, NULL)
static const struct x86_cpu_id ifs_cpu_ids[] __initconst = {
X86_MATCH(SAPPHIRERAPIDS_X),
{}
};
MODULE_DEVICE_TABLE(x86cpu, ifs_cpu_ids);
static struct ifs_device ifs_device = {
.data = {
.integrity_cap_bit = MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT,
},
.misc = {
.name = "intel_ifs_0",
.nodename = "intel_ifs/0",
.minor = MISC_DYNAMIC_MINOR,
},
};
static int __init ifs_init(void)
{
const struct x86_cpu_id *m;
u64 msrval;
m = x86_match_cpu(ifs_cpu_ids);
if (!m)
return -ENODEV;
if (rdmsrl_safe(MSR_IA32_CORE_CAPS, &msrval))
return -ENODEV;
if (!(msrval & MSR_IA32_CORE_CAPS_INTEGRITY_CAPS))
return -ENODEV;
if (rdmsrl_safe(MSR_INTEGRITY_CAPS, &msrval))
return -ENODEV;
ifs_device.misc.groups = ifs_get_groups();
if ((msrval & BIT(ifs_device.data.integrity_cap_bit)) &&
!misc_register(&ifs_device.misc)) {
down(&ifs_sem);
ifs_load_firmware(ifs_device.misc.this_device);
up(&ifs_sem);
return 0;
}
return -ENODEV;
}
static void __exit ifs_exit(void)
{
misc_deregister(&ifs_device.misc);
}
module_init(ifs_init);
module_exit(ifs_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Intel In Field Scan (IFS) device");
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright(c) 2022 Intel Corporation. */
#ifndef _IFS_H_
#define _IFS_H_
/**
* DOC: In-Field Scan
*
* =============
* In-Field Scan
* =============
*
* Introduction
* ------------
*
* In Field Scan (IFS) is a hardware feature to run circuit level tests on
* a CPU core to detect problems that are not caught by parity or ECC checks.
* Future CPUs will support more than one type of test, each of which will
* show up with a new platform-device instance-id; for now only .0 is exposed.
*
*
* IFS Image
* ---------
*
* Intel provides a firmware file containing the scan tests via
* github [#f1]_. Similar to microcode there is a separate file for each
* family-model-stepping.
*
* IFS Image Loading
* -----------------
*
* The driver loads the tests into BIOS-reserved memory local to each CPU
* socket in a two step process, using writes to MSRs to load first the
* SHA hashes for the tests and then the tests themselves. Status MSRs
* provide feedback on the success/failure of these steps. When a new test
* file is installed, it can be loaded by writing to the driver reload file::
*
* # echo 1 > /sys/devices/virtual/misc/intel_ifs_0/reload
*
* Similar to microcode, the current version of the scan tests is stored
* in a fixed location: /lib/firmware/intel/ifs/family-model-stepping.scan
*
* Running tests
* -------------
*
* Tests are run by the driver synchronizing execution of all threads on a
* core and then writing to the ACTIVATE_SCAN MSR on all threads. Instruction
* execution continues when:
*
* 1) All tests have completed.
* 2) Execution was interrupted.
* 3) A test detected a problem.
*
* Note that ALL THREADS ON THE CORE ARE EFFECTIVELY OFFLINE FOR THE
* DURATION OF THE TEST. This can be up to 200 milliseconds. If the system
* is running latency sensitive applications that cannot tolerate an
* interruption of this magnitude, the system administrator must arrange
* to migrate those applications to other cores before running a core test.
* It may also be necessary to redirect interrupts to other CPUs.
*
* In all cases reading the SCAN_STATUS MSR provides details on what
* happened. The driver makes the value of this MSR visible to applications
* via the "details" file (see below). Interrupted tests may be restarted.
*
* The IFS driver provides sysfs interfaces via /sys/devices/virtual/misc/intel_ifs_0/
* to control execution:
*
* Test a specific core::
*
* # echo <cpu#> > /sys/devices/virtual/misc/intel_ifs_0/run_test
*
* When HT is enabled, any of the sibling cpu#s can be specified to test
* its corresponding physical core. Since the tests are per physical core,
* the result of testing any thread is the same. All siblings must be online
* to run a core test; it is only necessary to test one thread.
*
* For example, to test the core corresponding to cpu5::
*
* # echo 5 > /sys/devices/virtual/misc/intel_ifs_0/run_test
*
* The result of the last test is provided in /sys::
*
* $ cat /sys/devices/virtual/misc/intel_ifs_0/status
* pass
*
* Status can be one of pass, fail, untested
*
* Additional details of the last test are provided by the details file::
*
* $ cat /sys/devices/virtual/misc/intel_ifs_0/details
* 0x8081
*
* The details file reports the hex value of the SCAN_STATUS MSR.
* Hardware defined error codes are documented in volume 4 of the Intel
* Software Developer's Manual but the error_code field may contain one of
* the following driver defined software codes:
*
* +------+--------------------+
* | 0xFD | Software timeout |
* +------+--------------------+
* | 0xFE | Partial completion |
* +------+--------------------+
*
* Driver design choices
* ---------------------
*
* 1) The ACTIVATE_SCAN MSR allows for running any consecutive subrange of
* available tests. But the driver always tries to run all tests and only
* uses the subrange feature to restart an interrupted test.
*
* 2) Hardware allows for some number of cores to be tested in parallel.
* The driver does not make use of this, it only tests one core at a time.
*
* .. [#f1] https://github.com/intel/TBD
*/
#include <linux/device.h>
#include <linux/miscdevice.h>
#define MSR_COPY_SCAN_HASHES 0x000002c2
#define MSR_SCAN_HASHES_STATUS 0x000002c3
#define MSR_AUTHENTICATE_AND_COPY_CHUNK 0x000002c4
#define MSR_CHUNKS_AUTHENTICATION_STATUS 0x000002c5
#define MSR_ACTIVATE_SCAN 0x000002c6
#define MSR_SCAN_STATUS 0x000002c7
#define SCAN_NOT_TESTED 0
#define SCAN_TEST_PASS 1
#define SCAN_TEST_FAIL 2
/* MSR_SCAN_HASHES_STATUS bit fields */
union ifs_scan_hashes_status {
u64 data;
struct {
u32 chunk_size :16;
u32 num_chunks :8;
u32 rsvd1 :8;
u32 error_code :8;
u32 rsvd2 :11;
u32 max_core_limit :12;
u32 valid :1;
};
};
/* MSR_CHUNKS_AUTH_STATUS bit fields */
union ifs_chunks_auth_status {
u64 data;
struct {
u32 valid_chunks :8;
u32 total_chunks :8;
u32 rsvd1 :16;
u32 error_code :8;
u32 rsvd2 :24;
};
};
/* MSR_ACTIVATE_SCAN bit fields */
union ifs_scan {
u64 data;
struct {
u32 start :8;
u32 stop :8;
u32 rsvd :16;
u32 delay :31;
u32 sigmce :1;
};
};
/* MSR_SCAN_STATUS bit fields */
union ifs_status {
u64 data;
struct {
u32 chunk_num :8;
u32 chunk_stop_index :8;
u32 rsvd1 :16;
u32 error_code :8;
u32 rsvd2 :22;
u32 control_error :1;
u32 signature_error :1;
};
};
/*
* Driver populated error-codes
* 0xFD: Test timed out before completing all the chunks.
* 0xFE: not all scan chunks were executed. Maximum forward progress retries exceeded.
*/
#define IFS_SW_TIMEOUT 0xFD
#define IFS_SW_PARTIAL_COMPLETION 0xFE
/**
* struct ifs_data - attributes related to intel IFS driver
* @integrity_cap_bit: MSR_INTEGRITY_CAPS bit enumerating this test
* @loaded_version: stores the currently loaded ifs image version.
* @loaded: If a valid test binary has been loaded into the memory
* @loading_error: Error occurred on another CPU while loading image
* @valid_chunks: number of chunks which could be validated.
* @status: it holds simple status pass/fail/untested
* @scan_details: opaque scan status code from h/w
*/
struct ifs_data {
int integrity_cap_bit;
int loaded_version;
bool loaded;
bool loading_error;
int valid_chunks;
int status;
u64 scan_details;
};
struct ifs_work {
struct work_struct w;
struct device *dev;
};
struct ifs_device {
struct ifs_data data;
struct miscdevice misc;
};
static inline struct ifs_data *ifs_get_data(struct device *dev)
{
struct miscdevice *m = dev_get_drvdata(dev);
struct ifs_device *d = container_of(m, struct ifs_device, misc);
return &d->data;
}
void ifs_load_firmware(struct device *dev);
int do_core_test(int cpu, struct device *dev);
const struct attribute_group **ifs_get_groups(void);
extern struct semaphore ifs_sem;
#endif
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2022 Intel Corporation. */
#include <linux/firmware.h>
#include <asm/cpu.h>
#include <linux/slab.h>
#include <asm/microcode_intel.h>
#include "ifs.h"
struct ifs_header {
u32 header_ver;
u32 blob_revision;
u32 date;
u32 processor_sig;
u32 check_sum;
u32 loader_rev;
u32 processor_flags;
u32 metadata_size;
u32 total_size;
u32 fusa_info;
u64 reserved;
};
#define IFS_HEADER_SIZE (sizeof(struct ifs_header))
static struct ifs_header *ifs_header_ptr; /* pointer to the ifs image header */
static u64 ifs_hash_ptr; /* Address of ifs metadata (hash) */
static u64 ifs_test_image_ptr; /* 256B aligned address of test pattern */
static DECLARE_COMPLETION(ifs_done);
static const char * const scan_hash_status[] = {
[0] = "No error reported",
[1] = "Attempt to copy scan hashes when copy already in progress",
[2] = "Secure Memory not set up correctly",
[3] = "FuSaInfo.ProgramID does not match or ff-mm-ss does not match",
[4] = "Reserved",
[5] = "Integrity check failed",
[6] = "Scan reload or test is in progress"
};
static const char * const scan_authentication_status[] = {
[0] = "No error reported",
[1] = "Attempt to authenticate a chunk which is already marked as authentic",
[2] = "Chunk authentication error. The hash of chunk did not match expected value"
};
/*
* To copy scan hashes and authenticate test chunks, the initiating cpu must point
* EDX:EAX to the test image in linear address space.
* Run wrmsr(MSR_COPY_SCAN_HASHES) for the scan hash copy and
* wrmsr(MSR_AUTHENTICATE_AND_COPY_CHUNK) for test chunk authentication and copy.
*/
static void copy_hashes_authenticate_chunks(struct work_struct *work)
{
struct ifs_work *local_work = container_of(work, struct ifs_work, w);
union ifs_scan_hashes_status hashes_status;
union ifs_chunks_auth_status chunk_status;
struct device *dev = local_work->dev;
int i, num_chunks, chunk_size;
struct ifs_data *ifsd;
u64 linear_addr, base;
u32 err_code;
ifsd = ifs_get_data(dev);
/* run scan hash copy */
wrmsrl(MSR_COPY_SCAN_HASHES, ifs_hash_ptr);
rdmsrl(MSR_SCAN_HASHES_STATUS, hashes_status.data);
/* enumerate the scan image information */
num_chunks = hashes_status.num_chunks;
chunk_size = hashes_status.chunk_size * 1024;
err_code = hashes_status.error_code;
if (!hashes_status.valid) {
ifsd->loading_error = true;
if (err_code >= ARRAY_SIZE(scan_hash_status)) {
dev_err(dev, "invalid error code 0x%x for hash copy\n", err_code);
goto done;
}
dev_err(dev, "Hash copy error : %s", scan_hash_status[err_code]);
goto done;
}
/* base linear address to the scan data */
base = ifs_test_image_ptr;
/* scan data authentication and copy chunks to secured memory */
for (i = 0; i < num_chunks; i++) {
linear_addr = base + i * chunk_size;
linear_addr |= i;
wrmsrl(MSR_AUTHENTICATE_AND_COPY_CHUNK, linear_addr);
rdmsrl(MSR_CHUNKS_AUTHENTICATION_STATUS, chunk_status.data);
ifsd->valid_chunks = chunk_status.valid_chunks;
err_code = chunk_status.error_code;
if (err_code) {
ifsd->loading_error = true;
if (err_code >= ARRAY_SIZE(scan_authentication_status)) {
dev_err(dev,
"invalid error code 0x%x for authentication\n", err_code);
goto done;
}
dev_err(dev, "Chunk authentication error %s\n",
scan_authentication_status[err_code]);
goto done;
}
}
done:
complete(&ifs_done);
}
/*
* IFS requires scan chunks to be authenticated on each socket in the platform.
* Once a test chunk is authenticated, it is automatically copied to secured memory
* and authentication proceeds with the next chunk.
*/
static int scan_chunks_sanity_check(struct device *dev)
{
int metadata_size, curr_pkg, cpu, ret = -ENOMEM;
struct ifs_data *ifsd = ifs_get_data(dev);
bool *package_authenticated;
struct ifs_work local_work;
char *test_ptr;
package_authenticated = kcalloc(topology_max_packages(), sizeof(bool), GFP_KERNEL);
if (!package_authenticated)
return ret;
metadata_size = ifs_header_ptr->metadata_size;
/* Spec says that if the Meta Data Size = 0 then it should be treated as 2000 */
if (metadata_size == 0)
metadata_size = 2000;
/* Scan chunk start must be 256 byte aligned */
if ((metadata_size + IFS_HEADER_SIZE) % 256) {
dev_err(dev, "Scan pattern offset within the binary is not 256 byte aligned\n");
return -EINVAL;
}
test_ptr = (char *)ifs_header_ptr + IFS_HEADER_SIZE + metadata_size;
ifsd->loading_error = false;
ifs_test_image_ptr = (u64)test_ptr;
ifsd->loaded_version = ifs_header_ptr->blob_revision;
/* copy the scan hash and authenticate per package */
cpus_read_lock();
for_each_online_cpu(cpu) {
curr_pkg = topology_physical_package_id(cpu);
if (package_authenticated[curr_pkg])
continue;
reinit_completion(&ifs_done);
local_work.dev = dev;
INIT_WORK(&local_work.w, copy_hashes_authenticate_chunks);
schedule_work_on(cpu, &local_work.w);
wait_for_completion(&ifs_done);
if (ifsd->loading_error)
goto out;
package_authenticated[curr_pkg] = 1;
}
ret = 0;
out:
cpus_read_unlock();
kfree(package_authenticated);
return ret;
}
static int ifs_sanity_check(struct device *dev,
const struct microcode_header_intel *mc_header)
{
unsigned long total_size, data_size;
u32 sum, *mc;
total_size = get_totalsize(mc_header);
data_size = get_datasize(mc_header);
if ((data_size + MC_HEADER_SIZE > total_size) || (total_size % sizeof(u32))) {
dev_err(dev, "bad ifs data file size.\n");
return -EINVAL;
}
if (mc_header->ldrver != 1 || mc_header->hdrver != 1) {
dev_err(dev, "invalid/unknown ifs update format.\n");
return -EINVAL;
}
mc = (u32 *)mc_header;
sum = 0;
for (int i = 0; i < total_size / sizeof(u32); i++)
sum += mc[i];
if (sum) {
dev_err(dev, "bad ifs data checksum, aborting.\n");
return -EINVAL;
}
return 0;
}
static bool find_ifs_matching_signature(struct device *dev, struct ucode_cpu_info *uci,
const struct microcode_header_intel *shdr)
{
unsigned int mc_size;
mc_size = get_totalsize(shdr);
if (!mc_size || ifs_sanity_check(dev, shdr) < 0) {
dev_err(dev, "ifs sanity check failure\n");
return false;
}
if (!intel_cpu_signatures_match(uci->cpu_sig.sig, uci->cpu_sig.pf, shdr->sig, shdr->pf)) {
dev_err(dev, "ifs signature, pf not matching\n");
return false;
}
return true;
}
static bool ifs_image_sanity_check(struct device *dev, const struct microcode_header_intel *data)
{
struct ucode_cpu_info uci;
intel_cpu_collect_info(&uci);
return find_ifs_matching_signature(dev, &uci, data);
}
/*
* Load ifs image. Before loading ifs module, the ifs image must be located
* in /lib/firmware/intel/ifs and named as {family/model/stepping}.{testname}.
*/
void ifs_load_firmware(struct device *dev)
{
struct ifs_data *ifsd = ifs_get_data(dev);
const struct firmware *fw;
char scan_path[32];
int ret;
snprintf(scan_path, sizeof(scan_path), "intel/ifs/%02x-%02x-%02x.scan",
boot_cpu_data.x86, boot_cpu_data.x86_model, boot_cpu_data.x86_stepping);
ret = request_firmware_direct(&fw, scan_path, dev);
if (ret) {
dev_err(dev, "ifs file %s load failed\n", scan_path);
goto done;
}
if (!ifs_image_sanity_check(dev, (struct microcode_header_intel *)fw->data)) {
dev_err(dev, "ifs header sanity check failed\n");
goto release;
}
ifs_header_ptr = (struct ifs_header *)fw->data;
ifs_hash_ptr = (u64)(ifs_header_ptr + 1);
ret = scan_chunks_sanity_check(dev);
release:
release_firmware(fw);
done:
ifsd->loaded = (ret == 0);
}
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2022 Intel Corporation. */
#include <linux/cpu.h>
#include <linux/delay.h>
#include <linux/fs.h>
#include <linux/nmi.h>
#include <linux/slab.h>
#include <linux/stop_machine.h>
#include "ifs.h"
/*
* Note all code and data in this file is protected by
* ifs_sem. On HT systems all threads on a core will
* execute together, but only the first thread on the
* core will update results of the test.
*/
#define CREATE_TRACE_POINTS
#include <trace/events/intel_ifs.h>
/* Max retries on the same chunk */
#define MAX_IFS_RETRIES 5
/*
* Number of TSC cycles that a logical CPU will wait for the other
* logical CPU on the core in the WRMSR(ACTIVATE_SCAN).
*/
#define IFS_THREAD_WAIT 100000
enum ifs_status_err_code {
IFS_NO_ERROR = 0,
IFS_OTHER_THREAD_COULD_NOT_JOIN = 1,
IFS_INTERRUPTED_BEFORE_RENDEZVOUS = 2,
IFS_POWER_MGMT_INADEQUATE_FOR_SCAN = 3,
IFS_INVALID_CHUNK_RANGE = 4,
IFS_MISMATCH_ARGUMENTS_BETWEEN_THREADS = 5,
IFS_CORE_NOT_CAPABLE_CURRENTLY = 6,
IFS_UNASSIGNED_ERROR_CODE = 7,
IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT = 8,
IFS_INTERRUPTED_DURING_EXECUTION = 9,
};
static const char * const scan_test_status[] = {
[IFS_NO_ERROR] = "SCAN no error",
[IFS_OTHER_THREAD_COULD_NOT_JOIN] = "Other thread could not join.",
[IFS_INTERRUPTED_BEFORE_RENDEZVOUS] = "Interrupt occurred prior to SCAN coordination.",
[IFS_POWER_MGMT_INADEQUATE_FOR_SCAN] =
"Core Abort SCAN Response due to power management condition.",
[IFS_INVALID_CHUNK_RANGE] = "Non valid chunks in the range",
[IFS_MISMATCH_ARGUMENTS_BETWEEN_THREADS] = "Mismatch in arguments between threads T0/T1.",
[IFS_CORE_NOT_CAPABLE_CURRENTLY] = "Core not capable of performing SCAN currently",
[IFS_UNASSIGNED_ERROR_CODE] = "Unassigned error code 0x7",
[IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT] =
"Exceeded number of Logical Processors (LP) allowed to run Scan-At-Field concurrently",
[IFS_INTERRUPTED_DURING_EXECUTION] = "Interrupt occurred prior to SCAN start",
};
static void message_not_tested(struct device *dev, int cpu, union ifs_status status)
{
if (status.error_code < ARRAY_SIZE(scan_test_status)) {
dev_info(dev, "CPU(s) %*pbl: SCAN operation did not start. %s\n",
cpumask_pr_args(cpu_smt_mask(cpu)),
scan_test_status[status.error_code]);
} else if (status.error_code == IFS_SW_TIMEOUT) {
dev_info(dev, "CPU(s) %*pbl: software timeout during scan\n",
cpumask_pr_args(cpu_smt_mask(cpu)));
} else if (status.error_code == IFS_SW_PARTIAL_COMPLETION) {
dev_info(dev, "CPU(s) %*pbl: %s\n",
cpumask_pr_args(cpu_smt_mask(cpu)),
"Not all scan chunks were executed. Maximum forward progress retries exceeded");
} else {
dev_info(dev, "CPU(s) %*pbl: SCAN unknown status %llx\n",
cpumask_pr_args(cpu_smt_mask(cpu)), status.data);
}
}
static void message_fail(struct device *dev, int cpu, union ifs_status status)
{
/*
* control_error is set when the microcode runs into a problem
* loading the image from the reserved BIOS memory, or it has
* been corrupted. Reloading the image may fix this issue.
*/
if (status.control_error) {
dev_err(dev, "CPU(s) %*pbl: could not execute from loaded scan image\n",
cpumask_pr_args(cpu_smt_mask(cpu)));
}
/*
* signature_error is set when the output from the scan chains does not
* match the expected signature. This might be a transient problem (e.g.
* due to a bit flip from an alpha particle or neutron). If the problem
* repeats on a subsequent test, then it indicates an actual problem in
* the core being tested.
*/
if (status.signature_error) {
dev_err(dev, "CPU(s) %*pbl: test signature incorrect.\n",
cpumask_pr_args(cpu_smt_mask(cpu)));
}
}
static bool can_restart(union ifs_status status)
{
enum ifs_status_err_code err_code = status.error_code;
/* Signature for chunk is bad, or scan test failed */
if (status.signature_error || status.control_error)
return false;
switch (err_code) {
case IFS_NO_ERROR:
case IFS_OTHER_THREAD_COULD_NOT_JOIN:
case IFS_INTERRUPTED_BEFORE_RENDEZVOUS:
case IFS_POWER_MGMT_INADEQUATE_FOR_SCAN:
case IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT:
case IFS_INTERRUPTED_DURING_EXECUTION:
return true;
case IFS_INVALID_CHUNK_RANGE:
case IFS_MISMATCH_ARGUMENTS_BETWEEN_THREADS:
case IFS_CORE_NOT_CAPABLE_CURRENTLY:
case IFS_UNASSIGNED_ERROR_CODE:
break;
}
return false;
}
/*
* Execute the scan. Called "simultaneously" on all threads of a core
* at high priority using the stop_cpus mechanism.
*/
static int doscan(void *data)
{
int cpu = smp_processor_id();
u64 *msrs = data;
int first;
/* Only the first logical CPU on a core reports result */
first = cpumask_first(cpu_smt_mask(cpu));
/*
* This WRMSR will wait for other HT threads to also write
* to this MSR (at most for activate.delay cycles). Then it
* starts scan of each requested chunk. The core scan happens
* during the "execution" of the WRMSR. This instruction can
* take up to 200 milliseconds (in the case where all chunks
* are processed in a single pass) before it retires.
*/
wrmsrl(MSR_ACTIVATE_SCAN, msrs[0]);
if (cpu == first) {
/* Pass back the result of the scan */
rdmsrl(MSR_SCAN_STATUS, msrs[1]);
}
return 0;
}
/*
* Use stop_core_cpuslocked() to synchronize writing to MSR_ACTIVATE_SCAN
* on all threads of the core to be tested. Loop if necessary to complete
* run of all chunks. Include some defensive tests to make sure forward
* progress is made, and that the whole test completes in a reasonable time.
*/
static void ifs_test_core(int cpu, struct device *dev)
{
union ifs_scan activate;
union ifs_status status;
unsigned long timeout;
struct ifs_data *ifsd;
u64 msrvals[2];
int retries;
ifsd = ifs_get_data(dev);
activate.rsvd = 0;
activate.delay = IFS_THREAD_WAIT;
activate.sigmce = 0;
activate.start = 0;
activate.stop = ifsd->valid_chunks - 1;
timeout = jiffies + HZ / 2;
retries = MAX_IFS_RETRIES;
while (activate.start <= activate.stop) {
if (time_after(jiffies, timeout)) {
status.error_code = IFS_SW_TIMEOUT;
break;
}
msrvals[0] = activate.data;
stop_core_cpuslocked(cpu, doscan, msrvals);
status.data = msrvals[1];
trace_ifs_status(cpu, activate, status);
/* Some cases can be retried, give up for others */
if (!can_restart(status))
break;
if (status.chunk_num == activate.start) {
/* Check for forward progress */
if (--retries == 0) {
if (status.error_code == IFS_NO_ERROR)
status.error_code = IFS_SW_PARTIAL_COMPLETION;
break;
}
} else {
retries = MAX_IFS_RETRIES;
activate.start = status.chunk_num;
}
}
/* Update status for this core */
ifsd->scan_details = status.data;
if (status.control_error || status.signature_error) {
ifsd->status = SCAN_TEST_FAIL;
message_fail(dev, cpu, status);
} else if (status.error_code) {
ifsd->status = SCAN_NOT_TESTED;
message_not_tested(dev, cpu, status);
} else {
ifsd->status = SCAN_TEST_PASS;
}
}
/*
* Initiate a per-core test. It wakes up the cpu stopper threads on the target cpu
* and its sibling cpu. Once all sibling threads are awake, the scan test is
* executed and all sibling threads wait for it to finish.
*/
int do_core_test(int cpu, struct device *dev)
{
int ret = 0;
/* Prevent CPUs from being taken offline during the scan test */
cpus_read_lock();
if (!cpu_online(cpu)) {
dev_info(dev, "cannot test on the offline cpu %d\n", cpu);
ret = -EINVAL;
goto out;
}
ifs_test_core(cpu, dev);
out:
cpus_read_unlock();
return ret;
}
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2022 Intel Corporation. */
#include <linux/cpu.h>
#include <linux/delay.h>
#include <linux/fs.h>
#include <linux/semaphore.h>
#include <linux/slab.h>
#include "ifs.h"
/*
* Protects against simultaneous tests on multiple cores, or
* reloading the scan image file while a test is in progress
*/
DEFINE_SEMAPHORE(ifs_sem);
/*
* The sysfs interface to check additional details of the last test
* cat /sys/devices/platform/ifs/details
*/
static ssize_t details_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct ifs_data *ifsd = ifs_get_data(dev);
return sysfs_emit(buf, "%#llx\n", ifsd->scan_details);
}
static DEVICE_ATTR_RO(details);
static const char * const status_msg[] = {
[SCAN_NOT_TESTED] = "untested",
[SCAN_TEST_PASS] = "pass",
[SCAN_TEST_FAIL] = "fail"
};
/*
* The sysfs interface to check the test status:
* To check the status of the last test
* cat /sys/devices/platform/ifs/status
*/
static ssize_t status_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct ifs_data *ifsd = ifs_get_data(dev);
return sysfs_emit(buf, "%s\n", status_msg[ifsd->status]);
}
static DEVICE_ATTR_RO(status);
/*
* The sysfs interface for single core testing
* To start a test on, for example, cpu5:
* echo 5 > /sys/devices/platform/ifs/run_test
* To check the result:
* cat /sys/devices/platform/ifs/status
* The sibling thread on the same core gets tested at the same time.
*/
static ssize_t run_test_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct ifs_data *ifsd = ifs_get_data(dev);
unsigned int cpu;
int rc;
rc = kstrtouint(buf, 0, &cpu);
if (rc < 0 || cpu >= nr_cpu_ids)
return -EINVAL;
if (down_interruptible(&ifs_sem))
return -EINTR;
if (!ifsd->loaded)
rc = -EPERM;
else
rc = do_core_test(cpu, dev);
up(&ifs_sem);
return rc ? rc : count;
}
static DEVICE_ATTR_WO(run_test);
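For illustration only, a minimal userspace sketch of the flow described in the comments above: write a CPU number to run_test, then read status and details back. The write_str() and read_and_print() helpers are hypothetical, and the /sys/devices/platform/ifs/... paths are taken from the comments in this file rather than verified against a running system; the actual device path may differ.

/* Hypothetical helper, not part of the driver: trigger a scan on cpu5 and
 * report the outcome using the attributes defined above.
 */
#include <stdio.h>

static int write_str(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	if (fputs(val, f) == EOF) {
		fclose(f);
		return -1;
	}
	return fclose(f);
}

static void read_and_print(const char *label, const char *path)
{
	char buf[64] = "";
	FILE *f = fopen(path, "r");

	if (!f)
		return;
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", label, buf);
	fclose(f);
}

int main(void)
{
	if (write_str("/sys/devices/platform/ifs/run_test", "5"))
		return 1;
	read_and_print("status", "/sys/devices/platform/ifs/status");
	read_and_print("details", "/sys/devices/platform/ifs/details");
	return 0;
}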
/*
* Reload the IFS image when the user wants to install a new IFS image.
*/
static ssize_t reload_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct ifs_data *ifsd = ifs_get_data(dev);
bool res;
if (kstrtobool(buf, &res))
return -EINVAL;
if (!res)
return count;
if (down_interruptible(&ifs_sem))
return -EINTR;
ifs_load_firmware(dev);
up(&ifs_sem);
return ifsd->loaded ? count : -ENODEV;
}
static DEVICE_ATTR_WO(reload);
/*
* Display currently loaded IFS image version.
*/
static ssize_t image_version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ifs_data *ifsd = ifs_get_data(dev);
if (!ifsd->loaded)
return sysfs_emit(buf, "%s\n", "none");
else
return sysfs_emit(buf, "%#x\n", ifsd->loaded_version);
}
static DEVICE_ATTR_RO(image_version);
/* global scan sysfs attributes */
static struct attribute *plat_ifs_attrs[] = {
&dev_attr_details.attr,
&dev_attr_status.attr,
&dev_attr_run_test.attr,
&dev_attr_reload.attr,
&dev_attr_image_version.attr,
NULL
};
ATTRIBUTE_GROUPS(plat_ifs);
const struct attribute_group **ifs_get_groups(void)
{
return plat_ifs_groups;
}
......@@ -999,7 +999,7 @@ static umode_t etr3_is_visible(struct kobject *kobj,
struct attribute *attr,
int idx)
{
struct device *dev = kobj_to_dev(kobj);
struct pmc_dev *pmcdev = dev_get_drvdata(dev);
const struct pmc_reg_map *map = pmcdev->map;
u32 reg;
......
......@@ -221,19 +221,6 @@ int pmc_atom_read(int offset, u32 *value)
*value = pmc_reg_read(pmc, offset);
return 0;
}
EXPORT_SYMBOL_GPL(pmc_atom_read);
int pmc_atom_write(int offset, u32 value)
{
struct pmc_dev *pmc = &pmc_device;
if (!pmc->init)
return -ENODEV;
pmc_reg_write(pmc, offset, value);
return 0;
}
EXPORT_SYMBOL_GPL(pmc_atom_write);
static void pmc_power_off(void)
{
......
......@@ -1208,7 +1208,7 @@ static int __init samsung_backlight_init(struct samsung_laptop *samsung)
static umode_t samsung_sysfs_is_visible(struct kobject *kobj,
struct attribute *attr, int idx)
{
struct device *dev = kobj_to_dev(kobj);
struct samsung_laptop *samsung = dev_get_drvdata(dev);
bool ok = true;
......
......@@ -2353,7 +2353,7 @@ static struct attribute *toshiba_attributes[] = {
static umode_t toshiba_sysfs_is_visible(struct kobject *kobj,
struct attribute *attr, int idx)
{
struct device *dev = kobj_to_dev(kobj);
struct toshiba_acpi_dev *drv = dev_get_drvdata(dev);
bool exists = true;
......
// SPDX-License-Identifier: GPL-2.0
//
// Driver for the Winmate FM07 front-panel keys
//
// Author: Daniel Beer <daniel.beer@tirotech.co.nz>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/input.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>
#include <linux/dmi.h>
#include <linux/io.h>
#define DRV_NAME "winmate-fm07keys"
#define PORT_CMD 0x6c
#define PORT_DATA 0x68
#define EC_ADDR_KEYS 0x3b
#define EC_CMD_READ 0x80
#define BASE_KEY KEY_F13
#define NUM_KEYS 5
/* Typically we're done in fewer than 10 iterations */
#define LOOP_TIMEOUT 1000
static void fm07keys_poll(struct input_dev *input)
{
uint8_t k;
int i;
/* Flush output buffer */
i = 0;
while (inb(PORT_CMD) & 0x01) {
if (++i >= LOOP_TIMEOUT)
goto timeout;
inb(PORT_DATA);
}
/* Send request and wait for write completion */
outb(EC_CMD_READ, PORT_CMD);
i = 0;
while (inb(PORT_CMD) & 0x02)
if (++i >= LOOP_TIMEOUT)
goto timeout;
outb(EC_ADDR_KEYS, PORT_DATA);
i = 0;
while (inb(PORT_CMD) & 0x02)
if (++i >= LOOP_TIMEOUT)
goto timeout;
/* Wait for data ready */
i = 0;
while (!(inb(PORT_CMD) & 0x01))
if (++i >= LOOP_TIMEOUT)
goto timeout;
k = inb(PORT_DATA);
/* Notify of new key states */
for (i = 0; i < NUM_KEYS; i++) {
input_report_key(input, BASE_KEY + i, (~k) & 1);
k >>= 1;
}
input_sync(input);
return;
timeout:
dev_warn_ratelimited(&input->dev, "timeout polling IO memory\n");
}
static int fm07keys_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct input_dev *input;
int ret;
int i;
input = devm_input_allocate_device(dev);
if (!input) {
dev_err(dev, "no memory for input device\n");
return -ENOMEM;
}
if (!devm_request_region(dev, PORT_CMD, 1, "Winmate FM07 EC"))
return -EBUSY;
if (!devm_request_region(dev, PORT_DATA, 1, "Winmate FM07 EC"))
return -EBUSY;
input->name = "Winmate FM07 front-panel keys";
input->phys = DRV_NAME "/input0";
input->id.bustype = BUS_HOST;
input->id.vendor = 0x0001;
input->id.product = 0x0001;
input->id.version = 0x0100;
__set_bit(EV_KEY, input->evbit);
for (i = 0; i < NUM_KEYS; i++)
__set_bit(BASE_KEY + i, input->keybit);
ret = input_setup_polling(input, fm07keys_poll);
if (ret) {
dev_err(dev, "unable to set up polling, err=%d\n", ret);
return ret;
}
/* These are silicone buttons. They can't be pressed in rapid
* succession, and in testing a 50 Hz sampling rate proved adequate
* without missing any events.
*/
input_set_poll_interval(input, 20);
ret = input_register_device(input);
if (ret) {
dev_err(dev, "unable to register polled device, err=%d\n",
ret);
return ret;
}
input_sync(input);
return 0;
}
static struct platform_driver fm07keys_driver = {
.probe = fm07keys_probe,
.driver = {
.name = DRV_NAME
},
};
static struct platform_device *dev;
static const struct dmi_system_id fm07keys_dmi_table[] __initconst = {
{
/* FM07 and FM07P */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Winmate Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "IP30"),
},
},
{ }
};
MODULE_DEVICE_TABLE(dmi, fm07keys_dmi_table);
static int __init fm07keys_init(void)
{
int ret;
if (!dmi_check_system(fm07keys_dmi_table))
return -ENODEV;
ret = platform_driver_register(&fm07keys_driver);
if (ret) {
pr_err("fm07keys: failed to register driver, err=%d\n", ret);
return ret;
}
dev = platform_device_register_simple(DRV_NAME, -1, NULL, 0);
if (IS_ERR(dev)) {
ret = PTR_ERR(dev);
pr_err("fm07keys: failed to allocate device, err = %d\n", ret);
goto fail_register;
}
return 0;
fail_register:
platform_driver_unregister(&fm07keys_driver);
return ret;
}
static void __exit fm07keys_exit(void)
{
platform_driver_unregister(&fm07keys_driver);
platform_device_unregister(dev);
}
module_init(fm07keys_init);
module_exit(fm07keys_exit);
MODULE_AUTHOR("Daniel Beer <daniel.beer@tirotech.co.nz>");
MODULE_DESCRIPTION("Winmate FM07 front-panel keys driver");
MODULE_LICENSE("GPL");
......@@ -1308,21 +1308,20 @@ acpi_wmi_ec_space_handler(u32 function, acpi_physical_address address,
static void acpi_wmi_notify_handler(acpi_handle handle, u32 event,
void *context)
{
struct wmi_block *wblock = NULL, *iter;
list_for_each_entry(iter, &wmi_block_list, list) {
struct guid_block *block = &iter->gblock;
if (iter->acpi_device->handle == handle &&
(block->flags & ACPI_WMI_EVENT) &&
(block->notify_id == event)) {
wblock = iter;
break;
}
}
if (!wblock)
return;
/* If a driver is bound, then notify the driver. */
......
......@@ -216,6 +216,8 @@ struct mlxreg_core_platform_data {
* @mask_low: low aggregation interrupt common mask;
* @deferred_nr: I2C adapter number that must exist prior to probing;
* @shift_nr: I2C adapter numbers must be incremented by this value;
* @handle: handle to be passed by callback;
* @completion_notify: callback to notify when platform driver probing is done;
*/
struct mlxreg_core_hotplug_platform_data {
struct mlxreg_core_item *items;
......@@ -228,6 +230,8 @@ struct mlxreg_core_hotplug_platform_data {
u32 mask_low;
int deferred_nr;
int shift_nr;
void *handle;
int (*completion_notify)(void *handle, int id);
};
#endif /* __LINUX_PLATFORM_DATA_MLXREG_H */
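As a rough illustration of how the two new fields are meant to be used, a hypothetical platform driver could fill them in as sketched below; my_probe_done() and my_hotplug_data are invented names for this example, and the pre-existing fields are left out.

/* Hypothetical example, not part of this patch: the platform code stores a
 * cookie in @handle and is called back through @completion_notify once the
 * hotplug driver has finished probing.
 */
#include <linux/platform_data/mlxreg.h>
#include <linux/printk.h>

static int my_probe_done(void *handle, int id)
{
	struct mlxreg_core_hotplug_platform_data *pdata = handle;

	pr_info("mlxreg hotplug data %p: instance %d probed\n", pdata, id);
	return 0;
}

static struct mlxreg_core_hotplug_platform_data my_hotplug_data = {
	/* .items, .mask_low and the other existing fields omitted */
	.handle = &my_hotplug_data,
	.completion_notify = my_probe_done,
};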
......@@ -144,6 +144,5 @@
#define SLEEP_ENABLE 0x2000
extern int pmc_atom_read(int offset, u32 *value);
extern int pmc_atom_write(int offset, u32 value);
#endif /* PMC_ATOM_H */
......@@ -124,6 +124,22 @@ int stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus);
*/
int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus);
/**
* stop_core_cpuslocked: - stop all threads on just one core
* @cpu: any cpu in the targeted core
* @fn: the function to run
* @data: the data ptr for @fn()
*
* Same as above, but instead of every CPU, only the logical CPUs of a
* single core are affected.
*
* Context: Must be called from within a cpus_read_lock() protected region.
*
* Return: 0 if all executions of @fn returned 0, or a non-zero return
* value if any execution returned non-zero.
*/
int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data);
int stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data,
const struct cpumask *cpus);
#else /* CONFIG_SMP || CONFIG_HOTPLUG_CPU */
......
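ifs_test_core() earlier in this series is the in-tree user of this interface; purely as a sketch of the calling pattern described in the kernel-doc above, with hypothetical names my_core_fn() and run_on_core():

/* Hypothetical example, not from this patch: run my_core_fn() on every SMT
 * sibling of the core containing @cpu, holding cpus_read_lock() as the
 * kernel-doc above requires.
 */
#include <linux/cpu.h>
#include <linux/stop_machine.h>

static int my_core_fn(void *data)
{
	/* Runs once on each logical CPU of the target core. */
	return 0;
}

static int run_on_core(unsigned int cpu)
{
	int ret;

	cpus_read_lock();
	ret = stop_core_cpuslocked(cpu, my_core_fn, NULL);
	cpus_read_unlock();

	return ret;
}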
/* SPDX-License-Identifier: GPL-2.0 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM intel_ifs
#if !defined(_TRACE_IFS_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_IFS_H
#include <linux/ktime.h>
#include <linux/tracepoint.h>
TRACE_EVENT(ifs_status,
TP_PROTO(int cpu, union ifs_scan activate, union ifs_status status),
TP_ARGS(cpu, activate, status),
TP_STRUCT__entry(
__field( u64, status )
__field( int, cpu )
__field( u8, start )
__field( u8, stop )
),
TP_fast_assign(
__entry->cpu = cpu;
__entry->start = activate.start;
__entry->stop = activate.stop;
__entry->status = status.data;
),
TP_printk("cpu: %d, start: %.2x, stop: %.2x, status: %llx",
__entry->cpu,
__entry->start,
__entry->stop,
__entry->status)
);
#endif /* _TRACE_IFS_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
......@@ -633,6 +633,27 @@ int stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus)
}
EXPORT_SYMBOL_GPL(stop_machine);
#ifdef CONFIG_SCHED_SMT
int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data)
{
const struct cpumask *smt_mask = cpu_smt_mask(cpu);
struct multi_stop_data msdata = {
.fn = fn,
.data = data,
.num_threads = cpumask_weight(smt_mask),
.active_cpus = smt_mask,
};
lockdep_assert_cpus_held();
/* Set the initial state and stop all online cpus. */
set_state(&msdata, MULTI_STOP_PREPARE);
return stop_cpus(smt_mask, multi_cpu_stop, &msdata);
}
EXPORT_SYMBOL_GPL(stop_core_cpuslocked);
#endif
/**
* stop_machine_from_inactive_cpu - stop_machine() from inactive CPU
* @fn: the function to run
......
......@@ -190,7 +190,7 @@ static int handle_event(struct nl_msg *n, void *arg)
struct genlmsghdr *genlhdr = genlmsg_hdr(nlh);
struct nlattr *attrs[THERMAL_GENL_ATTR_MAX + 1];
int ret;
struct perf_cap perf_cap;
struct perf_cap perf_cap = {0};
ret = genlmsg_parse(nlh, 0, attrs, THERMAL_GENL_ATTR_MAX, NULL);
......
......@@ -1892,6 +1892,12 @@ static void set_fact_for_cpu(int cpu, void *arg1, void *arg2, void *arg3,
int ret;
int status = *(int *)arg4;
if (status && no_turbo()) {
isst_display_error_info_message(1, "Turbo mode is disabled", 0, 0);
ret = -1;
goto disp_results;
}
ret = isst_get_ctdp_levels(cpu, &pkg_dev);
if (ret) {
isst_display_error_info_message(1, "Failed to get number of levels", 0, 0);
......