Commit 8443516d authored by Linus Torvalds


Merge tag 'platform-drivers-x86-v5.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86

Pull x86 platform driver updates from Hans de Goede:
 "This includes some small changes to kernel/stop_machine.c and arch/x86
  which are deps of the new Intel IFS support.

  Highlights:

   - New drivers:
       - Intel "In Field Scan" (IFS) support
       - Winmate FM07/FM07P buttons
       - Mellanox SN2201 support

   -  AMD PMC driver enhancements

   -  Lots of various other small fixes and hardware-id additions"

* tag 'platform-drivers-x86-v5.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86: (54 commits)
  platform/x86/intel/ifs: Add CPU_SUP_INTEL dependency
  platform/x86: intel_cht_int33fe: Set driver data
  platform/x86: intel-hid: fix _DSM function index handling
  platform/x86: toshiba_acpi: use kobj_to_dev()
  platform/x86: samsung-laptop: use kobj_to_dev()
  platform/x86: gigabyte-wmi: Add support for Z490 AORUS ELITE AC and X570 AORUS ELITE WIFI
  tools/power/x86/intel-speed-select: Fix warning for perf_cap.cpu
  tools/power/x86/intel-speed-select: Display error on turbo mode disabled
  Documentation: In-Field Scan
  platform/x86/intel/ifs: add ABI documentation for IFS
  trace: platform/x86/intel/ifs: Add trace point to track Intel IFS operations
  platform/x86/intel/ifs: Add IFS sysfs interface
  platform/x86/intel/ifs: Add scan test support
  platform/x86/intel/ifs: Authenticate and copy to secured memory
  platform/x86/intel/ifs: Check IFS Image sanity
  platform/x86/intel/ifs: Read IFS firmware image
  platform/x86/intel/ifs: Add stub driver for In-Field Scan
  stop_machine: Add stop_core_cpuslocked() for per-core operations
  x86/msr-index: Define INTEGRITY_CAPABILITIES MSR
  x86/microcode/intel: Expose collect_cpu_info_early() for IFS
  ...
parents cfe1cb01 badb81a5
...@@ -467,3 +467,39 @@ Description: These files provide the maximum power required for line card
feeding and line card configuration Id.
The files are read only.
What: /sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/phy_reset
Date: May 2022
KernelVersion: 5.19
Contact: Vadim Pasternak <vadimp@mellanox.com>
Description: This file allows resetting the PHY 88E1548 by setting the attribute to 0
in case of abnormal PHY behavior.
Expected behavior:
When phy_reset is written 1, all PHY 88E1548 devices are released
from the reset state; when written 0, they are held in the reset state.
The files are read/write.
What: /sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/mac_reset
Date: May 2022
KernelVersion: 5.19
Contact: Vadim Pasternak <vadimp@mellanox.com>
Description: This file allows resetting the ASIC MT52132 by setting the attribute to 0
in case of abnormal ASIC behavior.
Expected behavior:
When mac_reset is written 1, the ASIC MT52132 is released
from the reset state; when written 0, it is held in the reset state.
The files are read/write.
What: /sys/devices/platform/mlxplat/mlxreg-io/hwmon/hwmon*/qsfp_pwr_good
Date: May 2022
KernelVersion: 5.19
Contact: Vadim Pasternak <vadimp@mellanox.com>
Description: This file shows the QSFP ports power status. The value is set to 0
when at least one QSFP port is plugged in, and to 1 when no
QSFP ports are plugged in.
The possible values are:
0 - Power good, 1 - Not power good.
The files are read only.
What: /sys/devices/virtual/misc/intel_ifs_<N>/run_test
Date: April 21 2022
KernelVersion: 5.19
Contact: "Jithu Joseph" <jithu.joseph@intel.com>
Description: Write <cpu#> to trigger IFS test for one online core.
Note that the test is per core. The cpu# can be
for any thread on the core. Running on one thread
completes the test for the core containing that thread.
Example: to test the core containing cpu5: echo 5 >
/sys/devices/virtual/misc/intel_ifs_<N>/run_test
What: /sys/devices/virtual/misc/intel_ifs_<N>/status
Date: April 21 2022
KernelVersion: 5.19
Contact: "Jithu Joseph" <jithu.joseph@intel.com>
Description: The status of the last test. It can be one of "pass", "fail"
or "untested".
What: /sys/devices/virtual/misc/intel_ifs_<N>/details
Date: April 21 2022
KernelVersion: 5.19
Contact: "Jithu Joseph" <jithu.joseph@intel.com>
Description: Additional information regarding the last test. The details file reports
the hex value of the SCAN_STATUS MSR. Note that the error_code field
may contain driver defined software code not defined in the Intel SDM.
What: /sys/devices/virtual/misc/intel_ifs_<N>/image_version
Date: April 21 2022
KernelVersion: 5.19
Contact: "Jithu Joseph" <jithu.joseph@intel.com>
Description: Version (hexadecimal) of the loaded IFS binary image. If no scan image
is loaded, reports "none".
What: /sys/devices/virtual/misc/intel_ifs_<N>/reload
Date: April 21 2022
KernelVersion: 5.19
Contact: "Jithu Joseph" <jithu.joseph@intel.com>
Description: Write "1" (or "y" or "Y") to reload the IFS image from
/lib/firmware/intel/ifs/ff-mm-ss.scan.
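As an editorial illustration only (not part of this patch), the run_test/status pair documented above could be driven from user space roughly as follows; the instance number 0 and cpu number 5 are assumed purely for the example:

/* Hypothetical user-space sketch: trigger an IFS scan on the core
 * containing cpu5 (device instance 0 assumed) and read back the result.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char status[32] = "";
	int fd;

	fd = open("/sys/devices/virtual/misc/intel_ifs_0/run_test", O_WRONLY);
	if (fd < 0)
		return 1;
	write(fd, "5", 1);
	close(fd);

	fd = open("/sys/devices/virtual/misc/intel_ifs_0/status", O_RDONLY);
	if (fd < 0)
		return 1;
	read(fd, status, sizeof(status) - 1);
	close(fd);
	printf("IFS status: %s\n", status);
	return 0;
}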
.. SPDX-License-Identifier: GPL-2.0
.. kernel-doc:: drivers/platform/x86/intel/ifs/ifs.h
...@@ -36,6 +36,7 @@ x86-specific Documentation
usb-legacy-support
i386/index
x86_64/index
ifs
sva
sgx
features
......
...@@ -9863,6 +9863,14 @@ B: https://bugzilla.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux.git
F: drivers/idle/intel_idle.c
INTEL IN FIELD SCAN (IFS) DEVICE
M: Jithu Joseph <jithu.joseph@intel.com>
R: Ashok Raj <ashok.raj@intel.com>
R: Tony Luck <tony.luck@intel.com>
S: Maintained
F: drivers/platform/x86/intel/ifs
F: include/trace/events/intel_ifs.h
INTEL INTEGRATED SENSOR HUB DRIVER
M: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
M: Jiri Kosina <jikos@kernel.org>
......
...@@ -76,4 +76,22 @@ static inline void init_ia32_feat_ctl(struct cpuinfo_x86 *c) {}
extern __noendbr void cet_disable(void);
struct ucode_cpu_info;
int intel_cpu_collect_info(struct ucode_cpu_info *uci);
static inline bool intel_cpu_signatures_match(unsigned int s1, unsigned int p1,
unsigned int s2, unsigned int p2)
{
if (s1 != s2)
return false;
/* Processor flags are either both 0 ... */
if (!p1 && !p2)
return true;
/* ... or they intersect. */
return p1 & p2;
}
#endif /* _ASM_X86_CPU_H */
...@@ -76,6 +76,8 @@
/* Abbreviated from Intel SDM name IA32_CORE_CAPABILITIES */
#define MSR_IA32_CORE_CAPS 0x000000cf
#define MSR_IA32_CORE_CAPS_INTEGRITY_CAPS_BIT 2
#define MSR_IA32_CORE_CAPS_INTEGRITY_CAPS BIT(MSR_IA32_CORE_CAPS_INTEGRITY_CAPS_BIT)
#define MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT_BIT 5
#define MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT BIT(MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT_BIT)
...@@ -154,6 +156,11 @@
#define MSR_IA32_POWER_CTL 0x000001fc
#define MSR_IA32_POWER_CTL_BIT_EE 19
/* Abbreviated from Intel SDM name IA32_INTEGRITY_CAPABILITIES */
#define MSR_INTEGRITY_CAPS 0x000002d9
#define MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT 4
#define MSR_INTEGRITY_CAPS_PERIODIC_BIST BIT(MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT)
#define MSR_LBR_NHM_FROM 0x00000680
#define MSR_LBR_NHM_TO 0x000006c0
#define MSR_LBR_CORE_FROM 0x00000040
......
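The two MSR definitions added above are what the IFS driver keys off. As a rough, hypothetical sketch (not from this patch), a consumer could test the periodic BIST capability like this, using the standard rdmsrl() helper:

#include <asm/msr.h>
#include <asm/msr-index.h>
#include <linux/types.h>

/* Hypothetical helper, not part of the patch: report whether the CPU
 * advertises periodic BIST in IA32_INTEGRITY_CAPABILITIES. A real caller
 * should first confirm MSR_IA32_CORE_CAPS_INTEGRITY_CAPS is set in
 * MSR_IA32_CORE_CAPS, since the integrity MSR only exists in that case.
 */
static bool cpu_has_periodic_bist(void)
{
	u64 caps;

	rdmsrl(MSR_INTEGRITY_CAPS, caps);
	return caps & MSR_INTEGRITY_CAPS_PERIODIC_BIST;
}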
...@@ -31,9 +31,22 @@ enum hsmp_message_ids {
HSMP_GET_CCLK_THROTTLE_LIMIT, /* 10h Get CCLK frequency limit in socket */
HSMP_GET_C0_PERCENT, /* 11h Get average C0 residency in socket */
HSMP_SET_NBIO_DPM_LEVEL, /* 12h Set max/min LCLK DPM Level for a given NBIO */
HSMP_GET_NBIO_DPM_LEVEL, /* 13h Get LCLK DPM level min and max for a given NBIO */
HSMP_GET_DDR_BANDWIDTH, /* 14h Get theoretical maximum and current DDR Bandwidth */
HSMP_GET_TEMP_MONITOR, /* 15h Get socket temperature */
HSMP_GET_DIMM_TEMP_RANGE, /* 16h Get per-DIMM temperature range and refresh rate */
HSMP_GET_DIMM_POWER, /* 17h Get per-DIMM power consumption */
HSMP_GET_DIMM_THERMAL, /* 18h Get per-DIMM thermal sensors */
HSMP_GET_SOCKET_FREQ_LIMIT, /* 19h Get current active frequency per socket */
HSMP_GET_CCLK_CORE_LIMIT, /* 1Ah Get CCLK frequency limit per core */
HSMP_GET_RAILS_SVI, /* 1Bh Get SVI-based Telemetry for all rails */
HSMP_GET_SOCKET_FMAX_FMIN, /* 1Ch Get Fmax and Fmin per socket */
HSMP_GET_IOLINK_BANDWITH, /* 1Dh Get current bandwidth on IO Link */
HSMP_GET_XGMI_BANDWITH, /* 1Eh Get current bandwidth on xGMI Link */
HSMP_SET_GMI3_WIDTH, /* 1Fh Set max and min GMI3 Link width */
HSMP_SET_PCI_RATE, /* 20h Control link rate on PCIe devices */
HSMP_SET_POWER_MODE, /* 21h Select power efficiency profile policy */
HSMP_SET_PSTATE_MAX_MIN, /* 22h Set the max and min DF P-State */
HSMP_MSG_ID_MAX,
};
...@@ -175,8 +188,12 @@ static const struct hsmp_msg_desc hsmp_msg_desc_table[] = {
*/
{1, 0, HSMP_SET},
/*
* HSMP_GET_NBIO_DPM_LEVEL, num_args = 1, response_sz = 1
* input: args[0] = nbioid[23:16]
* output: args[0] = max dpm level[15:8] + min dpm level[7:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_GET_DDR_BANDWIDTH, num_args = 0, response_sz = 1
...@@ -191,6 +208,93 @@ static const struct hsmp_msg_desc hsmp_msg_desc_table[] = {
* [7:5] fractional part
*/
{0, 1, HSMP_GET},
/*
* HSMP_GET_DIMM_TEMP_RANGE, num_args = 1, response_sz = 1
* input: args[0] = DIMM address[7:0]
* output: args[0] = refresh rate[3] + temperature range[2:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_GET_DIMM_POWER, num_args = 1, response_sz = 1
* input: args[0] = DIMM address[7:0]
* output: args[0] = DIMM power in mW[31:17] + update rate in ms[16:8] +
* DIMM address[7:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_GET_DIMM_THERMAL, num_args = 1, response_sz = 1
* input: args[0] = DIMM address[7:0]
* output: args[0] = temperature in degrees celsius[31:21] + update rate in ms[16:8] +
* DIMM address[7:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_GET_SOCKET_FREQ_LIMIT, num_args = 0, response_sz = 1
* output: args[0] = frequency in MHz[31:16] + frequency source[15:0]
*/
{0, 1, HSMP_GET},
/*
* HSMP_GET_CCLK_CORE_LIMIT, num_args = 1, response_sz = 1
* input: args[0] = apic id [31:0]
* output: args[0] = frequency in MHz[31:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_GET_RAILS_SVI, num_args = 0, response_sz = 1
* output: args[0] = power in mW[31:0]
*/
{0, 1, HSMP_GET},
/*
* HSMP_GET_SOCKET_FMAX_FMIN, num_args = 0, response_sz = 1
* output: args[0] = fmax in MHz[31:16] + fmin in MHz[15:0]
*/
{0, 1, HSMP_GET},
/*
* HSMP_GET_IOLINK_BANDWITH, num_args = 1, response_sz = 1
* input: args[0] = link id[15:8] + bw type[2:0]
* output: args[0] = io bandwidth in Mbps[31:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_GET_XGMI_BANDWITH, num_args = 1, response_sz = 1
* input: args[0] = link id[15:8] + bw type[2:0]
* output: args[0] = xgmi bandwidth in Mbps[31:0]
*/
{1, 1, HSMP_GET},
/*
* HSMP_SET_GMI3_WIDTH, num_args = 1, response_sz = 0
* input: args[0] = min link width[15:8] + max link width[7:0]
*/
{1, 0, HSMP_SET},
/*
* HSMP_SET_PCI_RATE, num_args = 1, response_sz = 1
* input: args[0] = link rate control value
* output: args[0] = previous link rate control value
*/
{1, 1, HSMP_SET},
/*
* HSMP_SET_POWER_MODE, num_args = 1, response_sz = 0
* input: args[0] = power efficiency mode[2:0]
*/
{1, 0, HSMP_SET},
/*
* HSMP_SET_PSTATE_MAX_MIN, num_args = 1, response_sz = 0
* input: args[0] = min df pstate[15:8] + max df pstate[7:0]
*/
{1, 0, HSMP_SET},
};
/* Reset to default packing */
......
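The message descriptors above document each response word only as a bit layout. Purely as an illustration (this helper is hypothetical and not part of the patch), decoding the HSMP_GET_DIMM_POWER response described above might look like:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

/* Hypothetical decode of the HSMP_GET_DIMM_POWER response word, following
 * the bit layout in the descriptor comment above; names are illustrative.
 */
struct dimm_power {
	u32 power_mw;	/* [31:17] DIMM power in mW */
	u32 update_ms;	/* [16:8]  update rate in ms */
	u32 dimm_addr;	/* [7:0]   DIMM address */
};

static struct dimm_power decode_dimm_power(u32 arg0)
{
	struct dimm_power p = {
		.power_mw  = FIELD_GET(GENMASK(31, 17), arg0),
		.update_ms = FIELD_GET(GENMASK(16, 8), arg0),
		.dimm_addr = FIELD_GET(GENMASK(7, 0), arg0),
	};

	return p;
}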
...@@ -184,6 +184,38 @@ static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
return false;
}
int intel_cpu_collect_info(struct ucode_cpu_info *uci)
{
unsigned int val[2];
unsigned int family, model;
struct cpu_signature csig = { 0 };
unsigned int eax, ebx, ecx, edx;
memset(uci, 0, sizeof(*uci));
eax = 0x00000001;
ecx = 0;
native_cpuid(&eax, &ebx, &ecx, &edx);
csig.sig = eax;
family = x86_family(eax);
model = x86_model(eax);
if (model >= 5 || family > 6) {
/* get processor flags from MSR 0x17 */
native_rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
csig.pf = 1 << ((val[1] >> 18) & 7);
}
csig.rev = intel_get_microcode_revision();
uci->cpu_sig = csig;
uci->valid = 1;
return 0;
}
EXPORT_SYMBOL_GPL(intel_cpu_collect_info);
static void early_init_intel(struct cpuinfo_x86 *c)
{
u64 misc_enable;
......
...@@ -45,20 +45,6 @@ static struct microcode_intel *intel_ucode_patch;
/* last level cache size per core */
static int llc_size_per_core;
static inline bool cpu_signatures_match(unsigned int s1, unsigned int p1,
unsigned int s2, unsigned int p2)
{
if (s1 != s2)
return false;
/* Processor flags are either both 0 ... */
if (!p1 && !p2)
return true;
/* ... or they intersect. */
return p1 & p2;
}
/*
* Returns 1 if update has been found, 0 otherwise.
*/
...@@ -69,7 +55,7 @@ static int find_matching_signature(void *mc, unsigned int csig, int cpf)
struct extended_signature *ext_sig;
int i;
if (intel_cpu_signatures_match(csig, cpf, mc_hdr->sig, mc_hdr->pf))
return 1;
/* Look for ext. headers: */
...@@ -80,7 +66,7 @@ static int find_matching_signature(void *mc, unsigned int csig, int cpf)
ext_sig = (void *)ext_hdr + EXT_HEADER_SIZE;
for (i = 0; i < ext_hdr->count; i++) {
if (intel_cpu_signatures_match(csig, cpf, ext_sig->sig, ext_sig->pf))
return 1;
ext_sig++;
}
...@@ -342,37 +328,6 @@ scan_microcode(void *data, size_t size, struct ucode_cpu_info *uci, bool save)
return patch;
}
static int collect_cpu_info_early(struct ucode_cpu_info *uci)
{
unsigned int val[2];
unsigned int family, model;
struct cpu_signature csig = { 0 };
unsigned int eax, ebx, ecx, edx;
memset(uci, 0, sizeof(*uci));
eax = 0x00000001;
ecx = 0;
native_cpuid(&eax, &ebx, &ecx, &edx);
csig.sig = eax;
family = x86_family(eax);
model = x86_model(eax);
if ((model >= 5) || (family > 6)) {
/* get processor flags from MSR 0x17 */
native_rdmsr(MSR_IA32_PLATFORM_ID, val[0], val[1]);
csig.pf = 1 << ((val[1] >> 18) & 7);
}
csig.rev = intel_get_microcode_revision();
uci->cpu_sig = csig;
uci->valid = 1;
return 0;
}
static void show_saved_mc(void)
{
#ifdef DEBUG
...@@ -386,7 +341,7 @@ static void show_saved_mc(void)
return;
}
intel_cpu_collect_info(&uci);
sig = uci.cpu_sig.sig;
pf = uci.cpu_sig.pf;
...@@ -502,7 +457,7 @@ void show_ucode_info_early(void)
struct ucode_cpu_info uci;
if (delay_ucode_info) {
intel_cpu_collect_info(&uci);
print_ucode_info(&uci, current_mc_date);
delay_ucode_info = 0;
}
...@@ -604,7 +559,7 @@ int __init save_microcode_in_initrd_intel(void)
if (!(cp.data && cp.size))
return 0;
intel_cpu_collect_info(&uci);
scan_microcode(cp.data, cp.size, &uci, true);
...@@ -637,7 +592,7 @@ static struct microcode_intel *__load_ucode_intel(struct ucode_cpu_info *uci)
if (!(cp.data && cp.size))
return NULL;
intel_cpu_collect_info(uci);
return scan_microcode(cp.data, cp.size, uci, false);
}
...@@ -712,7 +667,7 @@ void reload_ucode_intel(void)
struct microcode_intel *p;
struct ucode_cpu_info uci;
intel_cpu_collect_info(&uci);
p = find_patch(&uci);
if (!p)
......
...@@ -78,4 +78,21 @@ config MLXBF_PMC
to performance monitoring counters within various blocks in the
Mellanox BlueField SoC via a sysfs interface.
config NVSW_SN2201
tristate "Nvidia SN2201 platform driver support"
depends on REGMAP
depends on HWMON
depends on I2C
depends on REGMAP_I2C
help
This driver provides support for the Nvidia SN2201 platform.
The SN2201 is a highly integrated one-rack-unit system with
L3 management switches. It has 48 x 1Gbps RJ45 + 4 x 100G QSFP28
ports in a compact 1RU form factor. The system also includes a
serial port (RS-232 interface), an OOB port (1G/100M MDI interface)
and USB ports for management functions.
The processor used on the SN2201 is an Intel Atom® Processor C
Series C3338R, which is part of the Denverton product family.
The system is equipped with an Nvidia® Spectrum-1 32x100GbE
Ethernet switch.
endif # MELLANOX_PLATFORM
...@@ -9,3 +9,4 @@ obj-$(CONFIG_MLXBF_TMFIFO) += mlxbf-tmfifo.o
obj-$(CONFIG_MLXREG_HOTPLUG) += mlxreg-hotplug.o
obj-$(CONFIG_MLXREG_IO) += mlxreg-io.o
obj-$(CONFIG_MLXREG_LC) += mlxreg-lc.o
obj-$(CONFIG_NVSW_SN2201) += nvsw-sn2201.o
// SPDX-License-Identifier: GPL-2.0+
/*
* Nvidia sn2201 driver
*
* Copyright (C) 2022 Nvidia Technologies Ltd.
*/
#include <linux/device.h>
#include <linux/i2c.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/gpio.h>
#include <linux/module.h>
#include <linux/platform_data/mlxcpld.h>
#include <linux/platform_data/mlxreg.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
/* SN2201 CPLD register offset. */
#define NVSW_SN2201_CPLD_LPC_I2C_BASE_ADRR 0x2000
#define NVSW_SN2201_CPLD_LPC_IO_RANGE 0x100
#define NVSW_SN2201_HW_VER_ID_OFFSET 0x00
#define NVSW_SN2201_BOARD_ID_OFFSET 0x01
#define NVSW_SN2201_CPLD_VER_OFFSET 0x02
#define NVSW_SN2201_CPLD_MVER_OFFSET 0x03
#define NVSW_SN2201_CPLD_ID_OFFSET 0x04
#define NVSW_SN2201_CPLD_PN_OFFSET 0x05
#define NVSW_SN2201_CPLD_PN1_OFFSET 0x06
#define NVSW_SN2201_PSU_CTRL_OFFSET 0x0a
#define NVSW_SN2201_QSFP28_STATUS_OFFSET 0x0b
#define NVSW_SN2201_QSFP28_INT_STATUS_OFFSET 0x0c
#define NVSW_SN2201_QSFP28_LP_STATUS_OFFSET 0x0d
#define NVSW_SN2201_QSFP28_RST_STATUS_OFFSET 0x0e
#define NVSW_SN2201_SYS_STATUS_OFFSET 0x0f
#define NVSW_SN2201_FRONT_SYS_LED_CTRL_OFFSET 0x10
#define NVSW_SN2201_FRONT_PSU_LED_CTRL_OFFSET 0x12
#define NVSW_SN2201_FRONT_UID_LED_CTRL_OFFSET 0x13
#define NVSW_SN2201_QSFP28_LED_TEST_STATUS_OFFSET 0x14
#define NVSW_SN2201_SYS_RST_STATUS_OFFSET 0x15
#define NVSW_SN2201_SYS_INT_STATUS_OFFSET 0x21
#define NVSW_SN2201_SYS_INT_MASK_OFFSET 0x22
#define NVSW_SN2201_ASIC_STATUS_OFFSET 0x24
#define NVSW_SN2201_ASIC_EVENT_OFFSET 0x25
#define NVSW_SN2201_ASIC_MAKS_OFFSET 0x26
#define NVSW_SN2201_THML_STATUS_OFFSET 0x27
#define NVSW_SN2201_THML_EVENT_OFFSET 0x28
#define NVSW_SN2201_THML_MASK_OFFSET 0x29
#define NVSW_SN2201_PS_ALT_STATUS_OFFSET 0x2a
#define NVSW_SN2201_PS_ALT_EVENT_OFFSET 0x2b
#define NVSW_SN2201_PS_ALT_MASK_OFFSET 0x2c
#define NVSW_SN2201_PS_PRSNT_STATUS_OFFSET 0x30
#define NVSW_SN2201_PS_PRSNT_EVENT_OFFSET 0x31
#define NVSW_SN2201_PS_PRSNT_MASK_OFFSET 0x32
#define NVSW_SN2201_PS_DC_OK_STATUS_OFFSET 0x33
#define NVSW_SN2201_PS_DC_OK_EVENT_OFFSET 0x34
#define NVSW_SN2201_PS_DC_OK_MASK_OFFSET 0x35
#define NVSW_SN2201_RST_CAUSE1_OFFSET 0x36
#define NVSW_SN2201_RST_CAUSE2_OFFSET 0x37
#define NVSW_SN2201_RST_SW_CTRL_OFFSET 0x38
#define NVSW_SN2201_FAN_PRSNT_STATUS_OFFSET 0x3a
#define NVSW_SN2201_FAN_PRSNT_EVENT_OFFSET 0x3b
#define NVSW_SN2201_FAN_PRSNT_MASK_OFFSET 0x3c
#define NVSW_SN2201_WD_TMR_OFFSET_LSB 0x40
#define NVSW_SN2201_WD_TMR_OFFSET_MSB 0x41
#define NVSW_SN2201_WD_ACT_OFFSET 0x42
#define NVSW_SN2201_FAN_LED1_CTRL_OFFSET 0x50
#define NVSW_SN2201_FAN_LED2_CTRL_OFFSET 0x51
#define NVSW_SN2201_REG_MAX 0x52
/* Number of physical I2C busses. */
#define NVSW_SN2201_PHY_I2C_BUS_NUM 2
/* Number of main mux channels. */
#define NVSW_SN2201_MAIN_MUX_CHNL_NUM 8
#define NVSW_SN2201_MAIN_NR 0
#define NVSW_SN2201_MAIN_MUX_NR 1
#define NVSW_SN2201_MAIN_MUX_DEFER_NR (NVSW_SN2201_PHY_I2C_BUS_NUM + \
NVSW_SN2201_MAIN_MUX_CHNL_NUM - 1)
#define NVSW_SN2201_MAIN_MUX_CH0_NR NVSW_SN2201_PHY_I2C_BUS_NUM
#define NVSW_SN2201_MAIN_MUX_CH1_NR (NVSW_SN2201_MAIN_MUX_CH0_NR + 1)
#define NVSW_SN2201_MAIN_MUX_CH2_NR (NVSW_SN2201_MAIN_MUX_CH0_NR + 2)
#define NVSW_SN2201_MAIN_MUX_CH3_NR (NVSW_SN2201_MAIN_MUX_CH0_NR + 3)
#define NVSW_SN2201_MAIN_MUX_CH5_NR (NVSW_SN2201_MAIN_MUX_CH0_NR + 5)
#define NVSW_SN2201_MAIN_MUX_CH6_NR (NVSW_SN2201_MAIN_MUX_CH0_NR + 6)
#define NVSW_SN2201_MAIN_MUX_CH7_NR (NVSW_SN2201_MAIN_MUX_CH0_NR + 7)
#define NVSW_SN2201_CPLD_NR NVSW_SN2201_MAIN_MUX_CH0_NR
#define NVSW_SN2201_NR_NONE -1
/* Masks for aggregation, PSU presence and power, ASIC events
* in CPLD related registers.
*/
#define NVSW_SN2201_CPLD_AGGR_ASIC_MASK_DEF 0xe0
#define NVSW_SN2201_CPLD_AGGR_PSU_MASK_DEF 0x04
#define NVSW_SN2201_CPLD_AGGR_PWR_MASK_DEF 0x02
#define NVSW_SN2201_CPLD_AGGR_FAN_MASK_DEF 0x10
#define NVSW_SN2201_CPLD_AGGR_MASK_DEF \
(NVSW_SN2201_CPLD_AGGR_ASIC_MASK_DEF \
| NVSW_SN2201_CPLD_AGGR_PSU_MASK_DEF \
| NVSW_SN2201_CPLD_AGGR_PWR_MASK_DEF \
| NVSW_SN2201_CPLD_AGGR_FAN_MASK_DEF)
#define NVSW_SN2201_CPLD_ASIC_MASK GENMASK(3, 1)
#define NVSW_SN2201_CPLD_PSU_MASK GENMASK(1, 0)
#define NVSW_SN2201_CPLD_PWR_MASK GENMASK(1, 0)
#define NVSW_SN2201_CPLD_FAN_MASK GENMASK(3, 0)
#define NVSW_SN2201_CPLD_SYSIRQ 26
#define NVSW_SN2201_LPC_SYSIRQ 28
#define NVSW_SN2201_CPLD_I2CADDR 0x41
#define NVSW_SN2201_WD_DFLT_TIMEOUT 600
/* nvsw_sn2201 - device private data
 * @dev: platform device;
 * @io_data: register access platform data;
 * @led_data: LED platform data;
 * @wd_data: watchdog platform data;
 * @hotplug_data: hotplug platform data;
 * @i2c_data: I2C controller platform data;
 * @led: LED device;
 * @wd: watchdog device;
 * @io_regs: register access device;
 * @pdev_hotplug: hotplug device;
 * @pdev_i2c: I2C controller device;
 * @sn2201_devs: I2C devices for sn2201 devices;
 * @sn2201_devs_num: number of I2C devices for sn2201 device;
 * @main_mux_devs: I2C devices for main mux;
 * @main_mux_devs_num: number of I2C devices for main mux;
 * @cpld_devs: I2C devices for cpld;
 * @cpld_devs_num: number of I2C devices for cpld;
 * @main_mux_deferred_nr: I2C adapter number that must already exist before the devices are created;
 */
struct nvsw_sn2201 {
struct device *dev;
struct mlxreg_core_platform_data *io_data;
struct mlxreg_core_platform_data *led_data;
struct mlxreg_core_platform_data *wd_data;
struct mlxreg_core_hotplug_platform_data *hotplug_data;
struct mlxreg_core_hotplug_platform_data *i2c_data;
struct platform_device *led;
struct platform_device *wd;
struct platform_device *io_regs;
struct platform_device *pdev_hotplug;
struct platform_device *pdev_i2c;
struct mlxreg_hotplug_device *sn2201_devs;
int sn2201_devs_num;
struct mlxreg_hotplug_device *main_mux_devs;
int main_mux_devs_num;
struct mlxreg_hotplug_device *cpld_devs;
int cpld_devs_num;
int main_mux_deferred_nr;
};
static bool nvsw_sn2201_writeable_reg(struct device *dev, unsigned int reg)
{
switch (reg) {
case NVSW_SN2201_PSU_CTRL_OFFSET:
case NVSW_SN2201_QSFP28_LP_STATUS_OFFSET:
case NVSW_SN2201_QSFP28_RST_STATUS_OFFSET:
case NVSW_SN2201_FRONT_SYS_LED_CTRL_OFFSET:
case NVSW_SN2201_FRONT_PSU_LED_CTRL_OFFSET:
case NVSW_SN2201_FRONT_UID_LED_CTRL_OFFSET:
case NVSW_SN2201_QSFP28_LED_TEST_STATUS_OFFSET:
case NVSW_SN2201_SYS_RST_STATUS_OFFSET:
case NVSW_SN2201_SYS_INT_MASK_OFFSET:
case NVSW_SN2201_ASIC_EVENT_OFFSET:
case NVSW_SN2201_ASIC_MAKS_OFFSET:
case NVSW_SN2201_THML_EVENT_OFFSET:
case NVSW_SN2201_THML_MASK_OFFSET:
case NVSW_SN2201_PS_ALT_EVENT_OFFSET:
case NVSW_SN2201_PS_ALT_MASK_OFFSET:
case NVSW_SN2201_PS_PRSNT_EVENT_OFFSET:
case NVSW_SN2201_PS_PRSNT_MASK_OFFSET:
case NVSW_SN2201_PS_DC_OK_EVENT_OFFSET:
case NVSW_SN2201_PS_DC_OK_MASK_OFFSET:
case NVSW_SN2201_RST_SW_CTRL_OFFSET:
case NVSW_SN2201_FAN_PRSNT_EVENT_OFFSET:
case NVSW_SN2201_FAN_PRSNT_MASK_OFFSET:
case NVSW_SN2201_WD_TMR_OFFSET_LSB:
case NVSW_SN2201_WD_TMR_OFFSET_MSB:
case NVSW_SN2201_WD_ACT_OFFSET:
case NVSW_SN2201_FAN_LED1_CTRL_OFFSET:
case NVSW_SN2201_FAN_LED2_CTRL_OFFSET:
return true;
}
return false;
}
static bool nvsw_sn2201_readable_reg(struct device *dev, unsigned int reg)
{
switch (reg) {
case NVSW_SN2201_HW_VER_ID_OFFSET:
case NVSW_SN2201_BOARD_ID_OFFSET:
case NVSW_SN2201_CPLD_VER_OFFSET:
case NVSW_SN2201_CPLD_MVER_OFFSET:
case NVSW_SN2201_CPLD_ID_OFFSET:
case NVSW_SN2201_CPLD_PN_OFFSET:
case NVSW_SN2201_CPLD_PN1_OFFSET:
case NVSW_SN2201_PSU_CTRL_OFFSET:
case NVSW_SN2201_QSFP28_STATUS_OFFSET:
case NVSW_SN2201_QSFP28_INT_STATUS_OFFSET:
case NVSW_SN2201_QSFP28_LP_STATUS_OFFSET:
case NVSW_SN2201_QSFP28_RST_STATUS_OFFSET:
case NVSW_SN2201_SYS_STATUS_OFFSET:
case NVSW_SN2201_FRONT_SYS_LED_CTRL_OFFSET:
case NVSW_SN2201_FRONT_PSU_LED_CTRL_OFFSET:
case NVSW_SN2201_FRONT_UID_LED_CTRL_OFFSET:
case NVSW_SN2201_QSFP28_LED_TEST_STATUS_OFFSET:
case NVSW_SN2201_SYS_RST_STATUS_OFFSET:
case NVSW_SN2201_RST_CAUSE1_OFFSET:
case NVSW_SN2201_RST_CAUSE2_OFFSET:
case NVSW_SN2201_SYS_INT_STATUS_OFFSET:
case NVSW_SN2201_SYS_INT_MASK_OFFSET:
case NVSW_SN2201_ASIC_STATUS_OFFSET:
case NVSW_SN2201_ASIC_EVENT_OFFSET:
case NVSW_SN2201_ASIC_MAKS_OFFSET:
case NVSW_SN2201_THML_STATUS_OFFSET:
case NVSW_SN2201_THML_EVENT_OFFSET:
case NVSW_SN2201_THML_MASK_OFFSET:
case NVSW_SN2201_PS_ALT_STATUS_OFFSET:
case NVSW_SN2201_PS_ALT_EVENT_OFFSET:
case NVSW_SN2201_PS_ALT_MASK_OFFSET:
case NVSW_SN2201_PS_PRSNT_STATUS_OFFSET:
case NVSW_SN2201_PS_PRSNT_EVENT_OFFSET:
case NVSW_SN2201_PS_PRSNT_MASK_OFFSET:
case NVSW_SN2201_PS_DC_OK_STATUS_OFFSET:
case NVSW_SN2201_PS_DC_OK_EVENT_OFFSET:
case NVSW_SN2201_PS_DC_OK_MASK_OFFSET:
case NVSW_SN2201_RST_SW_CTRL_OFFSET:
case NVSW_SN2201_FAN_PRSNT_STATUS_OFFSET:
case NVSW_SN2201_FAN_PRSNT_EVENT_OFFSET:
case NVSW_SN2201_FAN_PRSNT_MASK_OFFSET:
case NVSW_SN2201_WD_TMR_OFFSET_LSB:
case NVSW_SN2201_WD_TMR_OFFSET_MSB:
case NVSW_SN2201_WD_ACT_OFFSET:
case NVSW_SN2201_FAN_LED1_CTRL_OFFSET:
case NVSW_SN2201_FAN_LED2_CTRL_OFFSET:
return true;
}
return false;
}
static bool nvsw_sn2201_volatile_reg(struct device *dev, unsigned int reg)
{
switch (reg) {
case NVSW_SN2201_HW_VER_ID_OFFSET:
case NVSW_SN2201_BOARD_ID_OFFSET:
case NVSW_SN2201_CPLD_VER_OFFSET:
case NVSW_SN2201_CPLD_MVER_OFFSET:
case NVSW_SN2201_CPLD_ID_OFFSET:
case NVSW_SN2201_CPLD_PN_OFFSET:
case NVSW_SN2201_CPLD_PN1_OFFSET:
case NVSW_SN2201_PSU_CTRL_OFFSET:
case NVSW_SN2201_QSFP28_STATUS_OFFSET:
case NVSW_SN2201_QSFP28_INT_STATUS_OFFSET:
case NVSW_SN2201_QSFP28_LP_STATUS_OFFSET:
case NVSW_SN2201_QSFP28_RST_STATUS_OFFSET:
case NVSW_SN2201_SYS_STATUS_OFFSET:
case NVSW_SN2201_FRONT_SYS_LED_CTRL_OFFSET:
case NVSW_SN2201_FRONT_PSU_LED_CTRL_OFFSET:
case NVSW_SN2201_FRONT_UID_LED_CTRL_OFFSET:
case NVSW_SN2201_QSFP28_LED_TEST_STATUS_OFFSET:
case NVSW_SN2201_SYS_RST_STATUS_OFFSET:
case NVSW_SN2201_RST_CAUSE1_OFFSET:
case NVSW_SN2201_RST_CAUSE2_OFFSET:
case NVSW_SN2201_SYS_INT_STATUS_OFFSET:
case NVSW_SN2201_SYS_INT_MASK_OFFSET:
case NVSW_SN2201_ASIC_STATUS_OFFSET:
case NVSW_SN2201_ASIC_EVENT_OFFSET:
case NVSW_SN2201_ASIC_MAKS_OFFSET:
case NVSW_SN2201_THML_STATUS_OFFSET:
case NVSW_SN2201_THML_EVENT_OFFSET:
case NVSW_SN2201_THML_MASK_OFFSET:
case NVSW_SN2201_PS_ALT_STATUS_OFFSET:
case NVSW_SN2201_PS_ALT_EVENT_OFFSET:
case NVSW_SN2201_PS_ALT_MASK_OFFSET:
case NVSW_SN2201_PS_PRSNT_STATUS_OFFSET:
case NVSW_SN2201_PS_PRSNT_EVENT_OFFSET:
case NVSW_SN2201_PS_PRSNT_MASK_OFFSET:
case NVSW_SN2201_PS_DC_OK_STATUS_OFFSET:
case NVSW_SN2201_PS_DC_OK_EVENT_OFFSET:
case NVSW_SN2201_PS_DC_OK_MASK_OFFSET:
case NVSW_SN2201_RST_SW_CTRL_OFFSET:
case NVSW_SN2201_FAN_PRSNT_STATUS_OFFSET:
case NVSW_SN2201_FAN_PRSNT_EVENT_OFFSET:
case NVSW_SN2201_FAN_PRSNT_MASK_OFFSET:
case NVSW_SN2201_WD_TMR_OFFSET_LSB:
case NVSW_SN2201_WD_TMR_OFFSET_MSB:
case NVSW_SN2201_FAN_LED1_CTRL_OFFSET:
case NVSW_SN2201_FAN_LED2_CTRL_OFFSET:
return true;
}
return false;
}
static const struct reg_default nvsw_sn2201_regmap_default[] = {
{ NVSW_SN2201_QSFP28_LED_TEST_STATUS_OFFSET, 0x00 },
{ NVSW_SN2201_WD_ACT_OFFSET, 0x00 },
};
/* Configuration for the register map of a device with a 1-byte address space. */
static const struct regmap_config nvsw_sn2201_regmap_conf = {
.reg_bits = 8,
.val_bits = 8,
.max_register = NVSW_SN2201_REG_MAX,
.cache_type = REGCACHE_FLAT,
.writeable_reg = nvsw_sn2201_writeable_reg,
.readable_reg = nvsw_sn2201_readable_reg,
.volatile_reg = nvsw_sn2201_volatile_reg,
.reg_defaults = nvsw_sn2201_regmap_default,
.num_reg_defaults = ARRAY_SIZE(nvsw_sn2201_regmap_default),
};
/* Regions for LPC I2C controller and LPC base register space. */
static const struct resource nvsw_sn2201_lpc_io_resources[] = {
[0] = DEFINE_RES_NAMED(NVSW_SN2201_CPLD_LPC_I2C_BASE_ADRR,
NVSW_SN2201_CPLD_LPC_IO_RANGE,
"mlxplat_cpld_lpc_i2c_ctrl", IORESOURCE_IO),
};
static struct resource nvsw_sn2201_cpld_res[] = {
[0] = DEFINE_RES_IRQ_NAMED(NVSW_SN2201_CPLD_SYSIRQ, "mlxreg-hotplug"),
};
static struct resource nvsw_sn2201_lpc_res[] = {
[0] = DEFINE_RES_IRQ_NAMED(NVSW_SN2201_LPC_SYSIRQ, "i2c-mlxcpld"),
};
/* SN2201 I2C platform data. */
struct mlxreg_core_hotplug_platform_data nvsw_sn2201_i2c_data = {
.irq = NVSW_SN2201_CPLD_SYSIRQ,
};
/* SN2201 CPLD device. */
static struct i2c_board_info nvsw_sn2201_cpld_devices[] = {
{
I2C_BOARD_INFO("nvsw-sn2201", 0x41),
},
};
/* SN2201 CPLD board info. */
static struct mlxreg_hotplug_device nvsw_sn2201_cpld_brdinfo[] = {
{
.brdinfo = &nvsw_sn2201_cpld_devices[0],
.nr = NVSW_SN2201_CPLD_NR,
},
};
/* SN2201 main mux device. */
static struct i2c_board_info nvsw_sn2201_main_mux_devices[] = {
{
I2C_BOARD_INFO("pca9548", 0x70),
},
};
/* SN2201 main mux board info. */
static struct mlxreg_hotplug_device nvsw_sn2201_main_mux_brdinfo[] = {
{
.brdinfo = &nvsw_sn2201_main_mux_devices[0],
.nr = NVSW_SN2201_MAIN_MUX_NR,
},
};
/* SN2201 power devices. */
static struct i2c_board_info nvsw_sn2201_pwr_devices[] = {
{
I2C_BOARD_INFO("pmbus", 0x58),
},
{
I2C_BOARD_INFO("pmbus", 0x58),
},
};
/* SN2201 fan devices. */
static struct i2c_board_info nvsw_sn2201_fan_devices[] = {
{
I2C_BOARD_INFO("24c02", 0x50),
},
{
I2C_BOARD_INFO("24c02", 0x51),
},
{
I2C_BOARD_INFO("24c02", 0x52),
},
{
I2C_BOARD_INFO("24c02", 0x53),
},
};
/* SN2201 hotplug default data. */
static struct mlxreg_core_data nvsw_sn2201_psu_items_data[] = {
{
.label = "psu1",
.reg = NVSW_SN2201_PS_PRSNT_STATUS_OFFSET,
.mask = BIT(0),
.hpdev.nr = NVSW_SN2201_NR_NONE,
},
{
.label = "psu2",
.reg = NVSW_SN2201_PS_PRSNT_STATUS_OFFSET,
.mask = BIT(1),
.hpdev.nr = NVSW_SN2201_NR_NONE,
},
};
static struct mlxreg_core_data nvsw_sn2201_pwr_items_data[] = {
{
.label = "pwr1",
.reg = NVSW_SN2201_PS_DC_OK_STATUS_OFFSET,
.mask = BIT(0),
.hpdev.brdinfo = &nvsw_sn2201_pwr_devices[0],
.hpdev.nr = NVSW_SN2201_MAIN_MUX_CH1_NR,
},
{
.label = "pwr2",
.reg = NVSW_SN2201_PS_DC_OK_STATUS_OFFSET,
.mask = BIT(1),
.hpdev.brdinfo = &nvsw_sn2201_pwr_devices[1],
.hpdev.nr = NVSW_SN2201_MAIN_MUX_CH2_NR,
},
};
static struct mlxreg_core_data nvsw_sn2201_fan_items_data[] = {
{
.label = "fan1",
.reg = NVSW_SN2201_FAN_PRSNT_STATUS_OFFSET,
.mask = BIT(0),
.hpdev.brdinfo = &nvsw_sn2201_fan_devices[0],
.hpdev.nr = NVSW_SN2201_NR_NONE,
},
{
.label = "fan2",
.reg = NVSW_SN2201_FAN_PRSNT_STATUS_OFFSET,
.mask = BIT(1),
.hpdev.brdinfo = &nvsw_sn2201_fan_devices[1],
.hpdev.nr = NVSW_SN2201_NR_NONE,
},
{
.label = "fan3",
.reg = NVSW_SN2201_FAN_PRSNT_STATUS_OFFSET,
.mask = BIT(2),
.hpdev.brdinfo = &nvsw_sn2201_fan_devices[2],
.hpdev.nr = NVSW_SN2201_NR_NONE,
},
{
.label = "fan4",
.reg = NVSW_SN2201_FAN_PRSNT_STATUS_OFFSET,
.mask = BIT(3),
.hpdev.brdinfo = &nvsw_sn2201_fan_devices[3],
.hpdev.nr = NVSW_SN2201_NR_NONE,
},
};
static struct mlxreg_core_data nvsw_sn2201_sys_items_data[] = {
{
.label = "nic_smb_alert",
.reg = NVSW_SN2201_ASIC_STATUS_OFFSET,
.mask = BIT(1),
.hpdev.nr = NVSW_SN2201_NR_NONE,
},
{
.label = "cpu_sd",
.reg = NVSW_SN2201_ASIC_STATUS_OFFSET,
.mask = BIT(2),
.hpdev.nr = NVSW_SN2201_NR_NONE,
},
{
.label = "mac_health",
.reg = NVSW_SN2201_ASIC_STATUS_OFFSET,
.mask = BIT(3),
.hpdev.nr = NVSW_SN2201_NR_NONE,
},
};
static struct mlxreg_core_item nvsw_sn2201_items[] = {
{
.data = nvsw_sn2201_psu_items_data,
.aggr_mask = NVSW_SN2201_CPLD_AGGR_PSU_MASK_DEF,
.reg = NVSW_SN2201_PS_PRSNT_STATUS_OFFSET,
.mask = NVSW_SN2201_CPLD_PSU_MASK,
.count = ARRAY_SIZE(nvsw_sn2201_psu_items_data),
.inversed = 1,
.health = false,
},
{
.data = nvsw_sn2201_pwr_items_data,
.aggr_mask = NVSW_SN2201_CPLD_AGGR_PWR_MASK_DEF,
.reg = NVSW_SN2201_PS_DC_OK_STATUS_OFFSET,
.mask = NVSW_SN2201_CPLD_PWR_MASK,
.count = ARRAY_SIZE(nvsw_sn2201_pwr_items_data),
.inversed = 0,
.health = false,
},
{
.data = nvsw_sn2201_fan_items_data,
.aggr_mask = NVSW_SN2201_CPLD_AGGR_FAN_MASK_DEF,
.reg = NVSW_SN2201_FAN_PRSNT_STATUS_OFFSET,
.mask = NVSW_SN2201_CPLD_FAN_MASK,
.count = ARRAY_SIZE(nvsw_sn2201_fan_items_data),
.inversed = 1,
.health = false,
},
{
.data = nvsw_sn2201_sys_items_data,
.aggr_mask = NVSW_SN2201_CPLD_AGGR_ASIC_MASK_DEF,
.reg = NVSW_SN2201_ASIC_STATUS_OFFSET,
.mask = NVSW_SN2201_CPLD_ASIC_MASK,
.count = ARRAY_SIZE(nvsw_sn2201_sys_items_data),
.inversed = 1,
.health = false,
},
};
static
struct mlxreg_core_hotplug_platform_data nvsw_sn2201_hotplug = {
.items = nvsw_sn2201_items,
.counter = ARRAY_SIZE(nvsw_sn2201_items),
.cell = NVSW_SN2201_SYS_INT_STATUS_OFFSET,
.mask = NVSW_SN2201_CPLD_AGGR_MASK_DEF,
};
/* SN2201 static devices. */
static struct i2c_board_info nvsw_sn2201_static_devices[] = {
{
I2C_BOARD_INFO("24c02", 0x57),
},
{
I2C_BOARD_INFO("lm75", 0x4b),
},
{
I2C_BOARD_INFO("24c64", 0x56),
},
{
I2C_BOARD_INFO("ads1015", 0x49),
},
{
I2C_BOARD_INFO("pca9546", 0x71),
},
{
I2C_BOARD_INFO("emc2305", 0x4d),
},
{
I2C_BOARD_INFO("lm75", 0x49),
},
{
I2C_BOARD_INFO("pca9555", 0x27),
},
{
I2C_BOARD_INFO("powr1014", 0x37),
},
{
I2C_BOARD_INFO("lm75", 0x4f),
},
{
I2C_BOARD_INFO("pmbus", 0x40),
},
};
/* SN2201 default static board info. */
static struct mlxreg_hotplug_device nvsw_sn2201_static_brdinfo[] = {
{
.brdinfo = &nvsw_sn2201_static_devices[0],
.nr = NVSW_SN2201_MAIN_NR,
},
{
.brdinfo = &nvsw_sn2201_static_devices[1],
.nr = NVSW_SN2201_MAIN_MUX_CH0_NR,
},
{
.brdinfo = &nvsw_sn2201_static_devices[2],
.nr = NVSW_SN2201_MAIN_MUX_CH0_NR,
},
{
.brdinfo = &nvsw_sn2201_static_devices[3],
.nr = NVSW_SN2201_MAIN_MUX_CH0_NR,
},
{
.brdinfo = &nvsw_sn2201_static_devices[4],
.nr = NVSW_SN2201_MAIN_MUX_CH3_NR,
},
{
.brdinfo = &nvsw_sn2201_static_devices[5],
.nr = NVSW_SN2201_MAIN_MUX_CH5_NR,
},
{
.brdinfo = &nvsw_sn2201_static_devices[6],
.nr = NVSW_SN2201_MAIN_MUX_CH5_NR,
},
{
.brdinfo = &nvsw_sn2201_static_devices[7],
.nr = NVSW_SN2201_MAIN_MUX_CH5_NR,
},
{
.brdinfo = &nvsw_sn2201_static_devices[8],
.nr = NVSW_SN2201_MAIN_MUX_CH6_NR,
},
{
.brdinfo = &nvsw_sn2201_static_devices[9],
.nr = NVSW_SN2201_MAIN_MUX_CH6_NR,
},
{
.brdinfo = &nvsw_sn2201_static_devices[10],
.nr = NVSW_SN2201_MAIN_MUX_CH7_NR,
},
};
/* LED default data. */
static struct mlxreg_core_data nvsw_sn2201_led_data[] = {
{
.label = "status:green",
.reg = NVSW_SN2201_FRONT_SYS_LED_CTRL_OFFSET,
.mask = GENMASK(7, 4),
},
{
.label = "status:orange",
.reg = NVSW_SN2201_FRONT_SYS_LED_CTRL_OFFSET,
.mask = GENMASK(7, 4),
},
{
.label = "psu:green",
.reg = NVSW_SN2201_FRONT_PSU_LED_CTRL_OFFSET,
.mask = GENMASK(7, 4),
},
{
.label = "psu:orange",
.reg = NVSW_SN2201_FRONT_PSU_LED_CTRL_OFFSET,
.mask = GENMASK(7, 4),
},
{
.label = "uid:blue",
.reg = NVSW_SN2201_FRONT_UID_LED_CTRL_OFFSET,
.mask = GENMASK(7, 4),
},
{
.label = "fan1:green",
.reg = NVSW_SN2201_FAN_LED1_CTRL_OFFSET,
.mask = GENMASK(7, 4),
},
{
.label = "fan1:orange",
.reg = NVSW_SN2201_FAN_LED1_CTRL_OFFSET,
.mask = GENMASK(7, 4),
},
{
.label = "fan2:green",
.reg = NVSW_SN2201_FAN_LED1_CTRL_OFFSET,
.mask = GENMASK(3, 0),
},
{
.label = "fan2:orange",
.reg = NVSW_SN2201_FAN_LED1_CTRL_OFFSET,
.mask = GENMASK(3, 0),
},
{
.label = "fan3:green",
.reg = NVSW_SN2201_FAN_LED2_CTRL_OFFSET,
.mask = GENMASK(7, 4),
},
{
.label = "fan3:orange",
.reg = NVSW_SN2201_FAN_LED2_CTRL_OFFSET,
.mask = GENMASK(7, 4),
},
{
.label = "fan4:green",
.reg = NVSW_SN2201_FAN_LED2_CTRL_OFFSET,
.mask = GENMASK(3, 0),
},
{
.label = "fan4:orange",
.reg = NVSW_SN2201_FAN_LED2_CTRL_OFFSET,
.mask = GENMASK(3, 0),
},
};
static struct mlxreg_core_platform_data nvsw_sn2201_led = {
.data = nvsw_sn2201_led_data,
.counter = ARRAY_SIZE(nvsw_sn2201_led_data),
};
/* Default register access data. */
static struct mlxreg_core_data nvsw_sn2201_io_data[] = {
{
.label = "cpld1_version",
.reg = NVSW_SN2201_CPLD_VER_OFFSET,
.bit = GENMASK(7, 0),
.mode = 0444,
},
{
.label = "cpld1_version_min",
.reg = NVSW_SN2201_CPLD_MVER_OFFSET,
.bit = GENMASK(7, 0),
.mode = 0444,
},
{
.label = "cpld1_pn",
.reg = NVSW_SN2201_CPLD_PN_OFFSET,
.bit = GENMASK(15, 0),
.mode = 0444,
.regnum = 2,
},
{
.label = "psu1_on",
.reg = NVSW_SN2201_PSU_CTRL_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(0),
.mode = 0644,
},
{
.label = "psu2_on",
.reg = NVSW_SN2201_PSU_CTRL_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(1),
.mode = 0644,
},
{
.label = "pwr_cycle",
.reg = NVSW_SN2201_PSU_CTRL_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(2),
.mode = 0644,
},
{
.label = "asic_health",
.reg = NVSW_SN2201_SYS_STATUS_OFFSET,
.mask = GENMASK(4, 3),
.bit = 4,
.mode = 0444,
},
{
.label = "qsfp_pwr_good",
.reg = NVSW_SN2201_SYS_STATUS_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(0),
.mode = 0444,
},
{
.label = "phy_reset",
.reg = NVSW_SN2201_SYS_RST_STATUS_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(3),
.mode = 0644,
},
{
.label = "mac_reset",
.reg = NVSW_SN2201_SYS_RST_STATUS_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(2),
.mode = 0644,
},
{
.label = "pwr_down",
.reg = NVSW_SN2201_RST_SW_CTRL_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(0),
.mode = 0644,
},
{
.label = "reset_long_pb",
.reg = NVSW_SN2201_RST_CAUSE1_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(0),
.mode = 0444,
},
{
.label = "reset_short_pb",
.reg = NVSW_SN2201_RST_CAUSE1_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(1),
.mode = 0444,
},
{
.label = "reset_aux_pwr_or_fu",
.reg = NVSW_SN2201_RST_CAUSE1_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(2),
.mode = 0444,
},
{
.label = "reset_swb_dc_dc_pwr_fail",
.reg = NVSW_SN2201_RST_CAUSE1_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(3),
.mode = 0444,
},
{
.label = "reset_sw_reset",
.reg = NVSW_SN2201_RST_CAUSE1_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(4),
.mode = 0444,
},
{
.label = "reset_fw_reset",
.reg = NVSW_SN2201_RST_CAUSE1_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(5),
.mode = 0444,
},
{
.label = "reset_swb_wd",
.reg = NVSW_SN2201_RST_CAUSE1_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(6),
.mode = 0444,
},
{
.label = "reset_asic_thermal",
.reg = NVSW_SN2201_RST_CAUSE1_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(7),
.mode = 0444,
},
{
.label = "reset_system",
.reg = NVSW_SN2201_RST_CAUSE2_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(1),
.mode = 0444,
},
{
.label = "reset_sw_pwr_off",
.reg = NVSW_SN2201_RST_CAUSE2_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(2),
.mode = 0444,
},
{
.label = "reset_cpu_pwr_fail_thermal",
.reg = NVSW_SN2201_RST_CAUSE2_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(4),
.mode = 0444,
},
{
.label = "reset_reload_bios",
.reg = NVSW_SN2201_RST_CAUSE2_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(5),
.mode = 0444,
},
{
.label = "reset_ac_pwr_fail",
.reg = NVSW_SN2201_RST_CAUSE2_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(6),
.mode = 0444,
},
{
.label = "psu1",
.reg = NVSW_SN2201_PS_PRSNT_STATUS_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(0),
.mode = 0444,
},
{
.label = "psu2",
.reg = NVSW_SN2201_PS_PRSNT_STATUS_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(1),
.mode = 0444,
},
};
static struct mlxreg_core_platform_data nvsw_sn2201_regs_io = {
.data = nvsw_sn2201_io_data,
.counter = ARRAY_SIZE(nvsw_sn2201_io_data),
};
/* Default watchdog data. */
static struct mlxreg_core_data nvsw_sn2201_wd_data[] = {
{
.label = "action",
.reg = NVSW_SN2201_WD_ACT_OFFSET,
.mask = GENMASK(7, 1),
.bit = 0,
},
{
.label = "timeout",
.reg = NVSW_SN2201_WD_TMR_OFFSET_LSB,
.mask = 0,
.health_cntr = NVSW_SN2201_WD_DFLT_TIMEOUT,
},
{
.label = "timeleft",
.reg = NVSW_SN2201_WD_TMR_OFFSET_LSB,
.mask = 0,
},
{
.label = "ping",
.reg = NVSW_SN2201_WD_ACT_OFFSET,
.mask = GENMASK(7, 1),
.bit = 0,
},
{
.label = "reset",
.reg = NVSW_SN2201_RST_CAUSE1_OFFSET,
.mask = GENMASK(7, 0) & ~BIT(6),
.bit = 6,
},
};
static struct mlxreg_core_platform_data nvsw_sn2201_wd = {
.data = nvsw_sn2201_wd_data,
.counter = ARRAY_SIZE(nvsw_sn2201_wd_data),
.version = MLX_WDT_TYPE3,
.identity = "mlx-wdt-main",
};
static int
nvsw_sn2201_create_static_devices(struct nvsw_sn2201 *nvsw_sn2201,
struct mlxreg_hotplug_device *devs,
int size)
{
struct mlxreg_hotplug_device *dev = devs;
int i;
/* Create I2C static devices. */
for (i = 0; i < size; i++, dev++) {
dev->client = i2c_new_client_device(dev->adapter, dev->brdinfo);
if (IS_ERR(dev->client)) {
dev_err(nvsw_sn2201->dev, "Failed to create client %s at bus %d at addr 0x%02x\n",
dev->brdinfo->type,
dev->nr, dev->brdinfo->addr);
dev->adapter = NULL;
goto fail_create_static_devices;
}
}
return 0;
fail_create_static_devices:
while (--i >= 0) {
dev = devs + i;
i2c_unregister_device(dev->client);
dev->client = NULL;
dev->adapter = NULL;
}
return IS_ERR(dev->client);
}
static void nvsw_sn2201_destroy_static_devices(struct nvsw_sn2201 *nvsw_sn2201,
struct mlxreg_hotplug_device *devs, int size)
{
struct mlxreg_hotplug_device *dev = devs;
int i;
/* Destroy static I2C device for SN2201 static devices. */
for (i = 0; i < size; i++, dev++) {
if (dev->client) {
i2c_unregister_device(dev->client);
dev->client = NULL;
i2c_put_adapter(dev->adapter);
dev->adapter = NULL;
}
}
}
static int nvsw_sn2201_config_post_init(struct nvsw_sn2201 *nvsw_sn2201)
{
struct mlxreg_hotplug_device *sn2201_dev;
struct i2c_adapter *adap;
struct device *dev;
int i, err;
dev = nvsw_sn2201->dev;
adap = i2c_get_adapter(nvsw_sn2201->main_mux_deferred_nr);
if (!adap) {
dev_err(dev, "Failed to get adapter for bus %d\n",
nvsw_sn2201->main_mux_deferred_nr);
return -ENODEV;
}
i2c_put_adapter(adap);
/* Update board info. */
sn2201_dev = nvsw_sn2201->sn2201_devs;
for (i = 0; i < nvsw_sn2201->sn2201_devs_num; i++, sn2201_dev++) {
sn2201_dev->adapter = i2c_get_adapter(sn2201_dev->nr);
if (!sn2201_dev->adapter)
return -ENODEV;
i2c_put_adapter(sn2201_dev->adapter);
}
err = nvsw_sn2201_create_static_devices(nvsw_sn2201, nvsw_sn2201->sn2201_devs,
nvsw_sn2201->sn2201_devs_num);
if (err)
dev_err(dev, "Failed to create static devices\n");
return err;
}
static int nvsw_sn2201_config_init(struct nvsw_sn2201 *nvsw_sn2201, void *regmap)
{
struct device *dev = nvsw_sn2201->dev;
int err;
nvsw_sn2201->io_data = &nvsw_sn2201_regs_io;
nvsw_sn2201->led_data = &nvsw_sn2201_led;
nvsw_sn2201->wd_data = &nvsw_sn2201_wd;
nvsw_sn2201->hotplug_data = &nvsw_sn2201_hotplug;
/* Register IO access driver. */
if (nvsw_sn2201->io_data) {
nvsw_sn2201->io_data->regmap = regmap;
nvsw_sn2201->io_regs =
platform_device_register_resndata(dev, "mlxreg-io", PLATFORM_DEVID_NONE, NULL, 0,
nvsw_sn2201->io_data,
sizeof(*nvsw_sn2201->io_data));
if (IS_ERR(nvsw_sn2201->io_regs)) {
err = PTR_ERR(nvsw_sn2201->io_regs);
goto fail_register_io;
}
}
/* Register LED driver. */
if (nvsw_sn2201->led_data) {
nvsw_sn2201->led_data->regmap = regmap;
nvsw_sn2201->led =
platform_device_register_resndata(dev, "leds-mlxreg", PLATFORM_DEVID_NONE, NULL, 0,
nvsw_sn2201->led_data,
sizeof(*nvsw_sn2201->led_data));
if (IS_ERR(nvsw_sn2201->led)) {
err = PTR_ERR(nvsw_sn2201->led);
goto fail_register_led;
}
}
/* Register WD driver. */
if (nvsw_sn2201->wd_data) {
nvsw_sn2201->wd_data->regmap = regmap;
nvsw_sn2201->wd =
platform_device_register_resndata(dev, "mlx-wdt", PLATFORM_DEVID_NONE, NULL, 0,
nvsw_sn2201->wd_data,
sizeof(*nvsw_sn2201->wd_data));
if (IS_ERR(nvsw_sn2201->wd)) {
err = PTR_ERR(nvsw_sn2201->wd);
goto fail_register_wd;
}
}
/* Register hotplug driver. */
if (nvsw_sn2201->hotplug_data) {
nvsw_sn2201->hotplug_data->regmap = regmap;
nvsw_sn2201->pdev_hotplug =
platform_device_register_resndata(dev, "mlxreg-hotplug", PLATFORM_DEVID_NONE,
nvsw_sn2201_cpld_res,
ARRAY_SIZE(nvsw_sn2201_cpld_res),
nvsw_sn2201->hotplug_data,
sizeof(*nvsw_sn2201->hotplug_data));
if (IS_ERR(nvsw_sn2201->pdev_hotplug)) {
err = PTR_ERR(nvsw_sn2201->pdev_hotplug);
goto fail_register_hotplug;
}
}
return nvsw_sn2201_config_post_init(nvsw_sn2201);
fail_register_hotplug:
if (nvsw_sn2201->wd)
platform_device_unregister(nvsw_sn2201->wd);
fail_register_wd:
if (nvsw_sn2201->led)
platform_device_unregister(nvsw_sn2201->led);
fail_register_led:
if (nvsw_sn2201->io_regs)
platform_device_unregister(nvsw_sn2201->io_regs);
fail_register_io:
return err;
}
static void nvsw_sn2201_config_exit(struct nvsw_sn2201 *nvsw_sn2201)
{
/* Unregister hotplug driver. */
if (nvsw_sn2201->pdev_hotplug)
platform_device_unregister(nvsw_sn2201->pdev_hotplug);
/* Unregister WD driver. */
if (nvsw_sn2201->wd)
platform_device_unregister(nvsw_sn2201->wd);
/* Unregister LED driver. */
if (nvsw_sn2201->led)
platform_device_unregister(nvsw_sn2201->led);
/* Unregister IO access driver. */
if (nvsw_sn2201->io_regs)
platform_device_unregister(nvsw_sn2201->io_regs);
}
/*
 * Initialization is divided into two parts:
 * - I2C main bus init.
 * - Mux creation and attaching devices to the mux,
 *   which assumes that the main bus is already created.
 * This separation is required for synchronization between these two parts;
 * a completion notify callback is used to synchronize the flow.
 */
static int nvsw_sn2201_i2c_completion_notify(void *handle, int id)
{
struct nvsw_sn2201 *nvsw_sn2201 = handle;
void *regmap;
int i, err;
/* Create main mux. */
nvsw_sn2201->main_mux_devs->adapter = i2c_get_adapter(nvsw_sn2201->main_mux_devs->nr);
if (!nvsw_sn2201->main_mux_devs->adapter) {
err = -ENODEV;
dev_err(nvsw_sn2201->dev, "Failed to get adapter for bus %d\n",
nvsw_sn2201->cpld_devs->nr);
goto i2c_get_adapter_main_fail;
}
nvsw_sn2201->main_mux_devs_num = ARRAY_SIZE(nvsw_sn2201_main_mux_brdinfo);
err = nvsw_sn2201_create_static_devices(nvsw_sn2201, nvsw_sn2201->main_mux_devs,
nvsw_sn2201->main_mux_devs_num);
if (err) {
dev_err(nvsw_sn2201->dev, "Failed to create main mux devices\n");
goto nvsw_sn2201_create_static_devices_fail;
}
nvsw_sn2201->cpld_devs->adapter = i2c_get_adapter(nvsw_sn2201->cpld_devs->nr);
if (!nvsw_sn2201->cpld_devs->adapter) {
err = -ENODEV;
dev_err(nvsw_sn2201->dev, "Failed to get adapter for bus %d\n",
nvsw_sn2201->cpld_devs->nr);
goto i2c_get_adapter_fail;
}
/* Create CPLD device. */
nvsw_sn2201->cpld_devs->client = i2c_new_dummy_device(nvsw_sn2201->cpld_devs->adapter,
NVSW_SN2201_CPLD_I2CADDR);
if (IS_ERR(nvsw_sn2201->cpld_devs->client)) {
err = PTR_ERR(nvsw_sn2201->cpld_devs->client);
dev_err(nvsw_sn2201->dev, "Failed to create %s cpld device at bus %d at addr 0x%02x\n",
nvsw_sn2201->cpld_devs->brdinfo->type, nvsw_sn2201->cpld_devs->nr,
nvsw_sn2201->cpld_devs->brdinfo->addr);
goto i2c_new_dummy_fail;
}
regmap = devm_regmap_init_i2c(nvsw_sn2201->cpld_devs->client, &nvsw_sn2201_regmap_conf);
if (IS_ERR(regmap)) {
err = PTR_ERR(regmap);
dev_err(nvsw_sn2201->dev, "Failed to initialise managed register map\n");
goto devm_regmap_init_i2c_fail;
}
/* Set default registers. */
for (i = 0; i < nvsw_sn2201_regmap_conf.num_reg_defaults; i++) {
err = regmap_write(regmap, nvsw_sn2201_regmap_default[i].reg,
nvsw_sn2201_regmap_default[i].def);
if (err) {
dev_err(nvsw_sn2201->dev, "Failed to set register at offset 0x%02x to default value: 0x%02x\n",
nvsw_sn2201_regmap_default[i].reg,
nvsw_sn2201_regmap_default[i].def);
goto regmap_write_fail;
}
}
/* Sync registers with hardware. */
regcache_mark_dirty(regmap);
err = regcache_sync(regmap);
if (err) {
dev_err(nvsw_sn2201->dev, "Failed to Sync registers with hardware\n");
goto regcache_sync_fail;
}
/* Configure SN2201 board. */
err = nvsw_sn2201_config_init(nvsw_sn2201, regmap);
if (err) {
dev_err(nvsw_sn2201->dev, "Failed to configure board\n");
goto nvsw_sn2201_config_init_fail;
}
return 0;
nvsw_sn2201_config_init_fail:
nvsw_sn2201_config_exit(nvsw_sn2201);
regcache_sync_fail:
regmap_write_fail:
devm_regmap_init_i2c_fail:
i2c_new_dummy_fail:
i2c_put_adapter(nvsw_sn2201->cpld_devs->adapter);
nvsw_sn2201->cpld_devs->adapter = NULL;
i2c_get_adapter_fail:
/* Destroy SN2201 static I2C devices. */
nvsw_sn2201_destroy_static_devices(nvsw_sn2201, nvsw_sn2201->sn2201_devs,
nvsw_sn2201->sn2201_devs_num);
/* Destroy main mux device. */
nvsw_sn2201_destroy_static_devices(nvsw_sn2201, nvsw_sn2201->main_mux_devs,
nvsw_sn2201->main_mux_devs_num);
nvsw_sn2201_create_static_devices_fail:
i2c_put_adapter(nvsw_sn2201->main_mux_devs->adapter);
i2c_get_adapter_main_fail:
return err;
}
static int nvsw_sn2201_config_pre_init(struct nvsw_sn2201 *nvsw_sn2201)
{
nvsw_sn2201->i2c_data = &nvsw_sn2201_i2c_data;
/* Register I2C controller. */
nvsw_sn2201->i2c_data->handle = nvsw_sn2201;
nvsw_sn2201->i2c_data->completion_notify = nvsw_sn2201_i2c_completion_notify;
nvsw_sn2201->pdev_i2c = platform_device_register_resndata(nvsw_sn2201->dev, "i2c_mlxcpld",
NVSW_SN2201_MAIN_MUX_NR,
nvsw_sn2201_lpc_res,
ARRAY_SIZE(nvsw_sn2201_lpc_res),
nvsw_sn2201->i2c_data,
sizeof(*nvsw_sn2201->i2c_data));
if (IS_ERR(nvsw_sn2201->pdev_i2c))
return PTR_ERR(nvsw_sn2201->pdev_i2c);
return 0;
}
static int nvsw_sn2201_probe(struct platform_device *pdev)
{
struct nvsw_sn2201 *nvsw_sn2201;
nvsw_sn2201 = devm_kzalloc(&pdev->dev, sizeof(*nvsw_sn2201), GFP_KERNEL);
if (!nvsw_sn2201)
return -ENOMEM;
nvsw_sn2201->dev = &pdev->dev;
platform_set_drvdata(pdev, nvsw_sn2201);
platform_device_add_resources(pdev, nvsw_sn2201_lpc_io_resources,
ARRAY_SIZE(nvsw_sn2201_lpc_io_resources));
nvsw_sn2201->main_mux_deferred_nr = NVSW_SN2201_MAIN_MUX_DEFER_NR;
nvsw_sn2201->main_mux_devs = nvsw_sn2201_main_mux_brdinfo;
nvsw_sn2201->cpld_devs = nvsw_sn2201_cpld_brdinfo;
nvsw_sn2201->sn2201_devs = nvsw_sn2201_static_brdinfo;
nvsw_sn2201->sn2201_devs_num = ARRAY_SIZE(nvsw_sn2201_static_brdinfo);
return nvsw_sn2201_config_pre_init(nvsw_sn2201);
}
static int nvsw_sn2201_remove(struct platform_device *pdev)
{
struct nvsw_sn2201 *nvsw_sn2201 = platform_get_drvdata(pdev);
/* Unregister underlying drivers. */
nvsw_sn2201_config_exit(nvsw_sn2201);
/* Destroy SN2201 static I2C devices. */
nvsw_sn2201_destroy_static_devices(nvsw_sn2201,
nvsw_sn2201->sn2201_devs,
nvsw_sn2201->sn2201_devs_num);
i2c_put_adapter(nvsw_sn2201->cpld_devs->adapter);
nvsw_sn2201->cpld_devs->adapter = NULL;
/* Destroy main mux device. */
nvsw_sn2201_destroy_static_devices(nvsw_sn2201,
nvsw_sn2201->main_mux_devs,
nvsw_sn2201->main_mux_devs_num);
/* Unregister I2C controller. */
if (nvsw_sn2201->pdev_i2c)
platform_device_unregister(nvsw_sn2201->pdev_i2c);
return 0;
}
static const struct acpi_device_id nvsw_sn2201_acpi_ids[] = {
{"NVSN2201", 0},
{}
};
MODULE_DEVICE_TABLE(acpi, nvsw_sn2201_acpi_ids);
static struct platform_driver nvsw_sn2201_driver = {
.probe = nvsw_sn2201_probe,
.remove = nvsw_sn2201_remove,
.driver = {
.name = "nvsw-sn2201",
.acpi_match_table = nvsw_sn2201_acpi_ids,
},
};
module_platform_driver(nvsw_sn2201_driver);
MODULE_AUTHOR("Nvidia");
MODULE_DESCRIPTION("Nvidia sn2201 platform driver");
MODULE_LICENSE("Dual BSD/GPL");
MODULE_ALIAS("platform:nvsw-sn2201");
...@@ -1152,6 +1152,14 @@ config SIEMENS_SIMATIC_IPC
To compile this driver as a module, choose M here: the module
will be called simatic-ipc.
config WINMATE_FM07_KEYS
tristate "Winmate FM07/FM07P front-panel keys driver"
depends on INPUT
help
Winmate FM07 and FM07P in-vehicle computers have a row of five
buttons below the display. This module adds an input device
that delivers key events when these buttons are pressed.
endif # X86_PLATFORM_DEVICES
config PMC_ATOM
......
...@@ -130,3 +130,6 @@ obj-$(CONFIG_PMC_ATOM) += pmc_atom.o
# Siemens Simatic Industrial PCs
obj-$(CONFIG_SIEMENS_SIMATIC_IPC) += simatic-ipc.o
# Winmate
obj-$(CONFIG_WINMATE_FM07_KEYS) += winmate-fm07-keys.o
...@@ -192,26 +192,6 @@ struct smu_metrics {
u64 timecondition_notmet_totaltime[SOC_SUBSYSTEM_IP_MAX];
} __packed;
static int amd_pmc_get_smu_version(struct amd_pmc_dev *dev)
{
int rc;
u32 val;
rc = amd_pmc_send_cmd(dev, 0, &val, SMU_MSG_GETSMUVERSION, 1);
if (rc)
return rc;
dev->smu_program = (val >> 24) & GENMASK(7, 0);
dev->major = (val >> 16) & GENMASK(7, 0);
dev->minor = (val >> 8) & GENMASK(7, 0);
dev->rev = (val >> 0) & GENMASK(7, 0);
dev_dbg(dev->dev, "SMU program %u version is %u.%u.%u\n",
dev->smu_program, dev->major, dev->minor, dev->rev);
return 0;
}
static int amd_pmc_stb_debugfs_open(struct inode *inode, struct file *filp)
{
struct amd_pmc_dev *dev = filp->f_inode->i_private;
...@@ -294,6 +274,40 @@ static const struct file_operations amd_pmc_stb_debugfs_fops_v2 = {
.release = amd_pmc_stb_debugfs_release_v2,
};
#if defined(CONFIG_SUSPEND) || defined(CONFIG_DEBUG_FS)
static int amd_pmc_setup_smu_logging(struct amd_pmc_dev *dev)
{
if (dev->cpu_id == AMD_CPU_ID_PCO) {
dev_warn_once(dev->dev, "SMU debugging info not supported on this platform\n");
return -EINVAL;
}
/* Get Active devices list from SMU */
if (!dev->active_ips)
amd_pmc_send_cmd(dev, 0, &dev->active_ips, SMU_MSG_GET_SUP_CONSTRAINTS, 1);
/* Get dram address */
if (!dev->smu_virt_addr) {
u32 phys_addr_low, phys_addr_hi;
u64 smu_phys_addr;
amd_pmc_send_cmd(dev, 0, &phys_addr_low, SMU_MSG_LOG_GETDRAM_ADDR_LO, 1);
amd_pmc_send_cmd(dev, 0, &phys_addr_hi, SMU_MSG_LOG_GETDRAM_ADDR_HI, 1);
smu_phys_addr = ((u64)phys_addr_hi << 32 | phys_addr_low);
dev->smu_virt_addr = devm_ioremap(dev->dev, smu_phys_addr,
sizeof(struct smu_metrics));
if (!dev->smu_virt_addr)
return -ENOMEM;
}
/* Start the logging */
amd_pmc_send_cmd(dev, 0, NULL, SMU_MSG_LOG_RESET, 0);
amd_pmc_send_cmd(dev, 0, NULL, SMU_MSG_LOG_START, 0);
return 0;
}
static int amd_pmc_idlemask_read(struct amd_pmc_dev *pdev, struct device *dev, static int amd_pmc_idlemask_read(struct amd_pmc_dev *pdev, struct device *dev,
struct seq_file *s) struct seq_file *s)
{ {
...@@ -321,11 +335,19 @@ static int amd_pmc_idlemask_read(struct amd_pmc_dev *pdev, struct device *dev, ...@@ -321,11 +335,19 @@ static int amd_pmc_idlemask_read(struct amd_pmc_dev *pdev, struct device *dev,
static int get_metrics_table(struct amd_pmc_dev *pdev, struct smu_metrics *table) static int get_metrics_table(struct amd_pmc_dev *pdev, struct smu_metrics *table)
{ {
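/* Lazily set up SMU logging the first time the metrics table is read */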
if (!pdev->smu_virt_addr) {
int ret = amd_pmc_setup_smu_logging(pdev);
if (ret)
return ret;
}
if (pdev->cpu_id == AMD_CPU_ID_PCO) if (pdev->cpu_id == AMD_CPU_ID_PCO)
return -ENODEV; return -ENODEV;
memcpy_fromio(table, pdev->smu_virt_addr, sizeof(struct smu_metrics)); memcpy_fromio(table, pdev->smu_virt_addr, sizeof(struct smu_metrics));
return 0; return 0;
} }
#endif /* CONFIG_SUSPEND || CONFIG_DEBUG_FS */
#ifdef CONFIG_SUSPEND #ifdef CONFIG_SUSPEND
static void amd_pmc_validate_deepest(struct amd_pmc_dev *pdev) static void amd_pmc_validate_deepest(struct amd_pmc_dev *pdev)
...@@ -379,6 +401,17 @@ static int s0ix_stats_show(struct seq_file *s, void *unused) ...@@ -379,6 +401,17 @@ static int s0ix_stats_show(struct seq_file *s, void *unused)
struct amd_pmc_dev *dev = s->private; struct amd_pmc_dev *dev = s->private;
u64 entry_time, exit_time, residency; u64 entry_time, exit_time, residency;
/* Use FCH registers to get the S0ix stats */
if (!dev->fch_virt_addr) {
u32 base_addr_lo = FCH_BASE_PHY_ADDR_LOW;
u32 base_addr_hi = FCH_BASE_PHY_ADDR_HIGH;
u64 fch_phys_addr = ((u64)base_addr_hi << 32 | base_addr_lo);
dev->fch_virt_addr = devm_ioremap(dev->dev, fch_phys_addr, FCH_SSC_MAPPING_SIZE);
if (!dev->fch_virt_addr)
return -ENOMEM;
}
entry_time = ioread32(dev->fch_virt_addr + FCH_S0I3_ENTRY_TIME_H_OFFSET); entry_time = ioread32(dev->fch_virt_addr + FCH_S0I3_ENTRY_TIME_H_OFFSET);
entry_time = entry_time << 32 | ioread32(dev->fch_virt_addr + FCH_S0I3_ENTRY_TIME_L_OFFSET); entry_time = entry_time << 32 | ioread32(dev->fch_virt_addr + FCH_S0I3_ENTRY_TIME_L_OFFSET);
...@@ -398,11 +431,38 @@ static int s0ix_stats_show(struct seq_file *s, void *unused) ...@@ -398,11 +431,38 @@ static int s0ix_stats_show(struct seq_file *s, void *unused)
} }
DEFINE_SHOW_ATTRIBUTE(s0ix_stats); DEFINE_SHOW_ATTRIBUTE(s0ix_stats);
static int amd_pmc_get_smu_version(struct amd_pmc_dev *dev)
{
int rc;
u32 val;
rc = amd_pmc_send_cmd(dev, 0, &val, SMU_MSG_GETSMUVERSION, 1);
if (rc)
return rc;
dev->smu_program = (val >> 24) & GENMASK(7, 0);
dev->major = (val >> 16) & GENMASK(7, 0);
dev->minor = (val >> 8) & GENMASK(7, 0);
dev->rev = (val >> 0) & GENMASK(7, 0);
dev_dbg(dev->dev, "SMU program %u version is %u.%u.%u\n",
dev->smu_program, dev->major, dev->minor, dev->rev);
return 0;
}
static int amd_pmc_idlemask_show(struct seq_file *s, void *unused) static int amd_pmc_idlemask_show(struct seq_file *s, void *unused)
{ {
struct amd_pmc_dev *dev = s->private; struct amd_pmc_dev *dev = s->private;
int rc; int rc;
/* we haven't yet read SMU version */
if (!dev->major) {
rc = amd_pmc_get_smu_version(dev);
if (rc)
return rc;
}
if (dev->major > 56 || (dev->major >= 55 && dev->minor >= 37)) { if (dev->major > 56 || (dev->major >= 55 && dev->minor >= 37)) {
rc = amd_pmc_idlemask_read(dev, NULL, s); rc = amd_pmc_idlemask_read(dev, NULL, s);
if (rc) if (rc)
...@@ -449,32 +509,6 @@ static inline void amd_pmc_dbgfs_unregister(struct amd_pmc_dev *dev) ...@@ -449,32 +509,6 @@ static inline void amd_pmc_dbgfs_unregister(struct amd_pmc_dev *dev)
} }
#endif /* CONFIG_DEBUG_FS */ #endif /* CONFIG_DEBUG_FS */
static int amd_pmc_setup_smu_logging(struct amd_pmc_dev *dev)
{
u32 phys_addr_low, phys_addr_hi;
u64 smu_phys_addr;
if (dev->cpu_id == AMD_CPU_ID_PCO)
return -EINVAL;
/* Get Active devices list from SMU */
amd_pmc_send_cmd(dev, 0, &dev->active_ips, SMU_MSG_GET_SUP_CONSTRAINTS, 1);
/* Get dram address */
amd_pmc_send_cmd(dev, 0, &phys_addr_low, SMU_MSG_LOG_GETDRAM_ADDR_LO, 1);
amd_pmc_send_cmd(dev, 0, &phys_addr_hi, SMU_MSG_LOG_GETDRAM_ADDR_HI, 1);
smu_phys_addr = ((u64)phys_addr_hi << 32 | phys_addr_low);
dev->smu_virt_addr = devm_ioremap(dev->dev, smu_phys_addr, sizeof(struct smu_metrics));
if (!dev->smu_virt_addr)
return -ENOMEM;
/* Start the logging */
amd_pmc_send_cmd(dev, 0, NULL, SMU_MSG_LOG_START, 0);
return 0;
}
static void amd_pmc_dump_registers(struct amd_pmc_dev *dev) static void amd_pmc_dump_registers(struct amd_pmc_dev *dev)
{ {
u32 value, message, argument, response; u32 value, message, argument, response;
...@@ -639,8 +673,7 @@ static void amd_pmc_s2idle_prepare(void) ...@@ -639,8 +673,7 @@ static void amd_pmc_s2idle_prepare(void)
u32 arg = 1; u32 arg = 1;
/* Reset and Start SMU logging - to monitor the s0i3 stats */ /* Reset and Start SMU logging - to monitor the s0i3 stats */
amd_pmc_send_cmd(pdev, 0, NULL, SMU_MSG_LOG_RESET, 0); amd_pmc_setup_smu_logging(pdev);
amd_pmc_send_cmd(pdev, 0, NULL, SMU_MSG_LOG_START, 0);
/* Activate CZN specific RTC functionality */ /* Activate CZN specific RTC functionality */
if (pdev->cpu_id == AMD_CPU_ID_CZN) { if (pdev->cpu_id == AMD_CPU_ID_CZN) {
...@@ -790,7 +823,7 @@ static int amd_pmc_probe(struct platform_device *pdev) ...@@ -790,7 +823,7 @@ static int amd_pmc_probe(struct platform_device *pdev)
struct amd_pmc_dev *dev = &pmc; struct amd_pmc_dev *dev = &pmc;
struct pci_dev *rdev; struct pci_dev *rdev;
u32 base_addr_lo, base_addr_hi; u32 base_addr_lo, base_addr_hi;
u64 base_addr, fch_phys_addr; u64 base_addr;
int err; int err;
u32 val; u32 val;
...@@ -844,28 +877,12 @@ static int amd_pmc_probe(struct platform_device *pdev) ...@@ -844,28 +877,12 @@ static int amd_pmc_probe(struct platform_device *pdev)
mutex_init(&dev->lock); mutex_init(&dev->lock);
/* Use FCH registers to get the S0ix stats */
base_addr_lo = FCH_BASE_PHY_ADDR_LOW;
base_addr_hi = FCH_BASE_PHY_ADDR_HIGH;
fch_phys_addr = ((u64)base_addr_hi << 32 | base_addr_lo);
dev->fch_virt_addr = devm_ioremap(dev->dev, fch_phys_addr, FCH_SSC_MAPPING_SIZE);
if (!dev->fch_virt_addr) {
err = -ENOMEM;
goto err_pci_dev_put;
}
/* Use SMU to get the s0i3 debug stats */
err = amd_pmc_setup_smu_logging(dev);
if (err)
dev_err(dev->dev, "SMU debugging info not supported on this platform\n");
if (enable_stb && dev->cpu_id == AMD_CPU_ID_YC) { if (enable_stb && dev->cpu_id == AMD_CPU_ID_YC) {
err = amd_pmc_s2d_init(dev); err = amd_pmc_s2d_init(dev);
if (err) if (err)
return err; return err;
} }
amd_pmc_get_smu_version(dev);
platform_set_drvdata(pdev, dev); platform_set_drvdata(pdev, dev);
#ifdef CONFIG_SUSPEND #ifdef CONFIG_SUSPEND
err = acpi_register_lps0_dev(&amd_pmc_s2idle_dev_ops); err = acpi_register_lps0_dev(&amd_pmc_s2idle_dev_ops);
......
...@@ -553,6 +553,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = { ...@@ -553,6 +553,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
{ KE_KEY, 0x7D, { KEY_BLUETOOTH } }, /* Bluetooth Enable */ { KE_KEY, 0x7D, { KEY_BLUETOOTH } }, /* Bluetooth Enable */
{ KE_KEY, 0x7E, { KEY_BLUETOOTH } }, /* Bluetooth Disable */ { KE_KEY, 0x7E, { KEY_BLUETOOTH } }, /* Bluetooth Disable */
{ KE_KEY, 0x82, { KEY_CAMERA } }, { KE_KEY, 0x82, { KEY_CAMERA } },
{ KE_KEY, 0x86, { KEY_PROG1 } }, /* MyASUS Key */
{ KE_KEY, 0x88, { KEY_RFKILL } }, /* Radio Toggle Key */ { KE_KEY, 0x88, { KEY_RFKILL } }, /* Radio Toggle Key */
{ KE_KEY, 0x8A, { KEY_PROG1 } }, /* Color enhancement mode */ { KE_KEY, 0x8A, { KEY_PROG1 } }, /* Color enhancement mode */
{ KE_KEY, 0x8C, { KEY_SWITCHVIDEOMODE } }, /* SDSP DVI only */ { KE_KEY, 0x8C, { KEY_SWITCHVIDEOMODE } }, /* SDSP DVI only */
......
...@@ -2534,7 +2534,7 @@ static struct attribute *asus_fan_curve_attr[] = { ...@@ -2534,7 +2534,7 @@ static struct attribute *asus_fan_curve_attr[] = {
static umode_t asus_fan_curve_is_visible(struct kobject *kobj, static umode_t asus_fan_curve_is_visible(struct kobject *kobj,
struct attribute *attr, int idx) struct attribute *attr, int idx)
{ {
struct device *dev = container_of(kobj, struct device, kobj); struct device *dev = kobj_to_dev(kobj);
struct asus_wmi *asus = dev_get_drvdata(dev->parent); struct asus_wmi *asus = dev_get_drvdata(dev->parent);
/* /*
...@@ -3114,7 +3114,7 @@ static void asus_wmi_handle_event_code(int code, struct asus_wmi *asus) ...@@ -3114,7 +3114,7 @@ static void asus_wmi_handle_event_code(int code, struct asus_wmi *asus)
if (!sparse_keymap_report_event(asus->inputdev, code, if (!sparse_keymap_report_event(asus->inputdev, code,
key_value, autorelease)) key_value, autorelease))
pr_info("Unknown key %x pressed\n", code); pr_info("Unknown key code 0x%x\n", code);
} }
static void asus_wmi_notify(u32 value, void *context) static void asus_wmi_notify(u32 value, void *context)
......
...@@ -40,13 +40,10 @@ ...@@ -40,13 +40,10 @@
static struct platform_device *dcdbas_pdev; static struct platform_device *dcdbas_pdev;
static u8 *smi_data_buf;
static dma_addr_t smi_data_buf_handle;
static unsigned long smi_data_buf_size;
static unsigned long max_smi_data_buf_size = MAX_SMI_DATA_BUF_SIZE; static unsigned long max_smi_data_buf_size = MAX_SMI_DATA_BUF_SIZE;
static u32 smi_data_buf_phys_addr;
static DEFINE_MUTEX(smi_data_lock); static DEFINE_MUTEX(smi_data_lock);
static u8 *bios_buffer; static u8 *bios_buffer;
static struct smi_buffer smi_buf;
static unsigned int host_control_action; static unsigned int host_control_action;
static unsigned int host_control_smi_type; static unsigned int host_control_smi_type;
...@@ -54,23 +51,49 @@ static unsigned int host_control_on_shutdown; ...@@ -54,23 +51,49 @@ static unsigned int host_control_on_shutdown;
static bool wsmt_enabled; static bool wsmt_enabled;
int dcdbas_smi_alloc(struct smi_buffer *smi_buffer, unsigned long size)
{
smi_buffer->virt = dma_alloc_coherent(&dcdbas_pdev->dev, size,
&smi_buffer->dma, GFP_KERNEL);
if (!smi_buffer->virt) {
dev_dbg(&dcdbas_pdev->dev,
"%s: failed to allocate memory size %lu\n",
__func__, size);
return -ENOMEM;
}
smi_buffer->size = size;
dev_dbg(&dcdbas_pdev->dev, "%s: phys: %x size: %lu\n",
__func__, (u32)smi_buffer->dma, smi_buffer->size);
return 0;
}
EXPORT_SYMBOL_GPL(dcdbas_smi_alloc);
void dcdbas_smi_free(struct smi_buffer *smi_buffer)
{
if (!smi_buffer->virt)
return;
dev_dbg(&dcdbas_pdev->dev, "%s: phys: %x size: %lu\n",
__func__, (u32)smi_buffer->dma, smi_buffer->size);
dma_free_coherent(&dcdbas_pdev->dev, smi_buffer->size,
smi_buffer->virt, smi_buffer->dma);
smi_buffer->virt = NULL;
smi_buffer->dma = 0;
smi_buffer->size = 0;
}
EXPORT_SYMBOL_GPL(dcdbas_smi_free);
/** /**
* smi_data_buf_free: free SMI data buffer * smi_data_buf_free: free SMI data buffer
*/ */
static void smi_data_buf_free(void) static void smi_data_buf_free(void)
{ {
if (!smi_data_buf || wsmt_enabled) if (!smi_buf.virt || wsmt_enabled)
return; return;
dev_dbg(&dcdbas_pdev->dev, "%s: phys: %x size: %lu\n", dcdbas_smi_free(&smi_buf);
__func__, smi_data_buf_phys_addr, smi_data_buf_size);
dma_free_coherent(&dcdbas_pdev->dev, smi_data_buf_size, smi_data_buf,
smi_data_buf_handle);
smi_data_buf = NULL;
smi_data_buf_handle = 0;
smi_data_buf_phys_addr = 0;
smi_data_buf_size = 0;
} }
/** /**
...@@ -78,39 +101,29 @@ static void smi_data_buf_free(void) ...@@ -78,39 +101,29 @@ static void smi_data_buf_free(void)
*/ */
static int smi_data_buf_realloc(unsigned long size) static int smi_data_buf_realloc(unsigned long size)
{ {
void *buf; struct smi_buffer tmp;
dma_addr_t handle; int ret;
if (smi_data_buf_size >= size) if (smi_buf.size >= size)
return 0; return 0;
if (size > max_smi_data_buf_size) if (size > max_smi_data_buf_size)
return -EINVAL; return -EINVAL;
/* new buffer is needed */ /* new buffer is needed */
buf = dma_alloc_coherent(&dcdbas_pdev->dev, size, &handle, GFP_KERNEL); ret = dcdbas_smi_alloc(&tmp, size);
if (!buf) { if (ret)
dev_dbg(&dcdbas_pdev->dev, return ret;
"%s: failed to allocate memory size %lu\n",
__func__, size);
return -ENOMEM;
}
/* memory zeroed by dma_alloc_coherent */
if (smi_data_buf) /* memory zeroed by dma_alloc_coherent */
memcpy(buf, smi_data_buf, smi_data_buf_size); if (smi_buf.virt)
memcpy(tmp.virt, smi_buf.virt, smi_buf.size);
/* free any existing buffer */ /* free any existing buffer */
smi_data_buf_free(); smi_data_buf_free();
/* set up new buffer for use */ /* set up new buffer for use */
smi_data_buf = buf; smi_buf = tmp;
smi_data_buf_handle = handle;
smi_data_buf_phys_addr = (u32) virt_to_phys(buf);
smi_data_buf_size = size;
dev_dbg(&dcdbas_pdev->dev, "%s: phys: %x size: %lu\n",
__func__, smi_data_buf_phys_addr, smi_data_buf_size);
return 0; return 0;
} }
...@@ -119,14 +132,14 @@ static ssize_t smi_data_buf_phys_addr_show(struct device *dev, ...@@ -119,14 +132,14 @@ static ssize_t smi_data_buf_phys_addr_show(struct device *dev,
struct device_attribute *attr, struct device_attribute *attr,
char *buf) char *buf)
{ {
return sprintf(buf, "%x\n", smi_data_buf_phys_addr); return sprintf(buf, "%x\n", (u32)smi_buf.dma);
} }
static ssize_t smi_data_buf_size_show(struct device *dev, static ssize_t smi_data_buf_size_show(struct device *dev,
struct device_attribute *attr, struct device_attribute *attr,
char *buf) char *buf)
{ {
return sprintf(buf, "%lu\n", smi_data_buf_size); return sprintf(buf, "%lu\n", smi_buf.size);
} }
static ssize_t smi_data_buf_size_store(struct device *dev, static ssize_t smi_data_buf_size_store(struct device *dev,
...@@ -155,8 +168,8 @@ static ssize_t smi_data_read(struct file *filp, struct kobject *kobj, ...@@ -155,8 +168,8 @@ static ssize_t smi_data_read(struct file *filp, struct kobject *kobj,
ssize_t ret; ssize_t ret;
mutex_lock(&smi_data_lock); mutex_lock(&smi_data_lock);
ret = memory_read_from_buffer(buf, count, &pos, smi_data_buf, ret = memory_read_from_buffer(buf, count, &pos, smi_buf.virt,
smi_data_buf_size); smi_buf.size);
mutex_unlock(&smi_data_lock); mutex_unlock(&smi_data_lock);
return ret; return ret;
} }
...@@ -176,7 +189,7 @@ static ssize_t smi_data_write(struct file *filp, struct kobject *kobj, ...@@ -176,7 +189,7 @@ static ssize_t smi_data_write(struct file *filp, struct kobject *kobj,
if (ret) if (ret)
goto out; goto out;
memcpy(smi_data_buf + pos, buf, count); memcpy(smi_buf.virt + pos, buf, count);
ret = count; ret = count;
out: out:
mutex_unlock(&smi_data_lock); mutex_unlock(&smi_data_lock);
...@@ -307,11 +320,11 @@ static ssize_t smi_request_store(struct device *dev, ...@@ -307,11 +320,11 @@ static ssize_t smi_request_store(struct device *dev,
mutex_lock(&smi_data_lock); mutex_lock(&smi_data_lock);
if (smi_data_buf_size < sizeof(struct smi_cmd)) { if (smi_buf.size < sizeof(struct smi_cmd)) {
ret = -ENODEV; ret = -ENODEV;
goto out; goto out;
} }
smi_cmd = (struct smi_cmd *)smi_data_buf; smi_cmd = (struct smi_cmd *)smi_buf.virt;
switch (val) { switch (val) {
case 2: case 2:
...@@ -327,20 +340,20 @@ static ssize_t smi_request_store(struct device *dev, ...@@ -327,20 +340,20 @@ static ssize_t smi_request_store(struct device *dev,
* Provide physical address of command buffer field within * Provide physical address of command buffer field within
* the struct smi_cmd to BIOS. * the struct smi_cmd to BIOS.
* *
* Because the address that smi_cmd (smi_data_buf) points to * Because the address that smi_cmd (smi_buf.virt) points to
* will be from memremap() of a non-memory address if WSMT * will be from memremap() of a non-memory address if WSMT
* is present, we can't use virt_to_phys() on smi_cmd, so * is present, we can't use virt_to_phys() on smi_cmd, so
* we have to use the physical address that was saved when * we have to use the physical address that was saved when
* the virtual address for smi_cmd was received. * the virtual address for smi_cmd was received.
*/ */
smi_cmd->ebx = smi_data_buf_phys_addr + smi_cmd->ebx = (u32)smi_buf.dma +
offsetof(struct smi_cmd, command_buffer); offsetof(struct smi_cmd, command_buffer);
ret = dcdbas_smi_request(smi_cmd); ret = dcdbas_smi_request(smi_cmd);
if (!ret) if (!ret)
ret = count; ret = count;
break; break;
case 0: case 0:
memset(smi_data_buf, 0, smi_data_buf_size); memset(smi_buf.virt, 0, smi_buf.size);
ret = count; ret = count;
break; break;
default: default:
...@@ -356,7 +369,7 @@ static ssize_t smi_request_store(struct device *dev, ...@@ -356,7 +369,7 @@ static ssize_t smi_request_store(struct device *dev,
/** /**
* host_control_smi: generate host control SMI * host_control_smi: generate host control SMI
* *
* Caller must set up the host control command in smi_data_buf. * Caller must set up the host control command in smi_buf.virt.
*/ */
static int host_control_smi(void) static int host_control_smi(void)
{ {
...@@ -367,14 +380,14 @@ static int host_control_smi(void) ...@@ -367,14 +380,14 @@ static int host_control_smi(void)
s8 cmd_status; s8 cmd_status;
u8 index; u8 index;
apm_cmd = (struct apm_cmd *)smi_data_buf; apm_cmd = (struct apm_cmd *)smi_buf.virt;
apm_cmd->status = ESM_STATUS_CMD_UNSUCCESSFUL; apm_cmd->status = ESM_STATUS_CMD_UNSUCCESSFUL;
switch (host_control_smi_type) { switch (host_control_smi_type) {
case HC_SMITYPE_TYPE1: case HC_SMITYPE_TYPE1:
spin_lock_irqsave(&rtc_lock, flags); spin_lock_irqsave(&rtc_lock, flags);
/* write SMI data buffer physical address */ /* write SMI data buffer physical address */
data = (u8 *)&smi_data_buf_phys_addr; data = (u8 *)&smi_buf.dma;
for (index = PE1300_CMOS_CMD_STRUCT_PTR; for (index = PE1300_CMOS_CMD_STRUCT_PTR;
index < (PE1300_CMOS_CMD_STRUCT_PTR + 4); index < (PE1300_CMOS_CMD_STRUCT_PTR + 4);
index++, data++) { index++, data++) {
...@@ -405,7 +418,7 @@ static int host_control_smi(void) ...@@ -405,7 +418,7 @@ static int host_control_smi(void)
case HC_SMITYPE_TYPE3: case HC_SMITYPE_TYPE3:
spin_lock_irqsave(&rtc_lock, flags); spin_lock_irqsave(&rtc_lock, flags);
/* write SMI data buffer physical address */ /* write SMI data buffer physical address */
data = (u8 *)&smi_data_buf_phys_addr; data = (u8 *)&smi_buf.dma;
for (index = PE1400_CMOS_CMD_STRUCT_PTR; for (index = PE1400_CMOS_CMD_STRUCT_PTR;
index < (PE1400_CMOS_CMD_STRUCT_PTR + 4); index < (PE1400_CMOS_CMD_STRUCT_PTR + 4);
index++, data++) { index++, data++) {
...@@ -450,7 +463,7 @@ static int host_control_smi(void) ...@@ -450,7 +463,7 @@ static int host_control_smi(void)
* This function is called by the driver after the system has * This function is called by the driver after the system has
* finished shutting down if the user application specified a * finished shutting down if the user application specified a
* host control action to perform on shutdown. It is safe to * host control action to perform on shutdown. It is safe to
* use smi_data_buf at this point because the system has finished * use smi_buf.virt at this point because the system has finished
* shutting down and no userspace apps are running. * shutting down and no userspace apps are running.
*/ */
static void dcdbas_host_control(void) static void dcdbas_host_control(void)
...@@ -464,18 +477,18 @@ static void dcdbas_host_control(void) ...@@ -464,18 +477,18 @@ static void dcdbas_host_control(void)
action = host_control_action; action = host_control_action;
host_control_action = HC_ACTION_NONE; host_control_action = HC_ACTION_NONE;
if (!smi_data_buf) { if (!smi_buf.virt) {
dev_dbg(&dcdbas_pdev->dev, "%s: no SMI buffer\n", __func__); dev_dbg(&dcdbas_pdev->dev, "%s: no SMI buffer\n", __func__);
return; return;
} }
if (smi_data_buf_size < sizeof(struct apm_cmd)) { if (smi_buf.size < sizeof(struct apm_cmd)) {
dev_dbg(&dcdbas_pdev->dev, "%s: SMI buffer too small\n", dev_dbg(&dcdbas_pdev->dev, "%s: SMI buffer too small\n",
__func__); __func__);
return; return;
} }
apm_cmd = (struct apm_cmd *)smi_data_buf; apm_cmd = (struct apm_cmd *)smi_buf.virt;
/* power off takes precedence */ /* power off takes precedence */
if (action & HC_ACTION_HOST_CONTROL_POWEROFF) { if (action & HC_ACTION_HOST_CONTROL_POWEROFF) {
...@@ -583,11 +596,11 @@ static int dcdbas_check_wsmt(void) ...@@ -583,11 +596,11 @@ static int dcdbas_check_wsmt(void)
return -ENOMEM; return -ENOMEM;
} }
/* First 8 bytes is for a semaphore, not part of the smi_data_buf */ /* First 8 bytes is for a semaphore, not part of the smi_buf.virt */
smi_data_buf_phys_addr = bios_buf_paddr + 8; smi_buf.dma = bios_buf_paddr + 8;
smi_data_buf = bios_buffer + 8; smi_buf.virt = bios_buffer + 8;
smi_data_buf_size = remap_size - 8; smi_buf.size = remap_size - 8;
max_smi_data_buf_size = smi_data_buf_size; max_smi_data_buf_size = smi_buf.size;
wsmt_enabled = true; wsmt_enabled = true;
dev_info(&dcdbas_pdev->dev, dev_info(&dcdbas_pdev->dev,
"WSMT found, using firmware-provided SMI buffer.\n"); "WSMT found, using firmware-provided SMI buffer.\n");
......
...@@ -105,5 +105,14 @@ struct smm_eps_table { ...@@ -105,5 +105,14 @@ struct smm_eps_table {
u64 num_of_4k_pages; u64 num_of_4k_pages;
} __packed; } __packed;
struct smi_buffer {
u8 *virt;
unsigned long size;
dma_addr_t dma;
};
int dcdbas_smi_alloc(struct smi_buffer *smi_buffer, unsigned long size);
void dcdbas_smi_free(struct smi_buffer *smi_buffer);
#endif /* _DCDBAS_H_ */ #endif /* _DCDBAS_H_ */
...@@ -20,6 +20,7 @@ ...@@ -20,6 +20,7 @@
static int da_command_address; static int da_command_address;
static int da_command_code; static int da_command_code;
static struct smi_buffer smi_buf;
static struct calling_interface_buffer *buffer; static struct calling_interface_buffer *buffer;
static struct platform_device *platform_device; static struct platform_device *platform_device;
static DEFINE_MUTEX(smm_mutex); static DEFINE_MUTEX(smm_mutex);
...@@ -57,7 +58,7 @@ static int dell_smbios_smm_call(struct calling_interface_buffer *input) ...@@ -57,7 +58,7 @@ static int dell_smbios_smm_call(struct calling_interface_buffer *input)
command.magic = SMI_CMD_MAGIC; command.magic = SMI_CMD_MAGIC;
command.command_address = da_command_address; command.command_address = da_command_address;
command.command_code = da_command_code; command.command_code = da_command_code;
command.ebx = virt_to_phys(buffer); command.ebx = smi_buf.dma;
command.ecx = 0x42534931; command.ecx = 0x42534931;
mutex_lock(&smm_mutex); mutex_lock(&smm_mutex);
...@@ -101,9 +102,10 @@ int init_dell_smbios_smm(void) ...@@ -101,9 +102,10 @@ int init_dell_smbios_smm(void)
* Allocate buffer below 4GB for SMI data--only 32-bit physical addr * Allocate buffer below 4GB for SMI data--only 32-bit physical addr
* is passed to SMI handler. * is passed to SMI handler.
*/ */
buffer = (void *)__get_free_page(GFP_KERNEL | GFP_DMA32); ret = dcdbas_smi_alloc(&smi_buf, PAGE_SIZE);
if (!buffer) if (ret)
return -ENOMEM; return ret;
buffer = (void *)smi_buf.virt;
dmi_walk(find_cmd_address, NULL); dmi_walk(find_cmd_address, NULL);
...@@ -138,7 +140,7 @@ int init_dell_smbios_smm(void) ...@@ -138,7 +140,7 @@ int init_dell_smbios_smm(void)
fail_wsmt: fail_wsmt:
fail_platform_device_alloc: fail_platform_device_alloc:
free_page((unsigned long)buffer); dcdbas_smi_free(&smi_buf);
return ret; return ret;
} }
...@@ -147,6 +149,6 @@ void exit_dell_smbios_smm(void) ...@@ -147,6 +149,6 @@ void exit_dell_smbios_smm(void)
if (platform_device) { if (platform_device) {
dell_smbios_unregister_device(&platform_device->dev); dell_smbios_unregister_device(&platform_device->dev);
platform_device_unregister(platform_device); platform_device_unregister(platform_device);
free_page((unsigned long)buffer); dcdbas_smi_free(&smi_buf);
} }
} }
...@@ -150,7 +150,9 @@ static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = { ...@@ -150,7 +150,9 @@ static const struct dmi_system_id gigabyte_wmi_known_working_platforms[] = {
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M DS3H"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B550M DS3H"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B660 GAMING X DDR4"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("B660 GAMING X DDR4"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("Z390 I AORUS PRO WIFI-CF"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("Z390 I AORUS PRO WIFI-CF"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("Z490 AORUS ELITE AC"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 AORUS ELITE"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 AORUS ELITE"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 AORUS ELITE WIFI"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 GAMING X"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 GAMING X"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 I AORUS PRO WIFI"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 I AORUS PRO WIFI"),
DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 UD"), DMI_EXACT_MATCH_GIGABYTE_BOARD_NAME("X570 UD"),
......
...@@ -605,6 +605,7 @@ static int hp_wmi_rfkill2_refresh(void) ...@@ -605,6 +605,7 @@ static int hp_wmi_rfkill2_refresh(void)
for (i = 0; i < rfkill2_count; i++) { for (i = 0; i < rfkill2_count; i++) {
int num = rfkill2[i].num; int num = rfkill2[i].num;
struct bios_rfkill2_device_state *devstate; struct bios_rfkill2_device_state *devstate;
devstate = &state.device[num]; devstate = &state.device[num];
if (num >= state.count || if (num >= state.count ||
...@@ -625,6 +626,7 @@ static ssize_t display_show(struct device *dev, struct device_attribute *attr, ...@@ -625,6 +626,7 @@ static ssize_t display_show(struct device *dev, struct device_attribute *attr,
char *buf) char *buf)
{ {
int value = hp_wmi_read_int(HPWMI_DISPLAY_QUERY); int value = hp_wmi_read_int(HPWMI_DISPLAY_QUERY);
if (value < 0) if (value < 0)
return value; return value;
return sprintf(buf, "%d\n", value); return sprintf(buf, "%d\n", value);
...@@ -634,6 +636,7 @@ static ssize_t hddtemp_show(struct device *dev, struct device_attribute *attr, ...@@ -634,6 +636,7 @@ static ssize_t hddtemp_show(struct device *dev, struct device_attribute *attr,
char *buf) char *buf)
{ {
int value = hp_wmi_read_int(HPWMI_HDDTEMP_QUERY); int value = hp_wmi_read_int(HPWMI_HDDTEMP_QUERY);
if (value < 0) if (value < 0)
return value; return value;
return sprintf(buf, "%d\n", value); return sprintf(buf, "%d\n", value);
...@@ -643,6 +646,7 @@ static ssize_t als_show(struct device *dev, struct device_attribute *attr, ...@@ -643,6 +646,7 @@ static ssize_t als_show(struct device *dev, struct device_attribute *attr,
char *buf) char *buf)
{ {
int value = hp_wmi_read_int(HPWMI_ALS_QUERY); int value = hp_wmi_read_int(HPWMI_ALS_QUERY);
if (value < 0) if (value < 0)
return value; return value;
return sprintf(buf, "%d\n", value); return sprintf(buf, "%d\n", value);
...@@ -652,6 +656,7 @@ static ssize_t dock_show(struct device *dev, struct device_attribute *attr, ...@@ -652,6 +656,7 @@ static ssize_t dock_show(struct device *dev, struct device_attribute *attr,
char *buf) char *buf)
{ {
int value = hp_wmi_get_dock_state(); int value = hp_wmi_get_dock_state();
if (value < 0) if (value < 0)
return value; return value;
return sprintf(buf, "%d\n", value); return sprintf(buf, "%d\n", value);
...@@ -661,6 +666,7 @@ static ssize_t tablet_show(struct device *dev, struct device_attribute *attr, ...@@ -661,6 +666,7 @@ static ssize_t tablet_show(struct device *dev, struct device_attribute *attr,
char *buf) char *buf)
{ {
int value = hp_wmi_get_tablet_mode(); int value = hp_wmi_get_tablet_mode();
if (value < 0) if (value < 0)
return value; return value;
return sprintf(buf, "%d\n", value); return sprintf(buf, "%d\n", value);
...@@ -671,6 +677,7 @@ static ssize_t postcode_show(struct device *dev, struct device_attribute *attr, ...@@ -671,6 +677,7 @@ static ssize_t postcode_show(struct device *dev, struct device_attribute *attr,
{ {
/* Get the POST error code of previous boot failure. */ /* Get the POST error code of previous boot failure. */
int value = hp_wmi_read_int(HPWMI_POSTCODEERROR_QUERY); int value = hp_wmi_read_int(HPWMI_POSTCODEERROR_QUERY);
if (value < 0) if (value < 0)
return value; return value;
return sprintf(buf, "0x%x\n", value); return sprintf(buf, "0x%x\n", value);
...@@ -1013,6 +1020,7 @@ static int __init hp_wmi_rfkill2_setup(struct platform_device *device) ...@@ -1013,6 +1020,7 @@ static int __init hp_wmi_rfkill2_setup(struct platform_device *device)
struct rfkill *rfkill; struct rfkill *rfkill;
enum rfkill_type type; enum rfkill_type type;
char *name; char *name;
switch (state.device[i].radio_type) { switch (state.device[i].radio_type) {
case HPWMI_WIFI: case HPWMI_WIFI:
type = RFKILL_TYPE_WLAN; type = RFKILL_TYPE_WLAN;
......
...@@ -4,6 +4,7 @@ ...@@ -4,6 +4,7 @@
# #
source "drivers/platform/x86/intel/atomisp2/Kconfig" source "drivers/platform/x86/intel/atomisp2/Kconfig"
source "drivers/platform/x86/intel/ifs/Kconfig"
source "drivers/platform/x86/intel/int1092/Kconfig" source "drivers/platform/x86/intel/int1092/Kconfig"
source "drivers/platform/x86/intel/int3472/Kconfig" source "drivers/platform/x86/intel/int3472/Kconfig"
source "drivers/platform/x86/intel/pmc/Kconfig" source "drivers/platform/x86/intel/pmc/Kconfig"
......
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
# #
obj-$(CONFIG_INTEL_ATOMISP2_PDX86) += atomisp2/ obj-$(CONFIG_INTEL_ATOMISP2_PDX86) += atomisp2/
obj-$(CONFIG_INTEL_IFS) += ifs/
obj-$(CONFIG_INTEL_SAR_INT1092) += int1092/ obj-$(CONFIG_INTEL_SAR_INT1092) += int1092/
obj-$(CONFIG_INTEL_SKL_INT3472) += int3472/ obj-$(CONFIG_INTEL_SKL_INT3472) += int3472/
obj-$(CONFIG_INTEL_PMC_CORE) += pmc/ obj-$(CONFIG_INTEL_PMC_CORE) += pmc/
......
...@@ -389,6 +389,8 @@ static int cht_int33fe_typec_probe(struct platform_device *pdev) ...@@ -389,6 +389,8 @@ static int cht_int33fe_typec_probe(struct platform_device *pdev)
goto out_unregister_fusb302; goto out_unregister_fusb302;
} }
platform_set_drvdata(pdev, data);
return 0; return 0;
out_unregister_fusb302: out_unregister_fusb302:
......
...@@ -238,7 +238,7 @@ static bool intel_hid_evaluate_method(acpi_handle handle, ...@@ -238,7 +238,7 @@ static bool intel_hid_evaluate_method(acpi_handle handle,
method_name = (char *)intel_hid_dsm_fn_to_method[fn_index]; method_name = (char *)intel_hid_dsm_fn_to_method[fn_index];
if (!(intel_hid_dsm_fn_mask & fn_index)) if (!(intel_hid_dsm_fn_mask & BIT(fn_index)))
goto skip_dsm_eval; goto skip_dsm_eval;
obj = acpi_evaluate_dsm_typed(handle, &intel_dsm_guid, obj = acpi_evaluate_dsm_typed(handle, &intel_dsm_guid,
......
config INTEL_IFS
tristate "Intel In Field Scan"
depends on X86 && CPU_SUP_INTEL && 64BIT && SMP
select INTEL_IFS_DEVICE
help
Enable support for the In Field Scan capability in select
CPUs. The capability allows for running low level tests, using
a scan image distributed by Intel via Github, to validate CPU
operation beyond baseline RAS capabilities. To compile this
support as a module, choose M here. The module will be called
intel_ifs.
If unsure, say N.
obj-$(CONFIG_INTEL_IFS) += intel_ifs.o
intel_ifs-objs := core.o load.o runtest.o sysfs.o
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2022 Intel Corporation. */
#include <linux/module.h>
#include <linux/kdev_t.h>
#include <linux/semaphore.h>
#include <asm/cpu_device_id.h>
#include "ifs.h"
#define X86_MATCH(model) \
X86_MATCH_VENDOR_FAM_MODEL_FEATURE(INTEL, 6, \
INTEL_FAM6_##model, X86_FEATURE_CORE_CAPABILITIES, NULL)
static const struct x86_cpu_id ifs_cpu_ids[] __initconst = {
X86_MATCH(SAPPHIRERAPIDS_X),
{}
};
MODULE_DEVICE_TABLE(x86cpu, ifs_cpu_ids);
static struct ifs_device ifs_device = {
.data = {
.integrity_cap_bit = MSR_INTEGRITY_CAPS_PERIODIC_BIST_BIT,
},
.misc = {
.name = "intel_ifs_0",
.nodename = "intel_ifs/0",
.minor = MISC_DYNAMIC_MINOR,
},
};
static int __init ifs_init(void)
{
const struct x86_cpu_id *m;
u64 msrval;
m = x86_match_cpu(ifs_cpu_ids);
if (!m)
return -ENODEV;
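/*
* IFS is enumerated in two steps: MSR_IA32_CORE_CAPS advertises the
* integrity capabilities MSR, and MSR_INTEGRITY_CAPS then advertises
* the individual test types (the scan test bit is checked below).
*/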
if (rdmsrl_safe(MSR_IA32_CORE_CAPS, &msrval))
return -ENODEV;
if (!(msrval & MSR_IA32_CORE_CAPS_INTEGRITY_CAPS))
return -ENODEV;
if (rdmsrl_safe(MSR_INTEGRITY_CAPS, &msrval))
return -ENODEV;
ifs_device.misc.groups = ifs_get_groups();
if ((msrval & BIT(ifs_device.data.integrity_cap_bit)) &&
!misc_register(&ifs_device.misc)) {
down(&ifs_sem);
ifs_load_firmware(ifs_device.misc.this_device);
up(&ifs_sem);
return 0;
}
return -ENODEV;
}
static void __exit ifs_exit(void)
{
misc_deregister(&ifs_device.misc);
}
module_init(ifs_init);
module_exit(ifs_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Intel In Field Scan (IFS) device");
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright(c) 2022 Intel Corporation. */
#ifndef _IFS_H_
#define _IFS_H_
/**
* DOC: In-Field Scan
*
* =============
* In-Field Scan
* =============
*
* Introduction
* ------------
*
* In Field Scan (IFS) is a hardware feature to run circuit level tests on
* a CPU core to detect problems that are not caught by parity or ECC checks.
* Future CPUs will support more than one type of test, which will show up
* with a new platform-device instance-id; for now only .0 is exposed.
*
*
* IFS Image
* ---------
*
* Intel provides a firmware file containing the scan tests via
* github [#f1]_. Similar to microcode there is a separate file for each
* family-model-stepping.
*
* IFS Image Loading
* -----------------
*
* The driver loads the tests into memory reserved by the BIOS local to each
* CPU socket in a two step process using writes to MSRs: first the SHA
* hashes for the test are loaded, then the tests themselves. Status MSRs
* provide feedback on the success/failure of these steps. When a new test
* file is installed it can be loaded by writing to the driver reload file::
*
* # echo 1 > /sys/devices/virtual/misc/intel_ifs_0/reload
*
* Similar to microcode, the current version of the scan tests is stored
* in a fixed location: /lib/firmware/intel/ifs.0/family-model-stepping.scan
*
* Running tests
* -------------
*
* Tests are run by the driver synchronizing execution of all threads on a
* core and then writing to the ACTIVATE_SCAN MSR on all threads. Instruction
* execution continues when:
*
* 1) All tests have completed.
* 2) Execution was interrupted.
* 3) A test detected a problem.
*
* Note that ALL THREADS ON THE CORE ARE EFFECTIVELY OFFLINE FOR THE
* DURATION OF THE TEST. This can be up to 200 milliseconds. If the system
* is running latency sensitive applications that cannot tolerate an
* interruption of this magnitude, the system administrator must arrange
* to migrate those applications to other cores before running a core test.
* It may also be necessary to redirect interrupts to other CPUs.
*
* In all cases reading the SCAN_STATUS MSR provides details on what
* happened. The driver makes the value of this MSR visible to applications
* via the "details" file (see below). Interrupted tests may be restarted.
*
* The IFS driver provides sysfs interfaces via /sys/devices/virtual/misc/intel_ifs_0/
* to control execution:
*
* Test a specific core::
*
* # echo <cpu#> > /sys/devices/virtual/misc/intel_ifs_0/run_test
*
* When HT is enabled any of the sibling cpu# can be specified to test
* its corresponding physical core. Since the tests are per physical core,
* the result of testing any thread is the same. All siblings must be online
* to run a core test. It is only necessary to test one thread.
*
* For example, to test the core corresponding to cpu5::
*
* # echo 5 > /sys/devices/virtual/misc/intel_ifs_0/run_test
*
* Results of the last test are provided in /sys::
*
* $ cat /sys/devices/virtual/misc/intel_ifs_0/status
* pass
*
* Status can be one of pass, fail, untested
*
* Additional details of the last test are provided by the details file::
*
* $ cat /sys/devices/virtual/misc/intel_ifs_0/details
* 0x8081
*
* The details file reports the hex value of the SCAN_STATUS MSR.
* Hardware defined error codes are documented in volume 4 of the Intel
* Software Developer's Manual but the error_code field may contain one of
* the following driver defined software codes:
*
* +------+--------------------+
* | 0xFD | Software timeout |
* +------+--------------------+
* | 0xFE | Partial completion |
* +------+--------------------+
*
* Driver design choices
* ---------------------
*
* 1) The ACTIVATE_SCAN MSR allows for running any consecutive subrange of
* available tests. But the driver always tries to run all tests and only
* uses the subrange feature to restart an interrupted test.
*
* 2) Hardware allows for some number of cores to be tested in parallel.
* The driver does not make use of this, it only tests one core at a time.
*
* .. [#f1] https://github.com/intel/TBD
*/
#include <linux/device.h>
#include <linux/miscdevice.h>
#define MSR_COPY_SCAN_HASHES 0x000002c2
#define MSR_SCAN_HASHES_STATUS 0x000002c3
#define MSR_AUTHENTICATE_AND_COPY_CHUNK 0x000002c4
#define MSR_CHUNKS_AUTHENTICATION_STATUS 0x000002c5
#define MSR_ACTIVATE_SCAN 0x000002c6
#define MSR_SCAN_STATUS 0x000002c7
#define SCAN_NOT_TESTED 0
#define SCAN_TEST_PASS 1
#define SCAN_TEST_FAIL 2
/* MSR_SCAN_HASHES_STATUS bit fields */
union ifs_scan_hashes_status {
u64 data;
struct {
u32 chunk_size :16;
u32 num_chunks :8;
u32 rsvd1 :8;
u32 error_code :8;
u32 rsvd2 :11;
u32 max_core_limit :12;
u32 valid :1;
};
};
/* MSR_CHUNKS_AUTH_STATUS bit fields */
union ifs_chunks_auth_status {
u64 data;
struct {
u32 valid_chunks :8;
u32 total_chunks :8;
u32 rsvd1 :16;
u32 error_code :8;
u32 rsvd2 :24;
};
};
/* MSR_ACTIVATE_SCAN bit fields */
union ifs_scan {
u64 data;
struct {
u32 start :8;
u32 stop :8;
u32 rsvd :16;
u32 delay :31;
u32 sigmce :1;
};
};
/* MSR_SCAN_STATUS bit fields */
union ifs_status {
u64 data;
struct {
u32 chunk_num :8;
u32 chunk_stop_index :8;
u32 rsvd1 :16;
u32 error_code :8;
u32 rsvd2 :22;
u32 control_error :1;
u32 signature_error :1;
};
};
/*
* Driver populated error-codes
* 0xFD: Test timed out before completing all the chunks.
* 0xFE: not all scan chunks were executed. Maximum forward progress retries exceeded.
*/
#define IFS_SW_TIMEOUT 0xFD
#define IFS_SW_PARTIAL_COMPLETION 0xFE
/**
* struct ifs_data - attributes related to intel IFS driver
* @integrity_cap_bit: MSR_INTEGRITY_CAPS bit enumerating this test
* @loaded_version: stores the currently loaded ifs image version.
* @loaded: If a valid test binary has been loaded into the memory
* @loading_error: Error occurred on another CPU while loading image
* @valid_chunks: number of chunks which could be validated.
* @status: it holds simple status pass/fail/untested
* @scan_details: opaque scan status code from h/w
*/
struct ifs_data {
int integrity_cap_bit;
int loaded_version;
bool loaded;
bool loading_error;
int valid_chunks;
int status;
u64 scan_details;
};
struct ifs_work {
struct work_struct w;
struct device *dev;
};
struct ifs_device {
struct ifs_data data;
struct miscdevice misc;
};
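/* Map a misc device back to the ifs_data embedded in its ifs_device */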
static inline struct ifs_data *ifs_get_data(struct device *dev)
{
struct miscdevice *m = dev_get_drvdata(dev);
struct ifs_device *d = container_of(m, struct ifs_device, misc);
return &d->data;
}
void ifs_load_firmware(struct device *dev);
int do_core_test(int cpu, struct device *dev);
const struct attribute_group **ifs_get_groups(void);
extern struct semaphore ifs_sem;
#endif
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2022 Intel Corporation. */
#include <linux/firmware.h>
#include <asm/cpu.h>
#include <linux/slab.h>
#include <asm/microcode_intel.h>
#include "ifs.h"
struct ifs_header {
u32 header_ver;
u32 blob_revision;
u32 date;
u32 processor_sig;
u32 check_sum;
u32 loader_rev;
u32 processor_flags;
u32 metadata_size;
u32 total_size;
u32 fusa_info;
u64 reserved;
};
#define IFS_HEADER_SIZE (sizeof(struct ifs_header))
static struct ifs_header *ifs_header_ptr; /* pointer to the ifs image header */
static u64 ifs_hash_ptr; /* Address of ifs metadata (hash) */
static u64 ifs_test_image_ptr; /* 256B aligned address of test pattern */
static DECLARE_COMPLETION(ifs_done);
static const char * const scan_hash_status[] = {
[0] = "No error reported",
[1] = "Attempt to copy scan hashes when copy already in progress",
[2] = "Secure Memory not set up correctly",
[3] = "FuSaInfo.ProgramID does not match or ff-mm-ss does not match",
[4] = "Reserved",
[5] = "Integrity check failed",
[6] = "Scan reload or test is in progress"
};
static const char * const scan_authentication_status[] = {
[0] = "No error reported",
[1] = "Attempt to authenticate a chunk which is already marked as authentic",
[2] = "Chunk authentication error. The hash of chunk did not match expected value"
};
/*
* To copy scan hashes and authenticate test chunks, the initiating cpu must
* point EDX:EAX to the linear address of the test image.
* wrmsr(MSR_COPY_SCAN_HASHES) performs the scan hash copy and
* wrmsr(MSR_AUTHENTICATE_AND_COPY_CHUNK) performs test chunk authentication and copy.
*/
static void copy_hashes_authenticate_chunks(struct work_struct *work)
{
struct ifs_work *local_work = container_of(work, struct ifs_work, w);
union ifs_scan_hashes_status hashes_status;
union ifs_chunks_auth_status chunk_status;
struct device *dev = local_work->dev;
int i, num_chunks, chunk_size;
struct ifs_data *ifsd;
u64 linear_addr, base;
u32 err_code;
ifsd = ifs_get_data(dev);
/* run scan hash copy */
wrmsrl(MSR_COPY_SCAN_HASHES, ifs_hash_ptr);
rdmsrl(MSR_SCAN_HASHES_STATUS, hashes_status.data);
/* enumerate the scan image information */
num_chunks = hashes_status.num_chunks;
chunk_size = hashes_status.chunk_size * 1024;
err_code = hashes_status.error_code;
if (!hashes_status.valid) {
ifsd->loading_error = true;
if (err_code >= ARRAY_SIZE(scan_hash_status)) {
dev_err(dev, "invalid error code 0x%x for hash copy\n", err_code);
goto done;
}
dev_err(dev, "Hash copy error : %s", scan_hash_status[err_code]);
goto done;
}
/* base linear address to the scan data */
base = ifs_test_image_ptr;
/* scan data authentication and copy chunks to secured memory */
for (i = 0; i < num_chunks; i++) {
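/* Each chunk is chunk_size bytes; the chunk index is encoded in the low bits of the address written to the MSR */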
linear_addr = base + i * chunk_size;
linear_addr |= i;
wrmsrl(MSR_AUTHENTICATE_AND_COPY_CHUNK, linear_addr);
rdmsrl(MSR_CHUNKS_AUTHENTICATION_STATUS, chunk_status.data);
ifsd->valid_chunks = chunk_status.valid_chunks;
err_code = chunk_status.error_code;
if (err_code) {
ifsd->loading_error = true;
if (err_code >= ARRAY_SIZE(scan_authentication_status)) {
dev_err(dev,
"invalid error code 0x%x for authentication\n", err_code);
goto done;
}
dev_err(dev, "Chunk authentication error %s\n",
scan_authentication_status[err_code]);
goto done;
}
}
done:
complete(&ifs_done);
}
/*
* IFS requires the scan chunks to be authenticated once per socket in the platform.
* Once a test chunk is authenticated, it is automatically copied to secured memory
* and authentication proceeds with the next chunk.
*/
static int scan_chunks_sanity_check(struct device *dev)
{
int metadata_size, curr_pkg, cpu, ret = -ENOMEM;
struct ifs_data *ifsd = ifs_get_data(dev);
bool *package_authenticated;
struct ifs_work local_work;
char *test_ptr;
package_authenticated = kcalloc(topology_max_packages(), sizeof(bool), GFP_KERNEL);
if (!package_authenticated)
return ret;
metadata_size = ifs_header_ptr->metadata_size;
/* Spec says that if the Meta Data Size = 0 then it should be treated as 2000 */
if (metadata_size == 0)
metadata_size = 2000;
/* Scan chunk start must be 256 byte aligned */
if ((metadata_size + IFS_HEADER_SIZE) % 256) {
dev_err(dev, "Scan pattern offset within the binary is not 256 byte aligned\n");
kfree(package_authenticated);
return -EINVAL;
}
test_ptr = (char *)ifs_header_ptr + IFS_HEADER_SIZE + metadata_size;
ifsd->loading_error = false;
ifs_test_image_ptr = (u64)test_ptr;
ifsd->loaded_version = ifs_header_ptr->blob_revision;
/* copy the scan hash and authenticate per package */
cpus_read_lock();
for_each_online_cpu(cpu) {
curr_pkg = topology_physical_package_id(cpu);
if (package_authenticated[curr_pkg])
continue;
reinit_completion(&ifs_done);
local_work.dev = dev;
INIT_WORK(&local_work.w, copy_hashes_authenticate_chunks);
schedule_work_on(cpu, &local_work.w);
wait_for_completion(&ifs_done);
if (ifsd->loading_error)
goto out;
package_authenticated[curr_pkg] = 1;
}
ret = 0;
out:
cpus_read_unlock();
kfree(package_authenticated);
return ret;
}
static int ifs_sanity_check(struct device *dev,
const struct microcode_header_intel *mc_header)
{
unsigned long total_size, data_size;
u32 sum, *mc;
total_size = get_totalsize(mc_header);
data_size = get_datasize(mc_header);
if ((data_size + MC_HEADER_SIZE > total_size) || (total_size % sizeof(u32))) {
dev_err(dev, "bad ifs data file size.\n");
return -EINVAL;
}
if (mc_header->ldrver != 1 || mc_header->hdrver != 1) {
dev_err(dev, "invalid/unknown ifs update format.\n");
return -EINVAL;
}
mc = (u32 *)mc_header;
sum = 0;
for (int i = 0; i < total_size / sizeof(u32); i++)
sum += mc[i];
if (sum) {
dev_err(dev, "bad ifs data checksum, aborting.\n");
return -EINVAL;
}
return 0;
}
static bool find_ifs_matching_signature(struct device *dev, struct ucode_cpu_info *uci,
const struct microcode_header_intel *shdr)
{
unsigned int mc_size;
mc_size = get_totalsize(shdr);
if (!mc_size || ifs_sanity_check(dev, shdr) < 0) {
dev_err(dev, "ifs sanity check failure\n");
return false;
}
if (!intel_cpu_signatures_match(uci->cpu_sig.sig, uci->cpu_sig.pf, shdr->sig, shdr->pf)) {
dev_err(dev, "ifs signature, pf not matching\n");
return false;
}
return true;
}
static bool ifs_image_sanity_check(struct device *dev, const struct microcode_header_intel *data)
{
struct ucode_cpu_info uci;
intel_cpu_collect_info(&uci);
return find_ifs_matching_signature(dev, &uci, data);
}
/*
* Load ifs image. Before loading ifs module, the ifs image must be located
* in /lib/firmware/intel/ifs and named as {family/model/stepping}.{testname}.
*/
void ifs_load_firmware(struct device *dev)
{
struct ifs_data *ifsd = ifs_get_data(dev);
const struct firmware *fw;
char scan_path[32];
int ret;
snprintf(scan_path, sizeof(scan_path), "intel/ifs/%02x-%02x-%02x.scan",
boot_cpu_data.x86, boot_cpu_data.x86_model, boot_cpu_data.x86_stepping);
ret = request_firmware_direct(&fw, scan_path, dev);
if (ret) {
dev_err(dev, "ifs file %s load failed\n", scan_path);
goto done;
}
if (!ifs_image_sanity_check(dev, (struct microcode_header_intel *)fw->data)) {
dev_err(dev, "ifs header sanity check failed\n");
ret = -EINVAL;
goto release;
}
ifs_header_ptr = (struct ifs_header *)fw->data;
ifs_hash_ptr = (u64)(ifs_header_ptr + 1);
ret = scan_chunks_sanity_check(dev);
release:
release_firmware(fw);
done:
ifsd->loaded = (ret == 0);
}
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2022 Intel Corporation. */
#include <linux/cpu.h>
#include <linux/delay.h>
#include <linux/fs.h>
#include <linux/nmi.h>
#include <linux/slab.h>
#include <linux/stop_machine.h>
#include "ifs.h"
/*
* Note all code and data in this file is protected by
* ifs_sem. On HT systems all threads on a core will
* execute together, but only the first thread on the
* core will update results of the test.
*/
#define CREATE_TRACE_POINTS
#include <trace/events/intel_ifs.h>
/* Max retries on the same chunk */
#define MAX_IFS_RETRIES 5
/*
* Number of TSC cycles that a logical CPU will wait for the other
* logical CPU on the core in the WRMSR(ACTIVATE_SCAN).
*/
#define IFS_THREAD_WAIT 100000
enum ifs_status_err_code {
IFS_NO_ERROR = 0,
IFS_OTHER_THREAD_COULD_NOT_JOIN = 1,
IFS_INTERRUPTED_BEFORE_RENDEZVOUS = 2,
IFS_POWER_MGMT_INADEQUATE_FOR_SCAN = 3,
IFS_INVALID_CHUNK_RANGE = 4,
IFS_MISMATCH_ARGUMENTS_BETWEEN_THREADS = 5,
IFS_CORE_NOT_CAPABLE_CURRENTLY = 6,
IFS_UNASSIGNED_ERROR_CODE = 7,
IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT = 8,
IFS_INTERRUPTED_DURING_EXECUTION = 9,
};
static const char * const scan_test_status[] = {
[IFS_NO_ERROR] = "SCAN no error",
[IFS_OTHER_THREAD_COULD_NOT_JOIN] = "Other thread could not join.",
[IFS_INTERRUPTED_BEFORE_RENDEZVOUS] = "Interrupt occurred prior to SCAN coordination.",
[IFS_POWER_MGMT_INADEQUATE_FOR_SCAN] =
"Core Abort SCAN Response due to power management condition.",
[IFS_INVALID_CHUNK_RANGE] = "Non valid chunks in the range",
[IFS_MISMATCH_ARGUMENTS_BETWEEN_THREADS] = "Mismatch in arguments between threads T0/T1.",
[IFS_CORE_NOT_CAPABLE_CURRENTLY] = "Core not capable of performing SCAN currently",
[IFS_UNASSIGNED_ERROR_CODE] = "Unassigned error code 0x7",
[IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT] =
"Exceeded number of Logical Processors (LP) allowed to run Scan-At-Field concurrently",
[IFS_INTERRUPTED_DURING_EXECUTION] = "Interrupt occurred prior to SCAN start",
};
static void message_not_tested(struct device *dev, int cpu, union ifs_status status)
{
if (status.error_code < ARRAY_SIZE(scan_test_status)) {
dev_info(dev, "CPU(s) %*pbl: SCAN operation did not start. %s\n",
cpumask_pr_args(cpu_smt_mask(cpu)),
scan_test_status[status.error_code]);
} else if (status.error_code == IFS_SW_TIMEOUT) {
dev_info(dev, "CPU(s) %*pbl: software timeout during scan\n",
cpumask_pr_args(cpu_smt_mask(cpu)));
} else if (status.error_code == IFS_SW_PARTIAL_COMPLETION) {
dev_info(dev, "CPU(s) %*pbl: %s\n",
cpumask_pr_args(cpu_smt_mask(cpu)),
"Not all scan chunks were executed. Maximum forward progress retries exceeded");
} else {
dev_info(dev, "CPU(s) %*pbl: SCAN unknown status %llx\n",
cpumask_pr_args(cpu_smt_mask(cpu)), status.data);
}
}
static void message_fail(struct device *dev, int cpu, union ifs_status status)
{
/*
* control_error is set when the microcode runs into a problem
* loading the image from the reserved BIOS memory, or it has
* been corrupted. Reloading the image may fix this issue.
*/
if (status.control_error) {
dev_err(dev, "CPU(s) %*pbl: could not execute from loaded scan image\n",
cpumask_pr_args(cpu_smt_mask(cpu)));
}
/*
* signature_error is set when the output from the scan chains does not
* match the expected signature. This might be a transient problem (e.g.
* due to a bit flip from an alpha particle or neutron). If the problem
* repeats on a subsequent test, then it indicates an actual problem in
* the core being tested.
*/
if (status.signature_error) {
dev_err(dev, "CPU(s) %*pbl: test signature incorrect.\n",
cpumask_pr_args(cpu_smt_mask(cpu)));
}
}
static bool can_restart(union ifs_status status)
{
enum ifs_status_err_code err_code = status.error_code;
/* Signature for chunk is bad, or scan test failed */
if (status.signature_error || status.control_error)
return false;
switch (err_code) {
case IFS_NO_ERROR:
case IFS_OTHER_THREAD_COULD_NOT_JOIN:
case IFS_INTERRUPTED_BEFORE_RENDEZVOUS:
case IFS_POWER_MGMT_INADEQUATE_FOR_SCAN:
case IFS_EXCEED_NUMBER_OF_THREADS_CONCURRENT:
case IFS_INTERRUPTED_DURING_EXECUTION:
return true;
case IFS_INVALID_CHUNK_RANGE:
case IFS_MISMATCH_ARGUMENTS_BETWEEN_THREADS:
case IFS_CORE_NOT_CAPABLE_CURRENTLY:
case IFS_UNASSIGNED_ERROR_CODE:
break;
}
return false;
}
/*
* Execute the scan. Called "simultaneously" on all threads of a core
* at high priority using the stop_cpus mechanism.
*/
static int doscan(void *data)
{
int cpu = smp_processor_id();
u64 *msrs = data;
int first;
/* Only the first logical CPU on a core reports result */
first = cpumask_first(cpu_smt_mask(cpu));
/*
* This WRMSR will wait for other HT threads to also write
* to this MSR (at most for activate.delay cycles). Then it
* starts scan of each requested chunk. The core scan happens
* during the "execution" of the WRMSR. This instruction can
* take up to 200 milliseconds (in the case where all chunks
* are processed in a single pass) before it retires.
*/
wrmsrl(MSR_ACTIVATE_SCAN, msrs[0]);
if (cpu == first) {
/* Pass back the result of the scan */
rdmsrl(MSR_SCAN_STATUS, msrs[1]);
}
return 0;
}
/*
* Use stop_core_cpuslocked() to synchronize writing to MSR_ACTIVATE_SCAN
* on all threads of the core to be tested. Loop if necessary to complete
* run of all chunks. Include some defensive tests to make sure forward
* progress is made, and that the whole test completes in a reasonable time.
*/
static void ifs_test_core(int cpu, struct device *dev)
{
union ifs_scan activate;
union ifs_status status;
unsigned long timeout;
struct ifs_data *ifsd;
u64 msrvals[2];
int retries;
ifsd = ifs_get_data(dev);
activate.rsvd = 0;
activate.delay = IFS_THREAD_WAIT;
activate.sigmce = 0;
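/* Request a scan of all authenticated chunks: [0, valid_chunks - 1] */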
activate.start = 0;
activate.stop = ifsd->valid_chunks - 1;
timeout = jiffies + HZ / 2;
retries = MAX_IFS_RETRIES;
while (activate.start <= activate.stop) {
if (time_after(jiffies, timeout)) {
status.error_code = IFS_SW_TIMEOUT;
break;
}
msrvals[0] = activate.data;
stop_core_cpuslocked(cpu, doscan, msrvals);
status.data = msrvals[1];
trace_ifs_status(cpu, activate, status);
/* Some cases can be retried, give up for others */
if (!can_restart(status))
break;
if (status.chunk_num == activate.start) {
/* Check for forward progress */
if (--retries == 0) {
if (status.error_code == IFS_NO_ERROR)
status.error_code = IFS_SW_PARTIAL_COMPLETION;
break;
}
} else {
retries = MAX_IFS_RETRIES;
activate.start = status.chunk_num;
}
}
/* Update status for this core */
ifsd->scan_details = status.data;
if (status.control_error || status.signature_error) {
ifsd->status = SCAN_TEST_FAIL;
message_fail(dev, cpu, status);
} else if (status.error_code) {
ifsd->status = SCAN_NOT_TESTED;
message_not_tested(dev, cpu, status);
} else {
ifsd->status = SCAN_TEST_PASS;
}
}
/*
* Initiate a per-core test. It wakes up the work queue threads on the target cpu and
* its sibling cpu. Once all sibling threads wake up, the scan test is executed and
* the caller waits for all sibling threads to finish the scan test.
*/
int do_core_test(int cpu, struct device *dev)
{
int ret = 0;
/* Prevent CPUs from being taken offline during the scan test */
cpus_read_lock();
if (!cpu_online(cpu)) {
dev_info(dev, "cannot test on the offline cpu %d\n", cpu);
ret = -EINVAL;
goto out;
}
ifs_test_core(cpu, dev);
out:
cpus_read_unlock();
return ret;
}
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2022 Intel Corporation. */
#include <linux/cpu.h>
#include <linux/delay.h>
#include <linux/fs.h>
#include <linux/semaphore.h>
#include <linux/slab.h>
#include "ifs.h"
/*
* Protects against simultaneous tests on multiple cores, or
* reloading the scan file while a test is in progress
*/
DEFINE_SEMAPHORE(ifs_sem);
/*
* The sysfs interface to check additional details of the last test
* cat /sys/devices/virtual/misc/intel_ifs_0/details
*/
static ssize_t details_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct ifs_data *ifsd = ifs_get_data(dev);
return sysfs_emit(buf, "%#llx\n", ifsd->scan_details);
}
static DEVICE_ATTR_RO(details);
static const char * const status_msg[] = {
[SCAN_NOT_TESTED] = "untested",
[SCAN_TEST_PASS] = "pass",
[SCAN_TEST_FAIL] = "fail"
};
/*
* The sysfs interface to check the test status:
* To check the status of the last test
* cat /sys/devices/virtual/misc/intel_ifs_0/status
*/
static ssize_t status_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
struct ifs_data *ifsd = ifs_get_data(dev);
return sysfs_emit(buf, "%s\n", status_msg[ifsd->status]);
}
static DEVICE_ATTR_RO(status);
/*
* The sysfs interface for single core testing
* To start a test, for example on cpu5:
* echo 5 > /sys/devices/virtual/misc/intel_ifs_0/run_test
* To check the result:
* cat /sys/devices/virtual/misc/intel_ifs_0/status
* The sibling core gets tested at the same time.
*/
static ssize_t run_test_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct ifs_data *ifsd = ifs_get_data(dev);
unsigned int cpu;
int rc;
rc = kstrtouint(buf, 0, &cpu);
if (rc < 0 || cpu >= nr_cpu_ids)
return -EINVAL;
if (down_interruptible(&ifs_sem))
return -EINTR;
if (!ifsd->loaded)
rc = -EPERM;
else
rc = do_core_test(cpu, dev);
up(&ifs_sem);
return rc ? rc : count;
}
static DEVICE_ATTR_WO(run_test);
/*
* Reload the IFS image when the user wants to install a new IFS image
*/
static ssize_t reload_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
struct ifs_data *ifsd = ifs_get_data(dev);
bool res;
if (kstrtobool(buf, &res))
return -EINVAL;
if (!res)
return count;
if (down_interruptible(&ifs_sem))
return -EINTR;
ifs_load_firmware(dev);
up(&ifs_sem);
return ifsd->loaded ? count : -ENODEV;
}
static DEVICE_ATTR_WO(reload);
/*
* Display currently loaded IFS image version.
*/
static ssize_t image_version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ifs_data *ifsd = ifs_get_data(dev);
if (!ifsd->loaded)
return sysfs_emit(buf, "%s\n", "none");
else
return sysfs_emit(buf, "%#x\n", ifsd->loaded_version);
}
static DEVICE_ATTR_RO(image_version);
/* global scan sysfs attributes */
static struct attribute *plat_ifs_attrs[] = {
&dev_attr_details.attr,
&dev_attr_status.attr,
&dev_attr_run_test.attr,
&dev_attr_reload.attr,
&dev_attr_image_version.attr,
NULL
};
ATTRIBUTE_GROUPS(plat_ifs);
const struct attribute_group **ifs_get_groups(void)
{
return plat_ifs_groups;
}
...@@ -999,7 +999,7 @@ static umode_t etr3_is_visible(struct kobject *kobj, ...@@ -999,7 +999,7 @@ static umode_t etr3_is_visible(struct kobject *kobj,
struct attribute *attr, struct attribute *attr,
int idx) int idx)
{ {
struct device *dev = container_of(kobj, struct device, kobj); struct device *dev = kobj_to_dev(kobj);
struct pmc_dev *pmcdev = dev_get_drvdata(dev); struct pmc_dev *pmcdev = dev_get_drvdata(dev);
const struct pmc_reg_map *map = pmcdev->map; const struct pmc_reg_map *map = pmcdev->map;
u32 reg; u32 reg;
......
@@ -221,19 +221,6 @@ int pmc_atom_read(int offset, u32 *value)
 	*value = pmc_reg_read(pmc, offset);
 	return 0;
 }
-EXPORT_SYMBOL_GPL(pmc_atom_read);
-
-int pmc_atom_write(int offset, u32 value)
-{
-	struct pmc_dev *pmc = &pmc_device;
-
-	if (!pmc->init)
-		return -ENODEV;
-
-	pmc_reg_write(pmc, offset, value);
-	return 0;
-}
-EXPORT_SYMBOL_GPL(pmc_atom_write);
 
 static void pmc_power_off(void)
 {
...
@@ -1208,7 +1208,7 @@ static int __init samsung_backlight_init(struct samsung_laptop *samsung)
 static umode_t samsung_sysfs_is_visible(struct kobject *kobj,
 					struct attribute *attr, int idx)
 {
-	struct device *dev = container_of(kobj, struct device, kobj);
+	struct device *dev = kobj_to_dev(kobj);
 	struct samsung_laptop *samsung = dev_get_drvdata(dev);
 	bool ok = true;
...
@@ -2353,7 +2353,7 @@ static struct attribute *toshiba_attributes[] = {
 static umode_t toshiba_sysfs_is_visible(struct kobject *kobj,
 					struct attribute *attr, int idx)
 {
-	struct device *dev = container_of(kobj, struct device, kobj);
+	struct device *dev = kobj_to_dev(kobj);
 	struct toshiba_acpi_dev *drv = dev_get_drvdata(dev);
 	bool exists = true;
...
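The hunks above (and the pmc one earlier) are the same mechanical cleanup: an open-coded container_of() conversion replaced with the existing kobj_to_dev() helper from <linux/device.h>. For reference, that helper is effectively the following, so behavior is unchanged and only the intent is made explicit:
static inline struct device *kobj_to_dev(struct kobject *kobj)
{
	return container_of(kobj, struct device, kobj);
}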
// SPDX-License-Identifier: GPL-2.0
//
// Driver for the Winmate FM07 front-panel keys
//
// Author: Daniel Beer <daniel.beer@tirotech.co.nz>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/input.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>
#include <linux/dmi.h>
#include <linux/io.h>
#define DRV_NAME "winmate-fm07keys"
#define PORT_CMD 0x6c
#define PORT_DATA 0x68
#define EC_ADDR_KEYS 0x3b
#define EC_CMD_READ 0x80
#define BASE_KEY KEY_F13
#define NUM_KEYS 5
/* Typically we're done in fewer than 10 iterations */
#define LOOP_TIMEOUT 1000
static void fm07keys_poll(struct input_dev *input)
{
uint8_t k;
int i;
/* Flush output buffer */
i = 0;
while (inb(PORT_CMD) & 0x01) {
if (++i >= LOOP_TIMEOUT)
goto timeout;
inb(PORT_DATA);
}
/* Send request and wait for write completion */
outb(EC_CMD_READ, PORT_CMD);
i = 0;
while (inb(PORT_CMD) & 0x02)
if (++i >= LOOP_TIMEOUT)
goto timeout;
outb(EC_ADDR_KEYS, PORT_DATA);
i = 0;
while (inb(PORT_CMD) & 0x02)
if (++i >= LOOP_TIMEOUT)
goto timeout;
/* Wait for data ready */
i = 0;
while (!(inb(PORT_CMD) & 0x01))
if (++i >= LOOP_TIMEOUT)
goto timeout;
k = inb(PORT_DATA);
/* Notify of new key states */
for (i = 0; i < NUM_KEYS; i++) {
input_report_key(input, BASE_KEY + i, (~k) & 1);
k >>= 1;
}
input_sync(input);
return;
timeout:
dev_warn_ratelimited(&input->dev, "timeout polling IO memory\n");
}
static int fm07keys_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct input_dev *input;
int ret;
int i;
input = devm_input_allocate_device(dev);
if (!input) {
dev_err(dev, "no memory for input device\n");
return -ENOMEM;
}
if (!devm_request_region(dev, PORT_CMD, 1, "Winmate FM07 EC"))
return -EBUSY;
if (!devm_request_region(dev, PORT_DATA, 1, "Winmate FM07 EC"))
return -EBUSY;
input->name = "Winmate FM07 front-panel keys";
input->phys = DRV_NAME "/input0";
input->id.bustype = BUS_HOST;
input->id.vendor = 0x0001;
input->id.product = 0x0001;
input->id.version = 0x0100;
__set_bit(EV_KEY, input->evbit);
for (i = 0; i < NUM_KEYS; i++)
__set_bit(BASE_KEY + i, input->keybit);
ret = input_setup_polling(input, fm07keys_poll);
if (ret) {
dev_err(dev, "unable to set up polling, err=%d\n", ret);
return ret;
}
/* These are silicone buttons. They can't be pressed in rapid
 * succession, and in testing 50 Hz proved an adequate sampling
 * rate that did not miss any events.
 */
input_set_poll_interval(input, 20);
ret = input_register_device(input);
if (ret) {
dev_err(dev, "unable to register polled device, err=%d\n",
ret);
return ret;
}
input_sync(input);
return 0;
}
static struct platform_driver fm07keys_driver = {
.probe = fm07keys_probe,
.driver = {
.name = DRV_NAME
},
};
static struct platform_device *dev;
static const struct dmi_system_id fm07keys_dmi_table[] __initconst = {
{
/* FM07 and FM07P */
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Winmate Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "IP30"),
},
},
{ }
};
MODULE_DEVICE_TABLE(dmi, fm07keys_dmi_table);
static int __init fm07keys_init(void)
{
int ret;
if (!dmi_check_system(fm07keys_dmi_table))
return -ENODEV;
ret = platform_driver_register(&fm07keys_driver);
if (ret) {
pr_err("fm07keys: failed to register driver, err=%d\n", ret);
return ret;
}
dev = platform_device_register_simple(DRV_NAME, -1, NULL, 0);
if (IS_ERR(dev)) {
ret = PTR_ERR(dev);
pr_err("fm07keys: failed to allocate device, err = %d\n", ret);
goto fail_register;
}
return 0;
fail_register:
platform_driver_unregister(&fm07keys_driver);
return ret;
}
static void __exit fm07keys_exit(void)
{
platform_driver_unregister(&fm07keys_driver);
platform_device_unregister(dev);
}
module_init(fm07keys_init);
module_exit(fm07keys_exit);
MODULE_AUTHOR("Daniel Beer <daniel.beer@tirotech.co.nz>");
MODULE_DESCRIPTION("Winmate FM07 front-panel keys driver");
MODULE_LICENSE("GPL");
@@ -1308,21 +1308,20 @@ acpi_wmi_ec_space_handler(u32 function, acpi_physical_address address,
 static void acpi_wmi_notify_handler(acpi_handle handle, u32 event,
 				    void *context)
 {
-	struct wmi_block *wblock;
-	bool found_it = false;
+	struct wmi_block *wblock = NULL, *iter;
 
-	list_for_each_entry(wblock, &wmi_block_list, list) {
-		struct guid_block *block = &wblock->gblock;
+	list_for_each_entry(iter, &wmi_block_list, list) {
+		struct guid_block *block = &iter->gblock;
 
-		if (wblock->acpi_device->handle == handle &&
+		if (iter->acpi_device->handle == handle &&
 		    (block->flags & ACPI_WMI_EVENT) &&
 		    (block->notify_id == event)) {
-			found_it = true;
+			wblock = iter;
 			break;
 		}
 	}
 
-	if (!found_it)
+	if (!wblock)
 		return;
 
 	/* If a driver is bound, then notify the driver. */
...
@@ -216,6 +216,8 @@ struct mlxreg_core_platform_data {
  * @mask_low: low aggregation interrupt common mask;
  * @deferred_nr: I2C adapter number must be exist prior probing execution;
  * @shift_nr: I2C adapter numbers must be incremented by this value;
+ * @handle: handle to be passed by callback;
+ * @completion_notify: callback to notify when platform driver probing is done;
  */
 struct mlxreg_core_hotplug_platform_data {
 	struct mlxreg_core_item *items;
...
@@ -228,6 +230,8 @@ struct mlxreg_core_hotplug_platform_data {
 	u32 mask_low;
 	int deferred_nr;
 	int shift_nr;
+	void *handle;
+	int (*completion_notify)(void *handle, int id);
 };
 
 #endif /* __LINUX_PLATFORM_DATA_MLXREG_H */
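A hypothetical sketch of how a board-description file might populate the two new fields; the callback and structure names below are made up for illustration and are not taken from the SN2201 support code:
#include <linux/platform_data/mlxreg.h>
#include <linux/printk.h>

/* Hypothetical: invoked by the hotplug platform driver once its probe is done. */
static int example_completion_notify(void *handle, int id)
{
	pr_info("hotplug platform driver %d finished probing (%p)\n", id, handle);
	return 0;
}

static struct mlxreg_core_hotplug_platform_data example_hotplug_data = {
	/* ... items, aggregation masks, deferred_nr, etc. as before ... */
	.handle = &example_hotplug_data,
	.completion_notify = example_completion_notify,
};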
@@ -144,6 +144,5 @@
 #define	SLEEP_ENABLE		0x2000
 
 extern int pmc_atom_read(int offset, u32 *value);
-extern int pmc_atom_write(int offset, u32 value);
 
 #endif /* PMC_ATOM_H */
@@ -124,6 +124,22 @@ int stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus);
  */
 int stop_machine_cpuslocked(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus);
 
+/**
+ * stop_core_cpuslocked: - stop all threads on just one core
+ * @cpu: any cpu in the targeted core
+ * @fn: the function to run
+ * @data: the data ptr for @fn()
+ *
+ * Same as above, but instead of every CPU, only the logical CPUs of a
+ * single core are affected.
+ *
+ * Context: Must be called from within a cpus_read_lock() protected region.
+ *
+ * Return: 0 if all executions of @fn returned 0, any non zero return
+ * value if any returned non zero.
+ */
+int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data);
+
 int stop_machine_from_inactive_cpu(cpu_stop_fn_t fn, void *data,
 				   const struct cpumask *cpus);
 #else	/* CONFIG_SMP || CONFIG_HOTPLUG_CPU */
...
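A minimal caller sketch of the new interface, assuming nothing beyond what the kerneldoc above states (both function names below are illustrative): the call must sit inside a cpus_read_lock()/cpus_read_unlock() pair so the core's CPU mask cannot change underneath it, and the callback runs on every sibling of the chosen core.
#include <linux/cpu.h>
#include <linux/stop_machine.h>

/* Runs on each logical CPU of the target core, in stop-machine context. */
static int example_core_fn(void *data)
{
	return 0;
}

static int example_run_on_core(unsigned int cpu)
{
	int ret;

	cpus_read_lock();
	ret = stop_core_cpuslocked(cpu, example_core_fn, NULL);
	cpus_read_unlock();

	return ret;
}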
/* SPDX-License-Identifier: GPL-2.0 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM intel_ifs
#if !defined(_TRACE_IFS_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_IFS_H
#include <linux/ktime.h>
#include <linux/tracepoint.h>
TRACE_EVENT(ifs_status,
TP_PROTO(int cpu, union ifs_scan activate, union ifs_status status),
TP_ARGS(cpu, activate, status),
TP_STRUCT__entry(
__field( u64, status )
__field( int, cpu )
__field( u8, start )
__field( u8, stop )
),
TP_fast_assign(
__entry->cpu = cpu;
__entry->start = activate.start;
__entry->stop = activate.stop;
__entry->status = status.data;
),
TP_printk("cpu: %d, start: %.2x, stop: %.2x, status: %llx",
__entry->cpu,
__entry->start,
__entry->stop,
__entry->status)
);
#endif /* _TRACE_IFS_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
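TRACE_EVENT(ifs_status) expands into a trace_ifs_status() helper with the TP_PROTO signature above. Below is a hedged sketch of a call site; the header path follows the usual trace/events/<TRACE_SYSTEM>.h convention and the surrounding function is an assumption, not quoted from the scan code (union ifs_scan and union ifs_status are assumed to come from the driver's own header):
/* In exactly one .c file, instantiate the tracepoints before using them. */
#define CREATE_TRACE_POINTS
#include <trace/events/intel_ifs.h>

static void example_report_chunk(int cpu, union ifs_scan activate,
				 union ifs_status status)
{
	/* Emits the cpu, the start/stop chunk numbers and the raw status value. */
	trace_ifs_status(cpu, activate, status);
}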
@@ -633,6 +633,27 @@ int stop_machine(cpu_stop_fn_t fn, void *data, const struct cpumask *cpus)
 }
 EXPORT_SYMBOL_GPL(stop_machine);
 
+#ifdef CONFIG_SCHED_SMT
+int stop_core_cpuslocked(unsigned int cpu, cpu_stop_fn_t fn, void *data)
+{
+	const struct cpumask *smt_mask = cpu_smt_mask(cpu);
+
+	struct multi_stop_data msdata = {
+		.fn = fn,
+		.data = data,
+		.num_threads = cpumask_weight(smt_mask),
+		.active_cpus = smt_mask,
+	};
+
+	lockdep_assert_cpus_held();
+
+	/* Set the initial state and stop all online cpus. */
+	set_state(&msdata, MULTI_STOP_PREPARE);
+	return stop_cpus(smt_mask, multi_cpu_stop, &msdata);
+}
+EXPORT_SYMBOL_GPL(stop_core_cpuslocked);
+#endif
+
 /**
  * stop_machine_from_inactive_cpu - stop_machine() from inactive CPU
  * @fn: the function to run
...
@@ -190,7 +190,7 @@ static int handle_event(struct nl_msg *n, void *arg)
 	struct genlmsghdr *genlhdr = genlmsg_hdr(nlh);
 	struct nlattr *attrs[THERMAL_GENL_ATTR_MAX + 1];
 	int ret;
-	struct perf_cap perf_cap;
+	struct perf_cap perf_cap = {0};
 
 	ret = genlmsg_parse(nlh, 0, attrs, THERMAL_GENL_ATTR_MAX, NULL);
...
@@ -1892,6 +1892,12 @@ static void set_fact_for_cpu(int cpu, void *arg1, void *arg2, void *arg3,
 	int ret;
 	int status = *(int *)arg4;
 
+	if (status && no_turbo()) {
+		isst_display_error_info_message(1, "Turbo mode is disabled", 0, 0);
+		ret = -1;
+		goto disp_results;
+	}
+
 	ret = isst_get_ctdp_levels(cpu, &pkg_dev);
 	if (ret) {
 		isst_display_error_info_message(1, "Failed to get number of levels", 0, 0);
...