Commit 66537e88 authored by Josh Poimboeuf, committed by Kleber Sacilotto de Souza

x86/speculation: Enable Spectre v1 swapgs mitigations

The previous commit added macro calls in the entry code which mitigate the
Spectre v1 swapgs issue if the X86_FEATURE_FENCE_SWAPGS_* features are
enabled.  Enable those features where applicable.
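
For reference, the macro calls from the previous commit expand to
conditional LFENCEs patched in via alternatives. Roughly (a sketch based
on that commit; exact placement in the entry code may differ):

	.macro FENCE_SWAPGS_USER_ENTRY
		/* fence if coming from user space */
		ALTERNATIVE "", "lfence", X86_FEATURE_FENCE_SWAPGS_USER
	.endm

	.macro FENCE_SWAPGS_KERNEL_ENTRY
		/* fence if coming from kernel space */
		ALTERNATIVE "", "lfence", X86_FEATURE_FENCE_SWAPGS_KERNEL
	.endm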

The mitigations may be disabled with "nospectre_v1" or "mitigations=off".
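
When the mitigation is active, its state is reported via dmesg and
sysfs. For example (the output string comes from spectre_v1_strings[]
in the diff below; exact dmesg formatting may vary):

	$ dmesg | grep 'Spectre V1'
	Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization

	$ cat /sys/devices/system/cpu/vulnerabilities/spectre_v1
	Mitigation: usercopy/swapgs barriers and __user pointer sanitization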

There are different features which can affect the risk of attack:

- When FSGSBASE is enabled, unprivileged users are able to place any
  value in GS, using the wrgsbase instruction.  This means they can
  write a GS value which points to any value in kernel space, which can
  be useful with the following gadget in an interrupt/exception/NMI
  handler:

	if (coming from user space)
		swapgs
	mov %gs:<percpu_offset>, %reg1
	// dependent load or store based on the value of %reg1
	// for example: mov (%reg1), %reg2

  If an interrupt is coming from user space, and the entry code
  speculatively skips the swapgs (due to user branch mistraining), it
  may speculatively execute the GS-based load and a subsequent dependent
  load or store, exposing the kernel data to an L1 side channel leak.

  Note that, on Intel, a similar attack exists in the above gadget when
  coming from kernel space, if the swapgs gets speculatively executed to
  switch back to the user GS.  On AMD, this variant isn't possible
  because swapgs is serializing with respect to future GS-based
  accesses.

  NOTE: The FSGSBASE patch set hasn't been merged yet, so the above case
	doesn't exist quite yet.
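
  For illustration only, a hypothetical user-space sketch of planting a
  GS value once FSGSBASE is enabled, assuming a compiler with the
  FSGSBASE intrinsics (build with -mfsgsbase; the helper name and the
  address are made up):

	#include <immintrin.h>	/* _writegsbase_u64() */

	/* No syscall and no privilege check: this executes wrgsbase
	 * directly, so any value -- including a kernel address -- can
	 * be planted in the GS base. */
	static void plant_gs_base(unsigned long kernel_addr)
	{
		_writegsbase_u64(kernel_addr);
	}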

- When FSGSBASE is disabled, the issue is mitigated somewhat because
  unprivileged users must use prctl(ARCH_SET_GS) to set GS, which
  restricts GS values to user space addresses only (see the sketch
  after this list).  That means the gadget would need an additional
  step, since the target kernel address needs to be read from user
  space first.  Something like:

	if (coming from user space)
		swapgs
	mov %gs:<percpu_offset>, %reg1
	mov (%reg1), %reg2
	// dependent load or store based on the value of %reg2
	// for example: mov (%reg2), %reg3

  It's difficult to audit for this gadget in all the handlers, so while
  there are no known instances of it, it's entirely possible that it
  exists somewhere (or could be introduced in the future).  Without
  tooling to analyze all such code paths, consider it vulnerable.

  Effects of SMAP on the !FSGSBASE case:

  - If SMAP is enabled, and the CPU reports RDCL_NO (i.e., not
    susceptible to Meltdown), the kernel is prevented from speculatively
    reading user space memory, even L1 cached values.  This effectively
    disables the !FSGSBASE attack vector.

  - If SMAP is enabled, but the CPU *is* susceptible to Meltdown, SMAP
    still prevents the kernel from speculatively reading user space
    memory.  But it does *not* prevent the kernel from reading the
    user value from L1, if it has already been cached.  This is probably
    only a small hurdle for an attacker to overcome.
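
For contrast with the FSGSBASE case, here is a minimal user-space
sketch of the only GS-setting path available when FSGSBASE is disabled
(hypothetical helper name; the kernel's arch_prctl() handler rejects
kernel addresses, so only user space pointers can be planted):

	#include <asm/prctl.h>		/* ARCH_SET_GS */
	#include <sys/syscall.h>
	#include <unistd.h>

	/* Routed through the kernel, which validates the address. */
	static long set_gs_base(unsigned long user_addr)
	{
		return syscall(SYS_arch_prctl, ARCH_SET_GS, user_addr);
	}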

Thanks to Dave Hansen for contributing the speculative_smap() function.

Thanks to Andrew Cooper for providing the inside scoop on whether swapgs
is serializing on AMD.

[ tglx: Fixed the USER fence decision and polished the comment as suggested
  	by Dave Hansen ]
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>

CVE-2019-1125

(backported from commit a2059825)
[tyhicks: Backport to Xenial:
 - Adjust the file path to, and context in, kernel-parameters.txt
 - Rename X86_FEATURE_PTI to X86_FEATURE_KAISER]
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
parent 4b0c9c8a
Documentation/kernel-parameters.txt
@@ -2271,7 +2271,7 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			expose users to several CPU vulnerabilities.
 			Equivalent to: nopti [X86,PPC]
 				       nobp=0 [S390]
-				       nospectre_v1 [PPC]
+				       nospectre_v1 [X86,PPC]
 				       nospectre_v2 [X86,PPC,S390]
 				       spectre_v2_user=off [X86]
 				       spec_store_bypass_disable=off [X86,PPC]
@@ -2608,9 +2608,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			nosmt=force: Force disable SMT, cannot be undone
 				     via the sysfs control file.
 
-	nospectre_v1	[PPC] Disable mitigations for Spectre Variant 1 (bounds
-			check bypass). With this option data leaks are possible
-			in the system.
+	nospectre_v1	[X86,PPC] Disable mitigations for Spectre Variant 1
+			(bounds check bypass). With this option data leaks are
+			possible in the system.
 
 	nospectre_v2	[X86,PPC_FSL_BOOK3E] Disable all mitigations for the Spectre variant 2
 			(indirect branch prediction) vulnerability. System may
arch/x86/kernel/cpu/bugs.c
@@ -52,6 +52,7 @@ static int __init noibrs_handler(char *str)
 early_param("noibrs", noibrs_handler);
 
+static void __init spectre_v1_select_mitigation(void);
 static void __init spectre_v2_select_mitigation(void);
 static void __init ssb_select_mitigation(void);
 static void __init l1tf_select_mitigation(void);
@@ -116,17 +117,11 @@ void __init check_bugs(void)
 	if (boot_cpu_has(X86_FEATURE_STIBP))
 		x86_spec_ctrl_mask |= SPEC_CTRL_STIBP;
 
-	/* Select the proper spectre mitigation before patching alternatives */
+	/* Select the proper CPU mitigations before patching alternatives: */
+	spectre_v1_select_mitigation();
 	spectre_v2_select_mitigation();
-
-	/*
-	 * Select proper mitigation for any exposure to the Speculative Store
-	 * Bypass vulnerability.
-	 */
 	ssb_select_mitigation();
-
 	l1tf_select_mitigation();
-
 	mds_select_mitigation();
 
 	arch_smt_update();
@@ -295,6 +290,108 @@ static int __init mds_cmdline(char *str)
 }
 early_param("mds", mds_cmdline);
 
+#undef pr_fmt
+#define pr_fmt(fmt)	"Spectre V1 : " fmt
+
+enum spectre_v1_mitigation {
+	SPECTRE_V1_MITIGATION_NONE,
+	SPECTRE_V1_MITIGATION_AUTO,
+};
+
+static enum spectre_v1_mitigation spectre_v1_mitigation __ro_after_init =
+	SPECTRE_V1_MITIGATION_AUTO;
+
+static const char * const spectre_v1_strings[] = {
+	[SPECTRE_V1_MITIGATION_NONE] = "Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers",
+	[SPECTRE_V1_MITIGATION_AUTO] = "Mitigation: usercopy/swapgs barriers and __user pointer sanitization",
+};
+
+static bool is_swapgs_serializing(void)
+{
+	/*
+	 * Technically, swapgs isn't serializing on AMD (despite it previously
+	 * being documented as such in the APM).  But according to AMD, %gs is
+	 * updated non-speculatively, and the issuing of %gs-relative memory
+	 * operands will be blocked until the %gs update completes, which is
+	 * good enough for our purposes.
+	 */
+	return boot_cpu_data.x86_vendor == X86_VENDOR_AMD;
+}
+
+/*
+ * Does SMAP provide full mitigation against speculative kernel access to
+ * userspace?
+ */
+static bool smap_works_speculatively(void)
+{
+	if (!boot_cpu_has(X86_FEATURE_SMAP))
+		return false;
+
+	/*
+	 * On CPUs which are vulnerable to Meltdown, SMAP does not
+	 * prevent speculative access to user data in the L1 cache.
+	 * Consider SMAP to be non-functional as a mitigation on these
+	 * CPUs.
+	 */
+	if (boot_cpu_has(X86_BUG_CPU_MELTDOWN))
+		return false;
+
+	return true;
+}
+
+static void __init spectre_v1_select_mitigation(void)
+{
+	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1) || cpu_mitigations_off()) {
+		spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
+		return;
+	}
+
+	if (spectre_v1_mitigation == SPECTRE_V1_MITIGATION_AUTO) {
+		/*
+		 * With Spectre v1, a user can speculatively control either
+		 * path of a conditional swapgs with a user-controlled GS
+		 * value.  The mitigation is to add lfences to both code paths.
+		 *
+		 * If FSGSBASE is enabled, the user can put a kernel address in
+		 * GS, in which case SMAP provides no protection.
+		 *
+		 * [ NOTE: Don't check for X86_FEATURE_FSGSBASE until the
+		 *	   FSGSBASE enablement patches have been merged. ]
+		 *
+		 * If FSGSBASE is disabled, the user can only put a user space
+		 * address in GS.  That makes an attack harder, but still
+		 * possible if there's no SMAP protection.
+		 */
+		if (!smap_works_speculatively()) {
+			/*
+			 * Mitigation can be provided from SWAPGS itself or
+			 * PTI as the CR3 write in the Meltdown mitigation
+			 * is serializing.
+			 *
+			 * If neither is there, mitigate with an LFENCE.
+			 */
+			if (!is_swapgs_serializing() && !boot_cpu_has(X86_FEATURE_KAISER))
+				setup_force_cpu_cap(X86_FEATURE_FENCE_SWAPGS_USER);
+
+			/*
+			 * Enable lfences in the kernel entry (non-swapgs)
+			 * paths, to prevent user entry from speculatively
+			 * skipping swapgs.
+			 */
+			setup_force_cpu_cap(X86_FEATURE_FENCE_SWAPGS_KERNEL);
+		}
+	}
+
+	pr_info("%s\n", spectre_v1_strings[spectre_v1_mitigation]);
+}
+
+static int __init nospectre_v1_cmdline(char *str)
+{
+	spectre_v1_mitigation = SPECTRE_V1_MITIGATION_NONE;
+	return 0;
+}
+early_param("nospectre_v1", nospectre_v1_cmdline);
+
 #undef pr_fmt
 #define pr_fmt(fmt)	"Spectre V2 : " fmt
 
@@ -1319,7 +1416,7 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
 		break;
 
 	case X86_BUG_SPECTRE_V1:
-		return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+		return sprintf(buf, "%s\n", spectre_v1_strings[spectre_v1_mitigation]);
 
 	case X86_BUG_SPECTRE_V2:
 		return sprintf(buf, "%s%s%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],