- 07 May, 2019 36 commits
-
-
Thomas Gleixner authored
Move L1TF to a separate directory so the MDS documentation can be added alongside it. Otherwise all hardware vulnerabilities would have their own top-level entry. Should have done that right away. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit a4117ea9cd8a01aa62d791fa3026ee7befe73614) [juergh: Adjusted content and file paths.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Thomas Gleixner authored
In virtualized environments it can happen that the host has the microcode update which utilizes the VERW instruction to clear CPU buffers, but the hypervisor is not yet updated to expose the X86_FEATURE_MD_CLEAR CPUID bit to guests. Introduce an internal mitigation mode VMWERV which enables the invocation of the CPU buffer clearing even if X86_FEATURE_MD_CLEAR is not set. If the system has no updated microcode this results in a pointless execution of the VERW instruction wasting a few CPU cycles. If the microcode is updated, but not exposed to a guest, then the CPU buffers will be cleared. That said: Virtual Machines Will Eventually Receive Vaccine Signed-off-by: Thomas Gleixner <tglx@linutronix.de> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit 8a2979fc4e819a1bb77c782def72e45d9f5a3f0d) [juergh: Adjusted context.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
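For orientation, a minimal sketch of the mitigation modes this introduces, loosely following the upstream code; the exact selector logic in the Xenial backport may differ:

    enum mds_mitigations {
            MDS_MITIGATION_OFF,
            MDS_MITIGATION_FULL,    /* updated microcode exposes MD_CLEAR */
            MDS_MITIGATION_VMWERV,  /* clear buffers even without MD_CLEAR */
    };

    static void __init mds_select_mitigation(void)
    {
            if (!boot_cpu_has_bug(X86_BUG_MDS)) {
                    mds_mitigation = MDS_MITIGATION_OFF;
                    return;
            }

            if (mds_mitigation == MDS_MITIGATION_FULL) {
                    /*
                     * Without MD_CLEAR the VERW invocation may be a no-op,
                     * but it is harmless - hence the VMWERV mode.
                     */
                    if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
                            mds_mitigation = MDS_MITIGATION_VMWERV;
                    static_branch_enable(&mds_user_clear);
            }
    }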
-
Thomas Gleixner authored
Add the sysfs reporting file for MDS. It exposes the vulnerability and mitigation state similar to the existing files for the other speculative hardware vulnerabilities. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Borislav Petkov <bp@suse.de> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit bd8651092f9656672e53feb1f8e793a0b960138d) [juergh: - Used x86_hyper instead of hypervisor_is_type() in bugs.c. - Included <asm/hypervisor.h> for x86_hyper.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Thomas Gleixner authored
Now that the mitigations are in place, add a command line parameter to control the mitigation, a mitigation selector function and an SMT update mechanism. This is the minimal, straightforward initial implementation which just provides an always on/off mode. The command line parameter is: mds=[full|off] This is consistent with the existing mitigations for other speculative hardware vulnerabilities. The idle invocation is dynamically updated according to the SMT state of the system, similarly to the dynamic update of the STIBP mitigation. The idle mitigation is limited to CPUs which are only affected by MSBDS and not any other variant, because the other variants cannot be mitigated on SMT enabled systems. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit c5ee1ae05768716933d4e5d2015b59bdf6df04c5) [juergh: - Adjusted context due to empty arch_smt_update() stub. - Adjusted file path Documentation/kernel-parameters.txt.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
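A minimal sketch of the resulting command line hook, following the upstream commit (the file and context in the Xenial backport may differ slightly):

    static int __init mds_cmdline(char *str)
    {
            if (!boot_cpu_has_bug(X86_BUG_MDS))
                    return 0;

            if (!str)
                    return -EINVAL;

            if (!strcmp(str, "off"))
                    mds_mitigation = MDS_MITIGATION_OFF;
            else if (!strcmp(str, "full"))
                    mds_mitigation = MDS_MITIGATION_FULL;

            return 0;
    }
    early_param("mds", mds_cmdline);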
-
Thomas Gleixner authored
There is no point in having two functions and a conditional at the call site. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Jiri Kosina <jkosina@suse.cz> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Casey Schaufler <casey.schaufler@intel.com> Cc: Asit Mallick <asit.k.mallick@intel.com> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Jon Masters <jcm@redhat.com> Cc: Waiman Long <longman9394@gmail.com> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Dave Stewart <david.c.stewart@intel.com> Cc: Kees Cook <keescook@chromium.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20181125185004.986890749@linutronix.de CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (cherry picked from commit 495d470e) Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Thomas Gleixner authored
Reorder the code so it is better grouped. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Jiri Kosina <jkosina@suse.cz> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Casey Schaufler <casey.schaufler@intel.com> Cc: Asit Mallick <asit.k.mallick@intel.com> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Jon Masters <jcm@redhat.com> Cc: Waiman Long <longman9394@gmail.com> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Dave Stewart <david.c.stewart@intel.com> Cc: Kees Cook <keescook@chromium.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20181125185004.707122879@linutronix.de CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit 15d6b7aa) [juergh: Adjusted context heavily due to significant differences in Xenial.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Thomas Gleixner authored
arch_smt_update() is only called when the sysfs SMT control knob is changed. This means that when SMT is enabled in the sysfs control knob the system is considered to have SMT active even if all siblings are offline. To allow fine-grained control of the speculation mitigations, the actual SMT state is more interesting than the fact that siblings could be enabled. Rework the code, so arch_smt_update() is invoked from each individual CPU hotplug function, and simplify the update function while at it. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Jiri Kosina <jkosina@suse.cz> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Casey Schaufler <casey.schaufler@intel.com> Cc: Asit Mallick <asit.k.mallick@intel.com> Cc: Arjan van de Ven <arjan@linux.intel.com> Cc: Jon Masters <jcm@redhat.com> Cc: Waiman Long <longman9394@gmail.com> Cc: Greg KH <gregkh@linuxfoundation.org> Cc: Dave Stewart <david.c.stewart@intel.com> Cc: Kees Cook <keescook@chromium.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20181125185004.521974984@linutronix.de CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit a74cfffb) [juergh: Dropped all STIBP-related modifications (STIBP doesn't exist in Xenial).] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Juerg Haefliger authored
Introduce an empty arch_smt_update() stub to be used in subsequent x86 speculation code. It's basically commit 53c613fe ("x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation") without the STIBP-specific pieces. CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
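The stub itself is tiny; roughly (a sketch, the exact placement in the Xenial tree may differ):

    /* Weak default in kernel/cpu.c; the x86 speculation code overrides it
     * later in this series. */
    void __weak arch_smt_update(void) { }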
-
Juerg Haefliger authored
Prefix the mutex with ubuntu_ to prevent name space clashes with future upstream mutexes. CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Peter Zijlstra authored
Introduce the scheduler's 'sched_smt_present' static key and provide the query function 'sched_smt_active()' to be used in subsequent x86 speculation code. Loosely based on the following upstream commits: - 321a874a ("sched/smt: Expose sched_smt_present static key") - c5511d03 ("sched/smt: Make sched_smt_present track topology") - ba2591a5 ("sched/smt: Update sched_smt_present at runtime") - 1b568f0a ("sched/core: Optimize SCHED_SMT") CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
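Roughly, the interface this provides (a sketch based on the upstream commits listed above; where the declaration lives may differ in this backport):

    /* Flipped by the CPU hotplug code when the first sibling of an online
     * CPU comes up or the last one goes down. */
    extern struct static_key_false sched_smt_present;

    static __always_inline bool sched_smt_active(void)
    {
            return static_branch_likely(&sched_smt_present);
    }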
-
Thomas Gleixner authored
Add a static key which controls the invocation of the CPU buffer clear mechanism on idle entry. This is independent of other MDS mitigations because the idle entry invocation to mitigate the potential leakage due to store buffer repartitioning is only necessary on SMT systems. Add the actual invocations to the different halt/mwait variants which cover all usage sites. mwaitx is not patched as it's not available on Intel CPUs. The buffer clear is only invoked before entering the C-State to prevent stale data from the idling CPU from spilling to the Hyper-Thread sibling after the store buffer has been repartitioned and all entries are available to the non idle sibling. When coming out of idle the store buffer is partitioned again so each sibling has half of it available. Now the CPU which returned from idle could be speculatively exposed to contents of the sibling, but the buffers are flushed either on exit to user space or on VMENTER. When conditional buffer clearing is later implemented on top of this, there is no action required either because before returning to user space the context switch will set the condition flag which causes a flush on the return to user path. Note that the buffer clearing on idle is only sensible on CPUs which are solely affected by MSBDS and not any other variant of MDS because the other MDS variants cannot be mitigated when SMT is enabled, so the buffer clearing on idle would be a window dressing exercise. This intentionally does not handle the case in the acpi/processor_idle driver which uses the legacy IO port interface for C-State transitions for two reasons: - The acpi/processor_idle driver was replaced by the intel_idle driver almost a decade ago. Anything Nehalem upwards supports it and defaults to that new driver. - The legacy IO port interface is likely to be used on older and therefore unaffected CPUs or on systems which do not receive microcode updates anymore, so there is no point in adding that. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit 131ddbba14bc156defba98235818f3fb0cd9537e) [juergh: Adjusted context.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
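As an illustration of where the idle hook sits, a simplified sketch of the mwait path (the real helper also deals with the polling flag and other details):

    static inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
    {
            /* Clear CPU buffers before entering the C-state so the sibling
             * cannot sample stale store buffer entries after repartitioning. */
            mds_idle_clear_cpu_buffers();

            __monitor((void *)&current_thread_info()->flags, 0, 0);
            if (!need_resched())
                    __mwait(eax, ecx);
    }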
-
Thomas Gleixner authored
CPUs which are affected by L1TF and MDS mitigate MDS with the L1D Flush on VMENTER when updated microcode is installed. If a CPU is not affected by L1TF or if the L1D Flush is not in use, then the MDS mitigation needs to be invoked explicitly. For these cases, follow the host mitigation state and invoke the MDS mitigation before VMENTER. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit e3f1fbbd5ffb7ca71bf13d447983c03cf4e27030) [juergh: Adjusted path arch/x86/kvm/vmx/vmx.c -> arch/x86/kvm/vmx.c.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Thomas Gleixner authored
Add a static key which controls the invocation of the CPU buffer clear mechanism on exit to user space and add the call into prepare_exit_to_usermode() and do_nmi() right before actually returning. Add documentation describing which kernel to user space transitions this covers and explain why some corner cases are not mitigated. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit b5ce1d36407a2324d2ab63c30ca1f71f24e9861e) [juergh: - Adjusted context. - Included linux/static_key.h for DEFINE_STATIC_KEY_FALSE.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
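In essence (a sketch; names follow the upstream commit):

    /* Controls whether VERW is executed on the return-to-user path. */
    DEFINE_STATIC_KEY_FALSE(mds_user_clear);

    static inline void mds_user_clear_cpu_buffers(void)
    {
            if (static_branch_likely(&mds_user_clear))
                    mds_clear_cpu_buffers();
    }

    /* Invoked as the last step of prepare_exit_to_usermode() and do_nmi(). */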
-
Tony Luck authored
We will need to provide declarations of static keys in header files. Provide DECLARE_STATIC_KEY_{TRUE,FALSE} macros. Signed-off-by: Tony Luck <tony.luck@intel.com> Acked-by: Borislav Petkov <bp@suse.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: http://lkml.kernel.org/r/816881cf85bd3cf13385d212882618f38a3b5d33.1472754711.git.tony.luck@intel.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (cherry picked from commit b8fb0378) Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
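The macros are simply the extern counterparts of the existing DEFINE_STATIC_KEY_{TRUE,FALSE} macros:

    #define DECLARE_STATIC_KEY_TRUE(name)   \
            extern struct static_key_true name

    #define DECLARE_STATIC_KEY_FALSE(name)  \
            extern struct static_key_false name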
-
Thomas Gleixner authored
The Microarchitectural Data Sampling (MDS) vulnerabilities are mitigated by clearing the affected CPU buffers. The mechanism for clearing the buffers uses the unused and obsolete VERW instruction in combination with a microcode update which triggers a CPU buffer clear when VERW is executed. Provide an inline function with the assembly magic. The argument of the VERW instruction must be a memory operand as documented: "MD_CLEAR enumerates that the memory-operand variant of VERW (for example, VERW m16) has been extended to also overwrite buffers affected by MDS. This buffer overwriting functionality is not guaranteed for the register operand variant of VERW." The documentation also recommends using a writable data segment selector: "The buffer overwriting occurs regardless of the result of the VERW permission check, as well as when the selector is null or causes a descriptor load segment violation. However, for lowest latency we recommend using a selector that indicates a valid writable data segment." Add x86 specific documentation about MDS and the internal workings of the mitigation. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Frederic Weisbecker <frederic@kernel.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit 96ea82f6839ef16812729b3820feb25f293abf63) [juergh: Dropped modification of Documentation/index.rst (doesn't exist in Xenial).] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
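The "assembly magic" boils down to a VERW with a memory operand that points at a writable kernel data segment selector; a sketch following the upstream commit:

    static inline void mds_clear_cpu_buffers(void)
    {
            static const u16 ds = __KERNEL_DS;

            /*
             * Has to be the memory-operand variant because only that one
             * guarantees the CPU buffer flush functionality according to
             * documentation. The register-operand variant is not guaranteed.
             */
            asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
    }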
-
Andi Kleen authored
X86_FEATURE_MD_CLEAR is a new CPUID bit which is set when microcode provides the mechanism to invoke a flush of various exploitable CPU buffers by invoking the VERW instruction. Hand it through to guests so they can adjust their mitigations. This also requires corresponding qemu changes, which are available separately. [ tglx: Massaged changelog ] Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit 1789c4f11b6cefc067e405233084a6b9f072f579) [juergh: Adjusted context.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Thomas Gleixner authored
This bug bit is set on CPUs which are only affected by Microarchitectural Store Buffer Data Sampling (MSBDS) and not by any other MDS variant. This is important because the Store Buffers are partitioned between Hyper-Threads so cross thread forwarding is not possible. But if a thread enters or exits a sleep state the store buffer is repartitioned which can expose data from one thread to the other. This transition can be mitigated. That means that for CPUs which are only affected by MSBDS SMT can be enabled, if the CPU is not affected by other SMT sensitive vulnerabilities, e.g. L1TF. The XEON PHI variants fall into that category. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (cherry picked from commit ab12e2e20232e9b84e2b212346b9fe3956e91642) Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Andi Kleen authored
Microarchitectural Data Sampling (MDS) is a class of side channel attacks on internal buffers in Intel CPUs. The variants are: - Microarchitectural Store Buffer Data Sampling (MSBDS) (CVE-2018-12126) - Microarchitectural Fill Buffer Data Sampling (MFBDS) (CVE-2018-12130) - Microarchitectural Load Port Data Sampling (MLPDS) (CVE-2018-12127) MSBDS leaks Store Buffer Entries which can be speculatively forwarded to a dependent load (store-to-load forwarding) as an optimization. The forward can also happen to a faulting or assisting load operation for a different memory address, which can be exploited under certain conditions. Store buffers are partitioned between Hyper-Threads so cross thread forwarding is not possible. But if a thread enters or exits a sleep state the store buffer is repartitioned which can expose data from one thread to the other. MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage L1 miss situations and to hold data which is returned or sent in response to a memory or I/O operation. Fill buffers can forward data to a load operation and also write data to the cache. When the fill buffer is deallocated it can retain the stale data of the preceding operations which can then be forwarded to a faulting or assisting load operation, which can be exploited under certain conditions. Fill buffers are shared between Hyper-Threads so cross thread leakage is possible. MLPDS leaks Load Port Data. Load ports are used to perform load operations from memory or I/O. The received data is then forwarded to the register file or a subsequent operation. In some implementations the Load Port can contain stale data from a previous operation which can be forwarded to faulting or assisting loads under certain conditions, which again can be exploited eventually. Load ports are shared between Hyper-Threads so cross thread leakage is possible. All variants have the same mitigation for the single CPU thread case (SMT off), so the kernel can treat them as one MDS issue. Add the basic infrastructure to detect if the current CPU is affected by MDS. [ tglx: Rewrote changelog ] Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit 5edd28bac163689d3ab84c7545d9d0212504be86) [juergh: - Adjusted context. - Dropped X86_VENDOR_HYGON (doesn't exist in Xenial).] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
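Detection essentially amounts to consulting the vulnerability whitelist (see the combined-whitelist commit below) and the MDS_NO bit in IA32_ARCH_CAPABILITIES; a sketch of the check in cpu_set_bug_bits(), which the backport may order slightly differently:

    /* ia32_cap holds MSR_IA32_ARCH_CAPABILITIES if the CPU enumerates it. */
    if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO))
            setup_force_cpu_bug(X86_BUG_MDS);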
-
Thomas Gleixner authored
The CPU vulnerability whitelists have some overlap and there are more whitelists coming along. Use the driver_data field in the x86_cpu_id struct to denote the whitelisted vulnerabilities and combine all whitelists into one. Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit c01bba64427c723870372e97a82ae843ca14e876) [juergh: - Adjusted context. - Dropped X86_VENDOR_HYGON (doesn't exist in Xenial).] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
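The shape of the combined table, as a sketch (the actual entries are model specific and carry the whitelist flags in driver_data):

    #define VULNWL(vendor, family, model, whitelist) \
            { X86_VENDOR_##vendor, family, model, X86_FEATURE_ANY, whitelist }

    static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
            /* e.g. old cores that do not speculate at all */
            VULNWL(ANY, 4, X86_MODEL_ANY, NO_SPECULATION),
            /* ... one entry per whitelisted model ... */
            {}
    };

    static bool __init cpu_matches(unsigned long which)
    {
            const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);

            return m && !!(m->driver_data & which);
    }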
-
Thomas Gleixner authored
Greg pointed out that speculation-related bit defines are using (1 << N) format instead of BIT(N). Aside from that, (1 << N) is wrong as it should use 1UL at least. Clean it up. [ Josh Poimboeuf: Fix tools build ] Reported-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Borislav Petkov <bp@suse.de> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit a4a931802e51223b3e0548fc31fb3f7dc8f6d032) [juergh: - Adjusted context. - Dropped modification to tools/power/x86/x86_energy_perf_policy/Makefile. - Introduced SPEC_CTRL_IBRS_SHIFT to be used in asm macros in arch/x86/include/asm/spec_ctrl.h (can't use BIT() macro in asm because of ULL).] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Arnaldo Carvalho de Melo authored
So that we reduce the difference of tools/include/linux/bitops.h to the original kernel file, include/linux/bitops.h, trying to remove the need to define BITS_PER_LONG, to avoid clashes with asm/bitsperlong.h. And the things removed from tools/include/linux/bitops.h are really in linux/bits.h, so that we can have a copy and then tools/perf/check_headers.sh will tell us when new stuff gets added to linux/bits.h so that we can check if it is useful and if any adjustment needs to be done to the tools/{include,arch}/ copies. Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Sverdlin <alexander.sverdlin@nokia.com> Cc: David Ahern <dsahern@gmail.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Wang Nan <wangnan0@huawei.com> Link: https://lkml.kernel.org/n/tip-y1sqyydvfzo0bjjoj4zsl562@git.kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit ba4aa02b) [juergh: - Adjusted context. - Dropped modification of perf/check-headers.sh (doesn't exist in Xenial).] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Will Deacon authored
In preparation for implementing the asm-generic atomic bitops in terms of atomic_long_*(), we need to prevent <asm/atomic.h> implementations from pulling in <linux/bitops.h>. A common reason for this include is for the BITS_PER_BYTE definition, so move this and some other BIT() and masking macros into a new header file, <linux/bits.h>. Signed-off-by: Will Deacon <will.deacon@arm.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-arm-kernel@lists.infradead.org Cc: yamada.masahiro@socionext.com Link: https://lore.kernel.org/lkml/1529412794-17720-4-git-send-email-will.deacon@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (cherry picked from commit 8bd9cb51) Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Matthias Kaehlcke authored
GENMASK(_ULL) performs a left-shift of ~0UL(L), which technically results in an integer overflow. clang raises a warning if the overflow occurs in a preprocessor expression. Clear the low-order bits through a subtraction instead of the left-shift to avoid the overflow. (akpm: no change in .text size in my testing) Link: http://lkml.kernel.org/r/20170803212020.24939-1-mka@chromium.org Signed-off-by: Matthias Kaehlcke <mka@chromium.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (cherry picked from commit c32ee3d9) Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
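The resulting definitions compute the low-bit mask with a subtraction instead of left-shifting ~0UL (per the upstream commit):

    #define GENMASK(h, l) \
            (((~0UL) - (1UL << (l)) + 1) & (~0UL >> (BITS_PER_LONG - 1 - (h))))

    #define GENMASK_ULL(h, l) \
            (((~0ULL) - (1ULL << (l)) + 1) & \
             (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))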
-
Eduardo Habkost authored
Months ago, we added code to allow direct access to MSR_IA32_SPEC_CTRL to the guest, which makes STIBP available to guests. This was implemented by commits d28b387f ("KVM/VMX: Allow direct access to MSR_IA32_SPEC_CTRL") and b2ac58f9 ("KVM/SVM: Allow direct access to MSR_IA32_SPEC_CTRL"). However, we never updated GET_SUPPORTED_CPUID to let userspace know that STIBP can be enabled in CPUID. Fix that by updating kvm_cpuid_8000_0008_ebx_x86_features and kvm_cpuid_7_0_edx_x86_features. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Reviewed-by: Jim Mattson <jmattson@google.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit d7b09c82) [juergh: Adjusted context.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Peter Zijlstra authored
Going primarily by: https://en.wikipedia.org/wiki/List_of_Intel_Atom_microprocessors with additional information gleaned from other related pages; notably: - Bonnell shrink was called Saltwell - Moorefield is the Merrifield refresh which makes it Airmont The general naming scheme is: FAM6_ATOM_UARCH_SOCTYPE for i in `git grep -l FAM6_ATOM` ; do sed -i -e 's/ATOM_PINEVIEW/ATOM_BONNELL/g' \ -e 's/ATOM_LINCROFT/ATOM_BONNELL_MID/' \ -e 's/ATOM_PENWELL/ATOM_SALTWELL_MID/g' \ -e 's/ATOM_CLOVERVIEW/ATOM_SALTWELL_TABLET/g' \ -e 's/ATOM_CEDARVIEW/ATOM_SALTWELL/g' \ -e 's/ATOM_SILVERMONT1/ATOM_SILVERMONT/g' \ -e 's/ATOM_SILVERMONT2/ATOM_SILVERMONT_X/g' \ -e 's/ATOM_MERRIFIELD/ATOM_SILVERMONT_MID/g' \ -e 's/ATOM_MOOREFIELD/ATOM_AIRMONT_MID/g' \ -e 's/ATOM_DENVERTON/ATOM_GOLDMONT_X/g' \ -e 's/ATOM_GEMINI_LAKE/ATOM_GOLDMONT_PLUS/g' ${i} done Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: dave.hansen@linux.intel.com Cc: len.brown@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit f2c4db1b) [juergh: - Adjusted context. - Dropped changes to non-existing files. - Fixed a few more places that only exist in Xenial 4.4. - Ran the loop above to verify that all the old macros were replaced.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Dominik Brodowski authored
Only CPUs which speculate can speculate. Therefore, it seems prudent to test for cpu_no_speculation first and only then determine whether a specific speculating CPU is susceptible to store bypass speculation. This is underlined by the fact that all CPUs currently listed in cpu_no_speculation are also present in cpu_no_spec_store_bypass. Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: bp@suse.de Cc: konrad.wilk@oracle.com Link: https://lkml.kernel.org/r/20180522090539.GA24668@light.dominikbrodowski.net CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit 8ecc4979) [juergh: Adjusted context.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Kan Liang authored
Goldmont, Goldmont Plus and Xeon Phi have MSR_SMI_COUNT as well. Signed-off-by: Kan Liang <Kan.liang@intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: ak@linux.intel.com Cc: peterz@infradead.org Cc: piotr.luc@intel.com Cc: harry.pan@intel.com Cc: srinivas.pandruvada@linux.intel.com Link: http://lkml.kernel.org/r/20170908213449.6224-2-kan.liang@intel.com CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (cherry picked from commit 1aaccc40) Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Juerg Haefliger authored
There are a few more places in Xenial 4.4 that still use open-coded magic numbers instead of the new model name macros. Fix that. CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Dave Hansen authored
This patch presumes that Kabylake and Skylake Server will be the same as the existing Skylake parts and adds them to the MSR events code. Also add handling for "WESTMERE2". Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave@sr71.net> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: jacob.jun.pan@intel.com Link: http://lkml.kernel.org/r/20160603001935.FE6B3847@viggo.jf.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (cherry picked from commit 5134596c) Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Dave Hansen authored
Use the new INTEL_MODEL_* macros for arch/x86/events/msr.c. This code appears to be missing handling for "WESTMERE2" and "SKYLAKE_X". Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave@sr71.net> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: jacob.jun.pan@intel.com Link: http://lkml.kernel.org/r/20160603001933.99A402B0@viggo.jf.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit 353bf605) [juergh: Adjusted context.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Dave Hansen authored
Use the new model number macros instead of spelling things out in the comments. Note that this is missing a Nehalem model that is mentioned in intel_idle which is fixed up in a later patch. The resulting binary (arch/x86/events/intel/core.o) is exactly the same with and without this patch modulo some harmless changes to restoring %esi in the return path of functions, even those untouched by this patch. Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave@sr71.net> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kan Liang <kan.liang@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Cc: jacob.jun.pan@intel.com Link: http://lkml.kernel.org/r/20160603001929.C5F1C079@viggo.jf.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (backported from commit ef5f9f47) [juergh: Adjusted context.] Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Andi Kleen authored
Everything the same as Skylake, just new model numbers. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: http://lkml.kernel.org/r/1461977748-17616-1-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (cherry picked from commit cba1b379) Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Andi Kleen authored
Everything the same as base Skylake, just a new model number. Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: http://lkml.kernel.org/r/1460751933-2264-1-git-send-email-andi@firstfloor.org Signed-off-by: Ingo Molnar <mingo@kernel.org> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (cherry picked from commit b89c1737) Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Salvatore Bonaccorso authored
Fix small typo (wiil -> will) in the "3.4. Nested virtual machines" section. Fixes: 5b76a3cf ("KVM: VMX: Tell the nested hypervisor to skip L1D flush on vmentry") Cc: linux-kernel@vger.kernel.org Cc: Jonathan Corbet <corbet@lwn.net> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Tony Luck <tony.luck@intel.com> Cc: linux-doc@vger.kernel.org Cc: trivial@kernel.org Signed-off-by: Salvatore Bonaccorso <carnil@debian.org> Signed-off-by: Jonathan Corbet <corbet@lwn.net> CVE-2018-12126 CVE-2018-12127 CVE-2018-12130 (cherry picked from commit 60ca05c3) Signed-off-by: Juerg Haefliger <juergh@canonical.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Acked-by: Stefan Bader <stefan.bader@canonical.com> Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Stefan Bader authored
Ignore: yes Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
Stefan Bader authored
BugLink: http://bugs.launchpad.net/bugs/1786013 Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
-
- 23 Apr, 2019 4 commits
-
-
Connor Kuehl authored
Signed-off-by: Connor Kuehl <connor.kuehl@canonical.com>
-
Connor Kuehl authored
BugLink: https://bugs.launchpad.net/bugs/1826036 Signed-off-by: Connor Kuehl <connor.kuehl@canonical.com>
-
Connor Kuehl authored
Ignore: yes Signed-off-by: Connor Kuehl <connor.kuehl@canonical.com>
-
Connor Kuehl authored
BugLink: http://bugs.launchpad.net/bugs/1786013 Signed-off-by: Connor Kuehl <connor.kuehl@canonical.com>
-