- 25 Nov, 2019 7 commits
-
-
Lucas Stach authored
commit 549766ac upstream. The driver for F54 just polls the status and doesn't even have an IRQ handler registered. Make sure to disable all F54 IRQs, so we don't crash the kernel on a nonexistent handler. Signed-off-by:
Lucas Stach <l.stach@pengutronix.de> Link: https://lore.kernel.org/r/20191105114402.6009-1-l.stach@pengutronix.de Cc: stable@vger.kernel.org Signed-off-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Lucas Stach authored
commit 003f01c7 upstream. The video buffer used by the queue is a vb2_v4l2_buffer, not a plain vb2_buffer. Using the wrong type makes the allocation of the buffer storage too small, causing an out-of-bounds write when __init_vb2_v4l2_buffer initializes the buffer. Signed-off-by:
Lucas Stach <l.stach@pengutronix.de> Fixes: 3a762dbd ("[media] Input: synaptics-rmi4 - add support for F54 diagnostics") Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20191104114454.10500-1-l.stach@pengutronix.de Signed-off-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
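For readers less familiar with videobuf2, the core of the fix is how the per-buffer allocation is sized; a minimal sketch of the idea (queue setup context assumed, not the literal patch):

    /* vb2 allocates buf_struct_size bytes per buffer and later has
     * __init_vb2_v4l2_buffer() initialize a struct vb2_v4l2_buffer inside
     * that storage, so the size must cover the v4l2 wrapper rather than
     * only the embedded struct vb2_buffer. */
    q->buf_struct_size = sizeof(struct vb2_v4l2_buffer);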
-
Oliver Neukum authored
commit fa3a5a18 upstream. No timer must be left running when the device goes away. Signed-off-by:
Oliver Neukum <oneukum@suse.com> Reported-and-tested-by: syzbot+b6c55daa701fc389e286@syzkaller.appspotmail.com Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/1573726121.17351.3.camel@suse.com Signed-off-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
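To illustrate the rule above: a destroy or disconnect path has to stop its timer synchronously before freeing the object the timer handler uses; a minimal sketch with assumed structure and field names:

    static void mydev_destroy(struct mydev *dev)
    {
            /* Wait for a handler that may be running right now and make
             * sure the timer cannot fire again after this point. */
            del_timer_sync(&dev->timer);
            kfree(dev);
    }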
-
Henry Lin authored
commit 52869931 upstream. While an output urb's snd_complete_urb() is executing, calling prepare_outbound_urb() may let the endpoint be stopped before prepare_outbound_urb() returns, resulting in the next urb being submitted to a stopped endpoint. The usb-audio driver cannot re-use that urb afterwards, as it is still held by the USB stack. This change checks the EP_FLAG_RUNNING flag again after prepare_outbound_urb() so that snd_complete_urb() knows the endpoint has already stopped and does not submit the next urb. Errors like the following are fixed:
[ 213.153103] usb 1-2: timeout: still 1 active urbs on EP #1
[ 213.164121] usb 1-2: cannot submit urb 0, error -16: unknown error
Signed-off-by:
Henry Lin <henryl@nvidia.com> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20191113021420.13377-1-henryl@nvidia.com Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
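The shape of the fix, as described above, is a re-check between preparing and submitting the urb. A hedged sketch (only prepare_outbound_urb() and EP_FLAG_RUNNING come from the commit text; the surrounding names are assumed):

    prepare_outbound_urb(ep, ctx);
    /* The endpoint may have been stopped while the urb was being
     * prepared; submitting now would hand the urb to a stopped endpoint
     * and the USB stack would keep holding it. */
    if (!test_bit(EP_FLAG_RUNNING, &ep->flags))
            return;
    err = usb_submit_urb(ctx->urb, GFP_ATOMIC);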
-
Takashi Iwai authored
commit 167beb17 upstream. A check of the return value from get_cur_mix_raw() is missing in the resolution test code in get_min_max_with_quirks(), which may leave the variable untouched, leading to a random uninitialized value, as detected by the syzkaller fuzzer. Add the missing error check to fix that. Reported-and-tested-by: syzbot+abe1ab7afc62c6bb6377@syzkaller.appspotmail.com Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20191109181658.30368-1-tiwai@suse.de Signed-off-by:
Takashi Iwai <tiwai@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
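A sketch of the kind of check that was missing (variable names are illustrative; only get_cur_mix_raw() is taken from the commit text):

    err = get_cur_mix_raw(cval, minchn, 0, &last_valid);
    if (err < 0)
            return err;     /* don't use last_valid uninitialized */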
-
Jouni Hogander authored
[ Upstream commit 3b5a3997 ] drivers/net/can/slcan.c is derived from slip.c. A memory leak was detected by syzkaller in slcan; the same issue exists in slip.c, and this patch addresses the leak in slip.c. Here is the slcan memory leak trace reported by syzkaller:
BUG: memory leak
unreferenced object 0xffff888067f65500 (size 4096):
  comm "syz-executor043", pid 454, jiffies 4294759719 (age 11.930s)
  hex dump (first 32 bytes):
    73 6c 63 61 6e 30 00 00 00 00 00 00 00 00 00 00  slcan0..........
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
    [<00000000a06eec0d>] __kmalloc+0x18b/0x2c0
    [<0000000083306e66>] kvmalloc_node+0x3a/0xc0
    [<000000006ac27f87>] alloc_netdev_mqs+0x17a/0x1080
    [<0000000061a996c9>] slcan_open+0x3ae/0x9a0
    [<000000001226f0f9>] tty_ldisc_open.isra.1+0x76/0xc0
    [<0000000019289631>] tty_set_ldisc+0x28c/0x5f0
    [<000000004de5a617>] tty_ioctl+0x48d/0x1590
    [<00000000daef496f>] do_vfs_ioctl+0x1c7/0x1510
    [<0000000059068dbc>] ksys_ioctl+0x99/0xb0
    [<000000009a6eb334>] __x64_sys_ioctl+0x78/0xb0
    [<0000000053d0332e>] do_syscall_64+0x16f/0x580
    [<0000000021b83b99>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
    [<000000008ea75434>] 0xfffffffffffffff
Cc: "David S. Miller" <davem@davemloft.net> Cc: Oliver Hartkopp <socketcan@hartkopp.net> Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com> Signed-off-by:
Jouni Hogander <jouni.hogander@unikie.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
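As a generic sketch of the pattern such a fix follows (not the literal patch): a net_device that was allocated but never successfully registered has to be released with free_netdev() on the error path, otherwise the alloc_netdev_mqs() allocation seen in the trace above is leaked.

    dev = alloc_netdev(sizeof(*priv), "sl%d", NET_NAME_UNKNOWN, setup);
    if (!dev)
            return -ENOMEM;
    err = do_remaining_setup(dev);          /* assumed helper */
    if (err) {
            free_netdev(dev);               /* plug the leak on failure */
            return err;
    }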
-
Oliver Neukum authored
[ Upstream commit a9a51bd7 ] If a malicious device gives a short MAC, it can elicit up to 5 bytes of leaked memory out of the driver. We need to check for ETH_ALEN instead. Reported-by: syzbot+a8d4acdad35e6bbca308@syzkaller.appspotmail.com Signed-off-by:
Oliver Neukum <oneukum@suse.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
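A minimal sketch of the validation described above (the read helper and buffer are assumed; only ETH_ALEN comes from the text):

    ret = read_mac_from_device(dev, buf, ETH_ALEN);   /* assumed helper */
    if (ret < ETH_ALEN)
            return -EIO;    /* short reply: don't copy a partial MAC */
    memcpy(netdev->dev_addr, buf, ETH_ALEN);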
-
- 16 Nov, 2019 32 commits
-
-
Greg Kroah-Hartman authored
-
Gomez Iglesias, Antonio authored
commit 7f00cc8d upstream. Add the initial ITLB_MULTIHIT documentation. [ tglx: Add it to the index so it gets actually built. ] Signed-off-by:
Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com> Signed-off-by:
Nelson D'Souza <nelson.dsouza@linux.intel.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> [bwh: Backported to 4.9: adjust filenames] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Junaid Shahid authored
commit 1aa9b957 upstream. The page table pages corresponding to broken-down large pages are zapped in FIFO order, so that the large page can potentially be recovered if it is no longer being used for execution. This removes the performance penalty for walking deeper EPT page tables. By default, one large page will last about one hour once the guest reaches a steady state. Signed-off-by:
Junaid Shahid <junaids@google.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> [bwh: Backported to 4.9: - Update another error path in kvm_create_vm() to use out_err_no_mmu_notifier - Adjust filename, context] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Junaid Shahid authored
commit c57c8046 upstream. Add a function to create a kernel thread associated with a given VM. In particular, it ensures that the worker thread inherits the priority and cgroups of the calling thread. Signed-off-by:
Junaid Shahid <junaids@google.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> [bwh: Backported to 4.9: adjust context] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Paolo Bonzini authored
commit b8e8c830 upstream. With some Intel processors, putting the same virtual address in the TLB as both a 4 KiB and 2 MiB page can confuse the instruction fetch unit and cause the processor to issue a machine check resulting in a CPU lockup. Unfortunately when EPT page tables use huge pages, it is possible for a malicious guest to cause this situation. Add a knob to mark huge pages as non-executable. When the nx_huge_pages parameter is enabled (and we are using EPT), all huge pages are marked as NX. If the guest attempts to execute in one of those pages, the page is broken down into 4K pages, which are then marked executable. This is not an issue for shadow paging (except nested EPT), because then the host is in control of TLB flushes and the problematic situation cannot happen. With nested EPT, the nested guest can again cause problems, so shadow and direct EPT are treated in the same way. [ tglx: Fixup default to auto and massage wording a bit ] Originally-by:
Junaid Shahid <junaids@google.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> [bwh: Backported to 4.9: - Use kvm_mmu_invalidate_zap_all_pages() instead of kvm_mmu_zap_all_fast() - Don't provide mode for nx_largepages_splitted as all stats are read-only - Adjust filename, context] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
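For context, nx_huge_pages is exposed as a writable kvm module parameter, so an administrator who knows the host is unaffected can opt out, e.g. by booting with kvm.nx_huge_pages=off (the parameter name is taken from the commit text above; the exact set of accepted values is best checked against the matching admin-guide documentation for this kernel).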
-
Tyler Hicks authored
commit 731dc9df upstream. A kernel module may need to check the value of the "mitigations=" kernel command line parameter as part of its setup when the module needs to perform software mitigations for a CPU flaw. Uninline and export the helper functions surrounding the cpu_mitigations enum to allow for their usage from a module. Lastly, privatize the enum and cpu_mitigations variable since the value of cpu_mitigations can be checked with the exported helper functions. Signed-off-by:
Tyler Hicks <tyhicks@canonical.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
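A sketch of what the exported helpers make possible for a module (cpu_mitigations_off() matches the upstream helper name as far as I know; the module itself is hypothetical):

    #include <linux/cpu.h>
    #include <linux/module.h>

    static int __init mymod_init(void)
    {
            /* Honour the global "mitigations=" policy before doing any
             * module-private software mitigation work. */
            if (cpu_mitigations_off())
                    return 0;

            /* ... otherwise set up the mitigation ... */
            return 0;
    }
    module_init(mymod_init);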
-
Vineela Tummalapalli authored
commit db4d30fb upstream. Some processors may incur a machine check error possibly resulting in an unrecoverable CPU lockup when an instruction fetch encounters a TLB multi-hit in the instruction TLB. This can occur when the page size is changed along with either the physical address or cache type. The relevant erratum can be found here: https://bugzilla.kernel.org/show_bug.cgi?id=205195 There are other processors affected for which the erratum does not fully disclose the impact. This issue affects both bare-metal x86 page tables and EPT. It can be mitigated by either eliminating the use of large pages or by using careful TLB invalidations when changing the page size in the page tables. Just like Spectre, Meltdown, L1TF and MDS, a new bit has been allocated in MSR_IA32_ARCH_CAPABILITIES (PSCHANGE_MC_NO) and will be set on CPUs which are mitigated against this issue. Signed-off-by:
Vineela Tummalapalli <vineela.tummalapalli@intel.com> Co-developed-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> [bwh: Backported to 4.9: - No support for X86_VENDOR_HYGON, ATOM_AIRMONT_NP - Adjust context, indentation] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
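For reference, the new enumeration bit sits alongside the existing IA32_ARCH_CAPABILITIES defines; a sketch of the define (bit position given to the best of my knowledge, treat it as an assumption):

    #define ARCH_CAP_PSCHANGE_MC_NO   BIT(6)  /* Machine check on page size change is not possible */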
-
Paolo Bonzini authored
commit 9167ab79 upstream. VMX already does so if the host has SMEP, in order to support the combination of CR0.WP=1 and CR4.SMEP=1. However, it is perfectly safe to always do so, and in fact VMX already ends up running with EFER.NXE=1 on old processors that lack the "load EFER" controls, because it may help avoid a slow MSR write. Removing all the conditionals simplifies the code. SVM does not have similar code, but it should, since recent AMD processors do support SMEP. So this patch also makes the code for the two vendors more similar while fixing NPT=0, CR0.WP=1 and CR4.SMEP=1 on AMD processors. Cc: Joerg Roedel <jroedel@suse.de> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> [bwh: Backported to 4.9: adjust filename] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Paolo Bonzini authored
commit 335e192a upstream. These are useful in debugging shadow paging. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> [bwh: Backported to 4.9: - Keep using PT_PRESENT_MASK to test page write permission - Adjust context] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ben Hutchings authored
Extracted from commit d3e328f2 "kvm: x86: mmu: Verify that restored PTE has needed perms in fast page fault". Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Paolo Bonzini authored
commit e9f2a760 upstream. Note that in such a case it is quite likely that KVM will BUG_ON in __pte_list_remove when the VM is closed. However, there is no immediate risk of memory corruption in the host so a WARN_ON is enough and it lets you gather traces for debugging. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Paolo Bonzini authored
commit d679b326 upstream. After the previous patch, the low bits of the gfn are masked in both FNAME(fetch) and __direct_map, so we do not need to clear them in transparent_hugepage_adjust. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Paolo Bonzini authored
commit 3fcf2d1b upstream. These two functions are basically doing the same thing through kvm_mmu_get_page, link_shadow_page and mmu_set_spte; yet, for historical reasons, their code looks very different. This patch tries to take the best of each and make them very similar, so that it is easy to understand changes that apply to both of them. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> [bwh: Backported to 4.9: adjust context] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Junaid Shahid authored
commit 43fdcda9 upstream. Release the page at the call-site where it was originally acquired. This makes the exit code cleaner for most call sites, since they do not need to duplicate code between success and the failure label. Signed-off-by:
Junaid Shahid <junaids@google.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Junaid Shahid authored
commit 0d9ce162 upstream. It doesn't seem as if there is any particular need for kvm_lock to be a spinlock, so convert the lock to a mutex so that sleepable functions (in particular cond_resched()) can be called while holding it. Signed-off-by:
Junaid Shahid <junaids@google.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> [bwh: Backported to 4.9: - Drop changes in kvm_hyperv_tsc_notifier(), vm_stat_clear(), vcpu_stat_clear(), kvm_uevent_notify_change() - Adjust context] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
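The conversion itself is mechanical; a sketch of what it amounts to (illustrative, not the literal patch):

    /* was: static DEFINE_SPINLOCK(kvm_lock); */
    static DEFINE_MUTEX(kvm_lock);

    mutex_lock(&kvm_lock);
    /* sleeping, e.g. via cond_resched(), is now allowed in this section */
    mutex_unlock(&kvm_lock);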
-
Paolo Bonzini authored
commit 9b8ebbdb upstream. The x86 MMU is full of code that returns 0 and 1 for retry/emulate. Use the existing RET_MMIO_PF_RETRY/RET_MMIO_PF_EMULATE enum, renaming it to drop the MMIO part. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> [bwh: Backported to 4.9: adjust context] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Paolo Bonzini authored
commit e08d26f0 upstream. Calling handle_mmio_page_fault() has been unnecessary since commit e9ee956e ("KVM: x86: MMU: Move handle_mmio_page_fault() call to kvm_mmu_page_fault()", 2016-02-22). handle_mmio_page_fault() can now be made static. Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Reviewed-by:
David Hildenbrand <david@redhat.com> Signed-off-by:
Radim Krčmář <rkrcmar@redhat.com> [bwh: Backported to 4.9: adjust context] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Josh Poimboeuf authored
commit 012206a8 upstream. For new IBRS_ALL CPUs, the Enhanced IBRS check at the beginning of cpu_bugs_smt_update() causes the function to return early, unintentionally skipping the MDS and TAA logic. This is not a problem for MDS, because there appears to be no overlap between IBRS_ALL and MDS-affected CPUs. So the MDS mitigation would be disabled and nothing would need to be done in this function anyway. But for TAA, the TAA_MSG_SMT string will never get printed on Cascade Lake and newer. The check is superfluous anyway: when 'spectre_v2_enabled' is SPECTRE_V2_IBRS_ENHANCED, 'spectre_v2_user' is always SPECTRE_V2_USER_NONE, and so the 'spectre_v2_user' switch statement handles it appropriately by doing nothing. So just remove the check. Fixes: 1b42f017 ("x86/speculation/taa: Add mitigation for TSX Async Abort") Signed-off-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Tyler Hicks <tyhicks@canonical.com> Reviewed-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Michal Hocko authored
commit db616173 upstream. There is a general consensus that TSX usage is not widely spread, while history shows there is non-trivial room for possible side channel attacks. Therefore TSX is disabled by default even on platforms that might have a safe implementation of TSX according to current knowledge. This is a fair trade-off to make. There are, however, workloads that really do benefit from using TSX, and updating to a newer kernel with TSX disabled might introduce noticeable regressions. This would be a problem especially for Linux distributions which will provide TAA mitigations. Introduce config options X86_INTEL_TSX_MODE_OFF, X86_INTEL_TSX_MODE_ON and X86_INTEL_TSX_MODE_AUTO to control the TSX feature. The config setting can be overridden by the tsx cmdline options. [ bp: Text cleanups from Josh. ] Suggested-by:
Borislav Petkov <bpetkov@suse.de> Signed-off-by:
Michal Hocko <mhocko@suse.com> Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> [bwh: Backported to 4.9: adjust doc filename] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pawan Gupta authored
commit a7a248c5 upstream. Add the documentation for TSX Async Abort. Include a description of the issue, how to check the mitigation state, how to control the mitigation, and guidance for system administrators. [ bp: Add proper SPDX tags, touch ups by Josh and me. ] Co-developed-by:
Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com> Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Antonio Gomez Iglesias <antonio.gomez.iglesias@intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Mark Gross <mgross@linux.intel.com> Reviewed-by:
Tony Luck <tony.luck@intel.com> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> [bwh: Backported to 4.9: adjust filenames, context] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pawan Gupta authored
commit 7531a359 upstream. Platforms which are not affected by X86_BUG_TAA may want the TSX feature enabled. Add an "auto" option to the TSX cmdline parameter. With tsx=auto, TSX is disabled when X86_BUG_TAA is present and enabled otherwise. More details on X86_BUG_TAA can be found here: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html [ bp: Extend the arg buffer to accommodate "auto\0". ] Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Tony Luck <tony.luck@intel.com> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> [bwh: Backported to 4.9: adjust filename] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pawan Gupta authored
commit e1d38b63 upstream. Export the IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0 to guests on TSX Async Abort (TAA) affected hosts that have TSX enabled and updated microcode. This is required so that the guests don't complain "Vulnerable: Clear CPU buffers attempted, no microcode" when the host has the updated microcode to clear CPU buffers. The microcode update also adds support for MSR_IA32_TSX_CTRL, which is enumerated by the ARCH_CAP_TSX_CTRL bit in the IA32_ARCH_CAPABILITIES MSR. Guests can't do this check themselves when the ARCH_CAP_TSX_CTRL bit is not exported to them. In this case, export MDS_NO=0 to the guests. When guests have CPUID.MD_CLEAR=1, they deploy the MDS mitigation, which also mitigates TAA. Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Tested-by:
Neelima Krishnan <neelima.krishnan@intel.com> Reviewed-by:
Tony Luck <tony.luck@intel.com> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pawan Gupta authored
commit 6608b45a upstream. Add the sysfs reporting file for TSX Async Abort. It exposes the vulnerability and the mitigation state similar to the existing files for the other hardware vulnerabilities. Sysfs file path is: /sys/devices/system/cpu/vulnerabilities/tsx_async_abort Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Tested-by:
Neelima Krishnan <neelima.krishnan@intel.com> Reviewed-by:
Mark Gross <mgross@linux.intel.com> Reviewed-by:
Tony Luck <tony.luck@intel.com> Reviewed-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
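As a usage note, the new state file can simply be read, e.g. with cat /sys/devices/system/cpu/vulnerabilities/tsx_async_abort; the exact strings it reports depend on the selected mitigation and are described in the tsx_async_abort admin-guide documentation added elsewhere in this series.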
-
Pawan Gupta authored
commit 1b42f017 upstream. TSX Async Abort (TAA) is a side channel vulnerability to the internal buffers in some Intel processors, similar to Microarchitectural Data Sampling (MDS). In this case, certain loads may speculatively pass invalid data to dependent operations when an asynchronous abort condition is pending in a TSX transaction. This includes loads with no fault or assist condition. Such loads may speculatively expose stale data from the uarch data structures as in MDS. The scope of exposure covers both same-thread and cross-thread. This issue affects all current processors that support TSX but do not have ARCH_CAP_TAA_NO (bit 8) set in MSR_IA32_ARCH_CAPABILITIES. On CPUs which have their IA32_ARCH_CAPABILITIES MSR bit MDS_NO=0, CPUID.MD_CLEAR=1 and the MDS mitigation is clearing the CPU buffers using VERW or L1D_FLUSH, there is no additional mitigation needed for TAA. On affected CPUs with MDS_NO=1 this issue can be mitigated by disabling the Transactional Synchronization Extensions (TSX) feature. A new MSR, IA32_TSX_CTRL, available in current processors after a microcode update and in future processors, can be used to control the TSX feature. There are two bits in that MSR:
* TSX_CTRL_RTM_DISABLE disables the TSX sub-feature Restricted Transactional Memory (RTM).
* TSX_CTRL_CPUID_CLEAR clears the RTM enumeration in CPUID.
The other TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally disabled with updated microcode but still enumerated as present by CPUID(EAX=7).EBX{bit4}. The second mitigation approach is similar to MDS, which is clearing the affected CPU buffers on return to user space and when entering a guest. The relevant microcode update is required for the mitigation to work. More details on this approach can be found here: https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html The TSX feature can be controlled by the "tsx" command line parameter. If it is force-enabled then "Clear CPU buffers" (MDS mitigation) is deployed. The effective mitigation state can be read from sysfs. [ bp: - massage + comments cleanup - s/TAA_MITIGATION_TSX_DISABLE/TAA_MITIGATION_TSX_DISABLED/g - Josh. - remove partial TAA mitigation in update_mds_branch_idle() - Josh. - s/tsx_async_abort_cmdline/tsx_async_abort_parse_cmdline/g ] Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> [bwh: Backported to 4.9: - Add #include "cpu.h" in bugs.c - Adjust context, indentation] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Pawan Gupta authored
commit 95c5824f upstream. Add a kernel cmdline parameter "tsx" to control the Transactional Synchronization Extensions (TSX) feature. On CPUs that support TSX control, use "tsx=on|off" to enable or disable TSX. Not specifying this option is equivalent to "tsx=off". This is because on certain processors TSX may be used as a part of a speculative side channel attack. Carve out the TSX controlling functionality into a separate compilation unit because TSX is a CPU feature while the TSX async abort control machinery will go to cpu/bugs.c. [ bp: - Massage, shorten and clear the arg buffer. - Clarifications of the tsx= possible options - Josh. - Expand on TSX_CTRL availability - Pawan. ] Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> [bwh: Backported to 4.9: adjust filenames, context] Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
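A rough sketch of the early command-line handling this describes (enum and helper names reflect my reading of upstream and should be treated as assumptions):

    char arg[4];    /* "off" plus NUL; later extended for "auto" */
    int ret;

    ret = cmdline_find_option(boot_command_line, "tsx", arg, sizeof(arg));
    if (ret >= 0 && !strcmp(arg, "on"))
            tsx_ctrl_state = TSX_CTRL_ENABLE;
    else
            tsx_ctrl_state = TSX_CTRL_DISABLE;      /* "tsx=off" is the default */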
-
Pawan Gupta authored
commit 286836a7 upstream. Add a helper function to read the IA32_ARCH_CAPABILITIES MSR. Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Tested-by:
Neelima Krishnan <neelima.krishnan@intel.com> Reviewed-by:
Mark Gross <mgross@linux.intel.com> Reviewed-by:
Tony Luck <tony.luck@intel.com> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
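The helper is small; its likely shape (the name x86_read_arch_cap_msr() and this structure reflect my understanding of the upstream patch and should be read as a sketch):

    u64 x86_read_arch_cap_msr(void)
    {
            u64 ia32_cap = 0;

            if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
                    rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);

            return ia32_cap;
    }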
-
Pawan Gupta authored
commit c2955f27 upstream. Transactional Synchronization Extensions (TSX) may be used on certain processors as part of a speculative side channel attack. A microcode update for existing processors that are vulnerable to this attack will add a new MSR - IA32_TSX_CTRL to allow the system administrator the option to disable TSX as one of the possible mitigations. The CPUs which get this new MSR after a microcode upgrade are the ones which do not set MSR_IA32_ARCH_CAPABILITIES.MDS_NO (bit 5) because those CPUs have CPUID.MD_CLEAR, i.e., the VERW implementation which clears all CPU buffers takes care of the TAA case as well. [ Note that future processors that are not vulnerable will also support the IA32_TSX_CTRL MSR. ] Add defines for the new IA32_TSX_CTRL MSR and its bits. TSX has two sub-features: 1. Restricted Transactional Memory (RTM) is an explicitly-used feature where new instructions begin and end TSX transactions. 2. Hardware Lock Elision (HLE) is implicitly used when certain kinds of "old" style locks are used by software. Bit 7 of the IA32_ARCH_CAPABILITIES indicates the presence of the IA32_TSX_CTRL MSR. There are two control bits in IA32_TSX_CTRL MSR: Bit 0: When set, it disables the Restricted Transactional Memory (RTM) sub-feature of TSX (will force all transactions to abort on the XBEGIN instruction). Bit 1: When set, it disables the enumeration of the RTM and HLE feature (i.e. it will make CPUID(EAX=7).EBX{bit4} and CPUID(EAX=7).EBX{bit11} read as 0). The other TSX sub-feature, Hardware Lock Elision (HLE), is unconditionally disabled by the new microcode but still enumerated as present by CPUID(EAX=7).EBX{bit4}, unless disabled by IA32_TSX_CTRL_MSR[1] - TSX_CTRL_CPUID_CLEAR. Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Tested-by:
Neelima Krishnan <neelima.krishnan@intel.com> Reviewed-by:
Mark Gross <mgross@linux.intel.com> Reviewed-by:
Tony Luck <tony.luck@intel.com> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
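Putting numbers to the description above, the new defines look essentially like this (bit positions are given from memory of Intel's documentation; verify against the real msr-index.h):

    #define MSR_IA32_TSX_CTRL        0x00000122
    #define TSX_CTRL_RTM_DISABLE     BIT(0)   /* force all RTM transactions to abort */
    #define TSX_CTRL_CPUID_CLEAR     BIT(1)   /* hide RTM and HLE from CPUID */

    #define ARCH_CAP_TSX_CTRL_MSR    BIT(7)   /* IA32_TSX_CTRL MSR is present */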
-
Paolo Bonzini authored
commit 0c54914d upstream. Similar to AMD bits, set the Intel bits from the vendor-independent feature and bug flags, because KVM_GET_SUPPORTED_CPUID does not care about the vendor and they should be set on AMD processors as well. Suggested-by:
Jim Mattson <jmattson@google.com> Reviewed-by:
Jim Mattson <jmattson@google.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Ben Hutchings <ben@decadent.org.uk> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jack Pham authored
commit 31fe084f upstream. In the SG case this is already handled since a non-zero request->num_mapped_sgs is a clear indicator that dma_map_sg() had been called. While it would be nice to do the same for the singly mapped case by simply checking for non-zero request->dma, it's conceivable that 0 is a valid dma_addr_t handle. Hence add a flag 'dma_mapped' to struct usb_request and use this to determine the need to call dma_unmap_single(). Otherwise, if a request is not DMA mapped then the result of calling usb_request_unmap_request() would safely be a no-op. Signed-off-by:
Jack Pham <jackp@codeaurora.org> Signed-off-by:
Felipe Balbi <felipe.balbi@linux.intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
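A sketch of the unmap logic the text describes (only the dma_mapped flag and the num_mapped_sgs distinction are taken from the commit; the rest is assumed):

    if (req->num_mapped_sgs) {
            dma_unmap_sg(dev, req->sg, req->num_sgs, dir);
            req->num_mapped_sgs = 0;
    } else if (req->dma_mapped) {
            /* 0 can be a valid dma_addr_t, so req->dma alone cannot tell
             * whether dma_map_single() was ever called. */
            dma_unmap_single(dev, req->dma, req->length, dir);
            req->dma_mapped = 0;
    }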
-
Jonas Gorski authored
commit 8a38dacf upstream. The Ethernet Switch core mask was set to 0, causing the switch core not to be reset on BCM6368 on boot. Provide the proper mask so the switch core gets reset to a known good state. Fixes: 799faa62 ("MIPS: BCM63XX: add core reset helper") Signed-off-by:
Jonas Gorski <jonas.gorski@gmail.com> Signed-off-by:
Paul Burton <paul.burton@mips.com> Cc: linux-mips@vger.kernel.org Cc: Ralf Baechle <ralf@linux-mips.org> Cc: James Hogan <jhogan@kernel.org> Cc: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by:
Amit Pundir <amit.pundir@linaro.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Kefeng Wang authored
commit 56897b21 upstream.
task A:                                task B:
hci_uart_set_proto                     flush_to_ldisc
 - p->open(hu) -> h5_open //alloc h5    - receive_buf
 - set_bit HCI_UART_PROTO_READY         - tty_port_default_receive_buf
 - hci_uart_register_dev                - tty_ldisc_receive_buf
                                        - hci_uart_tty_receive
                                        - test_bit HCI_UART_PROTO_READY
                                        - h5_recv
 - clear_bit HCI_UART_PROTO_READY      while() {
 - p->open(hu) -> h5_close //free h5    - h5_rx_3wire_hdr
                                        - h5_reset() //use-after-free
                                       }
An ioctl can be used to set the hci uart proto, but there is a use-after-free issue when hci_uart_register_dev() fails in hci_uart_set_proto(); see the trace above. Fix this by setting the HCI_UART_PROTO_READY bit only when hci_uart_register_dev() returns success. Reported-by: syzbot+899a33dc0fa0dbaf06a6@syzkaller.appspotmail.com Signed-off-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Reviewed-by:
Jeremy Cline <jcline@redhat.com> Signed-off-by:
Marcel Holtmann <marcel@holtmann.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
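The ordering change described above boils down to publishing the READY bit only after registration has succeeded; a trimmed sketch (error handling simplified, names from the commit text):

    err = hci_uart_register_dev(hu);
    if (err) {
            p->close(hu);           /* frees the proto data, e.g. the h5 struct */
            return err;
    }
    /* Only now may the tty receive path start using the proto data. */
    set_bit(HCI_UART_PROTO_READY, &hu->flags);
    return 0;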
-
Junaid Shahid authored
[ Upstream commit d35b34a9 ] kvm should not attempt to read guest PDPTEs when CR0.PG = 0 and CR4.PAE = 1. Signed-off-by:
Junaid Shahid <junaids@google.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by:
Sasha Levin <sashal@kernel.org>
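A rough sketch of the condition this implies (the helpers shown are common KVM predicates used here as assumptions, not the literal patch):

    /* PDPTEs only exist under PAE paging, i.e. CR0.PG=1 and CR4.PAE=1
     * outside of long mode, so skip the read otherwise. */
    if (is_paging(vcpu) && is_pae(vcpu) && !is_long_mode(vcpu))
            load_pdptrs(vcpu, vcpu->arch.walk_mmu, kvm_read_cr3(vcpu));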
-
- 12 Nov, 2019 1 commit
-
-
Greg Kroah-Hartman authored
-