1. 21 Apr, 2022 17 commits
    • kvm: selftests: do not use bitfields larger than 32-bits for PTEs · f18b4aeb
      Paolo Bonzini authored
      Red Hat's QE team reported a test failure on access_tracking_perf_test:
      
      Testing guest mode: PA-bits:ANY, VA-bits:48,  4K pages
      guest physical test memory offset: 0x3fffbffff000
      
      Populating memory             : 0.684014577s
      Writing to populated memory   : 0.006230175s
      Reading from populated memory : 0.004557805s
      ==== Test Assertion Failure ====
        lib/kvm_util.c:1411: false
        pid=125806 tid=125809 errno=4 - Interrupted system call
           1  0x0000000000402f7c: addr_gpa2hva at kvm_util.c:1411
           2   (inlined by) addr_gpa2hva at kvm_util.c:1405
           3  0x0000000000401f52: lookup_pfn at access_tracking_perf_test.c:98
           4   (inlined by) mark_vcpu_memory_idle at access_tracking_perf_test.c:152
           5   (inlined by) vcpu_thread_main at access_tracking_perf_test.c:232
           6  0x00007fefe9ff81ce: ?? ??:0
           7  0x00007fefe9c64d82: ?? ??:0
        No vm physical memory at 0xffbffff000
      
      I can easily reproduce it with an Intel(R) Xeon(R) CPU E5-2630 with a
      46-bit PA.
      
      It turns out that the address translation for clearing idle page tracking
      returned a wrong result; addr_gva2gpa()'s last step, which is based on
      "pte[index[0]].pfn", did the calculation with a 40-bit width, so the
      high 12 bits were truncated.  In the above case the GPA to be returned
      should have been 0x3fffbffff000 for GVA 0xc0000000, but it was truncated
      to 0xffbffff000 and the subsequent gpa2hva lookup failed.
      
      The width of operations on bit fields greater than 32-bit is
      implementation defined, and differs between GCC (which uses the bitfield
      precision) and clang (which uses 64-bit arithmetic), so this is a
      potential minefield.  Remove the bit fields and use manual masking
      instead.
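
      A userspace illustration of the hazard (hypothetical code, not the
      selftest itself): arithmetic on a bit field wider than 32 bits is
      implementation defined, while masking a plain uint64_t is not.

        #include <stdint.h>
        #include <stdio.h>

        struct pte {
                uint64_t present:1;
                uint64_t ignored_11_1:11;
                uint64_t pfn:40;        /* wider than 32 bits */
                uint64_t ignored_63_52:12;
        };

        int main(void)
        {
                struct pte pte = { .pfn = 0x3fffbffffULL };
                uint64_t raw = 0x3fffbffffULL << 12 | 1;

                /* Implementation defined: may be computed in the field's
                 * 40-bit precision and lose the upper bits of the GPA. */
                printf("bit field: 0x%llx\n",
                       (unsigned long long)(pte.pfn * 4096));

                /* Well defined: always yields 0x3fffbffff000. */
                printf("masked:    0x%llx\n",
                       (unsigned long long)(((raw >> 12) &
                                             ((1ULL << 40) - 1)) * 4096));
                return 0;
        }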
      
      Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2075036
      Reported-by: Nana Liu <nanliu@redhat.com>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Tested-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      f18b4aeb
    • KVM: SEV: add cache flush to solve SEV cache incoherency issues · 683412cc
      Mingwei Zhang authored
      Flush the CPU caches when memory is reclaimed from an SEV guest (where
      reclaim also includes it being unmapped from KVM's memslots).  Due to lack
      of coherency for SEV encrypted memory, failure to flush results in silent
      data corruption if userspace is malicious/broken and doesn't ensure SEV
      guest memory is properly pinned and unpinned.
      
      Cache coherency is not enforced across the VM boundary in SEV (AMD APM
      vol.2 Section 15.34.7).  Confidential cache lines generated by a
      confidential VM guest have to be explicitly flushed on the host side.
      If a memory page containing dirty confidential cache lines is released
      by the VM and reallocated to another user, the stale cache lines may
      corrupt the new user's data at a later time.
      
      KVM takes a shortcut by assuming that all confidential memory remains
      pinned until the end of the VM's lifetime, and therefore does not flush
      the cache on mmu_notifier invalidation events.  Because of this
      incorrect assumption and the lack of cache flushing, malicious userspace
      can crash the host kernel by creating a malicious VM and continuously
      allocating and releasing unpinned confidential memory pages while the VM
      is running.
      
      Add cache flush operations to the mmu_notifier operations to ensure that
      any physical memory leaving the guest VM gets flushed.  In particular,
      hook the mmu_notifier_invalidate_range_start and mmu_notifier_release
      events and flush the cache accordingly.  The flush is performed after
      releasing the mmu lock to avoid contention with other vCPUs.
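
      A minimal sketch of the idea (illustrative only; the real hook and call
      sites differ in detail, and the example_ name is made up):

        /* Called when guest memory is reclaimed, e.g. from the
         * mmu_notifier invalidate/release paths, after mmu_lock has been
         * dropped.  SEV offers no VA-range flush here, so flush
         * everything. */
        static void example_guest_memory_reclaimed(struct kvm *kvm)
        {
                if (!sev_guest(kvm))
                        return;

                wbinvd_on_all_cpus();
        }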
      
      Cc: stable@vger.kernel.org
      Suggested-by: Sean Christopherson <seanjc@google.com>
      Reported-by: Mingwei Zhang <mizhang@google.com>
      Signed-off-by: Mingwei Zhang <mizhang@google.com>
      Message-Id: <20220421031407.2516575-4-mizhang@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      683412cc
    • KVM: SVM: Flush when freeing encrypted pages even on SME_COHERENT CPUs · d45829b3
      Mingwei Zhang authored
      Use clflush_cache_range() to flush the confidential memory when
      SME_COHERENT is supported by the AMD CPU.  A cache flush is still
      needed, since SME_COHERENT only covers cache invalidation on the CPU
      side; all confidential cache lines are still incoherent with DMA
      devices.
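
      A sketch of the resulting policy (illustrative; the in-tree helper is
      shaped differently and the example_ name is made up):

        /* Flush one guest-owned page before it is released. */
        static void example_flush_page(void *va)
        {
                if (boot_cpu_has(X86_FEATURE_SME_COHERENT)) {
                        /* CPU caches are coherent, but lines may still be
                         * stale with respect to DMA; flush by cache line. */
                        clflush_cache_range(va, PAGE_SIZE);
                        return;
                }

                wbinvd_on_all_cpus();
        }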
      
      Cc: stable@vger.kernel.org
      
      Fixes: add5e2f0 ("KVM: SVM: Add support for the SEV-ES VMSA")
      Reviewed-by: Sean Christopherson <seanjc@google.com>
      Signed-off-by: Mingwei Zhang <mizhang@google.com>
      Message-Id: <20220421031407.2516575-3-mizhang@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d45829b3
    • KVM: SVM: Simplify and harden helper to flush SEV guest page(s) · 4bbef7e8
      Sean Christopherson authored
      Rework sev_flush_guest_memory() to explicitly handle only a single page,
      and harden it to fall back to WBINVD if VM_PAGE_FLUSH fails.  Per-page
      flushing is currently used only to flush the VMSA, and in its current
      form, the helper is completely broken with respect to flushing actual
      guest memory, i.e. won't work correctly for an arbitrary memory range.
      
      VM_PAGE_FLUSH takes a host virtual address, and is subject to normal page
      walks, i.e. will fault if the address is not present in the host page
      tables or does not have the correct permissions.  Current AMD CPUs also
      do not honor SMAP overrides (undocumented in kernel versions of the APM),
      so passing in a userspace address is completely out of the question.  In
      other words, KVM would need to manually walk the host page tables to get
      the pfn, ensure the pfn is stable, and then use the direct map to invoke
      VM_PAGE_FLUSH.  And the latter might not even work, e.g. if userspace is
      particularly evil/clever and backs the guest with Secret Memory (which
      unmaps memory from the direct map).
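
      A sketch of the hardened single-page flush (illustrative; the example_
      name is made up and the details differ from the in-tree code):

        /* 'va' must be a kernel mapping of the page (e.g. via the direct
         * map), never a userspace address; 'asid' is the guest's SEV ASID. */
        static void example_flush_encrypted_page(void *va, unsigned int asid)
        {
                /* VM_PAGE_FLUSH is subject to normal page-table checks and
                 * can fault; fall back to WBINVD rather than risk leaving
                 * dirty encrypted lines in the cache. */
                if (wrmsrl_safe(MSR_AMD64_VM_PAGE_FLUSH,
                                ((unsigned long)va & PAGE_MASK) | asid))
                        wbinvd_on_all_cpus();
        }
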
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      
      Fixes: add5e2f0 ("KVM: SVM: Add support for the SEV-ES VMSA")
      Reported-by: Mingwei Zhang <mizhang@google.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Mingwei Zhang <mizhang@google.com>
      Message-Id: <20220421031407.2516575-2-mizhang@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      4bbef7e8
    • KVM: selftests: Silence compiler warning in the kvm_page_table_test · 266a19a0
      Thomas Huth authored
      When compiling kvm_page_table_test.c, I get this compiler warning
      with gcc 11.2:
      
      kvm_page_table_test.c: In function 'pre_init_before_test':
      ../../../../tools/include/linux/kernel.h:44:24: warning: comparison of
       distinct pointer types lacks a cast
         44 |         (void) (&_max1 == &_max2);              \
            |                        ^~
      kvm_page_table_test.c:281:21: note: in expansion of macro 'max'
        281 |         alignment = max(0x100000, alignment);
            |                     ^~~
      
      Fix it by adjusting the type of the absolute value.
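
      The warning comes from max()'s type check: the bare constant is an int
      while "alignment" is a 64-bit value.  One way to address this class of
      warning (illustrative, not necessarily the exact hunk) is to give both
      operands the same type:

        /* "alignment" is a uint64_t in the test. */

        /* warns: comparison of distinct pointer types lacks a cast */
        alignment = max(0x100000, alignment);

        /* quiet: both operands now have the same type */
        alignment = max((uint64_t)0x100000, alignment);
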
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
      Message-Id: <20220414103031.565037-1-thuth@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      266a19a0
    • KVM: x86/pmu: Update AMD PMC sample period to fix guest NMI-watchdog · 75189d1d
      Like Xu authored
      The NMI watchdog is one of the favorite features of kernel developers,
      but it does not work in an AMD guest even with vPMU enabled; worse,
      the system misrepresents this capability via /proc.
      
      This is a PMC emulation error. KVM does not pass the latest valid
      value to perf_event in time when guest NMI-watchdog is running, thus
      the perf_event corresponding to the watchdog counter will enter the
      old state at some point after the first guest NMI injection, forcing
      the hardware register PMC0 to be constantly written to 0x800000000001.
      
      Meanwhile, the running counter should accurately reflect its new value
      based on the latest coordinated pmc->counter (from vPMC's point of view)
      rather than the value written directly by the guest.
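
      A sketch of the fix's core idea (illustrative; the example_ helper is
      made up, though the kvm/perf APIs used are real):

        /* After the guest writes a PMC, reprogram the backing perf_event's
         * sample period from the new counter value so the next overflow
         * (and thus the watchdog NMI) arrives when the guest expects it. */
        static void example_pmc_update_sample_period(struct kvm_pmc *pmc)
        {
                if (!pmc->perf_event || pmc->is_paused)
                        return;

                perf_event_period(pmc->perf_event,
                                  get_sample_period(pmc, pmc->counter));
        }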
      
      Fixes: 168d918f ("KVM: x86: Adjust counter sample period after a wrmsr")
      Reported-by: Dongli Cao <caodongli@kingsoft.com>
      Signed-off-by: Like Xu <likexu@tencent.com>
      Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
      Tested-by: Yanan Wang <wangyanan55@huawei.com>
      Reviewed-by: Jim Mattson <jmattson@google.com>
      Message-Id: <20220409015226.38619-1-likexu@tencent.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      75189d1d
    • x86/kvm: Preserve BSP MSR_KVM_POLL_CONTROL across suspend/resume · 0361bdfd
      Wanpeng Li authored
      MSR_KVM_POLL_CONTROL is cleared on reset, thus reverting guests to
      host-side polling after suspend/resume.  Non-bootstrap CPUs are
      restored correctly by the haltpoll driver because they are hot-unplugged
      during suspend and hot-plugged during resume; however, the BSP
      is not hotpluggable and remains in host-side polling mode after
      the guest resumes.  This makes the guest pay the cost of vmexits
      every time it enters idle.
      
      Fix it by recording BSP's haltpoll state and resuming it during guest
      resume.
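
      A minimal sketch of the approach (illustrative; the example_ names are
      made up and the real patch differs in detail):

        static u64 example_poll_control;

        /* syscore ops run on the BSP with the other CPUs offline, which is
         * exactly the CPU the haltpoll driver cannot restore. */
        static int example_kvm_suspend(void)
        {
                rdmsrl(MSR_KVM_POLL_CONTROL, example_poll_control);
                return 0;
        }

        static void example_kvm_resume(void)
        {
                wrmsrl(MSR_KVM_POLL_CONTROL, example_poll_control);
        }

        static struct syscore_ops example_kvm_syscore_ops = {
                .suspend = example_kvm_suspend,
                .resume  = example_kvm_resume,
        };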
      
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
      Message-Id: <1650267752-46796-1-git-send-email-wanpengli@tencent.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      0361bdfd
    • KVM: SPDX style and spelling fixes · a413a625
      Tom Rix authored
      SPDX comments use /* */ style comments in headers and
      // style comments in .c files.  Also fix two spelling mistakes.
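
      For reference, the two forms are:

        In header files (*.h):
                /* SPDX-License-Identifier: GPL-2.0 */
        In .c files:
                // SPDX-License-Identifier: GPL-2.0
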
      Signed-off-by: Tom Rix <trix@redhat.com>
      Message-Id: <20220410153840.55506-1-trix@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      a413a625
    • KVM: x86: Skip KVM_GUESTDBG_BLOCKIRQ APICv update if APICv is disabled · 0047fb33
      Sean Christopherson authored
      Skip the APICv inhibit update for KVM_GUESTDBG_BLOCKIRQ if APICv is
      disabled at the module level to avoid having to acquire the mutex and
      potentially process all vCPUs. The DISABLE inhibit will (barring bugs)
      never be lifted, so piling on more inhibits is unnecessary.
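
      A sketch of the short-circuit (illustrative; the example_ name is made
      up and the in-tree helper differs in detail):

        static void example_update_blockirq_inhibit(struct kvm *kvm)
        {
                bool blockirq = false;
                struct kvm_vcpu *vcpu;
                unsigned long i;

                /* The DISABLE inhibit is permanent; don't bother scanning
                 * vCPUs or taking the inhibit lock. */
                if (!enable_apicv)
                        return;

                kvm_for_each_vcpu(i, vcpu, kvm) {
                        if (vcpu->guest_debug & KVM_GUESTDBG_BLOCKIRQ) {
                                blockirq = true;
                                break;
                        }
                }

                kvm_set_or_clear_apicv_inhibit(kvm,
                                APICV_INHIBIT_REASON_BLOCKIRQ, blockirq);
        }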
      
      Fixes: cae72dcc ("KVM: x86: inhibit APICv when KVM_GUESTDBG_BLOCKIRQ active")
      Cc: Maxim Levitsky <mlevitsk@redhat.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20220420013732.3308816-5-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      0047fb33
    • KVM: x86: Pend KVM_REQ_APICV_UPDATE during vCPU creation to fix a race · 423ecfea
      Sean Christopherson authored
      Make a KVM_REQ_APICV_UPDATE request when creating a vCPU with an
      in-kernel local APIC and APICv enabled at the module level.  Consuming
      kvm_apicv_activated() and stuffing vcpu->arch.apicv_active directly can
      race with __kvm_set_or_clear_apicv_inhibit(), as vCPU creation happens
      before the vCPU is fully onlined, i.e. it won't get the request made to
      "all" vCPUs.  If APICv is globally inhibited between setting apicv_active
      and onlining the vCPU, the vCPU will end up running with APICv enabled
      and trigger KVM's sanity check.
      
      Mark APICv as active during vCPU creation if APICv is enabled at the
      module level, both to be optimistic about its final state, e.g. to avoid
      additional VMWRITEs on VMX, and because there are likely bugs lurking
      since KVM checks apicv_active in multiple vCPU creation paths.  While
      keeping the current behavior of consuming kvm_apicv_activated() is
      arguably safer from a regression perspective, force apicv_active so that
      vCPU creation runs with deterministic state and so that if there are bugs,
      they are found sooner than later, i.e. not when some crazy race condition
      is hit.
      
        WARNING: CPU: 0 PID: 484 at arch/x86/kvm/x86.c:9877 vcpu_enter_guest+0x2ae3/0x3ee0 arch/x86/kvm/x86.c:9877
        Modules linked in:
        CPU: 0 PID: 484 Comm: syz-executor361 Not tainted 5.16.13 #2
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1~cloud0 04/01/2014
        RIP: 0010:vcpu_enter_guest+0x2ae3/0x3ee0 arch/x86/kvm/x86.c:9877
        Call Trace:
         <TASK>
         vcpu_run arch/x86/kvm/x86.c:10039 [inline]
         kvm_arch_vcpu_ioctl_run+0x337/0x15e0 arch/x86/kvm/x86.c:10234
         kvm_vcpu_ioctl+0x4d2/0xc80 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3727
         vfs_ioctl fs/ioctl.c:51 [inline]
         __do_sys_ioctl fs/ioctl.c:874 [inline]
         __se_sys_ioctl fs/ioctl.c:860 [inline]
         __x64_sys_ioctl+0x16d/0x1d0 fs/ioctl.c:860
         do_syscall_x64 arch/x86/entry/common.c:50 [inline]
         do_syscall_64+0x38/0x90 arch/x86/entry/common.c:80
         entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      The bug was hit by a syzkaller spamming VM creation with 2 vCPUs and a
      call to KVM_SET_GUEST_DEBUG.
      
        r0 = openat$kvm(0xffffffffffffff9c, &(0x7f0000000000), 0x0, 0x0)
        r1 = ioctl$KVM_CREATE_VM(r0, 0xae01, 0x0)
        ioctl$KVM_CAP_SPLIT_IRQCHIP(r1, 0x4068aea3, &(0x7f0000000000)) (async)
        r2 = ioctl$KVM_CREATE_VCPU(r1, 0xae41, 0x0) (async)
        r3 = ioctl$KVM_CREATE_VCPU(r1, 0xae41, 0x400000000000002)
        ioctl$KVM_SET_GUEST_DEBUG(r3, 0x4048ae9b, &(0x7f00000000c0)={0x5dda9c14aa95f5c5})
        ioctl$KVM_RUN(r2, 0xae80, 0x0)
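
      A sketch of the fix's core (an illustrative fragment of vCPU creation;
      the real code may differ in detail):

        if (irqchip_in_kernel(vcpu->kvm) && enable_apicv) {
                /* Be optimistic, then let KVM_REQ_APICV_UPDATE reconcile
                 * the state with any inhibit set while this vCPU was not
                 * yet visible to "all vCPUs" requests. */
                vcpu->arch.apicv_active = true;
                kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
        }
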
      Reported-by: Gaoning Pan <pgn@zju.edu.cn>
      Reported-by: Yongkang Jia <kangel@zju.edu.cn>
      Fixes: 8df14af4 ("kvm: x86: Add support for dynamic APICv activation")
      Cc: stable@vger.kernel.org
      Cc: Maxim Levitsky <mlevitsk@redhat.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20220420013732.3308816-4-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      423ecfea
    • KVM: nVMX: Defer APICv updates while L2 is active until L1 is active · 7c69661e
      Sean Christopherson authored
      Defer APICv updates that occur while L2 is active until nested VM-Exit,
      i.e. until L1 regains control.  vmx_refresh_apicv_exec_ctrl() assumes L1
      is active and (a) stomps all over vmcs02 and (b) neglects to ever update
      vmcs01.  E.g. if vmcs12 doesn't enable the TPR shadow for L2 (and thus no
      APICv controls), L1 performs nested VM-Enter APICv inhibited, and APICv
      becomes uninhibited while L2 is active, KVM will set various APICv controls
      in vmcs02 and trigger a failed VM-Entry.  The kicker is that, unless
      running with nested_early_check=1, KVM blames L1 and chaos ensues.
      
      In all cases, ignoring vmcs02 and always deferring the inhibition change
      to vmcs01 is correct (or at least acceptable).  The ABSENT and DISABLE
      inhibitions cannot truly change while L2 is active (see below).
      
      IRQ_BLOCKING can change, but it is firmly a best effort debug feature.
      Furthermore, only L2's APIC is accelerated/virtualized to the full extent
      possible, e.g. even if L1 passes through its APIC to L2, normal MMIO/MSR
      interception will apply to the virtual APIC managed by KVM.
      The exception is the SELF_IPI register when x2APIC is enabled, but that's
      an acceptable hole.
      
      Lastly, Hyper-V's Auto EOI can technically be toggled if L1 exposes the
      MSRs to L2, but for that to work in any sane capacity, L1 would need to
      pass through IRQs to L2 as well, and IRQs must be intercepted to enable
      virtual interrupt delivery.  I.e. exposing Auto EOI to L2 and enabling
      VID for L2 are, for all intents and purposes, mutually exclusive.
      
      Lack of dynamic toggling is also why this scenario is all but impossible
      to encounter in KVM's current form.  But a future patch will pend an
      APICv update request _during_ vCPU creation to plug a race where a vCPU
      that's being created doesn't get included in the "all vCPUs request"
      because it's not yet visible to other vCPUs.  If userspace restores L2
      after VM creation (hello, KVM selftests), the first KVM_RUN will occur
      while L2 is active and thus service the APICv update request made during
      VM creation.
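
      A sketch of the deferral (illustrative; the flag name is an assumption
      and the example_ function is made up):

        /* Called when an APICv inhibit changes for this vCPU. */
        static void example_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
        {
                struct vcpu_vmx *vmx = to_vmx(vcpu);

                if (is_guest_mode(vcpu)) {
                        /* Don't touch vmcs02; record that vmcs01 needs a
                         * refresh and do it at nested VM-Exit instead. */
                        vmx->nested.update_vmcs01_apicv_status = true;
                        return;
                }

                /* ... update vmcs01's APICv controls here ... */
        }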
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220420013732.3308816-3-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      7c69661e
    • KVM: x86: Tag APICv DISABLE inhibit, not ABSENT, if APICv is disabled · 80f0497c
      Sean Christopherson authored
      Set the DISABLE inhibit, not the ABSENT inhibit, if APICv is disabled via
      module param.  A recent refactoring to add a wrapper for setting/clearing
      inhibits unintentionally changed the flag, probably due to a copy+paste
      goof.
      
      Fixes: 4f4c4a3e ("KVM: x86: Trace all APICv inhibit changes and capture overall status")
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20220420013732.3308816-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      80f0497c
    • KVM: Initialize debugfs_dentry when a VM is created to avoid NULL deref · 5c697c36
      Sean Christopherson authored
      Initialize debugfs_dentry to its semi-magical -ENOENT value when the VM
      is created.  KVM's teardown when VM creation fails is kludgy and calls
      kvm_uevent_notify_change() and kvm_destroy_vm_debugfs() even if KVM never
      attempted kvm_create_vm_debugfs().  Because debugfs_dentry is zero
      initialized, the IS_ERR() checks pass and KVM derefs a NULL pointer.
      
        BUG: kernel NULL pointer dereference, address: 0000000000000018
        #PF: supervisor read access in kernel mode
        #PF: error_code(0x0000) - not-present page
        PGD 1068b1067 P4D 1068b1067 PUD 1068b0067 PMD 0
        Oops: 0000 [#1] SMP
        CPU: 0 PID: 871 Comm: repro Not tainted 5.18.0-rc1+ #825
        Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
        RIP: 0010:__dentry_path+0x7b/0x130
        Call Trace:
         <TASK>
         dentry_path_raw+0x42/0x70
         kvm_uevent_notify_change.part.0+0x10c/0x200 [kvm]
         kvm_put_kvm+0x63/0x2b0 [kvm]
         kvm_dev_ioctl+0x43a/0x920 [kvm]
         __x64_sys_ioctl+0x83/0xb0
         do_syscall_64+0x31/0x50
         entry_SYSCALL_64_after_hwframe+0x44/0xae
         </TASK>
        Modules linked in: kvm_intel kvm irqbypass
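
      The fix amounts to pre-setting the sentinel during VM creation, roughly
      (a minimal sketch):

        /* In kvm_create_vm(): make early error paths see "no debugfs
         * directory" instead of dereferencing a NULL dentry. */
        kvm->debugfs_dentry = ERR_PTR(-ENOENT);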
      
      Fixes: a44a4cc1 ("KVM: Don't create VM debugfs files outside of the VM directory")
      Cc: stable@vger.kernel.org
      Cc: Marc Zyngier <maz@kernel.org>
      Cc: Oliver Upton <oupton@google.com>
      Reported-by: syzbot+df6fbbd2ee39f21289ef@syzkaller.appspotmail.com
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Oliver Upton <oupton@google.com>
      Message-Id: <20220415004622.2207751-1-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      5c697c36
    • KVM: Add helpers to wrap vcpu->srcu_idx and yell if it's abused · 2031f287
      Sean Christopherson authored
      Add wrappers to acquire/release KVM's SRCU lock when stashing the index
      in vcpu->srcu_idx, along with rudimentary detection of illegal usage,
      e.g. re-acquiring SRCU and thus overwriting vcpu->srcu_idx.  Because the
      SRCU index is (currently) either 0 or 1, illegal nesting bugs can go
      unnoticed for quite some time and only cause problems when the nested
      lock happens to get a different index.
      
      Wrap the WARNs in PROVE_RCU=y, and make them ONCE, otherwise KVM will
      likely yell so loudly that it will bring the kernel to its knees.
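
      A sketch of the helpers (illustrative; it assumes a new srcu_depth
      counter that only exists under CONFIG_PROVE_RCU):

        static inline void kvm_vcpu_srcu_read_lock(struct kvm_vcpu *vcpu)
        {
        #ifdef CONFIG_PROVE_RCU
                WARN_ONCE(vcpu->srcu_depth++,
                          "KVM: Illegal vCPU srcu_idx LOCK, depth=%d",
                          vcpu->srcu_depth - 1);
        #endif
                vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
        }

        static inline void kvm_vcpu_srcu_read_unlock(struct kvm_vcpu *vcpu)
        {
                srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
        #ifdef CONFIG_PROVE_RCU
                WARN_ONCE(--vcpu->srcu_depth,
                          "KVM: Illegal vCPU srcu_idx UNLOCK, depth=%d",
                          vcpu->srcu_depth);
        #endif
        }
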
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Tested-by: Fabiano Rosas <farosas@linux.ibm.com>
      Message-Id: <20220415004343.2203171-4-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      2031f287
    • KVM: RISC-V: Use kvm_vcpu.srcu_idx, drop RISC-V's unnecessary copy · fdd6f6ac
      Sean Christopherson authored
      Use the generic kvm_vcpu's srcu_idx instead of using an identical field
      in RISC-V's version of kvm_vcpu_arch.  Generic KVM very intentionally
      does not touch vcpu->srcu_idx, i.e. there's zero chance of running afoul
      of common code.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20220415004343.2203171-3-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      fdd6f6ac
    • KVM: x86: Don't re-acquire SRCU lock in complete_emulated_io() · 2d089356
      Sean Christopherson authored
      Don't re-acquire SRCU in complete_emulated_io() now that KVM acquires the
      lock in kvm_arch_vcpu_ioctl_run().  More importantly, don't overwrite
      vcpu->srcu_idx.  If the index acquired by complete_emulated_io() differs
      from the one acquired by kvm_arch_vcpu_ioctl_run(), KVM will effectively
      leak a lock and hang if/when synchronize_srcu() is invoked for the
      relevant grace period.
      
      Fixes: 8d25b7be ("KVM: x86: pull kvm->srcu read-side to kvm_arch_vcpu_ioctl_run")
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Message-Id: <20220415004343.2203171-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      2d089356
    • Merge tag 'kvm-riscv-fixes-5.18-2' of https://github.com/kvm-riscv/linux into HEAD · 012c7225
      Paolo Bonzini authored
      KVM/riscv fixes for 5.18, take #2
      
      - Remove 's' & 'u' as valid ISA extension
      
      - Do not allow disabling the base extensions 'i'/'m'/'a'/'c'
      012c7225
  2. 20 Apr, 2022 2 commits
  3. 17 Apr, 2022 10 commits
  4. 16 Apr, 2022 6 commits
    • Merge tag 'soc-fixes-5.18-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc · 70a0cec8
      Linus Torvalds authored
      Pull ARM SoC fixes from Arnd Bergmann:
       "There are a number of SoC bugfixes that came in since the merge
        window, and more of them are already pending.
      
        This batch includes:
      
         - A boot time regression fix for davinci that triggered on
           multi_v5_defconfig when booting any platform
      
         - Defconfig updates to address removed features, changed symbol names
           or dependencies, for gemini, ux500, and pxa
      
         - Email address changes for Krzysztof Kozlowski
      
         - Build warning fixes for ep93xx and iop32x
      
         - Devicetree warning fixes across many platforms
      
         - Minor bugfixes for the reset controller, memory controller and SCMI
           firmware subsystems plus the versatile-express board"
      
      * tag 'soc-fixes-5.18-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (34 commits)
        ARM: config: Update Gemini defconfig
        arm64: dts: qcom/sdm845-shift-axolotl: Fix boolean properties with values
        ARM: dts: align SPI NOR node name with dtschema
        ARM: dts: Fix more boolean properties with values
        arm/arm64: dts: qcom: Fix boolean properties with values
        arm64: dts: imx: Fix imx8*-var-som touchscreen property sizes
        arm: dts: imx: Fix boolean properties with values
        arm64: dts: tegra: Fix boolean properties with values
        arm: dts: at91: Fix boolean properties with values
        arm: configs: imote2: Drop defconfig as board support dropped.
        ep93xx: clock: Don't use plain integer as NULL pointer
        ep93xx: clock: Fix UAF in ep93xx_clk_register_gate()
        ARM: vexpress/spc: Fix all the kernel-doc build warnings
        ARM: vexpress/spc: Fix kernel-doc build warning for ve_spc_cpu_in_wfi
        ARM: config: u8500: Re-enable AB8500 battery charging
        ARM: config: u8500: Add some common hardware
        memory: fsl_ifc: populate child nodes of buses and mfd devices
        ARM: config: Refresh U8500 defconfig
        firmware: arm_scmi: Fix sparse warnings in OPTEE transport driver
        firmware: arm_scmi: Replace zero-length array with flexible-array member
        ...
      70a0cec8
    • Merge tag 'random-5.18-rc3-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random · 92edbe32
      Linus Torvalds authored
      Pull random number generator fixes from Jason Donenfeld:
      
       - Per your suggestion, random reads now won't fail if there's a page
         fault after some non-zero amount of data has been read, which makes
         the behavior consistent with all other reads in the kernel.
      
       - Rather than an inconsistent mix of random_get_entropy() returning an
         unsigned long or a cycles_t, now it just returns an unsigned long.
      
       - A memcpy() was replaced with a memmove(), because the addresses are
         sometimes overlapping. In practice the destination is always before
         the source, so not really an issue, but better to be correct than
         not.
      
      * tag 'random-5.18-rc3-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
        random: use memmove instead of memcpy for remaining 32 bytes
        random: make random_get_entropy() return an unsigned long
        random: allow partial reads if later user copies fail
      92edbe32
    • Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi · 90ea17a9
      Linus Torvalds authored
      Pull SCSI fixes from James Bottomley:
       "13 fixes, all in drivers.
      
        The most extensive changes are in the iscsi series (affecting drivers
        qedi, cxgbi and bnx2i), the next most is scsi_debug, but that's just a
        simple revert and then minor updates to pm80xx"
      
      * tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
        scsi: iscsi: MAINTAINERS: Add Mike Christie as co-maintainer
        scsi: qedi: Fix failed disconnect handling
        scsi: iscsi: Fix NOP handling during conn recovery
        scsi: iscsi: Merge suspend fields
        scsi: iscsi: Fix unbound endpoint error handling
        scsi: iscsi: Fix conn cleanup and stop race during iscsid restart
        scsi: iscsi: Fix endpoint reuse regression
        scsi: iscsi: Release endpoint ID when its freed
        scsi: iscsi: Fix offload conn cleanup when iscsid restarts
        scsi: iscsi: Move iscsi_ep_disconnect()
        scsi: pm80xx: Enable upper inbound, outbound queues
        scsi: pm80xx: Mask and unmask upper interrupt vectors 32-63
        Revert "scsi: scsi_debug: Address races following module load"
      90ea17a9
    • Merge tag 'intel-gpio-v5.18-2' of... · 0ebb4fbe
      Bartosz Golaszewski authored
      Merge tag 'intel-gpio-v5.18-2' of gitolite.kernel.org:pub/scm/linux/kernel/git/andy/linux-gpio-intel into gpio/for-current
      
      intel-gpio for v5.18-2
      
      * Couple of fixes related to handling unsigned value of the pin from ACPI
      
      gpiolib:
       -  acpi: Convert type for pin to be unsigned
       -  acpi: use correct format characters
      0ebb4fbe
    • Merge tag 'dma-mapping-5.18-2' of git://git.infradead.org/users/hch/dma-mapping · b0086839
      Linus Torvalds authored
      Pull dma-mapping fix from Christoph Hellwig:
      
       - avoid a double memory copy for swiotlb (Chao Gao)
      
      * tag 'dma-mapping-5.18-2' of git://git.infradead.org/users/hch/dma-mapping:
        dma-direct: avoid redundant memory sync for swiotlb
      b0086839
    • random: use memmove instead of memcpy for remaining 32 bytes · 35a33ff3
      Jason A. Donenfeld authored
      In order to immediately overwrite the old key on the stack, before
      servicing a userspace request for bytes, we use the remaining 32 bytes
      of block 0 as the key. This means moving indices 8,9,a,b,c,d,e,f ->
      4,5,6,7,8,9,a,b. Since 4 < 8, for the kernel implementations of
      memcpy(), this doesn't actually appear to be a problem in practice. But
      relying on that characteristic seems a bit brittle. So let's change that
      to a proper memmove(), which is the by-the-books way of handling
      overlapping memory copies.
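
      A toy illustration of the difference (hypothetical buffer, not the
      actual chacha state handling):

        #include <string.h>

        /* Move the last 32 bytes of a 64-byte block down to offset 16.
         * The ranges overlap, so memcpy() is undefined behavior here;
         * memmove() is specified to handle overlapping copies. */
        void shift_down(unsigned char block[64])
        {
                memmove(block + 16, block + 32, 32);
        }
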
      Reviewed-by: Dominik Brodowski <linux@dominikbrodowski.net>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
      35a33ff3
  5. 15 Apr, 2022 5 commits
    • Merge branch 'akpm' (patches from Andrew) · 59250f8a
      Linus Torvalds authored
      Merge misc fixes from Andrew Morton:
       "14 patches.
      
        Subsystems affected by this patch series: MAINTAINERS, binfmt, and
        mm (tmpfs, secretmem, kasan, kfence, pagealloc, zram, compaction,
        hugetlb, vmalloc, and kmemleak)"
      
      * emailed patches from Andrew Morton <akpm@linux-foundation.org>:
        mm: kmemleak: take a full lowmem check in kmemleak_*_phys()
        mm/vmalloc: fix spinning drain_vmap_work after reading from /proc/vmcore
        revert "fs/binfmt_elf: use PT_LOAD p_align values for static PIE"
        revert "fs/binfmt_elf: fix PT_LOAD p_align values for loaders"
        hugetlb: do not demote poisoned hugetlb pages
        mm: compaction: fix compiler warning when CONFIG_COMPACTION=n
        mm: fix unexpected zeroed page mapping with zram swap
        mm, page_alloc: fix build_zonerefs_node()
        mm, kfence: support kmem_dump_obj() for KFENCE objects
        kasan: fix hw tags enablement when KUNIT tests are disabled
        irq_work: use kasan_record_aux_stack_noalloc() record callstack
        mm/secretmem: fix panic when growing a memfd_secret
        tmpfs: fix regressions from wider use of ZERO_PAGE
        MAINTAINERS: Broadcom internal lists aren't maintainers
      59250f8a
    • Merge tag 'for-5.18/dm-fixes-2' of... · ce673f63
      Linus Torvalds authored
      Merge tag 'for-5.18/dm-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
      
      Pull device mapper fixes from Mike Snitzer:
      
       - Fix memory corruption in DM integrity target when tag_size is less
         than digest size.
      
       - Fix DM multipath's historical-service-time path selector to not use
         sched_clock() and ktime_get_ns(); only use ktime_get_ns().
      
       - Fix dm_io->orig_bio NULL pointer dereference in dm_zone_map_bio() due
         to 5.18 changes that overlooked DM zone's use of ->orig_bio
      
       - Fix for regression that broke the use of dm_accept_partial_bio() for
         "abnormal" IO (e.g. WRITE ZEROES) that does not need duplicate bios
      
       - Fix DM's issuing of an empty flush bio so that its size is 0.
      
      * tag 'for-5.18/dm-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
        dm: fix bio length of empty flush
        dm: allow dm_accept_partial_bio() for dm_io without duplicate bios
        dm zone: fix NULL pointer dereference in dm_zone_map_bio
        dm mpath: only use ktime_get_ns() in historical selector
        dm integrity: fix memory corruption when tag_size is less than digest size
      ce673f63
    • mm: kmemleak: take a full lowmem check in kmemleak_*_phys() · 23c2d497
      Patrick Wang authored
      The kmemleak_*_phys() APIs do not check the address against lowmem's min
      boundary, while the caller may pass an address below lowmem, which will
      trigger an oops:
      
        # echo scan > /sys/kernel/debug/kmemleak
        Unable to handle kernel paging request at virtual address ff5fffffffe00000
        Oops [#1]
        Modules linked in:
        CPU: 2 PID: 134 Comm: bash Not tainted 5.18.0-rc1-next-20220407 #33
        Hardware name: riscv-virtio,qemu (DT)
        epc : scan_block+0x74/0x15c
         ra : scan_block+0x72/0x15c
        epc : ffffffff801e5806 ra : ffffffff801e5804 sp : ff200000104abc30
         gp : ffffffff815cd4e8 tp : ff60000004cfa340 t0 : 0000000000000200
         t1 : 00aaaaaac23954cc t2 : 00000000000003ff s0 : ff200000104abc90
         s1 : ffffffff81b0ff28 a0 : 0000000000000000 a1 : ff5fffffffe01000
         a2 : ffffffff81b0ff28 a3 : 0000000000000002 a4 : 0000000000000001
         a5 : 0000000000000000 a6 : ff200000104abd7c a7 : 0000000000000005
         s2 : ff5fffffffe00ff9 s3 : ffffffff815cd998 s4 : ffffffff815d0e90
         s5 : ffffffff81b0ff28 s6 : 0000000000000020 s7 : ffffffff815d0eb0
         s8 : ffffffffffffffff s9 : ff5fffffffe00000 s10: ff5fffffffe01000
         s11: 0000000000000022 t3 : 00ffffffaa17db4c t4 : 000000000000000f
         t5 : 0000000000000001 t6 : 0000000000000000
        status: 0000000000000100 badaddr: ff5fffffffe00000 cause: 000000000000000d
          scan_gray_list+0x12e/0x1a6
          kmemleak_scan+0x2aa/0x57e
          kmemleak_write+0x32a/0x40c
          full_proxy_write+0x56/0x82
          vfs_write+0xa6/0x2a6
          ksys_write+0x6c/0xe2
          sys_write+0x22/0x2a
          ret_from_syscall+0x0/0x2
      
      The callers may not quite know the actual address they pass (e.g. when
      it comes from the devicetree), so the kmemleak_*_phys() APIs should
      guarantee that the address they finally use is in the lowmem range by
      checking it against lowmem's min boundary.
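
      A sketch of the added guard (an illustrative fragment of the
      kmemleak_*_phys() helpers):

        /* Only track the region if it actually lies in lowmem, i.e. if
         * __va() will produce a valid pointer for the whole object. */
        if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
                kmemleak_alloc(__va(phys), size, min_count, gfp);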
      
      Link: https://lkml.kernel.org/r/20220413122925.33856-1-patrick.wang.shcn@gmail.com
      Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      23c2d497
    • mm/vmalloc: fix spinning drain_vmap_work after reading from /proc/vmcore · c12cd77c
      Omar Sandoval authored
      Commit 3ee48b6a ("mm, x86: Saving vmcore with non-lazy freeing of
      vmas") introduced set_iounmap_nonlazy(), which sets vmap_lazy_nr to
      lazy_max_pages() + 1, ensuring that any future vunmaps() immediately
      purge the vmap areas instead of doing it lazily.
      
      Commit 690467c8 ("mm/vmalloc: Move draining areas out of caller
      context") moved the purging from the vunmap() caller to a worker thread.
      Unfortunately, set_iounmap_nonlazy() can cause the worker thread to spin
      (possibly forever).  For example, consider the following scenario:
      
       1. Thread reads from /proc/vmcore. This eventually calls
          __copy_oldmem_page() -> set_iounmap_nonlazy(), which sets
          vmap_lazy_nr to lazy_max_pages() + 1.
      
       2. Then it calls free_vmap_area_noflush() (via iounmap()), which adds 2
          pages (one page plus the guard page) to the purge list and
          vmap_lazy_nr. vmap_lazy_nr is now lazy_max_pages() + 3, so the
          drain_vmap_work is scheduled.
      
       3. Thread returns from the kernel and is scheduled out.
      
       4. Worker thread is scheduled in and calls drain_vmap_area_work(). It
          frees the 2 pages on the purge list. vmap_lazy_nr is now
          lazy_max_pages() + 1.
      
       5. This is still over the threshold, so it tries to purge areas again,
          but doesn't find anything.
      
       6. Repeat 5.
      
      If the system is running with only one CPU (which is typical for kdump)
      and preemption is disabled, then this will never make forward progress:
      there aren't any more pages to purge, so it hangs.  If there is more
      than one CPU or preemption is enabled, then the worker thread will spin
      forever in the background.  (Note that if there were already pages to be
      purged at the time that set_iounmap_nonlazy() was called, this bug is
      avoided.)
      
      This can be reproduced with anything that reads from /proc/vmcore
      multiple times.  E.g., vmcore-dmesg /proc/vmcore.
      
      It turns out that improvements to vmap() over the years have obsoleted
      the need for this "optimization".  I benchmarked `dd if=/proc/vmcore
      of=/dev/null` with 4k and 1M read sizes on a system with a 32GB vmcore.
      The test was run on 5.17, 5.18-rc1 with a fix that avoided the hang, and
      5.18-rc1 with set_iounmap_nonlazy() removed entirely:
      
          |5.17  |5.18+fix|5.18+removal
        4k|40.86s|  40.09s|      26.73s
        1M|24.47s|  23.98s|      21.84s
      
      The removal was the fastest (by a wide margin with 4k reads).  This
      patch removes set_iounmap_nonlazy().
      
      Link: https://lkml.kernel.org/r/52f819991051f9b865e9ce25605509bfdbacadcd.1649277321.git.osandov@fb.com
      Fixes: 690467c8  ("mm/vmalloc: Move draining areas out of caller context")
      Signed-off-by: Omar Sandoval <osandov@fb.com>
      Acked-by: Chris Down <chris@chrisdown.name>
      Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Baoquan He <bhe@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      c12cd77c
    • revert "fs/binfmt_elf: use PT_LOAD p_align values for static PIE" · aeb79237
      Andrew Morton authored
      Despite Mike's attempted fix (925346c1), regression reports
      continue:
      
        https://lore.kernel.org/lkml/cb5b81bd-9882-e5dc-cd22-54bdbaaefbbc@leemhuis.info/
        https://bugzilla.kernel.org/show_bug.cgi?id=215720
        https://lkml.kernel.org/r/b685f3d0-da34-531d-1aa9-479accd3e21b@leemhuis.info
      
      So revert this patch.
      
      Fixes: 9630f0d6 ("fs/binfmt_elf: use PT_LOAD p_align values for static PIE")
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Chris Kennelly <ckennelly@google.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Fangrui Song <maskray@google.com>
      Cc: H.J. Lu <hjl.tools@gmail.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Ian Rogers <irogers@google.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Nick Desaulniers <ndesaulniers@google.com>
      Cc: Sandeep Patil <sspatil@google.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: Song Liu <songliubraving@fb.com>
      Cc: Suren Baghdasaryan <surenb@google.com>
      Cc: Thorsten Leemhuis <regressions@leemhuis.info>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      aeb79237