1. 22 Sep, 2021 15 commits
    • KVM: SVM: fix missing sev_decommission in sev_receive_start · f1815e0a
      Mingwei Zhang authored
      DECOMMISSION the current SEV context if binding an ASID fails after
      RECEIVE_START.  Per AMD's SEV API, RECEIVE_START generates a new guest
      context and thus needs to be paired with DECOMMISSION:
      
           The RECEIVE_START command is the only command other than the LAUNCH_START
           command that generates a new guest context and guest handle.
      
      The missing DECOMMISSION can result in subsequent SEV launch failures,
      as the firmware leaks memory and might not be able to allocate more SEV
      guest contexts in the future.
      
      Note, LAUNCH_START suffered the same bug, but was previously fixed by
      commit 934002cd ("KVM: SVM: Call SEV Guest Decommission if ASID
      binding fails").
      
      Cc: Alper Gun <alpergun@google.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brijesh Singh <brijesh.singh@amd.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Marc Orr <marcorr@google.com>
      Cc: John Allen <john.allen@amd.com>
      Cc: Peter Gonda <pgonda@google.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Vipin Sharma <vipinsh@google.com>
      Cc: stable@vger.kernel.org
      Reviewed-by: Marc Orr <marcorr@google.com>
      Acked-by: Brijesh Singh <brijesh.singh@amd.com>
      Fixes: af43cbbf ("KVM: SVM: Add support for KVM_SEV_RECEIVE_START command")
      Signed-off-by: Mingwei Zhang <mizhang@google.com>
      Reviewed-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210912181815.3899316-1-mizhang@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SEV: Acquire vcpu mutex when updating VMSA · bb18a677
      Peter Gonda authored
      The update-VMSA ioctl touches data stored in struct kvm_vcpu, and
      therefore should not be performed concurrently with any VCPU ioctl
      that might cause KVM or the processor to use the same data.
      
      Add a vcpu mutex guard to the VMSA updating code, and refactor the
      per-vCPU parts of sev_launch_update_vmsa() out into a new
      __sev_launch_update_vmsa() helper.
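
      A rough sketch of the resulting structure (details such as the exact
      error handling and SEV-ES checks are assumptions):

        static int sev_launch_update_vmsa(struct kvm *kvm, struct kvm_sev_cmd *argp)
        {
                struct kvm_vcpu *vcpu;
                int i, ret;

                kvm_for_each_vcpu(i, vcpu, kvm) {
                        ret = mutex_lock_killable(&vcpu->mutex);
                        if (ret)
                                return ret;

                        /* Per-vCPU work lives in __sev_launch_update_vmsa(). */
                        ret = __sev_launch_update_vmsa(kvm, vcpu, &argp->error);

                        mutex_unlock(&vcpu->mutex);
                        if (ret)
                                return ret;
                }

                return 0;
        }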
      
      Fixes: ad73109a ("KVM: SVM: Provide support to launch and run an SEV-ES guest")
      Signed-off-by: Peter Gonda <pgonda@google.com>
      Cc: Marc Orr <marcorr@google.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Brijesh Singh <brijesh.singh@amd.com>
      Cc: kvm@vger.kernel.org
      Cc: stable@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Message-Id: <20210915171755.3773766-1-pgonda@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: do not shrink halt_poll_ns below grow_start · ae232ea4
      Sergey Senozhatsky authored
      grow_halt_poll_ns() ignores values between 0 and
      halt_poll_ns_grow_start (10000 by default). However,
      when we shrink halt_poll_ns we may fall way below
      halt_poll_ns_grow_start and end up with halt_poll_ns
      values that don't make a lot of sense, like 1, 9, or 19.
      
      VCPU1 trace (halt_poll_ns_shrink equals 2):
      
      VCPU1 grow 10000
      VCPU1 shrink 5000
      VCPU1 shrink 2500
      VCPU1 shrink 1250
      VCPU1 shrink 625
      VCPU1 shrink 312
      VCPU1 shrink 156
      VCPU1 shrink 78
      VCPU1 shrink 39
      VCPU1 shrink 19
      VCPU1 shrink 9
      VCPU1 shrink 4
      
      Mirror what grow_halt_poll_ns() does and set halt_poll_ns
      to 0 as soon as the newly shrunk halt_poll_ns value falls
      below halt_poll_ns_grow_start.
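
      A sketch of the resulting shrink logic, mirroring grow_halt_poll_ns()
      (local variable names are assumptions):

        static void shrink_halt_poll_ns(struct kvm_vcpu *vcpu)
        {
                unsigned int old, val, shrink, grow_start;

                old = val = vcpu->halt_poll_ns;
                shrink = READ_ONCE(halt_poll_ns_shrink);
                grow_start = READ_ONCE(halt_poll_ns_grow_start);
                if (shrink == 0)
                        val = 0;
                else
                        val /= shrink;

                /* Snap to 0 instead of lingering below the grow threshold. */
                if (val < grow_start)
                        val = 0;

                vcpu->halt_poll_ns = val;
                trace_kvm_halt_poll_ns_shrink(vcpu->vcpu_id, val, old);
        }
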
      Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
      Message-Id: <20210902031100.252080-1-senozhatsky@chromium.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: nVMX: fix comments of handle_vmon() · ed7023a1
      Yu Zhang authored
      "VMXON pointer" is saved in vmx->nested.vmxon_ptr since
      commit 3573e22c ("KVM: nVMX: additional checks on
      vmxon region"). Also, handle_vmptrld() & handle_vmclear()
      now have logic to check the VMCS pointer against the VMXON
      pointer.
      
      So just remove the obsolete comments of handle_vmon().
      Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
      Message-Id: <20210908171731.18885-1-yu.c.zhang@linux.intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Handle SRCU initialization failure during page track init · eb7511bf
      Haimin Zhang authored
      Check the return of init_srcu_struct(), which can fail due to OOM, when
      initializing the page track mechanism.  Lack of checking leads to a NULL
      pointer deref found by a modified syzkaller.
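
      The idea, roughly: have the page-track init return the
      init_srcu_struct() status so kvm_arch_init_vm() can fail cleanly
      (sketch; the exact function layout is an assumption):

        int kvm_page_track_init(struct kvm *kvm)
        {
                struct kvm_page_track_notifier_head *head;

                head = &kvm->arch.track_notifier_head;
                INIT_HLIST_HEAD(&head->track_notifier_list);
                /* init_srcu_struct() can fail under memory pressure. */
                return init_srcu_struct(&head->track_srcu);
        }
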
      Reported-by: TCS Robot <tcs_robot@tencent.com>
      Signed-off-by: Haimin Zhang <tcs_kernel@tencent.com>
      Message-Id: <1630636626-12262-1-git-send-email-tcs_kernel@tencent.com>
      [Move the call towards the beginning of kvm_arch_init_vm. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: VMX: Remove defunct "nr_active_uret_msrs" field · cd36ae87
      Sean Christopherson authored
      Remove vcpu_vmx.nr_active_uret_msrs and its associated comment, which are
      both defunct now that KVM keeps the list constant and instead explicitly
      tracks which entries need to be loaded into hardware.
      
      No functional change intended.
      
      Fixes: ee9d22e0 ("KVM: VMX: Use flag to indicate "active" uret MSRs instead of sorting list")
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210908002401.1947049-1-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • selftests: KVM: Align SMCCC call with the spec in steal_time · 01f91acb
      Oliver Upton authored
      The SMC64 calling convention passes a function identifier in w0 and its
      parameters in x1-x17. Given this, there are two deviations in the
      SMC64 call performed by the steal_time test: the function identifier is
      assigned to a 64-bit register and the parameter is only 32 bits wide.
      
      Align the call with the SMCCC by using a 32-bit register to handle the
      function identifier and increasing the parameter width to 64 bits.
      Suggested-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Oliver Upton <oupton@google.com>
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Message-Id: <20210921171121.2148982-3-oupton@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • selftests: KVM: Fix check for !POLLIN in demand_paging_test · 90b54129
      Oliver Upton authored
      The logical not operator binds more tightly than a bitwise operator and
      so applies only to its left-hand operand. As such, the check for POLLIN
      not being set in revents is wrong. Fix it by adding parentheses around
      the bitwise expression.
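
      In other words (illustrative snippet, not the exact test code):

        /* Before: parses as (!pollfd.revents) & POLLIN. */
        if (!pollfd.revents & POLLIN)
                continue;

        /* After: true only when POLLIN is not set in revents. */
        if (!(pollfd.revents & POLLIN))
                continue;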
      
      Fixes: 4f72180e ("KVM: selftests: Add demand paging content to the demand paging test")
      Reviewed-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Oliver Upton <oupton@google.com>
      Message-Id: <20210921171121.2148982-2-oupton@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Clear KVM's cached guest CR3 at RESET/INIT · 03a6e840
      Sean Christopherson authored
      Explicitly zero the guest's CR3 and mark it available+dirty at RESET/INIT.
      Per Intel's SDM and AMD's APM, CR3 is zeroed at both RESET and INIT.  For
      RESET, this is a nop as the vCPU is zero-allocated.  For INIT, the bug has
      likely escaped notice because no firmware/kernel puts its page tables root
      at PA=0, let alone relies on INIT to get the desired CR3 for such page
      tables.
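
      The fix boils down to something like the following in the RESET/INIT
      path (sketch; surrounding context omitted):

        /* CR3 is architecturally zero after both RESET and INIT. */
        vcpu->arch.cr3 = 0;
        kvm_register_mark_dirty(vcpu, VCPU_EXREG_CR3);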
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210921000303.400537-3-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Mark all registers as avail/dirty at vCPU creation · 7117003f
      Sean Christopherson authored
      Mark all registers as available and dirty at vCPU creation, as the vCPU has
      obviously not been loaded into hardware, let alone been given the chance to
      be modified in hardware.  On SVM, reading from "uninitialized" hardware is
      a non-issue as VMCBs are zero allocated (thus not truly uninitialized) and
      hardware does not allow for arbitrary field encoding schemes.
      
      On VMX, backing memory for VMCSes is also zero allocated, but true
      initialization of the VMCS _technically_ requires VMWRITEs, as the VMX
      architectural specification technically allows CPU implementations to
      encode fields with arbitrary schemes.  E.g. a CPU could theoretically store
      the inverted value of every field, which would result in a VMREAD of a
      zero-allocated field returning all ones.
      
      In practice, only the AR_BYTES fields are known to be manipulated by
      hardware during VMREAD/VMWRITE; no known hardware or VMM (for nested VMX)
      does fancy encoding of cacheable field values (CR0, CR3, CR4, etc...).  In
      other words, this is technically a bug fix, but practically speaking it's
      a glorified nop.
      
      Failure to mark registers as available has been a lurking bug for quite
      some time.  The original register caching supported only GPRs (+RIP, which
      is kinda sorta a GPR), with the masks initialized at ->vcpu_reset().  That
      worked because the two cacheable registers, RIP and RSP, are generally
      speaking not read as side effects in other flows.
      
      Arguably, commit aff48baa ("KVM: Fetch guest cr3 from hardware on
      demand") was the first instance of failure to mark regs available.  While
      _just_ marking CR3 available during vCPU creation wouldn't have fixed the
      VMREAD from an uninitialized VMCS bug because ept_update_paging_mode_cr0()
      unconditionally read vmcs.GUEST_CR3, marking CR3 _and_ intentionally not
      reading GUEST_CR3 when it's available would have avoided VMREAD to a
      technically-uninitialized VMCS.
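
      The change itself is tiny; conceptually it amounts to (sketch):

        /* At vCPU creation nothing has touched hardware yet, so every
         * register is available in, and dirty relative to, the cache.
         */
        vcpu->arch.regs_avail = ~0;
        vcpu->arch.regs_dirty = ~0;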
      
      Fixes: aff48baa ("KVM: Fetch guest cr3 from hardware on demand")
      Fixes: 6de4f3ad ("KVM: Cache pdptrs")
      Fixes: 6de12732 ("KVM: VMX: Optimize vmx_get_rflags()")
      Fixes: 2fb92db1 ("KVM: VMX: Cache vmcs segment fields")
      Fixes: bd31fe49 ("KVM: VMX: Add proper cache tracking for CR0")
      Fixes: f98c1e77 ("KVM: VMX: Add proper cache tracking for CR4")
      Fixes: 5addc235 ("KVM: VMX: Cache vmcs.EXIT_QUALIFICATION using arch avail_reg flags")
      Fixes: 87915858 ("KVM: VMX: Cache vmcs.EXIT_INTR_INFO using arch avail_reg flags")
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210921000303.400537-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: selftests: Remove __NR_userfaultfd syscall fallback · 2da4a235
      Sean Christopherson authored
      Revert the __NR_userfaultfd syscall fallback added for KVM selftests now
      that x86's unistd_{32,64}.h overrides are under uapi/ and thus not in
      KVM selftests' search path, i.e. now that KVM gets x86 syscall numbers
      from the installed kernel headers.
      
      No functional change intended.
      Reviewed-by: Ben Gardon <bgardon@google.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210901203030.1292304-6-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: selftests: Add a test for KVM_RUN+rseq to detect task migration bugs · 61e52f16
      Sean Christopherson authored
      Add a test to verify an rseq's CPU ID is updated correctly if the task is
      migrated while the kernel is handling KVM_RUN.  This is a regression test
      for a bug introduced by commit 72c3c0fe ("x86/kvm: Use generic xfer
      to guest work function"), where TIF_NOTIFY_RESUME would be cleared by KVM
      without updating rseq, leading to a stale CPU ID and other badness.
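
      Conceptually, the test asserts that rseq's CPU ID is fresh after every
      KVM_RUN.  A deliberately simplified sketch of that check ("rseq_info"
      is a hypothetical pointer to the registered struct rseq; the real test
      registers rseq manually and uses a sequence counter to avoid racing
      with the migration thread):

        /* Simplified and racy; shown only to illustrate the invariant. */
        vcpu_run(vm, VCPU_ID);
        TEST_ASSERT(READ_ONCE(rseq_info->cpu_id) == sched_getcpu(),
                    "rseq CPU ID is stale after KVM_RUN");
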
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Message-Id: <20210901203030.1292304-5-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • tools: Move x86 syscall number fallbacks to .../uapi/ · de5f4213
      Sean Christopherson authored
      Move unistd_{32,64}.h from x86/include/asm to x86/include/uapi/asm so
      that tools/selftests that install kernel headers, e.g. KVM selftests, can
      include non-uapi tools headers, e.g. to get 'struct list_head', without
      effectively overriding the installed non-tool uapi headers.
      
      Swapping KVM's search order, e.g. to search the kernel headers before
      tool headers, is not a viable option as doing so results in linux/types.h
      and other core headers getting pulled from the kernel headers, which do
      not have the kernel-internal typedefs that are used throughout tools,
      including many files outside of selftests/kvm's control.
      
      Prior to commit cec07f53 ("perf tools: Move syscall number fallbacks
      from perf-sys.h to tools/arch/x86/include/asm/"), the handcoded numbers
      were actual fallbacks, i.e. overriding unistd_{32,64}.h from the kernel
      headers was unintentional.
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210901203030.1292304-4-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • entry: rseq: Call rseq_handle_notify_resume() in tracehook_notify_resume() · a68de80f
      Sean Christopherson authored
      Invoke rseq_handle_notify_resume() from tracehook_notify_resume() now
      that the two functions are always called back-to-back by architectures
      that have rseq.  The rseq helper is stubbed out for architectures that
      don't support rseq, i.e. this is a nop across the board.
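
      Conceptually the consolidation looks like this (heavily trimmed sketch
      of tracehook_notify_resume(); the elided work is unchanged):

        static inline void tracehook_notify_resume(struct pt_regs *regs)
        {
                clear_thread_flag(TIF_NOTIFY_RESUME);

                /* ... existing task_work / memcg / blkcg handling ... */

                /* Stubbed out when !CONFIG_RSEQ, so a nop there. */
                rseq_handle_notify_resume(NULL, regs);
        }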
      
      Note, tracehook_notify_resume() is horribly named and arguably does not
      belong in tracehook.h as literally every line of code in it has nothing
      to do with tracing.  But, that's been true since commit a42c6ded
      ("move key_replace_session_keyring() into tracehook_notify_resume()")
      first usurped tracehook_notify_resume() back in 2012.  Punt cleaning that
      mess up to future patches.
      
      No functional change intended.
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210901203030.1292304-3-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: rseq: Update rseq when processing NOTIFY_RESUME on xfer to KVM guest · 8646e536
      Sean Christopherson authored
      Invoke rseq's NOTIFY_RESUME handler when processing the flag prior to
      transferring to a KVM guest, which is roughly equivalent to an exit to
      userspace and processes many of the same pending actions.  While the task
      cannot be in an rseq critical section as the KVM path is reachable only
      via ioctl(KVM_RUN), the side effects that apply to rseq outside of a
      critical section still apply, e.g. the current CPU needs to be updated if
      the task is migrated.
      
      Clearing TIF_NOTIFY_RESUME without informing rseq can lead to segfaults
      and other badness in userspace VMMs that use rseq in combination with KVM,
      e.g. due to the CPU ID being stale after task migration.
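
      A sketch of the fix in the xfer-to-guest-mode work loop
      (kernel/entry/kvm.c, trimmed):

        if (ti_work & _TIF_NOTIFY_RESUME) {
                tracehook_notify_resume(NULL);
                /* Keep rseq (e.g. the CPU ID) current, as on exit to userspace. */
                rseq_handle_notify_resume(NULL, NULL);
        }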
      
      Fixes: 72c3c0fe ("x86/kvm: Use generic xfer to guest work function")
      Reported-by: Peter Foley <pefoley@google.com>
      Bisected-by: Doug Evans <dje@google.com>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Shakeel Butt <shakeelb@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Signed-off-by: Sean Christopherson <seanjc@google.com>
      Message-Id: <20210901203030.1292304-2-seanjc@google.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  2. 20 Sep, 2021 2 commits
    • Linux 5.15-rc2 · e4e737bb
      Linus Torvalds authored
    • pci_iounmap'2: Electric Boogaloo: try to make sense of it all · 316e8d79
      Linus Torvalds authored
      Nathan Chancellor reports that the recent change to pci_iounmap in
      commit 9caea000 ("parisc: Declare pci_iounmap() parisc version only
      when CONFIG_PCI enabled") causes build errors on arm64.
      
      It took me about two hours to convince myself that I think I know what
      the logic of that mess of #ifdef's in the <asm-generic/io.h> header file
      really aims to do, and rewrite it to be easier to follow.
      
      Famous last words.
      
      Anyway, the code has now been lifted from that grotty header file into
      lib/pci_iomap.c, and has fairly extensive comments about what the logic
      is.  It also avoids indirecting through another confusing (and badly
      named) helper function that has other preprocessor config conditionals.
      
      Let's see what odd architecture did something else strange in this area
      to break things.  But my arm64 cross build is clean.
      
      Fixes: 9caea000 ("parisc: Declare pci_iounmap() parisc version only when CONFIG_PCI enabled")
      Reported-by: Nathan Chancellor <nathan@kernel.org>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Guenter Roeck <linux@roeck-us.net>
      Cc: Ulrich Teichert <krypton@ulrich-teichert.org>
      Cc: James Bottomley <James.Bottomley@hansenpartnership.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  3. 19 Sep, 2021 18 commits
  4. 18 Sep, 2021 5 commits
    • alpha: move __udiv_qrnnd library function to arch/alpha/lib/ · d4d016ca
      Linus Torvalds authored
      We already had the implementation for __udiv_qrnnd (unsigned divide for
      multi-precision arithmetic) as part of the alpha math emulation code.
      
      But you can disable the math emulation code - even if you shouldn't -
      and then the MPI code that actually wants this functionality (and is
      needed by various crypto functions) will fail to build.
      
      So move the extended-precision divide code to be a regular library
      function, just like all the regular division code is.  That way it is
      available regardless of math-emulation.
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • alpha: mark 'Jensen' platform as no longer broken · ab41f75e
      Linus Torvalds authored
      Ok, it almost certainly is still broken on actual hardware, but the
      immediate reason for it having been marked BROKEN was a build error that
      is fixed by just making sure the low-level IO header file is included
      sufficiently early that the __EXTERN_INLINE hackery takes effect.
      
      This was marked broken back in 2017 by commit 1883c9f4 ("alpha: mark
      jensen as broken"), but Ulrich Teichert made me look at it as part of my
      cross-build work to make sure -Werror actually does the right thing.
      
      There are lots of alpha configurations that do not build cleanly, but
      now it's no longer because Jensen wouldn't be buildable.  That said,
      because the Jensen platform doesn't force PCI to be enabled (Jensen only
      had EISA), it ends up being somewhat interesting as a source of odd
      configs.
      Reported-by: Ulrich Teichert <krypton@ulrich-teichert.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • perf bpf: Ignore deprecation warning when using libbpf's btf__get_from_id() · 219d720e
      Andrii Nakryiko authored
      Perf code re-implements libbpf's btf__load_from_kernel_by_id() API as
      a weak function, presumably to allow dynamically linking against an old
      version of the libbpf shared library. Unfortunately this causes a
      compilation warning when perf is compiled against libbpf v0.6+.
      
      For now, just ignore deprecation warning, but there might be a better
      solution, depending on perf's needs.
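
      The warning can be silenced with a local pragma around the deprecated
      call, roughly like this (sketch; the exact placement in
      tools/perf/util/bpf-event.c may differ):

        struct btf *btf__load_from_kernel_by_id(__u32 id)
        {
                struct btf *btf;
        #pragma GCC diagnostic push
        #pragma GCC diagnostic ignored "-Wdeprecated-declarations"
                int err = btf__get_from_id(id, &btf);
        #pragma GCC diagnostic pop

                return err ? ERR_PTR(err) : btf;
        }
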
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: kernel-team@fb.com
      LPU-Reference: 20210914170004.4185659-1-andrii@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • libperf evsel: Make use of FD robust. · aba5daeb
      Ian Rogers authored
      FD uses xyarray__entry that may return NULL if an index is out of
      bounds. If NULL is returned then a segv happens as FD unconditionally
      dereferences the pointer. This was happening with perf iostat, as
      shown below. The fix is to make FD an "int*" rather than an
      int and handle the NULL case as either invalid input or a closed fd.
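
      The resulting pattern at the call sites looks roughly like this
      (sketch):

        /* FD() now yields a pointer that may be NULL for bad indices. */
        int *fd = FD(evsel, cpu, thread);

        if (fd == NULL)           /* out-of-range cpu/thread index */
                return -EINVAL;

        if (*fd >= 0) {           /* only touch fds that are actually open */
                close(*fd);
                *fd = -1;
        }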
      
        $ sudo gdb --args perf stat --iostat  list
        ...
        Breakpoint 1, perf_evsel__alloc_fd (evsel=0x5555560951a0, ncpus=1, nthreads=1) at evsel.c:50
        50      {
        (gdb) bt
         #0  perf_evsel__alloc_fd (evsel=0x5555560951a0, ncpus=1, nthreads=1) at evsel.c:50
         #1  0x000055555585c188 in evsel__open_cpu (evsel=0x5555560951a0, cpus=0x555556093410,
            threads=0x555556086fb0, start_cpu=0, end_cpu=1) at util/evsel.c:1792
         #2  0x000055555585cfb2 in evsel__open (evsel=0x5555560951a0, cpus=0x0, threads=0x555556086fb0)
            at util/evsel.c:2045
         #3  0x000055555585d0db in evsel__open_per_thread (evsel=0x5555560951a0, threads=0x555556086fb0)
            at util/evsel.c:2065
         #4  0x00005555558ece64 in create_perf_stat_counter (evsel=0x5555560951a0,
            config=0x555555c34700 <stat_config>, target=0x555555c2f1c0 <target>, cpu=0) at util/stat.c:590
         #5  0x000055555578e927 in __run_perf_stat (argc=1, argv=0x7fffffffe4a0, run_idx=0)
            at builtin-stat.c:833
         #6  0x000055555578f3c6 in run_perf_stat (argc=1, argv=0x7fffffffe4a0, run_idx=0)
            at builtin-stat.c:1048
         #7  0x0000555555792ee5 in cmd_stat (argc=1, argv=0x7fffffffe4a0) at builtin-stat.c:2534
         #8  0x0000555555835ed3 in run_builtin (p=0x555555c3f540 <commands+288>, argc=3,
            argv=0x7fffffffe4a0) at perf.c:313
         #9  0x0000555555836154 in handle_internal_command (argc=3, argv=0x7fffffffe4a0) at perf.c:365
         #10 0x000055555583629f in run_argv (argcp=0x7fffffffe2ec, argv=0x7fffffffe2e0) at perf.c:409
         #11 0x0000555555836692 in main (argc=3, argv=0x7fffffffe4a0) at perf.c:539
        ...
        (gdb) c
        Continuing.
        Error:
        The sys_perf_event_open() syscall returned with 22 (Invalid argument) for event (uncore_iio_0/event=0x83,umask=0x04,ch_mask=0xF,fc_mask=0x07/).
        /bin/dmesg | grep -i perf may provide additional information.
      
        Program received signal SIGSEGV, Segmentation fault.
        0x00005555559b03ea in perf_evsel__close_fd_cpu (evsel=0x5555560951a0, cpu=1) at evsel.c:166
        166                     if (FD(evsel, cpu, thread) >= 0)
      
      v3. fixes a bug in perf_evsel__run_ioctl where the sense of a branch was
          backward.
      Signed-off-by: Ian Rogers <irogers@google.com>
      Acked-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Link: http://lore.kernel.org/lkml/20210918054440.2350466-1-irogers@google.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf machine: Initialize srcline string member in add_location struct · 57f0ff05
      Michael Petlan authored
      It's later supposed to be either a correct address or NULL. Without the
      initialization, it may contain an undefined value which results in the
      following segmentation fault:
      
        # perf top --sort comm -g --ignore-callees=do_idle
      
      terminates with:
      
        #0  0x00007ffff56b7685 in __strlen_avx2 () from /lib64/libc.so.6
        #1  0x00007ffff55e3802 in strdup () from /lib64/libc.so.6
        #2  0x00005555558cb139 in hist_entry__init (callchain_size=<optimized out>, sample_self=true, template=0x7fffde7fb110, he=0x7fffd801c250) at util/hist.c:489
        #3  hist_entry__new (template=template@entry=0x7fffde7fb110, sample_self=sample_self@entry=true) at util/hist.c:564
        #4  0x00005555558cb4ba in hists__findnew_entry (hists=hists@entry=0x5555561d9e38, entry=entry@entry=0x7fffde7fb110, al=al@entry=0x7fffde7fb420,
            sample_self=sample_self@entry=true) at util/hist.c:657
        #5  0x00005555558cba1b in __hists__add_entry (hists=hists@entry=0x5555561d9e38, al=0x7fffde7fb420, sym_parent=<optimized out>, bi=bi@entry=0x0, mi=mi@entry=0x0,
            sample=sample@entry=0x7fffde7fb4b0, sample_self=true, ops=0x0, block_info=0x0) at util/hist.c:288
        #6  0x00005555558cbb70 in hists__add_entry (sample_self=true, sample=0x7fffde7fb4b0, mi=0x0, bi=0x0, sym_parent=<optimized out>, al=<optimized out>, hists=0x5555561d9e38)
            at util/hist.c:1056
        #7  iter_add_single_cumulative_entry (iter=0x7fffde7fb460, al=<optimized out>) at util/hist.c:1056
        #8  0x00005555558cc8a4 in hist_entry_iter__add (iter=iter@entry=0x7fffde7fb460, al=al@entry=0x7fffde7fb420, max_stack_depth=<optimized out>, arg=arg@entry=0x7fffffff7db0)
            at util/hist.c:1231
        #9  0x00005555557cdc9a in perf_event__process_sample (machine=<optimized out>, sample=0x7fffde7fb4b0, evsel=<optimized out>, event=<optimized out>, tool=0x7fffffff7db0)
            at builtin-top.c:842
        #10 deliver_event (qe=<optimized out>, qevent=<optimized out>) at builtin-top.c:1202
        #11 0x00005555558a9318 in do_flush (show_progress=false, oe=0x7fffffff80e0) at util/ordered-events.c:244
        #12 __ordered_events__flush (oe=oe@entry=0x7fffffff80e0, how=how@entry=OE_FLUSH__TOP, timestamp=timestamp@entry=0) at util/ordered-events.c:323
        #13 0x00005555558a9789 in __ordered_events__flush (timestamp=<optimized out>, how=<optimized out>, oe=<optimized out>) at util/ordered-events.c:339
        #14 ordered_events__flush (how=OE_FLUSH__TOP, oe=0x7fffffff80e0) at util/ordered-events.c:341
        #15 ordered_events__flush (oe=oe@entry=0x7fffffff80e0, how=how@entry=OE_FLUSH__TOP) at util/ordered-events.c:339
        #16 0x00005555557cd631 in process_thread (arg=0x7fffffff7db0) at builtin-top.c:1114
        #17 0x00007ffff7bb817a in start_thread () from /lib64/libpthread.so.0
        #18 0x00007ffff5656dc3 in clone () from /lib64/libc.so.6
      
      If you look at the frame #2, the code is:
      
      488	 if (he->srcline) {
      489          he->srcline = strdup(he->srcline);
      490          if (he->srcline == NULL)
      491              goto err_rawdata;
      492	 }
      
      If he->srcline is not NULL (and uninitialized rubbish usually is not
      NULL), it gets strdup()ed, and strdup()ing a rubbish random string
      causes the problem.

      Also, if you look at commit 1fb7d06a, it adds the srcline member to
      the struct but does not initialize it everywhere it is needed.
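
      The fix presumably amounts to a one-liner where the addr_location is
      filled in (sketch; surrounding initialization shown for context):

        /* In add_callchain_ip() (util/machine.c): */
        al.filtered = 0;
        al.sym = NULL;
        al.srcline = NULL;      /* previously left uninitialized */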
      
      Committer notes:
      
      Now I see, when using --ignore-callees=do_idle we end up here at line
      2189 in add_callchain_ip():
      
      2181         if (al.sym != NULL) {
      2182                 if (perf_hpp_list.parent && !*parent &&
      2183                     symbol__match_regex(al.sym, &parent_regex))
      2184                         *parent = al.sym;
      2185                 else if (have_ignore_callees && root_al &&
      2186                   symbol__match_regex(al.sym, &ignore_callees_regex)) {
      2187                         /* Treat this symbol as the root,
      2188                            forgetting its callees. */
      2189                         *root_al = al;
      2190                         callchain_cursor_reset(cursor);
      2191                 }
      2192         }
      
      And the al that doesn't have the ->srcline field initialized will be
      copied to the root_al, so then, back to:
      
      1211 int hist_entry_iter__add(struct hist_entry_iter *iter, struct addr_location *al,
      1212                          int max_stack_depth, void *arg)
      1213 {
      1214         int err, err2;
      1215         struct map *alm = NULL;
      1216
      1217         if (al)
      1218                 alm = map__get(al->map);
      1219
      1220         err = sample__resolve_callchain(iter->sample, &callchain_cursor, &iter->parent,
      1221                                         iter->evsel, al, max_stack_depth);
      1222         if (err) {
      1223                 map__put(alm);
      1224                 return err;
      1225         }
      1226
      1227         err = iter->ops->prepare_entry(iter, al);
      1228         if (err)
      1229                 goto out;
      1230
      1231         err = iter->ops->add_single_entry(iter, al);
      1232         if (err)
      1233                 goto out;
      1234
      
      That al at line 1221 is what hist_entry_iter__add() (called from
      sample__resolve_callchain()) saw as 'root_al', and then:
      
              iter->ops->add_single_entry(iter, al);
      
      will go on with al->srcline with a bogus value, I'll add the above
      sequence to the cset and apply, thanks!
      Signed-off-by: Michael Petlan <mpetlan@redhat.com>
      Cc: Milian Wolff <milian.wolff@kdab.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Fixes: 1fb7d06a ("perf report: Use srcline from callchain for hist entries")
      Link: https://lore.kernel.org/r/20210719145332.29747-1-mpetlan@redhat.com
      Reported-by: Juri Lelli <jlelli@redhat.com>
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>