1. 20 Apr, 2023 9 commits
    • Merge branch 'for-next/mm' into for-next/core · 1bb31cc7
      Will Deacon authored
      * for-next/mm:
        arm64: mm: always map fixmap at page granularity
        arm64: mm: move fixmap code to its own file
        arm64: add FIXADDR_TOT_{START,SIZE}
        Revert "Revert "arm64: dma: Drop cache invalidation from arch_dma_prep_coherent()""
        arm: uaccess: Remove memcpy_page_flushcache()
        mm,kfence: decouple kfence from page granularity mapping judgement
    • Merge branch 'for-next/misc' into for-next/core · 81444b77
      Will Deacon authored
      * for-next/misc:
        arm64: kexec: include reboot.h
        arm64: delete dead code in this_cpu_set_vectors()
        arm64: kernel: Fix kernel warning when nokaslr is passed to commandline
        arm64: kgdb: Set PSTATE.SS to 1 to re-enable single-step
        arm64/sme: Fix some comments of ARM SME
        arm64/signal: Alloc tpidr2 sigframe after checking system_supports_tpidr2()
        arm64/signal: Use system_supports_tpidr2() to check TPIDR2
        arm64: compat: Remove defines now in asm-generic
        arm64: kexec: remove unnecessary (void*) conversions
        arm64: armv8_deprecated: remove unnecessary (void*) conversions
        firmware: arm_sdei: Fix sleep from invalid context BUG
    • Merge branch 'for-next/kdump' into for-next/core · f8863bc8
      Will Deacon authored
      * for-next/kdump:
        arm64: kdump: defer the crashkernel reservation for platforms with no DMA memory zones
        arm64: kdump: do not map crashkernel region specifically
        arm64: kdump : take off the protection on crashkernel memory region
    • Merge branch 'for-next/ftrace' into for-next/core · ea88dc92
      Will Deacon authored
      * for-next/ftrace:
        arm64: ftrace: Simplify get_ftrace_plt
        arm64: ftrace: Add direct call support
        ftrace: selftest: remove broken trace_direct_tramp
        ftrace: Make DIRECT_CALLS work WITH_ARGS and !WITH_REGS
        ftrace: Store direct called addresses in their ops
        ftrace: Rename _ftrace_direct_multi APIs to _ftrace_direct APIs
        ftrace: Remove the legacy _ftrace_direct API
        ftrace: Replace uses of _ftrace_direct APIs with _ftrace_direct_multi
        ftrace: Let unregister_ftrace_direct_multi() call ftrace_free_filter()
    • Merge branch 'for-next/cpufeature' into for-next/core · 31eb87cf
      Will Deacon authored
      * for-next/cpufeature:
        arm64/cpufeature: Use helper macro to specify ID register for capabilites
        arm64/cpufeature: Consistently use symbolic constants for min_field_value
        arm64/cpufeature: Pull out helper for CPUID register definitions
    • Merge branch 'for-next/asm' into for-next/core · 0f6563a3
      Will Deacon authored
      * for-next/asm:
        arm64: uaccess: remove unnecessary earlyclobber
        arm64: uaccess: permit put_{user,kernel} to use zero register
        arm64: uaccess: permit __smp_store_release() to use zero register
        arm64: atomics: lse: improve cmpxchg implementation
    • Merge branch 'for-next/acpi' into for-next/core · 67eacd61
      Will Deacon authored
      * for-next/acpi:
        ACPI: AGDI: Improve error reporting for problems during .remove()
    • arm64: kexec: include reboot.h · b7b4ce84
      Simon Horman authored
      Include reboot.h in machine_kexec.c for declaration of
      machine_crash_shutdown.
      
      gcc-12 with W=1 reports:
      
       arch/arm64/kernel/machine_kexec.c:257:6: warning: no previous prototype for 'machine_crash_shutdown' [-Wmissing-prototypes]
         257 | void machine_crash_shutdown(struct pt_regs *regs)
      
      No functional changes intended.
      Compile tested only.
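
      A minimal illustration of this class of fix (a sketch, not the
      patch itself):

      	/* arch/arm64/kernel/machine_kexec.c */
      	#include <linux/reboot.h>	/* declares machine_crash_shutdown() */

      	void machine_crash_shutdown(struct pt_regs *regs)
      	{
      		/* ... body unchanged ... */
      	}

      With the prototype visible, gcc's -Wmissing-prototypes is satisfied.
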
      Signed-off-by: Simon Horman <horms@kernel.org>
      Link: https://lore.kernel.org/r/20230418-arm64-kexec-include-reboot-v1-1-8453fd4fb3fb@kernel.org
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: delete dead code in this_cpu_set_vectors() · 460e70e2
      Dan Carpenter authored
      The "slot" variable is an enum, and in this context it is an unsigned
      int.  So the type means it can never be negative and also we never pass
      invalid data to this function.  If something did pass invalid data then
      this check would be insufficient protection.
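
      The shape of the removed check, illustratively (hypothetical names,
      not the kernel's exact code):

      	enum hardening_slot { SLOT_A, SLOT_B };	/* unsigned underlying type here */

      	static void set_vectors_sketch(enum hardening_slot slot)
      	{
      		if (slot < 0)	/* always false for an unsigned enum: dead code */
      			return;
      		/* ... */
      	}
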
      Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Link: https://lore.kernel.org/r/73859c9e-dea0-4764-bf01-7ae694fa2e37@kili.mountain
      Signed-off-by: Will Deacon <will@kernel.org>
  2. 17 Apr, 2023 4 commits
  3. 14 Apr, 2023 2 commits
    • arm64: kernel: Fix kernel warning when nokaslr is passed to commandline · a2a83eb4
      Pavankumar Kondeti authored
      The message 'Unknown kernel command line parameters "nokaslr", will
      be passed to user space' shows up in dmesg when nokaslr is passed on
      the kernel command line on arm64 platforms. This is because the
      nokaslr parameter is handled by the early cpufeature detection
      infrastructure and is never consumed by a kernel param handler. Fix
      this warning by providing a dummy kernel param handler for nokaslr.
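
      A dummy handler of the usual early_param() shape is enough (a
      sketch; the handler name is illustrative):

      	/* Consume "nokaslr" so it is not flagged as unknown and passed
      	 * to user space; the real handling is done by the early
      	 * cpufeature code. */
      	static int __init parse_nokaslr(char *unused)
      	{
      		return 0;
      	}
      	early_param("nokaslr", parse_nokaslr);
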
      Signed-off-by: Pavankumar Kondeti <quic_pkondeti@quicinc.com>
      Link: https://lore.kernel.org/r/20230412043258.397455-1-quic_pkondeti@quicinc.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: kgdb: Set PSTATE.SS to 1 to re-enable single-step · af6c0bd5
      Sumit Garg authored
      Currently only the first attempt to single-step has any effect. After
      that all further stepping remains "stuck" at the same program counter
      value.
      
      Per the ARM Architecture Reference Manual (ARM DDI 0487E.a) D2.12,
      PSTATE.SS=1 should be set at each step before transferring the PE to
      the 'Active-not-pending' state. The problem here is that PSTATE.SS=1
      is not set from the second single-step onwards.

      After the first single-step, the PE transitions to the 'Inactive'
      state, with PSTATE.SS=0 and MDSCR.SS=1, so PSTATE.SS won't be set to
      1 because kernel_active_single_step() is true. On ERET the PE then
      transitions to the 'Active-pending' state and returns to the debugger
      via a step exception.
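
      Conceptually, each step request must set PSTATE.SS=1 in the saved
      regs again. A minimal sketch, reusing the DBG_SPSR_SS bit from
      arm64's debug-monitors code (the function name is illustrative):

      	#define DBG_SPSR_SS	(1 << 21)	/* PSTATE.SS in the saved SPSR */

      	static void rewind_single_step_sketch(struct pt_regs *regs)
      	{
      		/* Back to 'Active-not-pending': the next ERET executes
      		 * one instruction and then takes a step exception. */
      		regs->pstate |= DBG_SPSR_SS;
      	}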
      
      Before this patch:
      ==================
      Entering kdb (current=0xffff3376039f0000, pid 1) on processor 0 due to Keyboard Entry
      [0]kdb>
      
      [0]kdb>
      [0]kdb> bp write_sysrq_trigger
      Instruction(i) BP #0 at 0xffffa45c13d09290 (write_sysrq_trigger)
          is enabled   addr at ffffa45c13d09290, hardtype=0 installed=0
      
      [0]kdb> go
      $ echo h > /proc/sysrq-trigger
      
      Entering kdb (current=0xffff4f7e453f8000, pid 175) on processor 1 due to Breakpoint @ 0xffffad651a309290
      [1]kdb> ss
      
      Entering kdb (current=0xffff4f7e453f8000, pid 175) on processor 1 due to SS trap @ 0xffffad651a309294
      [1]kdb> ss
      
      Entering kdb (current=0xffff4f7e453f8000, pid 175) on processor 1 due to SS trap @ 0xffffad651a309294
      [1]kdb>
      
      After this patch:
      =================
      Entering kdb (current=0xffff6851c39f0000, pid 1) on processor 0 due to Keyboard Entry
      [0]kdb> bp write_sysrq_trigger
      Instruction(i) BP #0 at 0xffffc02d2dd09290 (write_sysrq_trigger)
          is enabled   addr at ffffc02d2dd09290, hardtype=0 installed=0
      
      [0]kdb> go
      $ echo h > /proc/sysrq-trigger
      
      Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to Breakpoint @ 0xffffc02d2dd09290
      [1]kdb> ss
      
      Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to SS trap @ 0xffffc02d2dd09294
      [1]kdb> ss
      
      Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to SS trap @ 0xffffc02d2dd09298
      [1]kdb> ss
      
      Entering kdb (current=0xffff6851c53c1840, pid 174) on processor 1 due to SS trap @ 0xffffc02d2dd0929c
      [1]kdb>
      
      Fixes: 44679a4f ("arm64: KGDB: Add step debugging support")
      Co-developed-by: Wei Li <liwei391@huawei.com>
      Signed-off-by: Wei Li <liwei391@huawei.com>
      Signed-off-by: Sumit Garg <sumit.garg@linaro.org>
      Tested-by: Douglas Anderson <dianders@chromium.org>
      Acked-by: Daniel Thompson <daniel.thompson@linaro.org>
      Tested-by: Daniel Thompson <daniel.thompson@linaro.org>
      Link: https://lore.kernel.org/r/20230202073148.657746-3-sumit.garg@linaro.org
      Signed-off-by: Will Deacon <will@kernel.org>
  4. 12 Apr, 2023 3 commits
  5. 11 Apr, 2023 10 commits
  6. 30 Mar, 2023 1 commit
  7. 28 Mar, 2023 7 commits
    • arm64: uaccess: remove unnecessary earlyclobber · 17242086
      Mark Rutland authored
      Currently the asm constraints for __get_mem_asm() mark the value
      register as an earlyclobber operand. This means that the compiler can't
      reuse the same register for both the address and value, even when the
      value is not subsequently used.
      
      There's no need for the value register to be marked as earlyclobber, as
      it's only written to after the address register is consumed, even when
      the access faults.
      
      Remove the unnecessary earlyclobber.
      
      There should be no functional change as a result of this patch.
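
      For illustration, outside the kernel's macros (a generic sketch,
      not __get_mem_asm itself):

      	static inline unsigned long load_sketch(unsigned long *addr)
      	{
      		unsigned long val;

      		/* "=r" rather than "=&r": the compiler may give 'val'
      		 * the same register as 'addr', since the address is
      		 * consumed before the output is written. */
      		asm ("ldr %0, [%1]" : "=r" (val) : "r" (addr));
      		return val;
      	}
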
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230314153700.787701-5-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: uaccess: permit put_{user,kernel} to use zero register · 4a3f806e
      Mark Rutland authored
      Currently the asm constraints for __put_mem_asm() require that the value
      is placed in a "real" GPR (i.e. one other than [XW]ZR or SP). This means
      that for cases such as:
      
      	__put_user(0, addr)
      
      ... the compiler has to move '0' into a "real" GPR, e.g.
      
      	mov	xN, #0
      	sttr	xN, [<addr>]
      
      This is unfortunate, as using the zero register would require fewer
      instructions and save a "real" GPR for other usage, allowing the
      compiler to generate:
      
      	sttr	xzr, [<addr>]
      
      Modify the asm constraints for __put_mem_asm() to permit the use of the
      zero register for the value.
      
      There should be no functional change as a result of this patch.
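
      The underlying idiom (a sketch, not the kernel's exact macro): an
      "rZ" input constraint also accepts constant zero, and the %x
      operand modifier then prints xzr:

      	static inline void store_sketch(unsigned long *addr, unsigned long val)
      	{
      		asm volatile ("str %x0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
      	}

      Here store_sketch(p, 0) can assemble to a single 'str xzr, [p]'.
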
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230314153700.787701-4-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: uaccess: permit __smp_store_release() to use zero register · 39c8275d
      Mark Rutland authored
      Currently the asm constraints for __smp_store_release() require that the
      value is placed in a "real" GPR (i.e. one other than [XW]ZR or SP).
      This means that for cases such as:
      
          __smp_store_release(ptr, 0)
      
      ... the compiler has to move '0' into a "real" GPR, e.g.
      
          mov     xN, #0
          stlr    xN, [<addr>]
      
      This is unfortunate, as using the zero register would require fewer
      instructions and save a "real" GPR for other usage, allowing the
      compiler to generate:
      
          stlr    xzr, [<addr>]
      
      Modify the asm constraints for __smp_store_release() to permit the use of
      the zero register for the value.
      
      There should be no functional change as a result of this patch.
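
      The same "rZ" idiom applies, now with a store-release (again a
      sketch rather than the kernel's exact asm):

      	static inline void store_release_sketch(unsigned long *addr, unsigned long val)
      	{
      		asm volatile ("stlr %x0, [%1]" : : "rZ" (val), "r" (addr) : "memory");
      	}

      Here store_release_sketch(p, 0) can assemble to 'stlr xzr, [p]'.
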
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230314153700.787701-3-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: atomics: lse: improve cmpxchg implementation · e5cacb54
      Mark Rutland authored
      For historical reasons, the LSE implementation of cmpxchg*() hard-codes
      the GPRs to use, and shuffles registers around with MOVs. This is no
      longer necessary, and can be simplified.
      
      When the LSE cmpxchg implementation was added in commit:
      
        c342f782 ("arm64: cmpxchg: patch in lse instructions when supported by the CPU")
      
      ... the LL/SC implementation of cmpxchg() would be placed out-of-line,
      and the in-line assembly for cmpxchg would default to:
      
      	NOP
      	BL	<ll_sc_cmpxchg*_implementation>
      	NOP
      
      The LL/SC implementation of each cmpxchg() function accepted arguments
      as per AAPCS64 rules, so it was necessary to place the pointer in x0,
      the old value in x1, and the new value in x2, and to acquire the
      return value from x0. The LL/SC implementation required a temporary
      register (e.g. for the STXR status value). As the LL/SC implementation
      preserved the old value, the LSE implementation does likewise.
      
      Since commit:
      
        addfc386 ("arm64: atomics: avoid out-of-line ll/sc atomics")
      
      ... the LSE and LL/SC implementations of cmpxchg are inlined as separate
      asm blocks, with another branch choosing between the two. Due to this,
      it is no longer necessary for the LSE implementation to match the
      register constraints of the LL/SC implementation. This was partially
      dealt with by removing the hard-coded use of x30 in commit:
      
        3337cb5a ("arm64: avoid using hard-coded registers for LSE atomics")
      
      ... but we didn't clean up the hard-coding of x0, x1, and x2.
      
      This patch simplifies the LSE implementation of cmpxchg, removing the
      register shuffling and directly clobbering the 'old' argument. This
      gives the compiler greater freedom for register allocation, and avoids
      redundant work.
      
      The new constraints permit 'old' (Rs) and 'new' (Rt) to be allocated to
      the same register when the initial values of the two are the same, e.g.
      resulting in:
      
      	CAS	X0, X0, [X1]
      
      This is safe as Rs is only written back after the initial values of Rs
      and Rt are consumed, and there are no UNPREDICTABLE behaviours to avoid
      when Rs == Rt.
      
      The new constraints also permit 'new' to be allocated to the zero
      register, avoiding a MOV in a few cases. The same cannot be done for
      'old' as it is both an input and output, and any caller of cmpxchg()
      should care about the output value. Note that for CAS* the use of the
      zero register never affects the ordering (while for SWP* the use of the
      zero register for the 'old' value drops any ACQUIRE semantic).
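
      A sketch of the resulting asm shape (simplified from the kernel's
      lse.h macros; relaxed variant, so no barrier clobbers):

      	static inline unsigned long cas_relaxed_sketch(unsigned long *ptr,
      						       unsigned long old,
      						       unsigned long new)
      	{
      		/* 'old' is an input/output operand clobbered directly;
      		 * 'new' may be allocated to the zero register via "rZ". */
      		asm volatile("cas %x[old], %x[new], %[v]"
      			     : [old] "+r" (old), [v] "+Q" (*ptr)
      			     : [new] "rZ" (new));
      		return old;
      	}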
      
      Compared to v6.2-rc4, a defconfig vmlinux is ~116KiB smaller, though the
      resulting Image is the same size due to internal alignment and padding:
      
        [mark@lakrids:~/src/linux]% ls -al vmlinux-*
        -rwxr-xr-x 1 mark mark 137269304 Jan 16 11:59 vmlinux-after
        -rwxr-xr-x 1 mark mark 137387936 Jan 16 10:54 vmlinux-before
        [mark@lakrids:~/src/linux]% ls -al Image-*
        -rw-r--r-- 1 mark mark 38711808 Jan 16 11:59 Image-after
        -rw-r--r-- 1 mark mark 38711808 Jan 16 10:54 Image-before
      
      This patch does not touch cmpxchg_double*() as that requires contiguous
      register pairs, and separate patches will replace it with cmpxchg128*().
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20230314153700.787701-2-mark.rutland@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: kexec: remove unnecessary (void*) conversions · 6915cffd
      Yu Zhe authored
      Pointer variables of void * type do not require a type cast.
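
      The pattern, for reference (a hypothetical snippet; 'data' is
      illustrative):

      	void *arg = data;
      	struct kimage *image;

      	image = (struct kimage *)arg;	/* before: redundant cast */
      	image = arg;			/* after: implicit conversion */
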
      Signed-off-by: Yu Zhe <yuzhe@nfschina.com>
      Link: https://lore.kernel.org/r/20230303025715.32570-1-yuzhe@nfschina.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • arm64: armv8_deprecated: remove unnecessary (void*) conversions · 0e2cb49e
      Yu Zhe authored
      Pointer variables of void * type do not require a type cast.
      Signed-off-by: Yu Zhe <yuzhe@nfschina.com>
      Link: https://lore.kernel.org/r/20230303025047.19717-1-yuzhe@nfschina.com
      Signed-off-by: Will Deacon <will@kernel.org>
    • firmware: arm_sdei: Fix sleep from invalid context BUG · d2c48b23
      Pierre Gondois authored
      Running a preempt-rt (v6.2-rc3-rt1) based kernel on an Ampere Altra
      triggers:
      
        BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:46
        in_atomic(): 0, irqs_disabled(): 128, non_block: 0, pid: 24, name: cpuhp/0
        preempt_count: 0, expected: 0
        RCU nest depth: 0, expected: 0
        3 locks held by cpuhp/0/24:
          #0: ffffda30217c70d0 (cpu_hotplug_lock){++++}-{0:0}, at: cpuhp_thread_fun+0x5c/0x248
          #1: ffffda30217c7120 (cpuhp_state-up){+.+.}-{0:0}, at: cpuhp_thread_fun+0x5c/0x248
          #2: ffffda3021c711f0 (sdei_list_lock){....}-{3:3}, at: sdei_cpuhp_up+0x3c/0x130
        irq event stamp: 36
        hardirqs last  enabled at (35): [<ffffda301e85b7bc>] finish_task_switch+0xb4/0x2b0
        hardirqs last disabled at (36): [<ffffda301e812fec>] cpuhp_thread_fun+0x21c/0x248
        softirqs last  enabled at (0): [<ffffda301e80b184>] copy_process+0x63c/0x1ac0
        softirqs last disabled at (0): [<0000000000000000>] 0x0
        CPU: 0 PID: 24 Comm: cpuhp/0 Not tainted 5.19.0-rc3-rt5-[...]
        Hardware name: WIWYNN Mt.Jade Server [...]
        Call trace:
          dump_backtrace+0x114/0x120
          show_stack+0x20/0x70
          dump_stack_lvl+0x9c/0xd8
          dump_stack+0x18/0x34
          __might_resched+0x188/0x228
          rt_spin_lock+0x70/0x120
          sdei_cpuhp_up+0x3c/0x130
          cpuhp_invoke_callback+0x250/0xf08
          cpuhp_thread_fun+0x120/0x248
          smpboot_thread_fn+0x280/0x320
          kthread+0x130/0x140
          ret_from_fork+0x10/0x20
      
      sdei_cpuhp_up() is called in the STARTING hotplug section,
      which runs with interrupts disabled. Use a CPUHP_AP_ONLINE_DYN entry
      instead to execute the cpuhp cb later, with preemption enabled.
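
      A sketch of the registration change (callback names as in the
      driver; the exact call site may differ):

      	/* CPUHP_AP_ONLINE_DYN callbacks run in the per-CPU hotplug
      	 * thread: preemption is enabled, but the thread is pinned to
      	 * the CPU coming up, so sleeping locks are safe. */
      	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "SDEI",
      				sdei_cpuhp_up, sdei_cpuhp_down);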
      
      SDEI originally got its own cpuhp slot to allow interacting with
      perf. That was superseded by pNMI, so this early slot is no longer
      relevant. [1]
      
      Some SDEI calls (e.g. SDEI_1_0_FN_SDEI_PE_MASK) take actions on the
      calling CPU and check that preemption is disabled. _ONLINE cpuhp
      callbacks are executed in the per-CPU hotplug thread; preemption is
      enabled in those threads, but their cpumask is limited to one CPU.
      Move the 'WARN_ON_ONCE(preemptible())' statements so that the SDEI
      cpuhp callbacks don't trigger them.
      
      Also add a check for the SDEI_1_0_FN_SDEI_PRIVATE_RESET SDEI call
      which acts on the calling CPU.
      
      [1]:
      https://lore.kernel.org/all/5813b8c5-ae3e-87fd-fccc-94c9cd08816d@arm.com/

      Suggested-by: James Morse <james.morse@arm.com>
      Signed-off-by: Pierre Gondois <pierre.gondois@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Link: https://lore.kernel.org/r/20230216084920.144064-1-pierre.gondois@arm.com
      Signed-off-by: Will Deacon <will@kernel.org>
  8. 27 Mar, 2023 2 commits
  9. 21 Mar, 2023 2 commits
    • ftrace: selftest: remove broken trace_direct_tramp · fee86a4e
      Mark Rutland authored
      The ftrace selftest code has a trace_direct_tramp() function which it
      uses as a direct call trampoline. This happens to work on x86, since the
      direct call's return address is in the usual place, and can be returned
      to via a RET, but in general the calling convention for direct calls is
      different from regular function calls, and requires a trampoline written
      in assembly.
      
      On s390, regular function calls place the return address in %r14, and an
      ftrace patch-site in an instrumented function places the trampoline's
      return address (which is within the instrumented function) in %r0,
      preserving the original %r14 value in-place. As a regular C function
      will return to the address in %r14, using a C function as the trampoline
      results in the trampoline returning to the caller of the instrumented
      function, skipping the body of the instrumented function.
      
      Note that the s390 issue is not detected by the ftrace selftest code, as
      the instrumented function is trivial, and returning back into the caller
      happens to be equivalent.
      
      On arm64, regular function calls place the return address in x30, and
      an ftrace patch-site in an instrumented function saves this into x9
      and places the trampoline's return address (within the instrumented
      function) in x30. A regular C function will return to the address in
      x30, but will not restore x9 into x30. Consequently, using a C function
      as the trampoline results in returning to the trampoline's return
      address having corrupted x30, such that when the instrumented function
      returns, it will return back into itself.
      
      To avoid future issues in this area, remove the trace_direct_tramp()
      function, and require that each architecture with direct calls provides
      a stub trampoline, named ftrace_stub_direct_tramp. This can be written
      to handle the architecture's trampoline calling convention, and in
      future could be used elsewhere (e.g. in the ftrace ops sample, to
      measure the overhead of direct calls), so we may as well always build it
      in.
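
      On arm64, such a stub might look like the following (a sketch
      consistent with the convention described above, not necessarily
      the merged implementation):

      	SYM_CODE_START(ftrace_stub_direct_tramp)
      		mov	x10, x30	// trampoline return (into the function)
      		mov	x30, x9		// restore the function's real return address
      		ret	x10		// resume the instrumented function
      	SYM_CODE_END(ftrace_stub_direct_tramp)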
      
      Link: https://lkml.kernel.org/r/20230321140424.345218-8-revest@chromium.org
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Li Huafei <lihuafei1@huawei.com>
      Cc: Xu Kuohai <xukuohai@huawei.com>
      Signed-off-by: Florent Revest <revest@chromium.org>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
    • ftrace: Make DIRECT_CALLS work WITH_ARGS and !WITH_REGS · 60c89718
      Florent Revest authored
      Direct called trampolines can be called in two ways:
      - either from the ftrace callsite. In this case, they do not access any
        struct ftrace_regs nor pt_regs
      - Or, if a ftrace ops is also attached, from the end of a ftrace
        trampoline. In this case, the call_direct_funcs ops is in charge of
        setting the direct call trampoline's address in a struct ftrace_regs
      
      Since:
      
      commit 9705bc70 ("ftrace: pass fregs to arch_ftrace_set_direct_caller()")
      
      The latter case no longer requires a full pt_regs. It only needs a
      struct ftrace_regs, so DIRECT_CALLS can work with either WITH_ARGS or
      WITH_REGS. With architectures like arm64 already abandoning WITH_REGS
      in favor of WITH_ARGS, it's important to have DIRECT_CALLS work with
      WITH_ARGS alone.
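
      The ops callback can then take roughly this shape (a sketch based
      on this series; assumes the ops->direct_call field introduced by
      "ftrace: Store direct called addresses in their ops"):

      	static void call_direct_funcs(unsigned long ip, unsigned long pip,
      				      struct ftrace_ops *ops,
      				      struct ftrace_regs *fregs)
      	{
      		unsigned long addr = READ_ONCE(ops->direct_call);

      		if (!addr)
      			return;
      		/* Only fregs is needed: this works WITH_ARGS, without
      		 * a full pt_regs. */
      		arch_ftrace_set_direct_caller(fregs, addr);
      	}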
      
      Link: https://lkml.kernel.org/r/20230321140424.345218-7-revest@chromium.org
      Signed-off-by: Florent Revest <revest@chromium.org>
      Co-developed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>