Commit cccf0c2e authored by Linus Torvalds

Merge tag 'trace-v6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull tracing updates from Steven Rostedt:

 - Add a new feature to have the function graph tracer record the
   return value. Adds a new option: funcgraph-retval; when set, the
   return value of each function is shown in the function graph output.

 - Also add the option funcgraph-retval-hex: if it is not set and the
   return value is an error code, the value is shown in signed decimal;
   otherwise it is reported in hexadecimal.

 - Add the file /sys/kernel/tracing/osnoise/per_cpu/cpu<cpu>/timerlat_fd.
   When an application opens it, that application becomes the task that
   the timerlat tracer traces. The application can also read this file
   to find out how it is being interrupted.

 - Add the file /sys/kernel/tracing/available_filter_functions_addrs
   that works just like available_filter_functions but also shows the
   addresses of the functions, similar to kallsyms, except that it gives
   the address of where the fentry/mcount jump/nop is. This is used by
   BPF to make it easier to attach BPF programs to ftrace hooks.

 - Replace strlcpy with strscpy in the tracing boot code.
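
   A minimal sketch of what this conversion changes (buf and src are
   illustrative names, not taken from the patch): strlcpy() returns the
   length of the source string, so callers detect truncation by comparing
   against the buffer size, while strscpy() returns -E2BIG directly:

       char buf[16];

       /* old: truncation detected by comparing against the buffer size */
       if (strlcpy(buf, src, sizeof(buf)) >= sizeof(buf))
               pr_err("String is too long: %s\n", src);

       /* new: strscpy() reports truncation via -E2BIG */
       if (strscpy(buf, src, sizeof(buf)) == -E2BIG)
               pr_err("String is too long: %s\n", src);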

* tag 'trace-v6.5' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  tracing: Fix warnings when building htmldocs for function graph retval
  riscv: ftrace: Enable HAVE_FUNCTION_GRAPH_RETVAL
  tracing/boot: Replace strlcpy with strscpy
  tracing/timerlat: Add user-space interface
  tracing/osnoise: Skip running osnoise if all instances are off
  tracing/osnoise: Switch from PF_NO_SETAFFINITY to migrate_disable
  ftrace: Show all functions with addresses in available_filter_functions_addrs
  selftests/ftrace: Add funcgraph-retval test case
  LoongArch: ftrace: Enable HAVE_FUNCTION_GRAPH_RETVAL
  x86/ftrace: Enable HAVE_FUNCTION_GRAPH_RETVAL
  arm64: ftrace: Enable HAVE_FUNCTION_GRAPH_RETVAL
  tracing: Add documentation for funcgraph-retval and funcgraph-retval-hex
  function_graph: Support recording and printing the return value of function
  fgraph: Add declaration of "struct fgraph_ret_regs"
parents 533925cb fc30ace0
...@@ -324,6 +324,12 @@ of ftrace. Here is a list of some of the key files:
"set_graph_function", or "set_graph_notrace".
(See the section "dynamic ftrace" below for more details.)
available_filter_functions_addrs:
Similar to available_filter_functions, but with the address displayed
for each function. The displayed address is the patch-site address
and can differ from the /proc/kallsyms address.
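For example, the patch-site address reported here can be compared with
the symbol address from kallsyms (the function name below is only an
illustration)::

  # grep ' vfs_read$' /sys/kernel/tracing/available_filter_functions_addrs
  # grep ' vfs_read$' /proc/kallsyms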
dyn_ftrace_total_info:
This file is for debugging purposes. The number of functions that
...@@ -1359,6 +1365,19 @@ Options for function_graph tracer:
only a closing curly bracket "}" is displayed for
the return of a function.
funcgraph-retval
When set, the return value of each traced function
will be printed after an equal sign "=". By default
this is off.
funcgraph-retval-hex
When set, the return value will always be printed
in hexadecimal format. If the option is not set and
the return value is an error code, it will be printed
in signed decimal format; otherwise it will also be
printed in hexadecimal format. By default, this option
is off.
sleep-time
When running function graph tracer, to include
the time a task schedules out in its function.
...@@ -2704,6 +2723,119 @@ It is default disabled.
0) 1.757 us | } /* kmem_cache_free() */
0) 2.861 us | } /* putname() */
The return value of each traced function can be displayed after
an equal sign "=". When encountering system call failures, it
can be very helpful to quickly locate the function that first
returns an error code.
- hide: echo nofuncgraph-retval > trace_options
- show: echo funcgraph-retval > trace_options
Example with funcgraph-retval::
1) | cgroup_migrate() {
1) 0.651 us | cgroup_migrate_add_task(); /* = 0xffff93fcfd346c00 */
1) | cgroup_migrate_execute() {
1) | cpu_cgroup_can_attach() {
1) | cgroup_taskset_first() {
1) 0.732 us | cgroup_taskset_next(); /* = 0xffff93fc8fb20000 */
1) 1.232 us | } /* cgroup_taskset_first = 0xffff93fc8fb20000 */
1) 0.380 us | sched_rt_can_attach(); /* = 0x0 */
1) 2.335 us | } /* cpu_cgroup_can_attach = -22 */
1) 4.369 us | } /* cgroup_migrate_execute = -22 */
1) 7.143 us | } /* cgroup_migrate = -22 */
The above example shows that the function cpu_cgroup_can_attach
was the first to return the error code -22; we can then read the code
of this function to find the root cause.
When the option funcgraph-retval-hex is not set, the return value can
be displayed in a smart way. Specifically, if it is an error code,
it will be printed in signed decimal format; otherwise it will be
printed in hexadecimal format.
- smart: echo nofuncgraph-retval-hex > trace_options
- hexadecimal: echo funcgraph-retval-hex > trace_options
Example with funcgraph-retval-hex::
1) | cgroup_migrate() {
1) 0.651 us | cgroup_migrate_add_task(); /* = 0xffff93fcfd346c00 */
1) | cgroup_migrate_execute() {
1) | cpu_cgroup_can_attach() {
1) | cgroup_taskset_first() {
1) 0.732 us | cgroup_taskset_next(); /* = 0xffff93fc8fb20000 */
1) 1.232 us | } /* cgroup_taskset_first = 0xffff93fc8fb20000 */
1) 0.380 us | sched_rt_can_attach(); /* = 0x0 */
1) 2.335 us | } /* cpu_cgroup_can_attach = 0xffffffea */
1) 4.369 us | } /* cgroup_migrate_execute = 0xffffffea */
1) 7.143 us | } /* cgroup_migrate = 0xffffffea */
At present, there are some limitations when using the funcgraph-retval
option, and these limitations will be eliminated in the future:
- Even if the function return type is void, a return value will still
be printed, and you can just ignore it.
- Even if return values are stored in multiple registers, only the
value contained in the first register will be recorded and printed.
To illustrate, in the x86 architecture, eax and edx are used to store
a 64-bit return value, with the lower 32 bits saved in eax and the
upper 32 bits saved in edx. However, only the value stored in eax
will be recorded and printed.
- In certain procedure call standards, such as arm64's AAPCS64, when a
type is smaller than a GPR, it is the responsibility of the consumer
to perform the narrowing, and the upper bits may contain UNKNOWN values.
Therefore, it is advisable to check the code for such cases. For instance,
when using a u8 in a 64-bit GPR, bits [63:8] may contain arbitrary values,
especially when larger types are truncated, whether explicitly or implicitly.
Here are some specific cases to illustrate this point:
**Case One**:
The function narrow_to_u8 is defined as follows::
u8 narrow_to_u8(u64 val)
{
// implicitly truncated
return val;
}
It may be compiled to::
narrow_to_u8:
< ... ftrace instrumentation ... >
RET
If you pass 0x123456789abcdef to this function and want to narrow it,
it may be recorded as 0x123456789abcdef instead of 0xef.
**Case Two**:
The function error_if_not_4g_aligned is defined as follows::
int error_if_not_4g_aligned(u64 val)
{
if (val & GENMASK(31, 0))
return -EINVAL;
return 0;
}
It could be compiled to::
error_if_not_4g_aligned:
CBNZ w0, .Lnot_aligned
RET // bits [31:0] are zero, bits
// [63:32] are UNKNOWN
.Lnot_aligned:
MOV x0, #-EINVAL
RET
When passing 0x2_0000_0000 to it, the return value may be recorded as
0x2_0000_0000 instead of 0.
You can put some comments on specific functions by using
trace_printk() For example, if you want to put a comment inside
the __might_sleep() function, you just have to include
......
...@@ -180,3 +180,81 @@ dummy_load_1ms_pd_init, which had the following code (on purpose)::
return 0;
}
User-space interface
---------------------------
Timerlat allows user-space threads to use the timerlat infrastructure to
measure scheduling latency. This interface is accessible via a per-CPU
file descriptor inside $tracing_dir/osnoise/per_cpu/cpu$ID/timerlat_fd.
This interface is accessible under the following conditions:
- timerlat tracer is enabled
- osnoise workload option is set to NO_OSNOISE_WORKLOAD
- The user-space thread is affined to a single processor
- The thread opens the file associated with its single processor
- Only one thread can access the file at a time
The open() syscall will fail if any of these conditions are not met.
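For example, assuming the interface described above, a minimal setup
before starting the user-space thread could look like this (the osnoise
options file and the NO_OSNOISE_WORKLOAD option are as documented for
the osnoise tracer)::

  # cd /sys/kernel/tracing
  # echo NO_OSNOISE_WORKLOAD > osnoise/options
  # echo timerlat > current_tracer

The thread, pinned to CPU 0, can then open
osnoise/per_cpu/cpu0/timerlat_fd.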
After opening the file descriptor, user space can read from it.
The read() system call will run the timerlat code that arms the
timer in the future and waits for it, as the regular kernel thread does.
When the timer IRQ fires, the timerlat IRQ handler will execute, report
the IRQ latency and wake up the thread waiting in the read. The thread
will then be scheduled and report the thread latency via the tracer, as
the kernel thread does.
The difference from the in-kernel timerlat is that, instead of re-arming
the timer, timerlat will return to the read() system call. At this point,
the user can run any code.
If the application reads the timerlat file descriptor again, the tracer
will report the return from user-space latency, which is the total
latency. If this is the end of the work, it can be interpreted as the
response time for the request.
After reporting the total latency, timerlat will restart the cycle, arm
a timer, and go to sleep for the following activation.
If at any time one of the conditions is broken, e.g., the thread migrates
while in user space, or the timerlat tracer is disabled, the SIGKILL
signal will be sent to the user-space thread.
Here is a basic example of user-space code for timerlat::
int main(void)
{
char buffer[1024];
int timerlat_fd;
int retval;
long cpu = 0; /* place in CPU 0 */
cpu_set_t set;
CPU_ZERO(&set);
CPU_SET(cpu, &set);
if (sched_setaffinity(gettid(), sizeof(set), &set) == -1)
return 1;
snprintf(buffer, sizeof(buffer),
"/sys/kernel/tracing/osnoise/per_cpu/cpu%ld/timerlat_fd",
cpu);
timerlat_fd = open(buffer, O_RDONLY);
if (timerlat_fd < 0) {
printf("error opening %s: %s\n", buffer, strerror(errno));
exit(1);
}
for (;;) {
retval = read(timerlat_fd, buffer, 1024);
if (retval < 0)
break;
}
close(timerlat_fd);
exit(0);
}
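The listing above omits the headers it needs; a build that is expected
to work is sketched below (the file name and compiler invocation are
assumptions, not part of the patch; _GNU_SOURCE is required for
sched_setaffinity() and gettid(), the latter needing glibc 2.30+)::

  #define _GNU_SOURCE
  #include <errno.h>
  #include <fcntl.h>
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  /* Build and run (as root, with tracefs mounted):
   *   gcc -Wall -o timerlat_user timerlat_user.c
   *   ./timerlat_user
   */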
...@@ -202,6 +202,7 @@ config ARM64 ...@@ -202,6 +202,7 @@ config ARM64
select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_TRACER select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_ERROR_INJECTION select HAVE_FUNCTION_ERROR_INJECTION
select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_GCC_PLUGINS select HAVE_GCC_PLUGINS
select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && \ select HAVE_HARDLOCKUP_DETECTOR_PERF if PERF_EVENTS && \
......
...@@ -192,4 +192,26 @@ static inline bool arch_syscall_match_sym_name(const char *sym, ...@@ -192,4 +192,26 @@ static inline bool arch_syscall_match_sym_name(const char *sym,
} }
#endif /* ifndef __ASSEMBLY__ */ #endif /* ifndef __ASSEMBLY__ */
#ifndef __ASSEMBLY__
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
struct fgraph_ret_regs {
/* x0 - x7 */
unsigned long regs[8];
unsigned long fp;
unsigned long __unused;
};
static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->regs[0];
}
static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->fp;
}
#endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */
#endif
#endif /* __ASM_FTRACE_H */ #endif /* __ASM_FTRACE_H */
...@@ -200,6 +200,19 @@ int main(void) ...@@ -200,6 +200,19 @@ int main(void)
#endif #endif
#ifdef CONFIG_FUNCTION_TRACER #ifdef CONFIG_FUNCTION_TRACER
DEFINE(FTRACE_OPS_FUNC, offsetof(struct ftrace_ops, func)); DEFINE(FTRACE_OPS_FUNC, offsetof(struct ftrace_ops, func));
#endif
BLANK();
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
DEFINE(FGRET_REGS_X0, offsetof(struct fgraph_ret_regs, regs[0]));
DEFINE(FGRET_REGS_X1, offsetof(struct fgraph_ret_regs, regs[1]));
DEFINE(FGRET_REGS_X2, offsetof(struct fgraph_ret_regs, regs[2]));
DEFINE(FGRET_REGS_X3, offsetof(struct fgraph_ret_regs, regs[3]));
DEFINE(FGRET_REGS_X4, offsetof(struct fgraph_ret_regs, regs[4]));
DEFINE(FGRET_REGS_X5, offsetof(struct fgraph_ret_regs, regs[5]));
DEFINE(FGRET_REGS_X6, offsetof(struct fgraph_ret_regs, regs[6]));
DEFINE(FGRET_REGS_X7, offsetof(struct fgraph_ret_regs, regs[7]));
DEFINE(FGRET_REGS_FP, offsetof(struct fgraph_ret_regs, fp));
DEFINE(FGRET_REGS_SIZE, sizeof(struct fgraph_ret_regs));
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS #ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
DEFINE(FTRACE_OPS_DIRECT_CALL, offsetof(struct ftrace_ops, direct_call)); DEFINE(FTRACE_OPS_DIRECT_CALL, offsetof(struct ftrace_ops, direct_call));
#endif #endif
......
...@@ -330,22 +330,23 @@ SYM_FUNC_END(ftrace_stub_graph) ...@@ -330,22 +330,23 @@ SYM_FUNC_END(ftrace_stub_graph)
*/ */
SYM_CODE_START(return_to_handler) SYM_CODE_START(return_to_handler)
/* save return value regs */ /* save return value regs */
sub sp, sp, #64 sub sp, sp, #FGRET_REGS_SIZE
stp x0, x1, [sp] stp x0, x1, [sp, #FGRET_REGS_X0]
stp x2, x3, [sp, #16] stp x2, x3, [sp, #FGRET_REGS_X2]
stp x4, x5, [sp, #32] stp x4, x5, [sp, #FGRET_REGS_X4]
stp x6, x7, [sp, #48] stp x6, x7, [sp, #FGRET_REGS_X6]
str x29, [sp, #FGRET_REGS_FP] // parent's fp
mov x0, x29 // parent's fp mov x0, sp
bl ftrace_return_to_handler// addr = ftrace_return_to_hander(fp); bl ftrace_return_to_handler // addr = ftrace_return_to_hander(regs);
mov x30, x0 // restore the original return address mov x30, x0 // restore the original return address
/* restore return value regs */ /* restore return value regs */
ldp x0, x1, [sp] ldp x0, x1, [sp, #FGRET_REGS_X0]
ldp x2, x3, [sp, #16] ldp x2, x3, [sp, #FGRET_REGS_X2]
ldp x4, x5, [sp, #32] ldp x4, x5, [sp, #FGRET_REGS_X4]
ldp x6, x7, [sp, #48] ldp x6, x7, [sp, #FGRET_REGS_X6]
add sp, sp, #64 add sp, sp, #FGRET_REGS_SIZE
ret ret
SYM_CODE_END(return_to_handler) SYM_CODE_END(return_to_handler)
......
...@@ -111,6 +111,7 @@ config LOONGARCH ...@@ -111,6 +111,7 @@ config LOONGARCH
select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_ARG_ACCESS_API select HAVE_FUNCTION_ARG_ACCESS_API
select HAVE_FUNCTION_ERROR_INJECTION select HAVE_FUNCTION_ERROR_INJECTION
select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER select HAVE_FUNCTION_TRACER
select HAVE_GENERIC_VDSO select HAVE_GENERIC_VDSO
......
...@@ -100,4 +100,26 @@ __arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr) ...@@ -100,4 +100,26 @@ __arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr)
#endif /* CONFIG_FUNCTION_TRACER */ #endif /* CONFIG_FUNCTION_TRACER */
#ifndef __ASSEMBLY__
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
struct fgraph_ret_regs {
/* a0 - a1 */
unsigned long regs[2];
unsigned long fp;
unsigned long __unused;
};
static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->regs[0];
}
static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->fp;
}
#endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */
#endif
#endif /* _ASM_LOONGARCH_FTRACE_H */ #endif /* _ASM_LOONGARCH_FTRACE_H */
...@@ -12,6 +12,7 @@ ...@@ -12,6 +12,7 @@
#include <asm/cpu-info.h> #include <asm/cpu-info.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/ftrace.h>
void output_ptreg_defines(void) void output_ptreg_defines(void)
{ {
...@@ -264,7 +265,7 @@ void output_smpboot_defines(void) ...@@ -264,7 +265,7 @@ void output_smpboot_defines(void)
#ifdef CONFIG_HIBERNATION #ifdef CONFIG_HIBERNATION
void output_pbe_defines(void) void output_pbe_defines(void)
{ {
COMMENT(" Linux struct pbe offsets. "); COMMENT("Linux struct pbe offsets.");
OFFSET(PBE_ADDRESS, pbe, address); OFFSET(PBE_ADDRESS, pbe, address);
OFFSET(PBE_ORIG_ADDRESS, pbe, orig_address); OFFSET(PBE_ORIG_ADDRESS, pbe, orig_address);
OFFSET(PBE_NEXT, pbe, next); OFFSET(PBE_NEXT, pbe, next);
...@@ -272,3 +273,15 @@ void output_pbe_defines(void) ...@@ -272,3 +273,15 @@ void output_pbe_defines(void)
BLANK(); BLANK();
} }
#endif #endif
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
void output_fgraph_ret_regs_defines(void)
{
COMMENT("LoongArch fgraph_ret_regs offsets.");
OFFSET(FGRET_REGS_A0, fgraph_ret_regs, regs[0]);
OFFSET(FGRET_REGS_A1, fgraph_ret_regs, regs[1]);
OFFSET(FGRET_REGS_FP, fgraph_ret_regs, fp);
DEFINE(FGRET_REGS_SIZE, sizeof(struct fgraph_ret_regs));
BLANK();
}
#endif
...@@ -79,18 +79,20 @@ SYM_FUNC_START(ftrace_graph_caller) ...@@ -79,18 +79,20 @@ SYM_FUNC_START(ftrace_graph_caller)
SYM_FUNC_END(ftrace_graph_caller) SYM_FUNC_END(ftrace_graph_caller)
SYM_FUNC_START(return_to_handler) SYM_FUNC_START(return_to_handler)
PTR_ADDI sp, sp, -2 * SZREG PTR_ADDI sp, sp, -FGRET_REGS_SIZE
PTR_S a0, sp, 0 PTR_S a0, sp, FGRET_REGS_A0
PTR_S a1, sp, SZREG PTR_S a1, sp, FGRET_REGS_A1
PTR_S zero, sp, FGRET_REGS_FP
move a0, sp
bl ftrace_return_to_handler bl ftrace_return_to_handler
/* Restore the real parent address: a0 -> ra */ /* Restore the real parent address: a0 -> ra */
move ra, a0 move ra, a0
PTR_L a0, sp, 0 PTR_L a0, sp, FGRET_REGS_A0
PTR_L a1, sp, SZREG PTR_L a1, sp, FGRET_REGS_A1
PTR_ADDI sp, sp, 2 * SZREG PTR_ADDI sp, sp, FGRET_REGS_SIZE
jr ra jr ra
SYM_FUNC_END(return_to_handler) SYM_FUNC_END(return_to_handler)
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */ #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
...@@ -136,18 +136,19 @@ SYM_CODE_END(ftrace_graph_caller) ...@@ -136,18 +136,19 @@ SYM_CODE_END(ftrace_graph_caller)
SYM_CODE_START(return_to_handler) SYM_CODE_START(return_to_handler)
/* Save return value regs */ /* Save return value regs */
PTR_ADDI sp, sp, -2 * SZREG PTR_ADDI sp, sp, -FGRET_REGS_SIZE
PTR_S a0, sp, 0 PTR_S a0, sp, FGRET_REGS_A0
PTR_S a1, sp, SZREG PTR_S a1, sp, FGRET_REGS_A1
PTR_S zero, sp, FGRET_REGS_FP
move a0, zero move a0, sp
bl ftrace_return_to_handler bl ftrace_return_to_handler
move ra, a0 move ra, a0
/* Restore return value regs */ /* Restore return value regs */
PTR_L a0, sp, 0 PTR_L a0, sp, FGRET_REGS_A0
PTR_L a1, sp, SZREG PTR_L a1, sp, FGRET_REGS_A1
PTR_ADDI sp, sp, 2 * SZREG PTR_ADDI sp, sp, FGRET_REGS_SIZE
jr ra jr ra
SYM_CODE_END(return_to_handler) SYM_CODE_END(return_to_handler)
......
...@@ -152,6 +152,7 @@ config RISCV ...@@ -152,6 +152,7 @@ config RISCV
select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE select HAVE_DYNAMIC_FTRACE_WITH_REGS if HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
select HAVE_FUNCTION_GRAPH_TRACER select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER if !XIP_KERNEL && !PREEMPTION select HAVE_FUNCTION_TRACER if !XIP_KERNEL && !PREEMPTION
config CLANG_SUPPORTS_DYNAMIC_FTRACE config CLANG_SUPPORTS_DYNAMIC_FTRACE
......
...@@ -111,4 +111,25 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec); ...@@ -111,4 +111,25 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
#endif /* CONFIG_DYNAMIC_FTRACE */ #endif /* CONFIG_DYNAMIC_FTRACE */
#ifndef __ASSEMBLY__
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
struct fgraph_ret_regs {
unsigned long a1;
unsigned long a0;
unsigned long s0;
unsigned long ra;
};
static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->a0;
}
static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->s0;
}
#endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */
#endif
#endif /* _ASM_RISCV_FTRACE_H */ #endif /* _ASM_RISCV_FTRACE_H */
...@@ -65,13 +65,8 @@ ENTRY(return_to_handler) ...@@ -65,13 +65,8 @@ ENTRY(return_to_handler)
* So alternatively we check the *old* frame pointer position, that is, the * So alternatively we check the *old* frame pointer position, that is, the
* value stored in -16(s0) on entry, and the s0 on return. * value stored in -16(s0) on entry, and the s0 on return.
*/ */
#ifdef HAVE_FUNCTION_GRAPH_FP_TEST
mv t6, s0
#endif
SAVE_RET_ABI_STATE SAVE_RET_ABI_STATE
#ifdef HAVE_FUNCTION_GRAPH_FP_TEST mv a0, sp
mv a0, t6
#endif
call ftrace_return_to_handler call ftrace_return_to_handler
mv a2, a0 mv a2, a0
RESTORE_RET_ABI_STATE RESTORE_RET_ABI_STATE
......
...@@ -217,6 +217,7 @@ config X86 ...@@ -217,6 +217,7 @@ config X86
select HAVE_FAST_GUP select HAVE_FAST_GUP
select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_TRACER if X86_32 || (X86_64 && DYNAMIC_FTRACE) select HAVE_FUNCTION_GRAPH_TRACER if X86_32 || (X86_64 && DYNAMIC_FTRACE)
select HAVE_FUNCTION_TRACER select HAVE_FUNCTION_TRACER
select HAVE_GCC_PLUGINS select HAVE_GCC_PLUGINS
......
...@@ -150,4 +150,24 @@ static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs) ...@@ -150,4 +150,24 @@ static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs)
#endif /* !COMPILE_OFFSETS */ #endif /* !COMPILE_OFFSETS */
#endif /* !__ASSEMBLY__ */ #endif /* !__ASSEMBLY__ */
#ifndef __ASSEMBLY__
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
struct fgraph_ret_regs {
unsigned long ax;
unsigned long dx;
unsigned long bp;
};
static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->ax;
}
static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
{
return ret_regs->bp;
}
#endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */
#endif
#endif /* _ASM_X86_FTRACE_H */ #endif /* _ASM_X86_FTRACE_H */
...@@ -187,12 +187,14 @@ SYM_CODE_END(ftrace_graph_caller) ...@@ -187,12 +187,14 @@ SYM_CODE_END(ftrace_graph_caller)
.globl return_to_handler .globl return_to_handler
return_to_handler: return_to_handler:
pushl %eax pushl $0
pushl %edx pushl %edx
movl $0, %eax pushl %eax
movl %esp, %eax
call ftrace_return_to_handler call ftrace_return_to_handler
movl %eax, %ecx movl %eax, %ecx
popl %edx
popl %eax popl %eax
popl %edx
addl $4, %esp # skip ebp
JMP_NOSPEC ecx JMP_NOSPEC ecx
#endif #endif
...@@ -348,12 +348,13 @@ STACK_FRAME_NON_STANDARD_FP(__fentry__) ...@@ -348,12 +348,13 @@ STACK_FRAME_NON_STANDARD_FP(__fentry__)
SYM_CODE_START(return_to_handler) SYM_CODE_START(return_to_handler)
UNWIND_HINT_UNDEFINED UNWIND_HINT_UNDEFINED
ANNOTATE_NOENDBR ANNOTATE_NOENDBR
subq $16, %rsp subq $24, %rsp
/* Save the return values */ /* Save the return values */
movq %rax, (%rsp) movq %rax, (%rsp)
movq %rdx, 8(%rsp) movq %rdx, 8(%rsp)
movq %rbp, %rdi movq %rbp, 16(%rsp)
movq %rsp, %rdi
call ftrace_return_to_handler call ftrace_return_to_handler
...@@ -361,7 +362,7 @@ SYM_CODE_START(return_to_handler) ...@@ -361,7 +362,7 @@ SYM_CODE_START(return_to_handler)
movq 8(%rsp), %rdx movq 8(%rsp), %rdx
movq (%rsp), %rax movq (%rsp), %rax
addq $16, %rsp addq $24, %rsp
/* /*
* Jump back to the old return address. This cannot be JMP_NOSPEC rdi * Jump back to the old return address. This cannot be JMP_NOSPEC rdi
* since IBT would demand that contain ENDBR, which simply isn't so for * since IBT would demand that contain ENDBR, which simply isn't so for
......
...@@ -633,6 +633,7 @@ enum { ...@@ -633,6 +633,7 @@ enum {
FTRACE_ITER_MOD = (1 << 5), FTRACE_ITER_MOD = (1 << 5),
FTRACE_ITER_ENABLED = (1 << 6), FTRACE_ITER_ENABLED = (1 << 6),
FTRACE_ITER_TOUCHED = (1 << 7), FTRACE_ITER_TOUCHED = (1 << 7),
FTRACE_ITER_ADDRS = (1 << 8),
}; };
void arch_ftrace_update_code(int command); void arch_ftrace_update_code(int command);
...@@ -1018,6 +1019,9 @@ struct ftrace_graph_ent { ...@@ -1018,6 +1019,9 @@ struct ftrace_graph_ent {
*/ */
struct ftrace_graph_ret { struct ftrace_graph_ret {
unsigned long func; /* Current function */ unsigned long func; /* Current function */
#ifdef CONFIG_FUNCTION_GRAPH_RETVAL
unsigned long retval;
#endif
int depth; int depth;
/* Number of functions that overran the depth limit for current task */ /* Number of functions that overran the depth limit for current task */
unsigned int overrun; unsigned int overrun;
......
...@@ -31,6 +31,9 @@ config HAVE_FUNCTION_GRAPH_TRACER ...@@ -31,6 +31,9 @@ config HAVE_FUNCTION_GRAPH_TRACER
help help
See Documentation/trace/ftrace-design.rst See Documentation/trace/ftrace-design.rst
config HAVE_FUNCTION_GRAPH_RETVAL
bool
config HAVE_DYNAMIC_FTRACE config HAVE_DYNAMIC_FTRACE
bool bool
help help
...@@ -227,6 +230,18 @@ config FUNCTION_GRAPH_TRACER ...@@ -227,6 +230,18 @@ config FUNCTION_GRAPH_TRACER
the return value. This is done by setting the current return the return value. This is done by setting the current return
address on the current task structure into a stack of calls. address on the current task structure into a stack of calls.
config FUNCTION_GRAPH_RETVAL
bool "Kernel Function Graph Return Value"
depends on HAVE_FUNCTION_GRAPH_RETVAL
depends on FUNCTION_GRAPH_TRACER
default n
help
Support recording and printing the function return value when
using function graph tracer. It can be helpful to locate functions
that return errors. This feature is off by default, and you can
enable it via the trace option funcgraph-retval.
See Documentation/trace/ftrace.rst
config DYNAMIC_FTRACE config DYNAMIC_FTRACE
bool "enable/disable function tracing dynamically" bool "enable/disable function tracing dynamically"
depends on FUNCTION_TRACER depends on FUNCTION_TRACER
......
...@@ -236,16 +236,23 @@ static struct notifier_block ftrace_suspend_notifier = { ...@@ -236,16 +236,23 @@ static struct notifier_block ftrace_suspend_notifier = {
.notifier_call = ftrace_suspend_notifier_call, .notifier_call = ftrace_suspend_notifier_call,
}; };
/* fgraph_ret_regs is not defined without CONFIG_FUNCTION_GRAPH_RETVAL */
struct fgraph_ret_regs;
/* /*
* Send the trace to the ring-buffer. * Send the trace to the ring-buffer.
* @return the original return address. * @return the original return address.
*/ */
unsigned long ftrace_return_to_handler(unsigned long frame_pointer) static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs,
unsigned long frame_pointer)
{ {
struct ftrace_graph_ret trace; struct ftrace_graph_ret trace;
unsigned long ret; unsigned long ret;
ftrace_pop_return_trace(&trace, &ret, frame_pointer); ftrace_pop_return_trace(&trace, &ret, frame_pointer);
#ifdef CONFIG_FUNCTION_GRAPH_RETVAL
trace.retval = fgraph_ret_regs_return_value(ret_regs);
#endif
trace.rettime = trace_clock_local(); trace.rettime = trace_clock_local();
ftrace_graph_return(&trace); ftrace_graph_return(&trace);
/* /*
...@@ -266,6 +273,23 @@ unsigned long ftrace_return_to_handler(unsigned long frame_pointer) ...@@ -266,6 +273,23 @@ unsigned long ftrace_return_to_handler(unsigned long frame_pointer)
return ret; return ret;
} }
/*
* After all architectures have selected HAVE_FUNCTION_GRAPH_RETVAL, we can
* leave only ftrace_return_to_handler(ret_regs).
*/
#ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL
unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs)
{
return __ftrace_return_to_handler(ret_regs,
fgraph_ret_regs_frame_pointer(ret_regs));
}
#else
unsigned long ftrace_return_to_handler(unsigned long frame_pointer)
{
return __ftrace_return_to_handler(NULL, frame_pointer);
}
#endif
/** /**
* ftrace_graph_get_ret_stack - return the entry of the shadow stack * ftrace_graph_get_ret_stack - return the entry of the shadow stack
* @task: The task to read the shadow stack from * @task: The task to read the shadow stack from
......
...@@ -3861,6 +3861,9 @@ static int t_show(struct seq_file *m, void *v) ...@@ -3861,6 +3861,9 @@ static int t_show(struct seq_file *m, void *v)
if (!rec) if (!rec)
return 0; return 0;
if (iter->flags & FTRACE_ITER_ADDRS)
seq_printf(m, "%lx ", rec->ip);
if (print_rec(m, rec->ip)) { if (print_rec(m, rec->ip)) {
/* This should only happen when a rec is disabled */ /* This should only happen when a rec is disabled */
WARN_ON_ONCE(!(rec->flags & FTRACE_FL_DISABLED)); WARN_ON_ONCE(!(rec->flags & FTRACE_FL_DISABLED));
...@@ -3996,6 +3999,30 @@ ftrace_touched_open(struct inode *inode, struct file *file) ...@@ -3996,6 +3999,30 @@ ftrace_touched_open(struct inode *inode, struct file *file)
return 0; return 0;
} }
static int
ftrace_avail_addrs_open(struct inode *inode, struct file *file)
{
struct ftrace_iterator *iter;
int ret;
ret = security_locked_down(LOCKDOWN_TRACEFS);
if (ret)
return ret;
if (unlikely(ftrace_disabled))
return -ENODEV;
iter = __seq_open_private(file, &show_ftrace_seq_ops, sizeof(*iter));
if (!iter)
return -ENOMEM;
iter->pg = ftrace_pages_start;
iter->flags = FTRACE_ITER_ADDRS;
iter->ops = &global_ops;
return 0;
}
/** /**
* ftrace_regex_open - initialize function tracer filter files * ftrace_regex_open - initialize function tracer filter files
* @ops: The ftrace_ops that hold the hash filters * @ops: The ftrace_ops that hold the hash filters
...@@ -5916,6 +5943,13 @@ static const struct file_operations ftrace_touched_fops = { ...@@ -5916,6 +5943,13 @@ static const struct file_operations ftrace_touched_fops = {
.release = seq_release_private, .release = seq_release_private,
}; };
static const struct file_operations ftrace_avail_addrs_fops = {
.open = ftrace_avail_addrs_open,
.read = seq_read,
.llseek = seq_lseek,
.release = seq_release_private,
};
static const struct file_operations ftrace_filter_fops = { static const struct file_operations ftrace_filter_fops = {
.open = ftrace_filter_open, .open = ftrace_filter_open,
.read = seq_read, .read = seq_read,
...@@ -6377,6 +6411,9 @@ static __init int ftrace_init_dyn_tracefs(struct dentry *d_tracer) ...@@ -6377,6 +6411,9 @@ static __init int ftrace_init_dyn_tracefs(struct dentry *d_tracer)
trace_create_file("available_filter_functions", TRACE_MODE_READ, trace_create_file("available_filter_functions", TRACE_MODE_READ,
d_tracer, NULL, &ftrace_avail_fops); d_tracer, NULL, &ftrace_avail_fops);
trace_create_file("available_filter_functions_addrs", TRACE_MODE_READ,
d_tracer, NULL, &ftrace_avail_addrs_fops);
trace_create_file("enabled_functions", TRACE_MODE_READ, trace_create_file("enabled_functions", TRACE_MODE_READ,
d_tracer, NULL, &ftrace_enabled_fops); d_tracer, NULL, &ftrace_enabled_fops);
......
...@@ -832,6 +832,8 @@ static __always_inline bool ftrace_hash_empty(struct ftrace_hash *hash) ...@@ -832,6 +832,8 @@ static __always_inline bool ftrace_hash_empty(struct ftrace_hash *hash)
#define TRACE_GRAPH_PRINT_TAIL 0x100 #define TRACE_GRAPH_PRINT_TAIL 0x100
#define TRACE_GRAPH_SLEEP_TIME 0x200 #define TRACE_GRAPH_SLEEP_TIME 0x200
#define TRACE_GRAPH_GRAPH_TIME 0x400 #define TRACE_GRAPH_GRAPH_TIME 0x400
#define TRACE_GRAPH_PRINT_RETVAL 0x800
#define TRACE_GRAPH_PRINT_RETVAL_HEX 0x1000
#define TRACE_GRAPH_PRINT_FILL_SHIFT 28 #define TRACE_GRAPH_PRINT_FILL_SHIFT 28
#define TRACE_GRAPH_PRINT_FILL_MASK (0x3 << TRACE_GRAPH_PRINT_FILL_SHIFT) #define TRACE_GRAPH_PRINT_FILL_MASK (0x3 << TRACE_GRAPH_PRINT_FILL_SHIFT)
......
...@@ -31,7 +31,7 @@ trace_boot_set_instance_options(struct trace_array *tr, struct xbc_node *node) ...@@ -31,7 +31,7 @@ trace_boot_set_instance_options(struct trace_array *tr, struct xbc_node *node)
/* Common ftrace options */ /* Common ftrace options */
xbc_node_for_each_array_value(node, "options", anode, p) { xbc_node_for_each_array_value(node, "options", anode, p) {
if (strlcpy(buf, p, ARRAY_SIZE(buf)) >= ARRAY_SIZE(buf)) { if (strscpy(buf, p, ARRAY_SIZE(buf)) == -E2BIG) {
pr_err("String is too long: %s\n", p); pr_err("String is too long: %s\n", p);
continue; continue;
} }
...@@ -87,7 +87,7 @@ trace_boot_enable_events(struct trace_array *tr, struct xbc_node *node) ...@@ -87,7 +87,7 @@ trace_boot_enable_events(struct trace_array *tr, struct xbc_node *node)
const char *p; const char *p;
xbc_node_for_each_array_value(node, "events", anode, p) { xbc_node_for_each_array_value(node, "events", anode, p) {
if (strlcpy(buf, p, ARRAY_SIZE(buf)) >= ARRAY_SIZE(buf)) { if (strscpy(buf, p, ARRAY_SIZE(buf)) == -E2BIG) {
pr_err("String is too long: %s\n", p); pr_err("String is too long: %s\n", p);
continue; continue;
} }
...@@ -486,7 +486,7 @@ trace_boot_init_one_event(struct trace_array *tr, struct xbc_node *gnode, ...@@ -486,7 +486,7 @@ trace_boot_init_one_event(struct trace_array *tr, struct xbc_node *gnode,
p = xbc_node_find_value(enode, "filter", NULL); p = xbc_node_find_value(enode, "filter", NULL);
if (p && *p != '\0') { if (p && *p != '\0') {
if (strlcpy(buf, p, ARRAY_SIZE(buf)) >= ARRAY_SIZE(buf)) if (strscpy(buf, p, ARRAY_SIZE(buf)) == -E2BIG)
pr_err("filter string is too long: %s\n", p); pr_err("filter string is too long: %s\n", p);
else if (apply_event_filter(file, buf) < 0) else if (apply_event_filter(file, buf) < 0)
pr_err("Failed to apply filter: %s\n", buf); pr_err("Failed to apply filter: %s\n", buf);
...@@ -494,7 +494,7 @@ trace_boot_init_one_event(struct trace_array *tr, struct xbc_node *gnode, ...@@ -494,7 +494,7 @@ trace_boot_init_one_event(struct trace_array *tr, struct xbc_node *gnode,
if (IS_ENABLED(CONFIG_HIST_TRIGGERS)) { if (IS_ENABLED(CONFIG_HIST_TRIGGERS)) {
xbc_node_for_each_array_value(enode, "actions", anode, p) { xbc_node_for_each_array_value(enode, "actions", anode, p) {
if (strlcpy(buf, p, ARRAY_SIZE(buf)) >= ARRAY_SIZE(buf)) if (strscpy(buf, p, ARRAY_SIZE(buf)) == -E2BIG)
pr_err("action string is too long: %s\n", p); pr_err("action string is too long: %s\n", p);
else if (trigger_process_regex(file, buf) < 0) else if (trigger_process_regex(file, buf) < 0)
pr_err("Failed to apply an action: %s\n", p); pr_err("Failed to apply an action: %s\n", p);
......
...@@ -86,6 +86,30 @@ FTRACE_ENTRY_PACKED(funcgraph_entry, ftrace_graph_ent_entry, ...@@ -86,6 +86,30 @@ FTRACE_ENTRY_PACKED(funcgraph_entry, ftrace_graph_ent_entry,
); );
/* Function return entry */ /* Function return entry */
#ifdef CONFIG_FUNCTION_GRAPH_RETVAL
FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
TRACE_GRAPH_RET,
F_STRUCT(
__field_struct( struct ftrace_graph_ret, ret )
__field_packed( unsigned long, ret, func )
__field_packed( unsigned long, ret, retval )
__field_packed( int, ret, depth )
__field_packed( unsigned int, ret, overrun )
__field_packed( unsigned long long, ret, calltime)
__field_packed( unsigned long long, ret, rettime )
),
F_printk("<-- %ps (%d) (start: %llx end: %llx) over: %d retval: %lx",
(void *)__entry->func, __entry->depth,
__entry->calltime, __entry->rettime,
__entry->depth, __entry->retval)
);
#else
FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry, FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
TRACE_GRAPH_RET, TRACE_GRAPH_RET,
...@@ -105,6 +129,8 @@ FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry, ...@@ -105,6 +129,8 @@ FTRACE_ENTRY_PACKED(funcgraph_exit, ftrace_graph_ret_entry,
__entry->depth) __entry->depth)
); );
#endif
/* /*
* Context switch trace entry - which task (and prio) we switched from/to: * Context switch trace entry - which task (and prio) we switched from/to:
* *
......
...@@ -58,6 +58,12 @@ static struct tracer_opt trace_opts[] = { ...@@ -58,6 +58,12 @@ static struct tracer_opt trace_opts[] = {
{ TRACER_OPT(funcgraph-irqs, TRACE_GRAPH_PRINT_IRQS) }, { TRACER_OPT(funcgraph-irqs, TRACE_GRAPH_PRINT_IRQS) },
/* Display function name after trailing } */ /* Display function name after trailing } */
{ TRACER_OPT(funcgraph-tail, TRACE_GRAPH_PRINT_TAIL) }, { TRACER_OPT(funcgraph-tail, TRACE_GRAPH_PRINT_TAIL) },
#ifdef CONFIG_FUNCTION_GRAPH_RETVAL
/* Display function return value ? */
{ TRACER_OPT(funcgraph-retval, TRACE_GRAPH_PRINT_RETVAL) },
/* Display function return value in hexadecimal format ? */
{ TRACER_OPT(funcgraph-retval-hex, TRACE_GRAPH_PRINT_RETVAL_HEX) },
#endif
/* Include sleep time (scheduled out) between entry and return */ /* Include sleep time (scheduled out) between entry and return */
{ TRACER_OPT(sleep-time, TRACE_GRAPH_SLEEP_TIME) }, { TRACER_OPT(sleep-time, TRACE_GRAPH_SLEEP_TIME) },
...@@ -619,6 +625,56 @@ print_graph_duration(struct trace_array *tr, unsigned long long duration, ...@@ -619,6 +625,56 @@ print_graph_duration(struct trace_array *tr, unsigned long long duration,
trace_seq_puts(s, "| "); trace_seq_puts(s, "| ");
} }
#ifdef CONFIG_FUNCTION_GRAPH_RETVAL
#define __TRACE_GRAPH_PRINT_RETVAL TRACE_GRAPH_PRINT_RETVAL
static void print_graph_retval(struct trace_seq *s, unsigned long retval,
bool leaf, void *func, bool hex_format)
{
unsigned long err_code = 0;
if (retval == 0 || hex_format)
goto done;
/* Check if the return value matches the negative format */
if (IS_ENABLED(CONFIG_64BIT) && (retval & BIT(31)) &&
(((u64)retval) >> 32) == 0) {
/* sign extension */
err_code = (unsigned long)(s32)retval;
} else {
err_code = retval;
}
if (!IS_ERR_VALUE(err_code))
err_code = 0;
done:
if (leaf) {
if (hex_format || (err_code == 0))
trace_seq_printf(s, "%ps(); /* = 0x%lx */\n",
func, retval);
else
trace_seq_printf(s, "%ps(); /* = %ld */\n",
func, err_code);
} else {
if (hex_format || (err_code == 0))
trace_seq_printf(s, "} /* %ps = 0x%lx */\n",
func, retval);
else
trace_seq_printf(s, "} /* %ps = %ld */\n",
func, err_code);
}
}
#else
#define __TRACE_GRAPH_PRINT_RETVAL 0
#define print_graph_retval(_seq, _retval, _leaf, _func, _format) do {} while (0)
#endif
/* Case of a leaf function on its call entry */ /* Case of a leaf function on its call entry */
static enum print_line_t static enum print_line_t
print_graph_entry_leaf(struct trace_iterator *iter, print_graph_entry_leaf(struct trace_iterator *iter,
...@@ -663,7 +719,15 @@ print_graph_entry_leaf(struct trace_iterator *iter, ...@@ -663,7 +719,15 @@ print_graph_entry_leaf(struct trace_iterator *iter,
for (i = 0; i < call->depth * TRACE_GRAPH_INDENT; i++) for (i = 0; i < call->depth * TRACE_GRAPH_INDENT; i++)
trace_seq_putc(s, ' '); trace_seq_putc(s, ' ');
trace_seq_printf(s, "%ps();\n", (void *)call->func); /*
* Write out the function return value if the option function-retval is
* enabled.
*/
if (flags & __TRACE_GRAPH_PRINT_RETVAL)
print_graph_retval(s, graph_ret->retval, true, (void *)call->func,
!!(flags & TRACE_GRAPH_PRINT_RETVAL_HEX));
else
trace_seq_printf(s, "%ps();\n", (void *)call->func);
print_graph_irq(iter, graph_ret->func, TRACE_GRAPH_RET, print_graph_irq(iter, graph_ret->func, TRACE_GRAPH_RET,
cpu, iter->ent->pid, flags); cpu, iter->ent->pid, flags);
...@@ -942,16 +1006,25 @@ print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s, ...@@ -942,16 +1006,25 @@ print_graph_return(struct ftrace_graph_ret *trace, struct trace_seq *s,
trace_seq_putc(s, ' '); trace_seq_putc(s, ' ');
/* /*
* If the return function does not have a matching entry, * Always write out the function name and its return value if the
* then the entry was lost. Instead of just printing * function-retval option is enabled.
* the '}' and letting the user guess what function this
* belongs to, write out the function name. Always do
* that if the funcgraph-tail option is enabled.
*/ */
if (func_match && !(flags & TRACE_GRAPH_PRINT_TAIL)) if (flags & __TRACE_GRAPH_PRINT_RETVAL) {
trace_seq_puts(s, "}\n"); print_graph_retval(s, trace->retval, false, (void *)trace->func,
else !!(flags & TRACE_GRAPH_PRINT_RETVAL_HEX));
trace_seq_printf(s, "} /* %ps */\n", (void *)trace->func); } else {
/*
* If the return function does not have a matching entry,
* then the entry was lost. Instead of just printing
* the '}' and letting the user guess what function this
* belongs to, write out the function name. Always do
* that if the funcgraph-tail option is enabled.
*/
if (func_match && !(flags & TRACE_GRAPH_PRINT_TAIL))
trace_seq_puts(s, "}\n");
else
trace_seq_printf(s, "} /* %ps */\n", (void *)trace->func);
}
/* Overrun */ /* Overrun */
if (flags & TRACE_GRAPH_PRINT_OVERRUN) if (flags & TRACE_GRAPH_PRINT_OVERRUN)
......
...@@ -1446,6 +1446,8 @@ static struct trace_event trace_osnoise_event = { ...@@ -1446,6 +1446,8 @@ static struct trace_event trace_osnoise_event = {
}; };
/* TRACE_TIMERLAT */ /* TRACE_TIMERLAT */
static char *timerlat_lat_context[] = {"irq", "thread", "user-ret"};
static enum print_line_t static enum print_line_t
trace_timerlat_print(struct trace_iterator *iter, int flags, trace_timerlat_print(struct trace_iterator *iter, int flags,
struct trace_event *event) struct trace_event *event)
...@@ -1458,7 +1460,7 @@ trace_timerlat_print(struct trace_iterator *iter, int flags, ...@@ -1458,7 +1460,7 @@ trace_timerlat_print(struct trace_iterator *iter, int flags,
trace_seq_printf(s, "#%-5u context %6s timer_latency %9llu ns\n", trace_seq_printf(s, "#%-5u context %6s timer_latency %9llu ns\n",
field->seqnum, field->seqnum,
field->context ? "thread" : "irq", timerlat_lat_context[field->context],
field->timer_latency); field->timer_latency);
return trace_handle_return(s); return trace_handle_return(s);
......
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: ftrace - function graph print function return value
# requires: options/funcgraph-retval options/funcgraph-retval-hex function_graph:tracer
# Make sure that funcgraph-retval works
fail() { # msg
echo $1
exit_fail
}
disable_tracing
clear_trace
# get self PID, can not use $$, because it is PPID
read PID _ < /proc/self/stat
[ -f set_ftrace_filter ] && echo proc_reg_write > set_ftrace_filter
[ -f set_ftrace_pid ] && echo ${PID} > set_ftrace_pid
echo function_graph > current_tracer
echo 1 > options/funcgraph-retval
set +e
enable_tracing
echo > /proc/interrupts
disable_tracing
set -e
: "Test printing the error code in signed decimal format"
echo 0 > options/funcgraph-retval-hex
count=`cat trace | grep 'proc_reg_write' | grep '= -5' | wc -l`
if [ $count -eq 0 ]; then
fail "Return value can not be printed in signed decimal format"
fi
: "Test printing the error code in hexadecimal format"
echo 1 > options/funcgraph-retval-hex
count=`cat trace | grep 'proc_reg_write' | grep 'fffffffb' | wc -l`
if [ $count -eq 0 ]; then
fail "Return value can not be printed in hexadecimal format"
fi
exit 0