Commit d2d8b146 authored by Linus Torvalds

Merge tag 'trace-v5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing updates from Steven Rostedt:
 "The major changes in this tracing update includes:

   - Removal of non-DYNAMIC_FTRACE from 32bit x86

   - Removal of mcount support from x86

   - Emulating a call from int3 on x86_64, fixes live kernel patching

   - Consolidated Tracing Error logs file

  Minor updates:

   - Removal of klp_check_compiler_support()

   - kdb ftrace dumping output changes

   - Accessing and creating ftrace instances from inside the kernel

   - Clean up of #define if macro

   - Introduction of TRACE_EVENT_NOP() to disable trace events based on
     config options

  And other minor fixes and clean ups"

* tag 'trace-v5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (44 commits)
  x86: Hide the int3_emulate_call/jmp functions from UML
  livepatch: Remove klp_check_compiler_support()
  ftrace/x86: Remove mcount support
  ftrace/x86_32: Remove support for non DYNAMIC_FTRACE
  tracing: Simplify "if" macro code
  tracing: Fix documentation about disabling options using trace_options
  tracing: Replace kzalloc with kcalloc
  tracing: Fix partial reading of trace event's id file
  tracing: Allow RCU to run between postponed startup tests
  tracing: Fix white space issues in parse_pred() function
  tracing: Eliminate const char[] auto variables
  ring-buffer: Fix mispelling of Calculate
  tracing: probeevent: Fix to make the type of $comm string
  tracing: probeevent: Do not accumulate on ret variable
  tracing: uprobes: Re-enable $comm support for uprobe events
  ftrace/x86_64: Emulate call function while updating in breakpoint handler
  x86_64: Allow breakpoints to emulate call instructions
  x86_64: Add gap to int3 to allow for call emulation
  tracing: kdb: Allow ftdump to skip all but the last few entries
  tracing: Add trace_total_entries() / trace_total_entries_cpu()
  ...
parents 2bbacd1a 693713cb
@@ -765,6 +765,37 @@ Here is the list of current tracers that may be configured.
 	tracers from tracing simply echo "nop" into
 	current_tracer.
 
+Error conditions
+----------------
+
+  For most ftrace commands, failure modes are obvious and communicated
+  using standard return codes.
+
+  For other more involved commands, extended error information may be
+  available via the tracing/error_log file.  For the commands that
+  support it, reading the tracing/error_log file after an error will
+  display more detailed information about what went wrong, if
+  information is available.  The tracing/error_log file is a circular
+  error log displaying a small number (currently, 8) of ftrace errors
+  for the last (8) failed commands.
+
+  The extended error information and usage takes the form shown in
+  this example::
+
+    # echo xxx > /sys/kernel/debug/tracing/events/sched/sched_wakeup/trigger
+    echo: write error: Invalid argument
+
+    # cat /sys/kernel/debug/tracing/error_log
+    [ 5348.887237] location: error: Couldn't yyy: zzz
+      Command: xxx
+               ^
+    [ 7517.023364] location: error: Bad rrr: sss
+      Command: ppp qqq
+               ^
+
+  To clear the error log, echo the empty string into it::
+
+    # echo > /sys/kernel/debug/tracing/error_log
+
 Examples of using the tracer
 ----------------------------
@@ -199,20 +199,8 @@ Extended error information
 For some error conditions encountered when invoking a hist trigger
 command, extended error information is available via the
-corresponding event's 'hist' file.  Reading the hist file after an
-error will display more detailed information about what went wrong,
-if information is available.  This extended error information will
-be available until the next hist trigger command for that event.
-
-If available for a given error condition, the extended error
-information and usage takes the following form::
-
-  # echo xxx > /sys/kernel/debug/tracing/events/sched/sched_wakeup/trigger
-  echo: write error: Invalid argument
-
-  # cat /sys/kernel/debug/tracing/events/sched/sched_wakeup/hist
-  ERROR: Couldn't yyy: zzz
-  Last command: xxx
+tracing/error_log file.  See Error Conditions in
+:file:`Documentation/trace/ftrace.rst` for details.
 
 6.2 'hist' trigger examples
 ---------------------------
@@ -7,7 +7,6 @@
 #ifndef CONFIG_DYNAMIC_FTRACE
 extern void (*ftrace_trace_function)(unsigned long, unsigned long,
 				     struct ftrace_ops*, struct pt_regs*);
-extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace);
 extern void ftrace_graph_caller(void);
 
 noinline void __naked ftrace_stub(unsigned long ip, unsigned long parent_ip,
@@ -51,7 +51,6 @@ void notrace __hot ftrace_function_trampoline(unsigned long parent,
 				unsigned long org_sp_gr3)
 {
 	extern ftrace_func_t ftrace_trace_function;  /* depends on CONFIG_DYNAMIC_FTRACE */
-	extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace);
 
 	if (ftrace_trace_function != ftrace_stub) {
 		/* struct ftrace_ops *op, struct pt_regs *regs); */
@@ -24,11 +24,6 @@
 #include <linux/sched/task_stack.h>
 
 #ifdef CONFIG_LIVEPATCH
-static inline int klp_check_compiler_support(void)
-{
-	return 0;
-}
-
 static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
 {
 	regs->nip = ip;
@@ -13,11 +13,6 @@
 #include <asm/ptrace.h>
 
-static inline int klp_check_compiler_support(void)
-{
-	return 0;
-}
-
 static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
 {
 	regs->psw.addr = ip;
@@ -31,6 +31,17 @@ config X86_64
 	select SWIOTLB
 	select ARCH_HAS_SYSCALL_WRAPPER
 
+config FORCE_DYNAMIC_FTRACE
+	def_bool y
+	depends on X86_32
+	depends on FUNCTION_TRACER
+	select DYNAMIC_FTRACE
+	help
+	  We keep the static function tracing (!DYNAMIC_FTRACE) around
+	  in order to test the non static function tracing in the
+	  generic code, as other architectures still use it. But we
+	  only need to keep it around for x86_64. No need to keep it
+	  for x86_32. For x86_32, force DYNAMIC_FTRACE.
+
 #
 # Arch settings
 #
@@ -878,7 +878,7 @@ apicinterrupt IRQ_WORK_VECTOR			irq_work_interrupt		smp_irq_work_interrupt
  * @paranoid == 2 is special: the stub will never switch stacks.  This is for
  * #DF: if the thread stack is somehow unusable, we'll still get a useful OOPS.
  */
-.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1 ist_offset=0
+.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1 ist_offset=0 create_gap=0
 ENTRY(\sym)
 	UNWIND_HINT_IRET_REGS offset=\has_error_code*8
 
@@ -898,6 +898,20 @@ ENTRY(\sym)
 	jnz	.Lfrom_usermode_switch_stack_\@
 	.endif
 
+	.if \create_gap == 1
+	/*
+	 * If coming from kernel space, create a 6-word gap to allow the
+	 * int3 handler to emulate a call instruction.
+	 */
+	testb	$3, CS-ORIG_RAX(%rsp)
+	jnz	.Lfrom_usermode_no_gap_\@
+	.rept	6
+	pushq	5*8(%rsp)
+	.endr
+	UNWIND_HINT_IRET_REGS offset=8
+.Lfrom_usermode_no_gap_\@:
+	.endif
+
 	.if \paranoid
 	call	paranoid_entry
 	.else
@@ -1129,7 +1143,7 @@ apicinterrupt3 HYPERV_STIMER0_VECTOR \
 #endif /* CONFIG_HYPERV */
 
 idtentry debug			do_debug		has_error_code=0	paranoid=1 shift_ist=IST_INDEX_DB ist_offset=DB_STACK_OFFSET
-idtentry int3			do_int3			has_error_code=0
+idtentry int3			do_int3			has_error_code=0	create_gap=1
 idtentry stack_segment		do_stack_segment	has_error_code=1
 
 #ifdef CONFIG_XEN_PV
@@ -3,12 +3,10 @@
 #define _ASM_X86_FTRACE_H
 
 #ifdef CONFIG_FUNCTION_TRACER
-#ifdef CC_USING_FENTRY
-# define MCOUNT_ADDR		((unsigned long)(__fentry__))
-#else
-# define MCOUNT_ADDR		((unsigned long)(mcount))
-# define HAVE_FUNCTION_GRAPH_FP_TEST
+#ifndef CC_USING_FENTRY
+# error Compiler does not support fentry?
 #endif
+# define MCOUNT_ADDR		((unsigned long)(__fentry__))
 #define MCOUNT_INSN_SIZE	5 /* sizeof mcount call */
 
 #ifdef CONFIG_DYNAMIC_FTRACE
@@ -24,14 +24,6 @@
 #include <asm/setup.h>
 #include <linux/ftrace.h>
 
-static inline int klp_check_compiler_support(void)
-{
-#ifndef CC_USING_FENTRY
-	return 1;
-#endif
-	return 0;
-}
-
 static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
 {
 	regs->ip = ip;
@@ -42,4 +42,34 @@ extern int after_bootmem;
 extern __ro_after_init struct mm_struct *poking_mm;
 extern __ro_after_init unsigned long poking_addr;
 
+#ifndef CONFIG_UML_X86
+static inline void int3_emulate_jmp(struct pt_regs *regs, unsigned long ip)
+{
+	regs->ip = ip;
+}
+
+#define INT3_INSN_SIZE 1
+#define CALL_INSN_SIZE 5
+
+#ifdef CONFIG_X86_64
+static inline void int3_emulate_push(struct pt_regs *regs, unsigned long val)
+{
+	/*
+	 * The int3 handler in entry_64.S adds a gap between the
+	 * stack where the break point happened, and the saving of
+	 * pt_regs. We can extend the original stack because of
+	 * this gap. See the idtentry macro's create_gap option.
+	 */
+	regs->sp -= sizeof(unsigned long);
+	*(unsigned long *)regs->sp = val;
+}
+
+static inline void int3_emulate_call(struct pt_regs *regs, unsigned long func)
+{
+	int3_emulate_push(regs, regs->ip - INT3_INSN_SIZE + CALL_INSN_SIZE);
+	int3_emulate_jmp(regs, func);
+}
+#endif /* CONFIG_X86_64 */
+#endif /* !CONFIG_UML_X86 */
+
 #endif /* _ASM_X86_TEXT_PATCHING_H */
@@ -29,6 +29,7 @@
 #include <asm/kprobes.h>
 #include <asm/ftrace.h>
 #include <asm/nops.h>
+#include <asm/text-patching.h>
 
 #ifdef CONFIG_DYNAMIC_FTRACE
 
@@ -231,6 +232,7 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
 }
 
 static unsigned long ftrace_update_func;
+static unsigned long ftrace_update_func_call;
 
 static int update_ftrace_func(unsigned long ip, void *new)
 {
@@ -259,6 +261,8 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
 	unsigned char *new;
 	int ret;
 
+	ftrace_update_func_call = (unsigned long)func;
+
 	new = ftrace_call_replace(ip, (unsigned long)func);
 
 	ret = update_ftrace_func(ip, new);
@@ -294,13 +298,28 @@ int ftrace_int3_handler(struct pt_regs *regs)
 	if (WARN_ON_ONCE(!regs))
 		return 0;
 
-	ip = regs->ip - 1;
-	if (!ftrace_location(ip) && !is_ftrace_caller(ip))
-		return 0;
-
-	regs->ip += MCOUNT_INSN_SIZE - 1;
+	ip = regs->ip - INT3_INSN_SIZE;
 
+#ifdef CONFIG_X86_64
+	if (ftrace_location(ip)) {
+		int3_emulate_call(regs, (unsigned long)ftrace_regs_caller);
+		return 1;
+	} else if (is_ftrace_caller(ip)) {
+		if (!ftrace_update_func_call) {
+			int3_emulate_jmp(regs, ip + CALL_INSN_SIZE);
+			return 1;
+		}
+		int3_emulate_call(regs, ftrace_update_func_call);
+		return 1;
+	}
+#else
+	if (ftrace_location(ip) || is_ftrace_caller(ip)) {
+		int3_emulate_jmp(regs, ip + CALL_INSN_SIZE);
+		return 1;
+	}
+#endif
+
+	return 0;
 }
 NOKPROBE_SYMBOL(ftrace_int3_handler);
 
@@ -865,6 +884,8 @@ void arch_ftrace_update_trampoline(struct ftrace_ops *ops)
 
 	func = ftrace_ops_get_func(ops);
 
+	ftrace_update_func_call = (unsigned long)func;
+
 	/* Do a safe modify in case the trampoline is executing */
 	new = ftrace_call_replace(ip, (unsigned long)func);
 	ret = update_ftrace_func(ip, new);
@@ -966,6 +987,7 @@ static int ftrace_mod_jmp(unsigned long ip, void *func)
 {
 	unsigned char *new;
 
+	ftrace_update_func_call = 0UL;
 	new = ftrace_jmp_replace(ip, (unsigned long)func);
 
 	return update_ftrace_func(ip, new);
@@ -10,22 +10,10 @@
 #include <asm/ftrace.h>
 #include <asm/nospec-branch.h>
 
-#ifdef CC_USING_FENTRY
 # define function_hook	__fentry__
 EXPORT_SYMBOL(__fentry__)
-#else
-# define function_hook	mcount
-EXPORT_SYMBOL(mcount)
-#endif
-
-#ifdef CONFIG_DYNAMIC_FTRACE
-
-/* mcount uses a frame pointer even if CONFIG_FRAME_POINTER is not set */
-#if !defined(CC_USING_FENTRY) || defined(CONFIG_FRAME_POINTER)
-# define USING_FRAME_POINTER
-#endif
 
-#ifdef USING_FRAME_POINTER
+#ifdef CONFIG_FRAME_POINTER
 # define MCOUNT_FRAME			1	/* using frame = true  */
 #else
 # define MCOUNT_FRAME			0	/* using frame = false */
@@ -37,8 +25,7 @@ END(function_hook)
 
 ENTRY(ftrace_caller)
 
-#ifdef USING_FRAME_POINTER
-# ifdef CC_USING_FENTRY
+#ifdef CONFIG_FRAME_POINTER
 	/*
 	 * Frame pointers are of ip followed by bp.
 	 * Since fentry is an immediate jump, we are left with
@@ -49,7 +36,7 @@ ENTRY(ftrace_caller)
 	pushl	%ebp
 	movl	%esp, %ebp
 	pushl	2*4(%esp)	/* function ip */
-# endif
+
 	/* For mcount, the function ip is directly above */
 	pushl	%ebp
 	movl	%esp, %ebp
@@ -59,7 +46,7 @@ ENTRY(ftrace_caller)
 	pushl	%edx
 	pushl	$0	/* Pass NULL as regs pointer */
 
-#ifdef USING_FRAME_POINTER
+#ifdef CONFIG_FRAME_POINTER
 	/* Load parent ebp into edx */
 	movl	4*4(%esp), %edx
 #else
@@ -82,13 +69,11 @@ ftrace_call:
 	popl	%edx
 	popl	%ecx
 	popl	%eax
-#ifdef USING_FRAME_POINTER
+#ifdef CONFIG_FRAME_POINTER
 	popl	%ebp
-# ifdef CC_USING_FENTRY
 	addl	$4,%esp		/* skip function ip */
 	popl	%ebp		/* this is the orig bp */
 	addl	$4, %esp	/* skip parent ip */
-# endif
 #endif
 .Lftrace_ret:
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
@@ -133,11 +118,7 @@ ENTRY(ftrace_regs_caller)
 
 	movl	12*4(%esp), %eax	/* Load ip (1st parameter) */
 	subl	$MCOUNT_INSN_SIZE, %eax	/* Adjust ip */
-#ifdef CC_USING_FENTRY
 	movl	15*4(%esp), %edx	/* Load parent ip (2nd parameter) */
-#else
-	movl	0x4(%ebp), %edx		/* Load parent ip (2nd parameter) */
-#endif
 	movl	function_trace_op, %ecx	/* Save ftrace_pos in 3rd parameter */
 	pushl	%esp			/* Save pt_regs as 4th parameter */
 
@@ -170,43 +151,6 @@ GLOBAL(ftrace_regs_call)
 	lea	3*4(%esp), %esp		/* Skip orig_ax, ip and cs */
 	jmp	.Lftrace_ret
 
-#else /* ! CONFIG_DYNAMIC_FTRACE */
-
-ENTRY(function_hook)
-	cmpl	$__PAGE_OFFSET, %esp
-	jb	ftrace_stub		/* Paging not enabled yet? */
-
-	cmpl	$ftrace_stub, ftrace_trace_function
-	jnz	.Ltrace
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-	cmpl	$ftrace_stub, ftrace_graph_return
-	jnz	ftrace_graph_caller
-
-	cmpl	$ftrace_graph_entry_stub, ftrace_graph_entry
-	jnz	ftrace_graph_caller
-#endif
-.globl ftrace_stub
-ftrace_stub:
-	ret
-
-	/* taken from glibc */
-.Ltrace:
-	pushl	%eax
-	pushl	%ecx
-	pushl	%edx
-	movl	0xc(%esp), %eax
-	movl	0x4(%ebp), %edx
-	subl	$MCOUNT_INSN_SIZE, %eax
-
-	movl	ftrace_trace_function, %ecx
-	CALL_NOSPEC %ecx
-
-	popl	%edx
-	popl	%ecx
-	popl	%eax
-	jmp	ftrace_stub
-END(function_hook)
-#endif /* CONFIG_DYNAMIC_FTRACE */
-
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 ENTRY(ftrace_graph_caller)
@@ -215,13 +159,8 @@ ENTRY(ftrace_graph_caller)
 	pushl	%edx
 	movl	3*4(%esp), %eax
 	/* Even with frame pointers, fentry doesn't have one here */
-#ifdef CC_USING_FENTRY
 	lea	4*4(%esp), %edx
 	movl	$0, %ecx
-#else
-	lea	0x4(%ebp), %edx
-	movl	(%ebp), %ecx
-#endif
 	subl	$MCOUNT_INSN_SIZE, %eax
 	call	prepare_ftrace_return
 	popl	%edx
@@ -234,11 +173,7 @@ END(ftrace_graph_caller)
 return_to_handler:
 	pushl	%eax
 	pushl	%edx
-#ifdef CC_USING_FENTRY
 	movl	$0, %eax
-#else
-	movl	%ebp, %eax
-#endif
 	call	ftrace_return_to_handler
 	movl	%eax, %ecx
 	popl	%edx
@@ -13,22 +13,12 @@
 	.code64
 	.section .entry.text, "ax"
 
-#ifdef CC_USING_FENTRY
 # define function_hook	__fentry__
 EXPORT_SYMBOL(__fentry__)
-#else
-# define function_hook	mcount
-EXPORT_SYMBOL(mcount)
-#endif
 
 #ifdef CONFIG_FRAME_POINTER
-# ifdef CC_USING_FENTRY
 /* Save parent and function stack frames (rip and rbp) */
 #  define MCOUNT_FRAME_SIZE	(8+16*2)
-# else
-/* Save just function stack frame (rip and rbp) */
-#  define MCOUNT_FRAME_SIZE	(8+16)
-# endif
 #else
 /* No need to save a stack frame */
 # define MCOUNT_FRAME_SIZE	0
@@ -75,17 +65,13 @@ EXPORT_SYMBOL(mcount)
 	 * fentry is called before the stack frame is set up, where as mcount
 	 * is called afterward.
 	 */
-#ifdef CC_USING_FENTRY
+
 	/* Save the parent pointer (skip orig rbp and our return address) */
 	pushq \added+8*2(%rsp)
 	pushq %rbp
 	movq %rsp, %rbp
 	/* Save the return address (now skip orig rbp, rbp and parent) */
 	pushq \added+8*3(%rsp)
-#else
-	/* Can't assume that rip is before this (unless added was zero) */
-	pushq \added+8(%rsp)
-#endif
 	pushq %rbp
 	movq %rsp, %rbp
 #endif /* CONFIG_FRAME_POINTER */
@@ -113,12 +99,7 @@ EXPORT_SYMBOL(mcount)
 	movq %rdx, RBP(%rsp)
 
 	/* Copy the parent address into %rsi (second parameter) */
-#ifdef CC_USING_FENTRY
 	movq MCOUNT_REG_SIZE+8+\added(%rsp), %rsi
-#else
-	/* %rdx contains original %rbp */
-	movq 8(%rdx), %rsi
-#endif
 
 	/* Move RIP to its proper location */
 	movq MCOUNT_REG_SIZE+\added(%rsp), %rdi
@@ -303,15 +284,8 @@ ENTRY(ftrace_graph_caller)
 	/* Saves rbp into %rdx and fills first parameter  */
 	save_mcount_regs
 
-#ifdef CC_USING_FENTRY
 	leaq MCOUNT_REG_SIZE+8(%rsp), %rsi
 	movq $0, %rdx	/* No framepointers needed */
-#else
-	/* Save address of the return address of traced function */
-	leaq 8(%rdx), %rsi
-	/* ftrace does sanity checks against frame pointers */
-	movq (%rdx), %rdx
-#endif
 	call	prepare_ftrace_return
 
 	restore_mcount_regs
@@ -53,23 +53,24 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
  * "Define 'is'", Bill Clinton
  * "Define 'if'", Steven Rostedt
  */
-#define if(cond, ...) __trace_if( (cond , ## __VA_ARGS__) )
-#define __trace_if(cond) \
-	if (__builtin_constant_p(!!(cond)) ? !!(cond) :			\
-	({								\
-		int ______r;						\
-		static struct ftrace_branch_data			\
-			__aligned(4)					\
-			__section("_ftrace_branch")			\
-			______f = {					\
-				.func = __func__,			\
-				.file = __FILE__,			\
-				.line = __LINE__,			\
-			};						\
-		______r = !!(cond);					\
-		______r ? ______f.miss_hit[1]++ : ______f.miss_hit[0]++;\
-		______r;						\
-	}))
+#define if(cond, ...) if ( __trace_if_var( !!(cond , ## __VA_ARGS__) ) )
+
+#define __trace_if_var(cond) (__builtin_constant_p(cond) ? (cond) : __trace_if_value(cond))
+
+#define __trace_if_value(cond) ({			\
+	static struct ftrace_branch_data		\
+		__aligned(4)				\
+		__section("_ftrace_branch")		\
+		__if_trace = {				\
+			.func = __func__,		\
+			.file = __FILE__,		\
+			.line = __LINE__,		\
+		};					\
+	(cond) ?					\
+		(__if_trace.miss_hit[1]++,1) :		\
+		(__if_trace.miss_hit[0]++,0);		\
+})
+
 #endif /* CONFIG_PROFILE_ALL_BRANCHES */
 
 #else
@@ -741,6 +741,8 @@ struct ftrace_graph_ret {
 typedef void (*trace_func_graph_ret_t)(struct ftrace_graph_ret *); /* return */
 typedef int (*trace_func_graph_ent_t)(struct ftrace_graph_ent *); /* entry */
 
+extern int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace);
+
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 
 struct fgraph_ops {
@@ -548,4 +548,19 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 
 #define TRACE_EVENT_PERF_PERM(event, expr...)
 
+#define DECLARE_EVENT_NOP(name, proto, args)				\
+	static inline void trace_##name(proto)				\
+	{ }								\
+	static inline bool trace_##name##_enabled(void)			\
+	{								\
+		return false;						\
+	}
+
+#define TRACE_EVENT_NOP(name, proto, args, struct, assign, print)	\
+	DECLARE_EVENT_NOP(name, PARAMS(proto), PARAMS(args))
+
+#define DECLARE_EVENT_CLASS_NOP(name, proto, args, tstruct, assign, print)
+
+#define DEFINE_EVENT_NOP(template, name, proto, args)			\
+	DECLARE_EVENT_NOP(name, PARAMS(proto), PARAMS(args))
+
 #endif /* ifdef TRACE_EVENT (see note above) */
@@ -46,6 +46,12 @@
 		assign, print, reg, unreg)			\
 	DEFINE_TRACE_FN(name, reg, unreg)
 
+#undef TRACE_EVENT_NOP
+#define TRACE_EVENT_NOP(name, proto, args, struct, assign, print)
+
+#undef DEFINE_EVENT_NOP
+#define DEFINE_EVENT_NOP(template, name, proto, args)
+
 #undef DEFINE_EVENT
 #define DEFINE_EVENT(template, name, proto, args) \
 	DEFINE_TRACE(name)
@@ -102,6 +108,8 @@
 #undef TRACE_EVENT_FN
 #undef TRACE_EVENT_FN_COND
 #undef TRACE_EVENT_CONDITION
+#undef TRACE_EVENT_NOP
+#undef DEFINE_EVENT_NOP
 #undef DECLARE_EVENT_CLASS
 #undef DEFINE_EVENT
 #undef DEFINE_EVENT_FN
@@ -7,6 +7,12 @@
 
 #include <linux/tracepoint.h>
 
+#ifdef CONFIG_RCU_TRACE
+#define TRACE_EVENT_RCU TRACE_EVENT
+#else
+#define TRACE_EVENT_RCU TRACE_EVENT_NOP
+#endif
+
 /*
  * Tracepoint for start/end markers used for utilization calculations.
  * By convention, the string is of the following forms:
@@ -35,8 +41,6 @@ TRACE_EVENT(rcu_utilization,
 	TP_printk("%s", __entry->s)
 );
 
-#ifdef CONFIG_RCU_TRACE
-
 #if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU)
 
 /*
@@ -62,7 +66,7 @@ TRACE_EVENT(rcu_utilization,
  * "end": End a grace period.
  * "cpuend": CPU first notices a grace-period end.
  */
-TRACE_EVENT(rcu_grace_period,
+TRACE_EVENT_RCU(rcu_grace_period,
 
 	TP_PROTO(const char *rcuname, unsigned long gp_seq, const char *gpevent),
 
@@ -101,7 +105,7 @@ TRACE_EVENT(rcu_grace_period,
  * "Cleanup": Clean up rcu_node structure after previous GP.
  * "CleanupMore": Clean up, and another GP is needed.
  */
-TRACE_EVENT(rcu_future_grace_period,
+TRACE_EVENT_RCU(rcu_future_grace_period,
 
 	TP_PROTO(const char *rcuname, unsigned long gp_seq,
 		 unsigned long gp_seq_req, u8 level, int grplo, int grphi,
@@ -141,7 +145,7 @@ TRACE_EVENT(rcu_future_grace_period,
  * rcu_node structure, and the mask of CPUs that will be waited for.
  * All but the type of RCU are extracted from the rcu_node structure.
  */
-TRACE_EVENT(rcu_grace_period_init,
+TRACE_EVENT_RCU(rcu_grace_period_init,
 
 	TP_PROTO(const char *rcuname, unsigned long gp_seq, u8 level,
 		 int grplo, int grphi, unsigned long qsmask),
@@ -186,7 +190,7 @@ TRACE_EVENT(rcu_grace_period_init,
  * "endwake": Woke piggybackers up.
  * "done": Someone else did the expedited grace period for us.
  */
-TRACE_EVENT(rcu_exp_grace_period,
+TRACE_EVENT_RCU(rcu_exp_grace_period,
 
 	TP_PROTO(const char *rcuname, unsigned long gpseq, const char *gpevent),
 
@@ -218,7 +222,7 @@ TRACE_EVENT(rcu_exp_grace_period,
  * "nxtlvl": Advance to next level of rcu_node funnel
  * "wait": Wait for someone else to do expedited GP
  */
-TRACE_EVENT(rcu_exp_funnel_lock,
+TRACE_EVENT_RCU(rcu_exp_funnel_lock,
 
 	TP_PROTO(const char *rcuname, u8 level, int grplo, int grphi,
 		 const char *gpevent),
@@ -269,7 +273,7 @@ TRACE_EVENT(rcu_exp_funnel_lock,
  * "WaitQueue": Enqueue partially done, timed wait for it to complete.
  * "WokeQueue": Partial enqueue now complete.
  */
-TRACE_EVENT(rcu_nocb_wake,
+TRACE_EVENT_RCU(rcu_nocb_wake,
 
 	TP_PROTO(const char *rcuname, int cpu, const char *reason),
 
@@ -297,7 +301,7 @@ TRACE_EVENT(rcu_nocb_wake,
  * include SRCU), the grace-period number that the task is blocking
  * (the current or the next), and the task's PID.
  */
-TRACE_EVENT(rcu_preempt_task,
+TRACE_EVENT_RCU(rcu_preempt_task,
 
 	TP_PROTO(const char *rcuname, int pid, unsigned long gp_seq),
 
@@ -324,7 +328,7 @@ TRACE_EVENT(rcu_preempt_task,
  * read-side critical section exiting that critical section.  Track the
  * type of RCU (which one day might include SRCU) and the task's PID.
  */
-TRACE_EVENT(rcu_unlock_preempted_task,
+TRACE_EVENT_RCU(rcu_unlock_preempted_task,
 
 	TP_PROTO(const char *rcuname, unsigned long gp_seq, int pid),
 
@@ -353,7 +357,7 @@ TRACE_EVENT(rcu_unlock_preempted_task,
  * whether there are any blocked tasks blocking the current grace period.
  * All but the type of RCU are extracted from the rcu_node structure.
  */
-TRACE_EVENT(rcu_quiescent_state_report,
+TRACE_EVENT_RCU(rcu_quiescent_state_report,
 
 	TP_PROTO(const char *rcuname, unsigned long gp_seq,
 		 unsigned long mask, unsigned long qsmask,
@@ -396,7 +400,7 @@ TRACE_EVENT(rcu_quiescent_state_report,
  * state, which can be "dti" for dyntick-idle mode or "kick" when kicking
 * a CPU that has been in dyntick-idle mode for too long.
 */
-TRACE_EVENT(rcu_fqs,
+TRACE_EVENT_RCU(rcu_fqs,
 
 	TP_PROTO(const char *rcuname, unsigned long gp_seq, int cpu, const char *qsevent),
 
@@ -436,7 +440,7 @@ TRACE_EVENT(rcu_fqs,
 * events use two separate counters, and that the "++=" and "--=" events
 * for irq/NMI will change the counter by two, otherwise by one.
 */
-TRACE_EVENT(rcu_dyntick,
+TRACE_EVENT_RCU(rcu_dyntick,
 
	TP_PROTO(const char *polarity, long oldnesting, long newnesting, atomic_t dynticks),
 
@@ -468,7 +472,7 @@ TRACE_EVENT(rcu_dyntick,
 * number of lazy callbacks queued, and the fourth element is the
 * total number of callbacks queued.
 */
-TRACE_EVENT(rcu_callback,
+TRACE_EVENT_RCU(rcu_callback,
 
	TP_PROTO(const char *rcuname, struct rcu_head *rhp, long qlen_lazy,
long qlen), long qlen),
...@@ -504,7 +508,7 @@ TRACE_EVENT(rcu_callback, ...@@ -504,7 +508,7 @@ TRACE_EVENT(rcu_callback,
* the fourth argument is the number of lazy callbacks queued, and the * the fourth argument is the number of lazy callbacks queued, and the
* fifth argument is the total number of callbacks queued. * fifth argument is the total number of callbacks queued.
*/ */
TRACE_EVENT(rcu_kfree_callback, TRACE_EVENT_RCU(rcu_kfree_callback,
TP_PROTO(const char *rcuname, struct rcu_head *rhp, unsigned long offset, TP_PROTO(const char *rcuname, struct rcu_head *rhp, unsigned long offset,
long qlen_lazy, long qlen), long qlen_lazy, long qlen),
...@@ -539,7 +543,7 @@ TRACE_EVENT(rcu_kfree_callback, ...@@ -539,7 +543,7 @@ TRACE_EVENT(rcu_kfree_callback,
* the total number of callbacks queued, and the fourth argument is * the total number of callbacks queued, and the fourth argument is
* the current RCU-callback batch limit. * the current RCU-callback batch limit.
*/ */
TRACE_EVENT(rcu_batch_start, TRACE_EVENT_RCU(rcu_batch_start,
TP_PROTO(const char *rcuname, long qlen_lazy, long qlen, long blimit), TP_PROTO(const char *rcuname, long qlen_lazy, long qlen, long blimit),
...@@ -569,7 +573,7 @@ TRACE_EVENT(rcu_batch_start, ...@@ -569,7 +573,7 @@ TRACE_EVENT(rcu_batch_start,
* The first argument is the type of RCU, and the second argument is * The first argument is the type of RCU, and the second argument is
* a pointer to the RCU callback itself. * a pointer to the RCU callback itself.
*/ */
TRACE_EVENT(rcu_invoke_callback, TRACE_EVENT_RCU(rcu_invoke_callback,
TP_PROTO(const char *rcuname, struct rcu_head *rhp), TP_PROTO(const char *rcuname, struct rcu_head *rhp),
...@@ -598,7 +602,7 @@ TRACE_EVENT(rcu_invoke_callback, ...@@ -598,7 +602,7 @@ TRACE_EVENT(rcu_invoke_callback,
* is the offset of the callback within the enclosing RCU-protected * is the offset of the callback within the enclosing RCU-protected
* data structure. * data structure.
*/ */
TRACE_EVENT(rcu_invoke_kfree_callback, TRACE_EVENT_RCU(rcu_invoke_kfree_callback,
TP_PROTO(const char *rcuname, struct rcu_head *rhp, unsigned long offset), TP_PROTO(const char *rcuname, struct rcu_head *rhp, unsigned long offset),
...@@ -631,7 +635,7 @@ TRACE_EVENT(rcu_invoke_kfree_callback, ...@@ -631,7 +635,7 @@ TRACE_EVENT(rcu_invoke_kfree_callback,
* and the sixth argument (risk) is the return value from * and the sixth argument (risk) is the return value from
* rcu_is_callbacks_kthread(). * rcu_is_callbacks_kthread().
*/ */
TRACE_EVENT(rcu_batch_end, TRACE_EVENT_RCU(rcu_batch_end,
TP_PROTO(const char *rcuname, int callbacks_invoked, TP_PROTO(const char *rcuname, int callbacks_invoked,
char cb, char nr, char iit, char risk), char cb, char nr, char iit, char risk),
...@@ -673,7 +677,7 @@ TRACE_EVENT(rcu_batch_end, ...@@ -673,7 +677,7 @@ TRACE_EVENT(rcu_batch_end,
* callback address can be NULL. * callback address can be NULL.
*/ */
#define RCUTORTURENAME_LEN 8 #define RCUTORTURENAME_LEN 8
TRACE_EVENT(rcu_torture_read, TRACE_EVENT_RCU(rcu_torture_read,
TP_PROTO(const char *rcutorturename, struct rcu_head *rhp, TP_PROTO(const char *rcutorturename, struct rcu_head *rhp,
unsigned long secs, unsigned long c_old, unsigned long c), unsigned long secs, unsigned long c_old, unsigned long c),
...@@ -721,7 +725,7 @@ TRACE_EVENT(rcu_torture_read, ...@@ -721,7 +725,7 @@ TRACE_EVENT(rcu_torture_read,
* The "cpu" argument is the CPU or -1 if meaningless, the "cnt" argument * The "cpu" argument is the CPU or -1 if meaningless, the "cnt" argument
* is the count of remaining callbacks, and "done" is the piggybacking count. * is the count of remaining callbacks, and "done" is the piggybacking count.
*/ */
TRACE_EVENT(rcu_barrier, TRACE_EVENT_RCU(rcu_barrier,
TP_PROTO(const char *rcuname, const char *s, int cpu, int cnt, unsigned long done), TP_PROTO(const char *rcuname, const char *s, int cpu, int cnt, unsigned long done),
...@@ -748,41 +752,6 @@ TRACE_EVENT(rcu_barrier, ...@@ -748,41 +752,6 @@ TRACE_EVENT(rcu_barrier,
__entry->done) __entry->done)
); );
#else /* #ifdef CONFIG_RCU_TRACE */
#define trace_rcu_grace_period(rcuname, gp_seq, gpevent) do { } while (0)
#define trace_rcu_future_grace_period(rcuname, gp_seq, gp_seq_req, \
level, grplo, grphi, event) \
do { } while (0)
#define trace_rcu_grace_period_init(rcuname, gp_seq, level, grplo, grphi, \
qsmask) do { } while (0)
#define trace_rcu_exp_grace_period(rcuname, gqseq, gpevent) \
do { } while (0)
#define trace_rcu_exp_funnel_lock(rcuname, level, grplo, grphi, gpevent) \
do { } while (0)
#define trace_rcu_nocb_wake(rcuname, cpu, reason) do { } while (0)
#define trace_rcu_preempt_task(rcuname, pid, gp_seq) do { } while (0)
#define trace_rcu_unlock_preempted_task(rcuname, gp_seq, pid) do { } while (0)
#define trace_rcu_quiescent_state_report(rcuname, gp_seq, mask, qsmask, level, \
grplo, grphi, gp_tasks) do { } \
while (0)
#define trace_rcu_fqs(rcuname, gp_seq, cpu, qsevent) do { } while (0)
#define trace_rcu_dyntick(polarity, oldnesting, newnesting, dyntick) do { } while (0)
#define trace_rcu_callback(rcuname, rhp, qlen_lazy, qlen) do { } while (0)
#define trace_rcu_kfree_callback(rcuname, rhp, offset, qlen_lazy, qlen) \
do { } while (0)
#define trace_rcu_batch_start(rcuname, qlen_lazy, qlen, blimit) \
do { } while (0)
#define trace_rcu_invoke_callback(rcuname, rhp) do { } while (0)
#define trace_rcu_invoke_kfree_callback(rcuname, rhp, offset) do { } while (0)
#define trace_rcu_batch_end(rcuname, callbacks_invoked, cb, nr, iit, risk) \
do { } while (0)
#define trace_rcu_torture_read(rcutorturename, rhp, secs, c_old, c) \
do { } while (0)
#define trace_rcu_barrier(name, s, cpu, cnt, done) do { } while (0)
#endif /* #else #ifdef CONFIG_RCU_TRACE */
#endif /* _TRACE_RCU_H */ #endif /* _TRACE_RCU_H */
/* This part must be outside protection */ /* This part must be outside protection */
@@ -242,7 +242,6 @@ DEFINE_EVENT(sched_process_template, sched_process_free,
 	TP_PROTO(struct task_struct *p),
 	TP_ARGS(p));
 /*
  * Tracepoint for a task exiting:
  */
@@ -336,11 +335,20 @@ TRACE_EVENT(sched_process_exec,
 		  __entry->pid, __entry->old_pid)
 );
+#ifdef CONFIG_SCHEDSTATS
+#define DEFINE_EVENT_SCHEDSTAT DEFINE_EVENT
+#define DECLARE_EVENT_CLASS_SCHEDSTAT DECLARE_EVENT_CLASS
+#else
+#define DEFINE_EVENT_SCHEDSTAT DEFINE_EVENT_NOP
+#define DECLARE_EVENT_CLASS_SCHEDSTAT DECLARE_EVENT_CLASS_NOP
+#endif
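The SCHEDSTAT aliases above use the TRACE_EVENT_NOP() pattern introduced in this pull: point a macro at the real event-definition macro when the config option is set, and at a no-op variant otherwise, so call sites compile unchanged either way. A minimal user-space sketch of the idea (the flag and macro names here are illustrative, not the kernel's):

```c
#include <stdio.h>

#define MY_SCHEDSTATS 1		/* flip to 0 to compile the event out */

#if MY_SCHEDSTATS
/* "Real" variant: the generated function does work and reports success. */
#define MY_DEFINE_EVENT(name) \
	static int trace_##name(long delay) { printf(#name ": %ld\n", delay); return 1; }
#else
/* No-op variant: same signature, does nothing, so callers need no #ifdef. */
#define MY_DEFINE_EVENT(name) \
	static int trace_##name(long delay) { (void)delay; return 0; }
#endif

/* One macro invocation defines the event either way. */
MY_DEFINE_EVENT(sched_stat_wait)
```

The key property is that `trace_sched_stat_wait()` exists with the same signature in both configurations, so no `#ifdef CONFIG_SCHEDSTATS` is ever needed at the call site.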
 /*
  * XXX the below sched_stat tracepoints only apply to SCHED_OTHER/BATCH/IDLE
  *     adding sched_stat support to SCHED_FIFO/RR would be welcome.
  */
-DECLARE_EVENT_CLASS(sched_stat_template,
+DECLARE_EVENT_CLASS_SCHEDSTAT(sched_stat_template,
 	TP_PROTO(struct task_struct *tsk, u64 delay),
@@ -363,12 +371,11 @@ DECLARE_EVENT_CLASS(sched_stat_template,
 		  (unsigned long long)__entry->delay)
 );
 /*
  * Tracepoint for accounting wait time (time the task is runnable
  * but not actually running due to scheduler contention).
  */
-DEFINE_EVENT(sched_stat_template, sched_stat_wait,
+DEFINE_EVENT_SCHEDSTAT(sched_stat_template, sched_stat_wait,
 	TP_PROTO(struct task_struct *tsk, u64 delay),
 	TP_ARGS(tsk, delay));
@@ -376,7 +383,7 @@ DEFINE_EVENT(sched_stat_template, sched_stat_wait,
  * Tracepoint for accounting sleep time (time the task is not runnable,
  * including iowait, see below).
  */
-DEFINE_EVENT(sched_stat_template, sched_stat_sleep,
+DEFINE_EVENT_SCHEDSTAT(sched_stat_template, sched_stat_sleep,
 	TP_PROTO(struct task_struct *tsk, u64 delay),
 	TP_ARGS(tsk, delay));
@@ -384,14 +391,14 @@ DEFINE_EVENT(sched_stat_template, sched_stat_sleep,
  * Tracepoint for accounting iowait time (time the task is not runnable
  * due to waiting on IO to complete).
  */
-DEFINE_EVENT(sched_stat_template, sched_stat_iowait,
+DEFINE_EVENT_SCHEDSTAT(sched_stat_template, sched_stat_iowait,
 	TP_PROTO(struct task_struct *tsk, u64 delay),
 	TP_ARGS(tsk, delay));
 /*
  * Tracepoint for accounting blocked time (time the task is in uninterruptible).
  */
-DEFINE_EVENT(sched_stat_template, sched_stat_blocked,
+DEFINE_EVENT_SCHEDSTAT(sched_stat_template, sched_stat_blocked,
 	TP_PROTO(struct task_struct *tsk, u64 delay),
 	TP_ARGS(tsk, delay));
@@ -1208,14 +1208,6 @@ void klp_module_going(struct module *mod)
 static int __init klp_init(void)
 {
-	int ret;
-	ret = klp_check_compiler_support();
-	if (ret) {
-		pr_info("Your compiler is too old; turning off.\n");
-		return -EINVAL;
-	}
 	klp_root_kobj = kobject_create_and_add("livepatch", kernel_kobj);
 	if (!klp_root_kobj)
 		return -ENOMEM;
@@ -11,11 +11,6 @@
 #define __LINUX_RCU_H
 #include <trace/events/rcu.h>
-#ifdef CONFIG_RCU_TRACE
-#define RCU_TRACE(stmt) stmt
-#else /* #ifdef CONFIG_RCU_TRACE */
-#define RCU_TRACE(stmt)
-#endif /* #else #ifdef CONFIG_RCU_TRACE */
 /* Offset to allow distinguishing irq vs. task-based idle entry/exit. */
 #define DYNTICK_IRQ_NONIDLE	((LONG_MAX / 2) + 1)
@@ -216,12 +211,12 @@ static inline bool __rcu_reclaim(const char *rn, struct rcu_head *head)
 	rcu_lock_acquire(&rcu_callback_map);
 	if (__is_kfree_rcu_offset(offset)) {
-		RCU_TRACE(trace_rcu_invoke_kfree_callback(rn, head, offset);)
+		trace_rcu_invoke_kfree_callback(rn, head, offset);
 		kfree((void *)head - offset);
 		rcu_lock_release(&rcu_callback_map);
 		return true;
 	} else {
-		RCU_TRACE(trace_rcu_invoke_callback(rn, head);)
+		trace_rcu_invoke_callback(rn, head);
 		f = head->func;
 		WRITE_ONCE(head->func, (rcu_callback_t)0L);
 		f(head);
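Because the tracepoints are now defined through TRACE_EVENT_RCU, which compiles down to a no-op event when !CONFIG_RCU_TRACE, callers no longer need the RCU_TRACE() statement wrapper being deleted here. For reference, a user-space sketch of what that wrapper did (the flag name is an illustrative stand-in for CONFIG_RCU_TRACE):

```c
#define MY_RCU_TRACE_ENABLED 0	/* illustrative stand-in for CONFIG_RCU_TRACE */

#if MY_RCU_TRACE_ENABLED
#define MY_RCU_TRACE(stmt) stmt
#else
#define MY_RCU_TRACE(stmt)	/* statement vanishes at compile time */
#endif

static int trace_calls;
static void fake_trace(void) { trace_calls++; }

/* With tracing configured off, the wrapped statement is never emitted. */
static int reclaim_one(void)
{
	MY_RCU_TRACE(fake_trace();)	/* compiled out in this configuration */
	return trace_calls;
}
```

The wrapper's drawback, and the reason for its removal, is that every call site had to know about it; a no-op tracepoint puts that decision in one place.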
@@ -1969,14 +1969,14 @@ rcu_check_quiescent_state(struct rcu_data *rdp)
  */
 int rcutree_dying_cpu(unsigned int cpu)
 {
-	RCU_TRACE(bool blkd;)
-	RCU_TRACE(struct rcu_data *rdp = this_cpu_ptr(&rcu_data);)
-	RCU_TRACE(struct rcu_node *rnp = rdp->mynode;)
+	bool blkd;
+	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
+	struct rcu_node *rnp = rdp->mynode;
 	if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
 		return 0;
-	RCU_TRACE(blkd = !!(rnp->qsmask & rdp->grpmask);)
+	blkd = !!(rnp->qsmask & rdp->grpmask);
 	trace_rcu_grace_period(rcu_state.name, rnp->gp_seq,
 			       blkd ? TPS("cpuofl") : TPS("cpuofl-bgp"));
 	return 0;
@@ -70,12 +70,8 @@
 #define INIT_OPS_HASH(opsname)	\
 	.func_hash		= &opsname.local_hash,			\
 	.local_hash.regex_lock	= __MUTEX_INITIALIZER(opsname.local_hash.regex_lock),
-#define ASSIGN_OPS_HASH(opsname, val) \
-	.func_hash		= val, \
-	.local_hash.regex_lock	= __MUTEX_INITIALIZER(opsname.local_hash.regex_lock),
 #else
 #define INIT_OPS_HASH(opsname)
-#define ASSIGN_OPS_HASH(opsname, val)
 #endif
 enum {
@@ -3880,7 +3876,7 @@ static int ftrace_hash_move_and_update_ops(struct ftrace_ops *ops,
 static bool module_exists(const char *module)
 {
 	/* All modules have the symbol __this_module */
-	const char this_mod[] = "__this_module";
+	static const char this_mod[] = "__this_module";
 	char modname[MAX_PARAM_PREFIX_LEN + sizeof(this_mod) + 2];
 	unsigned long val;
 	int n;
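Making this_mod `static const` avoids copying the string onto the stack on every call, while `sizeof(this_mod)` still yields the full array length (including the NUL) for sizing the modname buffer. A sketch of that lookup-name construction under the same sizing (build_symbol_name and the prefix length are illustrative, not the kernel code):

```c
#include <stdio.h>
#include <string.h>

#define MY_PARAM_PREFIX_LEN 64	/* illustrative stand-in for MAX_PARAM_PREFIX_LEN */

static const char this_mod[] = "__this_module";

/* Build "module:__this_module"; the buffer is sized from sizeof(this_mod),
 * which already includes the terminating NUL, plus slack for the separator. */
static const char *build_symbol_name(const char *module)
{
	static char modname[MY_PARAM_PREFIX_LEN + sizeof(this_mod) + 2];

	snprintf(modname, sizeof(modname), "%s:%s", module, this_mod);
	return modname;
}
```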
@@ -6265,6 +6261,9 @@ __ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
 	preempt_disable_notrace();
 	do_for_each_ftrace_op(op, ftrace_ops_list) {
+		/* Stub functions don't need to be called nor tested */
+		if (op->flags & FTRACE_OPS_FL_STUB)
+			continue;
 		/*
 		 * Check the following for each ops before calling their func:
 		 *  if RCU flag is set, then rcu_is_watching() must be true
@@ -4979,7 +4979,7 @@ static __init int rb_write_something(struct rb_test_data *data, bool nested)
 	cnt = data->cnt + (nested ? 27 : 0);
 	/* Multiply cnt by ~e, to make some unique increment */
-	size = (data->cnt * 68 / 25) % (sizeof(rb_string) - 1);
+	size = (cnt * 68 / 25) % (sizeof(rb_string) - 1);
 	len = size + sizeof(struct rb_item);
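The fix above makes the self-test's string length depend on cnt, which includes the nested writer's +27 offset, rather than on data->cnt, so nested and non-nested writes now produce different sizes. The arithmetic, sketched standalone (the 64-byte length is an illustrative stand-in for sizeof(rb_string) - 1):

```c
#define RB_STRING_MAX 64	/* stand-in for sizeof(rb_string) - 1 */

static int rb_test_size(int data_cnt, int nested)
{
	/* Nested writers offset the count so their entries differ. */
	int cnt = data_cnt + (nested ? 27 : 0);

	/* Multiply cnt by ~e (68/25) to make some unique increment. */
	return (cnt * 68 / 25) % RB_STRING_MAX;
}
```

With the old formula both branches reduced to the same expression in data_cnt, defeating the point of the nested offset.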
@@ -362,7 +362,7 @@ static void ring_buffer_producer(void)
 		hit--; /* make it non zero */
 	}
-	/* Caculate the average time in nanosecs */
+	/* Calculate the average time in nanosecs */
 	avg = NSEC_PER_MSEC / (hit + missed);
 	trace_printk("%ld ns per entry\n", avg);
 }
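The corrected comment documents a simple rate inversion: the benchmark counts entries produced per millisecond, then divides to get nanoseconds per entry. Isolated as a sketch:

```c
#define MY_NSEC_PER_MSEC 1000000L

/* hit + missed entries were produced in one millisecond; return ns/entry.
 * The caller must guarantee hit + missed != 0, as the benchmark's
 * "make it non zero" guard does. */
static long avg_ns_per_entry(long hit, long missed)
{
	return MY_NSEC_PER_MSEC / (hit + missed);
}
```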
@@ -293,11 +293,13 @@ struct trace_array {
 	int			nr_topts;
 	bool			clear_trace;
 	int			buffer_percent;
+	unsigned int		n_err_log_entries;
 	struct tracer		*current_trace;
 	unsigned int		trace_flags;
 	unsigned char		trace_flags_index[TRACE_FLAGS_MAX_SIZE];
 	unsigned int		flags;
 	raw_spinlock_t		start_lock;
+	struct list_head	err_log;
 	struct dentry		*dir;
 	struct dentry		*options;
 	struct dentry		*percpu_dir;
@@ -719,6 +721,9 @@ void trace_init_global_iter(struct trace_iterator *iter);
 void tracing_iter_reset(struct trace_iterator *iter, int cpu);
+unsigned long trace_total_entries_cpu(struct trace_array *tr, int cpu);
+unsigned long trace_total_entries(struct trace_array *tr);
 void trace_function(struct trace_array *tr,
 		    unsigned long ip,
 		    unsigned long parent_ip,
@@ -1545,7 +1550,8 @@ extern int apply_subsystem_event_filter(struct trace_subsystem_dir *dir,
 extern void print_subsystem_event_filter(struct event_subsystem *system,
 					 struct trace_seq *s);
 extern int filter_assign_type(const char *type);
-extern int create_event_filter(struct trace_event_call *call,
+extern int create_event_filter(struct trace_array *tr,
+			       struct trace_event_call *call,
 			       char *filter_str, bool set_str,
 			       struct event_filter **filterp);
 extern void free_event_filter(struct event_filter *filter);
@@ -1876,6 +1882,11 @@ extern ssize_t trace_parse_run_command(struct file *file,
 		const char __user *buffer, size_t count, loff_t *ppos,
 		int (*createfn)(int, char**));
+extern unsigned int err_pos(char *cmd, const char *str);
+extern void tracing_log_err(struct trace_array *tr,
+			    const char *loc, const char *cmd,
+			    const char **errs, u8 type, u8 pos);
 /*
  * Normal trace_printk() and friends allocates special buffers
  * to do the manipulation, as well as saves the print formats
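err_pos() and tracing_log_err() form the new consolidated error-log API from this pull: err_pos() locates the offending token inside the failed command string so the logged error can place a caret under it. A hypothetical user-space equivalent of the position lookup (my_err_pos is not the kernel function, just a sketch of the behavior):

```c
#include <string.h>

/* Offset of the first occurrence of str within cmd; 0 if absent,
 * which leaves the caret at the start of the command. */
static unsigned int my_err_pos(const char *cmd, const char *str)
{
	const char *found;

	if (!cmd || !str)
		return 0;
	found = strstr(cmd, str);
	return found ? (unsigned int)(found - cmd) : 0;
}
```

The returned offset is what a viewer of the tracing error-log file would see rendered as `^` beneath the bad token.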
@@ -832,6 +832,7 @@ static int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set)
 	return ret;
 }
+EXPORT_SYMBOL_GPL(ftrace_set_clr_event);
 /**
  * trace_set_clr_event - enable or disable an event
@@ -1318,9 +1319,6 @@ event_id_read(struct file *filp, char __user *ubuf, size_t cnt, loff_t *ppos)
 	char buf[32];
 	int len;
-	if (*ppos)
-		return 0;
 	if (unlikely(!id))
 		return -ENODEV;
@@ -66,7 +66,8 @@ static const char * ops[] = { OPS };
 	C(INVALID_FILTER,	"Meaningless filter expression"),	\
 	C(IP_FIELD_ONLY,	"Only 'ip' field is supported for function trace"), \
 	C(INVALID_VALUE,	"Invalid value (did you forget quotes)?"), \
-	C(NO_FILTER,		"No filter found"),
+	C(ERRNO,		"Error"),				\
+	C(NO_FILTER,		"No filter found")
 #undef C
 #define C(a, b)		FILT_ERR_##a
@@ -76,7 +77,7 @@ enum { ERRORS };
 #undef C
 #define C(a, b)		b
-static char *err_text[] = { ERRORS };
+static const char *err_text[] = { ERRORS };
 /* Called after a '!' character but "!=" and "!~" are not "not"s */
 static bool is_not(const char *str)
@@ -919,7 +920,8 @@ static void remove_filter_string(struct event_filter *filter)
 	filter->filter_string = NULL;
 }
-static void append_filter_err(struct filter_parse_error *pe,
+static void append_filter_err(struct trace_array *tr,
+			      struct filter_parse_error *pe,
 			      struct event_filter *filter)
 {
 	struct trace_seq *s;
@@ -947,8 +949,14 @@ static void append_filter_err(struct filter_parse_error *pe,
 	if (pe->lasterr > 0) {
 		trace_seq_printf(s, "\n%*s", pos, "^");
 		trace_seq_printf(s, "\nparse_error: %s\n", err_text[pe->lasterr]);
+		tracing_log_err(tr, "event filter parse error",
+				filter->filter_string, err_text,
+				pe->lasterr, pe->lasterr_pos);
 	} else {
 		trace_seq_printf(s, "\nError: (%d)\n", pe->lasterr);
+		tracing_log_err(tr, "event filter parse error",
+				filter->filter_string, err_text,
+				FILT_ERR_ERRNO, 0);
 	}
 	trace_seq_putc(s, 0);
 	buf = kmemdup_nul(s->buffer, s->seq.len, GFP_KERNEL);
@@ -1600,7 +1608,7 @@ static int process_system_preds(struct trace_subsystem_dir *dir,
 		if (err) {
 			filter_disable(file);
 			parse_error(pe, FILT_ERR_BAD_SUBSYS_FILTER, 0);
-			append_filter_err(pe, filter);
+			append_filter_err(tr, pe, filter);
 		} else
 			event_set_filtered_flag(file);
@@ -1712,7 +1720,8 @@ static void create_filter_finish(struct filter_parse_error *pe)
  * information if @set_str is %true and the caller is responsible for
  * freeing it.
  */
-static int create_filter(struct trace_event_call *call,
+static int create_filter(struct trace_array *tr,
+			 struct trace_event_call *call,
 			 char *filter_string, bool set_str,
 			 struct event_filter **filterp)
 {
@@ -1729,17 +1738,18 @@ static int create_filter(struct trace_event_call *call,
 	err = process_preds(call, filter_string, *filterp, pe);
 	if (err && set_str)
-		append_filter_err(pe, *filterp);
+		append_filter_err(tr, pe, *filterp);
 	create_filter_finish(pe);
 	return err;
 }
-int create_event_filter(struct trace_event_call *call,
+int create_event_filter(struct trace_array *tr,
+			struct trace_event_call *call,
 			char *filter_str, bool set_str,
 			struct event_filter **filterp)
 {
-	return create_filter(call, filter_str, set_str, filterp);
+	return create_filter(tr, call, filter_str, set_str, filterp);
 }
 /**
@@ -1766,7 +1776,7 @@ static int create_system_filter(struct trace_subsystem_dir *dir,
 			kfree((*filterp)->filter_string);
 			(*filterp)->filter_string = NULL;
 		} else {
-			append_filter_err(pe, *filterp);
+			append_filter_err(tr, pe, *filterp);
 		}
 	}
 	create_filter_finish(pe);
@@ -1797,7 +1807,7 @@ int apply_event_filter(struct trace_event_file *file, char *filter_string)
 		return 0;
 	}
-	err = create_filter(call, filter_string, true, &filter);
+	err = create_filter(file->tr, call, filter_string, true, &filter);
 	/*
 	 * Always swap the call filter with the new filter
@@ -2053,7 +2063,7 @@ int ftrace_profile_set_filter(struct perf_event *event, int event_id,
 	if (event->filter)
 		goto out_unlock;
-	err = create_filter(call, filter_str, false, &filter);
+	err = create_filter(NULL, call, filter_str, false, &filter);
 	if (err)
 		goto free_filter;
@@ -2202,8 +2212,8 @@ static __init int ftrace_test_event_filter(void)
 		struct test_filter_data_t *d = &test_filter_data[i];
 		int err;
-		err = create_filter(&event_ftrace_test_filter, d->filter,
-				    false, &filter);
+		err = create_filter(NULL, &event_ftrace_test_filter,
+				    d->filter, false, &filter);
 		if (err) {
 			printk(KERN_INFO
 			       "Failed to get filter for '%s', err %d\n",
@@ -731,7 +731,8 @@ int set_trigger_filter(char *filter_str,
 		goto out;
 	/* The filter is for the 'trigger' event, not the triggered event */
-	ret = create_event_filter(file->event_call, filter_str, false, &filter);
+	ret = create_event_filter(file->tr, file->event_call,
+				  filter_str, false, &filter);
 	/*
 	 * If create_event_filter() fails, filter still needs to be freed.
 	 * Which the calling code will do with data->filter.
@@ -17,29 +17,25 @@
 #include "trace.h"
 #include "trace_output.h"
-static void ftrace_dump_buf(int skip_lines, long cpu_file)
+static struct trace_iterator iter;
+static struct ring_buffer_iter *buffer_iter[CONFIG_NR_CPUS];
+
+static void ftrace_dump_buf(int skip_entries, long cpu_file)
 {
-	/* use static because iter can be a bit big for the stack */
-	static struct trace_iterator iter;
-	static struct ring_buffer_iter *buffer_iter[CONFIG_NR_CPUS];
 	struct trace_array *tr;
 	unsigned int old_userobj;
 	int cnt = 0, cpu;
-	trace_init_global_iter(&iter);
-	iter.buffer_iter = buffer_iter;
 	tr = iter.tr;
-	for_each_tracing_cpu(cpu) {
-		atomic_inc(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
-	}
 	old_userobj = tr->trace_flags;
 	/* don't look at user memory in panic mode */
 	tr->trace_flags &= ~TRACE_ITER_SYM_USEROBJ;
 	kdb_printf("Dumping ftrace buffer:\n");
+	if (skip_entries)
+		kdb_printf("(skipping %d entries)\n", skip_entries);
 	/* reset all but tr, trace, and overruns */
 	memset(&iter.seq, 0,
@@ -70,11 +66,11 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
 		kdb_printf("---------------------------------\n");
 		cnt++;
-		if (!skip_lines) {
+		if (!skip_entries) {
 			print_trace_line(&iter);
 			trace_printk_seq(&iter.seq);
 		} else {
-			skip_lines--;
+			skip_entries--;
 		}
 		if (KDB_FLAG(CMD_INTERRUPT))
@@ -89,10 +85,6 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
 out:
 	tr->trace_flags = old_userobj;
-	for_each_tracing_cpu(cpu) {
-		atomic_dec(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
-	}
 	for_each_tracing_cpu(cpu) {
 		if (iter.buffer_iter[cpu]) {
 			ring_buffer_read_finish(iter.buffer_iter[cpu]);
@@ -106,17 +98,19 @@ static void ftrace_dump_buf(int skip_lines, long cpu_file)
  */
 static int kdb_ftdump(int argc, const char **argv)
 {
-	int skip_lines = 0;
+	int skip_entries = 0;
 	long cpu_file;
 	char *cp;
+	int cnt;
+	int cpu;
 	if (argc > 2)
 		return KDB_ARGCOUNT;
 	if (argc) {
-		skip_lines = simple_strtol(argv[1], &cp, 0);
+		skip_entries = simple_strtol(argv[1], &cp, 0);
 		if (*cp)
-			skip_lines = 0;
+			skip_entries = 0;
 	}
 	if (argc == 2) {
@@ -129,7 +123,29 @@ static int kdb_ftdump(int argc, const char **argv)
 	}
 	kdb_trap_printk++;
-	ftrace_dump_buf(skip_lines, cpu_file);
+
+	trace_init_global_iter(&iter);
+	iter.buffer_iter = buffer_iter;
+
+	for_each_tracing_cpu(cpu) {
+		atomic_inc(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
+	}
+
+	/* A negative skip_entries means skip all but the last entries */
+	if (skip_entries < 0) {
+		if (cpu_file == RING_BUFFER_ALL_CPUS)
+			cnt = trace_total_entries(NULL);
+		else
+			cnt = trace_total_entries_cpu(NULL, cpu_file);
+		skip_entries = max(cnt + skip_entries, 0);
+	}
+
+	ftrace_dump_buf(skip_entries, cpu_file);
+
+	for_each_tracing_cpu(cpu) {
+		atomic_dec(&per_cpu_ptr(iter.trace_buffer->data, cpu)->disabled);
+	}
+
 	kdb_trap_printk--;
 	return 0;
@@ -137,8 +153,9 @@ static int kdb_ftdump(int argc, const char **argv)
 static __init int kdb_ftrace_register(void)
 {
-	kdb_register_flags("ftdump", kdb_ftdump, "[skip_#lines] [cpu]",
-		"Dump ftrace log", 0, KDB_ENABLE_ALWAYS_SAFE);
+	kdb_register_flags("ftdump", kdb_ftdump, "[skip_#entries] [cpu]",
+		"Dump ftrace log; -skip dumps last #entries", 0,
+		KDB_ENABLE_ALWAYS_SAFE);
 	return 0;
 }
......
@@ -441,13 +441,8 @@ static int __register_trace_kprobe(struct trace_kprobe *tk)
 	else
 		ret = register_kprobe(&tk->rp.kp);
-	if (ret == 0) {
+	if (ret == 0)
 		tk->tp.flags |= TP_FLAG_REGISTERED;
-	} else if (ret == -EILSEQ) {
-		pr_warn("Probing address(0x%p) is not an instruction boundary.\n",
-			tk->rp.kp.addr);
-		ret = -EINVAL;
-	}
 	return ret;
 }
@@ -591,7 +586,7 @@ static int trace_kprobe_create(int argc, const char *argv[])
 	 * Type of args:
 	 *  FETCHARG:TYPE : use TYPE instead of unsigned long.
 	 */
-	struct trace_kprobe *tk;
+	struct trace_kprobe *tk = NULL;
 	int i, len, ret = 0;
 	bool is_return = false;
 	char *symbol = NULL, *tmp = NULL;
@@ -615,44 +610,50 @@ static int trace_kprobe_create(int argc, const char *argv[])
 	if (argc < 2)
 		return -ECANCELED;
+	trace_probe_log_init("trace_kprobe", argc, argv);
 	event = strchr(&argv[0][1], ':');
 	if (event)
 		event++;
 	if (isdigit(argv[0][1])) {
 		if (!is_return) {
-			pr_info("Maxactive is not for kprobe");
-			return -EINVAL;
+			trace_probe_log_err(1, MAXACT_NO_KPROBE);
+			goto parse_error;
 		}
 		if (event)
 			len = event - &argv[0][1] - 1;
 		else
 			len = strlen(&argv[0][1]);
-		if (len > MAX_EVENT_NAME_LEN - 1)
-			return -E2BIG;
+		if (len > MAX_EVENT_NAME_LEN - 1) {
+			trace_probe_log_err(1, BAD_MAXACT);
+			goto parse_error;
+		}
 		memcpy(buf, &argv[0][1], len);
 		buf[len] = '\0';
 		ret = kstrtouint(buf, 0, &maxactive);
 		if (ret || !maxactive) {
-			pr_info("Invalid maxactive number\n");
-			return ret;
+			trace_probe_log_err(1, BAD_MAXACT);
+			goto parse_error;
 		}
 		/* kretprobes instances are iterated over via a list. The
 		 * maximum should stay reasonable.
 		 */
 		if (maxactive > KRETPROBE_MAXACTIVE_MAX) {
-			pr_info("Maxactive is too big (%d > %d).\n",
-				maxactive, KRETPROBE_MAXACTIVE_MAX);
-			return -E2BIG;
+			trace_probe_log_err(1, MAXACT_TOO_BIG);
+			goto parse_error;
 		}
 	}
 	/* try to parse an address. if that fails, try to read the
 	 * input as a symbol. */
 	if (kstrtoul(argv[1], 0, (unsigned long *)&addr)) {
+		trace_probe_log_set_index(1);
 		/* Check whether uprobe event specified */
-		if (strchr(argv[1], '/') && strchr(argv[1], ':'))
-			return -ECANCELED;
+		if (strchr(argv[1], '/') && strchr(argv[1], ':')) {
+			ret = -ECANCELED;
+			goto error;
+		}
 		/* a symbol specified */
 		symbol = kstrdup(argv[1], GFP_KERNEL);
 		if (!symbol)
@@ -660,23 +661,23 @@ static int trace_kprobe_create(int argc, const char *argv[])
 		/* TODO: support .init module functions */
 		ret = traceprobe_split_symbol_offset(symbol, &offset);
 		if (ret || offset < 0 || offset > UINT_MAX) {
-			pr_info("Failed to parse either an address or a symbol.\n");
-			goto out;
+			trace_probe_log_err(0, BAD_PROBE_ADDR);
+			goto parse_error;
 		}
 		if (kprobe_on_func_entry(NULL, symbol, offset))
 			flags |= TPARG_FL_FENTRY;
 		if (offset && is_return && !(flags & TPARG_FL_FENTRY)) {
-			pr_info("Given offset is not valid for return probe.\n");
-			ret = -EINVAL;
-			goto out;
+			trace_probe_log_err(0, BAD_RETPROBE);
+			goto parse_error;
 		}
 	}
-	argc -= 2; argv += 2;
+	trace_probe_log_set_index(0);
 	if (event) {
-		ret = traceprobe_parse_event_name(&event, &group, buf);
+		ret = traceprobe_parse_event_name(&event, &group, buf,
+						  event - argv[0]);
 		if (ret)
-			goto out;
+			goto parse_error;
 	} else {
 		/* Make a new event name */
 		if (symbol)
@@ -691,13 +692,14 @@ static int trace_kprobe_create(int argc, const char *argv[])
 	/* setup a probe */
 	tk = alloc_trace_kprobe(group, event, addr, symbol, offset, maxactive,
-				argc, is_return);
+				argc - 2, is_return);
 	if (IS_ERR(tk)) {
 		ret = PTR_ERR(tk);
-		/* This must return -ENOMEM otherwise there is a bug */
+		/* This must return -ENOMEM, else there is a bug */
 		WARN_ON_ONCE(ret != -ENOMEM);
-		goto out;
+		goto out;	/* We know tk is not allocated */
 	}
+	argc -= 2; argv += 2;
 	/* parse arguments */
 	for (i = 0; i < argc && i < MAX_TRACE_ARGS; i++) {
@@ -707,19 +709,32 @@ static int trace_kprobe_create(int argc, const char *argv[])
 			goto error;
 		}
+		trace_probe_log_set_index(i + 2);
 		ret = traceprobe_parse_probe_arg(&tk->tp, i, tmp, flags);
 		kfree(tmp);
 		if (ret)
-			goto error;
+			goto error;	/* This can be -ENOMEM */
 	}
 	ret = register_trace_kprobe(tk);
-	if (ret)
+	if (ret) {
+		trace_probe_log_set_index(1);
+		if (ret == -EILSEQ)
+			trace_probe_log_err(0, BAD_INSN_BNDRY);
+		else if (ret == -ENOENT)
+			trace_probe_log_err(0, BAD_PROBE_ADDR);
+		else if (ret != -ENOMEM)
+			trace_probe_log_err(0, FAIL_REG_PROBE);
 		goto error;
+	}
 out:
+	trace_probe_log_clear();
 	kfree(symbol);
 	return ret;
+parse_error:
+	ret = -EINVAL;
 error:
 	free_trace_kprobe(tk);
 	goto out;
......
This diff is collapsed.
@@ -124,6 +124,7 @@ struct fetch_insn {
 /* fetch + deref*N + store + mod + end <= 16, this allows N=12, enough */
 #define FETCH_INSN_MAX	16
+#define FETCH_TOKEN_COMM	(-ECOMM)
 /* Fetch type information table */
 struct fetch_type {
@@ -280,8 +281,8 @@ extern int traceprobe_update_arg(struct probe_arg *arg);
 extern void traceprobe_free_probe_arg(struct probe_arg *arg);
 extern int traceprobe_split_symbol_offset(char *symbol, long *offset);
-extern int traceprobe_parse_event_name(const char **pevent,
-				       const char **pgroup, char *buf);
+int traceprobe_parse_event_name(const char **pevent, const char **pgroup,
+				char *buf, int offset);
 extern int traceprobe_set_print_fmt(struct trace_probe *tp, bool is_return);
@@ -298,3 +299,76 @@ extern void destroy_local_trace_uprobe(struct trace_event_call *event_call);
 #endif
 extern int traceprobe_define_arg_fields(struct trace_event_call *event_call,
					 size_t offset, struct trace_probe *tp);
+#undef ERRORS
+#define ERRORS \
+	C(FILE_NOT_FOUND,	"Failed to find the given file"),	\
+	C(NO_REGULAR_FILE,	"Not a regular file"),			\
+	C(BAD_REFCNT,		"Invalid reference counter offset"),	\
+	C(REFCNT_OPEN_BRACE,	"Reference counter brace is not closed"), \
+	C(BAD_REFCNT_SUFFIX,	"Reference counter has wrong suffix"),	\
+	C(BAD_UPROBE_OFFS,	"Invalid uprobe offset"),		\
+	C(MAXACT_NO_KPROBE,	"Maxactive is not for kprobe"),		\
+	C(BAD_MAXACT,		"Invalid maxactive number"),		\
+	C(MAXACT_TOO_BIG,	"Maxactive is too big"),		\
+	C(BAD_PROBE_ADDR,	"Invalid probed address or symbol"),	\
+	C(BAD_RETPROBE,		"Retprobe address must be an function entry"), \
+	C(NO_GROUP_NAME,	"Group name is not specified"),		\
+	C(GROUP_TOO_LONG,	"Group name is too long"),		\
+	C(BAD_GROUP_NAME,	"Group name must follow the same rules as C identifiers"), \
+	C(NO_EVENT_NAME,	"Event name is not specified"),		\
+	C(EVENT_TOO_LONG,	"Event name is too long"),		\
+	C(BAD_EVENT_NAME,	"Event name must follow the same rules as C identifiers"), \
+	C(RETVAL_ON_PROBE,	"$retval is not available on probe"),	\
+	C(BAD_STACK_NUM,	"Invalid stack number"),		\
+	C(BAD_ARG_NUM,		"Invalid argument number"),		\
+	C(BAD_VAR,		"Invalid $-valiable specified"),	\
+	C(BAD_REG_NAME,		"Invalid register name"),		\
+	C(BAD_MEM_ADDR,		"Invalid memory address"),		\
+	C(FILE_ON_KPROBE,	"File offset is not available with kprobe"), \
+	C(BAD_FILE_OFFS,	"Invalid file offset value"),		\
+	C(SYM_ON_UPROBE,	"Symbol is not available with uprobe"),	\
+	C(TOO_MANY_OPS,		"Dereference is too much nested"),	\
+	C(DEREF_NEED_BRACE,	"Dereference needs a brace"),		\
+	C(BAD_DEREF_OFFS,	"Invalid dereference offset"),		\
+	C(DEREF_OPEN_BRACE,	"Dereference brace is not closed"),	\
+	C(COMM_CANT_DEREF,	"$comm can not be dereferenced"),	\
+	C(BAD_FETCH_ARG,	"Invalid fetch argument"),		\
+	C(ARRAY_NO_CLOSE,	"Array is not closed"),			\
+	C(BAD_ARRAY_SUFFIX,	"Array has wrong suffix"),		\
+	C(BAD_ARRAY_NUM,	"Invalid array size"),			\
+	C(ARRAY_TOO_BIG,	"Array number is too big"),		\
+	C(BAD_TYPE,		"Unknown type is specified"),		\
+	C(BAD_STRING,		"String accepts only memory argument"),	\
+	C(BAD_BITFIELD,		"Invalid bitfield"),			\
+	C(ARG_NAME_TOO_LONG,	"Argument name is too long"),		\
+	C(NO_ARG_NAME,		"Argument name is not specified"),	\
+	C(BAD_ARG_NAME,		"Argument name must follow the same rules as C identifiers"), \
+	C(USED_ARG_NAME,	"This argument name is already used"),	\
+	C(ARG_TOO_LONG,		"Argument expression is too long"),	\
+	C(NO_ARG_BODY,		"No argument expression"),		\
+	C(BAD_INSN_BNDRY,	"Probe point is not an instruction boundary"),\
+	C(FAIL_REG_PROBE,	"Failed to register probe event"),
+
+#undef C
+#define C(a, b)		TP_ERR_##a
+
+/* Define TP_ERR_ */
+enum { ERRORS };
+
+/* Error text is defined in trace_probe.c */
+
+struct trace_probe_log {
+	const char	*subsystem;
+	const char	**argv;
+	int		argc;
+	int		index;
+};
+
+void trace_probe_log_init(const char *subsystem, int argc, const char **argv);
+void trace_probe_log_set_index(int index);
+void trace_probe_log_clear(void);
+void __trace_probe_log_err(int offset, int err);
+
+#define trace_probe_log_err(offs, err)	\
+	__trace_probe_log_err(offs, TP_ERR_##err)
@@ -88,7 +88,7 @@ process_fetch_insn_bottom(struct fetch_insn *code, unsigned long val,
 	/* 3rd stage: store value to buffer */
 	if (unlikely(!dest)) {
 		if (code->op == FETCH_OP_ST_STRING) {
-			ret += fetch_store_strlen(val + code->offset);
+			ret = fetch_store_strlen(val + code->offset);
 			code++;
 			goto array;
 		} else
......
@@ -792,7 +792,10 @@ trace_selftest_startup_function_graph(struct tracer *trace,
 	/* check the trace buffer */
 	ret = trace_test_buffer(&tr->trace_buffer, &count);
-	trace->reset(tr);
+	/* Need to also simulate the tr->reset to remove this fgraph_ops */
+	tracing_stop_cmdline_record();
+	unregister_ftrace_graph(&fgraph_ops);
 	tracing_start();
 	if (!ret && !count) {
......
@@ -156,6 +156,9 @@ fetch_store_string(unsigned long addr, void *dest, void *base)
 	if (unlikely(!maxlen))
 		return -ENOMEM;
-	ret = strncpy_from_user(dst, src, maxlen);
+	if (addr == FETCH_TOKEN_COMM)
+		ret = strlcpy(dst, current->comm, maxlen);
+	else
+		ret = strncpy_from_user(dst, src, maxlen);
 	if (ret >= 0) {
 		if (ret == maxlen)
@@ -180,6 +183,9 @@ fetch_store_strlen(unsigned long addr)
 	int len;
 	void __user *vaddr = (void __force __user *) addr;
-	len = strnlen_user(vaddr, MAX_STRING_SIZE);
+	if (addr == FETCH_TOKEN_COMM)
+		len = strlen(current->comm) + 1;
+	else
+		len = strnlen_user(vaddr, MAX_STRING_SIZE);
 	return (len > MAX_STRING_SIZE) ? 0 : len;
@@ -220,6 +226,9 @@ process_fetch_insn(struct fetch_insn *code, struct pt_regs *regs, void *dest,
 	case FETCH_OP_IMM:
 		val = code->immediate;
 		break;
+	case FETCH_OP_COMM:
+		val = FETCH_TOKEN_COMM;
+		break;
 	case FETCH_OP_FOFFS:
 		val = translate_user_vaddr(code->immediate);
 		break;
@@ -457,13 +466,19 @@ static int trace_uprobe_create(int argc, const char **argv)
 		return -ECANCELED;
 	}
+	trace_probe_log_init("trace_uprobe", argc, argv);
+	trace_probe_log_set_index(1);	/* filename is the 2nd argument */
 	*arg++ = '\0';
 	ret = kern_path(filename, LOOKUP_FOLLOW, &path);
 	if (ret) {
+		trace_probe_log_err(0, FILE_NOT_FOUND);
 		kfree(filename);
+		trace_probe_log_clear();
 		return ret;
 	}
 	if (!d_is_reg(path.dentry)) {
+		trace_probe_log_err(0, NO_REGULAR_FILE);
 		ret = -EINVAL;
 		goto fail_address_parse;
 	}
@@ -472,9 +487,16 @@ static int trace_uprobe_create(int argc, const char **argv)
 	rctr = strchr(arg, '(');
 	if (rctr) {
 		rctr_end = strchr(rctr, ')');
-		if (rctr > rctr_end || *(rctr_end + 1) != 0) {
+		if (!rctr_end) {
+			ret = -EINVAL;
+			rctr_end = rctr + strlen(rctr);
+			trace_probe_log_err(rctr_end - filename,
+					    REFCNT_OPEN_BRACE);
+			goto fail_address_parse;
+		} else if (rctr_end[1] != '\0') {
 			ret = -EINVAL;
-			pr_info("Invalid reference counter offset.\n");
+			trace_probe_log_err(rctr_end + 1 - filename,
+					    BAD_REFCNT_SUFFIX);
 			goto fail_address_parse;
 		}
@@ -482,22 +504,23 @@ static int trace_uprobe_create(int argc, const char **argv)
 		*rctr_end = '\0';
 		ret = kstrtoul(rctr, 0, &ref_ctr_offset);
 		if (ret) {
-			pr_info("Invalid reference counter offset.\n");
+			trace_probe_log_err(rctr - filename, BAD_REFCNT);
 			goto fail_address_parse;
 		}
 	}
 	/* Parse uprobe offset. */
 	ret = kstrtoul(arg, 0, &offset);
-	if (ret)
+	if (ret) {
+		trace_probe_log_err(arg - filename, BAD_UPROBE_OFFS);
 		goto fail_address_parse;
+	}
-	argc -= 2;
-	argv += 2;
 	/* setup a probe */
+	trace_probe_log_set_index(0);
 	if (event) {
-		ret = traceprobe_parse_event_name(&event, &group, buf);
+		ret = traceprobe_parse_event_name(&event, &group, buf,
+						  event - argv[0]);
 		if (ret)
 			goto fail_address_parse;
 	} else {
@@ -519,6 +542,9 @@ static int trace_uprobe_create(int argc, const char **argv)
 		kfree(tail);
 	}
+	argc -= 2;
+	argv += 2;
 	tu = alloc_trace_uprobe(group, event, argc, is_return);
 	if (IS_ERR(tu)) {
 		ret = PTR_ERR(tu);
@@ -539,6 +565,7 @@ static int trace_uprobe_create(int argc, const char **argv)
 			goto error;
 		}
+		trace_probe_log_set_index(i + 2);
 		ret = traceprobe_parse_probe_arg(&tu->tp, i, tmp,
 					is_return ? TPARG_FL_RETURN : 0);
 		kfree(tmp);
@@ -547,20 +574,20 @@ static int trace_uprobe_create(int argc, const char **argv)
 	}
 	ret = register_trace_uprobe(tu);
-	if (ret)
-		goto error;
-	return 0;
+	if (!ret)
+		goto out;
 error:
 	free_trace_uprobe(tu);
+out:
+	trace_probe_log_clear();
 	return ret;
 fail_address_parse:
+	trace_probe_log_clear();
 	path_put(&path);
 	kfree(filename);
-	pr_info("Failed to parse address or file.\n");
 	return ret;
 }
......
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: ftrace - test tracing error log support
fail() { #msg
echo $1
exit_fail
}
# event tracing is currently the only ftrace tracer that uses the
# tracing error_log, hence this check
if [ ! -f set_event ]; then
echo "event tracing is not supported"
exit_unsupported
fi
ftrace_errlog_check 'event filter parse error' '((sig >= 10 && sig < 15) || dsig ^== 17) && comm != bash' 'events/signal/signal_generate/filter'
exit 0
@@ -109,3 +109,15 @@ LOCALHOST=127.0.0.1
 yield() {
	ping $LOCALHOST -c 1 || sleep .001 || usleep 1 || sleep 1
 }
+
+ftrace_errlog_check() { # err-prefix command-with-error-pos-by-^ command-file
+	pos=$(echo -n "${2%^*}" | wc -c) # error position
+	command=$(echo "$2" | tr -d ^)
+	echo "Test command: $command"
+	echo > error_log
+	(! echo "$command" > "$3" ) 2> /dev/null
+	grep "$1: error:" -A 3 error_log
+	N=$(tail -n 1 error_log | wc -c)
+	# " Command: " and "^\n" => 13
+	test $(expr 13 + $pos) -eq $N
+}
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: Kprobe event parser error log check
[ -f kprobe_events ] || exit_unsupported # this is configurable
[ -f error_log ] || exit_unsupported
check_error() { # command-with-error-pos-by-^
ftrace_errlog_check 'trace_kprobe' "$1" 'kprobe_events'
}
if grep -q 'r\[maxactive\]' README; then
check_error 'p^100 vfs_read' # MAXACT_NO_KPROBE
check_error 'r^1a111 vfs_read' # BAD_MAXACT
check_error 'r^100000 vfs_read' # MAXACT_TOO_BIG
fi
check_error 'p ^non_exist_func' # BAD_PROBE_ADDR (enoent)
check_error 'p ^hoge-fuga' # BAD_PROBE_ADDR (bad syntax)
check_error 'p ^hoge+1000-1000' # BAD_PROBE_ADDR (bad syntax)
check_error 'r ^vfs_read+10' # BAD_RETPROBE
check_error 'p:^/bar vfs_read' # NO_GROUP_NAME
check_error 'p:^12345678901234567890123456789012345678901234567890123456789012345/bar vfs_read' # GROUP_TOO_LONG
check_error 'p:^foo.1/bar vfs_read' # BAD_GROUP_NAME
check_error 'p:foo/^ vfs_read' # NO_EVENT_NAME
check_error 'p:foo/^12345678901234567890123456789012345678901234567890123456789012345 vfs_read' # EVENT_TOO_LONG
check_error 'p:foo/^bar.1 vfs_read' # BAD_EVENT_NAME
check_error 'p vfs_read ^$retval' # RETVAL_ON_PROBE
check_error 'p vfs_read ^$stack10000' # BAD_STACK_NUM
if grep -q '$arg<N>' README; then
check_error 'p vfs_read ^$arg10000' # BAD_ARG_NUM
fi
check_error 'p vfs_read ^$none_var' # BAD_VAR
check_error 'p vfs_read ^%none_reg' # BAD_REG_NAME
check_error 'p vfs_read ^@12345678abcde' # BAD_MEM_ADDR
check_error 'p vfs_read ^@+10' # FILE_ON_KPROBE
check_error 'p vfs_read ^+0@0)' # DEREF_NEED_BRACE
check_error 'p vfs_read ^+0ab1(@0)' # BAD_DEREF_OFFS
check_error 'p vfs_read +0(+0(@0^)' # DEREF_OPEN_BRACE
if grep -A1 "fetcharg:" README | grep -q '\$comm' ; then
check_error 'p vfs_read +0(^$comm)' # COMM_CANT_DEREF
fi
check_error 'p vfs_read ^&1' # BAD_FETCH_ARG
# We've introduced this limitation with array support
if grep -q ' <type>\\\[<array-size>\\\]' README; then
check_error 'p vfs_read +0(^+0(+0(+0(+0(+0(+0(+0(+0(+0(+0(+0(+0(+0(@0))))))))))))))' # TOO_MANY_OPS?
check_error 'p vfs_read +0(@11):u8[10^' # ARRAY_NO_CLOSE
check_error 'p vfs_read +0(@11):u8[10]^a' # BAD_ARRAY_SUFFIX
check_error 'p vfs_read +0(@11):u8[^10a]' # BAD_ARRAY_NUM
check_error 'p vfs_read +0(@11):u8[^256]' # ARRAY_TOO_BIG
fi
check_error 'p vfs_read @11:^unknown_type' # BAD_TYPE
check_error 'p vfs_read $stack0:^string' # BAD_STRING
check_error 'p vfs_read @11:^b10@a/16' # BAD_BITFIELD
check_error 'p vfs_read ^arg123456789012345678901234567890=@11'	# ARG_NAME_TOO_LONG
check_error 'p vfs_read ^=@11' # NO_ARG_NAME
check_error 'p vfs_read ^var.1=@11' # BAD_ARG_NAME
check_error 'p vfs_read var1=@11 ^var1=@12' # USED_ARG_NAME
check_error 'p vfs_read ^+1234567(+1234567(+1234567(+1234567(+1234567(+1234567(@1234))))))' # ARG_TOO_LONG
check_error 'p vfs_read arg1=^' # NO_ARG_BODY
# instruction boundary check is valid on x86 (at this moment)
case $(uname -m) in
x86_64|i[3456]86)
echo 'p vfs_read' > kprobe_events
if grep -q FTRACE ../kprobes/list ; then
check_error 'p ^vfs_read+3' # BAD_INSN_BNDRY (only if function-tracer is enabled)
fi
;;
esac
exit 0
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: Uprobe event parser error log check
[ -f uprobe_events ] || exit_unsupported # this is configurable
[ -f error_log ] || exit_unsupported
check_error() { # command-with-error-pos-by-^
ftrace_errlog_check 'trace_uprobe' "$1" 'uprobe_events'
}
check_error 'p ^/non_exist_file:100' # FILE_NOT_FOUND
check_error 'p ^/sys:100' # NO_REGULAR_FILE
check_error 'p /bin/sh:^10a' # BAD_UPROBE_OFFS
check_error 'p /bin/sh:10(^1a)' # BAD_REFCNT
check_error 'p /bin/sh:10(10^' # REFCNT_OPEN_BRACE
check_error 'p /bin/sh:10(10)^a' # BAD_REFCNT_SUFFIX
check_error 'p /bin/sh:10 ^@+ab' # BAD_FILE_OFFS
check_error 'p /bin/sh:10 ^@symbol' # SYM_ON_UPROBE
exit 0
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: event trigger - test extended error support
fail() { #msg
echo $1
exit_fail
}
if [ ! -f set_event ]; then
echo "event tracing is not supported"
exit_unsupported
fi
if [ ! -f synthetic_events ]; then
echo "synthetic event is not supported"
exit_unsupported
fi
echo "Test extended error support"
echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="ping"' > events/sched/sched_wakeup/trigger
! echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="ping"' >> events/sched/sched_wakeup/trigger 2> /dev/null
if ! grep -q "ERROR:" events/sched/sched_wakeup/hist; then
fail "Failed to generate extended error in histogram"
fi
exit 0