- 16 Jul, 2019 15 commits
-
-
Cong Wang authored
trace_get_fields() is the only way to read tracepoint fields at run time, as their fields are defined at compile time with macros. Make this function visible to all users; it will be used by trace event injection code to calculate the size of a tracepoint entry. Link: http://lkml.kernel.org/r/20190525165802.25944-4-xiyou.wangcong@gmail.com Cc: Ingo Molnar <mingo@redhat.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
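For illustration (not part of the patch), a caller could compute a tracepoint entry's size by walking the field list returned by trace_get_fields(); the struct ftrace_event_field members used below (link, offset, size) follow their declaration in kernel/trace/trace.h, and the helper name is invented for this sketch:

    /* sketch: derive an entry's size from its registered fields */
    static int tracepoint_entry_size(struct trace_event_call *call)
    {
            struct ftrace_event_field *field;
            struct list_head *head = trace_get_fields(call);
            int size = 0;

            list_for_each_entry(field, head, link)
                    size = max(size, field->offset + field->size);

            return size;
    }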
-
Cong Wang authored
filter_assign_type() could detect dynamic strings and static strings, but not string pointers. Teach filter_assign_type() to detect string pointers; this will be needed by trace event injection code. BTW, trace event hist uses FILTER_PTR_STRING too. Link: http://lkml.kernel.org/r/20190525165802.25944-3-xiyou.wangcong@gmail.com Cc: Ingo Molnar <mingo@redhat.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Cong Wang authored
All callers of tracing_generic_entry_update() have to initialize entry->type, so just move that initialization inside the function. Link: http://lkml.kernel.org/r/20190525165802.25944-2-xiyou.wangcong@gmail.com Cc: Ingo Molnar <mingo@redhat.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
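A hedged sketch of the call-site change this implies (the extra 'type' parameter is inferred from the description, not copied from the patch):

    /* before: every caller had to set the type itself */
    tracing_generic_entry_update(entry, flags, pc);
    entry->type = type;

    /* after (assumed new signature): the type is passed in and
     * initialized inside the helper */
    tracing_generic_entry_update(entry, type, flags, pc);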
-
Steven Rostedt (VMware) authored
While testing on a very old kernel (3.5), the tests failed because the write to set_event_pid in the setup code failed, as the file did not exist. The tests themselves could pass, but the setup failure caused an error. Other files are tested for existence before being written to. Do the same for set_event_pid and set_ftrace_pid. Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Steven Rostedt (VMware) authored
If the kernel is not configured with ftrace enabled, the ftracetest selftests should return the error code "4", as that is the kselftests "skip" code, and not "1", which means an error. To determine if ftrace is enabled, first the newer "tracefs" is searched for in /proc/mounts. If it is not found, then "debugfs" is searched for (as old kernels do not have tracefs). If that is not found, an attempt to mount tracefs or debugfs is performed. This is done by first seeing if the /sys/kernel/tracing directory exists. If it does, then tracefs is configured in the kernel and an attempt to mount it is performed. If /sys/kernel/tracing does not exist, then /sys/kernel/debug is tested to see if that directory exists. If it does, then an attempt to mount debugfs on that directory is performed. If it does not exist, then debugfs is not configured in the running kernel and the test exits with the skip code. If either mount fails, a normal error is returned, as the file systems do exist in the kernel but something went wrong when mounting them. This changes the test to always try the tracefs file system first, as it has been in the kernel for some time now and it is better to test it when available instead of always testing debugfs. Link: http://lkml.kernel.org/r/20190702062358.7330-1-po-hsu.lin@canonical.com Reported-by: Po-Hsu Lin <po-hsu.lin@canonical.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Have the 'registered' check be done only by trace_kprobe and remove TP_FLAG_REGISTERED from trace_probe, since this flag is only used by trace_kprobe. Link: http://lkml.kernel.org/r/155931588704.28323.4952266828256245833.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Add trace_event_call access APIs for trace_probe. Instead of accessing trace_probe.call directly, use the new trace_probe_event_call() accessor. This hides the relationship between trace_event_call and trace_probe from trace_kprobe and trace_uprobe. Link: http://lkml.kernel.org/r/155931587711.28323.8335129014686133120.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
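A hedged sketch of the shape of such an accessor (the member name 'call' is taken from the description above; the exact body is not quoted from the patch):

    static inline struct trace_event_call *
    trace_probe_event_call(struct trace_probe *tp)
    {
            /* callers no longer reach into trace_probe.call directly */
            return &tp->call;
    }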
-
Masami Hiramatsu authored
Add trace_probe_name() and trace_probe_group_name() functions for accessing the probe name and group name of a trace_probe. Link: http://lkml.kernel.org/r/155931586717.28323.8738615064952254761.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Add trace_probe_test/set/clear_flag() functions for accessing the trace_probe.flags field. This flags field should not be accessed directly. Link: http://lkml.kernel.org/r/155931585683.28323.314290023236905988.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
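A minimal sketch of what these flag accessors look like, assuming the conventional kernel bit-mask idiom (the bodies are inferred from the description, not copied from the patch):

    static inline void trace_probe_set_flag(struct trace_probe *tp,
                                            unsigned int flags)
    {
            tp->flags |= flags;
    }

    static inline void trace_probe_clear_flag(struct trace_probe *tp,
                                              unsigned int flags)
    {
            tp->flags &= ~flags;
    }

    static inline bool trace_probe_test_flag(struct trace_probe *tp,
                                             unsigned int flags)
    {
            return !!(tp->flags & flags);
    }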
-
Masami Hiramatsu authored
Add trace_event_file access APIs for the trace_probe data structure. This simplifies enabling/disabling operations in uprobe and kprobe events so that they don't touch deep inside the trace_probe. This also removes a redundant synchronization when the kprobe event is used from perf, since perf itself uses tracepoint_synchronize_unregister() after disabling the (ftrace-defined) event, so we don't have to synchronize in that path. We also no longer need to identify a local trace_kprobe. Link: http://lkml.kernel.org/r/155931584587.28323.372301976283354629.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Since trace_event_call is a field of trace_probe, these operations should be done in trace_probe.c. trace_kprobe and trace_uprobe now use the new functions to register/unregister the trace_event_call. Link: http://lkml.kernel.org/r/155931583643.28323.14828411185591538876.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Add common trace_probe init and cleanup functions in trace_probe.c, and use them from trace_kprobe.c and trace_uprobe.c. Link: http://lkml.kernel.org/r/155931582664.28323.5934870189034740822.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Set the event call's print format right after the command is parsed, to simplify (un)register_uprobe_event(). Link: http://lkml.kernel.org/r/155931581659.28323.5404667166417404076.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Set the event call's print format right after the command is parsed, to simplify (un)register_kprobe_event(). Link: http://lkml.kernel.org/r/155931580625.28323.5158822928646225903.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Since the arm64 kernel initializes the breakpoint trap vector in arch_initcall(), initializing kprobes (and running the smoke test) in postcore_initcall() causes a kernel panic. To fix this issue, move the kprobe initialization to subsys_initcall() (which is called right after arch_initcall). In-kernel kprobe users (ftrace and bpf) use fs_initcall(), which is called after subsys_initcall(), so this shouldn't cause further problems. Link: http://lkml.kernel.org/r/155956708268.12228.10363800793132214198.stgit@devnote2 Link: http://lkml.kernel.org/r/20190709153755.GB10123@lakrids.cambridge.arm.com Reported-by: Anders Roxell <anders.roxell@linaro.org> Fixes: b5f8b32c ("kprobes: Initialize kprobes at postcore_initcall") Tested-by: Anders Roxell <anders.roxell@linaro.org> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
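For orientation, a minimal sketch of the ordering involved (the function name init_kprobes is assumed from context, not quoted from the patch):

    /* initcall levels run in this order:
     *   core_initcall    - kprobes' own dependencies
     *   arch_initcall    - arm64 installs its breakpoint trap vector
     *   subsys_initcall  - init_kprobes() moves here with this patch
     *   fs_initcall      - in-kernel users (ftrace, bpf) register kprobes
     */
    subsys_initcall(init_kprobes);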
-
- 18 Jun, 2019 1 commit
-
-
Gustavo A. R. Silva authored
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example: struct tp_probes { ... struct tracepoint_func probes[0]; }; instance = kmalloc(sizeof(struct tp_probes) + sizeof(struct tracepoint_func) * count, GFP_KERNEL); Instead of leaving these open-coded and prone to type mistakes, we can now use the new struct_size() helper: instance = kmalloc(struct_size(instance, probes, count), GFP_KERNEL); This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
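A self-contained sketch of what struct_size() buys here; the helper lives in include/linux/overflow.h, and the rcu member below stands in for the elided "..." fields (assumed, not quoted from the patch):

    #include <linux/overflow.h>
    #include <linux/slab.h>

    struct tp_probes {
            struct rcu_head rcu;            /* elided members ("...") */
            struct tracepoint_func probes[0];
    };

    /* struct_size(instance, probes, count) computes roughly
     * sizeof(struct tp_probes) + sizeof(struct tracepoint_func) * count,
     * but saturates instead of silently wrapping on overflow */
    struct tp_probes *instance =
            kmalloc(struct_size(instance, probes, count), GFP_KERNEL);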
-
- 28 May, 2019 1 commit
-
-
Steven Rostedt (VMware) authored
Commit c19fa94a ("Add HAVE_64BIT_ALIGNED_ACCESS") added the config for architectures that required 64-bit aligned access for all 64-bit words. As the ftrace ring buffer stores data on 4-byte alignment, this config option was used to force it to store data on 8-byte alignment, to make sure the data being stored and written directly into the ring buffer was 8-byte aligned, since it would cause issues trying to write an 8-byte word on a memory location that is 4-byte but not 8-byte aligned. But with the removal of the metag architecture, which was the only architecture to use this, there is no architecture supported by Linux that requires 8-byte aligned access for all 8-byte words (4-byte alignment is good enough). Removing this config simplifies the code a bit. Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
- 26 May, 2019 17 commits
-
-
Cheng Jian authored
Custom trampolines can only be enabled if there is only a single ops attached to the function. If there's only a single callback registered to a function, and the ops has a trampoline registered for it, then we can call the trampoline directly. This is very useful for improving the performance of ftrace and livepatch. If more than one callback is registered to a function, the generic trampoline is used, and the custom trampoline is not restored back to the direct call even if all the other callbacks are unregistered and we are back to one callback for the function. To fix this, set the FTRACE_FL_TRAMP flag if the rec count is decremented to one and the remaining ops has a trampoline. Testing after this patch:

  insmod livepatch_unshare_files.ko
  cat /sys/kernel/debug/tracing/enabled_functions
    unshare_files (1) R I tramp: 0xffffffffc0000000(klp_ftrace_handler+0x0/0xa0) ->ftrace_ops_assist_func+0x0/0xf0
  echo unshare_files > /sys/kernel/debug/tracing/set_ftrace_filter
  echo function > /sys/kernel/debug/tracing/current_tracer
  cat /sys/kernel/debug/tracing/enabled_functions
    unshare_files (2) R I ->ftrace_ops_list_func+0x0/0x150
  echo nop > /sys/kernel/debug/tracing/current_tracer
  cat /sys/kernel/debug/tracing/enabled_functions
    unshare_files (1) R I tramp: 0xffffffffc0000000(klp_ftrace_handler+0x0/0xa0) ->ftrace_ops_assist_func+0x0/0xf0

Link: http://lkml.kernel.org/r/1556969979-111047-1-git-send-email-cj.chengjian@huawei.com Signed-off-by: Cheng Jian <cj.chengjian@huawei.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Steven Rostedt (VMware) authored
With the kprobe trace event startup tests enabled, adding a kprobe event on the kernel command line produced the following:

  trace_kprobe: Testing kprobe tracing:
  WARNING: CPU: 5 PID: 1 at kernel/trace/trace_kprobe.c:1724 kprobe_trace_self_tests_init+0x32d/0x36b
  Modules linked in:
  CPU: 5 PID: 1 Comm: swapper/0 Not tainted 5.2.0-rc1-test+ #249
  Hardware name: Hewlett-Packard HP Compaq Pro 6300 SFF/339A, BIOS K01 v03.03 07/14/2016
  RIP: 0010:kprobe_trace_self_tests_init+0x32d/0x36b
  Code: b7 e8 4f 8d a2 fe 85 c0 74 10 0f 0b 48 c7 c7 c8 1b 0d b7 ff c3 e8 19 af 99 fe 48 c7 c7 40 93 27 b7 e8 7f 1a a5 fe 85 c0 74 10 <0f> 0b 48 c7 c7 f8 1b 0d b7 ff c3 e8 f9 ae 9 a0 fe 85
  RSP: 0018:ffffb36e40653e08 EFLAGS: 00010286
  RAX: 00000000fffffff0 RBX: 0000000000000000 RCX: ffffb36e40653d5c
  RDX: 0000000000000000 RSI: ffffffffb72776e0 RDI: 0000000000000246
  RBP: ffff98414fe58ff8 R08: 0000000000000000 R09: 0000000000000000
  R10: 0000000000000000 R11: ffff98415d8aa940 R12: 0000000000000000
  R13: ffffffffb737c1b0 R14: 0000000000000000 R15: 0000000000000000
  FS: 0000000000000000(0000) GS:ffff98415ea80000(0000) knlGS:0000000000000000
  CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 00007f959ce741b8 CR3: 000000011a210002 CR4: 00000000001606e0
  Call Trace:
   ? init_kprobe_trace+0x19e/0x19e
   ? do_early_param+0x8e/0x8e
   do_one_initcall+0x6f/0x2b4
   ? do_early_param+0x8e/0x8e
   kernel_init_freeable+0x21d/0x2c6
   ? rest_init+0x146/0x146
   kernel_init+0xa/0x10a
   ret_from_fork+0x3a/0x50
  ---[ end trace 488430c083a4c956 ]---

As with the trace events, if a trace event is set on the kernel command line, the trace event startup tests are suspended. The kprobe startup tests should do the same when a kprobe is enabled on the kernel command line. Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Steven Rostedt (VMware) authored
The trace event self tests loop through *all* events, enable each one, one at a time, run some code to trigger various events (not necessarily the same events), and check if anything went wrong. The issue is that trace events are usually the least likely startup test to cause a problem, but they take the longest to run (because there are so many events). When one of the other tests triggers a bug, the trace event startup tests cause the bisect to take much longer, because it takes tens of seconds to get through them. By making them a separate config (even though they are enabled by default if startup tests are set), it is possible to turn them off and still run the other tracing startup tests much more quickly. Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Add a kprobe_event= boot parameter to define kprobe events at boot time. The definition syntax is similar to the tracefs/kprobe_events interface, but uses ',' and ';' instead of ' ' and '\n' respectively. e.g. kprobe_event=p,vfs_read,$arg1,$arg2 This puts a probe on vfs_read with arguments 1 and 2, and enables the new event. Link: http://lkml.kernel.org/r/155851395498.15728.830529496248543583.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Initialize kprobes at the postcore_initcall level instead of module_init, since kprobes is not a module, and it depends only on subsystems initialized in core_initcall. This will allow the ftrace kprobe event to add new events when it is initializing, because the ftrace kprobe event is initialized at a later initcall level. Link: http://lkml.kernel.org/r/155851394736.15728.13626739508905120098.stgit@devnote2 Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Cast the user-space address correctly when passing it to probe_user_read(). Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Declare a prototype for the non-static __probe_user_read(), matching __probe_kernel_read(), in uaccess.h. Reported-by: kbuild test robot <lkp@intel.com> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Matthias Kaehlcke authored
The comment of trace_filter_add_remove_task() refers to the function as 'trace_pid_filter_add_remove_task', use the correct name. Link: http://lkml.kernel.org/r/20190523192628.134406-1-mka@chromium.org Signed-off-by: Matthias Kaehlcke <mka@chromium.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Add a user-memory access attribute for kprobe event arguments. If a given 'local variable' is in user space, the user can specify the memory access method with the '@user' suffix. This is not only for strings but also for data structures. If we access a field of a data structure in user memory from the kernel on some archs, it will fail. e.g. perf probe -a "sched_setscheduler param->sched_priority" This will fail to access "param->sched_priority" because param is a __user pointer. Instead, we can now specify the @user suffix for such an argument: perf probe -a "sched_setscheduler param->sched_priority@user" Note that kernel memory access with "@user" always fails, on any arch. Link: http://lkml.kernel.org/r/155789874562.26965.10836126971405890891.stgit@devnote2 Acked-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Add a user-memory access syntax testcase which checks the new user-memory access syntax and the ustring type. Link: http://lkml.kernel.org/r/155789873385.26965.9557271156179140676.stgit@devnote2 Acked-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Support user-space dereference syntax for probe event arguments, to dereference a data structure or array in user space. The syntax is just adding 'u' before an offset value. +|-u<OFFSET>(<FETCHARG>) e.g. +u8(%ax), +u0(+0(%si)) For example, if you probe do_sched_setscheduler(pid, policy, param) and record param->sched_priority, you can add a new probe as below: p do_sched_setscheduler priority=+u0($arg3) Note that kprobe events provide this but do not change the dereference method automatically, because we do not know whether the given address is in user space or the kernel on some archs. So, as with "ustring", this is an option for the user, who has to carefully choose the dereference method. Link: http://lkml.kernel.org/r/155789872187.26965.4468456816590888687.stgit@devnote2 Acked-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Add "ustring" type for fetching user-space string from kprobe event. User can specify ustring type at uprobe event, and it is same as "string" for uprobe. Note that probe-event provides this option but it doesn't choose the correct type automatically since we have not way to decide the address is in user-space or not on some arch (and on some other arch, you can fetch the string by "string" type). So user must carefully check the target code (e.g. if you see __user on the target variable) and use this new type. Link: http://lkml.kernel.org/r/155789871009.26965.14167558859557329331.stgit@devnote2Acked-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Masami Hiramatsu authored
Add probe_user_read(), strncpy_from_unsafe_user() and strnlen_unsafe_user(), which allow the caller to access user space in IRQ context. The current probe_kernel_read() and strncpy_from_unsafe() are not available for user-space memory, because they set KERNEL_DS while accessing data. On some archs, user address space and kernel address space can coexist, but on others they cannot. In that case, setting KERNEL_DS means the given address is treated as a kernel address space. Also, strnlen_user() is only available from user context, since it can sleep if pagefaults are enabled. To access user-space memory without pagefaults, we need these new functions, which set USER_DS while accessing the data. Link: http://lkml.kernel.org/r/155789869802.26965.4940338412595759063.stgit@devnote2 Acked-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
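A hedged usage sketch of the new helper (the wrapper function and its calling context are invented for illustration; the dst/src/size argument order mirrors probe_kernel_read()):

    #include <linux/uaccess.h>

    /* sketch: fetch a user-space value from probe context, where a
     * sleeping pagefault is not allowed; returns 0 on success or
     * -EFAULT if the page is not present */
    static long fetch_user_u32(const void __user *uaddr, u32 *dest)
    {
            return probe_user_read(dest, uaddr, sizeof(*dest));
    }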
-
Masami Hiramatsu authored
WARN_ON_IN_IRQ() assumes that the access_ok() and following user-memory access can sleep. But this assumption is not always correct: when pagefaults are disabled, the following memory access just returns -EFAULT and never sleeps. Add a pagefault_disabled() check in WARN_ON_ONCE() so that it ignores the case where we call it with pagefaults disabled. For this purpose, pagefault_disabled() is made an inline function. Link: http://lkml.kernel.org/r/155789868664.26965.7932665824135793317.stgit@devnote2 Acked-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
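A hedged sketch of the relaxed check this describes (the exact macro body is inferred from the description, not quoted from the patch):

    /* warn only when a fault could actually sleep: we are not in task
     * context and pagefaults have not been disabled */
    #define WARN_ON_IN_IRQ() \
            WARN_ON_ONCE(!in_task() && !pagefault_disabled())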
-
Steven Rostedt (VMware) authored
The code modification functions have an "enable" parameter that is an "int" but used as a boolean. Switch its type to "bool" to remove the ambiguity that "int" causes. Link: http://lkml.kernel.org/r/e1429923d9eda92a3cf5ee9e33c7eacce539781d.1558115654.git.naveen.n.rao@linux.vnet.ibm.com Reported-by: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Steven Rostedt (VMware) authored
The code modification functions have "enable" and "update" variables that are sometimes "int" but used as "bool". Remove the ambiguity and make them "bool" when they are only used for true or false values. Link: http://lkml.kernel.org/r/e1429923d9eda92a3cf5ee9e33c7eacce539781d.1558115654.git.naveen.n.rao@linux.vnet.ibm.com Reported-by: "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
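A hedged before/after sketch of the kind of signature change described; ftrace_check_record() is one of the code modification functions in kernel/trace/ftrace.c, but the exact parameter list here is assumed from the description:

    /* before: ints that only ever carry 0 or 1 */
    static int ftrace_check_record(struct dyn_ftrace *rec,
                                   int enable, int update);

    /* after: the boolean intent is explicit in the type */
    static int ftrace_check_record(struct dyn_ftrace *rec,
                                   bool enable, bool update);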
-
Miguel Ojeda authored
Starting with GCC 9, -Warray-bounds detects cases when memset is called starting on a member of a struct but the size to be cleared ends up writing over further members. Such a call happens in the trace code to clear, at once, all members after and including `seq` on struct trace_iterator:

  In function 'memset', inlined from 'ftrace_dump' at kernel/trace/trace.c:8914:3:
  ./include/linux/string.h:344:9: warning: '__builtin_memset' offset [8505, 8560] from the object at 'iter' is out of the bounds of referenced subobject 'seq' with type 'struct trace_seq' at offset 4368 [-Warray-bounds]
    344 |  return __builtin_memset(p, c, size);
        |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~

In order to avoid GCC complaining about it, we compute the address ourselves by adding the offsetof distance instead of referring directly to the member. Since there are two places doing this clear (trace.c and trace_kdb.c), take the chance to move the workaround into a single place in the internal header. Link: http://lkml.kernel.org/r/20190523124535.GA12931@gmail.com Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> [ Removed unnecessary parenthesis around "iter" ] Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
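A hedged sketch of the offsetof-based workaround described above (the helper name and exact body are illustrative, not quoted from the patch):

    #include <linux/stddef.h>
    #include <linux/string.h>

    /* clear everything from 'seq' to the end of the iterator without
     * naming 'seq' as the memset destination, keeping -Warray-bounds
     * quiet since the destination is now the whole object */
    static inline void trace_iterator_reset(struct trace_iterator *iter)
    {
            const size_t offset = offsetof(struct trace_iterator, seq);

            memset((char *)iter + offset, 0, sizeof(*iter) - offset);
    }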
-
- 22 May, 2019 1 commit
-
-
Jagadeesh Pagadala authored
Remove duplicate header which is included twice. Link: http://lkml.kernel.org/r/1553725186-41442-1-git-send-email-jagdsh.linux@gmail.com Reviewed-by: Mukesh Ojha <mojha@codeaurora.org> Signed-off-by: Jagadeesh Pagadala <jagdsh.linux@gmail.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
- 21 May, 2019 3 commits
-
-
Tom Zanussi authored
Without this check a snapshot is taken whenever a bucket's max is hit, rather than only when the global max is hit, as it should be. Before: In this example, we do a first run of the workload (cyclictest), examine the output, note the max ('triggering value') (347), then do a second run and note the max again. In this case, the max in the second run (39) is below the max in the first run, but since we haven't cleared the histogram, the first max is still in the histogram and is higher than any other max, so it should still be the max for the snapshot. It isn't, however - the value should still be 347 after the second run.

  # echo 'hist:keys=pid:ts0=common_timestamp.usecs if comm=="cyclictest"' >> /sys/kernel/debug/tracing/events/sched/sched_waking/trigger
  # echo 'hist:keys=next_pid:wakeup_lat=common_timestamp.usecs-$ts0:onmax($wakeup_lat).save(next_prio,next_comm,prev_pid,prev_prio,prev_comm):onmax($wakeup_lat).snapshot() if next_comm=="cyclictest"' >> /sys/kernel/debug/tracing/events/sched/sched_switch/trigger
  # cyclictest -p 80 -n -s -t 2 -D 2
  # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
  { next_pid: 2143 } hitcount: 199 max: 44 next_prio: 120 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/4
  { next_pid: 2145 } hitcount: 1325 max: 38 next_prio: 19 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/2
  { next_pid: 2144 } hitcount: 1982 max: 347 next_prio: 19 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/6
  Snapshot taken (see tracing/snapshot). Details:
    triggering value { onmax($wakeup_lat) }: 347
    triggered by event with key: { next_pid: 2144 }
  # cyclictest -p 80 -n -s -t 2 -D 2
  # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
  { next_pid: 2143 } hitcount: 199 max: 44 next_prio: 120 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/4
  { next_pid: 2148 } hitcount: 199 max: 16 next_prio: 120 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/1
  { next_pid: 2145 } hitcount: 1325 max: 38 next_prio: 19 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/2
  { next_pid: 2150 } hitcount: 1326 max: 39 next_prio: 19 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/4
  { next_pid: 2144 } hitcount: 1982 max: 347 next_prio: 19 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/6
  { next_pid: 2149 } hitcount: 1983 max: 130 next_prio: 19 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/0
  Snapshot taken (see tracing/snapshot). Details:
    triggering value { onmax($wakeup_lat) }: 39
    triggered by event with key: { next_pid: 2150 }

After: In this example, we do a first run of the workload (cyclictest), examine the output, note the max ('triggering value') (375), then do a second run and note the max again. In this case, the max in the second run is still 375, the highest in any bucket, as it should be.

  # cyclictest -p 80 -n -s -t 2 -D 2
  # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
  { next_pid: 2072 } hitcount: 200 max: 28 next_prio: 120 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/5
  { next_pid: 2074 } hitcount: 1323 max: 375 next_prio: 19 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/2
  { next_pid: 2073 } hitcount: 1980 max: 153 next_prio: 19 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/6
  Snapshot taken (see tracing/snapshot). Details:
    triggering value { onmax($wakeup_lat) }: 375
    triggered by event with key: { next_pid: 2074 }
  # cyclictest -p 80 -n -s -t 2 -D 2
  # cat /sys/kernel/debug/tracing/events/sched/sched_switch/hist
  { next_pid: 2101 } hitcount: 199 max: 49 next_prio: 120 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/6
  { next_pid: 2072 } hitcount: 200 max: 28 next_prio: 120 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/5
  { next_pid: 2074 } hitcount: 1323 max: 375 next_prio: 19 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/2
  { next_pid: 2103 } hitcount: 1325 max: 74 next_prio: 19 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/4
  { next_pid: 2073 } hitcount: 1980 max: 153 next_prio: 19 next_comm: cyclictest prev_pid: 0 prev_prio: 120 prev_comm: swapper/6
  { next_pid: 2102 } hitcount: 1981 max: 84 next_prio: 19 next_comm: cyclictest prev_pid: 12 prev_prio: 120 prev_comm: kworker/0:1
  Snapshot taken (see tracing/snapshot). Details:
    triggering value { onmax($wakeup_lat) }: 375
    triggered by event with key: { next_pid: 2074 }

Link: http://lkml.kernel.org/r/95958351329f129c07504b4d1769c47a97b70d65.1555597045.git.tom.zanussi@linux.intel.com Cc: stable@vger.kernel.org Fixes: a3785b7e ("tracing: Add hist trigger snapshot() action") Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Tom Zanussi authored
There's an existing check for variable references in keys, but it doesn't go far enough. It checks whether a key field is a variable reference but doesn't check whether it's an expression containing variable references, which can cause the same problems for callers. Use the existing field_has_hist_vars() function rather than a direct top-level flag check to catch all possible variable references. Link: http://lkml.kernel.org/r/e8c3d3d53db5ca90ceea5a46e5413103a6902fc7.1555597045.git.tom.zanussi@linux.intel.com Cc: stable@vger.kernel.org Fixes: 067fe038 ("tracing: Add variable reference handling to hist triggers") Reported-by: Vincent Bernat <vincent@bernat.ch> Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Tom Zanussi authored
hist_field_var_ref() is an implementation of hist_field_fn_t(), which can be called with a NULL tracing_map_elt 'elt' param when assembling a key in event_hist_trigger(). In the case of hist_field_var_ref() this doesn't make sense, because a variable can only be resolved by looking it up using an already assembled key, i.e. a variable can't be used to assemble a key since the key is required in order to access the variable. Upper layers should prevent the user from constructing a key using a variable in the first place, but in case one slips through, it shouldn't cause a NULL pointer dereference. Also, if one does slip through, we want to know about it, so emit a one-time warning in that case. Link: http://lkml.kernel.org/r/64ec8dc15c14d305295b64cdfcc6b2b9dd14753f.1555597045.git.tom.zanussi@linux.intel.com Reported-by: Vincent Bernat <vincent@bernat.ch> Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
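A hedged sketch of the defensive check described; the hist_field_fn_t signature matches its use in trace_events_hist.c, but the body and field names below are a simplified assumption, not quoted from the patch:

    static u64 hist_field_var_ref(struct hist_field *hist_field,
                                  struct tracing_map_elt *elt,
                                  struct ring_buffer_event *rbe,
                                  void *event)
    {
            struct hist_elt_data *elt_data;
            u64 var_val = 0;

            /* a variable can only be resolved via an already assembled
             * key; warn once and bail if called during key assembly */
            if (WARN_ON_ONCE(!elt))
                    return var_val;

            elt_data = elt->private_data;
            var_val = elt_data->var_ref_vals[hist_field->var_ref_idx];

            return var_val;
    }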
-
- 19 May, 2019 2 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs
Linus Torvalds authored
Pull UBIFS fixes from Richard Weinberger:
 - build errors wrt xattrs
 - a mismerge which led to a wrong Kconfig ifdef
 - missing endianness conversion
* tag 'upstream-5.2-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs:
  ubifs: Convert xattr inum to host order
  ubifs: Use correct config name for encryption
  ubifs: Fix build error without CONFIG_UBIFS_FS_XATTR
-