Commit 877a0893 authored by Greg Kroah-Hartman

Staging: lttng: remove from the drivers/staging/ tree

The "proper" way to do this is to work with the existing in-kernel
tracing subsystem and work to get the missing features that are in lttng
into those subsystems.

Cc: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
parent 03cf1526
@@ -66,8 +66,6 @@ source "drivers/staging/phison/Kconfig"
source "drivers/staging/line6/Kconfig"
source "drivers/staging/lttng/Kconfig"
source "drivers/gpu/drm/nouveau/Kconfig"
source "drivers/staging/octeon/Kconfig"
@@ -25,7 +25,6 @@ obj-$(CONFIG_TRANZPORT) += frontier/
obj-$(CONFIG_POHMELFS) += pohmelfs/
obj-$(CONFIG_IDE_PHISON) += phison/
obj-$(CONFIG_LINE6_USB) += line6/
obj-$(CONFIG_LTTNG) += lttng/
obj-$(CONFIG_USB_SERIAL_QUATECH2) += serqt_usb2/
obj-$(CONFIG_USB_SERIAL_QUATECH_USB2) += quatech_usb2/
obj-$(CONFIG_OCTEON_ETHERNET) += octeon/
config LTTNG
tristate "LTTng kernel tracer"
depends on TRACEPOINTS
help
The LTTng 2.0 Tracer Toolchain allows integrated kernel and
user-space tracing from a single user interface: the "lttng"
command. See the http://lttng.org website for the "lttng-tools"
user-space tracer control tools package and the "babeltrace"
package for conversion of trace data to a human-readable
format.
LTTng features:
- System-wide tracing across kernel, libraries and
applications,
- Tracepoints, detailed syscall tracing (fast strace replacement),
Function tracer, CPU Performance Monitoring Unit (PMU) counters
and kprobes support,
- Ability to attach "context" information to events in the
trace (e.g. any PMU counter, pid, ppid, tid, comm name, etc). All
the extra information fields to be collected with events are
optional, specified on a per-tracing-session basis (except for
timestamp and event id, which are mandatory).
- Precise and fast clock sources with near cycle-level
timestamps,
- Efficient trace data transport:
- Compact Binary format with CTF,
- Per-core buffers ensure scalability,
- Fast-paths in caller context, amortized synchronization,
- Zero-copy using splice and mmap system calls, over disk,
network or consumed in-place,
- Multiple concurrent tracing sessions are supported,
- Designed to meet hard real-time constraints,
- Supports live streaming of the trace data,
- Produces CTF (Common Trace Format) natively (see
http://www.efficios.com/ctf).
LTTng modules licensing
Mathieu Desnoyers
June 2, 2011
* LGPLv2.1/GPLv2 dual-license
The files contained within this package are licensed under the
LGPLv2.1/GPLv2 dual-license (see lgpl-2.1.txt and gpl-2.0.txt for
details), except for the files identified in the following sections.
* GPLv2 license
These files are licensed exclusively under the GPLv2 license. See
gpl-2.0.txt for details.
lib/ringbuffer/ring_buffer_splice.c
lib/ringbuffer/ring_buffer_mmap.c
instrumentation/events/mainline/*.h
instrumentation/events/lttng-modules/*.h
* MIT-style license
These files are licensed under an MIT-style license:
lib/prio_heap/lttng_prio_heap.h
lib/prio_heap/lttng_prio_heap.c
lib/bitfield.h
#
# Makefile for the LTTng modules.
#
obj-m += ltt-ring-buffer-client-discard.o
obj-m += ltt-ring-buffer-client-overwrite.o
obj-m += ltt-ring-buffer-metadata-client.o
obj-m += ltt-ring-buffer-client-mmap-discard.o
obj-m += ltt-ring-buffer-client-mmap-overwrite.o
obj-m += ltt-ring-buffer-metadata-mmap-client.o
obj-m += ltt-relay.o
ltt-relay-objs := ltt-events.o ltt-debugfs-abi.o \
ltt-probes.o ltt-context.o \
lttng-context-pid.o lttng-context-procname.o \
lttng-context-prio.o lttng-context-nice.o \
lttng-context-vpid.o lttng-context-tid.o \
lttng-context-vtid.o lttng-context-ppid.o \
lttng-context-vppid.o lttng-calibrate.o
ifneq ($(CONFIG_HAVE_SYSCALL_TRACEPOINTS),)
ltt-relay-objs += lttng-syscalls.o
endif
ifneq ($(CONFIG_PERF_EVENTS),)
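# The perf counters context object depends on the perf event API; the
# version test below builds it only on kernels >= 2.6.33 (or any 3.x).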
ltt-relay-objs += $(shell \
if [ $(VERSION) -ge 3 \
-o \( $(VERSION) -eq 2 -a $(PATCHLEVEL) -ge 6 -a $(SUBLEVEL) -ge 33 \) ] ; then \
echo "lttng-context-perf-counters.o" ; fi;)
endif
obj-m += probes/
obj-m += lib/
LTTng 2.0 modules
Mathieu Desnoyers
November 1st, 2011
The LTTng 2.0 kernel modules are currently part of the Linux kernel staging
tree. It features (new features since LTTng 0.x):
- Produces CTF (Common Trace Format) natively,
(http://www.efficios.com/ctf)
- Tracepoints, Function tracer, CPU Performance Monitoring Unit (PMU)
counters, kprobes, and kretprobes support,
- Integrated interface for both kernel and userspace tracing,
- Ability to attach "context" information to events in the
trace (e.g. any PMU counter, pid, ppid, tid, comm name, etc).
All the extra information fields to be collected with events are
optional, specified on a per-tracing-session basis (except for
timestamp and event id, which are mandatory).
To build and install, you need to select "Staging" modules, and the
LTTng kernel tracer.
Use lttng-tools to control the tracer. LTTng tools should automatically
load the kernel modules when needed. Use Babeltrace to print traces as a
human-readable text log. These tools are available at the following URL:
http://lttng.org/lttng2.0
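As an illustration, a minimal kernel tracing session with lttng-tools
looks roughly like the following sketch (session and event names are
arbitrary examples; the default trace output path may differ):

    lttng create mysession
    lttng enable-event -k sched_switch
    lttng start
    # ... run the workload to be traced ...
    lttng stop
    lttng destroy
    babeltrace ~/lttng-traces/mysession*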
Please note that LTTng-UST 2.0 (the user-space tracing counterpart of
LTTng 2.0) is now ready to be used, but is still only available from the
git repository.
So far, it has been tested on vanilla Linux kernels 2.6.38, 2.6.39 and
3.0 (on x86 32/64-bit, and powerpc 32-bit at the moment, build tested on
ARM). It should work fine with newer kernels and other architectures,
but expect build issues with kernels older than 2.6.36. The clock source
currently used is the standard gettimeofday (slower, less scalable and
less precise than the LTTng 0.x clocks). Support for LTTng 0.x clocks
will be added back soon into LTTng 2.0. Please note that lttng-modules
2.0 can build on a Linux kernel patched with the LTTng 0.x patchset, but
lttng-modules 2.0 replaces lttng-modules 0.x, so the two tracers
cannot be installed at the same time for a given kernel version.
* Note about Perf PMU counters support
Each PMU counter has its zero value set when it is attached to a context with
add-context. Therefore, it is normal that the same counters attached to both the
stream context and event context show different values for a given event; what
matters is that they increment at the same rate.
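For example, a sketch of attaching the same PMU counter to both a
channel (stream) context and a single event's context (the exact
context type and channel names depend on the lttng-tools version):

    lttng add-context -k -c channel0 -t perf:cpu-cycles
    lttng add-context -k -e sched_switch -t perf:cpu-cycles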
Please contact Mathieu Desnoyers <mathieu.desnoyers@efficios.com> for
questions about this TODO list. The "Cleanup/Testing" section should be
gone through before integration into mainline. The "Features" section
is a wish list of features to complete before releasing the "LTTng 2.0"
final version; they are not required for LTTng to work. These features
are mostly performance and instrumentation enhancements.
TODO:
A) Cleanup/Testing
1) Remove debugfs "lttng" file (keep only procfs "lttng" file).
The rationale for this is that this file is needed for
user-level tracing support (LTTng-UST 2.0) intended to be
used on production systems, and therefore should be present as
part of a "usually mounted" filesystem rather than a debug
filesystem.
2) Cleanup wrappers. The drivers/staging/lttng/wrapper directory
contains various wrapper headers that use kallsyms lookups to
work around some missing EXPORT_SYMBOL_GPL() in the mainline
kernel. Ideally, those few symbols should become exported to
modules by the kernel. (A sketch of the wrapper pattern
follows this list.)
3) Test lib ring buffer snapshot feature.
When working on the lttngtop project, Julien Desfossez
reported that he needed to push the consumer position
forward explicitly with lib_ring_buffer_put_next_subbuf.
This means that although the usual case of paired
lib_ring_buffer_get_next_subbuf/lib_ring_buffer_put_next_subbuf
calls works fine, there is probably a problem that needs to be
investigated in
lib_ring_buffer_get_subbuf/lib_ring_buffer_put_subbuf, which
depend on the producer to push the reader position. (A
read-side sketch follows this list.)
Contact: Julien Desfossez <julien.desfossez@polymtl.ca>
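Two short sketches for items (2) and (3) above; names, types and
signatures are simplified and partly hypothetical.

The wrapper pattern from (2): resolve a non-exported symbol once
through kallsyms and call it via a function pointer (vmalloc_sync_all
serves purely as an example symbol):

    #include <linux/kallsyms.h>

    static void (*sync_all_sym)(void);

    static inline void wrapper_vmalloc_sync_all(void)
    {
            if (!sync_all_sym)
                    sync_all_sym = (void (*)(void))
                            kallsyms_lookup_name("vmalloc_sync_all");
            if (sync_all_sym)
                    sync_all_sym();
    }

The read-side pairing from (3): the usual consumer loop pairs the
_next variants, whereas get_subbuf/put_subbuf depend on the producer
to push the reader position:

    struct lib_ring_buffer *buf;    /* obtained from the channel */

    for (;;) {
            if (lib_ring_buffer_get_next_subbuf(buf) < 0)
                    break;          /* no sub-buffer ready */
            /* ... consume the sub-buffer payload here ... */
            lib_ring_buffer_put_next_subbuf(buf);
    }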
B) Features
1) Integration of the LTTng 0.x trace clocks into
LTTng 2.0.
Currently using mainline kernel monotonic clock. NMIs can
therefore not be traced, and this causes a significant
performance degradation compared to the LTTng 0.x trace
clocks. This implies creating drivers/staging/lttng/arch to
contain the arch-specific clock support files.
* Dependency: addition of clock descriptions to CTF.
See: http://git.lttng.org/?p=linux-2.6-lttng.git;a=summary
for the LTTng 0.x git tree.
2) Port OMAP3 LTTng trace clocks to x86 to support systems
without constant TSC.
* Dependency: (B.1)
See: http://git.lttng.org/?p=linux-2.6-lttng.git;a=summary
for the LTTng 0.x git tree.
3) Implement an mmap operation on an anonymous file created
by an LTTNG_KERNEL_CLOCK ioctl, to export synchronized
kernel and user-level LTTng trace clocks, with:
- shared per-cpu data,
- read seqlock.
The content exported by this shared memory area will be
arch-specific.
* Dependency: (B.1) && (B.2)
See: http://git.lttng.org/?p=linux-2.6-lttng.git;a=summary
for the LTTng 0.x git tree, which has vDSO support for
LTTng trace clock on the x86 architecture.
4) Integrate the "statedump" module from LTTng 0.x into LTTng
2.0.
* Dependency: addition of "dynamic enumerations" type to CTF.
See: http://git.lttng.org/?p=lttng-modules.git;a=shortlog;h=refs/heads/v0.19-stable
ltt-statedump.c
5) Generate system call TRACE_EVENT headers for all
architectures (currently done: x86 32/64).
5) Define "unknown" system calls into instrumentation/syscalls
override files / or do SYSCALL_DEFINE improvements to
mainline kernel to allow automatic generation of these
missing system call descriptions.
7) Create missing tracepoint event header files into
instrumentation/events from headers located in
include/trace/events/. Choice: either do as currently done,
and copy those headers locally into the lttng driver and
perform the modifications locally, or push TRACE_EVENT API
modification into mainline headers, which would require
collaboration from Ftrace/Perf maintainers.
8) Poll: implement a poll and/or epoll exclusive wakeup scheme,
which contradicts POSIX but protects multiple consumer
threads from the thundering herd effect.
9) Re-integrate sample modules from libringbuffer into
lttng driver. Those modules can be used as example of how to
use libringbuffer in other contexts than LTTng, and are
useful to perform benchmarks of the ringbuffer library.
See: http://www.efficios.com/ringbuffer
10) NOHZ support for lib ring buffer. The NOHZ infrastructure in the
Linux kernel does not support notifier chains, which keeps
LTTng from playing nicely with low power consumption setups
for flight recorder (overwrite mode) live traces. One way to
allow integration between NOHZ and LTTng would be to add
support for such notifiers into NOHZ kernel infrastructure.
11) Turn drivers/staging/lttng/ltt-probes.c probe_list into a
hash table. This turns the O(n^2) cost of registering n trace
systems into O(n) total, i.e. O(1) per system. (A sketch
follows this list.)
12) drivers/staging/lttng/probes/lttng-ftrace.c:
LTTng currently uses kretprobes for per-function tracing,
not the function tracer. So lttng-ftrace.c should be used
for "all" function tracing.
13) drivers/staging/lttng/probes/lttng-types.c:
This is a currently unused placeholder to export entire C
type declarations into the trace metadata, e.g. for support
of describing the layout of structures/enumeration mapping
along with syscall entry events. The design of this support
will likely change though, and become integrated with the
TRACE_EVENT support within lttng, by adding new macros, and
support for generation of metadata from these macros, to
allow description of those compound types/enumerations.
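A sketch of the hash table proposed in (11), keyed on the event system
name with a plain hlist array (sizes and type names are illustrative):

    #include <linux/jhash.h>
    #include <linux/list.h>
    #include <linux/string.h>
    #include <linux/types.h>

    #define PROBE_HASH_BITS  6
    #define PROBE_HASH_SIZE  (1 << PROBE_HASH_BITS)

    /* hypothetical node mirroring the entries held on probe_list */
    struct probe_entry {
            struct hlist_node node;
            const char *name;
    };

    static struct hlist_head probe_table[PROBE_HASH_SIZE];

    static struct hlist_head *probe_bucket(const char *name)
    {
            u32 hash = jhash(name, strlen(name), 0);

            return &probe_table[hash & (PROBE_HASH_SIZE - 1)];
    }

    /* registration becomes O(1) instead of a linear list scan */
    static void probe_hash_add(struct probe_entry *e)
    {
            hlist_add_head(&e->node, probe_bucket(e->name));
    }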
Please send patches
To: Greg Kroah-Hartman <greg@kroah.com>
To: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
The workflow for updating patches from a newer kernel:
Diff mainline/ and lttng-module/ directories.
Pull the new headers from mainline kernel to mainline/.
Copy them into lttng-modules.
Apply diff. Fix conflicts.
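Spelled out as shell steps (directory layout as described above; the
header name is an arbitrary example):

    # 1) capture the local modifications as a patch
    diff -urN mainline/ lttng-module/ > local-changes.diff
    # 2) pull the new header from a newer kernel tree into mainline/
    cp $KERNEL_SRC/include/trace/events/sched.h mainline/
    # 3) start the new lttng-module copy from the refreshed header
    cp mainline/sched.h lttng-module/
    # 4) re-apply the local modifications, fixing conflicts by hand
    patch -d lttng-module -p1 < local-changes.diff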
#undef TRACE_SYSTEM
#define TRACE_SYSTEM irq
#if !defined(_TRACE_IRQ_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_IRQ_H
#include <linux/tracepoint.h>
#ifndef _TRACE_IRQ_DEF_
#define _TRACE_IRQ_DEF_
struct irqaction;
struct softirq_action;
#define softirq_name(sirq) { sirq##_SOFTIRQ, #sirq }
#define show_softirq_name(val) \
__print_symbolic(val, \
softirq_name(HI), \
softirq_name(TIMER), \
softirq_name(NET_TX), \
softirq_name(NET_RX), \
softirq_name(BLOCK), \
softirq_name(BLOCK_IOPOLL), \
softirq_name(TASKLET), \
softirq_name(SCHED), \
softirq_name(HRTIMER), \
softirq_name(RCU))
#endif /* _TRACE_IRQ_DEF_ */
/**
* irq_handler_entry - called immediately before the irq action handler
* @irq: irq number
* @action: pointer to struct irqaction
*
* The struct irqaction pointed to by @action contains various
* information about the handler, including the device name,
* @action->name, and the device id, @action->dev_id. When used in
* conjunction with the irq_handler_exit tracepoint, we can figure
* out irq handler latencies.
*/
TRACE_EVENT(irq_handler_entry,
TP_PROTO(int irq, struct irqaction *action),
TP_ARGS(irq, action),
TP_STRUCT__entry(
__field( int, irq )
__string( name, action->name )
),
TP_fast_assign(
tp_assign(irq, irq)
tp_strcpy(name, action->name)
),
TP_printk("irq=%d name=%s", __entry->irq, __get_str(name))
)
/**
* irq_handler_exit - called immediately after the irq action handler returns
* @irq: irq number
* @action: pointer to struct irqaction
* @ret: return value
*
* If the @ret value is set to IRQ_HANDLED, then we know that the corresponding
* @action->handler successfully handled this irq. Otherwise, the irq might be
* a shared irq line, or the irq was not handled successfully. Can be used in
* conjunction with the irq_handler_entry to understand irq handler latencies.
*/
TRACE_EVENT(irq_handler_exit,
TP_PROTO(int irq, struct irqaction *action, int ret),
TP_ARGS(irq, action, ret),
TP_STRUCT__entry(
__field( int, irq )
__field( int, ret )
),
TP_fast_assign(
tp_assign(irq, irq)
tp_assign(ret, ret)
),
TP_printk("irq=%d ret=%s",
__entry->irq, __entry->ret ? "handled" : "unhandled")
)
DECLARE_EVENT_CLASS(softirq,
TP_PROTO(unsigned int vec_nr),
TP_ARGS(vec_nr),
TP_STRUCT__entry(
__field( unsigned int, vec )
),
TP_fast_assign(
tp_assign(vec, vec_nr)
),
TP_printk("vec=%u [action=%s]", __entry->vec,
show_softirq_name(__entry->vec))
)
/**
* softirq_entry - called immediately before the softirq handler
* @vec_nr: softirq vector number
*
* When used in combination with the softirq_exit tracepoint
* we can determine the softirq handler runtime.
*/
DEFINE_EVENT(softirq, softirq_entry,
TP_PROTO(unsigned int vec_nr),
TP_ARGS(vec_nr)
)
/**
* softirq_exit - called immediately after the softirq handler returns
* @vec_nr: softirq vector number
*
* When used in combination with the softirq_entry tracepoint
* we can determine the softirq handler runtime.
*/
DEFINE_EVENT(softirq, softirq_exit,
TP_PROTO(unsigned int vec_nr),
TP_ARGS(vec_nr)
)
/**
* softirq_raise - called immediately when a softirq is raised
* @vec_nr: softirq vector number
*
* When used in combination with the softirq_entry tracepoint
* we can determine the softirq raise-to-run latency.
*/
DEFINE_EVENT(softirq, softirq_raise,
TP_PROTO(unsigned int vec_nr),
TP_ARGS(vec_nr)
)
#endif /* _TRACE_IRQ_H */
/* This part must be outside protection */
#include "../../../probes/define_trace.h"
#if !defined(_TRACE_KVM_MAIN_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_KVM_MAIN_H
#include <linux/tracepoint.h>
#undef TRACE_SYSTEM
#define TRACE_SYSTEM kvm
#define ERSN(x) { KVM_EXIT_##x, "KVM_EXIT_" #x }
#define kvm_trace_exit_reason \
ERSN(UNKNOWN), ERSN(EXCEPTION), ERSN(IO), ERSN(HYPERCALL), \
ERSN(DEBUG), ERSN(HLT), ERSN(MMIO), ERSN(IRQ_WINDOW_OPEN), \
ERSN(SHUTDOWN), ERSN(FAIL_ENTRY), ERSN(INTR), ERSN(SET_TPR), \
ERSN(TPR_ACCESS), ERSN(S390_SIEIC), ERSN(S390_RESET), ERSN(DCR),\
ERSN(NMI), ERSN(INTERNAL_ERROR), ERSN(OSI)
TRACE_EVENT(kvm_userspace_exit,
TP_PROTO(__u32 reason, int errno),
TP_ARGS(reason, errno),
TP_STRUCT__entry(
__field( __u32, reason )
__field( int, errno )
),
TP_fast_assign(
tp_assign(reason, reason)
tp_assign(errno, errno)
),
TP_printk("reason %s (%d)",
__entry->errno < 0 ?
(__entry->errno == -EINTR ? "restart" : "error") :
__print_symbolic(__entry->reason, kvm_trace_exit_reason),
__entry->errno < 0 ? -__entry->errno : __entry->reason)
)
#if defined(__KVM_HAVE_IOAPIC)
TRACE_EVENT(kvm_set_irq,
TP_PROTO(unsigned int gsi, int level, int irq_source_id),
TP_ARGS(gsi, level, irq_source_id),
TP_STRUCT__entry(
__field( unsigned int, gsi )
__field( int, level )
__field( int, irq_source_id )
),
TP_fast_assign(
tp_assign(gsi, gsi)
tp_assign(level, level)
tp_assign(irq_source_id, irq_source_id)
),
TP_printk("gsi %u level %d source %d",
__entry->gsi, __entry->level, __entry->irq_source_id)
)
#define kvm_deliver_mode \
{0x0, "Fixed"}, \
{0x1, "LowPrio"}, \
{0x2, "SMI"}, \
{0x3, "Res3"}, \
{0x4, "NMI"}, \
{0x5, "INIT"}, \
{0x6, "SIPI"}, \
{0x7, "ExtINT"}
TRACE_EVENT(kvm_ioapic_set_irq,
TP_PROTO(__u64 e, int pin, bool coalesced),
TP_ARGS(e, pin, coalesced),
TP_STRUCT__entry(
__field( __u64, e )
__field( int, pin )
__field( bool, coalesced )
),
TP_fast_assign(
tp_assign(e, e)
tp_assign(pin, pin)
tp_assign(coalesced, coalesced)
),
TP_printk("pin %u dst %x vec=%u (%s|%s|%s%s)%s",
__entry->pin, (u8)(__entry->e >> 56), (u8)__entry->e,
__print_symbolic((__entry->e >> 8 & 0x7), kvm_deliver_mode),
(__entry->e & (1<<11)) ? "logical" : "physical",
(__entry->e & (1<<15)) ? "level" : "edge",
(__entry->e & (1<<16)) ? "|masked" : "",
__entry->coalesced ? " (coalesced)" : "")
)
TRACE_EVENT(kvm_msi_set_irq,
TP_PROTO(__u64 address, __u64 data),
TP_ARGS(address, data),
TP_STRUCT__entry(
__field( __u64, address )
__field( __u64, data )
),
TP_fast_assign(
tp_assign(address, address)
tp_assign(data, data)
),
TP_printk("dst %u vec %x (%s|%s|%s%s)",
(u8)(__entry->address >> 12), (u8)__entry->data,
__print_symbolic((__entry->data >> 8 & 0x7), kvm_deliver_mode),
(__entry->address & (1<<2)) ? "logical" : "physical",
(__entry->data & (1<<15)) ? "level" : "edge",
(__entry->address & (1<<3)) ? "|rh" : "")
)
#define kvm_irqchips \
{KVM_IRQCHIP_PIC_MASTER, "PIC master"}, \
{KVM_IRQCHIP_PIC_SLAVE, "PIC slave"}, \
{KVM_IRQCHIP_IOAPIC, "IOAPIC"}
TRACE_EVENT(kvm_ack_irq,
TP_PROTO(unsigned int irqchip, unsigned int pin),
TP_ARGS(irqchip, pin),
TP_STRUCT__entry(
__field( unsigned int, irqchip )
__field( unsigned int, pin )
),
TP_fast_assign(
tp_assign(irqchip, irqchip)
tp_assign(pin, pin)
),
TP_printk("irqchip %s pin %u",
__print_symbolic(__entry->irqchip, kvm_irqchips),
__entry->pin)
)
#endif /* defined(__KVM_HAVE_IOAPIC) */
#define KVM_TRACE_MMIO_READ_UNSATISFIED 0
#define KVM_TRACE_MMIO_READ 1
#define KVM_TRACE_MMIO_WRITE 2
#define kvm_trace_symbol_mmio \
{ KVM_TRACE_MMIO_READ_UNSATISFIED, "unsatisfied-read" }, \
{ KVM_TRACE_MMIO_READ, "read" }, \
{ KVM_TRACE_MMIO_WRITE, "write" }
TRACE_EVENT(kvm_mmio,
TP_PROTO(int type, int len, u64 gpa, u64 val),
TP_ARGS(type, len, gpa, val),
TP_STRUCT__entry(
__field( u32, type )
__field( u32, len )
__field( u64, gpa )
__field( u64, val )
),
TP_fast_assign(
tp_assign(type, type)
tp_assign(len, len)
tp_assign(gpa, gpa)
tp_assign(val, val)
),
TP_printk("mmio %s len %u gpa 0x%llx val 0x%llx",
__print_symbolic(__entry->type, kvm_trace_symbol_mmio),
__entry->len, __entry->gpa, __entry->val)
)
#define kvm_fpu_load_symbol \
{0, "unload"}, \
{1, "load"}
TRACE_EVENT(kvm_fpu,
TP_PROTO(int load),
TP_ARGS(load),
TP_STRUCT__entry(
__field( u32, load )
),
TP_fast_assign(
tp_assign(load, load)
),
TP_printk("%s", __print_symbolic(__entry->load, kvm_fpu_load_symbol))
)
TRACE_EVENT(kvm_age_page,
TP_PROTO(ulong hva, struct kvm_memory_slot *slot, int ref),
TP_ARGS(hva, slot, ref),
TP_STRUCT__entry(
__field( u64, hva )
__field( u64, gfn )
__field( u8, referenced )
),
TP_fast_assign(
tp_assign(hva, hva)
tp_assign(gfn,
slot->base_gfn + ((hva - slot->userspace_addr) >> PAGE_SHIFT))
tp_assign(referenced, ref)
),
TP_printk("hva %llx gfn %llx %s",
__entry->hva, __entry->gfn,
__entry->referenced ? "YOUNG" : "OLD")
)
#ifdef CONFIG_KVM_ASYNC_PF
DECLARE_EVENT_CLASS(kvm_async_get_page_class,
TP_PROTO(u64 gva, u64 gfn),
TP_ARGS(gva, gfn),
TP_STRUCT__entry(
__field(__u64, gva)
__field(u64, gfn)
),
TP_fast_assign(
tp_assign(gva, gva)
tp_assign(gfn, gfn)
),
TP_printk("gva = %#llx, gfn = %#llx", __entry->gva, __entry->gfn)
)
DEFINE_EVENT(kvm_async_get_page_class, kvm_try_async_get_page,
TP_PROTO(u64 gva, u64 gfn),
TP_ARGS(gva, gfn)
)
DEFINE_EVENT(kvm_async_get_page_class, kvm_async_pf_doublefault,
TP_PROTO(u64 gva, u64 gfn),
TP_ARGS(gva, gfn)
)
DECLARE_EVENT_CLASS(kvm_async_pf_nopresent_ready,
TP_PROTO(u64 token, u64 gva),
TP_ARGS(token, gva),
TP_STRUCT__entry(
__field(__u64, token)
__field(__u64, gva)
),
TP_fast_assign(
tp_assign(token, token)
tp_assign(gva, gva)
),
TP_printk("token %#llx gva %#llx", __entry->token, __entry->gva)
)
DEFINE_EVENT(kvm_async_pf_nopresent_ready, kvm_async_pf_not_present,
TP_PROTO(u64 token, u64 gva),
TP_ARGS(token, gva)
)
DEFINE_EVENT(kvm_async_pf_nopresent_ready, kvm_async_pf_ready,
TP_PROTO(u64 token, u64 gva),
TP_ARGS(token, gva)
)
TRACE_EVENT(
kvm_async_pf_completed,
TP_PROTO(unsigned long address, struct page *page, u64 gva),
TP_ARGS(address, page, gva),
TP_STRUCT__entry(
__field(unsigned long, address)
__field(pfn_t, pfn)
__field(u64, gva)
),
TP_fast_assign(
tp_assign(address, address)
tp_assign(pfn, page ? page_to_pfn(page) : 0)
tp_assign(gva, gva)
),
TP_printk("gva %#llx address %#lx pfn %#llx", __entry->gva,
__entry->address, __entry->pfn)
)
#endif
#endif /* _TRACE_KVM_MAIN_H */
/* This part must be outside protection */
#include "../../../probes/define_trace.h"
#undef TRACE_SYSTEM
#define TRACE_SYSTEM lttng
#if !defined(_TRACE_LTTNG_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_LTTNG_H
#include <linux/tracepoint.h>
TRACE_EVENT(lttng_metadata,
TP_PROTO(const char *str),
TP_ARGS(str),
/*
* Not exactly a string: more a sequence of bytes (dynamic
* array) without the length. This is a dummy anyway: we only
* use this declaration to generate an event metadata entry.
*/
TP_STRUCT__entry(
__string( str, str )
),
TP_fast_assign(
tp_strcpy(str, str)
),
TP_printk("")
)
#endif /* _TRACE_LTTNG_H */
/* This part must be outside protection */
#include "../../../probes/define_trace.h"
#undef TRACE_SYSTEM
#define TRACE_SYSTEM sched
#if !defined(_TRACE_SCHED_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_SCHED_H
#include <linux/sched.h>
#include <linux/tracepoint.h>
#ifndef _TRACE_SCHED_DEF_
#define _TRACE_SCHED_DEF_
static inline long __trace_sched_switch_state(struct task_struct *p)
{
long state = p->state;
#ifdef CONFIG_PREEMPT
/*
* For all intents and purposes a preempted task is a running task.
*/
if (task_thread_info(p)->preempt_count & PREEMPT_ACTIVE)
state = TASK_RUNNING;
#endif
return state;
}
#endif /* _TRACE_SCHED_DEF_ */
/*
* Tracepoint for calling kthread_stop, performed to end a kthread:
*/
TRACE_EVENT(sched_kthread_stop,
TP_PROTO(struct task_struct *t),
TP_ARGS(t),
TP_STRUCT__entry(
__array_text( char, comm, TASK_COMM_LEN )
__field( pid_t, tid )
),
TP_fast_assign(
tp_memcpy(comm, t->comm, TASK_COMM_LEN)
tp_assign(tid, t->pid)
),
TP_printk("comm=%s tid=%d", __entry->comm, __entry->tid)
)
/*
* Tracepoint for the return value of the kthread stopping:
*/
TRACE_EVENT(sched_kthread_stop_ret,
TP_PROTO(int ret),
TP_ARGS(ret),
TP_STRUCT__entry(
__field( int, ret )
),
TP_fast_assign(
tp_assign(ret, ret)
),
TP_printk("ret=%d", __entry->ret)
)
/*
* Tracepoint for waking up a task:
*/
DECLARE_EVENT_CLASS(sched_wakeup_template,
TP_PROTO(struct task_struct *p, int success),
TP_ARGS(p, success),
TP_STRUCT__entry(
__array_text( char, comm, TASK_COMM_LEN )
__field( pid_t, tid )
__field( int, prio )
__field( int, success )
__field( int, target_cpu )
),
TP_fast_assign(
tp_memcpy(comm, p->comm, TASK_COMM_LEN)
tp_assign(tid, p->pid)
tp_assign(prio, p->prio)
tp_assign(success, success)
tp_assign(target_cpu, task_cpu(p))
),
TP_printk("comm=%s tid=%d prio=%d success=%d target_cpu=%03d",
__entry->comm, __entry->tid, __entry->prio,
__entry->success, __entry->target_cpu)
)
DEFINE_EVENT(sched_wakeup_template, sched_wakeup,
TP_PROTO(struct task_struct *p, int success),
TP_ARGS(p, success))
/*
* Tracepoint for waking up a new task:
*/
DEFINE_EVENT(sched_wakeup_template, sched_wakeup_new,
TP_PROTO(struct task_struct *p, int success),
TP_ARGS(p, success))
/*
* Tracepoint for task switches, performed by the scheduler:
*/
TRACE_EVENT(sched_switch,
TP_PROTO(struct task_struct *prev,
struct task_struct *next),
TP_ARGS(prev, next),
TP_STRUCT__entry(
__array_text( char, prev_comm, TASK_COMM_LEN )
__field( pid_t, prev_tid )
__field( int, prev_prio )
__field( long, prev_state )
__array_text( char, next_comm, TASK_COMM_LEN )
__field( pid_t, next_tid )
__field( int, next_prio )
),
TP_fast_assign(
tp_memcpy(next_comm, next->comm, TASK_COMM_LEN)
tp_assign(prev_tid, prev->pid)
tp_assign(prev_prio, prev->prio - MAX_RT_PRIO)
tp_assign(prev_state, __trace_sched_switch_state(prev))
tp_memcpy(prev_comm, prev->comm, TASK_COMM_LEN)
tp_assign(next_tid, next->pid)
tp_assign(next_prio, next->prio - MAX_RT_PRIO)
),
TP_printk("prev_comm=%s prev_tid=%d prev_prio=%d prev_state=%s ==> next_comm=%s next_tid=%d next_prio=%d",
__entry->prev_comm, __entry->prev_tid, __entry->prev_prio,
__entry->prev_state ?
__print_flags(__entry->prev_state, "|",
{ 1, "S"} , { 2, "D" }, { 4, "T" }, { 8, "t" },
{ 16, "Z" }, { 32, "X" }, { 64, "x" },
{ 128, "W" }) : "R",
__entry->next_comm, __entry->next_tid, __entry->next_prio)
)
/*
* Tracepoint for a task being migrated:
*/
TRACE_EVENT(sched_migrate_task,
TP_PROTO(struct task_struct *p, int dest_cpu),
TP_ARGS(p, dest_cpu),
TP_STRUCT__entry(
__array_text( char, comm, TASK_COMM_LEN )
__field( pid_t, tid )
__field( int, prio )
__field( int, orig_cpu )
__field( int, dest_cpu )
),
TP_fast_assign(
tp_memcpy(comm, p->comm, TASK_COMM_LEN)
tp_assign(tid, p->pid)
tp_assign(prio, p->prio - MAX_RT_PRIO)
tp_assign(orig_cpu, task_cpu(p))
tp_assign(dest_cpu, dest_cpu)
),
TP_printk("comm=%s tid=%d prio=%d orig_cpu=%d dest_cpu=%d",
__entry->comm, __entry->tid, __entry->prio,
__entry->orig_cpu, __entry->dest_cpu)
)
DECLARE_EVENT_CLASS(sched_process_template,
TP_PROTO(struct task_struct *p),
TP_ARGS(p),
TP_STRUCT__entry(
__array_text( char, comm, TASK_COMM_LEN )
__field( pid_t, tid )
__field( int, prio )
),
TP_fast_assign(
tp_memcpy(comm, p->comm, TASK_COMM_LEN)
tp_assign(tid, p->pid)
tp_assign(prio, p->prio - MAX_RT_PRIO)
),
TP_printk("comm=%s tid=%d prio=%d",
__entry->comm, __entry->tid, __entry->prio)
)
/*
* Tracepoint for freeing a task:
*/
DEFINE_EVENT(sched_process_template, sched_process_free,
TP_PROTO(struct task_struct *p),
TP_ARGS(p))
/*
* Tracepoint for a task exiting:
*/
DEFINE_EVENT(sched_process_template, sched_process_exit,
TP_PROTO(struct task_struct *p),
TP_ARGS(p))
/*
* Tracepoint for waiting on task to unschedule:
*/
DEFINE_EVENT(sched_process_template, sched_wait_task,
TP_PROTO(struct task_struct *p),
TP_ARGS(p))
/*
* Tracepoint for a waiting task:
*/
TRACE_EVENT(sched_process_wait,
TP_PROTO(struct pid *pid),
TP_ARGS(pid),
TP_STRUCT__entry(
__array_text( char, comm, TASK_COMM_LEN )
__field( pid_t, tid )
__field( int, prio )
),
TP_fast_assign(
tp_memcpy(comm, current->comm, TASK_COMM_LEN)
tp_assign(tid, pid_nr(pid))
tp_assign(prio, current->prio - MAX_RT_PRIO)
),
TP_printk("comm=%s tid=%d prio=%d",
__entry->comm, __entry->tid, __entry->prio)
)
/*
* Tracepoint for do_fork:
*/
TRACE_EVENT(sched_process_fork,
TP_PROTO(struct task_struct *parent, struct task_struct *child),
TP_ARGS(parent, child),
TP_STRUCT__entry(
__array_text( char, parent_comm, TASK_COMM_LEN )
__field( pid_t, parent_tid )
__array_text( char, child_comm, TASK_COMM_LEN )
__field( pid_t, child_tid )
),
TP_fast_assign(
tp_memcpy(parent_comm, parent->comm, TASK_COMM_LEN)
tp_assign(parent_tid, parent->pid)
tp_memcpy(child_comm, child->comm, TASK_COMM_LEN)
tp_assign(child_tid, child->pid)
),
TP_printk("comm=%s tid=%d child_comm=%s child_tid=%d",
__entry->parent_comm, __entry->parent_tid,
__entry->child_comm, __entry->child_tid)
)
/*
* XXX the below sched_stat tracepoints only apply to SCHED_OTHER/BATCH/IDLE;
* adding sched_stat support to SCHED_FIFO/RR would be welcome.
*/
DECLARE_EVENT_CLASS(sched_stat_template,
TP_PROTO(struct task_struct *tsk, u64 delay),
TP_ARGS(tsk, delay),
TP_STRUCT__entry(
__array_text( char, comm, TASK_COMM_LEN )
__field( pid_t, tid )
__field( u64, delay )
),
TP_fast_assign(
tp_memcpy(comm, tsk->comm, TASK_COMM_LEN)
tp_assign(tid, tsk->pid)
tp_assign(delay, delay)
)
TP_perf_assign(
__perf_count(delay)
),
TP_printk("comm=%s tid=%d delay=%Lu [ns]",
__entry->comm, __entry->tid,
(unsigned long long)__entry->delay)
)
/*
* Tracepoint for accounting wait time (time the task is runnable
* but not actually running due to scheduler contention).
*/
DEFINE_EVENT(sched_stat_template, sched_stat_wait,
TP_PROTO(struct task_struct *tsk, u64 delay),
TP_ARGS(tsk, delay))
/*
* Tracepoint for accounting sleep time (time the task is not runnable,
* including iowait, see below).
*/
DEFINE_EVENT(sched_stat_template, sched_stat_sleep,
TP_PROTO(struct task_struct *tsk, u64 delay),
TP_ARGS(tsk, delay))
/*
* Tracepoint for accounting iowait time (time the task is not runnable
* due to waiting on IO to complete).
*/
DEFINE_EVENT(sched_stat_template, sched_stat_iowait,
TP_PROTO(struct task_struct *tsk, u64 delay),
TP_ARGS(tsk, delay))
/*
* Tracepoint for accounting runtime (time the task is executing
* on a CPU).
*/
TRACE_EVENT(sched_stat_runtime,
TP_PROTO(struct task_struct *tsk, u64 runtime, u64 vruntime),
TP_ARGS(tsk, runtime, vruntime),
TP_STRUCT__entry(
__array_text( char, comm, TASK_COMM_LEN )
__field( pid_t, tid )
__field( u64, runtime )
__field( u64, vruntime )
),
TP_fast_assign(
tp_memcpy(comm, tsk->comm, TASK_COMM_LEN)
tp_assign(tid, tsk->pid)
tp_assign(runtime, runtime)
tp_assign(vruntime, vruntime)
)
TP_perf_assign(
__perf_count(runtime)
),
TP_printk("comm=%s tid=%d runtime=%Lu [ns] vruntime=%Lu [ns]",
__entry->comm, __entry->tid,
(unsigned long long)__entry->runtime,
(unsigned long long)__entry->vruntime)
)
/*
* Tracepoint for showing priority inheritance modifying a task's
* priority.
*/
TRACE_EVENT(sched_pi_setprio,
TP_PROTO(struct task_struct *tsk, int newprio),
TP_ARGS(tsk, newprio),
TP_STRUCT__entry(
__array_text( char, comm, TASK_COMM_LEN )
__field( pid_t, tid )
__field( int, oldprio )
__field( int, newprio )
),
TP_fast_assign(
tp_memcpy(comm, tsk->comm, TASK_COMM_LEN)
tp_assign(tid, tsk->pid)
tp_assign(oldprio, tsk->prio - MAX_RT_PRIO)
tp_assign(newprio, newprio - MAX_RT_PRIO)
),
TP_printk("comm=%s tid=%d oldprio=%d newprio=%d",
__entry->comm, __entry->tid,
__entry->oldprio, __entry->newprio)
)
#endif /* _TRACE_SCHED_H */
/* This part must be outside protection */
#include "../../../probes/define_trace.h"
#undef TRACE_SYSTEM
#define TRACE_SYSTEM raw_syscalls
#define TRACE_INCLUDE_FILE syscalls
#if !defined(_TRACE_EVENTS_SYSCALLS_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_EVENTS_SYSCALLS_H
#include <linux/tracepoint.h>
#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
#ifndef _TRACE_SYSCALLS_DEF_
#define _TRACE_SYSCALLS_DEF_
#include <asm/ptrace.h>
#include <asm/syscall.h>
#endif /* _TRACE_SYSCALLS_DEF_ */
TRACE_EVENT(sys_enter,
TP_PROTO(struct pt_regs *regs, long id),
TP_ARGS(regs, id),
TP_STRUCT__entry(
__field( long, id )
__array( unsigned long, args, 6 )
),
TP_fast_assign(
tp_assign(id, id)
{
tp_memcpy(args,
({
unsigned long args_copy[6];
syscall_get_arguments(current, regs,
0, 6, args_copy);
args_copy;
}), 6 * sizeof(unsigned long));
}
),
TP_printk("NR %ld (%lx, %lx, %lx, %lx, %lx, %lx)",
__entry->id,
__entry->args[0], __entry->args[1], __entry->args[2],
__entry->args[3], __entry->args[4], __entry->args[5])
)
TRACE_EVENT(sys_exit,
TP_PROTO(struct pt_regs *regs, long ret),
TP_ARGS(regs, ret),
TP_STRUCT__entry(
__field( long, id )
__field( long, ret )
),
TP_fast_assign(
tp_assign(id, syscall_get_nr(current, regs))
tp_assign(ret, ret)
),
TP_printk("NR %ld = %ld",
__entry->id, __entry->ret)
)
#endif /* CONFIG_HAVE_SYSCALL_TRACEPOINTS */
#endif /* _TRACE_EVENTS_SYSCALLS_H */
/* This part must be outside protection */
#include "../../../probes/define_trace.h"
#undef TRACE_SYSTEM
#define TRACE_SYSTEM irq
#if !defined(_TRACE_IRQ_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_IRQ_H
#include <linux/tracepoint.h>
struct irqaction;
struct softirq_action;
#define softirq_name(sirq) { sirq##_SOFTIRQ, #sirq }
#define show_softirq_name(val) \
__print_symbolic(val, \
softirq_name(HI), \
softirq_name(TIMER), \
softirq_name(NET_TX), \
softirq_name(NET_RX), \
softirq_name(BLOCK), \
softirq_name(BLOCK_IOPOLL), \
softirq_name(TASKLET), \
softirq_name(SCHED), \
softirq_name(HRTIMER), \
softirq_name(RCU))
/**
* irq_handler_entry - called immediately before the irq action handler
* @irq: irq number
* @action: pointer to struct irqaction
*
* The struct irqaction pointed to by @action contains various
* information about the handler, including the device name,
* @action->name, and the device id, @action->dev_id. When used in
* conjunction with the irq_handler_exit tracepoint, we can figure
* out irq handler latencies.
*/
TRACE_EVENT(irq_handler_entry,
TP_PROTO(int irq, struct irqaction *action),
TP_ARGS(irq, action),
TP_STRUCT__entry(
__field( int, irq )
__string( name, action->name )
),
TP_fast_assign(
__entry->irq = irq;
__assign_str(name, action->name);
),
TP_printk("irq=%d name=%s", __entry->irq, __get_str(name))
);
/**
* irq_handler_exit - called immediately after the irq action handler returns
* @irq: irq number
* @action: pointer to struct irqaction
* @ret: return value
*
* If the @ret value is set to IRQ_HANDLED, then we know that the corresponding
* @action->handler successfully handled this irq. Otherwise, the irq might be
* a shared irq line, or the irq was not handled successfully. Can be used in
* conjunction with the irq_handler_entry to understand irq handler latencies.
*/
TRACE_EVENT(irq_handler_exit,
TP_PROTO(int irq, struct irqaction *action, int ret),
TP_ARGS(irq, action, ret),
TP_STRUCT__entry(
__field( int, irq )
__field( int, ret )
),
TP_fast_assign(
__entry->irq = irq;
__entry->ret = ret;
),
TP_printk("irq=%d ret=%s",
__entry->irq, __entry->ret ? "handled" : "unhandled")
);
DECLARE_EVENT_CLASS(softirq,
TP_PROTO(unsigned int vec_nr),
TP_ARGS(vec_nr),
TP_STRUCT__entry(
__field( unsigned int, vec )
),
TP_fast_assign(
__entry->vec = vec_nr;
),
TP_printk("vec=%u [action=%s]", __entry->vec,
show_softirq_name(__entry->vec))
);
/**
* softirq_entry - called immediately before the softirq handler
* @vec_nr: softirq vector number
*
* When used in combination with the softirq_exit tracepoint
* we can determine the softirq handler runtime.
*/
DEFINE_EVENT(softirq, softirq_entry,
TP_PROTO(unsigned int vec_nr),
TP_ARGS(vec_nr)
);
/**
* softirq_exit - called immediately after the softirq handler returns
* @vec_nr: softirq vector number
*
* When used in combination with the softirq_entry tracepoint
* we can determine the softirq handler runtime.
*/
DEFINE_EVENT(softirq, softirq_exit,
TP_PROTO(unsigned int vec_nr),
TP_ARGS(vec_nr)
);
/**
* softirq_raise - called immediately when a softirq is raised
* @vec_nr: softirq vector number
*
* When used in combination with the softirq_entry tracepoint
* we can determine the softirq raise-to-run latency.
*/
DEFINE_EVENT(softirq, softirq_raise,
TP_PROTO(unsigned int vec_nr),
TP_ARGS(vec_nr)
);
#endif /* _TRACE_IRQ_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
#if !defined(_TRACE_KVM_MAIN_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_KVM_MAIN_H
#include <linux/tracepoint.h>
#undef TRACE_SYSTEM
#define TRACE_SYSTEM kvm
#define ERSN(x) { KVM_EXIT_##x, "KVM_EXIT_" #x }
#define kvm_trace_exit_reason \
ERSN(UNKNOWN), ERSN(EXCEPTION), ERSN(IO), ERSN(HYPERCALL), \
ERSN(DEBUG), ERSN(HLT), ERSN(MMIO), ERSN(IRQ_WINDOW_OPEN), \
ERSN(SHUTDOWN), ERSN(FAIL_ENTRY), ERSN(INTR), ERSN(SET_TPR), \
ERSN(TPR_ACCESS), ERSN(S390_SIEIC), ERSN(S390_RESET), ERSN(DCR),\
ERSN(NMI), ERSN(INTERNAL_ERROR), ERSN(OSI)
TRACE_EVENT(kvm_userspace_exit,
TP_PROTO(__u32 reason, int errno),
TP_ARGS(reason, errno),
TP_STRUCT__entry(
__field( __u32, reason )
__field( int, errno )
),
TP_fast_assign(
__entry->reason = reason;
__entry->errno = errno;
),
TP_printk("reason %s (%d)",
__entry->errno < 0 ?
(__entry->errno == -EINTR ? "restart" : "error") :
__print_symbolic(__entry->reason, kvm_trace_exit_reason),
__entry->errno < 0 ? -__entry->errno : __entry->reason)
);
#if defined(__KVM_HAVE_IOAPIC)
TRACE_EVENT(kvm_set_irq,
TP_PROTO(unsigned int gsi, int level, int irq_source_id),
TP_ARGS(gsi, level, irq_source_id),
TP_STRUCT__entry(
__field( unsigned int, gsi )
__field( int, level )
__field( int, irq_source_id )
),
TP_fast_assign(
__entry->gsi = gsi;
__entry->level = level;
__entry->irq_source_id = irq_source_id;
),
TP_printk("gsi %u level %d source %d",
__entry->gsi, __entry->level, __entry->irq_source_id)
);
#define kvm_deliver_mode \
{0x0, "Fixed"}, \
{0x1, "LowPrio"}, \
{0x2, "SMI"}, \
{0x3, "Res3"}, \
{0x4, "NMI"}, \
{0x5, "INIT"}, \
{0x6, "SIPI"}, \
{0x7, "ExtINT"}
TRACE_EVENT(kvm_ioapic_set_irq,
TP_PROTO(__u64 e, int pin, bool coalesced),
TP_ARGS(e, pin, coalesced),
TP_STRUCT__entry(
__field( __u64, e )
__field( int, pin )
__field( bool, coalesced )
),
TP_fast_assign(
__entry->e = e;
__entry->pin = pin;
__entry->coalesced = coalesced;
),
TP_printk("pin %u dst %x vec=%u (%s|%s|%s%s)%s",
__entry->pin, (u8)(__entry->e >> 56), (u8)__entry->e,
__print_symbolic((__entry->e >> 8 & 0x7), kvm_deliver_mode),
(__entry->e & (1<<11)) ? "logical" : "physical",
(__entry->e & (1<<15)) ? "level" : "edge",
(__entry->e & (1<<16)) ? "|masked" : "",
__entry->coalesced ? " (coalesced)" : "")
);
TRACE_EVENT(kvm_msi_set_irq,
TP_PROTO(__u64 address, __u64 data),
TP_ARGS(address, data),
TP_STRUCT__entry(
__field( __u64, address )
__field( __u64, data )
),
TP_fast_assign(
__entry->address = address;
__entry->data = data;
),
TP_printk("dst %u vec %x (%s|%s|%s%s)",
(u8)(__entry->address >> 12), (u8)__entry->data,
__print_symbolic((__entry->data >> 8 & 0x7), kvm_deliver_mode),
(__entry->address & (1<<2)) ? "logical" : "physical",
(__entry->data & (1<<15)) ? "level" : "edge",
(__entry->address & (1<<3)) ? "|rh" : "")
);
#define kvm_irqchips \
{KVM_IRQCHIP_PIC_MASTER, "PIC master"}, \
{KVM_IRQCHIP_PIC_SLAVE, "PIC slave"}, \
{KVM_IRQCHIP_IOAPIC, "IOAPIC"}
TRACE_EVENT(kvm_ack_irq,
TP_PROTO(unsigned int irqchip, unsigned int pin),
TP_ARGS(irqchip, pin),
TP_STRUCT__entry(
__field( unsigned int, irqchip )
__field( unsigned int, pin )
),
TP_fast_assign(
__entry->irqchip = irqchip;
__entry->pin = pin;
),
TP_printk("irqchip %s pin %u",
__print_symbolic(__entry->irqchip, kvm_irqchips),
__entry->pin)
);
#endif /* defined(__KVM_HAVE_IOAPIC) */
#define KVM_TRACE_MMIO_READ_UNSATISFIED 0
#define KVM_TRACE_MMIO_READ 1
#define KVM_TRACE_MMIO_WRITE 2
#define kvm_trace_symbol_mmio \
{ KVM_TRACE_MMIO_READ_UNSATISFIED, "unsatisfied-read" }, \
{ KVM_TRACE_MMIO_READ, "read" }, \
{ KVM_TRACE_MMIO_WRITE, "write" }
TRACE_EVENT(kvm_mmio,
TP_PROTO(int type, int len, u64 gpa, u64 val),
TP_ARGS(type, len, gpa, val),
TP_STRUCT__entry(
__field( u32, type )
__field( u32, len )
__field( u64, gpa )
__field( u64, val )
),
TP_fast_assign(
__entry->type = type;
__entry->len = len;
__entry->gpa = gpa;
__entry->val = val;
),
TP_printk("mmio %s len %u gpa 0x%llx val 0x%llx",
__print_symbolic(__entry->type, kvm_trace_symbol_mmio),
__entry->len, __entry->gpa, __entry->val)
);
#define kvm_fpu_load_symbol \
{0, "unload"}, \
{1, "load"}
TRACE_EVENT(kvm_fpu,
TP_PROTO(int load),
TP_ARGS(load),
TP_STRUCT__entry(
__field( u32, load )
),
TP_fast_assign(
__entry->load = load;
),
TP_printk("%s", __print_symbolic(__entry->load, kvm_fpu_load_symbol))
);
TRACE_EVENT(kvm_age_page,
TP_PROTO(ulong hva, struct kvm_memory_slot *slot, int ref),
TP_ARGS(hva, slot, ref),
TP_STRUCT__entry(
__field( u64, hva )
__field( u64, gfn )
__field( u8, referenced )
),
TP_fast_assign(
__entry->hva = hva;
__entry->gfn =
slot->base_gfn + ((hva - slot->userspace_addr) >> PAGE_SHIFT);
__entry->referenced = ref;
),
TP_printk("hva %llx gfn %llx %s",
__entry->hva, __entry->gfn,
__entry->referenced ? "YOUNG" : "OLD")
);
#ifdef CONFIG_KVM_ASYNC_PF
DECLARE_EVENT_CLASS(kvm_async_get_page_class,
TP_PROTO(u64 gva, u64 gfn),
TP_ARGS(gva, gfn),
TP_STRUCT__entry(
__field(__u64, gva)
__field(u64, gfn)
),
TP_fast_assign(
__entry->gva = gva;
__entry->gfn = gfn;
),
TP_printk("gva = %#llx, gfn = %#llx", __entry->gva, __entry->gfn)
);
DEFINE_EVENT(kvm_async_get_page_class, kvm_try_async_get_page,
TP_PROTO(u64 gva, u64 gfn),
TP_ARGS(gva, gfn)
);
DEFINE_EVENT(kvm_async_get_page_class, kvm_async_pf_doublefault,
TP_PROTO(u64 gva, u64 gfn),
TP_ARGS(gva, gfn)
);
DECLARE_EVENT_CLASS(kvm_async_pf_nopresent_ready,
TP_PROTO(u64 token, u64 gva),
TP_ARGS(token, gva),
TP_STRUCT__entry(
__field(__u64, token)
__field(__u64, gva)
),
TP_fast_assign(
__entry->token = token;
__entry->gva = gva;
),
TP_printk("token %#llx gva %#llx", __entry->token, __entry->gva)
);
DEFINE_EVENT(kvm_async_pf_nopresent_ready, kvm_async_pf_not_present,
TP_PROTO(u64 token, u64 gva),
TP_ARGS(token, gva)
);
DEFINE_EVENT(kvm_async_pf_nopresent_ready, kvm_async_pf_ready,
TP_PROTO(u64 token, u64 gva),
TP_ARGS(token, gva)
);
TRACE_EVENT(
kvm_async_pf_completed,
TP_PROTO(unsigned long address, struct page *page, u64 gva),
TP_ARGS(address, page, gva),
TP_STRUCT__entry(
__field(unsigned long, address)
__field(pfn_t, pfn)
__field(u64, gva)
),
TP_fast_assign(
__entry->address = address;
__entry->pfn = page ? page_to_pfn(page) : 0;
__entry->gva = gva;
),
TP_printk("gva %#llx address %#lx pfn %#llx", __entry->gva,
__entry->address, __entry->pfn)
);
#endif
#endif /* _TRACE_KVM_MAIN_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
#undef TRACE_SYSTEM
#define TRACE_SYSTEM sched
#if !defined(_TRACE_SCHED_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_SCHED_H
#include <linux/sched.h>
#include <linux/tracepoint.h>
/*
* Tracepoint for calling kthread_stop, performed to end a kthread:
*/
TRACE_EVENT(sched_kthread_stop,
TP_PROTO(struct task_struct *t),
TP_ARGS(t),
TP_STRUCT__entry(
__array( char, comm, TASK_COMM_LEN )
__field( pid_t, pid )
),
TP_fast_assign(
memcpy(__entry->comm, t->comm, TASK_COMM_LEN);
__entry->pid = t->pid;
),
TP_printk("comm=%s pid=%d", __entry->comm, __entry->pid)
);
/*
* Tracepoint for the return value of the kthread stopping:
*/
TRACE_EVENT(sched_kthread_stop_ret,
TP_PROTO(int ret),
TP_ARGS(ret),
TP_STRUCT__entry(
__field( int, ret )
),
TP_fast_assign(
__entry->ret = ret;
),
TP_printk("ret=%d", __entry->ret)
);
/*
* Tracepoint for waking up a task:
*/
DECLARE_EVENT_CLASS(sched_wakeup_template,
TP_PROTO(struct task_struct *p, int success),
TP_ARGS(p, success),
TP_STRUCT__entry(
__array( char, comm, TASK_COMM_LEN )
__field( pid_t, pid )
__field( int, prio )
__field( int, success )
__field( int, target_cpu )
),
TP_fast_assign(
memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
__entry->pid = p->pid;
__entry->prio = p->prio;
__entry->success = success;
__entry->target_cpu = task_cpu(p);
),
TP_printk("comm=%s pid=%d prio=%d success=%d target_cpu=%03d",
__entry->comm, __entry->pid, __entry->prio,
__entry->success, __entry->target_cpu)
);
DEFINE_EVENT(sched_wakeup_template, sched_wakeup,
TP_PROTO(struct task_struct *p, int success),
TP_ARGS(p, success));
/*
* Tracepoint for waking up a new task:
*/
DEFINE_EVENT(sched_wakeup_template, sched_wakeup_new,
TP_PROTO(struct task_struct *p, int success),
TP_ARGS(p, success));
#ifdef CREATE_TRACE_POINTS
static inline long __trace_sched_switch_state(struct task_struct *p)
{
long state = p->state;
#ifdef CONFIG_PREEMPT
/*
* For all intents and purposes a preempted task is a running task.
*/
if (task_thread_info(p)->preempt_count & PREEMPT_ACTIVE)
state = TASK_RUNNING;
#endif
return state;
}
#endif
/*
* Tracepoint for task switches, performed by the scheduler:
*/
TRACE_EVENT(sched_switch,
TP_PROTO(struct task_struct *prev,
struct task_struct *next),
TP_ARGS(prev, next),
TP_STRUCT__entry(
__array( char, prev_comm, TASK_COMM_LEN )
__field( pid_t, prev_pid )
__field( int, prev_prio )
__field( long, prev_state )
__array( char, next_comm, TASK_COMM_LEN )
__field( pid_t, next_pid )
__field( int, next_prio )
),
TP_fast_assign(
memcpy(__entry->next_comm, next->comm, TASK_COMM_LEN);
__entry->prev_pid = prev->pid;
__entry->prev_prio = prev->prio;
__entry->prev_state = __trace_sched_switch_state(prev);
memcpy(__entry->prev_comm, prev->comm, TASK_COMM_LEN);
__entry->next_pid = next->pid;
__entry->next_prio = next->prio;
),
TP_printk("prev_comm=%s prev_pid=%d prev_prio=%d prev_state=%s ==> next_comm=%s next_pid=%d next_prio=%d",
__entry->prev_comm, __entry->prev_pid, __entry->prev_prio,
__entry->prev_state ?
__print_flags(__entry->prev_state, "|",
{ 1, "S"} , { 2, "D" }, { 4, "T" }, { 8, "t" },
{ 16, "Z" }, { 32, "X" }, { 64, "x" },
{ 128, "W" }) : "R",
__entry->next_comm, __entry->next_pid, __entry->next_prio)
);
/*
* Tracepoint for a task being migrated:
*/
TRACE_EVENT(sched_migrate_task,
TP_PROTO(struct task_struct *p, int dest_cpu),
TP_ARGS(p, dest_cpu),
TP_STRUCT__entry(
__array( char, comm, TASK_COMM_LEN )
__field( pid_t, pid )
__field( int, prio )
__field( int, orig_cpu )
__field( int, dest_cpu )
),
TP_fast_assign(
memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
__entry->pid = p->pid;
__entry->prio = p->prio;
__entry->orig_cpu = task_cpu(p);
__entry->dest_cpu = dest_cpu;
),
TP_printk("comm=%s pid=%d prio=%d orig_cpu=%d dest_cpu=%d",
__entry->comm, __entry->pid, __entry->prio,
__entry->orig_cpu, __entry->dest_cpu)
);
DECLARE_EVENT_CLASS(sched_process_template,
TP_PROTO(struct task_struct *p),
TP_ARGS(p),
TP_STRUCT__entry(
__array( char, comm, TASK_COMM_LEN )
__field( pid_t, pid )
__field( int, prio )
),
TP_fast_assign(
memcpy(__entry->comm, p->comm, TASK_COMM_LEN);
__entry->pid = p->pid;
__entry->prio = p->prio;
),
TP_printk("comm=%s pid=%d prio=%d",
__entry->comm, __entry->pid, __entry->prio)
);
/*
* Tracepoint for freeing a task:
*/
DEFINE_EVENT(sched_process_template, sched_process_free,
TP_PROTO(struct task_struct *p),
TP_ARGS(p));
/*
* Tracepoint for a task exiting:
*/
DEFINE_EVENT(sched_process_template, sched_process_exit,
TP_PROTO(struct task_struct *p),
TP_ARGS(p));
/*
* Tracepoint for waiting on task to unschedule:
*/
DEFINE_EVENT(sched_process_template, sched_wait_task,
TP_PROTO(struct task_struct *p),
TP_ARGS(p));
/*
* Tracepoint for a waiting task:
*/
TRACE_EVENT(sched_process_wait,
TP_PROTO(struct pid *pid),
TP_ARGS(pid),
TP_STRUCT__entry(
__array( char, comm, TASK_COMM_LEN )
__field( pid_t, pid )
__field( int, prio )
),
TP_fast_assign(
memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
__entry->pid = pid_nr(pid);
__entry->prio = current->prio;
),
TP_printk("comm=%s pid=%d prio=%d",
__entry->comm, __entry->pid, __entry->prio)
);
/*
* Tracepoint for do_fork:
*/
TRACE_EVENT(sched_process_fork,
TP_PROTO(struct task_struct *parent, struct task_struct *child),
TP_ARGS(parent, child),
TP_STRUCT__entry(
__array( char, parent_comm, TASK_COMM_LEN )
__field( pid_t, parent_pid )
__array( char, child_comm, TASK_COMM_LEN )
__field( pid_t, child_pid )
),
TP_fast_assign(
memcpy(__entry->parent_comm, parent->comm, TASK_COMM_LEN);
__entry->parent_pid = parent->pid;
memcpy(__entry->child_comm, child->comm, TASK_COMM_LEN);
__entry->child_pid = child->pid;
),
TP_printk("comm=%s pid=%d child_comm=%s child_pid=%d",
__entry->parent_comm, __entry->parent_pid,
__entry->child_comm, __entry->child_pid)
);
/*
* XXX the below sched_stat tracepoints only apply to SCHED_OTHER/BATCH/IDLE;
* adding sched_stat support to SCHED_FIFO/RR would be welcome.
*/
DECLARE_EVENT_CLASS(sched_stat_template,
TP_PROTO(struct task_struct *tsk, u64 delay),
TP_ARGS(tsk, delay),
TP_STRUCT__entry(
__array( char, comm, TASK_COMM_LEN )
__field( pid_t, pid )
__field( u64, delay )
),
TP_fast_assign(
memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN);
__entry->pid = tsk->pid;
__entry->delay = delay;
)
TP_perf_assign(
__perf_count(delay);
),
TP_printk("comm=%s pid=%d delay=%Lu [ns]",
__entry->comm, __entry->pid,
(unsigned long long)__entry->delay)
);
/*
* Tracepoint for accounting wait time (time the task is runnable
* but not actually running due to scheduler contention).
*/
DEFINE_EVENT(sched_stat_template, sched_stat_wait,
TP_PROTO(struct task_struct *tsk, u64 delay),
TP_ARGS(tsk, delay));
/*
* Tracepoint for accounting sleep time (time the task is not runnable,
* including iowait, see below).
*/
DEFINE_EVENT(sched_stat_template, sched_stat_sleep,
TP_PROTO(struct task_struct *tsk, u64 delay),
TP_ARGS(tsk, delay));
/*
* Tracepoint for accounting iowait time (time the task is not runnable
* due to waiting on IO to complete).
*/
DEFINE_EVENT(sched_stat_template, sched_stat_iowait,
TP_PROTO(struct task_struct *tsk, u64 delay),
TP_ARGS(tsk, delay));
/*
* Tracepoint for accounting runtime (time the task is executing
* on a CPU).
*/
TRACE_EVENT(sched_stat_runtime,
TP_PROTO(struct task_struct *tsk, u64 runtime, u64 vruntime),
TP_ARGS(tsk, runtime, vruntime),
TP_STRUCT__entry(
__array( char, comm, TASK_COMM_LEN )
__field( pid_t, pid )
__field( u64, runtime )
__field( u64, vruntime )
),
TP_fast_assign(
memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN);
__entry->pid = tsk->pid;
__entry->runtime = runtime;
__entry->vruntime = vruntime;
)
TP_perf_assign(
__perf_count(runtime);
),
TP_printk("comm=%s pid=%d runtime=%Lu [ns] vruntime=%Lu [ns]",
__entry->comm, __entry->pid,
(unsigned long long)__entry->runtime,
(unsigned long long)__entry->vruntime)
);
/*
* Tracepoint for showing priority inheritance modifying a task's
* priority.
*/
TRACE_EVENT(sched_pi_setprio,
TP_PROTO(struct task_struct *tsk, int newprio),
TP_ARGS(tsk, newprio),
TP_STRUCT__entry(
__array( char, comm, TASK_COMM_LEN )
__field( pid_t, pid )
__field( int, oldprio )
__field( int, newprio )
),
TP_fast_assign(
memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN);
__entry->pid = tsk->pid;
__entry->oldprio = tsk->prio;
__entry->newprio = newprio;
),
TP_printk("comm=%s pid=%d oldprio=%d newprio=%d",
__entry->comm, __entry->pid,
__entry->oldprio, __entry->newprio)
);
#endif /* _TRACE_SCHED_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
#undef TRACE_SYSTEM
#define TRACE_SYSTEM raw_syscalls
#define TRACE_INCLUDE_FILE syscalls
#if !defined(_TRACE_EVENTS_SYSCALLS_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_EVENTS_SYSCALLS_H
#include <linux/tracepoint.h>
#include <asm/ptrace.h>
#include <asm/syscall.h>
#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
extern void syscall_regfunc(void);
extern void syscall_unregfunc(void);
TRACE_EVENT_FN(sys_enter,
TP_PROTO(struct pt_regs *regs, long id),
TP_ARGS(regs, id),
TP_STRUCT__entry(
__field( long, id )
__array( unsigned long, args, 6 )
),
TP_fast_assign(
__entry->id = id;
syscall_get_arguments(current, regs, 0, 6, __entry->args);
),
TP_printk("NR %ld (%lx, %lx, %lx, %lx, %lx, %lx)",
__entry->id,
__entry->args[0], __entry->args[1], __entry->args[2],
__entry->args[3], __entry->args[4], __entry->args[5]),
syscall_regfunc, syscall_unregfunc
);
TRACE_EVENT_FLAGS(sys_enter, TRACE_EVENT_FL_CAP_ANY)
TRACE_EVENT_FN(sys_exit,
TP_PROTO(struct pt_regs *regs, long ret),
TP_ARGS(regs, ret),
TP_STRUCT__entry(
__field( long, id )
__field( long, ret )
),
TP_fast_assign(
__entry->id = syscall_get_nr(current, regs);
__entry->ret = ret;
),
TP_printk("NR %ld = %ld",
__entry->id, __entry->ret),
syscall_regfunc, syscall_unregfunc
);
TRACE_EVENT_FLAGS(sys_exit, TRACE_EVENT_FL_CAP_ANY)
#endif /* CONFIG_HAVE_SYSCALL_TRACEPOINTS */
#endif /* _TRACE_EVENTS_SYSCALLS_H */
/* This part must be outside protection */
#include <trace/define_trace.h>
LTTng system call tracing
1) lttng-syscall-extractor
You need to build a kernel with CONFIG_FTRACE_SYSCALLS=y and
CONFIG_KALLSYMS_ALL=y for extraction. Apply the linker patch to get your
kernel to keep the system call metadata after boot. Then build and load
the LTTng syscall extractor module. The module will fail to load (this
is expected). See the dmesg output for system call metadata.
2) Generate system call TRACE_EVENT().
Take the dmesg metadata and feed it to lttng-syscalls-generate-headers.sh, e.g.,
from the instrumentation/syscalls directory. See the script header for
a usage example.
After these are created, we just need to follow new system call
additions; there is no need to regenerate the whole thing, since new
system calls are only ever appended.
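As a concrete sketch of the two steps (module and file names are
illustrative; compare the usage example in the generation script's
header further below):

    # kernel built with CONFIG_FTRACE_SYSCALLS=y, CONFIG_KALLSYMS_ALL=y,
    # and the linker patch mentioned above
    insmod ./lttng-syscalls-extractor.ko    # fails to load: expected
    dmesg > 3.0.4/x86-64-syscalls-3.0.4     # capture the metadata dump
    sh lttng-syscalls-generate-headers.sh integers 3.0.4 x86-64-syscalls-3.0.4 64
    sh lttng-syscalls-generate-headers.sh pointers 3.0.4 x86-64-syscalls-3.0.4 64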
#ifdef CONFIG_X86_64
#include "x86-32-syscalls-3.1.0-rc6_integers.h"
#endif
#ifdef CONFIG_X86_64
#include "x86-32-syscalls-3.1.0-rc6_pointers.h"
#endif
#ifdef CONFIG_X86_64
#include "x86-64-syscalls-3.0.4_integers.h"
#endif
#ifdef CONFIG_X86_32
#include "x86-32-syscalls-3.1.0-rc6_integers.h"
#endif
#define OVERRIDE_32_sys_mmap
#define OVERRIDE_64_sys_mmap
#ifndef CREATE_SYSCALL_TABLE
SC_TRACE_EVENT(sys_mmap,
TP_PROTO(unsigned long addr, unsigned long len, unsigned long prot, unsigned long flags, unsigned long fd, unsigned long off),
TP_ARGS(addr, len, prot, flags, fd, off),
TP_STRUCT__entry(__field_hex(unsigned long, addr) __field(size_t, len) __field(int, prot) __field(int, flags) __field(int, fd) __field(off_t, offset)),
TP_fast_assign(tp_assign(addr, addr) tp_assign(len, len) tp_assign(prot, prot) tp_assign(flags, flags) tp_assign(fd, fd) tp_assign(offset, off)),
TP_printk()
)
#endif /* CREATE_SYSCALL_TABLE */
#ifdef CONFIG_X86_64
#include "x86-64-syscalls-3.0.4_pointers.h"
#endif
#ifdef CONFIG_X86_32
#include "x86-32-syscalls-3.1.0-rc6_pointers.h"
#endif
/*
* This is a placeholder for override defines for system calls with
* pointers (all architectures).
*/
#if !defined(_TRACE_SYSCALLS_UNKNOWN_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_SYSCALLS_UNKNOWN_H
#include <linux/tracepoint.h>
#include <linux/syscalls.h>
#define UNKNOWN_SYSCALL_NRARGS 6
TRACE_EVENT(sys_unknown,
TP_PROTO(unsigned int id, unsigned long *args),
TP_ARGS(id, args),
TP_STRUCT__entry(
__field(unsigned int, id)
__array(unsigned long, args, UNKNOWN_SYSCALL_NRARGS)
),
TP_fast_assign(
tp_assign(id, id)
tp_memcpy(args, args, UNKNOWN_SYSCALL_NRARGS * sizeof(*args))
),
TP_printk()
)
TRACE_EVENT(compat_sys_unknown,
TP_PROTO(unsigned int id, unsigned long *args),
TP_ARGS(id, args),
TP_STRUCT__entry(
__field(unsigned int, id)
__array(unsigned long, args, UNKNOWN_SYSCALL_NRARGS)
),
TP_fast_assign(
tp_assign(id, id)
tp_memcpy(args, args, UNKNOWN_SYSCALL_NRARGS * sizeof(*args))
),
TP_printk()
)
/*
* This is going to hook on sys_exit in the kernel.
* We change the name so we don't clash with the sys_exit syscall entry
* event.
*/
TRACE_EVENT(exit_syscall,
TP_PROTO(struct pt_regs *regs, long ret),
TP_ARGS(regs, ret),
TP_STRUCT__entry(
__field(long, ret)
),
TP_fast_assign(
tp_assign(ret, ret)
),
TP_printk()
)
#endif /* _TRACE_SYSCALLS_UNKNOWN_H */
/* This part must be outside protection */
#include "../../../probes/define_trace.h"
#ifndef CONFIG_UID16
#define OVERRIDE_32_sys_getuid16
#define OVERRIDE_32_sys_getgid16
#define OVERRIDE_32_sys_geteuid16
#define OVERRIDE_32_sys_getegid16
#define OVERRIDE_32_sys_setuid16
#define OVERRIDE_32_sys_setgid16
#define OVERRIDE_32_sys_setfsuid16
#define OVERRIDE_32_sys_setfsgid16
#define OVERRIDE_32_sys_setreuid16
#define OVERRIDE_32_sys_setregid16
#define OVERRIDE_32_sys_fchown16
#define OVERRIDE_32_sys_setresuid16
#define OVERRIDE_32_sys_setresgid16
#define OVERRIDE_TABLE_32_sys_getuid16
#define OVERRIDE_TABLE_32_sys_getgid16
#define OVERRIDE_TABLE_32_sys_geteuid16
#define OVERRIDE_TABLE_32_sys_getegid16
#define OVERRIDE_TABLE_32_sys_setuid16
#define OVERRIDE_TABLE_32_sys_setgid16
#define OVERRIDE_TABLE_32_sys_setreuid16
#define OVERRIDE_TABLE_32_sys_setregid16
#define OVERRIDE_TABLE_32_sys_fchown16
#define OVERRIDE_TABLE_32_sys_setfsuid16
#define OVERRIDE_TABLE_32_sys_setfsgid16
#define OVERRIDE_TABLE_32_sys_setresuid16
#define OVERRIDE_TABLE_32_sys_setresgid16
#endif
#ifdef CREATE_SYSCALL_TABLE
#define OVERRIDE_TABLE_32_sys_mmap
TRACE_SYSCALL_TABLE(sys_mmap, sys_mmap, 90, 6)
#endif
#ifndef CONFIG_UID16
#define OVERRIDE_32_sys_getgroups16
#define OVERRIDE_32_sys_setgroups16
#define OVERRIDE_32_sys_lchown16
#define OVERRIDE_32_sys_getresuid16
#define OVERRIDE_32_sys_getresgid16
#define OVERRIDE_32_sys_chown16
#define OVERRIDE_TABLE_32_sys_getgroups16
#define OVERRIDE_TABLE_32_sys_setgroups16
#define OVERRIDE_TABLE_32_sys_lchown16
#define OVERRIDE_TABLE_32_sys_getresuid16
#define OVERRIDE_TABLE_32_sys_getresgid16
#define OVERRIDE_TABLE_32_sys_chown16
#endif
/*
* This is a placeholder for x86_64 integer syscall definition overrides.
*/
#ifndef CREATE_SYSCALL_TABLE
#else /* CREATE_SYSCALL_TABLE */
#endif /* CREATE_SYSCALL_TABLE */
/*
* Copyright 2011 - Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
* Copyright 2011 - Julien Desfossez <julien.desfossez@polymtl.ca>
*
* Dump syscall metadata to console.
*
* GPLv2 license.
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/list.h>
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/kallsyms.h>
#include <linux/dcache.h>
#include <linux/ftrace_event.h>
#include <trace/syscall.h>
#ifndef CONFIG_FTRACE_SYSCALLS
#error "You need to set CONFIG_FTRACE_SYSCALLS=y"
#endif
#ifndef CONFIG_KALLSYMS_ALL
#error "You need to set CONFIG_KALLSYMS_ALL=y"
#endif
/*
 * Section boundary symbols are not exported to modules; they are resolved
 * at init time via kallsyms_lookup_name() (hence CONFIG_KALLSYMS_ALL).
 */
static struct syscall_metadata **__start_syscalls_metadata;
static struct syscall_metadata **__stop_syscalls_metadata;
static __init
struct syscall_metadata *find_syscall_meta(unsigned long syscall)
{
struct syscall_metadata **iter;
for (iter = __start_syscalls_metadata;
iter < __stop_syscalls_metadata; iter++) {
if ((*iter)->syscall_nr == syscall)
return (*iter);
}
return NULL;
}
int init_module(void)
{
struct syscall_metadata *meta;
int i;
__start_syscalls_metadata = (void *) kallsyms_lookup_name("__start_syscalls_metadata");
__stop_syscalls_metadata = (void *) kallsyms_lookup_name("__stop_syscalls_metadata");
for (i = 0; i < NR_syscalls; i++) {
int j;
meta = find_syscall_meta(i);
if (!meta)
continue;
printk("syscall %s nr %d nbargs %d ",
meta->name, meta->syscall_nr, meta->nb_args);
printk("types: (");
for (j = 0; j < meta->nb_args; j++) {
if (j > 0)
printk(", ");
printk("%s", meta->types[j]);
}
printk(") ");
printk("args: (");
for (j = 0; j < meta->nb_args; j++) {
if (j > 0)
printk(", ");
printk("%s", meta->args[j]);
}
printk(")\n");
}
printk("SUCCESS\n");
/* Fail the load on purpose: the module is only used for its printk dump. */
return -1;
}
void cleanup_module(void)
{
}
MODULE_LICENSE("GPL");
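/*
 * Hypothetical usage sketch (module and file names are assumptions): build
 * this extractor against the target kernel, attempt to load it (insertion
 * fails by design once the dump is printed), then feed the kernel log to
 * the generator script below, which strips the dmesg "[timestamp]"
 * prefixes itself:
 *
 *   insmod ./lttng-syscalls-extractor.ko || true
 *   dmesg > 3.0.4/x86-64-syscalls-3.0.4
 *   sh lttng-syscalls-generate-headers.sh integers 3.0.4 x86-64-syscalls-3.0.4 64
 */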
#!/bin/sh
# Generate system call probe description macros from a syscall metadata dump file.
# example usage:
#
# lttng-syscalls-generate-headers.sh integers 3.0.4 x86-64-syscalls-3.0.4 64
# lttng-syscalls-generate-headers.sh pointers 3.0.4 x86-64-syscalls-3.0.4 64
CLASS=$1
INPUTDIR=$2
INPUTFILE=$3
BITNESS=$4
INPUT=${INPUTDIR}/${INPUTFILE}
SRCFILE=gen.tmp.0
TMPFILE=gen.tmp.1
HEADER=headers/${INPUTFILE}_${CLASS}.h
cp ${INPUT} ${SRCFILE}
#Cleanup: strip dmesg "[timestamp]" prefixes and the sys_ name prefix.
perl -p -e 's/^\[.*\] //g' ${SRCFILE} > ${TMPFILE}
mv ${TMPFILE} ${SRCFILE}
perl -p -e 's/^syscall sys_([^ ]*)/syscall $1/g' ${SRCFILE} > ${TMPFILE}
mv ${TMPFILE} ${SRCFILE}
#Filter
if [ "$CLASS" = integers ]; then
#select integers and no-args.
CLASSCAP=INTEGERS
grep -v "\\*\|cap_user_header_t" ${SRCFILE} > ${TMPFILE}
mv ${TMPFILE} ${SRCFILE}
fi
if [ "$CLASS" = pointers ]; then
#select system calls using pointers.
CLASSCAP=POINTERS
grep "\\*\|cap_user_header_t" ${SRCFILE} > ${TMPFILE}
mv ${TMPFILE} ${SRCFILE}
fi
echo "/* THIS FILE IS AUTO-GENERATED. DO NOT EDIT */" > ${HEADER}
echo \
"#ifndef CREATE_SYSCALL_TABLE
#if !defined(_TRACE_SYSCALLS_${CLASSCAP}_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_SYSCALLS_${CLASSCAP}_H
#include <linux/tracepoint.h>
#include <linux/syscalls.h>
#include \"${INPUTFILE}_${CLASS}_override.h\"
#include \"syscalls_${CLASS}_override.h\"
" >> ${HEADER}
if [ "$CLASS" = integers ]; then
NRARGS=0
echo \
'SC_DECLARE_EVENT_CLASS_NOARGS(syscalls_noargs,\n'\
' TP_STRUCT__entry(),\n'\
' TP_fast_assign(),\n'\
' TP_printk()\n'\
')'\
>> ${HEADER}
grep "^syscall [^ ]* nr [^ ]* nbargs ${NRARGS} " ${SRCFILE} > ${TMPFILE}
perl -p -e 's/^syscall ([^ ]*) nr ([^ ]*) nbargs ([^ ]*) '\
'types: \(([^)]*)\) '\
'args: \(([^)]*)\)/'\
'#ifndef OVERRIDE_'"${BITNESS}"'_sys_$1\n'\
'SC_DEFINE_EVENT_NOARGS(syscalls_noargs, sys_$1)\n'\
'#endif/g'\
${TMPFILE} >> ${HEADER}
fi
# types: 4
# args: 5
NRARGS=1
grep "^syscall [^ ]* nr [^ ]* nbargs ${NRARGS} " ${SRCFILE} > ${TMPFILE}
perl -p -e 's/^syscall ([^ ]*) nr ([^ ]*) nbargs ([^ ]*) '\
'types: \(([^)]*)\) '\
'args: \(([^)]*)\)/'\
'#ifndef OVERRIDE_'"${BITNESS}"'_sys_$1\n'\
'SC_TRACE_EVENT(sys_$1,\n'\
' TP_PROTO($4 $5),\n'\
' TP_ARGS($5),\n'\
' TP_STRUCT__entry(__field($4, $5)),\n'\
' TP_fast_assign(tp_assign($4, $5, $5)),\n'\
' TP_printk()\n'\
')\n'\
'#endif/g'\
${TMPFILE} >> ${HEADER}
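# Illustrative input/output sketch (the sample line is an assumption based on
# the dump format above): a dump line such as
#   syscall close nr 3 nbargs 1 types: (unsigned int) args: (fd)
# expands, with BITNESS=64, to:
#   #ifndef OVERRIDE_64_sys_close
#   SC_TRACE_EVENT(sys_close,
#    TP_PROTO(unsigned int fd),
#    TP_ARGS(fd),
#    TP_STRUCT__entry(__field(unsigned int, fd)),
#    TP_fast_assign(tp_assign(unsigned int, fd, fd)),
#    TP_printk()
#   )
#   #endif
# The extra type argument to tp_assign is stripped by a later pass.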
# types: 4 5
# args: 6 7
NRARGS=2
grep "^syscall [^ ]* nr [^ ]* nbargs ${NRARGS} " ${SRCFILE} > ${TMPFILE}
perl -p -e 's/^syscall ([^ ]*) nr ([^ ]*) nbargs ([^ ]*) '\
'types: \(([^,]*), ([^)]*)\) '\
'args: \(([^,]*), ([^)]*)\)/'\
'#ifndef OVERRIDE_'"${BITNESS}"'_sys_$1\n'\
'SC_TRACE_EVENT(sys_$1,\n'\
' TP_PROTO($4 $6, $5 $7),\n'\
' TP_ARGS($6, $7),\n'\
' TP_STRUCT__entry(__field($4, $6) __field($5, $7)),\n'\
' TP_fast_assign(tp_assign($4, $6, $6) tp_assign($5, $7, $7)),\n'\
' TP_printk()\n'\
')\n'\
'#endif/g'\
${TMPFILE} >> ${HEADER}
# types: 4 5 6
# args: 7 8 9
NRARGS=3
grep "^syscall [^ ]* nr [^ ]* nbargs ${NRARGS} " ${SRCFILE} > ${TMPFILE}
perl -p -e 's/^syscall ([^ ]*) nr ([^ ]*) nbargs ([^ ]*) '\
'types: \(([^,]*), ([^,]*), ([^)]*)\) '\
'args: \(([^,]*), ([^,]*), ([^)]*)\)/'\
'#ifndef OVERRIDE_'"${BITNESS}"'_sys_$1\n'\
'SC_TRACE_EVENT(sys_$1,\n'\
' TP_PROTO($4 $7, $5 $8, $6 $9),\n'\
' TP_ARGS($7, $8, $9),\n'\
' TP_STRUCT__entry(__field($4, $7) __field($5, $8) __field($6, $9)),\n'\
' TP_fast_assign(tp_assign($4, $7, $7) tp_assign($5, $8, $8) tp_assign($6, $9, $9)),\n'\
' TP_printk()\n'\
')\n'\
'#endif/g'\
${TMPFILE} >> ${HEADER}
# types: 4 5 6 7
# args: 8 9 10 11
NRARGS=4
grep "^syscall [^ ]* nr [^ ]* nbargs ${NRARGS} " ${SRCFILE} > ${TMPFILE}
perl -p -e 's/^syscall ([^ ]*) nr ([^ ]*) nbargs ([^ ]*) '\
'types: \(([^,]*), ([^,]*), ([^,]*), ([^)]*)\) '\
'args: \(([^,]*), ([^,]*), ([^,]*), ([^)]*)\)/'\
'#ifndef OVERRIDE_'"${BITNESS}"'_sys_$1\n'\
'SC_TRACE_EVENT(sys_$1,\n'\
' TP_PROTO($4 $8, $5 $9, $6 $10, $7 $11),\n'\
' TP_ARGS($8, $9, $10, $11),\n'\
' TP_STRUCT__entry(__field($4, $8) __field($5, $9) __field($6, $10) __field($7, $11)),\n'\
' TP_fast_assign(tp_assign($4, $8, $8) tp_assign($5, $9, $9) tp_assign($6, $10, $10) tp_assign($7, $11, $11)),\n'\
' TP_printk()\n'\
')\n'\
'#endif/g'\
${TMPFILE} >> ${HEADER}
# types: 4 5 6 7 8
# args: 9 10 11 12 13
NRARGS=5
grep "^syscall [^ ]* nr [^ ]* nbargs ${NRARGS} " ${SRCFILE} > ${TMPFILE}
perl -p -e 's/^syscall ([^ ]*) nr ([^ ]*) nbargs ([^ ]*) '\
'types: \(([^,]*), ([^,]*), ([^,]*), ([^,]*), ([^)]*)\) '\
'args: \(([^,]*), ([^,]*), ([^,]*), ([^,]*), ([^)]*)\)/'\
'#ifndef OVERRIDE_'"${BITNESS}"'_sys_$1\n'\
'SC_TRACE_EVENT(sys_$1,\n'\
' TP_PROTO($4 $9, $5 $10, $6 $11, $7 $12, $8 $13),\n'\
' TP_ARGS($9, $10, $11, $12, $13),\n'\
' TP_STRUCT__entry(__field($4, $9) __field($5, $10) __field($6, $11) __field($7, $12) __field($8, $13)),\n'\
' TP_fast_assign(tp_assign($4, $9, $9) tp_assign($5, $10, $10) tp_assign($6, $11, $11) tp_assign($7, $12, $12) tp_assign($8, $13, $13)),\n'\
' TP_printk()\n'\
')\n'\
'#endif/g'\
${TMPFILE} >> ${HEADER}
# types: 4 5 6 7 8 9
# args: 10 11 12 13 14 15
NRARGS=6
grep "^syscall [^ ]* nr [^ ]* nbargs ${NRARGS} " ${SRCFILE} > ${TMPFILE}
perl -p -e 's/^syscall ([^ ]*) nr ([^ ]*) nbargs ([^ ]*) '\
'types: \(([^,]*), ([^,]*), ([^,]*), ([^,]*), ([^,]*), ([^\)]*)\) '\
'args: \(([^,]*), ([^,]*), ([^,]*), ([^,]*), ([^,]*), ([^\)]*)\)/'\
'#ifndef OVERRIDE_'"${BITNESS}"'_sys_$1\n'\
'SC_TRACE_EVENT(sys_$1,\n'\
' TP_PROTO($4 $10, $5 $11, $6 $12, $7 $13, $8 $14, $9 $15),\n'\
' TP_ARGS($10, $11, $12, $13, $14, $15),\n'\
' TP_STRUCT__entry(__field($4, $10) __field($5, $11) __field($6, $12) __field($7, $13) __field($8, $14) __field($9, $15)),\n'\
' TP_fast_assign(tp_assign($4, $10, $10) tp_assign($5, $11, $11) tp_assign($6, $12, $12) tp_assign($7, $13, $13) tp_assign($8, $14, $14) tp_assign($9, $15, $15)),\n'\
' TP_printk()\n'\
')\n'\
'#endif/g'\
${TMPFILE} >> ${HEADER}
# Macro for tracing syscall table
rm -f ${TMPFILE}
for NRARGS in $(seq 0 6); do
grep "^syscall [^ ]* nr [^ ]* nbargs ${NRARGS} " ${SRCFILE} >> ${TMPFILE}
done
echo \
"
#endif /* _TRACE_SYSCALLS_${CLASSCAP}_H */
/* This part must be outside protection */
#include \"../../../probes/define_trace.h\"
#else /* CREATE_SYSCALL_TABLE */
#include \"${INPUTFILE}_${CLASS}_override.h\"
#include \"syscalls_${CLASS}_override.h\"
" >> ${HEADER}
NRARGS=0
if [ "$CLASS" = integers ]; then
#noargs
grep "^syscall [^ ]* nr [^ ]* nbargs ${NRARGS} " ${SRCFILE} > ${TMPFILE}
perl -p -e 's/^syscall ([^ ]*) nr ([^ ]*) nbargs ([^ ]*) .*$/'\
'#ifndef OVERRIDE_TABLE_'"${BITNESS}"'_sys_$1\n'\
'TRACE_SYSCALL_TABLE\(syscalls_noargs, sys_$1, $2, $3\)\n'\
'#endif/g'\
${TMPFILE} >> ${HEADER}
fi
#others.
grep -v "^syscall [^ ]* nr [^ ]* nbargs ${NRARGS} " ${SRCFILE} > ${TMPFILE}
perl -p -e 's/^syscall ([^ ]*) nr ([^ ]*) nbargs ([^ ]*) .*$/'\
'#ifndef OVERRIDE_TABLE_'"${BITNESS}"'_sys_$1\n'\
'TRACE_SYSCALL_TABLE(sys_$1, sys_$1, $2, $3)\n'\
'#endif/g'\
${TMPFILE} >> ${HEADER}
echo -n \
"
#endif /* CREATE_SYSCALL_TABLE */
" >> ${HEADER}
# Field-name heuristics: "char *" fields whose name contains *name*, *file*,
# *path*, *root*, *put_old* or *type* are read as user-space strings.
cp -f ${HEADER} ${TMPFILE}
rm -f ${HEADER}
perl -p -e 's/__field\(([^,)]*char \*), ([^\)]*)(name|file|path|root|put_old|type)([^\)]*)\)/__string_from_user($2$3$4, $2$3$4)/g'\
${TMPFILE} >> ${HEADER}
cp -f ${HEADER} ${TMPFILE}
rm -f ${HEADER}
perl -p -e 's/tp_assign\(([^,)]*char \*), ([^,]*)(name|file|path|root|put_old|type)([^,]*), ([^\)]*)\)/tp_copy_string_from_user($2$3$4, $5)/g'\
${TMPFILE} >> ${HEADER}
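# Illustrative sketch (the sample field is an assumption): the two passes
# above rewrite
#   __field(const char *, filename)
#   tp_assign(const char *, filename, filename)
# into
#   __string_from_user(filename, filename)
#   tp_copy_string_from_user(filename, filename)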
#Address prettifying heuristics: print pointer-like fields in hex.
#field names with addr or ptr
cp -f ${HEADER} ${TMPFILE}
rm -f ${HEADER}
perl -p -e 's/__field\(([^,)]*), ([^,)]*addr|[^,)]*ptr)([^),]*)\)/__field_hex($1, $2$3)/g'\
${TMPFILE} >> ${HEADER}
#field types ending with '*'
cp -f ${HEADER} ${TMPFILE}
rm -f ${HEADER}
perl -p -e 's/__field\(([^,)]*\*), ([^),]*)\)/__field_hex($1, $2)/g'\
${TMPFILE} >> ${HEADER}
#strip the extra type information from tp_assign.
cp -f ${HEADER} ${TMPFILE}
rm -f ${HEADER}
perl -p -e 's/tp_assign\(([^,)]*), ([^,]*), ([^\)]*)\)/tp_assign($2, $3)/g'\
${TMPFILE} >> ${HEADER}
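# Illustrative sketch (sample fields are assumptions): the three passes above
# rewrite
#   __field(unsigned long, start_addr)  ->  __field_hex(unsigned long, start_addr)
#   __field(struct timespec *, utimes)  ->  __field_hex(struct timespec *, utimes)
#   tp_assign(unsigned int, fd, fd)     ->  tp_assign(fd, fd)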
rm -f ${INPUTFILE}.tmp
rm -f ${TMPFILE}
rm -f ${SRCFILE}
obj-m += lib-ring-buffer.o
lib-ring-buffer-objs := \
ringbuffer/ring_buffer_backend.o \
ringbuffer/ring_buffer_frontend.o \
ringbuffer/ring_buffer_iterator.o \
ringbuffer/ring_buffer_vfs.o \
ringbuffer/ring_buffer_splice.o \
ringbuffer/ring_buffer_mmap.o \
prio_heap/lttng_prio_heap.o \
../wrapper/splice.o
#ifndef _LTTNG_ALIGN_H
#define _LTTNG_ALIGN_H
/*
* lib/align.h
*
* (C) Copyright 2010-2011 - Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
*
* Dual LGPL v2.1/GPL v2 license.
*/
#ifdef __KERNEL__
#include <linux/types.h>
#include "bug.h"
#define ALIGN_FLOOR(x, a) __ALIGN_FLOOR_MASK(x, (typeof(x)) (a) - 1)
#define __ALIGN_FLOOR_MASK(x, mask) ((x) & ~(mask))
#define PTR_ALIGN_FLOOR(p, a) \
((typeof(p)) ALIGN_FLOOR((unsigned long) (p), a))
/*
* Align a pointer on its natural object alignment.
*/
#define object_align(obj) PTR_ALIGN(obj, __alignof__(*(obj)))
#define object_align_floor(obj) PTR_ALIGN_FLOOR(obj, __alignof__(*(obj)))
/**
* offset_align - Calculate the offset needed to align an object on its natural
* alignment towards higher addresses.
* @align_drift: object offset from an "alignment"-aligned address.
* @alignment: natural object alignment. Must be a non-zero power of 2.
*
* Returns the offset that must be added to align towards higher
* addresses.
*/
#define offset_align(align_drift, alignment) \
({ \
BUILD_RUNTIME_BUG_ON((alignment) == 0 \
|| ((alignment) & ((alignment) - 1))); \
(((alignment) - (align_drift)) & ((alignment) - 1)); \
})
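/*
 * Worked example (values are illustrative): with an 8-byte alignment,
 * offset_align(13, 8) == ((8 - 13) & 7) == 3, so advancing an offset of
 * 13 by 3 reaches the next 8-aligned offset, 16.
 */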
/**
* offset_align_floor - Calculate the offset needed to align an object
* on its natural alignment towards lower addresses.
* @align_drift: object offset from an "alignment"-aligned address.
* @alignment: natural object alignment. Must be a non-zero power of 2.
*
* Returns the offset that must be subtracted to align towards lower addresses.
*/
#define offset_align_floor(align_drift, alignment) \
({ \
BUILD_RUNTIME_BUG_ON((alignment) == 0 \
|| ((alignment) & ((alignment) - 1))); \
(((align_drift) - (alignment)) & ((alignment) - 1)); \
})
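/*
 * Worked example (values are illustrative): offset_align_floor(13, 8)
 * == ((13 - 8) & 7) == 5, so backing an offset of 13 up by 5 reaches the
 * previous 8-aligned offset, 8.
 */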
#endif /* __KERNEL__ */
#endif
#ifndef _LTTNG_BUG_H
#define _LTTNG_BUG_H
/*
* lib/bug.h
*
* (C) Copyright 2010-2011 - Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
*
* Dual LGPL v2.1/GPL v2 license.
*/
/**
* BUILD_RUNTIME_BUG_ON - check condition at build (if constant) or runtime
* @condition: the condition which should be false.
*
* If the condition is a constant and true, the compiler will generate a build
* error. If the condition is not constant, a BUG will be triggered at runtime
* if the condition is ever true. If the condition is constant and false, no
* code is emitted.
*/
#define BUILD_RUNTIME_BUG_ON(condition) \
do { \
if (__builtin_constant_p(condition)) \
BUILD_BUG_ON(condition); \
else \
BUG_ON(condition); \
} while (0)
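/*
 * Hypothetical usage sketch: a compile-time constant condition costs
 * nothing at runtime, while a runtime condition degrades to BUG_ON().
 *
 *   BUILD_RUNTIME_BUG_ON(sizeof(long) < 4);   constant: checked at build
 *   BUILD_RUNTIME_BUG_ON(alignment == 0);     variable: checked at runtime
 */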
#endif