- 25 Feb, 2010 5 commits
-
Wenji Huang authored
Signed-off-by: Wenji Huang <wenji.huang@oracle.com> LKML-Reference: <1266997226-6833-2-git-send-email-wenji.huang@oracle.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Wenji Huang authored
Signed-off-by: Wenji Huang <wenji.huang@oracle.com> LKML-Reference: <1266997226-6833-1-git-send-email-wenji.huang@oracle.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Li Zefan authored
The power tracer has been converted to power trace events. Acked-by: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Li Zefan <lizf@cn.fujitsu.com> LKML-Reference: <4B84D50E.4070806@cn.fujitsu.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Jeff Mahoney authored
GCC 4.5 introduces behavior that forces the alignment of structures to use the largest possible value. The default value is 32 bytes, so if some structures are defined with a 4-byte alignment while others aren't declared with an alignment constraint at all, the unconstrained ones will be aligned at 32 bytes.

For things like the ftrace events, this results in a non-standard array. When initializing the ftrace subsystem, we traverse the _ftrace_events section and call the initialization callback for each event. When the structures are misaligned, we could be treating another part of the structure (or the zeroed-out space between them) as a function pointer.

This patch forces the alignment for all the ftrace_event_call structures to 4 bytes. Without this patch, the kernel fails to boot very early when built with gcc 4.5.

It's trivial to check the alignment of the members of the array, so it might be worthwhile to add something to the build system to do that automatically. Unfortunately, that only covers this case. I've asked one of the gcc developers about adding a warning when this condition is seen.

Cc: stable@kernel.org Signed-off-by: Jeff Mahoney <jeffm@suse.com> LKML-Reference: <4B85770B.6010901@suse.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
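A minimal sketch of the shape of the fix, with illustrative names rather than the kernel's exact macros: pinning every object placed in the section to the same 4-byte alignment keeps the section's contents walkable as a plain array.

  /* Illustrative sketch, not the actual kernel patch: force a fixed
   * alignment on every object emitted into the _ftrace_events section
   * so it stays a contiguous array even when gcc 4.5 would pad
   * unconstrained structures to 32 bytes. */
  struct event_call {
          const char *name;
          int (*raw_init)(void);
  };

  #define DEFINE_FTRACE_EVENT(var)                                      \
          static struct event_call var                                  \
          __attribute__((section("_ftrace_events"), aligned(4), used))

  static int example_init(void) { return 0; }

  DEFINE_FTRACE_EVENT(example_event) = {
          .name     = "example",
          .raw_init = example_init,
  };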
-
Steven Rostedt authored
The code in stop_machine that modifies the kernel text has a bit of logic to handle the case of NMIs. stop_machine does not prevent NMIs from executing, and if an NMI were to trigger on another CPU while the modifying CPU is changing the NMI text, a GPF could result.

To prevent the GPF, the NMI calls ftrace_nmi_enter(), which may modify the code first; any other NMIs will then just change the text to the same content, which does no harm. The code that stop_machine called must wait for NMIs to finish while it changes each location in the kernel. That code may also change the text to what the NMI changed it to. The key is that the text will never change content while another CPU is executing it.

To make the above work, the call to ftrace_nmi_enter() must also do a smp_mb() as well as an atomic_inc(). But for applications like perf that require a high number of NMIs for profiling, this can have a dramatic effect on the system. Not only is it doing a full memory barrier on both nmi_enter() and nmi_exit(), it is also modifying a global variable with an atomic operation. This kills performance on large SMP machines.

Since the memory barriers are only needed when ftrace is in the process of modifying the text (which is seldom), this patch adds a "modifying_code" variable that gets set before stop_machine is executed and cleared afterwards. The NMIs check this variable and store it in a per-CPU "save_modifying_code" variable, which they then use to decide whether they need to do the memory barriers and atomic_dec() on NMI exit.

Acked-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
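A kernel-style sketch of the fast path this buys, using the variable names from the commit text (simplified; the real x86 code differs in detail): NMIs only pay for the barriers and the atomic when a modification is actually in flight.

  static atomic_t nmi_running;
  static int modifying_code;   /* set around stop_machine(), cleared after */
  static DEFINE_PER_CPU(int, save_modifying_code);

  void ftrace_nmi_enter(void)
  {
          __this_cpu_write(save_modifying_code, modifying_code);
          if (!__this_cpu_read(save_modifying_code))
                  return;          /* fast path: no text being modified */

          atomic_inc(&nmi_running);
          smp_mb();                /* pairs with barriers on the modifying CPU */
          /* ... possibly patch the code ourselves, as before ... */
  }

  void ftrace_nmi_exit(void)
  {
          if (!__this_cpu_read(save_modifying_code))
                  return;

          smp_mb();
          atomic_dec(&nmi_running);
  }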
-
- 16 Feb, 2010 1 commit
-
Steven Rostedt authored
The functions used to implement the TRACE_EVENT macro show up in function tracing. This is considered a distraction, and these should not be displayed. For example:

   <idle>-0     [000]    57.202149: task_of <-update_stats_wait_end
   <idle>-0     [000]    57.202149: ftrace_raw_event_sched_stat_wait <-update_stats_wait_end
   <idle>-0     [000]    57.202150: ftrace_raw_event_id_sched_stat_template <-ftrace_raw_event_sched_stat_wait
   <idle>-0     [000]    57.202150: sched_stat_wait: comm=sshd pid=2735 delay=19207 [ns]

The "ftrace_raw_event_*" traces are just the utility functions used by TRACE_EVENT tracepoints.

Cc: Thomas Gleixner <tglx@linutronix.de> Requested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
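For reference, the usual way to keep such helpers out of the trace is the kernel's notrace annotation, which stops the compiler from emitting the profiling call for the function; a minimal sketch with an illustrative function name:

  #include <linux/compiler.h>     /* provides notrace */

  /* notrace: no mcount call is emitted, so the function tracer never
   * records this helper -- only the event itself appears. */
  static notrace void ftrace_raw_event_example(void *data)
  {
          /* ... record the tracepoint payload ... */
  }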
-
- 11 Feb, 2010 1 commit
-
Li Zefan authored
I don't see why we can only clear all functions from the filter.

After patching:

  # echo sys_open > set_graph_function
  # echo sys_close >> set_graph_function
  # cat set_graph_function
  sys_open
  sys_close
  # echo '!sys_close' >> set_graph_function
  # cat set_graph_function
  sys_open

Signed-off-by: Li Zefan <lizf@cn.fujitsu.com> LKML-Reference: <4B726388.2000408@cn.fujitsu.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
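A self-contained userspace sketch of the new semantics (this is not the kernel's actual parser): a leading '!' removes one entry instead of forcing the whole filter to be cleared.

  #include <string.h>

  #define MAX_FUNCS 32
  static const char *graph_funcs[MAX_FUNCS];
  static int nr_graph_funcs;

  static void graph_filter_write(const char *name)
  {
          int i;

          if (name[0] == '!') {           /* '!func' means remove 'func' */
                  name++;
                  for (i = 0; i < nr_graph_funcs; i++) {
                          if (strcmp(graph_funcs[i], name) == 0) {
                                  /* replace with last entry, shrink list */
                                  graph_funcs[i] = graph_funcs[--nr_graph_funcs];
                                  return;
                          }
                  }
                  return;                 /* not in the filter: nothing to do */
          }
          if (nr_graph_funcs < MAX_FUNCS) /* plain name: add to the filter */
                  graph_funcs[nr_graph_funcs++] = name;
  }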
-
- 10 Feb, 2010 1 commit
-
Steven Rostedt authored
With the branch annotation it is a bit difficult to see the worst offenders, because the output only sorts by percentage:

 correct incorrect  %        Function                  File              Line
 ------- ---------  -        --------                  ----              ----
       0       163 100 qdisc_restart             sch_generic.c         179
       0       163 100 pfifo_fast_dequeue        sch_generic.c         447
       0         4 100 pskb_trim_rcsum           skbuff.h             1689
       0         4 100 llc_rcv                   llc_input.c           170
       0        18 100 psmouse_interrupt         psmouse-base.c        304
       0         3 100 atkbd_interrupt           atkbd.c               389
       0         5 100 usb_alloc_dev             usb.c                 437
       0        11 100 vsscanf                   vsprintf.c           1897
       0         2 100 IS_ERR                    err.h                  34
       0        23 100 __rmqueue_fallback        page_alloc.c          865
       0         4 100 probe_wakeup_sched_switch trace_sched_wakeup.c  142
       0         3 100 move_masked_irq           migration.c            11

Adding the incorrect and correct values as sort keys makes this file a bit more informative:

 correct incorrect  %        Function                  File              Line
 ------- ---------  -        --------                  ----              ----
       0    366541 100 audit_syscall_entry       auditsc.c            1637
       0    366538 100 audit_syscall_exit        auditsc.c            1685
       0    115839 100 sched_info_switch         sched_stats.h         269
       0     74567 100 sched_info_queued         sched_stats.h         222
       0     66578 100 sched_info_dequeued       sched_stats.h         177
       0     15113 100 trace_workqueue_insertion workqueue.h            38
       0     15107 100 trace_workqueue_execution workqueue.h            45
       0      3622 100 syscall_trace_leave       ptrace.c             1772
       0      2750 100 sched_move_task           sched.c             10100
       0      2750 100 sched_move_task           sched.c             10110
       0      1815 100 pre_schedule_rt           sched_rt.c           1462
       0       837 100 audit_alloc               auditsc.c             879
       0       814 100 tcp_mss_split_point       tcp_output.c         1302

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
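A sketch of the sort order described above (illustrative, not the tracer's exact comparison): order by miss percentage first, then break ties on the raw incorrect count so heavily hit branches rise to the top.

  #include <stdlib.h>

  struct branch_stat {
          unsigned long correct;
          unsigned long incorrect;
  };

  static unsigned long miss_percent(const struct branch_stat *s)
  {
          unsigned long total = s->correct + s->incorrect;

          return total ? s->incorrect * 100 / total : 0;
  }

  /* qsort comparator: higher percentage first, then higher miss count */
  static int cmp_branch(const void *a, const void *b)
  {
          const struct branch_stat *x = a, *y = b;
          unsigned long px = miss_percent(x), py = miss_percent(y);

          if (px != py)
                  return px < py ? 1 : -1;
          if (x->incorrect != y->incorrect)
                  return x->incorrect < y->incorrect ? 1 : -1;
          return 0;
  }

  /* usage: qsort(stats, nr_stats, sizeof(*stats), cmp_branch); */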
-
- 29 Jan, 2010 1 commit
-
Lai Jiangshan authored
In the function graph tracer, a calling function is to be traced only when it is enabled through the set_graph_function file, or when it is nested in an enabled function. The current code uses TSK_TRACE_FL_GRAPH to test whether it is nested or not. Looking at the code, we can see that:

  (trace->depth > 0)  <==>  (TSK_TRACE_FL_GRAPH is set)

trace->depth is more explicit in telling us that the call is nested, so we use trace->depth directly and simplify the code. No functionality is changed. TSK_TRACE_FL_GRAPH is not removed yet; it is left for future usage.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Steven Rostedt <rostedt@goodmis.org> LKML-Reference: <4B4DB0B6.7040607@cn.fujitsu.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
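The shape of the simplification, side by side (a sketch of the described check; the kernel's actual condition is more involved):

  /* before: nesting inferred from a per-task flag */
  if (!ftrace_graph_addr(trace->func) &&
      !test_tsk_trace_graph(current))
          return 0;

  /* after: nesting read directly from the trace record */
  if (!ftrace_graph_addr(trace->func) &&
      trace->depth <= 0)
          return 0;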
-
- 17 Jan, 2010 1 commit
-
Frederic Weisbecker authored
Each time we save a function entry from the function graph tracer, we check if the trace array is set, which is wasteful because it is set anyway before we start the tracer. All we need is to ensure we have good read and write orderings: when we set the trace array, we just need to guarantee that it is visible before tracing starts. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Acked-by: Steven Rostedt <rostedt@goodmis.org> Cc: Lai Jiangshan <laijs@cn.fujitsu.com> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> LKML-Reference: <1263453795-7496-1-git-send-regression-fweisbec@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
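A sketch of the ordering contract being relied on (simplified; names beyond set_graph_array are illustrative): publish the array, then a barrier, then allow tracing, so any path that sees tracing enabled also sees a valid array.

  static struct trace_array *graph_array;
  static int graph_tracing_on;

  void set_graph_array(struct trace_array *tr)
  {
          graph_array = tr;
          /* make the array visible before tracing can start */
          smp_mb();
          graph_tracing_on = 1;
  }

  int trace_graph_entry(struct ftrace_graph_ent *trace)
  {
          if (!graph_tracing_on)
                  return 0;
          smp_rmb();      /* pairs with the smp_mb() in set_graph_array() */
          /* graph_array is guaranteed non-NULL here; no per-entry check */
          return __trace_graph_entry(graph_array, trace);
  }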
-
- 06 Jan, 2010 15 commits
-
Steven Rostedt authored
If the ftrace stacktrace option is set, then add the stack dumps to trace_printk. Requested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Wolfram Sang authored
Modified recordmcount.pl to use perl constructs that are still understandable by C hackers who are not perl programmers. Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> LKML-Reference: <1262724082-9517-1-git-send-email-w.sang@pengutronix.de> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Wolfram Sang authored
- move check for open file in front of the writing loop
- use perl constructs to access the array

Signed-off-by: Wolfram Sang <w.sang@pengutronix.de> LKML-Reference: <1262716072-14414-2-git-send-email-w.sang@pengutronix.de> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Lai Jiangshan authored
At the beginning, access to the ring buffer was fully serialized by trace_types_lock. Patch d7350c3f gives more freedom to readers, and patch b04cc6b1 adds code to protect trace_pipe and cpu#/trace_pipe.

But this is not enough: ring buffer readers are not always read-only; they may consume data. This patch makes accesses to trace, trace_pipe, trace_pipe_raw, cpu#/trace, cpu#/trace_pipe and cpu#/trace_pipe_raw serialized, and removes tracing_reader_cpumask, which was used to protect trace_pipe.

Details:

The ring buffer serializes readers, but that is low-level protection. The validity of the events (returned by ring_buffer_peek() etc.) is not protected by the ring buffer. The content of events may become garbage if we allow another process to consume these events concurrently:

  A) the page of the consumed events may become a normal page (not a reader page) in the ring buffer, and this page will be rewritten by the events producer.
  B) the page of the consumed events may become a page for splice_read, and this page will be returned to the system.

This patch adds trace_access_lock() and trace_access_unlock() primitives. These primitives allow multiple processes to access different cpu ring buffers concurrently. They don't distinguish read-only from read-consume access; multiple read-only accesses are also serialized. And we don't use these primitives when we open files; we only use them when we read files.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> LKML-Reference: <4B447D52.1050602@cn.fujitsu.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
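A simplified sketch of the primitives' contract (the kernel's real implementation is more involved): one lock per cpu buffer lets readers of different cpu buffers proceed concurrently, while any two readers of the same buffer, consuming or not, are serialized.

  /* Each mutex must be mutex_init()'d once at startup. */
  static DEFINE_PER_CPU(struct mutex, cpu_access_lock);

  static void trace_access_lock(int cpu)
  {
          mutex_lock(&per_cpu(cpu_access_lock, cpu));
  }

  static void trace_access_unlock(int cpu)
  {
          mutex_unlock(&per_cpu(cpu_access_lock, cpu));
  }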
-
Lai Jiangshan authored
The previous patches added the use of the print_fmt string and changed the trace_define_field() function to also create the fields and format output for the event format files.

     text    data     bss      dec     hex filename
  5857201 1355780 9336808 16549789  fc879d vmlinux
  5884589 1351684 9337896 16574169  fce6d9 vmlinux-orig

The above shows the size of the vmlinux after this patch set compared to the vmlinux-orig, which is from before the patch set. This saves us 27k of text and 1k of bss, and adds just 4k of data, for a total savings of 24k in size.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> LKML-Reference: <4B273D4D.40604@cn.fujitsu.com> Acked-by: Masami Hiramatsu <mhiramat@redhat.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Lai Jiangshan authored
The calls ftrace_format_##call() and ftrace_define_fields_##call() are almost duplicates in functionality. With the addition of the print_fmt in previous patches, these two functions can be merged into one.

trace_define_field() defines the fields and links them into the struct ftrace_event_call. The previous patches introduced the print_fmt field, and this can now be used with trace_define_field() to create the event format file fields and the print_fmt field.

The struct ftrace_event_call->fields are used to print the fields. The struct ftrace_event_call->print_fmt is used to print the "print fmt: XXXXXXXXXXX" line.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> LKML-Reference: <4B273D49.5000006@cn.fujitsu.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
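A sketch of what a merged callback does, using the 2.6.33-era trace_define_field() signature (the event struct and field names here are made up for illustration):

  /* Register each field so it can drive both filtering and the
   * event's format file; print_fmt supplies the final format line. */
  static int define_fields_example(struct ftrace_event_call *call)
  {
          int ret;

          ret = trace_define_field(call, "pid_t", "pid",
                                   offsetof(struct example_entry, pid),
                                   sizeof(pid_t), 1, FILTER_OTHER);
          if (ret)
                  return ret;

          return trace_define_field(call, "char[16]", "comm",
                                    offsetof(struct example_entry, comm),
                                    16, 0, FILTER_OTHER);
  }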
-
Steven Rostedt authored
In the clean up of having all events call one specific function, the syscall event init was changed to call this helper function. With the new print_fmt updates, the syscalls need to do special initializations. This patch converts the syscall events to call its own init function again. Cc: Lai Jiangshan <laijs@cn.fujitsu.com> Cc: Li Zefan <lizf@cn.fujitsu.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Lai Jiangshan authored
This is part of a patch set that removes the show_format method in the ftrace event macros. Add the print_fmt initialization to the kprobe events. The print_fmt is still not used, but will be in the follow-up patches. Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> LKML-Reference: <4B273D45.3080100@cn.fujitsu.com> Acked-by: Masami Hiramatsu <mhiramat@redhat.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Lai Jiangshan authored
This is part of a patch set that removes the show_format method in the ftrace event macros. Add the print_fmt initialization to the syscall events. The print_fmt is still not used, but will be in the follow-up patches. Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> LKML-Reference: <4B273D41.609@cn.fujitsu.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Lai Jiangshan authored
This is part of a patch set that removes the show_format method in the ftrace event macros. The print_fmt field is added to hold the string that shows the print_fmt in the event format files. This patch only adds the field but it is currently not used. Later patches will use this field to enable us to remove the show_format field and function. Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> LKML-Reference: <4B273D3E.2000704@cn.fujitsu.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
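Roughly where the new member sits (a sketch trimmed to the relevant fields; the real struct ftrace_event_call has many more):

  struct ftrace_event_call {
          const char              *name;
          struct list_head        fields;
          const char              *print_fmt;   /* used to emit the
                                                 * "print fmt: ..." line */
          /* ... */
  };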
-
Lai Jiangshan authored
This is part of a patch set that removes the show_format method in the ftrace event macros. This patch set requires that all fields are added to the ftrace_event_call->fields. This patch changes __dynamic_array() to call trace_define_field() to include fields that use __dynamic_array(). Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> LKML-Reference: <4B273D36.8090100@cn.fujitsu.com> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
-
Linus Torvalds authored
-
Linus Torvalds authored
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
  sparc64: Fix Niagara2 perf event handling.
  sparc64: Fix NMI programming when perf events are active.
  bbc_envctrl: Clean up properly if kthread_run() fails.
-
Rusty Russell authored
This reverts commit ae1b22f6. As Linus said in 982d007a: "There was something really messy about cmpxchg8b and clone CPU's, so if you enable it on other CPUs later, do it carefully." This breaks lguest for those configs, but we can fix that by emulating if we have to.

Fixes: http://bugzilla.kernel.org/show_bug.cgi?id=14884
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Linus Torvalds authored
Merge branch 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jlbec/ocfs2
* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jlbec/ocfs2:
  ocfs2: Handle O_DIRECT when writing to a refcounted cluster.
-
- 05 Jan, 2010 2 commits
-
Linus Torvalds authored
Merge branch 'for-2.6.33' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound-2.6
* 'for-2.6.33' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound-2.6:
  ASoC: fixup oops in generic AC97 codec glue
  ASoC: fix params_rate() macro use in several codecs
  ASoC: fsi-ak4642: Remove ak4642_add_i2c_device
-
David S. Miller authored
For chips like Niagara2 that have true overflow indications in the %pcr (which we don't actually need and don't use), the interrupt signal persists until the overflow bits are cleared by an explicit %pcr write. Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 04 Jan, 2010 13 commits
-
David S. Miller authored
If perf events are active, we should not reset the %pcr to PCR_PIC_PRIV; the perf events code does that management. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
-
David S. Miller authored
In bbc_envctrl_init() we have to unlink the fan and temp instances from the lists because our caller is going to free up the 'bp' object if we return an error. We can't rely upon bbc_envctrl_cleanup() to do this work for us in this case. Reported-by: Patrick Finnegan <pat@computer-refuge.org> Signed-off-by: David S. Miller <davem@davemloft.net>
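The general shape of the error path being fixed (illustrative names, not the driver's actual code): anything linked onto a longer-lived list must be unlinked before returning an error, because the caller frees the containing object.

  static int envctrl_attach_one(struct bbc_state *bp)
  {
          struct fan_info *f = kzalloc(sizeof(*f), GFP_KERNEL);

          if (!f)
                  return -ENOMEM;
          list_add(&f->list, &bp->fans);

          if (IS_ERR(kthread_run(fan_thread, f, "fand"))) {
                  list_del(&f->list);     /* don't leave a dangling entry */
                  kfree(f);
                  return -ENODEV;
          }
          return 0;
  }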
-
Linus Torvalds authored
Merge branch 'limits_cleanup' of git://decibel.fi.muni.cz/~xslaby/linux
* 'limits_cleanup' of git://decibel.fi.muni.cz/~xslaby/linux:
  resource: add helpers for fetching rlimits
  resource: move kernel function inside __KERNEL__
  SECURITY: selinux, fix update_rlimit_cpu parameter
-
Linus Torvalds authored
Merge branch 'for-linus/samsung' of git://git.fluff.org/bjdooks/linux
* 'for-linus/samsung' of git://git.fluff.org/bjdooks/linux:
  ARM: S3C: Fix NAND device registration by s3c_nand_set_platdata().
  ARM: S3C24XX: touchscreen device definition
  ARM: mach-bast: add NAND_SCAN_SILENT_NODEV to optional devices
  ARM: mach-osiris: add NAND_SCAN_SILENT_NODEV to optional devices
  ARM: S3C24XX: touchscreen device definition
-
Eric W. Biederman authored
Holding locks over device_del -> kobject_del -> sysfs_deactivate can cause deadlocks if those same locks are grabbed in sysfs show or store methods.

I model the s_active count + completion as a sleeping read/write lock. I describe to lockdep sysfs_get_active as a read_trylock, sysfs_put_active as a read_unlock, and sysfs_deactivate as a write_lock and write_unlock pair. This seems to capture the essence for purposes of finding deadlocks, and in my testing finds real issues and ignores non-issues.

This brings us back to the fact that holding locks over kobject_del is a problem that ideally we should find a way of addressing, but at least lockdep can tell us about the problems instead of requiring developers to debug rare strange system deadlocks that happen when sysfs files are removed while being written to.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Rusty Russell authored
We kill the guest, but then we blatt random stuff. Reported-by: Dan Carpenter <error27@gmail.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Linus Torvalds authored
Merge branch 'for-linus' of git://git390.marist.edu/pub/scm/linux-2.6
* 'for-linus' of git://git390.marist.edu/pub/scm/linux-2.6:
  [S390] Update default configuration.
  [S390] Have param.h simply include <asm-generic/param.h>.
  [S390] qdio: convert global statistics to per-device stats
-
Linus Torvalds authored
Merge branch 'sh/for-2.6.33' of git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6
* 'sh/for-2.6.33' of git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6:
  binfmt_elf_fdpic: Fix build breakage introduced by coredump changes.
  sh: update defconfigs.
  sh: Don't default enable PMB support.
  sh: Disable PMB for SH4AL-DSP CPUs.
  sh: Only provide a PCLK definition for legacy CPG CPUs.
-
Linus Torvalds authored
Merge branch 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
* 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
  ext4: Calculate metadata requirements more accurately
  ext4: Fix accounting of reserved metadata blocks
-
Alan Cox authored
We wrap the smm calls and other bits with the BKL pushdown as a precaution, but they can probably go. Signed-off-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Alan Cox authored
Nobody seems to want to own I2O patches, so sending this one directly. Signed-off-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
John Kacur authored
The BKL is in this function because of the BKL pushdown (see commit f8f2c79d). It is not needed here because the mutex sonypi_device.lock provides the necessary locking. sonypi_misc_ioctl can be converted to an unlocked ioctl since it relies on its own locking (the mutex sonypi_device.lock) and not the BKL. Document that llseek is not needed by explicitly setting it to no_llseek.

LKML-Reference: <alpine.LFD.2.00.0910192019420.3563@localhost.localdomain> Signed-off-by: John Kacur <jkacur@redhat.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
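A sketch of the conversion pattern (the helper called under the lock is hypothetical; the fops wiring is the standard kernel API):

  static long sonypi_misc_ioctl(struct file *fp, unsigned int cmd,
                                unsigned long arg)
  {
          long ret;

          mutex_lock(&sonypi_device.lock);        /* driver lock, not the BKL */
          ret = sonypi_misc_do_ioctl(cmd, arg);   /* hypothetical helper */
          mutex_unlock(&sonypi_device.lock);
          return ret;
  }

  static const struct file_operations sonypi_misc_fops = {
          .owner          = THIS_MODULE,
          .unlocked_ioctl = sonypi_misc_ioctl,    /* no BKL taken by the VFS */
          .llseek         = no_llseek,            /* seeking not supported */
  };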
-