04 Nov, 2008 (11 commits)
    • tracing/ftrace: fix a bug when switching current tracer to sched tracer · 79a9d461
      Frederic Weisbecker authored
      Impact: fix boot tracer + sched tracer coupling bug
      
      Fix a bug that made the sched_switch tracer unable to run
      if set as the current_tracer after the boot tracer.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing/ftrace: types and naming corrections for sched tracer · efade6e7
      Frederic Weisbecker authored
      Impact: cleanup
      
      This patch applies some corrections suggested by Steven Rostedt.
      
      Change the type of sched_ref to int: since it is now protected by
      a mutex, we no longer need it to be an atomic variable in the
      sched_switch tracer. Also rename the register mutex.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing/fastboot: use sched switch tracer from boot tracer · d7ad44b6
      Frederic Weisbecker authored
      Impact: enhance boot trace output with scheduling events
      
      Use the sched_switch tracer from the boot tracer.
      
      This also lets us trace scheduling events inside the initcalls.
      Sched tracing is disabled after each initcall finishes and then
      reenabled before the next one starts.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing/ftrace: remove unused code in sched_switch tracer · e55f605c
      Frederic Weisbecker authored
      Impact: cleanup
      
      When init_sched_switch_trace() is called, it has no reason to
      start the sched tracer depending on the value of sched_ref:
      
      _ If sched_ref is non-zero, the tracer is already in use, but we can
      still register it with the tracing engine; a safeguard already
      prevents the tracer probes from being registered twice.
      
      _ If sched_ref is zero, this block is never used.
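      
      A minimal, hypothetical sketch of the kind of dead branch being
      removed; the variable and helper names here are illustrative, not
      the exact kernel code:
      
      	static void sched_switch_trace_init(struct trace_array *tr)
      	{
      		ctx_trace = tr;	/* hypothetical per-tracer trace_array */
      
      		/* Redundant: if sched_ref != 0 the probes are already
      		 * registered (and registration is guarded against
      		 * duplicates); if sched_ref == 0 this never runs. */
      		if (sched_ref)
      			tracing_start_sched_switch();
      	}
      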
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing/ftrace: fix a race condition in sched_switch tracer · 07695fa0
      Frederic Weisbecker authored
      Impact: fix race condition in sched_switch tracer
      
      This patch fixes a race condition in the sched_switch tracer. If
      several tasks (e.g. concurrent initcalls) are playing with
      tracing_start_cmdline_record() and tracing_stop_cmdline_record(), the
      following situation could happen:
      
      _ Tasks A and B are using the same tracepoint probe. Task A holds it.
        Task B is sleeping and doesn't hold it.
      
      _ Task A frees the sched tracer, then sched_ref is decremented to 0.
      
      _ Task A is preempted before it has unregistered its tracepoint
        probe; then B runs.
      
      _ B increments sched_ref, sees it is 1 and concludes it has to
        register its probe. But the probe has not yet been unregistered
        by task A.
      
      _ A lot of bad things can happen after that...
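      
      A sketch of the shape of the fix, serializing the reference count
      and probe (un)registration behind a mutex; the helper names here
      are assumptions based on this series:
      
      	static DEFINE_MUTEX(tracepoint_mutex);	/* renamed later, see efade6e7 */
      	static int sched_ref;
      
      	void tracing_start_cmdline_record(void)
      	{
      		mutex_lock(&tracepoint_mutex);
      		if (++sched_ref == 1)		/* first user registers the probes */
      			tracing_sched_register();
      		mutex_unlock(&tracepoint_mutex);
      	}
      
      	void tracing_stop_cmdline_record(void)
      	{
      		mutex_lock(&tracepoint_mutex);
      		if (--sched_ref == 0)		/* last user unregisters them */
      			tracing_sched_unregister();
      		mutex_unlock(&tracepoint_mutex);
      	}
      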
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • tracing/fastboot: Enable boot tracing only during initcalls · 71566a0d
      Frederic Weisbecker authored
      Impact: modify boot tracer
      
      We used to disable initcall tracing at a fixed point (i.e. at the
      end of the built-in initcalls). We don't need that anymore: tracing
      is now stopped when the initcalls are finished.
      
      However we want two things:
      
      _ Start this tracing only after the pre-SMP initcalls have finished.
      
      _ Since we plan to trace sched_switch events at the same time, enable
      them only during initcall execution.
      
      For this purpose, this patch introduces two functions to enable and
      disable sched_switch tracing during boot, sketched below.
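      
      A minimal sketch of the two helpers and how an initcall might be
      wrapped; the pre_initcalls_finished flag and the exact call sites
      are assumptions based on this series:
      
      	/* Set once the pre-SMP initcalls are done (assumed flag). */
      	static bool pre_initcalls_finished;
      
      	void enable_boot_trace(void)
      	{
      		if (pre_initcalls_finished)
      			tracing_start_cmdline_record();
      	}
      
      	void disable_boot_trace(void)
      	{
      		if (pre_initcalls_finished)
      			tracing_stop_cmdline_record();
      	}
      
      	/* In do_one_initcall(), roughly: */
      	enable_boot_trace();
      	ret = fn();		/* the initcall runs with sched tracing on */
      	disable_boot_trace();
      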
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: sysctl typo · 3299b4dd
      Peter Zijlstra authored
      Impact: fix sysctl name typo
      
      Steve must have needed more coffee ;-)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: sysrq-z to dump the buffers · 69f698ad
      Peter Zijlstra authored
      Impact: add SysRq-z support to dump trace buffers
      
      Allows one to force an ftrace buffer dump via SysRq-z.
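      
      A minimal sketch of wiring ftrace_dump() into the sysrq table, using
      the sysrq handler signature of that era; the handler and op names
      are assumptions:
      
      	#include <linux/sysrq.h>
      	#include <linux/ftrace.h>
      
      	/* SysRq 'z': dump the ftrace ring buffers to the console. */
      	static void sysrq_ftrace_dump(int key, struct tty_struct *tty)
      	{
      		ftrace_dump();
      	}
      
      	static struct sysrq_key_op sysrq_ftrace_dump_op = {
      		.handler	= sysrq_ftrace_dump,
      		.help_msg	= "dump-ftrace-buffer(z)",
      		.action_msg	= "Dump ftrace buffer",
      	};
      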
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: function tracer with irqs disabled · b2a866f9
      Steven Rostedt authored
      Impact: disable interrupts during trace entry creation (as opposed to preempt)
      
      To help with performance, I set the ftracer to not disable interrupts,
      and only to disable preemption. If an interrupt occurred, it would not
      be traced, because the function tracer protects itself from recursion.
      This may be faster, but the trace output might miss some traces.
      
      This patch makes the function tracer disable interrupts, but it also
      adds a runtime feature to disable preemption instead. It does this by
      having two different tracer functions. When the function tracer is
      enabled, it will check to see which version is requested (irqs disabled
      or preemption disabled). Then it will use the corresponding function
      as the tracer.
      
      Irq disabling is the default behaviour, but if the user wants better
      performance, with the chance of missing traces, then they can choose
      the preempt disabled version.
      
      Running hackbench 3 times with the irqs disabled and 3 times with
      the preempt disabled function tracer yielded:
      
      tracing type       times            entries recorded
      ------------      --------          ----------------
      irq disabled      43.393            166433066
                        43.282            166172618
                        43.298            166256704
      
      preempt disabled  38.969            159871710
                        38.943            159972935
                        39.325            161056510
      
      Average:
      
         irqs disabled:  43.324           166287462
      preempt disabled:  39.079           160300385
      
       preempt is 10.8 percent faster than irqs disabled.
      
      I wrote a patch to count function trace recursion and reran hackbench.
      
      With irq disabled: 1,150 times the function tracer did not trace due to
        recursion.
      With preempt disabled: 5,117,718 times.
      
      The thousand times with irq disabled could be due to NMIs, or simply a case
      where it called a function that was not protected by notrace.
      
      But we also see that a large amount of the trace is lost with the
      preempt version.
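      
      A minimal sketch of the two tracer callbacks and the runtime
      selection between them; the option flag and the exact names are
      assumptions consistent with this series:
      
      	/* Default: exclude interrupts entirely while recording. */
      	static void function_trace_call(unsigned long ip,
      					unsigned long parent_ip)
      	{
      		unsigned long flags;
      
      		local_irq_save(flags);
      		/* ... record the trace entry for ip/parent_ip ... */
      		local_irq_restore(flags);
      	}
      
      	/* Faster variant: only disable preemption. Interrupts that
      	 * arrive mid-record hit the recursion protection and their
      	 * traces are dropped. */
      	static void function_trace_call_preempt_only(unsigned long ip,
      						     unsigned long parent_ip)
      	{
      		int resched = ftrace_preempt_disable();
      
      		/* ... record the trace entry for ip/parent_ip ... */
      		ftrace_preempt_enable(resched);
      	}
      
      	static void tracing_start_function_trace(void)
      	{
      		if (trace_flags & TRACE_ITER_PREEMPTONLY)	/* assumed flag */
      			trace_ops.func = function_trace_call_preempt_only;
      		else
      			trace_ops.func = function_trace_call;	/* default */
      
      		register_ftrace_function(&trace_ops);
      	}
      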
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: insert in the ftrace_preempt_disable()/enable() functions · 182e9f5f
      Steven Rostedt authored
      Impact: use new, consolidated APIs in ftrace plugins
      
      This patch replaces the open-coded scheduler-safe preempt-disable
      code with the ftrace_preempt_disable() and ftrace_preempt_enable()
      helpers; see the before/after sketch below.
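      
      A before/after sketch of the substitution in a plugin's recording
      path; the surrounding context is illustrative, not a specific
      plugin's code:
      
      	/* Before: each plugin open-coded the scheduler-safe pattern. */
      	int resched = need_resched();
      
      	preempt_disable_notrace();
      	/* ... record the trace entry ... */
      	if (resched)
      		preempt_enable_no_resched_notrace();
      	else
      		preempt_enable_notrace();
      
      	/* After: the consolidated helpers hide the subtlety. */
      	int resched = ftrace_preempt_disable();
      
      	/* ... record the trace entry ... */
      	ftrace_preempt_enable(resched);
      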
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: introduce ftrace_preempt_disable()/enable() · 8f0a056f
      Steven Rostedt authored
      Impact: add new ftrace-plugin internal APIs
      
      Parts of the tracer need to be careful about schedule recursion.
      If the NEED_RESCHED flag is set, preempt_enable() will call schedule().
      Inside the schedule function, the NEED_RESCHED flag is cleared.
      
      The problem arises when a trace happens in the schedule function but before
      NEED_RESCHED is cleared. The race is as follows:
      
      schedule()
        >> tracer called
      
          trace_function()
             preempt_disable()
             [ record trace ]
             preempt_enable()  <<- here's the issue.
      
               [check NEED_RESCHED]
                schedule()
                [ Repeat the above, over and over again ]
      
      The naive approach is simply to use preempt_enable_no_resched() instead.
      The problem with that approach is that, although we solve the schedule
      recursion issue, we now might lose a preemption check when not in the
      schedule function.
      
        trace_function()
          preempt_disable()
          [ record trace ]
          [Interrupt comes in and sets NEED_RESCHED]
          preempt_enable_no_resched()
          [continue without scheduling]
      
      The way ftrace handles this problem is with the following approach:
      
      	int resched;
      
      	resched = need_resched();
      	preempt_disable_notrace();
      	[record trace]
      	if (resched)
      		preempt_enable_no_resched_notrace();
      	else
      		preempt_enable_notrace();
      
      This may seem like the opposite of what we want. If resched is set,
      we call the "no_resched" version?? The reason is that if NEED_RESCHED
      was already set before we disabled preemption, there are two
      possibilities:
      
        1) we are in an atomic code path
        2) we are already on our way to the schedule function, and maybe even
           in the schedule function, but have yet to clear the flag.
      
      In both of these cases we do not want to schedule.
      
      This solution has already been implemented within the ftrace
      infrastructure, but the problem is that it has been implemented
      several times. This patch encapsulates this code into two nice
      functions:
      
        resched = ftrace_preempt_disable();
        [ record trace ]
        ftrace_preempt_enable(resched);
      
      This way the tracers do not need to worry about getting it right.
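      
      For reference, a minimal sketch of the two helpers as static
      inlines, directly following the pattern above (exact placement in
      the tree may differ):
      
      	/* Returns whether NEED_RESCHED was set before preemption
      	 * was disabled. */
      	static inline int ftrace_preempt_disable(void)
      	{
      		int resched = need_resched();
      
      		preempt_disable_notrace();
      		return resched;
      	}
      
      	/* Re-enable preemption, but skip the reschedule check if the
      	 * flag was already set on entry: either we were atomic, or
      	 * schedule() is already on its way and will clear the flag. */
      	static inline void ftrace_preempt_enable(int resched)
      	{
      		if (resched)
      			preempt_enable_no_resched_notrace();
      		else
      			preempt_enable_notrace();
      	}
      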
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>