1. 08 Feb, 2009 6 commits
    • ftrace: change function graph tracer to use new in_nmi · 9a5fd902
      Steven Rostedt authored
      The function graph tracer piggybacked onto the dynamic ftrace code
      to use the in_nmi custom code for dynamic tracing. The problem,
      as Andrew Morton pointed out, was that it really only wanted to bail
      out if the current CPU was in NMI context. But the
      dynamic ftrace in_nmi custom code was true if _any_ CPU happened
      to be in NMI context.
      
      Now that we have a generic in_nmi interface, this patch changes
      the function graph code to use it instead of the dynamic ftrace
      custom code.
      Reported-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      9a5fd902
    • nmi: add generic nmi tracking state · 375b38b4
      Steven Rostedt authored
      This code adds an in_nmi() macro that uses the current task's preempt
      count to track when it is in NMI context. Other parts of the kernel can
      use this to determine whether the current context is NMI context.
      
      This code was inspired by the -rt patch in_nmi version that was
      written by Peter Zijlstra, who borrowed that code from
      Mathieu Desnoyers.
      Reported-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      375b38b4
    • ftrace, x86: rename in_nmi variable · 4e6ea144
      Steven Rostedt authored
      Impact: clean up
      
      The in_nmi variable in the x86 arch ftrace.c is a misnomer.
      Andrew Morton pointed out that the in_nmi variable is incremented
      by all CPUs. It can be set when another CPU is running an NMI.

      Since this is actually intentional, the fix is to rename it to
      what it really is: "nmi_running".
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      4e6ea144
    • ring-buffer: allow tracing_off to be used in core kernel code · d8b891a2
      Steven Rostedt authored
      tracing_off() is the fastest way to stop recording to the ring buffers.
      This may be used in places like panic and die, just before the
      ftrace_dump is called.
      
      This patch adds the appropriate CPP conditionals to make it a stub
      function when the ring buffer is not configured in.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      d8b891a2
    • ring-buffer: add NMI protection for spinlocks · 78d904b4
      Steven Rostedt authored
      Impact: prevent deadlock in NMI
      
      The ring buffers are not yet totally lockless for writes to
      the buffer. When a writer crosses a page, it grabs a per-cpu spinlock
      to protect against a reader. The spinlocks taken by a writer are not
      to protect against other writers, since a writer can only write to
      its own per-cpu buffer; they protect against readers, which
      can touch any cpu buffer. Writers are made safe against reentrant
      (interrupt-context) writers by disabling interrupts while holding
      the spinlock.
      
      The problem arises when an NMI writes to the buffer and that write
      crosses a page boundary. If it grabs the spinlock, it can be racing
      with another writer (since disabling interrupts does not protect
      against NMIs) or with a reader on the same CPU. Luckily, most of the
      users are not reentrant and so protect against this issue. But if a
      user of the ring buffer is reentrant (which the ring
      buffers do allow), and the NMI also writes to the ring buffer, then
      we risk a deadlock.
      
      This patch moves the ftrace_nmi_enter() called by nmi_enter() into
      the ring buffer code. It renames the ftrace_nmi_enter() that is
      used by arch-specific code to arch_ftrace_nmi_enter() and updates
      the Kconfig to handle it.
      
      When an NMI fires, it will set a per-cpu variable in the ring buffer
      code and will clear it when the NMI exits. If a write to the ring
      buffer crosses a page boundary inside an NMI, a trylock is used on
      the spinlock instead. If the spinlock fails to be acquired, the
      entry is discarded.
      
      This bug appeared in the ftrace work in the RT tree, where event tracing
      is reentrant. This workaround solved the deadlocks that appeared there.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      78d904b4
    • trace: remove deprecated entry->cpu · 1830b52d
      Steven Rostedt authored
      Impact: fix to prevent developers from using entry->cpu
      
      With the new ring buffer infrastructure, the cpu for the entry is
      implicit with which CPU buffer it is on.
      
      The original code used to record the current cpu in the generic
      entry header, which could be retrieved by entry->cpu. When the
      ring buffer was introduced, the users were converted to use the
      cpu number of the cpu ring buffer in use (this was passed
      to the tracers by the iterator: iter->cpu).
      
      Unfortunately, the cpu item in the entry structure was never removed.
      This allowed developers to use it instead of the proper iter->cpu,
      unknowingly using an uninitialized variable. This was not the fault
      of the developers, since it would seem like the logical place to
      retrieve the cpu identifier.
      
      This patch removes the cpu item from the entry structure and fixes
      all the users that should have been using iter->cpu.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      1830b52d
  2. 05 Feb, 2009 1 commit
  3. 04 Feb, 2009 33 commits