Commit 921d4f67 authored by Linus Torvalds

Merge tag 'trace-v4.15-rc4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull tracing fixes from Steven Rostedt:

 - Bring back context-level recursive protection in the ring buffer.

   The simpler counter protection failed: a path taken when tracing
   with trace_clock_global() is not reentrant and depended on the ring
   buffer's recursive protection to keep that reentrancy from
   happening.

 - Prevent branch profiling when FORTIFY_SOURCE is enabled.

   The combination produces 50-60 MB of warning messages. Branch
   profiling should never be run on production systems, so there's no
   reason it needs to be enabled together with FORTIFY_SOURCE.

* tag 'trace-v4.15-rc4-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  tracing: Prevent PROFILE_ALL_BRANCHES when FORTIFY_SOURCE=y
  ring-buffer: Bring back context level recursive checks
parents a8750ddc 68e76e03
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -355,7 +355,7 @@ config PROFILE_ANNOTATED_BRANCHES
 	  on if you need to profile the system's use of these macros.
 
 config PROFILE_ALL_BRANCHES
-	bool "Profile all if conditionals"
+	bool "Profile all if conditionals" if !FORTIFY_SOURCE
 	select TRACE_BRANCH_PROFILING
 	help
 	  This tracer profiles all branch conditions. Every if ()
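For readers who have not used the branch profiler: PROFILE_ALL_BRANCHES works by redefining if() so that every branch records how often its condition evaluates true or false, and combining that instrumentation with FORTIFY_SOURCE is what produces the flood of build warnings mentioned in the pull message. Below is a minimal userspace sketch of the general idea only; the names (branch_stat, profiled_if, profile_branch) are illustrative and not kernel APIs, and the kernel's real implementation (the if() redefinition in include/linux/compiler.h plus the profiler in kernel/trace/trace_branch.c) is considerably more involved.

#include <stdio.h>

/* One record per instrumented branch site (illustrative only). */
struct branch_stat {
	const char *func;
	int line;
	unsigned long hit;	/* times the condition was true  */
	unsigned long miss;	/* times the condition was false */
};

static struct branch_stat stats[2];

/*
 * Count hit/miss for this call site, then return the condition's
 * value so it can still steer the if().  The kernel does this by
 * redefining if() itself when CONFIG_PROFILE_ALL_BRANCHES=y; here an
 * explicit profiled_if() macro stands in for that.
 */
static int profile_branch(struct branch_stat *s, const char *func,
			  int line, int cond)
{
	if (!s->func) {		/* first time this site is seen */
		s->func = func;
		s->line = line;
	}
	if (cond)
		s->hit++;
	else
		s->miss++;
	return cond;
}

#define profiled_if(idx, cond) \
	if (profile_branch(&stats[idx], __func__, __LINE__, !!(cond)))

int main(void)
{
	for (int i = 0; i < 10; i++) {
		profiled_if(0, i % 3 == 0)
			printf("%d is a multiple of 3\n", i);
		profiled_if(1, i > 7)
			printf("%d is large\n", i);
	}

	for (int i = 0; i < 2; i++)
		printf("branch at %s:%d  hit=%lu miss=%lu\n",
		       stats[i].func, stats[i].line,
		       stats[i].hit, stats[i].miss);
	return 0;
}

With this change the Kconfig prompt simply becomes invisible when FORTIFY_SOURCE=y, so the two options can no longer be enabled together.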
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2534,29 +2534,59 @@ rb_wakeups(struct ring_buffer *buffer, struct ring_buffer_per_cpu *cpu_buffer)
  * The lock and unlock are done within a preempt disable section.
  * The current_context per_cpu variable can only be modified
  * by the current task between lock and unlock. But it can
- * be modified more than once via an interrupt. There are four
- * different contexts that we need to consider.
+ * be modified more than once via an interrupt. To pass this
+ * information from the lock to the unlock without having to
+ * access the 'in_interrupt()' functions again (which do show
+ * a bit of overhead in something as critical as function tracing,
+ * we use a bitmask trick.
  *
- *  Normal context.
- *  SoftIRQ context
- *  IRQ context
- *  NMI context
+ *  bit 0 =  NMI context
+ *  bit 1 =  IRQ context
+ *  bit 2 =  SoftIRQ context
+ *  bit 3 =  normal context.
  *
- * If for some reason the ring buffer starts to recurse, we
- * only allow that to happen at most 4 times (one for each
- * context). If it happens 5 times, then we consider this a
- * recusive loop and do not let it go further.
+ * This works because this is the order of contexts that can
+ * preempt other contexts. A SoftIRQ never preempts an IRQ
+ * context.
+ *
+ * When the context is determined, the corresponding bit is
+ * checked and set (if it was set, then a recursion of that context
+ * happened).
+ *
+ * On unlock, we need to clear this bit. To do so, just subtract
+ * 1 from the current_context and AND it to itself.
+ *
+ * (binary)
+ *  101 - 1 = 100
+ *  101 & 100 = 100 (clearing bit zero)
+ *
+ *  1010 - 1 = 1001
+ *  1010 & 1001 = 1000 (clearing bit 1)
+ *
+ * The least significant bit can be cleared this way, and it
+ * just so happens that it is the same bit corresponding to
+ * the current context.
  */
 
 static __always_inline int
 trace_recursive_lock(struct ring_buffer_per_cpu *cpu_buffer)
 {
-	if (cpu_buffer->current_context >= 4)
+	unsigned int val = cpu_buffer->current_context;
+	unsigned long pc = preempt_count();
+	int bit;
+
+	if (!(pc & (NMI_MASK | HARDIRQ_MASK | SOFTIRQ_OFFSET)))
+		bit = RB_CTX_NORMAL;
+	else
+		bit = pc & NMI_MASK ? RB_CTX_NMI :
+			pc & HARDIRQ_MASK ? RB_CTX_IRQ :
+			pc & SOFTIRQ_OFFSET ? 2 : RB_CTX_SOFTIRQ;
+
+	if (unlikely(val & (1 << bit)))
 		return 1;
 
-	cpu_buffer->current_context++;
-	/* Interrupts must see this update */
-	barrier();
+	val |= (1 << bit);
+	cpu_buffer->current_context = val;
 
 	return 0;
 }
@@ -2564,9 +2594,7 @@ trace_recursive_lock(struct ring_buffer_per_cpu *cpu_buffer)
 static __always_inline void
 trace_recursive_unlock(struct ring_buffer_per_cpu *cpu_buffer)
 {
-	/* Don't let the dec leak out */
-	barrier();
-	cpu_buffer->current_context--;
+	cpu_buffer->current_context &= cpu_buffer->current_context - 1;
 }
 
 /**
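The bit trick above is easy to exercise outside the kernel. The following standalone sketch mirrors only the bit manipulation from trace_recursive_lock()/trace_recursive_unlock(): the preempt_count()-based context detection is replaced by an explicit bit argument, and current_context is a plain global rather than a per-CPU field, so treat it as an illustration of the technique, not the kernel code.

#include <assert.h>
#include <stdio.h>

/* Same bit assignment as the comment in the diff above. */
enum { RB_CTX_NMI, RB_CTX_IRQ, RB_CTX_SOFTIRQ, RB_CTX_NORMAL };

static unsigned int current_context;	/* stand-in for cpu_buffer->current_context */

/* Return 0 on success, 1 if this context is already inside the buffer. */
static int recursive_lock(int bit)
{
	unsigned int val = current_context;

	if (val & (1u << bit))
		return 1;		/* same context re-entered */

	current_context = val | (1u << bit);
	return 0;
}

/*
 * x &= x - 1 clears the lowest set bit, which is always the bit of
 * the innermost (most recently entered) context.
 */
static void recursive_unlock(void)
{
	current_context &= current_context - 1;
}

int main(void)
{
	/* Normal context enters the ring buffer. */
	assert(recursive_lock(RB_CTX_NORMAL) == 0);	/* mask = 1000 */

	/* An IRQ arrives and nests: different context, so allowed. */
	assert(recursive_lock(RB_CTX_IRQ) == 0);	/* mask = 1010 */

	/* The IRQ path recursing into itself is caught. */
	assert(recursive_lock(RB_CTX_IRQ) == 1);

	recursive_unlock();		/* clears bit 1 (IRQ):    mask = 1000 */
	recursive_unlock();		/* clears bit 3 (normal): mask = 0000 */
	assert(current_context == 0);

	printf("per-context recursion checks behaved as expected\n");
	return 0;
}

Because a lower bit always belongs to a context that can only preempt higher-bit contexts, the lowest set bit is always the innermost active context, which is why the x &= x - 1 unlock clears exactly the right bit in LIFO order.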