Commit 82146529 authored by Shan Wei, committed by Steven Rostedt

tracing: Use __this_cpu_inc/dec operation instead of __get_cpu_var

__this_cpu_inc_return() or __this_cpu_dec() generates a single instruction,
which is faster than the __get_cpu_var() operation.

Link: http://lkml.kernel.org/r/50A9C1BD.1060308@gmail.com
Reviewed-by: Christoph Lameter <cl@linux.com>
Signed-off-by: Shan Wei <davidshan@tencent.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
parent d75f717e
@@ -1344,7 +1344,7 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
 	 */
 	preempt_disable_notrace();
 
-	use_stack = ++__get_cpu_var(ftrace_stack_reserve);
+	use_stack = __this_cpu_inc_return(ftrace_stack_reserve);
 	/*
 	 * We don't need any atomic variables, just a barrier.
 	 * If an interrupt comes in, we don't care, because it would
@@ -1398,7 +1398,7 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
  out:
 	/* Again, don't let gcc optimize things here */
 	barrier();
-	__get_cpu_var(ftrace_stack_reserve)--;
+	__this_cpu_dec(ftrace_stack_reserve);
 	preempt_enable_notrace();
 }