Commit 40d14da3 authored by Davidlohr Bueso, committed by Steven Rostedt (VMware)

fgraph: Convert ret_stack tasklist scanning to rcu

It seems that alloc_retstack_tasklist() can also take a lockless
approach for scanning the tasklist, instead of using the big global
tasklist_lock. For this we also kill another deprecated and rcu-unsafe
tsk->thread_group user, replacing it with for_each_process_thread(),
maintaining semantics.

Here tasklist_lock does not protect anything other than the list
against concurrent fork/exit. And considering that the whole thing
is capped by FTRACE_RETSTACK_ALLOC_SIZE (32), it should not be a
problem to have a potentially stale, yet stable, list. The task cannot
go away either, so we don't risk racing with ftrace_graph_exit_task(),
which clears the retstack.

The tsk->ret_stack management is not protected by tasklist_lock, being
serialized with the corresponding publish/subscribe barriers against
concurrent ftrace_push_return_trace(). In addition, this plays nicer
with cachelines by avoiding two atomic ops in the uncontended case.

Link: https://lkml.kernel.org/r/20200907013326.9870-1-dave@stgolabs.net
Acked-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
parent eb8d8b4c
@@ -387,8 +387,8 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
 		}
 	}
 
-	read_lock(&tasklist_lock);
-	do_each_thread(g, t) {
+	rcu_read_lock();
+	for_each_process_thread(g, t) {
 		if (start == end) {
 			ret = -EAGAIN;
 			goto unlock;
@@ -403,10 +403,10 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
 			smp_wmb();
 			t->ret_stack = ret_stack_list[start++];
 		}
-	} while_each_thread(g, t);
+	}
 unlock:
-	read_unlock(&tasklist_lock);
+	rcu_read_unlock();
 free:
 	for (i = start; i < end; i++)
 		kfree(ret_stack_list[i]);
...