Commit 32132a3d authored by Peter Zijlstra, committed by Ingo Molnar

perf: Specialize perf_event_exit_task()

The perf_remove_from_context() usage in __perf_event_exit_task() is
different from the other usages in that this site has already
detached and scheduled out the task context.

This will stand in the way of stronger assertions checking the (task)
context scheduling invariants.
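
Concretely, because the child context has already been detached and scheduled out, the event no longer needs to go through perf_remove_from_context() and its cross-CPU machinery; it can be unlinked directly under the context lock. A minimal sketch of the specialized path, condensed from the first hunk below (identifiers as in the patch):

	raw_spin_lock_irq(&child_ctx->lock);
	WARN_ON_ONCE(child_ctx->is_active);	/* context must already be scheduled out */

	if (!!child_event->parent)		/* inherited event: drop its group linkage too */
		perf_group_detach(child_event);
	list_del_event(child_event, child_ctx);	/* unlink from the (inactive) child context */

	raw_spin_unlock_irq(&child_ctx->lock);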
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 39a43640
@@ -8726,7 +8726,13 @@ __perf_event_exit_task(struct perf_event *child_event,
 	 * Do destroy all inherited groups, we don't care about those
 	 * and being thorough is better.
 	 */
-	perf_remove_from_context(child_event, !!child_event->parent);
+	raw_spin_lock_irq(&child_ctx->lock);
+	WARN_ON_ONCE(child_ctx->is_active);
+
+	if (!!child_event->parent)
+		perf_group_detach(child_event);
+	list_del_event(child_event, child_ctx);
+	raw_spin_unlock_irq(&child_ctx->lock);
 
 	/*
 	 * It can happen that the parent exits first, and has events
@@ -8746,17 +8752,15 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
 {
 	struct perf_event *child_event, *next;
 	struct perf_event_context *child_ctx, *clone_ctx = NULL;
-	unsigned long flags;
 
 	if (likely(!child->perf_event_ctxp[ctxn]))
 		return;
 
-	local_irq_save(flags);
+	local_irq_disable();
+	WARN_ON_ONCE(child != current);
 	/*
 	 * We can't reschedule here because interrupts are disabled,
-	 * and either child is current or it is a task that can't be
-	 * scheduled, so we are now safe from rescheduling changing
-	 * our context.
+	 * and child must be current.
 	 */
 	child_ctx = rcu_dereference_raw(child->perf_event_ctxp[ctxn]);
 
@@ -8776,7 +8780,7 @@ static void perf_event_exit_task_context(struct task_struct *child, int ctxn)
 	 */
 	clone_ctx = unclone_ctx(child_ctx);
 	update_context_time(child_ctx);
-	raw_spin_unlock_irqrestore(&child_ctx->lock, flags);
+	raw_spin_unlock_irq(&child_ctx->lock);
 
 	if (clone_ctx)
 		put_ctx(clone_ctx);