Commit 8833d0e2 authored by Peter Zijlstra, committed by Ingo Molnar

perf: Use task_ctx_sched_out()

We have a function that does exactly what we want here, use it. This
reduces the amount of cpuctx->task_ctx muckery.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 3e349507
@@ -2545,8 +2545,7 @@ static void perf_event_context_sched_out(struct task_struct *task, int ctxn,
 	if (do_switch) {
 		raw_spin_lock(&ctx->lock);
-		ctx_sched_out(ctx, cpuctx, EVENT_ALL);
-		cpuctx->task_ctx = NULL;
+		task_ctx_sched_out(cpuctx, ctx);
 		raw_spin_unlock(&ctx->lock);
 	}
 }