Commit fe525a28 authored by Peter Zijlstra, committed by Ben Hutchings

perf/core: Fix concurrent sys_perf_event_open() vs. 'move_group' race

commit 321027c1 upstream.

Di Shen reported a race between two concurrent sys_perf_event_open()
calls where both try and move the same pre-existing software group
into a hardware context.

The problem is exactly that described in commit:

  f63a8daa ("perf: Fix event->ctx locking")

... where, while we wait for a ctx->mutex acquisition, the event->ctx
relation can have changed under us.

That very same commit failed to recognise sys_perf_event_open() as an
external access vector to the events and thereby didn't apply the
established locking rules correctly.
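
(For reference, the established pattern is, roughly, the retry-and-revalidate
loop of perf_event_ctx_lock_nested(). The sketch below is an abridged
paraphrase for illustration, reconstructed from the 3.16-era code rather than
quoted verbatim:)

  static struct perf_event_context *
  perf_event_ctx_lock_nested(struct perf_event *event, int nesting)
  {
          struct perf_event_context *ctx;

  again:
          rcu_read_lock();
          ctx = ACCESS_ONCE(event->ctx);
          if (!atomic_inc_not_zero(&ctx->refcount)) {
                  /* ctx is being freed; wait for the new one to show up */
                  rcu_read_unlock();
                  goto again;
          }
          rcu_read_unlock();

          mutex_lock_nested(&ctx->mutex, nesting);
          if (event->ctx != ctx) {
                  /* event moved to another ctx while we slept; retry */
                  mutex_unlock(&ctx->mutex);
                  put_ctx(ctx);
                  goto again;
          }

          /* from here on, event->ctx cannot change under us */
          return ctx;
  }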

So while one sys_perf_event_open() call is stuck waiting on
mutex_lock_double(), the other (which owns said locks) moves the group
about. So by the time the former sys_perf_event_open() acquires the
locks, the context we've acquired is stale (and possibly dead).

Apply the established locking rules as per perf_event_ctx_lock_nested()
to the mutex_lock_double() for the 'move_group' case. This obviously means
we need to validate state after we acquire the locks.
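
(Illustration only, not the reproducer from the original report: a minimal
user-space sketch of the racing pattern, in which two threads pass the same
pre-existing software group leader as group_fd while opening a hardware
event, so both callers take the 'move_group' path. The thread harness and
the sys_perf_open() helper name are hypothetical; the perf_event_open()
usage follows the standard ABI.)

  #include <linux/perf_event.h>
  #include <pthread.h>
  #include <string.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static int group_fd;    /* software group leader shared by both threads */

  static long sys_perf_open(struct perf_event_attr *attr, pid_t pid, int cpu,
                            int group, unsigned long flags)
  {
          return syscall(__NR_perf_event_open, attr, pid, cpu, group, flags);
  }

  static void *attach_hw_event(void *arg)
  {
          struct perf_event_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.size   = sizeof(attr);
          attr.type   = PERF_TYPE_HARDWARE;     /* forces the move_group path */
          attr.config = PERF_COUNT_HW_CPU_CYCLES;

          /* both threads name the same software leader as group_fd */
          sys_perf_open(&attr, 0, -1, group_fd, 0);
          return NULL;
  }

  int main(void)
  {
          struct perf_event_attr sw;
          pthread_t t1, t2;

          memset(&sw, 0, sizeof(sw));
          sw.size   = sizeof(sw);
          sw.type   = PERF_TYPE_SOFTWARE;       /* group starts in a sw context */
          sw.config = PERF_COUNT_SW_TASK_CLOCK;
          group_fd  = sys_perf_open(&sw, 0, -1, -1, 0);  /* error handling omitted */

          pthread_create(&t1, NULL, attach_hw_event, NULL);
          pthread_create(&t2, NULL, attach_hw_event, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          return 0;
  }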

Reported-by: Di Shen (Keen Lab)
Tested-by: John Dias <joaodias@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Min Chong <mchong@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Fixes: f63a8daa ("perf: Fix event->ctx locking")
Link: http://lkml.kernel.org/r/20170106131444.GZ3174@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
[bwh: Backported to 3.16:
 - Use ACCESS_ONCE() instead of READ_ONCE()
 - Test perf_event::group_flags instead of group_caps
 - Add the err_locked cleanup block, which we didn't need before
 - Adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
parent 5838f3ef
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -7311,6 +7311,37 @@ static void mutex_lock_double(struct mutex *a, struct mutex *b)
         mutex_lock_nested(b, SINGLE_DEPTH_NESTING);
 }
 
+/*
+ * Variation on perf_event_ctx_lock_nested(), except we take two context
+ * mutexes.
+ */
+static struct perf_event_context *
+__perf_event_ctx_lock_double(struct perf_event *group_leader,
+                             struct perf_event_context *ctx)
+{
+        struct perf_event_context *gctx;
+
+again:
+        rcu_read_lock();
+        gctx = ACCESS_ONCE(group_leader->ctx);
+        if (!atomic_inc_not_zero(&gctx->refcount)) {
+                rcu_read_unlock();
+                goto again;
+        }
+        rcu_read_unlock();
+
+        mutex_lock_double(&gctx->mutex, &ctx->mutex);
+
+        if (group_leader->ctx != gctx) {
+                mutex_unlock(&ctx->mutex);
+                mutex_unlock(&gctx->mutex);
+                put_ctx(gctx);
+                goto again;
+        }
+
+        return gctx;
+}
+
 /**
  * sys_perf_event_open - open a performance event, associate it to a task/cpu
  *
@@ -7522,14 +7553,31 @@ SYSCALL_DEFINE5(perf_event_open,
         }
 
         if (move_group) {
-                gctx = group_leader->ctx;
+                gctx = __perf_event_ctx_lock_double(group_leader, ctx);
+
+                /*
+                 * Check if we raced against another sys_perf_event_open() call
+                 * moving the software group underneath us.
+                 */
+                if (!(group_leader->group_flags & PERF_GROUP_SOFTWARE)) {
+                        /*
+                         * If someone moved the group out from under us, check
+                         * if this new event wound up on the same ctx, if so
+                         * its the regular !move_group case, otherwise fail.
+                         */
+                        if (gctx != ctx) {
+                                err = -EINVAL;
+                                goto err_locked;
+                        } else {
+                                perf_event_ctx_unlock(group_leader, gctx);
+                                move_group = 0;
+                        }
+                }
 
                 /*
                  * See perf_event_ctx_lock() for comments on the details
                  * of swizzling perf_event::ctx.
                  */
-                mutex_lock_double(&gctx->mutex, &ctx->mutex);
-
                 perf_remove_from_context(group_leader, false);
 
                 /*
@@ -7570,7 +7618,7 @@ SYSCALL_DEFINE5(perf_event_open,
         perf_unpin_context(ctx);
 
         if (move_group) {
-                mutex_unlock(&gctx->mutex);
+                perf_event_ctx_unlock(group_leader, gctx);
                 put_ctx(gctx);
         }
         mutex_unlock(&ctx->mutex);
@@ -7599,6 +7647,11 @@ SYSCALL_DEFINE5(perf_event_open,
         fd_install(event_fd, event_file);
         return event_fd;
 
+err_locked:
+        if (move_group)
+                perf_event_ctx_unlock(group_leader, gctx);
+        mutex_unlock(&ctx->mutex);
+        fput(event_file);
 err_context:
         perf_unpin_context(ctx);
         put_ctx(ctx);