Commit 12709f95 authored by Oleg Nesterov, committed by Greg Kroah-Hartman

perf: Fix ring_buffer_attach() RCU sync, again

commit 2f993cf0 upstream.

While looking for other users of get_state/cond_sync, I found
ring_buffer_attach(), and it looks obviously buggy.

Don't we need to ensure that the "synchronize" happens _between_
list_del() and list_add()?

IOW, suppose that ring_buffer_attach() is preempted right after
get_state_synchronize_rcu() and the grace period completes before
spin_lock().

In that case cond_synchronize_rcu() does nothing, and we reuse
->rb_entry without waiting for a grace period in between.
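To make the window concrete: get_state_synchronize_rcu() snapshots the
current grace-period state, and cond_synchronize_rcu() returns
immediately if a full grace period has already elapsed since that
snapshot. The pre-fix ordering, reassembled from the lines removed in
the diff below (the timeline comments are editorial, not in the
original source):

	if (event->rb) {
		old_rb = event->rb;
		event->rcu_batches = get_state_synchronize_rcu();
		event->rcu_pending = 1;
		/*
		 * Window: if we are preempted here and a full grace
		 * period elapses before we take the lock, the snapshot
		 * in ->rcu_batches is already satisfied ...
		 */
		spin_lock_irqsave(&old_rb->event_lock, flags);
		list_del_rcu(&event->rb_entry);
		spin_unlock_irqrestore(&old_rb->event_lock, flags);
	}

	if (event->rcu_pending && rb) {
		/*
		 * ... so this waits for nothing, and ->rb_entry is
		 * re-linked below while readers may still traverse it.
		 */
		cond_synchronize_rcu(event->rcu_batches);
		event->rcu_pending = 0;
	}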

This patch also moves the ->rcu_pending check under "if (rb)", which
makes it more readable, IMO.
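For contrast, the post-fix ordering, reassembled from the lines added
in the diff below (comments editorial): the snapshot is now taken after
list_del_rcu(), so cond_synchronize_rcu() cannot be satisfied by a
grace period that ended before the unlink.

	if (event->rb) {
		old_rb = event->rb;
		spin_lock_irqsave(&old_rb->event_lock, flags);
		list_del_rcu(&event->rb_entry);
		spin_unlock_irqrestore(&old_rb->event_lock, flags);

		/* Snapshot taken only after the entry is unlinked. */
		event->rcu_batches = get_state_synchronize_rcu();
		event->rcu_pending = 1;
	}

	if (rb) {
		/* Wait (only if needed) right before re-linking. */
		if (event->rcu_pending) {
			cond_synchronize_rcu(event->rcu_batches);
			event->rcu_pending = 0;
		}

		spin_lock_irqsave(&rb->event_lock, flags);
		list_add_rcu(&event->rb_entry, &rb->event_list);
		spin_unlock_irqrestore(&rb->event_lock, flags);
	}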
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dave@stgolabs.net
Cc: der.herr@hofr.at
Cc: josh@joshtriplett.org
Cc: tj@kernel.org
Fixes: b69cf536 ("perf: Fix a race between ring_buffer_detach() and ring_buffer_attach()")
Link: http://lkml.kernel.org/r/20150530200425.GA15748@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 51fbd77c
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4331,20 +4331,20 @@ static void ring_buffer_attach(struct perf_event *event,
 		WARN_ON_ONCE(event->rcu_pending);
 
 		old_rb = event->rb;
-		event->rcu_batches = get_state_synchronize_rcu();
-		event->rcu_pending = 1;
-
 		spin_lock_irqsave(&old_rb->event_lock, flags);
 		list_del_rcu(&event->rb_entry);
 		spin_unlock_irqrestore(&old_rb->event_lock, flags);
-	}
 
-	if (event->rcu_pending && rb) {
-		cond_synchronize_rcu(event->rcu_batches);
-		event->rcu_pending = 0;
+		event->rcu_batches = get_state_synchronize_rcu();
+		event->rcu_pending = 1;
 	}
 
 	if (rb) {
+		if (event->rcu_pending) {
+			cond_synchronize_rcu(event->rcu_batches);
+			event->rcu_pending = 0;
+		}
+
 		spin_lock_irqsave(&rb->event_lock, flags);
 		list_add_rcu(&event->rb_entry, &rb->event_list);
 		spin_unlock_irqrestore(&rb->event_lock, flags);