Commit 68cccec0 authored by Peter Zijlstra, committed by Greg Kroah-Hartman

locking/static_key: Fix false positive warnings on concurrent dec/inc

[ Upstream commit a1247d06 ]

Even though the atomic_dec_and_mutex_lock() in
__static_key_slow_dec_cpuslocked() can never see a negative value in
key->enabled, the subsequent sanity check re-reads key->enabled, which may
have been set to -1 in the meantime by static_key_slow_inc_cpuslocked():

                CPU  A                               CPU B

 __static_key_slow_dec_cpuslocked():          static_key_slow_inc_cpuslocked():
                               # enabled = 1
   atomic_dec_and_mutex_lock()
                               # enabled = 0
                                              atomic_read() == 0
                                              atomic_set(-1)
                               # enabled = -1
   val = atomic_read()
   # Oops - val == -1!

The test case is TCP's clean_acked_data_enable() / clean_acked_data_disable()
as tickled by KTLS (net/ktls).
Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reported-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Tested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: ard.biesheuvel@linaro.org
Cc: oss-drivers@netronome.com
Cc: pbonzini@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
parent 2f5decc2
@@ -206,6 +206,8 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key,
 					   unsigned long rate_limit,
 					   struct delayed_work *work)
 {
+	int val;
+
 	lockdep_assert_cpus_held();
 
 	/*
@@ -215,17 +217,20 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key,
 	 * returns is unbalanced, because all other static_key_slow_inc()
 	 * instances block while the update is in progress.
 	 */
-	if (!atomic_dec_and_mutex_lock(&key->enabled, &jump_label_mutex)) {
-		WARN(atomic_read(&key->enabled) < 0,
-		     "jump label: negative count!\n");
+	val = atomic_fetch_add_unless(&key->enabled, -1, 1);
+	if (val != 1) {
+		WARN(val < 0, "jump label: negative count!\n");
 		return;
 	}
 
-	if (rate_limit) {
-		atomic_inc(&key->enabled);
-		schedule_delayed_work(work, rate_limit);
-	} else {
-		jump_label_update(key);
+	jump_label_lock();
+	if (atomic_dec_and_test(&key->enabled)) {
+		if (rate_limit) {
+			atomic_inc(&key->enabled);
+			schedule_delayed_work(work, rate_limit);
+		} else {
+			jump_label_update(key);
+		}
 	}
 	jump_label_unlock();
 }