Commit e4a02ed2 authored by Guenter Roeck, committed by Ingo Molnar

locking/ww_mutex: Fix runtime warning in the WW mutex selftest

If CONFIG_WW_MUTEX_SELFTEST=y is enabled, booting an image
in an arm64 virtual machine results in the following
traceback if 8 CPUs are enabled:

  DEBUG_LOCKS_WARN_ON(__owner_task(owner) != current)
  WARNING: CPU: 2 PID: 537 at kernel/locking/mutex.c:1033 __mutex_unlock_slowpath+0x1a8/0x2e0
  ...
  Call trace:
   __mutex_unlock_slowpath()
   ww_mutex_unlock()
   test_cycle_work()
   process_one_work()
   worker_thread()
   kthread()
   ret_from_fork()

If requesting b_mutex fails with -EDEADLK, the error variable
is reassigned to the return value from calling ww_mutex_lock
on a_mutex again. If this call fails, a_mutex is not locked.
It is, however, unconditionally unlocked subsequently, causing
the reported warning. Fix the problem by using two error variables.
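For illustration, the corrected flow looks roughly like the sketch below (condensed from the diff further down; cycle, a_mutex, b_mutex and ctx are the selftest's own names). The unlock of a_mutex is now guarded by the second error variable, so it only happens if the retry actually acquired the lock:

  int err, erra = 0;

  err = ww_mutex_lock(cycle->b_mutex, &ctx);
  if (err == -EDEADLK) {
          /* Back off: drop a_mutex, take b_mutex the slow way, retry a_mutex. */
          err = 0;
          ww_mutex_unlock(&cycle->a_mutex);
          ww_mutex_lock_slow(cycle->b_mutex, &ctx);
          erra = ww_mutex_lock(&cycle->a_mutex, &ctx);
  }

  if (!err)
          ww_mutex_unlock(cycle->b_mutex);
  if (!erra)
          /* Only unlock a_mutex if it is actually held. */
          ww_mutex_unlock(&cycle->a_mutex);

  cycle->result = err ?: erra;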

With this change, the selftest still fails as follows:

  cyclic deadlock not resolved, ret[7/8] = -35

However, the traceback is gone.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: d1b42b80 ("locking/ww_mutex: Add kselftests for resolving ww_mutex cyclic deadlocks")
Link: http://lkml.kernel.org/r/1538516929-9734-1-git-send-email-linux@roeck-us.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 6d348925
@@ -260,7 +260,7 @@ static void test_cycle_work(struct work_struct *work)
 {
 	struct test_cycle *cycle = container_of(work, typeof(*cycle), work);
 	struct ww_acquire_ctx ctx;
-	int err;
+	int err, erra = 0;
 
 	ww_acquire_init(&ctx, &ww_class);
 	ww_mutex_lock(&cycle->a_mutex, &ctx);
@@ -270,17 +270,19 @@ static void test_cycle_work(struct work_struct *work)
 
 	err = ww_mutex_lock(cycle->b_mutex, &ctx);
 	if (err == -EDEADLK) {
+		err = 0;
 		ww_mutex_unlock(&cycle->a_mutex);
 		ww_mutex_lock_slow(cycle->b_mutex, &ctx);
-		err = ww_mutex_lock(&cycle->a_mutex, &ctx);
+		erra = ww_mutex_lock(&cycle->a_mutex, &ctx);
 	}
 
 	if (!err)
 		ww_mutex_unlock(cycle->b_mutex);
-	ww_mutex_unlock(&cycle->a_mutex);
+	if (!erra)
+		ww_mutex_unlock(&cycle->a_mutex);
 	ww_acquire_fini(&ctx);
 
-	cycle->result = err;
+	cycle->result = err ?: erra;
 }
 
 static int __test_cycle(unsigned int nthreads)
...