Commit 18127d11 authored by Charles Keepax, committed by Greg Kroah-Hartman

regulator: core: Avoid potential deadlock on regulator_unregister

[ Upstream commit 06377301 ]

Lockdep reports the following issue on my setup:

Possible unsafe locking scenario:

CPU0                    CPU1
----                    ----
lock((work_completion)(&(&rdev->disable_work)->work));
                        lock(regulator_list_mutex);
                        lock((work_completion)(&(&rdev->disable_work)->work));
lock(regulator_list_mutex);

The problem is that regulator_unregister takes the
regulator_list_mutex and then calls flush_work on disable_work. But
regulator_disable_work calls regulator_lock_dependent, which will
also take the regulator_list_mutex, resulting in a deadlock if the
flush_work call actually needs to flush the work.
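For reference, the two contending paths look roughly like this. This
is an abbreviated sketch of the pre-patch code, trimmed to the locking
calls involved, not the verbatim source:

    /* CPU0: regulator_unregister flushed the work under the mutex */
    void regulator_unregister(struct regulator_dev *rdev)
    {
            mutex_lock(&regulator_list_mutex);
            debugfs_remove_recursive(rdev->debugfs);
            flush_work(&rdev->disable_work.work);   /* waits for CPU1 */
            /* ... rest of teardown ... */
            mutex_unlock(&regulator_list_mutex);
    }

    /* CPU1: the queued work needs the same mutex to make progress */
    static void regulator_disable_work(struct work_struct *work)
    {
            struct regulator_dev *rdev = container_of(work,
                            struct regulator_dev, disable_work.work);
            struct ww_acquire_ctx ww_ctx;

            /* takes regulator_list_mutex internally, so this blocks
             * behind CPU0 while CPU0 blocks in flush_work */
            regulator_lock_dependent(rdev, &ww_ctx);
            /* ... do the deferred disables ... */
    }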

Fix this issue by moving the flush_work outside of the
regulator_list_mutex. The list mutex is not used to guard the point at
which the delayed work is queued, so its use adds no additional safety.

Fixes: f8702f9e ("regulator: core: Use ww_mutex for regulators locking")
Signed-off-by: Charles Keepax <ckeepax@opensource.cirrus.com>
Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
parent cf0e0ec1
@@ -5062,10 +5062,11 @@ void regulator_unregister(struct regulator_dev *rdev)
 		regulator_put(rdev->supply);
 	}
 
+	flush_work(&rdev->disable_work.work);
+
 	mutex_lock(&regulator_list_mutex);
 	debugfs_remove_recursive(rdev->debugfs);
-	flush_work(&rdev->disable_work.work);
 	WARN_ON(rdev->open_count);
 	regulator_remove_coupling(rdev);
 	unset_regulator_supplies(rdev);
...