Commit 2d7f9f8c authored by Tejun Heo, committed by Greg Kroah-Hartman

kernfs: Improve kernfs_drain() and always call on removal

__kernfs_remove() was skipping draining based on KERNFS_ACTIVATED - whether
the node has ever been activated since creation. Instead, update it to
always call kernfs_drain(), which now drains or skips based on the precise
drain conditions. This ensures that nodes will be deactivated and drained
regardless of their state.
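
For reference, the skip check that kernfs_drain() now applies (condensed from
the first hunk below; KN_DEACTIVATED_BIAS is the value kn->active holds once a
node is fully deactivated and idle) looks like:

	/* already fully deactivated and no open files left to drain */
	if (atomic_read(&kn->active) == KN_DEACTIVATED_BIAS &&
	    !kernfs_should_drain_open_files(kn))
		return;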

This doesn't make a meaningful difference now, but it will enable deactivating
and draining nodes dynamically by making removals safe when racing against
those operations.

While at it, drop / update comments.

v2: Fix the inverted test on kernfs_should_drain_open_files() noted by
    Chengming. In the previous posting this was fixed by the next, unrelated
    patch.

Cc: Chengming Zhou <zhouchengming@bytedance.com>
Tested-by: Chengming Zhou <zhouchengming@bytedance.com>
Reviewed-by: Chengming Zhou <zhouchengming@bytedance.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Link: https://lore.kernel.org/r/20220828050440.734579-6-tj@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent bdb2fd7f
@@ -472,6 +472,16 @@ static void kernfs_drain(struct kernfs_node *kn)
 	lockdep_assert_held_write(&root->kernfs_rwsem);
 	WARN_ON_ONCE(kernfs_active(kn));
 
+	/*
+	 * Skip draining if already fully drained. This avoids draining and its
+	 * lockdep annotations for nodes which have never been activated
+	 * allowing embedding kernfs_remove() in create error paths without
+	 * worrying about draining.
+	 */
+	if (atomic_read(&kn->active) == KN_DEACTIVATED_BIAS &&
+	    !kernfs_should_drain_open_files(kn))
+		return;
+
 	up_write(&root->kernfs_rwsem);
 
 	if (kernfs_lockdep(kn)) {
@@ -480,7 +490,6 @@ static void kernfs_drain(struct kernfs_node *kn)
 		lock_contended(&kn->dep_map, _RET_IP_);
 	}
 
-	/* but everyone should wait for draining */
 	wait_event(root->deactivate_waitq,
 		   atomic_read(&kn->active) == KN_DEACTIVATED_BIAS);
 
@@ -1370,23 +1379,14 @@ static void __kernfs_remove(struct kernfs_node *kn)
 		pos = kernfs_leftmost_descendant(kn);
 
 		/*
-		 * kernfs_drain() drops kernfs_rwsem temporarily and @pos's
+		 * kernfs_drain() may drop kernfs_rwsem temporarily and @pos's
 		 * base ref could have been put by someone else by the time
 		 * the function returns. Make sure it doesn't go away
 		 * underneath us.
 		 */
 		kernfs_get(pos);
 
-		/*
-		 * Drain iff @kn was activated. This avoids draining and
-		 * its lockdep annotations for nodes which have never been
-		 * activated and allows embedding kernfs_remove() in create
-		 * error paths without worrying about draining.
-		 */
-		if (kn->flags & KERNFS_ACTIVATED)
-			kernfs_drain(pos);
-		else
-			WARN_ON_ONCE(atomic_read(&kn->active) != KN_DEACTIVATED_BIAS);
+		kernfs_drain(pos);
 
 		/*
 		 * kernfs_unlink_sibling() succeeds once per node. Use it
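
As an illustration of the "create error paths" case referenced in the comments,
here is a hypothetical kernfs user (not part of this patch; example_create()
and example_populate() are made-up names) that removes a node it never
activated. Such a node already sits at KN_DEACTIVATED_BIAS and has no open
files, so kernfs_drain() now returns early and kernfs_remove() stays cheap in
the error path:

#include <linux/err.h>
#include <linux/kernfs.h>

static int example_populate(struct kernfs_node *dir);	/* hypothetical helper */

static struct kernfs_node *example_create(struct kernfs_node *parent)
{
	struct kernfs_node *dir;
	int ret;

	dir = kernfs_create_dir(parent, "example", 0755, NULL);
	if (IS_ERR(dir))
		return dir;

	ret = example_populate(dir);
	if (ret) {
		/*
		 * @dir was never activated: kernfs_drain() sees
		 * KN_DEACTIVATED_BIAS and no open files and skips draining.
		 */
		kernfs_remove(dir);
		return ERR_PTR(ret);
	}

	kernfs_activate(dir);
	return dir;
}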