Commit bcab1ddd authored by Kirill Tkhai, committed by David S. Miller

net: Move mutex_unlock() in cleanup_net() up

net_sem protects against pernet_list changing, while
ops_free_list() only does a simple kfree() and cannot
race with other pernet_operations callbacks.

So we may release net_mutex earlier than before.
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Acked-by: Andrei Vagin <avagin@virtuozzo.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 1a57feb8
...
@@ -522,11 +522,12 @@ static void cleanup_net(struct work_struct *work)
 	list_for_each_entry_reverse(ops, &pernet_list, list)
 		ops_exit_list(ops, &net_exit_list);

+	mutex_unlock(&net_mutex);
+
 	/* Free the net generic variables */
 	list_for_each_entry_reverse(ops, &pernet_list, list)
 		ops_free_list(ops, &net_exit_list);

-	mutex_unlock(&net_mutex);
 	up_read(&net_sem);

 	/* Ensure there are no outstanding rcu callbacks using this
...