Commit 194d4357 authored by Jan Kara, committed by Jiri Slaby

fsnotify: fix oops in fsnotify_clear_marks_by_group_flags()

commit 8f2f3eb5 upstream.

fsnotify_clear_marks_by_group_flags() can race with
fsnotify_destroy_marks() so that when fsnotify_destroy_mark_locked()
drops mark_mutex, a mark from the list iterated by
fsnotify_clear_marks_by_group_flags() can be freed and thus the next
entry pointer we have cached may become stale and we dereference free
memory.

Fix the problem by first moving marks to free to a special private list
and then always free the first entry in the special list.  This method
is safe even when entries from the list can disappear once we drop the
lock.
Signed-off-by: Jan Kara <jack@suse.com>
Reported-by: Ashish Sangwan <a.sangwan@samsung.com>
Reviewed-by: Ashish Sangwan <a.sangwan@samsung.com>
Cc: Lino Sanfilippo <LinoSanfilippo@gmx.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
parent 91aca356
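The essence of the patch below is a locking discipline for draining a list while other paths can concurrently remove entries from it: matching marks are first moved onto a private to_free list in one pass under mark_mutex, and are then destroyed one at a time, re-acquiring the mutex on every round and always taking the first entry, so the loop never holds a cached "next" pointer that a concurrent fsnotify_destroy_marks() could have freed. What follows is a minimal userspace sketch of that discipline, not kernel code; the names (struct mark, group_lock, clear_marks_by_flags) and the pthread/singly-linked-list plumbing are illustrative assumptions, and unlike in the kernel the private list here is not actually reachable by other threads, so the sketch demonstrates only the safe-drain pattern, not the race itself.

/* Sketch only: illustrative names, plain pthreads, hand-rolled list. */
#include <pthread.h>
#include <stdlib.h>

struct mark {
        unsigned int flags;
        struct mark *next;
};

static pthread_mutex_t group_lock = PTHREAD_MUTEX_INITIALIZER;
static struct mark *marks;              /* group's mark list, protected by group_lock */

static void clear_marks_by_flags(unsigned int flags)
{
        struct mark *to_free = NULL;    /* private list of marks to destroy */
        struct mark **pp, *mark;

        /* Phase 1: under the lock, move all matching marks to to_free in one go. */
        pthread_mutex_lock(&group_lock);
        pp = &marks;
        while ((mark = *pp) != NULL) {
                if (mark->flags & flags) {
                        *pp = mark->next;       /* unlink from the group list */
                        mark->next = to_free;   /* push onto the private list */
                        to_free = mark;
                } else {
                        pp = &mark->next;
                }
        }
        pthread_mutex_unlock(&group_lock);

        /*
         * Phase 2: destroy marks one at a time.  The lock is re-taken every
         * round and only the first entry is taken, so it does not matter if
         * the list shrank while the lock was dropped -- the loop never keeps
         * a cached "next" pointer that could go stale.
         */
        for (;;) {
                pthread_mutex_lock(&group_lock);
                mark = to_free;
                if (!mark) {
                        pthread_mutex_unlock(&group_lock);
                        break;
                }
                to_free = mark->next;
                pthread_mutex_unlock(&group_lock);
                free(mark);             /* stands in for destroy + put of the mark */
        }
}

int main(void)
{
        /* Usage: build a tiny list (two marks match flag 0x1, one does not), then drain it. */
        for (int i = 0; i < 3; i++) {
                struct mark *m = calloc(1, sizeof(*m));
                m->flags = (i == 1) ? 0x2 : 0x1;
                m->next = marks;
                marks = m;
        }
        clear_marks_by_flags(0x1);
        return 0;
}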
@@ -293,16 +293,36 @@ void fsnotify_clear_marks_by_group_flags(struct fsnotify_group *group,
 					 unsigned int flags)
 {
 	struct fsnotify_mark *lmark, *mark;
+	LIST_HEAD(to_free);
 
+	/*
+	 * We have to be really careful here. Anytime we drop mark_mutex, e.g.
+	 * fsnotify_clear_marks_by_inode() can come and free marks. Even in our
+	 * to_free list so we have to use mark_mutex even when accessing that
+	 * list. And freeing mark requires us to drop mark_mutex. So we can
+	 * reliably free only the first mark in the list. That's why we first
+	 * move marks to free to to_free list in one go and then free marks in
+	 * to_free list one by one.
+	 */
 	mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
 	list_for_each_entry_safe(mark, lmark, &group->marks_list, g_list) {
-		if (mark->flags & flags) {
-			fsnotify_get_mark(mark);
-			fsnotify_destroy_mark_locked(mark, group);
-			fsnotify_put_mark(mark);
-		}
+		if (mark->flags & flags)
+			list_move(&mark->g_list, &to_free);
 	}
 	mutex_unlock(&group->mark_mutex);
+
+	while (1) {
+		mutex_lock_nested(&group->mark_mutex, SINGLE_DEPTH_NESTING);
+		if (list_empty(&to_free)) {
+			mutex_unlock(&group->mark_mutex);
+			break;
+		}
+		mark = list_first_entry(&to_free, struct fsnotify_mark, g_list);
+		fsnotify_get_mark(mark);
+		fsnotify_destroy_mark_locked(mark, group);
+		mutex_unlock(&group->mark_mutex);
+		fsnotify_put_mark(mark);
+	}
 }
 
 /*
...