Commit 1a88b536 authored by Al Viro, committed by Linus Torvalds

Fix incomplete __mntput locking

Getting this wrong caused

	WARNING: at fs/namespace.c:636 mntput_no_expire+0xac/0xf2()

due to optimistically checking cpu_writer->mnt outside the spinlock.

Here's what we really want:
 * we know that nobody will set cpu_writer->mnt to mnt from now on
 * all changes to that sucker are done under cpu_writer->lock
 * we want the laziest equivalent of
	spin_lock(&cpu_writer->lock);
	if (likely(cpu_writer->mnt != mnt)) {
		spin_unlock(&cpu_writer->lock);
		continue;
	}
	/* do stuff */
  that would make sure we won't miss an earlier setting of ->mnt done by
  another CPU.

Anyway, for now we just move the spin_lock() earlier and move the test
into the properly locked region.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Reported-and-tested-by: Li Zefan <lizf@cn.fujitsu.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent d2f8d7ee
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -614,9 +614,11 @@ static inline void __mntput(struct vfsmount *mnt)
 	 */
 	for_each_possible_cpu(cpu) {
 		struct mnt_writer *cpu_writer = &per_cpu(mnt_writers, cpu);
-		if (cpu_writer->mnt != mnt)
-			continue;
 		spin_lock(&cpu_writer->lock);
+		if (cpu_writer->mnt != mnt) {
+			spin_unlock(&cpu_writer->lock);
+			continue;
+		}
 		atomic_add(cpu_writer->count, &mnt->__mnt_writers);
 		cpu_writer->count = 0;
 		/*