Commit b7b5563e authored by Andrew Morton, committed by Linus Torvalds

[PATCH] dentry d_bucket fix

The gap between checking d_bucket and sampling d_move_count looks like a bug
to me.

It feels safer to be checking d_bucket after taking the lock, when we know
that it is stable.

And it's a little faster to check d_bucket after having checked the hash
rather than before.
parent 90b163a4
@@ -975,12 +975,6 @@ struct dentry * __d_lookup(struct dentry * parent, struct qstr * name)
 		smp_read_barrier_depends();
 		dentry = hlist_entry(node, struct dentry, d_hash);
-		/* if lookup ends up in a different bucket
-		 * due to concurrent rename, fail it
-		 */
-		if (unlikely(dentry->d_bucket != head))
-			break;
 		smp_rmb();
 		if (dentry->d_name.hash != hash)
@@ -990,6 +984,13 @@ struct dentry * __d_lookup(struct dentry * parent, struct qstr * name)
 		spin_lock(&dentry->d_lock);
+		/*
+		 * If lookup ends up in a different bucket due to concurrent
+		 * rename, fail it
+		 */
+		if (unlikely(dentry->d_bucket != head))
+			goto terminate;
 		/*
 		 * Recheck the dentry after taking the lock - d_move may have
 		 * changed things. Don't bother checking the hash because we're
@@ -1014,6 +1015,7 @@ struct dentry * __d_lookup(struct dentry * parent, struct qstr * name)
 			atomic_inc(&dentry->d_count);
 			found = dentry;
 		}
+terminate:
 		spin_unlock(&dentry->d_lock);
 		break;
 next: