Commit 2a1c7f53 authored by NeilBrown, committed by J. Bruce Fields

sunrpc/cache: use cache_fresh_unlocked consistently and correctly.

cache_fresh_unlocked() is called when a cache entry
has been updated and ensures that any pending upcalls
are cleared.

So every time we update a cache entry we should call this,
and it should be the only way we clear pending upcalls
(that sort of uniformity makes the code much easier to read).
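
For context, this is roughly what cache_fresh_unlocked() does (a sketch
of the behaviour described above; comments are editorial and the exact
source may differ slightly):

	static void cache_fresh_unlocked(struct cache_head *head,
					 struct cache_detail *detail)
	{
		/* If an upcall was still pending: clear the flag, wake any
		 * deferred requests and drop the queued upcall.
		 */
		if (test_and_clear_bit(CACHE_PENDING, &head->flags)) {
			cache_revisit_request(head);
			cache_dequeue(detail, head);
		}
	}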

try_to_negate_entry() will (possibly) mark an entry as
negative.  If it doesn't, it is because the entry is
already VALID.
Either way the entry is valid on exit, so it is appropriate
to call cache_fresh_unlocked().
So tidy up try_to_negate_entry() to do that, and remove the
partial open-coded cache_fresh_unlocked() from the one
call-site of try_to_negate_entry().
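
Reconstructed from the first hunk below, try_to_negate_entry() then
reads roughly as follows (comments are editorial):

	static int try_to_negate_entry(struct cache_detail *detail, struct cache_head *h)
	{
		int rv;

		write_lock(&detail->hash_lock);
		rv = cache_is_valid(h);
		if (rv == -EAGAIN) {
			/* Not valid yet, so make it negative instead. */
			set_bit(CACHE_NEGATIVE, &h->flags);
			cache_fresh_locked(h, seconds_since_boot()+CACHE_NEW_EXPIRY);
			rv = -ENOENT;
		}
		write_unlock(&detail->hash_lock);
		/* The entry is valid either way, so always clear pending upcalls. */
		cache_fresh_unlocked(h, detail);
		return rv;
	}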

In the other branch of the 'switch(cache_make_upcall())',
we again have a partial open-coded version of cache_fresh_unlocked().
Replace that with a real call.

And again in cache_clean(), use a real call to cache_fresh_unlocked().

These call sites might previously have called
cache_revisit_request() even if CACHE_PENDING wasn't set.
That is never necessary, because cache_revisit_request() can
only do anything if the item is in the cache_defer_hash.
However, any time an item is added to the cache_defer_hash
(setup_deferral), the code immediately tests CACHE_PENDING
and removes the entry again if the bit is clear.  So everywhere
else we only need to call cache_revisit_request() if we have
just cleared CACHE_PENDING.
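
The pattern being relied on is roughly this (simplified from
setup_deferral()/cache_defer_req(); the exact call signature here is
from memory and may differ):

	setup_deferral(dreq, item, 1);	/* adds dreq to cache_defer_hash */
	if (!test_bit(CACHE_PENDING, &item->flags))
		/* The upcall was answered (or cancelled) while we were
		 * setting up the deferral, so pull the request straight
		 * back out and revisit it now.
		 */
		cache_revisit_request(item);
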
Reported-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
parent f9e1aedc
@@ -228,15 +228,14 @@ static int try_to_negate_entry(struct cache_detail *detail, struct cache_head *h)
 
 	write_lock(&detail->hash_lock);
 	rv = cache_is_valid(h);
-	if (rv != -EAGAIN) {
-		write_unlock(&detail->hash_lock);
-		return rv;
+	if (rv == -EAGAIN) {
+		set_bit(CACHE_NEGATIVE, &h->flags);
+		cache_fresh_locked(h, seconds_since_boot()+CACHE_NEW_EXPIRY);
+		rv = -ENOENT;
 	}
-	set_bit(CACHE_NEGATIVE, &h->flags);
-	cache_fresh_locked(h, seconds_since_boot()+CACHE_NEW_EXPIRY);
 	write_unlock(&detail->hash_lock);
 	cache_fresh_unlocked(h, detail);
-	return -ENOENT;
+	return rv;
 }
 
 /*
@@ -275,13 +274,10 @@ int cache_check(struct cache_detail *detail,
 		if (!test_and_set_bit(CACHE_PENDING, &h->flags)) {
 			switch (cache_make_upcall(detail, h)) {
 			case -EINVAL:
-				clear_bit(CACHE_PENDING, &h->flags);
-				cache_revisit_request(h);
 				rv = try_to_negate_entry(detail, h);
 				break;
 			case -EAGAIN:
-				clear_bit(CACHE_PENDING, &h->flags);
-				cache_revisit_request(h);
+				cache_fresh_unlocked(h, detail);
 				break;
 			}
 		}
@@ -457,9 +453,7 @@ static int cache_clean(void)
 		current_index ++;
 		spin_unlock(&cache_list_lock);
 		if (ch) {
-			if (test_and_clear_bit(CACHE_PENDING, &ch->flags))
-				cache_dequeue(current_detail, ch);
-			cache_revisit_request(ch);
+			cache_fresh_unlocked(ch, d);
 			cache_put(ch, d);
 		}
 	} else