Commit 56e3d1cd authored by Jason A. Donenfeld, committed by Daniel Vetter

kref: prefer atomic_inc_not_zero to atomic_add_unless

On most platforms, atomic_inc_not_zero is only this fallback define:

 #define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)

On those platforms, this patch is functionally a no-op. However, on PPC, there is
actually an explicit definition of atomic_inc_not_zero with its own
assembly that is slightly more optimized than atomic_add_unless. So,
this patch changes kref to use atomic_inc_not_zero instead, for PPC and
any future platforms that might provide an explicit implementation.

This also puts this usage of kref more in line with a verbatim reading
of the examples in Paul McKenney's paper [1] in the section titled "2.4
Atomic Counting With Check and Release Memory Barrier", which uses
atomic_inc_not_zero.
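
For illustration, the check-and-get pattern those examples describe looks roughly like the following from a caller's point of view (a minimal sketch only; struct foo, foo_table and foo_lookup are hypothetical names used here, not part of this patch, and the table is assumed to be protected by a spinlock):

 struct foo *foo_get(struct foo_table *tbl, int id)
 {
 	struct foo *f;

 	spin_lock(&tbl->lock);
 	f = foo_lookup(tbl, id);		/* hypothetical lookup helper */
 	if (f && !kref_get_unless_zero(&f->kref))
 		f = NULL;			/* refcount already hit zero; object is being torn down */
 	spin_unlock(&tbl->lock);
 	return f;
 }

The important property is that the reference is only taken if the count is still nonzero, which is exactly the semantic atomic_inc_not_zero provides.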

[1] http://open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2167.pdf

Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161215050110.3241-1-Jason@zx2c4.com
parent c02f39ac
@@ -133,6 +133,6 @@ static inline int kref_put_mutex(struct kref *kref,
  */
 static inline int __must_check kref_get_unless_zero(struct kref *kref)
 {
-	return atomic_add_unless(&kref->refcount, 1, 0);
+	return atomic_inc_not_zero(&kref->refcount);
 }
 #endif /* _KREF_H_ */