Commit ac605bee authored by Dmitry Vyukov, committed by Ingo Molnar

locking/atomic, asm-generic, x86: Add comments for atomic instrumentation

The comments are factored out from the code changes to make them
easier to read. Add them separately to explain some non-obvious
aspects.
Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: kasan-dev@googlegroups.com
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/cc595efc644bb905407012d82d3eb8bac3368e7a.1517246437.git.dvyukov@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent a35353bb
@@ -24,6 +24,10 @@
 */
static __always_inline int arch_atomic_read(const atomic_t *v)
{
	/*
	 * Note for KASAN: we deliberately don't use READ_ONCE_NOCHECK() here,
	 * it's a non-inlined function that increases binary size and stack usage.
	 */
	return READ_ONCE((v)->counter);
}
...
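For context, the instrumentation itself lives in the generic wrapper that forwards to arch_atomic_read(); a rough sketch of that wrapper (the exact form in asm-generic/atomic-instrumented.h may differ slightly) is:

static __always_inline int atomic_read(const atomic_t *v)
{
	/* explicit KASAN check of the atomic_t, done in the wrapper */
	kasan_check_read(v, sizeof(*v));
	return arch_atomic_read(v);
}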
/*
* This file provides wrappers with KASAN instrumentation for atomic operations.
* To use this functionality an arch's atomic.h file needs to define all
* atomic operations with arch_ prefix (e.g. arch_atomic_read()) and include
* this file at the end. This file provides atomic_read(), which forwards to
* arch_atomic_read() for the actual atomic operation.
* Note: if an arch atomic operation is implemented by means of other atomic
* operations (e.g. atomic_read()/atomic_cmpxchg() loop), then it needs to use
* arch_ variants (i.e. arch_atomic_read()/arch_atomic_cmpxchg()) to avoid
* double instrumentation.
*/
#ifndef _LINUX_ATOMIC_INSTRUMENTED_H
#define _LINUX_ATOMIC_INSTRUMENTED_H
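A sketch of the arch-side pattern described in the comment above (illustrative only, not the literal x86 code): the arch header defines every operation with the arch_ prefix and includes this file at the end.

/* in the arch's atomic.h */
static __always_inline int arch_atomic_read(const atomic_t *v)
{
	return READ_ONCE((v)->counter);
}
/* ... all other atomic operations, defined with the arch_ prefix ... */
#include <asm-generic/atomic-instrumented.h>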
@@ -442,6 +454,15 @@ cmpxchg64_local_size(volatile u64 *ptr, u64 old, u64 new)
	(u64)(new)));						\
})
/*
* Originally we had the following code here:
* __typeof__(p1) ____p1 = (p1);
* kasan_check_write(____p1, 2 * sizeof(*____p1));
* arch_cmpxchg_double(____p1, (p2), (o1), (o2), (n1), (n2));
* But it leads to compilation failures (see gcc issue 72873).
* So for now it's left non-instrumented.
* There are few callers of cmpxchg_double(), so it's not critical.
*/
#define cmpxchg_double(p1, p2, o1, o2, n1, n2)				\
({									\
	arch_cmpxchg_double((p1), (p2), (o1), (o2), (n1), (n2));	\
})
...
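For reference, the instrumented variant that the comment above says was dropped (because of GCC bug 72873) would have looked roughly like this:

#define cmpxchg_double(p1, p2, o1, o2, n1, n2)				\
({									\
	__typeof__(p1) ____p1 = (p1);					\
	kasan_check_write(____p1, 2 * sizeof(*____p1));			\
	arch_cmpxchg_double(____p1, (p2), (o1), (o2), (n1), (n2));	\
})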