Commit c9e2fbd9 authored by Alexander Chumachenko, committed by H. Peter Anvin

x86: Avoid 'constant_test_bit()' misoptimization due to cast to non-volatile

While debugging a bit_spin_lock() hang, it was tracked down to a gcc-4.4
misoptimization of the non-inlined constant_test_bit(): the
'const volatile unsigned long *addr' argument is cast to
'unsigned long *', discarding the volatile qualifier. The compiler is
then free to load the word only once, and the subsequent jump goes
unconditionally to pause (and not back to the test), leading to the
hang.
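
To make the difference concrete, here is a minimal userspace sketch
(hypothetical code, not the kernel's; the names test_bit_cast and
test_bit_volatile and the BITS_PER_LONG define are illustrative). The
first variant casts the qualifier away exactly as constant_test_bit()
did; the second keeps the volatile-qualified access:

    #include <stdio.h>

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /* Old form: the cast to 'unsigned long *' discards volatile, so the
     * compiler may assume the word is loop-invariant and load it once. */
    static int test_bit_cast(unsigned int nr, const volatile unsigned long *addr)
    {
            return ((1UL << (nr % BITS_PER_LONG)) &
                    (((unsigned long *)addr)[nr / BITS_PER_LONG])) != 0;
    }

    /* Fixed form: the volatile-qualified access forces a fresh load from
     * memory on every call, which a spin-wait loop relies on. */
    static int test_bit_volatile(unsigned int nr, const volatile unsigned long *addr)
    {
            return ((1UL << (nr % BITS_PER_LONG)) &
                    (addr[nr / BITS_PER_LONG])) != 0;
    }

    int main(void)
    {
            unsigned long word = 0;

            printf("before: cast=%d volatile=%d\n",
                   test_bit_cast(0, &word), test_bit_volatile(0, &word));
            word |= 1UL;
            printf("after:  cast=%d volatile=%d\n",
                   test_bit_cast(0, &word), test_bit_volatile(0, &word));
            return 0;
    }

Both variants compute the same value; the difference is only in what the
optimizer is allowed to assume when the call sits inside a wait loop.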

Compiling with gcc-4.3, or disabling CONFIG_OPTIMIZE_INLINING, yields an
inlined constant_test_bit() and a correct jump, thus working around the
kernel bug.

Architectures other than asm-x86 may implement this slightly differently;
2.6.29 mitigates the misoptimization by changing the function prototype
(commit c4295fbb), but fixing the issue itself is probably better.
Signed-off-by: Alexander Chumachenko <ledest@gmail.com>
Signed-off-by: Michael Shigorin <mike@osdn.org.ua>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
parent 7329cf02
arch/x86/include/asm/bitops.h
@@ -309,7 +309,7 @@ static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
 static __always_inline int constant_test_bit(unsigned int nr, const volatile unsigned long *addr)
 {
 	return ((1UL << (nr % BITS_PER_LONG)) &
-		(((unsigned long *)addr)[nr / BITS_PER_LONG])) != 0;
+		(addr[nr / BITS_PER_LONG])) != 0;
 }
 
 static inline int variable_test_bit(int nr, volatile const unsigned long *addr)
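
For context, a sketch of the failure mode (hypothetical code;
wait_for_bit_clear is an illustrative name, not a kernel function): a
bit_spin_lock()-style waiter spinning on the cast-away-volatile read,
where gcc-4.4 could hoist the single load out of the loop and leave an
unconditional jump back to pause:

    /* Spin until bit 'nr' of *addr clears.  With the volatile qualifier
     * cast away, gcc-4.4 could read the word once before the loop; the
     * body then degenerates to 'pause; jmp', and the waiter never
     * observes another CPU clearing the bit. */
    static void wait_for_bit_clear(unsigned int nr,
                                   const volatile unsigned long *addr)
    {
            while ((((unsigned long *)addr)[nr / (8 * sizeof(long))] >>
                    (nr % (8 * sizeof(long)))) & 1UL)
                    __asm__ __volatile__("pause");  /* x86 spin-wait hint */
    }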