    ARC: Make ARC bitops "safer" (add anti-optimization) · 26121b67
    Vineet Gupta authored
    commit 80f42084 upstream.
    
    The ARCompact/ARCv2 ISA specifies that any instruction which takes a
    bitpos/count operand (ASL, LSL, BSET, BCLR, BMSK, ...) only considers
    the lower 5 bits, i.e. the position is auto-clamped to 0-31.
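    As a C-level sketch of this hardware behaviour (the helper name below is
    made up for illustration; it is not kernel code):

```c
#include <stdint.h>

/* Illustrative model of the ARC auto-clamp: a 32-bit shift instruction
 * only looks at the low 5 bits of the count, so a "shift by 63" behaves
 * exactly like a "shift by 31".  Helper name is hypothetical. */
static uint32_t arc_shift_semantics(uint32_t x, unsigned int count)
{
        return x << (count & 0x1f);  /* clamp count to 0-31, as ASL/BSET do */
}
```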
    
    ARC Linux bitops exploited this fact by NOT explicitly masking out the
    upper bits of the @nr operand in general, saving a bunch of AND/BMSK
    instructions in the generated code around bitops.
    
    While this micro-optimization has worked well over the years, it is NOT
    safe, as shifting by a count greater than or equal to the operand width
    is "undefined behaviour" per the C standard.
    
    As it turns out, EZChip eventually ran into this in their massive
    multi-core SMP build with 64 CPUs. There was a test_bit() inside a loop
    from 63 down to 0, and gcc was weirdly optimizing away the first
    iteration (so it was technically adhering to the standard, by exploiting
    the undefined behaviour, rather than removing all of the iterations that
    were phony, i.e. (1 << [63..32])).
    
    | for i = 63 to 0
    |    X = ( 1 << i )
    |    if X == 0
    |       continue
    
    So fix the code to do explicit masking, at the expense of generating
    additional instructions. Fortunately, this can be mitigated to a large
    extent, as gcc has SHIFT_COUNT_TRUNCATED, which allows the combiner to
    fold the masking into the shift operation itself. It is currently not
    enabled in the ARC gcc backend, but could be after a bit of testing.
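    A minimal sketch of the fix, assuming a test_bit()-style helper over an
    array of 32-bit words (names are illustrative, not the kernel's actual
    implementation):

```c
#include <stdint.h>

/* Hypothetical sketch: explicitly mask @nr to its low 5 bits before
 * shifting, so the C-level shift count is always in 0-31 and therefore
 * well-defined, matching what the ARC hardware does anyway. */
static int test_bit_sketch(unsigned int nr, const uint32_t *addr)
{
        uint32_t mask = 1U << (nr & 0x1f);  /* safe: shift count < 32 */

        return (addr[nr / 32] & mask) != 0;
}
```

    With SHIFT_COUNT_TRUNCATED enabled, the combiner could recognize that
    the explicit "& 0x1f" is redundant on ARC and fold it away again.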
    
    Fixes STAR 9000866918 ("unsafe "undefined behavior" code in kernel")
    Reported-by: Noam Camus <noamc@ezchip.com>
    Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>