Commit 2fd8eb4a authored by Yandong Zhao, committed by Will Deacon

arm64: neon: Fix function may_use_simd() return error status

It does not matter if the caller of may_use_simd() migrates to
another cpu after the call, but it is still important that the
kernel_neon_busy percpu instance that is read matches the cpu the
task is running on at the time of the read.

This means that raw_cpu_read() is not sufficient.  kernel_neon_busy
may appear true if the caller migrates during the execution of
raw_cpu_read() and the next task to be scheduled in on the initial
cpu calls kernel_neon_begin().

This patch replaces raw_cpu_read() with this_cpu_read() to protect
against this race.
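
As an illustration, here is a minimal, hypothetical sketch of the pattern the message describes; it is not the actual arch/arm64 sources, and the *_sketch names are made up for this example. The flag is only ever set with preemption disabled, so a reader that is itself atomic with respect to preemption can never see a stale value from another CPU:

#include <linux/irqflags.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/types.h>

DEFINE_PER_CPU(bool, kernel_neon_busy);

static void kernel_neon_begin_sketch(void)
{
	preempt_disable();                         /* pin the task to this CPU...        */
	__this_cpu_write(kernel_neon_busy, true);  /* ...for as long as the flag is set  */
	/* ... save the task's FP/SIMD state; NEON is now usable in kernel mode ... */
}

static void kernel_neon_end_sketch(void)
{
	/* ... restore the task's FP/SIMD state ... */
	__this_cpu_write(kernel_neon_busy, false);
	preempt_enable();                          /* flag is clear before migration is possible */
}

static bool may_use_simd_sketch(void)
{
	/*
	 * this_cpu_read() is atomic w.r.t. preemption, so the value read
	 * always belongs to the CPU the task is running on at the time of
	 * the read, which is exactly the guarantee the check needs.
	 */
	return !in_irq() && !irqs_disabled() && !in_nmi() &&
	       !this_cpu_read(kernel_neon_busy);
}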

Cc: <stable@vger.kernel.org>
Fixes: cb84d11e ("arm64: neon: Remove support for nested or hardirq kernel-mode NEON")
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yandong Zhao <yandong77520@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
parent 96f95a17
@@ -29,20 +29,15 @@ DECLARE_PER_CPU(bool, kernel_neon_busy);
 static __must_check inline bool may_use_simd(void)
 {
 	/*
-	 * The raw_cpu_read() is racy if called with preemption enabled.
-	 * This is not a bug: kernel_neon_busy is only set when
-	 * preemption is disabled, so we cannot migrate to another CPU
-	 * while it is set, nor can we migrate to a CPU where it is set.
-	 * So, if we find it clear on some CPU then we're guaranteed to
-	 * find it clear on any CPU we could migrate to.
-	 *
-	 * If we are in between kernel_neon_begin()...kernel_neon_end(),
-	 * the flag will be set, but preemption is also disabled, so we
-	 * can't migrate to another CPU and spuriously see it become
-	 * false.
+	 * kernel_neon_busy is only set while preemption is disabled,
+	 * and is clear whenever preemption is enabled. Since
+	 * this_cpu_read() is atomic w.r.t. preemption, kernel_neon_busy
+	 * cannot change under our feet -- if it's set we cannot be
+	 * migrated, and if it's clear we cannot be migrated to a CPU
+	 * where it is set.
 	 */
 	return !in_irq() && !irqs_disabled() && !in_nmi() &&
-	       !raw_cpu_read(kernel_neon_busy);
+	       !this_cpu_read(kernel_neon_busy);
 }
 #else /* ! CONFIG_KERNEL_MODE_NEON */
...
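
For reference, the semantic difference between the two accessors is roughly as follows. This is a simplified sketch of the generic fallback behaviour (see include/asm-generic/percpu.h, which uses the _notrace preemption helpers); an architecture may instead implement this_cpu_read() as a single preemption-safe access, but the contract is the same. The variable val is purely illustrative:

	bool val;

	/*
	 * raw_cpu_read(): plain access with no protection. The caller is
	 * expected to have preemption disabled already; if it does not,
	 * the task can migrate during the access and end up with the
	 * value belonging to the CPU it has just left.
	 */
	val = raw_cpu_read(kernel_neon_busy);

	/*
	 * this_cpu_read(): the generic fallback brackets the access with
	 * preempt_disable()/preempt_enable(), so the value read always
	 * belongs to the CPU the task is running on at that instant.
	 */
	preempt_disable();
	val = raw_cpu_read(kernel_neon_busy);
	preempt_enable();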