Commit fdb67b2a authored by Marc Zyngier, committed by Sasha Levin

arm: KVM: Allow unaligned accesses at HYP

[ Upstream commit 33b5c388 ]

We currently have the HSCTLR.A bit set, trapping unaligned accesses
at HYP, but we're not really prepared to handle the resulting
alignment faults.

Since the rest of the kernel runs quite happily with alignment
checking disabled, let's follow its example and set HSCTLR.A to zero.
Modern CPUs don't really care.

Cc: stable@vger.kernel.org
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
parent 1e8dabb6
@@ -110,7 +110,6 @@ __do_hyp_init:
 	@ - Write permission implies XN: disabled
 	@ - Instruction cache: enabled
 	@ - Data/Unified cache: enabled
-	@ - Memory alignment checks: enabled
 	@ - MMU: enabled (this code must be run from an identity mapping)
 	mrc	p15, 4, r0, c1, c0, 0	@ HSCR
 	ldr	r2, =HSCTLR_MASK
@@ -118,8 +117,8 @@ __do_hyp_init:
 	mrc	p15, 0, r1, c1, c0, 0	@ SCTLR
 	ldr	r2, =(HSCTLR_EE | HSCTLR_FI | HSCTLR_I | HSCTLR_C)
 	and	r1, r1, r2
-ARM(	ldr	r2, =(HSCTLR_M | HSCTLR_A)			)
-THUMB(	ldr	r2, =(HSCTLR_M | HSCTLR_A | HSCTLR_TE)		)
+ARM(	ldr	r2, =(HSCTLR_M)					)
+THUMB(	ldr	r2, =(HSCTLR_M | HSCTLR_TE)			)
 	orr	r1, r1, r2
 	orr	r0, r0, r1
 	isb
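For readers less fluent in cp15 assembly, here is a minimal C sketch (not part of the patch) of the register arithmetic the hunks above perform. Bit positions follow the architectural ARMv7 SCTLR/HSCTLR layout (M=bit 0, A=bit 1, C=bit 2, I=bit 12, FI=bit 21, EE=bit 25, TE=bit 30); the constants mirror the kernel's HSCTLR_* macro names but are defined locally as assumptions, and the initial BIC with HSCTLR_MASK is omitted.

/*
 * Illustrative sketch only -- not kernel code.  Bit positions follow the
 * ARMv7 SCTLR/HSCTLR layout; the constants mirror the kernel's HSCTLR_*
 * macro names but are defined here as assumptions for the example.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HSCTLR_M   (UINT32_C(1) << 0)   /* MMU enable */
#define HSCTLR_A   (UINT32_C(1) << 1)   /* alignment checking (left clear by this patch) */
#define HSCTLR_C   (UINT32_C(1) << 2)   /* data/unified cache enable */
#define HSCTLR_I   (UINT32_C(1) << 12)  /* instruction cache enable */
#define HSCTLR_FI  (UINT32_C(1) << 21)  /* fast interrupt configuration */
#define HSCTLR_EE  (UINT32_C(1) << 25)  /* exception endianness */
#define HSCTLR_TE  (UINT32_C(1) << 30)  /* take exceptions in Thumb state */

/*
 * Mirror of the second hunk above: inherit EE/FI/I/C from the kernel's
 * SCTLR, then enable the MMU (plus Thumb exceptions for a Thumb-2 kernel).
 * HSCTLR_A is intentionally no longer ORed in, so unaligned accesses at
 * HYP are no longer trapped.  (The masking of the incoming HSCTLR value
 * with HSCTLR_MASK is omitted for brevity.)
 */
static uint32_t hsctlr_value(uint32_t hsctlr, uint32_t sctlr, bool thumb2)
{
	uint32_t inherited = sctlr & (HSCTLR_EE | HSCTLR_FI | HSCTLR_I | HSCTLR_C);
	uint32_t enables   = HSCTLR_M | (thumb2 ? HSCTLR_TE : 0);

	return hsctlr | inherited | enables;
}

int main(void)
{
	uint32_t sctlr  = HSCTLR_I | HSCTLR_C;        /* example: caches on, little-endian */
	uint32_t hsctlr = hsctlr_value(0, sctlr, false);

	printf("HSCTLR = 0x%08x, A bit %s\n", hsctlr,
	       (hsctlr & HSCTLR_A) ? "set" : "clear");
	return 0;
}

With HSCTLR_A absent from the enable set, the resulting HSCTLR value leaves alignment checking off at HYP, matching the behaviour of the rest of the kernel.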