Commit fa8ff601 authored by Paul Burton, committed by Ralf Baechle

MIPS: Fix MSA ld unaligned failure cases

Copying the content of an MSA vector from user memory may involve TLB
faults & mapping in pages. This will fail when preemption is disabled
due to an inability to acquire mmap_sem from do_page_fault, which meant
such vector loads to unmapped pages would always fail to be emulated.
Fix this by disabling preemption later, only around the update of the
vector register state.

This change does, however, introduce a race between performing the load
into thread context & the thread being preempted, saving its current live
context & clobbering the loaded value. This should be a rare
occurrence, so optimise for the fast path by simply repeating the load if
we are preempted.

Additionally, if the copy failed then the failure path was taken with
preemption left disabled, which typically led the kernel into further
issues around sleeping whilst atomic. The change to where preemption is
disabled avoids this issue.
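
For reference, the reworked emulation path boils down to the retry loop
below. This is a condensed sketch of the second hunk in the diff that
follows, not a verbatim copy of the kernel source; fpr, addr, wd & df are
the emulation state already set up earlier in emulate_load_store_insn().

	do {
		/*
		 * If the task owns live MSA context (TIF_USEDMSA), being
		 * preempted during the copy would save that live context
		 * over the value we load into the thread struct, so assume
		 * the worst until proven otherwise.
		 */
		preempted = test_thread_flag(TIF_USEDMSA);

		/* Preemption is still enabled, so do_page_fault() may map the page in. */
		res = __copy_from_user_inatomic(fpr, addr, sizeof(*fpr));
		if (res)
			goto fault;	/* failure path now runs with preemption enabled */

		/* Disable preemption only around the hardware register update. */
		preempt_disable();
		if (test_thread_flag(TIF_USEDMSA)) {
			/* We still own the hardware: the loaded value is now live. */
			write_msa_wr(wd, fpr, df);
			preempted = 0;
		}
		preempt_enable();
	} while (preempted);	/* rare slow path: redo the load if the race was lost */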

Fixes: e4aa1f15 ("MIPS: MSA unaligned memory access support")
Reported-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Cc: Maciej W. Rozycki <macro@linux-mips.org>
Cc: James Cowgill <James.Cowgill@imgtec.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: stable <stable@vger.kernel.org> # v4.3
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/12345/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
parent 19fb5818
@@ -885,7 +885,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
 {
 	union mips_instruction insn;
 	unsigned long value;
-	unsigned int res;
+	unsigned int res, preempted;
 	unsigned long origpc;
 	unsigned long orig31;
 	void __user *fault_addr = NULL;
@@ -1226,27 +1226,36 @@ static void emulate_load_store_insn(struct pt_regs *regs,
 			if (!access_ok(VERIFY_READ, addr, sizeof(*fpr)))
 				goto sigbus;
 
-			/*
-			 * Disable preemption to avoid a race between copying
-			 * state from userland, migrating to another CPU and
-			 * updating the hardware vector register below.
-			 */
-			preempt_disable();
-
-			res = __copy_from_user_inatomic(fpr, addr,
-							sizeof(*fpr));
-			if (res)
-				goto fault;
-
-			/*
-			 * Update the hardware register if it is in use by the
-			 * task in this quantum, in order to avoid having to
-			 * save & restore the whole vector context.
-			 */
-			if (test_thread_flag(TIF_USEDMSA))
-				write_msa_wr(wd, fpr, df);
-
-			preempt_enable();
+			do {
+				/*
+				 * If we have live MSA context keep track of
+				 * whether we get preempted in order to avoid
+				 * the register context we load being clobbered
+				 * by the live context as it's saved during
+				 * preemption. If we don't have live context
+				 * then it can't be saved to clobber the value
+				 * we load.
+				 */
+				preempted = test_thread_flag(TIF_USEDMSA);
+
+				res = __copy_from_user_inatomic(fpr, addr,
+								sizeof(*fpr));
+				if (res)
+					goto fault;
+
+				/*
+				 * Update the hardware register if it is in use
+				 * by the task in this quantum, in order to
+				 * avoid having to save & restore the whole
+				 * vector context.
+				 */
+				preempt_disable();
+				if (test_thread_flag(TIF_USEDMSA)) {
+					write_msa_wr(wd, fpr, df);
+					preempted = 0;
+				}
+				preempt_enable();
+			} while (preempted);
 			break;
 
 		case msa_st_op: