Commit 3040ecd4 authored by James Hogan, committed by Greg Kroah-Hartman

metag/usercopy: Fix src fixup in from user rapf loops

commit 2c0b1df8 upstream.

The fixup code to rewind the source pointer in
__asm_copy_from_user_{32,64}bit_rapf_loop() always rewound the source by
a single unit (4 or 8 bytes), however this is insufficient if the fault
didn't occur on the first load in the loop, as the source pointer will
have been incremented but nothing will have been stored until all 4
register [pairs] are loaded.

Read the LSM_STEP field of TXSTATUS (which is already loaded into a
register), a bit like the copy_to_user versions, to determine how many
iterations of MGET[DL] have taken place, all of which need rewinding.

Fixes: 373cd784 ("metag: Memory handling")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-metag@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent beb0ad97
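
To make the fixup arithmetic concrete, here is a minimal C sketch (illustrative only, not part of the commit) of the rewind amount the new fixup derives from TXSTATUS. The function and macro names are hypothetical; the real fixup performs the same computation in Meta assembly on the TXSTATUS value already held in D0Ar2, folding the shift and the multiply by the unit size into a single LSR.

#include <stdint.h>

/* Hypothetical names: LSM_STEP occupies bits 10:8 of TXSTATUS. */
#define LSM_STEP_SHIFT	8
#define LSM_STEP_MASK	0x7

/*
 * Bytes to rewind the source pointer by after a fault in a rapf copy
 * loop.  'unit' is 8 for the 64-bit loop and 4 for the 32-bit loop.
 */
static unsigned int rapf_rewind_bytes(uint32_t txstatus, unsigned int unit)
{
	unsigned int step = (txstatus >> LSM_STEP_SHIFT) & LSM_STEP_MASK;

	/*
	 * LSM_STEP reads 0 when the fault hits the 4th and last block
	 * operation, so the whole block must be rewound; the ADDZ in
	 * the assembly supplies this full-block value.
	 */
	if (step == 0)
		step = 4;

	return step * unit;	/* bytes to subtract from the src pointer */
}

The assembly below reaches the same result with LSR #5 (64-bit) or #6 (32-bit), which shifts LSM_STEP down and scales it by the unit size in one step, a mask of 0x38 or 0x1c, and an ADDZ of 32 or 16 when the masked value is zero.
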
@@ -687,29 +687,49 @@ EXPORT_SYMBOL(__copy_user);
 	 *
 	 * Rationale:
 	 *	A fault occurs while reading from user buffer, which is the
-	 *	source. Since the fault is at a single address, we only
-	 *	need to rewind by 8 bytes.
+	 *	source.
 	 *	Since we don't write to kernel buffer until we read first,
 	 *	the kernel buffer is at the right state and needn't be
-	 *	corrected.
+	 *	corrected, but the source must be rewound to the beginning of
+	 *	the block, which is LSM_STEP*8 bytes.
+	 *	LSM_STEP is bits 10:8 in TXSTATUS which is already read
+	 *	and stored in D0Ar2
+	 *
+	 *	NOTE: If a fault occurs at the last operation in M{G,S}ETL
+	 *	LSM_STEP will be 0. ie: we do 4 writes in our case, if
+	 *	a fault happens at the 4th write, LSM_STEP will be 0
+	 *	instead of 4. The code copes with that.
 	 */
 #define __asm_copy_from_user_64bit_rapf_loop(to, from, ret, n, id)	\
 	__asm_copy_user_64bit_rapf_loop(to, from, ret, n, id,		\
-		"SUB	%1, %1, #8\n")
+		"LSR	D0Ar2, D0Ar2, #5\n"				\
+		"ANDS	D0Ar2, D0Ar2, #0x38\n"				\
+		"ADDZ	D0Ar2, D0Ar2, #32\n"				\
+		"SUB	%1, %1, D0Ar2\n")
 
 	/* rewind 'from' pointer when a fault occurs
 	 *
 	 * Rationale:
 	 *	A fault occurs while reading from user buffer, which is the
-	 *	source. Since the fault is at a single address, we only
-	 *	need to rewind by 4 bytes.
+	 *	source.
 	 *	Since we don't write to kernel buffer until we read first,
 	 *	the kernel buffer is at the right state and needn't be
-	 *	corrected.
+	 *	corrected, but the source must be rewound to the beginning of
+	 *	the block, which is LSM_STEP*4 bytes.
+	 *	LSM_STEP is bits 10:8 in TXSTATUS which is already read
+	 *	and stored in D0Ar2
+	 *
+	 *	NOTE: If a fault occurs at the last operation in M{G,S}ETL
+	 *	LSM_STEP will be 0. ie: we do 4 writes in our case, if
+	 *	a fault happens at the 4th write, LSM_STEP will be 0
+	 *	instead of 4. The code copes with that.
 	 */
 #define __asm_copy_from_user_32bit_rapf_loop(to, from, ret, n, id)	\
 	__asm_copy_user_32bit_rapf_loop(to, from, ret, n, id,		\
-		"SUB	%1, %1, #4\n")
+		"LSR	D0Ar2, D0Ar2, #6\n"				\
+		"ANDS	D0Ar2, D0Ar2, #0x1c\n"				\
+		"ADDZ	D0Ar2, D0Ar2, #16\n"				\
+		"SUB	%1, %1, D0Ar2\n")
 
 	/*
...
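
As a quick sanity check of the new fixup sequences: in the 64-bit case, LSM_STEP == 3 gives (TXSTATUS >> 5) & 0x38 == 0x18, i.e. 24 bytes rewound for the three block loads already issued, while LSM_STEP == 0 (fault on the fourth and last load) makes ANDS set the zero flag so ADDZ supplies the full 32 bytes. The old fixup subtracted a fixed 8 (or 4) bytes regardless, leaving the source pointer up to 24 (or 12) bytes short of a full rewind whenever the fault hit a later load.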