Commit 4f18d869 authored by Heiko Carstens, committed by Vasily Gorbik

s390: fix stfle zero padding

The stfle inline assembly returns the number of double words written
(condition code 0), or the number of double words it would have
written had the memory array it got as a parameter been large enough
(condition code 3).

The current stfle implementation assumes that the array is always
large enough and, with a subsequent memset call, clears those parts of
the array that have not been written to.

If, however, the array is not large enough, memset gets a negative
length parameter, which is interpreted as a huge unsigned value;
memset then clears memory until it hits an exception and the kernel
crashes.
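For illustration, here is a minimal user-space sketch of the old
padding arithmetic in plain C. The variable names follow the kernel
code, but the concrete values are only assumed for the example (they
match the four double word array and fifth double word scenario
described further below); this is not the kernel code itself:

#include <stdio.h>

int main(void)
{
        int size = 4;           /* array holds 4 double words = 32 bytes      */
        unsigned long reg0 = 4; /* stfle cc 3: 5 double words would be needed */
        unsigned long nr = (reg0 + 1) * 8;  /* 40 bytes "stored" by stfle     */

        /*
         * size * 8 - nr is 32 - 40; the subtraction is done in unsigned
         * long, so it wraps to ULONG_MAX - 7, the length the subsequent
         * memset would be called with.
         */
        printf("memset length: %lu\n", size * 8 - nr);
        return 0;
}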

To fix this, simply limit the maximum length. Also move the inline
assembly to a separate function to avoid clobbering of register 0,
which might otherwise happen because of the added min_t invocation
together with code instrumentation.
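As a rough sketch of the clamping idea in plain C (an equivalent of
the min_t() call this patch adds; the function name and example values
are made up for illustration):

static unsigned long stfle_pad_bytes(unsigned long reg0, int size)
{
        unsigned long bytes = (reg0 + 1) * 8;           /* e.g. (4 + 1) * 8 = 40 */
        unsigned long limit = (unsigned long) size * 8; /* e.g. 4 * 8 = 32       */

        /* Never report more bytes than the array can hold. */
        return bytes < limit ? bytes : limit;           /* min(40, 32) == 32     */
}

With nr clamped like this, size * 8 - nr can no longer go negative, so
the trailing memset clears at most the untouched tail of the array
and, at worst, zero bytes.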

The bug was introduced with commit 14375bc4 ("[S390] cleanup
facility list handling") but was rather harmless, since it would only
write to a fairly large array. It became a potential problem with
commit 3ab121ab ("[S390] kernel: Add z/VM LGR detection"): since
then it writes to an array with only four double words, while some
machines already deliver three double words. As soon as machines have
a facility bit within the fifth double word, a crash on IPL would
happen.
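For reference, a small sketch of why the fifth double word matters:
facility bits are packed 64 per double word, so the double word index
for a given facility number is simply the number divided by 64 (the
helper name and the example facility number are made up):

static int facility_double_word(int nr)
{
        /* e.g. facility 260 -> 260 / 64 = 4, i.e. the fifth double word */
        return nr / 64;
}

Any installed facility numbered 256 or higher therefore makes stfle
report at least five double words, which is more than the four double
word array used since the z/VM LGR change.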

Fixes: 14375bc4 ("[S390] cleanup facility list handling")
Cc: <stable@vger.kernel.org> # v2.6.37+
Reviewed-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
parent 191fa92b
@@ -59,6 +59,18 @@ static inline int test_facility(unsigned long nr)
 	return __test_facility(nr, &S390_lowcore.stfle_fac_list);
 }
 
+static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size)
+{
+	register unsigned long reg0 asm("0") = size - 1;
+
+	asm volatile(
+		".insn s,0xb2b00000,0(%1)" /* stfle */
+		: "+d" (reg0)
+		: "a" (stfle_fac_list)
+		: "memory", "cc");
+	return reg0;
+}
+
 /**
  * stfle - Store facility list extended
  * @stfle_fac_list: array where facility list can be stored
@@ -75,13 +87,8 @@ static inline void __stfle(u64 *stfle_fac_list, int size)
 	memcpy(stfle_fac_list, &S390_lowcore.stfl_fac_list, 4);
 	if (S390_lowcore.stfl_fac_list & 0x01000000) {
 		/* More facility bits available with stfle */
-		register unsigned long reg0 asm("0") = size - 1;
-
-		asm volatile(".insn s,0xb2b00000,0(%1)" /* stfle */
-			     : "+d" (reg0)
-			     : "a" (stfle_fac_list)
-			     : "memory", "cc");
-		nr = (reg0 + 1) * 8; /* # bytes stored by stfle */
+		nr = __stfle_asm(stfle_fac_list, size);
+		nr = min_t(unsigned long, (nr + 1) * 8, size * 8);
 	}
 	memset((char *) stfle_fac_list + nr, 0, size * 8 - nr);
 }