Commit e49d3848 authored by Maciej W. Rozycki, committed by Ralf Baechle

MIPS: MSA: Fix a link error on `_init_msa_upper' with older GCC

Fix a build regression from commit c9017757 ("MIPS: init upper 64b
of vector registers when MSA is first used"):

arch/mips/built-in.o: In function `enable_restore_fp_context':
traps.c:(.text+0xbb90): undefined reference to `_init_msa_upper'
traps.c:(.text+0xbb90): relocation truncated to fit: R_MIPS_26 against `_init_msa_upper'
traps.c:(.text+0xbef0): undefined reference to `_init_msa_upper'
traps.c:(.text+0xbef0): relocation truncated to fit: R_MIPS_26 against `_init_msa_upper'

in !CONFIG_CPU_HAS_MSA configurations with older GCC versions, which are
unable to figure out that the calls to `_init_msa_upper' are indeed dead.
Of the many ways to tackle this failure, choose the approach we have
already taken in `thread_msa_context_live'.

[ralf@linux-mips.org: Drop patch segment to junk file.]
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: stable@vger.kernel.org # v3.16+
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13271/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
parent 0868971a
@@ -147,6 +147,19 @@ static inline void restore_msa(struct task_struct *t)
 		_restore_msa(t);
 }
 
+static inline void init_msa_upper(void)
+{
+	/*
+	 * Check cpu_has_msa only if it's a constant.  This will allow the
+	 * compiler to optimise out code for CPUs without MSA without adding
+	 * an extra redundant check for CPUs with MSA.
+	 */
+	if (__builtin_constant_p(cpu_has_msa) && !cpu_has_msa)
+		return;
+	_init_msa_upper();
+}
+
 #ifdef TOOLCHAIN_SUPPORTS_MSA
 #define __BUILD_MSA_CTL_REG(name, cs)
...
@@ -1246,7 +1246,7 @@ static int enable_restore_fp_context(int msa)
 		err = init_fpu();
 		if (msa && !err) {
 			enable_msa();
-			_init_msa_upper();
+			init_msa_upper();
 			set_thread_flag(TIF_USEDMSA);
 			set_thread_flag(TIF_MSA_CTX_LIVE);
 		}
@@ -1309,7 +1309,7 @@ static int enable_restore_fp_context(int msa)
 	 */
 	prior_msa = test_and_set_thread_flag(TIF_MSA_CTX_LIVE);
 	if (!prior_msa && was_fpu_owner) {
-		_init_msa_upper();
+		init_msa_upper();
 		goto out;
 	}
@@ -1326,7 +1326,7 @@ static int enable_restore_fp_context(int msa)
 		 * of each vector register such that it cannot see data left
 		 * behind by another task.
 		 */
-		_init_msa_upper();
+		init_msa_upper();
 	} else {
 		/* We need to restore the vector context. */
 		restore_msa(current);
...