Commit 5f40b909 authored by Will Deacon, committed by Russell King

ARM: 7559/1: smp: switch away from the idmap before updating init_mm.mm_count

When booting a secondary CPU, the primary CPU hands over two sets of page
tables via the secondary_data struct (sketched after the list below):

	(1) swapper_pg_dir: a normal, cacheable, shared (if SMP) mapping
	    of the kernel image (i.e. the tables used by init_mm).

	(2) idmap_pgd: an uncached mapping of the .idmap.text ELF
	    section.
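
For reference, secondary_data at the time looked roughly like the sketch
below (paraphrased from arch/arm/include/asm/smp.h of that era; the
comments are mine, not the header's):

	/*
	 * Handover block filled in by the primary CPU before it wakes a
	 * secondary; the boot assembly reads it with the MMU still off.
	 */
	struct secondary_data {
		unsigned long pgdir;		/* phys addr of idmap_pgd */
		unsigned long swapper_pg_dir;	/* phys addr of swapper_pg_dir */
		void *stack;			/* initial stack for the secondary */
	};
	extern struct secondary_data secondary_data;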

The idmap is generally used when enabling and disabling the MMU, which
includes early CPU boot. In this case, the secondary CPU switches to
swapper as soon as it enters C code:

	struct mm_struct *mm = &init_mm;
	unsigned int cpu = smp_processor_id();

	/*
	 * All kernel threads share the same mm context; grab a
	 * reference and switch to it.
	 */
	atomic_inc(&mm->mm_count);
	current->active_mm = mm;
	cpumask_set_cpu(cpu, mm_cpumask(mm));
	cpu_switch_mm(mm->pgd, mm);

This causes a problem on ARMv7, where the identity mapping is treated as
strongly-ordered memory, leading to architecturally UNPREDICTABLE
behaviour for exclusive accesses such as those used by atomic_inc().
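
For context, atomic_inc(v) is atomic_add(1, v), which on ARMv6 and later
spins on an exclusive load/store pair, roughly as follows (paraphrasing
arch/arm/include/asm/atomic.h of that era):

	static inline void atomic_add(int i, atomic_t *v)
	{
		unsigned long tmp;
		int result;

		__asm__ __volatile__("@ atomic_add\n"
	"1:	ldrex	%0, [%3]\n"	/* exclusive load of v->counter */
	"	add	%0, %0, %4\n"
	"	strex	%1, %0, [%3]\n"	/* exclusive store, %1 = 0 on success */
	"	teq	%1, #0\n"
	"	bne	1b"		/* retry until the store-exclusive succeeds */
		: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
		: "r" (&v->counter), "Ir" (i)
		: "cc");
	}

On strongly-ordered memory the architecture gives no guarantee that the
strex can ever succeed, so this loop may fault or never make progress.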

This patch re-orders the secondary_start_kernel function so that we
switch to swapper before performing any exclusive accesses.

Cc: <stable@vger.kernel.org>
Cc: David McKay <david.mckay@st.com>
Reported-by: Gilles Chanteperdrix <gilles.chanteperdrix@xenomai.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
parent 3581fe0e
arch/arm/kernel/smp.c
@@ -294,18 +294,24 @@ static void percpu_timer_setup(void);
 asmlinkage void __cpuinit secondary_start_kernel(void)
 {
 	struct mm_struct *mm = &init_mm;
-	unsigned int cpu = smp_processor_id();
+	unsigned int cpu;
+
+	/*
+	 * The identity mapping is uncached (strongly ordered), so
+	 * switch away from it before attempting any exclusive accesses.
+	 */
+	cpu_switch_mm(mm->pgd, mm);
+	enter_lazy_tlb(mm, current);
+	local_flush_tlb_all();
 
 	/*
 	 * All kernel threads share the same mm context; grab a
 	 * reference and switch to it.
 	 */
+	cpu = smp_processor_id();
 	atomic_inc(&mm->mm_count);
 	current->active_mm = mm;
 	cpumask_set_cpu(cpu, mm_cpumask(mm));
-	cpu_switch_mm(mm->pgd, mm);
-	enter_lazy_tlb(mm, current);
-	local_flush_tlb_all();
 
 	printk("CPU%u: Booted secondary processor\n", cpu);
...