Commit 421f4e4a authored by Andi Kleen, committed by Linus Torvalds

[PATCH] x86_64: fix flush race on context switch

Fix a long-standing race in x86-64 SMP TLB handling.  When an mm is freed and
another thread exits to a lazy-TLB thread (such as idle), the freed user page
tables would still be kept loaded in the idle thread.  When an interrupt then
does a prefetch on NULL, the CPU would try to follow it and read random data.

This could lead to machine checks on Opterons in some cases.

Credit goes to some unnamed debugging wizards at AMD who described the
problem.  All blame to me.  I did the fix based on their description.
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 25b5e19d
@@ -51,9 +51,10 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 			out_of_line_bug();
 		if(!test_and_set_bit(cpu, &next->cpu_vm_mask)) {
 			/* We were in lazy tlb mode and leave_mm disabled
-			 * tlb flush IPI delivery. We must flush our tlb.
+			 * tlb flush IPI delivery. We must reload CR3
+			 * to make sure to use no freed page tables.
 			 */
-			local_flush_tlb();
+			asm volatile("movq %0,%%cr3" :: "r" (__pa(next->pgd)) : "memory");
 			load_LDT_nolock(&next->context, cpu);
 		}
 	}