Commit a6fca40f authored by Suresh Siddha, committed by H. Peter Anvin

x86, tlb: Switch cr3 in leave_mm() only when needed

Currently leave_mm() unconditionally switches cr3 to swapper_pg_dir.
But there is no need to change cr3 if we have already left that mm.

intel_idle(), for example, calls leave_mm() on every deep C-state entry,
where the CPU flushes the TLB for us. Similarly, flush_tlb_all() also calls
leave_mm() whenever the TLB is in the LAZY state. Both of these paths are
improved by this change.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Link: http://lkml.kernel.org/r/1332460885.16101.147.camel@sbsiddha-desk.sc.intel.com
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
parent 722bc6b1
@@ -61,11 +61,13 @@ static DEFINE_PER_CPU_READ_MOSTLY(int, tlb_vector_offset);
  */
 void leave_mm(int cpu)
 {
+	struct mm_struct *active_mm = percpu_read(cpu_tlbstate.active_mm);
 	if (percpu_read(cpu_tlbstate.state) == TLBSTATE_OK)
 		BUG();
-	cpumask_clear_cpu(cpu,
-			  mm_cpumask(percpu_read(cpu_tlbstate.active_mm)));
-	load_cr3(swapper_pg_dir);
+	if (cpumask_test_cpu(cpu, mm_cpumask(active_mm))) {
+		cpumask_clear_cpu(cpu, mm_cpumask(active_mm));
+		load_cr3(swapper_pg_dir);
+	}
 }
 EXPORT_SYMBOL_GPL(leave_mm);
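
For context, below is a small user-space sketch of the behavioural difference this patch makes. It is not kernel code: the per-CPU TLB state, the mm cpumask, and load_cr3() are replaced with mock stand-ins (mock_leave_mm(), mock_load_cr3(), a plain bitmask), purely to illustrate that the new logic writes cr3 only while the CPU is still set in the mm's cpumask, so a repeated leave_mm() on the same CPU becomes a no-op.

/*
 * User-space sketch (not kernel code) modelling the before/after behaviour
 * of leave_mm().  All names below are mock stand-ins for the real kernel
 * primitives.
 */
#include <stdio.h>

#define TLBSTATE_OK   1
#define TLBSTATE_LAZY 2

struct mock_mm {
	unsigned long cpumask;          /* bit N set => CPU N still uses this mm */
};

static struct mock_mm init_mm_mock;                       /* stands in for swapper_pg_dir */
static struct mock_mm user_mm_mock = { .cpumask = 0x1 };  /* CPU 0 is active in this mm */

static int tlb_state = TLBSTATE_LAZY;   /* mock of cpu_tlbstate.state */
static struct mock_mm *active_mm = &user_mm_mock;
static int cr3_loads;                   /* counts simulated cr3 writes */

static void mock_load_cr3(struct mock_mm *mm)
{
	(void)mm;
	cr3_loads++;                    /* a real load_cr3() also flushes the TLB */
}

/* New-style leave_mm(): switch cr3 only if this CPU has not left the mm yet. */
static void mock_leave_mm(int cpu)
{
	if (tlb_state == TLBSTATE_OK)
		return;                 /* the kernel BUG()s here instead */
	if (active_mm->cpumask & (1UL << cpu)) {
		active_mm->cpumask &= ~(1UL << cpu);
		mock_load_cr3(&init_mm_mock);
	}
}

int main(void)
{
	mock_leave_mm(0);               /* first call: clears the bit, loads cr3 */
	mock_leave_mm(0);               /* repeat call: already left, no cr3 write */
	printf("cr3 loads: %d (the old unconditional code would have done 2)\n",
	       cr3_loads);
	return 0;
}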