Commit 2ed53c0d authored by Lan Tianyu, committed by Ingo Molnar

x86/smpboot: Speed up suspend/resume by avoiding 100ms sleep for CPU offline during S3

With certain kernel configurations, offlining a CPU consumes more
than 100ms during S3.

It's a timing-related issue: native_cpu_die() would occasionally fall
into a 100ms sleep when the idle loop on the dying CPU was too slow
to mark the CPU state as CPU_DEAD.

What native_cpu_die() does is poll the CPU state and, if the state
has not yet been set to CPU_DEAD, sleep for 100ms before checking
again. The 100ms sleep doesn't make sense and is purely historic.
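
Concretely, the polling loop being removed (reproduced from the diff
below) looked like this; whenever the dying CPU had not yet acked,
the waiter paid for a full 100ms sleep before rechecking:

	unsigned int i;

	for (i = 0; i < 10; i++) {
		/* They ack this in play_dead by setting CPU_DEAD */
		if (per_cpu(cpu_state, cpu) == CPU_DEAD) {
			if (system_state == SYSTEM_RUNNING)
				pr_info("CPU %u is now offline\n", cpu);
			return;
		}
		msleep(100);	/* fixed 100ms wait per iteration */
	}
	pr_err("CPU %u didn't die...\n", cpu);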

To avoid this long sleep, the patch adds a 'struct completion' for
each CPU, waits on that completion in native_cpu_die(), and completes
it as soon as the dying CPU marks its state CPU_DEAD.
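
In outline, the new handshake (a condensed sketch of the three hooks
changed in the diff below) is:

	static DEFINE_PER_CPU(struct completion, die_complete);

	/* native_cpu_disable(), on the CPU going down: arm the
	 * completion before the CPU starts dying. */
	init_completion(&per_cpu(die_complete, smp_processor_id()));

	/* play_dead_common(), on the dying CPU: publish CPU_DEAD,
	 * then wake the waiter immediately. */
	__this_cpu_write(cpu_state, CPU_DEAD);
	complete(&per_cpu(die_complete, smp_processor_id()));

	/* native_cpu_die(), on a surviving CPU: sleep until signalled,
	 * with a one-second (HZ jiffies) timeout as a safety net. */
	wait_for_completion_timeout(&per_cpu(die_complete, cpu), HZ);

The timeout preserves the old failure path: if the dying CPU never
acks, native_cpu_die() still reports that the CPU didn't die.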

Tested on a 48-core Ivybridge Intel Xeon server and on Haswell
laptops. The CPU offlining cost on these machines is reduced from
more than 100ms to less than 5ms. The system suspend time is
reduced by 2.3s on the server.

Borislav and Prarit also helped test the patch on an AMD machine
and on a number of systems of various sizes and configurations
(multi-socket, single-socket, no hyper-threading, etc.). No issues
were seen.

Tested-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
Acked-by: Borislav Petkov <bp@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: srostedt@redhat.com
Cc: toshi.kani@hp.com
Cc: imammedo@redhat.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1409039025-32310-1-git-send-email-tianyu.lan@intel.com
[ Improved a few minor details in the code, cleaned up the changelog. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent f3670394
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -102,6 +102,8 @@ DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_llc_shared_map);
 DEFINE_PER_CPU_SHARED_ALIGNED(struct cpuinfo_x86, cpu_info);
 EXPORT_PER_CPU_SYMBOL(cpu_info);
 
+static DEFINE_PER_CPU(struct completion, die_complete);
+
 atomic_t init_deasserted;
 
 /*
@@ -1323,26 +1325,24 @@ int native_cpu_disable(void)
 		return ret;
 
 	clear_local_APIC();
+	init_completion(&per_cpu(die_complete, smp_processor_id()));
 	cpu_disable_common();
+
 	return 0;
 }
 
 void native_cpu_die(unsigned int cpu)
 {
 	/* We don't do anything here: idle task is faking death itself. */
-	unsigned int i;
+	wait_for_completion_timeout(&per_cpu(die_complete, cpu), HZ);
 
-	for (i = 0; i < 10; i++) {
-		/* They ack this in play_dead by setting CPU_DEAD */
-		if (per_cpu(cpu_state, cpu) == CPU_DEAD) {
-			if (system_state == SYSTEM_RUNNING)
-				pr_info("CPU %u is now offline\n", cpu);
-
-			return;
-		}
-		msleep(100);
-	}
-	pr_err("CPU %u didn't die...\n", cpu);
+	/* They ack this in play_dead() by setting CPU_DEAD */
+	if (per_cpu(cpu_state, cpu) == CPU_DEAD) {
+		if (system_state == SYSTEM_RUNNING)
+			pr_info("CPU %u is now offline\n", cpu);
+	} else {
+		pr_err("CPU %u didn't die...\n", cpu);
+	}
 }
 
 void play_dead_common(void)
@@ -1354,6 +1354,7 @@ void play_dead_common(void)
 	mb();
 	/* Ack it */
 	__this_cpu_write(cpu_state, CPU_DEAD);
+	complete(&per_cpu(die_complete, smp_processor_id()));
 
 	/*
 	 * With physical CPU hotplug, we should halt the cpu