Commit 8e7fbcbc authored by Peter Zijlstra, committed by Ingo Molnar

sched: Remove stale power aware scheduling remnants and dysfunctional knobs

It's been broken forever (i.e. it's not scheduling in a power
aware fashion), as reported by Suresh and others who have sent
patches, and nobody cares enough to fix it properly ...
so remove it to make room for something better.

There are various problems with the code as it stands today. First
and foremost is the user interface, which is bound to topology
levels and has multiple values per level. This results in a
state explosion which the administrator or distro needs to
master, and almost nobody does.

Furthermore, large configuration state spaces aren't good: the
thing doesn't just work right, because either it's under so
many impossible-to-meet constraints, or, even if there is an
achievable state, workloads have to be aware of it precisely,
and dynamic workloads can never meet it.

So pushing this kind of decision to user-space was a bad idea
even with a single knob - it's exponentially worse with knobs
on every node of the topology.

There is a proposal to replace the user interface with a single
3-state knob:

 sched_balance_policy := { performance, power, auto }

where 'auto' would be the preferred default, looking at things
like Battery/AC mode, cpufreq state, or whatever else the hw
exposes to show us power use expectations - but there's been no
progress on it in the past many months.
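To make the proposal concrete, here is a minimal userspace sketch of how
such a knob could behave. Since the knob was never merged, everything
below is an assumption for illustration - the names, the parsing, and
the 'auto' heuristic are not actual kernel interfaces:

	/*
	 * Hypothetical sketch of the proposed (never-merged) single knob.
	 * All names and the 'auto' heuristic are illustrative assumptions.
	 */
	#include <stdio.h>
	#include <string.h>

	enum sched_balance_policy {
		SCHED_POLICY_PERFORMANCE,
		SCHED_POLICY_POWER,
		SCHED_POLICY_AUTO,	/* the intended default */
	};

	/* Map a written string to a policy; anything else is rejected. */
	static int parse_policy(const char *buf, enum sched_balance_policy *pol)
	{
		if (!strcmp(buf, "performance"))
			*pol = SCHED_POLICY_PERFORMANCE;
		else if (!strcmp(buf, "power"))
			*pol = SCHED_POLICY_POWER;
		else if (!strcmp(buf, "auto"))
			*pol = SCHED_POLICY_AUTO;
		else
			return -1;	/* would be -EINVAL in the kernel */
		return 0;
	}

	/* 'auto' resolves via hints such as battery/AC mode. */
	static enum sched_balance_policy
	resolve_policy(enum sched_balance_policy requested, int on_ac_power)
	{
		if (requested != SCHED_POLICY_AUTO)
			return requested;
		return on_ac_power ? SCHED_POLICY_PERFORMANCE : SCHED_POLICY_POWER;
	}

	int main(void)
	{
		enum sched_balance_policy pol;

		if (parse_policy("auto", &pol) == 0)
			printf("effective policy: %d\n", resolve_policy(pol, 1));
		return 0;
	}

Note how the store side would accept exactly three strings, avoiding the
old per-level numeric state space the changelog complains about.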

Aside from that, the actual implementation of the various knobs
is known to be broken. There have been sporadic attempts at
fixing things, but these always stop short of reaching a
mergeable state.

Therefore, remove it wholesale, in the hope of spurring the
people who care to come forward once again and work on a
coherent replacement.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1326104915.2442.53.camel@twins
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent fac536f7
@@ -9,31 +9,6 @@ Description:
 		/sys/devices/system/cpu/cpu#/

-What:		/sys/devices/system/cpu/sched_mc_power_savings
-		/sys/devices/system/cpu/sched_smt_power_savings
-Date:		June 2006
-Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
-Description:	Discover and adjust the kernel's multi-core scheduler support.
-
-		Possible values are:
-
-		0 - No power saving load balance (default value)
-		1 - Fill one thread/core/package first for long running threads
-		2 - Also bias task wakeups to semi-idle cpu package for power
-		    savings
-
-		sched_mc_power_savings is dependent upon SCHED_MC, which is
-		itself architecture dependent.
-
-		sched_smt_power_savings is dependent upon SCHED_SMT, which
-		is itself architecture dependent.
-
-		The two files are independent of each other. It is possible
-		that one file may be present without the other.
-
-		Introduced by git commit 5c45bf27.
-
 What:		/sys/devices/system/cpu/kernel_max
 		/sys/devices/system/cpu/offline
 		/sys/devices/system/cpu/online
...
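For context on the ABI removed above, here is a small sketch of how
userspace used to drive one of these files. It only works on kernels
that predate this commit, where the file still exists:

	/*
	 * Usage sketch for the *removed* interface: only meaningful on
	 * kernels older than this commit; on newer kernels the open fails.
	 */
	#include <stdio.h>

	#define MC_KNOB "/sys/devices/system/cpu/sched_mc_power_savings"

	int main(void)
	{
		FILE *f = fopen(MC_KNOB, "r+");
		int level;

		if (!f) {
			perror("open " MC_KNOB);	/* expected after this commit */
			return 1;
		}
		if (fscanf(f, "%d", &level) == 1)
			printf("current level: %d\n", level);	/* 0, 1 or 2 */

		/* request level 2: also bias wakeups to a semi-idle package */
		rewind(f);
		fprintf(f, "2\n");
		fclose(f);
		return 0;
	}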
@@ -61,10 +61,6 @@ The implementor should read comments in include/linux/sched.h:
 struct sched_domain fields, SD_FLAG_*, SD_*_INIT to get an idea of
 the specifics and what to tune.

-For SMT, the architecture must define CONFIG_SCHED_SMT and provide a
-cpumask_t cpu_sibling_map[NR_CPUS], where cpu_sibling_map[i] is the mask of
-all "i"'s siblings as well as "i" itself.
-
 Architectures may retain the regular override the default SD_*_INIT flags
 while using the generic domain builder in kernel/sched.c if they wish to
 retain the traditional SMT->SMP->NUMA topology (or some subset of that). This
...
@@ -429,8 +429,7 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
 	 * For perf, we return last level cache shared map.
	 * And for power savings, we return cpu_core_map
	 */
-	if ((sched_mc_power_savings || sched_smt_power_savings) &&
-	    !(cpu_has(c, X86_FEATURE_AMD_DCM)))
+	if (!(cpu_has(c, X86_FEATURE_AMD_DCM)))
 		return cpu_core_mask(cpu);
 	else
 		return cpu_llc_shared_mask(cpu);
...
@@ -330,8 +330,4 @@ void __init cpu_dev_init(void)
 		panic("Failed to register CPU subsystem");

 	cpu_dev_register_generic();
-
-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-	sched_create_sysfs_power_savings_entries(cpu_subsys.dev_root);
-#endif
 }
@@ -36,8 +36,6 @@ extern void cpu_remove_dev_attr(struct device_attribute *attr);
 extern int cpu_add_dev_attr_group(struct attribute_group *attrs);
 extern void cpu_remove_dev_attr_group(struct attribute_group *attrs);

-extern int sched_create_sysfs_power_savings_entries(struct device *dev);
-
 #ifdef CONFIG_HOTPLUG_CPU
 extern void unregister_cpu(struct cpu *cpu);
 extern ssize_t arch_cpu_probe(const char *, size_t);
...
@@ -855,61 +855,14 @@ enum cpu_idle_type {
 #define SD_WAKE_AFFINE		0x0020	/* Wake task to waking CPU */
 #define SD_PREFER_LOCAL		0x0040	/* Prefer to keep tasks local to this domain */
 #define SD_SHARE_CPUPOWER	0x0080	/* Domain members share cpu power */
-#define SD_POWERSAVINGS_BALANCE	0x0100	/* Balance for power savings */
 #define SD_SHARE_PKG_RESOURCES	0x0200	/* Domain members share cpu pkg resources */
 #define SD_SERIALIZE		0x0400	/* Only a single load balancing instance */
 #define SD_ASYM_PACKING		0x0800	/* Place busy groups earlier in the domain */
 #define SD_PREFER_SIBLING	0x1000	/* Prefer to place tasks in a sibling domain */
 #define SD_OVERLAP		0x2000	/* sched_domains of this level overlap */

-enum powersavings_balance_level {
-	POWERSAVINGS_BALANCE_NONE = 0,	/* No power saving load balance */
-	POWERSAVINGS_BALANCE_BASIC,	/* Fill one thread/core/package
-					 * first for long running threads
-					 */
-	POWERSAVINGS_BALANCE_WAKEUP,	/* Also bias task wakeups to semi-idle
-					 * cpu package for power savings
-					 */
-	MAX_POWERSAVINGS_BALANCE_LEVELS
-};
-
-extern int sched_mc_power_savings, sched_smt_power_savings;
-
-static inline int sd_balance_for_mc_power(void)
-{
-	if (sched_smt_power_savings)
-		return SD_POWERSAVINGS_BALANCE;
-
-	if (!sched_mc_power_savings)
-		return SD_PREFER_SIBLING;
-
-	return 0;
-}
-
-static inline int sd_balance_for_package_power(void)
-{
-	if (sched_mc_power_savings | sched_smt_power_savings)
-		return SD_POWERSAVINGS_BALANCE;
-
-	return SD_PREFER_SIBLING;
-}
-
 extern int __weak arch_sd_sibiling_asym_packing(void);

-/*
- * Optimise SD flags for power savings:
- * SD_BALANCE_NEWIDLE helps aggressive task consolidation and power savings.
- * Keep default SD flags if sched_{smt,mc}_power_saving=0
- */
-static inline int sd_power_saving_flags(void)
-{
-	if (sched_mc_power_savings | sched_smt_power_savings)
-		return SD_BALANCE_NEWIDLE;
-
-	return 0;
-}
-
 struct sched_group_power {
 	atomic_t ref;
 	/*
...
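The removed helpers above are the heart of the per-level flag selection.
To see the state space they generate, here is a standalone replay in plain
C, with the two globals simulated. The flag values are copied from the
removed hunk; SD_BALANCE_NEWIDLE is assumed to be 0x0002, its value in
sched.h of this era:

	/*
	 * Replay of the removed helpers: prints which SD flags each
	 * (sched_mc_power_savings, sched_smt_power_savings) pair added
	 * to the MC-level and CPU-level domains.
	 */
	#include <stdio.h>

	#define SD_BALANCE_NEWIDLE	0x0002	/* assumed, see lead-in */
	#define SD_POWERSAVINGS_BALANCE	0x0100
	#define SD_PREFER_SIBLING	0x1000

	static int sched_mc_power_savings, sched_smt_power_savings;

	static int sd_balance_for_mc_power(void)
	{
		if (sched_smt_power_savings)
			return SD_POWERSAVINGS_BALANCE;
		if (!sched_mc_power_savings)
			return SD_PREFER_SIBLING;
		return 0;
	}

	static int sd_balance_for_package_power(void)
	{
		if (sched_mc_power_savings | sched_smt_power_savings)
			return SD_POWERSAVINGS_BALANCE;
		return SD_PREFER_SIBLING;
	}

	static int sd_power_saving_flags(void)
	{
		if (sched_mc_power_savings | sched_smt_power_savings)
			return SD_BALANCE_NEWIDLE;
		return 0;
	}

	int main(void)
	{
		int mc, smt;

		/* each knob accepted levels 0..2, giving 9 combinations */
		for (mc = 0; mc <= 2; mc++) {
			for (smt = 0; smt <= 2; smt++) {
				sched_mc_power_savings = mc;
				sched_smt_power_savings = smt;
				printf("mc=%d smt=%d: MC adds 0x%04x, CPU adds 0x%04x\n",
				       mc, smt,
				       sd_balance_for_mc_power() | sd_power_saving_flags(),
				       sd_balance_for_package_power() | sd_power_saving_flags());
			}
		}
		return 0;
	}

Nine combinations per pair of topology levels, each flipping balancing
flags at two domain levels at once, is exactly the state explosion the
changelog objects to.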
@@ -98,7 +98,6 @@ int arch_update_cpu_topology(void);
 				| 0*SD_BALANCE_WAKE			\
 				| 1*SD_WAKE_AFFINE			\
 				| 1*SD_SHARE_CPUPOWER			\
-				| 0*SD_POWERSAVINGS_BALANCE		\
 				| 1*SD_SHARE_PKG_RESOURCES		\
 				| 0*SD_SERIALIZE			\
 				| 0*SD_PREFER_SIBLING			\
@@ -134,8 +133,6 @@ int arch_update_cpu_topology(void);
 				| 0*SD_SHARE_CPUPOWER			\
 				| 1*SD_SHARE_PKG_RESOURCES		\
 				| 0*SD_SERIALIZE			\
-				| sd_balance_for_mc_power()		\
-				| sd_power_saving_flags()		\
 				,					\
 	.last_balance		= jiffies,				\
 	.balance_interval	= 1,					\
@@ -167,8 +164,6 @@ int arch_update_cpu_topology(void);
 				| 0*SD_SHARE_CPUPOWER			\
 				| 0*SD_SHARE_PKG_RESOURCES		\
 				| 0*SD_SERIALIZE			\
-				| sd_balance_for_package_power()	\
-				| sd_power_saving_flags()		\
 				,					\
 	.last_balance		= jiffies,				\
 	.balance_interval	= 1,					\
...
@@ -5929,8 +5929,6 @@ static const struct cpumask *cpu_cpu_mask(int cpu)
 	return cpumask_of_node(cpu_to_node(cpu));
 }

-int sched_smt_power_savings = 0, sched_mc_power_savings = 0;
-
 struct sd_data {
 	struct sched_domain **__percpu sd;
 	struct sched_group **__percpu sg;
@@ -6322,7 +6320,6 @@ sd_numa_init(struct sched_domain_topology_level *tl, int cpu)
 					| 0*SD_WAKE_AFFINE
 					| 0*SD_PREFER_LOCAL
 					| 0*SD_SHARE_CPUPOWER
-					| 0*SD_POWERSAVINGS_BALANCE
 					| 0*SD_SHARE_PKG_RESOURCES
 					| 1*SD_SERIALIZE
 					| 0*SD_PREFER_SIBLING
@@ -6819,97 +6816,6 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
 	mutex_unlock(&sched_domains_mutex);
 }

-#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
-static void reinit_sched_domains(void)
-{
-	get_online_cpus();
-
-	/* Destroy domains first to force the rebuild */
-	partition_sched_domains(0, NULL, NULL);
-
-	rebuild_sched_domains();
-	put_online_cpus();
-}
-
-static ssize_t sched_power_savings_store(const char *buf, size_t count, int smt)
-{
-	unsigned int level = 0;
-
-	if (sscanf(buf, "%u", &level) != 1)
-		return -EINVAL;
-
-	/*
-	 * level is always be positive so don't check for
-	 * level < POWERSAVINGS_BALANCE_NONE which is 0
-	 * What happens on 0 or 1 byte write,
-	 * need to check for count as well?
-	 */
-	if (level >= MAX_POWERSAVINGS_BALANCE_LEVELS)
-		return -EINVAL;
-
-	if (smt)
-		sched_smt_power_savings = level;
-	else
-		sched_mc_power_savings = level;
-
-	reinit_sched_domains();
-
-	return count;
-}
-
-#ifdef CONFIG_SCHED_MC
-static ssize_t sched_mc_power_savings_show(struct device *dev,
-					   struct device_attribute *attr,
-					   char *buf)
-{
-	return sprintf(buf, "%u\n", sched_mc_power_savings);
-}
-static ssize_t sched_mc_power_savings_store(struct device *dev,
-					    struct device_attribute *attr,
-					    const char *buf, size_t count)
-{
-	return sched_power_savings_store(buf, count, 0);
-}
-static DEVICE_ATTR(sched_mc_power_savings, 0644,
-		   sched_mc_power_savings_show,
-		   sched_mc_power_savings_store);
-#endif
-
-#ifdef CONFIG_SCHED_SMT
-static ssize_t sched_smt_power_savings_show(struct device *dev,
-					    struct device_attribute *attr,
-					    char *buf)
-{
-	return sprintf(buf, "%u\n", sched_smt_power_savings);
-}
-static ssize_t sched_smt_power_savings_store(struct device *dev,
-					     struct device_attribute *attr,
-					     const char *buf, size_t count)
-{
-	return sched_power_savings_store(buf, count, 1);
-}
-static DEVICE_ATTR(sched_smt_power_savings, 0644,
-		   sched_smt_power_savings_show,
-		   sched_smt_power_savings_store);
-#endif
-
-int __init sched_create_sysfs_power_savings_entries(struct device *dev)
-{
-	int err = 0;
-
-#ifdef CONFIG_SCHED_SMT
-	if (smt_capable())
-		err = device_create_file(dev, &dev_attr_sched_smt_power_savings);
-#endif
-#ifdef CONFIG_SCHED_MC
-	if (!err && mc_capable())
-		err = device_create_file(dev, &dev_attr_sched_mc_power_savings);
-#endif
-	return err;
-}
-#endif /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */
-
 /*
  * Update cpusets according to cpu_active mask. If cpusets are
  * disabled, cpuset_update_active_cpus() becomes a simple wrapper
...
This diff is collapsed.
@@ -85,15 +85,6 @@ Possible values are:
 savings
 .RE

-sched_mc_power_savings is dependent upon SCHED_MC, which is
-itself architecture dependent.
-
-sched_smt_power_savings is dependent upon SCHED_SMT, which
-is itself architecture dependent.
-
-The two files are independent of each other. It is possible
-that one file may be present without the other.
-
 .SH "SEE ALSO"
 cpupower-info(1), cpupower-monitor(1), powertop(1)
 .PP
...
@@ -362,22 +362,7 @@ char *sysfs_get_cpuidle_driver(void)
  */
 int sysfs_get_sched(const char *smt_mc)
 {
-	unsigned long value;
-	char linebuf[MAX_LINE_LEN];
-	char *endp;
-	char path[SYSFS_PATH_MAX];
-
-	if (strcmp("mc", smt_mc) && strcmp("smt", smt_mc))
-		return -EINVAL;
-
-	snprintf(path, sizeof(path),
-		PATH_TO_CPU "sched_%s_power_savings", smt_mc);
-	if (sysfs_read_file(path, linebuf, MAX_LINE_LEN) == 0)
-		return -1;
-	value = strtoul(linebuf, &endp, 0);
-	if (endp == linebuf || errno == ERANGE)
-		return -1;
-	return value;
+	return -ENODEV;
 }

 /*
@@ -388,21 +373,5 @@ int sysfs_get_sched(const char *smt_mc)
  */
 int sysfs_set_sched(const char *smt_mc, int val)
 {
-	char linebuf[MAX_LINE_LEN];
-	char path[SYSFS_PATH_MAX];
-	struct stat statbuf;
-
-	if (strcmp("mc", smt_mc) && strcmp("smt", smt_mc))
-		return -EINVAL;
-
-	snprintf(path, sizeof(path),
-		PATH_TO_CPU "sched_%s_power_savings", smt_mc);
-	sprintf(linebuf, "%d", val);
-	if (stat(path, &statbuf) != 0)
-		return -ENODEV;
-	if (sysfs_write_file(path, linebuf, MAX_LINE_LEN) == 0)
-		return -1;
-	return 0;
+	return -ENODEV;
 }
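With both helpers reduced to stubs, every cpupower caller now sees
-ENODEV unconditionally. A minimal, hypothetical harness around the
real sysfs_get_sched() signature (it would have to be linked against
cpupower's helpers/sysfs.c) shows the expected handling:

	/*
	 * Hypothetical caller harness: after this commit sysfs_get_sched()
	 * reports -ENODEV for both "mc" and "smt", so callers should treat
	 * the feature as absent rather than as a read error.
	 */
	#include <errno.h>
	#include <stdio.h>

	extern int sysfs_get_sched(const char *smt_mc);	/* helpers/sysfs.c */

	static void report(const char *smt_mc)
	{
		int ret = sysfs_get_sched(smt_mc);

		if (ret == -ENODEV)
			printf("sched_%s_power_savings: not supported by this kernel\n",
			       smt_mc);
		else
			printf("sched_%s_power_savings = %d\n", smt_mc, ret);
	}

	int main(void)
	{
		report("mc");
		report("smt");
		return 0;
	}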