Commit d57d3943 authored by Linus Torvalds

Merge tag 'pm-4.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "The majority of changes go into the cpufreq subsystem this time.

  To me, quite obviously, the biggest ticket item is the new "schedutil"
  governor.  Interestingly enough, it's the first new cpufreq governor
  since the beginning of the git era (except for some out-of-the-tree
  ones).

  There are two main differences between it and the existing governors.
  First, it uses the information provided by the scheduler directly for
  making its decisions, so it doesn't have to track anything by itself.
  Second, it can invoke drivers (supporting that feature) to adjust CPU
  performance right away without having to spawn work items to be
  executed in process context or similar.  Currently, the acpi-cpufreq
  driver is the only one supporting that mode of operation, but then it
  is used on a large number of systems.
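
   As an illustration of that second point, here is a minimal sketch (not
   the actual acpi-cpufreq code; the "foo" names are placeholders) of the
   driver-side hook involved, per the cpufreq driver interface in 4.7:

       /*
        * A driver opts in to fast switching by implementing ->fast_switch(),
        * which may be invoked directly from scheduler context and therefore
        * must not sleep or take sleeping locks.
        */
       #include <linux/cpufreq.h>

       static unsigned int foo_fast_switch(struct cpufreq_policy *policy,
                                           unsigned int target_freq)
       {
               /* foo_write_perf_ctl() is a hypothetical non-sleeping helper. */
               foo_write_perf_ctl(policy->cpu, target_freq);

               return target_freq;     /* frequency actually set, in kHz */
       }

       static struct cpufreq_driver foo_cpufreq_driver = {
               .name        = "foo-cpufreq",
               .fast_switch = foo_fast_switch,
               /* ->init(), ->verify(), ->target_index(), ... as usual */
       };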

  The "schedutil" governor as included here is very simple and mostly
  regarded as a foundation for future work on the integration of the
  scheduler with CPU power management (in fact, there is work in
  progress on top of it already).  Nevertheless it works and the
  preliminary results obtained with it are encouraging.
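
   To make "very simple" concrete: per the governor's Kconfig help text,
   the selected frequency is proportional to the utilization/capacity
   ratio, with a tipping point at 80%. A sketch of that mapping in the
   frequency-invariant case (illustration only, not the kernel code):

       static unsigned long schedutil_next_freq(unsigned long util,
                                                unsigned long max,
                                                unsigned long max_freq)
       {
               /*
                * next_freq = 1.25 * max_freq * util / max;
                * this reaches max_freq when util/max hits 80%.
                */
               return (max_freq + (max_freq >> 2)) * util / max;
       }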

  There also is some consolidation of CPU frequency management for ARM
  platforms that can add their machine IDs to the new stub dt-platdev
  driver now and that will take care of creating the requisite platform
  device for cpufreq-dt, so it is not necessary to do that in platform
  code any more.  Several ARM platforms are switched over to using this
  generic mechanism.

  In addition to that, the intel_pstate driver is now going to respect
  CPU frequency limits set by the platform firmware (or a BMC) and
  provided via the ACPI _PPC object.

  The devfreq subsystem is getting a new "passive" governor for SoC
  subsystems that will depend on somebody else to manage their voltage
  rails, and its support for Samsung Exynos SoCs is consolidated.

  The rest is support for new hardware (Intel Broxton support in
  intel_idle for one example), bug fixes, optimizations and cleanups in
  a number of places.

  Specifics:

   - New cpufreq "schedutil" governor (making decisions based on CPU
     utilization information provided by the scheduler and capable of
     switching CPU frequencies right away if the underlying driver
     supports that) and support for fast frequency switching in the
     acpi-cpufreq driver (Rafael Wysocki)

   - Consolidation of CPU frequency management on ARM platforms allowing
     them to get rid of some platform-specific boilerplate code if they
     are going to use the cpufreq-dt driver (Viresh Kumar, Finley Xiao,
     Marc Gonzalez)

   - Support for ACPI _PPC and CPU frequency limits in the intel_pstate
     driver (Srinivas Pandruvada)

   - Fixes and cleanups in the cpufreq core and generic governor code
     (Rafael Wysocki, Sai Gurrappadi)

   - intel_pstate driver optimizations and cleanups (Rafael Wysocki,
     Philippe Longepe, Chen Yu, Joe Perches)

   - cpufreq powernv driver fixes and cleanups (Akshay Adiga, Shilpasri
     Bhat)

   - cpufreq qoriq driver fixes and cleanups (Jia Hongtao)

   - ACPI cpufreq driver cleanups (Viresh Kumar)

   - Assorted cpufreq driver updates (Ashwin Chaugule, Geliang Tang,
     Javier Martinez Canillas, Paul Gortmaker, Sudeep Holla)

   - Assorted cpufreq fixes and cleanups (Joe Perches, Arnd Bergmann)

   - Fixes and cleanups in the OPP (Operating Performance Points)
     framework, mostly related to OPP sharing, and reorganization of
     OF-dependent code in it (Viresh Kumar, Arnd Bergmann, Sudeep Holla)

   - New "passive" governor for devfreq (for SoC subsystems that will
     rely on someone else for the management of their power resources)
     and consolidation of devfreq support for Exynos platforms, coding
     style and typo fixes for devfreq (Chanwoo Choi, MyungJoo Ham)

   - PM core fixes and cleanups, mostly to make it work better with the
     generic power domains (genpd) framework, and updates for that
     framework (Ulf Hansson, Thierry Reding, Colin Ian King)

   - Intel Broxton support for the intel_idle driver (Len Brown)

   - cpuidle core optimization and fix (Daniel Lezcano, Dave Gerlach)

   - ARM cpuidle cleanups (Jisheng Zhang)

   - Intel Kabylake support for the RAPL power capping driver (Jacob
     Pan)

   - AVS (Adaptive Voltage Switching) rockchip-io driver update (Heiko
     Stuebner)

   - Updates for the cpupower tool (Arjun Sreedharan, Colin Ian King,
     Mattia Dongili, Thomas Renninger)"

* tag 'pm-4.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (112 commits)
  intel_pstate: Clean up get_target_pstate_use_performance()
  intel_pstate: Use sample.core_avg_perf in get_avg_pstate()
  intel_pstate: Clarify average performance computation
  intel_pstate: Avoid unnecessary synchronize_sched() during initialization
  cpufreq: schedutil: Make default depend on CONFIG_SMP
  cpufreq: powernv: del_timer_sync when global and local pstate are equal
  cpufreq: powernv: Move smp_call_function_any() out of irq safe block
  intel_pstate: Clean up intel_pstate_get()
  cpufreq: schedutil: Make it depend on CONFIG_SMP
  cpufreq: governor: Fix handling of special cases in dbs_update()
  PM / OPP: Move CONFIG_OF dependent code in a separate file
  cpufreq: intel_pstate: Ignore _PPC processing under HWP
  cpufreq: arm_big_little: use generic OPP functions for {init, free}_opp_table
  PM / OPP: add non-OF versions of dev_pm_opp_{cpumask_, }remove_table
  cpufreq: tango: Use generic platdev driver
  PM / OPP: pass cpumask by reference
  cpufreq: Fix GOV_LIMITS handling for the userspace governor
  cpupower: fix potential memory leak
  PM / devfreq: style/typo fixes
  PM / devfreq: exynos: Add the detailed correlation for Exynos5422 bus
  ..
parents 3e21e5dd 27c4a1c5
* Samsung Exynos NoC (Network on Chip) Probe device
The Samsung Exynos542x SoC has a NoC (Network on Chip) Probe for the NoC bus.
The NoC probe provides the primitive values used to derive performance data.
The packets that the Network on Chip (NoC) probes detect are transported over
the network infrastructure to observer units. You can configure probes to
capture packets with header or data on the data request response network,
or as traffic debug or statistic collectors. The Exynos542x bus has multiple
NoC probes that provide bandwidth information about the behavior of the SoC,
which you can use while analyzing system performance.
Required properties:
- compatible: Should be "samsung,exynos5420-nocp"
- reg: physical base address of each NoC Probe and length of memory mapped region.
Optional properties:
- clock-names : the name of clock used by the NoC Probe, "nocp"
- clocks : phandles for clock specified in "clock-names" property
Example : NoC Probe nodes in Device Tree are listed below.
        nocp_mem0_0: nocp@10CA1000 {
                compatible = "samsung,exynos5420-nocp";
                reg = <0x10CA1000 0x200>;
        };
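
A devfreq driver (such as the consolidated exynos-bus driver) consumes such
probes through the devfreq-event API. A hedged sketch of that usage, assuming
the 4.7 devfreq-event interfaces and with error handling trimmed:

    #include <linux/device.h>
    #include <linux/devfreq-event.h>
    #include <linux/err.h>

    static int nocp_sample_bus_load(struct device *dev)
    {
            struct devfreq_event_dev *edev;
            struct devfreq_event_data edata;

            /* Look up the NoC probe via the first devfreq-events phandle. */
            edev = devfreq_event_get_edev_by_phandle(dev, 0);
            if (IS_ERR(edev))
                    return PTR_ERR(edev);

            devfreq_event_enable_edev(edev);
            devfreq_event_get_event(edev, &edata);

            /* edata.load_count / edata.total_count approximates bus load. */
            return 0;
    }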
...
@@ -37,8 +37,10 @@ Required properties:
   - "rockchip,rk3368-pmu-io-voltage-domain" for rk3368 pmu-domains
   - "rockchip,rk3399-io-voltage-domain" for rk3399
   - "rockchip,rk3399-pmu-io-voltage-domain" for rk3399 pmu-domains
-- rockchip,grf: phandle to the syscon managing the "general register files"
+
+Deprecated properties:
+- rockchip,grf: phandle to the syscon managing the "general register files"
+  Systems should move the io-domains to a sub-node of the grf simple-mfd.
 
 You specify supplies using the standard regulator bindings by including
 a phandle the relevant regulator. All specified supplies must be able
...
@@ -1671,6 +1671,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
        hwp_only
                Only load intel_pstate on systems which support
                hardware P state control (HWP) if available.
+       support_acpi_ppc
+               Enforce ACPI _PPC performance limits. If the Fixed ACPI
+               Description Table, specifies preferred power management
+               profile as "Enterprise Server" or "Performance Server",
+               then this feature is turned on by default.
 
        intremap=       [X86-64, Intel-IOMMU]
                on      enable Interrupt Remapping (default)
...
@@ -1322,6 +1322,7 @@ F: drivers/rtc/rtc-armada38x.c
 F:     arch/arm/boot/dts/armada*
 F:     arch/arm/boot/dts/kirkwood*
 F:     arch/arm64/boot/dts/marvell/armada*
+F:     drivers/cpufreq/mvebu-cpufreq.c
 
 ARM/Marvell Berlin SoC support
@@ -3539,6 +3540,15 @@ F: drivers/devfreq/devfreq-event.c
 F:     include/linux/devfreq-event.h
 F:     Documentation/devicetree/bindings/devfreq/event/
 
+BUS FREQUENCY DRIVER FOR SAMSUNG EXYNOS
+M:     Chanwoo Choi <cw00.choi@samsung.com>
+L:     linux-pm@vger.kernel.org
+L:     linux-samsung-soc@vger.kernel.org
+T:     git git://git.kernel.org/pub/scm/linux/kernel/git/mzx/devfreq.git
+S:     Maintained
+F:     drivers/devfreq/exynos-bus.c
+F:     Documentation/devicetree/bindings/devfreq/exynos-bus.txt
+
 DEVICE NUMBER REGISTRY
 M:     Torben Mathiasen <device@lanana.org>
 W:     http://lanana.org/docs/device-list/index.html
...
@@ -36,7 +36,7 @@ struct cpuidle_ops {
 struct of_cpuidle_method {
        const char *method;
-       struct cpuidle_ops *ops;
+       const struct cpuidle_ops *ops;
 };
 
 #define CPUIDLE_METHOD_OF_DECLARE(name, _method, _ops) \
...
@@ -70,7 +70,7 @@ int arm_cpuidle_suspend(int index)
  *
  * Returns a struct cpuidle_ops pointer, NULL if not found.
  */
-static struct cpuidle_ops *__init arm_cpuidle_get_ops(const char *method)
+static const struct cpuidle_ops *__init arm_cpuidle_get_ops(const char *method)
 {
        struct of_cpuidle_method *m = __cpuidle_method_of_table;
@@ -88,7 +88,7 @@ static struct cpuidle_ops *__init arm_cpuidle_get_ops(const char *method)
  *
  * Get the method name defined in the 'enable-method' property, retrieve the
  * associated cpuidle_ops and do a struct copy. This copy is needed because all
- * cpuidle_ops are tagged __initdata and will be unloaded after the init
+ * cpuidle_ops are tagged __initconst and will be unloaded after the init
  * process.
  *
  * Return 0 on sucess, -ENOENT if no 'enable-method' is defined, -EOPNOTSUPP if
@@ -97,7 +97,7 @@ static struct cpuidle_ops *__init arm_cpuidle_get_ops(const char *method)
 static int __init arm_cpuidle_read_ops(struct device_node *dn, int cpu)
 {
        const char *enable_method;
-       struct cpuidle_ops *ops;
+       const struct cpuidle_ops *ops;
 
        enable_method = of_get_property(dn, "enable-method", NULL);
        if (!enable_method)
...
@@ -18,11 +18,6 @@
 #include <asm/hardware/cache-l2x0.h>
 #include <asm/mach/arch.h>
 
-static void __init berlin_init_late(void)
-{
-       platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
-}
-
 static const char * const berlin_dt_compat[] = {
        "marvell,berlin",
        NULL,
@@ -30,7 +25,6 @@ static const char * const berlin_dt_compat[] = {
 
 DT_MACHINE_START(BERLIN_DT, "Marvell Berlin")
        .dt_compat      = berlin_dt_compat,
-       .init_late      = berlin_init_late,
        /*
         * with DT probing for L2CCs, berlin_init_machine can be removed.
         * Note: 88DE3005 (Armada 1500-mini) uses pl310 l2cc
...
@@ -213,33 +213,6 @@ static void __init exynos_init_irq(void)
        exynos_map_pmu();
 }
 
-static const struct of_device_id exynos_cpufreq_matches[] = {
-       { .compatible = "samsung,exynos3250", .data = "cpufreq-dt" },
-       { .compatible = "samsung,exynos4210", .data = "cpufreq-dt" },
-       { .compatible = "samsung,exynos4212", .data = "cpufreq-dt" },
-       { .compatible = "samsung,exynos4412", .data = "cpufreq-dt" },
-       { .compatible = "samsung,exynos5250", .data = "cpufreq-dt" },
-#ifndef CONFIG_BL_SWITCHER
-       { .compatible = "samsung,exynos5420", .data = "cpufreq-dt" },
-       { .compatible = "samsung,exynos5800", .data = "cpufreq-dt" },
-#endif
-       { /* sentinel */ }
-};
-
-static void __init exynos_cpufreq_init(void)
-{
-       struct device_node *root = of_find_node_by_path("/");
-       const struct of_device_id *match;
-
-       match = of_match_node(exynos_cpufreq_matches, root);
-       if (!match) {
-               platform_device_register_simple("exynos-cpufreq", -1, NULL, 0);
-               return;
-       }
-
-       platform_device_register_simple(match->data, -1, NULL, 0);
-}
-
 static void __init exynos_dt_machine_init(void)
 {
        /*
@@ -262,8 +235,6 @@ static void __init exynos_dt_machine_init(void)
            of_machine_is_compatible("samsung,exynos5250"))
                platform_device_register(&exynos_cpuidle);
 
-       exynos_cpufreq_init();
-
        of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
 }
...
@@ -18,15 +18,6 @@
 #include "common.h"
 #include "mx27.h"
 
-static void __init imx27_dt_init(void)
-{
-       struct platform_device_info devinfo = { .name = "cpufreq-dt", };
-
-       of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
-
-       platform_device_register_full(&devinfo);
-}
-
 static const char * const imx27_dt_board_compat[] __initconst = {
        "fsl,imx27",
        NULL
@@ -36,6 +27,5 @@ DT_MACHINE_START(IMX27_DT, "Freescale i.MX27 (Device Tree Support)")
        .map_io         = mx27_map_io,
        .init_early     = imx27_init_early,
        .init_irq       = mx27_init_irq,
-       .init_machine   = imx27_dt_init,
        .dt_compat      = imx27_dt_board_compat,
 MACHINE_END
@@ -50,13 +50,10 @@ static void __init imx51_ipu_mipi_setup(void)
 
 static void __init imx51_dt_init(void)
 {
-       struct platform_device_info devinfo = { .name = "cpufreq-dt", };
-
        imx51_ipu_mipi_setup();
        imx_src_init();
 
        of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
-       platform_device_register_full(&devinfo);
 }
 
 static void __init imx51_init_late(void)
...
@@ -40,8 +40,6 @@ static void __init imx53_dt_init(void)
 static void __init imx53_init_late(void)
 {
        imx53_pm_init();
-
-       platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
 }
 
 static const char * const imx53_dt_board_compat[] __initconst = {
...
@@ -105,11 +105,6 @@ static void __init imx7d_init_irq(void)
        irqchip_init();
 }
 
-static void __init imx7d_init_late(void)
-{
-       platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
-}
-
 static const char *const imx7d_dt_compat[] __initconst = {
        "fsl,imx7d",
        NULL,
@@ -117,7 +112,6 @@ static const char *const imx7d_dt_compat[] __initconst = {
 
 DT_MACHINE_START(IMX7D, "Freescale i.MX7 Dual (Device Tree)")
        .init_irq       = imx7d_init_irq,
-       .init_late      = imx7d_init_late,
        .init_machine   = imx7d_init_machine,
        .dt_compat      = imx7d_dt_compat,
 MACHINE_END
@@ -20,7 +20,6 @@
 #include <linux/clk.h>
 #include <linux/cpu_pm.h>
-#include <linux/cpufreq-dt.h>
 #include <linux/delay.h>
 #include <linux/init.h>
 #include <linux/io.h>
@@ -29,7 +28,6 @@
 #include <linux/of_address.h>
 #include <linux/of_device.h>
 #include <linux/platform_device.h>
-#include <linux/pm_opp.h>
 #include <linux/resource.h>
 #include <linux/slab.h>
 #include <linux/smp.h>
@@ -608,86 +606,3 @@ int mvebu_pmsu_dfs_request(int cpu)
        return 0;
 }
-
-struct cpufreq_dt_platform_data cpufreq_dt_pd = {
-       .independent_clocks = true,
-};
-
-static int __init armada_xp_pmsu_cpufreq_init(void)
-{
-       struct device_node *np;
-       struct resource res;
-       int ret, cpu;
-
-       if (!of_machine_is_compatible("marvell,armadaxp"))
-               return 0;
-
-       /*
-        * In order to have proper cpufreq handling, we need to ensure
-        * that the Device Tree description of the CPU clock includes
-        * the definition of the PMU DFS registers. If not, we do not
-        * register the clock notifier and the cpufreq driver. This
-        * piece of code is only for compatibility with old Device
-        * Trees.
-        */
-       np = of_find_compatible_node(NULL, NULL, "marvell,armada-xp-cpu-clock");
-       if (!np)
-               return 0;
-
-       ret = of_address_to_resource(np, 1, &res);
-       if (ret) {
-               pr_warn(FW_WARN "not enabling cpufreq, deprecated armada-xp-cpu-clock binding\n");
-               of_node_put(np);
-               return 0;
-       }
-
-       of_node_put(np);
-
-       /*
-        * For each CPU, this loop registers the operating points
-        * supported (which are the nominal CPU frequency and half of
-        * it), and registers the clock notifier that will take care
-        * of doing the PMSU part of a frequency transition.
-        */
-       for_each_possible_cpu(cpu) {
-               struct device *cpu_dev;
-               struct clk *clk;
-               int ret;
-
-               cpu_dev = get_cpu_device(cpu);
-               if (!cpu_dev) {
-                       pr_err("Cannot get CPU %d\n", cpu);
-                       continue;
-               }
-
-               clk = clk_get(cpu_dev, 0);
-               if (IS_ERR(clk)) {
-                       pr_err("Cannot get clock for CPU %d\n", cpu);
-                       return PTR_ERR(clk);
-               }
-
-               /*
-                * In case of a failure of dev_pm_opp_add(), we don't
-                * bother with cleaning up the registered OPP (there's
-                * no function to do so), and simply cancel the
-                * registration of the cpufreq device.
-                */
-               ret = dev_pm_opp_add(cpu_dev, clk_get_rate(clk), 0);
-               if (ret) {
-                       clk_put(clk);
-                       return ret;
-               }
-
-               ret = dev_pm_opp_add(cpu_dev, clk_get_rate(clk) / 2, 0);
-               if (ret) {
-                       clk_put(clk);
-                       return ret;
-               }
-       }
-
-       platform_device_register_data(NULL, "cpufreq-dt", -1,
-                                     &cpufreq_dt_pd, sizeof(cpufreq_dt_pd));
-       return 0;
-}
-device_initcall(armada_xp_pmsu_cpufreq_init);
@@ -277,12 +277,9 @@ static void __init omap4_init_voltages(void)
 
 static inline void omap_init_cpufreq(void)
 {
-       struct platform_device_info devinfo = { };
+       struct platform_device_info devinfo = { .name = "omap-cpufreq" };
 
        if (!of_have_populated_dt())
-               devinfo.name = "omap-cpufreq";
-       else
-               devinfo.name = "cpufreq-dt";
-       platform_device_register_full(&devinfo);
+               platform_device_register_full(&devinfo);
 }
...
@@ -74,7 +74,6 @@ static void __init rockchip_dt_init(void)
 {
        rockchip_suspend_init();
        of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
-       platform_device_register_simple("cpufreq-dt", 0, NULL, 0);
 }
 
 static const char * const rockchip_board_dt_compat[] = {
...
@@ -38,7 +38,6 @@ smp-$(CONFIG_ARCH_EMEV2) += smp-emev2.o headsmp-scu.o platsmp-scu.o
 
 # PM objects
 obj-$(CONFIG_SUSPEND)          += suspend.o
-obj-$(CONFIG_CPU_FREQ)         += cpufreq.o
 obj-$(CONFIG_PM_RCAR)          += pm-rcar.o
 obj-$(CONFIG_PM_RMOBILE)       += pm-rmobile.o
 obj-$(CONFIG_ARCH_RCAR_GEN2)   += pm-rcar-gen2.o
...
@@ -25,16 +25,9 @@ static inline int shmobile_suspend_init(void) { return 0; }
 static inline void shmobile_smp_apmu_suspend_init(void) { }
 #endif
 
-#ifdef CONFIG_CPU_FREQ
-int shmobile_cpufreq_init(void);
-#else
-static inline int shmobile_cpufreq_init(void) { return 0; }
-#endif
-
 static inline void __init shmobile_init_late(void)
 {
        shmobile_suspend_init();
-       shmobile_cpufreq_init();
 }
 
 #endif /* __ARCH_MACH_COMMON_H */
-/*
- * CPUFreq support code for SH-Mobile ARM
- *
- * Copyright (C) 2014 Gaku Inami
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- */
-
-#include <linux/platform_device.h>
-
-#include "common.h"
-
-int __init shmobile_cpufreq_init(void)
-{
-       platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
-       return 0;
-}
@@ -17,11 +17,6 @@
 
 #include <asm/mach/arch.h>
 
-static void __init sunxi_dt_cpufreq_init(void)
-{
-       platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
-}
-
 static const char * const sunxi_board_dt_compat[] = {
        "allwinner,sun4i-a10",
        "allwinner,sun5i-a10s",
@@ -32,7 +27,6 @@ static const char * const sunxi_board_dt_compat[] = {
 
 DT_MACHINE_START(SUNXI_DT, "Allwinner sun4i/sun5i Families")
        .dt_compat      = sunxi_board_dt_compat,
-       .init_late      = sunxi_dt_cpufreq_init,
 MACHINE_END
 
 static const char * const sun6i_board_dt_compat[] = {
@@ -53,7 +47,6 @@ static void __init sun6i_timer_init(void)
 DT_MACHINE_START(SUN6I_DT, "Allwinner sun6i (A31) Family")
        .init_time      = sun6i_timer_init,
        .dt_compat      = sun6i_board_dt_compat,
-       .init_late      = sunxi_dt_cpufreq_init,
 MACHINE_END
 
 static const char * const sun7i_board_dt_compat[] = {
@@ -63,7 +56,6 @@ static const char * const sun7i_board_dt_compat[] = {
 DT_MACHINE_START(SUN7I_DT, "Allwinner sun7i (A20) Family")
        .dt_compat      = sun7i_board_dt_compat,
-       .init_late      = sunxi_dt_cpufreq_init,
 MACHINE_END
 
 static const char * const sun8i_board_dt_compat[] = {
@@ -77,7 +69,6 @@ static const char * const sun8i_board_dt_compat[] = {
 DT_MACHINE_START(SUN8I_DT, "Allwinner sun8i Family")
        .init_time      = sun6i_timer_init,
        .dt_compat      = sun8i_board_dt_compat,
-       .init_late      = sunxi_dt_cpufreq_init,
 MACHINE_END
 
 static const char * const sun9i_board_dt_compat[] = {
...
@@ -110,7 +110,6 @@ static void __init zynq_init_late(void)
  */
 static void __init zynq_init_machine(void)
 {
-       struct platform_device_info devinfo = { .name = "cpufreq-dt", };
        struct soc_device_attribute *soc_dev_attr;
        struct soc_device *soc_dev;
        struct device *parent = NULL;
@@ -145,7 +144,6 @@ static void __init zynq_init_machine(void)
        of_platform_populate(NULL, of_default_bus_match_table, NULL, parent);
 
        platform_device_register(&zynq_cpuidle_device);
-       platform_device_register_full(&devinfo);
 }
 
 static void __init zynq_timer_init(void)
...
@@ -159,7 +159,7 @@ int of_pm_clk_add_clks(struct device *dev)
        count = of_count_phandle_with_args(dev->of_node, "clocks",
                                           "#clock-cells");
-       if (count == 0)
+       if (count <= 0)
                return -ENODEV;
 
        clks = kcalloc(count, sizeof(*clks), GFP_KERNEL);
...
@@ -229,17 +229,6 @@ static int genpd_poweron(struct generic_pm_domain *genpd, unsigned int depth)
        return ret;
 }
 
-static int genpd_save_dev(struct generic_pm_domain *genpd, struct device *dev)
-{
-       return GENPD_DEV_CALLBACK(genpd, int, save_state, dev);
-}
-
-static int genpd_restore_dev(struct generic_pm_domain *genpd,
-                            struct device *dev)
-{
-       return GENPD_DEV_CALLBACK(genpd, int, restore_state, dev);
-}
-
 static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
                                     unsigned long val, void *ptr)
 {
@@ -372,17 +361,63 @@ static void genpd_power_off_work_fn(struct work_struct *work)
 }
 
 /**
- * pm_genpd_runtime_suspend - Suspend a device belonging to I/O PM domain.
+ * __genpd_runtime_suspend - walk the hierarchy of ->runtime_suspend() callbacks
+ * @dev: Device to handle.
+ */
+static int __genpd_runtime_suspend(struct device *dev)
+{
+       int (*cb)(struct device *__dev);
+
+       if (dev->type && dev->type->pm)
+               cb = dev->type->pm->runtime_suspend;
+       else if (dev->class && dev->class->pm)
+               cb = dev->class->pm->runtime_suspend;
+       else if (dev->bus && dev->bus->pm)
+               cb = dev->bus->pm->runtime_suspend;
+       else
+               cb = NULL;
+
+       if (!cb && dev->driver && dev->driver->pm)
+               cb = dev->driver->pm->runtime_suspend;
+
+       return cb ? cb(dev) : 0;
+}
+
+/**
+ * __genpd_runtime_resume - walk the hierarchy of ->runtime_resume() callbacks
+ * @dev: Device to handle.
+ */
+static int __genpd_runtime_resume(struct device *dev)
+{
+       int (*cb)(struct device *__dev);
+
+       if (dev->type && dev->type->pm)
+               cb = dev->type->pm->runtime_resume;
+       else if (dev->class && dev->class->pm)
+               cb = dev->class->pm->runtime_resume;
+       else if (dev->bus && dev->bus->pm)
+               cb = dev->bus->pm->runtime_resume;
+       else
+               cb = NULL;
+
+       if (!cb && dev->driver && dev->driver->pm)
+               cb = dev->driver->pm->runtime_resume;
+
+       return cb ? cb(dev) : 0;
+}
+
+/**
+ * genpd_runtime_suspend - Suspend a device belonging to I/O PM domain.
  * @dev: Device to suspend.
  *
  * Carry out a runtime suspend of a device under the assumption that its
  * pm_domain field points to the domain member of an object of type
  * struct generic_pm_domain representing a PM domain consisting of I/O devices.
  */
-static int pm_genpd_runtime_suspend(struct device *dev)
+static int genpd_runtime_suspend(struct device *dev)
 {
        struct generic_pm_domain *genpd;
-       bool (*stop_ok)(struct device *__dev);
+       bool (*suspend_ok)(struct device *__dev);
        struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
        bool runtime_pm = pm_runtime_enabled(dev);
        ktime_t time_start;
@@ -401,21 +436,21 @@ static int genpd_runtime_suspend(struct device *dev)
         * runtime PM is disabled. Under these circumstances, we shall skip
         * validating/measuring the PM QoS latency.
         */
-       stop_ok = genpd->gov ? genpd->gov->stop_ok : NULL;
-       if (runtime_pm && stop_ok && !stop_ok(dev))
+       suspend_ok = genpd->gov ? genpd->gov->suspend_ok : NULL;
+       if (runtime_pm && suspend_ok && !suspend_ok(dev))
                return -EBUSY;
 
        /* Measure suspend latency. */
        if (runtime_pm)
                time_start = ktime_get();
 
-       ret = genpd_save_dev(genpd, dev);
+       ret = __genpd_runtime_suspend(dev);
        if (ret)
                return ret;
 
        ret = genpd_stop_dev(genpd, dev);
        if (ret) {
-               genpd_restore_dev(genpd, dev);
+               __genpd_runtime_resume(dev);
                return ret;
        }
@@ -446,14 +481,14 @@ static int genpd_runtime_suspend(struct device *dev)
 }
 
 /**
- * pm_genpd_runtime_resume - Resume a device belonging to I/O PM domain.
+ * genpd_runtime_resume - Resume a device belonging to I/O PM domain.
  * @dev: Device to resume.
  *
  * Carry out a runtime resume of a device under the assumption that its
  * pm_domain field points to the domain member of an object of type
  * struct generic_pm_domain representing a PM domain consisting of I/O devices.
  */
-static int pm_genpd_runtime_resume(struct device *dev)
+static int genpd_runtime_resume(struct device *dev)
 {
        struct generic_pm_domain *genpd;
        struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
@@ -491,7 +526,7 @@ static int genpd_runtime_resume(struct device *dev)
        if (ret)
                goto err_poweroff;
 
-       ret = genpd_restore_dev(genpd, dev);
+       ret = __genpd_runtime_resume(dev);
        if (ret)
                goto err_stop;
@@ -695,15 +730,6 @@ static int pm_genpd_prepare(struct device *dev)
         * at this point and a system wakeup event should be reported if it's
         * set up to wake up the system from sleep states.
         */
-       pm_runtime_get_noresume(dev);
-       if (pm_runtime_barrier(dev) && device_may_wakeup(dev))
-               pm_wakeup_event(dev, 0);
-
-       if (pm_wakeup_pending()) {
-               pm_runtime_put(dev);
-               return -EBUSY;
-       }
-
        if (resume_needed(dev, genpd))
                pm_runtime_resume(dev);
@@ -716,10 +742,8 @@ static int pm_genpd_prepare(struct device *dev)
 
        mutex_unlock(&genpd->lock);
 
-       if (genpd->suspend_power_off) {
-               pm_runtime_put_noidle(dev);
+       if (genpd->suspend_power_off)
                return 0;
-       }
 
        /*
         * The PM domain must be in the GPD_STATE_ACTIVE state at this point,
@@ -741,7 +765,6 @@ static int pm_genpd_prepare(struct device *dev)
                pm_runtime_enable(dev);
        }
 
-       pm_runtime_put(dev);
        return ret;
 }
@@ -1427,54 +1450,6 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd,
 }
 EXPORT_SYMBOL_GPL(pm_genpd_remove_subdomain);
 
-/* Default device callbacks for generic PM domains. */
-
-/**
- * pm_genpd_default_save_state - Default "save device state" for PM domains.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_save_state(struct device *dev)
-{
-       int (*cb)(struct device *__dev);
-
-       if (dev->type && dev->type->pm)
-               cb = dev->type->pm->runtime_suspend;
-       else if (dev->class && dev->class->pm)
-               cb = dev->class->pm->runtime_suspend;
-       else if (dev->bus && dev->bus->pm)
-               cb = dev->bus->pm->runtime_suspend;
-       else
-               cb = NULL;
-
-       if (!cb && dev->driver && dev->driver->pm)
-               cb = dev->driver->pm->runtime_suspend;
-
-       return cb ? cb(dev) : 0;
-}
-
-/**
- * pm_genpd_default_restore_state - Default PM domains "restore device state".
- * @dev: Device to handle.
- */
-static int pm_genpd_default_restore_state(struct device *dev)
-{
-       int (*cb)(struct device *__dev);
-
-       if (dev->type && dev->type->pm)
-               cb = dev->type->pm->runtime_resume;
-       else if (dev->class && dev->class->pm)
-               cb = dev->class->pm->runtime_resume;
-       else if (dev->bus && dev->bus->pm)
-               cb = dev->bus->pm->runtime_resume;
-       else
-               cb = NULL;
-
-       if (!cb && dev->driver && dev->driver->pm)
-               cb = dev->driver->pm->runtime_resume;
-
-       return cb ? cb(dev) : 0;
-}
-
 /**
  * pm_genpd_init - Initialize a generic I/O PM domain object.
  * @genpd: PM domain object to initialize.
@@ -1498,8 +1473,8 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
        genpd->device_count = 0;
        genpd->max_off_time_ns = -1;
        genpd->max_off_time_changed = true;
-       genpd->domain.ops.runtime_suspend = pm_genpd_runtime_suspend;
-       genpd->domain.ops.runtime_resume = pm_genpd_runtime_resume;
+       genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
+       genpd->domain.ops.runtime_resume = genpd_runtime_resume;
        genpd->domain.ops.prepare = pm_genpd_prepare;
        genpd->domain.ops.suspend = pm_genpd_suspend;
        genpd->domain.ops.suspend_late = pm_genpd_suspend_late;
@@ -1520,8 +1495,6 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
        genpd->domain.ops.restore_early = pm_genpd_resume_early;
        genpd->domain.ops.restore = pm_genpd_resume;
        genpd->domain.ops.complete = pm_genpd_complete;
-       genpd->dev_ops.save_state = pm_genpd_default_save_state;
-       genpd->dev_ops.restore_state = pm_genpd_default_restore_state;
 
        if (genpd->flags & GENPD_FLAG_PM_CLK) {
                genpd->dev_ops.stop = pm_clk_suspend;
...
@@ -37,10 +37,10 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
 }
 
 /**
- * default_stop_ok - Default PM domain governor routine for stopping devices.
+ * default_suspend_ok - Default PM domain governor routine to suspend devices.
  * @dev: Device to check.
  */
-static bool default_stop_ok(struct device *dev)
+static bool default_suspend_ok(struct device *dev)
 {
        struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
        unsigned long flags;
@@ -51,13 +51,13 @@ static bool default_stop_ok(struct device *dev)
 
        spin_lock_irqsave(&dev->power.lock, flags);
 
        if (!td->constraint_changed) {
-               bool ret = td->cached_stop_ok;
+               bool ret = td->cached_suspend_ok;
 
                spin_unlock_irqrestore(&dev->power.lock, flags);
                return ret;
        }
 
        td->constraint_changed = false;
-       td->cached_stop_ok = false;
+       td->cached_suspend_ok = false;
        td->effective_constraint_ns = -1;
        constraint_ns = __dev_pm_qos_read_value(dev);
 
@@ -83,13 +83,13 @@ static bool default_stop_ok(struct device *dev)
                return false;
        }
        td->effective_constraint_ns = constraint_ns;
-       td->cached_stop_ok = constraint_ns >= 0;
+       td->cached_suspend_ok = constraint_ns >= 0;
 
        /*
         * The children have been suspended already, so we don't need to take
-        * their stop latencies into account here.
+        * their suspend latencies into account here.
         */
-       return td->cached_stop_ok;
+       return td->cached_suspend_ok;
 }
 
 /**
@@ -150,7 +150,7 @@ static bool __default_power_down_ok(struct dev_pm_domain *pd,
                 */
                td = &to_gpd_data(pdd)->td;
                constraint_ns = td->effective_constraint_ns;
-               /* default_stop_ok() need not be called before us. */
+               /* default_suspend_ok() need not be called before us. */
                if (constraint_ns < 0) {
                        constraint_ns = dev_pm_qos_read_value(pdd->dev);
                        constraint_ns *= NSEC_PER_USEC;
@@ -227,7 +227,7 @@ static bool always_on_power_down_ok(struct dev_pm_domain *domain)
 }
 
 struct dev_power_governor simple_qos_governor = {
-       .stop_ok = default_stop_ok,
+       .suspend_ok = default_suspend_ok,
        .power_down_ok = default_power_down_ok,
 };
 
@@ -236,5 +236,5 @@ struct dev_power_governor simple_qos_governor = {
  */
 struct dev_power_governor pm_domain_always_on_gov = {
        .power_down_ok = always_on_power_down_ok,
-       .stop_ok = default_stop_ok,
+       .suspend_ok = default_suspend_ok,
 };
@@ -1556,7 +1556,6 @@ int dpm_suspend(pm_message_t state)
 static int device_prepare(struct device *dev, pm_message_t state)
 {
        int (*callback)(struct device *) = NULL;
-       char *info = NULL;
        int ret = 0;
 
        if (dev->power.syscore)
@@ -1579,24 +1578,17 @@ static int device_prepare(struct device *dev, pm_message_t state)
                goto unlock;
        }
 
-       if (dev->pm_domain) {
-               info = "preparing power domain ";
+       if (dev->pm_domain)
                callback = dev->pm_domain->ops.prepare;
-       } else if (dev->type && dev->type->pm) {
-               info = "preparing type ";
+       else if (dev->type && dev->type->pm)
                callback = dev->type->pm->prepare;
-       } else if (dev->class && dev->class->pm) {
-               info = "preparing class ";
+       else if (dev->class && dev->class->pm)
                callback = dev->class->pm->prepare;
-       } else if (dev->bus && dev->bus->pm) {
-               info = "preparing bus ";
+       else if (dev->bus && dev->bus->pm)
                callback = dev->bus->pm->prepare;
-       }
 
-       if (!callback && dev->driver && dev->driver->pm) {
-               info = "preparing driver ";
+       if (!callback && dev->driver && dev->driver->pm)
                callback = dev->driver->pm->prepare;
-       }
 
        if (callback)
                ret = callback(dev);
...
 ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
 obj-y                          += core.o cpu.o
+obj-$(CONFIG_OF)               += of.o
 obj-$(CONFIG_DEBUG_FS)         += debugfs.o
...
@@ -18,7 +18,6 @@
 #include <linux/err.h>
 #include <linux/errno.h>
 #include <linux/export.h>
-#include <linux/of.h>
 #include <linux/slab.h>
 
 #include "opp.h"
@@ -119,8 +118,66 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_free_cpufreq_table);
 #endif /* CONFIG_CPU_FREQ */
 
-/* Required only for V1 bindings, as v2 can manage it from DT itself */
-int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
+void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, bool of)
+{
+       struct device *cpu_dev;
+       int cpu;
+
+       WARN_ON(cpumask_empty(cpumask));
+
+       for_each_cpu(cpu, cpumask) {
+               cpu_dev = get_cpu_device(cpu);
+               if (!cpu_dev) {
+                       pr_err("%s: failed to get cpu%d device\n", __func__,
+                              cpu);
+                       continue;
+               }
+
+               if (of)
+                       dev_pm_opp_of_remove_table(cpu_dev);
+               else
+                       dev_pm_opp_remove_table(cpu_dev);
+       }
+}
+
+/**
+ * dev_pm_opp_cpumask_remove_table() - Removes OPP table for @cpumask
+ * @cpumask: cpumask for which OPP table needs to be removed
+ *
+ * This removes the OPP tables for CPUs present in the @cpumask.
+ * This should be used to remove all the OPPs entries associated with
+ * the cpus in @cpumask.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+void dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask)
+{
+       _dev_pm_opp_cpumask_remove_table(cpumask, false);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_cpumask_remove_table);
+
+/**
+ * dev_pm_opp_set_sharing_cpus() - Mark OPP table as shared by few CPUs
+ * @cpu_dev: CPU device for which we do this operation
+ * @cpumask: cpumask of the CPUs which share the OPP table with @cpu_dev
+ *
+ * This marks OPP table of the @cpu_dev as shared by the CPUs present in
+ * @cpumask.
+ *
+ * Returns -ENODEV if OPP table isn't already present.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
+                               const struct cpumask *cpumask)
 {
        struct opp_device *opp_dev;
        struct opp_table *opp_table;
@@ -131,7 +188,7 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
 
        opp_table = _find_opp_table(cpu_dev);
        if (IS_ERR(opp_table)) {
-               ret = -EINVAL;
+               ret = PTR_ERR(opp_table);
                goto unlock;
        }
@@ -152,6 +209,9 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
                               __func__, cpu);
                        continue;
                }
+
+               /* Mark opp-table as multiple CPUs are sharing it now */
+               opp_table->shared_opp = true;
        }
 unlock:
        mutex_unlock(&opp_table_lock);
@@ -160,112 +220,47 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_set_sharing_cpus);
 
-#ifdef CONFIG_OF
-void dev_pm_opp_of_cpumask_remove_table(cpumask_var_t cpumask)
-{
-       struct device *cpu_dev;
-       int cpu;
-
-       WARN_ON(cpumask_empty(cpumask));
-
-       for_each_cpu(cpu, cpumask) {
-               cpu_dev = get_cpu_device(cpu);
-               if (!cpu_dev) {
-                       pr_err("%s: failed to get cpu%d device\n", __func__,
-                              cpu);
-                       continue;
-               }
-
-               dev_pm_opp_of_remove_table(cpu_dev);
-       }
-}
-EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_remove_table);
-
-int dev_pm_opp_of_cpumask_add_table(cpumask_var_t cpumask)
-{
-       struct device *cpu_dev;
-       int cpu, ret = 0;
-
-       WARN_ON(cpumask_empty(cpumask));
-
-       for_each_cpu(cpu, cpumask) {
-               cpu_dev = get_cpu_device(cpu);
-               if (!cpu_dev) {
-                       pr_err("%s: failed to get cpu%d device\n", __func__,
-                              cpu);
-                       continue;
-               }
-
-               ret = dev_pm_opp_of_add_table(cpu_dev);
-               if (ret) {
-                       pr_err("%s: couldn't find opp table for cpu:%d, %d\n",
-                              __func__, cpu, ret);
-
-                       /* Free all other OPPs */
-                       dev_pm_opp_of_cpumask_remove_table(cpumask);
-                       break;
-               }
-       }
-
-       return ret;
-}
-EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table);
-
-/*
- * Works only for OPP v2 bindings.
- *
- * Returns -ENOENT if operating-points-v2 bindings aren't supported.
- */
-int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
-{
-       struct device_node *np, *tmp_np;
-       struct device *tcpu_dev;
-       int cpu, ret = 0;
-
-       /* Get OPP descriptor node */
-       np = _of_get_opp_desc_node(cpu_dev);
-       if (!np) {
-               dev_dbg(cpu_dev, "%s: Couldn't find cpu_dev node.\n", __func__);
-               return -ENOENT;
-       }
-
-       cpumask_set_cpu(cpu_dev->id, cpumask);
-
-       /* OPPs are shared ? */
-       if (!of_property_read_bool(np, "opp-shared"))
-               goto put_cpu_node;
-
-       for_each_possible_cpu(cpu) {
-               if (cpu == cpu_dev->id)
-                       continue;
-
-               tcpu_dev = get_cpu_device(cpu);
-               if (!tcpu_dev) {
-                       dev_err(cpu_dev, "%s: failed to get cpu%d device\n",
-                               __func__, cpu);
-                       ret = -ENODEV;
-                       goto put_cpu_node;
-               }
-
-               /* Get OPP descriptor node */
-               tmp_np = _of_get_opp_desc_node(tcpu_dev);
-               if (!tmp_np) {
-                       dev_err(tcpu_dev, "%s: Couldn't find tcpu_dev node.\n",
-                               __func__);
-                       ret = -ENOENT;
-                       goto put_cpu_node;
-               }
-
-               /* CPUs are sharing opp node */
-               if (np == tmp_np)
-                       cpumask_set_cpu(cpu, cpumask);
-
-               of_node_put(tmp_np);
-       }
-
-put_cpu_node:
-       of_node_put(np);
-       return ret;
-}
-EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_sharing_cpus);
-#endif
+/**
+ * dev_pm_opp_get_sharing_cpus() - Get cpumask of CPUs sharing OPPs with @cpu_dev
+ * @cpu_dev: CPU device for which we do this operation
+ * @cpumask: cpumask to update with information of sharing CPUs
+ *
+ * This updates the @cpumask with CPUs that are sharing OPPs with @cpu_dev.
+ *
+ * Returns -ENODEV if OPP table isn't already present.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
+{
+       struct opp_device *opp_dev;
+       struct opp_table *opp_table;
+       int ret = 0;
+
+       mutex_lock(&opp_table_lock);
+
+       opp_table = _find_opp_table(cpu_dev);
+       if (IS_ERR(opp_table)) {
+               ret = PTR_ERR(opp_table);
+               goto unlock;
+       }
+
+       cpumask_clear(cpumask);
+
+       if (opp_table->shared_opp) {
+               list_for_each_entry(opp_dev, &opp_table->dev_list, node)
+                       cpumask_set_cpu(opp_dev->dev->id, cpumask);
+       } else {
+               cpumask_set_cpu(cpu_dev->id, cpumask);
+       }
+
+unlock:
+       mutex_unlock(&opp_table_lock);
+
+       return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_get_sharing_cpus);
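
The hunk above replaces the OF-only cpumask helpers with generic ones. A
hedged sketch of how a cpufreq driver might use the new pair after registering
OPPs dynamically ("foo" names are placeholders; error handling trimmed):

    #include <linux/cpu.h>
    #include <linux/cpufreq.h>
    #include <linux/pm_opp.h>

    static int foo_cpufreq_init(struct cpufreq_policy *policy)
    {
            struct device *cpu_dev = get_cpu_device(policy->cpu);
            int ret;

            /* Mark the table built with dev_pm_opp_add() as shared... */
            ret = dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
            if (ret)
                    return ret;

            /* ...so any consumer can later recover the sharing mask. */
            return dev_pm_opp_get_sharing_cpus(cpu_dev, policy->cpus);
    }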
...
@@ -28,6 +28,8 @@ struct regulator;
 /* Lock to allow exclusive modification to the device and opp lists */
 extern struct mutex opp_table_lock;
 
+extern struct list_head opp_tables;
+
 /*
  * Internal data structure organization with the OPP layer library is as
  * follows:
@@ -183,6 +185,18 @@ struct opp_table {
 struct opp_table *_find_opp_table(struct device *dev);
 struct opp_device *_add_opp_dev(const struct device *dev, struct opp_table *opp_table);
 struct device_node *_of_get_opp_desc_node(struct device *dev);
+void _dev_pm_opp_remove_table(struct device *dev, bool remove_all);
+struct dev_pm_opp *_allocate_opp(struct device *dev, struct opp_table **opp_table);
+int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, struct opp_table *opp_table);
+void _opp_remove(struct opp_table *opp_table, struct dev_pm_opp *opp, bool notify);
+int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt, bool dynamic);
+void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, bool of);
+
+#ifdef CONFIG_OF
+void _of_init_opp_table(struct opp_table *opp_table, struct device *dev);
+#else
+static inline void _of_init_opp_table(struct opp_table *opp_table, struct device *dev) {}
+#endif
 
 #ifdef CONFIG_DEBUG_FS
 void opp_debug_remove_one(struct dev_pm_opp *opp);
...
@@ -1506,11 +1506,16 @@ int pm_runtime_force_resume(struct device *dev)
                goto out;
        }
 
-       ret = callback(dev);
+       ret = pm_runtime_set_active(dev);
        if (ret)
                goto out;
 
-       pm_runtime_set_active(dev);
+       ret = callback(dev);
+       if (ret) {
+               pm_runtime_set_suspended(dev);
+               goto out;
+       }
+
        pm_runtime_mark_last_busy(dev);
 out:
        pm_runtime_enable(dev);
...
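
For reference, pm_runtime_force_suspend()/pm_runtime_force_resume() are
typically wired up as a driver's system-sleep callbacks; a sketch (not from
this series; the foo_* runtime callbacks are placeholders):

    #include <linux/pm.h>
    #include <linux/pm_runtime.h>

    static const struct dev_pm_ops foo_pm_ops = {
            SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                    pm_runtime_force_resume)
            SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
    };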
@@ -18,7 +18,11 @@ config CPU_FREQ
 
 if CPU_FREQ
 
+config CPU_FREQ_GOV_ATTR_SET
+       bool
+
 config CPU_FREQ_GOV_COMMON
+       select CPU_FREQ_GOV_ATTR_SET
        select IRQ_WORK
        bool
 
@@ -103,6 +107,17 @@ config CPU_FREQ_DEFAULT_GOV_CONSERVATIVE
          Be aware that not all cpufreq drivers support the conservative
          governor. If unsure have a look at the help section of the
          driver. Fallback governor will be the performance governor.
+
+config CPU_FREQ_DEFAULT_GOV_SCHEDUTIL
+       bool "schedutil"
+       depends on SMP
+       select CPU_FREQ_GOV_SCHEDUTIL
+       select CPU_FREQ_GOV_PERFORMANCE
+       help
+         Use the 'schedutil' CPUFreq governor by default. If unsure,
+         have a look at the help section of that governor. The fallback
+         governor will be 'performance'.
+
 endchoice
 
 config CPU_FREQ_GOV_PERFORMANCE
@@ -184,6 +199,26 @@ config CPU_FREQ_GOV_CONSERVATIVE
 
          If in doubt, say N.
 
+config CPU_FREQ_GOV_SCHEDUTIL
+       tristate "'schedutil' cpufreq policy governor"
+       depends on CPU_FREQ && SMP
+       select CPU_FREQ_GOV_ATTR_SET
+       select IRQ_WORK
+       help
+         This governor makes decisions based on the utilization data provided
+         by the scheduler.  It sets the CPU frequency to be proportional to
+         the utilization/capacity ratio coming from the scheduler.  If the
+         utilization is frequency-invariant, the new frequency is also
+         proportional to the maximum available frequency.  If that is not the
+         case, it is proportional to the current frequency of the CPU.  The
+         frequency tipping point is at utilization/capacity equal to 80% in
+         both cases.
+
+         To compile this driver as a module, choose M here: the module will
+         be called cpufreq_schedutil.
+
+         If in doubt, say N.
+
 comment "CPU frequency scaling drivers"
 
 config CPUFREQ_DT
@@ -191,6 +226,7 @@ config CPUFREQ_DT
        depends on HAVE_CLK && OF
        # if CPU_THERMAL is on and THERMAL=m, CPUFREQ_DT cannot be =y:
        depends on !CPU_THERMAL || THERMAL
+       select CPUFREQ_DT_PLATDEV
        select PM_OPP
        help
          This adds a generic DT based cpufreq driver for frequency management.
@@ -199,6 +235,15 @@ config CPUFREQ_DT
 
          If in doubt, say N.
 
+config CPUFREQ_DT_PLATDEV
+       bool
+       help
+         This adds a generic DT based cpufreq platdev driver for frequency
+         management.  This creates a 'cpufreq-dt' platform device, on the
+         supported platforms.
+
+         If in doubt, say N.
+
 if X86
 source "drivers/cpufreq/Kconfig.x86"
 endif
...
@@ -50,15 +50,6 @@ config ARM_HIGHBANK_CPUFREQ
 
          If in doubt, say N.
 
-config ARM_HISI_ACPU_CPUFREQ
-       tristate "Hisilicon ACPU CPUfreq driver"
-       depends on ARCH_HISI && CPUFREQ_DT
-       select PM_OPP
-       help
-         This enables the hisilicon ACPU CPUfreq driver.
-
-         If in doubt, say N.
-
 config ARM_IMX6Q_CPUFREQ
        tristate "Freescale i.MX6 cpufreq support"
        depends on ARCH_MXC
...
...@@ -5,6 +5,7 @@ ...@@ -5,6 +5,7 @@
config X86_INTEL_PSTATE config X86_INTEL_PSTATE
bool "Intel P state control" bool "Intel P state control"
depends on X86 depends on X86
select ACPI_PROCESSOR if ACPI
help
This driver provides a P state for Intel core processors.
The driver implements an internal governor and will become
@@ -11,8 +11,10 @@ obj-$(CONFIG_CPU_FREQ_GOV_USERSPACE) += cpufreq_userspace.o
obj-$(CONFIG_CPU_FREQ_GOV_ONDEMAND) += cpufreq_ondemand.o
obj-$(CONFIG_CPU_FREQ_GOV_CONSERVATIVE) += cpufreq_conservative.o
obj-$(CONFIG_CPU_FREQ_GOV_COMMON) += cpufreq_governor.o
obj-$(CONFIG_CPU_FREQ_GOV_ATTR_SET) += cpufreq_governor_attr_set.o
obj-$(CONFIG_CPUFREQ_DT) += cpufreq-dt.o
obj-$(CONFIG_CPUFREQ_DT_PLATDEV) += cpufreq-dt-platdev.o
##################################################################################
# x86 drivers.
@@ -53,7 +55,6 @@ obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o
obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o
obj-$(CONFIG_ARM_EXYNOS5440_CPUFREQ) += exynos5440-cpufreq.o
obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ) += highbank-cpufreq.o
obj-$(CONFIG_ARM_HISI_ACPU_CPUFREQ) += hisi-acpu-cpufreq.o
obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o
obj-$(CONFIG_ARM_INTEGRATOR) += integrator-cpufreq.o
obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ) += kirkwood-cpufreq.o
@@ -78,6 +79,7 @@ obj-$(CONFIG_ARM_TEGRA20_CPUFREQ) += tegra20-cpufreq.o
obj-$(CONFIG_ARM_TEGRA124_CPUFREQ) += tegra124-cpufreq.o
obj-$(CONFIG_ARM_VEXPRESS_SPC_CPUFREQ) += vexpress-spc-cpufreq.o
obj-$(CONFIG_ACPI_CPPC_CPUFREQ) += cppc_cpufreq.o
obj-$(CONFIG_MACH_MVEBU_V7) += mvebu-cpufreq.o
##################################################################################
@@ -25,6 +25,8 @@
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@@ -50,8 +52,6 @@ MODULE_AUTHOR("Paul Diefenbaugh, Dominik Brodowski");
MODULE_DESCRIPTION("ACPI Processor P-States Driver");
MODULE_LICENSE("GPL");
#define PFX "acpi-cpufreq: "
enum {
UNDEFINED_CAPABLE = 0,
SYSTEM_INTEL_MSR_CAPABLE,
@@ -65,7 +65,6 @@ enum {
#define MSR_K7_HWCR_CPB_DIS (1ULL << 25)
struct acpi_cpufreq_data {
struct cpufreq_frequency_table *freq_table;
unsigned int resume;
unsigned int cpu_feature;
unsigned int acpi_perf_cpu;
@@ -200,8 +199,9 @@ static int check_amd_hwpstate_cpu(unsigned int cpuid)
return cpu_has(cpu, X86_FEATURE_HW_PSTATE);
}
static unsigned extract_io(u32 value, struct acpi_cpufreq_data *data)
static unsigned extract_io(struct cpufreq_policy *policy, u32 value)
{
struct acpi_cpufreq_data *data = policy->driver_data;
struct acpi_processor_performance *perf;
int i;
@@ -209,13 +209,14 @@ static unsigned extract_io(u32 value, struct acpi_cpufreq_data *data)
for (i = 0; i < perf->state_count; i++) {
if (value == perf->states[i].status)
return data->freq_table[i].frequency;
return policy->freq_table[i].frequency;
}
return 0;
}
static unsigned extract_msr(u32 msr, struct acpi_cpufreq_data *data)
static unsigned extract_msr(struct cpufreq_policy *policy, u32 msr)
{
struct acpi_cpufreq_data *data = policy->driver_data;
struct cpufreq_frequency_table *pos;
struct acpi_processor_performance *perf;
@@ -226,20 +227,22 @@ static unsigned extract_msr(u32 msr, struct acpi_cpufreq_data *data)
perf = to_perf_data(data);
cpufreq_for_each_entry(pos, data->freq_table)
cpufreq_for_each_entry(pos, policy->freq_table)
if (msr == perf->states[pos->driver_data].status)
return pos->frequency;
return data->freq_table[0].frequency;
return policy->freq_table[0].frequency;
}
static unsigned extract_freq(u32 val, struct acpi_cpufreq_data *data)
static unsigned extract_freq(struct cpufreq_policy *policy, u32 val)
{
struct acpi_cpufreq_data *data = policy->driver_data;
switch (data->cpu_feature) {
case SYSTEM_INTEL_MSR_CAPABLE:
case SYSTEM_AMD_MSR_CAPABLE:
return extract_msr(val, data);
return extract_msr(policy, val);
case SYSTEM_IO_CAPABLE:
return extract_io(val, data);
return extract_io(policy, val);
default:
return 0;
}
@@ -374,11 +377,11 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
return 0;
data = policy->driver_data;
if (unlikely(!data || !data->freq_table))
if (unlikely(!data || !policy->freq_table))
return 0;
cached_freq = data->freq_table[to_perf_data(data)->state].frequency;
cached_freq = policy->freq_table[to_perf_data(data)->state].frequency;
freq = extract_freq(get_cur_val(cpumask_of(cpu), data), data);
freq = extract_freq(policy, get_cur_val(cpumask_of(cpu), data));
if (freq != cached_freq) {
/*
* The dreaded BIOS frequency change behind our back.
@@ -392,14 +395,15 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
return freq;
}
static unsigned int check_freqs(const struct cpumask *mask, unsigned int freq,
struct acpi_cpufreq_data *data)
static unsigned int check_freqs(struct cpufreq_policy *policy,
const struct cpumask *mask, unsigned int freq)
{
struct acpi_cpufreq_data *data = policy->driver_data;
unsigned int cur_freq;
unsigned int i;
for (i = 0; i < 100; i++) {
cur_freq = extract_freq(get_cur_val(mask, data), data);
cur_freq = extract_freq(policy, get_cur_val(mask, data));
if (cur_freq == freq)
return 1;
udelay(10);
@@ -416,12 +420,12 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
unsigned int next_perf_state = 0; /* Index into perf table */
int result = 0;
if (unlikely(data == NULL || data->freq_table == NULL)) {
if (unlikely(!data)) {
return -ENODEV;
}
perf = to_perf_data(data);
next_perf_state = data->freq_table[index].driver_data;
next_perf_state = policy->freq_table[index].driver_data;
if (perf->state == next_perf_state) {
if (unlikely(data->resume)) {
pr_debug("Called after resume, resetting to P%d\n",
@@ -444,8 +448,8 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
drv_write(data, mask, perf->states[next_perf_state].control);
if (acpi_pstate_strict) {
if (!check_freqs(mask, data->freq_table[index].frequency,
data)) {
if (!check_freqs(policy, mask,
policy->freq_table[index].frequency)) {
pr_debug("acpi_cpufreq_target failed (%d)\n",
policy->cpu);
result = -EAGAIN;
@@ -458,6 +462,43 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
return result;
}
static unsigned int acpi_cpufreq_fast_switch(struct cpufreq_policy *policy,
unsigned int target_freq)
{
struct acpi_cpufreq_data *data = policy->driver_data;
struct acpi_processor_performance *perf;
struct cpufreq_frequency_table *entry;
unsigned int next_perf_state, next_freq, freq;
/*
* Find the closest frequency above target_freq.
*
* The table is sorted in the reverse order with respect to the
* frequency and all of the entries are valid (see the initialization).
*/
entry = policy->freq_table;
do {
entry++;
freq = entry->frequency;
} while (freq >= target_freq && freq != CPUFREQ_TABLE_END);
entry--;
next_freq = entry->frequency;
next_perf_state = entry->driver_data;
perf = to_perf_data(data);
if (perf->state == next_perf_state) {
if (unlikely(data->resume))
data->resume = 0;
else
return next_freq;
}
data->cpu_freq_write(&perf->control_register,
perf->states[next_perf_state].control);
perf->state = next_perf_state;
return next_freq;
}
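To make the table walk above concrete, here is a standalone userspace sketch of the same selection logic (illustrative only; the table values are made up and TABLE_END stands in for CPUFREQ_TABLE_END):

#include <stdio.h>

#define TABLE_END (~0u)	/* stand-in for CPUFREQ_TABLE_END in this sketch */

/* Hypothetical helper mirroring the walk in acpi_cpufreq_fast_switch(). */
static unsigned int pick_freq(const unsigned int *table, unsigned int target)
{
	const unsigned int *entry = table;
	unsigned int freq;

	do {
		entry++;
		freq = *entry;
	} while (freq >= target && freq != TABLE_END);

	return *--entry;	/* lowest frequency >= target (RELATION_L) */
}

int main(void)
{
	/* Reverse-sorted, as the driver builds it from ACPI _PSS data. */
	unsigned int table[] = { 2000000, 1500000, 1000000, TABLE_END };

	printf("%u\n", pick_freq(table, 1400000));	/* prints 1500000 */
	return 0;
}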
static unsigned long
acpi_cpufreq_guess_freq(struct acpi_cpufreq_data *data, unsigned int cpu)
{
@@ -611,10 +652,7 @@ static int acpi_cpufreq_blacklist(struct cpuinfo_x86 *c)
if ((c->x86 == 15) &&
(c->x86_model == 6) &&
(c->x86_mask == 8)) {
printk(KERN_INFO "acpi-cpufreq: Intel(R) " pr_info("Intel(R) Xeon(R) 7100 Errata AL30, processors may lock up on frequency changes: disabling acpi-cpufreq\n");
"Xeon(R) 7100 Errata AL30, processors may "
"lock up on frequency changes: disabling "
"acpi-cpufreq.\n");
return -ENODEV;
}
}
@@ -631,6 +669,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
unsigned int result = 0;
struct cpuinfo_x86 *c = &cpu_data(policy->cpu);
struct acpi_processor_performance *perf;
struct cpufreq_frequency_table *freq_table;
#ifdef CONFIG_SMP
static int blacklisted;
#endif
@@ -690,7 +729,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
cpumask_copy(data->freqdomain_cpus,
topology_sibling_cpumask(cpu));
policy->shared_type = CPUFREQ_SHARED_TYPE_HW;
pr_info_once(PFX "overriding BIOS provided _PSD data\n");
pr_info_once("overriding BIOS provided _PSD data\n");
}
#endif
@@ -742,9 +781,9 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
goto err_unreg;
}
data->freq_table = kzalloc(sizeof(*data->freq_table) *
freq_table = kzalloc(sizeof(*freq_table) *
(perf->state_count+1), GFP_KERNEL);
if (!data->freq_table) {
if (!freq_table) {
result = -ENOMEM;
goto err_unreg;
}
@@ -762,30 +801,29 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
if (perf->control_register.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE &&
policy->cpuinfo.transition_latency > 20 * 1000) {
policy->cpuinfo.transition_latency = 20 * 1000;
printk_once(KERN_INFO
"P-state transition latency capped at 20 uS\n");
pr_info_once("P-state transition latency capped at 20 uS\n");
}
/* table init */
for (i = 0; i < perf->state_count; i++) {
if (i > 0 && perf->states[i].core_frequency >=
data->freq_table[valid_states-1].frequency / 1000)
freq_table[valid_states-1].frequency / 1000)
continue;
data->freq_table[valid_states].driver_data = i;
freq_table[valid_states].driver_data = i;
data->freq_table[valid_states].frequency =
freq_table[valid_states].frequency =
perf->states[i].core_frequency * 1000;
valid_states++;
}
data->freq_table[valid_states].frequency = CPUFREQ_TABLE_END;
freq_table[valid_states].frequency = CPUFREQ_TABLE_END;
perf->state = 0;
result = cpufreq_table_validate_and_show(policy, data->freq_table);
result = cpufreq_table_validate_and_show(policy, freq_table);
if (result)
goto err_freqfree;
if (perf->states[0].core_frequency * 1000 != policy->cpuinfo.max_freq)
printk(KERN_WARNING FW_WARN "P-state 0 is not max freq\n");
pr_warn(FW_WARN "P-state 0 is not max freq\n");
switch (perf->control_register.space_id) {
case ACPI_ADR_SPACE_SYSTEM_IO:
@@ -821,10 +859,13 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
*/
data->resume = 1;
policy->fast_switch_possible = !acpi_pstate_strict &&
!(policy_is_shared(policy) && policy->shared_type != CPUFREQ_SHARED_TYPE_ANY);
return result;
err_freqfree:
kfree(data->freq_table);
kfree(freq_table);
err_unreg:
acpi_processor_unregister_performance(cpu);
err_free_mask:
@@ -842,13 +883,12 @@ static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
pr_debug("acpi_cpufreq_cpu_exit\n");
if (data) {
policy->fast_switch_possible = false;
policy->driver_data = NULL;
acpi_processor_unregister_performance(data->acpi_perf_cpu);
free_cpumask_var(data->freqdomain_cpus);
kfree(data->freq_table);
kfree(policy->freq_table);
kfree(data);
}
return 0;
}
@@ -876,6 +916,7 @@ static struct freq_attr *acpi_cpufreq_attr[] = {
static struct cpufreq_driver acpi_cpufreq_driver = {
.verify = cpufreq_generic_frequency_table_verify,
.target_index = acpi_cpufreq_target,
.fast_switch = acpi_cpufreq_fast_switch,
.bios_limit = acpi_processor_get_bios_limit,
.init = acpi_cpufreq_cpu_init,
.exit = acpi_cpufreq_cpu_exit,
@@ -298,7 +298,8 @@ static int merge_cluster_tables(void)
return 0;
}
static void _put_cluster_clk_and_freq_table(struct device *cpu_dev)
static void _put_cluster_clk_and_freq_table(struct device *cpu_dev,
const struct cpumask *cpumask)
{
u32 cluster = raw_cpu_to_cluster(cpu_dev->id);
@@ -308,11 +309,12 @@ static void _put_cluster_clk_and_freq_table(struct device *cpu_dev)
clk_put(clk[cluster]);
dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table[cluster]);
if (arm_bL_ops->free_opp_table)
arm_bL_ops->free_opp_table(cpu_dev);
arm_bL_ops->free_opp_table(cpumask);
dev_dbg(cpu_dev, "%s: cluster: %d\n", __func__, cluster);
}
static void put_cluster_clk_and_freq_table(struct device *cpu_dev)
static void put_cluster_clk_and_freq_table(struct device *cpu_dev,
const struct cpumask *cpumask)
{
u32 cluster = cpu_to_cluster(cpu_dev->id);
int i;
@@ -321,7 +323,7 @@ static void put_cluster_clk_and_freq_table(struct device *cpu_dev)
return;
if (cluster < MAX_CLUSTERS)
return _put_cluster_clk_and_freq_table(cpu_dev);
return _put_cluster_clk_and_freq_table(cpu_dev, cpumask);
for_each_present_cpu(i) {
struct device *cdev = get_cpu_device(i);
@@ -330,14 +332,15 @@ static void put_cluster_clk_and_freq_table(struct device *cpu_dev)
return;
}
_put_cluster_clk_and_freq_table(cdev);
_put_cluster_clk_and_freq_table(cdev, cpumask);
}
/* free virtual table */
kfree(freq_table[cluster]);
}
static int _get_cluster_clk_and_freq_table(struct device *cpu_dev)
static int _get_cluster_clk_and_freq_table(struct device *cpu_dev,
const struct cpumask *cpumask)
{
u32 cluster = raw_cpu_to_cluster(cpu_dev->id);
int ret;
@@ -345,7 +348,7 @@ static int _get_cluster_clk_and_freq_table(struct device *cpu_dev)
if (freq_table[cluster])
return 0;
ret = arm_bL_ops->init_opp_table(cpu_dev);
ret = arm_bL_ops->init_opp_table(cpumask);
if (ret) {
dev_err(cpu_dev, "%s: init_opp_table failed, cpu: %d, err: %d\n",
__func__, cpu_dev->id, ret);
@@ -374,14 +377,15 @@ static int _get_cluster_clk_and_freq_table(struct device *cpu_dev)
free_opp_table:
if (arm_bL_ops->free_opp_table)
arm_bL_ops->free_opp_table(cpu_dev);
arm_bL_ops->free_opp_table(cpumask);
out:
dev_err(cpu_dev, "%s: Failed to get data for cluster: %d\n", __func__,
cluster);
return ret;
}
static int get_cluster_clk_and_freq_table(struct device *cpu_dev)
static int get_cluster_clk_and_freq_table(struct device *cpu_dev,
const struct cpumask *cpumask)
{
u32 cluster = cpu_to_cluster(cpu_dev->id);
int i, ret;
@@ -390,7 +394,7 @@ static int get_cluster_clk_and_freq_table(struct device *cpu_dev)
return 0;
if (cluster < MAX_CLUSTERS) {
ret = _get_cluster_clk_and_freq_table(cpu_dev);
ret = _get_cluster_clk_and_freq_table(cpu_dev, cpumask);
if (ret)
atomic_dec(&cluster_usage[cluster]);
return ret;
@@ -407,7 +411,7 @@ static int get_cluster_clk_and_freq_table(struct device *cpu_dev)
return -ENODEV;
}
ret = _get_cluster_clk_and_freq_table(cdev);
ret = _get_cluster_clk_and_freq_table(cdev, cpumask);
if (ret)
goto put_clusters;
}
@@ -433,7 +437,7 @@ static int get_cluster_clk_and_freq_table(struct device *cpu_dev)
return -ENODEV;
}
_put_cluster_clk_and_freq_table(cdev);
_put_cluster_clk_and_freq_table(cdev, cpumask);
}
atomic_dec(&cluster_usage[cluster]);
@@ -455,18 +459,6 @@ static int bL_cpufreq_init(struct cpufreq_policy *policy)
return -ENODEV;
}
ret = get_cluster_clk_and_freq_table(cpu_dev);
if (ret)
return ret;
ret = cpufreq_table_validate_and_show(policy, freq_table[cur_cluster]);
if (ret) {
dev_err(cpu_dev, "CPU %d, cluster: %d invalid freq table\n",
policy->cpu, cur_cluster);
put_cluster_clk_and_freq_table(cpu_dev);
return ret;
}
if (cur_cluster < MAX_CLUSTERS) {
int cpu;
@@ -479,6 +471,18 @@ static int bL_cpufreq_init(struct cpufreq_policy *policy)
per_cpu(physical_cluster, policy->cpu) = A15_CLUSTER;
}
ret = get_cluster_clk_and_freq_table(cpu_dev, policy->cpus);
if (ret)
return ret;
ret = cpufreq_table_validate_and_show(policy, freq_table[cur_cluster]);
if (ret) {
dev_err(cpu_dev, "CPU %d, cluster: %d invalid freq table\n",
policy->cpu, cur_cluster);
put_cluster_clk_and_freq_table(cpu_dev, policy->cpus);
return ret;
}
if (arm_bL_ops->get_transition_latency)
policy->cpuinfo.transition_latency =
arm_bL_ops->get_transition_latency(cpu_dev);
@@ -509,7 +513,7 @@ static int bL_cpufreq_exit(struct cpufreq_policy *policy)
return -ENODEV;
}
put_cluster_clk_and_freq_table(cpu_dev);
put_cluster_clk_and_freq_table(cpu_dev, policy->related_cpus);
dev_dbg(cpu_dev, "%s: Exited, cpu: %d\n", __func__, policy->cpu);
return 0;
@@ -30,11 +30,11 @@ struct cpufreq_arm_bL_ops {
* This must set opp table for cpu_dev in a similar way as done by
* dev_pm_opp_of_add_table().
*/
int (*init_opp_table)(struct device *cpu_dev);
int (*init_opp_table)(const struct cpumask *cpumask);
/* Optional */
int (*get_transition_latency)(struct device *cpu_dev);
void (*free_opp_table)(struct device *cpu_dev);
void (*free_opp_table)(const struct cpumask *cpumask);
};
int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops);
@@ -43,23 +43,6 @@ static struct device_node *get_cpu_node_with_valid_op(int cpu)
return np;
}
static int dt_init_opp_table(struct device *cpu_dev)
{
struct device_node *np;
int ret;
np = of_node_get(cpu_dev->of_node);
if (!np) {
pr_err("failed to find cpu%d node\n", cpu_dev->id);
return -ENOENT;
}
ret = dev_pm_opp_of_add_table(cpu_dev);
of_node_put(np);
return ret;
}
static int dt_get_transition_latency(struct device *cpu_dev)
{
struct device_node *np;
@@ -81,8 +64,8 @@ static int dt_get_transition_latency(struct device *cpu_dev)
static struct cpufreq_arm_bL_ops dt_bL_ops = {
.name = "dt-bl",
.get_transition_latency = dt_get_transition_latency,
.init_opp_table = dt_init_opp_table,
.init_opp_table = dev_pm_opp_of_cpumask_add_table,
.free_opp_table = dev_pm_opp_of_remove_table,
.free_opp_table = dev_pm_opp_of_cpumask_remove_table,
};
static int generic_bL_probe(struct platform_device *pdev)
@@ -173,4 +173,25 @@ static int __init cppc_cpufreq_init(void)
return -ENODEV;
}
static void __exit cppc_cpufreq_exit(void)
{
struct cpudata *cpu;
int i;
cpufreq_unregister_driver(&cppc_cpufreq_driver);
for_each_possible_cpu(i) {
cpu = all_cpu_data[i];
free_cpumask_var(cpu->shared_cpu_map);
kfree(cpu);
}
kfree(all_cpu_data);
}
module_exit(cppc_cpufreq_exit);
MODULE_AUTHOR("Ashwin Chaugule");
MODULE_DESCRIPTION("CPUFreq driver based on the ACPI CPPC v5.0+ spec");
MODULE_LICENSE("GPL");
late_initcall(cppc_cpufreq_init);
/*
* Copyright (C) 2016 Linaro.
* Viresh Kumar <viresh.kumar@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/err.h>
#include <linux/of.h>
#include <linux/platform_device.h>
static const struct of_device_id machines[] __initconst = {
{ .compatible = "allwinner,sun4i-a10", },
{ .compatible = "allwinner,sun5i-a10s", },
{ .compatible = "allwinner,sun5i-a13", },
{ .compatible = "allwinner,sun5i-r8", },
{ .compatible = "allwinner,sun6i-a31", },
{ .compatible = "allwinner,sun6i-a31s", },
{ .compatible = "allwinner,sun7i-a20", },
{ .compatible = "allwinner,sun8i-a23", },
{ .compatible = "allwinner,sun8i-a33", },
{ .compatible = "allwinner,sun8i-a83t", },
{ .compatible = "allwinner,sun8i-h3", },
{ .compatible = "hisilicon,hi6220", },
{ .compatible = "fsl,imx27", },
{ .compatible = "fsl,imx51", },
{ .compatible = "fsl,imx53", },
{ .compatible = "fsl,imx7d", },
{ .compatible = "marvell,berlin", },
{ .compatible = "samsung,exynos3250", },
{ .compatible = "samsung,exynos4210", },
{ .compatible = "samsung,exynos4212", },
{ .compatible = "samsung,exynos4412", },
{ .compatible = "samsung,exynos5250", },
#ifndef CONFIG_BL_SWITCHER
{ .compatible = "samsung,exynos5420", },
{ .compatible = "samsung,exynos5800", },
#endif
{ .compatible = "renesas,emev2", },
{ .compatible = "renesas,r7s72100", },
{ .compatible = "renesas,r8a73a4", },
{ .compatible = "renesas,r8a7740", },
{ .compatible = "renesas,r8a7778", },
{ .compatible = "renesas,r8a7779", },
{ .compatible = "renesas,r8a7790", },
{ .compatible = "renesas,r8a7791", },
{ .compatible = "renesas,r8a7793", },
{ .compatible = "renesas,r8a7794", },
{ .compatible = "renesas,sh73a0", },
{ .compatible = "rockchip,rk2928", },
{ .compatible = "rockchip,rk3036", },
{ .compatible = "rockchip,rk3066a", },
{ .compatible = "rockchip,rk3066b", },
{ .compatible = "rockchip,rk3188", },
{ .compatible = "rockchip,rk3228", },
{ .compatible = "rockchip,rk3288", },
{ .compatible = "rockchip,rk3366", },
{ .compatible = "rockchip,rk3368", },
{ .compatible = "rockchip,rk3399", },
{ .compatible = "sigma,tango4" },
{ .compatible = "ti,omap2", },
{ .compatible = "ti,omap3", },
{ .compatible = "ti,omap4", },
{ .compatible = "ti,omap5", },
{ .compatible = "xlnx,zynq-7000", },
};
static int __init cpufreq_dt_platdev_init(void)
{
struct device_node *np = of_find_node_by_path("/");
if (!np)
return -ENODEV;
if (!of_match_node(machines, np))
return -ENODEV;
of_node_put(of_root);
return PTR_ERR_OR_ZERO(platform_device_register_simple("cpufreq-dt", -1,
NULL, 0));
}
device_initcall(cpufreq_dt_platdev_init);
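Enabling this stub for an additional board is then a one-line change to the machines[] table above. A hypothetical entry (board name made up for illustration):

{ .compatible = "acme,example-board", },	/* hypothetical new platform */

After that, a matching platform gets its 'cpufreq-dt' platform device registered at device_initcall time with no platform code of its own.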
@@ -15,7 +15,6 @@
#include <linux/cpu.h>
#include <linux/cpu_cooling.h>
#include <linux/cpufreq.h>
#include <linux/cpufreq-dt.h>
#include <linux/cpumask.h>
#include <linux/err.h>
#include <linux/module.h>
@@ -147,7 +146,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
struct clk *cpu_clk;
struct dev_pm_opp *suspend_opp;
unsigned int transition_latency;
bool opp_v1 = false;
bool fallback = false;
const char *name;
int ret;
@@ -167,14 +166,16 @@ static int cpufreq_init(struct cpufreq_policy *policy)
/* Get OPP-sharing information from "operating-points-v2" bindings */
ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, policy->cpus);
if (ret) {
if (ret != -ENOENT)
goto out_put_clk;
/*
* operating-points-v2 not supported, fallback to old method of
* finding shared-OPPs for backward compatibility.
* finding shared-OPPs for backward compatibility if the
* platform hasn't set sharing CPUs.
*/
if (ret == -ENOENT)
opp_v1 = true;
else
goto out_put_clk;
if (dev_pm_opp_get_sharing_cpus(cpu_dev, policy->cpus))
fallback = true;
}
/*
@@ -214,10 +215,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
goto out_free_opp;
}
if (opp_v1) {
if (fallback) {
struct cpufreq_dt_platform_data *pd = cpufreq_get_driver_data();
if (!pd || !pd->independent_clocks)
cpumask_setall(policy->cpus);
/*
@@ -7,6 +7,8 @@
* BIG FAT DISCLAIMER: Work in progress code. Possibly *dangerous*
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
@@ -56,8 +58,6 @@ MODULE_PARM_DESC(fid, "CPU multiplier to use (11.5 = 115)");
MODULE_PARM_DESC(min_fsb,
"Minimum FSB to use, if not defined: current FSB - 50");
#define PFX "cpufreq-nforce2: "
/**
* nforce2_calc_fsb - calculate FSB
* @pll: PLL value
@@ -174,13 +174,13 @@ static int nforce2_set_fsb(unsigned int fsb)
int pll = 0;
if ((fsb > max_fsb) || (fsb < NFORCE2_MIN_FSB)) {
printk(KERN_ERR PFX "FSB %d is out of range!\n", fsb);
pr_err("FSB %d is out of range!\n", fsb);
return -EINVAL;
}
tfsb = nforce2_fsb_read(0);
if (!tfsb) {
printk(KERN_ERR PFX "Error while reading the FSB\n");
pr_err("Error while reading the FSB\n");
return -EINVAL;
}
@@ -276,8 +276,7 @@ static int nforce2_target(struct cpufreq_policy *policy,
/* local_irq_save(flags); */
if (nforce2_set_fsb(target_fsb) < 0)
printk(KERN_ERR PFX "Changing FSB to %d failed\n",
target_fsb);
pr_err("Changing FSB to %d failed\n", target_fsb);
else
pr_debug("Changed FSB successfully to %d\n",
target_fsb);
@@ -325,8 +324,7 @@ static int nforce2_cpu_init(struct cpufreq_policy *policy)
/* FIX: Get FID from CPU */
if (!fid) {
if (!cpu_khz) {
printk(KERN_WARNING PFX
"cpu_khz not set, can't calculate multiplier!\n");
pr_warn("cpu_khz not set, can't calculate multiplier!\n");
return -ENODEV;
}
@@ -341,8 +339,8 @@ static int nforce2_cpu_init(struct cpufreq_policy *policy)
}
}
printk(KERN_INFO PFX "FSB currently at %i MHz, FID %d.%d\n", fsb,
fid / 10, fid % 10);
pr_info("FSB currently at %i MHz, FID %d.%d\n",
fsb, fid / 10, fid % 10);
/* Set maximum FSB to FSB at boot time */
max_fsb = nforce2_fsb_read(1);
@@ -401,11 +399,9 @@ static int nforce2_detect_chipset(void)
if (nforce2_dev == NULL)
return -ENODEV;
printk(KERN_INFO PFX "Detected nForce2 chipset revision %X\n",
pr_info("Detected nForce2 chipset revision %X\n",
nforce2_dev->revision);
printk(KERN_INFO
"FSB changing is maybe unstable and can lead to "
"crashes and data loss.\n");
pr_info("FSB changing is maybe unstable and can lead to crashes and data loss\n");
return 0;
}
@@ -423,7 +419,7 @@ static int __init nforce2_init(void)
/* detect chipset */
if (nforce2_detect_chipset()) {
printk(KERN_INFO PFX "No nForce2 chipset.\n");
pr_info("No nForce2 chipset\n");
return -ENODEV;
}
@@ -78,6 +78,11 @@ static int cpufreq_governor(struct cpufreq_policy *policy, unsigned int event);
static unsigned int __cpufreq_get(struct cpufreq_policy *policy);
static int cpufreq_start_governor(struct cpufreq_policy *policy);
static inline int cpufreq_exit_governor(struct cpufreq_policy *policy)
{
return cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
}
/**
* Two notifier lists: the "policy" list is involved in the
* validation process for a new CPU frequency policy; the
@@ -429,6 +434,73 @@ void cpufreq_freq_transition_end(struct cpufreq_policy *policy,
}
EXPORT_SYMBOL_GPL(cpufreq_freq_transition_end);
/*
* Fast frequency switching status count. Positive means "enabled", negative
* means "disabled" and 0 means "not decided yet".
*/
static int cpufreq_fast_switch_count;
static DEFINE_MUTEX(cpufreq_fast_switch_lock);
static void cpufreq_list_transition_notifiers(void)
{
struct notifier_block *nb;
pr_info("Registered transition notifiers:\n");
mutex_lock(&cpufreq_transition_notifier_list.mutex);
for (nb = cpufreq_transition_notifier_list.head; nb; nb = nb->next)
pr_info("%pF\n", nb->notifier_call);
mutex_unlock(&cpufreq_transition_notifier_list.mutex);
}
/**
* cpufreq_enable_fast_switch - Enable fast frequency switching for policy.
* @policy: cpufreq policy to enable fast frequency switching for.
*
* Try to enable fast frequency switching for @policy.
*
* The attempt will fail if there is at least one transition notifier registered
* at this point, as fast frequency switching is quite fundamentally at odds
* with transition notifiers. Thus if successful, it will make registration of
* transition notifiers fail going forward.
*/
void cpufreq_enable_fast_switch(struct cpufreq_policy *policy)
{
lockdep_assert_held(&policy->rwsem);
if (!policy->fast_switch_possible)
return;
mutex_lock(&cpufreq_fast_switch_lock);
if (cpufreq_fast_switch_count >= 0) {
cpufreq_fast_switch_count++;
policy->fast_switch_enabled = true;
} else {
pr_warn("CPU%u: Fast frequency switching not enabled\n",
policy->cpu);
cpufreq_list_transition_notifiers();
}
mutex_unlock(&cpufreq_fast_switch_lock);
}
EXPORT_SYMBOL_GPL(cpufreq_enable_fast_switch);
/**
* cpufreq_disable_fast_switch - Disable fast frequency switching for policy.
* @policy: cpufreq policy to disable fast frequency switching for.
*/
void cpufreq_disable_fast_switch(struct cpufreq_policy *policy)
{
mutex_lock(&cpufreq_fast_switch_lock);
if (policy->fast_switch_enabled) {
policy->fast_switch_enabled = false;
if (!WARN_ON(cpufreq_fast_switch_count <= 0))
cpufreq_fast_switch_count--;
}
mutex_unlock(&cpufreq_fast_switch_lock);
}
EXPORT_SYMBOL_GPL(cpufreq_disable_fast_switch);
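The intended pairing of these two helpers, seen from a governor, looks roughly like this (a sketch; the my_gov_* names are made up and this is not the actual schedutil code):

/* Hypothetical governor start/stop hooks (sketch). */
static int my_gov_start(struct cpufreq_policy *policy)
{
	/* No-op unless the driver set policy->fast_switch_possible. */
	cpufreq_enable_fast_switch(policy);
	return 0;
}

static void my_gov_stop(struct cpufreq_policy *policy)
{
	cpufreq_disable_fast_switch(policy);
}

The governor then keys its hot path off policy->fast_switch_enabled rather than assuming the enable attempt succeeded.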
/*********************************************************************
* SYSFS INTERFACE *
@@ -1248,26 +1320,24 @@ static int cpufreq_online(unsigned int cpu)
*/
static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
{
struct cpufreq_policy *policy;
unsigned cpu = dev->id;
int ret;
dev_dbg(dev, "%s: adding CPU%u\n", __func__, cpu);
if (cpu_online(cpu)) {
ret = cpufreq_online(cpu);
} else {
if (cpu_online(cpu))
return cpufreq_online(cpu);
/*
* A hotplug notifier will follow and we will handle it as CPU
* online then. For now, just create the sysfs link, unless
* there is no policy or the link is already present.
* A hotplug notifier will follow and we will handle it as CPU online
* then. For now, just create the sysfs link, unless there is no policy
* or the link is already present.
*/
struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
policy = per_cpu(cpufreq_cpu_data, cpu);
ret = policy && !cpumask_test_and_set_cpu(cpu, policy->real_cpus)
? add_cpu_dev_symlink(policy, cpu) : 0;
if (!policy || cpumask_test_and_set_cpu(cpu, policy->real_cpus))
return 0;
}
return ret;
return add_cpu_dev_symlink(policy, cpu);
}
static void cpufreq_offline(unsigned int cpu)
@@ -1319,7 +1389,7 @@ static void cpufreq_offline(unsigned int cpu)
/* If cpu is last user of policy, free policy */
if (has_target()) {
ret = cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
ret = cpufreq_exit_governor(policy);
if (ret)
pr_err("%s: Failed to exit governor\n", __func__);
}
@@ -1447,8 +1517,12 @@ static unsigned int __cpufreq_get(struct cpufreq_policy *policy)
ret_freq = cpufreq_driver->get(policy->cpu);
/* Updating inactive policies is invalid, so avoid doing that. */
if (unlikely(policy_is_inactive(policy)))
/*
* Updating inactive policies is invalid, so avoid doing that. Also
* if fast frequency switching is used with the given policy, the check
* against policy->cur is pointless, so skip it in that case too.
*/
if (unlikely(policy_is_inactive(policy)) || policy->fast_switch_enabled)
return ret_freq;
if (ret_freq && policy->cur &&
@@ -1679,8 +1753,18 @@ int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list)
switch (list) {
case CPUFREQ_TRANSITION_NOTIFIER:
mutex_lock(&cpufreq_fast_switch_lock);
if (cpufreq_fast_switch_count > 0) {
mutex_unlock(&cpufreq_fast_switch_lock);
return -EBUSY;
}
ret = srcu_notifier_chain_register(
&cpufreq_transition_notifier_list, nb);
if (!ret)
cpufreq_fast_switch_count--;
mutex_unlock(&cpufreq_fast_switch_lock);
break;
case CPUFREQ_POLICY_NOTIFIER:
ret = blocking_notifier_chain_register(
@@ -1713,8 +1797,14 @@ int cpufreq_unregister_notifier(struct notifier_block *nb, unsigned int list)
switch (list) {
case CPUFREQ_TRANSITION_NOTIFIER:
mutex_lock(&cpufreq_fast_switch_lock);
ret = srcu_notifier_chain_unregister(
&cpufreq_transition_notifier_list, nb);
if (!ret && !WARN_ON(cpufreq_fast_switch_count >= 0))
cpufreq_fast_switch_count++;
mutex_unlock(&cpufreq_fast_switch_lock);
break;
case CPUFREQ_POLICY_NOTIFIER:
ret = blocking_notifier_chain_unregister(
@@ -1733,6 +1823,37 @@ EXPORT_SYMBOL(cpufreq_unregister_notifier);
* GOVERNORS *
*********************************************************************/
/**
* cpufreq_driver_fast_switch - Carry out a fast CPU frequency switch.
* @policy: cpufreq policy to switch the frequency for.
* @target_freq: New frequency to set (may be approximate).
*
* Carry out a fast frequency switch without sleeping.
*
* The driver's ->fast_switch() callback invoked by this function must be
* suitable for being called from within RCU-sched read-side critical sections
* and it is expected to select the minimum available frequency greater than or
* equal to @target_freq (CPUFREQ_RELATION_L).
*
* This function must not be called if policy->fast_switch_enabled is unset.
*
* Governors calling this function must guarantee that it will never be invoked
* twice in parallel for the same policy and that it will never be called in
* parallel with either ->target() or ->target_index() for the same policy.
*
* If CPUFREQ_ENTRY_INVALID is returned by the driver's ->fast_switch()
* callback to indicate an error condition, the hardware configuration must be
* preserved.
*/
unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy,
unsigned int target_freq)
{
target_freq = clamp_val(target_freq, policy->min, policy->max);
return cpufreq_driver->fast_switch(policy, target_freq);
}
EXPORT_SYMBOL_GPL(cpufreq_driver_fast_switch);
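A sketch of the expected caller side, under the constraints documented above (hypothetical governor code, not the actual schedutil implementation):

/* Hypothetical: next_f already computed; never invoked in parallel. */
static void my_gov_set_freq(struct cpufreq_policy *policy, unsigned int next_f)
{
	if (policy->fast_switch_enabled) {
		next_f = cpufreq_driver_fast_switch(policy, next_f);
		if (next_f != CPUFREQ_ENTRY_INVALID)
			policy->cur = next_f;	/* hardware was updated */
	} else {
		/* Slow path: queue work that calls __cpufreq_driver_target(). */
	}
}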
/* Must set freqs->new to intermediate frequency */
static int __target_intermediate(struct cpufreq_policy *policy,
struct cpufreq_freqs *freqs, int index)
@@ -2108,7 +2229,7 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
return ret;
}
ret = cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
ret = cpufreq_exit_governor(policy);
if (ret) {
pr_err("%s: Failed to Exit Governor: %s (%d)\n",
__func__, old_gov->name, ret);
@@ -2125,7 +2246,7 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
pr_debug("cpufreq: governor change\n");
return 0;
}
cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
cpufreq_exit_governor(policy);
}
/* new governor failed, so re-start old one */
@@ -2193,16 +2314,13 @@ static int cpufreq_cpu_callback(struct notifier_block *nfb,
switch (action & ~CPU_TASKS_FROZEN) {
case CPU_ONLINE:
case CPU_DOWN_FAILED:
cpufreq_online(cpu);
break;
case CPU_DOWN_PREPARE:
cpufreq_offline(cpu);
break;
case CPU_DOWN_FAILED:
cpufreq_online(cpu);
break;
}
return NOTIFY_OK;
}
@@ -129,9 +129,10 @@ static struct notifier_block cs_cpufreq_notifier_block = {
/************************** sysfs interface ************************/
static struct dbs_governor cs_dbs_gov;
static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
static ssize_t store_sampling_down_factor(struct gov_attr_set *attr_set,
const char *buf, size_t count)
{
struct dbs_data *dbs_data = to_dbs_data(attr_set);
unsigned int input;
int ret;
ret = sscanf(buf, "%u", &input);
@@ -143,9 +144,10 @@ static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
return count;
}
static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf,
size_t count)
static ssize_t store_up_threshold(struct gov_attr_set *attr_set,
const char *buf, size_t count)
{
struct dbs_data *dbs_data = to_dbs_data(attr_set);
struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
unsigned int input;
int ret;
@@ -158,9 +160,10 @@ static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf,
return count;
}
static ssize_t store_down_threshold(struct dbs_data *dbs_data, const char *buf,
size_t count)
static ssize_t store_down_threshold(struct gov_attr_set *attr_set,
const char *buf, size_t count)
{
struct dbs_data *dbs_data = to_dbs_data(attr_set);
struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
unsigned int input;
int ret;
@@ -175,9 +178,10 @@ static ssize_t store_down_threshold(struct dbs_data *dbs_data, const char *buf,
return count;
}
static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
static ssize_t store_ignore_nice_load(struct gov_attr_set *attr_set,
const char *buf, size_t count)
{
struct dbs_data *dbs_data = to_dbs_data(attr_set);
unsigned int input;
int ret;
@@ -199,9 +203,10 @@ static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
return count;
}
static ssize_t store_freq_step(struct dbs_data *dbs_data, const char *buf,
static ssize_t store_freq_step(struct gov_attr_set *attr_set, const char *buf,
size_t count)
{
struct dbs_data *dbs_data = to_dbs_data(attr_set);
struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
unsigned int input;
int ret;
@@ -24,20 +24,6 @@
#include <linux/module.h>
#include <linux/mutex.h>
/*
* The polling frequency depends on the capability of the processor. Default
* polling frequency is 1000 times the transition latency of the processor. The
* governor will work on any processor with transition latency <= 10ms, using
* appropriate sampling rate.
*
* For CPUs with transition latency > 10ms (mostly drivers with CPUFREQ_ETERNAL)
* this governor will not work. All times here are in us (micro seconds).
*/
#define MIN_SAMPLING_RATE_RATIO (2)
#define LATENCY_MULTIPLIER (1000)
#define MIN_LATENCY_MULTIPLIER (20)
#define TRANSITION_LATENCY_LIMIT (10 * 1000 * 1000)
/* Ondemand Sampling types */
enum {OD_NORMAL_SAMPLE, OD_SUB_SAMPLE};
@@ -52,7 +38,7 @@ enum {OD_NORMAL_SAMPLE, OD_SUB_SAMPLE};
/* Governor demand based switching data (per-policy or global). */
struct dbs_data {
int usage_count;
struct gov_attr_set attr_set;
void *tuners;
unsigned int min_sampling_rate;
unsigned int ignore_nice_load;
@@ -60,37 +46,27 @@ struct dbs_data {
unsigned int sampling_down_factor;
unsigned int up_threshold;
unsigned int io_is_busy;
struct kobject kobj;
struct list_head policy_dbs_list;
/*
* Protect concurrent updates to governor tunables from sysfs,
* policy_dbs_list and usage_count.
*/
struct mutex mutex;
};
/* Governor's specific attributes */
struct dbs_data;
struct governor_attr {
struct attribute attr;
ssize_t (*show)(struct dbs_data *dbs_data, char *buf);
ssize_t (*store)(struct dbs_data *dbs_data, const char *buf,
size_t count);
};
static inline struct dbs_data *to_dbs_data(struct gov_attr_set *attr_set)
{
return container_of(attr_set, struct dbs_data, attr_set);
}
#define gov_show_one(_gov, file_name) \
static ssize_t show_##file_name \
(struct dbs_data *dbs_data, char *buf) \
(struct gov_attr_set *attr_set, char *buf) \
{ \
struct dbs_data *dbs_data = to_dbs_data(attr_set); \
struct _gov##_dbs_tuners *tuners = dbs_data->tuners; \
return sprintf(buf, "%u\n", tuners->file_name); \
}
#define gov_show_one_common(file_name) \
static ssize_t show_##file_name \
(struct dbs_data *dbs_data, char *buf) \
(struct gov_attr_set *attr_set, char *buf) \
{ \
struct dbs_data *dbs_data = to_dbs_data(attr_set); \
return sprintf(buf, "%u\n", dbs_data->file_name); \
}
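As a worked example, gov_show_one_common(up_threshold) expands (modulo line breaks) to:

static ssize_t show_up_threshold(struct gov_attr_set *attr_set, char *buf)
{
	struct dbs_data *dbs_data = to_dbs_data(attr_set);
	return sprintf(buf, "%u\n", dbs_data->up_threshold);
}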
@@ -135,7 +111,7 @@ static inline void gov_update_sample_delay(struct policy_dbs_info *policy_dbs,
/* Per cpu structures */
struct cpu_dbs_info {
u64 prev_cpu_idle;
u64 prev_cpu_wall;
u64 prev_update_time;
u64 prev_cpu_nice;
/*
* Used to keep track of load in the previous interval. However, when
@@ -184,7 +160,7 @@ void od_register_powersave_bias_handler(unsigned int (*f)
(struct cpufreq_policy *, unsigned int, unsigned int),
unsigned int powersave_bias);
void od_unregister_powersave_bias_handler(void);
ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
ssize_t store_sampling_rate(struct gov_attr_set *attr_set, const char *buf,
size_t count);
void gov_update_cpu_data(struct dbs_data *dbs_data);
#endif /* _CPUFREQ_GOVERNOR_H */
/*
* Abstract code for CPUFreq governor tunable sysfs attributes.
*
* Copyright (C) 2016, Intel Corporation
* Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include "cpufreq_governor.h"
static inline struct gov_attr_set *to_gov_attr_set(struct kobject *kobj)
{
return container_of(kobj, struct gov_attr_set, kobj);
}
static inline struct governor_attr *to_gov_attr(struct attribute *attr)
{
return container_of(attr, struct governor_attr, attr);
}
static ssize_t governor_show(struct kobject *kobj, struct attribute *attr,
char *buf)
{
struct governor_attr *gattr = to_gov_attr(attr);
return gattr->show(to_gov_attr_set(kobj), buf);
}
static ssize_t governor_store(struct kobject *kobj, struct attribute *attr,
const char *buf, size_t count)
{
struct gov_attr_set *attr_set = to_gov_attr_set(kobj);
struct governor_attr *gattr = to_gov_attr(attr);
int ret;
mutex_lock(&attr_set->update_lock);
ret = attr_set->usage_count ? gattr->store(attr_set, buf, count) : -EBUSY;
mutex_unlock(&attr_set->update_lock);
return ret;
}
const struct sysfs_ops governor_sysfs_ops = {
.show = governor_show,
.store = governor_store,
};
EXPORT_SYMBOL_GPL(governor_sysfs_ops);
void gov_attr_set_init(struct gov_attr_set *attr_set, struct list_head *list_node)
{
INIT_LIST_HEAD(&attr_set->policy_list);
mutex_init(&attr_set->update_lock);
attr_set->usage_count = 1;
list_add(list_node, &attr_set->policy_list);
}
EXPORT_SYMBOL_GPL(gov_attr_set_init);
void gov_attr_set_get(struct gov_attr_set *attr_set, struct list_head *list_node)
{
mutex_lock(&attr_set->update_lock);
attr_set->usage_count++;
list_add(list_node, &attr_set->policy_list);
mutex_unlock(&attr_set->update_lock);
}
EXPORT_SYMBOL_GPL(gov_attr_set_get);
unsigned int gov_attr_set_put(struct gov_attr_set *attr_set, struct list_head *list_node)
{
unsigned int count;
mutex_lock(&attr_set->update_lock);
list_del(list_node);
count = --attr_set->usage_count;
mutex_unlock(&attr_set->update_lock);
if (count)
return count;
kobject_put(&attr_set->kobj);
mutex_destroy(&attr_set->update_lock);
return 0;
}
EXPORT_SYMBOL_GPL(gov_attr_set_put);
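The refcounting contract these three helpers implement, as a sketch (illustrative names; in the real governors the kobject's release callback frees the container once the final kobject_put() in gov_attr_set_put() drops the last reference):

struct my_tunables {
	struct gov_attr_set attr_set;	/* shared across policies */
	unsigned int some_knob;
};

static void my_policy_attach(struct my_tunables *t, struct list_head *node,
			     bool first)
{
	if (first)
		gov_attr_set_init(&t->attr_set, node);	/* usage_count = 1 */
	else
		gov_attr_set_get(&t->attr_set, node);	/* usage_count++ */
}

static unsigned int my_policy_detach(struct my_tunables *t,
				     struct list_head *node)
{
	return gov_attr_set_put(&t->attr_set, node);	/* 0 == last user */
}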
@@ -207,9 +207,10 @@ static unsigned int od_dbs_timer(struct cpufreq_policy *policy)
/************************** sysfs interface ************************/
static struct dbs_governor od_dbs_gov;
static ssize_t store_io_is_busy(struct dbs_data *dbs_data, const char *buf,
static ssize_t store_io_is_busy(struct gov_attr_set *attr_set, const char *buf,
size_t count)
{
struct dbs_data *dbs_data = to_dbs_data(attr_set);
unsigned int input;
int ret;
@@ -224,9 +225,10 @@ static ssize_t store_io_is_busy(struct dbs_data *dbs_data, const char *buf,
return count;
}
static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf,
size_t count)
static ssize_t store_up_threshold(struct gov_attr_set *attr_set,
const char *buf, size_t count)
{
struct dbs_data *dbs_data = to_dbs_data(attr_set);
unsigned int input; unsigned int input;
int ret; int ret;
ret = sscanf(buf, "%u", &input); ret = sscanf(buf, "%u", &input);
...@@ -240,9 +242,10 @@ static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf, ...@@ -240,9 +242,10 @@ static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf,
return count; return count;
} }
static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data, static ssize_t store_sampling_down_factor(struct gov_attr_set *attr_set,
const char *buf, size_t count) const char *buf, size_t count)
{ {
struct dbs_data *dbs_data = to_dbs_data(attr_set);
struct policy_dbs_info *policy_dbs; struct policy_dbs_info *policy_dbs;
unsigned int input; unsigned int input;
int ret; int ret;
...@@ -254,7 +257,7 @@ static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data, ...@@ -254,7 +257,7 @@ static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
dbs_data->sampling_down_factor = input; dbs_data->sampling_down_factor = input;
/* Reset down sampling multiplier in case it was active */ /* Reset down sampling multiplier in case it was active */
list_for_each_entry(policy_dbs, &dbs_data->policy_dbs_list, list) { list_for_each_entry(policy_dbs, &attr_set->policy_list, list) {
/* /*
* Doing this without locking might lead to using different * Doing this without locking might lead to using different
* rate_mult values in od_update() and od_dbs_timer(). * rate_mult values in od_update() and od_dbs_timer().
...@@ -267,9 +270,10 @@ static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data, ...@@ -267,9 +270,10 @@ static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
return count; return count;
} }
static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data, static ssize_t store_ignore_nice_load(struct gov_attr_set *attr_set,
const char *buf, size_t count) const char *buf, size_t count)
{ {
struct dbs_data *dbs_data = to_dbs_data(attr_set);
unsigned int input; unsigned int input;
int ret; int ret;
...@@ -291,9 +295,10 @@ static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data, ...@@ -291,9 +295,10 @@ static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
return count; return count;
} }
static ssize_t store_powersave_bias(struct dbs_data *dbs_data, const char *buf, static ssize_t store_powersave_bias(struct gov_attr_set *attr_set,
size_t count) const char *buf, size_t count)
{ {
struct dbs_data *dbs_data = to_dbs_data(attr_set);
struct od_dbs_tuners *od_tuners = dbs_data->tuners; struct od_dbs_tuners *od_tuners = dbs_data->tuners;
struct policy_dbs_info *policy_dbs; struct policy_dbs_info *policy_dbs;
unsigned int input; unsigned int input;
...@@ -308,7 +313,7 @@ static ssize_t store_powersave_bias(struct dbs_data *dbs_data, const char *buf, ...@@ -308,7 +313,7 @@ static ssize_t store_powersave_bias(struct dbs_data *dbs_data, const char *buf,
od_tuners->powersave_bias = input; od_tuners->powersave_bias = input;
list_for_each_entry(policy_dbs, &dbs_data->policy_dbs_list, list) list_for_each_entry(policy_dbs, &attr_set->policy_list, list)
ondemand_powersave_bias_init(policy_dbs->policy); ondemand_powersave_bias_init(policy_dbs->policy);
return count; return count;
......
@@ -17,6 +17,7 @@
#include <linux/init.h>
#include <linux/module.h>
#include <linux/mutex.h>
+#include <linux/slab.h>

static DEFINE_PER_CPU(unsigned int, cpu_is_managed);
static DEFINE_MUTEX(userspace_mutex);
@@ -31,6 +32,7 @@ static DEFINE_MUTEX(userspace_mutex);
static int cpufreq_set(struct cpufreq_policy *policy, unsigned int freq)
{
	int ret = -EINVAL;
+	unsigned int *setspeed = policy->governor_data;

	pr_debug("cpufreq_set for cpu %u, freq %u kHz\n", policy->cpu, freq);
@@ -38,6 +40,8 @@ static int cpufreq_set(struct cpufreq_policy *policy, unsigned int freq)
	if (!per_cpu(cpu_is_managed, policy->cpu))
		goto err;

+	*setspeed = freq;
	ret = __cpufreq_driver_target(policy, freq, CPUFREQ_RELATION_L);
 err:
	mutex_unlock(&userspace_mutex);
@@ -49,19 +53,45 @@ static ssize_t show_speed(struct cpufreq_policy *policy, char *buf)
	return sprintf(buf, "%u\n", policy->cur);
}

+static int cpufreq_userspace_policy_init(struct cpufreq_policy *policy)
+{
+	unsigned int *setspeed;
+
+	setspeed = kzalloc(sizeof(*setspeed), GFP_KERNEL);
+	if (!setspeed)
+		return -ENOMEM;
+
+	policy->governor_data = setspeed;
+	return 0;
+}
+
static int cpufreq_governor_userspace(struct cpufreq_policy *policy,
				      unsigned int event)
{
+	unsigned int *setspeed = policy->governor_data;
	unsigned int cpu = policy->cpu;
	int rc = 0;

+	if (event == CPUFREQ_GOV_POLICY_INIT)
+		return cpufreq_userspace_policy_init(policy);
+
+	if (!setspeed)
+		return -EINVAL;
+
	switch (event) {
+	case CPUFREQ_GOV_POLICY_EXIT:
+		mutex_lock(&userspace_mutex);
+		policy->governor_data = NULL;
+		kfree(setspeed);
+		mutex_unlock(&userspace_mutex);
+		break;
	case CPUFREQ_GOV_START:
		BUG_ON(!policy->cur);
		pr_debug("started managing cpu %u\n", cpu);

		mutex_lock(&userspace_mutex);
		per_cpu(cpu_is_managed, cpu) = 1;
+		*setspeed = policy->cur;
		mutex_unlock(&userspace_mutex);
		break;
	case CPUFREQ_GOV_STOP:
@@ -69,20 +99,23 @@ static int cpufreq_governor_userspace(struct cpufreq_policy *policy,
		mutex_lock(&userspace_mutex);
		per_cpu(cpu_is_managed, cpu) = 0;
+		*setspeed = 0;
		mutex_unlock(&userspace_mutex);
		break;
	case CPUFREQ_GOV_LIMITS:
		mutex_lock(&userspace_mutex);
-		pr_debug("limit event for cpu %u: %u - %u kHz, currently %u kHz\n",
-			 cpu, policy->min, policy->max,
-			 policy->cur);
+		pr_debug("limit event for cpu %u: %u - %u kHz, currently %u kHz, last set to %u kHz\n",
+			 cpu, policy->min, policy->max, policy->cur, *setspeed);

-		if (policy->max < policy->cur)
+		if (policy->max < *setspeed)
			__cpufreq_driver_target(policy, policy->max,
						CPUFREQ_RELATION_H);
-		else if (policy->min > policy->cur)
+		else if (policy->min > *setspeed)
			__cpufreq_driver_target(policy, policy->min,
						CPUFREQ_RELATION_L);
+		else
+			__cpufreq_driver_target(policy, *setspeed,
						CPUFREQ_RELATION_L);
		mutex_unlock(&userspace_mutex);
		break;
	}
...
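The point of this change is that the userspace governor now remembers the frequency the user last asked for in per-policy governor_data, so a CPUFREQ_GOV_LIMITS event can restore that request when the limits widen again, instead of leaving the CPU stuck at a previously clamped value. A minimal sketch of that resolution rule (illustrative only; userspace_resolve_freq is not a function in the patch):

/* Illustrative only: the frequency the userspace governor resolves to
 * on a CPUFREQ_GOV_LIMITS event. The stored request survives the clamp,
 * so relaxing the limits later returns to the user's last setting. */
static unsigned int userspace_resolve_freq(unsigned int setspeed,
					   unsigned int min, unsigned int max)
{
	if (setspeed > max)	/* clamp down; keep the stored request */
		return max;
	if (setspeed < min)	/* clamp up */
		return min;
	return setspeed;	/* within limits: honor the user's request */
}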
@@ -6,6 +6,8 @@
 * BIG FAT DISCLAIMER: Work in progress code. Possibly *dangerous*
 */

+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@@ -20,7 +22,7 @@
#include <asm/msr.h>
#include <asm/tsc.h>

-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
#include <linux/acpi.h>
#include <acpi/processor.h>
#endif
@@ -33,7 +35,7 @@
struct eps_cpu_data {
	u32 fsb;
-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
	u32 bios_limit;
#endif
	struct cpufreq_frequency_table freq_table[];
@@ -46,7 +48,7 @@ static int freq_failsafe_off;
static int voltage_failsafe_off;
static int set_max_voltage;

-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
static int ignore_acpi_limit;

static struct acpi_processor_performance *eps_acpi_cpu_perf;
@@ -141,11 +143,9 @@ static int eps_set_state(struct eps_cpu_data *centaur,
		/* Print voltage and multiplier */
		rdmsr(MSR_IA32_PERF_STATUS, lo, hi);
		current_voltage = lo & 0xff;
-		printk(KERN_INFO "eps: Current voltage = %dmV\n",
-			current_voltage * 16 + 700);
+		pr_info("Current voltage = %dmV\n", current_voltage * 16 + 700);
		current_multiplier = (lo >> 8) & 0xff;
-		printk(KERN_INFO "eps: Current multiplier = %d\n",
-			current_multiplier);
+		pr_info("Current multiplier = %d\n", current_multiplier);
	}
#endif
	return 0;
@@ -166,7 +166,7 @@ static int eps_target(struct cpufreq_policy *policy, unsigned int index)
	dest_state = centaur->freq_table[index].driver_data & 0xffff;
	ret = eps_set_state(centaur, policy, dest_state);
	if (ret)
-		printk(KERN_ERR "eps: Timeout!\n");
+		pr_err("Timeout!\n");
	return ret;
}
@@ -186,7 +186,7 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
	int k, step, voltage;
	int ret;
	int states;
-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
	unsigned int limit;
#endif
@@ -194,36 +194,36 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
		return -ENODEV;

	/* Check brand */
-	printk(KERN_INFO "eps: Detected VIA ");
+	pr_info("Detected VIA ");

	switch (c->x86_model) {
	case 10:
		rdmsr(0x1153, lo, hi);
		brand = (((lo >> 2) ^ lo) >> 18) & 3;
-		printk(KERN_CONT "Model A ");
+		pr_cont("Model A ");
		break;
	case 13:
		rdmsr(0x1154, lo, hi);
		brand = (((lo >> 4) ^ (lo >> 2))) & 0x000000ff;
-		printk(KERN_CONT "Model D ");
+		pr_cont("Model D ");
		break;
	}

	switch (brand) {
	case EPS_BRAND_C7M:
-		printk(KERN_CONT "C7-M\n");
+		pr_cont("C7-M\n");
		break;
	case EPS_BRAND_C7:
-		printk(KERN_CONT "C7\n");
+		pr_cont("C7\n");
		break;
	case EPS_BRAND_EDEN:
-		printk(KERN_CONT "Eden\n");
+		pr_cont("Eden\n");
		break;
	case EPS_BRAND_C7D:
-		printk(KERN_CONT "C7-D\n");
+		pr_cont("C7-D\n");
		break;
	case EPS_BRAND_C3:
-		printk(KERN_CONT "C3\n");
+		pr_cont("C3\n");
		return -ENODEV;
		break;
	}
@@ -235,7 +235,7 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
		/* Can be locked at 0 */
		rdmsrl(MSR_IA32_MISC_ENABLE, val);
		if (!(val & MSR_IA32_MISC_ENABLE_ENHANCED_SPEEDSTEP)) {
-			printk(KERN_INFO "eps: Can't enable Enhanced PowerSaver\n");
+			pr_info("Can't enable Enhanced PowerSaver\n");
			return -ENODEV;
		}
	}
@@ -243,22 +243,19 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
	/* Print voltage and multiplier */
	rdmsr(MSR_IA32_PERF_STATUS, lo, hi);
	current_voltage = lo & 0xff;
-	printk(KERN_INFO "eps: Current voltage = %dmV\n",
-		current_voltage * 16 + 700);
+	pr_info("Current voltage = %dmV\n", current_voltage * 16 + 700);
	current_multiplier = (lo >> 8) & 0xff;
-	printk(KERN_INFO "eps: Current multiplier = %d\n", current_multiplier);
+	pr_info("Current multiplier = %d\n", current_multiplier);

	/* Print limits */
	max_voltage = hi & 0xff;
-	printk(KERN_INFO "eps: Highest voltage = %dmV\n",
-		max_voltage * 16 + 700);
+	pr_info("Highest voltage = %dmV\n", max_voltage * 16 + 700);
	max_multiplier = (hi >> 8) & 0xff;
-	printk(KERN_INFO "eps: Highest multiplier = %d\n", max_multiplier);
+	pr_info("Highest multiplier = %d\n", max_multiplier);
	min_voltage = (hi >> 16) & 0xff;
-	printk(KERN_INFO "eps: Lowest voltage = %dmV\n",
-		min_voltage * 16 + 700);
+	pr_info("Lowest voltage = %dmV\n", min_voltage * 16 + 700);
	min_multiplier = (hi >> 24) & 0xff;
-	printk(KERN_INFO "eps: Lowest multiplier = %d\n", min_multiplier);
+	pr_info("Lowest multiplier = %d\n", min_multiplier);

	/* Sanity checks */
	if (current_multiplier == 0 || max_multiplier == 0
@@ -276,34 +273,30 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
	/* Check for systems using underclocked CPU */
	if (!freq_failsafe_off && max_multiplier != current_multiplier) {
-		printk(KERN_INFO "eps: Your processor is running at different "
-			"frequency then its maximum. Aborting.\n");
-		printk(KERN_INFO "eps: You can use freq_failsafe_off option "
-			"to disable this check.\n");
+		pr_info("Your processor is running at different frequency then its maximum. Aborting.\n");
+		pr_info("You can use freq_failsafe_off option to disable this check.\n");
		return -EINVAL;
	}
	if (!voltage_failsafe_off && max_voltage != current_voltage) {
-		printk(KERN_INFO "eps: Your processor is running at different "
-			"voltage then its maximum. Aborting.\n");
-		printk(KERN_INFO "eps: You can use voltage_failsafe_off "
-			"option to disable this check.\n");
+		pr_info("Your processor is running at different voltage then its maximum. Aborting.\n");
+		pr_info("You can use voltage_failsafe_off option to disable this check.\n");
		return -EINVAL;
	}

	/* Calc FSB speed */
	fsb = cpu_khz / current_multiplier;

-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
	/* Check for ACPI processor speed limit */
	if (!ignore_acpi_limit && !eps_acpi_init()) {
		if (!acpi_processor_get_bios_limit(policy->cpu, &limit)) {
-			printk(KERN_INFO "eps: ACPI limit %u.%uGHz\n",
+			pr_info("ACPI limit %u.%uGHz\n",
				limit/1000000,
				(limit%1000000)/10000);
			eps_acpi_exit(policy);
			/* Check if max_multiplier is in BIOS limits */
			if (limit && max_multiplier * fsb > limit) {
-				printk(KERN_INFO "eps: Aborting.\n");
+				pr_info("Aborting\n");
				return -EINVAL;
			}
		}
@@ -319,8 +312,7 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
		v = (set_max_voltage - 700) / 16;
		/* Check if voltage is within limits */
		if (v >= min_voltage && v <= max_voltage) {
-			printk(KERN_INFO "eps: Setting %dmV as maximum.\n",
-				v * 16 + 700);
+			pr_info("Setting %dmV as maximum\n", v * 16 + 700);
			max_voltage = v;
		}
	}
@@ -341,7 +333,7 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
	/* Copy basic values */
	centaur->fsb = fsb;
-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
	centaur->bios_limit = limit;
#endif
@@ -426,7 +418,7 @@ module_param(freq_failsafe_off, int, 0644);
MODULE_PARM_DESC(freq_failsafe_off, "Disable current vs max frequency check");
module_param(voltage_failsafe_off, int, 0644);
MODULE_PARM_DESC(voltage_failsafe_off, "Disable current vs max voltage check");
-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
module_param(ignore_acpi_limit, int, 0644);
MODULE_PARM_DESC(ignore_acpi_limit, "Don't check ACPI's processor speed limit");
#endif
...
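The IS_ENABLED() conversions in this file are behavior-preserving: for a tristate Kconfig option, the build system defines CONFIG_FOO when the option is built in and CONFIG_FOO_MODULE when it is built as a module, and IS_ENABLED(CONFIG_FOO) from <linux/kconfig.h> evaluates to 1 in either case. A short illustration (a hypothetical file, not part of the patch):

/* Hypothetical illustration. For tristate CONFIG_ACPI_PROCESSOR, kconfig
 * defines CONFIG_ACPI_PROCESSOR for =y and CONFIG_ACPI_PROCESSOR_MODULE
 * for =m; IS_ENABLED() covers both in one test. */
#include <linux/kconfig.h>

#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)	/* true for =y and =m alike */
/* ACPI-dependent declarations would go here */
#endif

#if IS_BUILTIN(CONFIG_ACPI_PROCESSOR)	/* =y only */
#endif

#if IS_MODULE(CONFIG_ACPI_PROCESSOR)	/* =m only */
#endif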
@@ -16,6 +16,8 @@
 *
 */

+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@@ -185,7 +187,7 @@ static int elanfreq_cpu_init(struct cpufreq_policy *policy)
static int __init elanfreq_setup(char *str)
{
	max_freq = simple_strtoul(str, &str, 0);
-	printk(KERN_WARNING "You're using the deprecated elanfreq command line option. Use elanfreq.max_freq instead, please!\n");
+	pr_warn("You're using the deprecated elanfreq command line option. Use elanfreq.max_freq instead, please!\n");
	return 1;
}
__setup("elanfreq=", elanfreq_setup);
...
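Many of the conversions in this pull rely on the same printk idiom: defining pr_fmt() before any include lets pr_info()/pr_warn()/pr_err() prepend a per-module prefix automatically, which is why the hand-rolled "eps: " strings and PFX macros can go away. A self-contained sketch of the mechanism (the demo module is hypothetical, not from the patch):

/* pr_fmt() must be defined before the printk machinery is pulled in;
 * <linux/printk.h> only supplies a default when it is not already set. */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/module.h>
#include <linux/printk.h>

static int __init prefix_demo_init(void)
{
	/* Built as prefix_demo.ko, this logs at KERN_WARNING:
	 * "prefix_demo: deprecated option used" */
	pr_warn("deprecated option used\n");
	return 0;
}
module_init(prefix_demo_init);

MODULE_LICENSE("GPL");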
/*
 * Hisilicon Platforms Using ACPU CPUFreq Support
 *
 * Copyright (c) 2015 Hisilicon Limited.
 * Copyright (c) 2015 Linaro Limited.
 *
 * Leo Yan <leo.yan@linaro.org>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed "as is" WITHOUT ANY WARRANTY of any
 * kind, whether express or implied; without even the implied warranty
 * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int __init hisi_acpu_cpufreq_driver_init(void)
{
	struct platform_device *pdev;

	if (!of_machine_is_compatible("hisilicon,hi6220"))
		return -ENODEV;

	pdev = platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
	return PTR_ERR_OR_ZERO(pdev);
}
module_init(hisi_acpu_cpufreq_driver_init);

MODULE_AUTHOR("Leo Yan <leo.yan@linaro.org>");
MODULE_DESCRIPTION("Hisilicon acpu cpufreq driver");
MODULE_LICENSE("GPL v2");
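One detail worth noting in this driver: platform_device_register_simple() returns an ERR_PTR()-encoded pointer on failure, and PTR_ERR_OR_ZERO() from <linux/err.h> collapses the usual error-pointer check into a single return. Its behavior is equivalent to the following sketch (the function name is ours, written out only for clarity):

#include <linux/err.h>

/* Equivalent of PTR_ERR_OR_ZERO(ptr), spelled out */
static inline int ptr_err_or_zero_sketch(const void *ptr)
{
	if (IS_ERR(ptr))
		return PTR_ERR(ptr);	/* negative errno encoded in the pointer */
	return 0;			/* valid pointer: success */
}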
@@ -8,6 +8,8 @@
 * Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
 */

+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/module.h>
@@ -118,8 +120,7 @@ processor_get_freq (
	if (ret) {
		set_cpus_allowed_ptr(current, &saved_mask);
-		printk(KERN_WARNING "get performance failed with error %d\n",
-		       ret);
+		pr_warn("get performance failed with error %d\n", ret);
		ret = 0;
		goto migrate_end;
	}
@@ -177,7 +178,7 @@ processor_set_freq (
	ret = processor_set_pstate(value);
	if (ret) {
-		printk(KERN_WARNING "Transition failed with error %d\n", ret);
+		pr_warn("Transition failed with error %d\n", ret);
		retval = -ENODEV;
		goto migrate_end;
	}
@@ -291,8 +292,7 @@ acpi_cpufreq_cpu_init (
	/* notify BIOS that we exist */
	acpi_processor_notify_smm(THIS_MODULE);

-	printk(KERN_INFO "acpi-cpufreq: CPU%u - ACPI performance management "
-		"activated.\n", cpu);
+	pr_info("CPU%u - ACPI performance management activated\n", cpu);

	for (i = 0; i < data->acpi_data.state_count; i++)
		pr_debug(" %cP%d: %d MHz, %d mW, %d uS, %d uS, 0x%x 0x%x\n",
...
@@ -21,6 +21,8 @@
 * BIG FAT DISCLAIMER: Work in progress code. Possibly *dangerous*
 */

+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
@@ -40,8 +42,6 @@
#include "longhaul.h"

-#define PFX "longhaul: "
-
#define TYPE_LONGHAUL_V1 1
#define TYPE_LONGHAUL_V2 2
#define TYPE_POWERSAVER 3
@@ -347,14 +347,13 @@ static int longhaul_setstate(struct cpufreq_policy *policy,
	freqs.new = calc_speed(longhaul_get_cpu_mult());
	/* Check if requested frequency is set. */
	if (unlikely(freqs.new != speed)) {
-		printk(KERN_INFO PFX "Failed to set requested frequency!\n");
+		pr_info("Failed to set requested frequency!\n");
		/* Revision ID = 1 but processor is expecting revision key
		 * equal to 0. Jumpers at the bottom of processor will change
		 * multiplier and FSB, but will not change bits in Longhaul
		 * MSR nor enable voltage scaling. */
		if (!revid_errata) {
-			printk(KERN_INFO PFX "Enabling \"Ignore Revision ID\" "
-				"option.\n");
+			pr_info("Enabling \"Ignore Revision ID\" option\n");
			revid_errata = 1;
			msleep(200);
			goto retry_loop;
@@ -364,11 +363,10 @@ static int longhaul_setstate(struct cpufreq_policy *policy,
		 * but it doesn't change frequency. I tried poking various
		 * bits in northbridge registers, but without success. */
		if (longhaul_flags & USE_ACPI_C3) {
-			printk(KERN_INFO PFX "Disabling ACPI C3 support.\n");
+			pr_info("Disabling ACPI C3 support\n");
			longhaul_flags &= ~USE_ACPI_C3;
			if (revid_errata) {
-				printk(KERN_INFO PFX "Disabling \"Ignore "
-					"Revision ID\" option.\n");
+				pr_info("Disabling \"Ignore Revision ID\" option\n");
				revid_errata = 0;
			}
			msleep(200);
@@ -379,7 +377,7 @@ static int longhaul_setstate(struct cpufreq_policy *policy,
		 * RevID = 1. RevID errata will make things right. Just
		 * to be 100% sure. */
		if (longhaul_version == TYPE_LONGHAUL_V2) {
-			printk(KERN_INFO PFX "Switching to Longhaul ver. 1\n");
+			pr_info("Switching to Longhaul ver. 1\n");
			longhaul_version = TYPE_LONGHAUL_V1;
			msleep(200);
			goto retry_loop;
@@ -387,8 +385,7 @@ static int longhaul_setstate(struct cpufreq_policy *policy,
	}

	if (!bm_timeout) {
-		printk(KERN_INFO PFX "Warning: Timeout while waiting for "
-			"idle PCI bus.\n");
+		pr_info("Warning: Timeout while waiting for idle PCI bus\n");
		return -EBUSY;
	}
@@ -433,12 +430,12 @@ static int longhaul_get_ranges(void)
	/* Get current frequency */
	mult = longhaul_get_cpu_mult();
	if (mult == -1) {
-		printk(KERN_INFO PFX "Invalid (reserved) multiplier!\n");
+		pr_info("Invalid (reserved) multiplier!\n");
		return -EINVAL;
	}
	fsb = guess_fsb(mult);
	if (fsb == 0) {
-		printk(KERN_INFO PFX "Invalid (reserved) FSB!\n");
+		pr_info("Invalid (reserved) FSB!\n");
		return -EINVAL;
	}
	/* Get max multiplier - as we always did.
@@ -468,11 +465,11 @@ static int longhaul_get_ranges(void)
		 print_speed(highest_speed/1000));

	if (lowest_speed == highest_speed) {
-		printk(KERN_INFO PFX "highestspeed == lowest, aborting.\n");
+		pr_info("highestspeed == lowest, aborting\n");
		return -EINVAL;
	}
	if (lowest_speed > highest_speed) {
-		printk(KERN_INFO PFX "nonsense! lowest (%d > %d) !\n",
+		pr_info("nonsense! lowest (%d > %d) !\n",
			lowest_speed, highest_speed);
		return -EINVAL;
	}
@@ -538,16 +535,16 @@ static void longhaul_setup_voltagescaling(void)
	rdmsrl(MSR_VIA_LONGHAUL, longhaul.val);
	if (!(longhaul.bits.RevisionID & 1)) {
-		printk(KERN_INFO PFX "Voltage scaling not supported by CPU.\n");
+		pr_info("Voltage scaling not supported by CPU\n");
		return;
	}

	if (!longhaul.bits.VRMRev) {
-		printk(KERN_INFO PFX "VRM 8.5\n");
+		pr_info("VRM 8.5\n");
		vrm_mV_table = &vrm85_mV[0];
		mV_vrm_table = &mV_vrm85[0];
	} else {
-		printk(KERN_INFO PFX "Mobile VRM\n");
+		pr_info("Mobile VRM\n");
		if (cpu_model < CPU_NEHEMIAH)
			return;
		vrm_mV_table = &mobilevrm_mV[0];
@@ -558,27 +555,21 @@ static void longhaul_setup_voltagescaling(void)
	maxvid = vrm_mV_table[longhaul.bits.MaximumVID];

	if (minvid.mV == 0 || maxvid.mV == 0 || minvid.mV > maxvid.mV) {
-		printk(KERN_INFO PFX "Bogus values Min:%d.%03d Max:%d.%03d. "
-				"Voltage scaling disabled.\n",
-				minvid.mV/1000, minvid.mV%1000,
-				maxvid.mV/1000, maxvid.mV%1000);
+		pr_info("Bogus values Min:%d.%03d Max:%d.%03d - Voltage scaling disabled\n",
+			minvid.mV/1000, minvid.mV%1000,
+			maxvid.mV/1000, maxvid.mV%1000);
		return;
	}

	if (minvid.mV == maxvid.mV) {
-		printk(KERN_INFO PFX "Claims to support voltage scaling but "
-				"min & max are both %d.%03d. "
-				"Voltage scaling disabled\n",
-				maxvid.mV/1000, maxvid.mV%1000);
+		pr_info("Claims to support voltage scaling but min & max are both %d.%03d - Voltage scaling disabled\n",
+			maxvid.mV/1000, maxvid.mV%1000);
		return;
	}

	/* How many voltage steps*/
	numvscales = maxvid.pos - minvid.pos + 1;
-	printk(KERN_INFO PFX
-		"Max VID=%d.%03d "
-		"Min VID=%d.%03d, "
-		"%d possible voltage scales\n",
+	pr_info("Max VID=%d.%03d Min VID=%d.%03d, %d possible voltage scales\n",
		maxvid.mV/1000, maxvid.mV%1000,
		minvid.mV/1000, minvid.mV%1000,
		numvscales);
@@ -617,12 +608,12 @@ static void longhaul_setup_voltagescaling(void)
			pos = minvid.pos;
		freq_pos->driver_data |= mV_vrm_table[pos] << 8;
		vid = vrm_mV_table[mV_vrm_table[pos]];
-		printk(KERN_INFO PFX "f: %d kHz, index: %d, vid: %d mV\n",
+		pr_info("f: %d kHz, index: %d, vid: %d mV\n",
			speed, (int)(freq_pos - longhaul_table), vid.mV);
	}

	can_scale_voltage = 1;
-	printk(KERN_INFO PFX "Voltage scaling enabled.\n");
+	pr_info("Voltage scaling enabled\n");
}
@@ -720,8 +711,7 @@ static int enable_arbiter_disable(void)
		pci_write_config_byte(dev, reg, pci_cmd);
		pci_read_config_byte(dev, reg, &pci_cmd);
		if (!(pci_cmd & 1<<7)) {
-			printk(KERN_ERR PFX
-				"Can't enable access to port 0x22.\n");
+			pr_err("Can't enable access to port 0x22\n");
			status = 0;
		}
	}
@@ -758,8 +748,7 @@ static int longhaul_setup_southbridge(void)
	if (pci_cmd & 1 << 7) {
		pci_read_config_dword(dev, 0x88, &acpi_regs_addr);
		acpi_regs_addr &= 0xff00;
-		printk(KERN_INFO PFX "ACPI I/O at 0x%x\n",
-			acpi_regs_addr);
+		pr_info("ACPI I/O at 0x%x\n", acpi_regs_addr);
	}

	pci_dev_put(dev);
@@ -853,14 +842,14 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
		longhaul_version = TYPE_LONGHAUL_V1;
	}

-	printk(KERN_INFO PFX "VIA %s CPU detected. ", cpuname);
+	pr_info("VIA %s CPU detected. ", cpuname);
	switch (longhaul_version) {
	case TYPE_LONGHAUL_V1:
	case TYPE_LONGHAUL_V2:
-		printk(KERN_CONT "Longhaul v%d supported.\n", longhaul_version);
+		pr_cont("Longhaul v%d supported\n", longhaul_version);
		break;
	case TYPE_POWERSAVER:
-		printk(KERN_CONT "Powersaver supported.\n");
+		pr_cont("Powersaver supported\n");
		break;
	};
@@ -889,15 +878,14 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
	if (!(longhaul_flags & USE_ACPI_C3
	     || longhaul_flags & USE_NORTHBRIDGE)
	    && ((pr == NULL) || !(pr->flags.bm_control))) {
-		printk(KERN_ERR PFX
-			"No ACPI support. Unsupported northbridge.\n");
+		pr_err("No ACPI support: Unsupported northbridge\n");
		return -ENODEV;
	}

	if (longhaul_flags & USE_NORTHBRIDGE)
-		printk(KERN_INFO PFX "Using northbridge support.\n");
+		pr_info("Using northbridge support\n");
	if (longhaul_flags & USE_ACPI_C3)
-		printk(KERN_INFO PFX "Using ACPI support.\n");
+		pr_info("Using ACPI support\n");

	ret = longhaul_get_ranges();
	if (ret != 0)
@@ -934,20 +922,18 @@ static int __init longhaul_init(void)
		return -ENODEV;

	if (!enable) {
-		printk(KERN_ERR PFX "Option \"enable\" not set. Aborting.\n");
+		pr_err("Option \"enable\" not set - Aborting\n");
		return -ENODEV;
	}
#ifdef CONFIG_SMP
	if (num_online_cpus() > 1) {
-		printk(KERN_ERR PFX "More than 1 CPU detected, "
-				"longhaul disabled.\n");
+		pr_err("More than 1 CPU detected, longhaul disabled\n");
		return -ENODEV;
	}
#endif
#ifdef CONFIG_X86_IO_APIC
	if (boot_cpu_has(X86_FEATURE_APIC)) {
-		printk(KERN_ERR PFX "APIC detected. Longhaul is currently "
-			"broken in this configuration.\n");
+		pr_err("APIC detected. Longhaul is currently broken in this configuration.\n");
		return -ENODEV;
	}
#endif
@@ -955,7 +941,7 @@ static int __init longhaul_init(void)
	case 6 ... 9:
		return cpufreq_register_driver(&longhaul_driver);
	case 10:
-		printk(KERN_ERR PFX "Use acpi-cpufreq driver for VIA C7\n");
+		pr_err("Use acpi-cpufreq driver for VIA C7\n");
	default:
		;
	}
...
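The longhaul conversion also shows the counterpart for messages assembled in pieces: printk(KERN_CONT ...) becomes pr_cont(), which appends to the line started by the preceding pr_info(), so the pr_fmt() prefix is emitted only once per line. A hypothetical minimal example (not from the patch):

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/module.h>
#include <linux/printk.h>

static int __init cont_demo_init(void)
{
	/* Produces one log line:
	 * "cont_demo: VIA CPU detected. Longhaul v2 supported" */
	pr_info("VIA CPU detected. ");
	pr_cont("Longhaul v%d supported\n", 2);
	return 0;
}
module_init(cont_demo_init);

MODULE_LICENSE("GPL");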
# Exynos DEVFREQ Event Drivers
+obj-$(CONFIG_DEVFREQ_EVENT_EXYNOS_NOCP) += exynos-nocp.o
obj-$(CONFIG_DEVFREQ_EVENT_EXYNOS_PPMU) += exynos-ppmu.o