Commit 7b9dc3f7 authored by Linus Torvalds

Merge tag 'pm-4.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "Again, cpufreq gets more changes than the other parts this time (one
  new driver, one old driver less, a bunch of enhancements of the
  existing code, new CPU IDs, fixes, cleanups).

  There also are some changes in cpuidle (idle injection rework, a
  couple of new CPU IDs, online/offline rework in intel_idle, fixes and
  cleanups), in the generic power domains framework (mostly related to
  supporting power domains containing CPUs), and in the Operating
  Performance Points (OPP) library (mostly related to supporting devices
  with multiple voltage regulators).

  In addition to that, the system sleep state selection interface is
  modified to make it easier for distributions with unchanged user space
  to support suspend-to-idle as the default system suspend method, some
  issues are fixed in the PM core, the latency tolerance PM QoS
  framework is improved a bit, the Intel RAPL power capping driver is
  cleaned up and there are some fixes and cleanups in the devfreq
  subsystem.

  Specifics:

   - New cpufreq driver for Broadcom STB SoCs and a Device Tree binding
     for it (Markus Mayer)

   - Support for ARM Integrator/AP and Integrator/CP in the generic DT
     cpufreq driver and elimination of the old Integrator cpufreq driver
     (Linus Walleij)

   - Support for the zx296718, r8a7743 and r8a7745, Socionext UniPhier,
     and PXA SoCs in the generic DT cpufreq driver (Baoyou Xie,
     Geert Uytterhoeven, Masahiro Yamada, Robert Jarzmik)

   - cpufreq core fix to eliminate races that may lead to using inactive
     policy objects and related cleanups (Rafael Wysocki)

   - cpufreq schedutil governor update to make it use SCHED_FIFO kernel
     threads (instead of regular workqueues) for doing delayed work (to
     reduce the response latency in some cases) and related cleanups
     (Viresh Kumar)

   - New cpufreq sysfs attribute for resetting statistics (Markus Mayer)

   - cpufreq governors fixes and cleanups (Chen Yu, Stratos Karafotis,
     Viresh Kumar)

   - Support for using generic cpufreq governors in the intel_pstate
     driver (Rafael Wysocki)

   - Support for per-logical-CPU P-state limits and the EPP/EPB (Energy
     Performance Preference/Energy Performance Bias) knobs in the
     intel_pstate driver (Srinivas Pandruvada)

   - New CPU ID for Knights Mill in intel_pstate (Piotr Luc)

   - intel_pstate driver modification to use the P-state selection
     algorithm based on CPU load on platforms with the system profile in
     the ACPI tables set to "mobile" (Srinivas Pandruvada)

   - intel_pstate driver cleanups (Arnd Bergmann, Rafael Wysocki,
     Srinivas Pandruvada)

   - cpufreq powernv driver updates including fast switching support
     (for the schedutil governor), fixes and cleanups (Akshay Adiga,
     Andrew Donnellan, Denis Kirjanov)

   - acpi-cpufreq driver rework to switch it over to the new CPU
     offline/online state machine (Sebastian Andrzej Siewior)

   - Assorted cleanups in cpufreq drivers (Wei Yongjun, Prashanth
     Prakash)

   - Idle injection rework (to make it use the regular idle path instead
     of a home-grown custom one) and related powerclamp thermal driver
     updates (Peter Zijlstra, Jacob Pan, Petr Mladek, Sebastian Andrzej
     Siewior)

   - New CPU IDs for Atom Z34xx and Knights Mill in intel_idle (Andy
     Shevchenko, Piotr Luc)

   - intel_idle driver cleanups and switch over to using the new CPU
     offline/online state machine (Anna-Maria Gleixner, Sebastian
     Andrzej Siewior)

   - cpuidle DT driver update to support suspend-to-idle properly
     (Sudeep Holla)

   - cpuidle core cleanups and misc updates (Daniel Lezcano, Pan Bian,
     Rafael Wysocki)

   - Preliminary support for power domains including CPUs in the generic
     power domains (genpd) framework and related DT bindings (Lina Iyer)

   - Assorted fixes and cleanups in the generic power domains (genpd)
     framework (Colin Ian King, Dan Carpenter, Geert Uytterhoeven)

   - Preliminary support for devices with multiple voltage regulators
     and related fixes and cleanups in the Operating Performance Points
     (OPP) library (Viresh Kumar, Masahiro Yamada, Stephen Boyd)

   - System sleep state selection interface rework to make it easier to
     support suspend-to-idle as the default system suspend method
     (Rafael Wysocki)

   - PM core fixes and cleanups, mostly related to the interactions
     between the system suspend and runtime PM frameworks (Ulf Hansson,
     Sahitya Tummala, Tony Lindgren)

   - Latency tolerance PM QoS framework improvements (Andrew Lutomirski)

   - New Knights Mill CPU ID for the Intel RAPL power capping driver
     (Piotr Luc)

   - Intel RAPL power capping driver fixes, cleanups and switch over to
     using the new CPU offline/online state machine (Jacob Pan, Thomas
     Gleixner, Sebastian Andrzej Siewior)

   - Fixes and cleanups in the exynos-ppmu, exynos-nocp, rk3399_dmc,
     rockchip-dfi devfreq drivers and the devfreq core (Axel Lin,
     Chanwoo Choi, Javier Martinez Canillas, MyungJoo Ham, Viresh Kumar)

   - Fix for false-positive KASAN warnings during resume from ACPI S3
     (suspend-to-RAM) on x86 (Josh Poimboeuf)

   - Memory map verification during resume from hibernation on x86 to
     ensure a consistent address space layout (Chen Yu)

   - Wakeup sources debugging enhancement (Xing Wei)

   - rockchip-io AVS driver cleanup (Shawn Lin)"

* tag 'pm-4.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (127 commits)
  devfreq: rk3399_dmc: Don't use OPP structures outside of RCU locks
  devfreq: rk3399_dmc: Remove dangling rcu_read_unlock()
  devfreq: exynos: Don't use OPP structures outside of RCU locks
  Documentation: intel_pstate: Document HWP energy/performance hints
  cpufreq: intel_pstate: Support for energy performance hints with HWP
  cpufreq: intel_pstate: Add locking around HWP requests
  PM / sleep: Print active wakeup sources when blocking on wakeup_count reads
  PM / core: Fix bug in the error handling of async suspend
  PM / wakeirq: Fix dedicated wakeirq for drivers not using autosuspend
  PM / Domains: Fix compatible for domain idle state
  PM / OPP: Don't WARN on multiple calls to dev_pm_opp_set_regulators()
  PM / OPP: Allow platform specific custom set_opp() callbacks
  PM / OPP: Separate out _generic_set_opp()
  PM / OPP: Add infrastructure to manage multiple regulators
  PM / OPP: Pass struct dev_pm_opp_supply to _set_opp_voltage()
  PM / OPP: Manage supply's voltage/current in a separate structure
  PM / OPP: Don't use OPP structure outside of rcu protected section
  PM / OPP: Reword binding supporting multiple regulators per device
  PM / OPP: Fix incorrect cpu-supply property in binding
  cpuidle: Add a kerneldoc comment to cpuidle_use_deepest_state()
  ..
parents 36869cb9 bbc17bb8
@@ -7,30 +7,35 @@ Description:
 		subsystem.
 
 What:		/sys/power/state
-Date:		May 2014
+Date:		November 2016
 Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
 		The /sys/power/state file controls system sleep states.
 		Reading from this file returns the available sleep state
-		labels, which may be "mem", "standby", "freeze" and "disk"
-		(hibernation). The meanings of the first three labels depend on
-		the relative_sleep_states command line argument as follows:
-		1) relative_sleep_states = 1
-		   "mem", "standby", "freeze" represent non-hibernation sleep
-		   states from the deepest ("mem", always present) to the
-		   shallowest ("freeze"). "standby" and "freeze" may or may
-		   not be present depending on the capabilities of the
-		   platform. "freeze" can only be present if "standby" is
-		   present.
-		2) relative_sleep_states = 0 (default)
-		   "mem" - "suspend-to-RAM", present if supported.
-		   "standby" - "power-on suspend", present if supported.
-		   "freeze" - "suspend-to-idle", always present.
-
-		Writing to this file one of these strings causes the system to
-		transition into the corresponding state, if available. See
-		Documentation/power/states.txt for a description of what
-		"suspend-to-RAM", "power-on suspend" and "suspend-to-idle" mean.
+		labels, which may be "mem" (suspend), "standby" (power-on
+		suspend), "freeze" (suspend-to-idle) and "disk" (hibernation).
+
+		Writing one of the above strings to this file causes the system
+		to transition into the corresponding state, if available.
+
+		See Documentation/power/states.txt for more information.
+
+What:		/sys/power/mem_sleep
+Date:		November 2016
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
+Description:
+		The /sys/power/mem_sleep file controls the operating mode of
+		system suspend. Reading from it returns the available modes
+		as "s2idle" (always present), "shallow" and "deep" (present if
+		supported). The mode that will be used on subsequent attempts
+		to suspend the system (by writing "mem" to the /sys/power/state
+		file described above) is enclosed in square brackets.
+
+		Writing one of the above strings to this file causes the mode
+		represented by it to be used on subsequent attempts to suspend
+		the system.
+
+		See Documentation/power/states.txt for more information.
 
 What:		/sys/power/disk
 Date:		September 2006
...
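For a concrete feel of the interface described in the hunk above, a shell
session might look like this (illustrative only; the bracketed entry marks the
currently selected suspend mode, and the modes offered depend on the platform):

    $ cat /sys/power/mem_sleep
    s2idle [deep]
    # echo s2idle > /sys/power/mem_sleep
    $ cat /sys/power/mem_sleep
    [s2idle] deep
    # echo mem > /sys/power/state    # now suspends via suspend-to-idle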
@@ -1560,6 +1560,12 @@
 		disable
 		  Do not enable intel_pstate as the default
 		  scaling driver for the supported processors
+		passive
+		  Use intel_pstate as a scaling driver, but configure it
+		  to work with generic cpufreq governors (instead of
+		  enabling its internal governor). This mode cannot be
+		  used along with the hardware-managed P-states (HWP)
+		  feature.
 		force
 		  Enable intel_pstate on systems that prohibit it by default
 		  in favor of acpi-cpufreq. Forcing the intel_pstate driver
@@ -1580,6 +1586,9 @@
 		  Description Table, specifies preferred power management
 		  profile as "Enterprise Server" or "Performance Server",
 		  then this feature is turned on by default.
+		per_cpu_perf_limits
+		  Allow per-logical-CPU P-State performance control limits using
+		  cpufreq sysfs interface
 
 	intremap=	[X86-64, Intel-IOMMU]
 		on	enable Interrupt Remapping (default)
@@ -2122,6 +2131,12 @@
 		memory contents and reserves bad memory
 		regions that are detected.
 
+	mem_sleep_default=	[SUSPEND] Default system suspend mode:
+		s2idle  - Suspend-To-Idle
+		shallow - Power-On Suspend or equivalent (if supported)
+		deep    - Suspend-To-RAM or equivalent (if supported)
+		See Documentation/power/states.txt.
+
 	meye.*=		[HW] Set MotionEye Camera parameters
 		See Documentation/video4linux/meye.txt.
@@ -3475,13 +3490,6 @@
 		[KNL, SMP] Set scheduler's default relax_domain_level.
 		See Documentation/cgroup-v1/cpusets.txt.
 
-	relative_sleep_states=
-		[SUSPEND] Use sleep state labeling where the deepest
-		state available other than hibernation is always "mem".
-		Format: { "0" | "1" }
-		0 -- Traditional sleep state labels.
-		1 -- Relative sleep state labels.
-
 	reserve=	[KNL,BUGS] Force the kernel to ignore some iomem area
 
 	reservetop=	[X86-32]
...
@@ -44,11 +44,17 @@ the stats driver insertion.
 total 0
 drwxr-xr-x 2 root root    0 May 14 16:06 .
 drwxr-xr-x 3 root root    0 May 14 15:58 ..
+--w------- 1 root root 4096 May 14 16:06 reset
 -r--r--r-- 1 root root 4096 May 14 16:06 time_in_state
 -r--r--r-- 1 root root 4096 May 14 16:06 total_trans
 -r--r--r-- 1 root root 4096 May 14 16:06 trans_table
 --------------------------------------------------------------------------------
 
+-  reset
+Write-only attribute that can be used to reset the stat counters. This can be
+useful for evaluating system behaviour under different governors without the
+need for a reboot.
+
 -  time_in_state
 This gives the amount of time spent in each of the frequencies supported by
 this CPU. The cat output will have "<frequency> <time>" pair in each line, which
...
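As a sketch of how the new attribute might be used (assuming, as the
description above suggests, that any write triggers the reset; the counter
values are made up):

    $ cat /sys/devices/system/cpu/cpu0/cpufreq/stats/total_trans
    152
    # echo 1 > /sys/devices/system/cpu/cpu0/cpufreq/stats/reset
    $ cat /sys/devices/system/cpu/cpu0/cpufreq/stats/total_trans
    0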
@@ -48,7 +48,7 @@ In addition to the frequency-controlling interfaces provided by the cpufreq
 core, the driver provides its own sysfs files to control the P-State selection.
 These files have been added to /sys/devices/system/cpu/intel_pstate/.
 Any changes made to these files are applicable to all CPUs (even in a
-multi-package system).
+multi-package system; refer to the "Per-CPU limits" section below).
 
     max_perf_pct: Limits the maximum P-State that will be requested by
     the driver. It states it as a percentage of the available performance. The
@@ -120,13 +120,57 @@ frequency is fictional for Intel Core processors. Even if the scaling
 driver selects a single P-State, the actual frequency the processor
 will run at is selected by the processor itself.
 
+Per-CPU limits
+
+The kernel command line option "intel_pstate=per_cpu_perf_limits" forces
+the intel_pstate driver to use per-CPU performance limits. When it is set,
+the sysfs control interface described above is subject to limitations.
+- The following controls are not available for either read or write:
+	/sys/devices/system/cpu/intel_pstate/max_perf_pct
+	/sys/devices/system/cpu/intel_pstate/min_perf_pct
+- The following controls can be used to set performance limits, as far as the
+  architecture of the processor permits:
+	/sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq
+	/sys/devices/system/cpu/cpu*/cpufreq/scaling_min_freq
+	/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
+- The user can still observe the turbo percentage and the number of P-States:
+	/sys/devices/system/cpu/intel_pstate/turbo_pct
+	/sys/devices/system/cpu/intel_pstate/num_pstates
+- The user can read and write the system-wide turbo status:
+	/sys/devices/system/cpu/no_turbo
+
+Support of energy performance hints
+
+It is possible to provide hints to the HWP algorithms in the processor,
+ranging from more performance centric to more energy centric. When the
+driver is using HWP, two additional cpufreq sysfs attributes are presented
+for each logical CPU:
+	- energy_performance_available_preferences
+	- energy_performance_preference
+
+To get the list of supported hints:
+$ cat energy_performance_available_preferences
+    default performance balance_performance balance_power power
+
+The current preference can be read or changed via the cpufreq sysfs
+attribute "energy_performance_preference". Reading from this attribute
+will display the current effective setting. The user can write any of the
+valid preference strings to this attribute, and can always restore the
+power-on default by writing "default".
+
+Since threads can migrate to different CPUs, it is possible that the new
+CPU has a different energy performance preference than the previous one.
+To avoid such issues, either pin threads to specific CPUs or set the same
+energy performance preference value on all CPUs.
+
 Tuning Intel P-State driver
 
-When HWP mode is not used, debugfs files have also been added to allow the
-tuning of the internal governor algorithm. These files are located at
-/sys/kernel/debug/pstate_snb/. The algorithm uses a PID (Proportional
-Integral Derivative) controller. The PID tunable parameters are:
+When the performance can be tuned using a PID (Proportional Integral
+Derivative) controller, debugfs files are provided for adjusting performance.
+They are presented under:
+/sys/kernel/debug/pstate_snb/
+
+The PID tunable parameters are:
       deadband
       d_gain_pct
       i_gain_pct
...
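A brief shell sketch combining the two mechanisms documented above; the CPU
number and frequency value are arbitrary, and the energy_performance_*
attributes only appear when the driver is using HWP:

    $ grep -o 'intel_pstate=[a-z_]*' /proc/cmdline
    intel_pstate=per_cpu_perf_limits
    # echo 1800000 > /sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq
    $ cat /sys/devices/system/cpu/cpu2/cpufreq/energy_performance_available_preferences
    default performance balance_performance balance_power power
    # echo balance_power > /sys/devices/system/cpu/cpu2/cpufreq/energy_performance_preference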
Broadcom AVS mail box and interrupt register bindings
=====================================================
A total of three DT nodes are required. One node (brcm,avs-cpu-data-mem)
references the mailbox register used to communicate with the AVS CPU[1]. The
second node (brcm,avs-cpu-l2-intr) is required to trigger an interrupt on
the AVS CPU. The interrupt tells the AVS CPU that it needs to process a
command sent to it by a driver. Interrupting the AVS CPU is mandatory for
commands to be processed.
The interface also requires a reference to the AVS host interrupt controller,
so a driver can react to interrupts generated by the AVS CPU whenever a command
has been processed. See [2] for more information on the brcm,l2-intc node.
[1] The AVS CPU is an independent co-processor that runs proprietary
firmware. On some SoCs, this firmware supports DFS and DVFS in addition to
Adaptive Voltage Scaling.
[2] Documentation/devicetree/bindings/interrupt-controller/brcm,l2-intc.txt
Node brcm,avs-cpu-data-mem
--------------------------
Required properties:
- compatible: must include: brcm,avs-cpu-data-mem and
should include: one of brcm,bcm7271-avs-cpu-data-mem or
brcm,bcm7268-avs-cpu-data-mem
- reg: Specifies base physical address and size of the registers.
- interrupts: The interrupt that the AVS CPU will use to interrupt the host
  when a command has completed.
- interrupt-parent: The interrupt controller the above interrupt is routed
through.
- interrupt-names: The name of the interrupt used to interrupt the host.
Optional properties:
- None
Node brcm,avs-cpu-l2-intr
-------------------------
Required properties:
- compatible: must include: brcm,avs-cpu-l2-intr and
should include: one of brcm,bcm7271-avs-cpu-l2-intr or
brcm,bcm7268-avs-cpu-l2-intr
- reg: Specifies base physical address and size of the registers.
Optional properties:
- None
Example
=======
avs_host_l2_intc: interrupt-controller@f04d1200 {
#interrupt-cells = <1>;
compatible = "brcm,l2-intc";
interrupt-parent = <&intc>;
reg = <0xf04d1200 0x48>;
interrupt-controller;
interrupts = <0x0 0x19 0x0>;
interrupt-names = "avs";
};
avs-cpu-data-mem@f04c4000 {
compatible = "brcm,bcm7271-avs-cpu-data-mem",
"brcm,avs-cpu-data-mem";
reg = <0xf04c4000 0x60>;
interrupts = <0x1a>;
interrupt-parent = <&avs_host_l2_intc>;
interrupt-names = "sw_intr";
};
avs-cpu-l2-intr@f04d1100 {
compatible = "brcm,bcm7271-avs-cpu-l2-intr",
"brcm,avs-cpu-l2-intr";
reg = <0xf04d1100 0x10>;
};
@@ -86,8 +86,14 @@ Optional properties:
   Single entry is for target voltage and three entries are for <target min max>
   voltages.
 
-  Entries for multiple regulators must be present in the same order as
-  regulators are specified in device's DT node.
+  Entries for multiple regulators shall be provided in the same field separated
+  by angular brackets <>. The OPP binding doesn't provide any provisions to
+  relate the values to their power supplies or the order in which the supplies
+  need to be configured and that is left for the implementation specific
+  binding.
+
+  Entries for all regulators shall be of the same size, i.e. either all use a
+  single value or triplets.
 
 - opp-microvolt-<name>: Named opp-microvolt property. This is exactly similar to
   the above opp-microvolt property, but allows multiple voltage ranges to be
@@ -104,10 +110,13 @@ Optional properties:
 
   Should only be set if opp-microvolt is set for the OPP.
 
-  Entries for multiple regulators must be present in the same order as
-  regulators are specified in device's DT node. If this property isn't required
-  for few regulators, then this should be marked as zero for them. If it isn't
-  required for any regulator, then this property need not be present.
+  Entries for multiple regulators shall be provided in the same field separated
+  by angular brackets <>. If current values aren't required for a regulator,
+  then it shall be filled with 0. If current values aren't required for any of
+  the regulators, then this field is not required. The OPP binding doesn't
+  provide any provisions to relate the values to their power supplies or the
+  order in which the supplies need to be configured and that is left for the
+  implementation specific binding.
 
 - opp-microamp-<name>: Named opp-microamp property. Similar to
   opp-microvolt-<name> property, but for microamp instead.
@@ -386,10 +395,12 @@ Example 4: Handling multiple regulators
 / {
 	cpus {
 		cpu@0 {
-			compatible = "arm,cortex-a7";
+			compatible = "vendor,cpu-type";
 			...
 
-			cpu-supply = <&cpu_supply0>, <&cpu_supply1>, <&cpu_supply2>;
+			vcc0-supply = <&cpu_supply0>;
+			vcc1-supply = <&cpu_supply1>;
+			vcc2-supply = <&cpu_supply2>;
 
 			operating-points-v2 = <&cpu0_opp_table>;
 		};
 	};
...
PM Domain Idle State Node:
A domain idle state node represents the state parameters that will be used to
select the state when there are no active components in the domain.
The state node has the following parameters -
- compatible:
Usage: Required
Value type: <string>
Definition: Must be "domain-idle-state".
- entry-latency-us
Usage: Required
Value type: <prop-encoded-array>
Definition: u32 value representing worst case latency in
microseconds required to enter the idle state.
The exit-latency-us duration may be guaranteed
only after entry-latency-us has passed.
- exit-latency-us
Usage: Required
Value type: <prop-encoded-array>
Definition: u32 value representing worst case latency
in microseconds required to exit the idle state.
- min-residency-us
Usage: Required
Value type: <prop-encoded-array>
Definition: u32 value representing minimum residency duration
in microseconds after which the idle state will yield
power benefits after overcoming the overhead in entering
the idle state.
@@ -29,6 +29,15 @@ Optional properties:
   specified by this binding. More details about power domain specifier are
   available in the next section.
 
- domain-idle-states : A phandle of an idle-state that shall be soaked into a
generic domain power state. The idle state definitions are
compatible with domain-idle-state specified in [1].
The domain-idle-state property reflects the idle state of this PM domain and
not the idle states of the devices or sub-domains in the PM domain. Devices
and sub-domains have their own idle-states independent of the parent
domain's idle states. In the absence of this property, the domain would be
considered as capable of being powered-on or powered-off.
Example:

	power: power-controller@12340000 {
@@ -59,6 +68,38 @@ The nodes above define two power controllers: 'parent' and 'child'.
 Domains created by the 'child' power controller are subdomains of '0' power
 domain provided by the 'parent' power controller.
Example 3:
parent: power-controller@12340000 {
compatible = "foo,power-controller";
reg = <0x12340000 0x1000>;
#power-domain-cells = <0>;
domain-idle-states = <&DOMAIN_RET>, <&DOMAIN_PWR_DN>;
};
child: power-controller@12341000 {
compatible = "foo,power-controller";
reg = <0x12341000 0x1000>;
power-domains = <&parent 0>;
#power-domain-cells = <0>;
domain-idle-states = <&DOMAIN_PWR_DN>;
};
DOMAIN_RET: state@0 {
compatible = "domain-idle-state";
reg = <0x0>;
entry-latency-us = <1000>;
exit-latency-us = <2000>;
min-residency-us = <10000>;
};
DOMAIN_PWR_DN: state@1 {
compatible = "domain-idle-state";
reg = <0x1>;
entry-latency-us = <5000>;
exit-latency-us = <8000>;
min-residency-us = <7000>;
};
==PM domain consumers==

Required properties:
@@ -76,3 +117,5 @@ Example:
 The node above defines a typical PM domain consumer device, which is located
 inside a PM domain with index 0 of a power controller represented by a node
 with the label "power".
+
+[1]. Documentation/devicetree/bindings/power/domain-idle-state.txt
@@ -607,7 +607,9 @@ individually. Instead, a set of devices sharing a power resource can be put
 into a low-power state together at the same time by turning off the shared
 power resource. Of course, they also need to be put into the full-power state
 together, by turning the shared power resource on. A set of devices with this
-property is often referred to as a power domain.
+property is often referred to as a power domain. A power domain may also be
+nested inside another power domain. The nested domain is referred to as the
+sub-domain of the parent domain.
 
 Support for power domains is provided through the pm_domain field of struct
 device. This field is a pointer to an object of type struct dev_pm_domain,
@@ -629,6 +631,16 @@ support for power domains into subsystem-level callbacks, for example by
 modifying the platform bus type. Other platforms need not implement it or take
 it into account in any way.
 
+Devices may be defined as IRQ-safe which indicates to the PM core that their
+runtime PM callbacks may be invoked with disabled interrupts (see
+Documentation/power/runtime_pm.txt for more information). If an IRQ-safe
+device belongs to a PM domain, the runtime PM of the domain will be
+disallowed, unless the domain itself is defined as IRQ-safe. However, it
+makes sense to define a PM domain as IRQ-safe only if all the devices in it
+are IRQ-safe. Moreover, if an IRQ-safe domain has a parent domain, the runtime
+PM of the parent is only allowed if the parent itself is IRQ-safe too with the
+additional restriction that all child domains of an IRQ-safe parent must also
+be IRQ-safe.
+
 Device Low Power (suspend) States
 ---------------------------------
...
@@ -8,25 +8,43 @@ for each state.
 
 The states are represented by strings that can be read or written to the
 /sys/power/state file. Those strings may be "mem", "standby", "freeze" and
-"disk", where the last one always represents hibernation (Suspend-To-Disk) and
-the meaning of the remaining ones depends on the relative_sleep_states command
-line argument.
-
-For relative_sleep_states=1, the strings "mem", "standby" and "freeze" label the
-available non-hibernation sleep states from the deepest to the shallowest,
-respectively. In that case, "mem" is always present in /sys/power/state,
-because there is at least one non-hibernation sleep state in every system. If
-the given system supports two non-hibernation sleep states, "standby" is present
-in /sys/power/state in addition to "mem". If the system supports three
-non-hibernation sleep states, "freeze" will be present in /sys/power/state in
-addition to "mem" and "standby".
-
-For relative_sleep_states=0, which is the default, the following descriptions
-apply.
-
-state:		Suspend-To-Idle
+"disk", where the last three always represent Power-On Suspend (if supported),
+Suspend-To-Idle and hibernation (Suspend-To-Disk), respectively.
+
+The meaning of the "mem" string is controlled by the /sys/power/mem_sleep file.
+It contains strings representing the available modes of system suspend that may
+be triggered by writing "mem" to /sys/power/state. These modes are "s2idle"
+(Suspend-To-Idle), "shallow" (Power-On Suspend) and "deep" (Suspend-To-RAM).
+The "s2idle" mode is always available, while the other ones are only available
+if supported by the platform (if not supported, the strings representing them
+are not present in /sys/power/mem_sleep). The string representing the suspend
+mode to be used subsequently is enclosed in square brackets. Writing one of
+the other strings present in /sys/power/mem_sleep to it causes the suspend mode
+to be used subsequently to change to the one represented by that string.
+
+Consequently, there are two ways to cause the system to go into the
+Suspend-To-Idle sleep state. The first one is to write "freeze" directly to
+/sys/power/state. The second one is to write "s2idle" to /sys/power/mem_sleep
+and then to write "mem" to /sys/power/state. Similarly, there are two ways
+to cause the system to go into the Power-On Suspend sleep state (the strings to
+write to the control files in that case are "standby" or "shallow" and "mem",
+respectively) if that state is supported by the platform. In turn, there is
+only one way to cause the system to go into the Suspend-To-RAM state (write
+"deep" into /sys/power/mem_sleep and "mem" into /sys/power/state).
+
+The default suspend mode (ie. the one to be used without writing anything into
+/sys/power/mem_sleep) is either "deep" (if Suspend-To-RAM is supported) or
+"s2idle", but it can be overridden by the value of the "mem_sleep_default"
+parameter in the kernel command line. On some ACPI-based systems, depending on
+the information in the FADT, the default may be "s2idle" even if Suspend-To-RAM
+is supported.
+
+The properties of all of the sleep states are described below.
+
+
+State:		Suspend-To-Idle
 ACPI state:	S0
-Label:		"freeze"
+Label:		"s2idle" ("freeze")
 
 This state is a generic, pure software, light-weight, system sleep state.
 It allows more energy to be saved relative to runtime idle by freezing user
@@ -35,13 +53,13 @@ lower-power than available at run time), such that the processors can
 spend more time in their idle states.
 
 This state can be used for platforms without Power-On Suspend/Suspend-to-RAM
-support, or it can be used in addition to Suspend-to-RAM (memory sleep)
-to provide reduced resume latency. It is always supported.
+support, or it can be used in addition to Suspend-to-RAM to provide reduced
+resume latency. It is always supported.
 
 
 State:		Standby / Power-On Suspend
 ACPI State:	S1
-Label:		"standby"
+Label:		"shallow" ("standby")
 
 This state, if supported, offers moderate, though real, power savings, while
 providing a relatively low-latency transition back to a working system. No
@@ -58,7 +76,7 @@ state.
 
 State:		Suspend-to-RAM
 ACPI State:	S3
-Label:		"mem"
+Label:		"deep"
 
 This state, if supported, offers significant power savings as everything in the
 system is put into a low-power state, except for memory, which should be placed
...
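To make the two equivalent paths into Suspend-To-Idle concrete (an
illustrative root shell; both sequences end up in the same state):

    # echo freeze > /sys/power/state          # direct

    # echo s2idle > /sys/power/mem_sleep      # select the suspend mode...
    # echo mem > /sys/power/state             # ...then suspend

Booting with mem_sleep_default=s2idle would make the second path the default
meaning of "mem".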
@@ -2764,6 +2764,14 @@ L:	bcm-kernel-feedback-list@broadcom.com
 S:	Maintained
 F:	drivers/mtd/nand/brcmnand/
 
+BROADCOM STB AVS CPUFREQ DRIVER
+M:	Markus Mayer <mmayer@broadcom.com>
+M:	bcm-kernel-feedback-list@broadcom.com
+L:	linux-pm@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/cpufreq/brcm,stb-avs-cpu-freq.txt
+F:	drivers/cpufreq/brcmstb*
+
 BROADCOM SPECIFIC AMBA DRIVER (BCMA)
 M:	Rafał Miłecki <zajec5@gmail.com>
 L:	linux-wireless@vger.kernel.org
@@ -3356,6 +3364,7 @@ L:	linux-pm@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
 T:	git git://git.linaro.org/people/vireshk/linux.git (For ARM Updates)
+B:	https://bugzilla.kernel.org
 F:	Documentation/cpu-freq/
 F:	drivers/cpufreq/
 F:	include/linux/cpufreq.h
@@ -3395,6 +3404,7 @@ M:	Daniel Lezcano <daniel.lezcano@linaro.org>
 L:	linux-pm@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
+B:	https://bugzilla.kernel.org
 F:	drivers/cpuidle/*
 F:	include/linux/cpuidle.h
@@ -6362,9 +6372,11 @@ S:	Maintained
 F:	drivers/platform/x86/intel-vbtn.c
 
 INTEL IDLE DRIVER
+M:	Jacob Pan <jacob.jun.pan@linux.intel.com>
 M:	Len Brown <lenb@kernel.org>
 L:	linux-pm@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux.git
+B:	https://bugzilla.kernel.org
 S:	Supported
 F:	drivers/idle/intel_idle.c
...
@@ -380,13 +380,6 @@ static struct pu_domain imx6q_pu_domain = {
 		.name = "PU",
 		.power_off = imx6q_pm_pu_power_off,
 		.power_on = imx6q_pm_pu_power_on,
-		.states = {
-			[0] = {
-				.power_off_latency_ns = 25000,
-				.power_on_latency_ns = 2000000,
-			},
-		},
-		.state_count = 1,
 	},
 };
 
@@ -430,6 +423,16 @@ static int imx_gpc_genpd_init(struct device *dev, struct regulator *pu_reg)
 	if (!IS_ENABLED(CONFIG_PM_GENERIC_DOMAINS))
 		return 0;
 
+	imx6q_pu_domain.base.states = devm_kzalloc(dev,
+					sizeof(*imx6q_pu_domain.base.states),
+					GFP_KERNEL);
+	if (!imx6q_pu_domain.base.states)
+		return -ENOMEM;
+
+	imx6q_pu_domain.base.states[0].power_off_latency_ns = 25000;
+	imx6q_pu_domain.base.states[0].power_on_latency_ns = 2000000;
+	imx6q_pu_domain.base.state_count = 1;
+
 	for (i = 0; i < ARRAY_SIZE(imx_gpc_domains); i++)
 		pm_genpd_init(imx_gpc_domains[i], NULL, false);
...
@@ -109,6 +109,15 @@ ENTRY(do_suspend_lowlevel)
 	movq	pt_regs_r14(%rax), %r14
 	movq	pt_regs_r15(%rax), %r15
 
+#ifdef CONFIG_KASAN
+	/*
+	 * The suspend path may have poisoned some areas deeper in the stack,
+	 * which we now need to unpoison.
+	 */
+	movq	%rsp, %rdi
+	call	kasan_unpoison_task_stack_below
+#endif
+
 	xorl	%eax, %eax
 	addq	$8, %rsp
 	FRAME_END
...
@@ -11,6 +11,10 @@
 #include <linux/gfp.h>
 #include <linux/smp.h>
 #include <linux/suspend.h>
+#include <linux/scatterlist.h>
+#include <linux/kdebug.h>
+
+#include <crypto/hash.h>
 
 #include <asm/init.h>
 #include <asm/proto.h>
@@ -177,14 +181,86 @@ int pfn_is_nosave(unsigned long pfn)
 	return (pfn >= nosave_begin_pfn) && (pfn < nosave_end_pfn);
 }
 
+#define MD5_DIGEST_SIZE 16
+
 struct restore_data_record {
 	unsigned long jump_address;
 	unsigned long jump_address_phys;
 	unsigned long cr3;
 	unsigned long magic;
+	u8 e820_digest[MD5_DIGEST_SIZE];
 };
 
-#define RESTORE_MAGIC	0x123456789ABCDEF0UL
+#define RESTORE_MAGIC	0x23456789ABCDEF01UL
+
+#if IS_BUILTIN(CONFIG_CRYPTO_MD5)
+/**
+ * get_e820_md5 - calculate md5 according to given e820 map
+ *
+ * @map: the e820 map to be calculated
+ * @buf: the md5 result to be stored to
+ */
+static int get_e820_md5(struct e820map *map, void *buf)
+{
+	struct scatterlist sg;
+	struct crypto_ahash *tfm;
+	int size;
+	int ret = 0;
+
+	tfm = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
+	if (IS_ERR(tfm))
+		return -ENOMEM;
+
+	{
+		AHASH_REQUEST_ON_STACK(req, tfm);
+
+		size = offsetof(struct e820map, map)
+			+ sizeof(struct e820entry) * map->nr_map;
+
+		ahash_request_set_tfm(req, tfm);
+		sg_init_one(&sg, (u8 *)map, size);
+		ahash_request_set_callback(req, 0, NULL, NULL);
+		ahash_request_set_crypt(req, &sg, buf, size);
+
+		if (crypto_ahash_digest(req))
+			ret = -EINVAL;
+
+		ahash_request_zero(req);
+	}
+
+	crypto_free_ahash(tfm);
+
+	return ret;
+}
+
+static void hibernation_e820_save(void *buf)
+{
+	get_e820_md5(e820_saved, buf);
+}
+
+static bool hibernation_e820_mismatch(void *buf)
+{
+	int ret;
+	u8 result[MD5_DIGEST_SIZE];
+
+	memset(result, 0, MD5_DIGEST_SIZE);
+	/* If there is no digest in suspend kernel, let it go. */
+	if (!memcmp(result, buf, MD5_DIGEST_SIZE))
+		return false;
+
+	ret = get_e820_md5(e820_saved, result);
+	if (ret)
+		return true;
+
+	return memcmp(result, buf, MD5_DIGEST_SIZE) ? true : false;
+}
+#else
+static void hibernation_e820_save(void *buf)
+{
+}
+
+static bool hibernation_e820_mismatch(void *buf)
+{
+	/* If md5 is not builtin for restore kernel, let it go. */
+	return false;
+}
+#endif
 
 /**
  * arch_hibernation_header_save - populate the architecture specific part
@@ -201,6 +277,9 @@ int arch_hibernation_header_save(void *addr, unsigned int max_size)
 	rdr->jump_address_phys = __pa_symbol(&restore_registers);
 	rdr->cr3 = restore_cr3;
 	rdr->magic = RESTORE_MAGIC;
+
+	hibernation_e820_save(rdr->e820_digest);
+
 	return 0;
 }
@@ -216,5 +295,16 @@ int arch_hibernation_header_restore(void *addr)
 	restore_jump_address = rdr->jump_address;
 	jump_address_phys = rdr->jump_address_phys;
 	restore_cr3 = rdr->cr3;
-	return (rdr->magic == RESTORE_MAGIC) ? 0 : -EINVAL;
+
+	if (rdr->magic != RESTORE_MAGIC) {
+		pr_crit("Unrecognized hibernate image header format!\n");
+		return -EINVAL;
+	}
+
+	if (hibernation_e820_mismatch(rdr->e820_digest)) {
+		pr_crit("Hibernate inconsistent memory map detected!\n");
+		return -ENODEV;
+	}
+
+	return 0;
 }
@@ -157,7 +157,7 @@ static void acpi_processor_ppc_ost(acpi_handle handle, int status)
 				  status, NULL);
 }
 
-int acpi_processor_ppc_has_changed(struct acpi_processor *pr, int event_flag)
+void acpi_processor_ppc_has_changed(struct acpi_processor *pr, int event_flag)
 {
 	int ret;
 
@@ -168,7 +168,7 @@ int acpi_processor_ppc_has_changed(struct acpi_processor *pr, int event_flag)
 		 */
 		if (event_flag)
 			acpi_processor_ppc_ost(pr->handle, 1);
-		return 0;
+		return;
 	}
 
 	ret = acpi_processor_get_platform_limit(pr);
@@ -182,10 +182,8 @@ int acpi_processor_ppc_has_changed(struct acpi_processor *pr, int event_flag)
 		else
 			acpi_processor_ppc_ost(pr->handle, 0);
 	}
-	if (ret < 0)
-		return (ret);
-	else
-		return cpufreq_update_policy(pr->id);
+	if (ret >= 0)
+		cpufreq_update_policy(pr->id);
 }
 
 int acpi_processor_get_bios_limit(int cpu, unsigned int *limit)
@@ -465,11 +463,33 @@ int acpi_processor_get_performance_info(struct acpi_processor *pr)
 	return result;
 }
 EXPORT_SYMBOL_GPL(acpi_processor_get_performance_info);
-int acpi_processor_notify_smm(struct module *calling_module)
+
+int acpi_processor_pstate_control(void)
 {
 	acpi_status status;
-	static int is_done = 0;
+
+	if (!acpi_gbl_FADT.smi_command || !acpi_gbl_FADT.pstate_control)
+		return 0;
+
+	ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+			  "Writing pstate_control [0x%x] to smi_command [0x%x]\n",
+			  acpi_gbl_FADT.pstate_control, acpi_gbl_FADT.smi_command));
+
+	status = acpi_os_write_port(acpi_gbl_FADT.smi_command,
+				    (u32)acpi_gbl_FADT.pstate_control, 8);
+	if (ACPI_SUCCESS(status))
+		return 1;
+
+	ACPI_EXCEPTION((AE_INFO, status,
+			"Failed to write pstate_control [0x%x] to smi_command [0x%x]",
+			acpi_gbl_FADT.pstate_control, acpi_gbl_FADT.smi_command));
+	return -EIO;
+}
+
+int acpi_processor_notify_smm(struct module *calling_module)
+{
+	static int is_done = 0;
+	int result;
 
 	if (!(acpi_processor_ppc_status & PPC_REGISTERED))
 		return -EBUSY;
@@ -492,26 +512,15 @@ int acpi_processor_notify_smm(struct module *calling_module)
 		is_done = -EIO;
 
-	/* Can't write pstate_control to smi_command if either value is zero */
-	if ((!acpi_gbl_FADT.smi_command) || (!acpi_gbl_FADT.pstate_control)) {
-		ACPI_DEBUG_PRINT((ACPI_DB_INFO, "No SMI port or pstate_control\n"));
+	result = acpi_processor_pstate_control();
+	if (!result) {
+		ACPI_DEBUG_PRINT((ACPI_DB_INFO, "No SMI port or pstate_control\n"));
 		module_put(calling_module);
 		return 0;
 	}
-
-	ACPI_DEBUG_PRINT((ACPI_DB_INFO,
-			  "Writing pstate_control [0x%x] to smi_command [0x%x]\n",
-			  acpi_gbl_FADT.pstate_control, acpi_gbl_FADT.smi_command));
-
-	status = acpi_os_write_port(acpi_gbl_FADT.smi_command,
-				    (u32) acpi_gbl_FADT.pstate_control, 8);
-	if (ACPI_FAILURE(status)) {
-		ACPI_EXCEPTION((AE_INFO, status,
-				"Failed to write pstate_control [0x%x] to "
-				"smi_command [0x%x]", acpi_gbl_FADT.pstate_control,
-				acpi_gbl_FADT.smi_command));
+	if (result < 0) {
 		module_put(calling_module);
-		return status;
+		return result;
 	}
 
 	/* Success. If there's no _PPC, we need to fear nothing, so
...
@@ -674,6 +674,14 @@ static void acpi_sleep_suspend_setup(void)
 		if (acpi_sleep_state_supported(i))
 			sleep_states[i] = 1;
 
+	/*
+	 * Use suspend-to-idle by default if ACPI_FADT_LOW_POWER_S0 is set and
+	 * the default suspend mode was not selected from the command line.
+	 */
+	if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0 &&
+	    mem_sleep_default > PM_SUSPEND_MEM)
+		mem_sleep_default = PM_SUSPEND_FREEZE;
+
 	suspend_set_ops(old_suspend_ordering ?
 		&acpi_suspend_ops_old : &acpi_suspend_ops);
 	freeze_set_ops(&acpi_freeze_ops);
...
@@ -1460,10 +1460,10 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
 	dpm_watchdog_clear(&wd);
 
  Complete:
-	complete_all(&dev->power.completion);
 	if (error)
 		async_error = error;
 
+	complete_all(&dev->power.completion);
 	TRACE_SUSPEND(error);
 	return error;
 }
...
@@ -15,6 +15,7 @@
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/limits.h>
+#include <linux/slab.h>
 
 #include "opp.h"
 
@@ -34,6 +35,46 @@ void opp_debug_remove_one(struct dev_pm_opp *opp)
 	debugfs_remove_recursive(opp->dentry);
 }
 
+static bool opp_debug_create_supplies(struct dev_pm_opp *opp,
+				      struct opp_table *opp_table,
+				      struct dentry *pdentry)
+{
+	struct dentry *d;
+	int i = 0;
+	char *name;
+
+	/* Always create at least supply-0 directory */
+	do {
+		name = kasprintf(GFP_KERNEL, "supply-%d", i);
+
+		/* Create per-opp directory */
+		d = debugfs_create_dir(name, pdentry);
+
+		kfree(name);
+
+		if (!d)
+			return false;
+
+		if (!debugfs_create_ulong("u_volt_target", S_IRUGO, d,
+					  &opp->supplies[i].u_volt))
+			return false;
+
+		if (!debugfs_create_ulong("u_volt_min", S_IRUGO, d,
+					  &opp->supplies[i].u_volt_min))
+			return false;
+
+		if (!debugfs_create_ulong("u_volt_max", S_IRUGO, d,
+					  &opp->supplies[i].u_volt_max))
+			return false;
+
+		if (!debugfs_create_ulong("u_amp", S_IRUGO, d,
+					  &opp->supplies[i].u_amp))
+			return false;
+	} while (++i < opp_table->regulator_count);
+
+	return true;
+}
+
 int opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table)
 {
 	struct dentry *pdentry = opp_table->dentry;
@@ -63,16 +104,7 @@ int opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table)
 	if (!debugfs_create_ulong("rate_hz", S_IRUGO, d, &opp->rate))
 		return -ENOMEM;
 
-	if (!debugfs_create_ulong("u_volt_target", S_IRUGO, d, &opp->u_volt))
-		return -ENOMEM;
-
-	if (!debugfs_create_ulong("u_volt_min", S_IRUGO, d, &opp->u_volt_min))
-		return -ENOMEM;
-
-	if (!debugfs_create_ulong("u_volt_max", S_IRUGO, d, &opp->u_volt_max))
-		return -ENOMEM;
-
-	if (!debugfs_create_ulong("u_amp", S_IRUGO, d, &opp->u_amp))
+	if (!opp_debug_create_supplies(opp, opp_table, d))
 		return -ENOMEM;
 
 	if (!debugfs_create_ulong("clock_latency_ns", S_IRUGO, d,
...
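Assuming the debugfs tree keeps its previous layout apart from the new
per-supply subdirectories (the device and OPP directory names below are
hypothetical), the result of the change above might look like:

    # ls /sys/kernel/debug/opp/cpu0/opp:1000000000/supply-0
    u_amp  u_volt_max  u_volt_min  u_volt_target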
@@ -17,6 +17,7 @@
 #include <linux/errno.h>
 #include <linux/device.h>
 #include <linux/of.h>
+#include <linux/slab.h>
 #include <linux/export.h>
 
 #include "opp.h"
@@ -101,16 +102,16 @@ static bool _opp_is_supported(struct device *dev, struct opp_table *opp_table,
 	return true;
 }
 
-/* TODO: Support multiple regulators */
 static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
 			      struct opp_table *opp_table)
 {
-	u32 microvolt[3] = {0};
-	u32 val;
-	int count, ret;
+	u32 *microvolt, *microamp = NULL;
+	int supplies, vcount, icount, ret, i, j;
 	struct property *prop = NULL;
 	char name[NAME_MAX];
 
+	supplies = opp_table->regulator_count ? opp_table->regulator_count : 1;
+
 	/* Search for "opp-microvolt-<name>" */
 	if (opp_table->prop_name) {
 		snprintf(name, sizeof(name), "opp-microvolt-%s",
@@ -128,34 +129,29 @@ static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
 		return 0;
 	}
 
-	count = of_property_count_u32_elems(opp->np, name);
-	if (count < 0) {
+	vcount = of_property_count_u32_elems(opp->np, name);
+	if (vcount < 0) {
 		dev_err(dev, "%s: Invalid %s property (%d)\n",
-			__func__, name, count);
-		return count;
+			__func__, name, vcount);
+		return vcount;
 	}
 
-	/* There can be one or three elements here */
-	if (count != 1 && count != 3) {
-		dev_err(dev, "%s: Invalid number of elements in %s property (%d)\n",
-			__func__, name, count);
+	/* There can be one or three elements per supply */
+	if (vcount != supplies && vcount != supplies * 3) {
+		dev_err(dev, "%s: Invalid number of elements in %s property (%d) with supplies (%d)\n",
+			__func__, name, vcount, supplies);
 		return -EINVAL;
 	}
 
-	ret = of_property_read_u32_array(opp->np, name, microvolt, count);
+	microvolt = kmalloc_array(vcount, sizeof(*microvolt), GFP_KERNEL);
+	if (!microvolt)
+		return -ENOMEM;
+
+	ret = of_property_read_u32_array(opp->np, name, microvolt, vcount);
 	if (ret) {
 		dev_err(dev, "%s: error parsing %s: %d\n", __func__, name, ret);
-		return -EINVAL;
-	}
-
-	opp->u_volt = microvolt[0];
-
-	if (count == 1) {
-		opp->u_volt_min = opp->u_volt;
-		opp->u_volt_max = opp->u_volt;
-	} else {
-		opp->u_volt_min = microvolt[1];
-		opp->u_volt_max = microvolt[2];
+		ret = -EINVAL;
+		goto free_microvolt;
 	}
 
 	/* Search for "opp-microamp-<name>" */
@@ -172,10 +168,59 @@ static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
 		prop = of_find_property(opp->np, name, NULL);
 	}
 
-	if (prop && !of_property_read_u32(opp->np, name, &val))
-		opp->u_amp = val;
-
-	return 0;
+	if (prop) {
+		icount = of_property_count_u32_elems(opp->np, name);
+		if (icount < 0) {
+			dev_err(dev, "%s: Invalid %s property (%d)\n", __func__,
+				name, icount);
+			ret = icount;
+			goto free_microvolt;
+		}
+
+		if (icount != supplies) {
+			dev_err(dev, "%s: Invalid number of elements in %s property (%d) with supplies (%d)\n",
+				__func__, name, icount, supplies);
+			ret = -EINVAL;
+			goto free_microvolt;
+		}
+
+		microamp = kmalloc_array(icount, sizeof(*microamp), GFP_KERNEL);
+		if (!microamp) {
+			ret = -EINVAL;
+			goto free_microvolt;
+		}
+
+		ret = of_property_read_u32_array(opp->np, name, microamp,
+						 icount);
+		if (ret) {
+			dev_err(dev, "%s: error parsing %s: %d\n", __func__,
+				name, ret);
+			ret = -EINVAL;
+			goto free_microamp;
+		}
+	}
+
+	for (i = 0, j = 0; i < supplies; i++) {
+		opp->supplies[i].u_volt = microvolt[j++];
+
+		if (vcount == supplies) {
+			opp->supplies[i].u_volt_min = opp->supplies[i].u_volt;
+			opp->supplies[i].u_volt_max = opp->supplies[i].u_volt;
+		} else {
+			opp->supplies[i].u_volt_min = microvolt[j++];
+			opp->supplies[i].u_volt_max = microvolt[j++];
+		}
+
+		if (microamp)
+			opp->supplies[i].u_amp = microamp[i];
+	}
+
+free_microamp:
+	kfree(microamp);
+free_microvolt:
+	kfree(microvolt);
+
+	return ret;
 }
 
 /**
@@ -198,7 +243,7 @@ void dev_pm_opp_of_remove_table(struct device *dev)
 EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);
 
 /* Returns opp descriptor node for a device, caller must do of_node_put() */
-struct device_node *_of_get_opp_desc_node(struct device *dev)
+static struct device_node *_of_get_opp_desc_node(struct device *dev)
 {
 	/*
 	 * TODO: Support for multiple OPP tables.
@@ -303,9 +348,9 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
 	mutex_unlock(&opp_table_lock);
 
 	pr_debug("%s: turbo:%d rate:%lu uv:%lu uvmin:%lu uvmax:%lu latency:%lu\n",
-		 __func__, new_opp->turbo, new_opp->rate, new_opp->u_volt,
-		 new_opp->u_volt_min, new_opp->u_volt_max,
-		 new_opp->clock_latency_ns);
+		 __func__, new_opp->turbo, new_opp->rate,
+		 new_opp->supplies[0].u_volt, new_opp->supplies[0].u_volt_min,
+		 new_opp->supplies[0].u_volt_max, new_opp->clock_latency_ns);
 
 	/*
 	 * Notify the changes in the availability of the operable
@@ -562,7 +607,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
 	/* Get OPP descriptor node */
 	np = _of_get_opp_desc_node(cpu_dev);
 	if (!np) {
-		dev_dbg(cpu_dev, "%s: Couldn't find cpu_dev node.\n", __func__);
+		dev_dbg(cpu_dev, "%s: Couldn't find opp node.\n", __func__);
 		return -ENOENT;
 	}
 
@@ -587,7 +632,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
 	/* Get OPP descriptor node */
 	tmp_np = _of_get_opp_desc_node(tcpu_dev);
 	if (!tmp_np) {
-		dev_err(tcpu_dev, "%s: Couldn't find tcpu_dev node.\n",
+		dev_err(tcpu_dev, "%s: Couldn't find opp node.\n",
 			__func__);
 		ret = -ENOENT;
 		goto put_cpu_node;
...
@@ -61,10 +61,7 @@ extern struct list_head opp_tables;
 * @turbo:	true if turbo (boost) OPP
 * @suspend:	true if suspend OPP
 * @rate:	Frequency in hertz
- * @u_volt:	Target voltage in microvolts corresponding to this OPP
- * @u_volt_min:	Minimum voltage in microvolts corresponding to this OPP
- * @u_volt_max:	Maximum voltage in microvolts corresponding to this OPP
- * @u_amp:	Maximum current drawn by the device in microamperes
+ * @supplies:	Power supplies voltage/current values
 * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's
 *		frequency from any other OPP's frequency.
 * @opp_table:	points back to the opp_table struct this opp belongs to
@@ -83,10 +80,8 @@ struct dev_pm_opp {
 	bool suspend;
 	unsigned long rate;
 
-	unsigned long u_volt;
-	unsigned long u_volt_min;
-	unsigned long u_volt_max;
-	unsigned long u_amp;
+	struct dev_pm_opp_supply *supplies;
+
 	unsigned long clock_latency_ns;
 
 	struct opp_table *opp_table;
@@ -144,7 +139,10 @@ enum opp_table_access {
 * @supported_hw_count: Number of elements in supported_hw array.
 * @prop_name:	A name to postfix to many DT properties, while parsing them.
 * @clk:	Device's clock handle
- * @regulator:	Supply regulator
+ * @regulators:	Supply regulators
+ * @regulator_count: Number of power supply regulators
+ * @set_opp:	Platform specific set_opp callback
+ * @set_opp_data: Data to be passed to set_opp callback
 * @dentry:	debugfs dentry pointer of the real device directory (not links).
 * @dentry_name: Name of the real dentry.
 *
@@ -179,7 +177,11 @@ struct opp_table {
 	unsigned int supported_hw_count;
 	const char *prop_name;
 	struct clk *clk;
-	struct regulator *regulator;
+	struct regulator **regulators;
+	unsigned int regulator_count;
+
+	int (*set_opp)(struct dev_pm_set_opp_data *data);
+	struct dev_pm_set_opp_data *set_opp_data;
 
 #ifdef CONFIG_DEBUG_FS
 	struct dentry *dentry;
@@ -190,7 +192,6 @@ struct opp_table {
 /* Routines internal to opp core */
 struct opp_table *_find_opp_table(struct device *dev);
 struct opp_device *_add_opp_dev(const struct device *dev, struct opp_table *opp_table);
-struct device_node *_of_get_opp_desc_node(struct device *dev);
 void _dev_pm_opp_remove_table(struct device *dev, bool remove_all);
 struct dev_pm_opp *_allocate_opp(struct device *dev, struct opp_table **opp_table);
 int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, struct opp_table *opp_table);
...
@@ -21,14 +21,22 @@ extern void pm_runtime_init(struct device *dev);
 extern void pm_runtime_reinit(struct device *dev);
 extern void pm_runtime_remove(struct device *dev);
 
+#define WAKE_IRQ_DEDICATED_ALLOCATED	BIT(0)
+#define WAKE_IRQ_DEDICATED_MANAGED	BIT(1)
+#define WAKE_IRQ_DEDICATED_MASK		(WAKE_IRQ_DEDICATED_ALLOCATED | \
+					 WAKE_IRQ_DEDICATED_MANAGED)
+
 struct wake_irq {
 	struct device *dev;
+	unsigned int status;
 	int irq;
-	bool dedicated_irq:1;
 };
 
 extern void dev_pm_arm_wake_irq(struct wake_irq *wirq);
 extern void dev_pm_disarm_wake_irq(struct wake_irq *wirq);
+extern void dev_pm_enable_wake_irq_check(struct device *dev,
+					 bool can_change_status);
+extern void dev_pm_disable_wake_irq_check(struct device *dev);
 
 #ifdef CONFIG_PM_SLEEP
@@ -104,6 +112,15 @@ static inline void dev_pm_disarm_wake_irq(struct wake_irq *wirq)
 {
 }
 
+static inline void dev_pm_enable_wake_irq_check(struct device *dev,
+						bool can_change_status)
+{
+}
+
+static inline void dev_pm_disable_wake_irq_check(struct device *dev)
+{
+}
+
 #endif
 
 #ifdef CONFIG_PM_SLEEP
...
@@ -856,6 +856,9 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
 	struct dev_pm_qos_request *req;
 
 	if (val < 0) {
+		if (val == PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT)
+			ret = 0;
+		else
 			ret = -EINVAL;
 		goto out;
 	}
@@ -883,6 +886,7 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
 	mutex_unlock(&dev_pm_qos_mtx);
 	return ret;
 }
+EXPORT_SYMBOL_GPL(dev_pm_qos_update_user_latency_tolerance);
 
 /**
  * dev_pm_qos_expose_latency_tolerance - Expose latency tolerance to userspace
...
@@ -241,7 +241,8 @@ static int rpm_check_suspend_allowed(struct device *dev)
 		retval = -EACCES;
 	else if (atomic_read(&dev->power.usage_count) > 0)
 		retval = -EAGAIN;
-	else if (!pm_children_suspended(dev))
+	else if (!dev->power.ignore_children &&
+		 atomic_read(&dev->power.child_count))
 		retval = -EBUSY;
 
 	/* Pending resume requests take precedence over suspends. */
@@ -515,7 +516,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
 
 	callback = RPM_GET_CALLBACK(dev, runtime_suspend);
 
-	dev_pm_enable_wake_irq(dev);
+	dev_pm_enable_wake_irq_check(dev, true);
 	retval = rpm_callback(callback, dev);
 	if (retval)
 		goto fail;
@@ -554,7 +555,7 @@ static int rpm_suspend(struct device *dev, int rpmflags)
 	return retval;
 
  fail:
-	dev_pm_disable_wake_irq(dev);
+	dev_pm_disable_wake_irq_check(dev);
 	__update_runtime_status(dev, RPM_ACTIVE);
 	dev->power.deferred_resume = false;
 	wake_up_all(&dev->power.wait_queue);
@@ -712,8 +713,8 @@ static int rpm_resume(struct device *dev, int rpmflags)
 		spin_lock(&parent->power.lock);
 		/*
-		 * We can resume if the parent's runtime PM is disabled or it
-		 * is set to ignore children.
+		 * Resume the parent if it has runtime PM enabled and not been
+		 * set to ignore its children.
 		 */
 		if (!parent->power.disable_depth
 		    && !parent->power.ignore_children) {
@@ -737,12 +738,12 @@ static int rpm_resume(struct device *dev, int rpmflags)
 
 	callback = RPM_GET_CALLBACK(dev, runtime_resume);
 
-	dev_pm_disable_wake_irq(dev);
+	dev_pm_disable_wake_irq_check(dev);
 	retval = rpm_callback(callback, dev);
 	if (retval) {
 		__update_runtime_status(dev, RPM_SUSPENDED);
 		pm_runtime_cancel_pending(dev);
-		dev_pm_enable_wake_irq(dev);
+		dev_pm_enable_wake_irq_check(dev, false);
 	} else {
  no_callback:
 		__update_runtime_status(dev, RPM_ACTIVE);
@@ -1027,7 +1028,17 @@ int __pm_runtime_set_status(struct device *dev, unsigned int status)
 		goto out_set;
 
 	if (status == RPM_SUSPENDED) {
-		/* It always is possible to set the status to 'suspended'. */
+		/*
+		 * It is invalid to suspend a device with an active child,
+		 * unless it has been set to ignore its children.
+		 */
+		if (!dev->power.ignore_children &&
+		    atomic_read(&dev->power.child_count)) {
+			dev_err(dev, "runtime PM trying to suspend device but active child\n");
+			error = -EBUSY;
+			goto out;
+		}
+
 		if (parent) {
 			atomic_add_unless(&parent->power.child_count, -1, 0);
 			notify_parent = !parent->power.ignore_children;
@@ -1478,6 +1489,16 @@ int pm_runtime_force_suspend(struct device *dev)
 	if (ret)
 		goto err;
 
+	/*
+	 * Increase the runtime PM usage count for the device's parent, in case
+	 * when we find the device being used when system suspend was invoked.
+	 * This informs pm_runtime_force_resume() to resume the parent
+	 * immediately, which is needed to be able to resume its children,
+	 * when not deferring the resume to be managed via runtime PM.
+	 */
+	if (dev->parent && atomic_read(&dev->power.usage_count) > 1)
+		pm_runtime_get_noresume(dev->parent);
+
 	pm_runtime_set_suspended(dev);
 	return 0;
 err:
@@ -1487,16 +1508,20 @@ int pm_runtime_force_suspend(struct device *dev)
 EXPORT_SYMBOL_GPL(pm_runtime_force_suspend);
 
 /**
- * pm_runtime_force_resume - Force a device into resume state.
+ * pm_runtime_force_resume - Force a device into resume state if needed.
  * @dev: Device to resume.
 *
 * Prior invoking this function we expect the user to have brought the device
 * into low power state by a call to pm_runtime_force_suspend(). Here we reverse
- * those actions and brings the device into full power. We update the runtime PM
- * status and re-enables runtime PM.
+ * those actions and brings the device into full power, if it is expected to be
+ * used on system resume. To distinguish that, we check whether the runtime PM
+ * usage count is greater than 1 (the PM core increases the usage count in the
+ * system PM prepare phase), as that indicates a real user (such as a subsystem,
+ * driver, userspace, etc.) is using it. If that is the case, the device is
+ * expected to be used on system resume as well, so then we resume it. In the
+ * other case, we defer the resume to be managed via runtime PM.
 *
- * Typically this function may be invoked from a system resume callback to make
- * sure the device is put into full power state.
+ * Typically this function may be invoked from a system resume callback.
 */
 int pm_runtime_force_resume(struct device *dev)
 {
@@ -1513,6 +1538,17 @@ int pm_runtime_force_resume(struct device *dev)
 	if (!pm_runtime_status_suspended(dev))
 		goto out;
 
+	/*
+	 * Decrease the parent's runtime PM usage count, if we increased it
+	 * during system suspend in pm_runtime_force_suspend().
+	 */
+	if (atomic_read(&dev->power.usage_count) > 1) {
+		if (dev->parent)
+			pm_runtime_put_noidle(dev->parent);
+	} else {
+		goto out;
+	}
+
 	ret = pm_runtime_set_active(dev);
 	if (ret)
 		goto out;
...
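The pm_runtime_force_suspend()/pm_runtime_force_resume() pair above is typically wired into a driver's system sleep callbacks so the runtime PM paths are reused for system suspend. A minimal hypothetical sketch (the foo_* driver and its callbacks are made up for illustration):

#include <linux/device.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>

static int foo_runtime_suspend(struct device *dev)
{
	/* put the hypothetical device into low power here */
	return 0;
}

static int foo_runtime_resume(struct device *dev)
{
	/* bring the hypothetical device back to full power here */
	return 0;
}

static const struct dev_pm_ops foo_pm_ops = {
	SET_RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
	/*
	 * With the change above, a device with no real users stays
	 * suspended across system resume; it is only powered up again
	 * on its first runtime PM use.
	 */
	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
				pm_runtime_force_resume)
};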
@@ -263,7 +263,11 @@ static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
 	s32 value;
 	int ret;
 
-	if (kstrtos32(buf, 0, &value)) {
+	if (kstrtos32(buf, 0, &value) == 0) {
+		/* Users can't write negative values directly */
+		if (value < 0)
+			return -EINVAL;
+	} else {
 		if (!strcmp(buf, "auto") || !strcmp(buf, "auto\n"))
 			value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
 		else if (!strcmp(buf, "any") || !strcmp(buf, "any\n"))
...
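From userspace, the net effect is that plain negative numbers are now rejected while the "auto" and "any" keywords keep working. A short hypothetical sketch (the device path is elided and illustrative only):

#include <stdio.h>

int main(void)
{
	/* hypothetical device; substitute a real path */
	const char *attr =
		"/sys/devices/.../power/pm_qos_latency_tolerance_us";
	FILE *f = fopen(attr, "w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fputs("auto", f);	/* or a non-negative value such as "100" */
	fclose(f);
	return 0;
}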
@@ -110,8 +110,10 @@ void dev_pm_clear_wake_irq(struct device *dev)
 	dev->power.wakeirq = NULL;
 	spin_unlock_irqrestore(&dev->power.lock, flags);
 
-	if (wirq->dedicated_irq)
+	if (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED) {
 		free_irq(wirq->irq, wirq);
+		wirq->status &= ~WAKE_IRQ_DEDICATED_MASK;
+	}
 	kfree(wirq);
 }
 EXPORT_SYMBOL_GPL(dev_pm_clear_wake_irq);
@@ -179,7 +181,6 @@ int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
 
 	wirq->dev = dev;
 	wirq->irq = irq;
-	wirq->dedicated_irq = true;
 	irq_set_status_flags(irq, IRQ_NOAUTOEN);
 
 	/*
@@ -195,6 +196,8 @@ int dev_pm_set_dedicated_wake_irq(struct device *dev, int irq)
 	if (err)
 		goto err_free_irq;
 
+	wirq->status = WAKE_IRQ_DEDICATED_ALLOCATED;
+
 	return err;
 
 err_free_irq:
@@ -210,9 +213,9 @@ EXPORT_SYMBOL_GPL(dev_pm_set_dedicated_wake_irq);
  * dev_pm_enable_wake_irq - Enable device wake-up interrupt
  * @dev: Device
 *
- * Called from the bus code or the device driver for
- * runtime_suspend() to enable the wake-up interrupt while
- * the device is running.
+ * Optionally called from the bus code or the device driver for
+ * runtime_resume() to override the PM runtime core managed wake-up
+ * interrupt handling to enable the wake-up interrupt.
 *
 * Note that for runtime_suspend() the wake-up interrupts
 * should be unconditionally enabled unlike for suspend()
@@ -222,7 +225,7 @@ void dev_pm_enable_wake_irq(struct device *dev)
 {
 	struct wake_irq *wirq = dev->power.wakeirq;
 
-	if (wirq && wirq->dedicated_irq)
+	if (wirq && (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED))
 		enable_irq(wirq->irq);
 }
 EXPORT_SYMBOL_GPL(dev_pm_enable_wake_irq);
@@ -231,19 +234,72 @@ EXPORT_SYMBOL_GPL(dev_pm_enable_wake_irq);
  * dev_pm_disable_wake_irq - Disable device wake-up interrupt
  * @dev: Device
 *
- * Called from the bus code or the device driver for
- * runtime_resume() to disable the wake-up interrupt while
- * the device is running.
+ * Optionally called from the bus code or the device driver for
+ * runtime_suspend() to override the PM runtime core managed wake-up
+ * interrupt handling to disable the wake-up interrupt.
 */
 void dev_pm_disable_wake_irq(struct device *dev)
 {
 	struct wake_irq *wirq = dev->power.wakeirq;
 
-	if (wirq && wirq->dedicated_irq)
+	if (wirq && (wirq->status & WAKE_IRQ_DEDICATED_ALLOCATED))
 		disable_irq_nosync(wirq->irq);
 }
 EXPORT_SYMBOL_GPL(dev_pm_disable_wake_irq);
 
+/**
+ * dev_pm_enable_wake_irq_check - Checks and enables wake-up interrupt
+ * @dev: Device
+ * @can_change_status: Can change wake-up interrupt status
+ *
+ * Enables wakeirq conditionally. We need to enable wake-up interrupt
+ * lazily on the first rpm_suspend(). This is needed as the consumer device
+ * starts in RPM_SUSPENDED state, and the first pm_runtime_get() would
+ * otherwise try to disable already disabled wakeirq. The wake-up interrupt
+ * starts disabled with IRQ_NOAUTOEN set.
+ *
+ * Should be only called from rpm_suspend() and rpm_resume() path.
+ * Caller must hold &dev->power.lock to change wirq->status
+ */
+void dev_pm_enable_wake_irq_check(struct device *dev,
+				  bool can_change_status)
+{
+	struct wake_irq *wirq = dev->power.wakeirq;
+
+	if (!wirq || !((wirq->status & WAKE_IRQ_DEDICATED_MASK)))
+		return;
+
+	if (likely(wirq->status & WAKE_IRQ_DEDICATED_MANAGED)) {
+		goto enable;
+	} else if (can_change_status) {
+		wirq->status |= WAKE_IRQ_DEDICATED_MANAGED;
+		goto enable;
+	}
+
+	return;
+
+enable:
+	enable_irq(wirq->irq);
+}
+
+/**
+ * dev_pm_disable_wake_irq_check - Checks and disables wake-up interrupt
+ * @dev: Device
+ *
+ * Disables wake-up interrupt conditionally based on status.
+ * Should be only called from rpm_suspend() and rpm_resume() path.
+ */
+void dev_pm_disable_wake_irq_check(struct device *dev)
+{
+	struct wake_irq *wirq = dev->power.wakeirq;
+
+	if (!wirq || !((wirq->status & WAKE_IRQ_DEDICATED_MASK)))
+		return;
+
+	if (wirq->status & WAKE_IRQ_DEDICATED_MANAGED)
+		disable_irq_nosync(wirq->irq);
+}
+
 /**
  * dev_pm_arm_wake_irq - Arm device wake-up
  * @wirq: Device wake-up interrupt
...
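Consumer drivers do not call the new *_check() helpers themselves (they are internal to the PM core, declared in the private power.h above); a driver simply registers the dedicated wake IRQ and lets rpm_suspend()/rpm_resume() manage it through the WAKE_IRQ_DEDICATED_* status bits. A hypothetical probe sketch (the foo_* driver and the IRQ index are made up):

#include <linux/platform_device.h>
#include <linux/pm_wakeirq.h>

static int foo_probe(struct platform_device *pdev)
{
	/* assumed: the second IRQ resource is the dedicated wake IRQ */
	int irq = platform_get_irq(pdev, 1);

	if (irq < 0)
		return irq;

	device_init_wakeup(&pdev->dev, true);
	/*
	 * The IRQ starts disabled (IRQ_NOAUTOEN); the PM core arms it
	 * lazily on the first rpm_suspend(), as described above.
	 */
	return dev_pm_set_dedicated_wake_irq(&pdev->dev, irq);
}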
@@ -811,7 +811,7 @@ void pm_print_active_wakeup_sources(void)
 	rcu_read_lock();
 	list_for_each_entry_rcu(ws, &wakeup_sources, entry) {
 		if (ws->active) {
-			pr_info("active wakeup source: %s\n", ws->name);
+			pr_debug("active wakeup source: %s\n", ws->name);
 			active = 1;
 		} else if (!active &&
 			   (!last_activity_ws ||
@@ -822,7 +822,7 @@ void pm_print_active_wakeup_sources(void)
 	}
 
 	if (!active && last_activity_ws)
-		pr_info("last active wakeup source: %s\n",
+		pr_debug("last active wakeup source: %s\n",
 			last_activity_ws->name);
 	rcu_read_unlock();
 }
@@ -905,7 +905,7 @@ bool pm_get_wakeup_count(unsigned int *count, bool block)
 		split_counters(&cnt, &inpr);
 		if (inpr == 0 || signal_pending(current))
 			break;
+		pm_print_active_wakeup_sources();
 		schedule();
 	}
 	finish_wait(&wakeup_count_wait_queue, &wait);
...
@@ -12,6 +12,27 @@ config ARM_BIG_LITTLE_CPUFREQ
 	help
 	  This enables the Generic CPUfreq driver for ARM big.LITTLE platforms.
 
+config ARM_BRCMSTB_AVS_CPUFREQ
+	tristate "Broadcom STB AVS CPUfreq driver"
+	depends on ARCH_BRCMSTB || COMPILE_TEST
+	default y
+	help
+	  Some Broadcom STB SoCs use a co-processor running proprietary firmware
+	  ("AVS") to handle voltage and frequency scaling. This driver provides
+	  a standard CPUfreq interface to the firmware.
+
+	  Say Y, if you have a Broadcom SoC with AVS support for DFS or DVFS.
+
+config ARM_BRCMSTB_AVS_CPUFREQ_DEBUG
+	bool "Broadcom STB AVS CPUfreq driver sysfs debug capability"
+	depends on ARM_BRCMSTB_AVS_CPUFREQ
+	help
+	  Enabling this option turns on debug support via sysfs under
+	  /sys/kernel/debug/brcmstb-avs-cpufreq. It is possible to read all and
+	  write some AVS mailbox registers through sysfs entries.
+
+	  If in doubt, say N.
+
 config ARM_DT_BL_CPUFREQ
 	tristate "Generic probing via DT for ARM big LITTLE CPUfreq driver"
 	depends on ARM_BIG_LITTLE_CPUFREQ && OF
@@ -60,14 +81,6 @@ config ARM_IMX6Q_CPUFREQ
 
 	  If in doubt, say N.
 
-config ARM_INTEGRATOR
-	tristate "CPUfreq driver for ARM Integrator CPUs"
-	depends on ARCH_INTEGRATOR
-	default y
-	help
-	  This enables the CPUfreq driver for ARM Integrator CPUs.
-	  If in doubt, say Y.
-
 config ARM_KIRKWOOD_CPUFREQ
 	def_bool MACH_KIRKWOOD
 	help
...
@@ -51,12 +51,12 @@ obj-$(CONFIG_ARM_BIG_LITTLE_CPUFREQ)	+= arm_big_little.o
 # LITTLE drivers, so that it is probed last.
 obj-$(CONFIG_ARM_DT_BL_CPUFREQ)		+= arm_big_little_dt.o
 
+obj-$(CONFIG_ARM_BRCMSTB_AVS_CPUFREQ)	+= brcmstb-avs-cpufreq.o
 obj-$(CONFIG_ARCH_DAVINCI)		+= davinci-cpufreq.o
 obj-$(CONFIG_UX500_SOC_DB8500)		+= dbx500-cpufreq.o
 obj-$(CONFIG_ARM_EXYNOS5440_CPUFREQ)	+= exynos5440-cpufreq.o
 obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ)	+= highbank-cpufreq.o
 obj-$(CONFIG_ARM_IMX6Q_CPUFREQ)		+= imx6q-cpufreq.o
-obj-$(CONFIG_ARM_INTEGRATOR)		+= integrator-cpufreq.o
 obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ)	+= kirkwood-cpufreq.o
 obj-$(CONFIG_ARM_MT8173_CPUFREQ)	+= mt8173-cpufreq.o
 obj-$(CONFIG_ARM_OMAP2PLUS_CPUFREQ)	+= omap-cpufreq.o
...
@@ -84,7 +84,6 @@ static inline struct acpi_processor_performance *to_perf_data(struct acpi_cpufreq_data *data)
 static struct cpufreq_driver acpi_cpufreq_driver;
 
 static unsigned int acpi_pstate_strict;
-static struct msr __percpu *msrs;
 
 static bool boost_state(unsigned int cpu)
 {
@@ -104,11 +103,10 @@ static bool boost_state(unsigned int cpu)
 	return false;
 }
 
-static void boost_set_msrs(bool enable, const struct cpumask *cpumask)
+static int boost_set_msr(bool enable)
 {
-	u32 cpu;
 	u32 msr_addr;
-	u64 msr_mask;
+	u64 msr_mask, val;
 
 	switch (boot_cpu_data.x86_vendor) {
 	case X86_VENDOR_INTEL:
@@ -120,26 +118,31 @@ static void boost_set_msrs(bool enable, const struct cpumask *cpumask)
 		msr_mask = MSR_K7_HWCR_CPB_DIS;
 		break;
 	default:
-		return;
+		return -EINVAL;
 	}
 
-	rdmsr_on_cpus(cpumask, msr_addr, msrs);
+	rdmsrl(msr_addr, val);
 
-	for_each_cpu(cpu, cpumask) {
-		struct msr *reg = per_cpu_ptr(msrs, cpu);
-		if (enable)
-			reg->q &= ~msr_mask;
-		else
-			reg->q |= msr_mask;
-	}
+	if (enable)
+		val &= ~msr_mask;
+	else
+		val |= msr_mask;
 
-	wrmsr_on_cpus(cpumask, msr_addr, msrs);
+	wrmsrl(msr_addr, val);
+	return 0;
+}
+
+static void boost_set_msr_each(void *p_en)
+{
+	bool enable = (bool) p_en;
+
+	boost_set_msr(enable);
 }
 
 static int set_boost(int val)
 {
 	get_online_cpus();
-	boost_set_msrs(val, cpu_online_mask);
+	on_each_cpu(boost_set_msr_each, (void *)(long)val, 1);
 	put_online_cpus();
 	pr_debug("Core Boosting %sabled.\n", val ? "en" : "dis");
@@ -536,46 +539,24 @@ static void free_acpi_perf_data(void)
 	free_percpu(acpi_perf_data);
 }
 
-static int boost_notify(struct notifier_block *nb, unsigned long action,
-			void *hcpu)
+static int cpufreq_boost_online(unsigned int cpu)
 {
-	unsigned cpu = (long)hcpu;
-	const struct cpumask *cpumask;
-
-	cpumask = get_cpu_mask(cpu);
-
 	/*
-	 * Clear the boost-disable bit on the CPU_DOWN path so that
-	 * this cpu cannot block the remaining ones from boosting. On
-	 * the CPU_UP path we simply keep the boost-disable flag in
-	 * sync with the current global state.
+	 * On the CPU_UP path we simply keep the boost-disable flag
+	 * in sync with the current global state.
 	 */
+	return boost_set_msr(acpi_cpufreq_driver.boost_enabled);
+}
 
-	switch (action) {
-	case CPU_DOWN_FAILED:
-	case CPU_DOWN_FAILED_FROZEN:
-	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
-		boost_set_msrs(acpi_cpufreq_driver.boost_enabled, cpumask);
-		break;
-
-	case CPU_DOWN_PREPARE:
-	case CPU_DOWN_PREPARE_FROZEN:
-		boost_set_msrs(1, cpumask);
-		break;
-
-	default:
-		break;
-	}
-
-	return NOTIFY_OK;
+static int cpufreq_boost_down_prep(unsigned int cpu)
+{
+	/*
+	 * Clear the boost-disable bit on the CPU_DOWN path so that
+	 * this cpu cannot block the remaining ones from boosting.
+	 */
+	return boost_set_msr(1);
 }
 
-static struct notifier_block boost_nb = {
-	.notifier_call = boost_notify,
-};
-
 /*
  * acpi_cpufreq_early_init - initialize ACPI P-States library
 *
@@ -922,37 +903,35 @@ static struct cpufreq_driver acpi_cpufreq_driver = {
 	.attr = acpi_cpufreq_attr,
 };
 
+static enum cpuhp_state acpi_cpufreq_online;
+
 static void __init acpi_cpufreq_boost_init(void)
 {
-	if (boot_cpu_has(X86_FEATURE_CPB) || boot_cpu_has(X86_FEATURE_IDA)) {
-		msrs = msrs_alloc();
+	int ret;
 
-		if (!msrs)
-			return;
+	if (!(boot_cpu_has(X86_FEATURE_CPB) || boot_cpu_has(X86_FEATURE_IDA)))
+		return;
 
 	acpi_cpufreq_driver.set_boost = set_boost;
 	acpi_cpufreq_driver.boost_enabled = boost_state(0);
 
-		cpu_notifier_register_begin();
-
-		/* Force all MSRs to the same value */
-		boost_set_msrs(acpi_cpufreq_driver.boost_enabled,
-			       cpu_online_mask);
-
-		__register_cpu_notifier(&boost_nb);
-
-		cpu_notifier_register_done();
+	/*
+	 * This calls the online callback on all online cpu and forces all
+	 * MSRs to the same value.
+	 */
+	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "cpufreq/acpi:online",
+				cpufreq_boost_online, cpufreq_boost_down_prep);
+	if (ret < 0) {
+		pr_err("acpi_cpufreq: failed to register hotplug callbacks\n");
+		return;
 	}
+	acpi_cpufreq_online = ret;
 }
 
 static void acpi_cpufreq_boost_exit(void)
 {
-	if (msrs) {
-		unregister_cpu_notifier(&boost_nb);
-		msrs_free(msrs);
-		msrs = NULL;
-	}
+	if (acpi_cpufreq_online >= 0)
+		cpuhp_remove_state_nocalls(acpi_cpufreq_online);
 }
 
 static int __init acpi_cpufreq_init(void)
...
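The conversion above follows the standard shape of moving from CPU notifiers to the hotplug state machine: register an online and a down-prepare callback once, and let the core invoke them per CPU. A minimal generic sketch of the same pattern (the foo_* names are placeholders, not part of this commit):

#include <linux/cpuhotplug.h>
#include <linux/module.h>

static enum cpuhp_state foo_online;

static int foo_cpu_online(unsigned int cpu)
{
	/* re-apply per-CPU state when @cpu comes up */
	return 0;
}

static int foo_cpu_down_prep(unsigned int cpu)
{
	/* undo per-CPU state before @cpu goes down */
	return 0;
}

static int __init foo_init(void)
{
	/* also runs foo_cpu_online() on every currently online CPU */
	int ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "foo:online",
				    foo_cpu_online, foo_cpu_down_prep);
	if (ret < 0)
		return ret;
	foo_online = ret;	/* dynamic states return the state number */
	return 0;
}

static void __exit foo_exit(void)
{
	cpuhp_remove_state_nocalls(foo_online);
}

module_init(foo_init);
module_exit(foo_exit);
MODULE_LICENSE("GPL");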
@@ -247,3 +247,10 @@ MODULE_DESCRIPTION("CPUFreq driver based on the ACPI CPPC v5.0+ spec");
 MODULE_LICENSE("GPL");
 
 late_initcall(cppc_cpufreq_init);
+
+static const struct acpi_device_id cppc_acpi_ids[] = {
+	{ACPI_PROCESSOR_DEVICE_HID, },
+	{}
+};
+
+MODULE_DEVICE_TABLE(acpi, cppc_acpi_ids);
@@ -26,6 +26,9 @@ static const struct of_device_id machines[] __initconst = {
 	{ .compatible = "allwinner,sun8i-a83t", },
 	{ .compatible = "allwinner,sun8i-h3", },
 
+	{ .compatible = "arm,integrator-ap", },
+	{ .compatible = "arm,integrator-cp", },
+
 	{ .compatible = "hisilicon,hi6220", },
 
 	{ .compatible = "fsl,imx27", },
@@ -34,6 +37,8 @@ static const struct of_device_id machines[] __initconst = {
 	{ .compatible = "fsl,imx7d", },
 
 	{ .compatible = "marvell,berlin", },
+	{ .compatible = "marvell,pxa250", },
+	{ .compatible = "marvell,pxa270", },
 
 	{ .compatible = "samsung,exynos3250", },
 	{ .compatible = "samsung,exynos4210", },
@@ -50,6 +55,8 @@ static const struct of_device_id machines[] __initconst = {
 	{ .compatible = "renesas,r7s72100", },
 	{ .compatible = "renesas,r8a73a4", },
 	{ .compatible = "renesas,r8a7740", },
+	{ .compatible = "renesas,r8a7743", },
+	{ .compatible = "renesas,r8a7745", },
 	{ .compatible = "renesas,r8a7778", },
 	{ .compatible = "renesas,r8a7779", },
 	{ .compatible = "renesas,r8a7790", },
@@ -72,6 +79,12 @@ static const struct of_device_id machines[] __initconst = {
 	{ .compatible = "sigma,tango4" },
 
+	{ .compatible = "socionext,uniphier-pro5", },
+	{ .compatible = "socionext,uniphier-pxs2", },
+	{ .compatible = "socionext,uniphier-ld6b", },
+	{ .compatible = "socionext,uniphier-ld11", },
+	{ .compatible = "socionext,uniphier-ld20", },
+
 	{ .compatible = "ti,am33xx", },
 	{ .compatible = "ti,dra7", },
 	{ .compatible = "ti,omap2", },
@@ -81,6 +94,8 @@ static const struct of_device_id machines[] __initconst = {
 
 	{ .compatible = "xlnx,zynq-7000", },
 
+	{ .compatible = "zte,zx296718", },
+
 	{ }
 };
...
@@ -28,6 +28,7 @@
 #include "cpufreq-dt.h"
 
 struct private_data {
+	struct opp_table *opp_table;
 	struct device *cpu_dev;
 	struct thermal_cooling_device *cdev;
 	const char *reg_name;
@@ -143,6 +144,7 @@ static int resources_available(void)
 static int cpufreq_init(struct cpufreq_policy *policy)
 {
 	struct cpufreq_frequency_table *freq_table;
+	struct opp_table *opp_table = NULL;
 	struct private_data *priv;
 	struct device *cpu_dev;
 	struct clk *cpu_clk;
@@ -186,8 +188,9 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	 */
 	name = find_supply_name(cpu_dev);
 	if (name) {
-		ret = dev_pm_opp_set_regulator(cpu_dev, name);
-		if (ret) {
+		opp_table = dev_pm_opp_set_regulators(cpu_dev, &name, 1);
+		if (IS_ERR(opp_table)) {
+			ret = PTR_ERR(opp_table);
 			dev_err(cpu_dev, "Failed to set regulator for cpu%d: %d\n",
 				policy->cpu, ret);
 			goto out_put_clk;
@@ -237,6 +240,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	}
 
 	priv->reg_name = name;
+	priv->opp_table = opp_table;
 
 	ret = dev_pm_opp_init_cpufreq_table(cpu_dev, &freq_table);
 	if (ret) {
@@ -285,7 +289,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 out_free_opp:
 	dev_pm_opp_of_cpumask_remove_table(policy->cpus);
 	if (name)
-		dev_pm_opp_put_regulator(cpu_dev);
+		dev_pm_opp_put_regulators(opp_table);
 out_put_clk:
 	clk_put(cpu_clk);
@@ -300,7 +304,7 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
 	dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
 	dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
 	if (priv->reg_name)
-		dev_pm_opp_put_regulators(priv->opp_table);
+		dev_pm_opp_put_regulators(priv->opp_table);
 
 	clk_put(policy->clk);
 	kfree(priv);
...
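The cpufreq-dt driver only ever passes a single supply name, but the new API is built for several. A hypothetical sketch of the multi-regulator case (the device and the "vdd-core"/"vdd-mem" supply names are made up; the error-pointer handling mirrors the diff above):

#include <linux/err.h>
#include <linux/pm_opp.h>

static int foo_register_supplies(struct device *dev)
{
	const char * const names[] = { "vdd-core", "vdd-mem" };
	struct opp_table *opp_table;

	/*
	 * One call now covers several regulators; the returned table
	 * handle is what dev_pm_opp_put_regulators() takes back.
	 */
	opp_table = dev_pm_opp_set_regulators(dev, names, 2);
	if (IS_ERR(opp_table))
		return PTR_ERR(opp_table);

	/* ... later, on teardown: */
	dev_pm_opp_put_regulators(opp_table);
	return 0;
}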
@@ -1526,7 +1526,10 @@ unsigned int cpufreq_get(unsigned int cpu)
 
 	if (policy) {
 		down_read(&policy->rwsem);
-		ret_freq = __cpufreq_get(policy);
+
+		if (!policy_is_inactive(policy))
+			ret_freq = __cpufreq_get(policy);
+
 		up_read(&policy->rwsem);
 
 		cpufreq_cpu_put(policy);
@@ -2254,17 +2257,19 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
  * Useful for policy notifiers which have different necessities
  * at different times.
 */
-int cpufreq_update_policy(unsigned int cpu)
+void cpufreq_update_policy(unsigned int cpu)
 {
 	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
 	struct cpufreq_policy new_policy;
-	int ret;
 
 	if (!policy)
-		return -ENODEV;
+		return;
 
 	down_write(&policy->rwsem);
 
+	if (policy_is_inactive(policy))
+		goto unlock;
+
 	pr_debug("updating policy for CPU %u\n", cpu);
 	memcpy(&new_policy, policy, sizeof(*policy));
 	new_policy.min = policy->user_policy.min;
@@ -2275,24 +2280,20 @@ int cpufreq_update_policy(unsigned int cpu)
 	 * -> ask driver for current freq and notify governors about a change
 	 */
 	if (cpufreq_driver->get && !cpufreq_driver->setpolicy) {
-		if (cpufreq_suspended) {
-			ret = -EAGAIN;
+		if (cpufreq_suspended)
 			goto unlock;
-		}
+
 		new_policy.cur = cpufreq_update_current_freq(policy);
-		if (WARN_ON(!new_policy.cur)) {
-			ret = -EIO;
+		if (WARN_ON(!new_policy.cur))
 			goto unlock;
-		}
 	}
 
-	ret = cpufreq_set_policy(policy, &new_policy);
+	cpufreq_set_policy(policy, &new_policy);
 
 unlock:
 	up_write(&policy->rwsem);
 	cpufreq_cpu_put(policy);
-
-	return ret;
 }
 EXPORT_SYMBOL(cpufreq_update_policy);
...
@@ -37,16 +37,16 @@ struct cs_dbs_tuners {
 #define DEF_SAMPLING_DOWN_FACTOR	(1)
 #define MAX_SAMPLING_DOWN_FACTOR	(10)
 
-static inline unsigned int get_freq_target(struct cs_dbs_tuners *cs_tuners,
-					   struct cpufreq_policy *policy)
+static inline unsigned int get_freq_step(struct cs_dbs_tuners *cs_tuners,
+					 struct cpufreq_policy *policy)
 {
-	unsigned int freq_target = (cs_tuners->freq_step * policy->max) / 100;
+	unsigned int freq_step = (cs_tuners->freq_step * policy->max) / 100;
 
 	/* max freq cannot be less than 100. But who knows... */
-	if (unlikely(freq_target == 0))
-		freq_target = DEF_FREQUENCY_STEP;
+	if (unlikely(freq_step == 0))
+		freq_step = DEF_FREQUENCY_STEP;
 
-	return freq_target;
+	return freq_step;
 }
 
 /*
@@ -55,10 +55,10 @@ static inline unsigned int get_freq_step(struct cs_dbs_tuners *cs_tuners,
 * sampling_down_factor, we check, if current idle time is more than 80%
 * (default), then we try to decrease frequency
 *
- * Any frequency increase takes it to the maximum frequency. Frequency reduction
- * happens at minimum steps of 5% (default) of maximum frequency
+ * Frequency updates happen at minimum steps of 5% (default) of maximum
+ * frequency
 */
-static unsigned int cs_dbs_timer(struct cpufreq_policy *policy)
+static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
 {
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 	struct cs_policy_dbs_info *dbs_info = to_dbs_info(policy_dbs);
@@ -66,6 +66,7 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
 	struct dbs_data *dbs_data = policy_dbs->dbs_data;
 	struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
 	unsigned int load = dbs_update(policy);
+	unsigned int freq_step;
 
 	/*
 	 * break out if we 'cannot' reduce the speed as the user might
@@ -82,6 +83,23 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
 	if (requested_freq > policy->max || requested_freq < policy->min)
 		requested_freq = policy->cur;
 
+	freq_step = get_freq_step(cs_tuners, policy);
+
+	/*
+	 * Decrease requested_freq one freq_step for each idle period that
+	 * we didn't update the frequency.
+	 */
+	if (policy_dbs->idle_periods < UINT_MAX) {
+		unsigned int freq_steps = policy_dbs->idle_periods * freq_step;
+
+		if (requested_freq > freq_steps)
+			requested_freq -= freq_steps;
+		else
+			requested_freq = policy->min;
+
+		policy_dbs->idle_periods = UINT_MAX;
+	}
+
 	/* Check for frequency increase */
 	if (load > dbs_data->up_threshold) {
 		dbs_info->down_skip = 0;
@@ -90,7 +108,7 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
 		if (requested_freq == policy->max)
 			goto out;
 
-		requested_freq += get_freq_target(cs_tuners, policy);
+		requested_freq += freq_step;
 		if (requested_freq > policy->max)
 			requested_freq = policy->max;
 
@@ -106,16 +124,14 @@ static unsigned int cs_dbs_update(struct cpufreq_policy *policy)
 
 	/* Check for frequency decrease */
 	if (load < cs_tuners->down_threshold) {
-		unsigned int freq_target;
 		/*
 		 * if we cannot reduce the frequency anymore, break out early
 		 */
 		if (requested_freq == policy->min)
 			goto out;
 
-		freq_target = get_freq_target(cs_tuners, policy);
-		if (requested_freq > freq_target)
-			requested_freq -= freq_target;
+		if (requested_freq > freq_step)
+			requested_freq -= freq_step;
 		else
 			requested_freq = policy->min;
 
@@ -305,7 +321,7 @@ static void cs_start(struct cpufreq_policy *policy)
 static struct dbs_governor cs_governor = {
 	.gov = CPUFREQ_DBS_GOVERNOR_INITIALIZER("conservative"),
 	.kobj_type = { .default_attrs = cs_attributes },
-	.gov_dbs_timer = cs_dbs_timer,
+	.gov_dbs_update = cs_dbs_update,
 	.alloc = cs_alloc,
 	.free = cs_free,
 	.init = cs_init,
...
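The idle-period decay added above is plain arithmetic: after N sampling periods in which the governor did not run, the requested frequency drops by N * freq_step in one go. A standalone sketch with illustrative numbers (userspace C, not kernel code; note the kernel relies on its later bounds checks to clamp the result to policy limits):

#include <stdio.h>

int main(void)
{
	unsigned int policy_min = 200000;	/* kHz, assumed */
	unsigned int policy_max = 1600000;	/* kHz, assumed */
	unsigned int freq_step = policy_max * 5 / 100;	/* default 5% step */
	unsigned int requested_freq = 1200000;
	unsigned int idle_periods = 4;	/* sampling periods with no update */
	unsigned int freq_steps = idle_periods * freq_step;

	/* same condition as the kernel code above */
	if (requested_freq > freq_steps)
		requested_freq -= freq_steps;
	else
		requested_freq = policy_min;

	printf("requested_freq after idle decay: %u kHz\n", requested_freq);
	return 0;
}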
@@ -61,7 +61,7 @@ ssize_t store_sampling_rate(struct gov_attr_set *attr_set, const char *buf,
 	 * entries can't be freed concurrently.
 	 */
 	list_for_each_entry(policy_dbs, &attr_set->policy_list, list) {
-		mutex_lock(&policy_dbs->timer_mutex);
+		mutex_lock(&policy_dbs->update_mutex);
 		/*
 		 * On 32-bit architectures this may race with the
 		 * sample_delay_ns read in dbs_update_util_handler(), but that
@@ -76,7 +76,7 @@ ssize_t store_sampling_rate(struct gov_attr_set *attr_set, const char *buf,
 		 * taken, so it shouldn't be significant.
 		 */
 		gov_update_sample_delay(policy_dbs, 0);
-		mutex_unlock(&policy_dbs->timer_mutex);
+		mutex_unlock(&policy_dbs->update_mutex);
 	}
 
 	return count;
@@ -117,7 +117,7 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 	struct dbs_data *dbs_data = policy_dbs->dbs_data;
 	unsigned int ignore_nice = dbs_data->ignore_nice_load;
-	unsigned int max_load = 0;
+	unsigned int max_load = 0, idle_periods = UINT_MAX;
 	unsigned int sampling_rate, io_busy, j;
 
 	/*
@@ -215,9 +215,19 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
 			j_cdbs->prev_load = load;
 		}
 
+		if (time_elapsed > 2 * sampling_rate) {
+			unsigned int periods = time_elapsed / sampling_rate;
+
+			if (periods < idle_periods)
+				idle_periods = periods;
+		}
+
 		if (load > max_load)
 			max_load = load;
 	}
+
+	policy_dbs->idle_periods = idle_periods;
+
 	return max_load;
 }
 EXPORT_SYMBOL_GPL(dbs_update);
@@ -236,9 +246,9 @@ static void dbs_work_handler(struct work_struct *work)
 	 * Make sure cpufreq_governor_limits() isn't evaluating load or the
 	 * ondemand governor isn't updating the sampling rate in parallel.
 	 */
-	mutex_lock(&policy_dbs->timer_mutex);
-	gov_update_sample_delay(policy_dbs, gov->gov_dbs_timer(policy));
-	mutex_unlock(&policy_dbs->timer_mutex);
+	mutex_lock(&policy_dbs->update_mutex);
+	gov_update_sample_delay(policy_dbs, gov->gov_dbs_update(policy));
+	mutex_unlock(&policy_dbs->update_mutex);
 
 	/* Allow the utilization update handler to queue up more work. */
 	atomic_set(&policy_dbs->work_count, 0);
@@ -348,7 +358,7 @@ static struct policy_dbs_info *alloc_policy_dbs_info(struct cpufreq_policy *policy,
 		return NULL;
 
 	policy_dbs->policy = policy;
-	mutex_init(&policy_dbs->timer_mutex);
+	mutex_init(&policy_dbs->update_mutex);
 	atomic_set(&policy_dbs->work_count, 0);
 	init_irq_work(&policy_dbs->irq_work, dbs_irq_work);
 	INIT_WORK(&policy_dbs->work, dbs_work_handler);
@@ -367,7 +377,7 @@ static void free_policy_dbs_info(struct policy_dbs_info *policy_dbs,
 {
 	int j;
 
-	mutex_destroy(&policy_dbs->timer_mutex);
+	mutex_destroy(&policy_dbs->update_mutex);
 
 	for_each_cpu(j, policy_dbs->policy->related_cpus) {
 		struct cpu_dbs_info *j_cdbs = &per_cpu(cpu_dbs, j);
@@ -547,10 +557,10 @@ void cpufreq_dbs_governor_limits(struct cpufreq_policy *policy)
 {
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 
-	mutex_lock(&policy_dbs->timer_mutex);
+	mutex_lock(&policy_dbs->update_mutex);
 	cpufreq_policy_apply_limits(policy);
 	gov_update_sample_delay(policy_dbs, 0);
-	mutex_unlock(&policy_dbs->timer_mutex);
+	mutex_unlock(&policy_dbs->update_mutex);
 }
 EXPORT_SYMBOL_GPL(cpufreq_dbs_governor_limits);
@@ -85,7 +85,7 @@ struct policy_dbs_info {
 	 * Per policy mutex that serializes load evaluation from limit-change
 	 * and work-handler.
 	 */
-	struct mutex timer_mutex;
+	struct mutex update_mutex;
 
 	u64 last_sample_time;
 	s64 sample_delay_ns;
@@ -97,6 +97,7 @@ struct policy_dbs_info {
 	struct list_head list;
 	/* Multiplier for increasing sample delay temporarily. */
 	unsigned int rate_mult;
+	unsigned int idle_periods;	/* For conservative */
 	/* Status indicators */
 	bool is_shared;		/* This object is used by multiple CPUs */
 	bool work_in_progress;	/* Work is being queued up or in progress */
@@ -135,7 +136,7 @@ struct dbs_governor {
 	 */
 	struct dbs_data *gdbs_data;
 
-	unsigned int (*gov_dbs_timer)(struct cpufreq_policy *policy);
+	unsigned int (*gov_dbs_update)(struct cpufreq_policy *policy);
 	struct policy_dbs_info *(*alloc)(void);
 	void (*free)(struct policy_dbs_info *policy_dbs);
 	int (*init)(struct dbs_data *dbs_data);
...
@@ -25,7 +25,7 @@
 #define MAX_SAMPLING_DOWN_FACTOR	(100000)
 #define MICRO_FREQUENCY_UP_THRESHOLD	(95)
 #define MICRO_FREQUENCY_MIN_SAMPLE_RATE	(10000)
-#define MIN_FREQUENCY_UP_THRESHOLD	(11)
+#define MIN_FREQUENCY_UP_THRESHOLD	(1)
 #define MAX_FREQUENCY_UP_THRESHOLD	(100)
 
 static struct od_ops od_ops;
@@ -169,7 +169,7 @@ static void od_update(struct cpufreq_policy *policy)
 	}
 }
 
-static unsigned int od_dbs_timer(struct cpufreq_policy *policy)
+static unsigned int od_dbs_update(struct cpufreq_policy *policy)
 {
 	struct policy_dbs_info *policy_dbs = policy->governor_data;
 	struct dbs_data *dbs_data = policy_dbs->dbs_data;
@@ -191,7 +191,7 @@ static unsigned int od_dbs_update(struct cpufreq_policy *policy)
 	od_update(policy);
 
 	if (dbs_info->freq_lo) {
-		/* Setup timer for SUB_SAMPLE */
+		/* Setup SUB_SAMPLE */
 		dbs_info->sample_type = OD_SUB_SAMPLE;
 		return dbs_info->freq_hi_delay_us;
 	}
@@ -255,11 +255,11 @@ static ssize_t store_sampling_down_factor(struct gov_attr_set *attr_set,
 	list_for_each_entry(policy_dbs, &attr_set->policy_list, list) {
 		/*
 		 * Doing this without locking might lead to using different
-		 * rate_mult values in od_update() and od_dbs_timer().
+		 * rate_mult values in od_update() and od_dbs_update().
 		 */
-		mutex_lock(&policy_dbs->timer_mutex);
+		mutex_lock(&policy_dbs->update_mutex);
 		policy_dbs->rate_mult = 1;
-		mutex_unlock(&policy_dbs->timer_mutex);
+		mutex_unlock(&policy_dbs->update_mutex);
 	}
 
 	return count;
@@ -374,8 +374,7 @@ static int od_init(struct dbs_data *dbs_data)
 		dbs_data->up_threshold = MICRO_FREQUENCY_UP_THRESHOLD;
 		/*
 		 * In nohz/micro accounting case we set the minimum frequency
-		 * not depending on HZ, but fixed (very low). The deferred
-		 * timer might skip some samples if idle/sleeping as needed.
+		 * not depending on HZ, but fixed (very low).
 		 */
 		dbs_data->min_sampling_rate = MICRO_FREQUENCY_MIN_SAMPLE_RATE;
 	} else {
@@ -415,7 +414,7 @@ static struct od_ops od_ops = {
 static struct dbs_governor od_dbs_gov = {
 	.gov = CPUFREQ_DBS_GOVERNOR_INITIALIZER("ondemand"),
 	.kobj_type = { .default_attrs = od_attributes },
-	.gov_dbs_timer = od_dbs_timer,
+	.gov_dbs_update = od_dbs_update,
 	.alloc = od_alloc,
 	.free = od_free,
 	.init = od_init,
...
@@ -41,6 +41,18 @@ static int cpufreq_stats_update(struct cpufreq_stats *stats)
 	return 0;
 }
 
+static void cpufreq_stats_clear_table(struct cpufreq_stats *stats)
+{
+	unsigned int count = stats->max_state;
+
+	memset(stats->time_in_state, 0, count * sizeof(u64));
+#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
+	memset(stats->trans_table, 0, count * count * sizeof(int));
+#endif
+	stats->last_time = get_jiffies_64();
+	stats->total_trans = 0;
+}
+
 static ssize_t show_total_trans(struct cpufreq_policy *policy, char *buf)
 {
 	return sprintf(buf, "%d\n", policy->stats->total_trans);
@@ -64,6 +76,14 @@ static ssize_t show_time_in_state(struct cpufreq_policy *policy, char *buf)
 	return len;
 }
 
+static ssize_t store_reset(struct cpufreq_policy *policy, const char *buf,
+			   size_t count)
+{
+	/* We don't care what is written to the attribute. */
+	cpufreq_stats_clear_table(policy->stats);
+	return count;
+}
+
 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS
 static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
 {
@@ -113,10 +133,12 @@ cpufreq_freq_attr_ro(trans_table);
 
 cpufreq_freq_attr_ro(total_trans);
 cpufreq_freq_attr_ro(time_in_state);
+cpufreq_freq_attr_wo(reset);
 
 static struct attribute *default_attrs[] = {
 	&total_trans.attr,
 	&time_in_state.attr,
+	&reset.attr,
 #ifdef CONFIG_CPU_FREQ_STAT_DETAILS
 	&trans_table.attr,
 #endif
...
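As store_reset() shows, any write to the new per-policy "reset" attribute clears time_in_state, trans_table and total_trans. A userspace sketch (the cpu0 path follows the usual cpufreq stats layout; adjust for the CPU of interest):

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/stats/reset",
			"w");

	if (!f) {
		perror("fopen");
		return 1;
	}
	fputs("1\n", f);	/* the written value is ignored */
	fclose(f);
	return 0;
}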
/*
 * Copyright (C) 2001-2002 Deep Blue Solutions Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * CPU support functions
 */
#include <linux/module.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/cpufreq.h>
#include <linux/sched.h>
#include <linux/smp.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/of_address.h>

#include <asm/mach-types.h>
#include <asm/hardware/icst.h>

static void __iomem *cm_base;
/* The cpufreq driver only uses the OSC register */
#define INTEGRATOR_HDR_OSC_OFFSET	0x08
#define INTEGRATOR_HDR_LOCK_OFFSET	0x14

static struct cpufreq_driver integrator_driver;

static const struct icst_params lclk_params = {
	.ref		= 24000000,
	.vco_max	= ICST525_VCO_MAX_5V,
	.vco_min	= ICST525_VCO_MIN,
	.vd_min		= 8,
	.vd_max		= 132,
	.rd_min		= 24,
	.rd_max		= 24,
	.s2div		= icst525_s2div,
	.idx2s		= icst525_idx2s,
};

static const struct icst_params cclk_params = {
	.ref		= 24000000,
	.vco_max	= ICST525_VCO_MAX_5V,
	.vco_min	= ICST525_VCO_MIN,
	.vd_min		= 12,
	.vd_max		= 160,
	.rd_min		= 24,
	.rd_max		= 24,
	.s2div		= icst525_s2div,
	.idx2s		= icst525_idx2s,
};

/*
 * Validate the speed policy.
 */
static int integrator_verify_policy(struct cpufreq_policy *policy)
{
	struct icst_vco vco;

	cpufreq_verify_within_cpu_limits(policy);

	vco = icst_hz_to_vco(&cclk_params, policy->max * 1000);
	policy->max = icst_hz(&cclk_params, vco) / 1000;

	vco = icst_hz_to_vco(&cclk_params, policy->min * 1000);
	policy->min = icst_hz(&cclk_params, vco) / 1000;

	cpufreq_verify_within_cpu_limits(policy);
	return 0;
}

static int integrator_set_target(struct cpufreq_policy *policy,
				 unsigned int target_freq,
				 unsigned int relation)
{
	cpumask_t cpus_allowed;
	int cpu = policy->cpu;
	struct icst_vco vco;
	struct cpufreq_freqs freqs;
	u_int cm_osc;

	/*
	 * Save this thread's cpus_allowed mask.
	 */
	cpus_allowed = current->cpus_allowed;

	/*
	 * Bind to the specified CPU. When this call returns,
	 * we should be running on the right CPU.
	 */
	set_cpus_allowed_ptr(current, cpumask_of(cpu));
	BUG_ON(cpu != smp_processor_id());

	/* get current setting */
	cm_osc = __raw_readl(cm_base + INTEGRATOR_HDR_OSC_OFFSET);

	if (machine_is_integrator())
		vco.s = (cm_osc >> 8) & 7;
	else if (machine_is_cintegrator())
		vco.s = 1;
	vco.v = cm_osc & 255;
	vco.r = 22;
	freqs.old = icst_hz(&cclk_params, vco) / 1000;

	/* icst_hz_to_vco rounds down -- so we need the next
	 * larger freq in case of CPUFREQ_RELATION_L.
	 */
	if (relation == CPUFREQ_RELATION_L)
		target_freq += 999;
	if (target_freq > policy->max)
		target_freq = policy->max;
	vco = icst_hz_to_vco(&cclk_params, target_freq * 1000);
	freqs.new = icst_hz(&cclk_params, vco) / 1000;

	if (freqs.old == freqs.new) {
		set_cpus_allowed_ptr(current, &cpus_allowed);
		return 0;
	}

	cpufreq_freq_transition_begin(policy, &freqs);

	cm_osc = __raw_readl(cm_base + INTEGRATOR_HDR_OSC_OFFSET);

	if (machine_is_integrator()) {
		cm_osc &= 0xfffff800;
		cm_osc |= vco.s << 8;
	} else if (machine_is_cintegrator()) {
		cm_osc &= 0xffffff00;
	}
	cm_osc |= vco.v;
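	/*
	 * The core module OSC register is write-protected: writing the
	 * magic value 0xa05f to the LOCK register below unlocks it, and
	 * writing zero re-locks it once the new divisor is in place.
	 */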
	__raw_writel(0xa05f, cm_base + INTEGRATOR_HDR_LOCK_OFFSET);
	__raw_writel(cm_osc, cm_base + INTEGRATOR_HDR_OSC_OFFSET);
	__raw_writel(0, cm_base + INTEGRATOR_HDR_LOCK_OFFSET);

	/*
	 * Restore the CPUs allowed mask.
	 */
	set_cpus_allowed_ptr(current, &cpus_allowed);

	cpufreq_freq_transition_end(policy, &freqs, 0);

	return 0;
}

static unsigned int integrator_get(unsigned int cpu)
{
	cpumask_t cpus_allowed;
	unsigned int current_freq;
	u_int cm_osc;
	struct icst_vco vco;

	cpus_allowed = current->cpus_allowed;

	set_cpus_allowed_ptr(current, cpumask_of(cpu));
	BUG_ON(cpu != smp_processor_id());

	/* read the current oscillator setting */
	cm_osc = __raw_readl(cm_base + INTEGRATOR_HDR_OSC_OFFSET);

	if (machine_is_integrator())
		vco.s = (cm_osc >> 8) & 7;
	else
		vco.s = 1;
	vco.v = cm_osc & 255;
	vco.r = 22;

	current_freq = icst_hz(&cclk_params, vco) / 1000; /* current freq */

	set_cpus_allowed_ptr(current, &cpus_allowed);

	return current_freq;
}

static int integrator_cpufreq_init(struct cpufreq_policy *policy)
{
	/* set default policy and cpuinfo */
	policy->max = policy->cpuinfo.max_freq = 160000;
	policy->min = policy->cpuinfo.min_freq = 12000;
	policy->cpuinfo.transition_latency = 1000000; /* 1 ms, assumed */

	return 0;
}

static struct cpufreq_driver integrator_driver = {
	.flags		= CPUFREQ_NEED_INITIAL_FREQ_CHECK,
	.verify		= integrator_verify_policy,
	.target		= integrator_set_target,
	.get		= integrator_get,
	.init		= integrator_cpufreq_init,
	.name		= "integrator",
};

static int __init integrator_cpufreq_probe(struct platform_device *pdev)
{
	struct resource *res;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res)
		return -ENODEV;

	cm_base = devm_ioremap(&pdev->dev, res->start, resource_size(res));
	if (!cm_base)
		return -ENODEV;

	return cpufreq_register_driver(&integrator_driver);
}

static int __exit integrator_cpufreq_remove(struct platform_device *pdev)
{
	return cpufreq_unregister_driver(&integrator_driver);
}

static const struct of_device_id integrator_cpufreq_match[] = {
	{ .compatible = "arm,core-module-integrator"},
	{ },
};
MODULE_DEVICE_TABLE(of, integrator_cpufreq_match);

static struct platform_driver integrator_cpufreq_driver = {
	.driver = {
		.name = "integrator-cpufreq",
		.of_match_table = integrator_cpufreq_match,
	},
	.remove = __exit_p(integrator_cpufreq_remove),
};

module_platform_driver_probe(integrator_cpufreq_driver,
			     integrator_cpufreq_probe);

MODULE_AUTHOR("Russell M. King");
MODULE_DESCRIPTION("cpufreq driver for ARM Integrator CPUs");
MODULE_LICENSE("GPL");
@@ -42,6 +42,10 @@
 #define PMSR_PSAFE_ENABLE	(1UL << 30)
 #define PMSR_SPR_EM_DISABLE	(1UL << 31)
 #define PMSR_MAX(x)		((x >> 32) & 0xFF)
+#define LPSTATE_SHIFT		48
+#define GPSTATE_SHIFT		56
+#define GET_LPSTATE(x)		(((x) >> LPSTATE_SHIFT) & 0xFF)
+#define GET_GPSTATE(x)		(((x) >> GPSTATE_SHIFT) & 0xFF)
 #define MAX_RAMP_DOWN_TIME	5120
 /*
@@ -592,7 +596,8 @@ void gpstate_timer_handler(unsigned long data)
 {
 	struct cpufreq_policy *policy = (struct cpufreq_policy *)data;
 	struct global_pstate_info *gpstates = policy->driver_data;
-	int gpstate_idx;
+	int gpstate_idx, lpstate_idx;
+	unsigned long val;
 	unsigned int time_diff = jiffies_to_msecs(jiffies)
 					- gpstates->last_sampled_time;
 	struct powernv_smp_call_data freq_data;
@@ -600,21 +605,37 @@ void gpstate_timer_handler(unsigned long data)
 	if (!spin_trylock(&gpstates->gpstate_lock))
 		return;

+	/*
+	 * If PMCR was last updated via fast_switch, then the value in
+	 * gpstates->last_lpstate_idx may be stale. Hence, read from the
+	 * PMCR to get the correct data.
+	 */
+	val = get_pmspr(SPRN_PMCR);
+	freq_data.gpstate_id = (s8)GET_GPSTATE(val);
+	freq_data.pstate_id = (s8)GET_LPSTATE(val);
+	if (freq_data.gpstate_id == freq_data.pstate_id) {
+		reset_gpstates(policy);
+		spin_unlock(&gpstates->gpstate_lock);
+		return;
+	}
+
 	gpstates->last_sampled_time += time_diff;
 	gpstates->elapsed_time += time_diff;
-	freq_data.pstate_id = idx_to_pstate(gpstates->last_lpstate_idx);

-	if ((gpstates->last_gpstate_idx == gpstates->last_lpstate_idx) ||
-	    (gpstates->elapsed_time > MAX_RAMP_DOWN_TIME)) {
+	if (gpstates->elapsed_time > MAX_RAMP_DOWN_TIME) {
 		gpstate_idx = pstate_to_idx(freq_data.pstate_id);
+		lpstate_idx = gpstate_idx;
 		reset_gpstates(policy);
 		gpstates->highest_lpstate_idx = gpstate_idx;
 	} else {
+		lpstate_idx = pstate_to_idx(freq_data.pstate_id);
 		gpstate_idx = calc_global_pstate(gpstates->elapsed_time,
 						 gpstates->highest_lpstate_idx,
-						 gpstates->last_lpstate_idx);
+						 lpstate_idx);
 	}
+	freq_data.gpstate_id = idx_to_pstate(gpstate_idx);
+	gpstates->last_gpstate_idx = gpstate_idx;
+	gpstates->last_lpstate_idx = lpstate_idx;

 	/*
 	 * If local pstate is equal to global pstate, rampdown is over
 	 * So timer is not required to be queued.
@@ -622,10 +643,6 @@ void gpstate_timer_handler(unsigned long data)
 	if (gpstate_idx != gpstates->last_lpstate_idx)
 		queue_gpstate_timer(gpstates);

-	freq_data.gpstate_id = idx_to_pstate(gpstate_idx);
-	gpstates->last_gpstate_idx = pstate_to_idx(freq_data.gpstate_id);
-	gpstates->last_lpstate_idx = pstate_to_idx(freq_data.pstate_id);
-
 	spin_unlock(&gpstates->gpstate_lock);

 	/* Timer may get migrated to a different cpu on cpu hot unplug */
@@ -647,8 +664,14 @@ static int powernv_cpufreq_target_index(struct cpufreq_policy *policy,
 	if (unlikely(rebooting) && new_index != get_nominal_index())
 		return 0;

-	if (!throttled)
+	if (!throttled) {
+		/* we don't want to be preempted while
+		 * checking if the CPU frequency has been throttled
+		 */
+		preempt_disable();
 		powernv_cpufreq_throttle_check(NULL);
+		preempt_enable();
+	}

 	cur_msec = jiffies_to_msecs(get_jiffies_64());
@@ -752,9 +775,12 @@ static int powernv_cpufreq_cpu_init(struct cpufreq_policy *policy)
 	spin_lock_init(&gpstates->gpstate_lock);
 	ret = cpufreq_table_validate_and_show(policy, powernv_freqs);
-	if (ret < 0)
+	if (ret < 0) {
 		kfree(policy->driver_data);
+		return ret;
+	}
+	policy->fast_switch_possible = true;
 	return ret;
 }
@@ -897,6 +923,20 @@ static void powernv_cpufreq_stop_cpu(struct cpufreq_policy *policy)
 	del_timer_sync(&gpstates->timer);
 }

+static unsigned int powernv_fast_switch(struct cpufreq_policy *policy,
+					unsigned int target_freq)
+{
+	int index;
+	struct powernv_smp_call_data freq_data;
+
+	index = cpufreq_table_find_index_dl(policy, target_freq);
+	freq_data.pstate_id = powernv_freqs[index].driver_data;
+	freq_data.gpstate_id = powernv_freqs[index].driver_data;
+	set_pstate(&freq_data);
+
+	return powernv_freqs[index].frequency;
+}
+
 static struct cpufreq_driver powernv_cpufreq_driver = {
 	.name		= "powernv-cpufreq",
 	.flags		= CPUFREQ_CONST_LOOPS,
@@ -904,6 +944,7 @@ static struct cpufreq_driver powernv_cpufreq_driver = {
 	.exit		= powernv_cpufreq_cpu_exit,
 	.verify		= cpufreq_generic_frequency_table_verify,
 	.target_index	= powernv_cpufreq_target_index,
+	.fast_switch	= powernv_fast_switch,
 	.get		= powernv_cpufreq_get,
 	.stop_cpu	= powernv_cpufreq_stop_cpu,
 	.attr		= powernv_cpu_freq_attr,
...
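For context on the new ->fast_switch() callback: the schedutil governor calls it from the scheduler's hot path with interrupts disabled, so it must neither sleep nor go through the usual transition-notifier machinery. A hedged sketch of the core-side wrapper, reconstructed from the 4.10-era cpufreq core rather than quoted from this diff:

/* Sketch: the core is assumed to clamp the request to the policy
 * limits before handing it to the driver, which is why
 * powernv_fast_switch() above can map target_freq straight to a
 * frequency-table index without further range checks. */
unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy,
					unsigned int target_freq)
{
	target_freq = clamp_val(target_freq, policy->min, policy->max);

	return cpufreq_driver->fast_switch(policy, target_freq);
}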
@@ -22,7 +22,7 @@
 #define POWERNV_THRESHOLD_LATENCY_NS	200000

-struct cpuidle_driver powernv_idle_driver = {
+static struct cpuidle_driver powernv_idle_driver = {
 	.name = "powernv_idle",
 	.owner = THIS_MODULE,
 };
...
@@ -97,7 +97,23 @@ static int find_deepest_state(struct cpuidle_driver *drv,
 	return ret;
 }

-#ifdef CONFIG_SUSPEND
+/**
+ * cpuidle_use_deepest_state - Set/clear governor override flag.
+ * @enable: New value of the flag.
+ *
+ * Set/unset the flag telling the current CPU to use the deepest idle state
+ * (overriding the governors from that point on if set).
+ */
+void cpuidle_use_deepest_state(bool enable)
+{
+	struct cpuidle_device *dev;
+
+	preempt_disable();
+	dev = cpuidle_get_device();
+	dev->use_deepest_state = enable;
+	preempt_enable();
+}
+
 /**
  * cpuidle_find_deepest_state - Find the deepest available idle state.
  * @drv: cpuidle driver for the given CPU.
@@ -109,6 +125,7 @@ int cpuidle_find_deepest_state(struct cpuidle_driver *drv,
 	return find_deepest_state(drv, dev, UINT_MAX, 0, false);
 }

+#ifdef CONFIG_SUSPEND
 static void enter_freeze_proper(struct cpuidle_driver *drv,
 				struct cpuidle_device *dev, int index)
 {
...
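The new override flag above is what the idle-injection rework mentioned in the changelog builds on. A hedged sketch of the intended usage pattern from an injection context (the middle helper is hypothetical, named only for illustration):

/* Illustrative only: force the deepest idle state for an injected
 * idle period, then return state selection to the governor. */
cpuidle_use_deepest_state(true);	/* bypass the governor */
run_injected_idle_period();		/* hypothetical helper */
cpuidle_use_deepest_state(false);	/* back to normal selection */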
@@ -38,6 +38,12 @@ static int init_state_node(struct cpuidle_state *idle_state,
 	 * state enter function.
 	 */
 	idle_state->enter = match_id->data;
+	/*
+	 * Since this is not a "coupled" state, it's safe to assume interrupts
+	 * won't be enabled when it exits, allowing the tick to be frozen
+	 * safely. So enter() can also serve as the enter_freeze() callback.
+	 */
+	idle_state->enter_freeze = match_id->data;

 	err = of_property_read_u32(state_node, "wakeup-latency-us",
 				   &idle_state->exit_latency);
...
...@@ -9,7 +9,6 @@ ...@@ -9,7 +9,6 @@
*/ */
#include <linux/mutex.h> #include <linux/mutex.h>
#include <linux/module.h>
#include <linux/cpuidle.h> #include <linux/cpuidle.h>
#include "cpuidle.h" #include "cpuidle.h"
...@@ -53,14 +52,11 @@ int cpuidle_switch_governor(struct cpuidle_governor *gov) ...@@ -53,14 +52,11 @@ int cpuidle_switch_governor(struct cpuidle_governor *gov)
if (cpuidle_curr_governor) { if (cpuidle_curr_governor) {
list_for_each_entry(dev, &cpuidle_detected_devices, device_list) list_for_each_entry(dev, &cpuidle_detected_devices, device_list)
cpuidle_disable_device(dev); cpuidle_disable_device(dev);
module_put(cpuidle_curr_governor->owner);
} }
cpuidle_curr_governor = gov; cpuidle_curr_governor = gov;
if (gov) { if (gov) {
if (!try_module_get(cpuidle_curr_governor->owner))
return -EINVAL;
list_for_each_entry(dev, &cpuidle_detected_devices, device_list) list_for_each_entry(dev, &cpuidle_detected_devices, device_list)
cpuidle_enable_device(dev); cpuidle_enable_device(dev);
cpuidle_install_idle_handler(); cpuidle_install_idle_handler();
......
@@ -15,7 +15,6 @@
 #include <linux/kernel.h>
 #include <linux/cpuidle.h>
 #include <linux/pm_qos.h>
-#include <linux/module.h>
 #include <linux/jiffies.h>
 #include <linux/tick.h>
@@ -177,7 +176,6 @@ static struct cpuidle_governor ladder_governor = {
 	.enable = ladder_enable_device,
 	.select = ladder_select_state,
 	.reflect = ladder_reflect,
-	.owner = THIS_MODULE,
 };

 /**
...
@@ -19,7 +19,6 @@
 #include <linux/tick.h>
 #include <linux/sched.h>
 #include <linux/math64.h>
-#include <linux/module.h>

 /*
  * Please note when changing the tuning values:
@@ -484,7 +483,6 @@ static struct cpuidle_governor menu_governor = {
 	.enable = menu_enable_device,
 	.select = menu_select,
 	.reflect = menu_reflect,
-	.owner = THIS_MODULE,
 };

 /**
...
@@ -403,8 +403,10 @@ static int cpuidle_add_state_sysfs(struct cpuidle_device *device)
 	/* state statistics */
 	for (i = 0; i < drv->state_count; i++) {
 		kobj = kzalloc(sizeof(struct cpuidle_state_kobj), GFP_KERNEL);
-		if (!kobj)
+		if (!kobj) {
+			ret = -ENOMEM;
 			goto error_state;
+		}
 		kobj->state = &drv->states[i];
 		kobj->state_usage = &device->states_usage[i];
 		init_completion(&kobj->kobj_unregister);
...
@@ -850,7 +850,7 @@ int devfreq_add_governor(struct devfreq_governor *governor)
 EXPORT_SYMBOL(devfreq_add_governor);

 /**
- * devfreq_remove_device() - Remove devfreq feature from a device.
+ * devfreq_remove_governor() - Remove devfreq feature from a device.
  * @governor: the devfreq governor to be removed
  */
 int devfreq_remove_governor(struct devfreq_governor *governor)
...
@@ -190,6 +190,7 @@ static const struct of_device_id exynos_nocp_id_match[] = {
 	{ .compatible = "samsung,exynos5420-nocp", },
 	{ /* sentinel */ },
 };
+MODULE_DEVICE_TABLE(of, exynos_nocp_id_match);

 static struct regmap_config exynos_nocp_regmap_config = {
 	.reg_bits = 32,
...
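For context, the one-line MODULE_DEVICE_TABLE() additions here and in the rockchip-dfi hunk below export the OF match table so udev can autoload these drivers when a matching device-tree node is probed; without it, the modules only bind if loaded by hand. A sketch of the alias this is assumed to generate (format per the usual OF modalias convention):

/* Assumed resulting module alias, derived from the compatible string:
 *   of:N*T*Csamsung,exynos5420-nocp
 */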
@@ -188,6 +188,7 @@ static const struct of_device_id rockchip_dfi_id_match[] = {
 	{ .compatible = "rockchip,rk3399-dfi" },
 	{ },
 };
+MODULE_DEVICE_TABLE(of, rockchip_dfi_id_match);

 static int rockchip_dfi_probe(struct platform_device *pdev)
 {
...