Commit ef800684 authored by Linus Torvalds

Merge tag 'pm-5.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm

Pull power management updates from Rafael Wysocki:
 "These are PM-runtime framework changes to use ktime instead of jiffies
  for accounting, new PM core flag to mark devices that don't need any
  form of power management, cpuidle updates including driver API
  documentation and a new governor, cpufreq updates including a new
  driver for Armada 8K, thermal cleanups and more, some energy-aware
  scheduling (EAS) enabling changes, new chips support in the intel_idle
  and RAPL drivers and assorted cleanups in some other places.

  Specifics:

   - Update the PM-runtime framework to use ktime instead of jiffies for
     accounting (Thara Gopinath, Vincent Guittot)

   - Optimize the autosuspend code in the PM-runtime framework somewhat
     (Ladislav Michl)

   - Add a PM core flag to mark devices that don't need any form of
     power management (Sudeep Holla)

   - Introduce driver API documentation for cpuidle and add a new
     cpuidle governor for tickless systems (Rafael Wysocki)

   - Add Jacobsville support to the intel_idle driver (Zhang Rui)

   - Clean up a cpuidle core header file and the cpuidle-dt and ACPI
     processor-idle drivers (Yangtao Li, Joseph Lo, Yazen Ghannam)

   - Add new cpufreq driver for Armada 8K (Gregory Clement)

   - Fix and clean up cpufreq core (Rafael Wysocki, Viresh Kumar, Amit
     Kucheria)

   - Add support for light-weight tear-down and bring-up of CPUs to the
     cpufreq core and use it in the cpufreq-dt driver (Viresh Kumar)

   - Fix cpu_cooling Kconfig dependencies, add support for CPU cooling
     auto-registration to the cpufreq core and use it in multiple
     cpufreq drivers (Amit Kucheria)

   - Fix some minor issues and do some cleanups in the davinci,
     e_powersaver, ap806, s5pv210, qcom and kryo cpufreq drivers
     (Bartosz Golaszewski, Gustavo Silva, Julia Lawall, Paweł Chmiel,
     Taniya Das, Viresh Kumar)

   - Add a Hisilicon CPPC quirk to the cppc_cpufreq driver (Xiongfeng
     Wang)

   - Clean up the intel_pstate and acpi-cpufreq drivers (Erwan Velu,
     Rafael Wysocki)

   - Clean up multiple cpufreq drivers (Yangtao Li)

   - Update cpufreq-related MAINTAINERS entries (Baruch Siach, Lukas
     Bulwahn)

   - Add support for exposing the Energy Model via debugfs and make
     multiple cpufreq drivers register an Energy Model to support
     energy-aware scheduling (Quentin Perret, Dietmar Eggemann, Matthias
     Kaehlcke)

   - Add Ice Lake mobile and Jacobsville support to the Intel RAPL
     power-capping driver (Gayatri Kammela, Zhang Rui)

   - Add a power estimation helper to the operating performance points
     (OPP) framework and clean up a core function in it (Quentin Perret,
     Viresh Kumar)

   - Make minor improvements in the generic power domains (genpd), OPP
     and system suspend frameworks and in the PM core (Aditya Pakki,
     Douglas Anderson, Greg Kroah-Hartman, Rafael Wysocki, Yangtao Li)"

* tag 'pm-5.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (80 commits)
  cpufreq: kryo: Release OPP tables on module removal
  cpufreq: ap806: add missing of_node_put after of_device_is_available
  cpufreq: acpi-cpufreq: Report if CPU doesn't support boost technologies
  cpufreq: Pass updated policy to driver ->setpolicy() callback
  cpufreq: Fix two debug messages in cpufreq_set_policy()
  cpufreq: Reorder and simplify cpufreq_update_policy()
  cpufreq: Add kerneldoc comments for two core functions
  PM / core: Add support to skip power management in device/driver model
  cpufreq: intel_pstate: Rework iowait boosting to be less aggressive
  cpufreq: intel_pstate: Eliminate intel_pstate_get_base_pstate()
  cpufreq: intel_pstate: Avoid redundant initialization of local vars
  powercap/intel_rapl: add Ice Lake mobile
  ACPI / processor: Set P_LVL{2,3} idle state descriptions
  cpufreq / cppc: Work around for Hisilicon CPPC cpufreq
  ACPI / CPPC: Add a helper to get desired performance
  cpufreq: davinci: move configuration to include/linux/platform_data
  cpufreq: speedstep: convert BUG() to BUG_ON()
  cpufreq: powernv: fix missing check of return value in init_powernv_pstates()
  cpufreq: longhaul: remove unneeded semicolon
  cpufreq: pcc-cpufreq: remove unneeded semicolon
  ..
parents 8dcd175b 1271d6d5
......@@ -155,14 +155,14 @@ governor uses that information depends on what algorithm is implemented by it
and that is the primary reason for having more than one governor in the
``CPUIdle`` subsystem.
There are two ``CPUIdle`` governors available, ``menu`` and ``ladder``. Which
of them is used depends on the configuration of the kernel and in particular on
whether or not the scheduler tick can be `stopped by the idle
loop <idle-cpus-and-tick_>`_. It is possible to change the governor at run time
if the ``cpuidle_sysfs_switch`` command line parameter has been passed to the
kernel, but that is not safe in general, so it should not be done on production
systems (that may change in the future, though). The name of the ``CPUIdle``
governor currently used by the kernel can be read from the
There are three ``CPUIdle`` governors available, ``menu``, `TEO <teo-gov_>`_
and ``ladder``. Which of them is used by default depends on the configuration
of the kernel and in particular on whether or not the scheduler tick can be
`stopped by the idle loop <idle-cpus-and-tick_>`_. It is possible to change the
governor at run time if the ``cpuidle_sysfs_switch`` command line parameter has
been passed to the kernel, but that is not safe in general, so it should not be
done on production systems (that may change in the future, though). The name of
the ``CPUIdle`` governor currently used by the kernel can be read from the
:file:`current_governor_ro` (or :file:`current_governor` if
``cpuidle_sysfs_switch`` is present in the kernel command line) file under
:file:`/sys/devices/system/cpu/cpuidle/` in ``sysfs``.
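For example, the governor name can also be read programmatically; the following
is a minimal user-space sketch (assuming only the ``sysfs`` path quoted above):

    #include <stdio.h>

    int main(void)
    {
            char name[32];
            FILE *f;

            /* Read-only view of the current cpuidle governor. */
            f = fopen("/sys/devices/system/cpu/cpuidle/current_governor_ro", "r");
            if (!f) {
                    perror("fopen");
                    return 1;
            }
            if (fgets(name, sizeof(name), f))
                    printf("cpuidle governor: %s", name);
            fclose(f);
            return 0;
    }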
......@@ -256,6 +256,8 @@ the ``menu`` governor by default and if it is not tickless, the default
``CPUIdle`` governor on it will be ``ladder``.
.. _menu-gov:
The ``menu`` Governor
=====================
......@@ -333,6 +335,92 @@ that time, the governor may need to select a shallower state with a suitable
target residency.
.. _teo-gov:
The Timer Events Oriented (TEO) Governor
========================================
The timer events oriented (TEO) governor is an alternative ``CPUIdle`` governor
for tickless systems. It follows the same basic strategy as the ``menu`` `one
<menu-gov_>`_: it always tries to find the deepest idle state suitable for the
given conditions. However, it applies a different approach to that problem.
First, it does not use sleep length correction factors, but instead it attempts
to correlate the observed idle duration values with the available idle states
and use that information to pick the idle state that is most likely to
"match" the upcoming CPU idle interval. Second, it does not take into account
the tasks that were running on the given CPU in the past and are waiting on
some I/O operations to complete now (there is no guarantee that they will run
on the same CPU when they become runnable again), and the pattern detection
code in it avoids taking timer wakeups into account. It also only uses idle
duration values less than the current time until the closest timer (with the
scheduler tick excluded) for that purpose.
Like in the ``menu`` governor `case <menu-gov_>`_, the first step is to obtain
the *sleep length*, which is the time until the closest timer event with the
assumption that the scheduler tick will be stopped (that also is the upper bound
on the time until the next CPU wakeup). That value is then used to preselect an
idle state on the basis of three metrics maintained for each idle state provided
by the ``CPUIdle`` driver: ``hits``, ``misses`` and ``early_hits``.
The ``hits`` and ``misses`` metrics measure the likelihood that a given idle
state will "match" the observed (post-wakeup) idle duration if it "matches" the
sleep length. They both are subject to decay (after a CPU wakeup) every time
the target residency of the idle state corresponding to them is less than or
equal to the sleep length and the target residency of the next idle state is
greater than the sleep length (that is, when the idle state corresponding to
them "matches" the sleep length). The ``hits`` metric is increased if the
former condition is satisfied and the target residency of the given idle state
is less than or equal to the observed idle duration and the target residency of
the next idle state is greater than the observed idle duration at the same time
(that is, it is increased when the given idle state "matches" both the sleep
length and the observed idle duration). In turn, the ``misses`` metric is
increased when the given idle state "matches" the sleep length only and the
observed idle duration is too short for its target residency.
The ``early_hits`` metric measures the likelihood that a given idle state will
"match" the observed (post-wakeup) idle duration if it does not "match" the
sleep length. It is subject to decay on every CPU wakeup and it is increased
when the idle state corresponding to it "matches" the observed (post-wakeup)
idle duration and the target residency of the next idle state is less than or
equal to the sleep length (i.e. the idle state "matching" the sleep length is
deeper than the given one).
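To make the bookkeeping concrete, here is a simplified sketch of how such
decaying counters could be maintained after each wakeup. This is illustrative
only, not the kernel's implementation; DECAY_SHIFT, PULSE and struct teo_state
are made-up names:

    #define DECAY_SHIFT 3       /* decay each metric by 1/8 on update */
    #define PULSE       1024    /* weight added to the matching metric */

    struct teo_state {
            unsigned int hits;
            unsigned int misses;
            unsigned int early_hits;
    };

    /*
     * idx_timer: deepest state whose target residency is within the sleep
     * length ("matches" the sleep length), or -1 if there is none.
     * idx_duration: deepest state whose target residency is within the
     * observed (post-wakeup) idle duration, or -1 if there is none.
     */
    static void update_metrics(struct teo_state *states, int count,
                               int idx_timer, int idx_duration)
    {
            int i;

            /* early_hits decays for all states on every wakeup. */
            for (i = 0; i < count; i++)
                    states[i].early_hits -= states[i].early_hits >> DECAY_SHIFT;

            if (idx_timer < 0)
                    return;

            /* hits/misses decay for the state matching the sleep length. */
            states[idx_timer].hits -= states[idx_timer].hits >> DECAY_SHIFT;
            states[idx_timer].misses -= states[idx_timer].misses >> DECAY_SHIFT;

            if (idx_timer == idx_duration) {
                    /* Matched both the sleep length and the idle duration. */
                    states[idx_timer].hits += PULSE;
            } else {
                    /* The observed idle duration was too short. */
                    states[idx_timer].misses += PULSE;
                    if (idx_duration >= 0)
                            states[idx_duration].early_hits += PULSE;
            }
    }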
The governor walks the list of idle states provided by the ``CPUIdle`` driver
and finds the last (deepest) one with the target residency less than or equal
to the sleep length. Then, the ``hits`` and ``misses`` metrics of that idle
state are compared with each other, and that state is preselected if its
``hits`` metric is greater (which means that it is likely to "match" the
observed idle duration after CPU wakeup). If the ``misses`` metric is greater,
the governor preselects the shallower idle state with the maximum
``early_hits`` metric (and if several shallower idle states share that maximum
``early_hits`` value, the shallowest of them is preselected).
[If there is a wakeup latency constraint coming from the `PM QoS framework
<cpu-pm-qos_>`_ which is hit before reaching the deepest idle state with the
target residency within the sleep length, the deepest idle state with the exit
latency within the constraint is preselected without consulting the ``hits``,
``misses`` and ``early_hits`` metrics.]
Next, the governor takes several idle duration values observed most recently
into consideration and if at least a half of them are greater than or equal to
the target residency of the preselected idle state, that idle state becomes the
final candidate to ask for. Otherwise, the average of the most recent idle
duration values below the target residency of the preselected idle state is
computed and the governor walks the idle states shallower than the preselected
one and finds the deepest of them with the target residency within that average.
That idle state is then taken as the final candidate to ask for.
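The preselection logic described above can be sketched as follows (again
illustrative and simplified; PM QoS and disabled states are ignored, and
``candidate`` is assumed to be the index of the deepest state whose target
residency is within the sleep length):

    static int teo_preselect(const struct teo_state *states, int candidate)
    {
            unsigned int max_early = 0;
            int i, best = candidate;

            if (states[candidate].misses <= states[candidate].hits)
                    return candidate;

            /*
             * The candidate "misses" more often than it "hits": fall back
             * to the shallower state with the best early_hits record (the
             * shallowest one wins ties, hence the strict comparison while
             * scanning from the shallow end).
             */
            for (i = 0; i < candidate; i++) {
                    if (states[i].early_hits > max_early) {
                            max_early = states[i].early_hits;
                            best = i;
                    }
            }
            return best;
    }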
Still, at this point the governor may need to refine the idle state selection if
it has not decided to `stop the scheduler tick <idle-cpus-and-tick_>`_. That
generally happens if the target residency of the idle state selected so far is
less than the tick period and the tick has not been stopped already (in a
previous iteration of the idle loop). Then, like in the ``menu`` governor
`case <menu-gov_>`_, the sleep length used in the previous computations may not
reflect the real time until the closest timer event and if it really is greater
than that time, a shallower state with a suitable target residency may need to
be selected.
.. _idle-states-representation:
Representation of Idle States
......
Supporting multiple CPU idle levels in kernel
cpuidle drivers
A cpuidle driver hooks into the cpuidle infrastructure and handles the
architecture/platform-dependent part of CPU idle states. The driver
provides platform idle-state detection and the mechanisms needed to
actually enter and exit CPU idle states.
The driver initializes a cpuidle_device structure for each CPU device
and registers it with cpuidle using cpuidle_register_device.
If all the idle states are the same, the wrapper function cpuidle_register
can be used instead (a minimal example follows the interface list below).
It can also support dynamic changes (like battery <-> AC) by using
cpuidle_pause_and_lock, cpuidle_disable_device, cpuidle_enable_device and
cpuidle_resume_and_unlock.
Interfaces:
extern int cpuidle_register(struct cpuidle_driver *drv,
const struct cpumask *const coupled_cpus);
extern int cpuidle_unregister(struct cpuidle_driver *drv);
extern int cpuidle_register_driver(struct cpuidle_driver *drv);
extern void cpuidle_unregister_driver(struct cpuidle_driver *drv);
extern int cpuidle_register_device(struct cpuidle_device *dev);
extern void cpuidle_unregister_device(struct cpuidle_device *dev);
extern void cpuidle_pause_and_lock(void);
extern void cpuidle_resume_and_unlock(void);
extern int cpuidle_enable_device(struct cpuidle_device *dev);
extern void cpuidle_disable_device(struct cpuidle_device *dev);
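For illustration, a minimal hypothetical driver with one idle state might look
like this (the state names, latencies and my_* identifiers are made up; only
the structure fields and the registration call mirror the interfaces above):

    #include <linux/cpuidle.h>
    #include <linux/module.h>

    static int my_enter(struct cpuidle_device *dev,
                        struct cpuidle_driver *drv, int index)
    {
            /* Platform-specific code actually enters the idle state here. */
            return index;   /* index of the state actually entered */
    }

    static struct cpuidle_driver my_idle_driver = {
            .name = "my_idle",
            .owner = THIS_MODULE,
            .states = {
                    [0] = {
                            .name = "IDLE1",
                            .desc = "hypothetical shallow idle state",
                            .exit_latency = 2,          /* microseconds */
                            .target_residency = 10,     /* microseconds */
                            .enter = my_enter,
                    },
            },
            .state_count = 1,
    };

    static int __init my_idle_init(void)
    {
            /* Registers the driver and a cpuidle_device for each CPU. */
            return cpuidle_register(&my_idle_driver, NULL);
    }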
Supporting multiple CPU idle levels in kernel
cpuidle governors
A cpuidle governor is a policy routine that decides which idle state to enter
at any given time. The cpuidle core invokes the governor through these
callbacks:
* enable() to enable governor for a particular device
* disable() to disable governor for a particular device
* select() to select an idle state to enter
* reflect() called after returning from the idle state, which can be used
by the governor for some record keeping.
More than one governor can be registered at the same time, and users can
switch between governors using the sysfs interface (when enabled).
Multiple governors are supported so that developers can easily experiment
with different policies. By default, cpuidle selects the most suitable
governor for your kernel configuration and platform. A skeleton governor
is sketched after the interface declaration below.
Interfaces:
extern int cpuidle_register_governor(struct cpuidle_governor *gov);
struct cpuidle_governor
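A skeleton governor showing the callbacks described above (hypothetical; the
callback signatures follow struct cpuidle_governor, but the policy itself is
a placeholder):

    #include <linux/cpuidle.h>

    static int my_gov_select(struct cpuidle_driver *drv,
                             struct cpuidle_device *dev, bool *stop_tick)
    {
            /* Placeholder policy: always pick the shallowest state. */
            *stop_tick = false;
            return 0;
    }

    static void my_gov_reflect(struct cpuidle_device *dev, int index)
    {
            /* Record keeping after waking up from state 'index'. */
    }

    static struct cpuidle_governor my_governor = {
            .name = "my_gov",
            .rating = 10,   /* the highest-rated registered governor wins */
            .select = my_gov_select,
            .reflect = my_gov_reflect,
    };

    static int __init my_gov_init(void)
    {
            return cpuidle_register_governor(&my_governor);
    }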
=======================
Device Power Management
=======================
===============================
CPU and Device Power Management
===============================
.. toctree::
cpuidle
devices
notifiers
types
......
......@@ -1736,6 +1736,7 @@ F: arch/arm/configs/mvebu_*_defconfig
F: arch/arm/mach-mvebu/
F: arch/arm64/boot/dts/marvell/armada*
F: drivers/cpufreq/armada-37xx-cpufreq.c
F: drivers/cpufreq/armada-8k-cpufreq.c
F: drivers/cpufreq/mvebu-cpufreq.c
F: drivers/irqchip/irq-armada-370-xp.c
F: drivers/irqchip/irq-mvebu-*
......@@ -3994,7 +3995,7 @@ M: Viresh Kumar <viresh.kumar@linaro.org>
L: linux-pm@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
T: git git://git.linaro.org/people/vireshk/linux.git (For ARM Updates)
T: git git://git.kernel.org/pub/scm/linux/kernel/git/vireshk/pm.git (For ARM Updates)
B: https://bugzilla.kernel.org
F: Documentation/admin-guide/pm/cpufreq.rst
F: Documentation/admin-guide/pm/intel_pstate.rst
......@@ -4054,6 +4055,7 @@ S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git
B: https://bugzilla.kernel.org
F: Documentation/admin-guide/pm/cpuidle.rst
F: Documentation/driver-api/pm/cpuidle.rst
F: drivers/cpuidle/*
F: include/linux/cpuidle.h
......@@ -12679,11 +12681,11 @@ F: Documentation/media/v4l-drivers/qcom_camss.rst
F: drivers/media/platform/qcom/camss/
QUALCOMM CPUFREQ DRIVER MSM8996/APQ8096
M: Ilia Lin <ilia.lin@gmail.com>
L: linux-pm@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/opp/kryo-cpufreq.txt
F: drivers/cpufreq/qcom-cpufreq-kryo.c
M: Ilia Lin <ilia.lin@kernel.org>
L: linux-pm@vger.kernel.org
S: Maintained
F: Documentation/devicetree/bindings/opp/kryo-cpufreq.txt
F: drivers/cpufreq/qcom-cpufreq-kryo.c
QUALCOMM EMAC GIGABIT ETHERNET DRIVER
M: Timur Tabi <timur@kernel.org>
......
......@@ -22,6 +22,7 @@
#include <linux/mfd/da8xx-cfgchip.h>
#include <linux/platform_data/clk-da8xx-cfgchip.h>
#include <linux/platform_data/clk-davinci-pll.h>
#include <linux/platform_data/davinci-cpufreq.h>
#include <linux/platform_data/gpio-davinci.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
......@@ -30,7 +31,6 @@
#include <asm/mach/map.h>
#include <mach/common.h>
#include <mach/cpufreq.h>
#include <mach/cputype.h>
#include <mach/da8xx.h>
#include <mach/pm.h>
......
......@@ -1050,6 +1050,48 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
return ret_val;
}
/**
* cppc_get_desired_perf - Get the value of desired performance register.
* @cpunum: CPU from which to get desired performance.
* @desired_perf: address of a variable to store the returned desired performance
*
* Return: 0 for success, -EIO otherwise.
*/
int cppc_get_desired_perf(int cpunum, u64 *desired_perf)
{
struct cpc_desc *cpc_desc = per_cpu(cpc_desc_ptr, cpunum);
int pcc_ss_id = per_cpu(cpu_pcc_subspace_idx, cpunum);
struct cpc_register_resource *desired_reg;
struct cppc_pcc_data *pcc_ss_data = NULL;
desired_reg = &cpc_desc->cpc_regs[DESIRED_PERF];
if (CPC_IN_PCC(desired_reg)) {
int ret = 0;
if (pcc_ss_id < 0)
return -EIO;
pcc_ss_data = pcc_data[pcc_ss_id];
down_write(&pcc_ss_data->pcc_lock);
if (send_pcc_cmd(pcc_ss_id, CMD_READ) >= 0)
cpc_read(cpunum, desired_reg, desired_perf);
else
ret = -EIO;
up_write(&pcc_ss_data->pcc_lock);
return ret;
}
cpc_read(cpunum, desired_reg, desired_perf);
return 0;
}
EXPORT_SYMBOL_GPL(cppc_get_desired_perf);
/**
* cppc_get_perf_caps - Get a CPU's performance capabilities.
* @cpunum: CPU from which to get capabilities info.
......
......@@ -282,6 +282,13 @@ static int acpi_processor_get_power_info_fadt(struct acpi_processor *pr)
pr->power.states[ACPI_STATE_C2].address,
pr->power.states[ACPI_STATE_C3].address));
snprintf(pr->power.states[ACPI_STATE_C2].desc,
ACPI_CX_DESC_LEN, "ACPI P_LVL2 IOPORT 0x%x",
pr->power.states[ACPI_STATE_C2].address);
snprintf(pr->power.states[ACPI_STATE_C3].desc,
ACPI_CX_DESC_LEN, "ACPI P_LVL3 IOPORT 0x%x",
pr->power.states[ACPI_STATE_C3].address);
return 0;
}
......
......@@ -427,6 +427,7 @@ __cpu_device_create(struct device *parent, void *drvdata,
dev->parent = parent;
dev->groups = groups;
dev->release = device_create_release;
device_set_pm_not_required(dev);
dev_set_drvdata(dev, drvdata);
retval = kobject_set_name_vargs(&dev->kobj, fmt, args);
......
......@@ -65,10 +65,15 @@ static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
if (IS_ERR(ce->clk)) {
ce->status = PCE_STATUS_ERROR;
} else {
clk_prepare(ce->clk);
ce->status = PCE_STATUS_ACQUIRED;
dev_dbg(dev, "Clock %pC con_id %s managed by runtime PM.\n",
ce->clk, ce->con_id);
if (clk_prepare(ce->clk)) {
ce->status = PCE_STATUS_ERROR;
dev_err(dev, "clk_prepare() failed\n");
} else {
ce->status = PCE_STATUS_ACQUIRED;
dev_dbg(dev,
"Clock %pC con_id %s managed by runtime PM.\n",
ce->clk, ce->con_id);
}
}
}
......
......@@ -160,7 +160,7 @@ EXPORT_SYMBOL_GPL(dev_pm_domain_attach_by_id);
* For a detailed function description, see dev_pm_domain_attach_by_id().
*/
struct device *dev_pm_domain_attach_by_name(struct device *dev,
char *name)
const char *name)
{
if (dev->pm_domain)
return ERR_PTR(-EEXIST);
......
......@@ -2483,7 +2483,7 @@ EXPORT_SYMBOL_GPL(genpd_dev_pm_attach_by_id);
* power-domain-names DT property. For further description see
* genpd_dev_pm_attach_by_id().
*/
struct device *genpd_dev_pm_attach_by_name(struct device *dev, char *name)
struct device *genpd_dev_pm_attach_by_name(struct device *dev, const char *name)
{
int index;
......@@ -2948,18 +2948,11 @@ static int __init genpd_debug_init(void)
genpd_debugfs_dir = debugfs_create_dir("pm_genpd", NULL);
if (!genpd_debugfs_dir)
return -ENOMEM;
d = debugfs_create_file("pm_genpd_summary", S_IRUGO,
genpd_debugfs_dir, NULL, &summary_fops);
if (!d)
return -ENOMEM;
debugfs_create_file("pm_genpd_summary", S_IRUGO, genpd_debugfs_dir,
NULL, &summary_fops);
list_for_each_entry(genpd, &gpd_list, gpd_list_node) {
d = debugfs_create_dir(genpd->name, genpd_debugfs_dir);
if (!d)
return -ENOMEM;
debugfs_create_file("current_state", 0444,
d, genpd, &status_fops);
......
......@@ -124,6 +124,10 @@ void device_pm_unlock(void)
*/
void device_pm_add(struct device *dev)
{
/* Skip PM setup/initialization. */
if (device_pm_not_required(dev))
return;
pr_debug("PM: Adding info for %s:%s\n",
dev->bus ? dev->bus->name : "No Bus", dev_name(dev));
device_pm_check_callbacks(dev);
......@@ -142,6 +146,9 @@ void device_pm_add(struct device *dev)
*/
void device_pm_remove(struct device *dev)
{
if (device_pm_not_required(dev))
return;
pr_debug("PM: Removing info for %s:%s\n",
dev->bus ? dev->bus->name : "No Bus", dev_name(dev));
complete_all(&dev->power.completion);
......@@ -1741,8 +1748,10 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
if (dev->power.direct_complete) {
if (pm_runtime_status_suspended(dev)) {
pm_runtime_disable(dev);
if (pm_runtime_status_suspended(dev))
if (pm_runtime_status_suspended(dev)) {
pm_dev_dbg(dev, state, "direct-complete ");
goto Complete;
}
pm_runtime_enable(dev);
}
......
......@@ -66,20 +66,30 @@ static int rpm_suspend(struct device *dev, int rpmflags);
*/
void update_pm_runtime_accounting(struct device *dev)
{
unsigned long now = jiffies;
unsigned long delta;
u64 now, last, delta;
delta = now - dev->power.accounting_timestamp;
if (dev->power.disable_depth > 0)
return;
last = dev->power.accounting_timestamp;
now = ktime_get_mono_fast_ns();
dev->power.accounting_timestamp = now;
if (dev->power.disable_depth > 0)
/*
* Because ktime_get_mono_fast_ns() is not monotonic during
* timekeeping updates, ensure that 'now' is after the last saved
* timestamp.
*/
if (now < last)
return;
delta = now - last;
if (dev->power.runtime_status == RPM_SUSPENDED)
dev->power.suspended_jiffies += delta;
dev->power.suspended_time += delta;
else
dev->power.active_jiffies += delta;
dev->power.active_time += delta;
}
static void __update_runtime_status(struct device *dev, enum rpm_status status)
......@@ -88,6 +98,22 @@ static void __update_runtime_status(struct device *dev, enum rpm_status status)
dev->power.runtime_status = status;
}
u64 pm_runtime_suspended_time(struct device *dev)
{
u64 time;
unsigned long flags;
spin_lock_irqsave(&dev->power.lock, flags);
update_pm_runtime_accounting(dev);
time = dev->power.suspended_time;
spin_unlock_irqrestore(&dev->power.lock, flags);
return time;
}
EXPORT_SYMBOL_GPL(pm_runtime_suspended_time);
/**
* pm_runtime_deactivate_timer - Deactivate given device's suspend timer.
* @dev: Device to handle.
......@@ -129,24 +155,21 @@ static void pm_runtime_cancel_pending(struct device *dev)
u64 pm_runtime_autosuspend_expiration(struct device *dev)
{
int autosuspend_delay;
u64 last_busy, expires = 0;
u64 now = ktime_get_mono_fast_ns();
u64 expires;
if (!dev->power.use_autosuspend)
goto out;
return 0;
autosuspend_delay = READ_ONCE(dev->power.autosuspend_delay);
if (autosuspend_delay < 0)
goto out;
last_busy = READ_ONCE(dev->power.last_busy);
return 0;
expires = last_busy + (u64)autosuspend_delay * NSEC_PER_MSEC;
if (expires <= now)
expires = 0; /* Already expired. */
expires = READ_ONCE(dev->power.last_busy);
expires += (u64)autosuspend_delay * NSEC_PER_MSEC;
if (expires > ktime_get_mono_fast_ns())
return expires; /* Expires in the future */
out:
return expires;
return 0;
}
EXPORT_SYMBOL_GPL(pm_runtime_autosuspend_expiration);
......@@ -1276,6 +1299,9 @@ void __pm_runtime_disable(struct device *dev, bool check_resume)
pm_runtime_put_noidle(dev);
}
/* Update time accounting before disabling PM-runtime. */
update_pm_runtime_accounting(dev);
if (!dev->power.disable_depth++)
__pm_runtime_barrier(dev);
......@@ -1294,10 +1320,15 @@ void pm_runtime_enable(struct device *dev)
spin_lock_irqsave(&dev->power.lock, flags);
if (dev->power.disable_depth > 0)
if (dev->power.disable_depth > 0) {
dev->power.disable_depth--;
else
/* About to enable runtime pm, set accounting_timestamp to now */
if (!dev->power.disable_depth)
dev->power.accounting_timestamp = ktime_get_mono_fast_ns();
} else {
dev_warn(dev, "Unbalanced %s!\n", __func__);
}
WARN(!dev->power.disable_depth &&
dev->power.runtime_status == RPM_SUSPENDED &&
......@@ -1494,7 +1525,6 @@ void pm_runtime_init(struct device *dev)
dev->power.request_pending = false;
dev->power.request = RPM_REQ_NONE;
dev->power.deferred_resume = false;
dev->power.accounting_timestamp = jiffies;
INIT_WORK(&dev->power.work, pm_runtime_work);
dev->power.timer_expires = 0;
......
......@@ -125,9 +125,12 @@ static ssize_t runtime_active_time_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
int ret;
u64 tmp;
spin_lock_irq(&dev->power.lock);
update_pm_runtime_accounting(dev);
ret = sprintf(buf, "%i\n", jiffies_to_msecs(dev->power.active_jiffies));
tmp = dev->power.active_time;
do_div(tmp, NSEC_PER_MSEC);
ret = sprintf(buf, "%llu\n", tmp);
spin_unlock_irq(&dev->power.lock);
return ret;
}
......@@ -138,10 +141,12 @@ static ssize_t runtime_suspended_time_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
int ret;
u64 tmp;
spin_lock_irq(&dev->power.lock);
update_pm_runtime_accounting(dev);
ret = sprintf(buf, "%i\n",
jiffies_to_msecs(dev->power.suspended_jiffies));
tmp = dev->power.suspended_time;
do_div(tmp, NSEC_PER_MSEC);
ret = sprintf(buf, "%llu\n", tmp);
spin_unlock_irq(&dev->power.lock);
return ret;
}
......@@ -648,6 +653,10 @@ int dpm_sysfs_add(struct device *dev)
{
int rc;
/* No need to create PM sysfs if explicitly disabled. */
if (device_pm_not_required(dev))
return 0;
rc = sysfs_create_group(&dev->kobj, &pm_attr_group);
if (rc)
return rc;
......@@ -727,6 +736,8 @@ void rpm_sysfs_remove(struct device *dev)
void dpm_sysfs_remove(struct device *dev)
{
if (device_pm_not_required(dev))
return;
sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_tolerance_attr_group);
dev_pm_qos_constraints_destroy(dev);
rpm_sysfs_remove(dev);
......
......@@ -783,7 +783,7 @@ void pm_wakeup_ws_event(struct wakeup_source *ws, unsigned int msec, bool hard)
EXPORT_SYMBOL_GPL(pm_wakeup_ws_event);
/**
* pm_wakeup_event - Notify the PM core of a wakeup event.
* pm_wakeup_dev_event - Notify the PM core of a wakeup event.
* @dev: Device the wakeup event is related to.
* @msec: Anticipated event processing time (in milliseconds).
* @hard: If set, abort suspends in progress and wake up from suspend-to-idle.
......
......@@ -207,8 +207,6 @@ comment "CPU frequency scaling drivers"
config CPUFREQ_DT
tristate "Generic DT based cpufreq driver"
depends on HAVE_CLK && OF
# if CPU_THERMAL is on and THERMAL=m, CPUFREQ_DT cannot be =y:
depends on !CPU_THERMAL || THERMAL
select CPUFREQ_DT_PLATDEV
select PM_OPP
help
......@@ -327,7 +325,6 @@ endif
config QORIQ_CPUFREQ
tristate "CPU frequency scaling driver for Freescale QorIQ SoCs"
depends on OF && COMMON_CLK && (PPC_E500MC || ARM || ARM64)
depends on !CPU_THERMAL || THERMAL
select CLK_QORIQ
help
This adds the CPUFreq driver support for Freescale QorIQ SoCs
......
......@@ -25,12 +25,21 @@ config ARM_ARMADA_37XX_CPUFREQ
This adds the CPUFreq driver support for Marvell Armada 37xx SoCs.
The Armada 37xx PMU supports 4 frequency and VDD levels.
config ARM_ARMADA_8K_CPUFREQ
tristate "Armada 8K CPUFreq driver"
depends on ARCH_MVEBU && CPUFREQ_DT
help
This enables the CPUFreq driver support for Marvell
Armada 8K SoCs.
The Armada 8K has the AP806, which supports scaling
to any full integer divider of the nominal frequency.
If in doubt, say N.
# big LITTLE core layer and glue drivers
config ARM_BIG_LITTLE_CPUFREQ
tristate "Generic ARM big LITTLE CPUfreq driver"
depends on ARM_CPU_TOPOLOGY && HAVE_CLK
# if CPU_THERMAL is on and THERMAL=m, ARM_BIG_LITTLE_CPUFREQ cannot be =y
depends on !CPU_THERMAL || THERMAL
select PM_OPP
help
This enables the Generic CPUfreq driver for ARM big.LITTLE platforms.
......@@ -38,7 +47,6 @@ config ARM_BIG_LITTLE_CPUFREQ
config ARM_SCPI_CPUFREQ
tristate "SCPI based CPUfreq driver"
depends on ARM_SCPI_PROTOCOL && COMMON_CLK_SCPI
depends on !CPU_THERMAL || THERMAL
help
This adds the CPUfreq driver support for ARM platforms using SCPI
protocol for CPU power management.
......@@ -93,7 +101,6 @@ config ARM_KIRKWOOD_CPUFREQ
config ARM_MEDIATEK_CPUFREQ
tristate "CPU Frequency scaling support for MediaTek SoCs"
depends on ARCH_MEDIATEK && REGULATOR
depends on !CPU_THERMAL || THERMAL
select PM_OPP
help
This adds the CPUFreq driver support for MediaTek SoCs.
......@@ -233,7 +240,6 @@ config ARM_SA1110_CPUFREQ
config ARM_SCMI_CPUFREQ
tristate "SCMI based CPUfreq driver"
depends on ARM_SCMI_PROTOCOL || COMPILE_TEST
depends on !CPU_THERMAL || THERMAL
select PM_OPP
help
This adds the CPUfreq driver support for ARM platforms using SCMI
......
......@@ -50,6 +50,7 @@ obj-$(CONFIG_X86_SFI_CPUFREQ) += sfi-cpufreq.o
obj-$(CONFIG_ARM_BIG_LITTLE_CPUFREQ) += arm_big_little.o
obj-$(CONFIG_ARM_ARMADA_37XX_CPUFREQ) += armada-37xx-cpufreq.o
obj-$(CONFIG_ARM_ARMADA_8K_CPUFREQ) += armada-8k-cpufreq.o
obj-$(CONFIG_ARM_BRCMSTB_AVS_CPUFREQ) += brcmstb-avs-cpufreq.o
obj-$(CONFIG_ACPI_CPPC_CPUFREQ) += cppc_cpufreq.o
obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o
......
......@@ -916,8 +916,10 @@ static void __init acpi_cpufreq_boost_init(void)
{
int ret;
if (!(boot_cpu_has(X86_FEATURE_CPB) || boot_cpu_has(X86_FEATURE_IDA)))
if (!(boot_cpu_has(X86_FEATURE_CPB) || boot_cpu_has(X86_FEATURE_IDA))) {
pr_debug("Boost capabilities not present in the processor\n");
return;
}
acpi_cpufreq_driver.set_boost = set_boost;
acpi_cpufreq_driver.boost_enabled = boost_state(0);
......
......@@ -487,6 +487,8 @@ static int bL_cpufreq_init(struct cpufreq_policy *policy)
policy->cpuinfo.transition_latency =
arm_bL_ops->get_transition_latency(cpu_dev);
dev_pm_opp_of_register_em(policy->cpus);
if (is_bL_switching_enabled())
per_cpu(cpu_last_req_freq, policy->cpu) = clk_get_cpu_rate(policy->cpu);
......
// SPDX-License-Identifier: GPL-2.0+
/*
* CPUFreq support for Armada 8K
*
* Copyright (C) 2018 Marvell
*
* Omri Itach <omrii@marvell.com>
* Gregory Clement <gregory.clement@bootlin.com>
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/clk.h>
#include <linux/cpu.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/pm_opp.h>
#include <linux/slab.h>
/*
* Set up the OPP list with the dividers of the max frequency; the
* actual frequencies are filled in at runtime.
*/
static const int opps_div[] __initconst = {1, 2, 3, 4};
static struct platform_device *armada_8k_pdev;
struct freq_table {
struct device *cpu_dev;
unsigned int freq[ARRAY_SIZE(opps_div)];
};
/* If the CPUs share the same clock, then they are in the same cluster. */
static void __init armada_8k_get_sharing_cpus(struct clk *cur_clk,
struct cpumask *cpumask)
{
int cpu;
for_each_possible_cpu(cpu) {
struct device *cpu_dev;
struct clk *clk;
cpu_dev = get_cpu_device(cpu);
if (!cpu_dev) {
pr_warn("Failed to get cpu%d device\n", cpu);
continue;
}
clk = clk_get(cpu_dev, NULL);
if (IS_ERR(clk)) {
pr_warn("Cannot get clock for CPU %d\n", cpu);
} else {
if (clk_is_match(clk, cur_clk))
cpumask_set_cpu(cpu, cpumask);
clk_put(clk);
}
}
}
static int __init armada_8k_add_opp(struct clk *clk, struct device *cpu_dev,
struct freq_table *freq_tables,
int opps_index)
{
unsigned int cur_frequency;
unsigned int freq;
int i, ret;
/* Get nominal (current) CPU frequency. */
cur_frequency = clk_get_rate(clk);
if (!cur_frequency) {
dev_err(cpu_dev, "Failed to get clock rate for this CPU\n");
return -EINVAL;
}
freq_tables[opps_index].cpu_dev = cpu_dev;
for (i = 0; i < ARRAY_SIZE(opps_div); i++) {
freq = cur_frequency / opps_div[i];
ret = dev_pm_opp_add(cpu_dev, freq, 0);
if (ret)
return ret;
freq_tables[opps_index].freq[i] = freq;
}
return 0;
}
static void armada_8k_cpufreq_free_table(struct freq_table *freq_tables)
{
int opps_index, nb_cpus = num_possible_cpus();
for (opps_index = 0; opps_index < nb_cpus; opps_index++) {
int i;
/* If cpu_dev is NULL then we reached the end of the array */
if (!freq_tables[opps_index].cpu_dev)
break;
for (i = 0; i < ARRAY_SIZE(opps_div); i++) {
/*
* A 0 Hz frequency is not valid; it means the entry was
* never initialized, so there are no more OPPs to free.
*/
if (freq_tables[opps_index].freq[i] == 0)
break;
dev_pm_opp_remove(freq_tables[opps_index].cpu_dev,
freq_tables[opps_index].freq[i]);
}
}
kfree(freq_tables);
}
static int __init armada_8k_cpufreq_init(void)
{
int ret = 0, opps_index = 0, cpu, nb_cpus;
struct freq_table *freq_tables;
struct device_node *node;
struct cpumask cpus;
node = of_find_compatible_node(NULL, NULL, "marvell,ap806-cpu-clock");
if (!node || !of_device_is_available(node)) {
of_node_put(node);
return -ENODEV;
}
of_node_put(node);
nb_cpus = num_possible_cpus();
freq_tables = kcalloc(nb_cpus, sizeof(*freq_tables), GFP_KERNEL);
if (!freq_tables)
return -ENOMEM;
cpumask_copy(&cpus, cpu_possible_mask);
/*
* For each CPU, this loop registers the operating points
* supported (which are the nominal CPU frequency and full integer
* divisions of it).
*/
for_each_cpu(cpu, &cpus) {
struct cpumask shared_cpus;
struct device *cpu_dev;
struct clk *clk;
cpu_dev = get_cpu_device(cpu);
if (!cpu_dev) {
pr_err("Cannot get CPU %d\n", cpu);
continue;
}
clk = clk_get(cpu_dev, NULL);
if (IS_ERR(clk)) {
pr_err("Cannot get clock for CPU %d\n", cpu);
ret = PTR_ERR(clk);
goto remove_opp;
}
ret = armada_8k_add_opp(clk, cpu_dev, freq_tables, opps_index);
if (ret) {
clk_put(clk);
goto remove_opp;
}
opps_index++;
cpumask_clear(&shared_cpus);
armada_8k_get_sharing_cpus(clk, &shared_cpus);
dev_pm_opp_set_sharing_cpus(cpu_dev, &shared_cpus);
cpumask_andnot(&cpus, &cpus, &shared_cpus);
clk_put(clk);
}
armada_8k_pdev = platform_device_register_simple("cpufreq-dt", -1,
NULL, 0);
ret = PTR_ERR_OR_ZERO(armada_8k_pdev);
if (ret)
goto remove_opp;
platform_set_drvdata(armada_8k_pdev, freq_tables);
return 0;
remove_opp:
armada_8k_cpufreq_free_table(freq_tables);
return ret;
}
module_init(armada_8k_cpufreq_init);
static void __exit armada_8k_cpufreq_exit(void)
{
struct freq_table *freq_tables = platform_get_drvdata(armada_8k_pdev);
platform_device_unregister(armada_8k_pdev);
armada_8k_cpufreq_free_table(freq_tables);
}
module_exit(armada_8k_cpufreq_exit);
MODULE_AUTHOR("Gregory Clement <gregory.clement@bootlin.com>");
MODULE_DESCRIPTION("Armada 8K cpufreq driver");
MODULE_LICENSE("GPL");
......@@ -42,6 +42,66 @@
*/
static struct cppc_cpudata **all_cpu_data;
struct cppc_workaround_oem_info {
char oem_id[ACPI_OEM_ID_SIZE + 1];
char oem_table_id[ACPI_OEM_TABLE_ID_SIZE + 1];
u32 oem_revision;
};
static bool apply_hisi_workaround;
static struct cppc_workaround_oem_info wa_info[] = {
{
.oem_id = "HISI ",
.oem_table_id = "HIP07 ",
.oem_revision = 0,
}, {
.oem_id = "HISI ",
.oem_table_id = "HIP08 ",
.oem_revision = 0,
}
};
static unsigned int cppc_cpufreq_perf_to_khz(struct cppc_cpudata *cpu,
unsigned int perf);
/*
* The HiSilicon platform does not support the delivered and reference
* performance counters, but it can calculate performance using a
* platform-specific mechanism. We reuse the desired performance register
* to store the real performance calculated by the platform.
*/
static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpunum)
{
struct cppc_cpudata *cpudata = all_cpu_data[cpunum];
u64 desired_perf;
int ret;
ret = cppc_get_desired_perf(cpunum, &desired_perf);
if (ret < 0)
return -EIO;
return cppc_cpufreq_perf_to_khz(cpudata, desired_perf);
}
static void cppc_check_hisi_workaround(void)
{
struct acpi_table_header *tbl;
acpi_status status = AE_OK;
int i;
status = acpi_get_table(ACPI_SIG_PCCT, 0, &tbl);
if (ACPI_FAILURE(status) || !tbl)
return;
for (i = 0; i < ARRAY_SIZE(wa_info); i++) {
if (!memcmp(wa_info[i].oem_id, tbl->oem_id, ACPI_OEM_ID_SIZE) &&
!memcmp(wa_info[i].oem_table_id, tbl->oem_table_id, ACPI_OEM_TABLE_ID_SIZE) &&
wa_info[i].oem_revision == tbl->oem_revision)
apply_hisi_workaround = true;
}
}
/* Callback function used to retrieve the max frequency from DMI */
static void cppc_find_dmi_mhz(const struct dmi_header *dm, void *private)
{
......@@ -334,6 +394,9 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpunum)
struct cppc_cpudata *cpu = all_cpu_data[cpunum];
int ret;
if (apply_hisi_workaround)
return hisi_cppc_cpufreq_get_rate(cpunum);
ret = cppc_get_perf_ctrs(cpunum, &fb_ctrs_t0);
if (ret)
return ret;
......@@ -386,6 +449,8 @@ static int __init cppc_cpufreq_init(void)
goto out;
}
cppc_check_hisi_workaround();
ret = cpufreq_register_driver(&cppc_cpufreq_driver);
if (ret)
goto out;
......
......@@ -13,7 +13,6 @@
#include <linux/clk.h>
#include <linux/cpu.h>
#include <linux/cpu_cooling.h>
#include <linux/cpufreq.h>
#include <linux/cpumask.h>
#include <linux/err.h>
......@@ -30,7 +29,6 @@
struct private_data {
struct opp_table *opp_table;
struct device *cpu_dev;
struct thermal_cooling_device *cdev;
const char *reg_name;
bool have_static_opps;
};
......@@ -280,6 +278,8 @@ static int cpufreq_init(struct cpufreq_policy *policy)
policy->cpuinfo.transition_latency = transition_latency;
policy->dvfs_possible_from_any_cpu = true;
dev_pm_opp_of_register_em(policy->cpus);
return 0;
out_free_cpufreq_table:
......@@ -297,11 +297,25 @@ static int cpufreq_init(struct cpufreq_policy *policy)
return ret;
}
static int cpufreq_online(struct cpufreq_policy *policy)
{
/* We did light-weight tear down earlier, nothing to do here */
return 0;
}
static int cpufreq_offline(struct cpufreq_policy *policy)
{
/*
* Preserve policy->driver_data and don't free resources on light-weight
* tear down.
*/
return 0;
}
static int cpufreq_exit(struct cpufreq_policy *policy)
{
struct private_data *priv = policy->driver_data;
cpufreq_cooling_unregister(priv->cdev);
dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
if (priv->have_static_opps)
dev_pm_opp_of_cpumask_remove_table(policy->related_cpus);
......@@ -314,21 +328,16 @@ static int cpufreq_exit(struct cpufreq_policy *policy)
return 0;
}
static void cpufreq_ready(struct cpufreq_policy *policy)
{
struct private_data *priv = policy->driver_data;
priv->cdev = of_cpufreq_cooling_register(policy);
}
static struct cpufreq_driver dt_cpufreq_driver = {
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK,
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK |
CPUFREQ_IS_COOLING_DEV,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = set_target,
.get = cpufreq_generic_get,
.init = cpufreq_init,
.exit = cpufreq_exit,
.ready = cpufreq_ready,
.online = cpufreq_online,
.offline = cpufreq_offline,
.name = "cpufreq-dt",
.attr = cpufreq_dt_attr,
.suspend = cpufreq_generic_suspend,
......
......@@ -19,6 +19,7 @@
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/cpu_cooling.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/init.h>
......@@ -545,13 +546,13 @@ EXPORT_SYMBOL_GPL(cpufreq_policy_transition_delay_us);
* SYSFS INTERFACE *
*********************************************************************/
static ssize_t show_boost(struct kobject *kobj,
struct attribute *attr, char *buf)
struct kobj_attribute *attr, char *buf)
{
return sprintf(buf, "%d\n", cpufreq_driver->boost_enabled);
}
static ssize_t store_boost(struct kobject *kobj, struct attribute *attr,
const char *buf, size_t count)
static ssize_t store_boost(struct kobject *kobj, struct kobj_attribute *attr,
const char *buf, size_t count)
{
int ret, enable;
......@@ -1200,28 +1201,39 @@ static int cpufreq_online(unsigned int cpu)
return -ENOMEM;
}
cpumask_copy(policy->cpus, cpumask_of(cpu));
if (!new_policy && cpufreq_driver->online) {
ret = cpufreq_driver->online(policy);
if (ret) {
pr_debug("%s: %d: initialization failed\n", __func__,
__LINE__);
goto out_exit_policy;
}
/* call driver. From then on the cpufreq must be able
* to accept all calls to ->verify and ->setpolicy for this CPU
*/
ret = cpufreq_driver->init(policy);
if (ret) {
pr_debug("initialization failed\n");
goto out_free_policy;
}
/* Recover policy->cpus using related_cpus */
cpumask_copy(policy->cpus, policy->related_cpus);
} else {
cpumask_copy(policy->cpus, cpumask_of(cpu));
ret = cpufreq_table_validate_and_sort(policy);
if (ret)
goto out_exit_policy;
/*
* Call driver. From then on the cpufreq must be able
* to accept all calls to ->verify and ->setpolicy for this CPU.
*/
ret = cpufreq_driver->init(policy);
if (ret) {
pr_debug("%s: %d: initialization failed\n", __func__,
__LINE__);
goto out_free_policy;
}
down_write(&policy->rwsem);
ret = cpufreq_table_validate_and_sort(policy);
if (ret)
goto out_exit_policy;
if (new_policy) {
/* related_cpus should at least include policy->cpus. */
cpumask_copy(policy->related_cpus, policy->cpus);
}
down_write(&policy->rwsem);
/*
* affected cpus must always be the ones that are online. We aren't
* managing offline cpus here.
......@@ -1305,8 +1317,6 @@ static int cpufreq_online(unsigned int cpu)
if (ret) {
pr_err("%s: Failed to initialize policy for cpu: %d (%d)\n",
__func__, cpu, ret);
/* cpufreq_policy_free() will notify based on this */
new_policy = false;
goto out_destroy_policy;
}
......@@ -1318,6 +1328,10 @@ static int cpufreq_online(unsigned int cpu)
if (cpufreq_driver->ready)
cpufreq_driver->ready(policy);
if (IS_ENABLED(CONFIG_CPU_THERMAL) &&
cpufreq_driver->flags & CPUFREQ_IS_COOLING_DEV)
policy->cdev = of_cpufreq_cooling_register(policy);
pr_debug("initialization complete\n");
return 0;
......@@ -1405,6 +1419,12 @@ static int cpufreq_offline(unsigned int cpu)
goto unlock;
}
if (IS_ENABLED(CONFIG_CPU_THERMAL) &&
cpufreq_driver->flags & CPUFREQ_IS_COOLING_DEV) {
cpufreq_cooling_unregister(policy->cdev);
policy->cdev = NULL;
}
if (cpufreq_driver->stop_cpu)
cpufreq_driver->stop_cpu(policy);
......@@ -1412,11 +1432,12 @@ static int cpufreq_offline(unsigned int cpu)
cpufreq_exit_governor(policy);
/*
* Perform the ->exit() even during light-weight tear-down,
* since this is a core component, and is essential for the
* subsequent light-weight ->init() to succeed.
* Perform the ->offline() during light-weight tear-down, as
* that allows fast recovery when the CPU comes back.
*/
if (cpufreq_driver->exit) {
if (cpufreq_driver->offline) {
cpufreq_driver->offline(policy);
} else if (cpufreq_driver->exit) {
cpufreq_driver->exit(policy);
policy->freq_table = NULL;
}
......@@ -1445,8 +1466,13 @@ static void cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif)
cpumask_clear_cpu(cpu, policy->real_cpus);
remove_cpu_dev_symlink(policy, dev);
if (cpumask_empty(policy->real_cpus))
if (cpumask_empty(policy->real_cpus)) {
/* We did light-weight exit earlier, do full tear down now */
if (cpufreq_driver->offline)
cpufreq_driver->exit(policy);
cpufreq_policy_free(policy);
}
}
/**
......@@ -2192,12 +2218,25 @@ int cpufreq_get_policy(struct cpufreq_policy *policy, unsigned int cpu)
}
EXPORT_SYMBOL(cpufreq_get_policy);
/*
* policy : current policy.
* new_policy: policy to be set.
/**
* cpufreq_set_policy - Modify cpufreq policy parameters.
* @policy: Policy object to modify.
* @new_policy: New policy data.
*
* Pass @new_policy to the cpufreq driver's ->verify() callback, run the
* installed policy notifiers for it with the CPUFREQ_ADJUST value, pass it to
* the driver's ->verify() callback again and run the notifiers for it again
* with the CPUFREQ_NOTIFY value. Next, copy the min and max parameters
* of @new_policy to @policy and either invoke the driver's ->setpolicy()
* callback (if present) or carry out a governor update for @policy. That is,
* run the current governor's ->limits() callback (if the governor field in
* @new_policy points to the same object as the one in @policy) or replace the
* governor for @policy with the new one stored in @new_policy.
*
* The cpuinfo part of @policy is not updated by this function.
*/
static int cpufreq_set_policy(struct cpufreq_policy *policy,
struct cpufreq_policy *new_policy)
struct cpufreq_policy *new_policy)
{
struct cpufreq_governor *old_gov;
int ret;
......@@ -2247,11 +2286,11 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
if (cpufreq_driver->setpolicy) {
policy->policy = new_policy->policy;
pr_debug("setting range\n");
return cpufreq_driver->setpolicy(new_policy);
return cpufreq_driver->setpolicy(policy);
}
if (new_policy->governor == policy->governor) {
pr_debug("cpufreq: governor limits update\n");
pr_debug("governor limits update\n");
cpufreq_governor_limits(policy);
return 0;
}
......@@ -2272,7 +2311,7 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
if (!ret) {
ret = cpufreq_start_governor(policy);
if (!ret) {
pr_debug("cpufreq: governor change\n");
pr_debug("governor change\n");
sched_cpufreq_governor_change(policy, old_gov);
return 0;
}
......@@ -2293,11 +2332,14 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
}
/**
* cpufreq_update_policy - re-evaluate an existing cpufreq policy
* @cpu: CPU which shall be re-evaluated
* cpufreq_update_policy - Re-evaluate an existing cpufreq policy.
* @cpu: CPU to re-evaluate the policy for.
*
* Useful for policy notifiers which have different necessities
* at different times.
* Update the current frequency for the cpufreq policy of @cpu and use
* cpufreq_set_policy() to re-apply the min and max limits saved in the
* user_policy sub-structure of that policy, which triggers the evaluation
* of policy notifiers and the cpufreq driver's ->verify() callback for the
* policy in question, among other things.
*/
void cpufreq_update_policy(unsigned int cpu)
{
......@@ -2312,23 +2354,18 @@ void cpufreq_update_policy(unsigned int cpu)
if (policy_is_inactive(policy))
goto unlock;
pr_debug("updating policy for CPU %u\n", cpu);
memcpy(&new_policy, policy, sizeof(*policy));
new_policy.min = policy->user_policy.min;
new_policy.max = policy->user_policy.max;
/*
* BIOS might change freq behind our back
* -> ask driver for current freq and notify governors about a change
*/
if (cpufreq_driver->get && !cpufreq_driver->setpolicy) {
if (cpufreq_suspended)
goto unlock;
if (cpufreq_driver->get && !cpufreq_driver->setpolicy &&
(cpufreq_suspended || WARN_ON(!cpufreq_update_current_freq(policy))))
goto unlock;
new_policy.cur = cpufreq_update_current_freq(policy);
if (WARN_ON(!new_policy.cur))
goto unlock;
}
pr_debug("updating policy for CPU %u\n", cpu);
memcpy(&new_policy, policy, sizeof(*policy));
new_policy.min = policy->user_policy.min;
new_policy.max = policy->user_policy.max;
cpufreq_set_policy(policy, &new_policy);
......@@ -2479,7 +2516,8 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
driver_data->target) ||
(driver_data->setpolicy && (driver_data->target_index ||
driver_data->target)) ||
(!!driver_data->get_intermediate != !!driver_data->target_intermediate))
(!driver_data->get_intermediate != !driver_data->target_intermediate) ||
(!driver_data->online != !driver_data->offline))
return -EINVAL;
pr_debug("trying to register driver %s\n", driver_data->name);
......
......@@ -31,26 +31,27 @@ static void cpufreq_stats_update(struct cpufreq_stats *stats)
{
unsigned long long cur_time = get_jiffies_64();
spin_lock(&cpufreq_stats_lock);
stats->time_in_state[stats->last_index] += cur_time - stats->last_time;
stats->last_time = cur_time;
spin_unlock(&cpufreq_stats_lock);
}
static void cpufreq_stats_clear_table(struct cpufreq_stats *stats)
{
unsigned int count = stats->max_state;
spin_lock(&cpufreq_stats_lock);
memset(stats->time_in_state, 0, count * sizeof(u64));
memset(stats->trans_table, 0, count * count * sizeof(int));
stats->last_time = get_jiffies_64();
stats->total_trans = 0;
spin_unlock(&cpufreq_stats_lock);
}
static ssize_t show_total_trans(struct cpufreq_policy *policy, char *buf)
{
return sprintf(buf, "%d\n", policy->stats->total_trans);
}
cpufreq_freq_attr_ro(total_trans);
static ssize_t show_time_in_state(struct cpufreq_policy *policy, char *buf)
{
......@@ -61,7 +62,10 @@ static ssize_t show_time_in_state(struct cpufreq_policy *policy, char *buf)
if (policy->fast_switch_enabled)
return 0;
spin_lock(&cpufreq_stats_lock);
cpufreq_stats_update(stats);
spin_unlock(&cpufreq_stats_lock);
for (i = 0; i < stats->state_num; i++) {
len += sprintf(buf + len, "%u %llu\n", stats->freq_table[i],
(unsigned long long)
......@@ -69,6 +73,7 @@ static ssize_t show_time_in_state(struct cpufreq_policy *policy, char *buf)
}
return len;
}
cpufreq_freq_attr_ro(time_in_state);
static ssize_t store_reset(struct cpufreq_policy *policy, const char *buf,
size_t count)
......@@ -77,6 +82,7 @@ static ssize_t store_reset(struct cpufreq_policy *policy, const char *buf,
cpufreq_stats_clear_table(policy->stats);
return count;
}
cpufreq_freq_attr_wo(reset);
static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
{
......@@ -126,10 +132,6 @@ static ssize_t show_trans_table(struct cpufreq_policy *policy, char *buf)
}
cpufreq_freq_attr_ro(trans_table);
cpufreq_freq_attr_ro(total_trans);
cpufreq_freq_attr_ro(time_in_state);
cpufreq_freq_attr_wo(reset);
static struct attribute *default_attrs[] = {
&total_trans.attr,
&time_in_state.attr,
......@@ -240,9 +242,11 @@ void cpufreq_stats_record_transition(struct cpufreq_policy *policy,
if (old_index == -1 || new_index == -1 || old_index == new_index)
return;
spin_lock(&cpufreq_stats_lock);
cpufreq_stats_update(stats);
stats->last_index = new_index;
stats->trans_table[old_index * stats->max_state + new_index]++;
stats->total_trans++;
spin_unlock(&cpufreq_stats_lock);
}
......@@ -23,13 +23,10 @@
#include <linux/init.h>
#include <linux/err.h>
#include <linux/clk.h>
#include <linux/platform_data/davinci-cpufreq.h>
#include <linux/platform_device.h>
#include <linux/export.h>
#include <mach/hardware.h>
#include <mach/cpufreq.h>
#include <mach/common.h>
struct davinci_cpufreq {
struct device *dev;
struct clk *armclk;
......
......@@ -323,9 +323,8 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
states = 2;
/* Allocate private data and frequency table for current cpu */
centaur = kzalloc(sizeof(*centaur)
+ (states + 1) * sizeof(struct cpufreq_frequency_table),
GFP_KERNEL);
centaur = kzalloc(struct_size(centaur, freq_table, states + 1),
GFP_KERNEL);
if (!centaur)
return -ENOMEM;
eps_cpu[0] = centaur;
......
......@@ -9,7 +9,6 @@
#include <linux/clk.h>
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/cpu_cooling.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/nvmem-consumer.h>
......@@ -52,7 +51,6 @@ static struct clk_bulk_data clks[] = {
};
static struct device *cpu_dev;
static struct thermal_cooling_device *cdev;
static bool free_opp;
static struct cpufreq_frequency_table *freq_table;
static unsigned int max_freq;
......@@ -193,16 +191,6 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
return 0;
}
static void imx6q_cpufreq_ready(struct cpufreq_policy *policy)
{
cdev = of_cpufreq_cooling_register(policy);
if (!cdev)
dev_err(cpu_dev,
"running cpufreq without cooling device: %ld\n",
PTR_ERR(cdev));
}
static int imx6q_cpufreq_init(struct cpufreq_policy *policy)
{
int ret;
......@@ -210,26 +198,19 @@ static int imx6q_cpufreq_init(struct cpufreq_policy *policy)
policy->clk = clks[ARM].clk;
ret = cpufreq_generic_init(policy, freq_table, transition_latency);
policy->suspend_freq = max_freq;
dev_pm_opp_of_register_em(policy->cpus);
return ret;
}
static int imx6q_cpufreq_exit(struct cpufreq_policy *policy)
{
cpufreq_cooling_unregister(cdev);
return 0;
}
static struct cpufreq_driver imx6q_cpufreq_driver = {
.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK,
.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK |
CPUFREQ_IS_COOLING_DEV,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = imx6q_set_target,
.get = cpufreq_generic_get,
.init = imx6q_cpufreq_init,
.exit = imx6q_cpufreq_exit,
.name = "imx6q-cpufreq",
.ready = imx6q_cpufreq_ready,
.attr = cpufreq_generic_attr,
.suspend = cpufreq_generic_suspend,
};
......
......@@ -50,6 +50,8 @@
#define int_tofp(X) ((int64_t)(X) << FRAC_BITS)
#define fp_toint(X) ((X) >> FRAC_BITS)
#define ONE_EIGHTH_FP ((int64_t)1 << (FRAC_BITS - 3))
#define EXT_BITS 6
#define EXT_FRAC_BITS (EXT_BITS + FRAC_BITS)
#define fp_ext_toint(X) ((X) >> EXT_FRAC_BITS)
......@@ -895,7 +897,7 @@ static void intel_pstate_update_policies(void)
/************************** sysfs begin ************************/
#define show_one(file_name, object) \
static ssize_t show_##file_name \
(struct kobject *kobj, struct attribute *attr, char *buf) \
(struct kobject *kobj, struct kobj_attribute *attr, char *buf) \
{ \
return sprintf(buf, "%u\n", global.object); \
}
......@@ -904,7 +906,7 @@ static ssize_t intel_pstate_show_status(char *buf);
static int intel_pstate_update_status(const char *buf, size_t size);
static ssize_t show_status(struct kobject *kobj,
struct attribute *attr, char *buf)
struct kobj_attribute *attr, char *buf)
{
ssize_t ret;
......@@ -915,7 +917,7 @@ static ssize_t show_status(struct kobject *kobj,
return ret;
}
static ssize_t store_status(struct kobject *a, struct attribute *b,
static ssize_t store_status(struct kobject *a, struct kobj_attribute *b,
const char *buf, size_t count)
{
char *p = memchr(buf, '\n', count);
......@@ -929,7 +931,7 @@ static ssize_t store_status(struct kobject *a, struct attribute *b,
}
static ssize_t show_turbo_pct(struct kobject *kobj,
struct attribute *attr, char *buf)
struct kobj_attribute *attr, char *buf)
{
struct cpudata *cpu;
int total, no_turbo, turbo_pct;
......@@ -955,7 +957,7 @@ static ssize_t show_turbo_pct(struct kobject *kobj,
}
static ssize_t show_num_pstates(struct kobject *kobj,
struct attribute *attr, char *buf)
struct kobj_attribute *attr, char *buf)
{
struct cpudata *cpu;
int total;
......@@ -976,7 +978,7 @@ static ssize_t show_num_pstates(struct kobject *kobj,
}
static ssize_t show_no_turbo(struct kobject *kobj,
struct attribute *attr, char *buf)
struct kobj_attribute *attr, char *buf)
{
ssize_t ret;
......@@ -998,7 +1000,7 @@ static ssize_t show_no_turbo(struct kobject *kobj,
return ret;
}
static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
const char *buf, size_t count)
{
unsigned int input;
......@@ -1045,7 +1047,7 @@ static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
return count;
}
static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
static ssize_t store_max_perf_pct(struct kobject *a, struct kobj_attribute *b,
const char *buf, size_t count)
{
unsigned int input;
......@@ -1075,7 +1077,7 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
return count;
}
static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
static ssize_t store_min_perf_pct(struct kobject *a, struct kobj_attribute *b,
const char *buf, size_t count)
{
unsigned int input;
......@@ -1107,12 +1109,13 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
}
static ssize_t show_hwp_dynamic_boost(struct kobject *kobj,
struct attribute *attr, char *buf)
struct kobj_attribute *attr, char *buf)
{
return sprintf(buf, "%u\n", hwp_boost);
}
static ssize_t store_hwp_dynamic_boost(struct kobject *a, struct attribute *b,
static ssize_t store_hwp_dynamic_boost(struct kobject *a,
struct kobj_attribute *b,
const char *buf, size_t count)
{
unsigned int input;
......@@ -1444,12 +1447,6 @@ static int knl_get_turbo_pstate(void)
return ret;
}
static int intel_pstate_get_base_pstate(struct cpudata *cpu)
{
return global.no_turbo || global.turbo_disabled ?
cpu->pstate.max_pstate : cpu->pstate.turbo_pstate;
}
static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate)
{
trace_cpu_frequency(pstate * cpu->pstate.scaling, cpu->cpu);
......@@ -1470,11 +1467,9 @@ static void intel_pstate_set_min_pstate(struct cpudata *cpu)
static void intel_pstate_max_within_limits(struct cpudata *cpu)
{
int pstate;
int pstate = max(cpu->pstate.min_pstate, cpu->max_perf_ratio);
update_turbo_state();
pstate = intel_pstate_get_base_pstate(cpu);
pstate = max(cpu->pstate.min_pstate, cpu->max_perf_ratio);
intel_pstate_set_pstate(cpu, pstate);
}
......@@ -1678,17 +1673,14 @@ static inline int32_t get_avg_pstate(struct cpudata *cpu)
static inline int32_t get_target_pstate(struct cpudata *cpu)
{
struct sample *sample = &cpu->sample;
int32_t busy_frac, boost;
int32_t busy_frac;
int target, avg_pstate;
busy_frac = div_fp(sample->mperf << cpu->aperf_mperf_shift,
sample->tsc);
boost = cpu->iowait_boost;
cpu->iowait_boost >>= 1;
if (busy_frac < boost)
busy_frac = boost;
if (busy_frac < cpu->iowait_boost)
busy_frac = cpu->iowait_boost;
sample->busy_scaled = busy_frac * 100;
......@@ -1715,11 +1707,9 @@ static inline int32_t get_target_pstate(struct cpudata *cpu)
static int intel_pstate_prepare_request(struct cpudata *cpu, int pstate)
{
int max_pstate = intel_pstate_get_base_pstate(cpu);
int min_pstate;
int min_pstate = max(cpu->pstate.min_pstate, cpu->min_perf_ratio);
int max_pstate = max(min_pstate, cpu->max_perf_ratio);
min_pstate = max(cpu->pstate.min_pstate, cpu->min_perf_ratio);
max_pstate = max(min_pstate, cpu->max_perf_ratio);
return clamp_t(int, pstate, min_pstate, max_pstate);
}
......@@ -1767,29 +1757,30 @@ static void intel_pstate_update_util(struct update_util_data *data, u64 time,
if (smp_processor_id() != cpu->cpu)
return;
delta_ns = time - cpu->last_update;
if (flags & SCHED_CPUFREQ_IOWAIT) {
cpu->iowait_boost = int_tofp(1);
cpu->last_update = time;
/*
* The last time the busy was 100% so P-state was max anyway
* so avoid overhead of computation.
*/
if (fp_toint(cpu->sample.busy_scaled) == 100)
return;
goto set_pstate;
/* Start over if the CPU may have been idle. */
if (delta_ns > TICK_NSEC) {
cpu->iowait_boost = ONE_EIGHTH_FP;
} else if (cpu->iowait_boost) {
cpu->iowait_boost <<= 1;
if (cpu->iowait_boost > int_tofp(1))
cpu->iowait_boost = int_tofp(1);
} else {
cpu->iowait_boost = ONE_EIGHTH_FP;
}
} else if (cpu->iowait_boost) {
/* Clear iowait_boost if the CPU may have been idle. */
delta_ns = time - cpu->last_update;
if (delta_ns > TICK_NSEC)
cpu->iowait_boost = 0;
else
cpu->iowait_boost >>= 1;
}
cpu->last_update = time;
delta_ns = time - cpu->sample.time;
if ((s64)delta_ns < INTEL_PSTATE_SAMPLING_INTERVAL)
return;
set_pstate:
if (intel_pstate_sample(cpu, time))
intel_pstate_adjust_pstate(cpu);
}
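The rewritten iowait handling above replaces the old one-shot full boost with a ramp: an iowait wakeup after more than a tick of idleness restarts the boost at one eighth of the maximum, back-to-back iowait wakeups double it up to a cap of 1, non-iowait updates halve it, and a full tick without any iowait wakeup clears it. A minimal standalone sketch of that progression, in plain userspace C with floating point standing in for the kernel's int_tofp()/ONE_EIGHTH_FP fixed-point values (purely illustrative, not the driver's code):

#include <stdio.h>

int main(void)
{
        double boost = 0.0;
        int i;

        /* Six back-to-back iowait wakeups: 1/8 -> 1/4 -> ... -> 1.0 */
        for (i = 0; i < 6; i++) {
                boost = boost ? 2 * boost : 0.125;
                if (boost > 1.0)
                        boost = 1.0;
                printf("iowait wakeup %d: boost = %.3f\n", i + 1, boost);
        }

        /* A non-iowait update within a tick halves the boost... */
        boost /= 2;
        /* ...and more than a tick without iowait clears it. */
        boost = 0.0;
        printf("after decay: boost = %.3f\n", boost);
        return 0;
}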
......@@ -1976,7 +1967,8 @@ static void intel_pstate_update_perf_limits(struct cpufreq_policy *policy,
if (hwp_active) {
intel_pstate_get_hwp_max(cpu->cpu, &turbo_max, &max_state);
} else {
max_state = intel_pstate_get_base_pstate(cpu);
max_state = global.no_turbo || global.turbo_disabled ?
cpu->pstate.max_pstate : cpu->pstate.turbo_pstate;
turbo_max = cpu->pstate.turbo_pstate;
}
......@@ -2475,6 +2467,7 @@ static bool __init intel_pstate_no_acpi_pss(void)
kfree(pss);
}
pr_debug("ACPI _PSS not found\n");
return true;
}
......@@ -2485,9 +2478,14 @@ static bool __init intel_pstate_no_acpi_pcch(void)
status = acpi_get_handle(NULL, "\\_SB", &handle);
if (ACPI_FAILURE(status))
return true;
goto not_found;
if (acpi_has_method(handle, "PCCH"))
return false;
return !acpi_has_method(handle, "PCCH");
not_found:
pr_debug("ACPI PCCH not found\n");
return true;
}
static bool __init intel_pstate_has_acpi_ppc(void)
......@@ -2502,6 +2500,7 @@ static bool __init intel_pstate_has_acpi_ppc(void)
if (acpi_has_method(pr->handle, "_PPC"))
return true;
}
pr_debug("ACPI _PPC not found\n");
return false;
}
......@@ -2539,8 +2538,10 @@ static bool __init intel_pstate_platform_pwr_mgmt_exists(void)
id = x86_match_cpu(intel_pstate_cpu_oob_ids);
if (id) {
rdmsrl(MSR_MISC_PWR_MGMT, misc_pwr);
if ( misc_pwr & (1 << 8))
if (misc_pwr & (1 << 8)) {
pr_debug("Bit 8 in the MISC_PWR_MGMT MSR set\n");
return true;
}
}
idx = acpi_match_platform_list(plat_info);
......@@ -2606,22 +2607,28 @@ static int __init intel_pstate_init(void)
}
} else {
id = x86_match_cpu(intel_pstate_cpu_ids);
if (!id)
if (!id) {
pr_info("CPU ID not supported\n");
return -ENODEV;
}
copy_cpu_funcs((struct pstate_funcs *)id->driver_data);
}
if (intel_pstate_msrs_not_valid())
if (intel_pstate_msrs_not_valid()) {
pr_info("Invalid MSRs\n");
return -ENODEV;
}
hwp_cpu_matched:
/*
* The Intel pstate driver will be ignored if the platform
* firmware has its own power management modes.
*/
if (intel_pstate_platform_pwr_mgmt_exists())
if (intel_pstate_platform_pwr_mgmt_exists()) {
pr_info("P-states controlled by the platform\n");
return -ENODEV;
}
if (!hwp_active && hwp_only)
return -ENOTSUPP;
......
......@@ -851,7 +851,7 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
case TYPE_POWERSAVER:
pr_cont("Powersaver supported\n");
break;
};
}
/* Doesn't hurt */
longhaul_setup_southbridge();
......
......@@ -14,7 +14,6 @@
#include <linux/clk.h>
#include <linux/cpu.h>
#include <linux/cpu_cooling.h>
#include <linux/cpufreq.h>
#include <linux/cpumask.h>
#include <linux/module.h>
......@@ -48,7 +47,6 @@ struct mtk_cpu_dvfs_info {
struct regulator *sram_reg;
struct clk *cpu_clk;
struct clk *inter_clk;
struct thermal_cooling_device *cdev;
struct list_head list_head;
int intermediate_voltage;
bool need_voltage_tracking;
......@@ -307,13 +305,6 @@ static int mtk_cpufreq_set_target(struct cpufreq_policy *policy,
#define DYNAMIC_POWER "dynamic-power-coefficient"
static void mtk_cpufreq_ready(struct cpufreq_policy *policy)
{
struct mtk_cpu_dvfs_info *info = policy->driver_data;
info->cdev = of_cpufreq_cooling_register(policy);
}
static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
{
struct device *cpu_dev;
......@@ -465,6 +456,8 @@ static int mtk_cpufreq_init(struct cpufreq_policy *policy)
policy->driver_data = info;
policy->clk = info->cpu_clk;
dev_pm_opp_of_register_em(policy->cpus);
return 0;
}
......@@ -472,7 +465,6 @@ static int mtk_cpufreq_exit(struct cpufreq_policy *policy)
{
struct mtk_cpu_dvfs_info *info = policy->driver_data;
cpufreq_cooling_unregister(info->cdev);
dev_pm_opp_free_cpufreq_table(info->cpu_dev, &policy->freq_table);
return 0;
......@@ -480,13 +472,13 @@ static int mtk_cpufreq_exit(struct cpufreq_policy *policy)
static struct cpufreq_driver mtk_cpufreq_driver = {
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK |
CPUFREQ_HAVE_GOVERNOR_PER_POLICY,
CPUFREQ_HAVE_GOVERNOR_PER_POLICY |
CPUFREQ_IS_COOLING_DEV,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = mtk_cpufreq_set_target,
.get = cpufreq_generic_get,
.init = mtk_cpufreq_init,
.exit = mtk_cpufreq_exit,
.ready = mtk_cpufreq_ready,
.name = "mtk-cpufreq",
.attr = cpufreq_generic_attr,
};
......
......@@ -133,8 +133,10 @@ static int omap_cpu_init(struct cpufreq_policy *policy)
/* FIXME: what's the actual transition time? */
result = cpufreq_generic_init(policy, freq_table, 300 * 1000);
if (!result)
if (!result) {
dev_pm_opp_of_register_em(policy->cpus);
return 0;
}
freq_table_free();
fail:
......
......@@ -268,7 +268,7 @@ static int pcc_get_offset(int cpu)
if (!pccp || pccp->type != ACPI_TYPE_PACKAGE) {
ret = -ENODEV;
goto out_free;
};
}
offset = &(pccp->package.elements[0]);
if (!offset || offset->type != ACPI_TYPE_INTEGER) {
......
......@@ -244,6 +244,7 @@ static int init_powernv_pstates(void)
u32 len_ids, len_freqs;
u32 pstate_min, pstate_max, pstate_nominal;
u32 pstate_turbo, pstate_ultra_turbo;
int rc = -ENODEV;
power_mgt = of_find_node_by_path("/ibm,opal/power-mgt");
if (!power_mgt) {
......@@ -327,8 +328,11 @@ static int init_powernv_pstates(void)
powernv_freqs[i].frequency = freq * 1000; /* kHz */
powernv_freqs[i].driver_data = id & 0xFF;
revmap_data = (struct pstate_idx_revmap_data *)
kmalloc(sizeof(*revmap_data), GFP_KERNEL);
revmap_data = kmalloc(sizeof(*revmap_data), GFP_KERNEL);
if (!revmap_data) {
rc = -ENOMEM;
goto out;
}
revmap_data->pstate_id = id & 0xFF;
revmap_data->cpufreq_table_idx = i;
......@@ -357,7 +361,7 @@ static int init_powernv_pstates(void)
return 0;
out:
of_node_put(power_mgt);
return -ENODEV;
return rc;
}
/* Returns the CPU frequency corresponding to the pstate_id. */
......
......@@ -10,18 +10,21 @@
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/pm_opp.h>
#include <linux/slab.h>
#define LUT_MAX_ENTRIES 40U
#define LUT_SRC GENMASK(31, 30)
#define LUT_L_VAL GENMASK(7, 0)
#define LUT_CORE_COUNT GENMASK(18, 16)
#define LUT_VOLT GENMASK(11, 0)
#define LUT_ROW_SIZE 32
#define CLK_HW_DIV 2
/* Register offsets */
#define REG_ENABLE 0x0
#define REG_LUT_TABLE 0x110
#define REG_FREQ_LUT 0x110
#define REG_VOLT_LUT 0x114
#define REG_PERF_STATE 0x920
static unsigned long cpu_hw_rate, xo_rate;
......@@ -70,11 +73,12 @@ static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
return policy->freq_table[index].frequency;
}
static int qcom_cpufreq_hw_read_lut(struct device *dev,
static int qcom_cpufreq_hw_read_lut(struct device *cpu_dev,
struct cpufreq_policy *policy,
void __iomem *base)
{
u32 data, src, lval, i, core_count, prev_cc = 0, prev_freq = 0, freq;
u32 volt;
unsigned int max_cores = cpumask_weight(policy->cpus);
struct cpufreq_frequency_table *table;
......@@ -83,23 +87,28 @@ static int qcom_cpufreq_hw_read_lut(struct device *dev,
return -ENOMEM;
for (i = 0; i < LUT_MAX_ENTRIES; i++) {
data = readl_relaxed(base + REG_LUT_TABLE + i * LUT_ROW_SIZE);
data = readl_relaxed(base + REG_FREQ_LUT +
i * LUT_ROW_SIZE);
src = FIELD_GET(LUT_SRC, data);
lval = FIELD_GET(LUT_L_VAL, data);
core_count = FIELD_GET(LUT_CORE_COUNT, data);
data = readl_relaxed(base + REG_VOLT_LUT +
i * LUT_ROW_SIZE);
volt = FIELD_GET(LUT_VOLT, data) * 1000;
if (src)
freq = xo_rate * lval / 1000;
else
freq = cpu_hw_rate / 1000;
/* Ignore boosts in the middle of the table */
if (core_count != max_cores) {
table[i].frequency = CPUFREQ_ENTRY_INVALID;
} else {
if (freq != prev_freq && core_count == max_cores) {
table[i].frequency = freq;
dev_dbg(dev, "index=%d freq=%d, core_count %d\n", i,
dev_pm_opp_add(cpu_dev, freq * 1000, volt);
dev_dbg(cpu_dev, "index=%d freq=%d, core_count %d\n", i,
freq, core_count);
} else {
table[i].frequency = CPUFREQ_ENTRY_INVALID;
}
/*
......@@ -116,6 +125,7 @@ static int qcom_cpufreq_hw_read_lut(struct device *dev,
if (prev_cc != max_cores) {
prev->frequency = prev_freq;
prev->flags = CPUFREQ_BOOST_FREQ;
dev_pm_opp_add(cpu_dev, prev_freq * 1000, volt);
}
break;
......@@ -127,6 +137,7 @@ static int qcom_cpufreq_hw_read_lut(struct device *dev,
table[i].frequency = CPUFREQ_TABLE_END;
policy->freq_table = table;
dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
return 0;
}
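For reference, the per-row decode performed by the loop above can be sketched on its own. This is plain C with the GENMASK()/FIELD_GET() uses open-coded as shifts and masks; the register words, the 19.2 MHz xo_rate and the 300 MHz cpu_hw_rate are hypothetical sample values, not values from this diff:

#include <stdint.h>
#include <stdio.h>

/* Open-coded equivalents of LUT_SRC, LUT_L_VAL, LUT_CORE_COUNT, LUT_VOLT. */
#define LUT_SRC(x)        (((x) >> 30) & 0x3)
#define LUT_L_VAL(x)      ((x) & 0xff)
#define LUT_CORE_COUNT(x) (((x) >> 16) & 0x7)
#define LUT_VOLT(x)       ((x) & 0xfff)

int main(void)
{
        uint32_t freq_word = 0x40030050;     /* hypothetical REG_FREQ_LUT row */
        uint32_t volt_word = 0x00000384;     /* hypothetical REG_VOLT_LUT row */
        unsigned long xo_rate = 19200000, cpu_hw_rate = 300000000;
        unsigned long freq_khz, volt_uv;

        /* Non-zero source: frequency comes from the XO times the L value. */
        if (LUT_SRC(freq_word))
                freq_khz = xo_rate * LUT_L_VAL(freq_word) / 1000;
        else
                freq_khz = cpu_hw_rate / 1000;
        volt_uv = LUT_VOLT(volt_word) * 1000;

        printf("freq=%lu kHz volt=%lu uV cores=%u\n",
               freq_khz, volt_uv, LUT_CORE_COUNT(freq_word));
        return 0;
}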
......@@ -159,10 +170,18 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
struct device *dev = &global_pdev->dev;
struct of_phandle_args args;
struct device_node *cpu_np;
struct device *cpu_dev;
struct resource *res;
void __iomem *base;
int ret, index;
cpu_dev = get_cpu_device(policy->cpu);
if (!cpu_dev) {
pr_err("%s: failed to get cpu%d device\n", __func__,
policy->cpu);
return -ENODEV;
}
cpu_np = of_cpu_device_node_get(policy->cpu);
if (!cpu_np)
return -EINVAL;
......@@ -199,12 +218,21 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
policy->driver_data = base + REG_PERF_STATE;
ret = qcom_cpufreq_hw_read_lut(dev, policy, base);
ret = qcom_cpufreq_hw_read_lut(cpu_dev, policy, base);
if (ret) {
dev_err(dev, "Domain-%d failed to read LUT\n", index);
goto error;
}
ret = dev_pm_opp_get_opp_count(cpu_dev);
if (ret <= 0) {
dev_err(cpu_dev, "Failed to add OPPs\n");
ret = -ENODEV;
goto error;
}
dev_pm_opp_of_register_em(policy->cpus);
policy->fast_switch_possible = true;
return 0;
......@@ -215,8 +243,10 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
static int qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy)
{
struct device *cpu_dev = get_cpu_device(policy->cpu);
void __iomem *base = policy->driver_data - REG_PERF_STATE;
dev_pm_opp_remove_all_dynamic(cpu_dev);
kfree(policy->freq_table);
devm_iounmap(&global_pdev->dev, base);
......@@ -231,7 +261,8 @@ static struct freq_attr *qcom_cpufreq_hw_attr[] = {
static struct cpufreq_driver cpufreq_qcom_hw_driver = {
.flags = CPUFREQ_STICKY | CPUFREQ_NEED_INITIAL_FREQ_CHECK |
CPUFREQ_HAVE_GOVERNOR_PER_POLICY,
CPUFREQ_HAVE_GOVERNOR_PER_POLICY |
CPUFREQ_IS_COOLING_DEV,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = qcom_cpufreq_hw_target_index,
.get = qcom_cpufreq_hw_get,
......@@ -296,7 +327,7 @@ static int __init qcom_cpufreq_hw_init(void)
{
return platform_driver_register(&qcom_cpufreq_hw_driver);
}
subsys_initcall(qcom_cpufreq_hw_init);
device_initcall(qcom_cpufreq_hw_init);
static void __exit qcom_cpufreq_hw_exit(void)
{
......
......@@ -42,7 +42,7 @@ enum _msm8996_version {
NUM_OF_MSM8996_VERSIONS,
};
struct platform_device *cpufreq_dt_pdev, *kryo_cpufreq_pdev;
static struct platform_device *cpufreq_dt_pdev, *kryo_cpufreq_pdev;
static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void)
{
......@@ -75,7 +75,7 @@ static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void)
static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
{
struct opp_table *opp_tables[NR_CPUS] = {0};
struct opp_table **opp_tables;
enum _msm8996_version msm8996_version;
struct nvmem_cell *speedbin_nvmem;
struct device_node *np;
......@@ -133,6 +133,10 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
}
kfree(speedbin);
opp_tables = kcalloc(num_possible_cpus(), sizeof(*opp_tables), GFP_KERNEL);
if (!opp_tables)
return -ENOMEM;
for_each_possible_cpu(cpu) {
cpu_dev = get_cpu_device(cpu);
if (NULL == cpu_dev) {
......@@ -151,8 +155,10 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
NULL, 0);
if (!IS_ERR(cpufreq_dt_pdev))
if (!IS_ERR(cpufreq_dt_pdev)) {
platform_set_drvdata(pdev, opp_tables);
return 0;
}
ret = PTR_ERR(cpufreq_dt_pdev);
dev_err(cpu_dev, "Failed to register platform device\n");
......@@ -163,13 +169,23 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev)
break;
dev_pm_opp_put_supported_hw(opp_tables[cpu]);
}
kfree(opp_tables);
return ret;
}
static int qcom_cpufreq_kryo_remove(struct platform_device *pdev)
{
struct opp_table **opp_tables = platform_get_drvdata(pdev);
unsigned int cpu;
platform_device_unregister(cpufreq_dt_pdev);
for_each_possible_cpu(cpu)
dev_pm_opp_put_supported_hw(opp_tables[cpu]);
kfree(opp_tables);
return 0;
}
......
......@@ -13,7 +13,6 @@
#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/cpufreq.h>
#include <linux/cpu_cooling.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/kernel.h>
......@@ -31,7 +30,6 @@
struct cpu_data {
struct clk **pclk;
struct cpufreq_frequency_table *table;
struct thermal_cooling_device *cdev;
};
/*
......@@ -239,7 +237,6 @@ static int qoriq_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
struct cpu_data *data = policy->driver_data;
cpufreq_cooling_unregister(data->cdev);
kfree(data->pclk);
kfree(data->table);
kfree(data);
......@@ -258,23 +255,15 @@ static int qoriq_cpufreq_target(struct cpufreq_policy *policy,
return clk_set_parent(policy->clk, parent);
}
static void qoriq_cpufreq_ready(struct cpufreq_policy *policy)
{
struct cpu_data *cpud = policy->driver_data;
cpud->cdev = of_cpufreq_cooling_register(policy);
}
static struct cpufreq_driver qoriq_cpufreq_driver = {
.name = "qoriq_cpufreq",
.flags = CPUFREQ_CONST_LOOPS,
.flags = CPUFREQ_CONST_LOOPS |
CPUFREQ_IS_COOLING_DEV,
.init = qoriq_cpufreq_cpu_init,
.exit = qoriq_cpufreq_cpu_exit,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = qoriq_cpufreq_target,
.get = cpufreq_generic_get,
.ready = qoriq_cpufreq_ready,
.attr = cpufreq_generic_attr,
};
......
......@@ -584,7 +584,7 @@ static struct notifier_block s5pv210_cpufreq_reboot_notifier = {
static int s5pv210_cpufreq_probe(struct platform_device *pdev)
{
struct device_node *np;
int id;
int id, result = 0;
/*
* HACK: This is a temporary workaround to get access to clock
......@@ -594,18 +594,39 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
* this whole driver as soon as S5PV210 gets migrated to use
* cpufreq-dt driver.
*/
arm_regulator = regulator_get(NULL, "vddarm");
if (IS_ERR(arm_regulator)) {
if (PTR_ERR(arm_regulator) == -EPROBE_DEFER)
pr_debug("vddarm regulator not ready, defer\n");
else
pr_err("failed to get regulator vddarm\n");
return PTR_ERR(arm_regulator);
}
int_regulator = regulator_get(NULL, "vddint");
if (IS_ERR(int_regulator)) {
if (PTR_ERR(int_regulator) == -EPROBE_DEFER)
pr_debug("vddint regulator not ready, defer\n");
else
pr_err("failed to get regulator vddint\n");
result = PTR_ERR(int_regulator);
goto err_int_regulator;
}
np = of_find_compatible_node(NULL, NULL, "samsung,s5pv210-clock");
if (!np) {
pr_err("%s: failed to find clock controller DT node\n",
__func__);
return -ENODEV;
result = -ENODEV;
goto err_clock;
}
clk_base = of_iomap(np, 0);
of_node_put(np);
if (!clk_base) {
pr_err("%s: failed to map clock registers\n", __func__);
return -EFAULT;
result = -EFAULT;
goto err_clock;
}
for_each_compatible_node(np, NULL, "samsung,s5pv210-dmc") {
......@@ -614,7 +635,8 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
pr_err("%s: failed to get alias of dmc node '%pOFn'\n",
__func__, np);
of_node_put(np);
return id;
result = id;
goto err_clk_base;
}
dmc_base[id] = of_iomap(np, 0);
......@@ -622,33 +644,40 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
pr_err("%s: failed to map dmc%d registers\n",
__func__, id);
of_node_put(np);
return -EFAULT;
result = -EFAULT;
goto err_dmc;
}
}
for (id = 0; id < ARRAY_SIZE(dmc_base); ++id) {
if (!dmc_base[id]) {
pr_err("%s: failed to find dmc%d node\n", __func__, id);
return -ENODEV;
result = -ENODEV;
goto err_dmc;
}
}
arm_regulator = regulator_get(NULL, "vddarm");
if (IS_ERR(arm_regulator)) {
pr_err("failed to get regulator vddarm\n");
return PTR_ERR(arm_regulator);
}
int_regulator = regulator_get(NULL, "vddint");
if (IS_ERR(int_regulator)) {
pr_err("failed to get regulator vddint\n");
regulator_put(arm_regulator);
return PTR_ERR(int_regulator);
}
register_reboot_notifier(&s5pv210_cpufreq_reboot_notifier);
return cpufreq_register_driver(&s5pv210_driver);
err_dmc:
for (id = 0; id < ARRAY_SIZE(dmc_base); ++id)
if (dmc_base[id]) {
iounmap(dmc_base[id]);
dmc_base[id] = NULL;
}
err_clk_base:
iounmap(clk_base);
err_clock:
regulator_put(int_regulator);
err_int_regulator:
regulator_put(arm_regulator);
return result;
}
static struct platform_driver s5pv210_cpufreq_platdrv = {
......
......@@ -11,7 +11,7 @@
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/cpumask.h>
#include <linux/cpu_cooling.h>
#include <linux/energy_model.h>
#include <linux/export.h>
#include <linux/module.h>
#include <linux/pm_opp.h>
......@@ -22,7 +22,6 @@
struct scmi_data {
int domain_id;
struct device *cpu_dev;
struct thermal_cooling_device *cdev;
};
static const struct scmi_handle *handle;
......@@ -103,13 +102,42 @@ scmi_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
return 0;
}
static int __maybe_unused
scmi_get_cpu_power(unsigned long *power, unsigned long *KHz, int cpu)
{
struct device *cpu_dev = get_cpu_device(cpu);
unsigned long Hz;
int ret, domain;
if (!cpu_dev) {
pr_err("failed to get cpu%d device\n", cpu);
return -ENODEV;
}
domain = handle->perf_ops->device_domain_id(cpu_dev);
if (domain < 0)
return domain;
/* Get the power cost of the performance domain. */
Hz = *KHz * 1000;
ret = handle->perf_ops->est_power_get(handle, domain, &Hz, power);
if (ret)
return ret;
/* The EM framework specifies the frequency in KHz. */
*KHz = Hz / 1000;
return 0;
}
static int scmi_cpufreq_init(struct cpufreq_policy *policy)
{
int ret;
int ret, nr_opp;
unsigned int latency;
struct device *cpu_dev;
struct scmi_data *priv;
struct cpufreq_frequency_table *freq_table;
struct em_data_callback em_cb = EM_DATA_CB(scmi_get_cpu_power);
cpu_dev = get_cpu_device(policy->cpu);
if (!cpu_dev) {
......@@ -136,8 +164,8 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
return ret;
}
ret = dev_pm_opp_get_opp_count(cpu_dev);
if (ret <= 0) {
nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
if (nr_opp <= 0) {
dev_dbg(cpu_dev, "OPP table is not ready, deferring probe\n");
ret = -EPROBE_DEFER;
goto out_free_opp;
......@@ -171,6 +199,9 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
policy->cpuinfo.transition_latency = latency;
policy->fast_switch_possible = true;
em_register_perf_domain(policy->cpus, nr_opp, &em_cb);
return 0;
out_free_priv:
......@@ -185,7 +216,6 @@ static int scmi_cpufreq_exit(struct cpufreq_policy *policy)
{
struct scmi_data *priv = policy->driver_data;
cpufreq_cooling_unregister(priv->cdev);
dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
dev_pm_opp_remove_all_dynamic(priv->cpu_dev);
kfree(priv);
......@@ -193,17 +223,11 @@ static int scmi_cpufreq_exit(struct cpufreq_policy *policy)
return 0;
}
static void scmi_cpufreq_ready(struct cpufreq_policy *policy)
{
struct scmi_data *priv = policy->driver_data;
priv->cdev = of_cpufreq_cooling_register(policy);
}
static struct cpufreq_driver scmi_cpufreq_driver = {
.name = "scmi",
.flags = CPUFREQ_STICKY | CPUFREQ_HAVE_GOVERNOR_PER_POLICY |
CPUFREQ_NEED_INITIAL_FREQ_CHECK,
CPUFREQ_NEED_INITIAL_FREQ_CHECK |
CPUFREQ_IS_COOLING_DEV,
.verify = cpufreq_generic_frequency_table_verify,
.attr = cpufreq_generic_attr,
.target_index = scmi_cpufreq_set_target,
......@@ -211,7 +235,6 @@ static struct cpufreq_driver scmi_cpufreq_driver = {
.get = scmi_cpufreq_get_rate,
.init = scmi_cpufreq_init,
.exit = scmi_cpufreq_exit,
.ready = scmi_cpufreq_ready,
};
static int scmi_cpufreq_probe(struct scmi_device *sdev)
......
......@@ -22,7 +22,6 @@
#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/cpumask.h>
#include <linux/cpu_cooling.h>
#include <linux/export.h>
#include <linux/module.h>
#include <linux/of_platform.h>
......@@ -34,7 +33,6 @@
struct scpi_data {
struct clk *clk;
struct device *cpu_dev;
struct thermal_cooling_device *cdev;
};
static struct scpi_ops *scpi_ops;
......@@ -170,6 +168,9 @@ static int scpi_cpufreq_init(struct cpufreq_policy *policy)
policy->cpuinfo.transition_latency = latency;
policy->fast_switch_possible = false;
dev_pm_opp_of_register_em(policy->cpus);
return 0;
out_free_cpufreq_table:
......@@ -186,7 +187,6 @@ static int scpi_cpufreq_exit(struct cpufreq_policy *policy)
{
struct scpi_data *priv = policy->driver_data;
cpufreq_cooling_unregister(priv->cdev);
clk_put(priv->clk);
dev_pm_opp_free_cpufreq_table(priv->cpu_dev, &policy->freq_table);
kfree(priv);
......@@ -195,23 +195,16 @@ static int scpi_cpufreq_exit(struct cpufreq_policy *policy)
return 0;
}
static void scpi_cpufreq_ready(struct cpufreq_policy *policy)
{
struct scpi_data *priv = policy->driver_data;
priv->cdev = of_cpufreq_cooling_register(policy);
}
static struct cpufreq_driver scpi_cpufreq_driver = {
.name = "scpi-cpufreq",
.flags = CPUFREQ_STICKY | CPUFREQ_HAVE_GOVERNOR_PER_POLICY |
CPUFREQ_NEED_INITIAL_FREQ_CHECK,
CPUFREQ_NEED_INITIAL_FREQ_CHECK |
CPUFREQ_IS_COOLING_DEV,
.verify = cpufreq_generic_frequency_table_verify,
.attr = cpufreq_generic_attr,
.get = scpi_cpufreq_get_rate,
.init = scpi_cpufreq_init,
.exit = scpi_cpufreq_exit,
.ready = scpi_cpufreq_ready,
.target_index = scpi_cpufreq_set_target,
};
......
......@@ -243,8 +243,7 @@ static unsigned int speedstep_get(unsigned int cpu)
unsigned int speed;
/* You're supposed to ensure CPU is online. */
if (smp_call_function_single(cpu, get_freq_data, &speed, 1) != 0)
BUG();
BUG_ON(smp_call_function_single(cpu, get_freq_data, &speed, 1));
pr_debug("detected %u kHz as current frequency\n", speed);
return speed;
......
......@@ -118,6 +118,8 @@ static int tegra124_cpufreq_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, priv);
of_node_put(np);
return 0;
out_put_pllp_clk:
......
......@@ -4,7 +4,7 @@ config CPU_IDLE
bool "CPU idle PM support"
default y if ACPI || PPC_PSERIES
select CPU_IDLE_GOV_LADDER if (!NO_HZ && !NO_HZ_IDLE)
select CPU_IDLE_GOV_MENU if (NO_HZ || NO_HZ_IDLE)
select CPU_IDLE_GOV_MENU if (NO_HZ || NO_HZ_IDLE) && !CPU_IDLE_GOV_TEO
help
CPU idle is a generic framework for supporting software-controlled
idle processor power management. It includes modular cross-platform
......@@ -23,6 +23,15 @@ config CPU_IDLE_GOV_LADDER
config CPU_IDLE_GOV_MENU
bool "Menu governor (for tickless system)"
config CPU_IDLE_GOV_TEO
bool "Timer events oriented (TEO) governor (for tickless systems)"
help
This governor implements a simplified idle state selection method
focused on timer events and does not do any interactivity boosting.
Some workloads benefit from using it and it generally should be safe
to use. Say Y here if you are not happy with the alternatives.
config DT_IDLE_STATES
bool
......
......@@ -22,16 +22,12 @@
#include "dt_idle_states.h"
static int init_state_node(struct cpuidle_state *idle_state,
const struct of_device_id *matches,
const struct of_device_id *match_id,
struct device_node *state_node)
{
int err;
const struct of_device_id *match_id;
const char *desc;
match_id = of_match_node(matches, state_node);
if (!match_id)
return -ENODEV;
/*
* CPUidle drivers are expected to initialize the const void *data
* pointer of the passed in struct of_device_id array to the idle
......@@ -160,6 +156,7 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
{
struct cpuidle_state *idle_state;
struct device_node *state_node, *cpu_node;
const struct of_device_id *match_id;
int i, err = 0;
const cpumask_t *cpumask;
unsigned int state_idx = start_idx;
......@@ -180,6 +177,12 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
if (!state_node)
break;
match_id = of_match_node(matches, state_node);
if (!match_id) {
err = -ENODEV;
break;
}
if (!of_device_is_available(state_node)) {
of_node_put(state_node);
continue;
......@@ -198,7 +201,7 @@ int dt_init_idle_driver(struct cpuidle_driver *drv,
}
idle_state = &drv->states[state_idx++];
err = init_state_node(idle_state, matches, state_node);
err = init_state_node(idle_state, match_id, state_node);
if (err) {
pr_err("Parsing idle state node %pOF failed with err %d\n",
state_node, err);
......
......@@ -4,3 +4,4 @@
obj-$(CONFIG_CPU_IDLE_GOV_LADDER) += ladder.o
obj-$(CONFIG_CPU_IDLE_GOV_MENU) += menu.o
obj-$(CONFIG_CPU_IDLE_GOV_TEO) += teo.o

......@@ -5,6 +5,7 @@
*/
#include <linux/irq.h>
#include <linux/pm_runtime.h>
#include "i915_pmu.h"
#include "intel_ringbuffer.h"
#include "i915_drv.h"
......@@ -478,7 +479,6 @@ static u64 get_rc6(struct drm_i915_private *i915)
* counter value.
*/
spin_lock_irqsave(&i915->pmu.lock, flags);
spin_lock(&kdev->power.lock);
/*
* After the above branch intel_runtime_pm_get_if_in_use failed
......@@ -491,16 +491,13 @@ static u64 get_rc6(struct drm_i915_private *i915)
* suspended and if not we cannot do better than report the last
* known RC6 value.
*/
if (kdev->power.runtime_status == RPM_SUSPENDED) {
if (!i915->pmu.sample[__I915_SAMPLE_RC6_ESTIMATED].cur)
i915->pmu.suspended_jiffies_last =
kdev->power.suspended_jiffies;
if (pm_runtime_status_suspended(kdev)) {
val = pm_runtime_suspended_time(kdev);
val = kdev->power.suspended_jiffies -
i915->pmu.suspended_jiffies_last;
val += jiffies - kdev->power.accounting_timestamp;
if (!i915->pmu.sample[__I915_SAMPLE_RC6_ESTIMATED].cur)
i915->pmu.suspended_time_last = val;
val = jiffies_to_nsecs(val);
val -= i915->pmu.suspended_time_last;
val += i915->pmu.sample[__I915_SAMPLE_RC6].cur;
i915->pmu.sample[__I915_SAMPLE_RC6_ESTIMATED].cur = val;
......@@ -510,7 +507,6 @@ static u64 get_rc6(struct drm_i915_private *i915)
val = i915->pmu.sample[__I915_SAMPLE_RC6].cur;
}
spin_unlock(&kdev->power.lock);
spin_unlock_irqrestore(&i915->pmu.lock, flags);
}
......
......@@ -97,9 +97,9 @@ struct i915_pmu {
*/
struct i915_pmu_sample sample[__I915_NUM_PMU_SAMPLERS];
/**
* @suspended_jiffies_last: Cached suspend time from PM core.
* @suspended_time_last: Cached suspend time from PM core.
*/
unsigned long suspended_jiffies_last;
u64 suspended_time_last;
/**
* @i915_attr: Memory block holding device attributes.
*/
......
......@@ -1103,6 +1103,7 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
INTEL_CPU_FAM6(ATOM_GOLDMONT, idle_cpu_bxt),
INTEL_CPU_FAM6(ATOM_GOLDMONT_PLUS, idle_cpu_bxt),
INTEL_CPU_FAM6(ATOM_GOLDMONT_X, idle_cpu_dnv),
INTEL_CPU_FAM6(ATOM_TREMONT_X, idle_cpu_dnv),
{}
};
......
......@@ -551,9 +551,8 @@ static int _set_opp_voltage(struct device *dev, struct regulator *reg,
return ret;
}
static inline int
_generic_set_opp_clk_only(struct device *dev, struct clk *clk,
unsigned long old_freq, unsigned long freq)
static inline int _generic_set_opp_clk_only(struct device *dev, struct clk *clk,
unsigned long freq)
{
int ret;
......@@ -590,7 +589,7 @@ static int _generic_set_opp_regulator(const struct opp_table *opp_table,
}
/* Change frequency */
ret = _generic_set_opp_clk_only(dev, opp_table->clk, old_freq, freq);
ret = _generic_set_opp_clk_only(dev, opp_table->clk, freq);
if (ret)
goto restore_voltage;
......@@ -604,7 +603,7 @@ static int _generic_set_opp_regulator(const struct opp_table *opp_table,
return 0;
restore_freq:
if (_generic_set_opp_clk_only(dev, opp_table->clk, freq, old_freq))
if (_generic_set_opp_clk_only(dev, opp_table->clk, old_freq))
dev_err(dev, "%s: failed to restore old-freq (%lu Hz)\n",
__func__, old_freq);
restore_voltage:
......@@ -777,7 +776,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
opp->supplies);
} else {
/* Only frequency scaling */
ret = _generic_set_opp_clk_only(dev, clk, old_freq, freq);
ret = _generic_set_opp_clk_only(dev, clk, freq);
}
/* Scaling down? Configure required OPPs after frequency */
......@@ -811,7 +810,6 @@ static struct opp_device *_add_opp_dev_unlocked(const struct device *dev,
struct opp_table *opp_table)
{
struct opp_device *opp_dev;
int ret;
opp_dev = kzalloc(sizeof(*opp_dev), GFP_KERNEL);
if (!opp_dev)
......@@ -823,10 +821,7 @@ static struct opp_device *_add_opp_dev_unlocked(const struct device *dev,
list_add(&opp_dev->node, &opp_table->dev_list);
/* Create debugfs entries for the opp_table */
ret = opp_debug_register(opp_dev, opp_table);
if (ret)
dev_err(dev, "%s: Failed to register opp debugfs (%d)\n",
__func__, ret);
opp_debug_register(opp_dev, opp_table);
return opp_dev;
}
......@@ -1247,10 +1242,7 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
new_opp->opp_table = opp_table;
kref_init(&new_opp->kref);
ret = opp_debug_create_one(new_opp, opp_table);
if (ret)
dev_err(dev, "%s: Failed to register opp to debugfs (%d)\n",
__func__, ret);
opp_debug_create_one(new_opp, opp_table);
if (!_opp_supported_by_regulators(new_opp, opp_table)) {
new_opp->available = false;
......
......@@ -35,7 +35,7 @@ void opp_debug_remove_one(struct dev_pm_opp *opp)
debugfs_remove_recursive(opp->dentry);
}
static bool opp_debug_create_supplies(struct dev_pm_opp *opp,
static void opp_debug_create_supplies(struct dev_pm_opp *opp,
struct opp_table *opp_table,
struct dentry *pdentry)
{
......@@ -50,30 +50,21 @@ static bool opp_debug_create_supplies(struct dev_pm_opp *opp,
/* Create per-opp directory */
d = debugfs_create_dir(name, pdentry);
if (!d)
return false;
debugfs_create_ulong("u_volt_target", S_IRUGO, d,
&opp->supplies[i].u_volt);
if (!debugfs_create_ulong("u_volt_target", S_IRUGO, d,
&opp->supplies[i].u_volt))
return false;
debugfs_create_ulong("u_volt_min", S_IRUGO, d,
&opp->supplies[i].u_volt_min);
if (!debugfs_create_ulong("u_volt_min", S_IRUGO, d,
&opp->supplies[i].u_volt_min))
return false;
debugfs_create_ulong("u_volt_max", S_IRUGO, d,
&opp->supplies[i].u_volt_max);
if (!debugfs_create_ulong("u_volt_max", S_IRUGO, d,
&opp->supplies[i].u_volt_max))
return false;
if (!debugfs_create_ulong("u_amp", S_IRUGO, d,
&opp->supplies[i].u_amp))
return false;
debugfs_create_ulong("u_amp", S_IRUGO, d,
&opp->supplies[i].u_amp);
}
return true;
}
int opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table)
void opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table)
{
struct dentry *pdentry = opp_table->dentry;
struct dentry *d;
......@@ -95,40 +86,23 @@ int opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table)
/* Create per-opp directory */
d = debugfs_create_dir(name, pdentry);
if (!d)
return -ENOMEM;
if (!debugfs_create_bool("available", S_IRUGO, d, &opp->available))
return -ENOMEM;
if (!debugfs_create_bool("dynamic", S_IRUGO, d, &opp->dynamic))
return -ENOMEM;
if (!debugfs_create_bool("turbo", S_IRUGO, d, &opp->turbo))
return -ENOMEM;
if (!debugfs_create_bool("suspend", S_IRUGO, d, &opp->suspend))
return -ENOMEM;
if (!debugfs_create_u32("performance_state", S_IRUGO, d, &opp->pstate))
return -ENOMEM;
if (!debugfs_create_ulong("rate_hz", S_IRUGO, d, &opp->rate))
return -ENOMEM;
debugfs_create_bool("available", S_IRUGO, d, &opp->available);
debugfs_create_bool("dynamic", S_IRUGO, d, &opp->dynamic);
debugfs_create_bool("turbo", S_IRUGO, d, &opp->turbo);
debugfs_create_bool("suspend", S_IRUGO, d, &opp->suspend);
debugfs_create_u32("performance_state", S_IRUGO, d, &opp->pstate);
debugfs_create_ulong("rate_hz", S_IRUGO, d, &opp->rate);
debugfs_create_ulong("clock_latency_ns", S_IRUGO, d,
&opp->clock_latency_ns);
if (!opp_debug_create_supplies(opp, opp_table, d))
return -ENOMEM;
if (!debugfs_create_ulong("clock_latency_ns", S_IRUGO, d,
&opp->clock_latency_ns))
return -ENOMEM;
opp_debug_create_supplies(opp, opp_table, d);
opp->dentry = d;
return 0;
}
static int opp_list_debug_create_dir(struct opp_device *opp_dev,
struct opp_table *opp_table)
static void opp_list_debug_create_dir(struct opp_device *opp_dev,
struct opp_table *opp_table)
{
const struct device *dev = opp_dev->dev;
struct dentry *d;
......@@ -137,36 +111,21 @@ static int opp_list_debug_create_dir(struct opp_device *opp_dev,
/* Create device specific directory */
d = debugfs_create_dir(opp_table->dentry_name, rootdir);
if (!d) {
dev_err(dev, "%s: Failed to create debugfs dir\n", __func__);
return -ENOMEM;
}
opp_dev->dentry = d;
opp_table->dentry = d;
return 0;
}
static int opp_list_debug_create_link(struct opp_device *opp_dev,
struct opp_table *opp_table)
static void opp_list_debug_create_link(struct opp_device *opp_dev,
struct opp_table *opp_table)
{
const struct device *dev = opp_dev->dev;
char name[NAME_MAX];
struct dentry *d;
opp_set_dev_name(opp_dev->dev, name);
/* Create device specific directory link */
d = debugfs_create_symlink(name, rootdir, opp_table->dentry_name);
if (!d) {
dev_err(dev, "%s: Failed to create link\n", __func__);
return -ENOMEM;
}
opp_dev->dentry = d;
return 0;
opp_dev->dentry = debugfs_create_symlink(name, rootdir,
opp_table->dentry_name);
}
/**
......@@ -177,20 +136,13 @@ static int opp_list_debug_create_link(struct opp_device *opp_dev,
* Dynamically adds device specific directory in debugfs 'opp' directory. If the
* device-opp is shared with other devices, then links will be created for all
* devices except the first.
*
* Return: 0 on success, otherwise negative error.
*/
int opp_debug_register(struct opp_device *opp_dev, struct opp_table *opp_table)
void opp_debug_register(struct opp_device *opp_dev, struct opp_table *opp_table)
{
if (!rootdir) {
pr_debug("%s: Uninitialized rootdir\n", __func__);
return -EINVAL;
}
if (opp_table->dentry)
return opp_list_debug_create_link(opp_dev, opp_table);
return opp_list_debug_create_dir(opp_dev, opp_table);
opp_list_debug_create_link(opp_dev, opp_table);
else
opp_list_debug_create_dir(opp_dev, opp_table);
}
static void opp_migrate_dentry(struct opp_device *opp_dev,
......@@ -252,10 +204,6 @@ static int __init opp_debug_init(void)
{
/* Create /sys/kernel/debug/opp directory */
rootdir = debugfs_create_dir("opp", NULL);
if (!rootdir) {
pr_err("%s: Failed to create root directory\n", __func__);
return -ENOMEM;
}
return 0;
}
......
......@@ -20,6 +20,7 @@
#include <linux/pm_domain.h>
#include <linux/slab.h>
#include <linux/export.h>
#include <linux/energy_model.h>
#include "opp.h"
......@@ -1049,3 +1050,101 @@ struct device_node *dev_pm_opp_get_of_node(struct dev_pm_opp *opp)
return of_node_get(opp->np);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_get_of_node);
/*
* Callback function provided to the Energy Model framework upon registration.
* This computes the power estimated by @CPU at @kHz if it is the frequency
* of an existing OPP, or at the frequency of the first OPP above @kHz otherwise
* (see dev_pm_opp_find_freq_ceil()). This function updates @kHz to the ceiled
* frequency and @mW to the associated power. The power is estimated as
* P = C * V^2 * f with C being the CPU's capacitance and V and f respectively
* the voltage and frequency of the OPP.
*
* Returns -ENODEV if the CPU device cannot be found, -EINVAL if the power
* calculation failed because of missing parameters, 0 otherwise.
*/
static int __maybe_unused _get_cpu_power(unsigned long *mW, unsigned long *kHz,
int cpu)
{
struct device *cpu_dev;
struct dev_pm_opp *opp;
struct device_node *np;
unsigned long mV, Hz;
u32 cap;
u64 tmp;
int ret;
cpu_dev = get_cpu_device(cpu);
if (!cpu_dev)
return -ENODEV;
np = of_node_get(cpu_dev->of_node);
if (!np)
return -EINVAL;
ret = of_property_read_u32(np, "dynamic-power-coefficient", &cap);
of_node_put(np);
if (ret)
return -EINVAL;
Hz = *kHz * 1000;
opp = dev_pm_opp_find_freq_ceil(cpu_dev, &Hz);
if (IS_ERR(opp))
return -EINVAL;
mV = dev_pm_opp_get_voltage(opp) / 1000;
dev_pm_opp_put(opp);
if (!mV)
return -EINVAL;
tmp = (u64)cap * mV * mV * (Hz / 1000000);
do_div(tmp, 1000000000);
*mW = (unsigned long)tmp;
*kHz = Hz / 1000;
return 0;
}
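A worked instance of the arithmetic above, with illustrative numbers: for a "dynamic-power-coefficient" of 100 and an OPP at 1.5 GHz and 1000 mV, Hz / 1000000 is 1500, so tmp = 100 * 1000 * 1000 * 1500 = 150,000,000,000, and the do_div() by 10^9 yields *mW = 150, i.e. an estimated 150 mW at that OPP.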
/**
* dev_pm_opp_of_register_em() - Attempt to register an Energy Model
* @cpus : CPUs for which an Energy Model has to be registered
*
* This checks whether the "dynamic-power-coefficient" devicetree property has
* been specified, and tries to register an Energy Model with it if it has.
*/
void dev_pm_opp_of_register_em(struct cpumask *cpus)
{
struct em_data_callback em_cb = EM_DATA_CB(_get_cpu_power);
int ret, nr_opp, cpu = cpumask_first(cpus);
struct device *cpu_dev;
struct device_node *np;
u32 cap;
cpu_dev = get_cpu_device(cpu);
if (!cpu_dev)
return;
nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
if (nr_opp <= 0)
return;
np = of_node_get(cpu_dev->of_node);
if (!np)
return;
/*
* Register an EM only if the 'dynamic-power-coefficient' property is
* set in devicetree. It is assumed the voltage values are known if that
* property is set since it is useless otherwise. If voltages are not
* known, just let the EM registration fail with an error to alert the
* user about the inconsistent configuration.
*/
ret = of_property_read_u32(np, "dynamic-power-coefficient", &cap);
of_node_put(np);
if (ret || !cap)
return;
em_register_perf_domain(cpus, nr_opp, &em_cb);
}
EXPORT_SYMBOL_GPL(dev_pm_opp_of_register_em);
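The cpufreq conversions elsewhere in this series (mediatek, omap, scpi, qcom-cpufreq-hw) show the intended call site: the driver invokes the helper once its OPP table is populated, typically at the end of its ->init() callback. A condensed sketch with a hypothetical "foo" driver, setup and error handling elided:

#include <linux/cpufreq.h>
#include <linux/pm_opp.h>

static int foo_cpufreq_init(struct cpufreq_policy *policy)
{
        /* ... clock/regulator setup and OPP table registration ... */

        /*
         * Best effort: an Energy Model is registered only if the CPU
         * node carries "dynamic-power-coefficient"; there is no
         * return value to check.
         */
        dev_pm_opp_of_register_em(policy->cpus);

        return 0;
}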
......@@ -238,18 +238,17 @@ static inline void _of_opp_free_required_opps(struct opp_table *opp_table,
#ifdef CONFIG_DEBUG_FS
void opp_debug_remove_one(struct dev_pm_opp *opp);
int opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table);
int opp_debug_register(struct opp_device *opp_dev, struct opp_table *opp_table);
void opp_debug_create_one(struct dev_pm_opp *opp, struct opp_table *opp_table);
void opp_debug_register(struct opp_device *opp_dev, struct opp_table *opp_table);
void opp_debug_unregister(struct opp_device *opp_dev, struct opp_table *opp_table);
#else
static inline void opp_debug_remove_one(struct dev_pm_opp *opp) {}
static inline int opp_debug_create_one(struct dev_pm_opp *opp,
struct opp_table *opp_table)
{ return 0; }
static inline int opp_debug_register(struct opp_device *opp_dev,
struct opp_table *opp_table)
{ return 0; }
static inline void opp_debug_create_one(struct dev_pm_opp *opp,
struct opp_table *opp_table) { }
static inline void opp_debug_register(struct opp_device *opp_dev,
struct opp_table *opp_table) { }
static inline void opp_debug_unregister(struct opp_device *opp_dev,
struct opp_table *opp_table)
......
......@@ -1156,6 +1156,7 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
INTEL_CPU_FAM6(KABYLAKE_MOBILE, rapl_defaults_core),
INTEL_CPU_FAM6(KABYLAKE_DESKTOP, rapl_defaults_core),
INTEL_CPU_FAM6(CANNONLAKE_MOBILE, rapl_defaults_core),
INTEL_CPU_FAM6(ICELAKE_MOBILE, rapl_defaults_core),
INTEL_CPU_FAM6(ATOM_SILVERMONT, rapl_defaults_byt),
INTEL_CPU_FAM6(ATOM_AIRMONT, rapl_defaults_cht),
......@@ -1164,6 +1165,7 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
INTEL_CPU_FAM6(ATOM_GOLDMONT, rapl_defaults_core),
INTEL_CPU_FAM6(ATOM_GOLDMONT_PLUS, rapl_defaults_core),
INTEL_CPU_FAM6(ATOM_GOLDMONT_X, rapl_defaults_core),
INTEL_CPU_FAM6(ATOM_TREMONT_X, rapl_defaults_core),
INTEL_CPU_FAM6(XEON_PHI_KNL, rapl_defaults_hsw_server),
INTEL_CPU_FAM6(XEON_PHI_KNM, rapl_defaults_hsw_server),
......
......@@ -152,6 +152,7 @@ config CPU_THERMAL
bool "generic cpu cooling support"
depends on CPU_FREQ
depends on THERMAL_OF
depends on THERMAL=y
help
This implements the generic cpu cooling mechanism through frequency
reduction. An ACPI version of this already exists
......
......@@ -137,6 +137,7 @@ struct cppc_cpudata {
cpumask_var_t shared_cpu_map;
};
extern int cppc_get_desired_perf(int cpunum, u64 *desired_perf);
extern int cppc_get_perf_ctrs(int cpu, struct cppc_perf_fb_ctrs *perf_fb_ctrs);
extern int cppc_set_perf(int cpu, struct cppc_perf_ctrls *perf_ctrls);
extern int cppc_get_perf_caps(int cpu, struct cppc_perf_caps *caps);
......
......@@ -151,6 +151,9 @@ struct cpufreq_policy {
/* For cpufreq driver's internal use */
void *driver_data;
/* Pointer to the cooling device if used for thermal mitigation */
struct thermal_cooling_device *cdev;
};
/* Only for ACPI */
......@@ -254,20 +257,12 @@ __ATTR(_name, 0644, show_##_name, store_##_name)
static struct freq_attr _name = \
__ATTR(_name, 0200, NULL, store_##_name)
struct global_attr {
struct attribute attr;
ssize_t (*show)(struct kobject *kobj,
struct attribute *attr, char *buf);
ssize_t (*store)(struct kobject *a, struct attribute *b,
const char *c, size_t count);
};
#define define_one_global_ro(_name) \
static struct global_attr _name = \
static struct kobj_attribute _name = \
__ATTR(_name, 0444, show_##_name, NULL)
#define define_one_global_rw(_name) \
static struct global_attr _name = \
static struct kobj_attribute _name = \
__ATTR(_name, 0644, show_##_name, store_##_name)
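The intel_pstate hunks at the top of this diff show what the conversion means for callers: the show/store prototypes now take a struct kobj_attribute instead of the bare struct attribute that the removed global_attr carried. Shape of a converted read-only attribute, with a hypothetical name "foo" and a hypothetical foo_value global:

static ssize_t show_foo(struct kobject *kobj,
                        struct kobj_attribute *attr, char *buf)
{
        /* foo_value is a stand-in for whatever state is exposed. */
        return sprintf(buf, "%u\n", foo_value);
}
define_one_global_ro(foo);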
......@@ -330,6 +325,8 @@ struct cpufreq_driver {
/* optional */
int (*bios_limit)(int cpu, unsigned int *limit);
int (*online)(struct cpufreq_policy *policy);
int (*offline)(struct cpufreq_policy *policy);
int (*exit)(struct cpufreq_policy *policy);
void (*stop_cpu)(struct cpufreq_policy *policy);
int (*suspend)(struct cpufreq_policy *policy);
......@@ -346,14 +343,15 @@ struct cpufreq_driver {
};
/* flags */
#define CPUFREQ_STICKY (1 << 0) /* driver isn't removed even if
all ->init() calls failed */
#define CPUFREQ_CONST_LOOPS (1 << 1) /* loops_per_jiffy or other
kernel "constants" aren't
affected by frequency
transitions */
#define CPUFREQ_PM_NO_WARN (1 << 2) /* don't warn on suspend/resume
speed mismatches */
/* driver isn't removed even if all ->init() calls failed */
#define CPUFREQ_STICKY BIT(0)
/* loops_per_jiffy or other kernel "constants" aren't affected by frequency transitions */
#define CPUFREQ_CONST_LOOPS BIT(1)
/* don't warn on suspend/resume speed mismatches */
#define CPUFREQ_PM_NO_WARN BIT(2)
/*
* This should be set by platforms having multiple clock-domains, i.e.
......@@ -361,14 +359,14 @@ struct cpufreq_driver {
* be created in cpu/cpu<num>/cpufreq/ directory and so they can use the same
* governor with different tunables for different clusters.
*/
#define CPUFREQ_HAVE_GOVERNOR_PER_POLICY (1 << 3)
#define CPUFREQ_HAVE_GOVERNOR_PER_POLICY BIT(3)
/*
* Driver will do POSTCHANGE notifications from outside of their ->target()
* routine and so must set cpufreq_driver->flags with this flag, so that core
* can handle them specially.
*/
#define CPUFREQ_ASYNC_NOTIFICATION (1 << 4)
#define CPUFREQ_ASYNC_NOTIFICATION BIT(4)
/*
* Set by drivers which want cpufreq core to check if CPU is running at a
......@@ -377,13 +375,19 @@ struct cpufreq_driver {
* from the table. And if that fails, we will stop further boot process by
* issuing a BUG_ON().
*/
#define CPUFREQ_NEED_INITIAL_FREQ_CHECK (1 << 5)
#define CPUFREQ_NEED_INITIAL_FREQ_CHECK BIT(5)
/*
* Set by drivers to disallow use of governors with "dynamic_switching" flag
* set.
*/
#define CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING (1 << 6)
#define CPUFREQ_NO_AUTO_DYNAMIC_SWITCHING BIT(6)
/*
* Set by drivers that want the core to automatically register the cpufreq
* driver as a thermal cooling device.
*/
#define CPUFREQ_IS_COOLING_DEV BIT(7)
int cpufreq_register_driver(struct cpufreq_driver *driver_data);
int cpufreq_unregister_driver(struct cpufreq_driver *driver_data);
......
......@@ -69,11 +69,9 @@ struct cpuidle_state {
/* Idle State Flags */
#define CPUIDLE_FLAG_NONE (0x00)
#define CPUIDLE_FLAG_POLLING (0x01) /* polling state */
#define CPUIDLE_FLAG_COUPLED (0x02) /* state applies to multiple cpus */
#define CPUIDLE_FLAG_TIMER_STOP (0x04) /* timer is stopped on this state */
#define CPUIDLE_DRIVER_FLAGS_MASK (0xFFFF0000)
#define CPUIDLE_FLAG_POLLING BIT(0) /* polling state */
#define CPUIDLE_FLAG_COUPLED BIT(1) /* state applies to multiple cpus */
#define CPUIDLE_FLAG_TIMER_STOP BIT(2) /* timer is stopped on this state */
struct cpuidle_device_kobj;
struct cpuidle_state_kobj;
......
......@@ -1165,6 +1165,16 @@ static inline bool device_async_suspend_enabled(struct device *dev)
return !!dev->power.async_suspend;
}
static inline bool device_pm_not_required(struct device *dev)
{
return dev->power.no_pm;
}
static inline void device_set_pm_not_required(struct device *dev)
{
dev->power.no_pm = true;
}
static inline void dev_pm_syscore_device(struct device *dev, bool val)
{
#ifdef CONFIG_PM_SLEEP
......
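Usage sketch for the new helpers above. The ordering is an assumption on my part, not something this diff states: the flag presumably has to be set before the device is registered so the PM core never picks the device up in the first place.

        /*
         * Hedged sketch (assumption, not from this diff): mark a purely
         * logical device as not needing any form of power management
         * before registering it, so the PM core skips it entirely.
         */
        device_set_pm_not_required(&dev->dev);
        ret = device_register(&dev->dev);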
/* SPDX-License-Identifier: GPL-2.0 */
/*
* TI DaVinci CPUFreq platform support.
*
* Copyright (C) 2009 Texas Instruments, Inc. http://www.ti.com/
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation version 2.
*
* This program is distributed "as is" WITHOUT ANY WARRANTY of any
* kind, whether express or implied; without even the implied warranty
* of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#ifndef _MACH_DAVINCI_CPUFREQ_H
#define _MACH_DAVINCI_CPUFREQ_H
......@@ -19,8 +12,8 @@
struct davinci_cpufreq_config {
struct cpufreq_frequency_table *freq_table;
int (*set_voltage) (unsigned int index);
int (*init) (void);
int (*set_voltage)(unsigned int index);
int (*init)(void);
};
#endif
#endif /* _MACH_DAVINCI_CPUFREQ_H */
......@@ -592,6 +592,7 @@ struct dev_pm_info {
bool is_suspended:1; /* Ditto */
bool is_noirq_suspended:1;
bool is_late_suspended:1;
bool no_pm:1;
bool early_init:1; /* Owned by the PM core */
bool direct_complete:1; /* Owned by the PM core */
u32 driver_flags;
......@@ -633,9 +634,9 @@ struct dev_pm_info {
int runtime_error;
int autosuspend_delay;
u64 last_busy;
unsigned long active_jiffies;
unsigned long suspended_jiffies;
unsigned long accounting_timestamp;
u64 active_time;
u64 suspended_time;
u64 accounting_timestamp;
#endif
struct pm_subsys_data *subsys_data; /* Owned by the subsystem. */
void (*set_latency_tolerance)(struct device *, s32);
......
......@@ -271,7 +271,7 @@ int genpd_dev_pm_attach(struct device *dev);
struct device *genpd_dev_pm_attach_by_id(struct device *dev,
unsigned int index);
struct device *genpd_dev_pm_attach_by_name(struct device *dev,
char *name);
const char *name);
#else /* !CONFIG_PM_GENERIC_DOMAINS_OF */
static inline int of_genpd_add_provider_simple(struct device_node *np,
struct generic_pm_domain *genpd)
......@@ -324,7 +324,7 @@ static inline struct device *genpd_dev_pm_attach_by_id(struct device *dev,
}
static inline struct device *genpd_dev_pm_attach_by_name(struct device *dev,
char *name)
const char *name)
{
return NULL;
}
......@@ -341,7 +341,7 @@ int dev_pm_domain_attach(struct device *dev, bool power_on);
struct device *dev_pm_domain_attach_by_id(struct device *dev,
unsigned int index);
struct device *dev_pm_domain_attach_by_name(struct device *dev,
char *name);
const char *name);
void dev_pm_domain_detach(struct device *dev, bool power_off);
void dev_pm_domain_set(struct device *dev, struct dev_pm_domain *pd);
#else
......@@ -355,7 +355,7 @@ static inline struct device *dev_pm_domain_attach_by_id(struct device *dev,
return NULL;
}
static inline struct device *dev_pm_domain_attach_by_name(struct device *dev,
char *name)
const char *name)
{
return NULL;
}
......
......@@ -334,6 +334,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpuma
struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev);
struct device_node *dev_pm_opp_get_of_node(struct dev_pm_opp *opp);
int of_get_required_opp_performance_state(struct device_node *np, int index);
void dev_pm_opp_of_register_em(struct cpumask *cpus);
#else
static inline int dev_pm_opp_of_add_table(struct device *dev)
{
......@@ -372,6 +373,11 @@ static inline struct device_node *dev_pm_opp_get_of_node(struct dev_pm_opp *opp)
{
return NULL;
}
static inline void dev_pm_opp_of_register_em(struct cpumask *cpus)
{
}
static inline int of_get_required_opp_performance_state(struct device_node *np, int index)
{
return -ENOTSUPP;
......
......@@ -113,6 +113,8 @@ static inline bool pm_runtime_is_irq_safe(struct device *dev)
return dev->power.irq_safe;
}
extern u64 pm_runtime_suspended_time(struct device *dev);
#else /* !CONFIG_PM */
static inline bool queue_pm_work(struct work_struct *work) { return false; }
......
......@@ -10,6 +10,7 @@
#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/debugfs.h>
#include <linux/energy_model.h>
#include <linux/sched/topology.h>
#include <linux/slab.h>
......@@ -23,6 +24,60 @@ static DEFINE_PER_CPU(struct em_perf_domain *, em_data);
*/
static DEFINE_MUTEX(em_pd_mutex);
#ifdef CONFIG_DEBUG_FS
static struct dentry *rootdir;
static void em_debug_create_cs(struct em_cap_state *cs, struct dentry *pd)
{
struct dentry *d;
char name[24];
snprintf(name, sizeof(name), "cs:%lu", cs->frequency);
/* Create per-cs directory */
d = debugfs_create_dir(name, pd);
debugfs_create_ulong("frequency", 0444, d, &cs->frequency);
debugfs_create_ulong("power", 0444, d, &cs->power);
debugfs_create_ulong("cost", 0444, d, &cs->cost);
}
static int em_debug_cpus_show(struct seq_file *s, void *unused)
{
seq_printf(s, "%*pbl\n", cpumask_pr_args(to_cpumask(s->private)));
return 0;
}
DEFINE_SHOW_ATTRIBUTE(em_debug_cpus);
static void em_debug_create_pd(struct em_perf_domain *pd, int cpu)
{
struct dentry *d;
char name[8];
int i;
snprintf(name, sizeof(name), "pd%d", cpu);
/* Create the directory of the performance domain */
d = debugfs_create_dir(name, rootdir);
debugfs_create_file("cpus", 0444, d, pd->cpus, &em_debug_cpus_fops);
/* Create a sub-directory for each capacity state */
for (i = 0; i < pd->nr_cap_states; i++)
em_debug_create_cs(&pd->table[i], d);
}
static int __init em_debug_init(void)
{
/* Create /sys/kernel/debug/energy_model directory */
rootdir = debugfs_create_dir("energy_model", NULL);
return 0;
}
core_initcall(em_debug_init);
#else /* CONFIG_DEBUG_FS */
static void em_debug_create_pd(struct em_perf_domain *pd, int cpu) {}
#endif
static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states,
struct em_data_callback *cb)
{
......@@ -102,6 +157,8 @@ static struct em_perf_domain *em_create_pd(cpumask_t *span, int nr_states,
pd->nr_cap_states = nr_states;
cpumask_copy(to_cpumask(pd->cpus), span);
em_debug_create_pd(pd, cpu);
return pd;
free_cs_table:
......
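With the hooks above in place, the resulting layout follows directly from the snprintf() and debugfs calls shown: one "pd%d" directory per performance domain with a "cpus" file, and one "cs:%lu" sub-directory per capacity state. The 1000000 kHz state below is illustrative:

/sys/kernel/debug/energy_model/pd0/cpus
/sys/kernel/debug/energy_model/pd0/cs:1000000/frequency
/sys/kernel/debug/energy_model/pd0/cs:1000000/power
/sys/kernel/debug/energy_model/pd0/cs:1000000/cost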
......@@ -582,10 +582,8 @@ static int register_pm_qos_misc(struct pm_qos_object *qos, struct dentry *d)
qos->pm_qos_power_miscdev.name = qos->name;
qos->pm_qos_power_miscdev.fops = &pm_qos_power_fops;
if (d) {
(void)debugfs_create_file(qos->name, S_IRUGO, d,
(void *)qos, &pm_qos_debug_fops);
}
debugfs_create_file(qos->name, S_IRUGO, d, (void *)qos,
&pm_qos_debug_fops);
return misc_register(&qos->pm_qos_power_miscdev);
}
......@@ -685,8 +683,6 @@ static int __init pm_qos_power_init(void)
BUILD_BUG_ON(ARRAY_SIZE(pm_qos_array) != PM_QOS_NUM_CLASSES);
d = debugfs_create_dir("pm_qos", NULL);
if (IS_ERR_OR_NULL(d))
d = NULL;
for (i = PM_QOS_CPU_DMA_LATENCY; i < PM_QOS_NUM_CLASSES; i++) {
ret = register_pm_qos_misc(pm_qos_array[i], d);
......