Commit 40e993aa authored by Rafael J. Wysocki

Merge OPP material for v4.11 to satisfy dependencies.

parents b1e9a649 0764c604
@@ -79,22 +79,6 @@ dependent subsystems such as cpufreq are left to the discretion of the SoC
 specific framework which uses the OPP library. Similar care needs to be taken
 care to refresh the cpufreq table in cases of these operations.

-WARNING on OPP List locking mechanism:
--------------------------------------------------
-OPP library uses RCU for exclusivity. RCU allows the query functions to operate
-in multiple contexts and this synchronization mechanism is optimal for a read
-intensive operations on data structure as the OPP library caters to.
-
-To ensure that the data retrieved are sane, the users such as SoC framework
-should ensure that the section of code operating on OPP queries are locked
-using RCU read locks. The opp_find_freq_{exact,ceil,floor},
-opp_get_{voltage, freq, opp_count} fall into this category.
-
-opp_{add,enable,disable} are updaters which use mutex and implement it's own
-RCU locking mechanisms. These functions should *NOT* be called under RCU locks
-and other contexts that prevent blocking functions in RCU or mutex operations
-from working.
-
 2. Initial OPP List Registration
 ================================
 The SoC implementation calls dev_pm_opp_add function iteratively to add OPPs per
@@ -137,15 +121,18 @@ functions return the matching pointer representing the opp if a match is
 found, else returns error. These errors are expected to be handled by standard
 error checks such as IS_ERR() and appropriate actions taken by the caller.

+Callers of these functions shall call dev_pm_opp_put() after they have used the
+OPP. Otherwise the memory for the OPP will never get freed and result in
+memleak.
+
 dev_pm_opp_find_freq_exact - Search for an OPP based on an *exact* frequency and
	availability. This function is especially useful to enable an OPP which
	is not available by default.
	Example: In a case when SoC framework detects a situation where a
	higher frequency could be made available, it can use this function to
	find the OPP prior to call the dev_pm_opp_enable to actually make it available.
-	 rcu_read_lock();
	 opp = dev_pm_opp_find_freq_exact(dev, 1000000000, false);
-	 rcu_read_unlock();
+	 dev_pm_opp_put(opp);
	 /* dont operate on the pointer.. just do a sanity check.. */
	 if (IS_ERR(opp)) {
		pr_err("frequency not disabled!\n");
@@ -163,9 +150,8 @@ dev_pm_opp_find_freq_floor - Search for an available OPP which is *at most* the
	frequency.
	Example: To find the highest opp for a device:
	 freq = ULONG_MAX;
-	 rcu_read_lock();
-	 dev_pm_opp_find_freq_floor(dev, &freq);
-	 rcu_read_unlock();
+	 opp = dev_pm_opp_find_freq_floor(dev, &freq);
+	 dev_pm_opp_put(opp);

 dev_pm_opp_find_freq_ceil - Search for an available OPP which is *at least* the
	provided frequency. This function is useful while searching for a
@@ -173,17 +159,15 @@ dev_pm_opp_find_freq_ceil - Search for an available OPP which is *at least* the
	frequency.
	Example 1: To find the lowest opp for a device:
	 freq = 0;
-	 rcu_read_lock();
-	 dev_pm_opp_find_freq_ceil(dev, &freq);
-	 rcu_read_unlock();
+	 opp = dev_pm_opp_find_freq_ceil(dev, &freq);
+	 dev_pm_opp_put(opp);

	Example 2: A simplified implementation of a SoC cpufreq_driver->target:
	 soc_cpufreq_target(..)
	 {
		/* Do stuff like policy checks etc. */
		/* Find the best frequency match for the req */
-		rcu_read_lock();
		opp = dev_pm_opp_find_freq_ceil(dev, &freq);
-		rcu_read_unlock();
+		dev_pm_opp_put(opp);
		if (!IS_ERR(opp))
			soc_switch_to_freq_voltage(freq);
		else
@@ -208,9 +192,8 @@ dev_pm_opp_enable - Make a OPP available for operation.
	implementation might choose to do something as follows:
	 if (cur_temp < temp_low_thresh) {
		/* Enable 1GHz if it was disabled */
-		rcu_read_lock();
		opp = dev_pm_opp_find_freq_exact(dev, 1000000000, false);
-		rcu_read_unlock();
+		dev_pm_opp_put(opp);
		/* just error check */
		if (!IS_ERR(opp))
			ret = dev_pm_opp_enable(dev, 1000000000);
@@ -224,9 +207,8 @@ dev_pm_opp_disable - Make an OPP to be not available for operation
	choose to do something as follows:
	 if (cur_temp > temp_high_thresh) {
		/* Disable 1GHz if it was enabled */
-		rcu_read_lock();
		opp = dev_pm_opp_find_freq_exact(dev, 1000000000, true);
-		rcu_read_unlock();
+		dev_pm_opp_put(opp);
		/* just error check */
		if (!IS_ERR(opp))
			ret = dev_pm_opp_disable(dev, 1000000000);
@@ -249,10 +231,9 @@ dev_pm_opp_get_voltage - Retrieve the voltage represented by the opp pointer.
	 soc_switch_to_freq_voltage(freq)
	 {
		/* do things */
-		rcu_read_lock();
		opp = dev_pm_opp_find_freq_ceil(dev, &freq);
		v = dev_pm_opp_get_voltage(opp);
-		rcu_read_unlock();
+		dev_pm_opp_put(opp);
		if (v)
			regulator_set_voltage(.., v);
		/* do other things */
@@ -266,12 +247,12 @@ dev_pm_opp_get_freq - Retrieve the freq represented by the opp pointer.
	 {
		/* do things.. */
		 max_freq = ULONG_MAX;
-		 rcu_read_lock();
		 max_opp = dev_pm_opp_find_freq_floor(dev,&max_freq);
		 requested_opp = dev_pm_opp_find_freq_ceil(dev,&freq);
		 if (!IS_ERR(max_opp) && !IS_ERR(requested_opp))
			r = soc_test_validity(max_opp, requested_opp);
-		 rcu_read_unlock();
+		 dev_pm_opp_put(max_opp);
+		 dev_pm_opp_put(requested_opp);
		/* do other things */
	 }
	 soc_test_validity(..)
@@ -289,7 +270,6 @@ dev_pm_opp_get_opp_count - Retrieve the number of available opps for a device
	 soc_notify_coproc_available_frequencies()
	 {
		/* Do things */
-		rcu_read_lock();
		num_available = dev_pm_opp_get_opp_count(dev);
		speeds = kzalloc(sizeof(u32) * num_available, GFP_KERNEL);
		/* populate the table in increasing order */
@@ -298,8 +278,8 @@ dev_pm_opp_get_opp_count - Retrieve the number of available opps for a device
			speeds[i] = freq;
			freq++;
			i++;
+			dev_pm_opp_put(opp);
		}
-		rcu_read_unlock();

		soc_notify_coproc(AVAILABLE_FREQs, speeds, num_available);
		/* Do other things */
...
@@ -130,17 +130,16 @@ static int __init omap2_set_init_voltage(char *vdd_name, char *clk_name,
	freq = clk_get_rate(clk);
	clk_put(clk);

-	rcu_read_lock();
	opp = dev_pm_opp_find_freq_ceil(dev, &freq);
	if (IS_ERR(opp)) {
-		rcu_read_unlock();
		pr_err("%s: unable to find boot up OPP for vdd_%s\n",
			__func__, vdd_name);
		goto exit;
	}

	bootup_volt = dev_pm_opp_get_voltage(opp);
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);
+
	if (!bootup_volt) {
		pr_err("%s: unable to find voltage corresponding to the bootup OPP for vdd_%s\n",
			__func__, vdd_name);
...
@@ -32,13 +32,7 @@ LIST_HEAD(opp_tables);
 /* Lock to allow exclusive modification to the device and opp lists */
 DEFINE_MUTEX(opp_table_lock);

-#define opp_rcu_lockdep_assert()					\
-do {									\
-	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&			\
-			 !lockdep_is_held(&opp_table_lock),		\
-			 "Missing rcu_read_lock() or "			\
-			 "opp_table_lock protection");			\
-} while (0)
+static void dev_pm_opp_get(struct dev_pm_opp *opp);

 static struct opp_device *_find_opp_dev(const struct device *dev,
					struct opp_table *opp_table)
@@ -52,38 +46,46 @@ static struct opp_device *_find_opp_dev(const struct device *dev,
	return NULL;
 }

+static struct opp_table *_find_opp_table_unlocked(struct device *dev)
+{
+	struct opp_table *opp_table;
+
+	list_for_each_entry(opp_table, &opp_tables, node) {
+		if (_find_opp_dev(dev, opp_table)) {
+			_get_opp_table_kref(opp_table);
+
+			return opp_table;
+		}
+	}
+
+	return ERR_PTR(-ENODEV);
+}
+
 /**
  * _find_opp_table() - find opp_table struct using device pointer
  * @dev: device pointer used to lookup OPP table
  *
- * Search OPP table for one containing matching device. Does a RCU reader
- * operation to grab the pointer needed.
+ * Search OPP table for one containing matching device.
  *
  * Return: pointer to 'struct opp_table' if found, otherwise -ENODEV or
  * -EINVAL based on type of error.
  *
- * Locking: For readers, this function must be called under rcu_read_lock().
- * opp_table is a RCU protected pointer, which means that opp_table is valid
- * as long as we are under RCU lock.
- *
- * For Writers, this function must be called with opp_table_lock held.
+ * The callers must call dev_pm_opp_put_opp_table() after the table is used.
  */
 struct opp_table *_find_opp_table(struct device *dev)
 {
	struct opp_table *opp_table;

-	opp_rcu_lockdep_assert();
-
	if (IS_ERR_OR_NULL(dev)) {
		pr_err("%s: Invalid parameters\n", __func__);
		return ERR_PTR(-EINVAL);
	}

-	list_for_each_entry_rcu(opp_table, &opp_tables, node)
-		if (_find_opp_dev(dev, opp_table))
-			return opp_table;
-
-	return ERR_PTR(-ENODEV);
+	mutex_lock(&opp_table_lock);
+	opp_table = _find_opp_table_unlocked(dev);
+	mutex_unlock(&opp_table_lock);
+
+	return opp_table;
 }
 /**
@@ -94,29 +96,15 @@ struct opp_table *_find_opp_table(struct device *dev)
  * return 0
  *
  * This is useful only for devices with single power supply.
- *
- * Locking: This function must be called under rcu_read_lock(). opp is a rcu
- * protected pointer. This means that opp which could have been fetched by
- * opp_find_freq_{exact,ceil,floor} functions is valid as long as we are
- * under RCU lock. The pointer returned by the opp_find_freq family must be
- * used in the same section as the usage of this function with the pointer
- * prior to unlocking with rcu_read_unlock() to maintain the integrity of the
- * pointer.
  */
 unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp)
 {
-	struct dev_pm_opp *tmp_opp;
-	unsigned long v = 0;
-
-	opp_rcu_lockdep_assert();
-
-	tmp_opp = rcu_dereference(opp);
-	if (IS_ERR_OR_NULL(tmp_opp))
+	if (IS_ERR_OR_NULL(opp)) {
		pr_err("%s: Invalid parameters\n", __func__);
-	else
-		v = tmp_opp->supplies[0].u_volt;
+		return 0;
+	}

-	return v;
+	return opp->supplies[0].u_volt;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_get_voltage);
@@ -126,29 +114,15 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_voltage);
  *
  * Return: frequency in hertz corresponding to the opp, else
  * return 0
- *
- * Locking: This function must be called under rcu_read_lock(). opp is a rcu
- * protected pointer. This means that opp which could have been fetched by
- * opp_find_freq_{exact,ceil,floor} functions is valid as long as we are
- * under RCU lock. The pointer returned by the opp_find_freq family must be
- * used in the same section as the usage of this function with the pointer
- * prior to unlocking with rcu_read_unlock() to maintain the integrity of the
- * pointer.
  */
 unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp)
 {
-	struct dev_pm_opp *tmp_opp;
-	unsigned long f = 0;
-
-	opp_rcu_lockdep_assert();
-
-	tmp_opp = rcu_dereference(opp);
-	if (IS_ERR_OR_NULL(tmp_opp) || !tmp_opp->available)
+	if (IS_ERR_OR_NULL(opp) || !opp->available) {
		pr_err("%s: Invalid parameters\n", __func__);
-	else
-		f = tmp_opp->rate;
+		return 0;
+	}

-	return f;
+	return opp->rate;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_get_freq);
@@ -161,28 +135,15 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_freq);
  * quickly. Running on them for longer times may overheat the chip.
  *
  * Return: true if opp is turbo opp, else false.
- *
- * Locking: This function must be called under rcu_read_lock(). opp is a rcu
- * protected pointer. This means that opp which could have been fetched by
- * opp_find_freq_{exact,ceil,floor} functions is valid as long as we are
- * under RCU lock. The pointer returned by the opp_find_freq family must be
- * used in the same section as the usage of this function with the pointer
- * prior to unlocking with rcu_read_unlock() to maintain the integrity of the
- * pointer.
  */
 bool dev_pm_opp_is_turbo(struct dev_pm_opp *opp)
 {
-	struct dev_pm_opp *tmp_opp;
-
-	opp_rcu_lockdep_assert();
-
-	tmp_opp = rcu_dereference(opp);
-	if (IS_ERR_OR_NULL(tmp_opp) || !tmp_opp->available) {
+	if (IS_ERR_OR_NULL(opp) || !opp->available) {
		pr_err("%s: Invalid parameters\n", __func__);
		return false;
	}

-	return tmp_opp->turbo;
+	return opp->turbo;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_is_turbo);
@@ -191,52 +152,29 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_is_turbo);
  * @dev: device for which we do this operation
  *
  * Return: This function returns the max clock latency in nanoseconds.
- *
- * Locking: This function takes rcu_read_lock().
  */
 unsigned long dev_pm_opp_get_max_clock_latency(struct device *dev)
 {
	struct opp_table *opp_table;
	unsigned long clock_latency_ns;

-	rcu_read_lock();
-
	opp_table = _find_opp_table(dev);
	if (IS_ERR(opp_table))
-		clock_latency_ns = 0;
-	else
-		clock_latency_ns = opp_table->clock_latency_ns_max;
-
-	rcu_read_unlock();
-	return clock_latency_ns;
-}
-EXPORT_SYMBOL_GPL(dev_pm_opp_get_max_clock_latency);
-
-static int _get_regulator_count(struct device *dev)
-{
-	struct opp_table *opp_table;
-	int count;
+		return 0;

-	rcu_read_lock();
+	clock_latency_ns = opp_table->clock_latency_ns_max;
+	dev_pm_opp_put_opp_table(opp_table);

-	opp_table = _find_opp_table(dev);
-	if (!IS_ERR(opp_table))
-		count = opp_table->regulator_count;
-	else
-		count = 0;
-	rcu_read_unlock();
-
-	return count;
+	return clock_latency_ns;
 }
+EXPORT_SYMBOL_GPL(dev_pm_opp_get_max_clock_latency);

 /**
  * dev_pm_opp_get_max_volt_latency() - Get max voltage latency in nanoseconds
  * @dev: device for which we do this operation
  *
  * Return: This function returns the max voltage latency in nanoseconds.
- *
- * Locking: This function takes rcu_read_lock().
  */
 unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev)
 {
@@ -250,35 +188,33 @@ unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev)
		unsigned long max;
	} *uV;

-	count = _get_regulator_count(dev);
+	opp_table = _find_opp_table(dev);
+	if (IS_ERR(opp_table))
+		return 0;
+
+	count = opp_table->regulator_count;

	/* Regulator may not be required for the device */
	if (!count)
-		return 0;
+		goto put_opp_table;

	regulators = kmalloc_array(count, sizeof(*regulators), GFP_KERNEL);
	if (!regulators)
-		return 0;
+		goto put_opp_table;

	uV = kmalloc_array(count, sizeof(*uV), GFP_KERNEL);
	if (!uV)
		goto free_regulators;

-	rcu_read_lock();
-
-	opp_table = _find_opp_table(dev);
-	if (IS_ERR(opp_table)) {
-		rcu_read_unlock();
-		goto free_uV;
-	}
-
	memcpy(regulators, opp_table->regulators, count * sizeof(*regulators));

+	mutex_lock(&opp_table->lock);
+
	for (i = 0; i < count; i++) {
		uV[i].min = ~0;
		uV[i].max = 0;

-		list_for_each_entry_rcu(opp, &opp_table->opp_list, node) {
+		list_for_each_entry(opp, &opp_table->opp_list, node) {
			if (!opp->available)
				continue;
@@ -289,7 +225,7 @@ unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev)
		}
	}

-	rcu_read_unlock();
+	mutex_unlock(&opp_table->lock);

	/*
	 * The caller needs to ensure that opp_table (and hence the regulator)
@@ -301,10 +237,11 @@ unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev)
		latency_ns += ret * 1000;
	}

-free_uV:
	kfree(uV);
 free_regulators:
	kfree(regulators);
+put_opp_table:
+	dev_pm_opp_put_opp_table(opp_table);

	return latency_ns;
 }
@@ -317,8 +254,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_max_volt_latency);
  *
  * Return: This function returns the max transition latency, in nanoseconds, to
  * switch from one OPP to other.
- *
- * Locking: This function takes rcu_read_lock().
  */
 unsigned long dev_pm_opp_get_max_transition_latency(struct device *dev)
 {
@@ -328,32 +263,29 @@ unsigned long dev_pm_opp_get_max_transition_latency(struct device *dev)
 EXPORT_SYMBOL_GPL(dev_pm_opp_get_max_transition_latency);
 /**
- * dev_pm_opp_get_suspend_opp() - Get suspend opp
+ * dev_pm_opp_get_suspend_opp_freq() - Get frequency of suspend opp in Hz
  * @dev: device for which we do this operation
  *
- * Return: This function returns pointer to the suspend opp if it is
- * defined and available, otherwise it returns NULL.
- *
- * Locking: This function must be called under rcu_read_lock(). opp is a rcu
- * protected pointer. The reason for the same is that the opp pointer which is
- * returned will remain valid for use with opp_get_{voltage, freq} only while
- * under the locked area. The pointer returned must be used prior to unlocking
- * with rcu_read_unlock() to maintain the integrity of the pointer.
+ * Return: This function returns the frequency of the OPP marked as suspend_opp
+ * if one is available, else returns 0;
  */
-struct dev_pm_opp *dev_pm_opp_get_suspend_opp(struct device *dev)
+unsigned long dev_pm_opp_get_suspend_opp_freq(struct device *dev)
 {
	struct opp_table *opp_table;
+	unsigned long freq = 0;

-	opp_rcu_lockdep_assert();
-
	opp_table = _find_opp_table(dev);
-	if (IS_ERR(opp_table) || !opp_table->suspend_opp ||
-	    !opp_table->suspend_opp->available)
-		return NULL;
+	if (IS_ERR(opp_table))
+		return 0;
+
+	if (opp_table->suspend_opp && opp_table->suspend_opp->available)
+		freq = dev_pm_opp_get_freq(opp_table->suspend_opp);

-	return opp_table->suspend_opp;
+	dev_pm_opp_put_opp_table(opp_table);
+
+	return freq;
 }
-EXPORT_SYMBOL_GPL(dev_pm_opp_get_suspend_opp);
+EXPORT_SYMBOL_GPL(dev_pm_opp_get_suspend_opp_freq);

 /**
  * dev_pm_opp_get_opp_count() - Get number of opps available in the opp table
@@ -361,8 +293,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_suspend_opp);
  *
  * Return: This function returns the number of available opps if there are any,
  * else returns 0 if none or the corresponding error value.
- *
- * Locking: This function takes rcu_read_lock().
  */
 int dev_pm_opp_get_opp_count(struct device *dev)
 {
@@ -370,23 +300,24 @@ int dev_pm_opp_get_opp_count(struct device *dev)
	struct dev_pm_opp *temp_opp;
	int count = 0;

-	rcu_read_lock();
-
	opp_table = _find_opp_table(dev);
	if (IS_ERR(opp_table)) {
		count = PTR_ERR(opp_table);
		dev_err(dev, "%s: OPP table not found (%d)\n",
			__func__, count);
-		goto out_unlock;
+		return count;
	}

-	list_for_each_entry_rcu(temp_opp, &opp_table->opp_list, node) {
+	mutex_lock(&opp_table->lock);
+
+	list_for_each_entry(temp_opp, &opp_table->opp_list, node) {
		if (temp_opp->available)
			count++;
	}

-out_unlock:
-	rcu_read_unlock();
+	mutex_unlock(&opp_table->lock);
+	dev_pm_opp_put_opp_table(opp_table);

	return count;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_count);
@@ -411,11 +342,8 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_count);
  * This provides a mechanism to enable an opp which is not available currently
  * or the opposite as well.
  *
- * Locking: This function must be called under rcu_read_lock(). opp is a rcu
- * protected pointer. The reason for the same is that the opp pointer which is
- * returned will remain valid for use with opp_get_{voltage, freq} only while
- * under the locked area. The pointer returned must be used prior to unlocking
- * with rcu_read_unlock() to maintain the integrity of the pointer.
+ * The callers are required to call dev_pm_opp_put() for the returned OPP after
+ * use.
  */
 struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
					      unsigned long freq,
@@ -424,8 +352,6 @@ struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
	struct opp_table *opp_table;
	struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);

-	opp_rcu_lockdep_assert();
-
	opp_table = _find_opp_table(dev);
	if (IS_ERR(opp_table)) {
		int r = PTR_ERR(opp_table);
@@ -434,14 +360,22 @@ struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
		return ERR_PTR(r);
	}

-	list_for_each_entry_rcu(temp_opp, &opp_table->opp_list, node) {
+	mutex_lock(&opp_table->lock);
+
+	list_for_each_entry(temp_opp, &opp_table->opp_list, node) {
		if (temp_opp->available == available &&
				temp_opp->rate == freq) {
			opp = temp_opp;
+
+			/* Increment the reference count of OPP */
+			dev_pm_opp_get(opp);
			break;
		}
	}

+	mutex_unlock(&opp_table->lock);
+	dev_pm_opp_put_opp_table(opp_table);
+
	return opp;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_exact);
@@ -451,14 +385,21 @@ static noinline struct dev_pm_opp *_find_freq_ceil(struct opp_table *opp_table,
 {
	struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);

-	list_for_each_entry_rcu(temp_opp, &opp_table->opp_list, node) {
+	mutex_lock(&opp_table->lock);
+
+	list_for_each_entry(temp_opp, &opp_table->opp_list, node) {
		if (temp_opp->available && temp_opp->rate >= *freq) {
			opp = temp_opp;
			*freq = opp->rate;
+
+			/* Increment the reference count of OPP */
+			dev_pm_opp_get(opp);
			break;
		}
	}

+	mutex_unlock(&opp_table->lock);
+
	return opp;
 }
@@ -477,18 +418,14 @@ static noinline struct dev_pm_opp *_find_freq_ceil(struct opp_table *opp_table,
  * ERANGE: no match found for search
  * ENODEV: if device not found in list of registered devices
  *
- * Locking: This function must be called under rcu_read_lock(). opp is a rcu
- * protected pointer. The reason for the same is that the opp pointer which is
- * returned will remain valid for use with opp_get_{voltage, freq} only while
- * under the locked area. The pointer returned must be used prior to unlocking
- * with rcu_read_unlock() to maintain the integrity of the pointer.
+ * The callers are required to call dev_pm_opp_put() for the returned OPP after
+ * use.
  */
 struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
					     unsigned long *freq)
 {
	struct opp_table *opp_table;
+	struct dev_pm_opp *opp;

-	opp_rcu_lockdep_assert();
-
	if (!dev || !freq) {
		dev_err(dev, "%s: Invalid argument freq=%p\n", __func__, freq);
@@ -499,7 +436,11 @@ struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
	if (IS_ERR(opp_table))
		return ERR_CAST(opp_table);

-	return _find_freq_ceil(opp_table, freq);
+	opp = _find_freq_ceil(opp_table, freq);
+
+	dev_pm_opp_put_opp_table(opp_table);
+
+	return opp;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_ceil);
@@ -518,11 +459,8 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_ceil);
  * ERANGE:	no match found for search
  * ENODEV:	if device not found in list of registered devices
  *
- * Locking: This function must be called under rcu_read_lock(). opp is a rcu
- * protected pointer. The reason for the same is that the opp pointer which is
- * returned will remain valid for use with opp_get_{voltage, freq} only while
- * under the locked area. The pointer returned must be used prior to unlocking
- * with rcu_read_unlock() to maintain the integrity of the pointer.
+ * The callers are required to call dev_pm_opp_put() for the returned OPP after
+ * use.
  */
 struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 					      unsigned long *freq)
@@ -530,8 +468,6 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 	struct opp_table *opp_table;
 	struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
 
-	opp_rcu_lockdep_assert();
 
 	if (!dev || !freq) {
 		dev_err(dev, "%s: Invalid argument freq=%p\n", __func__, freq);
 		return ERR_PTR(-EINVAL);
@@ -541,7 +477,9 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 	if (IS_ERR(opp_table))
 		return ERR_CAST(opp_table);
 
-	list_for_each_entry_rcu(temp_opp, &opp_table->opp_list, node) {
+	mutex_lock(&opp_table->lock);
+
+	list_for_each_entry(temp_opp, &opp_table->opp_list, node) {
 		if (temp_opp->available) {
 			/* go to the next node, before choosing prev */
 			if (temp_opp->rate > *freq)
@@ -550,6 +488,13 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 			opp = temp_opp;
 		}
 	}
+
+	/* Increment the reference count of OPP */
+	if (!IS_ERR(opp))
+		dev_pm_opp_get(opp);
+	mutex_unlock(&opp_table->lock);
+	dev_pm_opp_put_opp_table(opp_table);
+
 	if (!IS_ERR(opp))
 		*freq = opp->rate;
 
@@ -557,34 +502,6 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_floor);
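The floor walk above ("go to the next node, before choosing prev") keeps the last available rate seen before the target is exceeded. A minimal userspace sketch of the same logic, with illustrative names; `available[i]` models `opp->available` and `0` stands in for `ERR_PTR(-ERANGE)`:

```c
#include <stddef.h>

/* rates[] must be ascending. Returns the highest available rate at or
 * below target, or 0 when no entry qualifies. */
static unsigned long find_freq_floor(const unsigned long *rates,
				     const int *available, size_t n,
				     unsigned long target)
{
	unsigned long best = 0;

	for (size_t i = 0; i < n; i++) {
		if (!available[i])
			continue;
		/* past the target: every later entry is bigger too */
		if (rates[i] > target)
			break;
		best = rates[i];
	}
	return best;
}
```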
-/*
- * The caller needs to ensure that opp_table (and hence the clk) isn't freed,
- * while clk returned here is used.
- */
-static struct clk *_get_opp_clk(struct device *dev)
-{
-	struct opp_table *opp_table;
-	struct clk *clk;
-
-	rcu_read_lock();
-
-	opp_table = _find_opp_table(dev);
-	if (IS_ERR(opp_table)) {
-		dev_err(dev, "%s: device opp doesn't exist\n", __func__);
-		clk = ERR_CAST(opp_table);
-		goto unlock;
-	}
-
-	clk = opp_table->clk;
-	if (IS_ERR(clk))
-		dev_err(dev, "%s: No clock available for the device\n",
-			__func__);
-
-unlock:
-	rcu_read_unlock();
-	return clk;
-}
 static int _set_opp_voltage(struct device *dev, struct regulator *reg,
 			    struct dev_pm_opp_supply *supply)
 {
@@ -680,8 +597,6 @@ static int _generic_set_opp(struct dev_pm_set_opp_data *data)
  *
  * This configures the power-supplies and clock source to the levels specified
  * by the OPP corresponding to the target_freq.
- *
- * Locking: This function takes rcu_read_lock().
  */
 int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 {
@@ -700,9 +615,19 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 		return -EINVAL;
 	}
 
-	clk = _get_opp_clk(dev);
-	if (IS_ERR(clk))
-		return PTR_ERR(clk);
+	opp_table = _find_opp_table(dev);
+	if (IS_ERR(opp_table)) {
+		dev_err(dev, "%s: device opp doesn't exist\n", __func__);
+		return PTR_ERR(opp_table);
+	}
+
+	clk = opp_table->clk;
+	if (IS_ERR(clk)) {
+		dev_err(dev, "%s: No clock available for the device\n",
+			__func__);
+		ret = PTR_ERR(clk);
+		goto put_opp_table;
+	}
 
 	freq = clk_round_rate(clk, target_freq);
 	if ((long)freq <= 0)
@@ -714,16 +639,8 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 	if (old_freq == freq) {
 		dev_dbg(dev, "%s: old/new frequencies (%lu Hz) are same, nothing to do\n",
 			__func__, freq);
-		return 0;
-	}
-
-	rcu_read_lock();
-
-	opp_table = _find_opp_table(dev);
-	if (IS_ERR(opp_table)) {
-		dev_err(dev, "%s: device opp doesn't exist\n", __func__);
-		rcu_read_unlock();
-		return PTR_ERR(opp_table);
+		ret = 0;
+		goto put_opp_table;
 	}
 
 	old_opp = _find_freq_ceil(opp_table, &old_freq);
@@ -737,8 +654,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 		ret = PTR_ERR(opp);
 		dev_err(dev, "%s: failed to find OPP for freq %lu (%d)\n",
 			__func__, freq, ret);
-		rcu_read_unlock();
-		return ret;
+		goto put_old_opp;
 	}
 
 	dev_dbg(dev, "%s: switching OPP: %lu Hz --> %lu Hz\n", __func__,
@@ -748,8 +664,8 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 
 	/* Only frequency scaling */
 	if (!regulators) {
-		rcu_read_unlock();
-		return _generic_set_opp_clk_only(dev, clk, old_freq, freq);
+		ret = _generic_set_opp_clk_only(dev, clk, old_freq, freq);
+		goto put_opps;
 	}
 
 	if (opp_table->set_opp)
@@ -773,28 +689,26 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
 	data->new_opp.rate = freq;
 	memcpy(data->new_opp.supplies, opp->supplies, size);
 
-	rcu_read_unlock();
-	return set_opp(data);
+	ret = set_opp(data);
+
+put_opps:
+	dev_pm_opp_put(opp);
+put_old_opp:
+	if (!IS_ERR(old_opp))
+		dev_pm_opp_put(old_opp);
+put_opp_table:
+	dev_pm_opp_put_opp_table(opp_table);
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_set_rate);
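The rewritten `dev_pm_opp_set_rate()` replaces early `return`s under RCU with the kernel's usual goto-unwind ladder: every reference taken gets a matching put, and each error path jumps to the label that releases exactly what was acquired so far. A standalone sketch of that pattern, with all names illustrative and resources modeled as a simple counter:

```c
static int acquired;	/* counts live references/resources */

static void take(void) { acquired++; }	/* models _find_opp_table()/dev_pm_opp_get() */
static void drop(void) { acquired--; }	/* models dev_pm_opp_put*() */

static int do_transition(int fail_step)
{
	int ret = 0;

	take();				/* OPP table reference */
	if (fail_step == 1) {
		ret = -1;
		goto put_opp_table;
	}

	take();				/* old OPP */
	if (fail_step == 2) {
		ret = -1;
		goto put_old_opp;
	}

	take();				/* new OPP */

	/* ... perform the actual frequency/voltage switch here ... */

	drop();				/* put new OPP */
put_old_opp:
	drop();				/* put old OPP */
put_opp_table:
	drop();				/* put table reference */

	return ret;
}
```

Whatever path is taken, `acquired` returns to zero, which is exactly the invariant the `put_opps`/`put_old_opp`/`put_opp_table` labels above maintain.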
 /* OPP-dev Helpers */
-static void _kfree_opp_dev_rcu(struct rcu_head *head)
-{
-	struct opp_device *opp_dev;
-
-	opp_dev = container_of(head, struct opp_device, rcu_head);
-	kfree_rcu(opp_dev, rcu_head);
-}
-
 static void _remove_opp_dev(struct opp_device *opp_dev,
 			    struct opp_table *opp_table)
 {
 	opp_debug_unregister(opp_dev, opp_table);
 	list_del(&opp_dev->node);
-	call_srcu(&opp_table->srcu_head.srcu, &opp_dev->rcu_head,
-		  _kfree_opp_dev_rcu);
+	kfree(opp_dev);
 }
 struct opp_device *_add_opp_dev(const struct device *dev,
@@ -809,7 +723,7 @@ struct opp_device *_add_opp_dev(const struct device *dev,
 	/* Initialize opp-dev */
 	opp_dev->dev = dev;
-	list_add_rcu(&opp_dev->node, &opp_table->dev_list);
+	list_add(&opp_dev->node, &opp_table->dev_list);
 
 	/* Create debugfs entries for the opp_table */
 	ret = opp_debug_register(opp_dev, opp_table);
@@ -820,26 +734,12 @@ struct opp_device *_add_opp_dev(const struct device *dev,
 	return opp_dev;
 }
 
-/**
- * _add_opp_table() - Find OPP table or allocate a new one
- * @dev:	device for which we do this operation
- *
- * It tries to find an existing table first, if it couldn't find one, it
- * allocates a new OPP table and returns that.
- *
- * Return: valid opp_table pointer if success, else NULL.
- */
-static struct opp_table *_add_opp_table(struct device *dev)
+static struct opp_table *_allocate_opp_table(struct device *dev)
 {
 	struct opp_table *opp_table;
 	struct opp_device *opp_dev;
 	int ret;
 
-	/* Check for existing table for 'dev' first */
-	opp_table = _find_opp_table(dev);
-	if (!IS_ERR(opp_table))
-		return opp_table;
-
 	/*
 	 * Allocate a new OPP table. In the infrequent case where a new
 	 * device is needed to be added, we pay this penalty.
@@ -867,50 +767,45 @@ static struct opp_table *_add_opp_table(struct device *dev)
 			ret);
 	}
 
-	srcu_init_notifier_head(&opp_table->srcu_head);
+	BLOCKING_INIT_NOTIFIER_HEAD(&opp_table->head);
 	INIT_LIST_HEAD(&opp_table->opp_list);
+	mutex_init(&opp_table->lock);
+	kref_init(&opp_table->kref);
 
 	/* Secure the device table modification */
-	list_add_rcu(&opp_table->node, &opp_tables);
+	list_add(&opp_table->node, &opp_tables);
 	return opp_table;
 }
-/**
- * _kfree_device_rcu() - Free opp_table RCU handler
- * @head:	RCU head
- */
-static void _kfree_device_rcu(struct rcu_head *head)
+void _get_opp_table_kref(struct opp_table *opp_table)
 {
-	struct opp_table *opp_table = container_of(head, struct opp_table,
-						   rcu_head);
-
-	kfree_rcu(opp_table, rcu_head);
+	kref_get(&opp_table->kref);
 }
 
-/**
- * _remove_opp_table() - Removes a OPP table
- * @opp_table: OPP table to be removed.
- *
- * Removes/frees OPP table if it doesn't contain any OPPs.
- */
-static void _remove_opp_table(struct opp_table *opp_table)
+struct opp_table *dev_pm_opp_get_opp_table(struct device *dev)
 {
-	struct opp_device *opp_dev;
+	struct opp_table *opp_table;
 
-	if (!list_empty(&opp_table->opp_list))
-		return;
+	/* Hold our table modification lock here */
+	mutex_lock(&opp_table_lock);
 
-	if (opp_table->supported_hw)
-		return;
+	opp_table = _find_opp_table_unlocked(dev);
+	if (!IS_ERR(opp_table))
+		goto unlock;
 
-	if (opp_table->prop_name)
-		return;
+	opp_table = _allocate_opp_table(dev);
 
-	if (opp_table->regulators)
-		return;
+unlock:
+	mutex_unlock(&opp_table_lock);
 
-	if (opp_table->set_opp)
-		return;
+	return opp_table;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_get_opp_table);
+
+static void _opp_table_kref_release(struct kref *kref)
+{
+	struct opp_table *opp_table = container_of(kref, struct opp_table, kref);
+	struct opp_device *opp_dev;
 
 	/* Release clk */
 	if (!IS_ERR(opp_table->clk))
@@ -924,63 +819,60 @@ static void _remove_opp_table(struct opp_table *opp_table)
 	/* dev_list must be empty now */
 	WARN_ON(!list_empty(&opp_table->dev_list));
 
-	list_del_rcu(&opp_table->node);
-	call_srcu(&opp_table->srcu_head.srcu, &opp_table->rcu_head,
-		  _kfree_device_rcu);
+	mutex_destroy(&opp_table->lock);
+	list_del(&opp_table->node);
+	kfree(opp_table);
+
+	mutex_unlock(&opp_table_lock);
 }
 
-/**
- * _kfree_opp_rcu() - Free OPP RCU handler
- * @head:	RCU head
- */
-static void _kfree_opp_rcu(struct rcu_head *head)
+void dev_pm_opp_put_opp_table(struct opp_table *opp_table)
 {
-	struct dev_pm_opp *opp = container_of(head, struct dev_pm_opp, rcu_head);
-
+	kref_put_mutex(&opp_table->kref, _opp_table_kref_release,
+		       &opp_table_lock);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_put_opp_table);
-	kfree_rcu(opp, rcu_head);
+void _opp_free(struct dev_pm_opp *opp)
+{
+	kfree(opp);
 }
 
-/**
- * _opp_remove()  - Remove an OPP from a table definition
- * @opp_table:	points back to the opp_table struct this opp belongs to
- * @opp:	pointer to the OPP to remove
- * @notify:	OPP_EVENT_REMOVE notification should be sent or not
- *
- * This function removes an opp definition from the opp table.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * It is assumed that the caller holds required mutex for an RCU updater
- * strategy.
- */
-void _opp_remove(struct opp_table *opp_table, struct dev_pm_opp *opp,
-		 bool notify)
+static void _opp_kref_release(struct kref *kref)
 {
+	struct dev_pm_opp *opp = container_of(kref, struct dev_pm_opp, kref);
+	struct opp_table *opp_table = opp->opp_table;
+
 	/*
 	 * Notify the changes in the availability of the operable
 	 * frequency/voltage list.
 	 */
-	if (notify)
-		srcu_notifier_call_chain(&opp_table->srcu_head,
-					 OPP_EVENT_REMOVE, opp);
+	blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_REMOVE, opp);
 	opp_debug_remove_one(opp);
-	list_del_rcu(&opp->node);
-	call_srcu(&opp_table->srcu_head.srcu, &opp->rcu_head, _kfree_opp_rcu);
+	list_del(&opp->node);
+	kfree(opp);
 
-	_remove_opp_table(opp_table);
+	mutex_unlock(&opp_table->lock);
+	dev_pm_opp_put_opp_table(opp_table);
+}
+
+static void dev_pm_opp_get(struct dev_pm_opp *opp)
+{
+	kref_get(&opp->kref);
 }
+
+void dev_pm_opp_put(struct dev_pm_opp *opp)
+{
+	kref_put_mutex(&opp->kref, _opp_kref_release, &opp->opp_table->lock);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_put);
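The `kref_put_mutex()` calls introduced above run the release callback with the relevant lock held, so the final put can never race a list walker into observing a half-torn-down object. A single-threaded userspace model of that drop-to-zero pattern; the plain `int` counter and the `list_locked` flag are illustrative stand-ins for the kernel's atomic `kref` and `opp_table->lock`:

```c
struct obj {
	int refcount;		/* kernel: struct kref */
	int released;
};

static int list_locked;		/* stands in for the protecting mutex */

static void obj_get(struct obj *o)
{
	o->refcount++;		/* kernel: kref_get() */
}

static void obj_put(struct obj *o)
{
	if (--o->refcount > 0)
		return;

	/* last reference: take the lock, then tear down */
	list_locked = 1;	/* kernel: mutex taken by kref_put_mutex() */
	o->released = 1;	/* a real release would list_del() and free */
	list_locked = 0;	/* kernel: mutex_unlock() in the release */
}
```

The key property is that the object survives as long as any holder has not called put, which is why the finders above can now return OPP pointers that stay valid outside any lock.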
 /**
  * dev_pm_opp_remove()  - Remove an OPP from OPP table
  * @dev:	device for which we do this operation
  * @freq:	OPP to remove with matching 'freq'
  *
  * This function removes an opp from the opp table.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 void dev_pm_opp_remove(struct device *dev, unsigned long freq)
 {
@@ -988,12 +880,11 @@ void dev_pm_opp_remove(struct device *dev, unsigned long freq)
 	struct opp_table *opp_table;
 	bool found = false;
 
-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
-
 	opp_table = _find_opp_table(dev);
 	if (IS_ERR(opp_table))
-		goto unlock;
+		return;
+
+	mutex_lock(&opp_table->lock);
 
 	list_for_each_entry(opp, &opp_table->opp_list, node) {
 		if (opp->rate == freq) {
@@ -1002,28 +893,23 @@ void dev_pm_opp_remove(struct device *dev, unsigned long freq)
 		}
 	}
 
-	if (!found) {
+	mutex_unlock(&opp_table->lock);
+
+	if (found) {
+		dev_pm_opp_put(opp);
+	} else {
 		dev_warn(dev, "%s: Couldn't find OPP with freq: %lu\n",
 			 __func__, freq);
-		goto unlock;
 	}
 
-	_opp_remove(opp_table, opp, true);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_remove);
-struct dev_pm_opp *_allocate_opp(struct device *dev,
-				 struct opp_table **opp_table)
+struct dev_pm_opp *_opp_allocate(struct opp_table *table)
 {
 	struct dev_pm_opp *opp;
 	int count, supply_size;
-	struct opp_table *table;
-
-	table = _add_opp_table(dev);
-	if (!table)
-		return NULL;
 
 	/* Allocate space for at least one supply */
 	count = table->regulator_count ? table->regulator_count : 1;
@@ -1031,17 +917,13 @@ struct dev_pm_opp *_allocate_opp(struct device *dev,
 
 	/* allocate new OPP node and supplies structures */
 	opp = kzalloc(sizeof(*opp) + supply_size, GFP_KERNEL);
-	if (!opp) {
-		kfree(table);
+	if (!opp)
 		return NULL;
-	}
 
 	/* Put the supplies at the end of the OPP structure as an empty array */
 	opp->supplies = (struct dev_pm_opp_supply *)(opp + 1);
 	INIT_LIST_HEAD(&opp->node);
 
-	*opp_table = table;
-
 	return opp;
 }
@@ -1067,11 +949,21 @@ static bool _opp_supported_by_regulators(struct dev_pm_opp *opp,
 	return true;
 }
 
+/*
+ * Returns:
+ * 0: On success. And appropriate error message for duplicate OPPs.
+ * -EBUSY: For OPP with same freq/volt and is available. The callers of
+ *  _opp_add() must return 0 if they receive -EBUSY from it. This is to make
+ *  sure we don't print error messages unnecessarily if different parts of
+ *  kernel try to initialize the OPP table.
+ * -EEXIST: For OPP with same freq but different volt or is unavailable. This
+ *  should be considered an error by the callers of _opp_add().
+ */
 int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
 	     struct opp_table *opp_table)
 {
 	struct dev_pm_opp *opp;
-	struct list_head *head = &opp_table->opp_list;
+	struct list_head *head;
 	int ret;
 
 	/*
@@ -1082,7 +974,10 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
 	 * loop, don't replace it with head otherwise it will become an infinite
 	 * loop.
 	 */
-	list_for_each_entry_rcu(opp, &opp_table->opp_list, node) {
+	mutex_lock(&opp_table->lock);
+	head = &opp_table->opp_list;
+
+	list_for_each_entry(opp, &opp_table->opp_list, node) {
 		if (new_opp->rate > opp->rate) {
 			head = &opp->node;
 			continue;
@@ -1098,12 +993,21 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
 			 new_opp->supplies[0].u_volt, new_opp->available);
 
 		/* Should we compare voltages for all regulators here ? */
-		return opp->available &&
-		       new_opp->supplies[0].u_volt == opp->supplies[0].u_volt ? 0 : -EEXIST;
+		ret = opp->available &&
+		      new_opp->supplies[0].u_volt == opp->supplies[0].u_volt ? -EBUSY : -EEXIST;
+
+		mutex_unlock(&opp_table->lock);
+		return ret;
 	}
 
+	list_add(&new_opp->node, head);
+	mutex_unlock(&opp_table->lock);
+
 	new_opp->opp_table = opp_table;
-	list_add_rcu(&new_opp->node, head);
+	kref_init(&new_opp->kref);
+
+	/* Get a reference to the OPP table */
+	_get_opp_table_kref(opp_table);
 
 	ret = opp_debug_create_one(new_opp, opp_table);
 	if (ret)
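The duplicate handling documented in the new comment block for `_opp_add()` (sorted insert; same freq and volt reports a soft `-EBUSY`, same freq but different volt a hard `-EEXIST`) can be modeled in userspace with arrays in place of the kernel list. All names and error values here are illustrative:

```c
#include <stddef.h>
#include <string.h>

#define MAX_OPPS  8
#define ERR_BUSY  (-1)	/* same freq, same volt: callers treat as success */
#define ERR_EXIST (-2)	/* same freq, different volt: hard error */

struct opp { unsigned long rate, u_volt; };

/* Insert keeping tab[] sorted ascending by rate, as _opp_add() keeps
 * opp_list. Caller must ensure *n < MAX_OPPS before a new insert. */
static int opp_add(struct opp *tab, size_t *n, unsigned long rate,
		   unsigned long u_volt)
{
	size_t i;

	for (i = 0; i < *n && tab[i].rate < rate; i++)
		;
	if (i < *n && tab[i].rate == rate)
		return tab[i].u_volt == u_volt ? ERR_BUSY : ERR_EXIST;

	/* shift the tail up and slot the new entry in */
	memmove(&tab[i + 1], &tab[i], (*n - i) * sizeof(*tab));
	tab[i] = (struct opp){ rate, u_volt };
	(*n)++;
	return 0;
}
```

The soft/hard split lets two kernel subsystems register the same table independently without spurious error messages, while still flagging genuinely conflicting definitions.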
@@ -1121,6 +1025,7 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
 /**
  * _opp_add_v1() - Allocate a OPP based on v1 bindings.
+ * @opp_table:	OPP table
  * @dev:	device for which we do this operation
  * @freq:	Frequency in Hz for this OPP
  * @u_volt:	Voltage in uVolts for this OPP
@@ -1133,12 +1038,6 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
  * NOTE: "dynamic" parameter impacts OPPs added by the dev_pm_opp_of_add_table
  * and freed by dev_pm_opp_of_remove_table.
  *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
- *
  * Return:
  * 0		On success OR
  *		Duplicate OPPs (both freq and volt are same) and opp->available
@@ -1146,22 +1045,16 @@ int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
  *		Duplicate OPPs (both freq and volt are same) and !opp->available
  * -ENOMEM	Memory allocation failure
  */
-int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt,
-		bool dynamic)
+int _opp_add_v1(struct opp_table *opp_table, struct device *dev,
+		unsigned long freq, long u_volt, bool dynamic)
 {
-	struct opp_table *opp_table;
 	struct dev_pm_opp *new_opp;
 	unsigned long tol;
 	int ret;
 
-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
-
-	new_opp = _allocate_opp(dev, &opp_table);
-	if (!new_opp) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
+	new_opp = _opp_allocate(opp_table);
+	if (!new_opp)
+		return -ENOMEM;
 
 	/* populate the opp table */
 	new_opp->rate = freq;
@@ -1173,22 +1066,23 @@ int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt,
 	new_opp->dynamic = dynamic;
 
 	ret = _opp_add(dev, new_opp, opp_table);
-	if (ret)
+	if (ret) {
+		/* Don't return error for duplicate OPPs */
+		if (ret == -EBUSY)
+			ret = 0;
 		goto free_opp;
-
-	mutex_unlock(&opp_table_lock);
+	}
 
 	/*
 	 * Notify the changes in the availability of the operable
 	 * frequency/voltage list.
 	 */
-	srcu_notifier_call_chain(&opp_table->srcu_head, OPP_EVENT_ADD, new_opp);
+	blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_ADD, new_opp);
 	return 0;
 
 free_opp:
-	_opp_remove(opp_table, new_opp, false);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	_opp_free(new_opp);
+
 	return ret;
 }
@@ -1202,27 +1096,16 @@ int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt,
  * specify the hierarchy of versions it supports. OPP layer will then enable
  * OPPs, which are available for those versions, based on its 'opp-supported-hw'
  * property.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
-int dev_pm_opp_set_supported_hw(struct device *dev, const u32 *versions,
-				unsigned int count)
+struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev,
+			const u32 *versions, unsigned int count)
 {
 	struct opp_table *opp_table;
-	int ret = 0;
-
-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
+	int ret;
 
-	opp_table = _add_opp_table(dev);
-	if (!opp_table) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return ERR_PTR(-ENOMEM);
 
 	/* Make sure there are no concurrent readers while updating opp_table */
 	WARN_ON(!list_empty(&opp_table->opp_list));
@@ -1243,65 +1126,40 @@ int dev_pm_opp_set_supported_hw(struct device *dev, const u32 *versions,
 	}
 
 	opp_table->supported_hw_count = count;
-	mutex_unlock(&opp_table_lock);
 
-	return 0;
+	return opp_table;
 
 err:
-	_remove_opp_table(opp_table);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);
 
-	return ret;
+	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_set_supported_hw);
 
 /**
  * dev_pm_opp_put_supported_hw() - Releases resources blocked for supported hw
- * @dev: Device for which supported-hw has to be put.
+ * @opp_table: OPP table returned by dev_pm_opp_set_supported_hw().
  *
  * This is required only for the V2 bindings, and is called for a matching
  * dev_pm_opp_set_supported_hw(). Until this is called, the opp_table structure
  * will not be freed.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
-void dev_pm_opp_put_supported_hw(struct device *dev)
+void dev_pm_opp_put_supported_hw(struct opp_table *opp_table)
 {
-	struct opp_table *opp_table;
-
-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
-
-	/* Check for existing table for 'dev' first */
-	opp_table = _find_opp_table(dev);
-	if (IS_ERR(opp_table)) {
-		dev_err(dev, "Failed to find opp_table: %ld\n",
-			PTR_ERR(opp_table));
-		goto unlock;
-	}
-
 	/* Make sure there are no concurrent readers while updating opp_table */
 	WARN_ON(!list_empty(&opp_table->opp_list));
 
 	if (!opp_table->supported_hw) {
-		dev_err(dev, "%s: Doesn't have supported hardware list\n",
-			__func__);
-		goto unlock;
+		pr_err("%s: Doesn't have supported hardware list\n",
+		       __func__);
+		return;
 	}
 
 	kfree(opp_table->supported_hw);
 	opp_table->supported_hw = NULL;
 	opp_table->supported_hw_count = 0;
 
-	/* Try freeing opp_table if this was the last blocking resource */
-	_remove_opp_table(opp_table);
-
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_put_supported_hw);
@@ -1314,26 +1172,15 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_put_supported_hw);
  * specify the extn to be used for certain property names. The properties to
  * which the extension will apply are opp-microvolt and opp-microamp. OPP core
  * should postfix the property name with -<name> while looking for them.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
-int dev_pm_opp_set_prop_name(struct device *dev, const char *name)
+struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name)
 {
 	struct opp_table *opp_table;
-	int ret = 0;
-
-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
+	int ret;
 
-	opp_table = _add_opp_table(dev);
-	if (!opp_table) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return ERR_PTR(-ENOMEM);
 
 	/* Make sure there are no concurrent readers while updating opp_table */
 	WARN_ON(!list_empty(&opp_table->opp_list));
@@ -1352,63 +1199,37 @@ int dev_pm_opp_set_prop_name(struct device *dev, const char *name)
 		goto err;
 	}
 
-	mutex_unlock(&opp_table_lock);
-
-	return 0;
+	return opp_table;
 
 err:
-	_remove_opp_table(opp_table);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);
 
-	return ret;
+	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_set_prop_name);
 
 /**
  * dev_pm_opp_put_prop_name() - Releases resources blocked for prop-name
- * @dev: Device for which the prop-name has to be put.
+ * @opp_table: OPP table returned by dev_pm_opp_set_prop_name().
  *
  * This is required only for the V2 bindings, and is called for a matching
  * dev_pm_opp_set_prop_name(). Until this is called, the opp_table structure
 * will not be freed.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
 */
-void dev_pm_opp_put_prop_name(struct device *dev)
+void dev_pm_opp_put_prop_name(struct opp_table *opp_table)
 {
-	struct opp_table *opp_table;
-
-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
-
-	/* Check for existing table for 'dev' first */
-	opp_table = _find_opp_table(dev);
-	if (IS_ERR(opp_table)) {
-		dev_err(dev, "Failed to find opp_table: %ld\n",
-			PTR_ERR(opp_table));
-		goto unlock;
-	}
-
 	/* Make sure there are no concurrent readers while updating opp_table */
 	WARN_ON(!list_empty(&opp_table->opp_list));
 
 	if (!opp_table->prop_name) {
-		dev_err(dev, "%s: Doesn't have a prop-name\n", __func__);
-		goto unlock;
+		pr_err("%s: Doesn't have a prop-name\n", __func__);
+		return;
 	}
 
 	kfree(opp_table->prop_name);
 	opp_table->prop_name = NULL;
 
-	/* Try freeing opp_table if this was the last blocking resource */
-	_remove_opp_table(opp_table);
-
-unlock:
-	mutex_unlock(&opp_table_lock);
+	dev_pm_opp_put_opp_table(opp_table);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_put_prop_name);
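These setters now return `ERR_PTR(-ENOMEM)` and friends instead of a plain errno, following the kernel's `ERR_PTR()`/`IS_ERR()` convention: a small negative errno is folded into the pointer value itself, landing in the top page of the address space where no valid object can live. A userspace model of the same trick; the names mirror the kernel macros but this is a sketch, not the kernel implementation:

```c
#include <stdint.h>

#define MAX_ERRNO 4095	/* errnos occupy the top 4095 pointer values */

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

One return value then carries either a valid `opp_table` pointer or the failure reason, which is what lets callers write `if (IS_ERR(opp_table)) return PTR_ERR(opp_table);`.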
@@ -1455,12 +1276,6 @@ static void _free_set_opp_data(struct opp_table *opp_table)
  * well.
  *
  * This must be called before any OPPs are initialized for the device.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
                                             const char * const names[],
@@ -1470,13 +1285,9 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
     struct regulator *reg;
     int ret, i;

-    mutex_lock(&opp_table_lock);
-
-    opp_table = _add_opp_table(dev);
-    if (!opp_table) {
-        ret = -ENOMEM;
-        goto unlock;
-    }
+    opp_table = dev_pm_opp_get_opp_table(dev);
+    if (!opp_table)
+        return ERR_PTR(-ENOMEM);

     /* This should be called before OPPs are initialized */
     if (WARN_ON(!list_empty(&opp_table->opp_list))) {
@@ -1518,7 +1329,6 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
     if (ret)
         goto free_regulators;

-    mutex_unlock(&opp_table_lock);
-
     return opp_table;

 free_regulators:
@@ -1529,9 +1339,7 @@ struct opp_table *dev_pm_opp_set_regulators(struct device *dev,
     opp_table->regulators = NULL;
     opp_table->regulator_count = 0;

 err:
-    _remove_opp_table(opp_table);
-unlock:
-    mutex_unlock(&opp_table_lock);
+    dev_pm_opp_put_opp_table(opp_table);

     return ERR_PTR(ret);
 }
@@ -1540,22 +1348,14 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_regulators);

 /**
  * dev_pm_opp_put_regulators() - Releases resources blocked for regulator
  * @opp_table: OPP table returned from dev_pm_opp_set_regulators().
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 void dev_pm_opp_put_regulators(struct opp_table *opp_table)
 {
     int i;

-    mutex_lock(&opp_table_lock);
-
     if (!opp_table->regulators) {
         pr_err("%s: Doesn't have regulators set\n", __func__);
-        goto unlock;
+        return;
     }

     /* Make sure there are no concurrent readers while updating opp_table */
@@ -1570,11 +1370,7 @@ void dev_pm_opp_put_regulators(struct opp_table *opp_table)
     opp_table->regulators = NULL;
     opp_table->regulator_count = 0;

-    /* Try freeing opp_table if this was the last blocking resource */
-    _remove_opp_table(opp_table);
-unlock:
-    mutex_unlock(&opp_table_lock);
+    dev_pm_opp_put_opp_table(opp_table);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_put_regulators);
@@ -1587,29 +1383,19 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_put_regulators);
  * regulators per device), instead of the generic OPP set rate helper.
  *
  * This must be called before any OPPs are initialized for the device.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
-int dev_pm_opp_register_set_opp_helper(struct device *dev,
+struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev,
             int (*set_opp)(struct dev_pm_set_opp_data *data))
 {
     struct opp_table *opp_table;
     int ret;

     if (!set_opp)
-        return -EINVAL;
+        return ERR_PTR(-EINVAL);

-    mutex_lock(&opp_table_lock);
-
-    opp_table = _add_opp_table(dev);
-    if (!opp_table) {
-        ret = -ENOMEM;
-        goto unlock;
-    }
+    opp_table = dev_pm_opp_get_opp_table(dev);
+    if (!opp_table)
+        return ERR_PTR(-ENOMEM);

     /* This should be called before OPPs are initialized */
     if (WARN_ON(!list_empty(&opp_table->opp_list))) {
@@ -1625,47 +1411,28 @@ int dev_pm_opp_register_set_opp_helper(struct device *dev,
     opp_table->set_opp = set_opp;

-    mutex_unlock(&opp_table_lock);
-
-    return 0;
+    return opp_table;

 err:
-    _remove_opp_table(opp_table);
-unlock:
-    mutex_unlock(&opp_table_lock);
+    dev_pm_opp_put_opp_table(opp_table);

-    return ret;
+    return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_register_set_opp_helper);

 /**
  * dev_pm_opp_register_put_opp_helper() - Releases resources blocked for
  *                                        set_opp helper
- * @dev: Device for which custom set_opp helper has to be cleared.
+ * @opp_table: OPP table returned from dev_pm_opp_register_set_opp_helper().
  *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
+ * Release resources blocked for platform specific set_opp helper.
  */
-void dev_pm_opp_register_put_opp_helper(struct device *dev)
+void dev_pm_opp_register_put_opp_helper(struct opp_table *opp_table)
 {
-    struct opp_table *opp_table;
-
-    mutex_lock(&opp_table_lock);
-
-    /* Check for existing table for 'dev' first */
-    opp_table = _find_opp_table(dev);
-    if (IS_ERR(opp_table)) {
-        dev_err(dev, "Failed to find opp_table: %ld\n",
-            PTR_ERR(opp_table));
-        goto unlock;
-    }
-
     if (!opp_table->set_opp) {
-        dev_err(dev, "%s: Doesn't have custom set_opp helper set\n",
-            __func__);
-        goto unlock;
+        pr_err("%s: Doesn't have custom set_opp helper set\n",
+               __func__);
+        return;
     }

     /* Make sure there are no concurrent readers while updating opp_table */
@@ -1673,11 +1440,7 @@ void dev_pm_opp_register_put_opp_helper(struct device *dev)
     opp_table->set_opp = NULL;

-    /* Try freeing opp_table if this was the last blocking resource */
-    _remove_opp_table(opp_table);
-unlock:
-    mutex_unlock(&opp_table_lock);
+    dev_pm_opp_put_opp_table(opp_table);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_register_put_opp_helper);
@@ -1691,12 +1454,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_register_put_opp_helper);
  * The opp is made available by default and it can be controlled using
  * dev_pm_opp_enable/disable functions.
  *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
- *
  * Return:
  * 0            On success OR
  *              Duplicate OPPs (both freq and volt are same) and opp->available
@@ -1706,7 +1463,17 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_register_put_opp_helper);
  */
 int dev_pm_opp_add(struct device *dev, unsigned long freq, unsigned long u_volt)
 {
-    return _opp_add_v1(dev, freq, u_volt, true);
+    struct opp_table *opp_table;
+    int ret;
+
+    opp_table = dev_pm_opp_get_opp_table(dev);
+    if (!opp_table)
+        return -ENOMEM;
+
+    ret = _opp_add_v1(opp_table, dev, freq, u_volt, true);
+
+    dev_pm_opp_put_opp_table(opp_table);
+
+    return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_add);
@@ -1716,41 +1483,30 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_add);
  * @freq:               OPP frequency to modify availability
  * @availability_req:   availability status requested for this opp
  *
- * Set the availability of an OPP with an RCU operation, opp_{enable,disable}
- * share a common logic which is isolated here.
+ * Set the availability of an OPP, opp_{enable,disable} share a common logic
+ * which is isolated here.
  *
  * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the
  * copy operation, returns 0 if no modification was done OR modification was
  * successful.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks to
- * keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex locking or synchronize_rcu() blocking calls cannot be used.
  */
 static int _opp_set_availability(struct device *dev, unsigned long freq,
                                  bool availability_req)
 {
     struct opp_table *opp_table;
-    struct dev_pm_opp *new_opp, *tmp_opp, *opp = ERR_PTR(-ENODEV);
+    struct dev_pm_opp *tmp_opp, *opp = ERR_PTR(-ENODEV);
     int r = 0;

-    /* keep the node allocated */
-    new_opp = kmalloc(sizeof(*new_opp), GFP_KERNEL);
-    if (!new_opp)
-        return -ENOMEM;
-
-    mutex_lock(&opp_table_lock);
-
     /* Find the opp_table */
     opp_table = _find_opp_table(dev);
     if (IS_ERR(opp_table)) {
         r = PTR_ERR(opp_table);
         dev_warn(dev, "%s: Device OPP not found (%d)\n", __func__, r);
-        goto unlock;
+        return r;
     }

+    mutex_lock(&opp_table->lock);
+
     /* Do we have the frequency? */
     list_for_each_entry(tmp_opp, &opp_table->opp_list, node) {
         if (tmp_opp->rate == freq) {
@@ -1758,6 +1514,7 @@ static int _opp_set_availability(struct device *dev, unsigned long freq,
             break;
         }
     }
+
     if (IS_ERR(opp)) {
         r = PTR_ERR(opp);
         goto unlock;
@@ -1766,29 +1523,20 @@ static int _opp_set_availability(struct device *dev, unsigned long freq,
     /* Is update really needed? */
     if (opp->available == availability_req)
         goto unlock;

-    /* copy the old data over */
-    *new_opp = *opp;
-
-    /* plug in new node */
-    new_opp->available = availability_req;
-
-    list_replace_rcu(&opp->node, &new_opp->node);
-    mutex_unlock(&opp_table_lock);
-    call_srcu(&opp_table->srcu_head.srcu, &opp->rcu_head, _kfree_opp_rcu);
+    opp->available = availability_req;

     /* Notify the change of the OPP availability */
     if (availability_req)
-        srcu_notifier_call_chain(&opp_table->srcu_head,
-                                 OPP_EVENT_ENABLE, new_opp);
+        blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_ENABLE,
+                                     opp);
     else
-        srcu_notifier_call_chain(&opp_table->srcu_head,
-                                 OPP_EVENT_DISABLE, new_opp);
+        blocking_notifier_call_chain(&opp_table->head,
+                                     OPP_EVENT_DISABLE, opp);

-    return 0;
-
 unlock:
-    mutex_unlock(&opp_table_lock);
-    kfree(new_opp);
+    mutex_unlock(&opp_table->lock);
+    dev_pm_opp_put_opp_table(opp_table);
+
     return r;
 }
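The hunk above replaces the RCU copy-update dance (allocate a copy, flip the flag on the copy, `list_replace_rcu()`, free the old node after a grace period) with a direct in-place write under the new per-table mutex, since readers now take that mutex too. A small user-space model of the new in-place path (not kernel code; names are illustrative):

```c
#include <pthread.h>
#include <stdbool.h>

/* Illustrative OPP entry guarded by a per-table lock, like the new
 * opp_table->lock in the diff above. */
struct opp {
    unsigned long rate;
    bool available;
    pthread_mutex_t *lock;      /* shared, per-table lock */
};

static int set_availability(struct opp *opp, bool req)
{
    pthread_mutex_lock(opp->lock);

    /* Is update really needed? */
    if (opp->available == req) {
        pthread_mutex_unlock(opp->lock);
        return 0;
    }

    /* In-place flip: no allocation, no copy, no grace period */
    opp->available = req;

    pthread_mutex_unlock(opp->lock);
    return 0;
}
```

This is why the diff can delete the `kmalloc()`/`kfree()` pair and the `-ENOMEM` path for the copy entirely: there is no longer a shadow node to allocate.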
@@ -1801,12 +1549,6 @@ static int _opp_set_availability(struct device *dev, unsigned long freq,
  * corresponding error value. It is meant to be used for users an OPP available
  * after being temporarily made unavailable with dev_pm_opp_disable.
  *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU and mutex locks to keep the
- * integrity of the internal data structures. Callers should ensure that
- * this function is *NOT* called under RCU protection or in contexts where
- * mutex locking or synchronize_rcu() blocking calls cannot be used.
- *
  * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the
  * copy operation, returns 0 if no modification was done OR modification was
  * successful.
@@ -1827,12 +1569,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_enable);
  * control by users to make this OPP not available until the circumstances are
  * right to make it available again (with a call to dev_pm_opp_enable).
  *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU and mutex locks to keep the
- * integrity of the internal data structures. Callers should ensure that
- * this function is *NOT* called under RCU protection or in contexts where
- * mutex locking or synchronize_rcu() blocking calls cannot be used.
- *
  * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the
  * copy operation, returns 0 if no modification was done OR modification was
  * successful.
@@ -1844,41 +1580,78 @@ int dev_pm_opp_disable(struct device *dev, unsigned long freq)
 EXPORT_SYMBOL_GPL(dev_pm_opp_disable);

 /**
- * dev_pm_opp_get_notifier() - find notifier_head of the device with opp
- * @dev: device pointer used to lookup OPP table.
+ * dev_pm_opp_register_notifier() - Register OPP notifier for the device
+ * @dev:        Device for which notifier needs to be registered
+ * @nb:         Notifier block to be registered
  *
- * Return: pointer to notifier head if found, otherwise -ENODEV or
- * -EINVAL based on type of error casted as pointer. value must be checked
- * with IS_ERR to determine valid pointer or error result.
+ * Return: 0 on success or a negative error value.
+ */
+int dev_pm_opp_register_notifier(struct device *dev, struct notifier_block *nb)
+{
+    struct opp_table *opp_table;
+    int ret;
+
+    opp_table = _find_opp_table(dev);
+    if (IS_ERR(opp_table))
+        return PTR_ERR(opp_table);
+
+    ret = blocking_notifier_chain_register(&opp_table->head, nb);
+
+    dev_pm_opp_put_opp_table(opp_table);
+
+    return ret;
+}
+EXPORT_SYMBOL(dev_pm_opp_register_notifier);
+
+/**
+ * dev_pm_opp_unregister_notifier() - Unregister OPP notifier for the device
+ * @dev:        Device for which notifier needs to be unregistered
+ * @nb:         Notifier block to be unregistered
  *
- * Locking: This function must be called under rcu_read_lock(). opp_table is a
- * RCU protected pointer. The reason for the same is that the opp pointer which
- * is returned will remain valid for use with opp_get_{voltage, freq} only while
- * under the locked area. The pointer returned must be used prior to unlocking
- * with rcu_read_unlock() to maintain the integrity of the pointer.
+ * Return: 0 on success or a negative error value.
  */
-struct srcu_notifier_head *dev_pm_opp_get_notifier(struct device *dev)
+int dev_pm_opp_unregister_notifier(struct device *dev,
+                                   struct notifier_block *nb)
 {
-    struct opp_table *opp_table = _find_opp_table(dev);
+    struct opp_table *opp_table;
+    int ret;
+
+    opp_table = _find_opp_table(dev);

     if (IS_ERR(opp_table))
-        return ERR_CAST(opp_table); /* matching type */
+        return PTR_ERR(opp_table);
+
+    ret = blocking_notifier_chain_unregister(&opp_table->head, nb);

-    return &opp_table->srcu_head;
+    dev_pm_opp_put_opp_table(opp_table);
+
+    return ret;
 }
-EXPORT_SYMBOL_GPL(dev_pm_opp_get_notifier);
+EXPORT_SYMBOL(dev_pm_opp_unregister_notifier);
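With the SRCU head gone, callers no longer receive a raw notifier head; they register and unregister through wrappers, and the chain itself becomes a blocking (mutex-guarded, sleepable) one. A tiny user-space model of such a blocking chain (not the kernel implementation; the fixed-size array and names are simplifications for illustration):

```c
#include <pthread.h>

/* Illustrative blocking notifier chain: callbacks kept in an array and
 * invoked under a mutex, so callbacks are allowed to sleep. */
#define MAX_NB 4

typedef int (*notifier_fn)(unsigned long event, void *data);

struct blocking_chain {
    pthread_mutex_t lock;
    notifier_fn nb[MAX_NB];
    int count;
};

static int chain_register(struct blocking_chain *c, notifier_fn fn)
{
    pthread_mutex_lock(&c->lock);
    if (c->count == MAX_NB) {
        pthread_mutex_unlock(&c->lock);
        return -1;
    }
    c->nb[c->count++] = fn;
    pthread_mutex_unlock(&c->lock);
    return 0;
}

static void chain_call(struct blocking_chain *c, unsigned long event,
                       void *data)
{
    /* Plain mutex, not an RCU read section: callbacks may block */
    pthread_mutex_lock(&c->lock);
    for (int i = 0; i < c->count; i++)
        c->nb[i](event, data);
    pthread_mutex_unlock(&c->lock);
}

/* Sample callback recording the last event it saw */
static unsigned long last_event;

static int on_event(unsigned long event, void *data)
{
    (void)data;
    last_event = event;
    return 0;
}
```

The trade-off mirrored here: the SRCU chain let readers run lock-free at the cost of grace-period bookkeeping; the blocking chain is simpler and fine for the OPP case, where notifications are rare and already run in sleepable context.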
 /*
  * Free OPPs either created using static entries present in DT or even the
  * dynamically added entries based on remove_all param.
  */
-void _dev_pm_opp_remove_table(struct device *dev, bool remove_all)
+void _dev_pm_opp_remove_table(struct opp_table *opp_table, struct device *dev,
+                              bool remove_all)
 {
-    struct opp_table *opp_table;
     struct dev_pm_opp *opp, *tmp;

-    /* Hold our table modification lock here */
-    mutex_lock(&opp_table_lock);
+    /* Find if opp_table manages a single device */
+    if (list_is_singular(&opp_table->dev_list)) {
+        /* Free static OPPs */
+        list_for_each_entry_safe(opp, tmp, &opp_table->opp_list, node) {
+            if (remove_all || !opp->dynamic)
+                dev_pm_opp_put(opp);
+        }
+    } else {
+        _remove_opp_dev(_find_opp_dev(dev, opp_table), opp_table);
+    }
+}
+
+void _dev_pm_opp_find_and_remove_table(struct device *dev, bool remove_all)
+{
+    struct opp_table *opp_table;

     /* Check for existing table for 'dev' */
     opp_table = _find_opp_table(dev);
@@ -1890,22 +1663,12 @@ void _dev_pm_opp_remove_table(struct device *dev, bool remove_all)
                 IS_ERR_OR_NULL(dev) ?
                         "Invalid device" : dev_name(dev),
                 error);
-        goto unlock;
+        return;
     }

-    /* Find if opp_table manages a single device */
-    if (list_is_singular(&opp_table->dev_list)) {
-        /* Free static OPPs */
-        list_for_each_entry_safe(opp, tmp, &opp_table->opp_list, node) {
-            if (remove_all || !opp->dynamic)
-                _opp_remove(opp_table, opp, true);
-        }
-    } else {
-        _remove_opp_dev(_find_opp_dev(dev, opp_table), opp_table);
-    }
+    _dev_pm_opp_remove_table(opp_table, dev, remove_all);

-unlock:
-    mutex_unlock(&opp_table_lock);
+    dev_pm_opp_put_opp_table(opp_table);
 }
 /**
@@ -1914,15 +1677,9 @@ void _dev_pm_opp_remove_table(struct device *dev, bool remove_all)
  *
  * Free both OPPs created using static entries present in DT and the
  * dynamically added entries.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 void dev_pm_opp_remove_table(struct device *dev)
 {
-    _dev_pm_opp_remove_table(dev, true);
+    _dev_pm_opp_find_and_remove_table(dev, true);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_remove_table);
@@ -42,11 +42,6 @@
  *
  * WARNING: It is important for the callers to ensure refreshing their copy of
  * the table if any of the mentioned functions have been invoked in the interim.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Since we just use the regular accessor functions to access the internal data
- * structures, we use RCU read lock inside this function. As a result, users of
- * this function DONOT need to use explicit locks for invoking.
  */
 int dev_pm_opp_init_cpufreq_table(struct device *dev,
                                   struct cpufreq_frequency_table **table)
@@ -56,19 +51,13 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev,
     int i, max_opps, ret = 0;
     unsigned long rate;

-    rcu_read_lock();
-
     max_opps = dev_pm_opp_get_opp_count(dev);
-    if (max_opps <= 0) {
-        ret = max_opps ? max_opps : -ENODATA;
-        goto out;
-    }
+    if (max_opps <= 0)
+        return max_opps ? max_opps : -ENODATA;

     freq_table = kcalloc((max_opps + 1), sizeof(*freq_table), GFP_ATOMIC);
-    if (!freq_table) {
-        ret = -ENOMEM;
-        goto out;
-    }
+    if (!freq_table)
+        return -ENOMEM;

     for (i = 0, rate = 0; i < max_opps; i++, rate++) {
         /* find next rate */
@@ -83,6 +72,8 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev,
         /* Is Boost/turbo opp ? */
         if (dev_pm_opp_is_turbo(opp))
             freq_table[i].flags = CPUFREQ_BOOST_FREQ;
+
+        dev_pm_opp_put(opp);
     }

     freq_table[i].driver_data = i;
@@ -91,7 +82,6 @@ int dev_pm_opp_init_cpufreq_table(struct device *dev,
     *table = &freq_table[0];

 out:
-    rcu_read_unlock();
     if (ret)
         kfree(freq_table);
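The new `dev_pm_opp_put(opp)` inside the loop is the per-OPP counterpart of the table-level change: each find call now hands back a referenced OPP, and the `rcu_read_lock()` bracket around the whole walk disappears. A user-space model of that contract (not kernel code; the static array and helper names are illustrative):

```c
#include <stddef.h>

/* Illustrative OPP entries with a per-entry reference count */
struct opp {
    unsigned long rate;
    int ref;
};

static struct opp opps[] = { { 500UL, 0 }, { 1000UL, 0 }, { 1500UL, 0 } };

/* Models find_freq_ceil(): returns the lowest OPP at or above *freq,
 * with a reference handed to the caller, and updates *freq in place. */
static struct opp *find_freq_ceil(unsigned long *freq)
{
    for (size_t i = 0; i < sizeof(opps) / sizeof(opps[0]); i++) {
        if (opps[i].rate >= *freq) {
            opps[i].ref++;          /* caller now owns a reference */
            *freq = opps[i].rate;
            return &opps[i];
        }
    }
    return NULL;
}

/* Models dev_pm_opp_put(): drops the caller's reference */
static void opp_put(struct opp *opp)
{
    opp->ref--;
}
```

As in the loop above, every successful find must be balanced by a put once the caller has copied out what it needs (here, the rate and turbo flag).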
@@ -147,12 +137,6 @@ void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, bool of)
  * This removes the OPP tables for CPUs present in the @cpumask.
  * This should be used to remove all the OPPs entries associated with
  * the cpus in @cpumask.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 void dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask)
 {
@@ -169,12 +153,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_cpumask_remove_table);
  * @cpumask.
  *
  * Returns -ENODEV if OPP table isn't already present.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
                                 const struct cpumask *cpumask)
@@ -184,13 +162,9 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
     struct device *dev;
     int cpu, ret = 0;

-    mutex_lock(&opp_table_lock);
-
     opp_table = _find_opp_table(cpu_dev);
-    if (IS_ERR(opp_table)) {
-        ret = PTR_ERR(opp_table);
-        goto unlock;
-    }
+    if (IS_ERR(opp_table))
+        return PTR_ERR(opp_table);

     for_each_cpu(cpu, cpumask) {
         if (cpu == cpu_dev->id)
@@ -213,8 +187,8 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
         /* Mark opp-table as multiple CPUs are sharing it now */
         opp_table->shared_opp = OPP_TABLE_ACCESS_SHARED;
     }
-unlock:
-    mutex_unlock(&opp_table_lock);
+
+    dev_pm_opp_put_opp_table(opp_table);

     return ret;
 }
@@ -229,12 +203,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_set_sharing_cpus);
  *
  * Returns -ENODEV if OPP table isn't already present and -EINVAL if the OPP
  * table's status is access-unknown.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
 {
@@ -242,17 +210,13 @@ int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
     struct opp_table *opp_table;
     int ret = 0;

-    mutex_lock(&opp_table_lock);
-
     opp_table = _find_opp_table(cpu_dev);
-    if (IS_ERR(opp_table)) {
-        ret = PTR_ERR(opp_table);
-        goto unlock;
-    }
+    if (IS_ERR(opp_table))
+        return PTR_ERR(opp_table);

     if (opp_table->shared_opp == OPP_TABLE_ACCESS_UNKNOWN) {
         ret = -EINVAL;
-        goto unlock;
+        goto put_opp_table;
     }

     cpumask_clear(cpumask);
@@ -264,8 +228,8 @@ int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
         cpumask_set_cpu(cpu_dev->id, cpumask);
     }

-unlock:
-    mutex_unlock(&opp_table_lock);
+put_opp_table:
+    dev_pm_opp_put_opp_table(opp_table);

     return ret;
 }
@@ -24,9 +24,11 @@
 static struct opp_table *_managed_opp(const struct device_node *np)
 {
-    struct opp_table *opp_table;
+    struct opp_table *opp_table, *managed_table = NULL;
+
+    mutex_lock(&opp_table_lock);

-    list_for_each_entry_rcu(opp_table, &opp_tables, node) {
+    list_for_each_entry(opp_table, &opp_tables, node) {
         if (opp_table->np == np) {
             /*
              * Multiple devices can point to the same OPP table and
@@ -35,14 +37,18 @@ static struct opp_table *_managed_opp(const struct device_node *np)
              * But the OPPs will be considered as shared only if the
              * OPP table contains a "opp-shared" property.
              */
-            if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED)
-                return opp_table;
+            if (opp_table->shared_opp == OPP_TABLE_ACCESS_SHARED) {
+                _get_opp_table_kref(opp_table);
+                managed_table = opp_table;
+            }

-            return NULL;
+            break;
         }
     }

-    return NULL;
+    mutex_unlock(&opp_table_lock);
+
+    return managed_table;
 }

 void _of_init_opp_table(struct opp_table *opp_table, struct device *dev)
@@ -229,34 +235,28 @@ static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
  * @dev: device pointer used to lookup OPP table.
  *
  * Free OPPs created using static entries present in DT.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 void dev_pm_opp_of_remove_table(struct device *dev)
 {
-    _dev_pm_opp_remove_table(dev, false);
+    _dev_pm_opp_find_and_remove_table(dev, false);
 }
 EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);

 /* Returns opp descriptor node for a device, caller must do of_node_put() */
-static struct device_node *_of_get_opp_desc_node(struct device *dev)
+struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev)
 {
     /*
-     * TODO: Support for multiple OPP tables.
-     *
      * There should be only ONE phandle present in "operating-points-v2"
      * property.
      */

     return of_parse_phandle(dev->of_node, "operating-points-v2", 0);
 }
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_opp_desc_node);
 /**
  * _opp_add_static_v2() - Allocate static OPPs (As per 'v2' DT bindings)
+ * @opp_table:	OPP table
  * @dev:	device for which we do this operation
  * @np:		device node
  *
@@ -264,12 +264,6 @@ static struct device_node *_of_get_opp_desc_node(struct device *dev)
  * opp can be controlled using dev_pm_opp_enable/disable functions and may be
  * removed by dev_pm_opp_remove.
  *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
- *
  * Return:
  * 0		On success OR
  *		Duplicate OPPs (both freq and volt are same) and opp->available
@@ -278,22 +272,17 @@ static struct device_node *_of_get_opp_desc_node(struct device *dev)
  * -ENOMEM	Memory allocation failure
  * -EINVAL	Failed parsing the OPP node
  */
-static int _opp_add_static_v2(struct device *dev, struct device_node *np)
+static int _opp_add_static_v2(struct opp_table *opp_table, struct device *dev,
+			      struct device_node *np)
 {
-	struct opp_table *opp_table;
 	struct dev_pm_opp *new_opp;
 	u64 rate;
 	u32 val;
 	int ret;

-	/* Hold our table modification lock here */
-	mutex_lock(&opp_table_lock);
-
-	new_opp = _allocate_opp(dev, &opp_table);
-	if (!new_opp) {
-		ret = -ENOMEM;
-		goto unlock;
-	}
+	new_opp = _opp_allocate(opp_table);
+	if (!new_opp)
+		return -ENOMEM;
 	ret = of_property_read_u64(np, "opp-hz", &rate);
 	if (ret < 0) {
@@ -327,8 +316,12 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
 		goto free_opp;

 	ret = _opp_add(dev, new_opp, opp_table);
-	if (ret)
+	if (ret) {
+		/* Don't return error for duplicate OPPs */
+		if (ret == -EBUSY)
+			ret = 0;
 		goto free_opp;
+	}
 	/* OPP to select on device suspend */
 	if (of_property_read_bool(np, "opp-suspend")) {
@@ -345,8 +338,6 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
 	if (new_opp->clock_latency_ns > opp_table->clock_latency_ns_max)
 		opp_table->clock_latency_ns_max = new_opp->clock_latency_ns;

-	mutex_unlock(&opp_table_lock);
 	pr_debug("%s: turbo:%d rate:%lu uv:%lu uvmin:%lu uvmax:%lu latency:%lu\n",
 		 __func__, new_opp->turbo, new_opp->rate,
 		 new_opp->supplies[0].u_volt, new_opp->supplies[0].u_volt_min,
@@ -356,13 +347,12 @@ static int _opp_add_static_v2(struct device *dev, struct device_node *np)
 	 * Notify the changes in the availability of the operable
 	 * frequency/voltage list.
 	 */
-	srcu_notifier_call_chain(&opp_table->srcu_head, OPP_EVENT_ADD, new_opp);
+	blocking_notifier_call_chain(&opp_table->head, OPP_EVENT_ADD, new_opp);
 	return 0;

 free_opp:
-	_opp_remove(opp_table, new_opp, false);
-unlock:
-	mutex_unlock(&opp_table_lock);
+	_opp_free(new_opp);
+
 	return ret;
 }
@@ -373,41 +363,35 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 	struct opp_table *opp_table;
 	int ret = 0, count = 0;

-	mutex_lock(&opp_table_lock);
-
 	opp_table = _managed_opp(opp_np);
 	if (opp_table) {
 		/* OPPs are already managed */
 		if (!_add_opp_dev(dev, opp_table))
 			ret = -ENOMEM;
-		mutex_unlock(&opp_table_lock);
-		return ret;
+		goto put_opp_table;
 	}
-	mutex_unlock(&opp_table_lock);
+
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return -ENOMEM;
 	/* We have opp-table node now, iterate over it and add OPPs */
 	for_each_available_child_of_node(opp_np, np) {
 		count++;

-		ret = _opp_add_static_v2(dev, np);
+		ret = _opp_add_static_v2(opp_table, dev, np);
 		if (ret) {
 			dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
 				ret);
-			goto free_table;
+			_dev_pm_opp_remove_table(opp_table, dev, false);
+			goto put_opp_table;
 		}
 	}

 	/* There should be one or more OPPs defined */
-	if (WARN_ON(!count))
-		return -ENOENT;
-
-	mutex_lock(&opp_table_lock);
-	opp_table = _find_opp_table(dev);
-	if (WARN_ON(IS_ERR(opp_table))) {
-		ret = PTR_ERR(opp_table);
-		mutex_unlock(&opp_table_lock);
-		goto free_table;
+	if (WARN_ON(!count)) {
+		ret = -ENOENT;
+		goto put_opp_table;
 	}

 	opp_table->np = opp_np;
@@ -416,12 +400,8 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 	else
 		opp_table->shared_opp = OPP_TABLE_ACCESS_EXCLUSIVE;

-	mutex_unlock(&opp_table_lock);
-
-	return 0;
-
-free_table:
-	dev_pm_opp_of_remove_table(dev);
+put_opp_table:
+	dev_pm_opp_put_opp_table(opp_table);

 	return ret;
 }
@@ -429,9 +409,10 @@ static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
 /* Initializes OPP tables based on old-deprecated bindings */
 static int _of_add_opp_table_v1(struct device *dev)
 {
+	struct opp_table *opp_table;
 	const struct property *prop;
 	const __be32 *val;
-	int nr;
+	int nr, ret = 0;

 	prop = of_find_property(dev->of_node, "operating-points", NULL);
 	if (!prop)
@@ -449,18 +430,27 @@ static int _of_add_opp_table_v1(struct device *dev)
 		return -EINVAL;
 	}
+	opp_table = dev_pm_opp_get_opp_table(dev);
+	if (!opp_table)
+		return -ENOMEM;
+
 	val = prop->value;
 	while (nr) {
 		unsigned long freq = be32_to_cpup(val++) * 1000;
 		unsigned long volt = be32_to_cpup(val++);

-		if (_opp_add_v1(dev, freq, volt, false))
-			dev_warn(dev, "%s: Failed to add OPP %ld\n",
-				 __func__, freq);
+		ret = _opp_add_v1(opp_table, dev, freq, volt, false);
+		if (ret) {
+			dev_err(dev, "%s: Failed to add OPP %ld (%d)\n",
+				__func__, freq, ret);
+			_dev_pm_opp_remove_table(opp_table, dev, false);
+			break;
+		}
 		nr -= 2;
 	}

-	return 0;
+	dev_pm_opp_put_opp_table(opp_table);
+
+	return ret;
 }
 /**
@@ -469,12 +459,6 @@ static int _of_add_opp_table_v1(struct device *dev)
  *
  * Register the initial OPP table with the OPP library for given device.
  *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
- *
  * Return:
  * 0		On success OR
  *		Duplicate OPPs (both freq and volt are same) and opp->available
@@ -495,7 +479,7 @@ int dev_pm_opp_of_add_table(struct device *dev)
 	 * OPPs have two versions of bindings now. The older one is deprecated,
 	 * try for the new binding first.
 	 */
-	opp_np = _of_get_opp_desc_node(dev);
+	opp_np = dev_pm_opp_of_get_opp_desc_node(dev);
 	if (!opp_np) {
 		/*
 		 * Try old-deprecated bindings for backward compatibility with
@@ -519,12 +503,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_add_table);
  *
  * This removes the OPP tables for CPUs present in the @cpumask.
  * This should be used only to remove static entries created from DT.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 void dev_pm_opp_of_cpumask_remove_table(const struct cpumask *cpumask)
 {
@@ -537,12 +515,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_remove_table);
  * @cpumask: cpumask for which OPP table needs to be added.
  *
  * This adds the OPP tables for CPUs present in the @cpumask.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask)
 {
@@ -590,12 +562,6 @@ EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table);
  * This updates the @cpumask with CPUs that are sharing OPPs with @cpu_dev.
  *
  * Returns -ENOENT if operating-points-v2 isn't present for @cpu_dev.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
  */
 int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
 				   struct cpumask *cpumask)
@@ -605,7 +571,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
 	int cpu, ret = 0;

 	/* Get OPP descriptor node */
-	np = _of_get_opp_desc_node(cpu_dev);
+	np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
 	if (!np) {
 		dev_dbg(cpu_dev, "%s: Couldn't find opp node.\n", __func__);
 		return -ENOENT;
@@ -630,7 +596,7 @@ int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
 	}

 	/* Get OPP descriptor node */
-	tmp_np = _of_get_opp_desc_node(tcpu_dev);
+	tmp_np = dev_pm_opp_of_get_opp_desc_node(tcpu_dev);
 	if (!tmp_np) {
 		dev_err(tcpu_dev, "%s: Couldn't find opp node.\n",
 			__func__);
......
@@ -16,11 +16,11 @@
 #include <linux/device.h>
 #include <linux/kernel.h>
+#include <linux/kref.h>
 #include <linux/list.h>
 #include <linux/limits.h>
 #include <linux/pm_opp.h>
-#include <linux/rculist.h>
-#include <linux/rcupdate.h>
+#include <linux/notifier.h>

 struct clk;
 struct regulator;
@@ -51,11 +51,9 @@ extern struct list_head opp_tables;
  * @node:	opp table node. The nodes are maintained throughout the lifetime
  *		of boot. It is expected only an optimal set of OPPs are
  *		added to the library by the SoC framework.
- *		RCU usage: opp table is traversed with RCU locks. node
- *		modification is possible realtime, hence the modifications
- *		are protected by the opp_table_lock for integrity.
  *		IMPORTANT: the opp nodes should be maintained in increasing
  *		order.
+ * @kref:	for reference count of the OPP.
  * @available:	true/false - marks if this OPP as available or not
  * @dynamic:	not-created from static DT entries.
  * @turbo:	true if turbo (boost) OPP
@@ -65,7 +63,6 @@ extern struct list_head opp_tables;
  * @clock_latency_ns: Latency (in nanoseconds) of switching to this OPP's
  *		frequency from any other OPP's frequency.
  * @opp_table:	points back to the opp_table struct this opp belongs to
- * @rcu_head:	RCU callback head used for deferred freeing
  * @np:		OPP's device node.
  * @dentry:	debugfs dentry pointer (per opp)
  *
@@ -73,6 +70,7 @@ extern struct list_head opp_tables;
  */
 struct dev_pm_opp {
 	struct list_head node;
+	struct kref kref;
 	bool available;
 	bool dynamic;
@@ -85,7 +83,6 @@ struct dev_pm_opp {
 	unsigned long clock_latency_ns;

 	struct opp_table *opp_table;
-	struct rcu_head rcu_head;

 	struct device_node *np;
@@ -98,7 +95,6 @@ struct dev_pm_opp {
  * struct opp_device - devices managed by 'struct opp_table'
  * @node:	list node
  * @dev:	device to which the struct object belongs
- * @rcu_head:	RCU callback head used for deferred freeing
  * @dentry:	debugfs dentry pointer (per device)
  *
  * This is an internal data structure maintaining the devices that are managed
@@ -107,7 +103,6 @@ struct dev_pm_opp {
 struct opp_device {
 	struct list_head node;
 	const struct device *dev;
-	struct rcu_head rcu_head;

 #ifdef CONFIG_DEBUG_FS
 	struct dentry *dentry;
@@ -125,12 +120,11 @@ enum opp_table_access {
  * @node:	table node - contains the devices with OPPs that
  *		have been registered. Nodes once added are not modified in this
  *		table.
- *		RCU usage: nodes are not modified in the table of opp_table,
- *		however addition is possible and is secured by opp_table_lock
- * @srcu_head:	notifier head to notify the OPP availability changes.
- * @rcu_head:	RCU callback head used for deferred freeing
+ * @head:	notifier head to notify the OPP availability changes.
  * @dev_list:	list of devices that share these OPPs
  * @opp_list:	table of opps
+ * @kref:	for reference count of the table.
+ * @lock:	mutex protecting the opp_list.
  * @np:		struct device_node pointer for opp's DT node.
  * @clock_latency_ns_max: Max clock latency in nanoseconds.
  * @shared_opp: OPP is shared between multiple devices.
@@ -151,18 +145,15 @@ enum opp_table_access {
  * This is an internal data structure maintaining the link to opps attached to
  * a device. This structure is not meant to be shared to users as it is
  * meant for book keeping and private to OPP library.
- *
- * Because the opp structures can be used from both rcu and srcu readers, we
- * need to wait for the grace period of both of them before freeing any
- * resources. And so we have used kfree_rcu() from within call_srcu() handlers.
  */
 struct opp_table {
 	struct list_head node;

-	struct srcu_notifier_head srcu_head;
-	struct rcu_head rcu_head;
+	struct blocking_notifier_head head;
 	struct list_head dev_list;
 	struct list_head opp_list;
+	struct kref kref;
+	struct mutex lock;

 	struct device_node *np;
 	unsigned long clock_latency_ns_max;
@@ -190,14 +181,17 @@ struct opp_table {
 };
 /* Routines internal to opp core */
+void _get_opp_table_kref(struct opp_table *opp_table);
 struct opp_table *_find_opp_table(struct device *dev);
 struct opp_device *_add_opp_dev(const struct device *dev, struct opp_table *opp_table);
-void _dev_pm_opp_remove_table(struct device *dev, bool remove_all);
-struct dev_pm_opp *_allocate_opp(struct device *dev, struct opp_table **opp_table);
+void _dev_pm_opp_remove_table(struct opp_table *opp_table, struct device *dev, bool remove_all);
+void _dev_pm_opp_find_and_remove_table(struct device *dev, bool remove_all);
+struct dev_pm_opp *_opp_allocate(struct opp_table *opp_table);
+void _opp_free(struct dev_pm_opp *opp);
 int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, struct opp_table *opp_table);
-void _opp_remove(struct opp_table *opp_table, struct dev_pm_opp *opp, bool notify);
-int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt, bool dynamic);
+int _opp_add_v1(struct opp_table *opp_table, struct device *dev, unsigned long freq, long u_volt, bool dynamic);
 void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, bool of);
+struct opp_table *_add_opp_table(struct device *dev);

 #ifdef CONFIG_OF
 void _of_init_opp_table(struct opp_table *opp_table, struct device *dev);
......
@@ -633,16 +633,12 @@ static int find_lut_index_for_rate(struct tegra_dfll *td, unsigned long rate)
 	struct dev_pm_opp *opp;
 	int i, uv;

-	rcu_read_lock();
-
 	opp = dev_pm_opp_find_freq_ceil(td->soc->dev, &rate);
-	if (IS_ERR(opp)) {
-		rcu_read_unlock();
+	if (IS_ERR(opp))
 		return PTR_ERR(opp);
-	}
-	uv = dev_pm_opp_get_voltage(opp);

-	rcu_read_unlock();
+	uv = dev_pm_opp_get_voltage(opp);
+	dev_pm_opp_put(opp);

 	for (i = 0; i < td->i2c_lut_size; i++) {
 		if (regulator_list_voltage(td->vdd_reg, td->i2c_lut[i]) == uv)
@@ -1440,8 +1436,6 @@ static int dfll_build_i2c_lut(struct tegra_dfll *td)
 	struct dev_pm_opp *opp;
 	int lut;

-	rcu_read_lock();
-
 	rate = ULONG_MAX;
 	opp = dev_pm_opp_find_freq_floor(td->soc->dev, &rate);
 	if (IS_ERR(opp)) {
@@ -1449,6 +1443,7 @@ static int dfll_build_i2c_lut(struct tegra_dfll *td)
 		goto out;
 	}
 	v_max = dev_pm_opp_get_voltage(opp);
+	dev_pm_opp_put(opp);

 	v = td->soc->cvb->min_millivolts * 1000;
 	lut = find_vdd_map_entry_exact(td, v);
@@ -1465,6 +1460,8 @@ static int dfll_build_i2c_lut(struct tegra_dfll *td)
 		if (v_opp <= td->soc->cvb->min_millivolts * 1000)
 			td->dvco_rate_min = dev_pm_opp_get_freq(opp);

+		dev_pm_opp_put(opp);
+
 		for (;;) {
 			v += max(1, (v_max - v) / (MAX_DFLL_VOLTAGES - j));
 			if (v >= v_opp)
@@ -1496,8 +1493,6 @@ static int dfll_build_i2c_lut(struct tegra_dfll *td)
 	ret = 0;

 out:
-	rcu_read_unlock();
-
 	return ret;
 }
......
@@ -148,7 +148,6 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	struct private_data *priv;
 	struct device *cpu_dev;
 	struct clk *cpu_clk;
-	struct dev_pm_opp *suspend_opp;
 	unsigned int transition_latency;
 	bool fallback = false;
 	const char *name;
@@ -252,11 +251,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
 	policy->driver_data = priv;
 	policy->clk = cpu_clk;

-	rcu_read_lock();
-	suspend_opp = dev_pm_opp_get_suspend_opp(cpu_dev);
-	if (suspend_opp)
-		policy->suspend_freq = dev_pm_opp_get_freq(suspend_opp) / 1000;
-	rcu_read_unlock();
+	policy->suspend_freq = dev_pm_opp_get_suspend_opp_freq(cpu_dev) / 1000;

 	ret = cpufreq_table_validate_and_show(policy, freq_table);
 	if (ret) {
......
@@ -118,12 +118,10 @@ static int init_div_table(void)
 	unsigned int tmp, clk_div, ema_div, freq, volt_id;
 	struct dev_pm_opp *opp;

-	rcu_read_lock();
 	cpufreq_for_each_entry(pos, freq_tbl) {
 		opp = dev_pm_opp_find_freq_exact(dvfs_info->dev,
 					pos->frequency * 1000, true);
 		if (IS_ERR(opp)) {
-			rcu_read_unlock();
 			dev_err(dvfs_info->dev,
 				"failed to find valid OPP for %u KHZ\n",
 				pos->frequency);
@@ -140,6 +138,7 @@ static int init_div_table(void)
 		/* Calculate EMA */
 		volt_id = dev_pm_opp_get_voltage(opp);
 		volt_id = (MAX_VOLTAGE - volt_id) / VOLTAGE_STEP;
+
 		if (volt_id < PMIC_HIGH_VOLT) {
 			ema_div = (CPUEMA_HIGH << P0_7_CPUEMA_SHIFT) |
@@ -157,9 +156,9 @@ static int init_div_table(void)
 		__raw_writel(tmp, dvfs_info->base + XMU_PMU_P0_7 + 4 *
 						(pos - freq_tbl));
+
+		dev_pm_opp_put(opp);
 	}

-	rcu_read_unlock();
 	return 0;
 }
......
@@ -53,16 +53,15 @@ static int imx6q_set_target(struct cpufreq_policy *policy, unsigned int index)
 	freq_hz = new_freq * 1000;
 	old_freq = clk_get_rate(arm_clk) / 1000;

-	rcu_read_lock();
 	opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_hz);
 	if (IS_ERR(opp)) {
-		rcu_read_unlock();
 		dev_err(cpu_dev, "failed to find OPP for %ld\n", freq_hz);
 		return PTR_ERR(opp);
 	}

 	volt = dev_pm_opp_get_voltage(opp);
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);
+
 	volt_old = regulator_get_voltage(arm_reg);

 	dev_dbg(cpu_dev, "%u MHz, %ld mV --> %u MHz, %ld mV\n",
@@ -321,14 +320,15 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
 	 * freq_table initialised from OPP is therefore sorted in the
 	 * same order.
 	 */
-	rcu_read_lock();
 	opp = dev_pm_opp_find_freq_exact(cpu_dev,
 				  freq_table[0].frequency * 1000, true);
 	min_volt = dev_pm_opp_get_voltage(opp);
+	dev_pm_opp_put(opp);
+
 	opp = dev_pm_opp_find_freq_exact(cpu_dev,
 				  freq_table[--num].frequency * 1000, true);
 	max_volt = dev_pm_opp_get_voltage(opp);
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);

 	ret = regulator_set_voltage_time(arm_reg, min_volt, max_volt);
 	if (ret > 0)
 		transition_latency += ret * 1000;
......
@@ -232,16 +232,14 @@ static int mtk_cpufreq_set_target(struct cpufreq_policy *policy,
 	freq_hz = freq_table[index].frequency * 1000;

-	rcu_read_lock();
 	opp = dev_pm_opp_find_freq_ceil(cpu_dev, &freq_hz);
 	if (IS_ERR(opp)) {
-		rcu_read_unlock();
 		pr_err("cpu%d: failed to find OPP for %ld\n",
 		       policy->cpu, freq_hz);
 		return PTR_ERR(opp);
 	}
 	vproc = dev_pm_opp_get_voltage(opp);
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);

 	/*
 	 * If the new voltage or the intermediate voltage is higher than the
@@ -411,16 +409,14 @@ static int mtk_cpu_dvfs_info_init(struct mtk_cpu_dvfs_info *info, int cpu)
 	/* Search a safe voltage for intermediate frequency. */
 	rate = clk_get_rate(inter_clk);
-	rcu_read_lock();
 	opp = dev_pm_opp_find_freq_ceil(cpu_dev, &rate);
 	if (IS_ERR(opp)) {
-		rcu_read_unlock();
 		pr_err("failed to get intermediate opp for cpu%d\n", cpu);
 		ret = PTR_ERR(opp);
 		goto out_free_opp_table;
 	}
 	info->intermediate_voltage = dev_pm_opp_get_voltage(opp);
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);

 	info->cpu_dev = cpu_dev;
 	info->proc_reg = proc_reg;
......
@@ -63,16 +63,14 @@ static int omap_target(struct cpufreq_policy *policy, unsigned int index)
 	freq = ret;

 	if (mpu_reg) {
-		rcu_read_lock();
 		opp = dev_pm_opp_find_freq_ceil(mpu_dev, &freq);
 		if (IS_ERR(opp)) {
-			rcu_read_unlock();
 			dev_err(mpu_dev, "%s: unable to find MPU OPP for %d\n",
 				__func__, new_freq);
 			return -EINVAL;
 		}
 		volt = dev_pm_opp_get_voltage(opp);
-		rcu_read_unlock();
+		dev_pm_opp_put(opp);
 		tol = volt * OPP_TOLERANCE / 100;
 		volt_old = regulator_get_voltage(mpu_reg);
 	}
......
@@ -160,6 +160,7 @@ static int sti_cpufreq_set_opp_info(void)
 	int pcode, substrate, major, minor;
 	int ret;
 	char name[MAX_PCODE_NAME_LEN];
+	struct opp_table *opp_table;

 	reg_fields = sti_cpufreq_match();
 	if (!reg_fields) {
@@ -211,20 +212,20 @@ static int sti_cpufreq_set_opp_info(void)
 	snprintf(name, MAX_PCODE_NAME_LEN, "pcode%d", pcode);

-	ret = dev_pm_opp_set_prop_name(dev, name);
-	if (ret) {
+	opp_table = dev_pm_opp_set_prop_name(dev, name);
+	if (IS_ERR(opp_table)) {
 		dev_err(dev, "Failed to set prop name\n");
-		return ret;
+		return PTR_ERR(opp_table);
 	}

 	version[0] = BIT(major);
 	version[1] = BIT(minor);
 	version[2] = BIT(substrate);

-	ret = dev_pm_opp_set_supported_hw(dev, version, VERSION_ELEMENTS);
-	if (ret) {
+	opp_table = dev_pm_opp_set_supported_hw(dev, version, VERSION_ELEMENTS);
+	if (IS_ERR(opp_table)) {
 		dev_err(dev, "Failed to set supported hardware\n");
-		return ret;
+		return PTR_ERR(opp_table);
 	}

 	dev_dbg(dev, "pcode: %d major: %d minor: %d substrate: %d\n",
@@ -111,18 +111,16 @@ static void devfreq_set_freq_table(struct devfreq *devfreq)
 		return;
 	}
-	rcu_read_lock();
 	for (i = 0, freq = 0; i < profile->max_state; i++, freq++) {
 		opp = dev_pm_opp_find_freq_ceil(devfreq->dev.parent, &freq);
 		if (IS_ERR(opp)) {
 			devm_kfree(devfreq->dev.parent, profile->freq_table);
 			profile->max_state = 0;
-			rcu_read_unlock();
 			return;
 		}
+		dev_pm_opp_put(opp);
 		profile->freq_table[i] = freq;
 	}
-	rcu_read_unlock();
 }
 /**
@@ -1112,17 +1110,16 @@ static ssize_t available_frequencies_show(struct device *d,
 	ssize_t count = 0;
 	unsigned long freq = 0;
-	rcu_read_lock();
 	do {
 		opp = dev_pm_opp_find_freq_ceil(dev, &freq);
 		if (IS_ERR(opp))
 			break;
+		dev_pm_opp_put(opp);
 		count += scnprintf(&buf[count], (PAGE_SIZE - count - 2),
 				   "%lu ", freq);
 		freq++;
 	} while (1);
-	rcu_read_unlock();
 	/* Truncate the trailing space */
 	if (count)
@@ -1224,11 +1221,8 @@ subsys_initcall(devfreq_init);
  * @freq: The frequency given to target function
  * @flags: Flags handed from devfreq framework.
  *
- * Locking: This function must be called under rcu_read_lock(). opp is a rcu
- * protected pointer. The reason for the same is that the opp pointer which is
- * returned will remain valid for use with opp_get_{voltage, freq} only while
- * under the locked area. The pointer returned must be used prior to unlocking
- * with rcu_read_unlock() to maintain the integrity of the pointer.
+ * The callers are required to call dev_pm_opp_put() for the returned OPP after
+ * use.
  */
 struct dev_pm_opp *devfreq_recommended_opp(struct device *dev,
 					   unsigned long *freq,
@@ -1265,18 +1259,7 @@ EXPORT_SYMBOL(devfreq_recommended_opp);
  */
 int devfreq_register_opp_notifier(struct device *dev, struct devfreq *devfreq)
 {
-	struct srcu_notifier_head *nh;
-	int ret = 0;
-	rcu_read_lock();
-	nh = dev_pm_opp_get_notifier(dev);
-	if (IS_ERR(nh))
-		ret = PTR_ERR(nh);
-	rcu_read_unlock();
-	if (!ret)
-		ret = srcu_notifier_chain_register(nh, &devfreq->nb);
-	return ret;
+	return dev_pm_opp_register_notifier(dev, &devfreq->nb);
 }
 EXPORT_SYMBOL(devfreq_register_opp_notifier);
@@ -1292,18 +1275,7 @@ EXPORT_SYMBOL(devfreq_register_opp_notifier);
  */
 int devfreq_unregister_opp_notifier(struct device *dev, struct devfreq *devfreq)
 {
-	struct srcu_notifier_head *nh;
-	int ret = 0;
-	rcu_read_lock();
-	nh = dev_pm_opp_get_notifier(dev);
-	if (IS_ERR(nh))
-		ret = PTR_ERR(nh);
-	rcu_read_unlock();
-	if (!ret)
-		ret = srcu_notifier_chain_unregister(nh, &devfreq->nb);
-	return ret;
+	return dev_pm_opp_unregister_notifier(dev, &devfreq->nb);
 }
 EXPORT_SYMBOL(devfreq_unregister_opp_notifier);
...
@@ -103,18 +103,17 @@ static int exynos_bus_target(struct device *dev, unsigned long *freq, u32 flags)
 	int ret = 0;
 	/* Get new opp-bus instance according to new bus clock */
-	rcu_read_lock();
 	new_opp = devfreq_recommended_opp(dev, freq, flags);
 	if (IS_ERR(new_opp)) {
 		dev_err(dev, "failed to get recommended opp instance\n");
-		rcu_read_unlock();
 		return PTR_ERR(new_opp);
 	}
 	new_freq = dev_pm_opp_get_freq(new_opp);
 	new_volt = dev_pm_opp_get_voltage(new_opp);
+	dev_pm_opp_put(new_opp);
 	old_freq = bus->curr_freq;
-	rcu_read_unlock();
 	if (old_freq == new_freq)
 		return 0;
@@ -214,17 +213,16 @@ static int exynos_bus_passive_target(struct device *dev, unsigned long *freq,
 	int ret = 0;
 	/* Get new opp-bus instance according to new bus clock */
-	rcu_read_lock();
 	new_opp = devfreq_recommended_opp(dev, freq, flags);
 	if (IS_ERR(new_opp)) {
 		dev_err(dev, "failed to get recommended opp instance\n");
-		rcu_read_unlock();
 		return PTR_ERR(new_opp);
 	}
 	new_freq = dev_pm_opp_get_freq(new_opp);
+	dev_pm_opp_put(new_opp);
 	old_freq = bus->curr_freq;
-	rcu_read_unlock();
 	if (old_freq == new_freq)
 		return 0;
@@ -358,16 +356,14 @@ static int exynos_bus_parse_of(struct device_node *np,
 	rate = clk_get_rate(bus->clk);
-	rcu_read_lock();
 	opp = devfreq_recommended_opp(dev, &rate, 0);
 	if (IS_ERR(opp)) {
 		dev_err(dev, "failed to find dev_pm_opp\n");
-		rcu_read_unlock();
 		ret = PTR_ERR(opp);
 		goto err_opp;
 	}
 	bus->curr_freq = dev_pm_opp_get_freq(opp);
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);
 	return 0;
...
@@ -59,14 +59,14 @@ static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
 	 * list of parent device. Because in this case, *freq is temporary
 	 * value which is decided by ondemand governor.
 	 */
-	rcu_read_lock();
 	opp = devfreq_recommended_opp(parent_devfreq->dev.parent, freq, 0);
-	rcu_read_unlock();
 	if (IS_ERR(opp)) {
 		ret = PTR_ERR(opp);
 		goto out;
 	}
+	dev_pm_opp_put(opp);
 	/*
 	 * Get the OPP table's index of decided freqeuncy by governor
 	 * of parent device.
...
@@ -91,17 +91,13 @@ static int rk3399_dmcfreq_target(struct device *dev, unsigned long *freq,
 	unsigned long target_volt, target_rate;
 	int err;
-	rcu_read_lock();
 	opp = devfreq_recommended_opp(dev, freq, flags);
-	if (IS_ERR(opp)) {
-		rcu_read_unlock();
+	if (IS_ERR(opp))
 		return PTR_ERR(opp);
-	}
 	target_rate = dev_pm_opp_get_freq(opp);
 	target_volt = dev_pm_opp_get_voltage(opp);
+	dev_pm_opp_put(opp);
-	rcu_read_unlock();
 	if (dmcfreq->rate == target_rate)
 		return 0;
@@ -422,15 +418,13 @@ static int rk3399_dmcfreq_probe(struct platform_device *pdev)
 	data->rate = clk_get_rate(data->dmc_clk);
-	rcu_read_lock();
 	opp = devfreq_recommended_opp(dev, &data->rate, 0);
-	if (IS_ERR(opp)) {
-		rcu_read_unlock();
+	if (IS_ERR(opp))
 		return PTR_ERR(opp);
-	}
 	data->rate = dev_pm_opp_get_freq(opp);
 	data->volt = dev_pm_opp_get_voltage(opp);
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);
 	rk3399_devfreq_dmc_profile.initial_freq = data->rate;
...
@@ -487,15 +487,13 @@ static int tegra_devfreq_target(struct device *dev, unsigned long *freq,
 	struct dev_pm_opp *opp;
 	unsigned long rate = *freq * KHZ;
-	rcu_read_lock();
 	opp = devfreq_recommended_opp(dev, &rate, flags);
 	if (IS_ERR(opp)) {
-		rcu_read_unlock();
 		dev_err(dev, "Failed to find opp for %lu KHz\n", *freq);
 		return PTR_ERR(opp);
 	}
 	rate = dev_pm_opp_get_freq(opp);
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);
 	clk_set_min_rate(tegra->emc_clock, rate);
 	clk_set_rate(tegra->emc_clock, 0);
...
@@ -297,8 +297,6 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device,
 	if (!power_table)
 		return -ENOMEM;
-	rcu_read_lock();
 	for (freq = 0, i = 0;
 	     opp = dev_pm_opp_find_freq_ceil(dev, &freq), !IS_ERR(opp);
 	     freq++, i++) {
@@ -306,13 +304,13 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device,
 		u64 power;
 		if (i >= num_opps) {
-			rcu_read_unlock();
 			ret = -EAGAIN;
 			goto free_power_table;
 		}
 		freq_mhz = freq / 1000000;
 		voltage_mv = dev_pm_opp_get_voltage(opp) / 1000;
+		dev_pm_opp_put(opp);
 		/*
 		 * Do the multiplication with MHz and millivolt so as
@@ -328,8 +326,6 @@ static int build_dyn_power_table(struct cpufreq_cooling_device *cpufreq_device,
 		power_table[i].power = power;
 	}
-	rcu_read_unlock();
 	if (i != num_opps) {
 		ret = PTR_ERR(opp);
 		goto free_power_table;
@@ -433,13 +429,10 @@ static int get_static_power(struct cpufreq_cooling_device *cpufreq_device,
 		return 0;
 	}
-	rcu_read_lock();
 	opp = dev_pm_opp_find_freq_exact(cpufreq_device->cpu_dev, freq_hz,
 					 true);
 	voltage = dev_pm_opp_get_voltage(opp);
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);
 	if (voltage == 0) {
 		dev_warn_ratelimited(cpufreq_device->cpu_dev,
...
@@ -113,15 +113,15 @@ static int partition_enable_opps(struct devfreq_cooling_device *dfc,
 		unsigned int freq = dfc->freq_table[i];
 		bool want_enable = i >= cdev_state ? true : false;
-		rcu_read_lock();
 		opp = dev_pm_opp_find_freq_exact(dev, freq, !want_enable);
-		rcu_read_unlock();
 		if (PTR_ERR(opp) == -ERANGE)
 			continue;
 		else if (IS_ERR(opp))
 			return PTR_ERR(opp);
+		dev_pm_opp_put(opp);
 		if (want_enable)
 			ret = dev_pm_opp_enable(dev, freq);
 		else
@@ -221,15 +221,12 @@ get_static_power(struct devfreq_cooling_device *dfc, unsigned long freq)
 	if (!dfc->power_ops->get_static_power)
 		return 0;
-	rcu_read_lock();
 	opp = dev_pm_opp_find_freq_exact(dev, freq, true);
 	if (IS_ERR(opp) && (PTR_ERR(opp) == -ERANGE))
 		opp = dev_pm_opp_find_freq_exact(dev, freq, false);
 	voltage = dev_pm_opp_get_voltage(opp) / 1000; /* mV */
-	rcu_read_unlock();
+	dev_pm_opp_put(opp);
 	if (voltage == 0) {
 		dev_warn_ratelimited(dev,
@@ -412,18 +409,14 @@ static int devfreq_cooling_gen_tables(struct devfreq_cooling_device *dfc)
 		unsigned long power_dyn, voltage;
 		struct dev_pm_opp *opp;
-		rcu_read_lock();
 		opp = dev_pm_opp_find_freq_floor(dev, &freq);
 		if (IS_ERR(opp)) {
-			rcu_read_unlock();
 			ret = PTR_ERR(opp);
 			goto free_tables;
 		}
 		voltage = dev_pm_opp_get_voltage(opp) / 1000; /* mV */
-		rcu_read_unlock();
+		dev_pm_opp_put(opp);
 		if (dfc->power_ops) {
 			power_dyn = get_dynamic_power(dfc, freq, voltage);
...
@@ -78,6 +78,9 @@ struct dev_pm_set_opp_data {
 #if defined(CONFIG_PM_OPP)
+struct opp_table *dev_pm_opp_get_opp_table(struct device *dev);
+void dev_pm_opp_put_opp_table(struct opp_table *opp_table);
 unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp);
 unsigned long dev_pm_opp_get_freq(struct dev_pm_opp *opp);
@@ -88,7 +91,7 @@ int dev_pm_opp_get_opp_count(struct device *dev);
 unsigned long dev_pm_opp_get_max_clock_latency(struct device *dev);
 unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev);
 unsigned long dev_pm_opp_get_max_transition_latency(struct device *dev);
-struct dev_pm_opp *dev_pm_opp_get_suspend_opp(struct device *dev);
+unsigned long dev_pm_opp_get_suspend_opp_freq(struct device *dev);
 struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
 					      unsigned long freq,
@@ -99,6 +102,7 @@ struct dev_pm_opp *dev_pm_opp_find_freq_floor(struct device *dev,
 struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
 					     unsigned long *freq);
+void dev_pm_opp_put(struct dev_pm_opp *opp);
 int dev_pm_opp_add(struct device *dev, unsigned long freq,
 		   unsigned long u_volt);
@@ -108,22 +112,30 @@ int dev_pm_opp_enable(struct device *dev, unsigned long freq);
 int dev_pm_opp_disable(struct device *dev, unsigned long freq);
-struct srcu_notifier_head *dev_pm_opp_get_notifier(struct device *dev);
-int dev_pm_opp_set_supported_hw(struct device *dev, const u32 *versions,
-				unsigned int count);
-void dev_pm_opp_put_supported_hw(struct device *dev);
-int dev_pm_opp_set_prop_name(struct device *dev, const char *name);
-void dev_pm_opp_put_prop_name(struct device *dev);
+int dev_pm_opp_register_notifier(struct device *dev, struct notifier_block *nb);
+int dev_pm_opp_unregister_notifier(struct device *dev, struct notifier_block *nb);
+struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev, const u32 *versions, unsigned int count);
+void dev_pm_opp_put_supported_hw(struct opp_table *opp_table);
+struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name);
+void dev_pm_opp_put_prop_name(struct opp_table *opp_table);
 struct opp_table *dev_pm_opp_set_regulators(struct device *dev, const char * const names[], unsigned int count);
 void dev_pm_opp_put_regulators(struct opp_table *opp_table);
-int dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
-void dev_pm_opp_register_put_opp_helper(struct device *dev);
+struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev, int (*set_opp)(struct dev_pm_set_opp_data *data));
+void dev_pm_opp_register_put_opp_helper(struct opp_table *opp_table);
 int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq);
 int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, const struct cpumask *cpumask);
 int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask);
 void dev_pm_opp_remove_table(struct device *dev);
 void dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask);
 #else
+static inline struct opp_table *dev_pm_opp_get_opp_table(struct device *dev)
+{
+	return ERR_PTR(-ENOTSUPP);
+}
+static inline void dev_pm_opp_put_opp_table(struct opp_table *opp_table) {}
 static inline unsigned long dev_pm_opp_get_voltage(struct dev_pm_opp *opp)
 {
 	return 0;
@@ -159,9 +171,9 @@ static inline unsigned long dev_pm_opp_get_max_transition_latency(struct device
 	return 0;
 }
-static inline struct dev_pm_opp *dev_pm_opp_get_suspend_opp(struct device *dev)
+static inline unsigned long dev_pm_opp_get_suspend_opp_freq(struct device *dev)
 {
-	return NULL;
+	return 0;
 }
 static inline struct dev_pm_opp *dev_pm_opp_find_freq_exact(struct device *dev,
@@ -182,6 +194,8 @@ static inline struct dev_pm_opp *dev_pm_opp_find_freq_ceil(struct device *dev,
 	return ERR_PTR(-ENOTSUPP);
 }
+static inline void dev_pm_opp_put(struct dev_pm_opp *opp) {}
 static inline int dev_pm_opp_add(struct device *dev, unsigned long freq,
 				 unsigned long u_volt)
 {
@@ -202,35 +216,39 @@ static inline int dev_pm_opp_disable(struct device *dev, unsigned long freq)
 	return 0;
 }
-static inline struct srcu_notifier_head *dev_pm_opp_get_notifier(
-							struct device *dev)
+static inline int dev_pm_opp_register_notifier(struct device *dev, struct notifier_block *nb)
 {
-	return ERR_PTR(-ENOTSUPP);
+	return -ENOTSUPP;
 }
-static inline int dev_pm_opp_set_supported_hw(struct device *dev,
-					      const u32 *versions,
-					      unsigned int count)
+static inline int dev_pm_opp_unregister_notifier(struct device *dev, struct notifier_block *nb)
 {
 	return -ENOTSUPP;
 }
-static inline void dev_pm_opp_put_supported_hw(struct device *dev) {}
+static inline struct opp_table *dev_pm_opp_set_supported_hw(struct device *dev,
+							    const u32 *versions,
+							    unsigned int count)
+{
+	return ERR_PTR(-ENOTSUPP);
+}
+static inline void dev_pm_opp_put_supported_hw(struct opp_table *opp_table) {}
-static inline int dev_pm_opp_register_set_opp_helper(struct device *dev,
+static inline struct opp_table *dev_pm_opp_register_set_opp_helper(struct device *dev,
 			int (*set_opp)(struct dev_pm_set_opp_data *data))
 {
-	return -ENOTSUPP;
+	return ERR_PTR(-ENOTSUPP);
 }
-static inline void dev_pm_opp_register_put_opp_helper(struct device *dev) {}
+static inline void dev_pm_opp_register_put_opp_helper(struct opp_table *opp_table) {}
-static inline int dev_pm_opp_set_prop_name(struct device *dev, const char *name)
+static inline struct opp_table *dev_pm_opp_set_prop_name(struct device *dev, const char *name)
 {
-	return -ENOTSUPP;
+	return ERR_PTR(-ENOTSUPP);
}
-static inline void dev_pm_opp_put_prop_name(struct device *dev) {}
+static inline void dev_pm_opp_put_prop_name(struct opp_table *opp_table) {}
 static inline struct opp_table *dev_pm_opp_set_regulators(struct device *dev, const char * const names[], unsigned int count)
 {
@@ -270,6 +288,7 @@ void dev_pm_opp_of_remove_table(struct device *dev);
 int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask);
 void dev_pm_opp_of_cpumask_remove_table(const struct cpumask *cpumask);
 int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask);
+struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev);
 #else
 static inline int dev_pm_opp_of_add_table(struct device *dev)
 {
@@ -293,6 +312,11 @@ static inline int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, struct
 {
 	return -ENOTSUPP;
 }
+static inline struct device_node *dev_pm_opp_of_get_opp_desc_node(struct device *dev)
+{
+	return NULL;
+}
 #endif
 #endif	/* __LINUX_OPP_H__ */