Commit 1404d2e5 authored by Chen Yu, committed by Greg Kroah-Hartman

cpufreq: governors: Fix long idle detection logic in load calculation

commit 75920196 upstream.

In the current implementation, a long idle period is detected by
checking whether the interval between two adjacent utilization
update handler invocations is long enough. Although this mechanism
can detect that the idle period is long (no utilization hooks are
invoked during the idle period), it does not cover a corner case:
if a task occupies the CPU for so long that no context switch
happens during that period, no utilization handler is invoked
until this high-priority task is scheduled out. As a result, the
idle_periods field may be calculated incorrectly, because the code
treats a 100% load as 0% and confuses the conservative governor,
which relies on this field.

Change the detection to compare the idle_time with sampling_rate
directly.
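To illustrate the difference, here is a minimal userspace sketch (not
kernel code) of the two checks. The variable names mirror those used in
dbs_update(), and the sample values are made up for illustration: with a
CPU-hogging task, time_elapsed is large while idle_time stays near zero,
so only the old check misreads the fully busy period as idle.

#include <stdio.h>

int main(void)
{
	unsigned int sampling_rate = 10000;	/* sampling period, in usecs */

	/*
	 * A high-priority task hogs the CPU: the utilization update
	 * handler has not run for a long time, but the CPU was never idle.
	 */
	unsigned int time_elapsed = 80000;	/* time since the last update */
	unsigned int idle_time = 0;		/* idle time in that window */

	/* Old check: a long gap between updates is taken as a long idle. */
	if (time_elapsed > 2 * sampling_rate)
		printf("old check: 100%% busy period misread as idle\n");

	/* New check: only a genuinely large idle_time indicates a long idle. */
	if ((int)idle_time > 2 * sampling_rate)
		printf("new check: long idle detected\n");
	else
		printf("new check: no long idle, load stays accurate\n");

	return 0;
}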
Reported-by: Artem S. Tashkinov <t.artem@mailcity.com>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent c3c77b5d
drivers/cpufreq/cpufreq_governor.c
@@ -165,7 +165,7 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
 			 * calls, so the previous load value can be used then.
 			 */
 			load = j_cdbs->prev_load;
-		} else if (unlikely(time_elapsed > 2 * sampling_rate &&
+		} else if (unlikely((int)idle_time > 2 * sampling_rate &&
 				    j_cdbs->prev_load)) {
 			/*
 			 * If the CPU had gone completely idle and a task has
@@ -185,10 +185,8 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
 			 * clear prev_load to guarantee that the load will be
 			 * computed again next time.
 			 *
-			 * Detecting this situation is easy: the governor's
-			 * utilization update handler would not have run during
-			 * CPU-idle periods. Hence, an unusually large
-			 * 'time_elapsed' (as compared to the sampling rate)
+			 * Detecting this situation is easy: an unusually large
+			 * 'idle_time' (as compared to the sampling rate)
 			 * indicates this scenario.
 			 */
 			load = j_cdbs->prev_load;
@@ -217,8 +215,8 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
 			j_cdbs->prev_load = load;
 		}
 
-		if (time_elapsed > 2 * sampling_rate) {
-			unsigned int periods = time_elapsed / sampling_rate;
+		if (unlikely((int)idle_time > 2 * sampling_rate)) {
+			unsigned int periods = idle_time / sampling_rate;
 
 			if (periods < idle_periods)
 				idle_periods = periods;