Commit 7cff8cf6 authored by Ingo Molnar

sched: refine negative nice level granularity

refine the granularity of negative nice level tasks: let them
reschedule more often to offset the fact that they consume
their wait_runtime proportionately more slowly. (This makes
nice-0 task scheduling smoother in the presence of negatively
reniced tasks.)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent a69edb55
@@ -222,21 +222,25 @@ niced_granularity(struct sched_entity *curr, unsigned long granularity)
 {
 	u64 tmp;
 
+	if (likely(curr->load.weight == NICE_0_LOAD))
+		return granularity;
 	/*
-	 * Negative nice levels get the same granularity as nice-0:
+	 * Positive nice levels get the same granularity as nice-0:
 	 */
-	if (likely(curr->load.weight >= NICE_0_LOAD))
-		return granularity;
+	if (likely(curr->load.weight < NICE_0_LOAD)) {
+		tmp = curr->load.weight * (u64)granularity;
+		return (long) (tmp >> NICE_0_SHIFT);
+	}
 	/*
-	 * Positive nice level tasks get linearly finer
+	 * Negative nice level tasks get linearly finer
 	 * granularity:
 	 */
-	tmp = curr->load.weight * (u64)granularity;
+	tmp = curr->load.inv_weight * (u64)granularity;
 
 	/*
 	 * It will always fit into 'long':
 	 */
-	return (long) (tmp >> NICE_0_SHIFT);
+	return (long) (tmp >> WMULT_SHIFT);
 }
 
 static inline void
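As a rough illustration of the new scaling (not part of the commit), the sketch below mirrors the post-patch niced_granularity() arithmetic in user space. The constant values and the example weight/inv_weight pairs are assumptions modelled on the NICE_0_LOAD/NICE_0_SHIFT/WMULT_SHIFT definitions and the prio_to_weight[]/prio_to_wmult[] tables of this era's scheduler; only the function body follows the diff above.

/*
 * Standalone user-space sketch (not kernel code): mirrors the
 * post-patch niced_granularity() arithmetic so the weight-based
 * scaling can be tried out.  Constants and example weights are
 * assumptions modelled on the scheduler of this era, not taken
 * from this commit.
 */
#include <stdio.h>
#include <stdint.h>

#define NICE_0_SHIFT	10
#define NICE_0_LOAD	(1UL << NICE_0_SHIFT)	/* 1024 */
#define WMULT_SHIFT	32

struct load_weight {
	unsigned long weight;		/* task load weight */
	unsigned long inv_weight;	/* roughly 2^32 / weight */
};

static long niced_granularity(const struct load_weight *load,
			      unsigned long granularity)
{
	uint64_t tmp;

	if (load->weight == NICE_0_LOAD)
		return granularity;

	if (load->weight < NICE_0_LOAD) {
		/* positive nice levels: scale by weight / NICE_0_LOAD */
		tmp = load->weight * (uint64_t)granularity;
		return (long)(tmp >> NICE_0_SHIFT);
	}

	/* negative nice levels: multiply by the precomputed inverse weight */
	tmp = load->inv_weight * (uint64_t)granularity;
	return (long)(tmp >> WMULT_SHIFT);
}

int main(void)
{
	/* example pairs in the style of prio_to_weight[] / prio_to_wmult[] */
	struct load_weight nice_0  = { 1024, 4194304UL };	/* nice  0 */
	struct load_weight nice_m5 = { 3121, 1376151UL };	/* nice -5 */
	unsigned long gran = 2000000;	/* a 2ms granularity, in ns */

	printf("nice  0: %ld\n", niced_granularity(&nice_0, gran));
	printf("nice -5: %ld\n", niced_granularity(&nice_m5, gran));
	return 0;
}

With these example numbers the negatively reniced task (larger weight, smaller inv_weight) ends up with a much smaller granularity value than the nice-0 task, i.e. it is asked to reschedule much more often, which is the behaviour the commit message describes; positive nice levels go through the NICE_0_SHIFT branch instead.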