Commit cc00c198 authored by Ingo Molnar

sched: Fix leftover comment typos

A few more snuck in. Also capitalize 'CPU' while at it.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent f1a0a376
@@ -14,7 +14,7 @@
  * @sched_clock_mask:   Bitmask for two's complement subtraction of non 64bit
  *                      clocks.
  * @read_sched_clock:   Current clock source (or dummy source when suspended).
- * @mult:               Multipler for scaled math conversion.
+ * @mult:               Multiplier for scaled math conversion.
  * @shift:              Shift value for scaled math conversion.
  *
  * Care must be taken when updating this structure; it is read by
@@ -5506,7 +5506,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	}

 	/*
-	 * Try and select tasks for each sibling in decending sched_class
+	 * Try and select tasks for each sibling in descending sched_class
 	 * order.
 	 */
 	for_each_class(class) {
@@ -5520,7 +5520,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 			/*
 			 * If this sibling doesn't yet have a suitable task to
-			 * run; ask for the most elegible task, given the
+			 * run; ask for the most eligible task, given the
 			 * highest priority task already selected for this
 			 * core.
 			 */
@@ -10808,11 +10808,11 @@ static inline void task_tick_core(struct rq *rq, struct task_struct *curr)
 	 * sched_slice() considers only this active rq and it gets the
 	 * whole slice. But during force idle, we have siblings acting
 	 * like a single runqueue and hence we need to consider runnable
-	 * tasks on this cpu and the forced idle cpu. Ideally, we should
+	 * tasks on this CPU and the forced idle CPU. Ideally, we should
 	 * go through the forced idle rq, but that would be a perf hit.
-	 * We can assume that the forced idle cpu has atleast
+	 * We can assume that the forced idle CPU has at least
 	 * MIN_NR_TASKS_DURING_FORCEIDLE - 1 tasks and use that to check
-	 * if we need to give up the cpu.
+	 * if we need to give up the CPU.
 	 */
 	if (rq->core->core_forceidle && rq->cfs.nr_running == 1 &&
 	    __entity_slice_used(&curr->se, MIN_NR_TASKS_DURING_FORCEIDLE))