Commit f6ac18fa authored by Peter Zijlstra

sched: Improve try_invoke_on_locked_down_task()

Clarify and tighten try_invoke_on_locked_down_task().

Basically the function calls @func under task_rq_lock(), except it
avoids taking rq->lock when possible.

This makes calling @func unconditional (the function will get renamed
in a later patch to remove the try).
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Vasily Gorbik <gor@linux.ibm.com>
Tested-by: Vasily Gorbik <gor@linux.ibm.com> # on s390
Link: https://lkml.kernel.org/r/20210929152428.589323576@infradead.org
parent 769fdf83
@@ -4115,41 +4115,56 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
  * @func: Function to invoke.
  * @arg: Argument to function.
  *
- * If the specified task can be quickly locked into a definite state
- * (either sleeping or on a given runqueue), arrange to keep it in that
- * state while invoking @func(@arg). This function can use ->on_rq and
- * task_curr() to work out what the state is, if required. Given that
- * @func can be invoked with a runqueue lock held, it had better be quite
- * lightweight.
+ * Fix the task in its current state by avoiding wakeups and/or rq operations
+ * and call @func(@arg) on it. This function can use ->on_rq and task_curr()
+ * to work out what the state is, if required. Given that @func can be invoked
+ * with a runqueue lock held, it had better be quite lightweight.
  *
  * Returns:
- *	@false if the task slipped out from under the locks.
- *	@true if the task was locked onto a runqueue or is sleeping.
- *		However, @func can override this by returning @false.
+ *	Whatever @func returns
  */
 bool try_invoke_on_locked_down_task(struct task_struct *p, bool (*func)(struct task_struct *t, void *arg), void *arg)
 {
+	struct rq *rq = NULL;
+	unsigned int state;
 	struct rq_flags rf;
 	bool ret = false;
 
 	raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
-	if (p->on_rq) {
+
+	state = READ_ONCE(p->__state);
+
+	/*
+	 * Ensure we load p->on_rq after p->__state, otherwise it would be
+	 * possible to, falsely, observe p->on_rq == 0.
+	 *
+	 * See try_to_wake_up() for a longer comment.
+	 */
+	smp_rmb();
+
+	/*
+	 * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
+	 * the task is blocked. Make sure to check @state since ttwu() can drop
+	 * locks at the end, see ttwu_queue_wakelist().
+	 */
+	if (state == TASK_RUNNING || state == TASK_WAKING || p->on_rq)
 		rq = __task_rq_lock(p, &rf);
-		if (task_rq(p) == rq)
-			ret = func(p, arg);
+
+	/*
+	 * At this point the task is pinned; either:
+	 *  - blocked and we're holding off wakeups          (pi->lock)
+	 *  - woken, and we're holding off enqueue           (rq->lock)
+	 *  - queued, and we're holding off schedule         (rq->lock)
+	 *  - running, and we're holding off de-schedule     (rq->lock)
+	 *
+	 * The called function (@func) can use: task_curr(), p->on_rq and
+	 * p->__state to differentiate between these states.
+	 */
+	ret = func(p, arg);
+
+	if (rq)
 		rq_unlock(rq, &rf);
-	} else {
-		switch (READ_ONCE(p->__state)) {
-		case TASK_RUNNING:
-		case TASK_WAKING:
-			break;
-		default:
-			smp_rmb(); // See smp_rmb() comment in try_to_wake_up().
-			if (!p->on_rq)
-				ret = func(p, arg);
-		}
-	}
+
 	raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
 	return ret;
 }
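
For illustration, a minimal caller sketch of the new contract (hypothetical, not part of this commit): @func is now always invoked with the task pinned, and the wrapper simply returns whatever @func returns. The helpers probe_task_is_curr() and task_currently_running() are made up for this example; only try_invoke_on_locked_down_task() and task_curr() are existing kernel APIs.

#include <linux/sched.h>
#include <linux/wait.h>

/* Runs with @t pinned (possibly under rq->lock): must stay lightweight. */
static bool probe_task_is_curr(struct task_struct *t, void *arg)
{
	bool *curr = arg;

	*curr = task_curr(t);	/* @t cannot be scheduled in/out under us */
	return true;		/* this value is what the wrapper returns */
}

/* Hypothetical helper: is @p executing on a CPU right now? */
static bool task_currently_running(struct task_struct *p)
{
	bool curr = false;

	/* @func is called unconditionally; its return value is passed back. */
	try_invoke_on_locked_down_task(p, probe_task_is_curr, &curr);
	return curr;
}

Because the callback may run with a runqueue lock held, it must not block and should stay lightweight, as the kernel-doc comment above notes.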