Commit 629a3cd0
Authored Jan 21, 2019 by Ingo Molnar

    Merge branch 'locking/urgent' into locking/core, to pick up dependent fixes

    Signed-off-by: Ingo Molnar <mingo@kernel.org>

Parents: 910cc959 e158488b
Showing 5 changed files with 39 additions and 12 deletions (+39 -12).
include/linux/sched/wake_q.h   +5  -1
kernel/exit.c                  +1  -1
kernel/futex.c                 +8  -5
kernel/locking/rwsem-xadd.c    +9  -2
kernel/sched/core.c            +16 -3
include/linux/sched/wake_q.h

@@ -24,9 +24,13 @@
  * called near the end of a function. Otherwise, the list can be
  * re-initialized for later re-use by wake_q_init().
  *
- * Note that this can cause spurious wakeups. schedule() callers
+ * NOTE that this can cause spurious wakeups. schedule() callers
  * must ensure the call is done inside a loop, confirming that the
  * wakeup condition has in fact occurred.
+ *
+ * NOTE that there is no guarantee the wakeup will happen any later than the
+ * wake_q_add() location. Therefore task must be ready to be woken at the
+ * location of the wake_q_add().
  */

 #include <linux/sched.h>
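The "inside a loop" rule this comment states is the standard kernel wait pattern: since the wakeup may fire before, at, or after the wake_q_add() call, the sleeper must re-check its condition every time schedule() returns. A minimal waiter sketch (the 'done' flag is hypothetical, shown only to illustrate the pattern):

    /* Hypothetical waiter tolerating the spurious wakeups described above. */
    set_current_state(TASK_UNINTERRUPTIBLE);
    while (!READ_ONCE(done)) {              /* re-check: wakeup may be spurious */
            schedule();                     /* may return with 'done' still false */
            set_current_state(TASK_UNINTERRUPTIBLE);
    }
    __set_current_state(TASK_RUNNING);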
kernel/exit.c

@@ -307,7 +307,7 @@ void rcuwait_wake_up(struct rcuwait *w)
 	 *        MB (A)	      MB (B)
 	 *        [L] cond	      [L] tsk
 	 */
-	smp_rmb(); /* (B) */
+	smp_mb(); /* (B) */

 	/*
 	 * Avoid using task_rcu_dereference() magic as long as we are careful,
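The one-token change is the whole fix: on the wake side, the store to the wait condition must be ordered before the load of w->task, and a store->load ordering needs a full barrier; smp_rmb() orders only loads against loads and cannot provide it. Expanding the diagram from the comment above ('cond' stands for the caller's wait condition):

    /* WAIT (rcuwait_wait_event)        WAKE (rcuwait_wake_up)
     *
     * [S] tsk = current                [S] cond = true
     *     MB (A)                           MB (B)  <- must be smp_mb()
     * [L] cond                         [L] tsk
     *
     * This is the classic store-buffering shape: only if both sides use
     * full barriers is at least one side guaranteed to observe the
     * other's store, so either the waiter sees cond or the waker sees tsk.
     */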
kernel/futex.c

@@ -1452,11 +1452,7 @@ static void mark_wake_futex(struct wake_q_head *wake_q, struct futex_q *q)
 	if (WARN(q->pi_state || q->rt_waiter, "refusing to wake PI futex\n"))
 		return;

-	/*
-	 * Queue the task for later wakeup for after we've released
-	 * the hb->lock. wake_q_add() grabs reference to p.
-	 */
-	wake_q_add(wake_q, p);
+	get_task_struct(p);
 	__unqueue_futex(q);
 	/*
 	 * The waiting task can free the futex_q as soon as q->lock_ptr = NULL
@@ -1466,6 +1462,13 @@ static void mark_wake_futex(struct wake_q_head *wake_q, struct futex_q *q)
 	 * plist_del in __unqueue_futex().
 	 */
 	smp_store_release(&q->lock_ptr, NULL);
+
+	/*
+	 * Queue the task for later wakeup for after we've released
+	 * the hb->lock. wake_q_add() grabs reference to p.
+	 */
+	wake_q_add(wake_q, p);
+	put_task_struct(p);
 }

 /*
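The reordering is the substance of this fix. Per the new wake_q.h note, wake_q_add() may translate into an immediate wakeup when the task is already queued by someone else; under the old ordering that wakeup could fire before q->lock_ptr was cleared, so the waiter could wake, observe ->lock_ptr still set, and go back to sleep with nobody left to wake it. A sketch of the lost-wakeup interleaving the old code allowed:

    /* CPU0: mark_wake_futex() (old)        CPU1: futex waiter
     *
     * wake_q_add(wake_q, p);
     *   (p already queued by a concurrent
     *    waker, so p can be woken NOW)
     *                                       wakes, sees q->lock_ptr != NULL,
     *                                       treats it as spurious, sleeps again
     * smp_store_release(&q->lock_ptr, NULL);
     *                                       ...no further wakeup ever arrives.
     */

Taking an explicit reference up front and issuing wake_q_add() only after the release store closes that window.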
kernel/locking/rwsem-xadd.c

@@ -198,15 +198,22 @@ static void __rwsem_mark_wake(struct rw_semaphore *sem,
 		woken++;
 		tsk = waiter->task;

-		wake_q_add(wake_q, tsk);
+		get_task_struct(tsk);
 		list_del(&waiter->list);
 		/*
-		 * Ensure that the last operation is setting the reader
+		 * Ensure calling get_task_struct() before setting the reader
 		 * waiter to nil such that rwsem_down_read_failed() cannot
 		 * race with do_exit() by always holding a reference count
 		 * to the task to wakeup.
 		 */
 		smp_store_release(&waiter->task, NULL);
+		/*
+		 * Ensure issuing the wakeup (either by us or someone else)
+		 * after setting the reader waiter to nil.
+		 */
+		wake_q_add(wake_q, tsk);
+		/* wake_q_add() already take the task ref */
+		put_task_struct(tsk);
 	}

 	adjustment = woken * RWSEM_ACTIVE_READ_BIAS - adjustment;
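The same two constraints drive this reordering. The waiter in rwsem_down_read_failed() spins until waiter->task becomes NULL, so smp_store_release(&waiter->task, NULL) is the hand-off point: past it the waiter may return and the task may exit via do_exit(), which is why a reference must be taken first, and why the (possibly instant) wakeup from wake_q_add() must only be issued afterwards. The waiter side looks roughly like this (abridged sketch of the rwsem_down_read_failed() wait loop of this era):

    /* wait to be given the lock */
    while (true) {
            set_current_state(TASK_UNINTERRUPTIBLE);
            if (!waiter.task)       /* published by __rwsem_mark_wake() */
                    break;
            schedule();
    }
    __set_current_state(TASK_RUNNING);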
kernel/sched/core.c

@@ -396,6 +396,18 @@ static bool set_nr_if_polling(struct task_struct *p)
 #endif
 #endif

+/**
+ * wake_q_add() - queue a wakeup for 'later' waking.
+ * @head: the wake_q_head to add @task to
+ * @task: the task to queue for 'later' wakeup
+ *
+ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
+ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
+ * instantly.
+ *
+ * This function must be used as-if it were wake_up_process(); IOW the task
+ * must be ready to be woken at this location.
+ */
 void wake_q_add(struct wake_q_head *head, struct task_struct *task)
 {
 	struct wake_q_node *node = &task->wake_q;
@@ -405,10 +417,11 @@ void wake_q_add(struct wake_q_head *head, struct task_struct *task)
 	 * its already queued (either by us or someone else) and will get the
 	 * wakeup due to that.
 	 *
-	 * This cmpxchg() executes a full barrier, which pairs with the full
-	 * barrier executed by the wakeup in wake_up_q().
+	 * In order to ensure that a pending wakeup will observe our pending
+	 * state, even in the failed case, an explicit smp_mb() must be used.
 	 */
-	if (cmpxchg(&node->next, NULL, WAKE_Q_TAIL))
+	smp_mb__before_atomic();
+	if (cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL))
 		return;

 	get_task_struct(task);
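The subtlety behind the last hunk: in the kernel memory model a failed cmpxchg() implies no memory barrier at all, so the old comment's claim that "this cmpxchg() executes a full barrier" did not hold on the failure path, which is exactly the path where a concurrent queuer is about to issue our wakeup. An explicit smp_mb__before_atomic() ahead of a relaxed cmpxchg makes the ordering unconditional, so the pending wakeup is guaranteed to observe the caller's earlier stores.

Taken together, the intended calling pattern after this merge is: collect tasks under a lock with wake_q_add(), drop the lock, then flush with wake_up_q(). A minimal sketch (the 'dev' object and its waiter list are hypothetical):

    DEFINE_WAKE_Q(wake_q);
    struct waiter *w;

    spin_lock(&dev->lock);
    list_for_each_entry(w, &dev->waiters, node)
            wake_q_add(&wake_q, w->task);   /* may wake instantly if racing */
    spin_unlock(&dev->lock);

    wake_up_q(&wake_q);                     /* issue the deferred wakeups */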