Commit 628fd556 authored by Will Deacon

tools/memory-model: Remove smp_read_barrier_depends() from informal doc

smp_read_barrier_depends() has gone the way of mmiowb() and so many
esoteric memory barriers before it. Drop the two mentions of this
deceased barrier from the LKMM informal explanation document.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
parent 9ce1b14e
@@ -1122,12 +1122,10 @@ maintain at least the appearance of FIFO order.
 In practice, this difficulty is solved by inserting a special fence
 between P1's two loads when the kernel is compiled for the Alpha
 architecture. In fact, as of version 4.15, the kernel automatically
-adds this fence (called smp_read_barrier_depends() and defined as
-nothing at all on non-Alpha builds) after every READ_ONCE() and atomic
-load. The effect of the fence is to cause the CPU not to execute any
-po-later instructions until after the local cache has finished
-processing all the stores it has already received. Thus, if the code
-was changed to:
+adds this fence after every READ_ONCE() and atomic load on Alpha. The
+effect of the fence is to cause the CPU not to execute any po-later
+instructions until after the local cache has finished processing all
+the stores it has already received. Thus, if the code was changed to:
 
 	P1()
 	{
@@ -1146,14 +1144,14 @@ READ_ONCE() or another synchronization primitive rather than accessed
 directly.
 
 The LKMM requires that smp_rmb(), acquire fences, and strong fences
-share this property with smp_read_barrier_depends(): They do not allow
-the CPU to execute any po-later instructions (or po-later loads in the
-case of smp_rmb()) until all outstanding stores have been processed by
-the local cache. In the case of a strong fence, the CPU first has to
-wait for all of its po-earlier stores to propagate to every other CPU
-in the system; then it has to wait for the local cache to process all
-the stores received as of that time -- not just the stores received
-when the strong fence began.
+share this property: They do not allow the CPU to execute any po-later
+instructions (or po-later loads in the case of smp_rmb()) until all
+outstanding stores have been processed by the local cache. In the
+case of a strong fence, the CPU first has to wait for all of its
+po-earlier stores to propagate to every other CPU in the system; then
+it has to wait for the local cache to process all the stores received
+as of that time -- not just the stores received when the strong fence
+began.
 
 And of course, none of this matters for any architecture other than
 Alpha.