Commit a68415c2 authored by Ingo Molnar


Merge branch 'lkmm' of git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu into locking/core

Pull v5.9 LKMM changes from Paul E. McKenney.

Mostly documentation changes, but also some new litmus tests for atomic ops.
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parents 63722bbc 5ef0a07a
@@ -85,21 +85,21 @@ smp_store_release() respectively. Therefore, if you find yourself only using
 the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
 and are doing it wrong.
 
-A subtle detail of atomic_set{}() is that it should be observable to the RMW
-ops. That is:
+A note for the implementation of atomic_set{}() is that it must not break the
+atomicity of the RMW ops. That is:
 
-  C atomic-set
+  C Atomic-RMW-ops-are-atomic-WRT-atomic_set
 
   {
-    atomic_set(v, 1);
+    atomic_t v = ATOMIC_INIT(1);
   }
 
-  P1(atomic_t *v)
+  P0(atomic_t *v)
   {
-    atomic_add_unless(v, 1, 0);
+    (void)atomic_add_unless(v, 1, 0);
   }
 
-  P2(atomic_t *v)
+  P1(atomic_t *v)
   {
     atomic_set(v, 0);
   }
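For reference, the full litmus test added by this commit (reproduced later in this series) closes with the postcondition below; the forbidden outcome v=2 is precisely the case where atomic_set(v, 0) lands between the read and the write of the RMW and is lost:

  exists
  (v=2)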
@@ -233,19 +233,19 @@ as well. Similarly, something like:
 is an ACQUIRE pattern (though very much not typical), but again the barrier is
 strictly stronger than ACQUIRE. As illustrated:
 
-  C strong-acquire
+  C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire
 
   {
   }
 
-  P1(int *x, atomic_t *y)
+  P0(int *x, atomic_t *y)
   {
     r0 = READ_ONCE(*x);
     smp_rmb();
     r1 = atomic_read(y);
   }
 
-  P2(int *x, atomic_t *y)
+  P1(int *x, atomic_t *y)
   {
     atomic_inc(y);
     smp_mb__after_atomic();
@@ -253,14 +253,14 @@ strictly stronger than ACQUIRE. As illustrated:
   }
 
 exists
-  (r0=1 /\ r1=0)
+  (0:r0=1 /\ 0:r1=0)
 
 This should not happen; but a hypothetical atomic_inc_acquire() --
 (void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
 because it would not order the W part of the RMW against the following
 WRITE_ONCE. Thus:
 
-  P1                    P2
+  P0                    P1
 
   t = LL.acq *y (0)
   t++;
============
LITMUS TESTS
============
Each subdirectory contains litmus tests that illustrate the semantics of the
respective kernel APIs.
For more information about how to "run" a litmus test or how to generate
a kernel test module based on a litmus test, please see
tools/memory-model/README.
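For a quick check directly with herd7, the usual invocation shape is the
following (a sketch assuming herdtools7 is installed and the command is run
from tools/memory-model/, where linux-kernel.cfg lives; adjust the
litmus-test path as needed):

	herd7 -conf linux-kernel.cfg \
		../../Documentation/litmus-tests/atomic/Atomic-RMW-ops-are-atomic-WRT-atomic_set.litmus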
atomic (/atomic directory)
--------------------------
Atomic-RMW+mb__after_atomic-is-stronger-than-acquire.litmus
Test that an atomic RMW followed by a smp_mb__after_atomic() is
stronger than a normal acquire: both the read and write parts of
the RMW are ordered before the subsequent memory accesses.
Atomic-RMW-ops-are-atomic-WRT-atomic_set.litmus
Test that atomic_set() cannot break the atomicity of atomic RMWs.
NOTE: Requires herd7 7.56 or later, which supports "(void)expr".
RCU (/rcu directory)
--------------------
MP+onceassign+derefonce.litmus (under tools/memory-model/litmus-tests/)
Demonstrates the use of rcu_assign_pointer() and rcu_dereference() to
ensure that an RCU reader will not see pre-initialization garbage.
RCU+sync+read.litmus
RCU+sync+free.litmus
Both the above litmus tests demonstrate the RCU grace period guarantee
that an RCU read-side critical section can never span a grace period.
C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire
(*
* Result: Never
*
* Test that an atomic RMW followed by a smp_mb__after_atomic() is
* stronger than a normal acquire: both the read and write parts of
* the RMW are ordered before the subsequent memory accesses.
*)
{
}
P0(int *x, atomic_t *y)
{
int r0;
int r1;
r0 = READ_ONCE(*x);
smp_rmb();
r1 = atomic_read(y);
}
P1(int *x, atomic_t *y)
{
atomic_inc(y);
smp_mb__after_atomic();
WRITE_ONCE(*x, 1);
}
exists
(0:r0=1 /\ 0:r1=0)
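For contrast, here is a hedged sketch (not part of this commit; the test name
is hypothetical) of the weaker variant discussed in atomic_t.txt above: with
(void)atomic_fetch_inc_acquire() in place of the RMW-plus-smp_mb__after_atomic()
pair, the exists clause becomes reachable, because the write part of the RMW is
no longer ordered before the subsequent WRITE_ONCE():

	C Atomic-fetch_inc_acquire-is-not-stronger-than-acquire

	{
	}

	P0(int *x, atomic_t *y)
	{
		int r0;
		int r1;

		r0 = READ_ONCE(*x);
		smp_rmb();
		r1 = atomic_read(y);
	}

	P1(int *x, atomic_t *y)
	{
		(void)atomic_fetch_inc_acquire(y);
		WRITE_ONCE(*x, 1);
	}

	exists
	(0:r0=1 /\ 0:r1=0)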
C Atomic-RMW-ops-are-atomic-WRT-atomic_set
(*
* Result: Never
*
* Test that atomic_set() cannot break the atomicity of atomic RMWs.
* NOTE: This requires herd7 7.56 or later, which supports "(void)expr".
*)
{
atomic_t v = ATOMIC_INIT(1);
}
P0(atomic_t *v)
{
(void)atomic_add_unless(v, 1, 0);
}
P1(atomic_t *v)
{
atomic_set(v, 0);
}
exists
(v=2)
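As a reading aid (not part of the commit), the interleaving that v=2 would
witness, sketched in comments against a hypothetical non-atomic implementation:

	/*
	 * Forbidden interleaving, supposing atomic_add_unless() were a
	 * non-atomic read/test/write sequence that atomic_set() could split:
	 *
	 *   P0: r = atomic_read(v);    // reads 1; 1 != 0, so proceed
	 *   P1: atomic_set(v, 0);      // v = 0
	 *   P0: atomic_set(v, r + 1);  // v = 2; P1's store is lost
	 *
	 * Because the RMW must remain atomic with respect to atomic_set(),
	 * the final state v=2 must never be observed.
	 */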
C RCU+sync+free
(*
* Result: Never
*
* This litmus test demonstrates that an RCU reader can never see a write that
* follows a grace period, if it did not see writes that precede that grace
* period.
*
* This is a typical pattern of RCU usage, where the write before the grace
* period assigns a pointer, and the writes following the grace period destroy
* the object that the pointer used to point to.
*
* This is one implication of the RCU grace-period guarantee, which says (among
* other things) that an RCU read-side critical section cannot span a grace period.
*)
{
int x = 1;
int *y = &x;
int z = 1;
}
P0(int *x, int *z, int **y)
{
int *r0;
int r1;
rcu_read_lock();
r0 = rcu_dereference(*y);
r1 = READ_ONCE(*r0);
rcu_read_unlock();
}
P1(int *x, int *z, int **y)
{
rcu_assign_pointer(*y, z);
synchronize_rcu();
WRITE_ONCE(*x, 0);
}
exists (0:r0=x /\ 0:r1=0)
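The kernel idiom this test abstracts, as a hedged C sketch (the struct,
variable, and function names are illustrative, not from the commit):

	#include <linux/rcupdate.h>
	#include <linux/slab.h>
	#include <linux/spinlock.h>

	struct foo {
		int a;
	};

	static struct foo __rcu *gp;
	static DEFINE_SPINLOCK(gp_lock);

	/* Reader: the grace-period guarantee means this never touches freed memory. */
	static int reader(void)
	{
		struct foo *p;
		int val = -1;

		rcu_read_lock();
		p = rcu_dereference(gp);
		if (p)
			val = p->a;
		rcu_read_unlock();
		return val;
	}

	/* Updater: unpublish the old object, wait out the readers, then free it. */
	static void updater(struct foo *newp)
	{
		struct foo *oldp;

		spin_lock(&gp_lock);
		oldp = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
		rcu_assign_pointer(gp, newp);
		spin_unlock(&gp_lock);

		synchronize_rcu();	/* all readers that might hold oldp are done */
		kfree(oldp);
	}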
C RCU+sync+read
(*
* Result: Never
*
* This litmus test demonstrates that after a grace period, an RCU updater always
* sees all stores done in prior RCU read-side critical sections. Such
* read-side critical sections would have ended before the grace period ended.
*
* This is one implication of the RCU grace-period guarantee, which says (among
* other things) that an RCU read-side critical section cannot span a grace period.
*)
{
int x = 0;
int y = 0;
}
P0(int *x, int *y)
{
rcu_read_lock();
WRITE_ONCE(*x, 1);
WRITE_ONCE(*y, 1);
rcu_read_unlock();
}
P1(int *x, int *y)
{
int r0;
int r1;
r0 = READ_ONCE(*x);
synchronize_rcu();
r1 = READ_ONCE(*y);
}
exists (1:r0=1 /\ 1:r1=0)
@@ -9972,6 +9972,7 @@ M:	Luc Maranget <luc.maranget@inria.fr>
 M:	"Paul E. McKenney" <paulmck@kernel.org>
 R:	Akira Yokosawa <akiyks@gmail.com>
 R:	Daniel Lustig <dlustig@nvidia.com>
+R:	Joel Fernandes <joel@joelfernandes.org>
 L:	linux-kernel@vger.kernel.org
 L:	linux-arch@vger.kernel.org
 S:	Supported
@@ -9980,6 +9981,7 @@ F:	Documentation/atomic_bitops.txt
 F:	Documentation/atomic_t.txt
 F:	Documentation/core-api/atomic_ops.rst
 F:	Documentation/core-api/refcount-vs-atomic.rst
+F:	Documentation/litmus-tests/
 F:	Documentation/memory-barriers.txt
 F:	tools/memory-model/
@@ -1987,28 +1987,36 @@ outcome undefined.
 
 In technical terms, the compiler is allowed to assume that when the
 program executes, there will not be any data races.  A "data race"
-occurs when two conflicting memory accesses execute concurrently;
-two memory accesses "conflict" if:
+occurs when there are two memory accesses such that:
 
-	they access the same location,
+1.	they access the same location,
 
-	they occur on different CPUs (or in different threads on the
-	same CPU),
+2.	at least one of them is a store,
 
-	at least one of them is a plain access,
+3.	at least one of them is plain,
 
-	and at least one of them is a store.
+4.	they occur on different CPUs (or in different threads on the
+	same CPU), and
 
-The LKMM tries to determine whether a program contains two conflicting
-accesses which may execute concurrently; if it does then the LKMM says
-there is a potential data race and makes no predictions about the
-program's outcome.
+5.	they execute concurrently.
 
-Determining whether two accesses conflict is easy; you can see that
-all the concepts involved in the definition above are already part of
-the memory model.  The hard part is telling whether they may execute
-concurrently.  The LKMM takes a conservative attitude, assuming that
-accesses may be concurrent unless it can prove they cannot.
+In the literature, two accesses are said to "conflict" if they satisfy
+1 and 2 above.  We'll go a little farther and say that two accesses
+are "race candidates" if they satisfy 1 - 4.  Thus, whether or not two
+race candidates actually do race in a given execution depends on
+whether they are concurrent.
+
+The LKMM tries to determine whether a program contains race candidates
+which may execute concurrently; if it does then the LKMM says there is
+a potential data race and makes no predictions about the program's
+outcome.
+
+Determining whether two accesses are race candidates is easy; you can
+see that all the concepts involved in the definition above are already
+part of the memory model.  The hard part is telling whether they may
+execute concurrently.  The LKMM takes a conservative attitude,
+assuming that accesses may be concurrent unless it can prove they
+are not.
 
 If two memory accesses aren't concurrent then one must execute before
 the other.  Therefore the LKMM decides two accesses aren't concurrent
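As a hedged illustration of this definition (not part of the commit), here is
a minimal litmus test containing two race candidates that may execute
concurrently: P0's plain store and P1's marked load access the same location,
one of them is a store, one of them is plain, they run on different CPUs, and
nothing orders them, so herd7 flags a potential data race rather than
predicting an outcome:

	C plain-store-vs-marked-load

	{
	}

	P0(int *x)
	{
		*x = 1;			/* plain store */
	}

	P1(int *x)
	{
		int r0;

		r0 = READ_ONCE(*x);	/* marked load */
	}

	exists
	(1:r0=1)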
@@ -2171,8 +2179,8 @@ again, now using plain accesses for buf:
 }
 
 This program does not contain a data race.  Although the U and V
-accesses conflict, the LKMM can prove they are not concurrent as
-follows:
+accesses are race candidates, the LKMM can prove they are not
+concurrent as follows:
 
	The smp_wmb() fence in P0 is both a compiler barrier and a
	cumul-fence.  It guarantees that no matter what hash of
@@ -2326,12 +2334,11 @@ could now perform the load of x before the load of ptr (there might be
 a control dependency but no address dependency at the machine level).
 
 Finally, it turns out there is a situation in which a plain write does
-not need to be w-post-bounded: when it is separated from the
-conflicting access by a fence.  At first glance this may seem
-impossible.  After all, to be conflicting the second access has to be
-on a different CPU from the first, and fences don't link events on
-different CPUs.  Well, normal fences don't -- but rcu-fence can!
-Here's an example:
+not need to be w-post-bounded: when it is separated from the other
+race-candidate access by a fence.  At first glance this may seem
+impossible.  After all, to be race candidates the two accesses must
+be on different CPUs, and fences don't link events on different CPUs.
+Well, normal fences don't -- but rcu-fence can!  Here's an example:
 
	int x, y;
@@ -2367,7 +2374,7 @@ concurrent and there is no race, even though P1's plain store to y
 isn't w-post-bounded by any marked accesses.
 
 Putting all this material together yields the following picture.  For
-two conflicting stores W and W', where W ->co W', the LKMM says the
+race-candidate stores W and W', where W ->co W', the LKMM says the
 stores don't race if W can be linked to W' by a
 
	w-post-bounded ; vis ; w-pre-bounded
 
@@ -2380,8 +2387,8 @@ sequence, and if W' is plain then they also have to be linked by a
 
	w-post-bounded ; vis ; r-pre-bounded
 
-sequence.  For a conflicting load R and store W, the LKMM says the two
-accesses don't race if R can be linked to W by an
+sequence.  For race-candidate load R and store W, the LKMM says the
+two accesses don't race if R can be linked to W by an
 
	r-post-bounded ; xb* ; w-pre-bounded
@@ -2413,20 +2420,20 @@ is, the rules governing the memory subsystem's choice of a store to
 satisfy a load request and its determination of where a store will
 fall in the coherence order):
 
-	If R and W conflict and it is possible to link R to W by one
-	of the xb* sequences listed above, then W ->rfe R is not
-	allowed (i.e., a load cannot read from a store that it
+	If R and W are race candidates and it is possible to link R to
+	W by one of the xb* sequences listed above, then W ->rfe R is
+	not allowed (i.e., a load cannot read from a store that it
	executes before, even if one or both is plain).
 
-	If W and R conflict and it is possible to link W to R by one
-	of the vis sequences listed above, then R ->fre W is not
-	allowed (i.e., if a store is visible to a load then the load
-	must read from that store or one coherence-after it).
+	If W and R are race candidates and it is possible to link W to
+	R by one of the vis sequences listed above, then R ->fre W is
+	not allowed (i.e., if a store is visible to a load then the
+	load must read from that store or one coherence-after it).
 
-	If W and W' conflict and it is possible to link W to W' by one
-	of the vis sequences listed above, then W' ->co W is not
-	allowed (i.e., if one store is visible to a second then the
-	second must come after the first in the coherence order).
+	If W and W' are race candidates and it is possible to link W
+	to W' by one of the vis sequences listed above, then W' ->co W
+	is not allowed (i.e., if one store is visible to a second then
+	the second must come after the first in the coherence order).
 
 This is the extent to which the LKMM deals with plain accesses.
 Perhaps it could say more (for example, plain accesses might
@@ -126,7 +126,7 @@ However, it is not necessarily the case that accesses ordered by
 locking will be seen as ordered by CPUs not holding that lock.
 Consider this example:
 
-	/* See Z6.0+pooncerelease+poacquirerelease+fencembonceonce.litmus. */
+	/* See Z6.0+pooncelock+pooncelock+pombonce.litmus. */
	void CPU0(void)
	{
		spin_lock(&mylock);
@@ -73,6 +73,18 @@ o Christopher Pulte, Shaked Flur, Will Deacon, Jon French,
 Linux-kernel memory model
 =========================
 
+o	Jade Alglave, Will Deacon, Boqun Feng, David Howells, Daniel
+	Lustig, Luc Maranget, Paul E. McKenney, Andrea Parri, Nicholas
+	Piggin, Alan Stern, Akira Yokosawa, and Peter Zijlstra.
+	2019. "Calibrating your fear of big bad optimizing compilers"
+	Linux Weekly News.  https://lwn.net/Articles/799218/
+
+o	Jade Alglave, Will Deacon, Boqun Feng, David Howells, Daniel
+	Lustig, Luc Maranget, Paul E. McKenney, Andrea Parri, Nicholas
+	Piggin, Alan Stern, Akira Yokosawa, and Peter Zijlstra.
+	2019. "Who's afraid of a big bad optimizing compiler?"
+	Linux Weekly News.  https://lwn.net/Articles/793253/
+
 o	Jade Alglave, Luc Maranget, Paul E. McKenney, Andrea Parri, and
	Alan Stern. 2018. "Frightening small children and disconcerting
	grown-ups: Concurrency in the Linux kernel". In Proceedings of
@@ -88,6 +100,11 @@ o Jade Alglave, Luc Maranget, Paul E. McKenney, Andrea Parri, and
	Alan Stern. 2017. "A formal kernel memory-ordering model (part 2)"
	Linux Weekly News.  https://lwn.net/Articles/720550/
 
+o	Jade Alglave, Luc Maranget, Paul E. McKenney, Andrea Parri, and
+	Alan Stern. 2017-2019. "A Formal Model of Linux-Kernel Memory
+	Ordering" (backup material for the LWN articles)
+	https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/LWNLinuxMM/
+
 Memory-model tooling
 ====================
 
@@ -110,5 +127,5 @@ Memory-model comparisons
 
 o	Paul E. McKenney, Ulrich Weigand, Andrea Parri, and Boqun
-	Feng. 2016. "Linux-Kernel Memory Model". (6 June 2016).
-	http://open-std.org/JTC1/SC22/WG21/docs/papers/2016/p0124r2.html.
+	Feng. 2018. "Linux-Kernel Memory Model". (27 September 2018).
+	http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0124r6.html.
@@ -28,8 +28,34 @@ downloaded separately:
 
 See "herdtools7/INSTALL.md" for installation instructions.
 
 Note that although these tools usually provide backwards compatibility,
-this is not absolutely guaranteed. Therefore, if a later version does
-not work, please try using the exact version called out above.
+this is not absolutely guaranteed.
+
+For example, a future version of herd7 might not work with the model
+in this release.  A compatible model will likely be made available in
+a later release of the Linux kernel.
+
+If you absolutely need to run the model in this particular release,
+please try using the exact version called out above.
+
+klitmus7 is independent of the model provided here.  It has its own
+dependency on a target kernel release where converted code is built
+and executed.  Any change in kernel APIs essential to klitmus7 will
+necessitate an upgrade of klitmus7.
+
+If you find any compatibility issues in klitmus7, please inform the
+memory model maintainers.
+
+klitmus7 Compatibility Table
+----------------------------
+
+	============  ==========
+	target Linux  herdtools7
+	------------  ----------
+	     -- 4.18  7.48 --
+	4.15 -- 4.19  7.49 --
+	4.20 -- 5.5   7.54 --
+	5.6  --       7.56 --
+	============  ==========
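For concreteness, a sketch of the klitmus7 flow that this table applies to
(the directory name is arbitrary; the litmus test is one of those added by
this commit):

	mkdir mymodules
	klitmus7 -o mymodules Documentation/litmus-tests/atomic/Atomic-RMW-ops-are-atomic-WRT-atomic_set.litmus
	cd mymodules ; make
	sudo sh run.sh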
@@ -207,11 +233,15 @@ The Linux-kernel memory model (LKMM) has the following limitations:
		case as a store release.
 
	b.	The "unless" RMW operations are not currently modeled:
-		atomic_long_add_unless(), atomic_add_unless(),
-		atomic_inc_unless_negative(), and
-		atomic_dec_unless_positive().  These can be emulated
+		atomic_long_add_unless(), atomic_inc_unless_negative(),
+		and atomic_dec_unless_positive().  These can be emulated
		in litmus tests, for example, by using atomic_cmpxchg().
 
+		One exception to this limitation is atomic_add_unless(),
+		which is provided directly by herd7 (so there is no
+		corresponding definition in linux-kernel.def); because it
+		is modeled by herd7, atomic_add_unless() can be used in
+		litmus tests.
+
	c.	The call_rcu() function is not modeled.  It can be
		emulated in litmus tests by adding another process that
		invokes synchronize_rcu() and the body of the callback
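To make item (b) concrete, here is a hedged sketch (test name hypothetical) of
emulating a single attempt of an "unless" operation with atomic_cmpxchg(); as
in the Atomic-RMW-ops-are-atomic-WRT-atomic_set test above, a concurrent
atomic_set() cannot make the emulated RMW lose an update, so v=2 remains
unreachable:

	C Unless-RMW-emulation-via-cmpxchg

	{
		atomic_t v = ATOMIC_INIT(1);
	}

	P0(atomic_t *v)
	{
		int r0;
		int r1;
		int r2;

		/* One attempt of "add 1 to v unless v is 0". */
		r0 = atomic_read(v);
		if (r0 != 0) {
			r1 = r0 + 1;
			r2 = atomic_cmpxchg(v, r0, r1);
		}
	}

	P1(atomic_t *v)
	{
		atomic_set(v, 0);
	}

	exists
	(v=2)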