1. 14 Apr, 2020 5 commits
    • kcsan: Introduce scoped ASSERT_EXCLUSIVE macros · d8949ef1
      Marco Elver authored
      Introduce ASSERT_EXCLUSIVE_*_SCOPED(), which provide an intuitive
      interface to the scoped-access feature without having to explicitly
      mark the start and end of the desired scope. Basing the duration of
      the checks on the scope avoids accidental misuse and the resulting
      false positives, which may be hard to debug. See the added comments
      for usage.
      
      The macros are implemented using __attribute__((__cleanup__(func))),
      which is supported by all compilers that currently support KCSAN.
      Suggested-by: Boqun Feng <boqun.feng@gmail.com>
      Suggested-by: Paul E. McKenney <paulmck@kernel.org>
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • objtool, kcsan: Add explicit check functions to uaccess whitelist · 9967683c
      Marco Elver authored
      Add explicitly invoked KCSAN check functions to objtool's uaccess
      whitelist. This is needed in order to permit calling into
      kcsan_check_scoped_accesses() from the fast-path, which in turn calls
      __kcsan_check_access().  __kcsan_check_access() is the generic variant
      of the already whitelisted specializations __tsan_{read,write}N.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Add support for scoped accesses · 757a4cef
      Marco Elver authored
      This adds support for scoped accesses, where the memory range is checked
      for the duration of the scope. The feature is implemented by inserting
      the relevant access information into a list of scoped accesses for
      the current execution context, which are then checked (until removed)
      on every call (through instrumentation) into the KCSAN runtime.
      
      An alternative, more complex, implementation could set up a watchpoint for
      the scoped access, and keep the watchpoint set up. This, however, would
      require first exposing a handle to the watchpoint, as well as dealing
      with cases such as accesses by the same thread while the watchpoint is
      still set up (and several more cases). It is also doubtful whether
      this would provide any benefit, since most of the time a watchpoint
      stays set up is likely due to the delays injected by KCSAN. The
      implementation in this patch is therefore simpler and avoids hurting
      KCSAN's main use-case (normal data race detection); it also
      implicitly improves race detection for scoped accesses, since
      repeatedly calling __kcsan_check_access() throughout the scope of an
      access increases the probability of setting up watchpoints.
      
      The implementation required adding an additional conditional branch to
      the fast-path. However, the microbenchmark showed a *speedup* of ~5%
      on the fast-path. This appears to be due to subtly improved codegen by
      GCC from moving get_ctx() and associated load of preempt_count earlier.
      Suggested-by: Boqun Feng <boqun.feng@gmail.com>
      Suggested-by: Paul E. McKenney <paulmck@kernel.org>
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Avoid blocking producers in prepare_report() · 6119418f
      Marco Elver authored
      To avoid deadlock in case watchers can be interrupted, we need to ensure
      that producers of the struct other_info can never be blocked by an
      unrelated consumer. (Likely to occur with KCSAN_INTERRUPT_WATCHER.)
      
      There are several cases that can lead to this scenario, for example:
      
      	1. A watchpoint A was set up by task T1, but interrupted by
      	   interrupt I1. Some other thread (task or interrupt) finds
      	   watchpoint A, consumes it, and sets other_info. Then I1 also
      	   finds some unrelated watchpoint B, consumes it, but is blocked
      	   because other_info is in use. T1 cannot consume other_info
      	   because I1 never returns -> deadlock.
      
      	2. A watchpoint A was set up by task T1, but interrupted by
      	   interrupt I1, which also sets up a watchpoint B. Some other
      	   thread finds watchpoint A, and consumes it and sets up
      	   other_info with its information. Similarly some other thread
      	   finds watchpoint B and consumes it, but is then blocked because
      	   other_info is in use. When I1 continues it sees its watchpoint
      	   was consumed, and that it must wait for other_info, which
      	   currently contains information to be consumed by T1. However, T1
      	   cannot unblock other_info because I1 never returns -> deadlock.
      
      To avoid this, we need to ensure that producers of struct other_info
      always have a usable other_info entry. This is obviously not the case
      with only a single instance of struct other_info, as concurrent
      producers must wait for the entry to be released by some consumer (which
      may be locked up as illustrated above).
      
      While it would be nice if producers could simply call kmalloc() and
      append their instance of struct other_info to a list, we are very
      limited in this code path: since KCSAN can instrument the allocators
      themselves, calling kmalloc() could lead to deadlock or corrupted
      allocator state.
      
      Since producers of the struct other_info will always succeed at
      try_consume_watchpoint(), preceding the call into kcsan_report(), we
      know that the particular watchpoint slot cannot simply be reused or
      consumed by another potential other_info producer. If we move removal of
      a watchpoint after reporting (by the consumer of struct other_info), we
      can see a consumed watchpoint as a held lock on elements of other_info,
      if we create a one-to-one mapping of a watchpoint to an other_info
      element.
      
      Therefore, the simplest solution is to create an array of struct
      other_info that is as large as the watchpoints array in core.c, and pass
      the watchpoint index to kcsan_report() for producers and consumers, and
      change watchpoints to be removed after reporting is done.
      
      With a default config on a 64-bit system, the array other_infos consumes
      ~37KiB. For most systems today this is not a problem. On smaller memory
      constrained systems, the config value CONFIG_KCSAN_NUM_WATCHPOINTS can
      be reduced appropriately.
      
      Overall, this change is a simplification of the prepare_report() code,
      and makes some of the checks (such as checking if at least one access is
      a write) redundant.
      
      Tested:
      $ tools/testing/selftests/rcutorture/bin/kvm.sh \
      	--cpus 12 --duration 10 --kconfig "CONFIG_DEBUG_INFO=y \
      	CONFIG_KCSAN=y CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n \
      	CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n \
      	CONFIG_KCSAN_REPORT_ONCE_IN_MS=100000 CONFIG_KCSAN_VERBOSE=y \
      	CONFIG_KCSAN_INTERRUPT_WATCHER=y CONFIG_PROVE_LOCKING=y" \
      	--configs TREE03
      => No longer hangs and runs to completion as expected.
      Reported-by: Paul E. McKenney <paulmck@kernel.org>
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
    • kcsan: Introduce report access_info and other_info · 135c0872
      Marco Elver authored
      Improve readability by introducing access_info and other_info structs,
      and, in preparation for the following commit in this series, replace
      the single instance of other_info with an array of size 1.
      
      No functional change intended.
      Signed-off-by: Marco Elver <elver@google.com>
      Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
  2. 25 Mar, 2020 6 commits
  3. 21 Mar, 2020 28 commits
  4. 20 Mar, 2020 1 commit