1. 29 Nov, 2019 6 commits
  2. 28 Nov, 2019 4 commits
  3. 27 Nov, 2019 5 commits
  4. 26 Nov, 2019 3 commits
  5. 25 Nov, 2019 7 commits
  6. 23 Nov, 2019 2 commits
  7. 22 Nov, 2019 5 commits
    • drm/i915: coffeelake supports hdcp2.2 · 6025ba12
      Juston Li authored
      This includes other platforms that utilize the same gen graphics as
      CFL: AML, WHL and CML.
      Signed-off-by: Juston Li <juston.li@intel.com>
      Reviewed-by: Ramalingam C <ramalingam.c@intel.com>
      Signed-off-by: Lyude Paul <lyude@redhat.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191011181918.29618-1-juston.li@intel.com
    • drm/i915/selftests: Flush the active callbacks · e8e61f10
      Chris Wilson authored
      Before checking the current i915_active state for the asynchronous work
      we submitted, flush any ongoing callback. This ensures that our sampling
      is robust and does not sporadically fail due to bad timing as the work
      is running on another cpu.
      
      v2: Drop the fence callback sync, retiring under the lock should be good
      enough to synchronize with engine_retire() and the
      intel_gt_retire_requests() background worker.
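      The race the commit closes can be sketched in plain userspace C (an illustration only, not i915 code: a pthread worker stands in for the engine_retire() callback, and the names sample_after_flush()/worker() are hypothetical). Sampling shared state is only robust once the in-flight callback has been flushed, here by joining the thread:

      ```c
      #include <pthread.h>
      #include <stdatomic.h>

      static atomic_int state;

      /* The asynchronous callback: runs on another cpu and updates state. */
      static void *worker(void *arg)
      {
          (void)arg;
          atomic_store(&state, 1);
          return NULL;
      }

      /* Flush the ongoing callback before sampling, so the result does not
       * depend on scheduler timing. Checking state before the join could
       * sporadically observe 0. */
      static int sample_after_flush(void)
      {
          pthread_t t;
          atomic_store(&state, 0);
          pthread_create(&t, NULL, worker, NULL);
          pthread_join(t, NULL);   /* the "flush": wait for the callback */
          return atomic_load(&state);
      }

      int main(void)
      {
          return sample_after_flush() == 1 ? 0 : 1;
      }
      ```

      The design point mirrors the v2 note: instead of per-fence callback synchronization, one synchronization point (here the join; in the commit, retiring under the lock) is enough to order the sample against the background worker.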
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191122132404.690440-1-chris@chris-wilson.co.uk
    • drm/i915/selftests: Force bonded submission to overlap · cfd821b2
      Chris Wilson authored
      Bonded request submission is designed to allow requests to execute in
      parallel as laid out by the user. If the master request has already
      finished before its bonded pair is submitted, the pair was never destined
      to run in parallel, and we lose the information about the master engine
      needed to dictate selection of the secondary. If the second request was
      required to run on a particular engine in a virtual set, that should
      have been specified explicitly, rather than left to the whims of random,
      unconnected requests!
      
      In the selftest, I made the mistake of not ensuring the master would
      overlap with its bonded pairs, meaning that it could indeed complete
      before we submitted the bonds. Those bonds were then free to select any
      available engine in their virtual set, and not the one expected by the
      test.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191122112152.660743-1-chris@chris-wilson.co.uk
    • drm/i915: Use a ctor for TYPESAFE_BY_RCU i915_request · 67a3acaa
      Chris Wilson authored
      As we start peeking into requests for longer and longer, e.g.
      incorporating use of spinlocks when only protected by an
      rcu_read_lock(), we need to be careful in how we reset the request when
      recycling and need to preserve any barriers that may still be in use as
      the request is reset for reuse.
      
      Quoting Linus Torvalds:
      
      > If there is refcounting going on then why use SLAB_TYPESAFE_BY_RCU?
      
        .. because the object can be accessed (by RCU) after the refcount has
        gone down to zero, and the thing has been released.
      
        That's the whole and only point of SLAB_TYPESAFE_BY_RCU.
      
        That flag basically says:
      
        "I may end up accessing this object *after* it has been free'd,
        because there may be RCU lookups in flight"
      
        This has nothing to do with constructors. It's ok if the object gets
        reused as an object of the same type and does *not* get
        re-initialized, because we're perfectly fine seeing old stale data.
      
        What it guarantees is that the slab isn't shared with any other kind
        of object, _and_ that the underlying pages are free'd after an RCU
        quiescent period (so the pages aren't shared with another kind of
        object either during an RCU walk).
      
        And it doesn't necessarily have to have a constructor, because the
        thing that a RCU walk will care about is
      
          (a) guaranteed to be an object that *has* been on some RCU list (so
          it's not a "new" object)
      
          (b) the RCU walk needs to have logic to verify that it's still the
          *same* object and hasn't been re-used as something else.
      
        In contrast, a SLAB_TYPESAFE_BY_RCU memory gets free'd and re-used
        immediately, but because it gets reused as the same kind of object,
        the RCU walker can "know" what parts have meaning for re-use, in a way
        it couldn't if the re-use was random.
      
        That said, it *is* subtle, and people should be careful.
      
      > So the re-use might initialize the fields lazily, not necessarily using a ctor.
      
        If you have a well-defined refcount, and use "atomic_inc_not_zero()"
        to guard the speculative RCU access section, and use
        "atomic_dec_and_test()" in the freeing section, then you should be
        safe wrt new allocations.
      
        If you have a completely new allocation that has "random stale
        content", you know that it cannot be on the RCU list, so there is no
        speculative access that can ever see that random content.
      
        So the only case you need to worry about is a re-use allocation, and
        you know that the refcount will start out as zero even if you don't
        have a constructor.
      
        So you can think of the refcount itself as always having a zero
        constructor, *BUT* you need to be careful with ordering.
      
        In particular, whoever does the allocation needs to then set the
        refcount to a non-zero value *after* it has initialized all the other
        fields. And in particular, it needs to make sure that it uses the
        proper memory ordering to do so.
      
        NOTE! One thing to be very worried about is that re-initializing
        whatever RCU lists means that now the RCU walker may be walking on the
        wrong list so the walker may do the right thing for this particular
        entry, but it may miss walking *other* entries. So then you can get
        spurious lookup failures, because the RCU walker never walked all the
        way to the end of the right list. That ends up being a much more
        subtle bug.
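      The pattern Linus describes can be sketched with C11 atomics in userspace (an illustration under stated assumptions, not i915 code: inc_not_zero() and struct obj are hypothetical stand-ins for the kernel's atomic_inc_not_zero() and the recycled i915_request). It shows the two halves of the rule: the reader guards speculative access with an inc-not-zero, and the allocator publishes the refcount last, with release ordering:

      ```c
      #include <assert.h>
      #include <stdatomic.h>
      #include <stdbool.h>

      /* Stand-in for the kernel's atomic_inc_not_zero(): take a reference
       * only if the object is still live (refcount > 0). A reader that
       * loses this race must treat the lookup as a miss. */
      static bool inc_not_zero(atomic_int *ref)
      {
          int old = atomic_load(ref);
          while (old != 0) {
              if (atomic_compare_exchange_weak(ref, &old, old + 1))
                  return true;
          }
          return false;
      }

      struct obj {
          int key;             /* readers re-check identity after taking a ref */
          atomic_int refcount; /* zero means freed; the slot may be recycled */
      };

      /* Allocation side of the ordering rule: initialize every other field
       * first, then set the refcount non-zero with release ordering, so any
       * reader that observes refcount > 0 also observes the fields. */
      static void publish(struct obj *o, int key)
      {
          o->key = key;
          atomic_store_explicit(&o->refcount, 1, memory_order_release);
      }

      int main(void)
      {
          struct obj live, dead;
          atomic_init(&live.refcount, 0);
          atomic_init(&dead.refcount, 0);

          publish(&live, 42);
          assert(inc_not_zero(&live.refcount));  /* live object: ref taken */
          assert(live.key == 42);                /* then re-check identity */
          assert(!inc_not_zero(&dead.refcount)); /* refcount hit zero: miss */
          return 0;
      }
      ```

      The identity re-check after taking the reference is point (b) above: under SLAB_TYPESAFE_BY_RCU the memory may already have been recycled as another object of the same type, so a successful inc_not_zero() only proves the reader holds *some* live object, not necessarily the one it looked up.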
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191122094924.629690-1-chris@chris-wilson.co.uk
    • drm/i915/selftests: Shorten infinite wait for sseu · f05bfce3
      Chris Wilson authored
      Use our more regular igt_flush_test() to bound the wait-for-idle and
      error out instead of waiting around forever on critical failure.
      Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
      Reviewed-by: Stuart Summers <stuart.summers@intel.com>
      Link: https://patchwork.freedesktop.org/patch/msgid/20191121233021.507400-1-chris@chris-wilson.co.uk
  8. 21 Nov, 2019 6 commits
  9. 20 Nov, 2019 2 commits