Commit 7d0eb51d authored by Chris Wilson, committed by Rodrigo Vivi

drm/i915: Prevent bonded requests from overtaking each other on preemption

Force bonded requests to run on distinct engines so that they cannot be
shuffled onto the same engine where timeslicing will reverse the order.
A bonded request will often wait on a semaphore signaled by its master,
creating an implicit dependency -- if we ignore that implicit dependency
and allow the bonded request to run on the same engine and before its
master, we will cause a GPU hang. [Whether it will actually hang the GPU
is debatable; we should keep on timeslicing, and each timeslice should
be "accidentally" counted as forward progress, in which case it would
run, but at one-half to one-third speed.]

We can prevent this inversion by restricting which engines we allow
ourselves to jump to upon preemption, i.e. baking in the arrangement
established at first execution. (We should also consider capturing the
implicit dependency using i915_sched_add_dependency(), but first we need
to think about the constraints that requires on the execution/retirement
ordering.)

Fixes: 8ee36e04 ("drm/i915/execlists: Minimalistic timeslicing")
References: ee113690 ("drm/i915/execlists: Virtual engine bonding")
Testcase: igt/gem_exec_balancer/bonded-slice
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923152844.8914-3-chris@chris-wilson.co.uk
(cherry picked from commit e2144503)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
parent dc789099
@@ -3630,18 +3630,22 @@ static void
 virtual_bond_execute(struct i915_request *rq, struct dma_fence *signal)
 {
 	struct virtual_engine *ve = to_virtual_engine(rq->engine);
+	intel_engine_mask_t allowed, exec;
 	struct ve_bond *bond;
 
+	allowed = ~to_request(signal)->engine->mask;
+
 	bond = virtual_find_bond(ve, to_request(signal)->engine);
-	if (bond) {
-		intel_engine_mask_t old, new, cmp;
-
-		cmp = READ_ONCE(rq->execution_mask);
-		do {
-			old = cmp;
-			new = cmp & bond->sibling_mask;
-		} while ((cmp = cmpxchg(&rq->execution_mask, old, new)) != old);
-	}
+	if (bond)
+		allowed &= bond->sibling_mask;
+
+	/* Restrict the bonded request to run on only the available engines */
+	exec = READ_ONCE(rq->execution_mask);
+	while (!try_cmpxchg(&rq->execution_mask, &exec, exec & allowed))
+		;
+
+	/* Prevent the master from being re-run on the bonded engines */
+	to_request(signal)->execution_mask &= ~allowed;
 }
 
 struct intel_context *