Commit 290271de authored by Chris Wilson

drm/i915: Spin for struct_mutex inside shrinker

Having resolved whether or not we would deadlock upon a call to
mutex_lock(&dev->struct_mutex), we can then spin for the contended
struct_mutex if we are not the owner. We cannot afford to simply block
and wait for the mutex, as the owner may itself be waiting for the
allocator -- i.e. a cyclic deadlock. This should significantly improve
the chance of running the shrinker for other processes whilst the GPU is
busy.

A more balanced approach would be to optimistically spin whilst the
mutex owner was on the cpu and there was an opportunity to acquire the
mutex for ourselves quickly. However, that requires support from
kernel/locking/ and a new mutex_spin_trylock() primitive.
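A purely hypothetical sketch of such a primitive (neither mutex_spin_trylock()
nor mutex_owner_running() exists in kernel/locking/; both names are invented
here for illustration) might look like:

	/*
	 * Hypothetical sketch only: neither mutex_spin_trylock() nor
	 * mutex_owner_running() is an existing kernel API.  The idea is
	 * to keep retrying the lock only while the owner is actively
	 * running on another cpu and we have no pending reschedule.
	 */
	bool mutex_spin_trylock(struct mutex *lock)
	{
		do {
			if (mutex_trylock(lock))
				return true;
			cpu_relax();
		} while (mutex_owner_running(lock) && !need_resched());

		return false;
	}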

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170609110350.1767-4-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
parent 0f6ab55d
@@ -38,16 +38,21 @@
 static bool shrinker_lock(struct drm_i915_private *dev_priv, bool *unlock)
 {
 	switch (mutex_trylock_recursive(&dev_priv->drm.struct_mutex)) {
-	case MUTEX_TRYLOCK_FAILED:
-		return false;
-
-	case MUTEX_TRYLOCK_SUCCESS:
-		*unlock = true;
-		return true;
-
 	case MUTEX_TRYLOCK_RECURSIVE:
 		*unlock = false;
 		return true;
+
+	case MUTEX_TRYLOCK_FAILED:
+		do {
+			cpu_relax();
+			if (mutex_trylock(&dev_priv->drm.struct_mutex)) {
+	case MUTEX_TRYLOCK_SUCCESS:
+				*unlock = true;
+				return true;
+			}
+		} while (!need_resched());
+
+		return false;
 	}
 
 	BUG();
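The case label inside the do-while is the subtle part of the patch: a C switch
is effectively a computed goto, so MUTEX_TRYLOCK_SUCCESS jumps directly to the
shared *unlock = true; return true; exit that a mid-spin mutex_trylock() also
reaches. The following standalone sketch (hypothetical names; user-space
stand-ins for the mutex, cpu_relax() and need_resched(); not code from the
patch) demonstrates the same control flow:

	#include <stdbool.h>
	#include <stdio.h>

	enum trylock_result { TRYLOCK_FAILED, TRYLOCK_SUCCESS, TRYLOCK_RECURSIVE };

	static int spins_until_free;		/* models the contended lock */

	static bool fake_trylock(void)
	{
		/* the "owner" drops the lock after a few spins */
		return --spins_until_free <= 0;
	}

	static bool shrinker_lock_sketch(enum trylock_result first, bool *unlock)
	{
		switch (first) {
		case TRYLOCK_RECURSIVE:
			*unlock = false;	/* already held by us: don't unlock later */
			return true;

		case TRYLOCK_FAILED:
			do {
				/* cpu_relax() in the kernel; a plain spin here */
				if (fake_trylock()) {
		case TRYLOCK_SUCCESS:		/* the switch jumps straight in here */
					*unlock = true;
					return true;
				}
			} while (spins_until_free > -16); /* stands in for !need_resched() */
			return false;
		}
		return false;
	}

	int main(void)
	{
		bool unlock = false;
		bool got;

		spins_until_free = 3;	/* trylock succeeds on the third spin */
		got = shrinker_lock_sketch(TRYLOCK_FAILED, &unlock);
		printf("FAILED then spin: got=%d unlock=%d\n", got, unlock);

		got = shrinker_lock_sketch(TRYLOCK_SUCCESS, &unlock);
		printf("SUCCESS:          got=%d unlock=%d\n", got, unlock);
		return 0;
	}

Both calls print got=1 unlock=1: the success path and the spin-then-succeed
path share one exit, which is exactly what the patch exploits to set *unlock
in a single place.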