Commit 00c26cf9 authored by Chris Wilson

drm/i915: Remove toplevel struct_mutex locking from debugfs/i915_drop_caches

I have a plan to write a quick test to exercise concurrent usage of
i915_gem_shrink(); the simplest way looks to be to have multiple threads
using debugfs/i915_drop_caches. However, we currently take one lock over
the entire function, serialising all the calls into i915_gem_shrink(),
so reduce the lock coverage.

Testcase: igt/gem_shrink/reclaim
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170524162653.5446-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
parent b5a82425
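
The test plan in the message above boils down to hammering the same debugfs
file from several threads. Below is a minimal userspace sketch of that idea,
assuming debugfs is mounted at /sys/kernel/debug and the device is DRM minor 0;
the file name, thread count, iteration count and the "0x3" flag mask are
illustrative assumptions, not taken from igt/gem_shrink/reclaim:

/* drop_caches_stress.c -- hypothetical sketch, not the igt testcase.
 * Hammers debugfs/i915_drop_caches from several threads at once.
 * Build: cc -O2 -pthread drop_caches_stress.c -o drop_caches_stress
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NUM_THREADS 4	/* arbitrary */
#define ITERATIONS 256	/* arbitrary */

/* Assumed path for DRM minor 0; adjust for your system. */
static const char path[] = "/sys/kernel/debug/dri/0/i915_drop_caches";

static void *hammer(void *arg)
{
	const char *mask = arg;	/* hex mask of DROP_* bits, see i915_debugfs.c */

	for (int i = 0; i < ITERATIONS; i++) {
		int fd = open(path, O_WRONLY);
		if (fd < 0) {
			perror("open");
			return NULL;
		}
		/* Each write invokes i915_drop_caches_set() with this mask. */
		if (write(fd, mask, strlen(mask)) < 0)
			perror("write");
		close(fd);
	}
	return NULL;
}

int main(void)
{
	static char mask[] = "0x3";	/* illustrative DROP_* mask */
	pthread_t threads[NUM_THREADS];

	/* After this patch, these writers only serialise on struct_mutex for
	 * the short DROP_ACTIVE/DROP_RETIRE phase; the shrink calls overlap. */
	for (int i = 0; i < NUM_THREADS; i++)
		pthread_create(&threads[i], NULL, hammer, mask);
	for (int i = 0; i < NUM_THREADS; i++)
		pthread_join(threads[i], NULL);
	return 0;
}

Run as root with debugfs mounted; before this patch every writer queued on
struct_mutex for the whole function, so the threads never actually overlapped
inside i915_gem_shrink().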
@@ -4289,26 +4289,27 @@ i915_drop_caches_set(void *data, u64 val)
 {
 	struct drm_i915_private *dev_priv = data;
 	struct drm_device *dev = &dev_priv->drm;
-	int ret;
+	int ret = 0;
 
 	DRM_DEBUG("Dropping caches: 0x%08llx\n", val);
 
 	/* No need to check and wait for gpu resets, only libdrm auto-restarts
 	 * on ioctls on -EAGAIN. */
-	ret = mutex_lock_interruptible(&dev->struct_mutex);
-	if (ret)
-		return ret;
-
-	if (val & DROP_ACTIVE) {
-		ret = i915_gem_wait_for_idle(dev_priv,
-					     I915_WAIT_INTERRUPTIBLE |
-					     I915_WAIT_LOCKED);
+	if (val & (DROP_ACTIVE | DROP_RETIRE)) {
+		ret = mutex_lock_interruptible(&dev->struct_mutex);
 		if (ret)
-			goto unlock;
-	}
+			return ret;
 
-	if (val & DROP_RETIRE)
-		i915_gem_retire_requests(dev_priv);
+		if (val & DROP_ACTIVE)
+			ret = i915_gem_wait_for_idle(dev_priv,
+						     I915_WAIT_INTERRUPTIBLE |
+						     I915_WAIT_LOCKED);
+
+		if (val & DROP_RETIRE)
+			i915_gem_retire_requests(dev_priv);
+
+		mutex_unlock(&dev->struct_mutex);
+	}
 
 	lockdep_set_current_reclaim_state(GFP_KERNEL);
 	if (val & DROP_BOUND)
@@ -4321,9 +4322,6 @@ i915_drop_caches_set(void *data, u64 val)
 		i915_gem_shrink_all(dev_priv);
 	lockdep_clear_current_reclaim_state();
 
-unlock:
-	mutex_unlock(&dev->struct_mutex);
-
 	if (val & DROP_FREED) {
 		synchronize_rcu();
 		i915_gem_drain_freed_objects(dev_priv);
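
Read together, the two hunks leave i915_drop_caches_set() with the locking
structure below (a condensed view of the new side of the diff; the shrink
calls between the two hunks are elided here):

	if (val & (DROP_ACTIVE | DROP_RETIRE)) {
		ret = mutex_lock_interruptible(&dev->struct_mutex);
		if (ret)
			return ret;
		/* wait for idle (DROP_ACTIVE), retire requests (DROP_RETIRE) */
		mutex_unlock(&dev->struct_mutex);
	}

	lockdep_set_current_reclaim_state(GFP_KERNEL);
	/* ... the i915_gem_shrink()/i915_gem_shrink_all() calls now run
	 * without struct_mutex, so concurrent callers no longer serialise
	 * here ... */
	lockdep_clear_current_reclaim_state();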