Commit e6db7f4d authored by Chris Wilson

drm/i915: Break long iterations for get/put shmemfs pages

As we may have to iterate a few thousand elements to acquire and release
the shmemfs backing storage for a GPU object, we need to break up the
long loop with cond_resched() to retain a modicum of low latency for
other processes.
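
As a general illustration of the pattern (not the driver code itself), the fix amounts to dropping a cond_resched() call into the body of any loop that may walk thousands of pages, giving the scheduler a chance to run other tasks between iterations. A minimal sketch, assuming a kernel context; the helper name and signature below are illustrative only:

#include <linux/mm.h>
#include <linux/sched.h>

/*
 * Hypothetical helper: release a large array of pages, yielding the
 * CPU between elements so a multi-thousand element loop does not add
 * scheduling latency for other tasks.
 */
static void release_pages_with_resched(struct page **pages, unsigned long count)
{
	unsigned long i;

	for (i = 0; i < count; i++) {
		put_page(pages[i]);
		cond_resched();	/* reschedule point on every iteration */
	}
}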

Testcase: igt/benchmarks/gem_syslatency
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Kuo-Hsin Yang <vovoy@chromium.org>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181105170640.26905-1-chris@chris-wilson.co.uk
parent bfe60a02
@@ -2404,6 +2404,7 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
 		mark_page_accessed(page);
 		put_page(page);
+		cond_resched();
 	}
 	obj->mm.dirty = false;
@@ -2574,6 +2575,7 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 		gfp_t gfp = noreclaim;
 		do {
+			cond_resched();
 			page = shmem_read_mapping_page_gfp(mapping, i, gfp);
 			if (likely(!IS_ERR(page)))
 				break;
@@ -2584,7 +2586,6 @@ static int i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 			}
 			i915_gem_shrink(dev_priv, 2 * page_count, NULL, *s++);
-			cond_resched();
 			/*
 			 * We've tried hard to allocate the memory by reaping
......
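
Putting the fragments together, a simplified sketch of the allocation retry loop after this change (structure only, not the exact i915_gem.c code; termination and gfp handling are elided). The reschedule point now sits at the top of each pass, so both the fast path and the shrink-and-retry path yield before doing more work:

	do {
		cond_resched();
		page = shmem_read_mapping_page_gfp(mapping, i, gfp);
		if (likely(!IS_ERR(page)))
			break;

		/* Allocation failed: reclaim some of our own buffers
		 * and retry.
		 */
		i915_gem_shrink(dev_priv, 2 * page_count, NULL, *s++);
	} while (1);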