Commit b8d5a9cc authored by Chris Wilson, committed by Jani Nikula

drm/i915: Encourage our shrinker more when our shmemfs allocation fails

Commit 24f8e00a ("drm/i915: Prefer to report ENOMEM rather than
incur the oom for gfx allocations") made the bold decision to try and
avoid the oomkiller by reporting -ENOMEM to userspace if our allocation
failed after attempting to free enough buffer objects. In short, it
appears we were giving up too easily (even before we start wondering if
one pass of reclaim is as strong as we would like). Part of the problem
is that if we only shrink just enough pages for our expected allocation,
the likelihood of those pages becoming available to us is less than 100%.
To counteract that, we ask for twice the number of pages to be made
available. Furthermore, we allow the shrinker to pull pages from the
active list in later passes.

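For orientation, the allocation loop this patch arrives at looks roughly
like the sketch below. It is reconstructed from the diff that follows,
with the declarations and sg-table bookkeeping elided; the helpers and
I915_SHRINK_* flags are the driver's own:

    for (i = 0; i < page_count; i++) {
            /* one conservative shrink pass, then stop (see v2 note) */
            const unsigned int shrink[] = {
                    I915_SHRINK_BOUND | I915_SHRINK_UNBOUND | I915_SHRINK_PURGEABLE,
                    0,
            }, *s = shrink;
            gfp_t gfp = noreclaim; /* no reclaim, __GFP_NORETRY | __GFP_NOWARN */

            do {
                    page = shmem_read_mapping_page_gfp(mapping, i, gfp);
                    if (likely(!IS_ERR(page)))
                            break;

                    if (!*s) { /* all passes exhausted: report -ENOMEM */
                            ret = PTR_ERR(page);
                            goto err_sg;
                    }

                    /* reap twice what we need; freed pages may go elsewhere */
                    i915_gem_shrink(dev_priv, 2 * page_count, *s++);
                    cond_resched();

                    if (!*s) {
                            /* final try: let the VM reclaim, but no oom */
                            gfp = mapping_gfp_mask(mapping);
                            gfp |= __GFP_NORETRY;
                    }
            } while (1);

            /* ...accumulate the page into the sg table as before... */
    }

Each retry escalates: first we reap our own buffers (asking for twice
page_count, since not every page we free comes back to us), and only the
final attempt restores the mapping's full gfp mask so the VM may reclaim,
keeping __GFP_NORETRY so a true OOM is still reported as -ENOMEM to
userspace rather than waking the oomkiller.
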
v2: Be a little more cautious in paging out gfx buffers, and leave that
to a more balanced approach from shrink_slab(). This matters when combined
with "drm/i915: Start writeback from the shrinker", as anything we shrink
is immediately swapped out, so we should be more conservative about it.

Fixes: 24f8e00a ("drm/i915: Prefer to report ENOMEM rather than incur the oom for gfx allocations")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170609110350.1767-1-chris@chris-wilson.co.uk
(cherry picked from commit 4846bf0c)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
parent a21ef715
@@ -2285,8 +2285,8 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	struct page *page;
 	unsigned long last_pfn = 0;	/* suppress gcc warning */
 	unsigned int max_segment;
+	gfp_t noreclaim;
 	int ret;
-	gfp_t gfp;
 
 	/* Assert that the object is not currently in any GPU domain. As it
 	 * wasn't in the GTT, there shouldn't be any way it could have been in
@@ -2315,22 +2315,31 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 	 * Fail silently without starting the shrinker
 	 */
 	mapping = obj->base.filp->f_mapping;
-	gfp = mapping_gfp_constraint(mapping, ~(__GFP_IO | __GFP_RECLAIM));
-	gfp |= __GFP_NORETRY | __GFP_NOWARN;
+	noreclaim = mapping_gfp_constraint(mapping,
+					   ~(__GFP_IO | __GFP_RECLAIM));
+	noreclaim |= __GFP_NORETRY | __GFP_NOWARN;
+
 	sg = st->sgl;
 	st->nents = 0;
 	for (i = 0; i < page_count; i++) {
-		page = shmem_read_mapping_page_gfp(mapping, i, gfp);
-		if (unlikely(IS_ERR(page))) {
-			i915_gem_shrink(dev_priv,
-					page_count,
-					I915_SHRINK_BOUND |
-					I915_SHRINK_UNBOUND |
-					I915_SHRINK_PURGEABLE);
+		const unsigned int shrink[] = {
+			I915_SHRINK_BOUND | I915_SHRINK_UNBOUND | I915_SHRINK_PURGEABLE,
+			0,
+		}, *s = shrink;
+		gfp_t gfp = noreclaim;
+
+		do {
 			page = shmem_read_mapping_page_gfp(mapping, i, gfp);
-		}
-		if (unlikely(IS_ERR(page))) {
-			gfp_t reclaim;
+			if (likely(!IS_ERR(page)))
+				break;
+
+			if (!*s) {
+				ret = PTR_ERR(page);
+				goto err_sg;
+			}
+
+			i915_gem_shrink(dev_priv, 2 * page_count, *s++);
+			cond_resched();
 
 			/* We've tried hard to allocate the memory by reaping
 			 * our own buffer, now let the real VM do its job and
@@ -2340,15 +2349,13 @@ i915_gem_object_get_pages_gtt(struct drm_i915_gem_object *obj)
 			 * defer the oom here by reporting the ENOMEM back
 			 * to userspace.
 			 */
-			reclaim = mapping_gfp_mask(mapping);
-			reclaim |= __GFP_NORETRY; /* reclaim, but no oom */
-
-			page = shmem_read_mapping_page_gfp(mapping, i, reclaim);
-			if (IS_ERR(page)) {
-				ret = PTR_ERR(page);
-				goto err_sg;
+			if (!*s) {
+				/* reclaim and warn, but no oom */
+				gfp = mapping_gfp_mask(mapping);
+				gfp |= __GFP_NORETRY;
 			}
-		}
+		} while (1);
+
 		if (!i ||
 		    sg->length >= max_segment ||
 		    page_to_pfn(page) != last_pfn + 1) {
@@ -4222,6 +4229,7 @@ i915_gem_object_create(struct drm_i915_private *dev_priv, u64 size)
 
 	mapping = obj->base.filp->f_mapping;
 	mapping_set_gfp_mask(mapping, mask);
+	GEM_BUG_ON(!(mapping_gfp_mask(mapping) & __GFP_RECLAIM));
 
 	i915_gem_object_init(obj, &i915_gem_object_ops);