Commit c2831a94 authored by Chris Wilson, committed by Daniel Vetter

drm/i915: Do not force non-caching copies for pwrite along shmem path

We don't always want to write into main memory with pwrite. The shmem
fast path in particular is used for memory that is cacheable - under
such circumstances forcing the cache eviction is undesirable. As we will
always flush the cache when targeting incoherent buffers, we can rely on
that second pass to apply the cache coherency rules and so benefit from
in-cache copies otherwise.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Brad Volkin <bradley.d.volkin@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
parent 17793c9a
@@ -693,9 +693,8 @@ shmem_pwrite_fast(struct page *page, int shmem_page_offset, int page_length,
 	if (needs_clflush_before)
 		drm_clflush_virt_range(vaddr + shmem_page_offset,
 				       page_length);
-	ret = __copy_from_user_inatomic_nocache(vaddr + shmem_page_offset,
-						user_data,
-						page_length);
+	ret = __copy_from_user_inatomic(vaddr + shmem_page_offset,
+					user_data, page_length);
 	if (needs_clflush_after)
 		drm_clflush_virt_range(vaddr + shmem_page_offset,
 				       page_length);