Commit 4f0c7cfb authored by Ben Widawsky, committed by Daniel Vetter

drm/i915: [sparse] __iomem fixes for gem

As with one of the earlier patches in the series, we're forced to cast
for copy_[to|from]_user. Again, because GEN hardware is exclusive to
x86, this should be safe.
Signed-off-by: Ben Widawsky <benjamin.widawsky@intel.com>
[danvet: Added some bikeshed.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
parent 0d38f009
@@ -282,8 +282,8 @@ __copy_to_user_swizzled(char __user *cpu_vaddr,
 }
 
 static inline int
-__copy_from_user_swizzled(char __user *gpu_vaddr, int gpu_offset,
-			  const char *cpu_vaddr,
+__copy_from_user_swizzled(char *gpu_vaddr, int gpu_offset,
+			  const char __user *cpu_vaddr,
 			  int length)
 {
 	int ret, cpu_offset = 0;
@@ -558,11 +558,14 @@ fast_user_write(struct io_mapping *mapping,
 		char __user *user_data,
 		int length)
 {
-	char *vaddr_atomic;
+	void __iomem *vaddr_atomic;
+	void *vaddr;
 	unsigned long unwritten;
 
 	vaddr_atomic = io_mapping_map_atomic_wc(mapping, page_base);
-	unwritten = __copy_from_user_inatomic_nocache(vaddr_atomic + page_offset,
+	/* We can use the cpu mem copy function because this is X86. */
+	vaddr = (void __force*)vaddr_atomic + page_offset;
+	unwritten = __copy_from_user_inatomic_nocache(vaddr,
 						      user_data, length);
 	io_mapping_unmap_atomic(vaddr_atomic);
 	return unwritten;