Commit 0959a168 authored by Daniel Vetter, committed by Sumit Semwal

dma-buf: Update cpu access documentation

- Again move the information relevant for driver writers next to the
  callbacks.
- Put the overview and userspace interface documentation into a DOC:
  section within the code.
- Remove the text that mmap needs to be coherent - since the
  DMA_BUF_IOCTL_SYNC landed that's no longer the case. But keep the text
  that for pte zapping exporters need to adjust the address space.
- Add a FIXME that kmap and the new begin/end stuff used by the SYNC
  ioctl don't really mix correctly. That's something I just realized
  while doing this doc rework.
- Augment function and structure docs like usual.

Cc: linux-doc@vger.kernel.org
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
  [sumits: fix cosmetic issues]
Link: http://patchwork.freedesktop.org/patch/msgid/20161209185309.1682-5-daniel.vetter@ffwll.ch
parent 2904a8c1
@@ -6,205 +6,6 @@
<sumit dot semwal at ti dot com>
Kernel cpu access to a dma-buf buffer object
--------------------------------------------
The motivations for allowing cpu access from the kernel to a dma-buf object on
the importer's side are:
- fallback operations, e.g. if the device is connected to a usb bus and the
kernel needs to shuffle the data around first before sending it away.
- full transparency for existing users on the importer side, i.e. userspace
should not notice the difference between a normal object from that subsystem
and an imported one backed by a dma-buf. This is really important for drm
opengl drivers that expect to still use all the existing upload/download
paths.
Access to a dma_buf from the kernel context involves three steps:
1. Prepare access, which invalidates any necessary caches and makes the object
available for cpu access.
2. Access the object page-by-page with the dma_buf map apis.
3. Finish access, which will flush any necessary cpu caches and free reserved
resources.
1. Prepare access
Before an importer can access a dma_buf object with the cpu from the kernel
context, it needs to notify the exporter of the access that is about to
happen.
Interface:
int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
This allows the exporter to ensure that the memory is actually available for
cpu access - the exporter might need to allocate or swap-in and pin the
backing storage. The exporter also needs to ensure that cpu access is
coherent for the access direction. The direction can be used by the exporter
to optimize the cache flushing, i.e. access with a different direction (read
instead of write) might return stale or even bogus data (e.g. when the
exporter needs to copy the data to temporary storage).
This step might fail, e.g. in oom conditions.
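For illustration only (not part of the original text), an importer preparing a
buffer for CPU reads might use a sketch like the following; the helper name is
made up:

    #include <linux/dma-buf.h>
    #include <linux/dma-mapping.h>

    /* Hypothetical importer helper: make the buffer visible to the CPU. */
    static int importer_begin_cpu_read(struct dma_buf *dmabuf)
    {
            int ret;

            /* May block; may fail, e.g. with -ENOMEM in oom conditions. */
            ret = dma_buf_begin_cpu_access(dmabuf, DMA_FROM_DEVICE);
            if (ret)
                    return ret;

            return 0;
    }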
2. Accessing the buffer
To support dma_buf objects residing in highmem cpu access is page-based using
an api similar to kmap. Accessing a dma_buf is done in aligned chunks of
PAGE_SIZE size. Before accessing a chunk it needs to be mapped, which returns
a pointer in kernel virtual address space. Afterwards the chunk needs to be
unmapped again. There is no limit on how often a given chunk can be mapped
and unmapped, i.e. the importer does not need to call begin_cpu_access again
before mapping the same chunk again.
Interfaces:
void *dma_buf_kmap(struct dma_buf *, unsigned long);
void dma_buf_kunmap(struct dma_buf *, unsigned long, void *);
There are also atomic variants of these interfaces. Like for kmap they
facilitate non-blocking fast-paths. Neither the importer nor the exporter (in
the callback) is allowed to block when using these.
Interfaces:
void *dma_buf_kmap_atomic(struct dma_buf *, unsigned long);
void dma_buf_kunmap_atomic(struct dma_buf *, unsigned long, void *);
For importers all the restrictions of using kmap apply, like the limited
supply of kmap_atomic slots. Hence an importer shall only hold onto at most 2
atomic dma_buf kmaps at the same time (in any given process context).
dma_buf kmap calls outside of the range specified in begin_cpu_access are
undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on
the partial chunks at the beginning and end but may return stale or bogus
data outside of the range (in these partial chunks).
Note that these calls need to always succeed. The exporter needs to complete
any preparations that might fail in begin_cpu_access.
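As an illustrative sketch (assuming begin_cpu_access already succeeded and the
buffer size is a multiple of PAGE_SIZE), a page-wise access loop could look
like this:

    #include <linux/dma-buf.h>
    #include <linux/mm.h>
    #include <linux/string.h>

    /* Hypothetical helper: clear a dma-buf one page at a time. */
    static void importer_clear(struct dma_buf *dmabuf)
    {
            unsigned long page;

            for (page = 0; page < dmabuf->size / PAGE_SIZE; page++) {
                    void *vaddr = dma_buf_kmap(dmabuf, page);

                    memset(vaddr, 0, PAGE_SIZE);
                    dma_buf_kunmap(dmabuf, page, vaddr);
            }
    }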
For some cases the overhead of kmap can be too high, so a vmap interface
is introduced. This interface should be used very carefully, as vmalloc
space is a limited resource on many architectures.
Interfaces:
void *dma_buf_vmap(struct dma_buf *dmabuf)
void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
The vmap call can fail if there is no vmap support in the exporter, or if it
runs out of vmalloc space. Fallback to kmap should be implemented. Note that
the dma-buf layer keeps a reference count for all vmap access and calls down
into the exporter's vmap function only when no vmapping exists, and only
unmaps it once. Protection against concurrent vmap/vunmap calls is provided
by taking the dma_buf->lock mutex.
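A rough sketch of preferring vmap with a kmap fallback (the helper name is
hypothetical; the fallback is only hinted at in a comment):

    #include <linux/dma-buf.h>

    /* Hypothetical helper: map the whole buffer contiguously if possible. */
    static int importer_process(struct dma_buf *dmabuf)
    {
            void *vaddr = dma_buf_vmap(dmabuf);

            if (!vaddr)
                    return -ENOMEM; /* or fall back to page-wise dma_buf_kmap() */

            /* ... access the contiguous mapping at vaddr ... */

            dma_buf_vunmap(dmabuf, vaddr);
            return 0;
    }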
3. Finish access
When the importer is done accessing the buffer with the CPU, it needs to announce this to
the exporter (to facilitate cache flushing and unpinning of any pinned
resources). The result of any dma_buf kmap calls after end_cpu_access is
undefined.
Interface:
void dma_buf_end_cpu_access(struct dma_buf *dma_buf,
enum dma_data_direction dir);
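A minimal sketch closing the bracket opened in step 1 (the helper name is
hypothetical; the direction must match the access that was performed):

    /* Hypothetical helper: finish CPU access; any dma_buf kmap call after
     * this point is undefined until begin_cpu_access is called again. */
    static int importer_end_cpu_read(struct dma_buf *dmabuf)
    {
            return dma_buf_end_cpu_access(dmabuf, DMA_FROM_DEVICE);
    }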
Direct Userspace Access/mmap Support
------------------------------------
Being able to mmap an exported dma-buf buffer object has 2 main use-cases:
- CPU fallback processing in a pipeline and
- supporting existing mmap interfaces in importers.
1. CPU fallback processing in a pipeline
In many processing pipelines it is sometimes required that the cpu can access
the data in a dma-buf (e.g. for thumbnail creation, snapshots, ...). To avoid
the need to handle this specially in userspace frameworks for buffer sharing
it's ideal if the dma_buf fd itself can be used to access the backing storage
from userspace using mmap.
Furthermore Android's ION framework already supports this (and is otherwise
rather similar to dma-buf from a userspace consumer side with using fds as
handles, too). So it's beneficial to support this in a similar fashion on
dma-buf to have a good transition path for existing Android userspace.
No special interfaces are needed: userspace simply calls mmap on the dma-buf fd, making
sure that the cache synchronization ioctl (DMA_BUF_IOCTL_SYNC) is *always*
used when the access happens. Note that DMA_BUF_IOCTL_SYNC can fail with
-EAGAIN or -EINTR, in which case it must be restarted.
Some systems might need some sort of cache coherency management e.g. when
CPU and GPU domains are being accessed through dma-buf at the same time. To
circumvent this problem there are begin/end coherency markers, that forward
directly to existing dma-buf device drivers vfunc hooks. Userspace can make
use of those markers through the DMA_BUF_IOCTL_SYNC ioctl. The sequence
would be used as follows:
- mmap dma-buf fd
- for each drawing/upload cycle on the CPU: 1. SYNC_START ioctl, 2. read/write
to the mmap'ed area, 3. SYNC_END ioctl. This can be repeated as often as you
want (with the new data being consumed by the GPU or, say, a scanout device)
- munmap once you don't need the buffer any more
For correctness and optimal performance, it is always required to use
SYNC_START and SYNC_END before and after, respectively, when accessing the
mapped address. Userspace cannot rely on coherent access, even when there
are systems where it just works without calling these ioctls.
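As a userspace sketch of the above sequence (not from the original document;
buf_fd and buf_size are assumed to come from whatever subsystem exported the
buffer):

    #include <linux/dma-buf.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <errno.h>
    #include <string.h>

    static int fill_buffer(int buf_fd, size_t buf_size)
    {
            struct dma_buf_sync sync = { 0 };
            void *map;

            map = mmap(NULL, buf_size, PROT_READ | PROT_WRITE, MAP_SHARED,
                       buf_fd, 0);
            if (map == MAP_FAILED)
                    return -1;

            /* SYNC_START, restarting on EAGAIN/EINTR as noted above */
            sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE;
            while (ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync) &&
                   (errno == EAGAIN || errno == EINTR))
                    ;

            memset(map, 0, buf_size);       /* the actual CPU access */

            /* SYNC_END, same restart rule */
            sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
            while (ioctl(buf_fd, DMA_BUF_IOCTL_SYNC, &sync) &&
                   (errno == EAGAIN || errno == EINTR))
                    ;

            munmap(map, buf_size);
            return 0;
    }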
2. Supporting existing mmap interfaces in importers
Similar to the motivation for kernel cpu access it is again important that
the userspace code of a given importing subsystem can use the same interfaces
with an imported dma-buf buffer object as with a native buffer object. This is
especially important for drm where the userspace part of contemporary OpenGL,
X, and other drivers is huge, and reworking them to use a different way to
mmap a buffer would be rather invasive.
The assumption in the current dma-buf interfaces is that redirecting the
initial mmap is all that's needed. A survey of some of the existing
subsystems shows that no driver seems to do any nefarious thing like syncing
up with outstanding asynchronous processing on the device or allocating
special resources at fault time. So hopefully this is good enough, since
adding interfaces to intercept pagefaults and allow pte shootdowns would
increase the complexity quite a bit.
Interface:
int dma_buf_mmap(struct dma_buf *, struct vm_area_struct *,
unsigned long);
If the importing subsystem simply provides a special-purpose mmap call to set
up a mapping in userspace, calling do_mmap with dma_buf->file will equally
achieve that for a dma-buf object.
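For illustration, an importer's mmap path that just forwards to the exporter
might look like this; struct importer_object and its dmabuf member are
hypothetical:

    #include <linux/dma-buf.h>
    #include <linux/mm_types.h>

    struct importer_object {
            struct dma_buf *dmabuf;
    };

    /* Hypothetical importer mmap handler: redirect to the exporter. */
    static int importer_object_mmap(struct importer_object *obj,
                                    struct vm_area_struct *vma)
    {
            /* pgoff 0: map the imported buffer from its start */
            return dma_buf_mmap(obj->dmabuf, vma, 0);
    }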
3. Implementation notes for exporters
Because dma-buf buffers have invariant size over their lifetime, the dma-buf
core checks whether a vma is too large and rejects such mappings. The
exporter hence does not need to duplicate this check.
Because existing importing subsystems might presume coherent mappings for
userspace, the exporter needs to set up a coherent mapping. If that's not
possible, it needs to fake coherency by manually shooting down ptes when
leaving the cpu domain and flushing caches at fault time. Note that all the
dma_buf files share the same anon inode, hence the exporter needs to replace
the dma_buf file stored in vma->vm_file with its own if pte shootdown is
required. This is because the kernel uses the underlying inode's address_space
for vma tracking (and hence pte tracking at shootdown time with
unmap_mapping_range).
If the above shootdown dance turns out to be too expensive in certain
scenarios, we can extend dma-buf with a more explicit cache tracking scheme
for userspace mappings. But the current assumption is that using mmap is
always a slower path, so some inefficiencies should be acceptable.
Exporters that shoot down mappings (for any reasons) shall not do any
synchronization at fault time with outstanding device operations.
Synchronization is an orthogonal issue to sharing the backing storage of a
buffer and hence should not be handled by dma-buf itself. This is explicitly
mentioned here because many people seem to want something like this, but if
different exporters handle this differently, buffer sharing can fail in
interesting ways depending upon the exporter (if userspace starts depending
upon this implicit synchronization).
Other Interfaces Exposed to Userspace on the dma-buf FD
------------------------------------------------------
@@ -240,20 +41,6 @@ Miscellaneous notes
the exporting driver to create a dmabuf fd must provide a way to let
userspace control setting of O_CLOEXEC flag passed in to dma_buf_fd().
- If an exporter needs to manually flush caches and hence needs to fake
coherency for mmap support, it needs to be able to zap all the ptes pointing
at the backing storage. Now linux mm needs a struct address_space associated
with the struct file stored in vma->vm_file to do that with the function
unmap_mapping_range. But the dma_buf framework only backs every dma_buf fd
with the anon_file struct file, i.e. all dma_bufs share the same file.
Hence exporters need to setup their own file (and address_space) association
by setting vma->vm_file and adjusting vma->vm_pgoff in the dma_buf mmap
callback. In the specific case of a gem driver the exporter could use the
shmem file already provided by gem (and set vm_pgoff = 0). Exporters can then
zap ptes by unmapping the corresponding range of the struct address_space
associated with their own file.
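A rough, illustrative sketch of that association and the later pte zapping;
struct my_buffer with its filp and size fields is hypothetical driver-private
state, and a real mmap callback would also install vm_ops, handle offsets and
so on:

    #include <linux/dma-buf.h>
    #include <linux/file.h>
    #include <linux/fs.h>
    #include <linux/mm.h>

    struct my_buffer {
            struct file *filp;      /* e.g. the shmem file provided by gem */
            size_t size;
    };

    static int my_exporter_mmap(struct dma_buf *dmabuf,
                                struct vm_area_struct *vma)
    {
            struct my_buffer *bo = dmabuf->priv;

            /* Replace the shared dma-buf anon file with the driver's own
             * file so unmap_mapping_range() can find these ptes later. */
            fput(vma->vm_file);
            vma->vm_file = get_file(bo->filp);
            vma->vm_pgoff = 0;

            /* ... set vma->vm_ops and map the backing pages ... */
            return 0;
    }

    /* When leaving the cpu domain: zap all userspace ptes of the buffer. */
    static void my_exporter_zap_ptes(struct my_buffer *bo)
    {
            unmap_mapping_range(bo->filp->f_mapping, 0, bo->size, 1);
    }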
References:
[1] struct dma_buf_ops in include/linux/dma-buf.h
[2] All interfaces mentioned above defined in include/linux/dma-buf.h
...
@@ -52,6 +52,12 @@ Basic Operation and Device DMA Access
.. kernel-doc:: drivers/dma-buf/dma-buf.c
:doc: dma buf device access
CPU Access to DMA Buffer Objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: drivers/dma-buf/dma-buf.c
:doc: cpu access
Kernel Functions and Structures Reference
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
...
@@ -640,6 +640,122 @@ void dma_buf_unmap_attachment(struct dma_buf_attachment *attach,
}
EXPORT_SYMBOL_GPL(dma_buf_unmap_attachment);
/**
* DOC: cpu access
*
* There are multiple reasons for supporting CPU access to a dma buffer object:
*
* - Fallback operations in the kernel, for example when a device is connected
* over USB and the kernel needs to shuffle the data around first before
* sending it away. Cache coherency is handled by bracketing any transactions
* with calls to dma_buf_begin_cpu_access() and dma_buf_end_cpu_access().
*
* To support dma_buf objects residing in highmem cpu access is page-based
* using an api similar to kmap. Accessing a dma_buf is done in aligned chunks
* of PAGE_SIZE size. Before accessing a chunk it needs to be mapped, which
* returns a pointer in kernel virtual address space. Afterwards the chunk
* needs to be unmapped again. There is no limit on how often a given chunk
* can be mapped and unmapped, i.e. the importer does not need to call
* begin_cpu_access again before mapping the same chunk again.
*
* Interfaces::
* void \*dma_buf_kmap(struct dma_buf \*, unsigned long);
* void dma_buf_kunmap(struct dma_buf \*, unsigned long, void \*);
*
* There are also atomic variants of these interfaces. Like for kmap they
* facilitate non-blocking fast-paths. Neither the importer nor the exporter
* (in the callback) is allowed to block when using these.
*
* Interfaces::
* void \*dma_buf_kmap_atomic(struct dma_buf \*, unsigned long);
* void dma_buf_kunmap_atomic(struct dma_buf \*, unsigned long, void \*);
*
* For importers all the restrictions of using kmap apply, like the limited
* supply of kmap_atomic slots. Hence an importer shall only hold onto at
* most 2 atomic dma_buf kmaps at the same time (in any given process context).
*
* dma_buf kmap calls outside of the range specified in begin_cpu_access are
* undefined. If the range is not PAGE_SIZE aligned, kmap needs to succeed on
* the partial chunks at the beginning and end but may return stale or bogus
* data outside of the range (in these partial chunks).
*
* Note that these calls need to always succeed. The exporter needs to
* complete any preparations that might fail in begin_cpu_access.
*
* For some cases the overhead of kmap can be too high, so a vmap interface
* is introduced. This interface should be used very carefully, as vmalloc
* space is a limited resource on many architectures.
*
* Interfaces::
* void \*dma_buf_vmap(struct dma_buf \*dmabuf)
* void dma_buf_vunmap(struct dma_buf \*dmabuf, void \*vaddr)
*
* The vmap call can fail if there is no vmap support in the exporter, or if
* it runs out of vmalloc space. Fallback to kmap should be implemented. Note
* that the dma-buf layer keeps a reference count for all vmap access and
* calls down into the exporter's vmap function only when no vmapping exists,
* and only unmaps it once. Protection against concurrent vmap/vunmap calls is
* provided by taking the dma_buf->lock mutex.
*
* - For full compatibility on the importer side with existing userspace
* interfaces, which might already support mmap'ing buffers. This is needed in
* many processing pipelines (e.g. feeding a software rendered image into a
* hardware pipeline, thumbnail creation, snapshots, ...). Also, Android's ION
* framework already supported this and for DMA buffer file descriptors to
* replace ION buffers mmap support was needed.
*
* There are no special interfaces: userspace simply calls mmap on the dma-buf
* fd. But like for CPU access there's a need to bracket the actual access,
* which is handled by the ioctl (DMA_BUF_IOCTL_SYNC). Note that
* DMA_BUF_IOCTL_SYNC can fail with -EAGAIN or -EINTR, in which case it must
* be restarted.
*
* Some systems might need some sort of cache coherency management e.g. when
* CPU and GPU domains are being accessed through dma-buf at the same time.
* To circumvent this problem there are begin/end coherency markers, that
* forward directly to existing dma-buf device drivers vfunc hooks. Userspace
* can make use of those markers through the DMA_BUF_IOCTL_SYNC ioctl. The
* sequence would be used as follows:
*
* - mmap dma-buf fd
* - for each drawing/upload cycle on the CPU: 1. SYNC_START ioctl, 2. read/write
*   to the mmap'ed area, 3. SYNC_END ioctl. This can be repeated as often as you
* want (with the new data being consumed by say the GPU or the scanout
* device)
* - munmap once you don't need the buffer any more
*
* For correctness and optimal performance, it is always required to use
* SYNC_START and SYNC_END before and after, respectively, when accessing the
* mapped address. Userspace cannot rely on coherent access, even when there
* are systems where it just works without calling these ioctls.
*
* - And as a CPU fallback in userspace processing pipelines.
*
* Similar to the motivation for kernel cpu access it is again important that
* the userspace code of a given importing subsystem can use the same
* interfaces with an imported dma-buf buffer object as with a native buffer
* object. This is especially important for drm where the userspace part of
* contemporary OpenGL, X, and other drivers is huge, and reworking them to
* use a different way to mmap a buffer would be rather invasive.
*
* The assumption in the current dma-buf interfaces is that redirecting the
* initial mmap is all that's needed. A survey of some of the existing
* subsystems shows that no driver seems to do any nefarious thing like
* syncing up with outstanding asynchronous processing on the device or
* allocating special resources at fault time. So hopefully this is good
* enough, since adding interfaces to intercept pagefaults and allow pte
* shootdowns would increase the complexity quite a bit.
*
* Interface::
* int dma_buf_mmap(struct dma_buf \*, struct vm_area_struct \*,
* unsigned long);
*
* If the importing subsystem simply provides a special-purpose mmap call to
* set up a mapping in userspace, calling do_mmap with dma_buf->file will
* equally achieve that for a dma-buf object.
*/
static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
enum dma_data_direction direction)
{
@@ -665,6 +781,10 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
* @dmabuf: [in] buffer to prepare cpu access for.
* @direction: [in] direction of the cpu access.
*
* After the cpu access is complete the caller should call
* dma_buf_end_cpu_access(). Only when cpu access is bracketed by both calls is
* it guaranteed to be coherent with other DMA access.
*
* Can return negative error values, returns 0 on success. * Can return negative error values, returns 0 on success.
*/ */
int dma_buf_begin_cpu_access(struct dma_buf *dmabuf, int dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
...@@ -697,6 +817,8 @@ EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access); ...@@ -697,6 +817,8 @@ EXPORT_SYMBOL_GPL(dma_buf_begin_cpu_access);
* @dmabuf: [in] buffer to complete cpu access for. * @dmabuf: [in] buffer to complete cpu access for.
* @direction: [in] length of range for cpu access. * @direction: [in] length of range for cpu access.
* *
* This terminates CPU access started with dma_buf_begin_cpu_access().
*
* Can return negative error values, returns 0 on success. * Can return negative error values, returns 0 on success.
*/ */
int dma_buf_end_cpu_access(struct dma_buf *dmabuf, int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
......
...@@ -39,10 +39,6 @@ struct dma_buf_attachment; ...@@ -39,10 +39,6 @@ struct dma_buf_attachment;
/** /**
* struct dma_buf_ops - operations possible on struct dma_buf * struct dma_buf_ops - operations possible on struct dma_buf
* @begin_cpu_access: [optional] called before cpu access to invalidate cpu
* caches and allocate backing storage (if not yet done)
* respectively pin the object into memory.
* @end_cpu_access: [optional] called after cpu access to flush caches.
* @kmap_atomic: maps a page from the buffer into kernel address
* space, users may not block until the subsequent unmap call.
* This callback must not sleep.
@@ -50,10 +46,6 @@ struct dma_buf_attachment;
* This Callback must not sleep.
* @kmap: maps a page from the buffer into kernel address space.
* @kunmap: [optional] unmaps a page from the buffer.
* @mmap: used to expose the backing storage to userspace. Note that the
* mapping needs to be coherent - if the exporter doesn't directly
* support this, it needs to fake coherency by shooting down any ptes
* when transitioning away from the cpu domain.
* @vmap: [optional] creates a virtual mapping for the buffer into kernel
* address space. Same restrictions as for vmap and friends apply.
* @vunmap: [optional] unmaps a vmap from the buffer
@@ -164,13 +156,96 @@ struct dma_buf_ops {
*/
void (*release)(struct dma_buf *);
/**
* @begin_cpu_access:
*
* This is called from dma_buf_begin_cpu_access() and allows the
* exporter to ensure that the memory is actually available for cpu
* access - the exporter might need to allocate or swap-in and pin the
* backing storage. The exporter also needs to ensure that cpu access is
* coherent for the access direction. The direction can be used by the
* exporter to optimize the cache flushing, i.e. access with a different
* direction (read instead of write) might return stale or even bogus
* data (e.g. when the exporter needs to copy the data to temporary
* storage).
*
* This callback is optional.
*
* FIXME: This is both called through the DMA_BUF_IOCTL_SYNC command
* from userspace (where storage shouldn't be pinned to avoid handing
* de-facto mlock rights to userspace) and for the kernel-internal
* users of the various kmap interfaces, where the backing storage must
* be pinned to guarantee that the atomic kmap calls can succeed. Since
* there are no in-kernel users of the kmap interfaces yet, this isn't a
* real problem.
*
* Returns:
*
* 0 on success or a negative error code on failure. This can for
* example fail when the backing storage can't be allocated. Can also
* return -ERESTARTSYS or -EINTR when the call has been interrupted and
* needs to be restarted.
*/
int (*begin_cpu_access)(struct dma_buf *, enum dma_data_direction);
/**
* @end_cpu_access:
*
* This is called from dma_buf_end_cpu_access() when the importer is done
* accessing the buffer with the CPU. The exporter can use this to flush
* caches and unpin any resources pinned in @begin_cpu_access.
* The result of any dma_buf kmap calls after end_cpu_access is
* undefined.
*
* This callback is optional.
*
* Returns:
*
* 0 on success or a negative error code on failure. Can return
* -ERESTARTSYS or -EINTR when the call has been interrupted and needs
* to be restarted.
*/
int (*end_cpu_access)(struct dma_buf *, enum dma_data_direction);
void *(*kmap_atomic)(struct dma_buf *, unsigned long);
void (*kunmap_atomic)(struct dma_buf *, unsigned long, void *);
void *(*kmap)(struct dma_buf *, unsigned long);
void (*kunmap)(struct dma_buf *, unsigned long, void *);
/**
* @mmap:
*
* This callback is used by the dma_buf_mmap() function.
*
* Note that the mapping does not need to be coherent; userspace is expected
* to bracket CPU access using the DMA_BUF_IOCTL_SYNC interface.
*
* Because dma-buf buffers have invariant size over their lifetime, the
* dma-buf core checks whether a vma is too large and rejects such
* mappings. The exporter hence does not need to duplicate this check.
*
*
* If an exporter needs to manually flush caches and hence needs to fake
* coherency for mmap support, it needs to be able to zap all the ptes
* pointing at the backing storage. Now linux mm needs a struct
* address_space associated with the struct file stored in vma->vm_file
* to do that with the function unmap_mapping_range. But the dma_buf
* framework only backs every dma_buf fd with the anon_file struct file,
* i.e. all dma_bufs share the same file.
*
* Hence exporters need to setup their own file (and address_space)
* association by setting vma->vm_file and adjusting vma->vm_pgoff in
* the dma_buf mmap callback. In the specific case of a gem driver the
* exporter could use the shmem file already provided by gem (and set
* vm_pgoff = 0). Exporters can then zap ptes by unmapping the
* corresponding range of the struct address_space associated with their
* own file.
*
* This callback is optional.
*
* Returns:
*
* 0 on success or a negative error code on failure.
*/
int (*mmap)(struct dma_buf *, struct vm_area_struct *vma);
void *(*vmap)(struct dma_buf *);
...