Commit 6857a5eb authored by Christoph Hellwig

dma-mapping: document dma_{alloc,free}_pages

Document the new dma_alloc_pages and dma_free_pages APIs, and fix
up the documentation for dma_alloc_noncoherent and dma_free_noncoherent.
Reported-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
parent 695cebe5
@@ -519,10 +519,9 @@ routines, e.g.:::
 Part II - Non-coherent DMA allocations
 --------------------------------------

-These APIs allow to allocate pages in the kernel direct mapping that are
-guaranteed to be DMA addressable. This means that unlike dma_alloc_coherent,
-virt_to_page can be called on the resulting address, and the resulting
-struct page can be used for everything a struct page is suitable for.
+These APIs allow allocating pages that are guaranteed to be DMA addressable
+by the passed in device, but which need explicit management of memory ownership
+for the kernel vs the device.

 If you don't understand how cache line coherency works between a processor and
 an I/O device, you should not be using this part of the API.
@@ -537,7 +536,7 @@ an I/O device, you should not be using this part of the API.
 This routine allocates a region of <size> bytes of consistent memory. It
 returns a pointer to the allocated region (in the processor's virtual address
 space) or NULL if the allocation failed. The returned memory may or may not
-be in the kernels direct mapping. Drivers must not call virt_to_page on
+be in the kernel direct mapping. Drivers must not call virt_to_page on
 the returned memory region.

 It also returns a <dma_handle> which may be cast to an unsigned integer the
@@ -565,7 +564,45 @@ reused.
 Free a region of memory previously allocated using dma_alloc_noncoherent().
 dev, size, dma_handle and dir must all be the same as those passed into
 dma_alloc_noncoherent(). cpu_addr must be the virtual address returned by
-the dma_alloc_noncoherent().
+dma_alloc_noncoherent().
+
+::
+
+	struct page *
+	dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
+			enum dma_data_direction dir, gfp_t gfp)
+
+This routine allocates a region of <size> bytes of non-coherent memory. It
+returns a pointer to the first struct page for the region, or NULL if the
+allocation failed. The resulting struct page can be used for everything a
+struct page is suitable for.
+
+It also returns a <dma_handle> which may be cast to an unsigned integer the
+same width as the bus and given to the device as the DMA address base of
+the region.
+
+The dir parameter specifies whether data is read and/or written by the device,
+see dma_map_single() for details.
+
+The gfp parameter allows the caller to specify the ``GFP_`` flags (see
+kmalloc()) for the allocation, but rejects flags used to specify a memory
+zone such as GFP_DMA or GFP_HIGHMEM.
+
+Before giving the memory to the device, dma_sync_single_for_device() needs
+to be called, and before reading memory written by the device,
+dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
+reused.
+
+::
+
+	void
+	dma_free_pages(struct device *dev, size_t size, struct page *page,
+			dma_addr_t dma_handle, enum dma_data_direction dir)
+
+Free a region of memory previously allocated using dma_alloc_pages().
+dev, size, dma_handle and dir must all be the same as those passed into
+dma_alloc_pages(). page must be the pointer returned by
+dma_alloc_pages().

 ::
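
The new pair follows the same ownership rules as the streaming DMA mappings
referenced above. As an illustration only, and not part of the commit, here is
a minimal sketch of a receive path built on dma_alloc_pages(): the helper name
example_rx_one() and the way the device is programmed are hypothetical, while
the dma_*() and page_address() calls are the interfaces documented above.

::

	#include <linux/dma-mapping.h>
	#include <linux/mm.h>		/* page_address() */
	#include <linux/printk.h>
	#include <linux/types.h>

	/* Hypothetical helper: receive <size> bytes from a device. */
	static int example_rx_one(struct device *dev, size_t size)
	{
		struct page *page;
		dma_addr_t dma_handle;
		void *buf;

		/* Allocate DMA addressable, non-coherent pages. */
		page = dma_alloc_pages(dev, size, &dma_handle, DMA_FROM_DEVICE,
				       GFP_KERNEL);
		if (!page)
			return -ENOMEM;

		/* The pages are in the kernel direct mapping, so this is valid. */
		buf = page_address(page);

		/* Pass ownership to the device before starting the transfer. */
		dma_sync_single_for_device(dev, dma_handle, size, DMA_FROM_DEVICE);

		/* ... program the device with dma_handle, wait for completion ... */

		/* Reclaim ownership before the CPU reads what the device wrote. */
		dma_sync_single_for_cpu(dev, dma_handle, size, DMA_FROM_DEVICE);

		pr_info("first byte written by the device: %#x\n", *(u8 *)buf);

		dma_free_pages(dev, size, page, dma_handle, DMA_FROM_DEVICE);
		return 0;
	}

dma_alloc_noncoherent()/dma_free_noncoherent() obey the same ownership and
dma_sync_single_*() rules, but return a kernel virtual address rather than a
struct page, so the page_address() step above does not apply to them.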