Commit 78576382 authored by Andrew Morton, committed by Linus Torvalds

[PATCH] Add dma_error() and pci_dma_error()

From: Anton Blanchard <anton@samba.org>

Introduce dma_error() and pci_dma_error() which are used to detect failures
in pci_map_single.
parent 1c4c0ff6
@@ -278,6 +278,18 @@ and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.
int
dma_error(dma_addr_t dma_addr)
int
pci_dma_error(dma_addr_t dma_addr)
In some circumstances dma_map_single and dma_map_page will fail to create
a mapping. A driver can check for these errors by testing the returned
DMA address with dma_error(). A non-zero return value means the mapping
could not be created and the driver should take appropriate action (e.g.,
reduce current DMA mapping usage or delay and try again later).
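For illustration only, a minimal sketch of this check; dev, buffer and
size stand in for driver-supplied values:

	dma_addr_t mapping;

	mapping = dma_map_single(dev, buffer, size, DMA_TO_DEVICE);
	if (dma_error(mapping)) {
		/*
		 * The mapping could not be created; back off and
		 * retry later instead of using the bogus address.
		 */
		return -ENOMEM;
	}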
int
dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
enum dma_data_direction direction)
@@ -292,6 +304,15 @@ than <nents> passed in if the block layer determines that some
elements of the scatter/gather list are physically adjacent and thus
may be mapped with a single entry).
Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.
As with the other mapping interfaces, dma_map_sg can fail. When it
does, 0 is returned and the driver must take appropriate action. It is
critical that the driver do something; for a block driver, aborting the
request or even oopsing is better than doing nothing and corrupting the
filesystem.
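A minimal sketch of such a check, assuming sglist and nents were set up
by the caller:

	int count;

	count = dma_map_sg(dev, sglist, nents, DMA_FROM_DEVICE);
	if (count == 0) {
		/* Mapping failed; a block driver would abort the request here. */
		return -EIO;
	}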
void
dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
enum dma_data_direction direction)
......
@@ -519,7 +519,7 @@ consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.
of sg entries it mapped them to. On failure 0 is returned.
Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
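For example, that loop might look like the following sketch, where
hw_fill_descriptor() is a hypothetical device-specific helper:

	struct scatterlist *sg = sglist;
	int i;

	for (i = 0; i < count; i++, sg++)
		hw_fill_descriptor(dev, sg_dma_address(sg), sg_dma_len(sg));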
@@ -842,6 +842,27 @@ to "Closing".
2) More to come...
Handling Errors
DMA address space is limited on some architectures and an allocation
failure can be determined by:
- checking if pci_alloc_consistent returns NULL or pci_map_sg returns 0
  (a sketch of these checks follows the example below)
- checking the returned dma_addr_t of pci_map_single and pci_map_page
  by using pci_dma_error():
dma_addr_t dma_handle;
dma_handle = pci_map_single(dev, addr, size, direction);
if (pci_dma_error(dma_handle)) {
/*
* reduce current DMA mapping usage,
* delay and try again later or
* reset driver.
*/
}
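The other checks listed above are plain return-value tests. A sketch
under the same assumptions, with pdev, sglist, nents and direction
supplied by the driver:

	void *cpu_addr;
	dma_addr_t dma_handle;
	int count;

	cpu_addr = pci_alloc_consistent(pdev, size, &dma_handle);
	if (cpu_addr == NULL) {
		/* no consistent memory available; fail or retry later */
	}

	count = pci_map_sg(pdev, sglist, nents, direction);
	if (count == 0) {
		/* the scatterlist could not be mapped; abort the request */
	}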
Closing
This document, and the API itself, would not be in its current
......
@@ -140,6 +140,12 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
pci_dma_sync_sg_for_device(to_pci_dev(dev), sg, nelems, (int)direction);
}
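/* The asm-generic version simply defers to the PCI DMA implementation. */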
static inline int
dma_error(dma_addr_t dma_addr)
{
return pci_dma_error(dma_addr);
}
/* Now for the API extensions over the pci_ one */
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
......
@@ -98,4 +98,10 @@ pci_dma_sync_sg_for_device(struct pci_dev *hwdev, struct scatterlist *sg,
dma_sync_sg_for_device(hwdev == NULL ? NULL : &hwdev->dev, sg, nelems, (enum dma_data_direction)direction);
}
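/* Compatibility wrapper: implemented on top of the generic dma_error(). */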
static inline int
pci_dma_error(dma_addr_t dma_addr)
{
return dma_error(dma_addr);
}
#endif
@@ -110,6 +110,12 @@ dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
flush_write_buffers();
}
static inline int
dma_error(dma_addr_t dma_addr)
{
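	/* DMA mappings never fail on i386, so there is no error state to report. */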
return 0;
}
static inline int
dma_supported(struct device *dev, u64 mask)
{
......