Commit 5bcd0bb6 authored by Greg Kroah-Hartman

Merge kroah.com:/home/greg/linux/BK/bleed-2.5

into kroah.com:/home/greg/linux/BK/gregkh-2.5
parents 4fb416f4 6332c743
......@@ -15,10 +15,12 @@ OR: they can now be DMA-aware.
manage dma mappings for existing dma-ready buffers (see below).
- URBs have an additional "transfer_dma" field, as well as a transfer_flags
bit saying if it's valid. (Control requests also needed "setup_dma".)
bit saying if it's valid. (Control requests also have "setup_dma" and a
corresponding transfer_flags bit.)
- "usbcore" will map those DMA addresses, if a DMA-aware driver didn't do it
first and set URB_NO_DMA_MAP. HCDs don't manage dma mappings for urbs.
- "usbcore" will map those DMA addresses, if a DMA-aware driver didn't do
it first and set URB_NO_TRANSFER_DMA_MAP or URB_NO_SETUP_DMA_MAP. HCDs
don't manage dma mappings for URBs.
- There's a new "generic DMA API", parts of which are usable by USB device
drivers. Never use dma_set_mask() on any USB interface or device; that
......@@ -33,8 +35,9 @@ and effects like cache-trashing can impose subtle penalties.
- When you're allocating a buffer for DMA purposes anyway, use the buffer
primitives. Think of them as kmalloc and kfree that give you the right
kind of addresses to store in urb->transfer_buffer and urb->transfer_dma,
while guaranteeing that hidden copies through DMA "bounce" buffers won't
slow things down. You'd also set URB_NO_DMA_MAP in urb->transfer_flags:
while guaranteeing that no hidden copies through DMA "bounce" buffers will
slow things down. You'd also set URB_NO_TRANSFER_DMA_MAP in
urb->transfer_flags:
void *usb_buffer_alloc (struct usb_device *dev, size_t size,
int mem_flags, dma_addr_t *dma);
......@@ -42,10 +45,18 @@ and effects like cache-trashing can impose subtle penalties.
void usb_buffer_free (struct usb_device *dev, size_t size,
void *addr, dma_addr_t dma);
For control transfers you can use the buffer primitives or not for each
of the transfer buffer and setup buffer independently. Set the flag bits
URB_NO_TRANSFER_DMA_MAP and URB_NO_SETUP_DMA_MAP to indicate which
buffers you have prepared. For non-control transfers URB_NO_SETUP_DMA_MAP
is ignored.
The memory buffer returned is "dma-coherent"; sometimes you might need to
force a consistent memory access ordering by using memory barriers. It's
not using a streaming DMA mapping, so it's good for small transfers on
systems where the I/O would otherwise tie up an IOMMU mapping.
systems where the I/O would otherwise tie up an IOMMU mapping. (See
Documentation/DMA-mapping.txt for definitions of "coherent" and "streaming"
DMA mappings.)
Asking for 1/Nth of a page (as well as asking for N pages) is reasonably
space-efficient.
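For illustration only, here is a minimal sketch (not part of this patch) of
how a driver might use these primitives for an interrupt-IN endpoint. The
helper name, the 8-byte packet size, and the endpoint/interval parameters
are hypothetical:

	#include <linux/usb.h>

	/* allocate a DMA-ready buffer, point the urb at it, and tell
	 * usbcore not to map it again on submit
	 */
	static int my_start_int_urb (struct usb_device *dev, struct urb *urb,
			unsigned ep, int interval, usb_complete_t complete)
	{
		dma_addr_t	dma;
		void		*buf;

		buf = usb_buffer_alloc (dev, 8, GFP_KERNEL, &dma);
		if (!buf)
			return -ENOMEM;

		usb_fill_int_urb (urb, dev, usb_rcvintpipe (dev, ep),
				buf, 8, complete, NULL, interval);
		urb->transfer_dma = dma;
		urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;

		return usb_submit_urb (urb, GFP_KERNEL);
	}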
......@@ -91,7 +102,8 @@ DMA address space of the device.
These calls all work with initialized urbs: urb->dev, urb->pipe,
urb->transfer_buffer, and urb->transfer_buffer_length must all be
valid when these calls are used:
valid when these calls are used (urb->setup_packet must be valid too
if urb is a control request):
struct urb *usb_buffer_map (struct urb *urb);
......@@ -99,6 +111,6 @@ DMA address space of the device.
void usb_buffer_unmap (struct urb *urb);
The calls manage urb->transfer_dma for you, and set URB_NO_DMA_MAP so that
usbcore won't map or unmap the buffer.
The calls manage urb->transfer_dma for you, and set URB_NO_TRANSFER_DMA_MAP
so that usbcore won't map or unmap the buffer. The same goes for
urb->setup_dma and URB_NO_SETUP_DMA_MAP for control requests.
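To illustrate the second style, a hedged sketch (again, not part of this
patch) of mapping an existing kmalloc()ed buffer around one submission;
the helper name is invented, and the urb is assumed to be initialized as
described above:

	#include <linux/usb.h>

	static int my_submit_mapped (struct urb *urb)
	{
		int	status;

		/* fills urb->transfer_dma (and setup_dma for control
		 * requests) and sets the URB_NO_*_DMA_MAP bits so that
		 * usbcore won't map the buffers again
		 */
		if (!usb_buffer_map (urb))
			return -ENOMEM;

		status = usb_submit_urb (urb, GFP_KERNEL);
		if (status)
			usb_buffer_unmap (urb);
		return status;
	}

Once the urb has completed and won't be resubmitted, usb_buffer_unmap()
reverses the mapping; while it stays mapped, usb_buffer_dmasync() keeps the
CPU and DMA views of the buffer consistent.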
......@@ -548,7 +548,7 @@ static int acm_probe (struct usb_interface *intf,
struct usb_host_config *cfacm;
struct usb_host_interface *ifcom, *ifdata;
struct usb_endpoint_descriptor *epctrl, *epread, *epwrite;
int readsize, ctrlsize, minor, i;
int readsize, ctrlsize, minor, i, j;
unsigned char *buf;
dev = interface_to_usbdev (intf);
......@@ -558,120 +558,123 @@ static int acm_probe (struct usb_interface *intf,
dbg("probing config %d", cfacm->desc.bConfigurationValue);
if (cfacm->desc.bNumInterfaces != 2 ||
usb_interface_claimed(cfacm->interface + 0) ||
usb_interface_claimed(cfacm->interface + 1))
for (j = 0; j < cfacm->desc.bNumInterfaces - 1; j++) {
if (usb_interface_claimed(cfacm->interface + j) ||
usb_interface_claimed(cfacm->interface + j + 1))
continue;
ifcom = cfacm->interface[0].altsetting + 0;
ifdata = cfacm->interface[1].altsetting + 0;
ifcom = cfacm->interface[j].altsetting + 0;
ifdata = cfacm->interface[j + 1].altsetting + 0;
if (ifdata->desc.bInterfaceClass != 10 || ifdata->desc.bNumEndpoints < 2) {
ifcom = cfacm->interface[1].altsetting + 0;
ifdata = cfacm->interface[0].altsetting + 0;
if (ifdata->desc.bInterfaceClass != 10 || ifdata->desc.bNumEndpoints < 2)
continue;
}
if (ifdata->desc.bInterfaceClass != 10 || ifdata->desc.bNumEndpoints < 2) {
ifcom = cfacm->interface[j + 1].altsetting + 0;
ifdata = cfacm->interface[j].altsetting + 0;
if (ifdata->desc.bInterfaceClass != 10 || ifdata->desc.bNumEndpoints < 2)
continue;
}
if (ifcom->desc.bInterfaceClass != 2 || ifcom->desc.bInterfaceSubClass != 2 ||
ifcom->desc.bInterfaceProtocol != 1 || ifcom->desc.bNumEndpoints < 1)
continue;
if (ifcom->desc.bInterfaceClass != 2 || ifcom->desc.bInterfaceSubClass != 2 ||
ifcom->desc.bInterfaceProtocol < 1 || ifcom->desc.bInterfaceProtocol > 6 ||
ifcom->desc.bNumEndpoints < 1)
continue;
epctrl = &ifcom->endpoint[0].desc;
epread = &ifdata->endpoint[0].desc;
epwrite = &ifdata->endpoint[1].desc;
epctrl = &ifcom->endpoint[0].desc;
epread = &ifdata->endpoint[0].desc;
epwrite = &ifdata->endpoint[1].desc;
if ((epctrl->bEndpointAddress & 0x80) != 0x80 || (epctrl->bmAttributes & 3) != 3 ||
(epread->bmAttributes & 3) != 2 || (epwrite->bmAttributes & 3) != 2 ||
((epread->bEndpointAddress & 0x80) ^ (epwrite->bEndpointAddress & 0x80)) != 0x80)
continue;
if ((epctrl->bEndpointAddress & 0x80) != 0x80 || (epctrl->bmAttributes & 3) != 3 ||
(epread->bmAttributes & 3) != 2 || (epwrite->bmAttributes & 3) != 2 ||
((epread->bEndpointAddress & 0x80) ^ (epwrite->bEndpointAddress & 0x80)) != 0x80)
continue;
if ((epread->bEndpointAddress & 0x80) != 0x80) {
epread = &ifdata->endpoint[1].desc;
epwrite = &ifdata->endpoint[0].desc;
}
if ((epread->bEndpointAddress & 0x80) != 0x80) {
epread = &ifdata->endpoint[1].desc;
epwrite = &ifdata->endpoint[0].desc;
}
usb_set_configuration(dev, cfacm->desc.bConfigurationValue);
usb_set_configuration(dev, cfacm->desc.bConfigurationValue);
for (minor = 0; minor < ACM_TTY_MINORS && acm_table[minor]; minor++);
if (acm_table[minor]) {
err("no more free acm devices");
return -ENODEV;
}
for (minor = 0; minor < ACM_TTY_MINORS && acm_table[minor]; minor++);
if (acm_table[minor]) {
err("no more free acm devices");
return -ENODEV;
}
if (!(acm = kmalloc(sizeof(struct acm), GFP_KERNEL))) {
err("out of memory");
return -ENOMEM;
}
memset(acm, 0, sizeof(struct acm));
if (!(acm = kmalloc(sizeof(struct acm), GFP_KERNEL))) {
err("out of memory");
return -ENOMEM;
}
memset(acm, 0, sizeof(struct acm));
ctrlsize = epctrl->wMaxPacketSize;
readsize = epread->wMaxPacketSize;
acm->writesize = epwrite->wMaxPacketSize;
acm->iface = cfacm->interface;
acm->minor = minor;
acm->dev = dev;
ctrlsize = epctrl->wMaxPacketSize;
readsize = epread->wMaxPacketSize;
acm->writesize = epwrite->wMaxPacketSize;
acm->iface = cfacm->interface + j;
acm->minor = minor;
acm->dev = dev;
INIT_WORK(&acm->work, acm_softint, acm);
INIT_WORK(&acm->work, acm_softint, acm);
if (!(buf = kmalloc(ctrlsize + readsize + acm->writesize, GFP_KERNEL))) {
err("out of memory");
kfree(acm);
return -ENOMEM;
}
if (!(buf = kmalloc(ctrlsize + readsize + acm->writesize, GFP_KERNEL))) {
err("out of memory");
kfree(acm);
return -ENOMEM;
}
acm->ctrlurb = usb_alloc_urb(0, GFP_KERNEL);
if (!acm->ctrlurb) {
err("out of memory");
kfree(acm);
kfree(buf);
return -ENOMEM;
}
acm->readurb = usb_alloc_urb(0, GFP_KERNEL);
if (!acm->readurb) {
err("out of memory");
usb_free_urb(acm->ctrlurb);
kfree(acm);
kfree(buf);
return -ENOMEM;
}
acm->writeurb = usb_alloc_urb(0, GFP_KERNEL);
if (!acm->writeurb) {
err("out of memory");
usb_free_urb(acm->readurb);
usb_free_urb(acm->ctrlurb);
kfree(acm);
kfree(buf);
return -ENOMEM;
}
acm->ctrlurb = usb_alloc_urb(0, GFP_KERNEL);
if (!acm->ctrlurb) {
err("out of memory");
kfree(acm);
kfree(buf);
return -ENOMEM;
}
acm->readurb = usb_alloc_urb(0, GFP_KERNEL);
if (!acm->readurb) {
err("out of memory");
usb_free_urb(acm->ctrlurb);
kfree(acm);
kfree(buf);
return -ENOMEM;
}
acm->writeurb = usb_alloc_urb(0, GFP_KERNEL);
if (!acm->writeurb) {
err("out of memory");
usb_free_urb(acm->readurb);
usb_free_urb(acm->ctrlurb);
kfree(acm);
kfree(buf);
return -ENOMEM;
}
usb_fill_int_urb(acm->ctrlurb, dev, usb_rcvintpipe(dev, epctrl->bEndpointAddress),
buf, ctrlsize, acm_ctrl_irq, acm, epctrl->bInterval);
usb_fill_int_urb(acm->ctrlurb, dev, usb_rcvintpipe(dev, epctrl->bEndpointAddress),
buf, ctrlsize, acm_ctrl_irq, acm, epctrl->bInterval);
usb_fill_bulk_urb(acm->readurb, dev, usb_rcvbulkpipe(dev, epread->bEndpointAddress),
buf += ctrlsize, readsize, acm_read_bulk, acm);
acm->readurb->transfer_flags |= URB_NO_FSBR;
usb_fill_bulk_urb(acm->readurb, dev, usb_rcvbulkpipe(dev, epread->bEndpointAddress),
buf += ctrlsize, readsize, acm_read_bulk, acm);
acm->readurb->transfer_flags |= URB_NO_FSBR;
usb_fill_bulk_urb(acm->writeurb, dev, usb_sndbulkpipe(dev, epwrite->bEndpointAddress),
buf += readsize, acm->writesize, acm_write_bulk, acm);
acm->writeurb->transfer_flags |= URB_NO_FSBR;
usb_fill_bulk_urb(acm->writeurb, dev, usb_sndbulkpipe(dev, epwrite->bEndpointAddress),
buf += readsize, acm->writesize, acm_write_bulk, acm);
acm->writeurb->transfer_flags |= URB_NO_FSBR;
info("ttyACM%d: USB ACM device", minor);
info("ttyACM%d: USB ACM device", minor);
acm_set_control(acm, acm->ctrlout);
acm_set_control(acm, acm->ctrlout);
acm->line.speed = cpu_to_le32(9600);
acm->line.databits = 8;
acm_set_line(acm, &acm->line);
acm->line.speed = cpu_to_le32(9600);
acm->line.databits = 8;
acm_set_line(acm, &acm->line);
usb_driver_claim_interface(&acm_driver, acm->iface + 0, acm);
usb_driver_claim_interface(&acm_driver, acm->iface + 1, acm);
usb_driver_claim_interface(&acm_driver, acm->iface + 0, acm);
usb_driver_claim_interface(&acm_driver, acm->iface + 1, acm);
tty_register_device(acm_tty_driver, minor, &intf->dev);
tty_register_device(acm_tty_driver, minor, &intf->dev);
acm_table[minor] = acm;
usb_set_intfdata (intf, acm);
return 0;
acm_table[minor] = acm;
usb_set_intfdata (intf, acm);
return 0;
}
}
return -EIO;
......
......@@ -296,13 +296,13 @@ static int usblp_check_status(struct usblp *usblp, int err)
}
status = *usblp->statusbuf;
if (~status & LP_PERRORP) {
if (~status & LP_PERRORP)
newerr = 3;
if (status & LP_POUTPA)
newerr = 1;
if (~status & LP_PSELECD)
newerr = 2;
}
if (status & LP_POUTPA)
newerr = 1;
if (~status & LP_PSELECD)
newerr = 2;
if (newerr != err)
info("usblp%d: %s", usblp->minor, usblp_messages[newerr]);
......@@ -426,7 +426,7 @@ static int usblp_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
{
struct usblp *usblp = file->private_data;
int length, err, i;
unsigned char lpstatus, newChannel;
unsigned char newChannel;
int status;
int twoints[2];
int retval = 0;
......@@ -578,12 +578,12 @@ static int usblp_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
switch (cmd) {
case LPGETSTATUS:
if (usblp_read_status(usblp, &lpstatus)) {
if (usblp_read_status(usblp, usblp->statusbuf)) {
err("usblp%d: failed reading printer status", usblp->minor);
retval = -EIO;
goto done;
}
status = lpstatus;
status = *usblp->statusbuf;
if (copy_to_user ((int *)arg, &status, sizeof(int)))
retval = -EFAULT;
break;
......@@ -858,8 +858,8 @@ static int usblp_probe(struct usb_interface *intf,
}
usblp->writebuf = usblp->readbuf = NULL;
usblp->writeurb->transfer_flags = URB_NO_DMA_MAP;
usblp->readurb->transfer_flags = URB_NO_DMA_MAP;
usblp->writeurb->transfer_flags = URB_NO_TRANSFER_DMA_MAP;
usblp->readurb->transfer_flags = URB_NO_TRANSFER_DMA_MAP;
/* Malloc write & read buffers. We somewhat wastefully
* malloc both regardless of bidirectionality, because the
* alternate setting can be changed later via an ioctl. */
......
......@@ -459,7 +459,8 @@ static int rh_status_urb (struct usb_hcd *hcd, struct urb *urb)
/* rh_timer protected by hcd_data_lock */
if (hcd->rh_timer.data
|| urb->status != -EINPROGRESS
|| urb->transfer_buffer_length < len) {
|| urb->transfer_buffer_length < len
|| !HCD_IS_RUNNING (hcd->state)) {
dev_dbg (hcd->controller,
"not queuing rh status urb, stat %d\n",
urb->status);
......@@ -489,11 +490,10 @@ static void rh_report_status (unsigned long ptr)
local_irq_save (flags);
spin_lock (&urb->lock);
/* do nothing if the hc is gone or the urb's been unlinked */
/* do nothing if the urb's been unlinked */
if (!urb->dev
|| urb->status != -EINPROGRESS
|| (hcd = urb->dev->bus->hcpriv) == 0
|| !HCD_IS_RUNNING (hcd->state)) {
|| (hcd = urb->dev->bus->hcpriv) == 0) {
spin_unlock (&urb->lock);
local_irq_restore (flags);
return;
......@@ -1027,7 +1027,8 @@ static int hcd_submit_urb (struct urb *urb, int mem_flags)
* valid and usb_buffer_{sync,unmap}() not be needed, since
* they could clobber root hub response data.
*/
urb->transfer_flags |= URB_NO_DMA_MAP;
urb->transfer_flags |= (URB_NO_TRANSFER_DMA_MAP
| URB_NO_SETUP_DMA_MAP);
status = rh_urb_enqueue (hcd, urb);
goto done;
}
......@@ -1035,15 +1036,16 @@ static int hcd_submit_urb (struct urb *urb, int mem_flags)
/* lower level hcd code should use *_dma exclusively,
* unless it uses pio or talks to another transport.
*/
if (!(urb->transfer_flags & URB_NO_DMA_MAP)
&& hcd->controller->dma_mask) {
if (usb_pipecontrol (urb->pipe))
if (hcd->controller->dma_mask) {
if (usb_pipecontrol (urb->pipe)
&& !(urb->transfer_flags & URB_NO_SETUP_DMA_MAP))
urb->setup_dma = dma_map_single (
hcd->controller,
urb->setup_packet,
sizeof (struct usb_ctrlrequest),
DMA_TO_DEVICE);
if (urb->transfer_buffer_length != 0)
if (urb->transfer_buffer_length != 0
&& !(urb->transfer_flags & URB_NO_TRANSFER_DMA_MAP))
urb->transfer_dma = dma_map_single (
hcd->controller,
urb->transfer_buffer,
......@@ -1410,12 +1412,14 @@ void usb_hcd_giveback_urb (struct usb_hcd *hcd, struct urb *urb, struct pt_regs
// It would catch exit/unlink paths for all urbs.
/* lower level hcd code should use *_dma exclusively */
if (!(urb->transfer_flags & URB_NO_DMA_MAP)) {
if (usb_pipecontrol (urb->pipe))
if (hcd->controller->dma_mask) {
if (usb_pipecontrol (urb->pipe)
&& !(urb->transfer_flags & URB_NO_SETUP_DMA_MAP))
pci_unmap_single (hcd->pdev, urb->setup_dma,
sizeof (struct usb_ctrlrequest),
PCI_DMA_TODEVICE);
if (urb->transfer_buffer_length != 0)
if (urb->transfer_buffer_length != 0
&& !(urb->transfer_flags & URB_NO_TRANSFER_DMA_MAP))
pci_unmap_single (hcd->pdev, urb->transfer_dma,
urb->transfer_buffer_length,
usb_pipein (urb->pipe)
......
......@@ -461,7 +461,7 @@ static int hub_configure(struct usb_hub *hub,
usb_fill_int_urb(hub->urb, dev, pipe, *hub->buffer, maxp, hub_irq,
hub, endpoint->bInterval);
hub->urb->transfer_dma = hub->buffer_dma;
hub->urb->transfer_flags |= URB_NO_DMA_MAP;
hub->urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
ret = usb_submit_urb(hub->urb, GFP_KERNEL);
if (ret) {
message = "couldn't submit status urb";
......
......@@ -344,7 +344,8 @@ int usb_sg_init (
if (!io->urbs)
goto nomem;
urb_flags = URB_ASYNC_UNLINK | URB_NO_DMA_MAP | URB_NO_INTERRUPT;
urb_flags = URB_ASYNC_UNLINK | URB_NO_TRANSFER_DMA_MAP
| URB_NO_INTERRUPT;
if (usb_pipein (pipe))
urb_flags |= URB_SHORT_NOT_OK;
......
......@@ -297,7 +297,7 @@ int usb_submit_urb(struct urb *urb, int mem_flags)
/* enforce simple/standard policy */
allowed = URB_ASYNC_UNLINK; // affects later unlinks
allowed |= URB_NO_DMA_MAP;
allowed |= (URB_NO_TRANSFER_DMA_MAP | URB_NO_SETUP_DMA_MAP);
allowed |= URB_NO_INTERRUPT;
switch (temp) {
case PIPE_BULK:
......
......@@ -1234,7 +1234,7 @@ int usb_new_device(struct usb_device *dev, struct device *parent)
}
/**
* usb_buffer_alloc - allocate dma-consistent buffer for URB_NO_DMA_MAP
* usb_buffer_alloc - allocate dma-consistent buffer for URB_NO_xxx_DMA_MAP
* @dev: device the buffer will be used with
* @size: requested buffer size
* @mem_flags: affect whether allocation may block
......@@ -1245,9 +1245,9 @@ int usb_new_device(struct usb_device *dev, struct device *parent)
* specified device. Such cpu-space buffers are returned along with the DMA
* address (through the pointer provided).
*
* These buffers are used with URB_NO_DMA_MAP set in urb->transfer_flags to
* avoid behaviors like using "DMA bounce buffers", or tying down I/O mapping
* hardware for long idle periods. The implementation varies between
* These buffers are used with URB_NO_xxx_DMA_MAP set in urb->transfer_flags
* to avoid behaviors like using "DMA bounce buffers", or tying down I/O
* mapping hardware for long idle periods. The implementation varies between
* platforms, depending on details of how DMA will work to this device.
* Using these buffers also helps prevent cacheline sharing problems on
* architectures where CPU caches are not DMA-coherent.
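As a hypothetical companion to the allocation sketch shown with the
documentation hunk above, the teardown side hands the same size, cpu
address, and dma handle back to usb_buffer_free(); the 8-byte size and
helper name are placeholders:

	#include <linux/usb.h>

	/* assumes the urb is no longer queued (completed or unlinked) */
	static void my_stop_int_urb (struct usb_device *dev, struct urb *urb)
	{
		usb_buffer_free (dev, 8, urb->transfer_buffer,
				urb->transfer_dma);
		usb_free_urb (urb);
	}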
......@@ -1291,17 +1291,17 @@ void usb_buffer_free (
/**
* usb_buffer_map - create DMA mapping(s) for an urb
* @urb: urb whose transfer_buffer will be mapped
* @urb: urb whose transfer_buffer/setup_packet will be mapped
*
* Return value is either null (indicating no buffer could be mapped), or
* the parameter. URB_NO_DMA_MAP is added to urb->transfer_flags if the
* operation succeeds. If the device is connected to this system through
* a non-DMA controller, this operation always succeeds.
* the parameter. URB_NO_TRANSFER_DMA_MAP and URB_NO_SETUP_DMA_MAP are
* added to urb->transfer_flags if the operation succeeds. If the device
* is connected to this system through a non-DMA controller, this operation
* always succeeds.
*
* This call would normally be used for an urb which is reused, perhaps
* as the target of a large periodic transfer, with usb_buffer_dmasync()
* calls to synchronize memory and dma state. It may not be used for
* control requests.
* calls to synchronize memory and dma state.
*
* Reverse the effect of this call with usb_buffer_unmap().
*/
......@@ -1311,7 +1311,6 @@ struct urb *usb_buffer_map (struct urb *urb)
struct device *controller;
if (!urb
|| usb_pipecontrol (urb->pipe)
|| !urb->dev
|| !(bus = urb->dev->bus)
|| !(controller = bus->controller))
......@@ -1322,17 +1321,23 @@ struct urb *usb_buffer_map (struct urb *urb)
urb->transfer_buffer, urb->transfer_buffer_length,
usb_pipein (urb->pipe)
? DMA_FROM_DEVICE : DMA_TO_DEVICE);
if (usb_pipecontrol (urb->pipe))
urb->setup_dma = dma_map_single (controller,
urb->setup_packet,
sizeof (struct usb_ctrlrequest),
DMA_TO_DEVICE);
// FIXME generic api broken like pci, can't report errors
// if (urb->transfer_dma == DMA_ADDR_INVALID) return 0;
} else
urb->transfer_dma = ~0;
urb->transfer_flags |= URB_NO_DMA_MAP;
urb->transfer_flags |= (URB_NO_TRANSFER_DMA_MAP
| URB_NO_SETUP_DMA_MAP);
return urb;
}
/**
* usb_buffer_dmasync - synchronize DMA and CPU view of buffer(s)
* @urb: urb whose transfer_buffer will be synchronized
* @urb: urb whose transfer_buffer/setup_packet will be synchronized
*/
void usb_buffer_dmasync (struct urb *urb)
{
......@@ -1340,17 +1345,23 @@ void usb_buffer_dmasync (struct urb *urb)
struct device *controller;
if (!urb
|| !(urb->transfer_flags & URB_NO_DMA_MAP)
|| !(urb->transfer_flags & URB_NO_TRANSFER_DMA_MAP)
|| !urb->dev
|| !(bus = urb->dev->bus)
|| !(controller = bus->controller))
return;
if (controller->dma_mask)
if (controller->dma_mask) {
dma_sync_single (controller,
urb->transfer_dma, urb->transfer_buffer_length,
usb_pipein (urb->pipe)
? DMA_FROM_DEVICE : DMA_TO_DEVICE);
if (usb_pipecontrol (urb->pipe))
dma_sync_single (controller,
urb->setup_dma,
sizeof (struct usb_ctrlrequest),
DMA_TO_DEVICE);
}
}
/**
......@@ -1365,18 +1376,25 @@ void usb_buffer_unmap (struct urb *urb)
struct device *controller;
if (!urb
|| !(urb->transfer_flags & URB_NO_DMA_MAP)
|| !(urb->transfer_flags & URB_NO_TRANSFER_DMA_MAP)
|| !urb->dev
|| !(bus = urb->dev->bus)
|| !(controller = bus->controller))
return;
if (controller->dma_mask)
if (controller->dma_mask) {
dma_unmap_single (controller,
urb->transfer_dma, urb->transfer_buffer_length,
usb_pipein (urb->pipe)
? DMA_FROM_DEVICE : DMA_TO_DEVICE);
urb->transfer_flags &= ~URB_NO_DMA_MAP;
if (usb_pipecontrol (urb->pipe))
dma_unmap_single (controller,
urb->setup_dma,
sizeof (struct usb_ctrlrequest),
DMA_TO_DEVICE);
}
urb->transfer_flags &= ~(URB_NO_TRANSFER_DMA_MAP
| URB_NO_SETUP_DMA_MAP);
}
/**
......@@ -1391,7 +1409,7 @@ void usb_buffer_unmap (struct urb *urb)
*
* The caller is responsible for placing the resulting DMA addresses from
* the scatterlist into URB transfer buffer pointers, and for setting the
* URB_NO_DMA_MAP transfer flag in each of those URBs.
* URB_NO_TRANSFER_DMA_MAP transfer flag in each of those URBs.
*
* Top I/O rates come from queuing URBs, instead of waiting for each one
* to complete before starting the next I/O. This is particularly easy
......
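The usb_buffer_dmasync()/usb_buffer_unmap() changes above suggest a pattern
like the following hedged sketch for a long-lived, usb_buffer_map()ed IN urb
that is recycled from its completion handler; the handler name and the
printk are placeholders, not code from this patch:

	#include <linux/kernel.h>
	#include <linux/usb.h>

	static void my_complete (struct urb *urb, struct pt_regs *regs)
	{
		if (urb->status == 0) {
			/* let the CPU see what the device just DMAed */
			usb_buffer_dmasync (urb);
			printk (KERN_DEBUG "my_complete: %d bytes\n",
					urb->actual_length);
		}

		/* keep the streaming mapping and reuse the urb ... */
		if (usb_submit_urb (urb, GFP_ATOMIC))
			/* ... or drop the mapping if resubmit fails */
			usb_buffer_unmap (urb);
	}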
......@@ -114,20 +114,29 @@ static inline void dbg_hcc_params (struct ehci_hcd *ehci, char *label) {}
#ifdef DEBUG
static void __attribute__((__unused__))
dbg_qtd (char *label, struct ehci_hcd *ehci, struct ehci_qtd *qtd)
{
ehci_dbg (ehci, "%s td %p n%08x %08x t%08x p0=%08x\n", label, qtd,
cpu_to_le32p (&qtd->hw_next),
cpu_to_le32p (&qtd->hw_alt_next),
cpu_to_le32p (&qtd->hw_token),
cpu_to_le32p (&qtd->hw_buf [0]));
if (qtd->hw_buf [1])
ehci_dbg (ehci, " p1=%08x p2=%08x p3=%08x p4=%08x\n",
cpu_to_le32p (&qtd->hw_buf [1]),
cpu_to_le32p (&qtd->hw_buf [2]),
cpu_to_le32p (&qtd->hw_buf [3]),
cpu_to_le32p (&qtd->hw_buf [4]));
}
static void __attribute__((__unused__))
dbg_qh (char *label, struct ehci_hcd *ehci, struct ehci_qh *qh)
{
dbg ("%s %p n%08x info1 %x info2 %x hw_curr %x qtd_next %x", label,
ehci_dbg (ehci, "%s qh %p n%08x info %x %x qtd %x\n", label,
qh, qh->hw_next, qh->hw_info1, qh->hw_info2,
qh->hw_current, qh->hw_qtd_next);
dbg (" alt+nak+t= %x, token= %x, page0= %x, page1= %x",
qh->hw_alt_next, qh->hw_token,
qh->hw_buf [0], qh->hw_buf [1]);
if (qh->hw_buf [2]) {
dbg (" page2= %x, page3= %x, page4= %x",
qh->hw_buf [2], qh->hw_buf [3],
qh->hw_buf [4]);
}
qh->hw_current);
dbg_qtd ("overlay", ehci, (struct ehci_qtd *) &qh->hw_qtd_next);
}
static int __attribute__((__unused__))
......@@ -284,8 +293,7 @@ static inline char token_mark (u32 token)
return '*';
if (token & QTD_STS_HALT)
return '-';
if (QTD_PID (token) != 1 /* not IN: OUT or SETUP */
|| QTD_LENGTH (token) == 0)
if (!IS_SHORT_READ (token))
return ' ';
/* tries to advance through hw_alt_next */
return '/';
......@@ -307,11 +315,14 @@ static void qh_lines (
char *next = *nextp;
char mark;
mark = token_mark (qh->hw_token);
if (qh->hw_qtd_next == EHCI_LIST_END) /* NEC does this */
mark = '@';
else
mark = token_mark (qh->hw_token);
if (mark == '/') { /* qh_alt_next controls qh advance? */
if ((qh->hw_alt_next & QTD_MASK) == ehci->async->hw_alt_next)
mark = '#'; /* blocked */
else if (qh->hw_alt_next & cpu_to_le32 (0x01))
else if (qh->hw_alt_next == EHCI_LIST_END)
mark = '.'; /* use hw_qtd_next */
/* else alt_next points to some other qtd */
}
......@@ -324,7 +335,7 @@ static void qh_lines (
(scratch >> 8) & 0x000f,
scratch, cpu_to_le32p (&qh->hw_info2),
cpu_to_le32p (&qh->hw_token), mark,
(cpu_to_le32 (0x8000000) & qh->hw_token)
(__constant_cpu_to_le32 (QTD_TOGGLE) & qh->hw_token)
? "data0" : "data1",
(cpu_to_le32p (&qh->hw_alt_next) >> 1) & 0x0f);
size -= temp;
......@@ -390,6 +401,8 @@ show_async (struct device *dev, char *buf)
char *next;
struct ehci_qh *qh;
*buf = 0;
pdev = container_of (dev, struct pci_dev, dev);
ehci = container_of (pci_get_drvdata (pdev), struct ehci_hcd, hcd);
next = buf;
......@@ -412,7 +425,7 @@ show_async (struct device *dev, char *buf)
}
spin_unlock_irqrestore (&ehci->lock, flags);
return PAGE_SIZE - size;
return strlen (buf);
}
static DEVICE_ATTR (async, S_IRUGO, show_async, NULL);
......@@ -548,7 +561,8 @@ show_registers (struct device *dev, char *buf)
/* Capability Registers */
i = readw (&ehci->caps->hci_version);
temp = snprintf (next, size,
"EHCI %x.%02x, hcd state %d (version " DRIVER_VERSION ")\n",
"%s\nEHCI %x.%02x, hcd state %d (driver " DRIVER_VERSION ")\n",
pdev->dev.name,
i >> 8, i & 0x0ff, ehci->hcd.state);
size -= temp;
next += temp;
......
......@@ -39,13 +39,10 @@
#include <linux/interrupt.h>
#include <linux/reboot.h>
#include <linux/usb.h>
#include <linux/moduleparam.h>
#include <linux/version.h>
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,5,32)
#include "../hcd.h"
#else
#include "../core/hcd.h"
#endif
#include <asm/byteorder.h>
#include <asm/io.h>
......@@ -94,11 +91,11 @@
* 2001-June Works with usb-storage and NEC EHCI on 2.4
*/
#define DRIVER_VERSION "2003-Jan-22"
#define DRIVER_VERSION "2003-Jun-13"
#define DRIVER_AUTHOR "David Brownell"
#define DRIVER_DESC "USB 2.0 'Enhanced' Host Controller (EHCI) Driver"
static const char hcd_name [] = "ehci-hcd";
static const char hcd_name [] = "ehci_hcd";
// #define EHCI_VERBOSE_DEBUG
......@@ -123,7 +120,7 @@ static const char hcd_name [] = "ehci-hcd";
/* Initial IRQ latency: lower than default */
static int log2_irq_thresh = 0; // 0 to 6
MODULE_PARM (log2_irq_thresh, "i");
module_param (log2_irq_thresh, int, S_IRUGO);
MODULE_PARM_DESC (log2_irq_thresh, "log2 IRQ latency, 1-64 microframes");
#define INTR_MASK (STS_IAA | STS_FATAL | STS_ERR | STS_INT)
......@@ -1020,7 +1017,8 @@ static int __init init (void)
if (usb_disabled())
return -ENODEV;
dbg ("block sizes: qh %Zd qtd %Zd itd %Zd sitd %Zd",
pr_debug ("%s: block sizes: qh %Zd qtd %Zd itd %Zd sitd %Zd\n",
hcd_name,
sizeof (struct ehci_qh), sizeof (struct ehci_qtd),
sizeof (struct ehci_itd), sizeof (struct ehci_sitd));
......
......@@ -88,7 +88,6 @@ qtd_fill (struct ehci_qtd *qtd, dma_addr_t buf, size_t len,
static inline void
qh_update (struct ehci_hcd *ehci, struct ehci_qh *qh, struct ehci_qtd *qtd)
{
qh->hw_current = 0;
qh->hw_qtd_next = QTD_NEXT (qtd->qtd_dma);
qh->hw_alt_next = EHCI_LIST_END;
......@@ -99,8 +98,6 @@ qh_update (struct ehci_hcd *ehci, struct ehci_qh *qh, struct ehci_qtd *qtd)
/*-------------------------------------------------------------------------*/
#define IS_SHORT_READ(token) (QTD_LENGTH (token) != 0 && QTD_PID (token) == 1)
static void qtd_copy_status (
struct ehci_hcd *ehci,
struct urb *urb,
......@@ -279,16 +276,15 @@ qh_completions (struct ehci_hcd *ehci, struct ehci_qh *qh, struct pt_regs *regs)
/* hardware copies qtd out of qh overlay */
rmb ();
token = le32_to_cpu (qtd->hw_token);
stopped = stopped
|| (HALT_BIT & qh->hw_token) != 0
|| (ehci->hcd.state == USB_STATE_HALT);
/* always clean up qtds the hc de-activated */
if ((token & QTD_STS_ACTIVE) == 0) {
/* magic dummy for short reads; won't advance */
if (IS_SHORT_READ (token)
&& !(token & QTD_STS_HALT)
if ((token & QTD_STS_HALT) != 0) {
stopped = 1;
/* magic dummy for some short reads; qh won't advance */
} else if (IS_SHORT_READ (token)
&& (qh->hw_alt_next & QTD_MASK)
== ehci->async->hw_alt_next) {
stopped = 1;
......@@ -296,10 +292,13 @@ qh_completions (struct ehci_hcd *ehci, struct ehci_qh *qh, struct pt_regs *regs)
}
/* stop scanning when we reach qtds the hc is using */
} else if (likely (!stopped)) {
} else if (likely (!stopped
|| HCD_IS_RUNNING (ehci->hcd.state))) {
break;
} else {
stopped = 1;
/* ignore active urbs unless some previous qtd
* for the urb faulted (including short read) or
* its urb was canceled. we may patch qh or qtds.
......@@ -358,12 +357,20 @@ qh_completions (struct ehci_hcd *ehci, struct ehci_qh *qh, struct pt_regs *regs)
qh->qh_state = state;
/* update qh after fault cleanup */
if (unlikely ((HALT_BIT & qh->hw_token) != 0)) {
qh_update (ehci, qh,
list_empty (&qh->qtd_list)
? qh->dummy
: list_entry (qh->qtd_list.next,
struct ehci_qtd, qtd_list));
if (unlikely (stopped != 0)
/* some EHCI 0.95 impls will overlay dummy qtds */
|| qh->hw_qtd_next == EHCI_LIST_END) {
if (list_empty (&qh->qtd_list))
end = qh->dummy;
else {
end = list_entry (qh->qtd_list.next,
struct ehci_qtd, qtd_list);
/* first qtd may already be partially processed */
if (cpu_to_le32 (end->qtd_dma) == qh->hw_current)
end = 0;
}
if (end)
qh_update (ehci, qh, end);
}
return count;
......@@ -788,11 +795,6 @@ static struct ehci_qh *qh_append_tds (
}
}
/* FIXME: changing config or interface setting is not
* supported yet. preferred fix is for usbcore to tell
* us to clear out each endpoint's state, but...
*/
/* usb_clear_halt() means qh data toggle gets reset */
if (unlikely (!usb_gettoggle (urb->dev,
(epnum & 0x0f), !(epnum & 0x10)))
......
......@@ -290,7 +290,10 @@ struct ehci_qtd {
size_t length; /* length of buffer */
} __attribute__ ((aligned (32)));
#define QTD_MASK cpu_to_le32 (~0x1f) /* mask NakCnt+T in qh->hw_alt_next */
/* mask NakCnt+T in qh->hw_alt_next */
#define QTD_MASK __constant_cpu_to_le32 (~0x1f)
#define IS_SHORT_READ(token) (QTD_LENGTH (token) != 0 && QTD_PID (token) == 1)
/*-------------------------------------------------------------------------*/
......
......@@ -330,7 +330,7 @@ aiptek_probe(struct usb_interface *intf,
aiptek->data, aiptek->features->pktlen,
aiptek->features->irq, aiptek, endpoint->bInterval);
aiptek->irq->transfer_dma = aiptek->data_dma;
aiptek->irq->transfer_flags |= URB_NO_DMA_MAP;
aiptek->irq->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
input_register_device(&aiptek->dev);
......
......@@ -1518,7 +1518,7 @@ static struct hid_device *usb_hid_configure(struct usb_interface *intf)
usb_fill_int_urb(hid->urbin, dev, pipe, hid->inbuf, 0,
hid_irq_in, hid, endpoint->bInterval);
hid->urbin->transfer_dma = hid->inbuf_dma;
hid->urbin->transfer_flags |= URB_NO_DMA_MAP;
hid->urbin->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
} else {
if (hid->urbout)
continue;
......@@ -1528,7 +1528,7 @@ static struct hid_device *usb_hid_configure(struct usb_interface *intf)
usb_fill_bulk_urb(hid->urbout, dev, pipe, hid->outbuf, 0,
hid_irq_out, hid);
hid->urbout->transfer_dma = hid->outbuf_dma;
hid->urbout->transfer_flags |= URB_NO_DMA_MAP;
hid->urbout->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
}
}
......@@ -1577,7 +1577,8 @@ static struct hid_device *usb_hid_configure(struct usb_interface *intf)
hid->ctrlbuf, 1, hid_ctrl, hid);
hid->urbctrl->setup_dma = hid->cr_dma;
hid->urbctrl->transfer_dma = hid->ctrlbuf_dma;
hid->urbctrl->transfer_flags |= URB_NO_DMA_MAP;
hid->urbctrl->transfer_flags |= (URB_NO_TRANSFER_DMA_MAP
| URB_NO_SETUP_DMA_MAP);
return hid;
......
......@@ -181,7 +181,7 @@ static int kbtab_probe(struct usb_interface *intf, const struct usb_device_id *i
kbtab->data, 8,
kbtab_irq, kbtab, endpoint->bInterval);
kbtab->irq->transfer_dma = kbtab->data_dma;
kbtab->irq->transfer_flags |= URB_NO_DMA_MAP;
kbtab->irq->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
input_register_device(&kbtab->dev);
......
......@@ -180,7 +180,7 @@ static void powermate_sync_state(struct powermate_device *pm)
(void *) pm->configcr, 0, 0,
powermate_config_complete, pm);
pm->config->setup_dma = pm->configcr_dma;
pm->config->transfer_flags |= URB_NO_DMA_MAP;
pm->config->transfer_flags |= URB_NO_SETUP_DMA_MAP;
if (usb_submit_urb(pm->config, GFP_ATOMIC))
printk(KERN_ERR "powermate: usb_submit_urb(config) failed");
......@@ -355,7 +355,7 @@ static int powermate_probe(struct usb_interface *intf, const struct usb_device_i
POWERMATE_PAYLOAD_SIZE, powermate_irq,
pm, endpoint->bInterval);
pm->irq->transfer_dma = pm->data_dma;
pm->irq->transfer_flags |= URB_NO_DMA_MAP;
pm->irq->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
/* register our interrupt URB with the USB system */
if (usb_submit_urb(pm->irq, GFP_KERNEL)) {
......
......@@ -282,7 +282,7 @@ static int usb_kbd_probe(struct usb_interface *iface,
kbd->new, (maxp > 8 ? 8 : maxp),
usb_kbd_irq, kbd, endpoint->bInterval);
kbd->irq->transfer_dma = kbd->new_dma;
kbd->irq->transfer_flags |= URB_NO_DMA_MAP;
kbd->irq->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
kbd->cr->bRequestType = USB_TYPE_CLASS | USB_RECIP_INTERFACE;
kbd->cr->bRequest = 0x09;
......@@ -325,7 +325,8 @@ static int usb_kbd_probe(struct usb_interface *iface,
usb_kbd_led, kbd);
kbd->led->setup_dma = kbd->cr_dma;
kbd->led->transfer_dma = kbd->leds_dma;
kbd->led->transfer_flags |= URB_NO_DMA_MAP;
kbd->led->transfer_flags |= (URB_NO_TRANSFER_DMA_MAP
| URB_NO_SETUP_DMA_MAP);
input_register_device(&kbd->dev);
......
......@@ -207,7 +207,7 @@ static int usb_mouse_probe(struct usb_interface * intf, const struct usb_device_
(maxp > 8 ? 8 : maxp),
usb_mouse_irq, mouse, endpoint->bInterval);
mouse->irq->transfer_dma = mouse->data_dma;
mouse->irq->transfer_flags |= URB_NO_DMA_MAP;
mouse->irq->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
input_register_device(&mouse->dev);
printk(KERN_INFO "input: %s on %s\n", mouse->name, path);
......
......@@ -590,7 +590,7 @@ static int wacom_probe(struct usb_interface *intf, const struct usb_device_id *i
wacom->data, wacom->features->pktlen,
wacom->features->irq, wacom, endpoint->bInterval);
wacom->irq->transfer_dma = wacom->data_dma;
wacom->irq->transfer_flags |= URB_NO_DMA_MAP;
wacom->irq->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
input_register_device(&wacom->dev);
......
......@@ -259,7 +259,7 @@ static int xpad_probe(struct usb_interface *intf, const struct usb_device_id *id
xpad->idata, XPAD_PKT_LEN, xpad_irq_in,
xpad, ep_irq_in->bInterval);
xpad->irq_in->transfer_dma = xpad->idata_dma;
xpad->irq_in->transfer_flags |= URB_NO_DMA_MAP;
xpad->irq_in->transfer_flags |= URB_NO_TRANSFER_DMA_MAP;
xpad->udev = udev;
......
......@@ -107,7 +107,7 @@ static struct urb *simple_alloc_urb (
urb->interval = (udev->speed == USB_SPEED_HIGH)
? (INTERRUPT_RATE << 3)
: INTERRUPT_RATE;
urb->transfer_flags = URB_NO_DMA_MAP;
urb->transfer_flags = URB_NO_TRANSFER_DMA_MAP;
if (usb_pipein (pipe))
urb->transfer_flags |= URB_SHORT_NOT_OK;
urb->transfer_buffer = usb_buffer_alloc (udev, bytes, SLAB_KERNEL,
......
......@@ -492,8 +492,9 @@ extern int usb_disabled(void);
*/
#define URB_SHORT_NOT_OK 0x0001 /* report short reads as errors */
#define URB_ISO_ASAP 0x0002 /* iso-only, urb->start_frame ignored */
#define URB_NO_DMA_MAP 0x0004 /* urb->*_dma are valid on submit */
#define URB_ASYNC_UNLINK 0x0008 /* usb_unlink_urb() returns asap */
#define URB_NO_TRANSFER_DMA_MAP 0x0004 /* urb->transfer_dma valid on submit */
#define URB_NO_SETUP_DMA_MAP 0x0008 /* urb->setup_dma valid on submit */
#define URB_ASYNC_UNLINK 0x0010 /* usb_unlink_urb() returns asap */
#define URB_NO_FSBR 0x0020 /* UHCI-specific */
#define URB_ZERO_PACKET 0x0040 /* Finish bulk OUTs with short packet */
#define URB_NO_INTERRUPT 0x0080 /* HINT: no non-error interrupt needed */
......@@ -531,14 +532,15 @@ typedef void (*usb_complete_t)(struct urb *, struct pt_regs *);
* submission, unlinking, or operation are handled. Different
* kinds of URB can use different flags.
* @transfer_buffer: This identifies the buffer to (or from) which
* the I/O request will be performed (unless URB_NO_DMA_MAP is set).
* This buffer must be suitable for DMA; allocate it with kmalloc()
* or equivalent. For transfers to "in" endpoints, contents of
* this buffer will be modified. This buffer is used for data
* the I/O request will be performed (unless URB_NO_TRANSFER_DMA_MAP
* is set). This buffer must be suitable for DMA; allocate it with
* kmalloc() or equivalent. For transfers to "in" endpoints, contents
* of this buffer will be modified. This buffer is used for data
* phases of control transfers.
* @transfer_dma: When transfer_flags includes URB_NO_DMA_MAP, the device
* driver is saying that it provided this DMA address, which the host
* controller driver should use instead of the transfer_buffer.
* @transfer_dma: When transfer_flags includes URB_NO_TRANSFER_DMA_MAP,
* the device driver is saying that it provided this DMA address,
* which the host controller driver should use in preference to the
* transfer_buffer.
* @transfer_buffer_length: How big is transfer_buffer. The transfer may
* be broken up into chunks according to the current maximum packet
* size for the endpoint, which is a function of the configuration
......@@ -553,11 +555,10 @@ typedef void (*usb_complete_t)(struct urb *, struct pt_regs *);
* @setup_packet: Only used for control transfers, this points to eight bytes
* of setup data. Control transfers always start by sending this data
* to the device. Then transfer_buffer is read or written, if needed.
* (Not used when URB_NO_DMA_MAP is set.)
* @setup_dma: For control transfers with URB_NO_DMA_MAP set, the device
* driver has provided this DMA address for the setup packet. The
* host controller driver should use this instead of setup_buffer.
* If there is a data phase, its buffer is identified by transfer_dma.
* @setup_dma: For control transfers with URB_NO_SETUP_DMA_MAP set, the
* device driver has provided this DMA address for the setup packet.
* The host controller driver should use this in preference to
* setup_packet.
* @start_frame: Returns the initial frame for interrupt or isochronous
* transfers.
* @number_of_packets: Lists the number of ISO transfer buffers.
......@@ -589,13 +590,15 @@ typedef void (*usb_complete_t)(struct urb *, struct pt_regs *);
* bounce buffer or talking to an IOMMU),
* although they're cheap on commodity x86 and ppc hardware.
*
* Alternatively, drivers may pass the URB_NO_DMA_MAP transfer flag, which
* tells the host controller driver that no such mapping is needed since
* the device driver is DMA-aware. For example, they might allocate a DMA
* buffer with usb_buffer_alloc(), or call usb_buffer_map().
* When this transfer flag is provided, host controller drivers will use the
* dma addresses found in the transfer_dma and/or setup_dma fields rather than
* determing a dma address themselves.
* Alternatively, drivers may pass the URB_NO_xxx_DMA_MAP transfer flags,
* which tell the host controller driver that no such mapping is needed since
* the device driver is DMA-aware. For example, a device driver might
* allocate a DMA buffer with usb_buffer_alloc() or call usb_buffer_map().
* When these transfer flags are provided, host controller drivers will
* attempt to use the dma addresses found in the transfer_dma and/or
* setup_dma fields rather than determining a dma address themselves. (Note
* that transfer_buffer and setup_packet must still be set because not all
* host controllers use DMA, nor do virtual root hubs).
*
* Initialization:
*
......@@ -614,7 +617,11 @@ typedef void (*usb_complete_t)(struct urb *, struct pt_regs *);
* should always terminate with a short packet, even if it means adding an
* extra zero length packet.
*
* Control URBs must provide a setup_packet.
* Control URBs must provide a setup_packet. The setup_packet and
* transfer_buffer may each be mapped for DMA or not, independently of
* the other. The transfer_flags bits URB_NO_TRANSFER_DMA_MAP and
* URB_NO_SETUP_DMA_MAP indicate which buffers have already been mapped.
* URB_NO_SETUP_DMA_MAP is ignored for non-control URBs.
*
 * Interrupt URBs must provide an interval, saying how often (in milliseconds
* or, for highspeed devices, 125 microsecond units)
......
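To make the split concrete, a hedged sketch (not from this patch) of a
control urb whose setup packet comes from usb_buffer_alloc() while the
kmalloc()-suitable data buffer is left for usbcore to map; the vendor
request value, length handling, and helper name are invented:

	#include <linux/usb.h>

	static int my_vendor_read (struct usb_device *dev, struct urb *urb,
			void *data, u16 len, usb_complete_t complete)
	{
		struct usb_ctrlrequest	*cr;
		dma_addr_t		cr_dma;

		cr = usb_buffer_alloc (dev, sizeof *cr, GFP_KERNEL, &cr_dma);
		if (!cr)
			return -ENOMEM;

		cr->bRequestType = USB_DIR_IN | USB_TYPE_VENDOR
				| USB_RECIP_DEVICE;
		cr->bRequest = 0x01;		/* hypothetical request */
		cr->wValue = 0;
		cr->wIndex = 0;
		cr->wLength = cpu_to_le16 (len);

		usb_fill_control_urb (urb, dev, usb_rcvctrlpipe (dev, 0),
				(unsigned char *) cr, data, len,
				complete, NULL);

		/* only the setup packet is pre-mapped; usbcore still
		 * maps the data buffer
		 */
		urb->setup_dma = cr_dma;
		urb->transfer_flags |= URB_NO_SETUP_DMA_MAP;

		return usb_submit_urb (urb, GFP_KERNEL);
	}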