1. 26 Oct, 2008 6 commits
  2. 15 Oct, 2008 16 commits
    • firewire: fix ioctl() return code · 99692f71
      Stefan Richter authored
      Reported by Jay Fenlason:  ioctl() did not return as intended
        - the size of data read into ioctl_send_request,
        - the number of datagrams enqueued by ioctl_queue_iso.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      99692f71
    • firewire: fix setting tag and sy in iso transmission · 7a100344
      Stefan Richter authored
      Reported by Jay Fenlason:
      The iso packet control accessors in fw-cdev.c had bogus masks.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      7a100344
    • firewire: fw-sbp2: fix another small generation access bug · 4bbc1bdd
      Stefan Richter authored
      queuecommand() looked at the remote and local node IDs before it read
      the bus generation.  The corresponding race with sbp2_reconnect
      updating these data probably could not occur though, because the
      current code blocks the SCSI layer during reconnection.  Still,
      better safe than sorry, especially if someone later improves the
      code not to block the SCSI layer.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      4bbc1bdd
    • firewire: fw-sbp2: enforce s/g segment size limit · 09b12dd4
      Stefan Richter authored
      1. We don't need to round the SBP-2 segment size limit down to a
         multiple of 4 kB (0xffff -> 0xf000).  It is only necessary to
         ensure quadlet alignment (0xffff -> 0xfffc).
      
      2. Use dma_set_max_seg_size() to tell the DMA mapping infrastructure
         and the block IO layer about the restriction.  This way we can
         remove the size checks and segment splitting in the queuecommand
         path.
      
         This assumes that no other code in the firewire stack uses
         dma_map_sg() with conflicting requirements.  It furthermore assumes
         that the controller device's platform actually allows us to set the
         segment size to our liking.  Assert the latter with a BUG_ON().
      
      3. Also use blk_queue_max_segment_size() to tell the block IO layer
         about it.  It cannot know this by itself because our scsi_add_host()
         does not
         point to the FireWire controller's device.
      
      Thanks to Grant Grundler and FUJITA Tomonori for advice.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      09b12dd4
    • firewire: fw_send_request_sync() · 1e119fa9
      Jay Fenlason authored
      Share code between fw_send_request + wait_for_completion callers.
      Signed-off-by: Jay Fenlason <fenlason@redhat.com>
      
      Addendum:
      Removes an unnecessary struct and an unused retry loop.
      Calls it fw_run_transaction() instead of fw_send_request_sync().
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      Acked-by: Kristian Høgsberg <krh@redhat.com>
      1e119fa9
    • ieee1394: survive a few seconds connection loss · fc392fe8
      Stefan Richter authored
      There are situations when nodes vanish from the bus and quickly
      reappear:
        - When certain bus-powered hubs are plugged in,
        - when certain disk enclosures are switched from self-power to bus
          power or vice versa and break the daisy chain during the transition,
        - when the user plugs a cable out and quickly plugs it back in, e.g.
          to reorder a daisy chain (works on Mac OS X if done quickly enough),
        - when certain hubs temporarily malfunction during high bus traffic.
      
      The ieee1394 driver's nodemgr already contained a function to set
      vanished nodes aside into "limbo"; i.e. they wouldn't actually be
      deleted right away.  (In fact, only unloading the driver or writing into
      an obscure sysfs attribute would delete them eventually.)  If nodes
      reappeared later, they would be resurrected out of limbo.
      
      Moving nodes into and out of limbo was accompanied by calls to the
      .suspend() and .resume() methods of the drivers bound to a node's
      unit directories.  Not only is this somewhat strange, since these
      methods are intended for power management; the sbp2 driver in
      particular also does not implement .suspend() and .resume().  Hence
      sbp2 would be disconnected from devices in situations like the ones
      listed above.
      
      We now:
        - leave drivers bound when nodes go into limbo,
        - call the drivers' .update() when nodes come out of limbo,
        - automatically delete in-limbo nodes 3 seconds after the last
          bus reset and bus rescan,
        - remove the bus attribute /sys/bus/ieee1394/destroy_node, which
          is obsolete now that removal happens automatically.
      
      This especially lets sbp2 survive brief disconnections.  You can for
      example yank a disk's cable and plug it back in while reading the
      respective disk with dd, but dd will happily continue as if nothing
      happened.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      fc392fe8
    • ieee1394: nodemgr clean up class iterators · 11305c3e
      Stefan Richter authored
      Remove useless pointer type casts.
      Remove unnecessary hi->host indirection where only host is used.
      Remove an unnecessary WARN_ON.
      Change a few names.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      11305c3e
    • ieee1394: dv1394, video1394: remove unnecessary expressions · d98562d1
      Stefan Richter authored
      init->channel and v.buffer are unsigned, so tests for < 0 are always
      false.  gcc knows this and eliminates the code, but remove the dead
      tests anyway.
      Reported by Roel Kluin.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      d98562d1
    • ieee1394: raw1394: make write() thread-safe · f22e52b8
      Stefan Richter authored
      Application programs should use a libraw1394 handle only in a single
      thread.  The raw1394 driver was apparently relying on this, because it
      did nothing to protect its fi->state variable from corruption due to
      concurrent accesses.
      
      We now serialize the fi->state accesses.  This affects the write() path.
      We re-use the state_mutex which was introduced to protect fi->iso_state
      accesses in the ioctl() path.  These paths and accesses are independent
      of each other, hence separate mutexes could be used.  But I don't see
      much benefit in that.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      f22e52b8
    • ieee1394: raw1394: narrow down the state_mutex protected region · ddfb908d
      Stefan Richter authored
      Refactor the ioctl dispatcher in order to move a fraction of it out of
      the section which is serialized by fi->state_mutex.  This is not so
      much about performance as about self-documentation:  the mutex_lock()/
      mutex_unlock() calls are now closer to the data accesses which the mutex
      protects, i.e. to the iso_state switch.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      ddfb908d
    • ieee1394: raw1394: replace BKL by local mutex, make ioctl() and mmap() thread-safe · 10963ea1
      Stefan Richter authored
      This removes the last usage of the Big Kernel Lock from the ieee1394
      stack, i.e. from raw1394's (unlocked_)ioctl and compat_ioctl.
      
      The ioctl()s don't need to take the BKL, but they need to be serialized
      per struct file *.  In particular, accesses to ->iso_state need to be
      serial.  We simply use a blocking mutex for this purpose because
      libraw1394 does not use O_NONBLOCK.  In practice, there is no lock
      contention anyway because most if not all libraw1394 clients use a
      libraw1394 handle only in a single thread.
      
      mmap() also accesses ->iso_state.  Until now this was unprotected
      against concurrent changes by ioctls.  Fix this bug while we are at it.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      10963ea1
    • ieee1394: sbp2: enforce s/g segment size limit · ed6ffd08
      Stefan Richter authored
      1. We don't need to round the SBP-2 segment size limit down to a
         multiple of 4 kB (0xffff -> 0xf000).  It is only necessary to
         ensure quadlet alignment (0xffff -> 0xfffc).
      
      2. Use dma_set_max_seg_size() to tell the DMA mapping infrastructure
         and the block IO layer about the restriction.  This way we can
         remove the size checks and segment splitting in the queuecommand
         path.
      
         This assumes that no other code in the ieee1394 stack uses
         dma_map_sg() with conflicting requirements.  It furthermore assumes
         that the controller device's platform actually allows us to set the
         segment size to our liking.  Assert the latter with a BUG_ON().
      
      3. Also use blk_queue_max_segment_size() to tell the block IO layer
         about it.  It cannot know this by itself because our scsi_add_host()
         does not
         point to the FireWire controller's device.
      
      We can also uniformly use dma_map_sg() for the single-segment case
      just like for the multi-segment case, to further simplify the code.
      
      Also clean up how the page table is converted to big endian.
      
      Thanks to Grant Grundler and FUJITA Tomonori for advice.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      ed6ffd08
    • ieee1394: sbp2: stricter dma_sync · 0a77b17c
      Stefan Richter authored
      Two dma_sync_single_for_cpu() calls were located in the wrong place.
      Luckily they were merely for DMA_TO_DEVICE, hence nobody noticed.
      
      Also reorder the matching dma_sync_single_for_device() a little bit
      so that they reside in the same functions as their counterparts.
      This also avoids syncing the s/g table for requests which don't use it.
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      0a77b17c
    • ieee1394: Use DIV_ROUND_UP · 68e2aa79
      Julia Lawall authored
      Signed-off-by: Julia Lawall <julia@diku.dk>
      Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
      68e2aa79
  3. 09 Oct, 2008 12 commits
  4. 08 Oct, 2008 3 commits
  5. 07 Oct, 2008 3 commits