1. 17 Dec, 2010 3 commits
    • cciss: fix cciss_revalidate panic · 0fc13c89
      Stephen M. Cameron authored
      If you delete a logical drive and then run BLKRRPART (e.g. via fdisk)
      on a logical drive which comes "after" the deleted one in the h->drv[]
      array, cciss_revalidate panics because it dereferences the NULL pointer
      h->drv[x] when x hits the deleted drive.
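
      The fix amounts to a NULL check before each h->drv[x] dereference. A
      minimal sketch of that guard follows; the loop bounds and the
      drive_info_struct name approximate the cciss driver, and the body is
      elided, so this is illustrative rather than the verbatim patch:

        for (x = 0; x <= h->highest_lun; x++) {
                drive_info_struct *drv = h->drv[x];

                if (drv == NULL)    /* hole left by a deleted logical drive */
                        continue;

                /* ... revalidate geometry/partitions for this drive ... */
        }
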
      Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
      Cc: stable@kernel.org
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: max hardware sectors limit wrapper · 72d4cd9f
      Mike Snitzer authored
      Implement blk_limits_max_hw_sectors() and make
      blk_queue_max_hw_sectors() a wrapper around it.
      
      DM needs this to avoid setting queue_limits' max_hw_sectors and
      max_sectors directly.  dm_set_device_limits() now leverages
      blk_limits_max_hw_sectors() logic to establish the appropriate
      max_hw_sectors minimum (PAGE_SIZE).  Fixes an issue where DM was
      incorrectly setting max_sectors rather than max_hw_sectors (which
      caused dm_merge_bvec()'s max_hw_sectors check to be ineffective).
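
      A hedged sketch of the split described above (close in shape to, but
      not verbatim, the kernel code; the PAGE_SIZE/PAGE_SHIFT clamp and the
      printk wording are approximations): the limits-only helper enforces
      the minimum and derives max_sectors, and the queue variant forwards
      its q->limits.

        void blk_limits_max_hw_sectors(struct queue_limits *limits,
                                       unsigned int max_hw_sectors)
        {
                /* Clamp to a PAGE_SIZE minimum, as described above. */
                if ((max_hw_sectors << 9) < PAGE_SIZE) {
                        max_hw_sectors = 1 << (PAGE_SHIFT - 9);
                        printk(KERN_INFO "%s: set to minimum %u\n",
                               __func__, max_hw_sectors);
                }

                limits->max_hw_sectors = max_hw_sectors;
                limits->max_sectors = min_t(unsigned int, max_hw_sectors,
                                            BLK_DEF_MAX_SECTORS);
        }

        /* The queue variant becomes a thin wrapper, so stacking drivers
         * such as DM can operate on bare queue_limits before a
         * request_queue exists. */
        void blk_queue_max_hw_sectors(struct request_queue *q,
                                      unsigned int max_hw_sectors)
        {
                blk_limits_max_hw_sectors(&q->limits, max_hw_sectors);
        }
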
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@kernel.org
      Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: Deprecate QUEUE_FLAG_CLUSTER and use queue_limits instead · e692cb66
      Martin K. Petersen authored
      When stacking devices, a request_queue is not always available. This
      forced us to have a no_cluster flag in the queue_limits that could be
      used as a carrier until the request_queue had been set up for a
      metadevice.
      
      There were several problems with that approach. First of all, it was up
      to the stacking device to remember to set the queue flag after stacking
      had completed. Also, the queue flag and the queue limits had to be kept
      in sync at all times. We got that wrong, which could lead to us issuing
      commands that went beyond the max scatterlist limit set by the driver.
      
      The proper fix is to avoid having two flags for tracking the same thing.
      We deprecate QUEUE_FLAG_CLUSTER and use the queue limit directly in the
      block layer merging functions. The queue_limit 'no_cluster' is turned
      into 'cluster' to avoid double negatives and to ease stacking.
      Clustering defaults to being enabled as before. The queue flag logic is
      removed from the stacking function, and explicitly setting the cluster
      flag is no longer necessary in DM and MD.
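
      Roughly, the change has this shape (an illustrative sketch, not the
      verbatim diff; the helper and function names follow block-layer
      conventions): the merging code asks a small helper that reads the
      limit, and the default limits keep clustering enabled.

        /* Illustrative sketch, not the verbatim diff. */
        static inline int blk_queue_cluster(struct request_queue *q)
        {
                /* was: test_bit(QUEUE_FLAG_CLUSTER, &q->queue_flags) */
                return q->limits.cluster;
        }

        void blk_set_default_limits(struct queue_limits *lim)
        {
                /* ... other defaults ... */
                lim->cluster = 1;   /* clustering stays enabled by default */
        }
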
      Reported-by: Ed Lin <ed.lin@promise.com>
      Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
      Acked-by: Mike Snitzer <snitzer@redhat.com>
      Cc: stable@kernel.org
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  2. 01 Dec, 2010 2 commits
    • blk-throttle: Correct the placement of smp_rmb() · 04a6b516
      Vivek Goyal authored
      o While discussing which variables are updated without a spinlock and
        why we need barriers, Oleg pointed out that the smp_rmb() should be
        placed between the read of td->limits_changed and the read of
        tg->limits_changed. This patch fixes that.
      
      o The following is one possible sequence of events. Say cpu0 is executing
        throtl_update_blkio_group_read_bps() and cpu1 is executing
        throtl_process_limit_change().
      
       cpu0                                                cpu1
      
       tg->limits_changed = true;
       smp_mb__before_atomic_inc();
       atomic_inc(&td->limits_changed);
      
                                           if (!atomic_read(&td->limits_changed))
                                                   return;
      
                                           if (tg->limits_changed)
                                                   do_something;
      
       If cpu0 has updated tg->limits_changed and td->limits_changed, we want to
       make sure that if the update to td->limits_changed is visible on cpu1,
       then the update to tg->limits_changed is also visible.

       Oleg pointed out that, to ensure this, we need an smp_rmb() between the
       read of td->limits_changed and the read of tg->limits_changed.
      
      o I had erroneously put smp_rmb() before atomic_read(&td->limits_changed).
        This patch fixes it.
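
      In code form, the corrected reader side looks roughly like this (a
      sketch of throtl_process_limit_change(), not the full function; the
      loop over throttle groups is elided):

        static void throtl_process_limit_change(struct throtl_data *td)
        {
                if (!atomic_read(&td->limits_changed))
                        return;

                /*
                 * Pairs with smp_mb__before_atomic_inc() on the update
                 * side: once td->limits_changed is seen as set, the
                 * earlier store to tg->limits_changed must be visible too.
                 */
                smp_rmb();

                /* ... for each group: if (tg->limits_changed) do_something; ... */
        }
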
      Reported-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • blk-throttle: Trim/adjust slice_end once a bio has been dispatched · d1ae8ffd
      Vivek Goyal authored
      o During some testing I did the following and noticed that throttling
        stopped working.
      
              - Put a very low limit on a cgroup, say 1 byte per second.
              - Start some reads; this will set slice_end to a very high value.
              - Change the limit to a higher value, say 1MB/s.
              - Now IO unthrottles and finishes as expected.
              - Try the read again, but IO is not limited to 1MB/s as expected.
      
      o What is happening:
              - Initially, the low limit sets slice_end to a very high value.
              - When the limit is updated, slice_end is not truncated.
              - The very high slice_end keeps the existing slice valid for a
                very long time, and a new slice does not start.
              - tg_may_dispatch() is called in blk_throtl_bio(), and trim_slice()
                is not called in this path, so slice_start is some old value and
                we are effectively able to do a huge amount of IO.
      
      o There are many ways this could be fixed. I have fixed it by adjusting
        and cleaning up slice_end in trim_slice(). Generally we extend slices
        when a bio is big and can't be dispatched in one slice; after the bio
        has been dispatched, readjust slice_end to make sure we don't end up
        with huge values, as sketched below.
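
      A hedged approximation of the adjustment (field and helper names follow
      blk-throttle conventions; the surrounding bookkeeping is elided, so this
      is not the verbatim patch):

        /* After a bio has been dispatched, cap slice_end: a value inflated
         * by an earlier, very low limit must not keep the old slice valid
         * indefinitely and prevent a fresh slice (with the new limit) from
         * starting. */
        static inline void throtl_trim_slice(struct throtl_data *td,
                                             struct throtl_grp *tg, bool rw)
        {
                /* ... existing checks and slice bookkeeping ... */

                if (time_after(tg->slice_end[rw], jiffies + throtl_slice))
                        tg->slice_end[rw] = jiffies + throtl_slice;

                /* ... trim bytes/ios already accounted to this slice ... */
        }
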
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>