1. 09 Dec, 2012 1 commit
  2. 06 Dec, 2012 7 commits
    • partitions: enable EFI/GPT support by default · 5f6f38db
      Diego Calleja authored
      The Kconfig currently enables MSDOS partitions by default because they
      are assumed to be essential, but it's necessary to enable "advanced
      partition selection" in order to get GPT support. IMO GPT partitions
      are becoming common enough to deserve the same treatment MSDOS
      partitions get.

      (Side note: I got bitten by a disk that had both MSDOS and GPT partition
      tables, but for some reason the MSDOS table was different from the
      GPT one. I was stupid enough to disable "advanced partition
      selection" in my .config, which disabled GPT partitioning and made
      my btrfs pool unbootable because it couldn't find the partitions.)
      Signed-off-by: Diego Calleja <diegocg@gmail.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bsg: Remove unused function bsg_goose_queue() · 80729beb
      Bart Van Assche authored
      The function bsg_goose_queue() does not have any in-tree callers,
      so let's remove it.
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: Make blk_cleanup_queue() wait until request_fn finished · 24faf6f6
      Bart Van Assche authored
      Some request_fn implementations, e.g. scsi_request_fn(), unlock
      the queue lock internally. This may result in multiple threads
      executing request_fn for the same queue simultaneously. Keep
      track of the number of active request_fn calls and make sure that
      blk_cleanup_queue() waits until all active request_fn invocations
      have finished. A block driver may start cleaning up resources
      needed by its request_fn as soon as blk_cleanup_queue() has finished,
      so blk_cleanup_queue() must wait for all outstanding request_fn
      invocations to finish (see the sketch below).
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Reported-by: Chanho Min <chanho.min@lge.com>
      Cc: James Bottomley <JBottomley@Parallels.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
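
      A minimal user-space sketch of the counting scheme this commit message
      describes, assuming a pthread-based analogue rather than the actual
      kernel code: struct queue, run_queue(), cleanup_queue() and the dying
      flag are illustrative names, and request_fn_active simply mirrors the
      active-call count mentioned above.

      #include <pthread.h>
      #include <stdio.h>

      struct queue {
          pthread_mutex_t lock;              /* stands in for q->queue_lock */
          pthread_cond_t  drained;
          int             request_fn_active; /* request_fn calls in flight */
          int             dying;             /* set once cleanup has begun */
      };

      /* Invoke the request function while keeping the active count accurate,
       * even if the callback drops and reacquires the lock internally. */
      static void run_queue(struct queue *q, void (*request_fn)(struct queue *))
      {
          pthread_mutex_lock(&q->lock);
          if (!q->dying) {
              q->request_fn_active++;
              request_fn(q);
              if (--q->request_fn_active == 0)
                  pthread_cond_broadcast(&q->drained);
          }
          pthread_mutex_unlock(&q->lock);
      }

      /* Cleanup must not return while any request_fn invocation is still
       * running, because the caller may free resources request_fn needs
       * immediately afterwards. */
      static void cleanup_queue(struct queue *q)
      {
          pthread_mutex_lock(&q->lock);
          q->dying = 1;
          while (q->request_fn_active > 0)
              pthread_cond_wait(&q->drained, &q->lock);
          pthread_mutex_unlock(&q->lock);
      }

      static void dummy_request_fn(struct queue *q)
      {
          (void)q;
          printf("request_fn running\n");
      }

      int main(void)
      {
          struct queue q = { .request_fn_active = 0, .dying = 0 };

          pthread_mutex_init(&q.lock, NULL);
          pthread_cond_init(&q.drained, NULL);
          run_queue(&q, dummy_request_fn);
          cleanup_queue(&q);
          pthread_cond_destroy(&q.drained);
          pthread_mutex_destroy(&q.lock);
          return 0;
      }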
    • block: Avoid scheduling delayed work on a dead queue · 70460571
      Bart Van Assche authored
      Running a queue must continue after it has been marked dying until
      it has been marked dead. So the function blk_run_queue_async() must
      not schedule delayed work after blk_cleanup_queue() has marked a queue
      dead. Hence add a test for that queue state in blk_run_queue_async()
      and make sure that queue_unplugged() invokes that function with the
      queue lock held. This prevents the queue state from changing after it
      has been tested and before mod_delayed_work() is invoked (see the
      sketch below). Drop the queue dying test in queue_unplugged() since it
      is now superfluous: __blk_run_queue() already tests whether or not the
      queue is dead.
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
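
      The key point of this commit is that the dead-queue test and the
      scheduling of the delayed work happen under the same lock, so the state
      cannot change in between. A small user-space sketch of that pattern,
      with illustrative names: run_queue_async() and schedule_work() stand in
      for blk_run_queue_async() and mod_delayed_work(), and the caller plays
      the role of queue_unplugged() holding the queue lock.

      #include <pthread.h>
      #include <stdio.h>

      struct queue {
          pthread_mutex_t lock;         /* stands in for q->queue_lock */
          int             stopped;
          int             dead;         /* set by the cleanup path */
          int             work_pending; /* "delayed work" for the demo */
      };

      /* Illustrative stand-in for mod_delayed_work(): just records that the
       * queue should be run again later. */
      static void schedule_work(struct queue *q)
      {
          q->work_pending = 1;
      }

      /* Must be called with q->lock held, so the dead flag cannot change
       * between the test and the call to schedule_work(). */
      static void run_queue_async(struct queue *q)
      {
          if (!q->stopped && !q->dead)
              schedule_work(q);
      }

      int main(void)
      {
          struct queue q = { .stopped = 0, .dead = 0, .work_pending = 0 };

          pthread_mutex_init(&q.lock, NULL);

          pthread_mutex_lock(&q.lock);
          run_queue_async(&q);      /* queue still alive: work is scheduled */
          printf("pending before cleanup: %d\n", q.work_pending);

          q.dead = 1;               /* cleanup marks the queue dead ...     */
          q.work_pending = 0;       /* (reset only for the demonstration)   */
          run_queue_async(&q);      /* ... so no new work may be scheduled  */
          printf("pending after cleanup:  %d\n", q.work_pending);
          pthread_mutex_unlock(&q.lock);

          pthread_mutex_destroy(&q.lock);
          return 0;
      }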
    • block: Avoid that request_fn is invoked on a dead queue · c246e80d
      Bart Van Assche authored
      A block driver may start cleaning up resources needed by its
      request_fn as soon as blk_cleanup_queue() has finished, so request_fn
      must not be invoked after draining has finished. This is important
      when blk_run_queue() is invoked without any requests in progress.
      As an example, if blk_drain_queue() and scsi_run_queue() run in
      parallel, blk_drain_queue() may have finished all requests after
      scsi_run_queue() has taken a SCSI device off the starved list but
      before the latter has had a chance to run the queue (see the sketch
      below).
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Cc: James Bottomley <JBottomley@Parallels.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Chanho Min <chanho.min@lge.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
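
      A toy two-thread illustration of the race and the guard described above,
      not the kernel code: run_queue(), late_runner() and the dead flag are
      made-up names standing in for the blk_run_queue()/scsi_run_queue() paths
      and the queue state checked after draining.

      #include <pthread.h>
      #include <stdio.h>

      struct queue {
          pthread_mutex_t lock;
          int             dead; /* set once draining has finished */
      };

      static void request_fn(struct queue *q)
      {
          (void)q;
          printf("request_fn invoked\n");
      }

      /* Guarded run: a no-op once the queue is dead, so a runner that lost
       * the race with the drain cannot touch resources being torn down. */
      static void run_queue(struct queue *q)
      {
          pthread_mutex_lock(&q->lock);
          if (!q->dead)
              request_fn(q);
          pthread_mutex_unlock(&q->lock);
      }

      /* Analogue of scsi_run_queue() deciding, late, to run the queue. */
      static void *late_runner(void *arg)
      {
          run_queue(arg);
          return NULL;
      }

      int main(void)
      {
          struct queue q = { .dead = 0 };
          pthread_t t;

          pthread_mutex_init(&q.lock, NULL);

          /* Draining finishes and marks the queue dead ... */
          pthread_mutex_lock(&q.lock);
          q.dead = 1;
          pthread_mutex_unlock(&q.lock);

          /* ... then a late runner tries to run it: nothing happens. */
          pthread_create(&t, NULL, late_runner, &q);
          pthread_join(t, NULL);

          pthread_mutex_destroy(&q.lock);
          return 0;
      }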
    • block: Let blk_drain_queue() caller obtain the queue lock · 807592a4
      Bart Van Assche authored
      Let the caller of blk_drain_queue() obtain the queue lock to improve
      readability of the patch called "Avoid that request_fn is invoked on
      a dead queue".
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: James Bottomley <JBottomley@Parallels.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Chanho Min <chanho.min@lge.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block: Rename queue dead flag · 3f3299d5
      Bart Van Assche authored
      QUEUE_FLAG_DEAD is used to indicate that queuing new requests must
      stop. After this flag has been set, queue draining starts. However,
      during the queue draining phase it is still safe to invoke the
      queue's request_fn, so QUEUE_FLAG_DYING is a better name for this
      flag.
      
      This patch has been generated by running the following command
      over the kernel source tree:
      
      git grep -lEw 'blk_queue_dead|QUEUE_FLAG_DEAD' |
          xargs sed -i.tmp -e 's/blk_queue_dead/blk_queue_dying/g'      \
              -e 's/QUEUE_FLAG_DEAD/QUEUE_FLAG_DYING/g';                \
      sed -i.tmp -e "s/QUEUE_FLAG_DYING$(printf \\t)*5/QUEUE_FLAG_DYING$(printf \\t)5/g" \
          include/linux/blkdev.h;                                       \
      sed -i.tmp -e 's/ DEAD/ DYING/g' -e 's/dead queue/a dying queue/' \
          -e 's/Dead queue/A dying queue/' block/blk-core.c
      Signed-off-by: Bart Van Assche <bvanassche@acm.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: James Bottomley <JBottomley@Parallels.com>
      Cc: Mike Christie <michaelc@cs.wisc.edu>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Chanho Min <chanho.min@lge.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  3. 05 Dec, 2012 1 commit
  4. 10 Nov, 2012 1 commit
  5. 09 Nov, 2012 1 commit
    • block: recursive merge requests · bee0393c
      Shaohua Li authored
      In a workload, thread 1 accesses a, a+2, ..., while thread 2 accesses
      a+1, a+3, .... When the requests are flushed to the queue, a and a+1
      are merged into (a, a+1), and a+2 and a+3 into (a+2, a+3), but
      (a, a+1) and (a+2, a+3) aren't merged.

      If we merge recursively for such interleaved access, some workloads'
      throughput improves. A recent workload I'm looking at is swap; this
      change boosts its throughput by around 5% ~ 10% (see the sketch below).
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
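
      A toy, user-space illustration of why repeated ("recursive") merging
      collapses the interleaved pattern above into a single request; this is
      not the elevator code, and struct req and recursive_merge() are names
      made up for the example.

      #include <stdio.h>

      /* A toy "request": a contiguous range of sectors [start, start + len). */
      struct req {
          unsigned long start;
          unsigned long len;
      };

      /* Coalesce sorted pending requests repeatedly: after two adjacent
       * requests are merged, the result is immediately considered for a
       * merge with the next one, so a, a+1, a+2, a+3 become one request.
       * Returns the number of requests left in the array. */
      static int recursive_merge(struct req *reqs, int n)
      {
          int out = 0;

          for (int i = 0; i < n; i++) {
              if (out > 0 &&
                  reqs[out - 1].start + reqs[out - 1].len == reqs[i].start) {
                  /* Contiguous with the previously merged request:
                   * extend it instead of keeping a separate request. */
                  reqs[out - 1].len += reqs[i].len;
              } else {
                  reqs[out++] = reqs[i];
              }
          }
          return out;
      }

      int main(void)
      {
          /* Thread 1 issued a and a+2, thread 2 issued a+1 and a+3 (a = 100). */
          struct req reqs[] = {
              { 100, 1 }, { 101, 1 }, { 102, 1 }, { 103, 1 },
          };
          int n = recursive_merge(reqs, 4);

          for (int i = 0; i < n; i++)
              printf("request: start=%lu len=%lu\n",
                     reqs[i].start, reqs[i].len);
          return 0;
      }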
  6. 06 Nov, 2012 1 commit
    • block CFQ: avoid moving request to different queue · 3d106fba
      Shaohua Li authored
      A request is queued in the cfqq->fifo list. It looks like it's possible
      that we move a request from one cfqq to another in the request-merge
      case. In that case, adjusting the fifo list order doesn't make sense
      and is impossible without iterating the whole fifo list.

      My test does hit one case where the two cfqqs are different, but it
      didn't cause a kernel crash, maybe because the fifo list isn't used
      frequently. Anyway, judging from the code logic, this is buggy
      (a sketch of the guard follows below).

      I think we can re-enable the recursive merge logic once this is fixed.
      Signed-off-by: Shaohua Li <shli@fusionio.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
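
      A toy illustration of the guard this commit describes, not the CFQ code:
      struct cfq_queue, struct request and merged_requests() are simplified
      stand-ins, and the fifo list is modeled as a single fifo_time value.

      #include <stdio.h>

      struct cfq_queue {
          const char *name;
      };

      struct request {
          struct cfq_queue *cfqq;      /* queue whose fifo list holds this rq */
          unsigned long     fifo_time; /* position/expiry in that fifo list   */
      };

      static void merged_requests(struct request *rq, struct request *next)
      {
          /*
           * If rq and next belong to different queues, next is not on rq's
           * fifo list at all, so adjusting rq's fifo position based on next
           * would be meaningless: bail out early instead.
           */
          if (rq->cfqq != next->cfqq) {
              printf("different queues (%s vs %s): leave fifo order alone\n",
                     rq->cfqq->name, next->cfqq->name);
              return;
          }

          /* Same queue: let the surviving request inherit the earlier slot. */
          if (next->fifo_time < rq->fifo_time)
              rq->fifo_time = next->fifo_time;
      }

      int main(void)
      {
          struct cfq_queue q1 = { "cfqq1" }, q2 = { "cfqq2" };
          struct request a = { &q1, 100 }, b = { &q2, 50 }, c = { &q1, 50 };

          merged_requests(&a, &b); /* different cfqqs: nothing is adjusted */
          merged_requests(&a, &c); /* same cfqq: fifo position is updated  */
          printf("rq fifo_time after merges: %lu\n", a.fifo_time);
          return 0;
      }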
  7. 04 Nov, 2012 1 commit
  8. 03 Nov, 2012 15 commits
  9. 02 Nov, 2012 12 commits