1. 09 Nov, 2016 1 commit
  2. 08 Nov, 2016 3 commits
  3. 07 Nov, 2016 1 commit
    • pktcdvd: don't scribble over the bvec array · feebd568
      Christoph Hellwig authored
      Hi Peter, hi Jens,
      
      I've been looking over the multi page bio vec work again recently, and
      one of the stumbling blocks is raw biovec access in the pktcdvd.
      
      The first issue is that it directly sets up the page and offset pointers
      in the biovec just before calling bio_add_page.  As bio_add_page already
      does the setup it's trivial to just switch it to stack variables for the
      arguments.
      
      The second issue is the copy code in pkt_make_local_copy, which is
      effectively an open-coded version of bio_copy_data, except that it
      skips pages that are already the same in the source and destination.
      But in the only caller we have just set up the bio using bio_add_page
      to point exactly at the page array that pkt_make_local_copy compares,
      so the pages are always the same and the function can simply be removed.
      
      Note that all of this is based on code inspection; I don't have any
      packet writing hardware myself.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
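      
      A minimal sketch of the first change described above, assuming the
      driver fills frames from a pkt->pages[] array; CD_FRAMESIZE, pkt, p
      and offs are illustrative stand-ins here, not the verbatim pktcdvd
      code:
      
      /* Before: scribbling directly over the bvec array, then letting
       * bio_add_page() redo exactly the same setup. */
      bio->bi_io_vec[f].bv_page   = pkt->pages[p];
      bio->bi_io_vec[f].bv_offset = offs;
      bio_add_page(bio, bio->bi_io_vec[f].bv_page, CD_FRAMESIZE,
                   bio->bi_io_vec[f].bv_offset);
      
      /* After: plain stack variables; only bio_add_page() touches the
       * biovec, which is what the multi-page bvec work needs. */
      struct page *page = pkt->pages[p];
      unsigned int offset = offs;
      
      bio_add_page(bio, page, CD_FRAMESIZE, offset);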
  4. 06 Nov, 2016 1 commit
    • blk-mq: Always schedule hctx->next_cpu · c02ebfdd
      Gabriel Krisman Bertazi authored
      Commit 0e87e58b ("blk-mq: improve warning for running a queue on the
      wrong CPU") attempts to avoid triggering the WARN_ON in
      __blk_mq_run_hw_queue when the expected CPU is dead.  Problem is, in the
      last batch execution before round robin, blk_mq_hctx_next_cpu can
      schedule a dead CPU and also update next_cpu to the next alive CPU in
      the mask, which will trigger the WARN_ON despite the previous
      workaround.
      
      The following patch fixes this scenario by always scheduling the value
      in hctx->next_cpu.  This changes the moment when we round-robin the CPU
      running the hctx, but it really doesn't matter, since it still executes
      BLK_MQ_CPU_WORK_BATCH times in a row before switching to another CPU.
      
      Fixes: 0e87e58b ("blk-mq: improve warning for running a queue on the wrong CPU")
      Signed-off-by: Gabriel Krisman Bertazi <krisman@linux.vnet.ibm.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
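      
      A simplified sketch of the CPU selection after this fix, reconstructed
      from the description above rather than copied verbatim from blk-mq.c:
      
      static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
      {
              if (hctx->queue->nr_hw_queues == 1)
                      return WORK_CPU_UNBOUND;
      
              if (--hctx->next_cpu_batch <= 0) {
                      int next_cpu;
      
                      /* Advance the round-robin position first ... */
                      next_cpu = cpumask_next(hctx->next_cpu, hctx->cpumask);
                      if (next_cpu >= nr_cpu_ids)
                              next_cpu = cpumask_first(hctx->cpumask);
      
                      hctx->next_cpu = next_cpu;
                      hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH;
              }
      
              /*
               * ... and always schedule on hctx->next_cpu, instead of
               * returning the CPU that just finished its batch, which may
               * have gone offline and would trip the WARN_ON in
               * __blk_mq_run_hw_queue.
               */
              return hctx->next_cpu;
      }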
  5. 05 Nov, 2016 1 commit
    • block: add code to track actual device queue depth · d278d4a8
      Jens Axboe authored
      For blk-mq, ->nr_requests does track queue depth, at least at init
      time. But for the older queue paths, it's simply a soft setting.
      On top of that, it's generally larger than the hardware setting
      on purpose, to allow backup of requests for merging.
      
      Fill a hole in struct request_queue with a 'queue_depth' member, and
      add a helper that drivers can call to more closely inform the block
      layer of the real queue depth.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
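      
      A rough sketch of the idea, assuming a blk_set_queue_depth()-style
      setter on struct request_queue; the helper body and the driver call
      below are illustrative, not quoted from the patch:
      
      void blk_set_queue_depth(struct request_queue *q, unsigned int depth)
      {
              /* Remember the device's real queue depth, as opposed to the
               * intentionally larger ->nr_requests soft setting. */
              q->queue_depth = depth;
      }
      
      /* A SCSI-like driver that learns the hardware depth would then pass
       * it straight through (sdev is a hypothetical device handle here): */
      blk_set_queue_depth(sdev->request_queue, sdev->queue_depth);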
  6. 04 Nov, 2016 2 commits
    • blk-mq: immediately dispatch big size request · 600271d9
      Shaohua Li authored
      This is the corresponding part for blk-mq. A disk with multiple
      hardware queues doesn't need this, as we only hold at most one
      request there anyway.
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
    • block: immediately dispatch big size request · 50d24c34
      Shaohua Li authored
      Currently the block plug holds up to 16 non-mergeable requests. This
      makes sense if the request size is small, e.g. to reduce lock
      contention. But if the request size is big enough, we don't need to
      worry about lock contention; holding on to such a request makes no
      sense and it lowers disk utilization.
      
      In practice, this improves 10% throughput for my raid5 sequential write
      workload.
      
      The size threshold (128k) is arbitrary right now, but it keeps lock
      contention small. This could probably be more intelligent, e.g.
      checking the average size of the requests currently held, but since
      this is mainly for sequential IO, it's probably not worth it.
      
      V2: check the last request instead of the first request, so that as
      long as there is one big request we flush the plug.
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
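      
      A simplified sketch of the plugged submission path with this change;
      the threshold macro name is used for illustration and the surrounding
      code is condensed from the description above, not verbatim blk-core.c:
      
      #define BLK_PLUG_FLUSH_SIZE     (128 * 1024)
      
      if (plug) {
              struct request *last = NULL;
      
              if (!list_empty(&plug->list))
                      last = list_entry_rq(plug->list.prev);
      
              /* Flush early either when the plug is full (16 requests) or
               * when the most recently queued request is already big
               * enough that holding it only delays the disk. */
              if (request_count >= BLK_MAX_REQUEST_COUNT ||
                  (last && blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE)) {
                      blk_flush_plug_list(plug, false);
                      trace_block_plug(q);
              }
      
              list_add_tail(&req->queuelist, &plug->list);
              blk_account_io_start(req, true);
      }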
  7. 03 Nov, 2016 1 commit
  8. 02 Nov, 2016 16 commits
  9. 01 Nov, 2016 14 commits