1. 11 Aug, 2011 3 commits
    • block: improve rq_affinity placement · bcf30e75
      Shaohua Li authored
      This patch reverts commit 35ae66e0 (block: Make rq_affinity = 1
      work as expected). The purpose is to avoid an unnecessary IPI.
      Let's take an example. My test box has cpus 0-7 on one socket. Say a request
      is added from CPU 1 and blk_complete_request() runs on CPU 7. Without the
      reverted patch, the softirq is done on CPU 7. With it, an IPI is directed to
      CPU 0 and the softirq is done on CPU 0. In this case, doing the softirq on
      CPU 0 or CPU 7 makes no difference from a cache-sharing point of view, and we
      can avoid an IPI by doing it on CPU 7.
      An immediate concern is that this is just like QUEUE_FLAG_SAME_FORCE, but it
      actually is not. blk_complete_request() runs in the interrupt handler, and
      current I/O controllers don't support multiple interrupts (I checked several
      LSI cards and AHCI), so only one CPU can run blk_complete_request(). This is
      still quite different from QUEUE_FLAG_SAME_FORCE.
      Since only one CPU runs the softirq, the only difference from the patch below
      is that the softirq does not always run on the first CPU of a group.
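      As a rough illustration of the trade-off (not kernel code: cpu_group() and the
      rq_affinity encoding below are simplified stand-ins for the real block-layer
      helpers and queue flags), a small user-space model of the completion-CPU
      decision:

      #include <stdbool.h>
      #include <stdio.h>

      /* Simplified model: CPUs in the same "group" share a cache/socket. */
      static int cpu_group(int cpu)
      {
      	return cpu / 8;		/* assume 8 CPUs per socket, as in the example */
      }

      static bool needs_ipi(int submit_cpu, int irq_cpu, int rq_affinity)
      {
      	if (rq_affinity == 2)	/* SAME_FORCE: complete on the exact submit CPU */
      		return irq_cpu != submit_cpu;
      	if (rq_affinity == 1)	/* SAME_COMP: any CPU sharing the submit CPU's cache */
      		return cpu_group(irq_cpu) != cpu_group(submit_cpu);
      	return false;		/* rq_affinity == 0: complete wherever the IRQ landed */
      }

      int main(void)
      {
      	/* The changelog example: request added on CPU 1, IRQ lands on CPU 7. */
      	printf("rq_affinity=1 needs IPI: %d\n", needs_ipi(1, 7, 1));	/* 0 */
      	printf("rq_affinity=2 needs IPI: %d\n", needs_ipi(1, 7, 2));	/* 1 */
      	return 0;
      }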
      Signed-off-by: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • blktrace: add FLUSH/FUA support · c09c47ca
      Namhyung Kim authored
      Add FLUSH/FUA support to blktrace. As FLUSH precedes WRITE and/or
      FUA follows WRITE, use the same 'F' flag for both cases and
      distinguish them by their (relative) position. The end results
      look like this (other flags might also be shown):
      
       - WRITE:            W
       - WRITE_FLUSH:      FW
       - WRITE_FUA:        WF
       - WRITE_FLUSH_FUA:  FWF
      
      Note that we reuse TC_BARRIER due to lack of bit space in act_mask,
      so older versions of the blktrace tools will report flush requests
      as barriers from now on.
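      A minimal user-space sketch of how such an action string can be assembled
      from the flags (the RQ_* bits and fill_act() helper below are illustrative,
      not blktrace's real interface):

      #include <stdio.h>

      #define RQ_WRITE	(1U << 0)
      #define RQ_FLUSH	(1U << 1)	/* preflush: flush the cache before the write */
      #define RQ_FUA	(1U << 2)	/* forced unit access: data durable on completion */

      /* FLUSH is printed before 'W', FUA after it, both as 'F'. */
      static void fill_act(char *buf, unsigned int flags)
      {
      	int i = 0;

      	if (flags & RQ_FLUSH)
      		buf[i++] = 'F';
      	if (flags & RQ_WRITE)
      		buf[i++] = 'W';
      	if (flags & RQ_FUA)
      		buf[i++] = 'F';
      	buf[i] = '\0';
      }

      int main(void)
      {
      	char buf[8];
      	unsigned int cases[] = {
      		RQ_WRITE,			/* W   */
      		RQ_WRITE | RQ_FLUSH,		/* FW  */
      		RQ_WRITE | RQ_FUA,		/* WF  */
      		RQ_WRITE | RQ_FLUSH | RQ_FUA,	/* FWF */
      	};

      	for (unsigned int i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
      		fill_act(buf, cases[i]);
      		printf("%s\n", buf);
      	}
      	return 0;
      }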
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Signed-off-by: Namhyung Kim <namhyung@gmail.com>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • Move some REQ flags to the common bio/request area · 8e4bf844
      Matthew Wilcox authored
      REQ_SECURE, REQ_FLUSH and REQ_FUA may all be set on a bio as well as
      on a request, so relocate them to the shared part of the enum.
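      Illustratively, the layout change amounts to something like the following
      (a compilable sketch of the idea, not the exact enum from blk_types.h):

      #include <stdio.h>

      /* Bits in the shared (common) part of the flag space apply to bios and
       * requests alike; the commit moves these three into that area. */
      enum {
      	__REQ_WRITE,
      	__REQ_SYNC,
      	__REQ_SECURE,	/* moved from the request-only range */
      	__REQ_FLUSH,	/* moved from the request-only range */
      	__REQ_FUA,	/* moved from the request-only range */
      	/* ... bio-only and request-only bits follow ... */
      };
      #define REQ_FLUSH	(1U << __REQ_FLUSH)
      #define REQ_FUA		(1U << __REQ_FUA)

      int main(void)
      {
      	unsigned int rw = REQ_FLUSH | REQ_FUA;	/* same test works on a bio or a request */

      	printf("flush=%d fua=%d\n", !!(rw & REQ_FLUSH), !!(rw & REQ_FUA));
      	return 0;
      }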
      Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
      Signed-off-by: Namhyung Kim <namhyung@gmail.com>
      Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  2. 09 Aug, 2011 3 commits
    • Merge branch 'stable/for-jens' of... · 40bb96ad
      Jens Axboe authored
      Merge branch 'stable/for-jens' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen into for-linus
    • allow blk_flush_policy to return REQ_FSEQ_DATA independent of *FLUSH · fa1bf42f
      Jeff Moyer authored
      blk_insert_flush has the following check:
      
      	/*
      	 * If there's data but flush is not necessary, the request can be
      	 * processed directly without going through flush machinery.  Queue
      	 * for normal execution.
      	 */
      	if ((policy & REQ_FSEQ_DATA) &&
      	    !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
      		list_add_tail(&rq->queuelist, &q->queue_head);
      		return;
      	}
      
      However, blk_flush_policy will not return with policy set to only
      REQ_FSEQ_DATA:
      
      static unsigned int blk_flush_policy(unsigned int fflags, struct request *rq)
      {
      	unsigned int policy = 0;
      
      	if (fflags & REQ_FLUSH) {
      		if (rq->cmd_flags & REQ_FLUSH)
      			policy |= REQ_FSEQ_PREFLUSH;
      		if (blk_rq_sectors(rq))
      			policy |= REQ_FSEQ_DATA;
      		if (!(fflags & REQ_FUA) && (rq->cmd_flags & REQ_FUA))
      			policy |= REQ_FSEQ_POSTFLUSH;
      	}
      	return policy;
      }
      
      Notice that REQ_FSEQ_DATA is only set if REQ_FLUSH is set.  Fix this
      mismatch by moving the setting of REQ_FSEQ_DATA outside of the REQ_FLUSH
      check.
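      With that change, blk_flush_policy ends up looking roughly like this
      (a sketch of the described fix, not the verbatim patch):

      static unsigned int blk_flush_policy(unsigned int fflags, struct request *rq)
      {
      	unsigned int policy = 0;

      	/* Set FSEQ_DATA whenever the request carries data ... */
      	if (blk_rq_sectors(rq))
      		policy |= REQ_FSEQ_DATA;

      	/* ... and keep the PRE/POSTFLUSH decisions under the REQ_FLUSH check. */
      	if (fflags & REQ_FLUSH) {
      		if (rq->cmd_flags & REQ_FLUSH)
      			policy |= REQ_FSEQ_PREFLUSH;
      		if (!(fflags & REQ_FUA) && (rq->cmd_flags & REQ_FUA))
      			policy |= REQ_FSEQ_POSTFLUSH;
      	}
      	return policy;
      }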
      
      Tejun notes:
      
        Hmmm... yes, this can become a correctness issue if (and only if)
        blk_queue_flush() is called to change q->flush_flags while requests
        are in-flight; otherwise, requests wouldn't reach the function at all.
        Also, I think it would be a generally good idea to always set
        FSEQ_DATA if the request has data.
      
      Cheers,
      Jeff
      Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • xen/blkback: Make description more obvious. · ea5e1161
      Konrad Rzeszutek Wilk authored
      With the frontend entry mentioning Xen but the backend entry not, it just looks odd:
      
        <*>   Xen virtual block device support
        <*>   Block-device backend driver
      
      Fix it to have the 'Xen' in front of it.
      Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  3. 05 Aug, 2011 2 commits
    • cfq-iosched: Add documentation about idling · 4931402a
      Vivek Goyal authored
      There are always questions about why CFQ is idling under various conditions.
      A recent one is Christoph asking again why we idle on REQ_NOIDLE. His
      assertion is that XFS is relying more and more on workqueues, and he is
      concerned that CFQ idling on IO from every workqueue will impact
      XFS badly.

      So he suggested that I add some more documentation about CFQ idling,
      which can provide more clarity on the topic and also gives an
      opportunity to poke holes in the theory and lead to improvements.

      So here is my attempt at that. Any comments are welcome.
      Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
    • block: Make rq_affinity = 1 work as expected · 35ae66e0
      Tao Ma authored
      Commit 5757a6d7 introduced a new rq_affinity = 2 so as to complete
      the request on the __make_request cpu. But it makes the
      old rq_affinity = 1 no longer work. The root cause is that
      if 'cpu' and 'req->cpu' are in the same group and cpu != req->cpu,
      ccpu will be the same as group_cpu, so the completion will be
      executed on 'cpu', not 'group_cpu'.

      This patch fixes the problem by simply removing group_cpu, and the code
      is more explicit now. If ccpu == cpu, we complete on cpu; otherwise
      we raise_blk_irq to ccpu.
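      In rough outline, the simplified completion path described above does
      (a sketch, not the exact diff against __blk_complete_request):

      	if (test_bit(QUEUE_FLAG_SAME_COMP, &q->queue_flags) && req->cpu != -1)
      		ccpu = req->cpu;	/* rq_affinity set: target the submitting CPU */
      	else
      		ccpu = cpu;		/* otherwise complete locally */

      	if (ccpu == cpu) {
      		/* run the block softirq on the local CPU */
      	} else {
      		/* raise_blk_irq(): IPI so completion runs on ccpu */
      	}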
      
      Cc: Christoph Hellwig <hch@infradead.org>
      Cc: Roland Dreier <roland@purestorage.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Jens Axboe <jaxboe@fusionio.com>
      Signed-off-by: Tao Ma <boyu.mt@taobao.com>
      Reviewed-by: Shaohua Li <shaohua.li@intel.com>
      Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
  4. 03 Aug, 2011 1 commit
  5. 02 Aug, 2011 5 commits
  6. 01 Aug, 2011 1 commit
  7. 31 Jul, 2011 25 commits