- 25 Jul, 2014 27 commits
-
-
Peter Senna Tschudin authored
This patch removes variables that are initialized with a constant, are never updated, and are only used as the parameter of a return statement. Return the constant instead of using a variable. Verified by compilation only. The Coccinelle script that finds and fixes this issue is: // <smpl> @@ type T; constant C; identifier ret; @@ - T ret = C; ... when != ret when strict return - ret + C ; // </smpl> Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com> Acked-by: Sudarsana Kalluru <Sudarsana.Kalluru@qlogic.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
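As a before/after illustration of the transformation (using a hypothetical function, not code taken from the patch):

        /* before: 'status' is initialized to a constant and never updated */
        static int example_get_link_state(void)
        {
                int status = 0;

                return status;
        }

        /* after: return the constant directly */
        static int example_get_link_state(void)
        {
                return 0;
        }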
-
Ben Hutchings authored
bfa_swap_words() shifts its argument (assumed to be 64-bit) by 32 bits each way. In two places the argument type is dma_addr_t, which may be 32-bit, in which case the effect of the bit shift is undefined: drivers/scsi/bfa/bfa_fcpim.c: In function 'bfa_ioim_send_ioreq': drivers/scsi/bfa/bfa_fcpim.c:2497:4: warning: left shift count >= width of type [enabled by default] addr = bfa_sgaddr_le(sg_dma_address(sg)); ^ drivers/scsi/bfa/bfa_fcpim.c:2497:4: warning: right shift count >= width of type [enabled by default] drivers/scsi/bfa/bfa_fcpim.c:2509:4: warning: left shift count >= width of type [enabled by default] addr = bfa_sgaddr_le(sg_dma_address(sg)); ^ drivers/scsi/bfa/bfa_fcpim.c:2509:4: warning: right shift count >= width of type [enabled by default] Avoid this by adding casts to u64 in bfa_swap_words(). Compile-tested only. Signed-off-by: Ben Hutchings <ben@decadent.org.uk> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Acked-by: Anil Gurumurthy <anil.gurumurthy@qlogic.com> Cc: stable@vger.kernel.org Fixes: f16a1750 ('[SCSI] bfa: remove all OS wrappers') Signed-off-by: Christoph Hellwig <hch@lst.de>
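The idea behind the fix, sketched as a hedged approximation of the driver macro (not the verbatim change):

        /* before: shifting a 32-bit dma_addr_t by 32 is undefined */
        #define bfa_swap_words(_x)  (((_x) << 32) | ((_x) >> 32))

        /* after: force the shifts to operate on a 64-bit value */
        #define bfa_swap_words(_x)  (((u64)(_x) << 32) | ((u64)(_x) >> 32))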
-
Joe Perches authored
Use the zeroing allocation function instead of dma_alloc_coherent followed by memset(,0,). Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Anil Gurumurthy <anil.gurumurthy@qlogic.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
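Assuming the zeroing helper in question is dma_zalloc_coherent(), the pattern being replaced looks roughly like this (a sketch, not the exact hunk):

        /* before: allocate coherent DMA memory, then clear it by hand */
        buf = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
        if (buf)
                memset(buf, 0, size);

        /* after: one call that returns zeroed memory */
        buf = dma_zalloc_coherent(dev, size, &dma_handle, GFP_KERNEL);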
-
Himangi Saraogi authored
Use kstrdup when the goal of an allocation is to copy a string into the allocated region. The Coccinelle semantic patch that makes this change is as follows: // <smpl> @@ expression from,to; expression flag,E1,E2; statement S; @@ - to = kmalloc(strlen(from) + 1,flag); + to = kstrdup(from, flag); ... when != \(from = E1 \| to = E1 \) if (to==NULL || ...) S ... when != \(from = E2 \| to = E2 \) - strcpy(to, from); // </smpl> Signed-off-by: Himangi Saraogi <himangi774@gmail.com> Acked-by: Julia Lawall <julia.lawall@lip6.fr> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Dick Kennedy authored
These speeds are to support the next generation of FCoE port speeds. Signed-off-by: Dick Kennedy <Dick.Kennedy@Emulex.Com> Reviewed-by: Ewan D. Milne <emilne@redhat.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Mike Christie authored
The custom stats field is an array, with custom_length indicating the length of the array. This patch fixes bnx2i and be2iscsi's setting of the custom stats length. They both just have the one, eh_abort_cnt, so that should be in the first entry of the custom array and custom_length should then be one. Reported-by: Rickard Strandqvist <rickard_strandqvist@spectrumdigital.se> Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Acked-by: Vikas Chaudhary <vikas.chaudhary@qlogic.com> Acked-by: Eddie Wai <eddie.wai@broadcom.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
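A hedged sketch of the corrected pattern (field names follow struct iscsi_stats as commonly defined, and "conn" stands in for the driver's connection structure; treat this as an approximation of the actual hunks):

        /* the one custom statistic goes in slot 0, and custom_length says there is exactly one */
        stats->custom_length = 1;
        strcpy(stats->custom[0].desc, "eh_abort_cnt");
        stats->custom[0].value = conn->eh_abort_cnt;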
-
Nick Black authored
Remove two redundant casts from char * to char *. Signed-off-by: Nick Black <nlb@google.com> Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Paul Bolle authored
The Kconfig symbols SCSI_TGT and SCSI_FC_TGT_ATTRS are unused since "tgt: removal". Setting them has no effect. Remove these symbols. Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
K. Y. Srinivasan authored
Commit ID: 7e660100 added code to derive the FLUSH_TIMEOUT from the basic I/O timeout. However, this patch did not use the basic I/O timeout of the device. Fix this bug. Signed-off-by: K. Y. Srinivasan <kys@microsoft.com> Reviewed-by: James Bottomley <JBottomley@Parallels.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
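A hedged sketch of the intent (SD_FLUSH_TIMEOUT_MULTIPLIER is my recollection of the existing sd.h constant, shown as an assumption rather than the literal diff):

        /* scale the device's own request-queue timeout instead of a global default */
        rq->timeout = rq->q->rq_timeout * SD_FLUSH_TIMEOUT_MULTIPLIER;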
-
Martin K. Petersen authored
Despite supporting modern SCSI features some storage devices continue to claim conformance to an older version of the SPC spec. This is done for compatibility with legacy operating systems. Linux by default will not attempt to read VPD pages on devices that claim SPC-2 or older. Introduce a blacklist flag that can be used to trigger VPD page inquiries on devices that are known to support them. Reported-by: KY Srinivasan <kys@microsoft.com> Tested-by: KY Srinivasan <kys@microsoft.com> Reviewed-by: KY Srinivasan <kys@microsoft.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> CC: <stable@vger.kernel.org> Signed-off-by: Christoph Hellwig <hch@lst.de>
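Conceptually the flag gates the existing SPC-level check; a hedged sketch with assumed flag and field names (BLIST_TRY_VPD_PAGES setting sdev->try_vpd_pages), not quoted from the patch:

        /* devices claiming SPC-2 or older only get VPD INQUIRYs via the blacklist flag */
        static bool example_supports_vpd(struct scsi_device *sdev)
        {
                return sdev->scsi_level >= SCSI_SPC_3 || sdev->try_vpd_pages;
        }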
-
Christoph Hellwig authored
We currently set the field in common code based on the device type, but then only use it in the cdrom driver which also overrides the value previously set in the generic code. Just leave this entirely to the CDROM driver to make everyone's life simpler. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Christoph Hellwig authored
Make sure we have a symbolic name for the ZBC type available, so that e.g. a patch for SATA translation of ZAC commands can make use of it. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Christoph Hellwig authored
Add two new device types, most importantly the zoned block device one. Split from an earlier patch by Hannes Reinecke. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Christoph Hellwig authored
Currently the midlayer fakes up a struct request for the explicit reset ioctls, and those don't have a tag allocated to them. The fnic driver pokes into midlayer structures to paper over this design issue, but that won't work for the blk-mq case. Either someone who can actually test the hardware will have to come up with a similar hack for the blk-mq case, or we'll have to bite the bullet and fix the way the EH ioctls work for real, but until that happens we fail these explicit requests here. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Webb Scales <webbnh@hp.com> Acked-by: Jens Axboe <axboe@kernel.dk> Tested-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Robert Elliott <elliott@hp.com> Cc: Hiral Patel <hiralpat@cisco.com> Cc: Suma Ramars <sramars@cisco.com> Cc: Brian Uchino <buchino@cisco.com>
-
Christoph Hellwig authored
This patch adds support for an alternate I/O path in the scsi midlayer which uses the blk-mq infrastructure instead of the legacy request code. Use of blk-mq is fully transparent to drivers, although for now a host template field is provided to opt out of blk-mq usage in case any unforeseen incompatibilities arise. In general replacing the legacy request code with blk-mq is a simple and mostly mechanical transformation. The biggest exception is the new code that deals with the fact that I/O submissions in blk-mq must happen from process context, which slightly complicates the I/O completion handler. The second biggest difference is that blk-mq is built around the concept of preallocated requests that also include driver specific data, which in the SCSI context means the scsi_cmnd structure. This completely avoids dynamic memory allocations for the fast path through I/O submission. Due to the preallocated requests the MQ code path exclusively uses the host-wide shared tag allocator instead of a per-LUN one. This only affects drivers actually using the block layer provided tag allocator instead of their own. Unlike the old path, blk-mq always provides a tag, although drivers don't have to use it. For now the blk-mq path is disabled by default and must be enabled using the "use_blk_mq" module parameter. Once the remaining work in the block layer to make blk-mq more suitable for slow devices is complete I hope to make it the default and eventually even remove the old code path. Based on the earlier scsi-mq prototype by Nicholas Bellinger. Thanks to Bart Van Assche and Robert Elliott for testing, benchmarking and various suggestions and code contributions. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Webb Scales <webbnh@hp.com> Acked-by: Jens Axboe <axboe@kernel.dk> Tested-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Robert Elliott <elliott@hp.com>
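A hedged sketch of how such an opt-in switch is typically wired up as a module parameter (variable name assumed, not quoted from the patch), so the new path can be selected with e.g. scsi_mod.use_blk_mq=Y on the kernel command line:

        /* default off; writable so it can be flipped before new hosts are registered */
        static bool scsi_use_blk_mq = false;
        module_param_named(use_blk_mq, scsi_use_blk_mq, bool, S_IWUSR | S_IRUGO);
        MODULE_PARM_DESC(use_blk_mq, "Use the blk-mq I/O path instead of the legacy request code");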
-
Christoph Hellwig authored
Blk-mq drivers usually preallocate their S/G list as part of the request, but if we want to support the very large S/G lists currently supported by the SCSI code that would tie up a lot of memory in the preallocated request pool. Add support to the scatterlist code so that it can initialize a S/G list that uses a preallocated first chunk and dynamically allocated additional chunks. That way the scsi-mq code can preallocate a first page worth of S/G entries as part of the request, and dynamically extend the S/G list when needed. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Webb Scales <webbnh@hp.com> Acked-by: Jens Axboe <axboe@kernel.dk> Tested-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Robert Elliott <elliott@hp.com>
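A simplified, hedged sketch of the usage pattern (SG_CHUNK and the example_* names are invented for illustration; the real interface lives in lib/scatterlist.c):

        #include <linux/scatterlist.h>

        #define SG_CHUNK 128    /* entries preallocated alongside each command */

        struct example_cmd_priv {
                struct scatterlist first_chunk[SG_CHUNK];
        };

        static int example_init_sgtable(struct example_cmd_priv *priv,
                                        struct sg_table *table, unsigned int nents)
        {
                if (nents <= SG_CHUNK) {
                        /* small transfer: use the preallocated chunk, no allocation needed */
                        sg_init_table(priv->first_chunk, nents);
                        table->sgl = priv->first_chunk;
                        table->nents = table->orig_nents = nents;
                        return 0;
                }
                /* large transfer: fall back to a dynamically allocated (chained) table */
                return sg_alloc_table(table, nents, GFP_ATOMIC);
        }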
-
Christoph Hellwig authored
Replace the calls to the various blk_end_request variants with open-coded equivalents. Blk-mq is using a model that gives the driver control between the bio updates and the actual completion, and making the old code follow that same model allows us to keep the code more similar for both paths. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Webb Scales <webbnh@hp.com> Acked-by: Jens Axboe <axboe@kernel.dk> Tested-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Robert Elliott <elliott@hp.com>
-
Christoph Hellwig authored
This saves us an atomic operation for each I/O submission and completion for the usual case where the driver doesn't set a per-target can_queue value. Only a few iscsi hardware offload drivers set the per-target can_queue value at the moment. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Webb Scales <webbnh@hp.com> Acked-by: Jens Axboe <axboe@kernel.dk> Tested-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Robert Elliott <elliott@hp.com>
-
Christoph Hellwig authored
Seems like these counters are missing any sort of synchronization for updates, as an over 10-year-old comment from me noted. Fix this by using atomic counters, and while we're at it also make sure they are in the same cacheline as the _busy counters and not needlessly stored to in every I/O completion. With the new model the _busy counters can temporarily go negative, so all the readers are updated to check for > 0 values. Longer term every successful I/O completion will reset the counters to zero, so the temporarily negative values will not cause any harm. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Webb Scales <webbnh@hp.com> Acked-by: Jens Axboe <axboe@kernel.dk> Tested-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Robert Elliott <elliott@hp.com>
-
Christoph Hellwig authored
Avoid taking the queue_lock to check the per-device queue limit. Instead we do an atomic_inc_return early on to grab our slot in the queue, and if necessary decrement it after finishing all checks. Unlike the host and target busy counters this doesn't allow us to avoid the queue_lock in the request_fn due to the way the interface works, but it'll allow us to prepare for using the blk-mq code, which doesn't use the queue_lock at all, and it at least avoids a queue_lock round trip in scsi_device_unbusy, which is still important given how busy the queue_lock is. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Webb Scales <webbnh@hp.com> Acked-by: Jens Axboe <axboe@kernel.dk> Tested-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Robert Elliott <elliott@hp.com>
-
Christoph Hellwig authored
Avoid taking the host-wide host_lock to check the per-host queue limit. Instead we do an atomic_inc_return early on to grab our slot in the queue, and if necessary decrement it after finishing all checks. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Webb Scales <webbnh@hp.com> Acked-by: Jens Axboe <axboe@kernel.dk> Tested-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Robert Elliott <elliott@hp.com>
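The pattern, sketched as an illustrative helper (simplified from the actual scsi_host_queue_ready() logic, with host_busy assumed to be an atomic_t per this series):

        /* claim a slot optimistically, then back out if the host limit was exceeded */
        static int example_host_queue_ready(struct Scsi_Host *shost)
        {
                unsigned int busy = atomic_inc_return(&shost->host_busy) - 1;

                if (shost->can_queue > 0 && busy >= shost->can_queue) {
                        atomic_dec(&shost->host_busy);
                        return 0;       /* host is full, do not dispatch */
                }
                return 1;               /* our slot stays accounted for */
        }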
-
Christoph Hellwig authored
Avoid taking the host-wide host_lock to check the per-target queue limit. Instead we do an atomic_inc_return early on to grab our slot in the queue, and if necessary decrement it after finishing all checks. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Webb Scales <webbnh@hp.com> Acked-by: Jens Axboe <axboe@kernel.dk> Tested-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Robert Elliott <elliott@hp.com>
-
Christoph Hellwig authored
Prepare for not taking a host-wide lock in the dispatch path by pushing the lock down into the places that actually need it. Note that this patch is just a preparation step, as it will actually increase lock roundtrips and thus decrease performance on its own. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Webb Scales <webbnh@hp.com> Acked-by: Jens Axboe <axboe@kernel.dk> Tested-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Robert Elliott <elliott@hp.com>
-
Christoph Hellwig authored
The blk-mq code path will set this to a different function, so make the code simpler by setting it up in a legacy-request specific place. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Webb Scales <webbnh@hp.com> Acked-by: Jens Axboe <axboe@kernel.dk> Tested-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Robert Elliott <elliott@hp.com>
-
Christoph Hellwig authored
Make sure we only have the logic for requeing commands in one place. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Webb Scales <webbnh@hp.com> Acked-by: Jens Axboe <axboe@kernel.dk> Tested-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Robert Elliott <elliott@hp.com>
-
Christoph Hellwig authored
Factor out a helper to set the _blocked values, which we'll reuse for the blk-mq code path. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Webb Scales <webbnh@hp.com> Acked-by: Jens Axboe <axboe@kernel.dk> Tested-by: Bart Van Assche <bvanassche@acm.org> Tested-by: Robert Elliott <elliott@hp.com>
-
Christoph Hellwig authored
Factor out command setup code that will be shared with the blk-mq code path. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Webb Scales <webbnh@hp.com>
-
- 17 Jul, 2014 13 commits
-
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de>
-
Christoph Hellwig authored
Factor out a function to initialize regular read/write commands and leave sd_init_command as a simple dispatcher to the different prepare routines. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de>
-
Christoph Hellwig authored
Currently cmd->allowed is initialized from rq->retries for discard commands, but retries is always 0 for non-BLOCK_PC requests. Set it to the standard number of retries instead. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de>
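In sd terms this presumably amounts to something like the following (SD_MAX_RETRIES is the sd.h constant as I recall it, shown as an assumption rather than a quote of the diff):

        /* use the driver's standard retry count instead of rq->retries, which is 0 here */
        cmd->allowed = SD_MAX_RETRIES;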
-
Christoph Hellwig authored
Currently cmd->allowed is initialized from rq->retries for write same commands, but retries is always 0 for non-BLOCK_PC requests. Set it to the standard number of retries instead. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de>
-
Christoph Hellwig authored
Simplify handling of discard requests by setting up the command directly instead of initializing request fields and then calling scsi_setup_blk_pc_cmnd to propagate the information into the command. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de>
-
Christoph Hellwig authored
Simplify handling of write same requests by setting up the command directly instead of initializing request fields and then calling scsi_setup_blk_pc_cmnd to propagate the information into the command. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Webb Scales <webbnh@hp.com>
-
Christoph Hellwig authored
Simplify handling of flush requests by setting up the command directly instead of initializing request fields and then calling scsi_setup_blk_pc_cmnd to propagate the information into the command. Also rename scsi_setup_flush_cmnd to sd_setup_flush_cmnd for consistency. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de>
-
Christoph Hellwig authored
The data direction field in the SCSI command is derived only from the block request structure. Move setting it up into common code instead of duplicating it in the ULDs. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de>
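The mapping done in common code is roughly the following well-known derivation (a sketch, not a quote of the patch):

        /* derive the DMA direction from the block request */
        if (!blk_rq_bytes(req))
                cmd->sc_data_direction = DMA_NONE;
        else if (rq_data_dir(req) == WRITE)
                cmd->sc_data_direction = DMA_TO_DEVICE;
        else
                cmd->sc_data_direction = DMA_FROM_DEVICE;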
-
Christoph Hellwig authored
We should call the device handler prep_fn for all TYPE_FS requests, not just simple read/write calls that are handled by the disk driver. Restructure the common I/O code to call the prep_fn handler and zero out the CDB, and just leave the call to scsi_init_io to the ULDs. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de>
-
Christoph Hellwig authored
scsi_init_io should only be called for requests that transfer data, so move the assert that a request has segments from the callers into scsi_init_io. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de>
-
Maurizio Lombardi authored
During IO with fabric faults, one generally sees several "Unhandled error code" messages in the syslog as shown below: sd 4:0:6:2: [sdbw] Unhandled error code sd 4:0:6:2: [sdbw] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK sd 4:0:6:2: [sdbw] CDB: Read(10): 28 00 00 00 00 00 00 00 08 00 end_request: I/O error, dev sdbw, sector 0 This comes from scsi_io_completion (in scsi_lib.c) while handling error codes other than DID_RESET or non-deferred sense keys, i.e. cases that are actually handled by the SCSI mid layer. But what gets displayed here is "Unhandled error code", which is quite misleading as it indicates something that is not addressed by the mid layer. The description string is based on the sense key and sometimes on the additional sense code; since the ACTION_FAIL case always prints the sense key and the additional sense code, this patch removes the description string completely because it does not add useful information. Signed-off-by: Maurizio Lombardi <mlombard@redhat.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Douglas Gilbert authored
While checking what scsi_adjust_queue_depth() did I thought its switch statement could be clearer: - remove redundant assignment (to sdev->queue_depth) - re-order cases (thus removing the fall-through) Signed-off-by: Douglas Gilbert <dgilbert@interlog.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Robert Elliott <elliott@hp.com> Tested-by: Robert Elliott <elliott@hp.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Hannes Reinecke <hare@suse.de>
-