- 11 Sep, 2013 1 commit
-
-
Nicholas Bellinger authored
This patch removes the iscsi_thread_set->[rx,tx]_post_start_comp completions that were originally used to synchronize startup between the rx and tx threads within a single thread_set. Instead, use a single ->ts_activate_sem in iscsi_activate_thread_set() to wait for both threads to wake up in the RX/TX pre handlers. Also, refactor the thread_set deallocation code into a common iscsi_deallocate_thread_one(), and update iscsi_deallocate_thread_sets() and iscsi_deallocate_extra_thread_sets() to use this code.

v3 changes:
  - Make iscsi_deallocate_thread_one() static (Fengguang)

v2 changes:
  - Set ISCSI_THREAD_SET_ACTIVE before calling complete() in iscsi_activate_thread_set()
  - Protect ts->conn sanity checks with ->ts_state_lock in the RX/TX pre handlers
  - Add ->ts_activate_sem to save extra context switches per iscsi_activate_thread_set() call
  - Refactor thread_set shutdown into iscsi_deallocate_thread_one()

Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
- 10 Sep, 2013 18 commits
-
-
Nicholas Bellinger authored
This patch addresses a long-standing race in the iscsi_[rx,tx]_thread_pre_handler() use of flush_signals(), and the race with iscsi_deallocate_extra_thread_sets() setting ISCSI_THREAD_SET_DIE before calling kthread_stop(). It addresses the issue by holding ->ts_state_lock before calling send_sig() in iscsi_deallocate_extra_thread_sets(), and by only calling flush_signals() when ts->status != ISCSI_THREAD_SET_DIE within the iscsi_[rx,tx]_thread_pre_handler() code.

v2 changes:
  - Add an explicit complete(&ts->[rx,tx]_start_comp) before kthread_stop() in iscsi_deallocate_extra_thread_sets()
  - Drop left-over send_sig() calls in iscsi_deallocate_extra_thread_sets()
  - Add a kthread_should_stop() check in iscsi_signal_thread_pre_handler()

Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Wei Yongjun authored
Remove inclusion of <linux/version.h> in files that don't need it. Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Vu Pham authored
This model was introduced in 00f7ec36 "RDMA/core: Add memory management extensions support" and works when the IB device supports the IB_DEVICE_MEM_MGT_EXTENSIONS capability. Upon creating the isert device, ib_isert tests whether the HCA supports FRWR; if so, it sets the flag and assigns function pointers that handle fast registration and deregistration of the appropriate resources (fast_reg descriptors). When a new connection comes in, ib_isert checks the FRWR flag and creates the FRWR resources; if that fails, it switches back to the old model of using the global DMA key and turns off FRWR support. Registration is done by posting IB_WR_FAST_REG_MR to the QP, and invalidation by posting IB_WR_LOCAL_INV. Signed-off-by: Vu Pham <vu@mellanox.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
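A minimal sketch of the detect-and-fall-back flow described above, assuming illustrative struct and handler names (isert_caps, reg_rdma_mem, unreg_rdma_mem are not the in-tree fields); only ib_query_device() and IB_DEVICE_MEM_MGT_EXTENSIONS are taken from the RDMA core API of that era.

#include <rdma/ib_verbs.h>

struct isert_caps {					/* illustrative container */
	bool use_frwr;
	int (*reg_rdma_mem)(void *isert_cmd);		/* posts IB_WR_FAST_REG_MR */
	void (*unreg_rdma_mem)(void *isert_cmd);	/* posts IB_WR_LOCAL_INV */
};

static int isert_detect_frwr(struct ib_device *ib_dev, struct isert_caps *caps,
			     int (*frwr_reg)(void *), void (*frwr_unreg)(void *),
			     int (*dma_key_reg)(void *), void (*dma_key_unreg)(void *))
{
	struct ib_device_attr attr;
	int ret;

	ret = ib_query_device(ib_dev, &attr);
	if (ret)
		return ret;

	caps->use_frwr = !!(attr.device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS);
	if (caps->use_frwr) {
		caps->reg_rdma_mem = frwr_reg;		/* fast registration path */
		caps->unreg_rdma_mem = frwr_unreg;
	} else {
		caps->reg_rdma_mem = dma_key_reg;	/* global DMA key fallback */
		caps->unreg_rdma_mem = dma_key_unreg;
	}
	return 0;
}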
-
Vu Pham authored
The current driver uses the global DMA key to register the memory pointed to by the sg list provided by the target core. As a preparation step for adding more methods such as fast-path memory registration, make the reg/unreg calls function pointers. Signed-off-by: Vu Pham <vu@mellanox.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Vu Pham authored
isert_put_datain() and isert_get_dataout() share a lot of code in RDMA WR processing, so move this common code into a shared function. Use isert_unmap_cmd() to clean up on RDMA_READ completion, remove a duplicate field in the isert_cmd and isert_rdma_wr structs, and change misc debug messages to track isert_cmd. Signed-off-by: Sagi Grimberg <sagig@mellanox.com> Signed-off-by: Vu Pham <vu@mellanox.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Nicholas Bellinger authored
Add calls to target_xcopy_setup_pt() + target_xcopy_release_pt() to target_core_init_configfs() and target_core_exit_configfs() respectively. Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: Roland Dreier <roland@purestorage.com> Cc: Zach Brown <zab@redhat.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Nicholas Bellinger authored
This patch adds the Third Party Copy (3PC) bit to signal support for EXTENDED_COPY within standard inquiry response data. Also add emulate_3pc device attribute in configfs (enabled by default) to allow the exposure of this bit to be disabled, if necessary. Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: Roland Dreier <roland@purestorage.com> Cc: Zach Brown <zab@redhat.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
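A hedged sketch of how the 3PC (third-party copy) bit is reported in standard INQUIRY data: per SPC it lives in bit 3 of byte 5 of the standard INQUIRY response, and the emulate_3pc toggle mirrors the configfs attribute described above. The helper name is illustrative, not the in-tree function.

/* Report 3PC in standard INQUIRY data when EXTENDED_COPY emulation is enabled. */
static void inquiry_set_3pc(unsigned char *buf, int emulate_3pc)
{
	if (emulate_3pc)
		buf[5] |= 0x08;		/* byte 5, bit 3: 3PC (EXTENDED_COPY supported) */
}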
-
Nicholas Bellinger authored
Set up the se_cmd->execute_cmd() pointers for EXTENDED_COPY and RECEIVE_COPY_RESULTS handling within spc_parse_cdb(). Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: Roland Dreier <roland@purestorage.com> Cc: Zach Brown <zab@redhat.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
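A schematic sketch of the dispatch added to spc_parse_cdb(): the opcode values come from SPC (EXTENDED_COPY = 0x83, RECEIVE_COPY_RESULTS = 0x84), while target_do_xcopy()/target_do_receive_copy_results() are the handlers referenced later in this series. The stand-in typedef and helper here are illustrative, not the in-tree code.

#include <stddef.h>

#define EXTENDED_COPY		0x83	/* SPC opcode */
#define RECEIVE_COPY_RESULTS	0x84	/* SPC opcode */

typedef int (*execute_cmd_t)(void *se_cmd);	/* stand-in for the se_cmd->execute_cmd hook */

static execute_cmd_t spc_xcopy_dispatch(unsigned char opcode,
					execute_cmd_t do_xcopy,
					execute_cmd_t do_receive_copy_results)
{
	switch (opcode) {
	case EXTENDED_COPY:
		return do_xcopy;			/* target_do_xcopy() */
	case RECEIVE_COPY_RESULTS:
		return do_receive_copy_results;		/* target_do_receive_copy_results() */
	default:
		return NULL;				/* handled elsewhere in spc_parse_cdb() */
	}
}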
-
Nicholas Bellinger authored
This patch adds support for EXTENDED_COPY emulation from SPC-3, which enables full copy-offload target support both within a single virtual backend device and across multiple virtual backend devices. It also functions independently of target fabric, and supports copy offload across multiple target fabric ports. This implementation supports both the EXTENDED_COPY PUSH and PULL models of operation, so the actual CDB may be received on either the source or destination logical unit. For Target Descriptors, it currently supports the NAA IEEE Registered Extended designator (type 0xe4), which allows target ports to be referenced independently of fabric type using EVPD 0x83 WWNs. For Segment Descriptors, it currently supports copy from block to block (0x02) mode. It also honors any present SCSI reservations of the destination target port. Note that only Supports No List Identifier (SNLID=1) mode is supported. Also included is basic RECEIVE_COPY_RESULTS with service action type OPERATING PARAMETERS (0x03), required for SNLID=1 operation.

v3 changes:
  - Fix incorrect return type in target_do_receive_copy_results() (Fengguang)

v2 changes:
  - Use target_alloc_sgl() instead of transport_generic_get_mem()
  - Convert debug output to use pr_debug()
  - Convert the target_xcopy_parse_target_descriptors() NAA IEEE WWN dump to use the 0x%16phN format specification
  - Drop unnecessary xcopy_pt_cmd->xpt_passthrough_wsem, and its usage in xcopy_pt_write_pending() and target_xcopy_issue_pt_cmd()
  - Add a check for unsupported EXTENDED_COPY(LID4) service action bits in target_do_xcopy()

Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: Roland Dreier <roland@purestorage.com> Cc: Zach Brown <zab@redhat.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
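A small sketch of the descriptor support described above: only the NAA IEEE Registered Extended target descriptor (type 0xe4) and the block-to-block segment descriptor (type 0x02) are accepted. The macro and function names are illustrative; the type codes are the ones named in the commit text.

#include <stdbool.h>

#define XCOPY_TARGET_DESC_TYPE_ID	0xe4	/* NAA IEEE Registered Extended identification descriptor */
#define XCOPY_SEG_DESC_B2B		0x02	/* block device -> block device segment */

/* Return true when the descriptor type code is one this emulation handles. */
static bool xcopy_desc_type_supported(unsigned char type, bool target_desc)
{
	return target_desc ? (type == XCOPY_TARGET_DESC_TYPE_ID)
			   : (type == XCOPY_SEG_DESC_B2B);
}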
-
Nicholas Bellinger authored
This patch adds a check for a non-existent port->sep_alua_tg_pt_gp_mem within target_alua_state_check(), which is not present for internally dispatched EXTENDED_COPY WRITE I/O to the destination target port. Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: Roland Dreier <roland@purestorage.com> Cc: Zach Brown <zab@redhat.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Nicholas Bellinger authored
EXTENDED_COPY needs to be able to search a global list of devices based on NAA WWN device identifiers, so add a simple g_device_list protected by g_device_mutex. Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: Roland Dreier <roland@purestorage.com> Cc: Zach Brown <zab@redhat.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
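A hedged sketch of the lookup this enables: walk g_device_list under g_device_mutex and compare NAA WWN bytes against the identifier received in the EXTENDED_COPY target descriptor. The struct layout, field names and the WWN length constant are illustrative stand-ins, not the in-tree definitions.

#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/string.h>

#define XCOPY_NAA_WWN_LEN	16	/* illustrative: number of compared NAA bytes */

struct xcopy_dev {			/* illustrative stand-in for struct se_device */
	unsigned char dev_wwn[XCOPY_NAA_WWN_LEN];
	struct list_head g_dev_node;
};

static LIST_HEAD(g_device_list);
static DEFINE_MUTEX(g_device_mutex);

/* Walk the global list under g_device_mutex and match on the NAA WWN bytes. */
static struct xcopy_dev *xcopy_locate_dev_by_wwn(const unsigned char *wwn)
{
	struct xcopy_dev *dev, *found = NULL;

	mutex_lock(&g_device_mutex);
	list_for_each_entry(dev, &g_device_list, g_dev_node) {
		if (!memcmp(dev->dev_wwn, wwn, XCOPY_NAA_WWN_LEN)) {
			found = dev;
			break;
		}
	}
	mutex_unlock(&g_device_mutex);

	return found;
}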
-
Nicholas Bellinger authored
Both target_alloc_sgl() and transport_generic_map_mem_to_cmd() are required by EXTENDED_COPY logic when setting up internally dispatched command descriptors, so go ahead and make both of these non static. Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: Roland Dreier <roland@purestorage.com> Cc: Zach Brown <zab@redhat.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Nicholas Bellinger authored
This patch makes spc_parse_naa_6h_vendor_specific() available to other target code, which is required by EXTENDED_COPY when comparing the received NAA WWN device identifier to locate the associated se_device backend. Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: Roland Dreier <roland@purestorage.com> Cc: Zach Brown <zab@redhat.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Nicholas Bellinger authored
This patch makes the top-level target_core_subsystem array available to other target code, which is required by EXTENDED_COPY to pin the backend se_device using configfs_depend_item(), in order to ensure it can't be removed for the duration of an EXTENDED_COPY operation. Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: Roland Dreier <roland@purestorage.com> Cc: Zach Brown <zab@redhat.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
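A hedged sketch of the pinning pattern described above. The configfs_depend_item()/configfs_undepend_item() signatures shown (subsystem plus config_item) are my recollection of the API of that era and should be treated as an assumption; the wrapper names are illustrative.

#include <linux/configfs.h>

/* Pin the backend item for the duration of an EXTENDED_COPY operation. */
static int xcopy_pin_backend_item(struct configfs_subsystem *subsys,
				  struct config_item *item)
{
	return configfs_depend_item(subsys, item);	/* fails if the item is going away */
}

/* Release the pin once the copy completes. */
static void xcopy_unpin_backend_item(struct configfs_subsystem *subsys,
				     struct config_item *item)
{
	configfs_undepend_item(subsys, item);
}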
-
Nicholas Bellinger authored
Reversing the dma_data_direction for pci_map_sg() and friends is useful for other drivers, so move it from tcm_qla2xxx into inline code within target_core_fabric.h. Also drop the internal usage of the equivalent code in tcm_qla2xxx fabric code. Reported-by: Christoph Hellwig <hch@lst.de> Cc: Roland Dreier <roland@purestorage.com> Cc: Giridhar Malavali <giridhar.malavali@qlogic.com> Cc: Chad Dupuis <chad.dupuis@qlogic.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
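A minimal sketch of such an inline helper (the exact in-tree name and signature are an assumption): fabric hardware maps buffers in the opposite direction from the se_cmd's data direction, so TO/FROM are swapped and the other values pass through.

#include <linux/dma-direction.h>

static inline enum dma_data_direction
reverse_dma_data_direction(enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_TO_DEVICE:
		return DMA_FROM_DEVICE;
	case DMA_FROM_DEVICE:
		return DMA_TO_DEVICE;
	default:
		return dir;	/* DMA_NONE / DMA_BIDIRECTIONAL unchanged */
	}
}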
-
Nicholas Bellinger authored
This patch adds an extra check for SCF_COMPARE_AND_WRITE within transport_generic_request_failure() to invoke the callback for compare_and_write_callback() or compare_and_write_done(), in order to release se_dev->caw_mutex from the generic failure path. It also adds checks within compare_and_write_callback() to avoid processing when transport_generic_request_failure() occurs early enough that cmd->t_data_sg or cmd->t_bidi_data_sg have not been set up yet, nor se_dev->caw_mutex obtained from within sbc_compare_and_write().

v4 changes:
  - Add an explicit check for cmd->transport_complete_callback in transport_generic_request_failure() to handle the case where sbc_compare_and_write() clears the callback pointer (Dan Carpenter)

Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Nicholas Bellinger authored
This patch changes target_complete_ok_work() to fall through after calling the se_cmd->transport_complete_callback() -> compare_and_write_post() callback, keying off the existence of SCF_COMPARE_AND_WRITE_POST. This is necessary because once SCF_COMPARE_AND_WRITE_POST has been set by compare_and_write_post(), the SCSI response needs to be sent via TFO->queue_status(). Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Nicholas Bellinger authored
This patch adds support for COMPARE_AND_WRITE emulation on a per-block basis. This logic is used as an atomic test-and-set primitive, currently used by VMware ESX VAAI for performing array-side locking of individual VMFS extent ownership. This includes the COMPARE_AND_WRITE CDB parsing within sbc_parse_cdb(), and does the majority of the work within compare_and_write_callback() to perform the verify-instance user-data comparison and the subsequent write-instance user-data I/O submission upon a successful comparison. The synchronization is enforced by se_device->caw_sem, which is obtained before the initial READ I/O submission in sbc_compare_and_write(). The semaphore is then released upon MISCOMPARE in compare_and_write_callback(), or upon WRITE-instance user-data completion in compare_and_write_post(). The implementation currently assumes a single logical block (NoLB=1).

v4 changes:
  - Explicitly clear cmd->transport_complete_callback for two failure cases in sbc_compare_and_write() in order to avoid a double unlock of ->caw_sem in compare_and_write_callback() (Dan Carpenter)

v3 changes:
  - Convert se_device->caw_mutex to ->caw_sem

v2 changes:
  - Set SCF_COMPARE_AND_WRITE and cmd->execute_cmd() to sbc_compare_and_write() during setup in sbc_parse_cdb()
  - Use sbc_compare_and_write() for the initial READ submission with DMA_FROM_DEVICE
  - Reset cmd->execute_cmd() to sbc_execute_rw() for write-instance user-data in compare_and_write_callback()
  - Drop SCF_BIDI command flag usage
  - Set TRANSPORT_PROCESSING + transport_state flags before write-instance submission, and convert to __target_execute_cmd()
  - Prevent sbc_get_size() from being called twice and generating an incorrect size in sbc_parse_cdb()
  - Enforce se_device->caw_mutex synchronization between initial READ I/O submission and final WRITE I/O completion

Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
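A hedged sketch of the COMPARE_AND_WRITE ordering described above, with the asynchronous steps (split across sbc_compare_and_write(), compare_and_write_callback() and compare_and_write_post() in-tree) collapsed into one function to show the read/compare/write sequence and the ->caw_sem scope. The blk_io_fn callbacks and the function itself are illustrative, not the in-tree API.

#include <linux/errno.h>
#include <linux/semaphore.h>
#include <linux/slab.h>
#include <linux/string.h>

typedef int (*blk_io_fn)(void *dev, unsigned long long lba, void *buf, unsigned int len);

static int compare_and_write_one_block(struct semaphore *caw_sem, void *dev,
				       unsigned long long lba, unsigned int block_size,
				       const void *verify_buf, const void *write_buf,
				       blk_io_fn read_block, blk_io_fn write_block)
{
	void *disk_buf;
	int ret;

	if (down_interruptible(caw_sem))	/* serialize per se_device */
		return -ERESTARTSYS;

	disk_buf = kmalloc(block_size, GFP_KERNEL);
	if (!disk_buf) {
		ret = -ENOMEM;
		goto out_sem;
	}

	ret = read_block(dev, lba, disk_buf, block_size);	/* READ (verify) instance */
	if (ret)
		goto out_free;

	if (memcmp(disk_buf, verify_buf, block_size)) {
		ret = -EILSEQ;			/* surfaced as MISCOMPARE sense data */
		goto out_free;
	}

	ret = write_block(dev, lba, (void *)write_buf, block_size);	/* WRITE instance */

out_free:
	kfree(disk_buf);
out_sem:
	up(caw_sem);
	return ret;
}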
-
- 09 Sep, 2013 21 commits
-
-
Nicholas Bellinger authored
This patch adds the MAXIMUM COMPARE AND WRITE LENGTH bit, currently hardcoded to a single logical block (NoLB=1) within the Block Limits VPD in spc_emulate_evpd_b0(). Also add emulate_caw device attribute in configfs (enabled by default) to allow the exposure of this bit to be disabled, if necessary. Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
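A small sketch of the VPD field described above; byte 5 of the Block Limits page (0xb0) carries MAXIMUM COMPARE AND WRITE LENGTH per SPC (treat the offset as an assumption), and the helper name is illustrative.

/* Byte 5 of the Block Limits VPD (0xb0): MAXIMUM COMPARE AND WRITE LENGTH. */
static void evpd_b0_set_caw_len(unsigned char *buf, int emulate_caw)
{
	buf[5] = emulate_caw ? 0x01 : 0x00;	/* 1 logical block, or 0 = unsupported */
}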
-
Nicholas Bellinger authored
Required by COMPARE_AND_WRITE for write instance user-data submission, in order to bypass target_execute_cmd() checks. Reported-by: Christoph Hellwig <hch@lst.de> Cc: Roland Dreier <roland@purestorage.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Nicholas Bellinger authored
After COMPARE_AND_WRITE completes its comparison, the head of the WRITE payload SGL needs to be updated to point from the verify instance of user data to the write instance of user data. So for this special case, add transport_reset_sgl_orig() usage within transport_free_pages(), and add se_cmd->t_data_[sg,nents]_orig members to save the original assignments. Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Nicholas Bellinger authored
This patch updates transport_generic_new_cmd() to call target_alloc_sgl() for SGL + page memory allocation for se_cmd->t_bidi_data_sg. It also adds the special case for SCF_COMPARE_AND_WRITE to calculate a different bidi_length based upon se_cmd->t_task_nolb. Reported-by: Christoph Hellwig <hch@lst.de> Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Nicholas Bellinger authored
This patch refactors transport_generic_get_mem() to target_alloc_sgl() for accepting **sgl, *nents, length and zero_page as function parameters in order to be used for both se_cmd->t_data_sg + se_cmd->t_bidi_data_sg allocations. Reported-by: Christoph Hellwig <hch@lst.de> Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Nicholas Bellinger authored
Stop keying off se_cmd->t_bidi_data_sg within transport_complete_qf() + target_complete_ok_work(), and just use SCF_BIDI instead. Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Nicholas Bellinger authored
COMPARE_AND_WRITE expects to be able to send down a DMA_FROM_DEVICE request to obtain the necessary READ payload for comparison against the first half of the WRITE payload containing the verify user data. Currently the virtual backends expect to internally reference SGLs, SGL nents, and data_direction, so change the IBLOCK, FILEIO and RD sbc_ops->execute_rw() implementations to accept these values as function parameters. Also add a default sbc_execute_rw() handler for the typical case of cmd->execute_rw() submission using cmd->t_data_sg, cmd->t_data_nents, and cmd->data_direction.

v2 changes:
  - Add SCF_COMPARE_AND_WRITE command flag
  - Use sbc_execute_rw() for normal cmd->execute_rw() submission with the expected se_cmd members

Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
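A sketch of what the default handler amounts to, assuming the se_cmd member names used in this series (->execute_rw, ->t_data_sg, ->t_data_nents, ->data_direction); COMPARE_AND_WRITE can instead call ->execute_rw() directly with t_bidi_data_sg and DMA_FROM_DEVICE for the verify-instance READ.

#include <target/target_core_base.h>

static sense_reason_t sbc_execute_rw(struct se_cmd *cmd)
{
	/* Forward the command's own SGL, nents and direction to the backend. */
	return cmd->execute_rw(cmd, cmd->t_data_sg, cmd->t_data_nents,
			       cmd->data_direction);
}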
-
Nicholas Bellinger authored
This patch adds TCM_MISCOMPARE_VERIFY (ASC=0x1d, ASCQ=0x00) sense handling to transport_send_check_condition_and_sense(), which is required for a COMPARE_AND_WRITE comparison failure. Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
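A hedged sketch of the fixed-format sense data produced for a comparison failure: sense key MISCOMPARE (0xe) with ASC 0x1d / ASCQ 0x00 ("miscompare during verify operation"). The offsets follow the standard fixed-format sense layout (key in byte 2, ASC/ASCQ in bytes 12/13); the helper name is illustrative.

static void build_miscompare_sense(unsigned char *buffer)
{
	buffer[0]  = 0x70;	/* current error, fixed format */
	buffer[2]  = 0x0e;	/* sense key: MISCOMPARE */
	buffer[7]  = 0x0a;	/* additional sense length */
	buffer[12] = 0x1d;	/* ASC */
	buffer[13] = 0x00;	/* ASCQ */
}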
-
Nicholas Bellinger authored
This patch adds a sense_reason_t return to ->transport_complete_callback(), and updates target_complete_ok_work() to invoke transport_send_check_condition_and_sense() if necessary in the failure case. Also update xdreadwrite_callback() to use this return value. Cc: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Nicholas Bellinger authored
Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Hannes Reinecke <hare@suse.de> Cc: Martin Petersen <martin.petersen@oracle.com> Cc: Chris Mason <chris.mason@fusionio.com> Cc: James Bottomley <JBottomley@Parallels.com> Cc: Nicholas Bellinger <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Dan Carpenter authored
blk_get_request() just returns NULL on error, it doesn't return an ERR_PTR. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
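A minimal sketch of the corrected error handling (generic names, not the exact call site from this patch): in this era blk_get_request() returns NULL on failure, so the check must be a NULL test rather than IS_ERR().

#include <linux/blkdev.h>

static struct request *example_get_request(struct request_queue *q)
{
	struct request *req;

	req = blk_get_request(q, READ, GFP_KERNEL);
	if (!req)		/* NULL on failure, never an ERR_PTR */
		return NULL;	/* caller translates this into a TCM failure status */

	return req;
}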
-
Nicholas Bellinger authored
This patch changes iscsi-target to use transport_alloc_session_tags() pre-allocation logic for per-cpu session tag pooling with internal ida_alloc() + ida_free() calls based upon the saved se_cmd->map_tag id. This includes tag pool setup based upon per NodeACL queue_depth after locating se_node_acl in iscsi_target_locate_portal(). Also update iscsit_allocate_cmd() and iscsit_release_cmd() to use percpu_ida_alloc() and percpu_ida_free() respectively.

v5 changes:
  - Convert to percpu_ida.h include

v2 changes:
  - Fix bug with SessionType=Discovery in iscsi_target_locate_portal()

Cc: Or Gerlitz <ogerlitz@mellanox.com> Cc: Kent Overstreet <kmo@daterainc.com> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Nicholas Bellinger authored
This patch converts iscsi/isert-target to use allocations based on iscsit_transport->priv_size within iscsit_allocate_cmd(), instead of using an embedded isert_cmd->iscsi_cmd. This includes removing iscsit_transport->alloc_cmd() usage, along with updating the isert-target code to use iscsit_priv_cmd(). Also, remove left-over iscsit_transport->release_cmd() usage in favor of direct calls to iscsit_release_cmd(), and drop the now unused lio_cmd_cache and isert_cmd_cache. Cc: Or Gerlitz <ogerlitz@mellanox.com> Cc: Kent Overstreet <kmo@daterainc.com> Signed-off-by: Nicholas Bellinger <nab@daterainc.com>
-
Nicholas Bellinger authored
This patch adds support for pre-allocation of per tv_cmd descriptor scatterlist + user-space page pointer memory using se_sess->sess_cmd_map within tcm_vhost_make_nexus() code. This includes sanity checks within vhost_scsi_map_to_sgl() to reject I/O that exceeds these initial hardcoded values, and the necessary cleanup in tcm_vhost_make_nexus() failure path + tcm_vhost_drop_nexus(). v3 changes: - Rebase to v3.11-rc5 code Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Asias He <asias@redhat.com> Cc: Kent Overstreet <kmo@daterainc.com> Reviewed-by: Asias He <asias@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Nicholas Bellinger authored
This patch changes vhost/scsi to use transport_init_session_tags() pre-allocation logic for per-cpu session tag pooling with internal ida_alloc() + ida_free() calls based upon the saved se_cmd->map_tag id.

FIXME: Make transport_init_session_tags() number of tags setup configurable per vring client setting via configfs

v5 changes:
  - Convert to percpu_ida.h include

v3 changes:
  - Update to percpu-ida usage
  - Rebase to v3.11-rc5 code

Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Asias He <asias@redhat.com> Cc: Kent Overstreet <kmo@daterainc.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Nicholas Bellinger authored
This patch adds lib/idr.c based transport_init_session_tags() logic that allows fabric drivers to setup a per-cpu se_sess->sess_tag_pool and associated se_sess->sess_cmd_map for basic tagged pre-allocation of fabric descriptor sized memory.

v5 changes:
  - Convert to percpu_ida.h include

v4 changes:
  - Add transport_alloc_session_tags() for fabrics that need early transport_init_session()

v3 changes:
  - Update to percpu-ida usage

Cc: Kent Overstreet <kmo@daterainc.com> Cc: Asias He <asias@redhat.com> Cc: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Asias He <asias@redhat.com> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Kent Overstreet authored
Percpu frontend for allocating ids. With percpu allocation (that works), it's impossible to guarantee it will always be possible to allocate all nr_tags - typically, some will be stuck on a remote percpu freelist where the current job can't get to them.

We do guarantee that it will always be possible to allocate at least (nr_tags / 2) tags - this is done by keeping track of which and how many cpus have tags on their percpu freelists. On allocation failure, if enough cpus have tags that there could potentially be (nr_tags / 2) tags stuck on remote percpu freelists, we then pick a remote cpu at random to steal from.

Note that there's no cpu hotplug notifier - we don't care, because steal_tags() will eventually get the down cpu's tags. We _could_ satisfy more allocations if we had a notifier - but we'll still meet our guarantees and it's absolutely not a correctness issue, so I don't think it's worth the extra code.

From akpm: "It looks OK to me (that's as close as I get to an ack :))"

v6 changes:
  - Add #include <linux/cpumask.h> to include/linux/percpu_ida.h to make alpha/arc builds happy (Fengguang)
  - Move second (cpu >= nr_cpu_ids) check inside of first check scope in steal_tags() (akpm + nab)

v5 changes:
  - Change percpu_ida->cpus_have_tags to cpumask_t (kmo + akpm)
  - Add comment for percpu_ida_cpu->lock + ->nr_free (kmo + akpm)
  - Convert steal_tags() to use cpumask_weight() + cpumask_next() + cpumask_first() + cpumask_clear_cpu() (kmo + akpm)
  - Add comment for alloc_global_tags() (kmo + akpm)
  - Convert percpu_ida_alloc() to use cpumask_set_cpu() (kmo + akpm)
  - Convert percpu_ida_free() to use cpumask_set_cpu() (kmo + akpm)
  - Drop percpu_ida->cpus_have_tags allocation in percpu_ida_init() (kmo + akpm)
  - Drop percpu_ida->cpus_have_tags kfree in percpu_ida_destroy() (kmo + akpm)
  - Add comment for percpu_ida_alloc @ gfp (kmo + akpm)
  - Move to percpu_ida.c + percpu_ida.h (kmo + akpm + nab)

v4 changes:
  - Fix tags.c reference in percpu_ida_init (akpm)

Signed-off-by: Kent Overstreet <kmo@daterainc.com> Cc: Tejun Heo <tj@kernel.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Christoph Lameter <cl@linux-foundation.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Jens Axboe <axboe@kernel.dk> Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
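A hedged usage sketch of the percpu_ida API introduced here, as consumed by the session tag-pool patches later in this series. The exact signatures at this point in history are an assumption (the gfp argument to percpu_ida_alloc() was later replaced by a task state).

#include <linux/gfp.h>
#include <linux/percpu_ida.h>

static struct percpu_ida tag_pool;

static int tag_pool_example(void)
{
	int tag, err;

	err = percpu_ida_init(&tag_pool, 128);		/* nr_tags */
	if (err)
		return err;

	tag = percpu_ida_alloc(&tag_pool, GFP_KERNEL);	/* may sleep waiting for a free tag */
	if (tag >= 0)
		percpu_ida_free(&tag_pool, tag);

	percpu_ida_destroy(&tag_pool);
	return 0;
}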
-
Nicholas Bellinger authored
This patch updates the iser-target code to support login negotiation multiplexing. This includes only using isert_conn->conn_login_comp for the first login request PDU, pushing the subsequent processing to iscsi_conn->login_work -> iscsi_target_do_login_rx(), and turning isert_get_login_rx() into a NOP. v3 changes: - Drop unnecessary LOGIN_FLAGS_READ_ACTIVE bit set in isert_rx_login_req() Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Nicholas Bellinger authored
There is no need for iscsi_target_do_login_io() anymore in modern code, so go ahead and call iscsi_target_do_tx_login_io() directly within iscsi_target_do_login(). Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Nicholas Bellinger authored
This patch adds a sock->sk_state_change() -> iscsi_target_sk_state_change() callback in order to handle transient TCP failures during the login process, where sock->sk_data_ready() -> iscsi_target_sk_data_ready() may not be called to release connection resources and relinquish tpg->np_login_lock via iscsit_deaccess_np(). It performs the sk->sk_state check using iscsi_target_sk_state_check() to look for TCP_CLOSE_WAIT + TCP_CLOSE, and invokes schedule_delayed_work() -> iscsi_target_do_cleanup() to perform the remaining cleanup from process context. It also adds an explicit sk_state check to iscsi_target_do_login() in order to determine a state failure when iscsi_target_sk_state_change() may not be able to proceed before LOGIN_FLAGS_READY=1 is set. Also use sk->sk_sndtimeo + sk->sk_rcvtimeo settings during login in iscsi_target_set_sock_callbacks(), and revert back to MAX_SCHEDULE_TIMEOUT post-login in iscsi_target_restore_sock_callbacks(). Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
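A minimal sketch of the socket-state test described above (the helper name is illustrative): treat TCP_CLOSE_WAIT and TCP_CLOSE as a failed login-side connection so the delayed cleanup work can be scheduled from process context.

#include <net/sock.h>
#include <net/tcp_states.h>

static bool login_sk_state_is_closed(struct sock *sk)
{
	return sk->sk_state == TCP_CLOSE_WAIT || sk->sk_state == TCP_CLOSE;
}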
-
Nicholas Bellinger authored
This patch adds support for login negotiation multiplexing in the iscsi-target code. This involves handling the first login request PDU + payload and login response PDU + payload within __iscsi_target_login_thread() process context, and then changing struct sock->sk_data_ready() so that all subsequent exchanges are handled by workqueue process context, allowing other incoming login requests to be received in parallel by __iscsi_target_login_thread(). Upon login negotiation completion (or failure), ->sk_data_ready() is replaced with the original kernel sockets handler saved in iscsi_conn->orig_data_ready.

v3 changes:
  - Convert iscsi_target_sk_data_ready() lock access to write_[lock,unlock]_bh()
  - Only clear LOGIN_FLAGS_READ_ACTIVE when iscsi_target_do_login() returns zero
  - Add LOGIN_FLAGS_READY + LOGIN_FLAGS_CLOSED bit checks to iscsi_target_sk_data_ready()
  - Make the INIT_DELAYED_WORK() + iscsi_target_set_sock_callbacks() setup happen earlier by moving it from iscsi_target_start_negotiation() into iscsi_target_locate_portal()
  - Set the LOGIN_FLAGS_READY bit in iscsi_target_start_negotiation() after iscsi_target_do_login() returns zero

v2 changes:
  - Add login_timer in iscsi_target_do_login_rx() to avoid a possible endless sleep with MSG_WAITALL for traditional iscsi-target in certain network configurations
  - Convert lprintk() -> pr_debug()
  - Remove forward declarations of iscsi_target_set_sock_callbacks(), iscsi_target_restore_sock_callbacks() and iscsi_target_sk_data_ready()
  - Make iscsi_target_set_sock_callbacks() + iscsi_target_restore_sock_callbacks() static (Fengguang)
  - Make iscsi_target_do_login_rx() safe for iser-target w/o conn->sock

Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-