- 10 Jun, 2020 6 commits
-
-
Simon Arlott authored
If the device minor cannot be allocated or the cdrom fails to be registered, then the mutex should be destroyed. Link: https://lore.kernel.org/r/06e9de38-eeed-1cab-5e08-e889288935b3@0882a8b5-c6c3-11e9-b005-00805fc181fe Fixes: 51a85881 ("scsi: sr: get rid of sr global mutex") Signed-off-by: Simon Arlott <simon@octiron.net> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
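A minimal sketch of the error-path pattern this describes, with illustrative names (example_probe(), allocate_minor(), register_device() and struct example_dev are placeholders, not the actual sr code):

    #include <linux/mutex.h>

    static int example_probe(struct example_dev *dev)
    {
            int error;

            mutex_init(&dev->lock);

            error = allocate_minor(dev);            /* hypothetical helper */
            if (error)
                    goto fail_destroy_mutex;

            error = register_device(dev);           /* hypothetical helper */
            if (error)
                    goto fail_destroy_mutex;

            return 0;

    fail_destroy_mutex:
            /* Undo mutex_init() on every failure path. */
            mutex_destroy(&dev->lock);
            return error;
    }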
-
John Hubbard authored
This code was using get_user_pages*(), in a "Case 1" scenario (Direct IO), using the categorization from [1]. That means that it's time to convert the get_user_pages*() + put_page() calls to pin_user_pages*() + unpin_user_pages() calls. There is some helpful background in [2]: basically, this is a small part of fixing a long-standing disconnect between pinning pages, and file systems' use of those pages. Note that this effectively changes the code's behavior as well: it now ultimately calls set_page_dirty_lock(), instead of SetPageDirty(). This is probably more accurate. As Christoph Hellwig put it, "set_page_dirty() is only safe if we are dealing with a file backed page where we have reference on the inode it hangs off." [3] Also, this deletes one of the two FIXME comments (about refcounting), because there is nothing wrong with the refcounting at this point. [1] Documentation/core-api/pin_user_pages.rst [2] "Explicit pinning of user-space pages": https://lwn.net/Articles/807108/ [3] https://lore.kernel.org/r/20190723153640.GB720@lst.de Link: https://lore.kernel.org/r/20200526182709.99599-1-jhubbard@nvidia.com Cc: "Kai Mäkisara (Kolumbus)" <kai.makisara@kolumbus.fi> Cc: Bart Van Assche <bvanassche@acm.org> Cc: James E.J. Bottomley <jejb@linux.ibm.com> Cc: Martin K. Petersen <martin.petersen@oracle.com> Cc: linux-scsi@vger.kernel.org Acked-by: Kai Mäkisara <kai.makisara@kolumbus.fi> Signed-off-by: John Hubbard <jhubbard@nvidia.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
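A rough sketch of the converted pattern for a Direct IO style caller (example_pin_io() and its parameters are placeholders, not the st driver code):

    #include <linux/mm.h>
    #include <linux/errno.h>

    static int example_pin_io(unsigned long uaddr, int nr_pages,
                              struct page **pages, bool write)
    {
            int pinned;

            /* Replaces get_user_pages_fast(). */
            pinned = pin_user_pages_fast(uaddr, nr_pages,
                                         write ? FOLL_WRITE : 0, pages);
            if (pinned <= 0)
                    return pinned ? pinned : -EFAULT;

            /* ... perform the I/O against pages[0..pinned-1] ... */

            /*
             * Replaces the SetPageDirty()/put_page() loop: dirties the pages
             * with set_page_dirty_lock() and unpins them in one call.
             */
            unpin_user_pages_dirty_lock(pages, pinned, write);
            return 0;
    }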
-
Sudhakar Panneerselvam authored
This commit also removes the unused argument, cdb, that was passed to this function. Link: https://lore.kernel.org/r/1591559913-8388-5-git-send-email-sudhakar.panneerselvam@oracle.com Reviewed-by: Mike Christie <michael.christie@oracle.com> Signed-off-by: Sudhakar Panneerselvam <sudhakar.panneerselvam@oracle.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Sudhakar Panneerselvam authored
A NULL pointer dereference happens when the following conditions are met: 1) A SCSI command is received for a non-existing LU or cdb initialization fails in target_setup_cmd_from_cdb(). 2) Tracing is enabled. The following call sequences lead to the NULL pointer dereference: 1) iscsit_setup_scsi_cmd transport_lookup_cmd_lun <-- lookup fails. or target_setup_cmd_from_cdb() <-- cdb initialization fails iscsit_process_scsi_cmd iscsit_sequence_cmd transport_send_check_condition_and_sense trace_target_cmd_complete <-- NULL dereference 2) target_submit_cmd_map_sgls transport_lookup_cmd_lun <-- lookup fails or target_setup_cmd_from_cdb() <-- cdb initialization fails transport_send_check_condition_and_sense trace_target_cmd_complete <-- NULL dereference In the above sequences, cmd->t_task_cdb is uninitialized, which, when referenced in trace_target_cmd_complete(), causes the NULL pointer dereference. The fix is to use the helper target_cmd_init_cdb() and call it after transport_init_se_cmd(), so that cmd->t_task_cdb is initialized and hence can be referenced in trace_target_cmd_complete(). Link: https://lore.kernel.org/r/1591559913-8388-4-git-send-email-sudhakar.panneerselvam@oracle.com Reviewed-by: Mike Christie <michael.christie@oracle.com> Signed-off-by: Sudhakar Panneerselvam <sudhakar.panneerselvam@oracle.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
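The intended ordering, as a pseudo-C sketch with arguments elided and the helper's exact signature assumed (this is not the literal target core code):

    transport_init_se_cmd(cmd, /* ... */);

    /* Attach the cdb first, so cmd->t_task_cdb is valid from here on. */
    if (target_cmd_init_cdb(cmd, cdb))
            goto send_check_condition;      /* tracing can now dump the cdb safely */

    if (transport_lookup_cmd_lun(cmd))
            goto send_check_condition;      /* likewise safe after a failed lookup */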
-
Sudhakar Panneerselvam authored
Initialization of orig_fe_lun is moved to transport_init_se_cmd() from transport_lookup_cmd_lun(). This helps in the cases where the SCSI request fails before the call to transport_lookup_cmd_lun(), so that trace_target_cmd_complete() can print the LUN information to the trace buffer. Due to this change, the lun parameter is removed from transport_lookup_cmd_lun() and transport_lookup_tmr_lun(). Link: https://lore.kernel.org/r/1591559913-8388-3-git-send-email-sudhakar.panneerselvam@oracle.com Reviewed-by: Mike Christie <michael.christie@oracle.com> Signed-off-by: Sudhakar Panneerselvam <sudhakar.panneerselvam@oracle.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Sudhakar Panneerselvam authored
target_setup_cmd_from_cdb() is called after a successful call to transport_lookup_cmd_lun(). The new helper factors out the code that can be called before the call to transport_lookup_cmd_lun(). This helper will be used in an upcoming commit to address a NULL pointer dereference. Link: https://lore.kernel.org/r/1591559913-8388-2-git-send-email-sudhakar.panneerselvam@oracle.com Reviewed-by: Mike Christie <michael.christie@oracle.com> Signed-off-by: Sudhakar Panneerselvam <sudhakar.panneerselvam@oracle.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
- 04 Jun, 2020 4 commits
-
-
Al Viro authored
Link: https://lore.kernel.org/r/20200529234028.46373-4-viro@ZenIV.linux.org.uk Acked-by: Don Brace <don.brace@microsemi.com> Tested-by: Don Brace <don.brace@microsemi.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Al Viro authored
There is no need to build a native struct on the kernel stack, copy it to a userland one, and then call hpsa_ioctl(), which copies it back into _another_ instance of the same struct. Link: https://lore.kernel.org/r/20200529234028.46373-3-viro@ZenIV.linux.org.uk Acked-by: Don Brace <don.brace@microsemi.com> Tested-by: Don Brace <don.brace@microsemi.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Al Viro authored
"BIG" in the name refers to the amount of data being transferred, _not_ the size of structure itself; it's 140 or 144 bytes (for 32bit and 64bit hosts resp.). IOCTL_Command_struct is 136 or 144 bytes large... No point whatsoever turning that into dynamic allocation, let alone vmalloc one. Just keep it as local variable... Link: https://lore.kernel.org/r/20200529234028.46373-2-viro@ZenIV.linux.org.ukAcked-by: Don Brace <don.brace@microsemi.com> Tested-by: Don Brace <don.brace@microsemi.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Al Viro authored
Link: https://lore.kernel.org/r/20200529234028.46373-1-viro@ZenIV.linux.org.uk Acked-by: Don Brace <don.brace@microsemi.com> Tested-by: Don Brace <don.brace@microsemi.com> Signed-off-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
- 03 Jun, 2020 6 commits
-
-
Stanley Chu authored
In ufshcd_probe_hba(), all BKOP SW tracking variables can be reset together in ufshcd_force_reset_auto_bkops(); thus the urgent_bkop_lvl initialization at the beginning of ufshcd_probe_hba() can be merged into ufshcd_force_reset_auto_bkops(). Link: https://lore.kernel.org/r/20200530141200.4616-1-stanley.chu@mediatek.com Reviewed-by: Avri Altman <avri.altman@wdc.com> Signed-off-by: Stanley Chu <stanley.chu@mediatek.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Can Guo authored
The urgent bkops level is used to compare against the actual bkops status read from the UFS device. The urgent bkops level is set during initialization and might be updated in the exception event handler during runtime. But it should not be updated to the actual bkops status every time auto bkops is toggled. Otherwise, if the urgent bkops level is updated to 0, auto bkops shall always be kept enabled. Link: https://lore.kernel.org/r/1590632686-17866-1-git-send-email-cang@codeaurora.org Fixes: 24366c2a ("scsi: ufs: Recheck bkops level if bkops is disabled") Reviewed-by: Stanley Chu <stanley.chu@mediatek.com> Signed-off-by: Can Guo <cang@codeaurora.org> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Colin Ian King authored
The variable rc is being initialized with a value that is never read and it is being updated later with a new value. The initialization is redundant and can be removed. Link: https://lore.kernel.org/r/20200527115242.172344-1-colin.king@canonical.com Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Suganath Prabu S authored
Fix memset() accessing an out-of-range address when the reply_queue count is less than RDPQ_MAX_INDEX_IN_ONE_CHUNK (i.e. 16) in non-RDPQ mode. In non-RDPQ mode, the driver allocates a single contiguous pool of size reply_queue count * reply_post_free_sz. But the driver always memsets this pool with size 16 * reply_post_free_sz. If the reply queue count is less than 16 (i.e. when fewer than 16 MSI-X vectors are enabled), the driver accesses an out-of-range address and this results in a 'BUG: unable to handle kernel paging request at fff0x...x'. Make the driver use the dma_pool_zalloc() API to allocate and zero the pool. Link: https://lore.kernel.org/r/20200528145617.27252-1-suganath-prabu.subramani@broadcom.com Fixes: 8012209e ("scsi: mpt3sas: Handle RDPQ DMA allocation in same 4G region") Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
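A simplified before/after sketch of the allocation (pool, reply_post_free_sz and the surrounding code are placeholders, not the mpt3sas source):

    #include <linux/dmapool.h>
    #include <linux/string.h>

    void *vaddr;
    dma_addr_t dma;

    /* Before: allocate, then memset() with a fixed size that can exceed the chunk. */
    vaddr = dma_pool_alloc(pool, GFP_KERNEL, &dma);
    if (vaddr)
            memset(vaddr, 0, 16 * reply_post_free_sz);      /* may overrun */

    /* After: the allocation itself is zeroed; there is no separate size to get wrong. */
    vaddr = dma_pool_zalloc(pool, GFP_KERNEL, &dma);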
-
Qiushi Wu authored
kobject_init_and_add() takes a reference even when it fails. If this function returns an error, kobject_put() must be called to properly clean up the memory associated with the object. Link: https://lore.kernel.org/r/20200528201353.14849-1-wu000273@umn.edu Reviewed-by: Lee Duncan <lduncan@suse.com> Signed-off-by: Qiushi Wu <wu000273@umn.edu> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
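The general pattern, as a sketch with illustrative names (struct example, example_ktype and example_add_kobj() are placeholders):

    #include <linux/kobject.h>

    static int example_add_kobj(struct example *ex, struct kobject *parent)
    {
            int ret;

            ret = kobject_init_and_add(&ex->kobj, &example_ktype, parent, "example");
            if (ret)
                    /* Drop the reference taken by kobject_init_and_add(); the
                     * ktype's release() callback frees the object. */
                    kobject_put(&ex->kobj);

            return ret;
    }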
-
Bodo Stroesser authored
1) If the remaining ring space before the end of the ring is smaller than the next cmd to write, tcmu writes a padding entry which fills the remaining space at the end of the ring. Then tcmu calls tcmu_flush_dcache_range() with the size of struct tcmu_cmd_entry as the data length to flush. If the space filled by the padding was smaller than tcmu_cmd_entry, tcmu_flush_dcache_range() is called for an address range reaching beyond the end of the vmalloc'ed ring. tcmu_flush_dcache_range() in a loop calls flush_dcache_page(virt_to_page(start)) for every page in the range. On x86 the call is optimized out by the compiler, as flush_dcache_page() is empty on x86. But I assume the above can cause trouble on other architectures that really have a flush_dcache_page(). For paddings only the header part of an entry is relevant; due to alignment rules the header always fits in the remaining space, if padding is needed. So tcmu_flush_dcache_range() can safely be called with sizeof(entry->hdr) as the length here. 2) After it has written a command to the cmd ring, tcmu calls tcmu_flush_dcache_range() using the size of a struct tcmu_cmd_entry as the data length to flush. But if a command needs many iovecs, the real size of the command may be bigger than tcmu_cmd_entry, so a part of the written command is not flushed. Link: https://lore.kernel.org/r/20200528193108.9085-1-bstroesser@ts.fujitsu.com Acked-by: Mike Christie <michael.christie@oracle.com> Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
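For reference, a simplified sketch of what such a page-wise flush helper and the corrected call sites look like (entry and command_size are placeholders; this is not the literal tcmu code):

    #include <linux/mm.h>
    #include <asm/cacheflush.h>

    static void example_flush_dcache_range(void *vaddr, size_t size)
    {
            unsigned long start = (unsigned long)vaddr & PAGE_MASK;
            unsigned long end = (unsigned long)vaddr + size;

            /* flush_dcache_page() is a no-op on x86 but real on other arches. */
            for (; start < end; start += PAGE_SIZE)
                    flush_dcache_page(virt_to_page((void *)start));
    }

    /* Padding entry: only the header is meaningful, so flush just the header. */
    example_flush_dcache_range(entry, sizeof(entry->hdr));

    /* Written command: flush its real, iovec-dependent size, not sizeof(*entry). */
    example_flush_dcache_range(entry, command_size);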
-
- 27 May, 2020 9 commits
-
-
Vignesh Raghavendra authored
Fix unwinding of pm_runtime changes when bailing out of driver probe due to a failure and also on removal of driver. Link: https://lore.kernel.org/r/20200526100340.15032-1-vigneshr@ti.com Fixes: 6979e56c ("scsi: ufs: Add driver for TI wrapper for Cadence UFS IP") Reported-by: Dinghao Liu <dinghao.liu@zju.edu.cn> Signed-off-by: Vignesh Raghavendra <vigneshr@ti.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Dan Carpenter authored
There wasn't any cleanup done if cxgb3_alloc_atid() failed, and the original code also didn't release "csk->l2t". Link: https://lore.kernel.org/r/20200521121221.GA247492@mwanda Fixes: 6f7efaab ("[SCSI] cxgb3i: change cxgb3i to use libcxgbi") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
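The usual shape of such a fix is reverse-order unwinding with goto labels; a generic sketch under assumed names (struct conn and the example_* helpers are hypothetical, not the cxgb3i code):

    #include <linux/errno.h>

    static int example_connect(struct conn *c)
    {
            int err;

            c->l2t = example_get_l2t(c);            /* hypothetical */
            if (!c->l2t)
                    return -ENOMEM;

            c->atid = example_alloc_atid(c);        /* hypothetical */
            if (c->atid < 0) {
                    err = c->atid;
                    goto put_l2t;
            }

            return 0;

    put_l2t:
            example_put_l2t(c->l2t);                /* release what was acquired above */
            c->l2t = NULL;
            return err;
    }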
-
Chen Tao authored
Fix the following warning: drivers/scsi/ibmvscsi/ibmvscsi.c:2387:12: warning: symbol 'ibmvscsi_module_init' was not declared. Should it be static? drivers/scsi/ibmvscsi/ibmvscsi.c:2409:13: warning: symbol 'ibmvscsi_module_exit' was not declared. Should it be static? Link: https://lore.kernel.org/r/20200520091036.247286-1-chentao107@huawei.com Signed-off-by: Chen Tao <chentao107@huawei.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Gabriel Krisman Bertazi authored
iSCSI suffers from a deadlock in case a management command submitted via the netlink socket sleeps on an allocation while holding the rx_queue_mutex, if that allocation causes a memory reclaim that writes back to a failed iSCSI device. The recovery procedure can never make progress to recover the failed disk or abort outstanding IO operations to complete the reclaim (since rx_queue_mutex is locked), thus locking up the system. Nevertheless, just marking all allocations under rx_queue_mutex as GFP_NOIO (or locking the userspace process with something like PF_MEMALLOC_NOIO) is not enough, since the iSCSI command code relies on other subsystems that try to grab locked mutexes, whose threads are GFP_IO, leading to the same deadlock. One instance where this situation can be observed is in the backtraces below, stitched from multiple bug reports, involving the kobj uevent sent when a session is created. The root of the problem is not the fact that iSCSI does GFP_IO allocations; that is acceptable. The actual problem is that rx_queue_mutex has a very large granularity, covering every unrelated netlink command execution at the same time as the error recovery path. The proposed fix leverages the recently added mechanism to stop failed connections from the kernel, by enabling it to execute even though a management command from the netlink socket is being run (rx_queue_mutex is held), provided that the command is known to be safe. It splits the rx_queue_mutex into two mutexes, one protecting from concurrent command execution from the netlink socket, and one protecting stop_conn from racing with other connection management operations that might conflict with it. It is not very pretty, but it is the simplest way to resolve the deadlock. I considered making it a lock per connection, but some external mutex would still be needed to deal with iscsi_if_destroy_conn. The patch was tested by forcing a memory shrinker (unrelated, but used bufio/dm-verity) to reclaim iSCSI pages every time ISCSI_UEVENT_CREATE_SESSION happens, which is reasonable to simulate reclaims that might happen with GFP_KERNEL on that path. Then, a faulty hung target causes a connection to fail during intensive IO, while at the same time a new session is added by iscsid. The following stacktraces are stitched together from several bug reports, showing a case where the deadlock can happen.
iSCSI-write holding: rx_queue_mutex waiting: uevent_sock_mutex kobject_uevent_env+0x1bd/0x419 kobject_uevent+0xb/0xd device_add+0x48a/0x678 scsi_add_host_with_dma+0xc5/0x22d iscsi_host_add+0x53/0x55 iscsi_sw_tcp_session_create+0xa6/0x129 iscsi_if_rx+0x100/0x1247 netlink_unicast+0x213/0x4f0 netlink_sendmsg+0x230/0x3c0 iscsi_fail iscsi_conn_failure waiting: rx_queue_mutex schedule_preempt_disabled+0x325/0x734 __mutex_lock_slowpath+0x18b/0x230 mutex_lock+0x22/0x40 iscsi_conn_failure+0x42/0x149 worker_thread+0x24a/0xbc0 EventManager_ holding: uevent_sock_mutex waiting: dm_bufio_client->lock dm_bufio_lock+0xe/0x10 shrink+0x34/0xf7 shrink_slab+0x177/0x5d0 do_try_to_free_pages+0x129/0x470 try_to_free_mem_cgroup_pages+0x14f/0x210 memcg_kmem_newpage_charge+0xa6d/0x13b0 __alloc_pages_nodemask+0x4a3/0x1a70 fallback_alloc+0x1b2/0x36c __kmalloc_node_track_caller+0xb9/0x10d0 __alloc_skb+0x83/0x2f0 kobject_uevent_env+0x26b/0x419 dm_kobject_uevent+0x70/0x79 dev_suspend+0x1a9/0x1e7 ctl_ioctl+0x3e9/0x411 dm_ctl_ioctl+0x13/0x17 do_vfs_ioctl+0xb3/0x460 SyS_ioctl+0x5e/0x90 MemcgReclaimerD" holding: dm_bufio_client->lock waiting: stuck io to finish (needs iscsi_fail thread to progress) schedule at ffffffffbd603618 io_schedule at ffffffffbd603ba4 do_io_schedule at ffffffffbdaf0d94 __wait_on_bit at ffffffffbd6008a6 out_of_line_wait_on_bit at ffffffffbd600960 wait_on_bit.constprop.10 at ffffffffbdaf0f17 __make_buffer_clean at ffffffffbdaf18ba __cleanup_old_buffer at ffffffffbdaf192f shrink at ffffffffbdaf19fd do_shrink_slab at ffffffffbd6ec000 shrink_slab at ffffffffbd6ec24a do_try_to_free_pages at ffffffffbd6eda09 try_to_free_mem_cgroup_pages at ffffffffbd6ede7e mem_cgroup_resize_limit at ffffffffbd7024c0 mem_cgroup_write at ffffffffbd703149 cgroup_file_write at ffffffffbd6d9c6e sys_write at ffffffffbd6662ea system_call_fastpath at ffffffffbdbc34a2 Link: https://lore.kernel.org/r/20200520022959.1912856-1-krisman@collabora.com Reported-by: Khazhismel Kumykov <khazhy@google.com> Reviewed-by: Lee Duncan <lduncan@suse.com> Signed-off-by: Gabriel Krisman Bertazi <krisman@collabora.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
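Conceptually, the locking split looks like the sketch below (simplified, illustrative names; not the literal scsi_transport_iscsi code):

    #include <linux/mutex.h>

    static DEFINE_MUTEX(rx_queue_mutex);    /* serializes netlink management commands */
    static DEFINE_MUTEX(conn_mutex);        /* protects stop_conn vs. other conn operations */

    static void example_netlink_cmd(void)
    {
            mutex_lock(&rx_queue_mutex);
            /* ... management command; may allocate and therefore sleep in reclaim ... */
            mutex_unlock(&rx_queue_mutex);
    }

    static void example_stop_conn(void)
    {
            /* Can now run even while a netlink command holds rx_queue_mutex,
             * so error recovery is not blocked behind unrelated commands. */
            mutex_lock(&conn_mutex);
            /* ... tear down the failed connection, letting reclaim make progress ... */
            mutex_unlock(&conn_mutex);
    }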
-
Stanley Chu authored
Currently the UFS host driver promises VCC supply if the UFS device needs to do a WriteBooster flush during runtime suspend. However, the UFS specification mentions: "While the flushing operation is in progress, the device is in Active power mode." Therefore the UFS host driver needs to promise more: keep the UFS device in "Active power mode", otherwise the UFS device shall not do any flush if the device enters Sleep or PowerDown power mode. Similarly, the same promise shall apply if the device needs urgent BKOP during runtime suspend. Fix this by not changing the device power mode if a WriteBooster flush or urgent BKOP is required in ufshcd_suspend(). Now, if the device finishes its job but is not resumed for a very long time, the system will have unnecessary power drain because VCC is still supplied. A method to re-check the threshold for keeping the VCC supply is required to fix the power drain. However, the threshold re-check needs to re-activate the link first because the decision depends on the latest device status. Also introduce a delayed work to force runtime resume after a certain delay during runtime suspend. This makes the threshold re-check happen naturally on entry to the next runtime suspend. The device can continue its WriteBooster flush or urgent BKOP jobs soon after being resumed if the device has no upcoming requests and the link enters hibern8 state either by Auto-Hibern8 or hibern8 during the clk-gating scheme. This solution not only prevents power drain but also makes as much use of time as possible for the device's background jobs. Link: https://lore.kernel.org/r/20200522083212.4008-5-stanley.chu@mediatek.com Reviewed-by: Asutosh Das <asutoshd@codeaurora.org> Signed-off-by: Stanley Chu <stanley.chu@mediatek.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
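A schematic of the delayed forced-resume idea (struct example_hba, the field names and the 5-second delay are assumptions for illustration, not the actual ufshcd code):

    #include <linux/workqueue.h>
    #include <linux/pm_runtime.h>
    #include <linux/jiffies.h>

    static void example_rpm_flush_recheck_work(struct work_struct *work)
    {
            struct example_hba *hba = container_of(work, struct example_hba,
                                                   flush_recheck_work.work);

            /* Force a runtime resume; dropping the reference lets the device
             * runtime-suspend again, at which point the flush/BKOP threshold
             * is re-evaluated with the link re-activated. */
            pm_runtime_get_sync(hba->dev);
            pm_runtime_put_sync(hba->dev);
    }

    /* During runtime suspend, when a flush or urgent BKOP keeps VCC supplied: */
    schedule_delayed_work(&hba->flush_recheck_work, msecs_to_jiffies(5000));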
-
Stanley Chu authored
For WriteBooster feature-related attributes, the index used by the query shall be the LUN ID if LU Dedicated buffer mode is enabled. Link: https://lore.kernel.org/r/20200522083212.4008-4-stanley.chu@mediatek.com Reviewed-by: Avri Altman <avri.altman@wdc.com> Reviewed-by: Asutosh Das <asutoshd@codeaurora.org> Signed-off-by: Stanley Chu <stanley.chu@mediatek.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Stanley Chu authored
According to the UFS specification, WriteBooster is officially supported by UFS 2.2. Since the UFS 2.2 specification has been finalized in JEDEC and such devices have also shown up in the market, modify the checking rule for ufshcd_wb_probe() to allow these devices to enable WriteBooster. Link: https://lore.kernel.org/r/20200522083212.4008-3-stanley.chu@mediatek.com Reviewed-by: Avri Altman <avri.altman@wdc.com> Reviewed-by: Asutosh Das <asutoshd@codeaurora.org> Signed-off-by: Stanley Chu <stanley.chu@mediatek.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Stanley Chu authored
The whole UFS host instance has been zero-initialized by scsi_host_alloc(), thus the UFS driver does not need to clear the "dev_info" member specifically in ufshcd_device_params_init(). Simply remove the unnecessary code. Link: https://lore.kernel.org/r/20200522083212.4008-2-stanley.chu@mediatek.com Reviewed-by: Avri Altman <avri.altman@wdc.com> Reviewed-by: Asutosh Das <asutoshd@codeaurora.org> Signed-off-by: Stanley Chu <stanley.chu@mediatek.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Jeffrey Hugo authored
ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can be called from atomic context in the following flow: ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors -> ufshcd_print_host_regs -> ufshcd_vops_dbg_register_dump -> ufs_qcom_dump_dbg_regs This causes a boot crash on the Lenovo Miix 630 when the interrupt is handled on the idle thread. Fix the issue by switching to udelay(). Link: https://lore.kernel.org/r/20200525204125.46171-1-jeffrey.l.hugo@gmail.com Fixes: 9c46b867 ("scsi: ufs-qcom: dump additional testbus registers") Reviewed-by: Bean Huo <beanhuo@micron.com> Reviewed-by: Avri Altman <avri.altman@wdc.com> Signed-off-by: Jeffrey Hugo <jeffrey.l.hugo@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
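A minimal illustration of the substitution (the delay values are placeholders, not the ones used by the driver): usleep_range() can sleep and so is invalid in interrupt/atomic context, while udelay() busy-waits and is safe there.

    #include <linux/delay.h>

    /* Before: may sleep; invalid when reached from the interrupt path. */
    usleep_range(1000, 1100);

    /* After: busy-wait; acceptable for short, bounded delays in atomic context. */
    udelay(1000);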
-
- 26 May, 2020 7 commits
-
-
Suganath Prabu S authored
For non-RDPQ mode, the driver allocates a single contiguous block of memory pool for all reply descriptor post queues and passes down a single address in the ReplyDescriptorPostQueueAddress field of the IOC Init Request Message to the firmware. So the reply_post queue will have only one entry, which holds the address of this single contiguous block of memory pool. While allocating the reply descriptor post queue pool, the driver should loop only once in non-RDPQ mode. But the driver is looping ioc->reply_queue_count times even though the reply_post queue's queue depth is only one in non-RDPQ mode. This leads to 'BUG: KASAN: use-after-free in base_alloc_rdpq_dma_pool'. The fix is to loop only once while allocating memory for the reply descriptor post queue in non-RDPQ mode. Fixes: 8012209e ("scsi: mpt3sas: Handle RDPQ DMA allocation in same 4G region") Link: https://lore.kernel.org/r/20200522103558.5710-1-suganath-prabu.subramani@broadcom.com Reported-by: Tomas Henzl <thenzl@redhat.com> Reviewed-by: Tomas Henzl <thenzl@redhat.com> Signed-off-by: Suganath Prabu S <suganath-prabu.subramani@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
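An illustrative loop bound for the fix (variable names are assumptions; num_rdpq_chunks is a placeholder, not the exact mpt3sas code):

    /* One contiguous pool in non-RDPQ mode, per-chunk pools in RDPQ mode. */
    int count = rdpq_mode ? num_rdpq_chunks : 1;
    int i;

    for (i = 0; i < count; i++) {
            /* dma_pool_zalloc() one reply descriptor post queue block here. */
    }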
-
Xiyu Yang authored
In order to create or activate a new node, lpfc_els_unsol_buffer() invokes lpfc_nlp_init() or lpfc_enable_node() or lpfc_nlp_get(); all of them return a reference to the specified lpfc_nodelist object in "ndlp" with an increased refcnt. When lpfc_els_unsol_buffer() returns, the local variable "ndlp" becomes invalid, so the refcount should be decreased to keep the refcount balanced. The reference counting issue happens in one exception handling path of lpfc_els_unsol_buffer(). When "ndlp" is in DEV_LOSS, the function forgets to decrease the refcnt increased by lpfc_nlp_init() or lpfc_enable_node() or lpfc_nlp_get(), causing a refcnt leak. Fix this issue by calling lpfc_nlp_put() when "ndlp" is in DEV_LOSS. Link: https://lore.kernel.org/r/1590416184-52592-1-git-send-email-xiyuyang19@fudan.edu.cn Reviewed-by: Daniel Wagner <dwagner@suse.de> Reviewed-by: James Smart <james.smart@broadcom.com> Signed-off-by: Xiyu Yang <xiyuyang19@fudan.edu.cn> Signed-off-by: Xin Tan <tanxin.ctf@gmail.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Dan Carpenter authored
The pr_debug() dereferences "cmd" after we already freed it by calling tcmu_free_cmd(cmd). The debug printk needs to be done earlier. Link: https://lore.kernel.org/r/20200523101129.GB98132@mwanda Fixes: 61fb2482 ("scsi: target: tcmu: Userspace must not complete queued commands") Reviewed-by: Mike Christie <mchristi@redhat.com> Reviewed-by: David Disseldorp <ddiss@suse.de> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
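The shape of the fix, sketched (the message text is a placeholder; the point is the ordering):

    /* Before: "cmd" is dereferenced after it has already been freed. */
    tcmu_free_cmd(cmd);
    pr_debug("completed cmd %p\n", cmd->se_cmd);    /* use-after-free */

    /* After: emit the debug message first, then free. */
    pr_debug("completed cmd %p\n", cmd->se_cmd);
    tcmu_free_cmd(cmd);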
-
Sudhakar Panneerselvam authored
vhost-scsi pre-allocates the maximum sg entries per command and if a command requires more than VHOST_SCSI_PREALLOC_SGLS entries, then that command is failed by it. This patch lets vhost communicate the max sg limit when it registers vhost_scsi_ops with TCM. With this change, TCM would report the max sg entries through "Block Limits" VPD page which will be typically queried by the SCSI initiator during device discovery. By knowing this limit, the initiator could ensure the maximum transfer length is less than or equal to what is reported by vhost-scsi. Link: https://lore.kernel.org/r/1590166317-953-1-git-send-email-sudhakar.panneerselvam@oracle.com Cc: Michael S. Tsirkin <mst@redhat.com> Cc: Jason Wang <jasowang@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Mike Christie <mchristi@redhat.com> Signed-off-by: Sudhakar Panneerselvam <sudhakar.panneerselvam@oracle.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Daniel Wagner authored
The function always returns QLA_SUCCESS and the caller qla2x00_start_sp() doesn't even evaluate the return value. So there is no point in returning a status. Link: https://lore.kernel.org/r/20200520130819.90625-1-dwagner@suse.de Reviewed-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Signed-off-by: Daniel Wagner <dwagner@suse.de> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Bart Van Assche authored
This was detected by building the qla2xxx driver with clang. See also commit a9083016 ("[SCSI] qla2xxx: Add ISP82XX support"). Link: https://lore.kernel.org/r/20200520040738.1017-1-bvanassche@acm.org Cc: Arun Easi <aeasi@marvell.com> Cc: Nilesh Javali <njavali@marvell.com> Cc: Himanshu Madhani <himanshu.madhani@oracle.com> Cc: Hannes Reinecke <hare@suse.de> Cc: Daniel Wagner <dwagner@suse.de> Cc: Martin Wilck <mwilck@suse.com> Cc: Roman Bolshakov <r.bolshakov@yadro.com> Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Reviewed-by: Daniel Wagner <dwagner@suse.de> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Bob Liu authored
This patch enables setting cpu affinity through "cpumask" for iscsi workqueues (iscsi_q_xx and iscsi_eh), so as to get performance isolation. The max number of active workers was changed from 1 to 2, because the "cpumask" of an ordered workqueue isn't allowed to change. Link: https://lore.kernel.org/r/20200505011908.15538-1-bob.liu@oracle.com Reviewed-by: Lee Duncan <lduncan@suse.com> Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
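One plausible allocation sketch (the exact flags, name format and host_no are assumptions, not the literal libiscsi change): WQ_SYSFS exposes the workqueue attributes, including "cpumask" for unbound workqueues, under /sys/devices/virtual/workqueue/<name>/, and a max_active of 2 avoids the ordered-workqueue restriction that forbids changing the cpumask.

    #include <linux/workqueue.h>

    struct workqueue_struct *wq;

    wq = alloc_workqueue("iscsi_q_%d", WQ_SYSFS | WQ_UNBOUND, 2, host_no);
    if (!wq)
            return -ENOMEM;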
-
- 20 May, 2020 8 commits
-
-
Douglas Gilbert authored
This patch is in response to a static analyser report from Dan Carpenter titled: "[bug report] scsi: scsi_debug: Add per_host_store option". This code may not clear the static analyzer reports, but may shed light on why they occur. Amongst other things, this driver has a table-driven SCSI command parser which also involves some C code. There are some invariants between the table entries and the corresponding C code (i.e. the resp_*() functions) that, if broken, may lead to a NULL dereference. And the report is valid, at least in the case of the PRE-FETCH command. Alas, that is not one of the cases that the static analyzer reported. In this particular corner case: when the fake_rw flag is set and the table entry for a "store"-accessing command does not have the required F_FAKE_RW flag set, do the following. Call BUG_ON() in devip2sip(), very close to a comment block explaining why it was called and how to fix it. checkpatch.pl complains about the BUG_ON() but there is no reasonable remedial action that can be taken at run time. This change allows the code reported by the static analyzer to be simplified. Comments were also added to the table flags (e.g. F_FAKE_RW) so developers who add commands might be more inclined to use them (properly). Link: https://lore.kernel.org/r/20200513013943.25285-1-dgilbert@interlog.com Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Douglas Gilbert <dgilbert@interlog.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Ye Bin authored
shost->tag_set is used many times; introduce a temporary variable tag_set instead of repeating &shost->tag_set. Link: https://lore.kernel.org/r/20200518074732.39679-1-yebin10@huawei.com Reviewed-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Ye Bin <yebin10@huawei.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Ye Bin authored
shost_for_each_device(sdev, shost) \ for ((sdev) = __scsi_iterate_devices((shost), NULL); \ (sdev); \ (sdev) = __scsi_iterate_devices((shost), (sdev))) When terminating shost_for_each_device() iteration with break or return, scsi_device_put() should be used to prevent stale scsi device references from being left behind. Link: https://lore.kernel.org/r/20200518074420.39275-1-yebin10@huawei.com Reviewed-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Ye Bin <yebin10@huawei.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
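Usage sketch of the pattern the fix enforces (example_is_wanted() is a hypothetical predicate; shost is assumed to be in scope): when leaving the loop early, drop the reference the iterator took on the current device.

    #include <scsi/scsi_device.h>

    struct scsi_device *sdev;

    shost_for_each_device(sdev, shost) {
            if (example_is_wanted(sdev)) {          /* hypothetical predicate */
                    scsi_device_put(sdev);          /* balance the iterator's get */
                    break;
            }
    }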
-
Bart Van Assche authored
Fix all endianness complaints reported by sparse (C=2) without affecting the behavior of the code on little endian CPUs. Link: https://lore.kernel.org/r/20200518211712.11395-16-bvanassche@acm.org Cc: Nilesh Javali <njavali@marvell.com> Cc: Quinn Tran <qutran@marvell.com> Cc: Martin Wilck <mwilck@suse.com> Cc: Daniel Wagner <dwagner@suse.de> Cc: Roman Bolshakov <r.bolshakov@yadro.com> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Daniel Wagner <dwagner@suse.de> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Bart Van Assche authored
Annotate members of FC protocol and firmware dump data structures as big endian. Annotate members of RISC control structures as little endian. Annotate mailbox registers as little endian. Annotate the mb[] arrays as CPU-endian because communication of the mb[] values with the hardware happens through the readw() and writew() functions. readw() converts from __le16 to u16 and writew() converts from u16 to __le16. Annotate 'handles' as CPU-endian because for the firmware these are opaque values. Link: https://lore.kernel.org/r/20200518211712.11395-15-bvanassche@acm.org CC: Hannes Reinecke <hare@suse.de> Cc: Nilesh Javali <njavali@marvell.com> Cc: Quinn Tran <qutran@marvell.com> Cc: Martin Wilck <mwilck@suse.com> Cc: Roman Bolshakov <r.bolshakov@yadro.com> Reviewed-by: Daniel Wagner <dwagner@suse.de> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Bart Van Assche authored
Link: https://lore.kernel.org/r/20200518211712.11395-14-bvanassche@acm.org Cc: Arun Easi <aeasi@marvell.com> Cc: Nilesh Javali <njavali@marvell.com> Cc: Martin Wilck <mwilck@suse.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Daniel Wagner <dwagner@suse.de> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Bart Van Assche authored
Casting a pointer to void * and relying on an implicit cast from void * to uint16_t or uint32_t suppresses sparse warnings about endianness. Hence cast explicitly to uint16_t and uint32_t. Additionally, remove superfluous void * casts. Link: https://lore.kernel.org/r/20200518211712.11395-13-bvanassche@acm.org Cc: Arun Easi <aeasi@marvell.com> Cc: Nilesh Javali <njavali@marvell.com> Cc: Daniel Wagner <dwagner@suse.de> Cc: Himanshu Madhani <himanshu.madhani@oracle.com> Cc: Martin Wilck <mwilck@suse.com> Cc: Roman Bolshakov <r.bolshakov@yadro.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Daniel Wagner <dwagner@suse.de> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-
Bart Van Assche authored
This was suggested by Daniel Wagner. Link: https://lore.kernel.org/r/20200518211712.11395-12-bvanassche@acm.org Cc: Nilesh Javali <njavali@marvell.com> Cc: Quinn Tran <qutran@marvell.com> Cc: Martin Wilck <mwilck@suse.com> Cc: Roman Bolshakov <r.bolshakov@yadro.com> Reviewed-by: Daniel Wagner <dwagner@suse.de> Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com> Reviewed-by: Arun Easi <aeasi@marvell.com> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
-