- 12 Apr, 2018 40 commits
-
Miquel Raynal authored
[ Upstream commit 12663b44 ] Reads from NAND devices usually trigger bitflips; this is expected behavior. As long as the number of corrected bitflips stays under a given threshold, the MTD core returns 0. However, when the number of corrected bitflips is above this same threshold, -EUCLEAN is returned to inform the upper layer that this block is slightly dying and the ECC engine will soon be overwhelmed, so action should be taken to move the data out of it. This particular condition should not be treated like an error and the test should continue. Signed-off-by: Miquel Raynal <miquel.raynal@free-electrons.com> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
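A minimal sketch of the read handling this implies for a test loop; mtd_read() and mtd_is_bitflip() are the real MTD core helpers, but the surrounding function is illustrative only:

    #include <linux/mtd/mtd.h>

    /* Illustrative: -EUCLEAN means "bitflips corrected above the
     * threshold" - a warning that the block is aging, not a failure,
     * so the test keeps going with the (valid) data it read. */
    static int test_read_block(struct mtd_info *mtd, loff_t addr,
                               size_t len, u_char *buf)
    {
            size_t retlen;
            int err = mtd_read(mtd, addr, len, &retlen, buf);

            if (mtd_is_bitflip(err))        /* err == -EUCLEAN */
                    err = 0;
            if (err)
                    return err;
            return retlen == len ? 0 : -EIO;
    }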
-
Hans de Goede authored
[ Upstream commit faec44b6 ] We should not try to do any i2c transfers before the controller is resumed (which happens before our resume method gets called). So we need to disable our IRQ while suspended to enforce this. The code paths for devices with GPIOs for the int and reset pins already disable the IRQ through goodix_free_irq(). This commit also disables the IRQ while suspended for devices without GPIOs for the int and reset pins. This fixes the i2c bus sometimes getting stuck after a suspend/resume, causing the touchscreen to sometimes not work after a suspend/resume. This has been tested on a GPD Pocket device. BugLink: https://github.com/nexus511/gpd-ubuntu-packages/issues/10 BugLink: https://www.reddit.com/r/GPDPocket/comments/7niut2/fix_for_broken_touch_after_resume_all_linux/ Tested-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Reviewed-by: Bastien Nocera <hadess@hadess.net> Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
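The general shape of such a change, as a hedged sketch (function names and the suspend/resume pair below are simplified and illustrative; the real driver also handles the GPIO/reset variants):

    #include <linux/i2c.h>
    #include <linux/interrupt.h>

    static int __maybe_unused goodix_ts_suspend(struct device *dev)
    {
            struct i2c_client *client = to_i2c_client(dev);

            /* No i2c transfer may be attempted from the IRQ handler
             * while the i2c controller itself is suspended. */
            disable_irq(client->irq);
            return 0;
    }

    static int __maybe_unused goodix_ts_resume(struct device *dev)
    {
            struct i2c_client *client = to_i2c_client(dev);

            enable_irq(client->irq);
            return 0;
    }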
-
Nathan Fontenot authored
[ Upstream commit 09fb35ea ] Initiating a kdump via the command line can cause a pending interrupt to be handled by the ibmvnic driver when initializing the sub-CRQ irqs during driver initialization.

NIP [d000000000ca34f0] ibmvnic_interrupt_rx+0x40/0xd0 [ibmvnic]
LR [c000000008132ef0] __handle_irq_event_percpu+0xa0/0x2f0
Call Trace:
[c000000047fcfde0] [c000000008132ef0] __handle_irq_event_percpu+0xa0/0x2f0
[c000000047fcfea0] [c00000000813317c] handle_irq_event_percpu+0x3c/0x90
[c000000047fcfee0] [c00000000813323c] handle_irq_event+0x6c/0xd0
[c000000047fcff10] [c0000000081385e0] handle_fasteoi_irq+0xf0/0x250
[c000000047fcff40] [c0000000081320a0] generic_handle_irq+0x50/0x80
[c000000047fcff60] [c000000008014984] __do_irq+0x84/0x1d0
[c000000047fcff90] [c000000008027564] call_do_irq+0x14/0x24
[c00000003c92af00] [c000000008014b70] do_IRQ+0xa0/0x120
[c00000003c92af50] [c000000008002594] hardware_interrupt_common+0x114/0x180

Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Andy Shevchenko authored
[ Upstream commit 2a609abe ] On Intel Edison the Broadcom Wi-Fi card, which is connected to SDIO, requires 2.0v, while the host, according to Intel Merrifield TRM, supports 1.8v supply only. The card announces itself as mmc2: new ultra high speed DDR50 SDIO card at address 0001 Introduce a custom OCR mask for SDIO host controller on Intel Merrifield and add a special case to sdhci_set_power_noreg() to override 2.0v supply by enforcing 1.8v power choice. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jiri Bohac authored
[ Upstream commit 2a3e83c6 ] On machines where the GART aperture is mapped over physical RAM, /proc/vmcore contains the remapped range and reading it may cause hangs or reboots. In the past, the GART region was added into the resource map, implemented by commit 56dd669a ("[PATCH] Insert GART region into resource map"). However, inserting the iomem_resource from the early GART code caused resource conflicts with some AGP drivers (bko#72201), which got avoided by reverting the patch in commit 707d4eef ("Revert [PATCH] Insert GART region into resource map"). This revert introduced the /proc/vmcore bug. The vmcore ELF header is either prepared by the kernel (when using the kexec_file_load syscall) or by the kexec userspace (when using the kexec_load syscall). Since we no longer have the GART iomem resource, the userspace kexec has no way of knowing which region to exclude from the ELF header. Changes from v1 of this patch: Instead of excluding the aperture from the ELF header, this patch makes /proc/vmcore return zeroes in the second kernel when attempting to read the aperture region. This is done by reusing the gart_oldmem_pfn_is_ram infrastructure originally intended to exclude XEN ballooned memory. This works for both the kexec_file_load and kexec_load syscalls. [Note that the GART region is the same in the first and second kernels: regardless of whether the first kernel fixed up the northbridge/bios setting and mapped the aperture over physical memory, the second kernel finds the northbridge properly configured by the first kernel and the aperture never overlaps with e820 memory because the second kernel has a fake e820 map created from the crashkernel memory regions. Thus, the second kernel keeps the aperture address/size as configured by the first kernel.] register_oldmem_pfn_is_ram can only register one callback and returns an error if the callback has been registered already. Since XEN used to be the only user of this function, it never checks the return value. Now that we have more than one user, I added a WARN_ON just in case agp, XEN, or any other future user of register_oldmem_pfn_is_ram were to step on each other's toes. Fixes: 707d4eef ("Revert [PATCH] Insert GART region into resource map") Signed-off-by: Jiri Bohac <jbohac@suse.cz> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Baoquan He <bhe@redhat.com> Cc: Toshi Kani <toshi.kani@hpe.com> Cc: David Airlie <airlied@linux.ie> Cc: yinghai@kernel.org Cc: joro@8bytes.org Cc: kexec@lists.infradead.org Cc: Borislav Petkov <bp@alien8.de> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Dave Young <dyoung@redhat.com> Cc: Vivek Goyal <vgoyal@redhat.com> Link: https://lkml.kernel.org/r/20180106010013.73suskgxm7lox7g6@dwarf.suse.cz Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
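A hedged sketch of the callback mechanism described above; register_oldmem_pfn_is_ram() is the real /proc/vmcore hook, while the aperture variables and the registration helper here are illustrative:

    #include <linux/crash_dump.h>

    /* Illustrative globals describing the GART aperture in PFNs. */
    static unsigned long aperture_pfn_start, aperture_page_count;

    /* Return 0 for pages inside the aperture so /proc/vmcore reads
     * zeroes instead of touching the remapped GART range. */
    static int gart_mem_pfn_is_ram(unsigned long pfn)
    {
            return likely(pfn < aperture_pfn_start ||
                          pfn >= aperture_pfn_start + aperture_page_count);
    }

    static void gart_exclude_from_vmcore(void)
    {
            /* Only one callback can be registered; warn if another
             * user (e.g. XEN) already claimed the slot. */
            WARN_ON(register_oldmem_pfn_is_ram(&gart_mem_pfn_is_ram));
    }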
-
Wei Yongjun authored
[ Upstream commit 76e28f5f ] Fix to return error code -ENOMEM from the error handling case instead of 0, as done elsewhere in this function. Fixes: 5a2a3002 ("gpio: Add gpio driver support for ThunderX and OCTEON-TX") Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: David Daney <david.daney@cavium.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Parav Pandit authored
[ Upstream commit 89838118 ] The 'if' logic in ucma_query_path was broken when OPA was introduced and started to treat RoCE paths as OPA paths. Invert the logic of the 'if' so only OPA paths are treated as OPA paths. Otherwise the path records returned to rdma_cma users are mangled when in RoCE mode. Fixes: 57520751 ("IB/SA: Add OPA path record type") Signed-off-by: Parav Pandit <parav@mellanox.com> Reviewed-by: Mark Bloch <markb@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Shivasharan S authored
[ Upstream commit f3f7920b ] Issue - The driver returns DID_NO_CONNECT when unload is in progress, indicated using the instance->unload flag. In case of dynamic unload of the driver, this flag is set before calling scsi_remove_host(). While doing manual driver unload, the user will see lots of prints for the Sync Cache command with DID_NO_CONNECT status. Fix - Set the instance->unload flag after scsi_remove_host(). Allow the device removal process to be completed and do not block any command before that. SCSI commands (like SYNC_CACHE) received by the driver during unload (as part of scsi_remove_host) will be submitted further down to the drives. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Shivasharan S authored
[ Upstream commit 7ada701d ] Currently the driver does not validate the ldcount provided by firmware. If the value is invalid, fail RAID map validation accordingly. This issue is rarely hit in the field and is fixed as part of code review. Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com> Signed-off-by: Shivasharan S <shivasharan.srikanteshwara@broadcom.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Arjun Vynipadath authored
[ Upstream commit ea0a4210 ] We'd come in with SGE_FL_BUFFER_SIZE[0] and [1] both equal to 64KB and the extant logic would flag that as an error. This was already fixed in cxgb4 driver with "92ddcc7b cxgb4: Fix some small bugs in t4_sge_init_soft() when our Page Size is 64KB". Original Work by: Casey Leedom <leedom@chelsio.com> Signed-off-by: Arjun Vynipadath <arjun@chelsio.com> Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jacob Keller authored
[ Upstream commit 44b034b4 ] In i40evf_reset_task we use netif_running() to determine whether or not the device is currently up. This allows us to properly free queue memory and shut down things before we request the hardware reset. It turns out that we cannot be guaranteed of netif_running() returning false until the device is fully up, as the kernel core code sets __LINK_STATE_START prior to calling .ndo_open. Since we're not holding the rtnl_lock(), it's possible that the driver's i40evf_open handler function is currently being called while we're resetting. We can't simply hold the rtnl_lock() while checking netif_running() as this could cause a deadlock with the i40evf_open() function. Additionally, we can't avoid the deadlock by holding the rtnl_lock() over the whole reset path, as this essentially serializes all resets, and can cause massive delays if we have multiple VFs on a system. Instead, let's just check our own internal __I40EVF_RUNNING state field. This allows us to ensure that the state is correct and is only set after we've finished bringing the device up. Without this change we might free data structures about device queues and other memory before they've been fully allocated. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
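The essence of the change, as a hedged sketch (the tear-down helper is a placeholder, not the driver's real function name):

    /* before: racy, because __LINK_STATE_START is set before
     * i40evf_open() has finished bringing the device up */
    if (netif_running(adapter->netdev))
            i40evf_teardown_queues(adapter);    /* illustrative helper */

    /* after: trust only the driver's own state machine */
    if (adapter->state == __I40EVF_RUNNING)
            i40evf_teardown_queues(adapter);    /* illustrative helper */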
-
Stephen Hemminger authored
[ Upstream commit 06028d15 ] In order for userspace application to signal host, it needs the host to support the monitor page property. Check for the flag and fail if this is not supported. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christophe JAILLET authored
[ Upstream commit 68fa24f9 ] We should not call edac_mc_del_mc() if a corresponding call to edac_mc_add_mc() has not been performed yet. So here, we should go to err instead of err2 to branch at the right place of the error handling path. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Cc: linux-edac <linux-edac@vger.kernel.org> Link: http://lkml.kernel.org/r/20180107205400.14068-1-christophe.jaillet@wanadoo.fr Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
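A hedged sketch of the label ordering the commit is about (probe body heavily abridged, helper names illustrative):

    ret = setup_controller_hw(mci);         /* illustrative step */
    if (ret)
            goto err;       /* edac_mc_add_mc() not called yet */

    ret = edac_mc_add_mc(mci);
    if (ret)
            goto err;       /* likewise: nothing to del_mc() */

    ret = register_controller_irqs(mci);    /* illustrative step */
    if (ret)
            goto err2;      /* now edac_mc_del_mc() is required */

    return 0;

    err2:
            edac_mc_del_mc(mci->pdev);
    err:
            edac_mc_free(mci);
            return ret;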
-
Paolo Valente authored
[ Upstream commit 52257ffb ] For each pair [device for which bfq is selected as I/O scheduler, group in blkio/io], bfq maintains a corresponding bfq group. Each such bfq group contains a set of async queues, with each async queue created on demand, i.e., when some I/O request arrives for it. On creation, an async queue gets an extra reference, to make sure that the queue is not freed as long as its bfq group exists. Accordingly, to allow the queue to be freed after the group has exited, this extra reference must be released on group exit. The above holds also for a bfq root group, i.e., for the bfq group corresponding to the root blkio/io group for a given device. Yet, by mistake, the references to the existing async queues of a root group are not released when the latter exits. This causes a memory leak when the instance of bfq for a given device exits. In a similar vein, bfqg_stats_xfer_dead is not executed for a root group. This commit fixes bfq_pd_offline so that the latter executes the above missing operations for a root group too. Reported-by: Holger Hoffstätte <holger@applied-asynchrony.com> Reported-by: Guoqing Jiang <gqjiang@suse.com> Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com> Signed-off-by: Davide Ferrari <davideferrari8@gmail.com> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tony Lindgren authored
[ Upstream commit ea3d8465 ] Some devices have the control dlci stay in ADM mode instead of the UA mode. This can be seen at least on droid 4 when trying to open the ts 27.010 mux port. Enabling n_gsm debug mode shows the control dlci always respond with DM to SABM instead of UA:

# modprobe n_gsm debug=0xff
# ldattach -d GSM0710 /dev/ttyS0 &
gsmld_output: 00000000: f9 03 3f 01 1c f9 --> 0) C: SABM(P)
gsmld_receive: 00000000: f9 03 1f 01 36 f9 <-- 0) C: DM(P)
...
$ minicom -D /dev/gsmtty1
minicom: cannot open /dev/gsmtty1: No error information
$ strace minicom -D /dev/gsmtty1
...
open("/dev/gsmtty1", O_RDWR|O_NOCTTY|O_NONBLOCK|O_LARGEFILE) = -1 EL2HLT

Note that this is a different issue from other n_gsm -EL2HLT issues such as timeouts when the control dlci does not respond at all. The ADM mode seems to be quite common according to the "RF Wireless World" article "GSM Issue-UE sends SABM and gets a DM response instead of UA response": This issue is most commonly observed in GSM networks wherein the UE sends SABM and expects the network to send a UA response, but it ends up receiving a DM response from the network. SABM stands for Set Asynchronous Balanced Mode, UA stands for Unnumbered Acknowledge and DM stands for Disconnected Mode. An RLP entity can be in one of two modes:

- Asynchronous Balanced Mode (ABM)
- Asynchronous Disconnected Mode (ADM)

Currently the Linux kernel closes the control dlci after several retries in gsm_dlci_t1() on DM. This causes n_gsm /dev/gsmtty ports to produce error code -EL2HLT when trying to open them, as the closing of the control dlci has already set gsm->dead. Let's fix the issue by allowing the control dlci to stay in ADM mode after the retries so the /dev/gsmtty ports can be opened and used. It seems that it might take several attempts to get any response from the control dlci, so it's best to allow ADM mode only after the SABM retries are done. Note that for droid 4, additional patches to mux the ttyS0 pins and to toggle RTS gpio_149 to wake up the mdm6600 modem are also needed to use n_gsm. And the mdm6600 modem needs to be powered on. Cc: linux-serial@vger.kernel.org Cc: Alan Cox <alan@llwyncelyn.cymru> Cc: Jiri Prchal <jiri.prchal@aksignal.cz> Cc: Jiri Slaby <jslaby@suse.cz> Cc: Marcel Partap <mpartap@gmx.net> Cc: Michael Scott <michael.scott@linaro.org> Cc: Peter Hurley <peter@hurleysoftware.com> Cc: Russ Gorby <russ.gorby@intel.com> Cc: Sascha Hauer <s.hauer@pengutronix.de> Cc: Sebastian Reichel <sre@kernel.org> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ming Lei authored
[ Upstream commit 8ab0b7dc ] HW queues may be unmapped in some cases, such as blk_mq_update_nr_hw_queues(), then we need to check it before calling blk_mq_tag_idle(), otherwise the following kernel oops can be triggered, so fix it by checking if the hw queue is unmapped since it doesn't make sense to idle the tags any more after hw queues are unmapped.

[ 440.771298] Workqueue: nvme-wq nvme_rdma_del_ctrl_work [nvme_rdma]
[ 440.779104] task: ffff894bae755ee0 ti: ffff893bf9bc8000 task.ti: ffff893bf9bc8000
[ 440.788359] RIP: 0010:[<ffffffffb730e2b4>] [<ffffffffb730e2b4>] __blk_mq_tag_idle+0x24/0x40
[ 440.798697] RSP: 0018:ffff893bf9bcbd10 EFLAGS: 00010286
[ 440.805538] RAX: 0000000000000000 RBX: ffff895bb131dc00 RCX: 000000000000011f
[ 440.814426] RDX: 00000000ffffffff RSI: 0000000000000120 RDI: ffff895bb131dc00
[ 440.823301] RBP: ffff893bf9bcbd10 R08: 000000000001b860 R09: 4a51d361c00c0000
[ 440.832193] R10: b5907f32b4cc7003 R11: ffffd6cabfb57000 R12: ffff894bafd1e008
[ 440.841091] R13: 0000000000000001 R14: ffff895baf770000 R15: 0000000000000080
[ 440.849988] FS: 0000000000000000(0000) GS:ffff894bbdcc0000(0000) knlGS:0000000000000000
[ 440.859955] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 440.867274] CR2: 0000000000000008 CR3: 000000103d098000 CR4: 00000000001407e0
[ 440.876169] Call Trace:
[ 440.879818] [<ffffffffb7309d68>] blk_mq_exit_hctx+0xd8/0xe0
[ 440.887051] [<ffffffffb730dc40>] blk_mq_free_queue+0xf0/0x160
[ 440.894465] [<ffffffffb72ff679>] blk_cleanup_queue+0xd9/0x150
[ 440.901881] [<ffffffffc08a802b>] nvme_ns_remove+0x5b/0xb0 [nvme_core]
[ 440.910068] [<ffffffffc08a811b>] nvme_remove_namespaces+0x3b/0x60 [nvme_core]
[ 440.919026] [<ffffffffc08b817b>] __nvme_rdma_remove_ctrl+0x2b/0xb0 [nvme_rdma]
[ 440.928079] [<ffffffffc08b8237>] nvme_rdma_del_ctrl_work+0x17/0x20 [nvme_rdma]
[ 440.937126] [<ffffffffb70ab58a>] process_one_work+0x17a/0x440
[ 440.944517] [<ffffffffb70ac3a8>] worker_thread+0x278/0x3c0
[ 440.951607] [<ffffffffb70ac130>] ? manage_workers.isra.24+0x2a0/0x2a0
[ 440.959760] [<ffffffffb70b352f>] kthread+0xcf/0xe0
[ 440.966055] [<ffffffffb70b3460>] ? insert_kthread_work+0x40/0x40
[ 440.973715] [<ffffffffb76d8658>] ret_from_fork+0x58/0x90
[ 440.980586] [<ffffffffb70b3460>] ? insert_kthread_work+0x40/0x40
[ 440.988229] Code: 5b 41 5c 5d c3 66 90 0f 1f 44 00 00 48 8b 87 20 01 00 00 f0 0f ba 77 40 01 19 d2 85 d2 75 08 c3 0f 1f 80 00 00 00 00 55 48 89 e5 <f0> ff 48 08 48 8d 78 10 e8 7f 0f 05 00 5d c3 0f 1f 00 66 2e 0f
[ 441.011620] RIP [<ffffffffb730e2b4>] __blk_mq_tag_idle+0x24/0x40
[ 441.019301] RSP <ffff893bf9bcbd10>
[ 441.024052] CR2: 0000000000000008

Reported-by: Zhang Yi <yizhan@redhat.com> Tested-by: Zhang Yi <yizhan@redhat.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
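The guard itself is tiny; a hedged sketch of the idea (both helpers are blk-mq internals of that kernel generation):

    /* in blk_mq_exit_hctx(): never idle tags for an unmapped hctx */
    if (blk_mq_hw_queue_mapped(hctx))
            blk_mq_tag_idle(hctx);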
-
chenxiang authored
[ Upstream commit affc6778 ] The status of a SAS PHY is kept in sas_phy->enabled. There is an issue where the status of a remote SAS PHY may be initialized incorrectly: if we disable a remote SAS PHY through the sysfs interface (such as echo 0 > /sys/class/sas_phy/phy-1:0:0/enable) and then reboot the system, we will find that the status of the remote SAS PHY which was disabled before is 1 (cat /sys/class/sas_phy/phy-1:0:0/enable). But actually the remote SAS PHY is disabled and the attached device is not found. In the SAS protocol, the NEGOTIATED LOGICAL LINK RATE field of the DISCOVER response is 0x1 when a remote SAS PHY is disabled. So initialize sas_phy->enabled according to the value of the NEGOTIATED LOGICAL LINK RATE field. Signed-off-by: chenxiang <chenxiang66@hisilicon.com> Reviewed-by: John Garry <john.garry@huawei.com> Signed-off-by: Jason Yan <yanaijie@huawei.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jason Yan authored
[ Upstream commit 2b23d950 ] The intended purpose here was to goto out if smp_execute_task() returned an error. Obviously something got screwed up. We will never get these link error statistics below:

~:/sys/class/sas_phy/phy-1:0:12 # cat invalid_dword_count
0
~:/sys/class/sas_phy/phy-1:0:12 # cat running_disparity_error_count
0
~:/sys/class/sas_phy/phy-1:0:12 # cat loss_of_dword_sync_count
0
~:/sys/class/sas_phy/phy-1:0:12 # cat phy_reset_problem_count
0

Obviously we should goto the error handler if smp_execute_task() returns non-zero. Fixes: 2908d778 ("[SCSI] aic94xx: new driver") Signed-off-by: Jason Yan <yanaijie@huawei.com> CC: John Garry <john.garry@huawei.com> CC: chenqilin <chenqilin2@huawei.com> CC: chenxiang <chenxiang66@hisilicon.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
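A hedged sketch of the intended control flow (request/response setup and the counter parsing are omitted; smp_execute_task() is the libsas helper named above):

    res = smp_execute_task(dev, req, RPEL_REQ_SIZE,
                           resp, RPEL_RESP_SIZE);
    if (res)
            goto out;       /* previously execution fell through and
                             * "parsed" a response that never arrived,
                             * leaving every counter reported as 0 */

    /* ... extract the four link error counters from resp ... */

    out:
            kfree(req);
            kfree(resp);
            return res;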
-
Jason Yan authored
[ Upstream commit 4a491b1a ] We've got a memory leak with the following producer: while true; do cat /sys/class/sas_phy/phy-1:0:12/invalid_dword_count >/dev/null; done The buffer req is allocated and not freed after we return. Fix it. Fixes: 2908d778 ("[SCSI] aic94xx: new driver") Signed-off-by: Jason Yan <yanaijie@huawei.com> CC: John Garry <john.garry@huawei.com> CC: chenqilin <chenqilin2@huawei.com> CC: chenxiang <chenxiang66@hisilicon.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tang Junhui authored
[ Upstream commit 4eca1cb2 ] In a scenario where there are some flash only volumes and some cached devices, when many tasks request these devices in writeback mode, the write IOs may fall into the same bucket as below:

| cached data | flash data | cached data | cached data | flash data |

Then, after writeback of these cached devices, the bucket would look like this:

| free | flash data | free | free | flash data |

So there is a lot of free space in this bucket, but since data of the flash only volumes still exists, the bucket is not reclaimable, which wastes bucket space. In this patch, we segregate flash only volume write streams from cached devices, so data from flash only volumes and cached devices can be stored in different buckets. Compared to the v1 patch, this patch does not add an additional open bucket list; it merely tries its best to segregate flash only volume write streams from cached devices. Sectors of flash only volumes may still be mixed with dirty sectors of a cached device, but their number is very small. [mlyle: fixed commit log formatting, permissions, line endings] Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn> Reviewed-by: Michael Lyle <mlyle@lyle.org> Signed-off-by: Michael Lyle <mlyle@lyle.org> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Tang Junhui authored
[ Upstream commit 8d29c442 ] Currently, when a cached device is detached from the cache, the writeback thread is not stopped and the writeback_rate_update work is not canceled. For example, after the following command:

echo 1 > /sys/block/sdb/bcache/detach

you can still see the writeback thread. If you then attach the device to the cache again, bcache will create another writeback thread; for example, after the command below:

echo ba0fb5cd-658a-4533-9806-6ce166d883b9 > /sys/block/sdb/bcache/attach

you will see 2 writeback threads. This patch stops the writeback thread and cancels the writeback_rate_update work when a cached device is detached from the cache. Compared with patch v1, this v2 patch moves the code down into the register lock for safety in case of any future changes, as Coly and Mike suggested. [edit by mlyle: commit log spelling/formatting] Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn> Reviewed-by: Michael Lyle <mlyle@lyle.org> Signed-off-by: Michael Lyle <mlyle@lyle.org> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
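A hedged sketch of the teardown this describes (field names follow the struct cached_dev layout of that era and are assumptions here):

    /* in the detach path, inside the register lock */
    cancel_delayed_work_sync(&dc->writeback_rate_update);
    if (dc->writeback_thread)
            kthread_stop(dc->writeback_thread);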
-
Rui Hua authored
[ Upstream commit b221fc13 ] The read request might meet an error when searching the btree, but the error was not handled in cache_lookup(), and this kind of metadata failure will not go into cached_dev_read_error(); finally, the upper layer will receive bi_status=0. In this patch we judge the metadata error by the return value of bch_btree_map_keys(). There are two potential paths that give rise to the error:

1. Because the btree is not totally cached in memory, we may get an error when reading a btree node from the cache device (see bch_btree_node_get()); the likely errno is -EIO or -ENOMEM.
2. When a read miss happens, bch_btree_insert_check_key() will be called to insert a "replace_key" into the btree (see cached_dev_cache_miss(), just for doing preparatory work before inserting the missed data into the cache device); a failure can also happen in this situation, and the likely errno is -ENOMEM.

bch_btree_map_keys() will return MAP_DONE in the normal scenario, but we will get either -EIO or -ENOMEM in the above two cases. If this happened, we should NOT recover data from the backing device (when the cache device is dirty) because we don't know whether the bkeys the read request covers are all clean. And after that happened, s->iop.status is still at its initial value (0) before we submit s->bio.bio; we set it to BLK_STS_IOERR so it can go into cached_dev_read_error() and finally be passed to the upper layer, or recovered by rereading from the backing device. [edit by mlyle: patch formatting, word-wrap, comment spelling, commit log format] Signed-off-by: Hua Rui <huarui.dev@gmail.com> Reviewed-by: Michael Lyle <mlyle@lyle.org> Signed-off-by: Michael Lyle <mlyle@lyle.org> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Fuyun Liang authored
[ Upstream commit 5bad95a1 ] When changing the MTU, the new MTU must also be set on the netdevice. Fixes: a8e8b7ff ("net: hns3: Add support to change MTU in HNS3 hardware") Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
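The omission is the classic .ndo_change_mtu pattern; a hedged sketch (the function name and the hardware-configuration helper are illustrative):

    #include <linux/netdevice.h>

    static int hns3_change_mtu_sketch(struct net_device *netdev, int new_mtu)
    {
            int ret;

            ret = hw_set_mtu(netdev, new_mtu);      /* illustrative hw call */
            if (ret)
                    return ret;

            netdev->mtu = new_mtu;  /* this assignment was the missing piece */
            return 0;
    }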
-
Jian Shen authored
[ Upstream commit 57ffee73 ] The member "stats_offset" was designed to indicate the offset of each member of struct ring_stats within struct hns3_enet_ring, but the offset of the member inside struct ring_stats itself was forgotten. Fixes: 496d03e9 ("net: hns3: Add Ethtool support to HNS3 driver") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jian Shen authored
[ Upstream commit 94bfaafa ] A wrong loop index was used while querying the statistics data of the tqps, which may cause a call trace. Fixes: 496d03e9 ("net: hns3: Add Ethtool support to HNS3 driver") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jian Shen authored
[ Upstream commit d2a5dca8 ] The number of dropped tx/rx packets of each tqp should also be counted into the total dropped tx/rx packet numbers. Fixes: 76ad4f0e ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC") Signed-off-by: Jian Shen <shenjian15@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Daniel Jurgens authored
[ Upstream commit 734dc065 ] There are two potential problems with the existing implementation:

1. Enable and disable can race after the atomic operations.
2. If a command fails the refcount is left in an inconsistent state.

Introduce a lock and perform error checking. Fixes: a6f7d2af ("net/mlx5: Add support for multiple RoCE enable") Signed-off-by: Daniel Jurgens <danielj@mellanox.com> Reviewed-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Colin Ian King authored
[ Upstream commit ac1181c6 ] Currently the less than zero error check on ret is incorrect as it is checking a far earlier ret assignment rather than the return from the call to wl1251_acx_arp_ip_filter. Fix this by adding in the missing assignment. Detected by CoverityScan, CID#1164835 ("Logically dead code") Fixes: 204cc5c4 ("wl1251: implement hardware ARP filtering") Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
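Sketched, the fix is a single missing assignment (surrounding function abridged; wl1251_acx_arp_ip_filter() is the call named in the commit, its arguments here are assumptions):

    /* before: 'ret' still held the result of an earlier call,
     * so the error check below was dead code */
    ret = wl1251_acx_arp_ip_filter(wl, enable, addr);
    if (ret < 0)
            return ret;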
-
Stanislaw Gruszka authored
[ Upstream commit 6dd80efd ] Pausing the queue without checking the threshold is racy with the txdone path. Moreover, we do not need to pause the queue on any error, but only if the queue is full - in the case when we send an RTS frame (other cases of an almost full queue are already handled in rt2x00queue_write_tx_frame()). The patch fixes a theoretically possible problem of pausing an empty queue. Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Tested-by: Enrico Mioso <mrkiko.rs@gmail.com> Signed-off-by: Kalle Valo <kvalo@codeaurora.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hans de Goede authored
[ Upstream commit 165c2357 ] Properly stop any work we may have queued on probe errors / remove. Rather than adding a driver remove callback for this, and goto-style error handling to probe, use a devm_action for this. The devm_action gets registered before we register any of the extcon notifiers which may queue the work; devm does cleanup in reverse order, so this ensures that the notifiers are removed before we cancel the work. Reviewed-by: Chen-Yu Tsai <wens@csie.org> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.co.uk> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christophe JAILLET authored
[ Upstream commit eaadb1ca ] In some error handling paths, an error code is assigned to 'ret'. However, the function always returns 0. Fix it and return the error code if such an error path is taken. Fixes: 3d9ff346 ("ASoC: Intel: sst: add stream operations") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
NeilBrown authored
[ Upstream commit dbeccabf ] Calling smp_processor_id() without disabling preemption triggers a warning (if CONFIG_DEBUG_PREEMPT). I think the result of cfs_cpt_current() is only used as a hint for load balancing, rather than as a precise and stable indicator of the current CPU. So it doesn't need to be called with preemption disabled. So disable preemption inside cfs_cpt_current() to silence the warning. Signed-off-by: NeilBrown <neilb@suse.com> Reviewed-by: Andreas Dilger <andreas.dilger@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
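The pattern amounts to bracketing the CPU lookup with preemption disabled; a generic hedged sketch (the table walk itself is elided):

    int cpu, cpt;

    preempt_disable();
    cpu = smp_processor_id();       /* safe now: no DEBUG_PREEMPT splat */
    cpt = cpt_of_cpu(cpu);          /* illustrative table lookup */
    preempt_enable();

    /* 'cpt' is only a load-balancing hint, so it is fine if the task
     * migrates right after preemption is re-enabled. */
    return cpt;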
-
Jin Yao authored
[ Upstream commit 40c39e30 ] When enabling the '-b' option in perf record, for example,

perf record -b ...
perf report

and then browsing the annotate browser from perf report (press 'A'), it would fail (the annotate browser can't be displayed). It's because the '.add_entry_cb' op of struct report is overwritten by hist_iter__branch_callback() in builtin-report.c. But this function doesn't do things like mapping symbols and sources. So next, do_annotate() will return directly:

notes = symbol__annotation(act->ms.sym);
if (!notes->src)
        return 0;

This patch adds the lost code to hist_iter__branch_callback (refer to hist_iter__report_callback). v2: Fix a crash bug when performing 'perf report --stdio'. The reason is that we init the symbol annotation only in browser mode; it doesn't allocate/init resources for stdio mode. So now in hist_iter__branch_callback(), it will return directly if it's not in browser mode. Signed-off-by: Jin Yao <yao.jin@linux.intel.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1514284963-18587-1-git-send-email-yao.jin@linux.intel.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Javier Martinez Canillas authored
[ Upstream commit 095531f8 ] According to the TPM Library Specification, a TPM device must do a command header validation before processing and return a TPM_RC_COMMAND_CODE code if the command is not implemented. So user-space will expect to handle that response as an error. But if the in-kernel resource manager is used (/dev/tpmrm?), an -EINVAL errno code is returned instead if the command isn't implemented. This confuses userspace since it doesn't expect that error value. This also isn't consistent with the behavior when not using TPM spaces and accessing the TPM directly (/dev/tpm?). In this case, the command is sent to the TPM even when not implemented and the TPM responds with an error. Instead of returning an -EINVAL errno code when the tpm_validate_command() function fails, synthesize a TPM command response so user-space can get a TPM_RC_COMMAND_CODE as expected when a chip doesn't implement the command. The TPM only sets 12 of the 32 bits in the TPM_RC response, so the TSS and TAB specifications define that higher layers in the stack should use some of the unused 20 bits to specify from which level of the stack the error is coming from. Since the TPM_RC_COMMAND_CODE response code is sent by the kernel resource manager, set the error level to the TAB/RM layer so user-space is aware of this. Suggested-by: Jason Gunthorpe <jgg@ziepe.ca> Signed-off-by: Javier Martinez Canillas <javierm@redhat.com> Reviewed-by: William Roberts <william.c.roberts@intel.com> Reviewed-by: Philip Tricca <philip.b.tricca@intel.com> Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
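Conceptually, the resource manager fabricates a well-formed response header instead of returning -EINVAL; a hedged sketch (the struct follows the TPM2 wire format, but the layer constant is an assumption labelled as such):

    #include <linux/types.h>

    struct tpm_resp_hdr {
            __be16 tag;     /* TPM2_ST_NO_SESSIONS (0x8001) */
            __be32 length;  /* total response size, header only here */
            __be32 code;    /* response code, upper bits = layer */
    } __packed;

    /* TPM_RC_COMMAND_CODE is 0x143; the TAB/RM layer indicator lives in
     * bits the TPM itself never sets (exact value is illustrative). */
    #define EXAMPLE_RM_LAYER        (11 << 16)

    static void fake_command_code_error(struct tpm_resp_hdr *out)
    {
            out->tag    = cpu_to_be16(0x8001);
            out->length = cpu_to_be32(sizeof(*out));
            out->code   = cpu_to_be32(0x143 | EXAMPLE_RM_LAYER);
    }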
-
James Smart authored
[ Upstream commit 278e0960 ] A test case revealed a race condition of an i/o completing on a thread parallel to the delete_association generating the aborts for the outstanding ios on the controller. The i/o completion was freeing the target fcloop context, thus the abort task referenced the just-freed memory. Correct by clearing the target/initiator cross pointers in the io completion and abort tasks before calling the callbacks. On aborts that detect already finished io's, ensure the complete context is called. Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
James Smart authored
[ Upstream commit 6fda2028 ] The current fcloop driver gets its lport structure from the private area co-allocated with the fc_localport. All is fine except the teardown path, which wants to wait on the completion, which is marked complete by the delete_localport callback performed after unregister_localport. The issue is, the nvme_fc transport frees the localport structure immediately after delete_localport is called, meaning the original routine is trying to wait on a complete that was just freed. Change such that a lport struct is allocated coincident with the addition and registration of a localport. The private area of the localport now contains just a backpointer to the real lport struct. Now, the completion can be waited for, and after completing, the new structure can be kfree'd. Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Hans de Goede authored
[ Upstream commit 9291c65b ] On some systems, some PCB traces attached to GpioInts are routed in such a way that they pick up enough interference to constantly (many times per second) trigger. Enabling glitch-filtering fixes this. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Robert Jarzmik authored
[ Upstream commit 2023b052 ] Currently the LCD display (TD035S) on the cm-x300 platform is broken and remains blank. The TD0245S specification requires that the chipselect is toggled between commands sent to the panel. This was also the purpose of the former patch of commit f64dcac0 ("backlight: tdo24m: ensure chip select changes between transfers"). Unfortunately, the "cs_change" field of a SPI transfer is misleading. Its true meaning is that for a SPI message holding multiple transfers, the chip select is toggled between each transfer, but for the last transfer it remains asserted. In this driver, all the SPI messages contain exactly one transfer, which means that each transfer is the last of its message, and as a consequence the chip select is never toggled. Actually, there was a second bug hiding the first one, hence the problem was not seen until v4.6. This problem was fixed by commit a52db659 ("spi: pxa2xx: Fix cs_change management") for PXA based boards. This fix makes the TD035S work again on a cm-x300 board. The same applies to other PXA boards, i.e. corgi and tosa. Fixes: a52db659 ("spi: pxa2xx: Fix cs_change management") Reported-by: Andrea Adami <andrea.adami@gmail.com> Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr> Acked-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
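The cs_change semantics can be illustrated with a two-transfer message; a hedged sketch, not the driver's actual code: setting cs_change on the first transfer toggles CS between the two commands, while on the final transfer it would instead keep CS asserted after the message ends.

    #include <linux/spi/spi.h>

    static int send_two_cmds(struct spi_device *spi,
                             const u8 *cmd1, size_t len1,
                             const u8 *cmd2, size_t len2)
    {
            struct spi_transfer xfers[2] = {
                    { .tx_buf = cmd1, .len = len1, .cs_change = 1 },
                    { .tx_buf = cmd2, .len = len2 },  /* CS drops after msg */
            };
            struct spi_message msg;

            spi_message_init(&msg);
            spi_message_add_tail(&xfers[0], &msg);
            spi_message_add_tail(&xfers[1], &msg);
            return spi_sync(spi, &msg);
    }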
-
Ming Lei authored
[ Upstream commit fb350e0a ] In both elevator_switch_mq() and blk_mq_update_nr_hw_queues(), sched tags can be allocated, and q->nr_hw_queue is used, and a race is inevitable; for example: blk_mq_init_sched() may trigger use-after-free on hctx, which is freed in blk_mq_realloc_hw_ctxs() when nr_hw_queues is decreased. This patch fixes the race by holding q->sysfs_lock. Reviewed-by: Christoph Hellwig <hch@lst.de> Reported-by: Yi Zhang <yi.zhang@redhat.com> Tested-by: Yi Zhang <yi.zhang@redhat.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Ming Lei authored
[ Upstream commit 7d4901a9 ] blk_mq_pci_map_queues() may not map one CPU into any hw queue, but its previous map isn't cleared yet, and may point to one stale hw queue index. This patch fixes the issue by clearing the mapping table before setting it up in blk_mq_pci_map_queues(). The issue was reported by Zhang Yi:

[ 101.202734] BUG: unable to handle kernel NULL pointer dereference at 0000000094d3013f
[ 101.211487] IP: blk_mq_map_swqueue+0xbc/0x200
[ 101.216346] PGD 0 P4D 0
[ 101.219171] Oops: 0000 [#1] SMP
[ 101.222674] Modules linked in: sunrpc ipmi_ssif vfat fat intel_rapl sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel intel_cstate intel_uncore mxm_wmi intel_rapl_perf iTCO_wdt ipmi_si ipmi_devintf pcspkr iTCO_vendor_support sg dcdbas ipmi_msghandler wmi mei_me lpc_ich shpchp mei acpi_power_meter dm_multipath ip_tables xfs libcrc32c sd_mod mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ahci libahci crc32c_intel libata tg3 nvme nvme_core megaraid_sas ptp i2c_core pps_core dm_mirror dm_region_hash dm_log dm_mod
[ 101.284881] CPU: 0 PID: 504 Comm: kworker/u25:5 Not tainted 4.15.0-rc2 #1
[ 101.292455] Hardware name: Dell Inc. PowerEdge R730xd/072T6D, BIOS 2.5.5 08/16/2017
[ 101.301001] Workqueue: nvme-wq nvme_reset_work [nvme]
[ 101.306636] task: 00000000f2c53190 task.stack: 000000002da874f9
[ 101.313241] RIP: 0010:blk_mq_map_swqueue+0xbc/0x200
[ 101.318681] RSP: 0018:ffffc9000234fd70 EFLAGS: 00010282
[ 101.324511] RAX: ffff88047ffc9480 RBX: ffff88047e130850 RCX: 0000000000000000
[ 101.332471] RDX: ffffe8ffffd40580 RSI: ffff88047e509b40 RDI: ffff88046f37a008
[ 101.340432] RBP: 000000000000000b R08: ffff88046f37a008 R09: 0000000011f94280
[ 101.348392] R10: ffff88047ffd4d00 R11: 0000000000000000 R12: ffff88046f37a008
[ 101.356353] R13: ffff88047e130f38 R14: 000000000000000b R15: ffff88046f37a558
[ 101.364314] FS: 0000000000000000(0000) GS:ffff880277c00000(0000) knlGS:0000000000000000
[ 101.373342] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 101.379753] CR2: 0000000000000098 CR3: 000000047f409004 CR4: 00000000001606f0
[ 101.387714] Call Trace:
[ 101.390445] blk_mq_update_nr_hw_queues+0xbf/0x130
[ 101.395791] nvme_reset_work+0x6f4/0xc06 [nvme]
[ 101.400848] ? pick_next_task_fair+0x290/0x5f0
[ 101.405807] ? __switch_to+0x1f5/0x430
[ 101.409988] ? put_prev_entity+0x2f/0xd0
[ 101.414365] process_one_work+0x141/0x340
[ 101.418836] worker_thread+0x47/0x3e0
[ 101.422921] kthread+0xf5/0x130
[ 101.426424] ? rescuer_thread+0x380/0x380
[ 101.430896] ? kthread_associate_blkcg+0x90/0x90
[ 101.436048] ret_from_fork+0x1f/0x30
[ 101.440034] Code: 48 83 3c ca 00 0f 84 2b 01 00 00 48 63 cd 48 8b 93 10 01 00 00 8b 0c 88 48 8b 83 20 01 00 00 4a 03 14 f5 60 04 af 81 48 8b 0c c8 <48> 8b 81 98 00 00 00 f0 4c 0f ab 30 8b 81 f8 00 00 00 89 42 44
[ 101.461116] RIP: blk_mq_map_swqueue+0xbc/0x200 RSP: ffffc9000234fd70
[ 101.468205] CR2: 0000000000000098
[ 101.471907] ---[ end trace 5fe710f98228a3ca ]---
[ 101.482489] Kernel panic - not syncing: Fatal exception
[ 101.488505] Kernel Offset: disabled
[ 101.497752] ---[ end Kernel panic - not syncing: Fatal exception

Reviewed-by: Christoph Hellwig <hch@lst.de> Suggested-by: Christoph Hellwig <hch@lst.de> Reported-by: Yi Zhang <yi.zhang@redhat.com> Tested-by: Yi Zhang <yi.zhang@redhat.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk> Signed-off-by: Sasha Levin <alexander.levin@microsoft.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
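The fix reduces to wiping the CPU-to-queue map before rebuilding it; a hedged sketch of the idea (mq_map follows the struct blk_mq_tag_set layout of that kernel generation):

    unsigned int cpu;

    /* in blk_mq_pci_map_queues(): start from a clean slate so any CPU
     * left unmapped falls back to hw queue 0 instead of keeping a
     * stale index from the previous mapping */
    for_each_possible_cpu(cpu)
            set->mq_map[cpu] = 0;

    /* ... then walk the PCI IRQ affinity masks and fill in mq_map ... */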
-