    scsi: lpfc: Fix EEH support for NVMe I/O · 25ac2c97
    James Smart authored
    Injecting errors on the PCI slot while the driver is handling NVMe I/O will
    cause crashes and hangs.
    
    Several rather difficult scenarios occur. The main issue is that the
    adapter can report a PCI error before, or at the same time as, the PCI
    subsystem reporting the error. The two paths have different entry points
    and there is currently no interlock between them, so multiple teardown
    paths end up competing and racing each other.
    
    Complicating things is the NVMe path. On the SCSI stack, I/O could largely
    be shut down for a full FC port, but NVMe has no similar call. At best it
    works on a per-controller basis, and even at the controller level it is a
    controller "reset" call. All of which means I/O is still flowing on
    different CPUs while the reset paths expect hw access (mailbox commands)
    to execute properly.
    
    The following modifications are made:
    
     - A new flag is set in PCI error entrypoints so the driver can track being
       called by that path.
    
     - An interlock is added between the SLI hw error path and the PCI error
       path so that only one of the two proceeds with the teardown logic (see
       the interlock sketch after this list).
    
     - RPI cleanup is patched so that RPIs are marked unregistered without
       mailbox commands in cases of hw error.
    
     - When entering the SLI port re-init calls (the case where the SLI error
       teardown was quick and beat the PCI calls that are only now reporting
       the error), check whether the SLI port is still live on the PCI bus
       (see the liveness sketch after this list).
    
     - In the PCI reset code that brings the adapter back, recheck the IRQ
       settings; SLI3 and SLI4 need different checks.
    
     - In I/O completions, which may be called as part of the cleanup or may
       already be underway just before the hw error, check the state of the
       adapter. If it is in error, shortcut any handling that would expect
       further adapter completions, since the failed hardware won't be sending
       them.
    
     - In routines waiting on I/O completions, which may have been in progress
       prior to the hw error, detect that the device is being torn down, abort
       the wait, and give up (see the wait sketch after this list). This
       points to a larger issue in the driver: it has no ref-counting on queue
       and port structures. This fix is taken for now, as doing it differently
       would be a major rework.
    
     - Fix the NVMe cleanup to simulate NVMe I/O completions if I/O is being
       failed back due to hw error.
    
     - In I/O buffer allocation, done at the start of new I/Os, check the hw
       state and fail the allocation if the hw is in error (see the allocation
       sketch after this list).
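
    The interlock and the allocation-path check can be pictured with a small,
    self-contained sketch. The struct, flag bit, and function names below are
    hypothetical stand-ins rather than the driver's real identifiers; the
    point is the atomic test_and_set_bit(), which guarantees that whichever
    error path fires first owns the teardown while the other simply returns.

#include <linux/bitops.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Hypothetical, pared-down adapter context for illustration only. */
struct demo_hba {
    unsigned long hba_flag;     /* atomic flag word */
    atomic_t outstanding_io;    /* I/Os issued but not yet completed */
};

#define DEMO_HBA_PCI_ERR    0   /* bit: error teardown has started */

/* Entry from the PCI/EEH error callbacks. */
static void demo_pci_error_detected(struct demo_hba *phba)
{
    /* Only the first path to set the bit runs the teardown. */
    if (test_and_set_bit(DEMO_HBA_PCI_ERR, &phba->hba_flag))
        return;                 /* SLI error path got here first */

    /* ... bring the port offline, flush/fail outstanding I/O ... */
}

/* Entry from the adapter's own SLI hardware-error detection. */
static void demo_sli_hw_error(struct demo_hba *phba)
{
    if (test_and_set_bit(DEMO_HBA_PCI_ERR, &phba->hba_flag))
        return;                 /* PCI error path got here first */

    /* ... same teardown, reached from the other direction ... */
}

/* Placeholder I/O buffer; the real driver uses preallocated pool entries. */
struct demo_io_buf {
    void *priv;
};

/* New I/O starts with a buffer allocation; refuse it once teardown began. */
static struct demo_io_buf *demo_get_io_buf(struct demo_hba *phba)
{
    if (test_bit(DEMO_HBA_PCI_ERR, &phba->hba_flag))
        return NULL;

    /* Simplified; lpfc actually pulls from per-hardware-queue free lists. */
    return kzalloc(sizeof(struct demo_io_buf), GFP_ATOMIC);
}

    A plain boolean flag would leave a window in which both paths observe it
    clear and both start tearing down; the atomic read-modify-write of
    test_and_set_bit() closes that window.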
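
    The "is the SLI port still live on the PCI bus" test reduces to reading a
    BAR-mapped register: a function that EEH has isolated, or that has dropped
    off the bus, reads back as all ones. The helper and register below are
    illustrative, not lpfc's actual names or offsets.

#include <linux/io.h>
#include <linux/types.h>

/* 'status_reg' stands in for an ioremap'ed SLI port status register. */
static bool demo_sli_port_live(void __iomem *status_reg)
{
    u32 val = readl(status_reg);

    /* Reads from an isolated or removed PCI function return all ones. */
    return val != 0xffffffff;
}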
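
    Likewise, the waits on in-flight I/O can be sketched as a bounded polling
    loop that gives up once the error flag is raised, since the completions
    being waited on will never arrive from dead hardware. The outstanding-I/O
    counter is again a hypothetical stand-in, reusing the demo_hba context
    from the interlock sketch.

#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/errno.h>

/* Builds on the demo_hba sketch above. */
static int demo_wait_for_io_drain(struct demo_hba *phba)
{
    while (atomic_read(&phba->outstanding_io)) {
        /* The hw error means these completions will never arrive;
         * stop waiting instead of hanging the teardown. */
        if (test_bit(DEMO_HBA_PCI_ERR, &phba->hba_flag))
            return -ENODEV;
        msleep(20);
    }
    return 0;
}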
    
    Link: https://lore.kernel.org/r/20210910233159.115896-10-jsmart2021@gmail.com
    Co-developed-by: Justin Tee <justin.tee@broadcom.com>
    Signed-off-by: Justin Tee <justin.tee@broadcom.com>
    Signed-off-by: James Smart <jsmart2021@gmail.com>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>