1. 03 Aug, 2020 3 commits
  2. 29 Jul, 2020 31 commits
    • Merge branch 'nvme-5.9' of git://git.infradead.org/nvme into for-5.9/drivers · a9e8e18a
      Jens Axboe authored
      Pull NVMe updates from Christoph.
      
      * 'nvme-5.9' of git://git.infradead.org/nvme: (30 commits)
        nvme-loop: remove extra variable in create ctrl
        nvme-loop: set ctrl state connecting after init
        nvme-multipath: do not fall back to __nvme_find_path() for non-optimized paths
        nvme-multipath: fix logic for non-optimized paths
        nvme-rdma: fix controller reset hang during traffic
        nvme-tcp: fix controller reset hang during traffic
        nvmet: introduce the passthru Kconfig option
        nvmet: introduce the passthru configfs interface
        nvmet: Add passthru enable/disable helpers
        nvmet: add passthru code to process commands
        nvme: export nvme_find_get_ns() and nvme_put_ns()
        nvme: introduce nvme_ctrl_get_by_path()
        nvme: introduce nvme_execute_passthru_rq to call nvme_passthru_[start|end]()
        nvme: create helper function to obtain command effects
        nvme: clear any SGL flags in passthru commands
        nvmet-fc: remove redundant del_work_active flag
        nvmet-fc: check successful reference in nvmet_fc_find_target_assoc
        nvme-fc: set max_segments to lldd max value
        nvme-fc: drop a duplicated word in a comment
        nvme-hwmon: log the controller device name
        ...
      a9e8e18a
    • nvme-loop: remove extra variable in create ctrl · b6cec06d
      Chaitanya Kulkarni authored
      We can call nvme_change_ctrl_state() directly and issue a
      WARN_ON_ONCE(1) call, instead of having to use an extra variable whose
      name matches that of the function.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      b6cec06d
    • nvme-loop: set ctrl state connecting after init · 64d452b3
      Chaitanya Kulkarni authored
      When creating a loop controller (ctrl) in nvme_loop_create_ctrl() ->
      nvme_init_ctrl() we set the ctrl state to NVME_CTRL_NEW.
      
      Prior to [1], the NVME_CTRL_NEW state was allowed in nvmf_check_ready()
      for the fabrics Connect command. Now this fails in the following code
      path for the fabrics Connect command when creating the admin queue :-
      
      nvme_loop_create_ctrl()
       nvme_loop_configure_admin_queue()
        nvmf_connect_admin_queue()
         __nvme_submit_sync_cmd()
          blk_execute_rq()
           nvme_loop_queue_rq()
            nvmf_check_ready()
      
      # echo  "transport=loop,nqn=fs" > /dev/nvme-fabrics
      [ 6047.741327] nvmet: adding nsid 1 to subsystem fs
      [ 6048.756430] nvme nvme1: Connect command failed, error wo/DNR bit: 880
      
      We need to set the ctrl state to NVME_CTRL_CONNECTING after :-
      nvme_loop_create_ctrl()
       nvme_init_ctrl()
      so that the above-mentioned check in nvmf_check_ready() will return
      true.
      
      This patch sets the ctrl state to NVME_CTRL_CONNECTING after we init
      the ctrl in nvme_loop_create_ctrl()
       nvme_init_ctrl().
      
      [1] commit aa63fa6776a7 ("nvme-fabrics: allow to queue requests for live queues")
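
      A minimal sketch of the kind of change described above (the error label
      and return value are illustrative, not taken from the patch):
      
          /* after nvme_init_ctrl() in nvme_loop_create_ctrl() */
          if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
                  WARN_ON_ONCE(1);
                  ret = -EINTR;           /* hypothetical error value */
                  goto out_put_ctrl;      /* hypothetical label */
          }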
      
      Fixes: aa63fa6776a7 ("nvme-fabrics: allow to queue requests for live queues")
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Tested-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      64d452b3
    • nvme-multipath: do not fall back to __nvme_find_path() for non-optimized paths · fbd6a42d
      Hannes Reinecke authored
      When nvme_round_robin_path() finds a valid namespace we should use it;
      falling back to __nvme_find_path() would cause the result from
      nvme_round_robin_path() to be ignored for non-optimized paths.
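
      A sketch of the resulting path-selection logic, assuming the helper
      names used elsewhere in the multipath code (nvme_round_robin_path(),
      __nvme_find_path(), nvme_path_is_optimized()); details are illustrative:
      
          inline struct nvme_ns *nvme_find_path(struct nvme_ns_head *head)
          {
                  int node = numa_node_id();
                  struct nvme_ns *ns;
      
                  ns = srcu_dereference(head->current_path[node], &head->srcu);
                  if (unlikely(!ns))
                          return __nvme_find_path(head, node);
      
                  /* the round-robin result is used as-is, no fallback */
                  if (READ_ONCE(head->subsys->iopolicy) == NVME_IOPOLICY_RR)
                          return nvme_round_robin_path(head, node, ns);
      
                  if (unlikely(!nvme_path_is_optimized(ns)))
                          return __nvme_find_path(head, node);
                  return ns;
          }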
      
      Fixes: 75c10e73 ("nvme-multipath: round-robin I/O policy")
      Signed-off-by: Martin Wilck <mwilck@suse.com>
      Signed-off-by: Hannes Reinecke <hare@suse.de>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      fbd6a42d
    • nvme-multipath: fix logic for non-optimized paths · 3f6e3246
      Martin Wilck authored
      Handle the special case where we have exactly one optimized path,
      which we should keep using.
      
      Fixes: 75c10e73 ("nvme-multipath: round-robin I/O policy")
      Signed-off-by: Martin Wilck <mwilck@suse.com>
      Signed-off-by: Hannes Reinecke <hare@suse.de>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      3f6e3246
    • nvme-rdma: fix controller reset hang during traffic · 9f98772b
      Sagi Grimberg authored
      commit fe35ec58 ("block: update hctx map when use multiple maps")
      exposed an issue where we may hang trying to wait for queue freeze
      during I/O. We call blk_mq_update_nr_hw_queues, which in the case of
      multiple queue maps (which we now have for default/read/poll) attempts
      to freeze the queue. However, we never started a queue freeze when
      starting the reset, which means we have inflight pending requests that
      entered the queue and that we will not complete once the queue is
      quiesced.
      
      So start a freeze before we quiesce the queue, and unfreeze the queue
      after we successfully connected the I/O queues (and make sure to call
      blk_mq_update_nr_hw_queues only after we are sure that the queue was
      already frozen).
      
      This follows how the PCI driver handles resets.
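
      The resulting ordering, sketched with the generic nvme core helpers
      (error handling and surrounding teardown omitted):
      
          /* teardown: start the freeze before quiescing the I/O queues */
          nvme_start_freeze(&ctrl->ctrl);
          nvme_stop_queues(&ctrl->ctrl);
      
          /* bring-up: resize hctx maps only once the queues are frozen */
          nvme_wait_freeze(&ctrl->ctrl);
          blk_mq_update_nr_hw_queues(ctrl->ctrl.tagset,
                                     ctrl->ctrl.queue_count - 1);
          nvme_unfreeze(&ctrl->ctrl);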
      
      Fixes: fe35ec58 ("block: update hctx map when use multiple maps")
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      9f98772b
    • nvme-tcp: fix controller reset hang during traffic · 2875b0ae
      Sagi Grimberg authored
      commit fe35ec58 ("block: update hctx map when use multiple maps")
      exposed an issue where we may hang trying to wait for queue freeze
      during I/O. We call blk_mq_update_nr_hw_queues, which in the case of
      multiple queue maps (which we now have for default/read/poll) attempts
      to freeze the queue. However, we never started a queue freeze when
      starting the reset, which means we have inflight pending requests that
      entered the queue and that we will not complete once the queue is
      quiesced.
      
      So start a freeze before we quiesce the queue, and unfreeze the queue
      after we successfully connected the I/O queues (and make sure to call
      blk_mq_update_nr_hw_queues only after we are sure that the queue was
      already frozen).
      
      This follows how the PCI driver handles resets.
      
      Fixes: fe35ec58 ("block: update hctx map when use multiple maps")
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      2875b0ae
    • nvmet: introduce the passthru Kconfig option · d9174c1a
      Chaitanya Kulkarni authored
      This patch updates the Kconfig file for the NVMeOF target, adding a new
      option so that users can selectively enable/disable the passthru code.
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      [logang@deltatee.com: fixed some of the wording in the help message]
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      d9174c1a
    • nvmet: introduce the passthru configfs interface · cae5b01a
      Logan Gunthorpe authored
      When CONFIG_NVME_TARGET_PASSTHRU is enabled, a 'passthru' directory will
      be added to each subsystem. The directory is similar to a namespace
      and has two attributes: device_path and enable. The user must set the
      path to the nvme controller's char device and write '1' to enable the
      subsystem to use passthru.
      
      Any given subsystem is prevented from enabling both a regular namespace
      and the passthru device. If one is enabled, enabling the other will
      produce an error.
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      cae5b01a
    • nvmet: Add passthru enable/disable helpers · ba76af67
      Logan Gunthorpe authored
      This patch adds helper functions which are used in the NVMeOF configfs
      when the user is configuring the passthru subsystem. Here we ensure
      that only one subsys is assigned to each nvme_ctrl by using an xarray
      on the cntlid.
      
      The subsystem's version number is overridden by the passed-through
      controller's version. However, if that version is less than 1.2.1,
      then we bump the advertised version to 1.2.1 and print a warning
      in dmesg.
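
      A rough sketch of the single-binding check; the xarray and function
      names here are illustrative, not the patch's actual identifiers:
      
          static DEFINE_XARRAY(passthru_ctrls);   /* illustrative name */
      
          static int nvmet_passthru_bind_ctrl(struct nvmet_subsys *subsys,
                                              struct nvme_ctrl *ctrl)
          {
                  /* xa_insert() returns -EBUSY if the cntlid slot is taken,
                   * i.e. another subsystem already uses this controller.
                   */
                  return xa_insert(&passthru_ctrls, ctrl->cntlid, subsys,
                                   GFP_KERNEL);
          }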
      Based-on-a-patch-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      ba76af67
    • nvmet: add passthru code to process commands · c1fef73f
      Logan Gunthorpe authored
      Add passthru command handling capability for the NVMeOF target and
      export passthru APIs which are used to integrate passthru
      code with nvmet-core.
      
      The new file passthru.c handles passthru cmd parsing and execution.
      In the passthru mode, we create a block layer request from the nvmet
      request and map the data on to the block layer request.
      
      Admin commands and features are on an allow list as there are a number
      of each that don't make too much sense with passthrough. We use an
      allow list such that new commands can be considered before being blindly
      passed through. In both cases, vendor specific commands are always
      allowed.
      
      We also reject reservation IO commands as the underlying device cannot
      differentiate between multiple hosts behind a fabric.
      Based-on-a-patch-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      c1fef73f
    • nvme: export nvme_find_get_ns() and nvme_put_ns() · 24493b8b
      Logan Gunthorpe authored
      nvme_find_get_ns() and nvme_put_ns() are required by the target passthru
      code and are exported under the NVME_TARGET_PASSTHRU namespace.
      Based-on-a-patch-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      24493b8b
    • nvme: introduce nvme_ctrl_get_by_path() · f783f444
      Logan Gunthorpe authored
      nvme_ctrl_get_by_path() is analogous to blkdev_get_by_path() except it
      gets a struct nvme_ctrl from the path to its char dev (/dev/nvme0).
      It makes use of filp_open() to open the file and uses the private
      data to obtain a pointer to the struct nvme_ctrl. If the fops of the
      file do not match, -EINVAL is returned.
      
      The purpose of this function is to support NVMe-OF target passthru
      and is exported under the NVME_TARGET_PASSTHRU namespace.
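
      A sketch of how such a lookup can work, following the description above
      (treat the details, in particular the error handling, as illustrative):
      
          struct nvme_ctrl *nvme_ctrl_get_by_path(const char *path)
          {
                  struct nvme_ctrl *ctrl;
                  struct file *f;
      
                  f = filp_open(path, O_RDWR, 0);
                  if (IS_ERR(f))
                          return ERR_CAST(f);
      
                  if (f->f_op != &nvme_dev_fops) {
                          ctrl = ERR_PTR(-EINVAL);  /* not an nvme char dev */
                          goto out_close;
                  }
      
                  ctrl = f->private_data;
                  nvme_get_ctrl(ctrl);
      
          out_close:
                  filp_close(f, NULL);
                  return ctrl;
          }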
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      f783f444
    • nvme: introduce nvme_execute_passthru_rq to call nvme_passthru_[start|end]() · 17365ae6
      Logan Gunthorpe authored
      Introduce a new nvme_execute_passthru_rq() helper which calls
      nvme_passthru_[start|end]() around blk_execute_rq(). This ensures
      all passthru calls (including nvme_submit_io()) will be wrapped
      appropriately.
      
      nvme_execute_passthru_rq() will also be useful for the nvmet passthru
      code and is exported in the NVME_TARGET_PASSTHRU namespace.
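
      A sketch of the wrapper, assuming nvme_passthru_start()/nvme_passthru_end()
      take the controller, namespace and opcode / effects as arguments (the
      blk_execute_rq() signature shown is the one from this kernel era):
      
          void nvme_execute_passthru_rq(struct request *rq)
          {
                  struct nvme_command *cmd = nvme_req(rq)->cmd;
                  struct nvme_ctrl *ctrl = nvme_req(rq)->ctrl;
                  struct nvme_ns *ns = rq->q->queuedata;
                  struct gendisk *disk = ns ? ns->disk : NULL;
                  u32 effects;
      
                  effects = nvme_passthru_start(ctrl, ns, cmd->common.opcode);
                  blk_execute_rq(rq->q, disk, rq, 0);
                  nvme_passthru_end(ctrl, effects);
          }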
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      17365ae6
    • nvme: create helper function to obtain command effects · df21b6b1
      Logan Gunthorpe authored
      Separate the code to obtain command effects from the code
      to start a passthru request, and move the nvme_passthru_start() and
      nvme_passthru_end() functions up above nvme_submit_user_cmd() so that
      they may be used in a new helper in a subsequent patch.
      
      The new helper function will be necessary for nvmet passthru
      code to determine if we need to change out of interrupt context
      to handle the effects. It is exported in the NVME_TARGET_PASSTHRU
      namespace.
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      df21b6b1
    • nvme: clear any SGL flags in passthru commands · 2bf5d3bb
      Logan Gunthorpe authored
      The host driver should decide whether to use SGLs or PRPs, and it
      currently assumes the flags are cleared after the call to
      nvme_setup_cmd(). However, passed-through commands may erroneously
      set these bits, so clear them for all cases.
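
      The fix boils down to something like the following at command setup
      time (a sketch; NVME_CMD_SGL_ALL covers both SGL descriptor flag bits):
      
          /* passed-through commands may carry stale SGL bits; clear them */
          cmd->common.flags &= ~NVME_CMD_SGL_ALL;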
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      2bf5d3bb
    • nvmet-fc: remove redundant del_work_active flag · ece0278c
      James Smart authored
      The transport has a del_work_active flag to avoid duplicate scheduling
      of the del_work item. This is redundant with the checks that
      schedule_work() makes.
      
      Remove the del_work_active flag.
      Signed-off-by: James Smart <jsmart2021@gmail.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      ece0278c
    • nvmet-fc: check successful reference in nvmet_fc_find_target_assoc · 34efa232
      James Smart authored
      When searching for an association based on an association id, when there
      is a match, the code takes a reference. However, it does not validate
      that taking the reference was successful.
      
      Check the status of the reference. If unsuccessful, the device is being
      deleted and should be ignored.
      Signed-off-by: James Smart <jsmart2021@gmail.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      34efa232
    • nvme-fc: set max_segments to lldd max value · 23748076
      James Smart authored
      Currently the FC transport sets max_hw_sectors based on the lldd's
      max sgl segment count. However, the block queue max segments is
      set based on the controller's max_segments count, which the transport
      does not set.  As such, the lldd is receiving sgl lists that
      exceed its max segment count.
      
      Set the controller max segment count and derive max_hw_sectors from
      the max segment count.
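
      In rough terms the setup becomes (a sketch; the lldd template field and
      the 4K page shift are assumptions):
      
          ctrl->ctrl.max_segments = ctrl->lport->ops->max_sgl_segments;
          ctrl->ctrl.max_hw_sectors =
                  ctrl->ctrl.max_segments << (ilog2(SZ_4K) - SECTOR_SHIFT);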
      Signed-off-by: James Smart <jsmart2021@gmail.com>
      Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
      Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
      Reviewed-by: Ewan D. Milne <emilne@redhat.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      23748076
    • nvme-fc: drop a duplicated word in a comment · fe5e26a7
      Randy Dunlap authored
      Drop the repeated word "a" in a comment.
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      fe5e26a7
    • nvme-hwmon: log the controller device name · 653303f2
      Sagi Grimberg authored
      Stay consistent with the rest of the driver.
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      653303f2
    • nvme: fix deadlock in disconnect during scan_work and/or ana_work · ecca390e
      Sagi Grimberg authored
      A deadlock happens in the following scenario with multipath:
      1) scan_work(nvme0) detects a new nsid while nvme0
          is an optimized path to it, path nvme1 happens to be
          inaccessible.
      
      2) Before scan_work is complete nvme0 disconnect is initiated
          nvme_delete_ctrl_sync() sets nvme0 state to NVME_CTRL_DELETING
      
      3) scan_work(1) attempts to submit IO,
          but nvme_path_is_optimized() observes nvme0 is not LIVE.
          Since nvme1 is a possible path IO is requeued and scan_work hangs.
      
      --
      Workqueue: nvme-wq nvme_scan_work [nvme_core]
      kernel: Call Trace:
      kernel:  __schedule+0x2b9/0x6c0
      kernel:  schedule+0x42/0xb0
      kernel:  io_schedule+0x16/0x40
      kernel:  do_read_cache_page+0x438/0x830
      kernel:  read_cache_page+0x12/0x20
      kernel:  read_dev_sector+0x27/0xc0
      kernel:  read_lba+0xc1/0x220
      kernel:  efi_partition+0x1e6/0x708
      kernel:  check_partition+0x154/0x244
      kernel:  rescan_partitions+0xae/0x280
      kernel:  __blkdev_get+0x40f/0x560
      kernel:  blkdev_get+0x3d/0x140
      kernel:  __device_add_disk+0x388/0x480
      kernel:  device_add_disk+0x13/0x20
      kernel:  nvme_mpath_set_live+0x119/0x140 [nvme_core]
      kernel:  nvme_update_ns_ana_state+0x5c/0x60 [nvme_core]
      kernel:  nvme_set_ns_ana_state+0x1e/0x30 [nvme_core]
      kernel:  nvme_parse_ana_log+0xa1/0x180 [nvme_core]
      kernel:  nvme_mpath_add_disk+0x47/0x90 [nvme_core]
      kernel:  nvme_validate_ns+0x396/0x940 [nvme_core]
      kernel:  nvme_scan_work+0x24f/0x380 [nvme_core]
      kernel:  process_one_work+0x1db/0x380
      kernel:  worker_thread+0x249/0x400
      kernel:  kthread+0x104/0x140
      --
      
      4) Delete also hangs in flush_work(ctrl->scan_work)
          from nvme_remove_namespaces().
      
      Similarly, a deadlock with ana_work may happen: if ana_work has started
      and calls nvme_mpath_set_live and device_add_disk, it will
      trigger I/O. When we trigger disconnect, I/O will block because
      our accessible (optimized) path is disconnecting, but the alternate
      path is inaccessible, so I/O blocks. Then disconnect tries to flush
      the ana_work and hangs.
      
      [  605.550896] Workqueue: nvme-wq nvme_ana_work [nvme_core]
      [  605.552087] Call Trace:
      [  605.552683]  __schedule+0x2b9/0x6c0
      [  605.553507]  schedule+0x42/0xb0
      [  605.554201]  io_schedule+0x16/0x40
      [  605.555012]  do_read_cache_page+0x438/0x830
      [  605.556925]  read_cache_page+0x12/0x20
      [  605.557757]  read_dev_sector+0x27/0xc0
      [  605.558587]  amiga_partition+0x4d/0x4c5
      [  605.561278]  check_partition+0x154/0x244
      [  605.562138]  rescan_partitions+0xae/0x280
      [  605.563076]  __blkdev_get+0x40f/0x560
      [  605.563830]  blkdev_get+0x3d/0x140
      [  605.564500]  __device_add_disk+0x388/0x480
      [  605.565316]  device_add_disk+0x13/0x20
      [  605.566070]  nvme_mpath_set_live+0x5e/0x130 [nvme_core]
      [  605.567114]  nvme_update_ns_ana_state+0x2c/0x30 [nvme_core]
      [  605.568197]  nvme_update_ana_state+0xca/0xe0 [nvme_core]
      [  605.569360]  nvme_parse_ana_log+0xa1/0x180 [nvme_core]
      [  605.571385]  nvme_read_ana_log+0x76/0x100 [nvme_core]
      [  605.572376]  nvme_ana_work+0x15/0x20 [nvme_core]
      [  605.573330]  process_one_work+0x1db/0x380
      [  605.574144]  worker_thread+0x4d/0x400
      [  605.574896]  kthread+0x104/0x140
      [  605.577205]  ret_from_fork+0x35/0x40
      [  605.577955] INFO: task nvme:14044 blocked for more than 120 seconds.
      [  605.579239]       Tainted: G           OE     5.3.5-050305-generic #201910071830
      [  605.580712] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [  605.582320] nvme            D    0 14044  14043 0x00000000
      [  605.583424] Call Trace:
      [  605.583935]  __schedule+0x2b9/0x6c0
      [  605.584625]  schedule+0x42/0xb0
      [  605.585290]  schedule_timeout+0x203/0x2f0
      [  605.588493]  wait_for_completion+0xb1/0x120
      [  605.590066]  __flush_work+0x123/0x1d0
      [  605.591758]  __cancel_work_timer+0x10e/0x190
      [  605.593542]  cancel_work_sync+0x10/0x20
      [  605.594347]  nvme_mpath_stop+0x2f/0x40 [nvme_core]
      [  605.595328]  nvme_stop_ctrl+0x12/0x50 [nvme_core]
      [  605.596262]  nvme_do_delete_ctrl+0x3f/0x90 [nvme_core]
      [  605.597333]  nvme_sysfs_delete+0x5c/0x70 [nvme_core]
      [  605.598320]  dev_attr_store+0x17/0x30
      
      Fix this by introducing a new state: NVME_CTRL_DELETING_NOIO, which will
      indicate the phase of controller deletion where I/O cannot be allowed
      to access the namespace. NVME_CTRL_DELETING still allows mpath I/O to
      be issued to the bottom device, and only after we flush the ana_work
      and scan_work (after nvme_stop_ctrl and nvme_prep_remove_namespaces) do
      we change the state to NVME_CTRL_DELETING_NOIO. Also prevent ana_work
      from re-firing by aborting early if we are not LIVE, so we should be safe
      here.
      
      In addition, change the transport drivers to follow the updated state
      machine.
      
      Fixes: 0d0b660f ("nvme: add ANA support")
      Reported-by: Anton Eidelman <anton@lightbitslabs.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      ecca390e
    • nvme: document nvme controller states · 4212f4e9
      Sagi Grimberg authored
      We are starting to see some non-trivial states,
      so let's start documenting them.
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      4212f4e9
    • nvmet: use xarray for ctrl ns storing · 7774e77e
      Chaitanya Kulkarni authored
      This patch replaces the ctrl->namespaces tracking from a linked list to
      an xarray and improves the performance when accessing one namespace :-
      
      XArray vs Default:-
      
      IOPS and BW (more the better) increase BW (~1.8%):-
      ---------------------------------------------------
      
       XArray :-
        read:  IOPS=160k,  BW=626MiB/s  (656MB/s)(18.3GiB/30001msec)
        read:  IOPS=160k,  BW=626MiB/s  (656MB/s)(18.3GiB/30001msec)
        read:  IOPS=162k,  BW=631MiB/s  (662MB/s)(18.5GiB/30001msec)
      
       Default:-
        read:  IOPS=156k,  BW=609MiB/s  (639MB/s)(17.8GiB/30001msec)
        read:  IOPS=157k,  BW=613MiB/s  (643MB/s)(17.0GiB/30001msec)
        read:  IOPS=160k,  BW=626MiB/s  (656MB/s)(18.3GiB/30001msec)
      
      Submission latency (less the better) decrease (~8.3%):-
      -------------------------------------------------------
      
       XArray:-
        slat  (usec):  min=7,  max=8386,  avg=11.19,  stdev=5.96
        slat  (usec):  min=7,  max=441,   avg=11.09,  stdev=4.48
        slat  (usec):  min=7,  max=1088,  avg=11.21,  stdev=4.54
      
       Default :-
        slat  (usec):  min=8,   max=2826.5k,  avg=23.96,  stdev=3911.50
        slat  (usec):  min=8,   max=503,      avg=12.52,  stdev=5.07
        slat  (usec):  min=8,   max=2384,     avg=12.50,  stdev=5.28
      
      CPU Usage (less the better) decrease (~5.2%):-
      ----------------------------------------------
      
       XArray:-
        cpu  :  usr=1.84%,  sys=18.61%,  ctx=949471,  majf=0,  minf=250
        cpu  :  usr=1.83%,  sys=18.41%,  ctx=950262,  majf=0,  minf=237
        cpu  :  usr=1.82%,  sys=18.82%,  ctx=957224,  majf=0,  minf=234
      
       Default:-
        cpu  :  usr=1.70%,  sys=19.21%,  ctx=858196,  majf=0,  minf=251
        cpu  :  usr=1.82%,  sys=19.98%,  ctx=929720,  majf=0,  minf=227
        cpu  :  usr=1.83%,  sys=20.33%,  ctx=947208,  majf=0,  minf=235.
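
      With the xarray, the per-command namespace lookup becomes a direct
      indexed load instead of a list walk, roughly (a sketch; where the xarray
      lives and the refcounting details are assumptions):
      
          struct nvmet_ns *ns;
      
          ns = xa_load(&subsys->namespaces, le32_to_cpu(nsid));
          if (ns && !percpu_ref_tryget_live(&ns->ref))
                  ns = NULL;      /* namespace is being torn down */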
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      7774e77e
    • nvmet-rdma: use new shared CQ mechanism · ca0f1a80
      Yamin Friedman authored
      Have the driver use shared CQs, providing a ~10%-20% improvement when
      multiple disks are used. Instead of opening a CQ for each QP per
      controller, a CQ for each core will be provided by the RDMA core driver
      that will be shared between the QPs on that core, reducing interrupt
      overhead.
      Signed-off-by: Yamin Friedman <yaminf@mellanox.com>
      Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
      Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      ca0f1a80
    • nvme-rdma: use new shared CQ mechanism · 287f329e
      Yamin Friedman authored
      Have the driver use shared CQs, providing a ~10%-20% improvement as seen
      in the patch introducing shared CQs. Instead of opening a CQ for each QP
      per connected controller, a CQ for each core will be provided by the
      RDMA core driver that will be shared between the QPs on that core,
      reducing interrupt overhead.
      Signed-off-by: Yamin Friedman <yaminf@mellanox.com>
      Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
      Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      287f329e
    • nvme-pci: add support for ACPI StorageD3Enable property · df4f9bc4
      David E. Box authored
      This patch implements a solution for a BIOS hack used on some currently
      shipping Intel systems to change driver power management policy for PCIe
      NVMe drives. Some newer Intel platforms, like some Comet Lake systems,
      require that PCIe devices use D3 when doing suspend-to-idle in order to
      allow the platform to realize maximum power savings. This is particularly
      needed to support ATX power supply shutdown on desktop systems. In order to
      ensure this happens for root ports with storage devices, Microsoft
      apparently created this ACPI _DSD property as a way to influence their
      driver policy. To my knowledge this property has not been discussed with
      the NVME specification body.
      
      Though the solution is not ideal, it addresses a problem that also affects
      Linux since the NVMe driver's default policy of using NVMe APST during
      suspend-to-idle prevents the PCI root port from going to D3 and leads to
      higher power consumption for these platforms. The power consumption
      difference may be negligible on laptop systems, but many watts on desktop
      systems when the ATX power supply is blocked from powering down.
      
      The patch creates a new nvme_acpi_storage_d3 function to check for the
      StorageD3Enable property during probe and enables D3 as a quirk if set.  It
      also provides a 'noacpi' module parameter to allow skipping the quirk if
      needed.
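
      A simplified sketch of such a probe-time check (the real code has to
      look the property up on the ACPI companion of the right PCI port; treat
      this as illustrative):
      
          static bool nvme_acpi_storage_d3(struct pci_dev *dev)
          {
                  struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
                  u8 val;
      
                  if (!adev)
                          return false;
                  /* StorageD3Enable is a _DSD property; non-zero means use D3 */
                  if (fwnode_property_read_u8(acpi_fwnode_handle(adev),
                                              "StorageD3Enable", &val))
                          return false;
                  return val == 1;
          }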
      
      Tested with:
       - PM961 NVMe SED Samsung 512GB
       - INTEL SSDPEKKF512G8
      
      Link: https://docs.microsoft.com/en-us/windows-hardware/design/component-guidelines/power-management-for-storage-hardware-devices-intro
      Signed-off-by: David E. Box <david.e.box@linux.intel.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      df4f9bc4
    • nvme-pci: use max of PRP or SGL for iod size · b13c6393
      Chaitanya Kulkarni authored
      From the initial implementation of NVMe SGL kernel support in
      commit a7a7cbe3 ("nvme-pci: add SGL support"), and with the addition of
      commit 943e942e ("nvme-pci: limit max IO size and segments to avoid
      high order allocations"), there is now only one caller left for
      nvme_pci_iod_alloc_size(), which statically passes true for the last
      parameter so that the allocation size is calculated based on SGLs,
      since we need the size of the biggest command supported for the mempool
      allocation.
      
      This patch modifies the helper function nvme_pci_iod_alloc_size() so
      that it now uses the maximum of the PRP and SGL sizes for the iod
      allocation size calculation.
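
      After the change the helper can simply take the larger of the two
      layouts, along these lines (a sketch; the page-count helper names mirror
      the existing PRP/SGL helpers in pci.c and are assumptions):
      
          static size_t nvme_pci_iod_alloc_size(void)
          {
                  size_t npages = max(nvme_pci_npages_prp(),
                                      nvme_pci_npages_sgl());
      
                  return sizeof(__le64 *) * npages +
                         sizeof(struct scatterlist) * NVME_MAX_SEGS;
          }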
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      b13c6393
    • nvme-core: replace ctrl page size with a macro · 6c3c05b0
      Chaitanya Kulkarni authored
      Saving the nvme controller's page size was from a time when the driver
      tried to use different sized pages, but this value is always set to
      a constant, and has been this way for some time. Remove the 'page_size'
      field and replace its usage with the constant value.
      
      This also lets the compiler make some micro-optimizations in the io
      path, and that's always a good thing.
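
      The replacement constant can be expressed along these lines (a sketch;
      the macro names and the 4K value reflecting the fixed controller page
      size are assumptions):
      
          #define NVME_CTRL_PAGE_SHIFT    12
          #define NVME_CTRL_PAGE_SIZE     (1 << NVME_CTRL_PAGE_SHIFT)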
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      6c3c05b0
    • nvme: remove redundant validation in nvme_start_ctrl() · 5887450b
      Baolin Wang authored
      We've already validated the 'kato' in nvme_start_keep_alive(), thus no
      need to validate it again in nvme_start_ctrl(). Remove it.
      Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      5887450b
    • nvme: remove an unnecessary condition · eca9e827
      Dan Carpenter authored
      "v" is an unsigned int so it can't be more than UINT_MAX.  Removing this
      check makes it easier to preserve the error code as well.
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      eca9e827
  3. 28 Jul, 2020 1 commit
  4. 25 Jul, 2020 5 commits
    • bcache: fix bio_{start,end}_io_acct with proper device · a2f32ee8
      Coly Li authored
      Commit 85750aeb ("bcache: use bio_{start,end}_io_acct") moves the
      I/O accounting code to the location after bio_set_dev(bio, dc->bdev) in
      cached_dev_make_request(). As a result the accounting is performed
      incorrectly on the backing device, while the I/O should in fact be
      counted against the bcache device, e.g. /dev/bcache0.
      
      With the mistaken I/O accounting, iostat does not display I/O counts for
      the bcache device and all the numbers go to the backing device. In
      writeback mode, the hard drive may show 340K+ IOPS, which is impossible
      and wrong for a spinning disk.
      
      This patch introduces bch_bio_start_io_acct() and bch_bio_end_io_acct(),
      which switch bio->bi_disk to the bcache device before calling
      bio_start_io_acct() or bio_end_io_acct(). Now the I/Os are counted
      against the bcache device, and the bcache device, cache device and
      backing device have their correct I/O count information back.
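
      A sketch of the wrapper idea: temporarily point bio->bi_disk at the
      bcache device for the accounting call, then restore it (details are
      illustrative):
      
          static unsigned long bch_bio_start_io_acct(struct gendisk *disk,
                                                     struct bio *bio)
          {
                  struct gendisk *orig = bio->bi_disk;
                  unsigned long start_time;
      
                  bio->bi_disk = disk;    /* account against /dev/bcacheN */
                  start_time = bio_start_io_acct(bio);
                  bio->bi_disk = orig;    /* restore the real target */
      
                  return start_time;
          }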
      
      Fixes: 85750aeb ("bcache: use bio_{start,end}_io_acct")
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: stable@vger.kernel.org
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      a2f32ee8
    • bcache: avoid extra memory consumption in struct bbio for large bucket size · 4e4d4e09
      Coly Li authored
      Bcache uses struct bbio to do I/Os for meta data pages like uuids,
      disk_buckets, prio_buckets, and btree nodes.
      
      For example, when writing a btree node onto the cache device, the
      process is:
      - Allocate a struct bbio from mempool c->bio_meta.
      - Inside struct bbio a struct bio is embedded; initialize
        bi_inline_vecs for this embedded bio.
      - Call bch_bio_map() to map each meta data page to each bv from the
        inline bi_io_vec table.
      - Call bch_submit_bbio() to submit the bio into the underlying block
        layer.
      - When the I/O completes, only release the struct bbio; don't touch the
        reference counter of the meta data pages.
      
      The struct bbio is defined as,
      struct bbio {
              unsigned int            submit_time_us;
              [snipped]
              struct bio              bio;
      };
      
      Because struct bio is embedded at the end of struct bbio, therefore the
      actual size of struct bbio is sizeof(struct bio) + size of the embedded
      bio->bi_inline_vecs.
      
      Now all the meta data bucket sizes are limited to meta_bucket_pages();
      if the bucket size is larger than meta_bucket_pages()*PAGE_SECTORS, the
      rest of the space in the bucket is unused. Therefore the most space used
      in a meta bucket is (1<<MAX_ORDER) pages, or
      (1<<CONFIG_FORCE_MAX_ZONEORDER) if it is configured.
      
      Therefore for a large bucket size, it is unnecessary to calculate the
      allocation size of mempool c->bio_meta as,
      	mempool_init_kmalloc_pool(&c->bio_meta, 2,
      			sizeof(struct bbio) +
      			sizeof(struct bio_vec) * bucket_pages(c))
      It is too large: the Linux buddy allocator cannot allocate that many
      contiguous pages, and the extra allocated pages would be wasted anyway.
      
      This patch replaces bucket_pages() with meta_bucket_pages() in two
      places,
      - In bch_cache_set_alloc(), when initializing mempool c->bio_meta, use
        sizeof(struct bbio) + sizeof(struct bio_vec) * meta_bucket_pages(c) to
        set the allocation object size.
      - In bch_bbio_alloc(), when calling bio_init() to set the inline bvec
        table bi_inline_vecs, use meta_bucket_pages() to indicate the number
        of inline bio vecs.
      
      Now the maximum size of the embedded bio inside struct bbio exactly
      matches the limit of meta_bucket_pages(), and no extra page is wasted.
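
      In sketch form, the mempool sizing quoted earlier then becomes (the
      exact argument passed to meta_bucket_pages() is an assumption):
      
          mempool_init_kmalloc_pool(&c->bio_meta, 2,
                          sizeof(struct bbio) +
                          sizeof(struct bio_vec) * meta_bucket_pages(&c->sb));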
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      4e4d4e09
    • bcache: avoid extra memory allocation from mempool c->fill_iter · 6907dc49
      Coly Li authored
      Mempool c->fill_iter is used to allocate memory for struct btree_iter in
      bch_btree_node_read_done() to iterate all keys of a read-in btree node.
      
      The allocation size is defined in bch_cache_set_alloc() by,
        mempool_init_kmalloc_pool(&c->fill_iter, 1, iter_size))
      where iter_size is defined by a calculation,
        (sb->bucket_size / sb->block_size + 1) * sizeof(struct btree_iter_set)
      
      For a 16bit bucket_size the calculation is OK, but now that the bucket
      size is extended to 32bit, the bucket size can be 2GB. By the above
      calculation, iter_size can be 2048 pages (order 11 is still accepted by
      the buddy allocator).
      
      But the actual size that holds the bkeys in the meta data bucket is
      already limited to meta_bucket_pages(), which is 16MB. By the above
      calculation, if we replace sb->bucket_size with
      meta_bucket_pages() * PAGE_SECTORS, the result is 16 pages. This size is
      large enough for the mempool allocation of struct btree_iter.
      
      Therefore, in the worst case, every time mempool c->fill_iter allocates,
      up to 4080 pages are wasted and won't be used. Therefore this patch uses
      meta_bucket_pages() * PAGE_SECTORS to calculate the iter size in
      bch_cache_set_alloc(), to avoid extra memory allocation from mempool
      c->fill_iter.
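
      In sketch form, the iter_size calculation then changes from the raw
      bucket size to the meta bucket limit:
      
          iter_size = ((meta_bucket_pages(sb) * PAGE_SECTORS) /
                       sb->block_size + 1) * sizeof(struct btree_iter_set);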
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      6907dc49
    • bcache: add sysfs file to display feature sets information of cache set · 092bd54d
      Coly Li authored
      The following three sysfs files are created by this patch to display the
      feature set information of the cache set:
      	/sys/fs/bcache/<cache set UUID>/internal/feature_compat
      	/sys/fs/bcache/<cache set UUID>/internal/feature_ro_compat
      	/sys/fs/bcache/<cache set UUID>/internal/feature_incompat
      
      Currently only the incompat feature 'large_bucket' is added in bcache,
      so the sysfs file content is:
              [large_bucket]
      The string 'large_bucket' means the running bcache drive supports the
      incompat feature 'large_bucket', and the wrapping [] means the
      'large_bucket' feature is currently enabled on this cache set.
      
      This patch is also ready to display compat and ro_compat features; once
      bcache implements such feature sets in the future, the corresponding
      feature strings will be displayed in their sysfs files too.
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      092bd54d
    • bcache: add bucket_size_hi into struct cache_sb_disk for large bucket · ffa47032
      Coly Li authored
      The large bucket feature is to extend bucket_size from 16bit to 32bit.
      
      When creating a cache device on a zoned device (e.g. a zoned NVMe SSD),
      making a single bucket cover one or more zones of the zoned device is
      the simplest way for bcache to support a zoned device as cache.
      
      But the current maximum bucket size is 16MB while a typical zone size of
      a zoned device is 256MB; this is the major motivation to extend the
      bucket size to a larger bit width.
      
      This patch is the basic and first change to support large bucket size,
      the major changes it makes are,
      - Add BCH_FEATURE_INCOMPAT_LARGE_BUCKET for the large bucket feature,
        INCOMPAT means it introduces incompatible on-disk format change.
      - Add BCH_FEATURE_INCOMPAT_FUNCS(large_bucket, LARGE_BUCKET) routines.
      - Add __le16 bucket_size_hi into struct cache_sb_disk at offset 0x8d0
        for the on-disk super block format.
      - For the in-memory super block struct cache_sb, member bucket_size is
        extended from __u16 to __u32.
      - Add get_bucket_size() to combine the bucket_size and bucket_size_hi
        from struct cache_sb_disk into an unsigned int value, as sketched
        below.
      
      Since we already have the large bucket size helpers meta_bucket_pages(),
      meta_bucket_bytes() and alloc_meta_bucket_pages(), they make sure that
      when the bucket size is > 8MB, the memory allocation for a bcache meta
      data bucket won't fail no matter how large the bucket size is extended.
      So these meta data buckets are handled properly when the bucket size
      width increases from 16bit to 32bit, and we don't need to worry about
      them.
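
      A sketch of the combining helper mentioned in the bullet above (the
      feature-check helper name is an assumption based on the
      BCH_FEATURE_INCOMPAT_FUNCS() routines this patch adds):
      
          static inline unsigned int get_bucket_size(struct cache_sb *sb,
                                                     struct cache_sb_disk *s)
          {
                  unsigned int size = le16_to_cpu(s->bucket_size);
      
                  if (bch_has_feature_large_bucket(sb))
                          size |= (unsigned int)le16_to_cpu(s->bucket_size_hi) << 16;
      
                  return size;
          }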
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      ffa47032