1. 29 Jul, 2020 16 commits
    • nvme: clear any SGL flags in passthru commands · 2bf5d3bb
      Logan Gunthorpe authored
      The host drivers decide whether to use SGLs or PRPs, and they
      currently assume the flags are cleared after the call to
      nvme_setup_cmd(). However, passed-through commands may erroneously
      set these bits, so clear them in all cases.
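      A minimal sketch of the fix, assuming the descriptor-type bits live in
      cmd->common.flags and are covered by the existing NVME_CMD_SGL_ALL mask
      (hedged; not necessarily the literal diff):

          static inline blk_status_t nvme_setup_passthrough(struct request *req,
                          struct nvme_command *cmd)
          {
                  memcpy(cmd, nvme_req(req)->cmd, sizeof(*cmd));
                  /* passthru commands may carry stale SGL/PRP flag bits;
                   * the host driver chooses the data transfer mapping */
                  cmd->common.flags &= ~NVME_CMD_SGL_ALL;
                  return BLK_STS_OK;
          }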
      Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvmet-fc: remove redundant del_work_active flag · ece0278c
      James Smart authored
      The transport has a del_work_active flag to avoid duplicate scheduling
      of the del_work item. This is redundant with the checks that
      schedule_work() makes.
      
      Remove the del_work_active flag.
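      The redundancy exists because schedule_work() already tests and sets the
      work's pending bit and returns false when the item is queued. A hedged
      sketch, assuming an association reference is taken before scheduling:

          /* schedule_work() returns false if del_work was already pending,
           * which is exactly what del_work_active tried to track */
          if (!schedule_work(&assoc->del_work))
                  nvmet_fc_tgt_a_put(assoc);  /* drop the ref taken for the work */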
      Signed-off-by: James Smart <jsmart2021@gmail.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvmet-fc: check successful reference in nvmet_fc_find_target_assoc · 34efa232
      James Smart authored
      When searching for an association based on an association id, the code
      takes a reference when there is a match. However, it does not validate
      that taking the reference succeeded.
      
      Check the status of the reference. If unsuccessful, the device is being
      deleted and should be ignored.
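      A hedged sketch of the lookup after the fix, assuming the existing
      nvmet_fc_tgt_a_get() helper returns zero once the association's
      refcount has already dropped to zero:

          list_for_each_entry(assoc, &tgtport->assoc_list, a_list) {
                  if (association_id == assoc->association_id) {
                          ret = assoc;
                          if (!nvmet_fc_tgt_a_get(assoc))
                                  ret = NULL;     /* being deleted: ignore */
                          break;
                  }
          }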
      Signed-off-by: James Smart <jsmart2021@gmail.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-fc: set max_segments to lldd max value · 23748076
      James Smart authored
      Currently the FC transport sets max_hw_sectors based on the lldd's
      max sgl segment count. However, the block queue max segments is
      set based on the controller's max_segments count, which the transport
      does not set. As such, the lldd can receive sgl lists that exceed
      its max segment count.
      
      Set the controller max segment count and derive max_hw_sectors from
      the max segment count.
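      A hedged sketch of the setup described above, assuming the lldd
      advertises its limit in lport->ops->max_sgl_segments and 4KB segments:

          ctrl->ctrl.max_segments = lport->ops->max_sgl_segments;
          ctrl->ctrl.max_hw_sectors = ctrl->ctrl.max_segments <<
                                                  (ilog2(SZ_4K) - 9);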
      Signed-off-by: James Smart <jsmart2021@gmail.com>
      Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
      Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
      Reviewed-by: Ewan D. Milne <emilne@redhat.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-fc: drop a duplicated word in a comment · fe5e26a7
      Randy Dunlap authored
      Drop the repeated word "a" in a comment.
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-hwmon: log the controller device name · 653303f2
      Sagi Grimberg authored
      Stay consistent with the rest of the driver.
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme: fix deadlock in disconnect during scan_work and/or ana_work · ecca390e
      Sagi Grimberg authored
      A deadlock happens in the following scenario with multipath:
      1) scan_work(nvme0) detects a new nsid while nvme0
          is an optimized path to it; path nvme1 happens to be
          inaccessible.
      
      2) Before scan_work completes, an nvme0 disconnect is initiated and
          nvme_delete_ctrl_sync() sets the nvme0 state to NVME_CTRL_DELETING.
      
      3) scan_work(nvme0) attempts to submit I/O,
          but nvme_path_is_optimized() observes that nvme0 is not LIVE.
          Since nvme1 is still a possible path, the I/O is requeued and scan_work hangs.
      
      --
      Workqueue: nvme-wq nvme_scan_work [nvme_core]
      kernel: Call Trace:
      kernel:  __schedule+0x2b9/0x6c0
      kernel:  schedule+0x42/0xb0
      kernel:  io_schedule+0x16/0x40
      kernel:  do_read_cache_page+0x438/0x830
      kernel:  read_cache_page+0x12/0x20
      kernel:  read_dev_sector+0x27/0xc0
      kernel:  read_lba+0xc1/0x220
      kernel:  efi_partition+0x1e6/0x708
      kernel:  check_partition+0x154/0x244
      kernel:  rescan_partitions+0xae/0x280
      kernel:  __blkdev_get+0x40f/0x560
      kernel:  blkdev_get+0x3d/0x140
      kernel:  __device_add_disk+0x388/0x480
      kernel:  device_add_disk+0x13/0x20
      kernel:  nvme_mpath_set_live+0x119/0x140 [nvme_core]
      kernel:  nvme_update_ns_ana_state+0x5c/0x60 [nvme_core]
      kernel:  nvme_set_ns_ana_state+0x1e/0x30 [nvme_core]
      kernel:  nvme_parse_ana_log+0xa1/0x180 [nvme_core]
      kernel:  nvme_mpath_add_disk+0x47/0x90 [nvme_core]
      kernel:  nvme_validate_ns+0x396/0x940 [nvme_core]
      kernel:  nvme_scan_work+0x24f/0x380 [nvme_core]
      kernel:  process_one_work+0x1db/0x380
      kernel:  worker_thread+0x249/0x400
      kernel:  kthread+0x104/0x140
      --
      
      4) Delete also hangs in flush_work(ctrl->scan_work)
          from nvme_remove_namespaces().
      
      Similarly, a deadlock with ana_work may happen: if ana_work has started
      and calls nvme_mpath_set_live() and device_add_disk(), it will
      trigger I/O. When we then trigger a disconnect, the I/O will block because
      our accessible (optimized) path is disconnecting while the alternate
      path is inaccessible, so the I/O blocks. The disconnect then tries to
      flush ana_work and hangs.
      
      [  605.550896] Workqueue: nvme-wq nvme_ana_work [nvme_core]
      [  605.552087] Call Trace:
      [  605.552683]  __schedule+0x2b9/0x6c0
      [  605.553507]  schedule+0x42/0xb0
      [  605.554201]  io_schedule+0x16/0x40
      [  605.555012]  do_read_cache_page+0x438/0x830
      [  605.556925]  read_cache_page+0x12/0x20
      [  605.557757]  read_dev_sector+0x27/0xc0
      [  605.558587]  amiga_partition+0x4d/0x4c5
      [  605.561278]  check_partition+0x154/0x244
      [  605.562138]  rescan_partitions+0xae/0x280
      [  605.563076]  __blkdev_get+0x40f/0x560
      [  605.563830]  blkdev_get+0x3d/0x140
      [  605.564500]  __device_add_disk+0x388/0x480
      [  605.565316]  device_add_disk+0x13/0x20
      [  605.566070]  nvme_mpath_set_live+0x5e/0x130 [nvme_core]
      [  605.567114]  nvme_update_ns_ana_state+0x2c/0x30 [nvme_core]
      [  605.568197]  nvme_update_ana_state+0xca/0xe0 [nvme_core]
      [  605.569360]  nvme_parse_ana_log+0xa1/0x180 [nvme_core]
      [  605.571385]  nvme_read_ana_log+0x76/0x100 [nvme_core]
      [  605.572376]  nvme_ana_work+0x15/0x20 [nvme_core]
      [  605.573330]  process_one_work+0x1db/0x380
      [  605.574144]  worker_thread+0x4d/0x400
      [  605.574896]  kthread+0x104/0x140
      [  605.577205]  ret_from_fork+0x35/0x40
      [  605.577955] INFO: task nvme:14044 blocked for more than 120 seconds.
      [  605.579239]       Tainted: G           OE     5.3.5-050305-generic #201910071830
      [  605.580712] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      [  605.582320] nvme            D    0 14044  14043 0x00000000
      [  605.583424] Call Trace:
      [  605.583935]  __schedule+0x2b9/0x6c0
      [  605.584625]  schedule+0x42/0xb0
      [  605.585290]  schedule_timeout+0x203/0x2f0
      [  605.588493]  wait_for_completion+0xb1/0x120
      [  605.590066]  __flush_work+0x123/0x1d0
      [  605.591758]  __cancel_work_timer+0x10e/0x190
      [  605.593542]  cancel_work_sync+0x10/0x20
      [  605.594347]  nvme_mpath_stop+0x2f/0x40 [nvme_core]
      [  605.595328]  nvme_stop_ctrl+0x12/0x50 [nvme_core]
      [  605.596262]  nvme_do_delete_ctrl+0x3f/0x90 [nvme_core]
      [  605.597333]  nvme_sysfs_delete+0x5c/0x70 [nvme_core]
      [  605.598320]  dev_attr_store+0x17/0x30
      
      Fix this by introducing a new state: NVME_CTRL_DELETING_NOIO, which
      indicates the phase of controller deletion where I/O cannot be allowed
      to access the namespace. NVME_CTRL_DELETING still allows mpath I/O to
      be issued to the bottom device, and only after we flush the ana_work
      and scan_work (after nvme_stop_ctrl and nvme_prep_remove_namespaces)
      do we change the state to NVME_CTRL_DELETING_NOIO. We also prevent
      ana_work from re-firing by aborting early if we are not LIVE, so we
      should be safe here.
      
      In addition, change the transport drivers to follow the updated state
      machine.
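      A hedged sketch of the deletion ordering this establishes (names taken
      from the description above, not the literal diff):

          nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING); /* mpath I/O still flows */
          nvme_stop_ctrl(ctrl);                   /* flush ana_work and scan_work */
          /* ... nvme_prep_remove_namespaces() ... */
          nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING_NOIO); /* block ns I/O */
          nvme_remove_namespaces(ctrl);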
      
      Fixes: 0d0b660f ("nvme: add ANA support")
      Reported-by: Anton Eidelman <anton@lightbitslabs.com>
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme: document nvme controller states · 4212f4e9
      Sagi Grimberg authored
      We are starting to see some non-trivial states,
      so let's start documenting them.
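      An illustrative (not verbatim) example of the kind of inline
      documentation added, using state names that appear elsewhere in this
      series (NVME_CTRL_DELETING_NOIO comes from the deadlock fix above):

          enum nvme_ctrl_state {
                  NVME_CTRL_NEW,
                  NVME_CTRL_LIVE,          /* fully functional, I/O may flow */
                  NVME_CTRL_RESETTING,     /* controller reset in progress */
                  NVME_CTRL_CONNECTING,    /* (re)establishing the transport */
                  NVME_CTRL_DELETING,      /* teardown started; mpath I/O allowed */
                  NVME_CTRL_DELETING_NOIO, /* works flushed; namespace I/O blocked */
                  NVME_CTRL_DEAD,          /* unrecoverable */
          };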
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvmet: use xarray for ctrl ns storing · 7774e77e
      Chaitanya Kulkarni authored
      This patch replaces the ctrl->namespaces tracking from a linked list
      to an XArray and improves the performance when accessing a single
      namespace :-
      
      XArray vs Default:-
      
      IOPS and BW (more the better) increase BW (~1.8%):-
      ---------------------------------------------------
      
       XArray :-
        read:  IOPS=160k,  BW=626MiB/s  (656MB/s)(18.3GiB/30001msec)
        read:  IOPS=160k,  BW=626MiB/s  (656MB/s)(18.3GiB/30001msec)
        read:  IOPS=162k,  BW=631MiB/s  (662MB/s)(18.5GiB/30001msec)
      
       Default:-
        read:  IOPS=156k,  BW=609MiB/s  (639MB/s)(17.8GiB/30001msec)
        read:  IOPS=157k,  BW=613MiB/s  (643MB/s)(17.0GiB/30001msec)
        read:  IOPS=160k,  BW=626MiB/s  (656MB/s)(18.3GiB/30001msec)
      
      Submission latency (less the better) decrease (~8.3%):-
      -------------------------------------------------------
      
       XArray:-
        slat  (usec):  min=7,  max=8386,  avg=11.19,  stdev=5.96
        slat  (usec):  min=7,  max=441,   avg=11.09,  stdev=4.48
        slat  (usec):  min=7,  max=1088,  avg=11.21,  stdev=4.54
      
       Default :-
        slat  (usec):  min=8,   max=2826.5k,  avg=23.96,  stdev=3911.50
        slat  (usec):  min=8,   max=503,      avg=12.52,  stdev=5.07
        slat  (usec):  min=8,   max=2384,     avg=12.50,  stdev=5.28
      
      CPU Usage (less the better) decrease (~5.2%):-
      ----------------------------------------------
      
       XArray:-
        cpu  :  usr=1.84%,  sys=18.61%,  ctx=949471,  majf=0,  minf=250
        cpu  :  usr=1.83%,  sys=18.41%,  ctx=950262,  majf=0,  minf=237
        cpu  :  usr=1.82%,  sys=18.82%,  ctx=957224,  majf=0,  minf=234
      
       Default:-
        cpu  :  usr=1.70%,  sys=19.21%,  ctx=858196,  majf=0,  minf=251
        cpu  :  usr=1.82%,  sys=19.98%,  ctx=929720,  majf=0,  minf=227
        cpu  :  usr=1.83%,  sys=20.33%,  ctx=947208,  majf=0,  minf=235
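      The main win comes from constant-time namespace lookup. A hedged sketch
      of the lookup after the conversion, assuming the nsid is used directly
      as the XArray index:

          struct nvmet_ns *ns;

          /* O(1) lookup instead of walking a ctrl->namespaces list */
          ns = xa_load(&ctrl->subsys->namespaces,
                       le32_to_cpu(req->cmd->common.nsid));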
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvmet-rdma: use new shared CQ mechanism · ca0f1a80
      Yamin Friedman authored
      Have the driver use shared CQs, providing a ~10%-20% improvement when
      multiple disks are used. Instead of opening a CQ for each QP per
      controller, a CQ for each core will be provided by the RDMA core driver
      and shared between the QPs on that core, reducing interrupt
      overhead.
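      A hedged sketch of acquiring a CQ from the RDMA core's shared pool via
      the ib_cq_pool_get()/ib_cq_pool_put() API (field names are assumptions):

          queue->cq = ib_cq_pool_get(ndev->device, nr_cqe + 1,
                                     queue->comp_vector, IB_POLL_WORKQUEUE);
          if (IS_ERR(queue->cq))
                  return PTR_ERR(queue->cq);
          /* ... and on teardown: */
          ib_cq_pool_put(queue->cq, nr_cqe + 1);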
      Signed-off-by: Yamin Friedman <yaminf@mellanox.com>
      Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
      Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-rdma: use new shared CQ mechanism · 287f329e
      Yamin Friedman authored
      Have the driver use shared CQs, providing the ~10%-20% improvement seen
      in the patch introducing shared CQs. Instead of opening a CQ for each
      QP per connected controller, a CQ for each core will be provided by the
      RDMA core driver and shared between the QPs on that core, reducing
      interrupt overhead.
      Signed-off-by: Yamin Friedman <yaminf@mellanox.com>
      Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
      Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-pci: add support for ACPI StorageD3Enable property · df4f9bc4
      David E. Box authored
      This patch implements a solution for a BIOS hack used on some currently
      shipping Intel systems to change driver power management policy for PCIe
      NVMe drives. Some newer Intel platforms, like some Comet Lake systems,
      require that PCIe devices use D3 when doing suspend-to-idle in order to
      allow the platform to realize maximum power savings. This is particularly
      needed to support ATX power supply shutdown on desktop systems. In order to
      ensure this happens for root ports with storage devices, Microsoft
      apparently created this ACPI _DSD property as a way to influence their
      driver policy. To my knowledge this property has not been discussed with
      the NVME specification body.
      
      Though the solution is not ideal, it addresses a problem that also affects
      Linux since the NVMe driver's default policy of using NVMe APST during
      suspend-to-idle prevents the PCI root port from going to D3 and leads to
      higher power consumption for these platforms. The power consumption
      difference may be negligible on laptop systems, but many watts on desktop
      systems when the ATX power supply is blocked from powering down.
      
      The patch creates a new nvme_acpi_storage_d3 function to check for the
      StorageD3Enable property during probe and enables D3 as a quirk if set.  It
      also provides a 'noacpi' module parameter to allow skipping the quirk if
      needed.
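      A minimal sketch of the probe-time check, assuming the property can be
      read from the device's ACPI companion through the generic fwnode
      property API (the real lookup may need to walk up to the PCIe root
      port):

          static bool nvme_acpi_storage_d3(struct pci_dev *dev)
          {
                  struct acpi_device *adev = ACPI_COMPANION(&dev->dev);
                  u8 val;

                  if (!adev)
                          return false;
                  /* BIOS sets StorageD3Enable = 1 when D3 is required
                   * for suspend-to-idle */
                  if (fwnode_property_read_u8(acpi_fwnode_handle(adev),
                                              "StorageD3Enable", &val))
                          return false;
                  return val == 1;
          }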
      
      Tested with:
       - PM961 NVMe SED Samsung 512GB
       - INTEL SSDPEKKF512G8
      
      Link: https://docs.microsoft.com/en-us/windows-hardware/design/component-guidelines/power-management-for-storage-hardware-devices-intro
      Signed-off-by: David E. Box <david.e.box@linux.intel.com>
      Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-pci: use max of PRP or SGL for iod size · b13c6393
      Chaitanya Kulkarni authored
      From the initial implementation of NVMe SGL kernel support in
      commit a7a7cbe3 ("nvme-pci: add SGL support"), and with the addition of
      commit 943e942e ("nvme-pci: limit max IO size and segments to avoid
      high order allocations"), there is now only one caller left for
      nvme_pci_iod_alloc_size(), which statically passes true for the last
      parameter so that the allocation size is calculated based on SGLs,
      since we need the size of the biggest command supported for the
      mempool allocation.
      
      This patch modifies the helper function nvme_pci_iod_alloc_size() so
      that it now uses the maximum of the PRP and SGL sizes when calculating
      the iod allocation size.
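      A hedged sketch of the resulting helper, assuming existing per-transfer
      page-count helpers for both mapping formats:

          static size_t nvme_pci_iod_alloc_size(void)
          {
                  size_t npages = max(nvme_pci_npages_prp(),
                                      nvme_pci_npages_sgl());

                  /* worst case: one descriptor page per page of transfer,
                   * plus the inline scatterlist */
                  return sizeof(__le64 *) * npages +
                         sizeof(struct scatterlist) * NVME_MAX_SEGS;
          }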
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme-core: replace ctrl page size with a macro · 6c3c05b0
      Chaitanya Kulkarni authored
      Saving the nvme controller's page size was from a time when the driver
      tried to use different sized pages, but this value is always set to
      a constant, and has been this way for some time. Remove the 'page_size'
      field and replace its usage with the constant value.
      
      This also lets the compiler make some micro-optimizations in the io
      path, and that's always a good thing.
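      The constant in question is the fixed 4KB controller page size; a
      hedged sketch of the macro pair replacing the field:

          #define NVME_CTRL_PAGE_SHIFT    12
          #define NVME_CTRL_PAGE_SIZE     (1 << NVME_CTRL_PAGE_SHIFT)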
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme: remove redundant validation in nvme_start_ctrl() · 5887450b
      Baolin Wang authored
      We've already validated the 'kato' in nvme_start_keep_alive(), thus no
      need to validate it again in nvme_start_ctrl(). Remove it.
      Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
    • nvme: remove an unnecessary condition · eca9e827
      Dan Carpenter authored
      "v" is an unsigned int so it can't be more than UINT_MAX.  Removing this
      check makes it easier to preserve the error code as well.
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
  2. 28 Jul, 2020 1 commit
  3. 25 Jul, 2020 23 commits
    • bcache: fix bio_{start,end}_io_acct with proper device · a2f32ee8
      Coly Li authored
      Commit 85750aeb ("bcache: use bio_{start,end}_io_acct") moved the
      I/O accounting code after bio_set_dev(bio, dc->bdev) in
      cached_dev_make_request(). As a result, the accounting is performed
      incorrectly against the backing device, when the I/O should be counted
      against the bcache device, e.g. /dev/bcache0.
      
      With the mistaken I/O accounting, iostat does not display I/O counts for
      the bcache device and all the numbers go to the backing device. In
      writeback mode, the hard drive may show 340K+ IOPS, which is impossible
      and wrong for a spinning disk.
      
      This patch introduces bch_bio_start_io_acct() and bch_bio_end_io_acct(),
      which switch bio->bi_disk to the bcache device before calling
      bio_start_io_acct() or bio_end_io_acct(). Now the I/Os are counted
      against the bcache device, and the bcache device, cache device and
      backing device all have their correct I/O count information back.
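      A hedged sketch of the start-side wrapper, assuming the 5.8-era
      bio_start_io_acct(bio) signature and the bio->bi_disk field:

          static unsigned long bch_bio_start_io_acct(struct gendisk *bcache_disk,
                                                     struct bio *bio)
          {
                  struct gendisk *orig = bio->bi_disk;
                  unsigned long start_time;

                  bio->bi_disk = bcache_disk;          /* account to /dev/bcacheN */
                  start_time = bio_start_io_acct(bio);
                  bio->bi_disk = orig;                 /* restore the real target */
                  return start_time;
          }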
      
      Fixes: 85750aeb ("bcache: use bio_{start,end}_io_acct")
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: stable@vger.kernel.org
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: avoid extra memory consumption in struct bbio for large bucket size · 4e4d4e09
      Coly Li authored
      Bcache uses struct bbio to do I/Os for meta data pages like uuids,
      disk_buckets, prio_buckets, and btree nodes.
      
      For example, when writing a btree node to the cache device, the process is:
      - Allocate a struct bbio from mempool c->bio_meta.
      - Initialize bi_inline_vecs for the struct bio embedded in the bbio.
      - Call bch_bio_map() to map each meta data page to a bvec from the
        inline bi_io_vec table.
      - Call bch_submit_bbio() to submit the bio to the underlying block layer.
      - When the I/O completes, only release the struct bbio; don't touch the
        reference counters of the meta data pages.
      
      The struct bbio is defined as,
      738 struct bbio {
      739     unsigned int            submit_time_us;
      	[snipped]
      748     struct bio              bio;
      749 };
      
      Because struct bio is embedded at the end of struct bbio, the actual
      size of struct bbio is sizeof(struct bio) plus the size of the embedded
      bio->bi_inline_vecs.
      
      Now all meta data bucket sizes are limited by meta_bucket_pages(); if
      the bucket size is larger than meta_bucket_pages() * PAGE_SECTORS, the
      rest of the space in the bucket is unused. Therefore the most space
      used in a meta bucket is (1<<MAX_ORDER) pages, or
      (1<<CONFIG_FORCE_MAX_ZONEORDER) if it is configured.
      
      Therefore for large bucket sizes, it is unnecessary to calculate the
      allocation size of mempool c->bio_meta as,
      	mempool_init_kmalloc_pool(&c->bio_meta, 2,
      			sizeof(struct bbio) +
      			sizeof(struct bio_vec) * bucket_pages(c))
      It is too large: the Linux buddy allocator cannot allocate that many
      continuous pages, and the extra allocated pages would be wasted anyway.
      
      This patch replaces bucket_pages() with meta_bucket_pages() in two places,
      - In bch_cache_set_alloc(), when initializing mempool c->bio_meta, use
        sizeof(struct bbio) + sizeof(struct bio_vec) * meta_bucket_pages(c)
        as the allocation object size.
      - In bch_bbio_alloc(), when calling bio_init() to set the inline bvec
        table bi_inline_vecs, use meta_bucket_pages() as the number of
        inline bio vecs.
      
      Now the maximum size of the embedded bio inside struct bbio exactly
      matches the limit of meta_bucket_pages(), and no extra pages are wasted.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: avoid extra memory allocation from mempool c->fill_iter · 6907dc49
      Coly Li authored
      Mempool c->fill_iter is used to allocate memory for struct btree_iter in
      bch_btree_node_read_done() to iterate all keys of a read-in btree node.
      
      The allocation size is defined in bch_cache_set_alloc() by,
        mempool_init_kmalloc_pool(&c->fill_iter, 1, iter_size))
      where iter_size is defined by a calculation,
        (sb->bucket_size / sb->block_size + 1) * sizeof(struct btree_iter_set)
      
      For a 16bit-wide bucket_size the calculation is fine, but now that the
      bucket size is extended to 32bit, the bucket size can be 2GB. By the
      above calculation, iter_size can be 2048 pages (order 11 is still
      accepted by the buddy allocator).
      
      But the actual size that holds the bkeys in a meta data bucket is
      already limited to meta_bucket_pages(), which is 16MB. By the above
      calculation, if sb->bucket_size is replaced by
      meta_bucket_pages() * PAGE_SECTORS, the result is 16 pages. This is
      large enough for the mempool allocation of struct btree_iter.
      
      Therefore in the worst case, every time mempool c->fill_iter allocates,
      up to 4080 pages are wasted and never used. This patch uses
      meta_bucket_pages() * PAGE_SECTORS to calculate the iter size in
      bch_cache_set_alloc(), to avoid the extra memory allocation from
      mempool c->fill_iter.
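      A hedged sketch of the corrected iter_size calculation:

          /* bound by the usable meta bucket size, not the raw bucket size */
          iter_size = ((meta_bucket_pages(sb) * PAGE_SECTORS) / sb->block_size + 1) *
                      sizeof(struct btree_iter_set);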
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: add sysfs file to display feature sets information of cache set · 092bd54d
      Coly Li authored
      The following three sysfs files are created to display the feature set
      information of the cache set:
      	/sys/fs/bcache/<cache set UUID>/internal/feature_compat
      	/sys/fs/bcache/<cache set UUID>/internal/feature_ro_compat
      	/sys/fs/bcache/<cache set UUID>/internal/feature_incompat
      
      Currently only the incompat feature 'large_bucket' has been added to
      bcache, so the sysfs file content is:
              [large_bucket]
      The string large_bucket means the running bcache supports the incompat
      feature 'large_bucket'; the wrapping [] means the 'large_bucket' feature
      is currently enabled on this cache set.
      
      This patch is also ready to display compat and ro_compat features; once
      bcache implements such feature sets in the future, the corresponding
      feature strings will be displayed in their sysfs files too.
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: add bucket_size_hi into struct cache_sb_disk for large bucket · ffa47032
      Coly Li authored
      The large bucket feature extends bucket_size from 16bit to 32bit.
      
      When creating a cache device on a zoned device (e.g. a zoned NVMe SSD),
      making a single bucket cover one or more zones of the zoned device is
      the simplest way for bcache to support a zoned device as cache.
      
      But the current maximum bucket size is 16MB while a typical zone size
      of a zoned device is 256MB; this is the major motivation to extend the
      bucket size to a larger bit width.
      
      This patch is the basic and first change to support large bucket sizes;
      the major changes it makes are,
      - Add BCH_FEATURE_INCOMPAT_LARGE_BUCKET for the large bucket feature;
        INCOMPAT means it introduces an incompatible on-disk format change.
      - Add the BCH_FEATURE_INCOMPAT_FUNCS(large_bucket, LARGE_BUCKET)
        routines.
      - Add __le16 bucket_size_hi to struct cache_sb_disk at offset 0x8d0
        in the on-disk super block format.
      - For the in-memory super block struct cache_sb, extend member
        bucket_size from __u16 to __u32.
      - Add get_bucket_size() to combine bucket_size and bucket_size_hi
        from struct cache_sb_disk into an unsigned int value (see the
        sketch below).
      
      Since we already have the large-bucket-aware helpers meta_bucket_pages(),
      meta_bucket_bytes() and alloc_meta_bucket_pages(), they make sure that
      when the bucket size is > 8MB, the memory allocation for bcache meta
      data buckets won't fail no matter how far the bucket size is extended.
      So these meta data buckets are handled properly when the bucket size
      width increases from 16bit to 32bit; we don't need to worry about them.
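      A hedged sketch of get_bucket_size(), assuming the feature-test helper
      generated by BCH_FEATURE_INCOMPAT_FUNCS():

          static inline unsigned int get_bucket_size(struct cache_sb *sb,
                                                     struct cache_sb_disk *s)
          {
                  unsigned int bucket_size = le16_to_cpu(s->bucket_size);

                  if (sb->version >= BCACHE_SB_VERSION_CDEV_WITH_FEATURES &&
                      bch_has_feature_large_bucket(sb))
                          bucket_size |= le16_to_cpu(s->bucket_size_hi) << 16;

                  return bucket_size;
          }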
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: handle btree node memory allocation properly for bucket size > 8MB · f9c32a5a
      Coly Li authored
      Currently a bcache internal btree node occupies a whole bucket. When
      loading a btree node from the cache device into memory, mca_data_alloc()
      calls bch_btree_keys_alloc() to allocate memory for the whole bucket
      size; ilog2(b->c->btree_pages) is sent to bch_btree_keys_alloc() as the
      parameter 'page_order'.
      
      c->btree_pages is set to bucket_pages() in bch_cache_set_alloc(); for a
      bucket size > 8MB, ilog2(b->c->btree_pages) is 12 for a 4KB page size.
      By default the maximum page order __get_free_pages() accepts is
      MAX_ORDER (11), so in this condition bch_btree_keys_alloc() will
      always fail.
      
      Because other over-page-order allocation failures already fail the
      cache device registration, this btree node allocation failure wasn't
      observed at runtime. Once those other blocking page allocation failures
      for bucket size > 8MB are fixed, this btree node allocation issue may
      trigger potential risks, e.g. an infinite loop retrying btree node
      allocation after failure.
      
      This patch fixes the potential problem by setting c->btree_pages to
      meta_bucket_pages() in bch_cache_set_alloc(). When the bucket size is
      > 8MB, meta_bucket_pages() will always return a number that does not
      exceed the maximum page order of the buddy allocator.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: handle cache set verify_ondisk properly for bucket size > 8MB · bf6af170
      Coly Li authored
      In bch_btree_cache_alloc(), when CONFIG_BCACHE_DEBUG is configured,
      allocating memory for c->verify_ondisk may fail if the bucket size is
      > 8MB, which requires __get_free_pages() to allocate continuous pages
      with order > 11 (the default MAX_ORDER of the Linux buddy allocator).
      Such an oversized allocation will fail and cause two problems,
      - When CONFIG_BCACHE_DEBUG is configured, bch_btree_verify() does not
        work, because c->verify_ondisk is NULL and bch_btree_verify() returns
        immediately.
      - bch_btree_cache_alloc() will fail due to the failed c->verify_ondisk
        allocation, and then the whole cache device registration fails.
        Because of this failure, the first problem with bch_btree_verify()
        never has a chance to be triggered.
      
      This patch fixes the above problems by two means,
      1) If the page allocation for c->verify_ondisk fails, set it to NULL
         and return -ENOMEM from bch_btree_cache_alloc().
      2) When calling __get_free_pages() to allocate c->verify_ondisk pages,
         use ilog2(meta_bucket_pages(&c->sb)) to make sure ilog2() always
         generates a page order <= MAX_ORDER (or CONFIG_FORCE_MAX_ZONEORDER),
         so the buddy system won't directly reject the allocation request.
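      A hedged sketch of the allocation after the fix:

          c->verify_ondisk = (void *)
                  __get_free_pages(GFP_KERNEL | __GFP_ZERO,
                                   ilog2(meta_bucket_pages(&c->sb)));
          if (!c->verify_ondisk)
                  goto err;  /* bch_btree_cache_alloc() returns -ENOMEM */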
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: handle cache prio_buckets and disk_buckets properly for bucket size > 8MB · c954ac8d
      Coly Li authored
      Similar to c->uuids, struct cache's prio_buckets and disk_buckets also
      have potential memory allocation failures during cache registration
      if the bucket size is > 8MB.
      
      ca->prio_buckets can be stored on the cache device in multiple buckets;
      its in-memory space is allocated through the kzalloc() interface but is
      normally served by alloc_pages() because the size exceeds
      KMALLOC_MAX_CACHE_SIZE.
      
      So the allocation of ca->prio_buckets has the MAX_ORDER restriction too.
      If the bucket size is > 8MB, by default the page allocator will fail
      because the page order > 11 (the default MAX_ORDER value).
      ca->prio_buckets should also use meta_bucket_bytes() and
      meta_bucket_pages() to decide its memory size, and use
      alloc_meta_bucket_pages() to allocate pages, to avoid the allocation
      failure during cache set registration when the bucket size is > 8MB.
      
      ca->disk_buckets is a single-bucket-sized memory buffer. It is used to
      iterate over each bucket of ca->prio_buckets, to compose a bio based on
      the memory of ca->disk_buckets, and then to write that memory to the
      cache disk one-by-one for each bucket of ca->prio_buckets.
      ca->disk_buckets should have an in-memory size of exactly
      meta_bucket_pages(); this is the size at which ca->prio_buckets is
      stored into each on-disk bucket.
      
      This patch fixes the above issues and handles the cache's prio_buckets
      and disk_buckets properly for bucket sizes larger than 8MB.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: handle c->uuids properly for bucket size > 8MB · 21e478dd
      Coly Li authored
      Bcache allocates a whole bucket to store c->uuids on the cache device,
      and allocates continuous pages to store it in memory. When the bucket
      size exceeds the maximum number of allocatable continuous pages,
      bch_cache_set_alloc() will fail and cache device registration will fail.
      
      This patch allocates c->uuids with alloc_meta_bucket_pages(), and uses
      ilog2(meta_bucket_pages(c)) as the page order when freeing it. When
      writing c->uuids to the cache device, its size is decided by
      meta_bucket_pages(c) * PAGE_SECTORS. Now c->uuids is properly handled
      for bucket sizes > 8MB.
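      A hedged sketch of the paired allocate/free, using the helpers from the
      earlier patch in this series:

          /* allocate: the page order is bounded by meta_bucket_pages() */
          c->uuids = alloc_meta_bucket_pages(GFP_KERNEL, &c->sb);

          /* free with the matching order */
          free_pages((unsigned long) c->uuids,
                     ilog2(meta_bucket_pages(&c->sb)));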
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: introduce meta_bucket_pages() related helper routines · de1fafab
      Coly Li authored
      Currently in-memory meta data like c->uuids or c->disk_buckets is
      allocated by alloc_bucket_pages(). The macro alloc_bucket_pages() calls
      __get_free_pages() to allocate continuous pages with the order
      indicated by ilog2(bucket_pages(c)),
       #define alloc_bucket_pages(gfp, c)                      \
           ((void *) __get_free_pages(__GFP_ZERO|gfp, ilog2(bucket_pages(c))))
      
      The maximum order is defined as MAX_ORDER; the default value is 11 (and
      can be overridden by CONFIG_FORCE_MAX_ZONEORDER). In the bcache code the
      maximum bucket size width is 16bits; this is restricted both by the
      KEY_SIZE width and by the bucket_size field of struct cache_sb_disk.
      The maximum power-of-2 value that fits in 16bits is (1<<15) in units of
      sectors (512 bytes), which means the maximum bucket size in bytes is
      (1<<24) bytes, a.k.a. 4096 pages.
      
      When the bucket size is set to the maximum permitted value, ilog2(4096)
      is 12, which exceeds the default maximum order __get_free_pages() can
      accept. The failed page allocation fails the cache set registration
      procedure and prints a kernel oops message for the excessive page order.
      
      This patch introduces the meta_bucket_pages(), meta_bucket_bytes() and
      alloc_meta_bucket_pages() helper routines. meta_bucket_pages() indicates
      the maximum number of pages that can be allocated for a meta data
      bucket, meta_bucket_bytes() indicates the corresponding maximum bytes,
      and alloc_meta_bucket_pages() does the page allocation for a meta
      bucket. Because meta_bucket_pages() chooses the smaller value of the
      bucket size and MAX_ORDER_NR_PAGES, it still works when MAX_ORDER is
      overridden by CONFIG_FORCE_MAX_ZONEORDER.
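      A hedged sketch of the helpers as described above (whether they take
      the cache set or its struct cache_sb is an assumption here; later
      patches call them as meta_bucket_pages(&c->sb)):

          static inline unsigned int meta_bucket_pages(struct cache_sb *sb)
          {
                  /* never exceed what the buddy allocator can hand out */
                  return min_t(unsigned int, bucket_pages(sb),
                               MAX_ORDER_NR_PAGES);
          }

          static inline unsigned int meta_bucket_bytes(struct cache_sb *sb)
          {
                  return meta_bucket_pages(sb) << PAGE_SHIFT;
          }

          #define alloc_meta_bucket_pages(gfp, sb)                        \
                  ((void *) __get_free_pages(__GFP_ZERO|(gfp),            \
                                             ilog2(meta_bucket_pages(sb))))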
      
      Following patches will use these helper routines to decide the maximum
      number of pages that can be allocated for different meta data buckets.
      If the bucket size is larger than meta_bucket_bytes(), bcache
      registration can still succeed; the space beyond meta_bucket_bytes()
      inside the bucket is simply wasted. Compared with bcache failing
      outright for large bucket sizes, wasting some space in meta data
      buckets is acceptable at this moment.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: struct cache_sb is only for in-memory super block now · 4c1ccd08
      Coly Li authored
      We already have struct cache_sb_disk for the on-disk super block; it is
      unnecessary to keep the in-memory super block format exactly mapping
      the on-disk struct layout.
      
      This patch adds code comments to note that struct cache_sb does not
      exactly map to cache_sb_disk, and removes the useless members csum
      and pad[5].
      
      Although struct cache_sb does not belong to the uapi, some on-disk
      format related macros still reference it, and it is unnecessary to get
      rid of such dependencies now. So struct cache_sb will continue to stay
      in include/uapi/linux/bcache.h for now.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: move bucket related code into read_super_common() · 198efa35
      Coly Li authored
      Setting sb->first_bucket and checking sb->keys are indeed only needed
      for a cache device; it does not make sense to do them in read_super()
      for a backing device too.
      
      This patch moves the related code piece into read_super_common(),
      explicitly for cache devices, to avoid the confusion.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: increase super block version for cache device and backing device · d721a43f
      Coly Li authored
      The newly added super block versions BCACHE_SB_VERSION_BDEV_WITH_FEATURES
      (value 5) and BCACHE_SB_VERSION_CDEV_WITH_FEATURES (value 6) are for the
      feature set bits.
      
      Devices whose super block version equals one of the new versions will
      have three new members for the feature set bits in the on-disk super
      block,
              __le64                  feature_compat;
              __le64                  feature_incompat;
              __le64                  feature_ro_compat;
      
      They are used for further new features which may introduce an on-disk
      format change, and they avoid unnecessary super block version increases.
      
      The very basic feature handling code skeleton is also initialized in
      this patch.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: fix super block seq numbers comparision in register_cache_set() · 117f636e
      Coly Li authored
      In register_cache_set(), c is a pointer to struct cache_set, and ca is
      a pointer to struct cache. If ca->sb.seq > c->sb.seq, the registering
      cache has a more up-to-date version and other members, and the in-
      memory version and other members should be updated to the newer values.
      
      But the current implementation gives a cache set only a single cache
      device, so the above assumption works well except for one special case.
      The exception is a newly created cache device where both ca->sb.seq and
      c->sb.seq are 0, because the super block has never been flushed out
      yet. In the location of the following if() check,
      2156         if (ca->sb.seq > c->sb.seq) {
      2157                 c->sb.version           = ca->sb.version;
      2158                 memcpy(c->sb.set_uuid, ca->sb.set_uuid, 16);
      2159                 c->sb.flags             = ca->sb.flags;
      2160                 c->sb.seq               = ca->sb.seq;
      2161                 pr_debug("set version = %llu\n", c->sb.version);
      2162         }
      c->sb.version is not yet initialized and has value 0. When ca->sb.seq
      is 0, the if() check will fail (because both values are 0), and the
      cache set's version, set_uuid, flags and seq won't be updated.
      
      The above problem is hidden in the current code, because the bucket
      size is compatible among the different super block versions. And the
      next time the cache set runs again, ca->sb.seq will be larger than 0
      and the cache set super block version will be updated properly.
      
      But if the large bucket feature is enabled, sb->bucket_size holds only
      the low 16bits of the bucket size. For a power-of-2 value, when the
      actual bucket size exceeds the 16bit width, sb->bucket_size will always
      be 0. Then read_super_common() will fail because the
      is_power_of_2(sb->bucket_size) check is false. This is how the
      long-hidden bug is triggered.
      
      This patch modifies the if() check in the following way,
      2156         if (ca->sb.seq > c->sb.seq || c->sb.seq == 0) {
      Then the cache set's version, set_uuid, flags and seq will always be
      updated correctly, including for a newly created cache device.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: disassemble the big if() checks in bch_cache_set_alloc() · a42d3c64
      Coly Li authored
      In bch_cache_set_alloc() there is a big if() check combining 11
      conditions. When this big if() statement fails, it is difficult to
      tell exactly which condition failed.
      
      This patch disassembles the big if() check into 11 single if() checks,
      which makes debugging easier.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: add more accurate error information in read_super_common() · c557a5f7
      Coly Li authored
      An improperly set bucket or block size will trigger an error in
      read_super_common(). For large bucket sizes, a more accurate error
      message for an invalid bucket or block size is necessary.
      
      This patch disassembles the combined if() checks into multiple single
      if() checks, and provides a more accurate error message for each
      failure condition.
      Signed-off-by: Coly Li <colyli@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: add read_super_common() to read major part of super block · 5b21403c
      Coly Li authored
      Later patches will introduce feature set bits to the on-disk super
      block and increase the super block version. The current code in
      read_super(), which reads the common part of the super block for
      versions BCACHE_SB_VERSION_CDEV and BCACHE_SB_VERSION_CDEV_WITH_UUID,
      will be shared with the new version.
      
      Therefore this patch moves the reusable part into read_super_common();
      this preparation will make later patches simpler and able to focus
      only on the new feature set bits.
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: fix overflow in offset_to_stripe() · 7a148126
      Coly Li authored
      offset_to_stripe() returns the stripe number (of type unsigned int)
      from an offset (of type uint64_t) by the following calculation,
      	do_div(offset, d->stripe_size);
      For a large-capacity backing device (e.g. 18TB) with a small stripe
      size (e.g. 4KB), the result is 4831838208 and exceeds UINT_MAX. The
      actual value the caller receives is 536870912, due to the overflow.
      
      Indeed in bcache_device_init(), bcache_device->nr_stripes is limited
      to the range [1, INT_MAX]. Therefore all valid stripe numbers in
      bcache are in the range [0, bcache_dev->nr_stripes - 1].
      
      This patch adds an upper limit check in offset_to_stripe(): the max
      valid stripe number should be less than bcache_device->nr_stripes. If
      the calculated stripe number from do_div() is equal to or larger than
      bcache_device->nr_stripes, -EINVAL is returned. (Normally nr_stripes
      is less than INT_MAX; exceeding the upper limit doesn't necessarily
      mean overflow, therefore -EOVERFLOW is not used as the error code.)
      
      This patch also changes the type of nr_stripes in struct bcache_device
      from 'unsigned int' to 'int', and the return value type of
      offset_to_stripe() from 'unsigned int' to 'int', to match their exact
      data ranges.
      
      All locations where bcache_device->nr_stripes and offset_to_stripe()
      are referenced are also updated for the above type change.
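      A hedged sketch of the checked helper:

          static inline int offset_to_stripe(struct bcache_device *d,
                                             uint64_t offset)
          {
                  do_div(offset, d->stripe_size);

                  /* valid stripe numbers are in [0, d->nr_stripes - 1] */
                  if (offset >= d->nr_stripes) {
                          pr_err("invalid stripe %llu (>= nr_stripes %d)\n",
                                 (unsigned long long)offset, d->nr_stripes);
                          return -EINVAL;
                  }
                  return offset;
          }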
      Reported-and-tested-by: Ken Raeburn <raeburn@redhat.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org
      Link: https://bugzilla.redhat.com/show_bug.cgi?id=1783075
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: avoid nr_stripes overflow in bcache_device_init() · 65f0f017
      Coly Li authored
      For some block devices with large capacity (e.g. 8TB) but a small
      io_opt size (e.g. 8 sectors), in bcache_device_init() the stripe
      number calculated by,
      	DIV_ROUND_UP_ULL(sectors, d->stripe_size);
      might overflow the unsigned int bcache_device->nr_stripes.
      
      This patch uses a uint64_t variable to store the result of
      DIV_ROUND_UP_ULL(); after the value is checked to fit in the unsigned
      int range, it is assigned to bcache_device->nr_stripes. Then the
      overflow is avoided.
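      A hedged sketch of the overflow-safe initialization (the max_stripes
      bound here is an assumption):

          const size_t max_stripes = min_t(size_t, INT_MAX,
                                           SIZE_MAX / sizeof(atomic_t));
          uint64_t n = DIV_ROUND_UP_ULL(sectors, d->stripe_size);

          if (!n || n > max_stripes)      /* would overflow nr_stripes */
                  return -ENOMEM;
          d->nr_stripes = n;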
      Reported-and-tested-by: Ken Raeburn <raeburn@redhat.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org
      Link: https://bugzilla.redhat.com/show_bug.cgi?id=1783075
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: Use struct_size() in kzalloc() · 29f1d5ca
      Gustavo A. R. Silva authored
      Make use of the struct_size() helper instead of an open-coded version
      in order to avoid any potential type mistakes.
      
      This code was detected with the help of Coccinelle, and audited and
      fixed manually.
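      An illustrative before/after of the struct_size() pattern (the struct
      and field names here are hypothetical, for illustration only):

          struct foo {
                  int n;
                  struct bar items[];     /* flexible array member */
          };

          /* before: open-coded, prone to type mistakes and overflow */
          p = kzalloc(sizeof(*p) + sizeof(struct bar) * count, GFP_KERNEL);

          /* after: struct_size() computes the same size with overflow
           * checking, taking the element type from the member itself */
          p = kzalloc(struct_size(p, items, count), GFP_KERNEL);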
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: movinggc: Use struct_size() helper in kzalloc() · 6706ad56
      Gustavo A. R. Silva authored
      Make use of the struct_size() helper instead of an open-coded version
      in order to avoid any potential type mistakes.
      
      This code was detected with the help of Coccinelle, and audited and
      fixed manually.
      Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: writeback: Remove unneeded variable i · 7236657c
      Xu Wang authored
      Remove unneeded variable i in bch_dirty_init_thread().
      Signed-off-by: Xu Wang <vulab@iscas.ac.cn>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • bcache: journel: use for_each_clear_bit() to simplify the code · ef4eeb85
      Xu Wang authored
      Use for_each_clear_bit() to simplify the code.
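      An illustrative before/after of the simplification (the loop body is
      hypothetical):

          /* before: open-coded scan for clear bits */
          for (i = 0; i < size; i++)
                  if (!test_bit(i, bitmap))
                          process(i);

          /* after: for_each_clear_bit() hides the test_bit() call */
          for_each_clear_bit(i, bitmap, size)
                  process(i);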
      Signed-off-by: Xu Wang <vulab@iscas.ac.cn>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>