1. 04 Feb, 2021 1 commit
  2. 03 Feb, 2021 2 commits
  3. 02 Feb, 2021 3 commits
  4. 29 Jan, 2021 1 commit
  5. 28 Jan, 2021 7 commits
    • Merge tag 'nvme-5.11-2021-01-28' of git://git.infradead.org/nvme into block-5.11 · e2579c76
      Jens Axboe authored
      Pull NVMe fixes from Christoph:
      
      "nvme fixes for 5.11:
      
       - add another Write Zeroes quirk (Chaitanya Kulkarni)
       - handle a no path available corner case (Daniel Wagner)
       - use the proper RCU aware list_add helper (Chao Leng)"
      
      * tag 'nvme-5.11-2021-01-28' of git://git.infradead.org/nvme:
        nvme-core: use list_add_tail_rcu instead of list_add_tail for nvme_init_ns_head
        nvme-multipath: Early exit if no path is available
        nvme-pci: add the DISABLE_WRITE_ZEROES quirk for a SPCC device
      e2579c76
    • nvme-core: use list_add_tail_rcu instead of list_add_tail for nvme_init_ns_head · 772ea326
      Chao Leng authored
      The "list" of nvme_ns_head is used as rcu list, now in nvme_init_ns_head
      list_add_tail is used to add ns->siblings to the rcu list. It is not safe.
      Should use list_add_tail_rcu instead of list_add_tail.
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      772ea326
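      A minimal, self-contained sketch of the pattern described above (not the
      upstream diff); the struct and function names are hypothetical stand-ins
      for ns->siblings and the nvme_ns_head sibling list, and only illustrate
      why the RCU-aware add helper is required when readers walk the list under
      rcu_read_lock():

          #include <linux/rculist.h>
          #include <linux/rcupdate.h>
          #include <linux/types.h>

          struct example_ns {
              int id;
              struct list_head siblings;   /* linked into an RCU-read list */
          };

          static void example_publish(struct example_ns *ns, struct list_head *head)
          {
              /* list_add_tail(&ns->siblings, head) lacks the publication
               * barrier, so a concurrent RCU reader could see the entry
               * before its fields are fully initialized. */
              list_add_tail_rcu(&ns->siblings, head);
          }

          static bool example_lookup(struct list_head *head, int id)
          {
              struct example_ns *ns;
              bool found = false;

              rcu_read_lock();
              list_for_each_entry_rcu(ns, head, siblings) {
                  if (ns->id == id) {
                      found = true;
                      break;
                  }
              }
              rcu_read_unlock();
              return found;
          }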
    • nvme-multipath: Early exit if no path is available · d1bcf006
      Daniel Wagner authored
      nvme_round_robin_path() should test whether the returned ns pointer is
      valid, because nvme_next_ns() returns a NULL pointer if there is no path
      left.
      
      Fixes: 75c10e73 ("nvme-multipath: round-robin I/O policy")
      Signed-off-by: Daniel Wagner <dwagner@suse.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      d1bcf006
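      A hedged sketch of the shape of the fix; nvme_next_ns(), struct nvme_ns
      and struct nvme_ns_head are the names from the commit message, while the
      function below is an illustrative stand-in, not the real
      nvme_round_robin_path() body:

          /* drivers/nvme/host/multipath.c context is assumed; simplified. */
          static struct nvme_ns *round_robin_sketch(struct nvme_ns_head *head,
                                                    struct nvme_ns *old)
          {
              struct nvme_ns *ns = nvme_next_ns(head, old);

              if (!ns)        /* no path left at all: bail out early */
                  return NULL;

              /* ... otherwise keep iterating the siblings as before ... */
              return ns;
          }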
    • nvme-pci: add the DISABLE_WRITE_ZEROES quirk for a SPCC device · 89919929
      Chaitanya Kulkarni authored
      This adds a quirk for the SPCC 256GB NVMe 1.3 drive, which fixes
      timeouts and I/O errors caused by the controller not handling the
      Write Zeroes command properly:
      
      [ 2745.659527] CPU: 2 PID: 0 Comm: swapper/2 Tainted: G            E 5.10.6-BET #1
      [ 2745.659528] Hardware name: System manufacturer System Product Name/PRIME X570-P, BIOS 3001 12/04/2020
      [ 2776.138874] nvme nvme1: I/O 414 QID 3 timeout, aborting
      [ 2776.138886] nvme nvme1: I/O 415 QID 3 timeout, aborting
      [ 2776.138891] nvme nvme1: I/O 416 QID 3 timeout, aborting
      [ 2776.138895] nvme nvme1: I/O 417 QID 3 timeout, aborting
      [ 2776.138912] nvme nvme1: Abort status: 0x0
      [ 2776.138921] nvme nvme1: I/O 428 QID 3 timeout, aborting
      [ 2776.138922] nvme nvme1: Abort status: 0x0
      [ 2776.138925] nvme nvme1: Abort status: 0x0
      [ 2776.138974] nvme nvme1: Abort status: 0x0
      [ 2776.138977] nvme nvme1: Abort status: 0x0
      [ 2806.346792] nvme nvme1: I/O 414 QID 3 timeout, reset controller
      [ 2806.363566] nvme nvme1: 15/0/0 default/read/poll queues
      [ 2836.554298] nvme nvme1: I/O 415 QID 3 timeout, disable controller
      [ 2836.672064] blk_update_request: I/O error, dev nvme1n1, sector 16350 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672072] blk_update_request: I/O error, dev nvme1n1, sector 16093 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672074] blk_update_request: I/O error, dev nvme1n1, sector 15836 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672076] blk_update_request: I/O error, dev nvme1n1, sector 15579 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672078] blk_update_request: I/O error, dev nvme1n1, sector 15322 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672080] blk_update_request: I/O error, dev nvme1n1, sector 15065 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672082] blk_update_request: I/O error, dev nvme1n1, sector 14808 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672083] blk_update_request: I/O error, dev nvme1n1, sector 14551 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672085] blk_update_request: I/O error, dev nvme1n1, sector 14294 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672087] blk_update_request: I/O error, dev nvme1n1, sector 14037 op 0x9:(WRITE_ZEROES) flags 0x0 phys_seg 0 prio class 0
      [ 2836.672121] nvme nvme1: failed to mark controller live state
      [ 2836.672123] nvme nvme1: Removing after probe failure status: -19
      [ 2836.689016] Aborting journal on device dm-0-8.
      [ 2836.689024] Buffer I/O error on dev dm-0, logical block 25198592, lost sync page write
      [ 2836.689027] JBD2: Error -5 detected when updating journal superblock for dm-0-8.
      Reported-by: Bradley Chapman <chapman6235@comcast.net>
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Tested-by: Bradley Chapman <chapman6235@comcast.net>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      89919929
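      For context, such a quirk is wired up through the PCI ID table in
      drivers/nvme/host/pci.c. The sketch below uses placeholder vendor/device
      IDs (not the real SPCC IDs); NVME_QUIRK_DISABLE_WRITE_ZEROES (from
      drivers/nvme/host/nvme.h) tells the core never to issue Write Zeroes to
      the device:

          #include <linux/pci.h>

          static const struct pci_device_id nvme_id_table_sketch[] = {
              /* 0x1234/0x5678 are placeholders, not the SPCC controller IDs. */
              { PCI_DEVICE(0x1234, 0x5678),
                  .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
              { 0, }
          };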
    • bcache: only check feature sets when sb->version >= BCACHE_SB_VERSION_CDEV_WITH_FEATURES · 0df28cad
      Coly Li authored
      For super block versions older than BCACHE_SB_VERSION_CDEV_WITH_FEATURES,
      it doesn't make sense to check the feature sets. This patch checks the
      super block version in the bch_has_feature_* routines; if the version
      doesn't carry feature sets yet, 0 (false) is returned to the caller.
      
      Fixes: 5342fd42 ("bcache: set bcache device into read-only mode for BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET")
      Fixes: ffa47032 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
      Cc: stable@vger.kernel.org # 5.9+
      Reported-and-tested-by: Bockholdt Arne <a.bockholdt@precitec-optronik.de>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      0df28cad
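      A hedged sketch of the added guard. The real bch_has_feature_*() helpers
      are macro-generated in drivers/md/bcache/features.h; the helper name
      below is illustrative, but the shape of the version check is the point:

          static inline bool bch_has_feature_sketch(struct cache_sb *sb)
          {
              /* Super blocks older than this version carry no feature fields,
               * so any feature query must answer "no". */
              if (sb->version < BCACHE_SB_VERSION_CDEV_WITH_FEATURES)
                  return false;

              return sb->feature_incompat & BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE;
          }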
    • block: fix bd_size_lock use · 0fe37724
      Damien Le Moal authored
      Some block device drivers, e.g. the skd driver, call set_capacity() with
      IRQs disabled. This causes lockdep to complain about inconsistent lock
      states ("inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage") because
      set_capacity() takes the block device bd_size_lock using plain
      spin_lock() and spin_unlock(). Ensure a consistent locking state by
      replacing these calls with spin_lock_irqsave() and
      spin_unlock_irqrestore(). The same applies to bdev_set_nr_sectors().
      With this fix, all lockdep complaints are resolved.
      Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      0fe37724
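      A simplified sketch of the resulting locking pattern (not the exact
      block/genhd.c hunk; field access on struct block_device is shortened):
      the IRQ-saving variants make the lock safe to take from callers that
      already run with interrupts disabled:

          static void set_capacity_sketch(struct block_device *bdev, sector_t sectors)
          {
              unsigned long flags;

              /* Plain spin_lock()/spin_unlock() here is what triggered the
               * lockdep HARDIRQ-ON-W -> IN-HARDIRQ-W report when drivers such
               * as skd called in with IRQs disabled. */
              spin_lock_irqsave(&bdev->bd_size_lock, flags);
              i_size_write(bdev->bd_inode, (loff_t)sectors << SECTOR_SHIFT);
              spin_unlock_irqrestore(&bdev->bd_size_lock, flags);
          }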
    • blk-cgroup: Use cond_resched() when destroy blkgs · 6c635cae
      Baolin Wang authored
      On a !PREEMPT kernel, we can hit the softlockup below when doing stress
      testing that creates and destroys block cgroups repeatedly. The reason is
      that it may take a long time to acquire the queue's lock in the loop of
      blkcg_destroy_blkgs(), or the system can accumulate a huge number of
      blkgs in pathological cases. To avoid this, add a need_resched() check on
      each loop iteration and, if it is true, release the locks and call
      cond_resched(), since blkcg_destroy_blkgs() is not called from atomic
      contexts.
      
      [ 4757.010308] watchdog: BUG: soft lockup - CPU#11 stuck for 94s!
      [ 4757.010698] Call trace:
      [ 4757.010700]  blkcg_destroy_blkgs+0x68/0x150
      [ 4757.010701]  cgwb_release_workfn+0x104/0x158
      [ 4757.010702]  process_one_work+0x1bc/0x3f0
      [ 4757.010704]  worker_thread+0x164/0x468
      [ 4757.010705]  kthread+0x108/0x138
      Suggested-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      6c635cae
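      A simplified sketch of the loop shape the commit describes (not the exact
      blk-cgroup.c code; blkg_destroy() and the struct fields are the internal
      names from that file): because blkcg_destroy_blkgs() runs in sleepable
      context, the lock can be dropped and cond_resched() called whenever
      need_resched() becomes true:

          static void destroy_blkgs_sketch(struct blkcg *blkcg)
          {
              spin_lock_irq(&blkcg->lock);

              while (!hlist_empty(&blkcg->blkg_list)) {
                  if (need_resched()) {
                      /* Sleepable context: yield instead of soft-locking up. */
                      spin_unlock_irq(&blkcg->lock);
                      cond_resched();
                      spin_lock_irq(&blkcg->lock);
                      continue;
                  }

                  /* take blkg->q->queue_lock and destroy one blkg here */
                  blkg_destroy(hlist_entry(blkcg->blkg_list.first,
                                           struct blkcg_gq, blkcg_node));
              }

              spin_unlock_irq(&blkcg->lock);
          }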
  6. 27 Jan, 2021 1 commit
    • Revert "block: simplify set_init_blocksize" to regain lost performance · 8dc932d3
      Maxim Mikityanskiy authored
      The cited commit introduced a serious regression in SATA write speed, as
      found by bisecting. This patch reverts that commit, which restores write
      speed to the values observed before it.
      
      The performance tests were done on a Helios4 NAS (2nd batch) with 4 HDDs
      (WD8003FFBX) using dd (bs=1M count=2000). "Direct" is a test with a
      single HDD, the rest are different RAID levels built over the first
      partitions of 4 HDDs. Test results are in MB/s, R is read, W is write.
      
                      | Direct | RAID0 | RAID10 f2 | RAID10 n2 | RAID6
      ----------------+--------+-------+-----------+-----------+--------
      9011495c    | R:256  | R:313 | R:276     | R:313     | R:323
      (before faulty) | W:254  | W:253 | W:195     | W:204     | W:117
      ----------------+--------+-------+-----------+-----------+--------
      5ff9f192    | R:257  | R:398 | R:312     | R:344     | R:391
      (faulty commit) | W:154  | W:122 | W:67.7    | W:66.6    | W:67.2
      ----------------+--------+-------+-----------+-----------+--------
      5.10.10         | R:256  | R:401 | R:312     | R:356     | R:375
      unpatched       | W:149  | W:123 | W:64      | W:64.1    | W:61.5
      ----------------+--------+-------+-----------+-----------+--------
      5.10.10         | R:255  | R:396 | R:312     | R:340     | R:393
      patched         | W:247  | W:274 | W:220     | W:225     | W:121
      
      Applying this patch doesn't hurt read performance, while improving the
      write speed by 1.5x - 3.5x (with more impact on the RAID tests). The
      write speed is restored to the state before the faulty commit, and is
      even a bit higher in the RAID tests (which aren't HDD-bound on this
      device); that is likely related to other optimizations made between the
      faulty commit and 5.10.10 which also improved the read speed.
      Signed-off-by: Maxim Mikityanskiy <maxtram95@gmail.com>
      Fixes: 5ff9f192 ("block: simplify set_init_blocksize")
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Jens Axboe <axboe@kernel.dk>
      Acked-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      8dc932d3
  7. 25 Jan, 2021 4 commits
  8. 21 Jan, 2021 2 commits
    • lightnvm: fix memory leak when submit fails · 97784481
      Pan Bian authored
      The allocated page is not released if an error occurs in
      nvm_submit_io_sync_raw(). Move __free_page() earlier to avoid a possible
      memory leak.
      
      Fixes: aff3fb18 ("lightnvm: move bad block and chunk state logic to core")
      Signed-off-by: Pan Bian <bianpan2016@163.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      97784481
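      A hedged sketch of the fix's shape; the lightnvm call below is an
      illustrative stand-in (the exact submit_io_sync signature in struct
      nvm_dev_ops may differ), not the real nvm_submit_io_sync_raw() body.
      Freeing the page before checking the result means an error return can no
      longer leak it:

          static int submit_sync_raw_sketch(struct nvm_dev *dev, struct nvm_rq *rqd)
          {
              struct page *page = alloc_page(GFP_KERNEL);
              int ret;

              if (!page)
                  return -ENOMEM;

              /* Illustrative call shape only. */
              ret = dev->ops->submit_io_sync(dev, rqd, page_address(page));

              /* Moved before the error check: freed on both paths now. */
              __free_page(page);

              return ret;
          }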
    • Merge tag 'nvme-5.11-2020-01-21' of git://git.infradead.org/nvme into block-5.11 · 1df35bf0
      Jens Axboe authored
      Pull NVMe fixes from Christoph:
      
      "nvme fixes for 5.11:
      
       - fix a status code in nvmet (Chaitanya Kulkarni)
       - avoid double completions in nvme-rdma/nvme-tcp (Chao Leng)
       - fix the CMB support to cope with NVMe 1.4 controllers (Klaus Jensen)
       - fix PRINFO handling in the passthrough ioctl (Revanth Rajashekar)
       - fix a double DMA unmap in nvme-pci"
      
      * tag 'nvme-5.11-2020-01-21' of git://git.infradead.org/nvme:
        nvme-pci: fix error unwind in nvme_map_data
        nvme-pci: refactor nvme_unmap_data
        nvmet: set right status on error in id-ns handler
        nvme-pci: allow use of cmb on v1.4 controllers
        nvme-tcp: avoid request double completion for concurrent nvme_tcp_timeout
        nvme-rdma: avoid request double completion for concurrent nvme_rdma_timeout
        nvme: check the PRINFO bit before deciding the host buffer length
      1df35bf0
  9. 20 Jan, 2021 4 commits
  10. 18 Jan, 2021 5 commits
    • nvmet: set right status on error in id-ns handler · bffcd507
      Chaitanya Kulkarni authored
      The function nvmet_execute_identify_ns() doesn't set the status if the
      call to nvmet_find_namespace() fails. In that case the status of the
      request ends up being whatever nvmet_copy_sgl() returned.
      
      Set the status to NVME_SC_INVALID_NS and adjust the code so that the
      request has the right status on nvmet_find_namespace() failure.
      
      Without this patch :-
      NVME Identify Namespace 3:
      nsze    : 0
      ncap    : 0
      nuse    : 0
      nsfeat  : 0
      nlbaf   : 0
      flbas   : 0
      mc      : 0
      dpc     : 0
      dps     : 0
      nmic    : 0
      rescap  : 0
      fpi     : 0
      dlfeat  : 0
      nawun   : 0
      nawupf  : 0
      nacwu   : 0
      nabsn   : 0
      nabo    : 0
      nabspf  : 0
      noiob   : 0
      nvmcap  : 0
      mssrl   : 0
      mcl     : 0
      msrc    : 0
      nsattr	: 0
      nvmsetid: 0
      anagrpid: 0
      endgid  : 0
      nguid   : 00000000000000000000000000000000
      eui64   : 0000000000000000
      lbaf  0 : ms:0   lbads:0  rp:0 (in use)
      
      With this patch-series :-
      feb3b88b501e (HEAD -> nvme-5.11) nvmet: remove extra variable in identify ns
      6302aa67210a nvmet: remove extra variable in id-desclist
      ed57951da453 nvmet: remove extra variable in smart log nsid
      be384b8c24dc nvmet: set right status on error in id-ns handler
      
      NVMe status: INVALID_NS: The namespace or the format of that namespace is invalid(0xb)
      Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      bffcd507
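      A hedged sketch of the handler shape after the change, simplified from
      the nvmet admin-command path; everything other than the namespace lookup
      and the status handling is omitted:

          static void execute_identify_ns_sketch(struct nvmet_req *req)
          {
              struct nvmet_ns *ns;
              u16 status = 0;

              ns = nvmet_find_namespace(req->sq->ctrl, req->cmd->identify.nsid);
              if (!ns) {
                  /* Previously the request kept whatever nvmet_copy_sgl()
                   * returned for the zeroed buffer; report the real error. */
                  status = NVME_SC_INVALID_NS | NVME_SC_DNR;
                  goto out;
              }

              /* ... build the id-ns structure and copy it to the SGL ... */

          out:
              nvmet_req_complete(req, status);
          }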
    • nvme-pci: allow use of cmb on v1.4 controllers · 20d3bb92
      Klaus Jensen authored
      Since NVMe v1.4 the Controller Memory Buffer must be explicitly enabled
      by the host.
      Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
      [hch: avoid a local variable and add a comment]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      20d3bb92
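      For context, "explicitly enabled" refers to the CMBMSC property that
      NVMe 1.4 added: the host must set CRE before CMBLOC/CMBSZ report anything
      useful (and CMSE/CBA before actually using the buffer). A hedged sketch,
      with locally defined constants that follow the NVMe 1.4 register map
      rather than the driver's own macro names:

          #include <linux/io.h>

          #define SKETCH_REG_CMBMSC   0x50            /* CMB Memory Space Control */
          #define SKETCH_CMBMSC_CRE   (1ULL << 0)     /* Capabilities Registers Enabled */

          static void enable_cmb_regs_sketch(void __iomem *bar)
          {
              /* On a v1.4 controller, CMBLOC/CMBSZ read as zero until CRE is set. */
              writeq(SKETCH_CMBMSC_CRE, bar + SKETCH_REG_CMBMSC);
          }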
    • nvme-tcp: avoid request double completion for concurrent nvme_tcp_timeout · 9ebbfe49
      Chao Leng authored
      Each namespace has a request queue. If completing requests takes a long
      time, multiple request queues may have timed-out requests at the same
      time, so nvme_tcp_timeout can execute concurrently. Requests from
      different request queues may be queued on the same tcp queue, so several
      concurrent nvme_tcp_timeout calls may invoke nvme_tcp_stop_queue at the
      same time. The first nvme_tcp_stop_queue clears NVME_TCP_Q_LIVE and
      continues stopping the tcp queue (cancelling io_work), but the others see
      that NVME_TCP_Q_LIVE is already cleared and directly complete the
      requests. Completing a request before the io work is fully cancelled may
      lead to a use-after-free condition.
      Add a mutex to serialize nvme_tcp_stop_queue.
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      9ebbfe49
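      A hedged sketch of the serialization described above; the struct and
      field names are simplified stand-ins, not the real nvme-tcp structures.
      A per-queue mutex makes the teardown exclusive, so a second timeout
      handler can only proceed once io_work is fully cancelled:

          #include <linux/bitops.h>
          #include <linux/mutex.h>
          #include <linux/workqueue.h>

          struct queue_sketch {
              unsigned long flags;          /* bit 0 plays the NVME_TCP_Q_LIVE role */
              struct mutex queue_lock;      /* serializes stop operations */
              struct work_struct io_work;
          };

          static void stop_queue_sketch(struct queue_sketch *q)
          {
              mutex_lock(&q->queue_lock);
              if (test_and_clear_bit(0, &q->flags))
                  cancel_work_sync(&q->io_work);
              mutex_unlock(&q->queue_lock);
              /* When this returns, callers know io_work is no longer running,
               * so completing outstanding requests cannot race with it. */
          }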
    • nvme-rdma: avoid request double completion for concurrent nvme_rdma_timeout · 7674073b
      Chao Leng authored
      A crash happens when injecting a long request completion delay (nearly
      30s). Each namespace has a request queue; when completion is delayed like
      this, multiple request queues may have timed-out requests at the same
      time, so nvme_rdma_timeout can execute concurrently. Requests from
      different request queues may be queued on the same rdma queue, so several
      concurrent nvme_rdma_timeout calls may invoke nvme_rdma_stop_queue at the
      same time. The first nvme_rdma_timeout clears NVME_RDMA_Q_LIVE and
      continues stopping the rdma queue (draining the qp), but the others see
      that NVME_RDMA_Q_LIVE is already cleared and directly complete the
      requests. Completing a request before the qp is fully drained may lead to
      a use-after-free condition.
      
      Add a mutex to serialize nvme_rdma_stop_queue.
      Signed-off-by: Chao Leng <lengchao@huawei.com>
      Tested-by: Israel Rukshin <israelr@nvidia.com>
      Reviewed-by: Israel Rukshin <israelr@nvidia.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      7674073b
    • nvme: check the PRINFO bit before deciding the host buffer length · 4d6b1c95
      Revanth Rajashekar authored
      According to NVMe spec v1.4, section 8.3.1, the PRINFO bit and the
      metadata size play a vital role in determining the host buffer size.
      
      If the PRINFO bit is set and MS==8, the host doesn't add the metadata
      buffer; instead, the controller adds it.
      Signed-off-by: Revanth Rajashekar <revanth.rajashekar@intel.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      4d6b1c95
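      A hedged sketch of the length decision, simplified from the user-I/O
      ioctl path; struct nvme_ns here is the driver-internal namespace struct,
      and the helper name is illustrative. With PRACT set and an 8-byte
      metadata format, the controller inserts/strips the protection information
      itself, so no host metadata buffer is needed:

          #include <linux/nvme.h>
          #include <linux/nvme_ioctl.h>
          #include <linux/t10-pi.h>

          static size_t meta_len_sketch(const struct nvme_user_io *io,
                                        const struct nvme_ns *ns)
          {
              if ((io->control & NVME_RW_PRINFO_PRACT) &&
                  ns->ms == sizeof(struct t10_pi_tuple))
                  return 0;   /* controller handles the PI, no host buffer */

              return ((size_t)io->nblocks + 1) * ns->ms;
          }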
  11. 14 Jan, 2021 5 commits
  12. 09 Jan, 2021 5 commits
    • bcache: set bcache device into read-only mode for BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET · 5342fd42
      Coly Li authored
      If BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET is set in the incompat feature
      set, it means the cache device was created with an obsoleted layout using
      obso_bucket_size_hi. bcache no longer supports this feature bit; a new
      BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE incompat feature bit was added
      with a better layout to support large bucket sizes.
      
      For legacy compatibility, if a cache device was created with the
      obsoleted BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET feature bit, all bcache
      devices attached to this cache set should be set to read-only. The dirty
      data can then be written back to the backing device before re-creating
      the cache device with the BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE
      feature bit using the latest bcache-tools.
      
      This patch checks the BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET feature bit
      when running a cache set and when attaching a bcache device to the cache
      set. If this bit is set,
      - When running a cache set, print an error kernel message to indicate
        that all subsequently attached bcache devices will be read-only.
      - When attaching a bcache device, print an error kernel message to
        indicate the attached bcache device will be read-only, and ask users to
        update to the latest bcache-tools.
      
      This change only affects cache devices whose bucket size is >= 32MB; that
      is meant for zoned SSDs, and almost nobody uses such a large bucket size
      at the moment. If you don't explicitly set a large bucket size for a
      zoned SSD, this change is completely transparent to your bcache device.
      
      Fixes: ffa47032 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      5342fd42
    • bcache: introduce BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE for large bucket · b16671e8
      Coly Li authored
      When the large bucket feature was added, BCH_FEATURE_INCOMPAT_LARGE_BUCKET
      was introduced into the incompat feature set. It used bucket_size_hi
      (which was added at the tail of struct cache_sb_disk) together with the
      existing bucket_size in struct cache_sb_disk to extend the 16-bit bucket
      size to 32 bits.
      
      This was not a good idea; there are two obvious problems,
      - The bucket size is always a power of 2, so if log2(bucket size) is
        stored in the existing bucket_size of struct cache_sb_disk, adding
        bucket_size_hi is unnecessary.
      - The macro csum_set() assumes d[SB_JOURNAL_BUCKETS] is the last member
        of struct cache_sb_disk; bucket_size_hi was added after d[], which
        makes csum_set() calculate an unexpected super block checksum.
      
      To fix the above problems, this patch introduces a new incompat feature
      bit BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE. When this bit is set,
      bucket_size in struct cache_sb_disk stores the order of the power-of-2
      bucket size value. When the user specifies a bucket size larger than
      32768 sectors, BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE is set in the
      incompat feature set, and bucket_size stores log2(bucket size) rather
      than the real bucket size value.
      
      The obsoleted BCH_FEATURE_INCOMPAT_LARGE_BUCKET won't be used anymore;
      it is renamed to BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET and is still
      recognized by the kernel driver only for legacy compatibility. The
      previous bucket_size_hi is renamed to obso_bucket_size_hi in struct
      cache_sb_disk and is not used in bcache-tools anymore.
      
      For cache devices created with the BCH_FEATURE_INCOMPAT_LARGE_BUCKET
      feature, bcache-tools and the kernel driver still recognize the feature
      string and display it as "obso_large_bucket".
      
      With this change, unnecessarily extending the bcache on-disk super block
      is avoided, and csum_set() generates the expected checksum as well.
      
      Fixes: ffa47032 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org # 5.9+
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      b16671e8
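      A hedged sketch of the encoding described above; the helper names are
      illustrative (the real macro-generated helpers live in
      drivers/md/bcache/features.h). With the new incompat bit set, the
      existing 16-bit bucket_size field stores the order rather than the raw
      sector count, so nothing needs to be appended after d[SB_JOURNAL_BUCKETS]:

          #include <linux/log2.h>

          /* Encoding: e.g. a 65536-sector bucket is stored as 16. */
          static unsigned int encode_bucket_size_sketch(unsigned int sectors)
          {
              return ilog2(sectors);
          }

          /* Decoding: log_encoded stands for "the LOG_LARGE_BUCKET_SIZE
           * incompat bit is set in the super block". */
          static unsigned long decode_bucket_size_sketch(const struct cache_sb *sb,
                                                         bool log_encoded)
          {
              if (log_encoded)
                  return 1UL << sb->bucket_size;

              return sb->bucket_size;
          }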
    • bcache: check unsupported feature sets for bcache register · 1dfc0686
      Coly Li authored
      This patch adds a check for features which are incompatible with the
      currently supported feature sets.
      
      Now if a bcache device created by bcache-tools has features that the
      current kernel doesn't support, read_super() will fail with an error
      message. E.g. if an unsupported incompatible feature is detected, bcache
      registration will fail with the dmesg message "bcache: register_bcache()
      error : Unsupported incompatible feature found".
      
      Fixes: d721a43f ("bcache: increase super block version for cache device and backing device")
      Fixes: ffa47032 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org # 5.9+
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      1dfc0686
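      A hedged sketch of the registration-time check (simplified; the helper
      name is illustrative and the real check sits in read_super(), using the
      *_SUPP masks from drivers/md/bcache/features.h):

          static const char *check_features_sketch(const struct cache_sb *sb)
          {
              /* Similar checks exist for the compat and ro_compat sets. */
              if (sb->feature_incompat & ~BCH_FEATURE_INCOMPAT_SUPP)
                  return "Unsupported incompatible feature found";

              return NULL;    /* every advertised incompat feature is understood */
          }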
    • bcache: fix typo from SUUP to SUPP in features.h · f7b4943d
      Coly Li authored
      This patch fixes the following typos:
      from BCH_FEATURE_COMPAT_SUUP to BCH_FEATURE_COMPAT_SUPP
      from BCH_FEATURE_INCOMPAT_SUUP to BCH_FEATURE_INCOMPAT_SUPP
      from BCH_FEATURE_INCOMPAT_SUUP to BCH_FEATURE_RO_COMPAT_SUPP
      
      Fixes: d721a43f ("bcache: increase super block version for cache device and backing device")
      Fixes: ffa47032 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
      Signed-off-by: Coly Li <colyli@suse.de>
      Cc: stable@vger.kernel.org # 5.9+
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      f7b4943d
    • bcache: set pdev_set_uuid before scond loop iteration · e8092707
      Yi Li authored
      There is no need to reassign pdev_set_uuid in the second loop iteration,
      so move the assignment to before the second loop.
      Signed-off-by: Yi Li <yili@winhong.com>
      Signed-off-by: Coly Li <colyli@suse.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      e8092707