1. 04 Aug, 2023 1 commit
  2. 01 Aug, 2023 1 commit
  3. 27 Jul, 2023 3 commits
  4. 25 Jul, 2023 1 commit
  5. 24 Jul, 2023 4 commits
  6. 21 Jul, 2023 6 commits
    • loop: do not enforce max_loop hard limit by (new) default · bb5faa99
      Mauricio Faria de Oliveira authored
      Problem:
      
      The max_loop parameter is used for 2 different purposes:
      
      1) initial number of loop devices to pre-create on init
      2) maximum number of loop devices to add on access/open()
      
      Historically, its default value (zero) caused 1) to create a non-zero
      number of devices (CONFIG_BLK_DEV_LOOP_MIN_COUNT) and placed no hard
      limit on 2), so devices could be added by autoloading.
      
      However, the default value changed in commit 85c50197 ("loop: Fix
      the max_loop commandline argument treatment when it is set to 0") to
      CONFIG_BLK_DEV_LOOP_MIN_COUNT, for max_loop=0 not to pre-create devices.
      
      That does improve 1), but unfortunately it breaks 2), as the default
      behavior changed from no-limit to hard-limit.
      
      Example:
      
      This userspace code broke for N >= CONFIG if the user relied on the
      default value 0 for max_loop:
      
          mknod("/dev/loopN");
          open("/dev/loopN");  // now fails with ENXIO
      
      Though affected users may "fix" it with (loop.)max_loop=0, this means
      requiring a kernel parameter change on a stable kernel update (that
      commit Fixes: an old commit in stable).
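
      A minimal reproducer along these lines might look as follows (a sketch
      only: it assumes loop's block major 7 and minor 8, and mirrors, but is
      not necessarily identical to, the ./test-loop program used in the Tests
      section below):

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/stat.h>
        #include <sys/sysmacros.h>
        #include <unistd.h>

        int main(void)
        {
                /* first minor past CONFIG_BLK_DEV_LOOP_MIN_COUNT=8 */
                const char *path = "/dev/loop8";
                int fd;

                /* loop devices use block major 7; loop8 must be autoloaded */
                if (mknod(path, S_IFBLK | 0600, makedev(7, 8)) && errno != EEXIST) {
                        perror(path);
                        return 1;
                }

                fd = open(path, O_RDWR);
                if (fd < 0) {
                        perror(path);   /* ENXIO when the hard limit applies */
                        return 1;
                }

                close(fd);
                return 0;
        }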
      
      Solution:
      
      The original semantics for the default value in 2) can be applied if
      the parameter is not set (i.e., default behavior).
      
      This still keeps the intended function in 1) and 2) if set, and that
      commit's intended improvement in 1) if max_loop=0.
      
      Before 85c50197:
        - default:     1) CONFIG devices   2) no limit
        - max_loop=0:  1) CONFIG devices   2) no limit
        - max_loop=X:  1) X devices        2) X limit
      
      After 85c50197:
        - default:     1) CONFIG devices   2) CONFIG limit (*)
        - max_loop=0:  1) 0 devices (*)    2) no limit
        - max_loop=X:  1) X devices        2) X limit
      
      This commit:
        - default:     1) CONFIG devices   2) no limit (*)
        - max_loop=0:  1) 0 devices        2) no limit
        - max_loop=X:  1) X devices        2) X limit
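
      One way to implement the "parameter is not set" detection is a
      module_param_cb() wrapper that records whether max_loop was given on
      the command line. A minimal sketch (identifier names such as
      max_loop_specified are illustrative and may differ from the actual
      patch):

        static int max_loop = CONFIG_BLK_DEV_LOOP_MIN_COUNT;
        static bool max_loop_specified;

        static int max_loop_param_set_int(const char *val,
                                          const struct kernel_param *kp)
        {
                int ret = param_set_int(val, kp);

                if (ret < 0)
                        return ret;

                max_loop_specified = true;
                return 0;
        }

        static const struct kernel_param_ops max_loop_param_ops = {
                .set = max_loop_param_set_int,
                .get = param_get_int,
        };
        module_param_cb(max_loop, &max_loop_param_ops, &max_loop, 0444);

        /* in loop_probe(), only enforce the hard limit when explicitly set: */
        if (max_loop_specified && max_loop && idx >= max_loop)
                return -EINVAL;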
      
      Future:
      
      The issue/regression from that commit only affects code under the
      CONFIG_BLOCK_LEGACY_AUTOLOAD deprecation guard, thus the fix too is
      contained under it.
      
      Once that deprecated functionality/code is removed, the purpose 2) of
      max_loop (hard limit) is no longer in use, so the module parameter
      description can be changed then.
      
      Tests:
      
      Linux 6.4-rc7
      CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
      CONFIG_BLOCK_LEGACY_AUTOLOAD=y
      
      - default (original)
      
      	# ls -1 /dev/loop*
      	/dev/loop-control
      	/dev/loop0
      	...
      	/dev/loop7
      
      	# ./test-loop
      	open: /dev/loop8: No such device or address
      
      - default (patched)
      
      	# ls -1 /dev/loop*
      	/dev/loop-control
      	/dev/loop0
      	...
      	/dev/loop7
      
      	# ./test-loop
      	#
      
      - max_loop=0 (original & patched):
      
      	# ls -1 /dev/loop*
      	/dev/loop-control
      
      	# ./test-loop
      	#
      
      - max_loop=8 (original & patched):
      
      	# ls -1 /dev/loop*
      	/dev/loop-control
      	/dev/loop0
      	...
      	/dev/loop7
      
      	# ./test-loop
      	open: /dev/loop8: No such device or address
      
      - max_loop=0 (patched; CONFIG_BLOCK_LEGACY_AUTOLOAD is not set)
      
      	# ls -1 /dev/loop*
      	/dev/loop-control
      
      	# ./test-loop
      	open: /dev/loop8: No such device or address
      
      Fixes: 85c50197 ("loop: Fix the max_loop commandline argument treatment when it is set to 0")
      Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Link: https://lore.kernel.org/r/20230720143033.841001-3-mfo@canonical.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • loop: deprecate autoloading callback loop_probe() · 23881aec
      Mauricio Faria de Oliveira authored
      The 'probe' callback in __register_blkdev() is only used under the
      CONFIG_BLOCK_LEGACY_AUTOLOAD deprecation guard.
      
      The loop_probe() function is only used for that callback, so guard it
      too, accordingly.
      
      See commit fbdee71b ("block: deprecate autoloading based on dev_t").
      Signed-off-by: Mauricio Faria de Oliveira <mfo@canonical.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Link: https://lore.kernel.org/r/20230720143033.841001-2-mfo@canonical.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • sbitmap: fix batching wakeup · 10639737
      David Jeffery authored
      The current code assumes that it is enough to provide forward progress
      by just waking up one wait queue after each completion batch is done.

      Unfortunately this isn't enough, because a waiter can be added to the
      wait queue just after it is woken up.

      Here is one example (64 tag depth, wake_batch of 8):
      
      1) all 64 tags are active
      
      2) in each wait queue, there is only a single waiter
      
      3) each time, one completion batch (8 completions) wakes up just one
         waiter in one wait queue, then immediately one new sleeper is added
         to that wait queue

      4) after 64 completions, 8 waiters have been woken up, and there are
         still 8 waiters in the wait queues

      5) after another 8 active tags are completed, only one waiter can be
         woken up, and the other 7 can never be woken up

      It turns out this isn't easy to fix properly, so simply wake up enough
      waiters for a single batch.
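
      A simplified sketch of this approach (not the verbatim kernel code; it
      assumes wake_up_nr() reports how many exclusive waiters were woken, and
      reuses helper names from lib/sbitmap.c):

        static void sbq_wake_up_one_batch(struct sbitmap_queue *sbq,
                                          unsigned int nr)
        {
                int wake_index = atomic_read(&sbq->wake_index);
                int i;

                for (i = 0; i < SBQ_WAIT_QUEUES && nr; i++) {
                        struct sbq_wait_state *ws = &sbq->ws[wake_index];

                        /* advance first so the queues are treated fairly */
                        wake_index = sbq_index_inc(wake_index);

                        if (waitqueue_active(&ws->wait))
                                nr -= wake_up_nr(&ws->wait, nr);
                }

                atomic_set(&sbq->wake_index, wake_index);
        }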
      
      Cc: Kemeng Shi <shikemeng@huaweicloud.com>
      Cc: Chengming Zhou <zhouchengming@bytedance.com>
      Cc: Jan Kara <jack@suse.cz>
      Signed-off-by: David Jeffery <djeffery@redhat.com>
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Link: https://lore.kernel.org/r/20230721095715.232728-1-ming.lei@redhat.com
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • nvme-rdma: fix potential unbalanced freeze & unfreeze · 29b434d1
      Ming Lei authored
      Move start_freeze into nvme_rdma_configure_io_queues(); there are at
      least two benefits:
      
      1) it fixes the unbalanced freeze and unfreeze, since re-connection
      work may fail or be broken by removal

      2) IO during error recovery can fail fast because nvme fabrics
      unquiesces the queues after teardown.
      
      One side effect is that !mpath requests may time out during connecting
      because of the queue topology change, but that does not look like a
      big deal:

      1) the same problem exists with the current code base

      2) compared with !mpath, the mpath use case is dominant
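
      Conceptually, the reordering looks roughly like the following (a heavily
      simplified skeleton of nvme_rdma_configure_io_queues(); queue allocation,
      start and error handling are omitted):

        static int nvme_rdma_configure_io_queues(struct nvme_rdma_ctrl *ctrl,
                                                 bool new)
        {
                /* ... allocate the I/O queues ... */

                if (!new) {
                        nvme_start_freeze(&ctrl->ctrl);  /* moved here from teardown */
                        nvme_wait_freeze(&ctrl->ctrl);
                        blk_mq_update_nr_hw_queues(ctrl->ctrl.tagset,
                                                   ctrl->ctrl.queue_count - 1);
                        nvme_unfreeze(&ctrl->ctrl);      /* always reached, so balanced */
                }

                /* ... start the I/O queues ... */
                return 0;
        }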
      
      Fixes: 9f98772b ("nvme-rdma: fix controller reset hang during traffic")
      Cc: stable@vger.kernel.org
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Tested-by: Yi Zhang <yi.zhang@redhat.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvme-tcp: fix potential unbalanced freeze & unfreeze · 99dc2640
      Ming Lei authored
      Move start_freeze into nvme_tcp_configure_io_queues(); there are at
      least two benefits:
      
      1) it fixes the unbalanced freeze and unfreeze, since re-connection
      work may fail or be broken by removal

      2) IO during error recovery can fail fast because nvme fabrics
      unquiesces the queues after teardown.
      
      One side effect is that !mpath requests may time out during connecting
      because of the queue topology change, but that does not look like a
      big deal:

      1) the same problem exists with the current code base

      2) compared with !mpath, the mpath use case is dominant
      
      Fixes: 2875b0ae ("nvme-tcp: fix controller reset hang during traffic")
      Cc: stable@vger.kernel.org
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Tested-by: Yi Zhang <yi.zhang@redhat.com>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvme: fix possible hang when removing a controller during error recovery · 1b95e817
      Ming Lei authored
      Error recovery can be interrupted by controller removal; the controller
      is then left quiesced, and an IO hang can result.

      Fix the issue by unquiescing the controller unconditionally when
      removing namespaces.

      This is reasonable and safe, given that forward progress can be made
      when removing namespaces.
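
      The gist of the change, reconstructed from the description above (a
      sketch, not the verbatim patch):

        /* in nvme_remove_namespaces() (sketch): */
        if (ctrl->state == NVME_CTRL_DEAD)
                nvme_mark_namespaces_dead(ctrl);
        nvme_unquiesce_io_queues(ctrl);   /* now unconditional */
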
      Reviewed-by: Keith Busch <kbusch@kernel.org>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reported-by: Chunguang Xu <brookxu.cn@gmail.com>
      Closes: https://lore.kernel.org/linux-nvme/cover.1685350577.git.chunguang.xu@shopee.com/
      Cc: stable@vger.kernel.org
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
  7. 20 Jul, 2023 2 commits
  8. 14 Jul, 2023 2 commits
  9. 13 Jul, 2023 4 commits
    • Merge tag 'nvme-6.5-2023-07-13' of git://git.infradead.org/nvme into block-6.5 · 90b46229
      Jens Axboe authored
      Pull NVMe fixes from Keith:
      
      "nvme fixes for Linux 6.5
      
       - Don't require quirk to use duplicate namespace identifiers
         (Christoph, Sagi)
       - One more BOGUS_NID quirk (Pankaj)
       - IO timeout and error handling fixes for PCI (Keith)
       - Enhanced metadata format mask fix (Ankit)
       - Association race condition fix for fibre channel (Michael)
       - Correct debugfs error checks (Minjie)
       - Use PAGE_SECTORS_SHIFT where needed (Damien)
       - Reduce kernel logs for legacy nguid attribute (Keith)
       - Use correct dma direction when unmapping metadata (Ming)"
      
      * tag 'nvme-6.5-2023-07-13' of git://git.infradead.org/nvme:
        nvme-pci: fix DMA direction of unmapping integrity data
        nvme: don't reject probe due to duplicate IDs for single-ported PCIe devices
        nvme: ensure disabling pairs with unquiesce
        nvme-fc: fix race between error recovery and creating association
        nvme-fc: return non-zero status code when fails to create association
        nvme: fix parameter check in nvme_fault_inject_init()
        nvme: warn only once for legacy uuid attribute
        nvme: fix the NVME_ID_NS_NVM_STS_MASK definition
        nvmet: use PAGE_SECTORS_SHIFT
        nvme: add BOGUS_NID quirk for Samsung SM953
    • blk-mq: fix start_time_ns and alloc_time_ns for pre-allocated rq · 5c17f45e
      Chengming Zhou authored
      iocost relies on the rq start_time_ns and alloc_time_ns to tell the
      saturation state of the block device. Most of the time a request is
      allocated after rq_qos_throttle() and its alloc_time_ns or
      start_time_ns won't be affected.

      But with the plug batched allocation introduced by commit 47c122e3
      ("block: pre-allocate requests if plug is started and is a batch"),
      rq_qos_throttle() can run after the allocation of the request. This is
      what blk_mq_get_cached_request() does.

      In this case, the cached request's alloc_time_ns or start_time_ns is
      set far in the past if the task blocked in any qos ->throttle().

      Fix it by setting alloc_time_ns and start_time_ns to the current time
      when the pre-allocated request is actually used.
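
      A sketch of the helper this describes (simplified; treat names and
      details as illustrative):

        static void blk_mq_rq_time_init(struct request *rq, u64 alloc_time_ns)
        {
                if (blk_mq_need_time_stamp(rq))
                        rq->start_time_ns = ktime_get_ns();
                else
                        rq->start_time_ns = 0;

        #ifdef CONFIG_BLK_RQ_ALLOC_TIME
                /* refresh alloc_time_ns when the cached request is handed out */
                if (blk_queue_rq_alloc_time(rq->q))
                        rq->alloc_time_ns = alloc_time_ns ?: rq->start_time_ns;
                else
                        rq->alloc_time_ns = 0;
        #endif
        }
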
      Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
      Acked-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Link: https://lore.kernel.org/r/20230710105516.2053478-1-chengming.zhou@linux.dev
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • nvme-pci: fix DMA direction of unmapping integrity data · b8f6446b
      Ming Lei authored
      The correct DMA direction should be used in dma_unmap_page() when
      unmapping integrity data.

      Fix the DMA direction; the issue was found in Guangwu's testing.
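
      The gist of the fix (a simplified fragment; the iod/dev field names
      follow the nvme-pci driver but are shown here only for illustration):

        /* unmap the metadata with the request's actual DMA direction,
         * not a hardcoded one: */
        dma_unmap_page(dev->dev, iod->meta_dma,
                       rq_integrity_vec(req)->bv_len, rq_dma_dir(req));
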
      Reported-by: Guangwu Zhang <guazhang@redhat.com>
      Fixes: 4aedb705 ("nvme-pci: split metadata handling from nvme_map_data / nvme_unmap_data")
      Signed-off-by: Ming Lei <ming.lei@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvme: don't reject probe due to duplicate IDs for single-ported PCIe devices · ac522fc6
      Christoph Hellwig authored
      While duplicate IDs are still very harmful, including the potential to
      easily see changing devices in /dev/disk/by-id, it turns out they are
      extremely common for cheap end user NVMe devices.

      Relax our check so that it doesn't reject the probe on single-ported
      PCIe devices, but prints a big warning instead.  When in doubt, we'd
      still like to see quirk entries to disable the potential for changing
      supposedly stable device identifier links, but this will at least allow
      users who have two (or more) of these devices to use them without
      having to manually add a new PCI ID entry with the quirk through sysfs
      or by patching the kernel.
      
      Fixes: 2079f41e ("nvme: check that EUI/GUID/UUID are globally unique")
      Cc: stable@vger.kernel.org # 6.0+
      Co-developed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
  10. 12 Jul, 2023 6 commits
  11. 10 Jul, 2023 4 commits
  12. 05 Jul, 2023 3 commits
    • blk-crypto: use dynamic lock class for blk_crypto_profile::lock · 2fb48d88
      Eric Biggers authored
      When a device-mapper device is passing through the inline encryption
      support of an underlying device, calls to blk_crypto_evict_key() take
      the blk_crypto_profile::lock of the device-mapper device, then take the
      blk_crypto_profile::lock of the underlying device (nested).  This isn't
      a real deadlock, but it causes a lockdep report because there is only
      one lock class for all instances of this lock.
      
      Lockdep subclasses don't really work here because the hierarchy of block
      devices is dynamic and could have more than 2 levels.
      
      Instead, register a dynamic lock class for each blk_crypto_profile, and
      associate that with the lock.
      
      This avoids false-positive lockdep reports like the following:
      
          ============================================
          WARNING: possible recursive locking detected
          6.4.0-rc5 #2 Not tainted
          --------------------------------------------
          fscryptctl/1421 is trying to acquire lock:
          ffffff80829ca418 (&profile->lock){++++}-{3:3}, at: __blk_crypto_evict_key+0x44/0x1c0
      
                         but task is already holding lock:
          ffffff8086b68ca8 (&profile->lock){++++}-{3:3}, at: __blk_crypto_evict_key+0xc8/0x1c0
      
                         other info that might help us debug this:
           Possible unsafe locking scenario:
      
                 CPU0
                 ----
            lock(&profile->lock);
            lock(&profile->lock);
      
                          *** DEADLOCK ***
      
           May be due to missing lock nesting notation
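
      A minimal sketch of the approach (field and function placement are
      illustrative): register a per-profile lockdep key when the profile is
      initialized, tie the rwsem to it, and drop the key again when the
      profile is destroyed.

        /* in blk_crypto_profile_init() (sketch): */
        lockdep_register_key(&profile->lockdep_key);
        init_rwsem(&profile->lock);
        lockdep_set_class(&profile->lock, &profile->lockdep_key);

        /* in blk_crypto_profile_destroy() (sketch): */
        lockdep_unregister_key(&profile->lockdep_key);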
      
      Fixes: 1b262839 ("block: Keyslot Manager for Inline Encryption")
      Reported-by: Bart Van Assche <bvanassche@acm.org>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Bart Van Assche <bvanassche@acm.org>
      Link: https://lore.kernel.org/r/20230610061139.212085-1-ebiggers@kernel.org
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • block/partition: fix signedness issue for Amiga partitions · 7eb1e476
      Michael Schmitz authored
      Making 'blk' a sector_t (i.e. 64 bit if LBD support is active) breaks
      the 'blk>0' test in the partition block loop when a value of
      (signed int) -1 is used to mark the end of the partition block list.
      
      Explicitly cast 'blk' to signed int to allow use of -1 to terminate the
      partition block linked list.
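
      A tiny user-space illustration of the signedness pitfall (plain stdint
      types stand in for sector_t and the on-disk end-of-list marker):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                /* 64-bit 'blk' (like sector_t with LBD) holding the -1 marker */
                uint64_t blk = (uint32_t)-1;

                printf("unsigned:    blk > 0      -> %d\n", blk > 0);
                printf("signed cast: (s32)blk > 0 -> %d\n", (int32_t)blk > 0);
                return 0;
        }

      The first test stays true (0xffffffff > 0), so the loop would walk past
      the end of the list; the signed comparison sees -1 and terminates.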
      
      Fixes: b6f3f28f ("block: add overflow checks for Amiga partition support")
      Reported-by: Christian Zigotzky <chzigotzky@xenosoft.de>
      Link: https://lore.kernel.org/r/024ce4fa-cc6d-50a2-9aae-3701d0ebf668@xenosoft.de
      Signed-off-by: Michael Schmitz <schmitzmic@gmail.com>
      Reviewed-by: Martin Steigerwald <martin@lichtvoll.de>
      Tested-by: Christian Zigotzky <chzigotzky@xenosoft.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • gup: make the stack expansion warning a bit more targeted · 6cd06ab1
      Linus Torvalds authored
      I added a warning about GUP no longer expanding the stack in commit
      a425ac53 ("gup: add warning if some caller would seem to want stack
      expansion"), but didn't really expect anybody to hit it.
      
      And it's true that nobody seems to have hit a _real_ case yet, but we
      certainly have a number of reports of false positives.  Which not only
      causes extra noise in itself, but might also end up hiding any real
      cases if they do exist.
      
      So let's tighten up the warning condition, and replace the simplistic
      
      	vma = find_vma(mm, start);
      	if (vma && (start < vma->vm_start)) {
      		WARN_ON_ONCE(vma->vm_flags & VM_GROWSDOWN);
      
      with a
      
      	vma = gup_vma_lookup(mm, start);
      
      helper function which works otherwise like just "vma_lookup()", but with
      some heuristics for when to warn about gup no longer causing stack
      expansion.
      
      In particular, don't just warn for "below the stack", but warn if it's
      _just_ below the stack (with "just below" arbitrarily defined as 64kB,
      because why not?).  And rate-limit it to at most once per hour, which
      means that any false positives shouldn't completely hide subsequent
      reports, but we won't be flooding the logs about it either.
      
      The previous code triggered when some GUP user (chromium crashpad, for
      example) accessed past the end of the previous vma. That has never
      expanded the stack; it just causes GUP to return early, and as such we
      shouldn't be warning about it.
      
      This is still going to trigger the randomized testers, but to mitigate
      the noise from that, use "dump_stack()" instead of "WARN_ON_ONCE()" to
      get the kernel call chain. We'll get the relevant information, but
      syzbot shouldn't get too upset about it.
      
      Also, don't even bother with the GROWSUP case, which would be using
      different heuristics entirely, but only happens on parisc.
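
      A simplified sketch of such a helper, condensed from the description
      above (the real mm/gup.c version differs in details such as the exact
      checks and the warning text):

        static struct vm_area_struct *gup_vma_lookup(struct mm_struct *mm,
                                                     unsigned long addr)
        {
                static DEFINE_RATELIMIT_STATE(gup_warn_rs, 3600 * HZ, 1);
                struct vm_area_struct *vma;

                vma = vma_lookup(mm, addr);
                if (vma)
                        return vma;

                /* only complain if addr is _just_ below a grows-down stack */
                vma = find_vma(mm, addr);
                if (vma && (vma->vm_flags & VM_GROWSDOWN) &&
                    vma->vm_start - addr <= 65536 &&
                    __ratelimit(&gup_warn_rs)) {
                        pr_warn("GUP no longer grows the stack at %lx\n", addr);
                        dump_stack();
                }

                return NULL;
        }
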
      Reported-by: kernel test robot <oliver.sang@intel.com>
      Reported-by: John Hubbard <jhubbard@nvidia.com>
      Reported-by: syzbot+6cf44e127903fdf9d929@syzkaller.appspotmail.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 04 Jul, 2023 3 commits