1. 20 Dec, 2023 3 commits
  2. 19 Dec, 2023 5 commits
  3. 18 Dec, 2023 1 commit
  4. 15 Dec, 2023 7 commits
  5. 14 Dec, 2023 1 commit
  6. 13 Dec, 2023 3 commits
  7. 08 Dec, 2023 1 commit
    • Merge tag 'md-next-20231208' of... · f788893d
      Jens Axboe authored
      Merge tag 'md-next-20231208' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-6.8/block
      
      Pull MD updates from Song:
      
      "1. Fix/Cleanup RCU usage from conf->disks[i].rdev, by Yu Kuai;
       2. Fix raid5 hang issue, by Junxiao Bi;
       3. Add Yu Kuai as Reviewer of the md subsystem."
      
      * tag 'md-next-20231208' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
        md: synchronize flush io with array reconfiguration
        MAINTAINERS: SOFTWARE RAID: Add Yu Kuai as Reviewer
        md/md-multipath: remove rcu protection to access rdev from conf
        md/raid5: remove rcu protection to access rdev from conf
        md/raid1: remove rcu protection to access rdev from conf
        md/raid10: remove rcu protection to access rdev from conf
        md: remove flag RemoveSynchronized
        Revert "md/raid5: Wait for MD_SB_CHANGE_PENDING in raid5d"
        md: bypass block throttle for superblock update
  8. 07 Dec, 2023 1 commit
    • block: Remove special-casing of compound pages · 1b151e24
      Matthew Wilcox (Oracle) authored
      The special casing was originally added in pre-git history; reproducing
      the commit log here:
      
      > commit a318a92567d77
      > Author: Andrew Morton <akpm@osdl.org>
      > Date:   Sun Sep 21 01:42:22 2003 -0700
      >
      >     [PATCH] Speed up direct-io hugetlbpage handling
      >
      >     This patch short-circuits all the direct-io page dirtying logic for
      >     higher-order pages.  Without this, we pointlessly bounce BIOs up to
      >     keventd all the time.
      
      In the last twenty years, compound pages have become used for more than
      just hugetlb.  Rewrite these functions to operate on folios instead
      of pages and remove the special case for hugetlbfs; I don't think
      it's needed any more (and if it is, we can put it back in as a call
      to folio_test_hugetlb()).
      
      This was found by inspection; as far as I can tell, this bug can lead
      to pages used as the destination of a direct I/O read not being marked
      as dirty.  If those pages are then reclaimed by the MM without being
      dirtied for some other reason, they won't be written out.  Then when
      they're faulted back in, they will not contain the data they should.
      It'll take a pretty unusual setup to produce this problem with several
      races all going the wrong way.
      
      This problem predates the folio work; it could for example have been
      triggered by mmaping a THP in tmpfs and using that as the target of an
      O_DIRECT read.
      
      Fixes: 800d8c63 ("shmem: add huge pages support")
      Cc:  <stable@vger.kernel.org>
       Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
       Signed-off-by: Jens Axboe <axboe@kernel.dk>
  9. 02 Dec, 2023 5 commits
  10. 01 Dec, 2023 1 commit
  11. 28 Nov, 2023 2 commits
    • MAINTAINERS: SOFTWARE RAID: Add Yu Kuai as Reviewer · 15da990f
      Song Liu authored
      Add Yu Kuai as reviewer for md/raid subsystem.
       Signed-off-by: Song Liu <song@kernel.org>
       Acked-by: Yu Kuai <yukuai3@huawei.com>
      Link: https://lore.kernel.org/r/20231128035807.3191738-1-song@kernel.org
    • Merge branch 'md-next-rcu-cleanup' into md-next · 726a9b67
      Song Liu authored
      From Yu Kuai:
      
      md: remove rcu protection to access rdev from conf
      
      The lifetime of rdev:
      
       1. md_import_device() generates an rdev based on the underlying disk;
      
         mddev_lock()
         rdev = kzalloc();
         rdev->bdev = blkdev_get_by_dev();
         mddev_unlock()
      
       2. bind_rdev_to_array() adds this rdev to mddev->disks;
      
         mddev_lock()
         kobject_add(&rdev->kobj, &mddev->kobj, ...);
         list_add_rcu(&rdev->same_set, &mddev->disks);
         mddev_unlock()
      
       3. remove_and_add_spares() adds this rdev to conf;
      
         mddev_lock()
         rdev_addable();
         pers->hot_add_disk();
         rcu_assign_pointer(conf->rdev, rdev);
         mddev_unlock()
      
      4. Use this array with rdev;
      
       5. remove_and_add_spares() removes rdev from conf;
      
         // triggered by sysfs/ioctl
         mddev_lock()
         rdev_removeable();
         pers->hot_remove_disk();
          rcu_assign_pointer(conf->rdev, NULL);
          synchronize_rcu();
         mddev_unlock()
      
         // triggered by daemon
         mddev_lock()
         rdev_removeable();
         synchronize_rcu(); -> this can't protect accessing rdev from conf
         pers->hot_remove_disk();
          rcu_assign_pointer(conf->rdev, NULL);
         mddev_unlock()
      
       6. md_kick_rdev_from_array() removes rdev from mddev->disks;
      
         mddev_lock()
         list_del_rcu(&rdev->same_set);
         synchronize_rcu();
         list_add(&rdev->same_set, &mddev->deleting)
         mddev_unlock()
          export_rdev
      
       There are two separate RCU protections for rdev, and this patchset
       removes the protection on conf (steps 3 and 5), because it is safe
       to access rdev from conf in the following cases:
       
        - If 'reconfig_mutex' is held, because rdev can't be added to or
          removed from conf;
        - If there is normal IO inflight, because mddev_suspend() will wait
          for IO to be done and prevents rdev from being added to or
          removed from conf;
        - If the sync thread is running, because remove_and_add_spares()
          can only be called from the daemon thread when the sync thread is
          done, and 'MD_RECOVERY_RUNNING' is also checked for ioctl/sysfs;
        - If any spinlock or rcu_read_lock() is held, because
          synchronize_rcu() from step 6 prevents rdev from being freed
          until the spinlock is released or rcu_read_unlock() is called.
  12. 27 Nov, 2023 10 commits