md: Use optimal I/O size for last bitmap page · 8745faa9
    Author: Jon Derrick
    If the bitmap space has enough room, size the I/O for the last bitmap
    page write to the optimal I/O size of the storage device. The expanded
    write is checked to ensure it does not overrun the data or the metadata.
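
    A minimal userspace sketch of that size selection, assuming the write
    start and the boundary (the start of data, or the end of the bitmap
    space) are given in 512-byte sectors; the helper name and types here
    are illustrative, not necessarily those used in md-bitmap.c:

        #include <stdio.h>

        #define SECTOR_SIZE 512UL

        /* Prefer the optimal I/O size when the expanded write still fits
         * before the boundary; otherwise fall back to the minimal size,
         * or return 0 if even that would overrun. */
        static unsigned long bitmap_io_size(unsigned long io_size,
                                            unsigned long opt_size,
                                            unsigned long start_sector,
                                            unsigned long boundary_sector)
        {
                if (io_size != opt_size &&
                    start_sector + opt_size / SECTOR_SIZE <= boundary_sector)
                        return opt_size;
                if (start_sector + io_size / SECTOR_SIZE <= boundary_sector)
                        return io_size;
                return 0; /* even the minimal write would overrun */
        }

        int main(void)
        {
                /* Last bitmap page at sector 24, data at sector 2048:
                 * the 3584-byte write can be widened to 4096 bytes. */
                printf("%lu\n", bitmap_io_size(3584, 4096, 24, 2048)); /* 4096 */
                /* Data at sector 31: only the minimal write fits. */
                printf("%lu\n", bitmap_io_size(3584, 4096, 24, 31));   /* 3584 */
                return 0;
        }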
    
    The drive this was tested against exhibits higher latency for sub-4k
    writes because it performs a device-side read-modify-write to service
    writes smaller than its atomic 4k write unit. This change improves
    performance by sizing the last bitmap page I/O to the device's
    preferred write unit, if one is reported.
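
    A sketch of the rounding itself, assuming a device that reports a
    4096-byte optimal I/O size (in kernel code the analogous queries are
    bdev_io_opt() and bdev_logical_block_size(); the values below are
    examples, not measurements):

        #include <stdio.h>

        /* roundup() as in the kernel: round x up to a multiple of y. */
        #define ROUNDUP(x, y) ((((x) + (y) - 1) / (y)) * (y))

        int main(void)
        {
                unsigned int last_page_bytes = 3314; /* valid bytes in last bitmap page */
                unsigned int block_size = 512;       /* logical block size */
                unsigned int io_opt = 4096;          /* optimal I/O size; 0 if not reported */

                unsigned int unit = io_opt ? io_opt : block_size;
                printf("%u\n", ROUNDUP(last_page_bytes, block_size)); /* 3584, minimal */
                printf("%u\n", ROUNDUP(last_page_bytes, unit));       /* 4096, expanded */
                return 0;
        }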
    
    Example: Intel/Solidigm P5520
    RAID10, bitmap chunk size 64M, bitmap size 57228 bits
    
    $ mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/nvme{0,1,2,3}n1 \
            --assume-clean --bitmap=internal --bitmap-chunk=64M
    $ fio --name=test --direct=1 --filename=/dev/md0 --rw=randwrite --bs=4k --runtime=60
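
    The optimal I/O size the kernel sees for a member device can be checked
    via sysfs (it reads 0 when the device does not report one):

    $ cat /sys/block/nvme0n1/queue/optimal_io_size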
    
    Without patch:
      write: IOPS=1676, BW=6708KiB/s (6869kB/s)(393MiB/60001msec); 0 zone resets
    
    With patch:
      write: IOPS=15.7k, BW=61.4MiB/s (64.4MB/s)(3683MiB/60001msec); 0 zone resets
    
    Biosnoop:
    Without patch:
    Time        Process        PID     Device      LBA        Size      Lat(ms)
    1.410377    md0_raid10     6900    nvme0n1   W 16         4096      0.02
    1.410387    md0_raid10     6900    nvme2n1   W 16         4096      0.02
    1.410374    md0_raid10     6900    nvme3n1   W 16         4096      0.01
    1.410381    md0_raid10     6900    nvme1n1   W 16         4096      0.02
    1.410411    md0_raid10     6900    nvme1n1   W 115346512  4096      0.01
    1.410418    md0_raid10     6900    nvme0n1   W 115346512  4096      0.02
    1.410915    md0_raid10     6900    nvme2n1   W 24         3584      0.43 <--
    1.410935    md0_raid10     6900    nvme3n1   W 24         3584      0.45 <--
    1.411124    md0_raid10     6900    nvme1n1   W 24         3584      0.64 <--
    1.411147    md0_raid10     6900    nvme0n1   W 24         3584      0.66 <--
    1.411176    md0_raid10     6900    nvme3n1   W 2019022184 4096      0.01
    1.411189    md0_raid10     6900    nvme2n1   W 2019022184 4096      0.02
    
    With patch:
    Time        Process        PID     Device      LBA        Size      Lat(ms)
    5.747193    md0_raid10     727     nvme0n1   W 16         4096      0.01
    5.747192    md0_raid10     727     nvme1n1   W 16         4096      0.02
    5.747195    md0_raid10     727     nvme3n1   W 16         4096      0.01
    5.747202    md0_raid10     727     nvme2n1   W 16         4096      0.02
    5.747229    md0_raid10     727     nvme3n1   W 1196223704 4096      0.02
    5.747224    md0_raid10     727     nvme0n1   W 1196223704 4096      0.01
    5.747279    md0_raid10     727     nvme0n1   W 24         4096      0.01 <--
    5.747279    md0_raid10     727     nvme1n1   W 24         4096      0.02 <--
    5.747284    md0_raid10     727     nvme3n1   W 24         4096      0.02 <--
    5.747291    md0_raid10     727     nvme2n1   W 24         4096      0.02 <--
    5.747314    md0_raid10     727     nvme2n1   W 2234636712 4096      0.01
    5.747317    md0_raid10     727     nvme1n1   W 2234636712 4096      0.02
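
    The 3584-byte writes are consistent with the bitmap layout: assuming the
    256-byte bitmap superblock precedes the bitmap data, the store holds
    256 + ceil(57228/8) = 256 + 7154 = 7410 bytes, leaving 7410 - 4096 = 3314
    valid bytes in the last page. Rounded up to the 512-byte logical block
    size that is 3584 bytes; rounded up to the 4096-byte optimal I/O size it
    becomes a single aligned write, and the latency of the last bitmap page
    I/O drops from 0.43-0.66ms to the ~0.02ms seen for the other 4096-byte
    writes.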
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Jon Derrick <jonathan.derrick@linux.dev>
    Signed-off-by: Song Liu <song@kernel.org>
    Link: https://lore.kernel.org/r/20230224183323.638-4-jonathan.derrick@linux.dev