    blk-mq: Build default queue map via group_cpus_evenly() · 6a6dcae8
    Ming Lei authored
    The default queue mapping builder of blk_mq_map_queues() doesn't take
    NUMA topology into account, so the resulting mapping is pretty bad:
    CPUs belonging to different NUMA nodes can be assigned to the same
    queue. It is observed that IOPS drops by ~30% when two jobs run on the
    same hctx of null_blk from two CPUs belonging to two different NUMA
    nodes, compared with running both jobs from the same node.
    
    Address the issue by reusing group_cpus_evenly() to build the queue
    mapping, since group_cpus_evenly() groups CPUs according to CPU/NUMA
    locality.
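
    For illustration, here is a minimal sketch of what the reworked
    builder can look like, assuming the group_cpus_evenly() interface from
    lib/group_cpus.c (it returns an array of nr_queues cpumasks grouped by
    locality, or NULL on allocation failure); details may differ from the
    actual patch:

        #include <linux/blk-mq.h>
        #include <linux/group_cpus.h>
        #include <linux/slab.h>

        /* Sketch of the approach described above, not the verbatim patch. */
        void blk_mq_map_queues(struct blk_mq_queue_map *qmap)
        {
                const struct cpumask *masks;
                unsigned int queue, cpu;

                /* Group all CPUs into nr_queues groups by CPU/NUMA locality. */
                masks = group_cpus_evenly(qmap->nr_queues);
                if (!masks) {
                        /* Fallback: map every CPU to the first queue. */
                        for_each_possible_cpu(cpu)
                                qmap->mq_map[cpu] = qmap->queue_offset;
                        return;
                }

                /* Each locality group becomes the CPU set of one queue. */
                for (queue = 0; queue < qmap->nr_queues; queue++) {
                        for_each_cpu(cpu, &masks[queue])
                                qmap->mq_map[cpu] = qmap->queue_offset + queue;
                }
                kfree(masks);
        }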
    
    Performance also becomes more stable with this change, since the
    applied queue mapping is now correct from the NUMA-locality viewpoint.
    For example, on a two-node arm64 machine with 160 CPUs, node 0
    (CPUs 0~79) and node 1 (CPUs 80~159):
    
    1) modprobe null_blk nr_devices=1 submit_queues=2
    
    2) run 'fio(t/io_uring -p 0 -n 4 -r 20 /dev/nullb0)' and observe that
    IOPS becomes much more stable across multiple runs:
    
     - unpatched: IOPS is 2.5M ~ 4.5M
     - patched:   IOPS is 4.3M ~ 5.0M
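
    As an illustrative sanity check (output depends on the machine's
    topology), the per-hctx CPU list exposed by blk-mq sysfs can be read:

        # cat /sys/block/nullb0/mq/0/cpu_list
        # cat /sys/block/nullb0/mq/1/cpu_list

    With the NUMA-aware grouping, the two lists are expected to cover
    node 0's CPUs (0~79) and node 1's CPUs (80~159) respectively.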
    
    Lots of drivers may benefit from the change, such as nvme pci poll,
    nvme tcp, ...
    Signed-off-by: Ming Lei <ming.lei@redhat.com>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: John Garry <john.g.garry@oracle.com>
    Reviewed-by: Jens Axboe <axboe@kernel.dk>
    Link: https://lore.kernel.org/r/20221227022905.352674-7-ming.lei@redhat.com
blk-mq-cpumap.c