    swiotlb: reduce the number of areas to match actual memory pool size · 8ac04063
    Petr Tesarik authored
    Although the desired size of the SWIOTLB memory pool is increased in
    swiotlb_adjust_nareas() to match the number of areas, the actual allocation
    may be smaller, which may require reducing the number of areas.
    
    For example, Xen uses swiotlb_init_late(), which in turn uses the page
    allocator. On x86, page size is 4 KiB and MAX_ORDER is 10 (1024 pages),
    resulting in a maximum memory pool size of 4 MiB. This corresponds to 2048
    slots of 2 KiB each. The minimum area size is 128 (IO_TLB_SEGSIZE),
    allowing at most 2048 / 128 = 16 areas.
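    
    The arithmetic above can be sketched in a small userspace program. This is
    a minimal illustration, not the kernel code itself; limit_nareas() is a
    hypothetical helper that mirrors the clamping the fix performs, and the
    constants are the x86 values cited in this message.
    
    ```c
    #include <stdio.h>
    
    #define PAGE_SIZE       4096UL  /* x86 page size cited above */
    #define MAX_ORDER       10      /* largest page-allocator order: 1024 pages */
    #define IO_TLB_SHIFT    11      /* 2 KiB swiotlb slots */
    #define IO_TLB_SEGSIZE  128     /* minimum number of slots per area */
    
    /* Hypothetical helper mirroring the fix: reduce the number of areas
     * so that every area holds at least IO_TLB_SEGSIZE slots. */
    static unsigned int limit_nareas(unsigned int nareas, unsigned long nslots)
    {
            if (nslots < (unsigned long)nareas * IO_TLB_SEGSIZE)
                    return nslots / IO_TLB_SEGSIZE;
            return nareas;
    }
    
    int main(void)
    {
            unsigned long bytes  = (1UL << MAX_ORDER) * PAGE_SIZE; /* 4 MiB cap */
            unsigned long nslots = bytes >> IO_TLB_SHIFT;          /* 2048 slots */
    
            printf("max pool: %lu KiB, %lu slots\n", bytes / 1024, nslots);
            /* e.g. a 32-CPU machine would initially want 32 areas */
            printf("areas: %u\n", limit_nareas(32, nslots));
            return 0;
    }
    ```
    
    With these values the pool caps at 4096 KiB / 2048 slots, and a request for
    32 areas is clamped to 2048 / 128 = 16.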
    
    If num_possible_cpus() exceeds this maximum number of areas, each area ends
    up with fewer than IO_TLB_SEGSIZE slots, so a contiguous group of free slots
    can span multiple areas. When such slots are allocated and freed, only one of
    those areas is locked; concurrent access to the slots in the unlocked areas
    then races, ultimately causing data corruption, kernel hangs and crashes.
    
    Fixes: 20347fca ("swiotlb: split up the global swiotlb lock")
    Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
    Reviewed-by: Roberto Sassu <roberto.sassu@huawei.com>
    Signed-off-by: Christoph Hellwig <hch@lst.de>
swiotlb.c 30.6 KB