habanalabs: prevent soft lockup during unmap
    When using a deep learning framework such as TensorFlow or PyTorch,
    there can be tens of thousands of host memory mappings. When the user
    frees all of those mappings at once, unmapping and unpinning them can
    take a long time, which may trigger a soft lockup bug.
    
    To prevent this, we need to yield the CPU so other work can run
    during the unmapping process. For now, we chose to yield once every
    32K unmappings (each unmap covers a single 4K page).
    Signed-off-by: Oded Gabbay <ogabbay@kernel.org>