Commit bea84c4d authored by Omer Shpigelman, committed by Oded Gabbay

habanalabs: invalidate MMU cache only once

Reduce context close time by performing the MMU cache invalidation once at
the end of the unmap loop rather than in each iteration, in order to avoid
a hard reset with open contexts.
A reset with open contexts can potentially lead to a kernel crash, as the
generic pool of MMU hops is destroyed while it is not yet empty, because
some unmap operations have not been completed.
This commit mainly matters when running on the simulator (a simplified
sketch of the pattern follows the commit metadata below).
Signed-off-by: Omer Shpigelman <oshpigelman@habana.ai>
Reviewed-by: Oded Gabbay <oded.gabbay@gmail.com>
Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
parent 71c5e55e
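
For context, here is a minimal, self-contained sketch of the pattern this commit applies. The names (unmap_one, invalidate_cache, teardown_context) are hypothetical stand-ins, not the driver's API: the expensive MMU cache invalidation is skipped inside the per-mapping unmap loop during context teardown and issued once after the loop instead.

/*
 * Hypothetical, simplified illustration of hoisting cache invalidation
 * out of the unmap loop. Not the actual habanalabs driver code.
 */
#include <stdbool.h>
#include <stdio.h>

static int invalidations;	/* counts how often the expensive flush runs */

static void invalidate_cache(void)
{
	invalidations++;
	printf("MMU cache invalidated (total: %d)\n", invalidations);
}

/* Unmap a single mapping; flush per call only outside context teardown. */
static void unmap_one(int mapping, bool ctx_free)
{
	printf("unmapping %d\n", mapping);
	if (!ctx_free)
		invalidate_cache();
}

/* Tear down all of a context's mappings, then flush the cache once. */
static void teardown_context(const int *mappings, int count)
{
	int i;

	for (i = 0; i < count; i++)
		unmap_one(mappings[i], true);

	invalidate_cache();	/* once, after the loop */
}

int main(void)
{
	int mappings[] = { 1, 2, 3, 4 };

	teardown_context(mappings, 4);	/* 4 unmaps, 1 invalidation */
	return 0;
}

With four mappings this performs a single invalidation instead of four, which is the same saving the commit achieves per context close.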
@@ -1065,7 +1065,13 @@ static int unmap_device_va(struct hl_ctx *ctx, u64 vaddr, bool ctx_free)
 	unmap_phys_pg_pack(ctx, vaddr, phys_pg_pack);
-	hdev->asic_funcs->mmu_invalidate_cache(hdev, true, *vm_type);
+	/*
+	 * During context free this function is called in a loop to clean all
+	 * the context mappings. Hence the cache invalidation can be called once
+	 * at the loop end rather than for each iteration
+	 */
+	if (!ctx_free)
+		hdev->asic_funcs->mmu_invalidate_cache(hdev, true, *vm_type);
 	mutex_unlock(&ctx->mmu_lock);
@@ -1663,6 +1669,10 @@ void hl_vm_ctx_fini(struct hl_ctx *ctx)
 		unmap_device_va(ctx, hnode->vaddr, true);
 	}
+	/* invalidate the cache once after the unmapping loop */
+	hdev->asic_funcs->mmu_invalidate_cache(hdev, true, VM_TYPE_USERPTR);
+	hdev->asic_funcs->mmu_invalidate_cache(hdev, true, VM_TYPE_PHYS_PACK);
 	spin_lock(&vm->idr_lock);
 	idr_for_each_entry(&vm->phys_pg_pack_handles, phys_pg_list, i)
 		if (phys_pg_list->asid == ctx->asid) {