Commit c660f40b authored by Nicolai Hähnle, committed by Alex Deucher

drm/amdgpu: fix user fence write race condition

The buffer object backing the user fence is reserved using the non-user
fence, i.e., as soon as the non-user fence is signaled, the user fence
buffer object can be moved or even destroyed.

Therefore, emit the user fence first.

Both fences have the same cache invalidation behavior, so this should
have no user-visible effect.
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
parent c3d0280b
@@ -231,6 +231,12 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
 	if (ib->flags & AMDGPU_IB_FLAG_TC_WB_NOT_INVALIDATE)
 		fence_flags |= AMDGPU_FENCE_FLAG_TC_WB_ONLY;
 
+	/* wrap the last IB with fence */
+	if (job && job->uf_addr) {
+		amdgpu_ring_emit_fence(ring, job->uf_addr, job->uf_sequence,
+				       fence_flags | AMDGPU_FENCE_FLAG_64BIT);
+	}
+
 	r = amdgpu_fence_emit(ring, f, fence_flags);
 	if (r) {
 		dev_err(adev->dev, "failed to emit fence (%d)\n", r);
@@ -243,12 +249,6 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
 	if (ring->funcs->insert_end)
 		ring->funcs->insert_end(ring);
 
-	/* wrap the last IB with fence */
-	if (job && job->uf_addr) {
-		amdgpu_ring_emit_fence(ring, job->uf_addr, job->uf_sequence,
-				       fence_flags | AMDGPU_FENCE_FLAG_64BIT);
-	}
-
 	if (patch_offset != ~0 && ring->funcs->patch_cond_exec)
 		amdgpu_ring_patch_cond_exec(ring, patch_offset);
...
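
For context, a condensed sketch (not the literal upstream code) of the emit order in amdgpu_ib_schedule() after this change; error handling, insert_end, and the cond_exec patching are omitted:

	/* Emit the user fence first: its backing buffer object is only
	 * guaranteed to stay resident until the non-user fence signals,
	 * so its write must land on the ring before that fence. */
	if (job && job->uf_addr)
		amdgpu_ring_emit_fence(ring, job->uf_addr, job->uf_sequence,
				       fence_flags | AMDGPU_FENCE_FLAG_64BIT);

	/* Then emit the non-user (scheduler) fence that marks completion. */
	r = amdgpu_fence_emit(ring, f, fence_flags);

With this ordering, the user fence write can no longer race with the buffer object being moved or destroyed once the non-user fence signals.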