Commit 135517d3 authored by Andrey Grodzovsky, committed by Christian König

drm/scheduler: Avoid accessing freed bad job.

Problem:
Due to a race between drm_sched_cleanup_jobs in the scheduler thread and
drm_sched_job_timedout in the timeout work, there is a possibility that
the bad job has already been freed while it is still being accessed from
the timeout thread.

Fix:
Instead of just peeking at the bad job in the mirror list, remove it
from the list under the lock and put it back later, once we are
guaranteed that no race with the main scheduler thread is possible,
which is after the thread has been parked.
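
The resulting flow, condensed into one linear sketch for illustration (it
folds drm_sched_job_timedout() and drm_sched_stop() together; the actual
change is the diff below):

	/* Timeout worker: take the bad job off the mirror list under
	 * job_list_lock so drm_sched_get_cleanup_job() cannot free it
	 * concurrently. */
	spin_lock_irqsave(&sched->job_list_lock, flags);
	job = list_first_entry_or_null(&sched->ring_mirror_list,
				       struct drm_sched_job, node);
	if (job)
		list_del_init(&job->node);
	spin_unlock_irqrestore(&sched->job_list_lock, flags);

	if (job)
		job->sched->ops->timedout_job(job);

	/* Later, in drm_sched_stop(): the scheduler thread is parked first,
	 * so reinserting the job at the head of the list can no longer race
	 * with drm_sched_get_cleanup_job(). */
	kthread_park(sched->thread);
	if (bad && bad->sched == sched)
		list_add(&bad->node, &sched->ring_mirror_list);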

v2: Lock around processing ring_mirror_list in drm_sched_cleanup_jobs.

v3: Rebase on top of drm-misc-next. v2 is not needed anymore as
drm_sched_get_cleanup_job already has a lock there.

v4: Fix comments to reflect the latest code in drm-misc.
Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Emily Deng <Emily.Deng@amd.com>
Tested-by: Emily Deng <Emily.Deng@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Link: https://patchwork.freedesktop.org/patch/342356
parent 80827318
@@ -284,10 +284,21 @@ static void drm_sched_job_timedout(struct work_struct *work)
 	unsigned long flags;
 
 	sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
+
+	/* Protects against concurrent deletion in drm_sched_get_cleanup_job */
+	spin_lock_irqsave(&sched->job_list_lock, flags);
 	job = list_first_entry_or_null(&sched->ring_mirror_list,
 				       struct drm_sched_job, node);
 
 	if (job) {
+		/*
+		 * Remove the bad job so it cannot be freed by concurrent
+		 * drm_sched_cleanup_jobs. It will be reinserted back after sched->thread
+		 * is parked at which point it's safe.
+		 */
+		list_del_init(&job->node);
+		spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
 		job->sched->ops->timedout_job(job);
 
 		/*
@@ -298,6 +309,8 @@ static void drm_sched_job_timedout(struct work_struct *work)
 			job->sched->ops->free_job(job);
 			sched->free_guilty = false;
 		}
+	} else {
+		spin_unlock_irqrestore(&sched->job_list_lock, flags);
 	}
 
 	spin_lock_irqsave(&sched->job_list_lock, flags);
@@ -369,6 +382,20 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 
 	kthread_park(sched->thread);
 
+	/*
+	 * Reinsert back the bad job here - now it's safe as
+	 * drm_sched_get_cleanup_job cannot race against us and release the
+	 * bad job at this point - we parked (waited for) any in progress
+	 * (earlier) cleanups and drm_sched_get_cleanup_job will not be called
+	 * now until the scheduler thread is unparked.
+	 */
+	if (bad && bad->sched == sched)
+		/*
+		 * Add at the head of the queue to reflect it was the earliest
+		 * job extracted.
+		 */
+		list_add(&bad->node, &sched->ring_mirror_list);
+
 	/*
 	 * Iterate the job list from later to earlier one and either deactive
 	 * their HW callbacks or remove them from mirror list if they already
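
For reference, the consumer this fix guards against, drm_sched_get_cleanup_job(),
already takes job_list_lock when it pops a finished job off the mirror list
(as noted in v3 above). The following is a simplified, from-memory sketch of
that side, omitting the timeout-work handling, so details may differ from the
actual drm-misc-next code:

	static struct drm_sched_job *
	drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
	{
		struct drm_sched_job *job;
		unsigned long flags;

		spin_lock_irqsave(&sched->job_list_lock, flags);

		job = list_first_entry_or_null(&sched->ring_mirror_list,
					       struct drm_sched_job, node);

		if (job && dma_fence_is_signaled(&job->s_fence->finished))
			/* remove the finished job under the same lock the
			 * timeout handler now uses to pull out the bad job */
			list_del_init(&job->node);
		else
			job = NULL;

		spin_unlock_irqrestore(&sched->job_list_lock, flags);

		return job;
	}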