Commit 210787e8 authored by Stanislaw Gruszka, committed by John W. Linville

iwl3945: fix possible il->txq NULL pointer dereference in delayed works

In the il3945_down procedure we free the tx queue data and nullify the
il->txq pointer. After that we drop the mutex and only then cancel the
delayed works. There is a possibility that, after dropping the mutex and
before the cancel, some delayed work will start and crash while trying
to send commands to the device. For example, here is the reported crash
in il3945_bg_reg_txpower_periodic():
https://bugzilla.kernel.org/show_bug.cgi?id=42766#c10
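
To make the window concrete, here is a simplified sketch of the down
path as described above (hand-reduced for illustration, not the exact
kernel source; il3945_down_sketch is an illustrative name, while the
helpers it calls are the real driver functions):

static void il3945_down_sketch(struct il_priv *il)
{
	mutex_lock(&il->mutex);
	il3945_hw_txq_ctx_free(il);	/* frees tx queue data and
					 * leaves il->txq == NULL */
	mutex_unlock(&il->mutex);

	/*
	 * RACE WINDOW: a delayed work scheduled earlier may run here,
	 * take il->mutex and try to send a command through the
	 * already-freed tx queue -> NULL pointer dereference.
	 */
	il3945_cancel_deferred_work(il);
}

A work that runs inside this window takes il->mutex but finds the
queues already gone, which is why the il->txq test has to be made
under the mutex.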

Fix the problem by adding an il->txq check to the works that send
commands and hence utilize the tx queue.
Reported-by: Clemens Eisserer <linuxhippy@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
parent 182ada1c
drivers/net/wireless/iwlegacy/3945-mac.c
@@ -2475,7 +2475,7 @@ il3945_bg_alive_start(struct work_struct *data)
 	    container_of(data, struct il_priv, alive_start.work);
 
 	mutex_lock(&il->mutex);
-	if (test_bit(S_EXIT_PENDING, &il->status))
+	if (test_bit(S_EXIT_PENDING, &il->status) || il->txq == NULL)
 		goto out;
 
 	il3945_alive_start(il);
...
drivers/net/wireless/iwlegacy/3945.c
@@ -1870,11 +1870,12 @@ il3945_bg_reg_txpower_periodic(struct work_struct *work)
 	struct il_priv *il = container_of(work, struct il_priv,
 					  _3945.thermal_periodic.work);
 
-	if (test_bit(S_EXIT_PENDING, &il->status))
-		return;
-
 	mutex_lock(&il->mutex);
+	if (test_bit(S_EXIT_PENDING, &il->status) || il->txq == NULL)
+		goto out;
+
 	il3945_reg_txpower_periodic(il);
+out:
 	mutex_unlock(&il->mutex);
 }
...