Commit bf4f5f2b, authored by Roman Gushchin, committed by Luis Henriques

md/raid5: fix locking in handle_stripe_clean_event()

commit b8a9d66d upstream.

After commit 566c09c5 ("raid5: relieve lock contention in get_active_stripe()"),
__find_stripe() is called under conf->hash_locks + hash, but
handle_stripe_clean_event() still calls remove_hash() under
conf->device_lock.

Under some circumstances the hash chain can become circular, and we get
an infinite loop in __find_stripe() with interrupts disabled and the
hash lock held. This leads to a hard lockup on multiple CPUs and a
subsequent system crash.
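
To make the race easier to picture, here is a minimal, self-contained
userspace sketch (plain C with pthreads, built with "cc -pthread"). It
is not kernel code: chain_lock, device_lock, lookup_thread and
unlink_thread are hypothetical stand-ins for conf->hash_locks + hash,
conf->device_lock, __find_stripe() and remove_hash(). It only shows the
pre-patch locking pattern, in which the thread walking the chain and the
thread unlinking from it take different locks, so nothing serializes
them against each other.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node { int key; struct node *next; };

static struct node *chain;	/* stand-in for one stripe hash chain */
static pthread_mutex_t chain_lock  = PTHREAD_MUTEX_INITIALIZER;	/* mirrors hash_locks + hash */
static pthread_mutex_t device_lock = PTHREAD_MUTEX_INITIALIZER;	/* mirrors conf->device_lock */

/* Walker, roughly playing the role of __find_stripe(): it only ever
 * takes chain_lock while traversing the chain. */
static void *lookup_thread(void *arg)
{
	for (int i = 0; i < 100000; i++) {
		pthread_mutex_lock(&chain_lock);
		for (struct node *n = chain; n; n = n->next)
			if (n->key < 0)
				break;	/* never found; only the walk matters here */
		pthread_mutex_unlock(&chain_lock);
	}
	return NULL;
}

/* Modifier, roughly playing the role of the pre-patch
 * handle_stripe_clean_event(): it unlinks the head node (cf.
 * remove_hash()) and reinserts it, but under device_lock, a lock the
 * walker never takes, so the two threads are not serialized. */
static void *unlink_thread(void *arg)
{
	for (int i = 0; i < 100000; i++) {
		pthread_mutex_lock(&device_lock);
		struct node *victim = chain;
		if (victim) {
			chain = victim->next;	/* unlink, as remove_hash() does */
			victim->next = chain;	/* put it straight back so the list never drains */
			chain = victim;
		}
		pthread_mutex_unlock(&device_lock);
	}
	return NULL;
}

int main(void)
{
	for (int i = 0; i < 8; i++) {
		struct node *n = malloc(sizeof(*n));
		n->key = i;
		n->next = chain;
		chain = n;
	}

	pthread_t a, b;
	pthread_create(&a, NULL, lookup_thread, NULL);
	pthread_create(&b, NULL, unlink_thread, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* The fix in this commit corresponds to taking chain_lock (the
	 * per-chain hash lock) in unlink_thread as well, so walks and
	 * unlinks on the same chain serialize. */
	puts("done");
	return 0;
}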

I was able to reproduce this behavior on raid6 over 6 SSD disks.
The devices_handle_discard_safely option should be set to enable TRIM
support. The following script was used:

for i in `seq 1 32`; do
    dd if=/dev/zero of=large$i bs=10M count=100 &
done

Signed-off-by: Roman Gushchin <klamm@yandex-team.ru>
Fixes: 566c09c5 ("raid5: relieve lock contention in get_active_stripe()")
Signed-off-by: NeilBrown <neilb@suse.com>
Cc: Shaohua Li <shli@kernel.org>
[ luis: backported to 3.16: used Roman's backport to 3.14 ]
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
parent 21a6ff7e

--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -3069,6 +3069,8 @@ static void handle_stripe_clean_event(struct r5conf *conf,
 		}
 	if (!discard_pending &&
 	    test_bit(R5_Discard, &sh->dev[sh->pd_idx].flags)) {
+		int hash = sh->hash_lock_index;
 		clear_bit(R5_Discard, &sh->dev[sh->pd_idx].flags);
 		clear_bit(R5_UPTODATE, &sh->dev[sh->pd_idx].flags);
 		if (sh->qd_idx >= 0) {
@@ -3082,9 +3084,9 @@ static void handle_stripe_clean_event(struct r5conf *conf,
 		 * no updated data, so remove it from hash list and the stripe
 		 * will be reinitialized
 		 */
-		spin_lock_irq(&conf->device_lock);
+		spin_lock_irq(conf->hash_locks + hash);
 		remove_hash(sh);
-		spin_unlock_irq(&conf->device_lock);
+		spin_unlock_irq(conf->hash_locks + hash);
 		if (test_bit(STRIPE_SYNC_REQUESTED, &sh->state))
 			set_bit(STRIPE_HANDLE, &sh->state);
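
With these hunks applied, remove_hash() in handle_stripe_clean_event()
runs under the same per-chain lock (conf->hash_locks + hash) that
__find_stripe() takes in the get_active_stripe() path, so lookups and
removals on a given hash chain are serialized again and the chain can
no longer be corrupted while it is being walked.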