commit 3056e3ae
Author: Alex Lyakas <alex@zadarastorage.com>

    md/raid1: consider WRITE as successful only if at least one non-Faulty and non-rebuilding drive completed it.
    
    Without that fix, the following scenario could happen:
    
    - RAID1 with drives A and B; drive B was freshly-added and is rebuilding
    - Drive A fails
    - A WRITE request arrives at the array. It is failed by drive A, so
    r1_bio is marked as R1BIO_WriteError, but the rebuilding drive B
    succeeds in writing it, so the same r1_bio is marked as
    R1BIO_Uptodate (see the sketch after this list).
    - r1_bio arrives at handle_write_finished(); badblocks are disabled,
    and md_error()->error() does nothing because we don't fail the last
    drive of a raid1.
    - raid_end_bio_io() calls call_bio_endio().
    - As a result, in call_bio_endio():
            if (!test_bit(R1BIO_Uptodate, &r1_bio->state))
                    clear_bit(BIO_UPTODATE, &bio->bi_flags);
    this code does not clear the BIO_UPTODATE flag, so the whole master
    WRITE is reported back to the upper layer as successful.
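
    The root cause is the per-device write-completion path. A condensed
    sketch of the pre-fix behaviour in raid1_end_write_request()
    (paraphrased, not the verbatim raid1.c source): any successful
    per-device write marked the whole master r1_bio as up to date,
    regardless of the state of the device that completed it.

            if (!uptodate) {
                    /* e.g. the failed write on drive A; remembered so
                     * handle_write_finished() can deal with it */
                    set_bit(R1BIO_WriteError, &r1_bio->state);
            } else {
                    /* e.g. the successful write on the rebuilding
                     * drive B; this alone made the master WRITE
                     * count as successful */
                    set_bit(R1BIO_Uptodate, &r1_bio->state);
            }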
    
    So we returned success to the upper layer, even though the data had
    been written only to the rebuilding drive. But when we later want to
    read the data back, we will not read from the rebuilding drive, so
    this data is lost.
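
    The fix, in rough outline, is to set R1BIO_Uptodate only when the
    device that completed the write is usable for reading the data back,
    i.e. it is In_sync and not Faulty. A minimal sketch of that check in
    the success branch above (the bit names are the standard md rdev
    flags; the exact committed diff may differ):

            } else {
                    /*
                     * Only count this write towards "WRITE succeeded"
                     * if the device could later serve the data on a
                     * READ: skip Faulty devices and devices that are
                     * still rebuilding (not yet In_sync).
                     */
                    if (test_bit(In_sync, &conf->mirrors[mirror].rdev->flags) &&
                        !test_bit(Faulty, &conf->mirrors[mirror].rdev->flags))
                            set_bit(R1BIO_Uptodate, &r1_bio->state);
            }

    With that check, in the scenario above no eligible drive completes
    the WRITE, R1BIO_Uptodate is never set, call_bio_endio() clears
    BIO_UPTODATE, and the upper layer sees the write fail instead of
    silently losing the data.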
    
    [neilb - applied identical change to raid10 as well]
    
    This bug can result in lost data, so it is suitable for any
    -stable kernel.
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Alex Lyakas <alex@zadarastorage.com>
    Signed-off-by: NeilBrown <neilb@suse.de>