1. 31 Oct, 2011 1 commit
    • md/raid10: Fix bug when activating a hot-spare. · 7fcc7c8a
      NeilBrown authored
      This is a fairly serious bug in RAID10.
      
      When a RAID10 array is degraded and a hot-spare is activated, the
      spare does not take up the empty slot, but rather replaces the first
      working device.
      This is likely to make the array non-functional.   It would normally
      be possible to recover the data, but that would need care and is not
      guaranteed.
      
      This bug was introduced in commit 2bb77736, which first appeared
      in 3.1.
      
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
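
      The intended behaviour is easy to state in code.  A minimal
      standalone sketch of the slot-selection rule (stand-in types and
      names, not the actual raid10.c code):

        #include <stddef.h>

        struct rdev { int dummy; };                 /* stand-in for struct md_rdev */
        struct mirror_info { struct rdev *rdev; };  /* one slot per member device */

        /* A hot-spare must land in an empty slot; the bug let it replace
         * the first working slot instead. */
        static int add_spare(struct mirror_info *mirrors, int raid_disks,
                             struct rdev *spare)
        {
            for (int slot = 0; slot < raid_disks; slot++) {
                if (mirrors[slot].rdev != NULL)
                    continue;       /* occupied: keep scanning, never replace */
                mirrors[slot].rdev = spare;
                return slot;        /* recovery will rebuild this slot */
            }
            return -1;              /* no empty slot found */
        }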
  2. 26 Oct, 2011 1 commit
    • md: Fix some bugs in recovery_disabled handling. · d890fa2b
      NeilBrown authored
      In 3.0 we changed the way recovery_disabled was handled so that
      instead of testing against zero, we test an mddev-> value against
      a conf-> value.
      Two problems:
        1/ one place in raid1 was missed and still sets it to '1'.
        2/ We didn't explicitly set the conf-> value at array creation
           time.
           It defaulted to '0', just like the mddev value does, so they
           could appear equal and thus disable recovery.
           This did not affect normal 'md' as it calls bind_rdev_to_array
           which changes the mddev value.  However the dmraid interface
           doesn't call this and so doesn't change ->recovery_disabled; so at
           array start all recovery is incorrectly disabled.
      
      So initialise the 'conf' value to one less than the mddev value, so
      the two will only be the same when explicitly set that way.
      Reported-by: Jonathan Brassow <jbrassow@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
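
      The initialisation idea fits in one line; here it is as a standalone
      sketch (simplified stand-in structures, field name from the message):

        struct mddev_s { int recovery_disabled; };  /* stand-ins for mddev/conf */
        struct conf_s  { int recovery_disabled; };

        static void init_conf(struct conf_s *conf, struct mddev_s *mddev)
        {
            /* Recovery is treated as disabled only when the two values
             * match, so start them off deliberately unequal; they become
             * equal only when md explicitly copies one to the other. */
            conf->recovery_disabled = mddev->recovery_disabled - 1;
        }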
  3. 25 Oct, 2011 1 commit
    • md/raid5: fix bug that could result in reads from a failed device. · 355840e7
      NeilBrown authored
      This bug was introduced in 415e72d0, which was in 2.6.36.
      
      There is a small window of time between when a device fails and when
      it is removed from the array.  During this time we might still read
      from it, but we won't write to it - so it is possible that we could
      read stale data.
      
      We didn't need the test of 'Faulty' before because the test on
      In_sync was sufficient.  Since we started allowing reads from the
      early part of non-In_sync devices, we need a test on Faulty too.
      
      This is suitable for any kernel from 2.6.36 onwards, though the patch
      might need a bit of tweaking in 3.0 and earlier.
      
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
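
      The shape of the required check, as a standalone sketch (boolean
      fields stand in for the In_sync/Faulty flag bits; the exact raid5.c
      condition differs):

        #include <stdbool.h>

        struct rdev {
            bool in_sync;                        /* fully recovered member */
            bool faulty;                         /* failed, not yet removed */
            unsigned long long recovery_offset;  /* sectors recovered so far */
        };

        /* May we read 'sector' from this device?  Testing In_sync alone
         * was enough until reads from the already-recovered early part of
         * non-In_sync devices were allowed; now Faulty must be tested too,
         * or a just-failed device could serve stale data. */
        static bool can_read(const struct rdev *rdev, unsigned long long sector)
        {
            if (rdev->faulty)
                return false;
            return rdev->in_sync || sector < rdev->recovery_offset;
        }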
  4. 20 Oct, 2011 1 commit
  5. 19 Oct, 2011 1 commit
  6. 18 Oct, 2011 2 commits
  7. 11 Oct, 2011 16 commits
  8. 07 Oct, 2011 8 commits
  9. 23 Sep, 2011 1 commit
    • md: don't delay reboot by 1 second if no MD devices exist · 2dba6a91
      Daniel P. Berrange authored
      The md_notify_reboot() method includes a call to mdelay(1000),
      to deal with "exotic SCSI devices" which are too volatile on
      reboot. The delay is unconditional. Even if the machine does
      not have any block devices, let alone MD devices, the kernel
      shutdown sequence is slowed down.
      
      1 second does not matter much with physical hardware, but with
      certain virtualization use cases any wasted time in the bootup
      & shutdown sequence counts for a lot.
      
      * drivers/md/md.c: md_notify_reboot() - only impose a delay if
        there was at least one MD device to be stopped during reboot
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
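
      The pattern, reduced to a standalone sketch (the helpers are
      placeholders, not the md.c names):

        #include <stdbool.h>

        /* Placeholders standing in for kernel facilities. */
        static void delay_ms(unsigned int ms) { (void)ms; /* kernel: mdelay() */ }
        static bool stop_next_md_array(void) { return false; /* none in this stub */ }

        static void notify_reboot(void)
        {
            bool stopped_any = false;

            while (stop_next_md_array())
                stopped_any = true;

            /* Pay the one-second settle delay only if an array was
             * actually stopped; a machine with no MD devices reboots
             * without it. */
            if (stopped_any)
                delay_ms(1000);
        }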
  10. 21 Sep, 2011 4 commits
    • md/bitmap: improve handling of 'allclean'. · 2585f3ef
      NeilBrown authored
      The 'allclean' flag is used to cache the fact that there is nothing to
      do, so we can avoid waking up and scanning the bitmap regularly.
      
      The two sorts of pages that might need the attention of the bitmap
      daemon are BITMAP_PAGE_PENDING and BITMAP_PAGE_NEEDWRITE pages.
      
      So make sure allclean reflects exactly when there are none of those.
      So:
       - set it before scanning all pages with either bit set;
       - clear it whenever one of these bits is set;
       - clear it when we decide not to clear one of these bits;
       - don't clear it at any other time.
      Signed-off-by: NeilBrown <neilb@suse.de>
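
      The rules above, sketched as standalone C (flag values and structures
      are simplified stand-ins for the bitmap code):

        #include <stdbool.h>

        #define PAGE_PENDING   (1u << 0)  /* stand-in for BITMAP_PAGE_PENDING */
        #define PAGE_NEEDWRITE (1u << 1)  /* stand-in for BITMAP_PAGE_NEEDWRITE */

        struct bitmap {
            bool allclean;           /* cached "daemon has nothing to do" */
            unsigned *page_attr;     /* one attribute word per bitmap page */
            int pages;
        };

        /* Whenever either bit is set, the cached flag goes stale. */
        static void set_page_attr(struct bitmap *b, int page, unsigned attr)
        {
            b->page_attr[page] |= attr;
            if (attr & (PAGE_PENDING | PAGE_NEEDWRITE))
                b->allclean = false;
        }

        /* Set allclean optimistically before the scan; clear it again for
         * any page that still needs attention (including one whose bit we
         * chose not to clear), and at no other time. */
        static void daemon_scan(struct bitmap *b)
        {
            b->allclean = true;
            for (int p = 0; p < b->pages; p++)
                if (b->page_attr[p] & (PAGE_PENDING | PAGE_NEEDWRITE))
                    b->allclean = false;
        }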
    • md/bitmap: rename and tidy up BITMAP_PAGE_CLEAN · 5a537df4
      NeilBrown authored
      The flag 'BITMAP_PAGE_CLEAN' has a confusing name as it doesn't mean
      that the page is clean, but rather that there are counters in the page
      which allow bits in the bitmap to be cleared - i.e. maybe cleaning can
      happen.
      
      So change it to BITMAP_PAGE_PENDING and fix some irregularities:
       - Don't set it in bitmap_init_from_disk as bitmap_set_memory_bits
         sets it when needed
       - in bitmap_daemon_work, if we find a counter that is '1', but
         need_sync is set, then set BITMAP_PAGE_PENDING again (it was
         recently cleared) to ensure we don't forget about this bit.
      
      Signed-off-by: NeilBrown <neilb@suse.de>
    • md: Avoid waking up a thread after it has been freed. · 01f96c0a
      NeilBrown authored
      Two related problems:
      
      1/ some error paths call "md_unregister_thread(mddev->thread)"
         without subsequently clearing ->thread.  A subsequent call
         to mddev_unlock will try to wake the thread, and crash.
      
        2/ Most calls to md_wakeup_thread are protected against the thread
           disappearing either by:
            - holding the ->mutex
            - having an active request, so something else must be keeping
              the array active.
         However mddev_unlock calls md_wakeup_thread after dropping the
         mutex and without any certainty of an active request, so the
         ->thread could theoretically disappear.
         So we need a spinlock to provide some protection.
      
      So change md_unregister_thread to take a pointer to the thread
      pointer, and ensure that it always does the required locking, and
      clears the pointer properly.
      Reported-by: "Moshe Melnikov" <moshe@zadarastorage.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
      Cc: stable@kernel.org
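
      The fix's shape as a standalone sketch, using a pthread mutex where
      the kernel uses a spinlock (types and locking simplified, not the
      md.c code):

        #include <stddef.h>
        #include <pthread.h>

        struct md_thread { int dummy; };   /* stand-in */

        static pthread_mutex_t thread_lock = PTHREAD_MUTEX_INITIALIZER;

        static void stop_and_free(struct md_thread *t) { (void)t; }

        /* Wakers check the pointer under the lock, so they can never see
         * a thread that unregister has already begun to tear down. */
        static void wakeup_thread(struct md_thread **threadp)
        {
            pthread_mutex_lock(&thread_lock);
            if (*threadp) {
                /* safe to wake: cannot be freed while we hold the lock */
            }
            pthread_mutex_unlock(&thread_lock);
        }

        /* Taking a pointer to the thread pointer lets the function clear
         * it under the lock itself, so callers cannot forget to. */
        static void unregister_thread(struct md_thread **threadp)
        {
            struct md_thread *t;

            pthread_mutex_lock(&thread_lock);
            t = *threadp;
            *threadp = NULL;
            pthread_mutex_unlock(&thread_lock);

            if (t)
                stop_and_free(t);
        }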
  11. 10 Sep, 2011 4 commits
    • md: Fix handling for devices from 2TB to 4TB in 0.90 metadata. · 27a7b260
      NeilBrown authored
      0.90 metadata uses an unsigned 32bit number to count the number of
      kilobytes used from each device.
      This should allow up to 4TB per device.
      However we multiply this by 2 (to get sectors) before casting to a
      larger type, so sizes above 2TB get truncated.
      
      Also we allow rdev->sectors to be larger than 4TB, so it is possible
      for the array to be resized larger than the metadata can handle.
      So make sure rdev->sectors never exceeds 4TB when 0.90 metadata is in
      use.
      
      Also the sanity check at the end of super_90_load should include level
      1, as it uses ->size too.  (RAID0 and Linear don't use ->size at all.)
      Reported-by: Pim Zandbergen <P.Zandbergen@macroscoop.nl>
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
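
      The truncation in miniature, runnable as plain C (the kernel widens
      to sector_t; uint64_t stands in here):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint32_t size_kb = UINT32_C(3) * 1024 * 1024 * 1024;  /* 3TB in KB */

            /* Bug: doubling happens in 32 bits, wrapping for sizes >2TB... */
            uint64_t bad  = (uint64_t)(size_kb * 2);
            /* ...fix: widen first, then convert KB to 512-byte sectors. */
            uint64_t good = (uint64_t)size_kb * 2;

            printf("truncated: %llu sectors, correct: %llu sectors\n",
                   (unsigned long long)bad, (unsigned long long)good);
            return 0;
        }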
    • md/raid1,10: Remove use-after-free bug in make_request. · 079fa166
      NeilBrown authored
      A single request to RAID1 or RAID10 might result in multiple
      requests if there are known bad blocks that need to be avoided.
      
      To detect if we need to submit another write request we test:
       	if (sectors_handled < (bio->bi_size >> 9)) {
      
      However this is after we call **_write_done() so the 'bio' no longer
      belongs to us - the writes could have completed and the bio freed.
      
      So move the **_write_done call until after the test against
      bio->bi_size.
      
      This addresses https://bugzilla.kernel.org/show_bug.cgi?id=41862
      Reported-by: Bruno Wolff III <bruno@wolff.to>
      Tested-by: Bruno Wolff III <bruno@wolff.to>
      Signed-off-by: NeilBrown <neilb@suse.de>
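
      The required ordering, sketched standalone ('write_done' is a
      placeholder for the raid1/raid10 completion helper):

        #include <stdbool.h>

        struct bio   { unsigned bi_size; };   /* stand-in for the block-layer bio */
        struct r_bio { struct bio *master_bio; };

        static void prepare_next_request(struct bio *b, int done) { (void)b; (void)done; }
        static void write_done(struct r_bio *r) { (void)r; /* may end & free the bio */ }

        static void finish_writes(struct r_bio *r_bio, struct bio *bio,
                                  int sectors_handled)
        {
            /* Read bi_size while the bio still belongs to us... */
            bool more = sectors_handled < (int)(bio->bi_size >> 9);
            if (more)
                prepare_next_request(bio, sectors_handled);

            /* ...and signal completion only afterwards: once write_done
             * runs, the last member write may finish and free the bio. */
            write_done(r_bio);
        }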
    • md/raid10: unify handling of write completion. · 19d5f834
      NeilBrown authored
      A write can complete at two different places:
      1/ when the last member-device write completes, through
         raid10_end_write_request
      2/ in make_request() when we remove the initial bias from ->remaining.
      
      These two should do exactly the same thing and the comment says they
      do, but they don't.
      
      So factor the correct code out into a function and call it in both
      places.  This makes the code much more similar to RAID1.
      
      The difference is only significant if there is an error, and errors
      usually take a while to appear, so it is unlikely that there will
      already be an error when make_request is completing; so this is
      unlikely to cause real problems.
      Signed-off-by: NeilBrown <neilb@suse.de>
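
      The refactoring pattern in outline (placeholder names; the real
      helper lives in raid10.c):

        struct r10_bio { int remaining; /* plus error state, master bio, ... */ };

        static void handle_completed_write(struct r10_bio *r10) { (void)r10; }

        /* One function, called both from the end_request path and from
         * make_request when it removes the initial bias, so the two
         * completion paths cannot drift apart again. */
        static void write_completion(struct r10_bio *r10)
        {
            if (--r10->remaining == 0)
                handle_completed_write(r10);
        }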
    • Avoid dereferencing a 'request_queue' after last close. · 94007751
      NeilBrown authored
      On the last close of an 'md' device which has been stopped, the device
      is destroyed and in particular the request_queue is freed.  The free
      is done in a separate thread so it might happen a short time later.
      
      __blkdev_put calls bdev_inode_switch_bdi *after* ->release has been
      called.
      
      Since commit f758eeab,
      bdev_inode_switch_bdi will dereference the 'old' bdi, which lives
      inside a request_queue, to get a spin lock.  This causes the last
      close on an md device to sometimes take a spin_lock which lives in
      freed memory - which results in an oops.
      
      So move the call to bdev_inode_switch_bdi before the call to
      ->release.
      
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Acked-by: Wu Fengguang <fengguang.wu@intel.com>
      Cc: stable@kernel.org
      Signed-off-by: NeilBrown <neilb@suse.de>
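
      The ordering constraint as a standalone sketch (placeholder names;
      the real change is in __blkdev_put):

        struct block_device { int dummy; };  /* stand-in */

        static void switch_bdi(struct block_device *b) { (void)b; /* locks the queue's bdi */ }
        static void do_release(struct block_device *b) { (void)b; /* may start queue teardown */ }

        static void last_close(struct block_device *bdev)
        {
            /* Switch the inode's bdi while the request_queue - and the
             * spinlock inside its bdi - is guaranteed to still exist... */
            switch_bdi(bdev);

            /* ...then drop the last reference: ->release may trigger
             * freeing of the queue in another thread. */
            do_release(bdev);
        }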