1. 22 Sep, 2016 7 commits
    • dm crypt: fix crash on exit · f659b100
      Rabin Vincent authored
      As the documentation for kthread_stop() says, "if threadfn() may call
      do_exit() itself, the caller must ensure task_struct can't go away".
      dm-crypt does not ensure this and therefore crashes when crypt_dtr()
      calls kthread_stop().  The crash is trivially reproducible by adding a
      delay before the call to kthread_stop() and just opening and closing a
      dm-crypt device.
      
       general protection fault: 0000 [#1] PREEMPT SMP
       CPU: 0 PID: 533 Comm: cryptsetup Not tainted 4.8.0-rc7+ #7
       task: ffff88003bd0df40 task.stack: ffff8800375b4000
       RIP: 0010: kthread_stop+0x52/0x300
       Call Trace:
        crypt_dtr+0x77/0x120
        dm_table_destroy+0x6f/0x120
        __dm_destroy+0x130/0x250
        dm_destroy+0x13/0x20
        dev_remove+0xe6/0x120
        ? dev_suspend+0x250/0x250
        ctl_ioctl+0x1fc/0x530
        ? __lock_acquire+0x24f/0x1b10
        dm_ctl_ioctl+0x13/0x20
        do_vfs_ioctl+0x91/0x6a0
        ? ____fput+0xe/0x10
        ? entry_SYSCALL_64_fastpath+0x5/0xbd
        ? trace_hardirqs_on_caller+0x151/0x1e0
        SyS_ioctl+0x41/0x70
        entry_SYSCALL_64_fastpath+0x1f/0xbd
      
      This problem was introduced by bcbd94ff ("dm crypt: fix a possible
      hang due to race condition on exit").
      
      Looking at the description of that patch (excerpted below), it seems
      like the problem it addresses can be solved by just using
      set_current_state instead of __set_current_state, since we obviously
      need the memory barrier.
      
      | dm crypt: fix a possible hang due to race condition on exit
      |
      | A kernel thread executes __set_current_state(TASK_INTERRUPTIBLE),
      | __add_wait_queue, spin_unlock_irq and then tests kthread_should_stop().
      | It is possible that the processor reorders memory accesses so that
      | kthread_should_stop() is executed before __set_current_state().  If
      | such reordering happens, there is a possible race on thread
      | termination: [...]
      
      So this patch just reverts the aforementioned patch and changes the
      __set_current_state(TASK_INTERRUPTIBLE) to set_current_state(...).  This
      fixes the crash and should also fix the potential hang.
      
      Fixes: bcbd94ff ("dm crypt: fix a possible hang due to race condition on exit")
      Cc: Mikulas Patocka <mpatocka@redhat.com>
      Cc: stable@vger.kernel.org # v4.0+
      Signed-off-by: Rabin Vincent <rabinv@axis.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm cache metadata: switch to using the new cursor api for loading metadata · f177940a
      Joe Thornber authored
      This change offers a pretty significant performance improvement.
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm array: introduce cursor api · fdd1315a
      Joe Thornber authored
      A more efficient way to iterate an array, thanks to prefetching (makes
      use of the new dm_btree_cursor_* API).
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm btree: introduce cursor api · 7d111c81
      Joe Thornber authored
      This uses prefetching to speed up iteration through a btree.
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm cache policy smq: distribute entries to random levels when switching to smq · 9d1b404c
      Joe Thornber authored
      For smq the 32 bit 'hint' stores the multiqueue level that the entry
      should be stored in.  If a different policy was in use previously and
      the cache is then switched to smq, the hints will be invalid.  In that
      case, we used to put all entries in the bottom level of the multiqueue
      and then redistribute.  Redistribution is faster if we put entries with
      invalid hints in random levels initially.
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm cache: speed up writing of the hint array · 4e781b49
      Joe Thornber authored
      It's far quicker to always delete the hint array and recreate with
      dm_array_new() because we avoid the copying caused by mutation.
      
      This also simplifies the policy interface, replacing walk_hints() with
      the simpler get_hint().
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
    • dm array: add dm_array_new() · dd6a77d9
      Joe Thornber authored
      dm_array_new() creates a new, populated array more efficiently than
      starting with an empty one and resizing.
      Signed-off-by: Joe Thornber <ejt@redhat.com>
      Signed-off-by: Mike Snitzer <snitzer@redhat.com>
  2. 15 Sep, 2016 4 commits
  3. 14 Sep, 2016 23 commits
  4. 08 Sep, 2016 4 commits
  5. 29 Aug, 2016 2 commits