1. 11 Nov, 2013 (10 commits)
      bcache: Insert multiple keys at a time · 403b6cde
      Kent Overstreet authored
      We'll often end up with a list of adjacent keys to insert,
      because bch_data_insert() may have to fragment the data it writes.
      
      Originally, to keep things simple and avoid corner cases,
      bch_btree_insert() passed the keys from this list to
      btree_insert_recurse() one at a time - mainly because the list
      might span multiple leaf nodes, and handling that one key at a
      time was easier.
      
      With the btree_insert_node() refactoring, it's now much easier to
      pass down the whole list and have btree_insert_recurse() iterate
      over leaf nodes until it's done (sketched below).
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
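      
      A toy, self-contained illustration of the new shape of the loop
      (hypothetical types and names, not bcache's actual code): descend
      once per leaf, then drain every adjacent key that lands in that
      leaf before descending again.
      
        #include <stdio.h>
        #include <stddef.h>
        
        #define LEAF_SPAN 100ULL  /* pretend each leaf covers 100 offsets */
        
        struct key { unsigned long long offset; };
        
        struct keylist {
                struct key *front;
                size_t nr;
        };
        
        /* Stand-in for one btree_insert_recurse()-style descent: */
        static unsigned long long find_leaf(unsigned long long offset)
        {
                return offset / LEAF_SPAN;
        }
        
        static void insert_list(struct keylist *l)
        {
                while (l->nr) {
                        unsigned long long leaf = find_leaf(l->front->offset);
        
                        /* Drain every adjacent key belonging to this
                         * leaf before re-descending - the win over one
                         * full descent per key: */
                        while (l->nr && find_leaf(l->front->offset) == leaf) {
                                printf("leaf %llu: insert key @%llu\n",
                                       leaf, l->front->offset);
                                l->front++;
                                l->nr--;
                        }
                }
        }
        
        int main(void)
        {
                struct key keys[] = { {10}, {40}, {90}, {120}, {130}, {250} };
                struct keylist l = { keys, sizeof(keys) / sizeof(keys[0]) };
        
                insert_list(&l);  /* three descents for six keys */
                return 0;
        }
      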
      bcache: Add btree_insert_node() · 26c949f8
      Kent Overstreet authored
      The flow of control in the old btree insertion code was rather
      backwards: we'd recurse down the btree (in btree_insert_recurse()),
      and then if we needed to split, the keys to be inserted into the
      parent node would effectively be returned up to
      btree_insert_recurse(), which would notice there was more work to
      do and finish the insertion.
      
      The main problem with this was that the full btree insertion logic
      could only be used by calling btree_insert_recurse(); if you'd
      gotten to a btree leaf some other way and had a key to insert, and
      it turned out that node needed to be split, you were SOL.
      
      This inverts the flow of control so that btree_insert_node() does
      the _full_ btree insertion, including splitting - and takes the
      (leaf) btree node to insert into as a parameter (see the sketch
      below).
      
      This means we can now handle cache misses _correctly_: on a cache
      miss, we need to insert a fake "check" key into the btree while we
      still have the btree locked. Previously, if the btree node was
      full, inserting that key would just fail.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
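      
      A toy model of the inverted flow (hypothetical types, far simpler
      than the real code): insert_node() owns the whole insertion,
      splitting and pushing a key into its parent itself instead of
      returning that work to a recursive caller.
      
        #include <stdio.h>
        #include <stdbool.h>
        
        #define NODE_MAX 4
        
        struct node {
                int nkeys;
                struct node *parent;  /* NULL at the root */
        };
        
        static bool should_split(struct node *b)
        {
                return b->nkeys >= NODE_MAX;
        }
        
        static void insert_node(struct node *b, int key)
        {
                if (should_split(b)) {
                        /* Split here and push the resulting key into
                         * the parent ourselves - nothing is handed
                         * back up to the caller: */
                        b->nkeys /= 2;
                        if (b->parent)
                                insert_node(b->parent, key);  /* midpoint stand-in */
                }
                b->nkeys++;
                printf("node %p now has %d keys\n", (void *)b, b->nkeys);
        }
        
        int main(void)
        {
                struct node root = { 0, NULL };
                struct node leaf = { 0, &root };
        
                /* Anyone already holding a leaf gets full insertion,
                 * splits included: */
                for (int k = 1; k <= 6; k++)
                        insert_node(&leaf, k);
                return 0;
        }
      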
      bcache: Explicitly track btree node's parent · d6fd3b11
      Kent Overstreet authored
      This is prep work for the reworked btree insertion code.
      
      The way we set b->parent is ugly and hacky... the problem is, when
      btree_split() or garbage collection splits or rewrites a btree node, the
      parent changes for all its (potentially already cached) children.
      
      I may change this later and add some code to look through the
      btree node cache, find all our cached child nodes, and update
      their parent pointers then...
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
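      
      A tiny illustration of the staleness problem and one lazy repair
      (names and fix are assumptions for illustration, not bcache's
      actual code): refresh b->parent on every descent, so a pointer
      left stale by a split never survives past the next traversal.
      
        #include <stdio.h>
        
        struct btree {
                struct btree *parent;
        };
        
        /* Refresh the cached parent pointer on every walk down: */
        static struct btree *descend(struct btree *p, struct btree *child)
        {
                child->parent = p;
                return child;
        }
        
        int main(void)
        {
                struct btree root = { NULL }, leaf = { NULL };
                struct btree new_root = { NULL };
        
                descend(&root, &leaf);      /* leaf.parent is valid */
        
                /* root gets split/rewritten: leaf.parent is stale
                 * until the next descent refreshes it... */
                descend(&new_root, &leaf);
                printf("parent = %p\n", (void *)leaf.parent);
                return 0;
        }
      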
      bcache: Remove unnecessary check in should_split() · 8304ad4d
      Kent Overstreet authored
      Checking i->seq was redundant: since ages ago, we've always
      initialized the new bset when advancing b->written.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      bcache: Stripe size isn't necessarily a power of two · 2d679fc7
      Kent Overstreet authored
      Originally I got this right... except that the divides didn't use
      do_div(), which broke 32-bit kernels. When I went to fix that, I
      forgot that the raid stripe size usually isn't a power of two...
      doh. The corrected arithmetic is sketched below.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
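      
      A minimal userspace sketch of the corrected arithmetic, with a
      stand-in for the kernel's do_div() (which divides a 64-bit
      dividend by a 32-bit divisor in place and returns the remainder,
      avoiding the 64/64 division helpers 32-bit kernels don't have):
      
        #include <stdio.h>
        #include <stdint.h>
        
        /* Userspace stand-in with do_div()'s semantics: the quotient
         * is stored back into n, the remainder is returned. */
        #define do_div(n, base) ({                      \
                uint32_t _rem = (n) % (base);           \
                (n) /= (base);                          \
                _rem;                                   \
        })
        
        int main(void)
        {
                uint64_t offset = 1000000007ULL;  /* sector offset */
                uint32_t stripe_size = 3072;      /* not a power of two */
        
                uint32_t in_stripe = do_div(offset, stripe_size);
        
                /* offset is now the stripe index; a shift by
                 * ilog2(stripe_size) would have been wrong here. */
                printf("stripe %llu, offset within stripe %u\n",
                       (unsigned long long)offset, in_stripe);
                return 0;
        }
      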
      bcache: Add on error panic/unregister setting · 77c320eb
      Kent Overstreet authored
      Works kind of like the ext4 errors= setting: on error, either
      panic or unregister the cache set (where ext4 would remount
      read-only).
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
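      
      A toy sketch of the shape of such a policy switch (identifiers
      assumed for illustration, not necessarily bcache's):
      
        #include <stdio.h>
        #include <stdlib.h>
        
        enum on_error_action {
                ON_ERROR_UNREGISTER,  /* stop using the cache set */
                ON_ERROR_PANIC,       /* take the whole machine down */
        };
        
        /* In the kernel this would be a sysfs-settable attribute: */
        static enum on_error_action on_error = ON_ERROR_UNREGISTER;
        
        static void cache_set_error(const char *msg)
        {
                fprintf(stderr, "bcache: error: %s\n", msg);
        
                if (on_error == ON_ERROR_PANIC)
                        abort();  /* stand-in for panic() */
        
                /* otherwise, unregister the cache set and carry on */
        }
        
        int main(void)
        {
                cache_set_error("demo error");
                return 0;
        }
      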
      bcache: Use blkdev_issue_discard() · 49b1212d
      Kent Overstreet authored
      The old asynchronous discard code was really a relic from when all the
      allocation code was asynchronous - now that allocation runs out of a
      dedicated thread there's no point in keeping around all that complicated
      machinery.
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
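      
      Roughly what the synchronous call looks like from the allocator
      thread (3.x-era blkdev_issue_discard() signature; the surrounding
      names are illustrative rather than exact bcache code):
      
        #include <linux/blkdev.h>
        
        static void discard_bucket(struct block_device *bdev,
                                   sector_t sector, sector_t nr_sects)
        {
                /* Blocking is fine here - we're in the dedicated
                 * allocator thread, not in an interrupt or bio
                 * completion path: */
                blkdev_issue_discard(bdev, sector, nr_sects,
                                     GFP_KERNEL, 0);
        }
      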
      bcache: Fix a lockdep splat · dd9ec84d
      Kent Overstreet authored
      bch_keybuf_del() takes a spinlock that can't be taken in interrupt context -
      whoops. Fortunately, this code isn't enabled by default (you have to toggle a
      sysfs thing).
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
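      
      The general shape of this class of bug (the standard kernel idiom,
      not the exact bcache fix): if a lock can ever be taken from
      interrupt context, every acquirer must disable interrupts, or an
      interrupt arriving while the lock is held will spin on it forever.
      
        #include <linux/spinlock.h>
        
        static DEFINE_SPINLOCK(lock);
        
        /* Process context only - plain spin_lock() is fine: */
        static void process_path(void)
        {
                spin_lock(&lock);
                /* ... */
                spin_unlock(&lock);
        }
        
        /* Once a completion/interrupt path takes the same lock, every
         * path needs the irq-safe variants: */
        static void irq_safe_path(void)
        {
                unsigned long flags;
        
                spin_lock_irqsave(&lock, flags);
                /* ... */
                spin_unlock_irqrestore(&lock, flags);
        }
      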
      7857d5d4
      bcache: Fix dirty_data accounting · 1fa8455d
      Kent Overstreet authored
      Dirty data accounting wasn't quite right. Firstly, we were
      accounting for the key we're inserting only after it could already
      have merged with another dirty key in the btree; secondly, we
      could sometimes pass the wrong offset to
      bcache_dev_sectors_dirty_add() for dirty data we were overwriting
      - which matters when dirty data is tracked by stripe (see the
      sketch below).
      Signed-off-by: Kent Overstreet <kmo@daterainc.com>
      Cc: linux-stable <stable@vger.kernel.org> # >= v3.10
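      
      A toy model of why the offset matters once dirty data is tracked
      per stripe (all names hypothetical): the debit for overwritten
      data has to hit the stripe that actually held it.
      
        #include <stdio.h>
        #include <stdint.h>
        
        #define STRIPE_SIZE 8U
        
        static int64_t stripe_dirty[16];
        
        /* Credit or debit the stripe containing `offset`; passing the
         * wrong offset corrupts the wrong stripe's counter. */
        static void sectors_dirty_add(uint64_t offset, int64_t sectors)
        {
                stripe_dirty[offset / STRIPE_SIZE] += sectors;
        }
        
        int main(void)
        {
                sectors_dirty_add(0, 8);   /* write dirties stripe 0 */
                sectors_dirty_add(8, 8);   /* write dirties stripe 1 */
        
                /* Overwriting the data at offset 8: the debit must use
                 * the overwritten key's offset (stripe 1), not some
                 * other key's: */
                sectors_dirty_add(8, -8);
                sectors_dirty_add(8, 8);
        
                for (int i = 0; i < 2; i++)
                        printf("stripe %d: %lld dirty sectors\n", i,
                               (long long)stripe_dirty[i]);
                return 0;
        }
      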
2. 08 Nov, 2013 (30 commits)