1. 24 Aug, 2010 1 commit
    • ceph: maintain i_head_snapc when any caps are dirty, not just for data · 7d8cb26d
      Sage Weil authored
      We used to use i_head_snapc to keep track of which snapc the current epoch
      of dirty data was dirtied under.  It is used by queue_cap_snap to set up
      the cap_snap.  However, since we queue cap snaps for any dirty caps, not
      just for dirty file data, we need to keep a valid i_head_snapc anytime
      we have dirty|flushing caps.  This fixes a NULL pointer deref in
      queue_cap_snap when writing back dirty caps without data (e.g.,
      snaptest-authwb.sh).
      Signed-off-by: Sage Weil <sage@newdream.net>
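The invariant this commit establishes can be sketched in a simplified userspace model: an inode must hold a valid head snap context whenever *any* caps are dirty or flushing, not only when there is dirty file data. All structure and function names below are illustrative stand-ins, not the actual fs/ceph definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified model of the rule above. */
struct snapc { int seq; };
struct inode_model {
    unsigned dirty_caps;     /* caps with dirty state */
    unsigned flushing_caps;  /* caps currently being flushed */
    struct snapc *head_snapc;
};

/* Nonzero when queue_cap_snap could run safely: either nothing is
 * dirty/flushing, or head_snapc is valid (no NULL deref possible). */
int head_snapc_invariant_holds(const struct inode_model *ci)
{
    if (ci->dirty_caps == 0 && ci->flushing_caps == 0)
        return 1;                  /* nothing to snapshot */
    return ci->head_snapc != NULL; /* must be set while anything is dirty */
}

/* Mark caps dirty, pinning the head snapc on the first dirty bit --
 * the fix: do this for *any* dirty cap, not just dirty file data. */
void mark_caps_dirty(struct inode_model *ci, unsigned mask, struct snapc *head)
{
    if (ci->dirty_caps == 0 && ci->flushing_caps == 0)
        ci->head_snapc = head;     /* first dirty bit: pin the snapc */
    ci->dirty_caps |= mask;
}
```

Under this model, writing back dirty caps without data still finds a non-NULL `head_snapc`, which is the crash the commit describes fixing.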
  2. 23 Aug, 2010 2 commits
  3. 22 Aug, 2010 6 commits
    • mm: exporting account_page_dirtied · 679ceace
      Michael Rubin authored
      This allows code outside of the mm core to safely manipulate page state
      and not worry about the other accounting. Not using these routines means
      that some code will lose track of the accounting and we get bugs. This
      has happened once already.
      Signed-off-by: Michael Rubin <mrubin@google.com>
      Signed-off-by: Sage Weil <sage@newdream.net>
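The accounting hazard this commit addresses can be illustrated with a toy model: setting a page's dirty state must update the associated counters in the same place, or callers outside the mm core will let the two drift apart. The names below are illustrative, not the real mm API.

```c
#include <assert.h>

/* Toy model: one helper keeps page state and accounting consistent,
 * analogous in spirit to account_page_dirtied(). Illustrative names only. */
struct mm_model {
    int nr_dirty_pages;   /* accounting counter */
};
struct page_model {
    int dirty;            /* page state bit */
};

void model_set_page_dirty(struct mm_model *mm, struct page_model *page)
{
    if (!page->dirty) {
        page->dirty = 1;
        mm->nr_dirty_pages++;  /* counter stays in sync with state */
    }
}
```

Code that flips the state bit directly, bypassing the helper, is exactly the kind of bug the commit message says "has happened once already".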
    • ceph: direct requests in snapped namespace based on nonsnap parent · eb6bb1c5
      Sage Weil authored
      When making a request in the virtual snapdir or a snapped portion of the
      namespace, we should choose the MDS based on the first nonsnap parent (and
      its caps).  If that is not the best place, we will get forward hints to
      find the right MDS in the cluster.  This fixes ESTALE errors when using
      the .snap directory and namespace with multiple MDSs.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: queue cap snap writeback for realm children on snap update · ed326044
      Sage Weil authored
      When a realm is updated, we need to queue writeback on inodes in that
      realm _and_ its children.  Otherwise, if the inode gets cowed on the
      server, we can get a hang later due to out-of-sync cap/snap state.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: include dirty xattrs state in snapped caps · 4a625be4
      Sage Weil authored
      When we snapshot dirty metadata that needs to be written back to the MDS,
      include dirty xattr metadata.  Make the capsnap reference the encoded
      xattr blob so that it will be written back in the FLUSHSNAP op.
      
      Also fix the capsnap creation guard to include dirty auth or file bits,
      not just tests specific to dirty file data or file writes in progress
      (this fixes auth metadata writeback).
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix xattr cap writeback · 082afec9
      Sage Weil authored
      We should include the xattr metadata blob in the cap update message any
      time we are flushing dirty state, NOT just when we are also dropping the
      cap.  This fixes async xattr writeback.
      
      Also, clean up the code slightly to avoid duplicating the bit test.
      Signed-off-by: Sage Weil <sage@newdream.net>
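The before/after behavior this fix describes can be modeled as two guard functions: the old code only attached the xattr blob when the cap was also being dropped, while the fixed code sends it on any flush of dirty xattr state, with a single bit test. The flag names are hypothetical stand-ins for the real ceph cap bits.

```c
#include <assert.h>

/* Hypothetical flag bits modeling the guard described above. */
#define MODEL_XATTR_DIRTY  0x1   /* dirty xattr state is being flushed */
#define MODEL_CAP_DROPPING 0x2   /* the cap itself is being released */

/* Before the fix: blob only included when the cap was also dropped,
 * so async xattr writeback silently omitted it. */
int send_xattr_blob_old(unsigned flushing, unsigned flags)
{
    return (flushing & MODEL_XATTR_DIRTY) && (flags & MODEL_CAP_DROPPING);
}

/* After the fix: one bit test; blob sent on any flush of dirty xattrs. */
int send_xattr_blob_new(unsigned flushing, unsigned flags)
{
    (void)flags;  /* dropping no longer matters for this decision */
    return (flushing & MODEL_XATTR_DIRTY) != 0;
}
```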
    • ceph: fix multiple mds session shutdown · f3c60c59
      Sage Weil authored
      The use of a completion when waiting for session shutdown during umount is
      inappropriate, given the complexity of the condition.  With multiple MDSs,
      this resulted in the umount thread spinning, in some cases preventing the
      session close message from being processed.
      
      Switch to a waitqueue and define a condition helper.  This cleans things
      up nicely.
      Signed-off-by: Sage Weil <sage@newdream.net>
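The condition helper mentioned above can be sketched in a simplified userspace model: a completion fires once, but umount must wait until *every* MDS session has closed, so each wakeup should re-check a predicate over all sessions (as `wait_event()` does in the kernel). The names below are illustrative, not the real fs/ceph structures.

```c
#include <assert.h>

/* Simplified model of the shutdown condition described above. */
#define MODEL_MAX_SESSIONS 8

struct mdsc_model {
    int num_sessions;
    int closed[MODEL_MAX_SESSIONS];  /* 1 once that session has closed */
};

/* The wait condition: true only when *all* sessions are closed.  In the
 * kernel this predicate would be re-evaluated by wait_event() on each
 * waitqueue wakeup, rather than satisfied by a single completion. */
int done_closing_sessions(const struct mdsc_model *mdsc)
{
    for (int i = 0; i < mdsc->num_sessions; i++)
        if (!mdsc->closed[i])
            return 0;
    return 1;
}
```

With a single completion, the first session close would wake umount even though other sessions remain open, which matches the spinning behavior the commit describes.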
  4. 10 Aug, 2010 1 commit
  5. 05 Aug, 2010 1 commit
    • ceph: only queue async writeback on cap revocation if there is dirty data · 0eb6cd49
      Sage Weil authored
      Normally, if the Fb cap bit is being revoked, we queue an async writeback.
      If there is no dirty data but we still hold the cap, this leaves the
      client sitting around doing nothing until the cap timeouts expire and the
      cap is released on its own (as it would have been without the revocation).
      
      Instead, only queue writeback if the bit is actually used (i.e., we have
      dirty data).  If not, we can reply to the revocation immediately.
      Signed-off-by: Sage Weil <sage@newdream.net>
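The decision this commit changes reduces to a small predicate: when the buffered-write cap bit is revoked, queue async writeback only if dirty data actually exists; otherwise acknowledge the revocation immediately. The flag name below is a hypothetical stand-in for the real ceph Fb cap bit.

```c
#include <assert.h>

/* Illustrative model of the revocation decision described above. */
#define MODEL_CAP_FILE_BUFFER 0x1  /* "Fb": client may buffer writes */

enum revoke_action { ACK_NOW, QUEUE_WRITEBACK };

enum revoke_action on_cap_revoke(unsigned revoking, int has_dirty_pages)
{
    if ((revoking & MODEL_CAP_FILE_BUFFER) && has_dirty_pages)
        return QUEUE_WRITEBACK;  /* flush dirty data, then release Fb */
    return ACK_NOW;              /* nothing buffered: reply immediately */
}
```

Always queuing writeback (the old behavior) would leave a clean client idle until cap timeouts expired, which is the stall the commit eliminates.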
  6. 03 Aug, 2010 3 commits
  7. 02 Aug, 2010 26 commits