1. 23 Mar, 2010 10 commits
    • ceph: fix connection fault con_work reentrancy problem · 3c3f2e32
      Sage Weil authored
      The messenger fault was clearing the BUSY bit, for reasons unclear.  This
      made it possible for the con->ops->fault function to reopen the connection,
      and requeue work in the workqueue--even though the current thread was
      already in con_work.
      
      This avoids a problem where the client busy loops with connection failures
      on an unreachable OSD, but doesn't address the root cause of that problem.
      Signed-off-by: Sage Weil <sage@newdream.net>
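      A minimal user-space sketch of the reentrancy guard described above.
      It is not the real net/ceph messenger code: conn, conn_fault,
      conn_queue_work, and the requeue flag are invented for illustration.
      The point is that the fault path leaves the BUSY flag set, so a queue
      attempt made from inside fault handling is coalesced instead of
      re-entering the work function on another thread.

      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdio.h>

      struct conn {
          atomic_bool busy;     /* stands in for the messenger BUSY bit */
          bool requeue;         /* a queue attempt arrived while busy */
      };

      static void conn_queue_work(struct conn *c)
      {
          if (atomic_exchange(&c->busy, true))
              c->requeue = true;            /* already running: coalesce */
          else
              printf("work queued\n");      /* a real queue_work() here */
      }

      static void conn_fault(struct conn *c)
      {
          /* the fix: do NOT clear c->busy here; the fault callback may
           * reopen the connection and queue more work, which must not
           * re-enter conn_work while this thread is still inside it */
          conn_queue_work(c);
      }

      static void conn_work(struct conn *c)
      {
          conn_fault(c);                    /* pretend the connection failed */
          atomic_store(&c->busy, false);    /* give up ownership only now */
          if (c->requeue) {
              c->requeue = false;
              conn_queue_work(c);           /* runs again, cleanly */
          }
      }

      int main(void)
      {
          struct conn c = { false, false };

          conn_queue_work(&c);              /* "schedules" the work */
          conn_work(&c);                    /* worker faults, no reentry */
          return 0;
      }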
    • ceph: prevent dup stale messages to console for restarting mds · e4cb4cb8
      Sage Weil authored
      Prevent duplicate 'mds0 caps stale' message from spamming the console every
      few seconds while the MDS restarts.  Set s_renew_requested earlier, so that
      we only print the message once, even if we don't send an actual request.
      Signed-off-by: Sage Weil <sage@newdream.net>
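      A condensed sketch of the ordering change, with invented names and a
      made-up 30-second window (struct session and s_renew_requested here
      are stand-ins, not the fs/ceph code): recording the renewal attempt
      before any early return means the stale-caps warning fires at most
      once per window, even when no renew request is actually sent.

      #include <stdbool.h>
      #include <stdio.h>
      #include <time.h>

      struct session {
          time_t s_renew_requested;    /* when we last decided to renew */
          bool sendable;               /* can we actually send right now? */
      };

      static void renew_caps(struct session *s, time_t now)
      {
          if (now - s->s_renew_requested < 30)
              return;                          /* renewal already pending */

          /* the fix: note the attempt before bailing out on paths that
           * cannot send, so the warning below is printed only once */
          s->s_renew_requested = now;

          fprintf(stderr, "mds0 caps stale\n");

          if (!s->sendable)
              return;                          /* e.g. MDS still restarting */
          /* ... build and send the actual renewcaps request here ... */
      }

      int main(void)
      {
          struct session s = { 0, false };
          time_t now = time(NULL);

          renew_caps(&s, now);         /* warns once */
          renew_caps(&s, now + 5);     /* silent: already requested */
          return 0;
      }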
    • ceph: fix pg pool decoding from incremental osdmap update · efd7576b
      Sage Weil authored
      The incremental map decoding of pg pool updates wasn't skipping
      the snaps and removed_snaps vectors.  This caused osd requests
      to stall when pool snapshots were created or fs snapshots were
      deleted.  Use a common helper for full and incremental map
      decoders that decodes pools properly.
      Signed-off-by: Sage Weil <sage@newdream.net>
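      A user-space sketch of the shared-decoder idea with a deliberately
      simplified, made-up wire format (get_u32, skip_vec, decode_pool are
      not the real osdmap code): the point is that both the full-map and
      incremental-map paths must consume the snaps and removed_snaps
      vectors, otherwise the decode cursor is left in the middle of the
      pool and everything after it is misread.

      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      struct cursor {
          const uint8_t *p;
          const uint8_t *end;
      };

      static int get_u32(struct cursor *c, uint32_t *v)
      {
          if ((size_t)(c->end - c->p) < 4)
              return -1;
          memcpy(v, c->p, 4);
          c->p += 4;
          return 0;
      }

      /* skip a length-prefixed vector of fixed-size elements */
      static int skip_vec(struct cursor *c, size_t elem_size)
      {
          uint32_t n;

          if (get_u32(c, &n))
              return -1;
          if ((size_t)(c->end - c->p) < (size_t)n * elem_size)
              return -1;
          c->p += (size_t)n * elem_size;
          return 0;
      }

      /* one pool decoder shared by the full and incremental map paths */
      static int decode_pool(struct cursor *c, uint32_t *pg_num)
      {
          if (get_u32(c, pg_num))
              return -1;
          /* the incremental decoder used to stop before these; both
           * paths must skip them to keep the cursor in sync */
          if (skip_vec(c, 8))                  /* snaps */
              return -1;
          if (skip_vec(c, 8))                  /* removed_snaps */
              return -1;
          return 0;
      }

      int main(void)
      {
          /* pg_num = 8, one 8-byte snap, no removed_snaps */
          const uint8_t buf[] = { 8,0,0,0, 1,0,0,0,
                                  1,2,3,4,5,6,7,8, 0,0,0,0 };
          struct cursor c = { buf, buf + sizeof(buf) };
          uint32_t pg_num;

          return decode_pool(&c, &pg_num) || c.p != c.end;
      }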
    • ceph: fix mds sync() race with completing requests · 80fc7314
      Sage Weil authored
      The wait_unsafe_requests() helper dropped the mdsc mutex to wait
      for each request to complete, and then examined r_node to get the
      next request after retaking the lock.  But the request completion
      removes the request from the tree, so r_node was always undefined
      at this point.  Since it's a small race, it usually led to a
      valid request, but not always.  The result was an occasional
      crash in rb_next() while dereferencing node->rb_left.
      
      Fix this by clearing the rb_node when removing the request from
      the request tree, and not walking off into the weeds when we
      are done waiting for a request.  Since the request we waited on
      will _always_ be out of the request tree, take a ref on the next
      request, in the hopes that it won't be.  But if it is, it's ok:
      we can start over from the beginning (and traverse over older read
      requests again).
      Signed-off-by: Sage Weil <sage@newdream.net>
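      A kernel-style sketch of the two fix points, meant to be read rather
      than built, and not the actual fs/ceph/mds_client.c code: struct
      mds_request, r_node, the tree mutex, and the req_get/req_put/wait
      steps mentioned in comments are all assumptions.

      #include <linux/rbtree.h>
      #include <linux/mutex.h>

      struct mds_request {
          struct rb_node r_node;
          /* refcount, tid, completion, ... */
      };

      /* 1) on completion, clear the node as well as erasing it, so a
       *    waiter that re-takes the mutex can tell it is no longer linked */
      static void unregister_request(struct rb_root *tree,
                                     struct mds_request *req)
      {
          rb_erase(&req->r_node, tree);
          RB_CLEAR_NODE(&req->r_node);
      }

      /* 2) never rb_next() a node we slept on: it is always out of the
       *    tree by then.  Keep a ref on the *next* request instead, and
       *    if that one also completed while we slept, restart from
       *    rb_first() rather than walking off into the weeds. */
      static void wait_unsafe_requests(struct rb_root *tree,
                                       struct mutex *lock)
      {
          struct mds_request *req, *nextreq;
          struct rb_node *n;

          mutex_lock(lock);
          n = rb_first(tree);
          while (n) {
              req = rb_entry(n, struct mds_request, r_node);
              n = rb_next(n);
              nextreq = n ? rb_entry(n, struct mds_request, r_node) : NULL;
              /* req_get(req); req_get(nextreq);     (hypothetical refs) */

              mutex_unlock(lock);
              /* wait for req to complete              (hypothetical)    */
              mutex_lock(lock);

              if (nextreq && RB_EMPTY_NODE(&nextreq->r_node))
                  n = rb_first(tree);     /* next completed too: restart */
              else
                  n = nextreq ? &nextreq->r_node : NULL;
              /* req_put(req); req_put(nextreq); */
          }
          mutex_unlock(lock);
      }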
    • ceph: only release unused caps with mds requests · 916623da
      Sage Weil authored
      We were releasing used caps (e.g. FILE_CACHE) from encode_inode_release
      with MDS requests (e.g. setattr).  We don't carry refs on most caps, so
      this code worked most of the time, but for setattr (utimes) we try to
      drop Fscr.
      
      This causes cap state to get slightly out of sync with reality, and may
      result in subsequent mds revoke messages getting ignored.
      
      Fix by only releasing unused caps.
      Signed-off-by: Sage Weil <sage@newdream.net>
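      A tiny sketch of the "only release unused caps" rule.  The cap bit
      values and the encode_release helper are invented; the real logic
      lives in encode_inode_release and uses the ceph cap constants.

      #include <stdio.h>

      #define CAP_FILE_SHARED  0x1     /* "Fs" */
      #define CAP_FILE_CACHE   0x2     /* "Fc" */
      #define CAP_FILE_RD      0x4     /* "Fr" */

      /* held: caps we hold; used: caps with live references;
       * drop: caps the request would like to release to the MDS */
      static unsigned encode_release(unsigned held, unsigned used,
                                     unsigned drop)
      {
          drop &= held;          /* can only drop what we actually hold */
          drop &= ~used;         /* the fix: never drop caps still in use */
          return held & ~drop;   /* caps remaining after the release */
      }

      int main(void)
      {
          unsigned held = CAP_FILE_SHARED | CAP_FILE_CACHE | CAP_FILE_RD;
          unsigned used = CAP_FILE_CACHE;               /* page cache data */
          unsigned drop = CAP_FILE_CACHE | CAP_FILE_RD; /* setattr wants these */

          printf("caps kept: %#x\n", encode_release(held, used, drop));
          return 0;
      }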
    • ceph: clean up handle_cap_grant, handle_caps wrt session mutex · 15637c8b
      Sage Weil authored
      Drop session mutex unconditionally in handle_cap_grant, and do the
      check_caps from the handle_cap_grant helper.  This avoids using a magic
      return value.
      
      Also avoid using a flag variable in the IMPORT case and call
      check_caps at the appropriate point.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix session locking in handle_caps, ceph_check_caps · cdc2ce05
      Sage Weil authored
      Passing a session pointer to ceph_check_caps() used to mean it would leave
      the session mutex locked.  That wasn't always possible if it wasn't passed
      CHECK_CAPS_AUTHONLY.  It could unlock the passed session and lock a
      different session mutex, which was clearly wrong, and also emitted a
      warning when a racing CPU retook it and we did an unlock from the
      wrong context.
      
      This was only a problem when there was more than one MDS.
      
      First, make ceph_check_caps unconditionally drop the session mutex, so that
      it is free to lock other sessions as needed.  Then adjust the one caller
      that passes in a session (handle_cap_grant) accordingly.
      Signed-off-by: Sage Weil <sage@newdream.net>
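      A pthread sketch of the resulting locking rule, with invented names
      and none of the real cap bookkeeping: a helper that is handed a
      locked session drops that mutex unconditionally before doing
      anything else, so it can safely lock whichever session it actually
      needs without ever holding two session mutexes or unlocking one
      from the wrong context.

      #include <pthread.h>
      #include <stdio.h>

      struct session {
          int mds;
          pthread_mutex_t mutex;
      };

      static void check_caps(struct session *locked, struct session *auth)
      {
          if (locked)
              pthread_mutex_unlock(&locked->mutex);  /* always drop it first */

          pthread_mutex_lock(&auth->mutex);    /* may be a different session */
          printf("checking caps against mds%d\n", auth->mds);
          pthread_mutex_unlock(&auth->mutex);
      }

      int main(void)
      {
          struct session s0 = { 0, PTHREAD_MUTEX_INITIALIZER };
          struct session s1 = { 1, PTHREAD_MUTEX_INITIALIZER };

          pthread_mutex_lock(&s0.mutex);      /* caller, e.g. handle_cap_grant */
          check_caps(&s0, &s1);               /* auth cap lives on another MDS */
          return 0;
      }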
    • ceph: drop unnecessary WARN_ON in caps migration · 4ea0043a
      Sage Weil authored
      If we don't have the exported cap it's because we already released it. No
      need to WARN.
      Signed-off-by: Sage Weil <sage@newdream.net>
    • ceph: fix null pointer deref of r_osd in debug output · 12eadc19
      Sage Weil authored
      This causes an oops when debug output is enabled and we kick
      an osd request with no current r_osd (sometime after an osd
      failure).  Check the pointer before dereferencing.
      Signed-off-by: Sage Weil <sage@newdream.net>
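      A small sketch of the guarded debug print (field names are assumed,
      and the kernel's dout() is replaced by printf): a kicked request may
      have no OSD assigned, so check r_osd before dereferencing it and
      fall back to a "-1 means none" value.

      #include <stdio.h>

      struct osd { int o_osd; };
      struct osd_request {
          unsigned long long r_tid;
          struct osd *r_osd;       /* may be NULL after an OSD failure */
      };

      static void debug_kick(const struct osd_request *req)
      {
          printf("kicking tid %llu osd%d\n", req->r_tid,
                 req->r_osd ? req->r_osd->o_osd : -1);
      }

      int main(void)
      {
          struct osd_request req = { 42, NULL };

          debug_kick(&req);        /* prints "osd-1" instead of oopsing */
          return 0;
      }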
    • ceph: clean up service ticket decoding · 0a990e70
      Sage Weil authored
      Previously we would decode state directly into our current ticket_handler.
      This is problematic if for some reason we fail to decode, because we end
      up with half new state and half old state.
      
      We are probably already in bad shape if we get an update we can't decode,
      but we may as well be tidy anyway.  Decode into new_* temporaries and
      update the ticket_handler only on success.
      Signed-off-by: Sage Weil <sage@newdream.net>
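      A sketch of the decode-into-temporaries pattern with an invented,
      trivially small ticket layout (the real tickets carry keys, validity
      windows, and encrypted blobs): everything is parsed into new_*
      locals first and copied into the handler only after the whole decode
      succeeds, so a truncated or corrupt update cannot leave half-new,
      half-old state behind.

      #include <stddef.h>
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      struct ticket_handler {
          uint32_t secret_id;
          uint32_t expires;
      };

      static int decode_ticket(struct ticket_handler *th,
                               const uint8_t *buf, size_t len)
      {
          uint32_t new_secret_id, new_expires;

          if (len < 8)
              return -1;            /* nothing touched on failure */
          memcpy(&new_secret_id, buf, 4);
          memcpy(&new_expires, buf + 4, 4);

          /* commit only after every field decoded cleanly */
          th->secret_id = new_secret_id;
          th->expires = new_expires;
          return 0;
      }

      int main(void)
      {
          struct ticket_handler th = { 1, 1000 };
          const uint8_t short_buf[4] = { 0 };

          if (decode_ticket(&th, short_buf, sizeof(short_buf)))
              printf("decode failed, old state kept: id=%u\n", th.secret_id);
          return 0;
      }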
  2. 21 Mar, 2010 6 commits
  3. 20 Mar, 2010 3 commits
  4. 19 Mar, 2010 21 commits