02 May, 2013 (40 commits)
    • libceph: record length of bio list with bio · fdce58cc
      Alex Elder authored
      When assigning a bio pointer to an osd request, we don't have an
      efficient way of knowing the total length, in bytes, of the bio list.
      That information is available at the point it's set up by the rbd
      code, so record it with the osd data when it's set.
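
      A minimal sketch of the idea, with illustrative names (the actual
      structure layout in the kernel differs):

          /* Carry the byte count alongside the bio pointer so the
           * messenger never has to walk the list to recompute it.
           */
          struct osd_bio_data {
                  struct bio *bio;        /* head of the bio list */
                  size_t bio_length;      /* total bytes across the list */
          };

          static void osd_data_bio_init(struct osd_bio_data *odata,
                                        struct bio *bio, size_t length)
          {
                  odata->bio = bio;
                  odata->bio_length = length;     /* known cheaply at setup */
          }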
      
      This and the next patch are related to maintaining the length of a
      message's data independent of the message header, as described here:
          http://tracker.ceph.com/issues/4589
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: drop ceph_osd_request->r_con_filling_msg · ace6d3a9
      Alex Elder authored
      A field in an osd request keeps track of whether a connection is
      currently filling the request's reply message.  This patch gets rid
      of that field.
      
      An osd request includes two messages--a request and a reply--and
      they're both associated with the connection that existed to
      the target osd at the time the request was created.
      
      An osd request can be dropped early, even when it's in flight.
      And at that time both messages are released.  It's possible the
      reply message has been supplied to its connection to receive
      an incoming response message at the time the osd request gets
      dropped.  So ceph_osdc_release_request() revokes that message
      from the connection before releasing it so things get cleaned up
      properly.
      
      Previously this may have caused a problem, because the connection
      that a message was associated with might have gone away before the
      revoke request.  And to avoid any problems using that connection,
      the osd client held a reference to it when it supplied its response
      message.
      
      However since this commit:
          38941f80 libceph: have messages point to their connection
      all messages hold a reference to the connection they are associated
      with whenever the connection is actively operating on the message
      (i.e. while the message is queued to send or sending, and while
      data is being received into it).  And if a message has no connection
      associated with it, ceph_msg_revoke_incoming() won't do anything
      when asked to revoke it.
      
      As a result, there is no need to keep an additional reference to the
      connection associated with a message when we hand the message to the
      messenger when it calls our alloc_msg() method to receive something.
      If the connection *were* operating on it, it would have its own
      reference, and if not, there's no work to be done when we need to
      revoke it.
      
      So get rid of the osd request's r_con_filling_msg field.
      
      This resolves:
          http://tracker.ceph.com/issues/4647
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • ceph: use page_offset() in ceph_writepages_start() · 25d71cb9
      Alex Elder authored
      There's one spot in ceph_writepages_start() that open-codes what
      page_offset() does safely.  Use the macro so we don't have to worry
      about wrapping.
      
      This resolves:
          http://tracker.ceph.com/issues/4648
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: define ceph_decode_pgid() only once · ef4859d6
      Alex Elder authored
      There are two basically identical definitions of __decode_pgid()
      in libceph, one in "net/ceph/osdmap.c" and the other in
      "net/ceph/osd_client.c".  Get rid of both, and instead define
      a single inline version in "include/linux/ceph/osdmap.h".
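
      A sketch of the shared helper, along the lines of the unified
      definition (details may differ slightly from the actual header):

          static inline int ceph_decode_pgid(void **p, void *end,
                                             struct ceph_pg *pgid)
          {
                  u8 version;

                  if (!ceph_has_room(p, end, 1 + 8 + 4 + 4)) {
                          pr_warning("incomplete pg encoding\n");
                          return -EINVAL;
                  }
                  version = ceph_decode_8(p);
                  if (version > 1) {
                          pr_warning("do not understand pg encoding %d > 1\n",
                                     (int)version);
                          return -EINVAL;
                  }
                  pgid->pool = ceph_decode_64(p);
                  pgid->seed = ceph_decode_32(p);
                  *p += 4;        /* skip deprecated preferred value */

                  return 0;
          }
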
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: drop mutex on error in handle_reply() · 8058fd45
      Alex Elder authored
      The osd client mutex is acquired just before getting a reference to
      a request in handle_reply().  However the error paths after that
      don't drop the mutex before returning as they should.
      
      Drop the mutex after dropping the request reference.  Also add a
      bad_mutex label at that point and use it so the failed request
      lookup case can be handled with the rest.
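
      The resulting control flow looks roughly like this (a sketch of
      the pattern; decode_reply() is a hypothetical stand-in for the
      later error paths):

          mutex_lock(&osdc->request_mutex);
          req = __lookup_request(osdc, tid);
          if (req == NULL)
                  goto bad_mutex;         /* failed lookup: unlock and bail */
          ceph_osdc_get_request(req);

          err = decode_reply(msg, req);   /* hypothetical processing step */
          if (err)
                  goto bad_put;

          /* ... normal processing; eventually drop reference and mutex ... */
          ceph_osdc_put_request(req);
          mutex_unlock(&osdc->request_mutex);
          return;

          bad_put:
                  ceph_osdc_put_request(req);     /* drop the reference first */
          bad_mutex:
                  mutex_unlock(&osdc->request_mutex);  /* then drop the mutex */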
      
      This resolves:
          http://tracker.ceph.com/issues/4615
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
    • ceph: set up page array mempool with correct size · 3bf53337
      Alex Elder authored
      In create_fs_client() a memory pool is set up to be used for arrays of
      pages that might be needed in ceph_writepages_start() if memory is
      tight.  There are two problems with the way it's initialized:
          - The size provided is the number of pages we want in the
            array, but it should be the number of bytes required for
            that many page pointers.
          - The number of pages computed can end up being 0, while we
            will always need at least one page.
      
      This patch fixes both of these problems.
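
      A sketch of the corrected setup; deriving the page count from the
      mount's wsize here is an assumption for illustration:

          int page_count = fsc->mount_options->wsize >> PAGE_SHIFT;

          if (page_count < 1)
                  page_count = 1;         /* always need at least one page */
          fsc->wb_pagevec_pool = mempool_create_kmalloc_pool(10,
                          page_count * sizeof(struct page *));    /* bytes */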
      
      This resolves the two simple problems defined in:
          http://tracker.ceph.com/issues/4603
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: use osd_req_op_extent_init() · b0270324
      Alex Elder authored
      Use osd_req_op_extent_init() in ceph_osdc_new_request() to
      initialize the one or two ops built in that function.
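
      The call might look like this in ceph_osdc_new_request() (a
      sketch; the local variable names are illustrative):

          osd_req_op_extent_init(&ops[0], opcode, objoff, objlen,
                                 truncate_size, truncate_seq);
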
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: clean up ceph_osdc_new_request() · d18d1e28
      Alex Elder authored
      All callers of ceph_osdc_new_request() pass either CEPH_OSD_OP_READ
      or CEPH_OSD_OP_WRITE as the opcode value.  The function already
      assumes as much, filling in the extent fields of the ops array it
      builds.  So just assert that this is the case, and don't bother
      calling op_has_extent() before filling in the first osd operation
      in the array.
      
      Define some local variables to gather the information to fill into
      the first op, and then fill in the op array all in one place.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: don't update op in calc_layout() · a19dadfb
      Alex Elder authored
      In ceph_osdc_new_request() an array of osd operations is built up
      and filled in partially within that function and partially in the
      called function calc_layout().  Move the latter part back out to
      ceph_osdc_new_request() so it's all done in one place.  This makes
      it unnecessary to pass the op pointer to calc_layout(), so get rid
      of that parameter.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: pass offset and length out of calc_layout() · 75d1c941
      Alex Elder authored
      The purpose of calc_layout() is to determine, given a file offset
      and length and a layout describing the placement of file data across
      objects, where in "object space" that data resides.
      
      Specifically, it determines which object should hold the first part
      of the specified range of file data, and the offset and length of
      data within that object.  The length will not exceed the bounds
      of the object, and the caller is informed of that maximum length.
      
      Add two parameters to calc_layout() to allow the object-relative
      offset and length to be passed back to the caller.
      
      This is the first step toward having ceph_osdc_new_request() build
      its osd op structure using osd_req_op_extent_init().
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: define source request op functions · 33803f33
      Alex Elder authored
      The rbd code has a function that allocates and populates a
      ceph_osd_req_op structure (the in-core version of an osd request
      operation).  When reviewed, Josh suggested two things: that the
      big varargs function might be better split into type-specific
      functions; and that this functionality really belongs in the osd
      client rather than rbd.
      
      This patch implements both of Josh's suggestions.  It breaks
      up the rbd function into separate functions and defines them
      in the osd client module as exported interfaces.  Unlike the
      rbd version, however, the functions don't allocate an osd_req_op
      structure; they are provided the address of one and that is
      initialized instead.
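
      For example, the extent initializer might look roughly like this
      (a sketch of the exported interface; details may differ):

          void osd_req_op_extent_init(struct ceph_osd_req_op *op, u16 opcode,
                                      u64 offset, u64 length,
                                      u64 truncate_size, u32 truncate_seq)
          {
                  BUG_ON(opcode != CEPH_OSD_OP_READ &&
                         opcode != CEPH_OSD_OP_WRITE);

                  op->op = opcode;
                  op->extent.offset = offset;
                  op->extent.length = length;
                  op->extent.truncate_size = truncate_size;
                  op->extent.truncate_seq = truncate_seq;
                  if (opcode == CEPH_OSD_OP_WRITE)
                          op->payload_len = length;       /* outgoing data */
          }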
      
      The rbd function has been eliminated and calls to it have been
      replaced by calls to the new routines.  The rbd code now uses a
      stack (struct) variable to hold the op rather than allocating and
      freeing it each time.
      
      For now only the capabilities used by rbd are implemented.
      Implementing all the other osd op types, and making the rest of the
      code use it will be done separately, in the next few patches.
      
      Note that only the extent, cls, and watch portions of the
      ceph_osd_req_op structure are currently used.  Delete the others
      (xattr, pgls, and snap) from its definition so nobody thinks it's
      actually implemented or needed.  We can add it back again later
      if needed, when we know it's been tested.
      
      This (and a few follow-on patches) resolves:
          http://tracker.ceph.com/issues/3861
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: define osd_req_opcode_valid() · a8dd0a37
      Alex Elder authored
      Define a separate function to determine the validity of an opcode,
      and use it inside osd_req_encode_op() in order to unclutter that
      function.
      
      Don't update the destination op at all--and return zero--if an
      unsupported or unrecognized opcode is seen in osd_req_encode_op().
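
      The shape of the check is a switch over the known opcodes (a
      sketch, abridged to a few of the opcodes used in this series):

          static bool osd_req_opcode_valid(u16 opcode)
          {
                  switch (opcode) {
                  case CEPH_OSD_OP_READ:
                  case CEPH_OSD_OP_WRITE:
                  case CEPH_OSD_OP_CALL:
                  case CEPH_OSD_OP_NOTIFY_ACK:
                  case CEPH_OSD_OP_WATCH:
                          /* ... and every other defined opcode ... */
                          return true;
                  default:
                          return false;
                  }
          }
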
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • ceph: move max constant definitions · adfe695a
      Alex Elder authored
      Move some definitions for max integer values out of the rbd code and
      into the more central "decode.h" header file.  These really belong
      in a Linux (or libc) header somewhere, but I haven't gotten around
      to proposing that yet.
      
      This is in preparation for moving some code out of rbd.c and into
      the osd client.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: be explicit in masking bottom 16 bits · 0baa1bd9
      Alex Elder authored
      In ceph_osdc_build_request() there is a call to cpu_to_le16() which
      provides a 64-bit value as its argument.  Because of the implied
      byte swapping going on it looked pretty suspect to me.
      
      At the moment it turns out the behavior is well defined, but masking
      off those bottom bits explicitly eliminates this distraction, and is
      in fact more directly related to the purpose of the message header's
      data_off field.
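
      That is, something along these lines (sketch):

          /* data_off is only 16 bits; make the truncation explicit */
          req->r_request->hdr.data_off = cpu_to_le16(off & 0xffff);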
      
      This resolves:
          http://tracker.ceph.com/issues/4125
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: account for alignment in pages cursor · 56fc5659
      Alex Elder authored
      When a cursor for a page array data message is initialized it needs
      to determine the initial value for cursor->last_piece.  Currently it
      just checks if length is less than a page, but that's not correct.
      The data in the first page in the array will be offset by a page
      offset based on the alignment recorded for the data.  (All pages
      thereafter will be aligned at the base of the page, so there's
      no need to account for this except for the first page.)
      
      Because this was wrong, there was a case where the length of a piece
      would be calculated as all of the residual bytes in the message and
      that plus the page offset could exceed the length of a page.
      
      So fix this case.  Make sure the sum won't wrap.
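
      A sketch of the corrected initialization (names follow the
      messenger's cursor code; details may vary):

          size_t page_offset = data->alignment & ~PAGE_MASK;

          BUG_ON(length > SIZE_MAX - page_offset);        /* no wrap */
          cursor->last_piece = page_offset + length <= PAGE_SIZE;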
      
      This resolves a third issue described in:
          http://tracker.ceph.com/issues/4598
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
    • libceph: page offset must be less than page size · 5df521b1
      Alex Elder authored
      Currently ceph_msg_data_pages_advance() allows the page offset value
      to be PAGE_SIZE, apparently assuming ceph_msg_data_pages_next() will
      treat it as 0.  But that doesn't happen, and the result led to a
      helpful assertion failure.
      
      Change ceph_msg_data_pages_advance() to truncate the offset to 0
      before returning if it reaches PAGE_SIZE.
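
      The truncation amounts to a single masking step (sketch):

          /* wrap the offset back to 0 when it reaches PAGE_SIZE */
          cursor->page_offset = (cursor->page_offset + bytes) & ~PAGE_MASK;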
      
      Make a few other minor adjustments in this area (comments and a
      better assertion) while modifying it.
      
      This resolves a second issue described in:
          http://tracker.ceph.com/issues/4598
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
    • libceph: fix broken data length assertions · 1190bf06
      Alex Elder authored
      It's OK for the result of a read to come back with fewer bytes than
      were requested.  So don't trigger a BUG() in that case when
      initializing the data cursor.
      
      This resolves the first problem described in:
          http://tracker.ceph.com/issues/4598
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
    • libceph: make message data be a pointer · 6644ed7b
      Alex Elder authored
      Begin the transition from a single message data item to a list of
      them by replacing the "data" structure in a message with a pointer
      to a ceph_msg_data structure.
      
      A null pointer will indicate the message has no data; replace the
      use of ceph_msg_has_data() with a simple check for a null pointer.
      
      Create functions ceph_msg_data_create() and ceph_msg_data_destroy()
      to dynamically allocate and free a data item structure of a given type.
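
      A sketch of the pair (the real functions may differ in detail):

          static struct ceph_msg_data *
          ceph_msg_data_create(enum ceph_msg_data_type type)
          {
                  struct ceph_msg_data *data;

                  data = kzalloc(sizeof(*data), GFP_NOFS);
                  if (data)
                          data->type = type;

                  return data;
          }

          static void ceph_msg_data_destroy(struct ceph_msg_data *data)
          {
                  kfree(data);    /* kfree() is NULL-safe */
          }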
      
      When a message has its data item "set," allocate one of these to
      hold the data description, and free it when the last reference to
      the message is dropped.
      
      This partially resolves:
          http://tracker.ceph.com/issues/4429
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: use only ceph_msg_data_advance() · 8ea299bc
      Alex Elder authored
      The *_msg_pos_next() functions do little more than call
      ceph_msg_data_advance().  Replace those wrapper functions with
      a simple call to ceph_msg_data_advance().
      
      This cleanup is related to:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: don't add to crc unless data sent · 143334ff
      Alex Elder authored
      In write_partial_message_data() we aggregate the crc for the data
      portion of the message as each new piece of the data item is
      encountered.  Because it was computed *before* sending the data, if
      an attempt to send a new piece resulted in 0 bytes being sent, the
      crc across that piece would erroneously get computed again and
      added to the aggregate result.  This would occasionally happen in
      the event of a connection failure.
      
      The crc value isn't really needed until the complete value is known
      after sending all data, so there's no need to compute it before
      sending.
      
      So don't calculate the crc for a piece until *after* we know at
      least one byte of it has been sent.  That will avoid this problem.
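
      In outline (a sketch, not the exact code):

          ret = ceph_tcp_sendpage(con->sock, page, page_offset,
                                  length, !last_piece /* more */);
          if (ret <= 0)
                  break;          /* nothing sent; leave the crc alone */
          if (do_datacrc && cursor->need_crc)     /* crc the piece now */
                  crc = ceph_crc32c_page(crc, page, page_offset, length);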
      
      This resolves:
          http://tracker.ceph.com/issues/4450
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
    • libceph: kill last of ceph_msg_pos · f5db90bc
      Alex Elder authored
      The only remaining field in the ceph_msg_pos structure is
      did_page_crc.  In the new cursor model of things that flag (or
      something like it) belongs in the cursor.
      
      Define a new field "need_crc" in the cursor (which applies to all
      types of data) and initialize it to true whenever a cursor is
      initialized.
      
      In write_partial_message_data(), the data CRC still will be computed
      as before, but it will check the cursor->need_crc field to determine
      whether it's needed.  Any time the cursor is advanced to a new piece
      of a data item, need_crc will be set, and this will cause the crc
      for that entire piece to be accumulated into the data crc.
      
      In write_partial_message_data() the intermediate crc value is now
      held in a local variable so it doesn't have to be byte-swapped so
      many times.  In read_partial_msg_data() we do something similar
      (but mainly for consistency there).
      
      With that, the ceph_msg_pos structure can go away, and it no longer
      needs to be passed as an argument to prepare_message_data().
      
      This cleanup is related to:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: kill most of ceph_msg_pos · 859a35d5
      Alex Elder authored
      All but one of the fields in the ceph_msg_pos structure are now
      never used (only assigned), so get rid of them.  This allows
      several small blocks of code to go away.
      
      This is cleanup of old code related to:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: use cursor resid for loop condition · 643c68a4
      Alex Elder authored
      Use the "resid" field of a cursor rather than finding when the
      message data position has moved up to meet the data length to
      determine when all data has been sent or received in
      write_partial_message_data() and read_partial_msg_data().
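
      In other words, both loops reduce to (a sketch; the advance
      call's exact signature may differ):

          while (cursor->resid) {
                  /* pick the next piece and transfer some bytes ... */
                  ceph_msg_data_advance(data, bytes_transferred);
          }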
      
      This is cleanup of old code related to:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: collapse all data items into one · 4c59b4a2
      Alex Elder authored
      It turns out that only one of the data item types is ever used at
      any one time in a single message (currently).
          - A page array is used by the osd client (on behalf of the file
            system) and by rbd.  Only one osd op (and therefore at most
            one data item) is ever used at a time by rbd.  And the only
            time the file system sends two, the second op contains no
            data.
          - A bio is only used by the rbd client (and again, only one
            data item per message).
          - A page list is used by the file system and by rbd for outgoing
            data, but only one op (and one data item) at a time.
      
      We can therefore collapse all three of our data item fields into a
      single field "data", and depend on the messenger code to properly
      handle it based on its type.
      
      This allows us to eliminate quite a bit of duplicated code.
      
      This is related to:
          http://tracker.ceph.com/issues/4429
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: get rid of read helpers · 686be208
      Alex Elder authored
      Now that read_partial_message_pages() and read_partial_message_bio()
      are literally identical functions we can factor them out.  They're
      pretty simple as well, so just move their relevant content into
      read_partial_msg_data().
      
      This and the previous patches together resolve:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: no outbound zero data · 61fcdc97
      Alex Elder authored
      There is handling in write_partial_message_data() for the case where
      only the length of--and no other information about--the data to be
      sent has been specified.  It uses the zero page as the source of
      data to send in this case.
      
      This case doesn't occur.  All message senders set up a page array,
      pagelist, or bio describing the data to be sent.  So eliminate the
      block of code that handles this (but check and issue a warning for
      now, just in case it happens for some reason).
      
      This resolves:
          http://tracker.ceph.com/issues/4426
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: use cursor for inbound data pages · 878efabd
      Alex Elder authored
      The cursor code for a page array selects the right page, page
      offset, and length to use for a ceph_tcp_recvpage() call, so
      we can use it to replace a block in read_partial_message_pages().
      
      This partially resolves:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: kill ceph message bio_iter, bio_seg · 6518be47
      Alex Elder authored
      The bio_iter and bio_seg fields in a message are no longer used; we
      use the cursor instead.  So get rid of them and the functions that
      operate on them.
      
      This is related to:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: use cursor for bio reads · 463207aa
      Alex Elder authored
      Replace the use of the information in con->in_msg_pos for incoming
      bio data.  The old in_msg_pos and the new cursor mechanism do
      basically the same thing, just slightly differently.
      
      The main functional difference is that in_msg_pos keeps track of the
      length of the complete bio list, and assumes it is fully consumed
      when that many bytes have been transferred.  The cursor does not assume
      a length, it simply consumes all bytes in the bio list.  Because the
      only user of bio data is the rbd client, and because the length of a
      bio list provided by rbd client always matches the number of bytes
      in the list, both ways of tracking length are equivalent.
      
      In addition, for in_msg_pos the initial bio vector is selected based
      on the initial value of bio->bi_idx, while the cursor assumes this
      is zero.  Again, the rbd client always passes 0 as the initial index
      so the effect is the same.
      
      Other than that, they basically match:
          in_msg_pos      cursor
          ----------      ------
          bio_iter        bio
          bio_seg         vec_index
          page_pos        page_offset
      
      The in_msg_pos field is initialized by a call to init_bio_iter().
      The bio cursor is initialized by ceph_msg_data_cursor_init().
      Both now happen in the same spot, in prepare_message_data().
      
      The in_msg_pos field is advanced by a call to in_msg_pos_next(),
      which updates page_pos and calls iter_bio_next() to move to the next
      bio vector, or to the next bio in the list.  The cursor is advanced
      by ceph_msg_data_advance().  That isn't currently happening so
      add a call to that in in_msg_pos_next().
      
      Finally, the next piece of data to use for a read is determined
      by a bunch of lines in read_partial_message_bio().  Those can be
      replaced by an equivalent ceph_msg_data_bio_next() call.
      
      This partially resolves:
          http://tracker.ceph.com/issues/4428
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: record residual bytes for all message data types · 25aff7c5
      Alex Elder authored
      All of the data types can use this, not just the page array.  Until
      now, only the bio type didn't have it available, and only the
      initiator of the request (the rbd client) was able to supply the
      length of the full request without re-scanning the bio list.  Change
      the cursor init routines so the length is supplied based on the
      message header "data_len" field, and use that length to initialize
      the "resid" field of the cursor.
      
      In addition, change the way "last_piece" is defined so it is based
      on the residual number of bytes in the original request.  This is
      necessary (at least for bio messages) because it is possible for
      a read request to succeed without consuming all of the space
      available in the data buffer.
      
      This resolves:
          http://tracker.ceph.com/issues/4427
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: drop pages parameter · 28a89dde
      Alex Elder authored
      The value passed for "pages" in read_partial_message_pages() is
      always the pages pointer from the incoming message, which can be
      derived inside that function.  So just get rid of the parameter.
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: initialize data fields on last msg put · 888334f9
      Alex Elder authored
      When the last reference to a ceph message is dropped,
      ceph_msg_last_put() is called to clean things up.
      
      For "normal" messages (allocated via ceph_msg_new() rather than
      being allocated from a memory pool) it's sufficient to just release
      resources.  But for a mempool-allocated message we actually have to
      re-initialize the data fields in the message back to initial state
      so they're ready to go in the event the message gets reused.
      
      Some of this was already done; this fleshes it out so it's done
      more completely.
      
      This resolves:
          http://tracker.ceph.com/issues/4540
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
      Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
    • libceph: send queued requests when starting new one · 7e2766a1
      Alex Elder authored
      An osd expects the transaction ids of arriving request messages from
      a given client to a given osd to increase monotonically.  So the osd
      client needs to send its requests in ascending tid order.
      
      The transaction id for a request is set at the time it is
      registered, in __register_request().  This is also where the request
      gets placed at the end of the osd client's unsent messages list.
      
      At the end of ceph_osdc_start_request(), the request message for a
      newly-mapped osd request is supplied to the messenger to be sent
      (via __send_request()).  If any other messages were present in the
      osd client's unsent list at that point they would be sent *after*
      this new request message.
      
      Because those unsent messages have already been registered, their
      tids are lower than the tid of the newly-mapped request message, and
      sending the new message first would violate the tid ordering rule.
      
      Rather than sending the new request only, send all queued requests
      (including the new one) at that point in ceph_osdc_start_request().
      This ensures the tid ordering property is preserved.
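
      The change amounts to replacing the single-request send at the
      end of ceph_osdc_start_request() (sketch):

          /* was: __send_request(osdc, req); */
          __send_queued(osdc);    /* send all queued requests, in tid order */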
      
      With this in place, all messages should now be sent in tid order
      regardless of whether they're being sent for the first time or
      re-sent as a result of a call to osd_reset().
      
      This resolves:
          http://tracker.ceph.com/issues/4392
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
    • libceph: keep request lists in tid order · ad885927
      Alex Elder authored
      In __map_request(), when adding a request to an osd client's unsent
      list, add it to the tail rather than the head.  That way the newest
      entries (with the highest tid value) will be last.
      
      Maintain an osd's request list in order of increasing tid also.
      
      Finally--to be consistent--maintain an osd client's "notarget" list
      in that order as well.
      
      This partially resolves:
          http://tracker.ceph.com/issues/4392
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
    • libceph: requeue only sent requests when kicking · e02493c0
      Alex Elder authored
      The osd expects incoming requests for a given object from a given
      client to arrive in order, with the tid for each request being
      greater than the tid for requests that have already arrived.  This
      patch fixes two places the osd client might not maintain that
      ordering.
      
      For the osd client, the connection fault method is osd_reset().
      That function calls __reset_osd() to close and re-open the
      connection, then calls __kick_osd_requests() to cause all
      outstanding requests for the affected osd to be re-sent after
      the connection has been re-established.
      
      When an osd is reset, any in-flight messages will need to be
      re-sent.  An osd client maintains distinct lists for unsent and
      in-flight messages.  Meanwhile, an osd maintains a single list of
      all its requests (both sent and un-sent).  (Each request is linked
      into two lists--one for the osd client and one for the osd.)
      
      To process an osd "kick" operation, the request list for the *osd*
      is traversed, and each request is moved off whichever osd *client*
      list it was on (unsent or sent) and placed onto the osd client's
      unsent list.  (It remains where it is on the osd's request list.)
      
      When that is done, osd_reset() calls __send_queued() to cause each
      of the osd client's unsent messages to be sent.
      
      OK, with that background...
      
      As the osd request list is traversed each request is prepended to
      the osd client's unsent list in the order they're seen.  The effect
      of this is to reverse the order of these requests as they are put
      (back) onto the unsent list.
      
      Instead, build up a list of only the requests for an osd that have
      already been sent (by checking their r_sent flag values).  Once an
      unsent request is found, stop examining requests and prepend the
      requests that need re-sending to the osd client's unsent list.
      
      Preserve the original order of requests in the process (previously,
      re-queued requests had their order reversed).  Because they
      have already been sent, they will have lower tids than any request
      already present on the unsent list.
      
      Just below that, traverse the linger list in forward order as
      before, but add them to the *tail* of the list rather than the head.
      These requests get re-registered, and in the process are given a new
      (higher) tid, so they should go at the end.
      
      This partially resolves:
          http://tracker.ceph.com/issues/4392
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
    • libceph: no more kick_requests() race · 92451b49
      Alex Elder authored
      Since we no longer drop the request mutex between registering and
      mapping an osd request in ceph_osdc_start_request(), there is no
      chance of a race with kick_requests().
      
      We can now therefore map and send the new request unconditionally
      (but we'll issue a warning should it ever occur).
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
    • libceph: slightly defer registering osd request · dc4b870c
      Alex Elder authored
      One of the first things ceph_osdc_start_request() does is register
      the request.  It then acquires the osd client's map semaphore and
      request mutex and proceeds to map and send the request.
      
      There is no reason the request has to be registered before acquiring
      the map semaphore.  So hold off doing so until after the map
      semaphore is held.
      
      Since register_request() is nothing more than a wrapper around
      __register_request(), call the latter function instead, after
      acquiring the request mutex.
      
      That leaves register_request() unused, so get rid of it.
      
      This partially resolves:
          http://tracker.ceph.com/issues/4392
      Signed-off-by: Alex Elder <elder@inktank.com>
      Reviewed-by: Sage Weil <sage@inktank.com>
    • libceph: wrap auth methods in a mutex · e9966076
      Sage Weil authored
      The auth code is called from a variety of contexts, including the mon_client
      (protected by the monc's mutex) and the messenger callbacks (currently
      protected by nothing).  Avoid chaos by protecting all auth state with a
      mutex.  Nothing is blocking, so this should be simple and lightweight.
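
      The wrappers follow a simple pattern, roughly (sketch):

          int ceph_auth_is_authenticated(struct ceph_auth_client *ac)
          {
                  int ret = 0;

                  mutex_lock(&ac->mutex);
                  if (ac->ops)
                          ret = ac->ops->is_authenticated(ac);
                  mutex_unlock(&ac->mutex);

                  return ret;
          }
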
      Signed-off-by: Sage Weil <sage@inktank.com>
      Reviewed-by: Alex Elder <elder@inktank.com>
    • libceph: wrap auth ops in wrapper functions · 27859f97
      Sage Weil authored
      Use wrapper functions that check whether the auth op exists so that callers
      do not need a bunch of conditional checks.  Simplifies the external
      interface.
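
      For example, roughly (a sketch along the lines of the wrappers):

          int ceph_auth_create_authorizer(struct ceph_auth_client *ac,
                                          int peer_type,
                                          struct ceph_auth_handshake *auth)
          {
                  if (ac->ops && ac->ops->create_authorizer)
                          return ac->ops->create_authorizer(ac, peer_type,
                                                            auth);
                  return 0;
          }
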
      Signed-off-by: Sage Weil <sage@inktank.com>
      Reviewed-by: Alex Elder <elder@inktank.com>
    • libceph: add update_authorizer auth method · 0bed9b5c
      Sage Weil authored
      Currently the messenger calls out to a get_authorizer con op, which will
      create a new authorizer if it doesn't yet have one.  In the meantime, when
      we rotate our service keys, the authorizer doesn't get updated.  Eventually
      it will be rejected by the server on a new connection attempt and get
      invalidated, and we will then rebuild a new authorizer, but this is not
      ideal.
      
      Instead, if we do have an authorizer, call a new update_authorizer op that
      will verify that the current authorizer is using the latest secret.  If it
      is not, we will build a new one that does.  This avoids the transient
      failure.
      
      This fixes one part of the sorry sequence of events for bug
      
          http://tracker.ceph.com/issues/4282
      Signed-off-by: Sage Weil <sage@inktank.com>
      Reviewed-by: Alex Elder <elder@inktank.com>