1. 29 Sep, 2002 2 commits
  2. 28 Sep, 2002 1 commit
  3. 27 Sep, 2002 9 commits
    • In wait(), when there's no asyncore main loop, we called · 8cba5055
      Guido van Rossum authored
      asyncore.poll() with a timeout of 10 seconds.  Change this to a
      variable timeout starting at 1 msec and doubling until 1 second.
      
      While debugging Win2k crashes in the check4ExtStorageThread test from
      ZODB/tests/MTStorage.py, Tim noticed that there were frequent 10
      second gaps in the log file where *nothing* happens.  These were caused
      by the following scenario.
      
      Suppose a ZEO client process has two threads using the same connection
      to the ZEO server, and there's no asyncore loop active.  T1 makes a
      synchronous call, and enters the wait() function.  Then T2 makes
      another synchronous call, and enters the wait() function.  At this
      point, both are blocked in the select() call in asyncore.poll(), with
      a timeout of 10 seconds (in the old version).  Now the replies for
      both calls arrive.  Say T1 wakes up.  The handle_read() method in
      smac.py calls self.recv(8096), so it gets both replies in its buffer,
      decodes both, and calls self.message_input() for both, which sticks
      both replies in the self.replies dict.  Now T1 finds its response,
      and its wait() call returns with it.  But T2 is still stuck in
      asyncore.poll(): its select() call never woke up, so it has to "sit
      out" the whole timeout of 10 seconds.  (Good thing I added timeouts
      to everything!  Or perhaps not, since it masked the problem.)
      
      One other condition must be satisfied before this becomes a disaster:
      T2 must have started a transaction, and all other threads must be
      waiting to start another transaction.  This is what I saw in the log.
      (Hmm, maybe a message should be logged when a thread is waiting to
      start a transaction this way.)
      
      In a real Zope application, this won't happen, because there's a
      centralized asyncore loop in a separate thread (probably the client's
      main thread) and the various threads would be waiting on the condition
      variable; whenever a reply is inserted in the replies dict, all
      threads are notified.  But in the test suite there's no asyncore loop,
      and I don't feel like adding one.  So the exponential backoff seems
      the easiest "solution".
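      
      A minimal sketch of the backoff loop described above (the reply
      bookkeeping in SyncCaller is an assumption modeled on this message,
      not the actual zrpc code):
      
          import asyncore
      
          class SyncCaller:
              # Hypothetical holder for pending replies, keyed by msgid.
              def __init__(self):
                  self.replies = {}
      
              def wait(self, msgid):
                  # Poll with an exponentially growing timeout: start at
                  # 1 msec, double after every empty poll, cap at 1 second.
                  delay = 0.001
                  while msgid not in self.replies:
                      asyncore.poll(delay)  # returns early on any I/O
                      delay = min(delay * 2, 1.0)
                  return self.replies.pop(msgid)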
    • Whitespace normalization. · 4a981c73
      Guido van Rossum authored
    • Add a log msg "closing troubled socket <address>" when we receive an · 7597373d
      Guido van Rossum authored
      'x' event for a wrapper and then close it.
    • While we're at it, show the length of the message output as well.  Get · ba7fae18
      Guido van Rossum authored
      rid of the silly "smac" word.
    • Use the zrpc.log module's log() method so the process identity is · b70bef7f
      Guido van Rossum authored
      logged with the message_output.
    • If you're going to patch __builtin__, at least do it right, by · 3b4b2508
      Guido van Rossum authored
      importing __builtin__, rather than using __main__.__builtins__.
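      
      For illustration, the right and the wrong way (Python 2; the name
      'answer' is made up):
      
          import __builtin__
      
          # Right: patch the real builtin module; the name becomes
          # visible in every module.
          __builtin__.answer = 42
      
          # Wrong: __main__.__builtins__ happens to be the __builtin__
          # module in the main script, but it is a plain dict inside
          # imported modules, so code poking at it is unreliable.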
    • Add missing import of sys, needed for error logging in except clause · 19ece134
      Guido van Rossum authored
      in load_class().  Found by pychecker.
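      
      The pattern, sketched (the function body is hypothetical; only the
      sys import and the except clause reflect the fix):
      
          import sys
      
          def load_class(module_name, class_name):
              try:
                  module = __import__(module_name, {}, {}, [class_name])
                  return getattr(module, class_name)
              except:
                  # Without "import sys" this line raises NameError
                  # instead of reporting the real problem.
                  print >> sys.stderr, "load_class() failed: %s" % (
                      sys.exc_info()[1],)
                  raise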
    • e1dbb1bf
    • When using textwrap, don't break long words. Occasionally a line will · 02c7e4a8
      Guido van Rossum authored
      be too long, but breaking these at an arbitrary character looks wrong
      (and can occasionally prevent you from finding a search string).
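      
      For example (width 72 is an arbitrary choice):
      
          import textwrap
      
          text = ("Searching for http://www.example.com/some/very/long"
                  "/path fails if the URL is broken mid-word.")
          wrapper = textwrap.TextWrapper(width=72,
                                         break_long_words=False)
          # A line holding the long URL may exceed 72 columns, but the
          # URL itself stays in one piece.
          print wrapper.fill(text)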
  4. 26 Sep, 2002 5 commits
  5. 25 Sep, 2002 6 commits
  6. 24 Sep, 2002 2 commits
  7. 23 Sep, 2002 5 commits
  8. 20 Sep, 2002 3 commits
    • I set out making wait=1 work for fallback connections, i.e. the · 24afe7ac
      Guido van Rossum authored
      ClientStorage constructor called with both wait=1 and
      read_only_fallback=1 should return, indicating its readiness, when a
      read-only connection was made.  This is done by calling
      connect(sync=1).  Previously this waited for the ConnectThread to
      finish, but that thread doesn't finish until it's made a read-write
      connection, so a different mechanism is needed.
      
      I ended up doing a major overhaul of the interfaces between
      ClientStorage, ConnectionManager, ConnectThread/ConnectWrapper, and
      even ManagedConnection.  Changes:
      
      ClientStorage.py:
      
        ClientStorage:
      
        - testConnection() now returns just the preferred flag; stubs are
          cheap and I like to have the notifyConnected() signature be the
          same for clients and servers.
      
        - notifyConnected() now takes a connection (to match the signature
          of this method in StorageServer), and creates a new stub.  It also
          takes care of the reconnect business if the client was already
          connected, rather than leaving that to the ConnectionManager.  It
          stores the connection as self._connection so it can close the
          previous one.  This is also reset by notifyDisconnected().
      
      zrpc/client.py:
      
        ConnectionManager:
      
        - Changed self.thread_lock into a condition variable.  It now also
          protects self.connection.  The condition is notified when
          self.connection is set to a non-None value in connect_done();
          connect(sync=1) waits for it.  The self.connected variable is no
          more; we test "self.connection is not None" instead.  (A minimal
          sketch of this scheme follows this commit message.)
      
        - Tried to make close() reentrant.  (There's a trick: you can't set
          self.connection to None yourself; conn.close() ends up calling
          close_conn(), which does this.)
      
        - Renamed notify_closed() to close_conn(), for symmetry with the
          StorageServer API.
      
        - Added an is_connected() method so ConnectThread.try_connect()
          doesn't have to dig inside the manager's guts to find out if the
          manager is connected (important for the disposition of fallback
          wrappers).
      
        ConnectThread and ConnectWrapper:
      
        - Follow above changes in the ClientStorage and ConnectionManager
          APIs: don't close the manager's connection when reconnecting, but
          leave that up to notifyConnected(); ConnectWrapper no longer
          manages the stub.
      
        - ConnectWrapper sets self.sock to None once it's created a
          ManagedConnection -- from there on, the connection is in charge
          of closing the socket.
      
      zrpc/connection.py:
      
        ManagedServerConnection:
      
        - Changed the order in which close() calls things; super_close()
          should be last.
      
        ManagedConnection:
      
        - Ditto, and call the manager's close_conn() instead of
          notify_closed().
      
      tests/testZEO.py:
      
        - In checkReconnectSwitch(), we can now open the client storage with
          wait=1 and read_only_fallback=1.
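      
      A minimal sketch of the condition-variable scheme from the
      zrpc/client.py changes above (names follow the commit message; the
      method bodies are assumptions):
      
          import threading
      
          class ConnectionManager:
      
              def __init__(self):
                  self.cond = threading.Condition()  # was thread_lock
                  self.connection = None  # protected by self.cond
      
              def connect_done(self, conn):
                  # Called once any usable connection is up -- for a
                  # read_only_fallback client that may be a read-only
                  # connection.
                  self.cond.acquire()
                  try:
                      self.connection = conn
                      self.cond.notifyAll()  # release connect(sync=1)
                  finally:
                      self.cond.release()
      
              def connect(self, sync=0):
                  # ... start the ConnectThread here (omitted) ...
                  if sync:
                      self.cond.acquire()
                      try:
                          while self.connection is None:
                              self.cond.wait()
                      finally:
                          self.cond.release()
      
              def is_connected(self):
                  return self.connection is not None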
    • Address Chris McDonough's request: make the ClientStorage() · f8411024
      Guido van Rossum authored
      constructor signature backwards compatible with ZEO 1.  This means
      adding wait_for_server_on_startup and debug options.
      wait_for_server_on_startup is an alias for wait, which makes the
      argument decoding for these two a little tricky.  debug is ignored.
      
      Also change the default of wait to True, like it was in ZEO 1.  This
      is less likely to screw naive customers.
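      
      One way to decode the aliased arguments (a sketch, not the actual
      ZEO code; _marker is a private sentinel):
      
          _marker = []  # unique object; never a legitimate value
      
          class ClientStorage:
      
              def __init__(self, addr, wait=_marker,
                           wait_for_server_on_startup=_marker, debug=0):
                  # The old name counts only if the new one was omitted;
                  # if neither was passed, default to waiting, as ZEO 1
                  # did.  'debug' is accepted and ignored.
                  if wait is _marker:
                      wait = wait_for_server_on_startup
                  if wait is _marker:
                      wait = 1
                  self._wait = wait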
  9. 19 Sep, 2002 7 commits
    • This test (with ZEO) failed frequently on my Win98 box with a timeout · 9eb5bb8a
      Guido van Rossum authored
      of 30 seconds.  There's nothing wrong with the code, it's just slow.
      So increase the timeout to 60 seconds.
    • Change the random port generator to only generate even port numbers. · e0300a10
      Guido van Rossum authored
      On Windows, port+1 is used as well, so we don't want to accidentally
      allocate two adjacent ports when we ask for multiple ports.
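      
      For example (the port range is an arbitrary choice):
      
          import random
      
          def get_port():
              # Even ports only: Windows also grabs port+1, so even
              # spacing keeps two allocations from ever colliding.
              return random.randrange(20000, 30000, 2)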
    • 5aa4c348
    • pack()'s 'wait' argument is a boolean, not an object, so test it using · 0b9bb581
      Guido van Rossum authored
      "if wait" rather than "if wait is not None".  Also change the default
      to 0.
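      
      The change, sketched (everything around the test is hypothetical):
      
          def pack(self, t=None, wait=0):  # default was None, now 0
              # 'wait' is a boolean flag, so truth-test it.  Under the
              # old "if wait is not None" test, an explicit wait=0
              # would still have counted as a request to wait.
              if wait:
                  self._wait_for_pack()  # hypothetical helper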
    • The mystery of the Win98 hangs in the checkReconnectSwitch() test · da28b620
      Guido van Rossum authored
      is solved: the hangs persisted until I added an is_connected() test
      to testConnection().
      
      After the ConnectThread has switched the client to the new, read-write
      connection, it closes the read-only connection(s) that it was saving
      up in case there was no read-write connection.  But closing a
      ManagedConnection calls notify_closed() on the manager, which
      disconnects the manager and the client from their brand new
      read-write connection.  The mistake here is that this should only
      be done when closing the manager's current connection!
      
      The fix was to add an argument to notify_closed() that passes the
      connection object being closed; notify_closed() returns without doing
      a thing when that is not the current connection.
      
      I presume this didn't happen on Linux because there the sockets
      happened to connect in a different order, and there was no read-only
      connection to close yet (just a socket trying to connect).
      
      I'm taking out the previous "fix" to ClientStorage, because that only
      masked the problem in this relatively simple test case.  The problem
      could still occur when both a read-only and a read-write server are up
      initially, and the read-only server connects first; once the
      read-write server connects, the read-write connection is installed,
      and then the saved read-only connection is closed which would again
      mistakenly disconnect the read-write connection.
      
      Another (related) fix is not to call self.mgr.notify_closed() but to
      call self.mgr.connection.close() when reconnecting.  (Hmm, I wonder if
      it would make more sense to have an explicit reconnect callback to the
      manager and the client?  Later.)
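      
      A sketch of the fix (the method body is an assumption based on the
      description above):
      
          class ConnectionManager:
      
              def notify_closed(self, conn):
                  if conn is not self.connection:
                      # A saved read-only connection is being thrown
                      # away; leave the current connection alone.
                      return
                  self.connection = None
                  self.client.notifyDisconnected()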
    • Define __str__ as an alias for __repr__. Otherwise __str__ will get · b0e16c71
      Guido van Rossum authored
      the socket's __str__ due to a __getattr__ method in asyncore's
      dispatcher base class that everybody hates but nobody dares take away.
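      
      The idiom, for illustration:
      
          import asyncore
      
          class Connection(asyncore.dispatcher):
      
              def __repr__(self):
                  return "<%s at %x>" % (self.__class__.__name__,
                                         id(self))
      
              # Without this alias, dispatcher's __getattr__ hands
              # __str__ through to the underlying socket object.
              __str__ = __repr__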