- 29 Sep, 2002 1 commit
-
-
Guido van Rossum authored
is *not* thread-safe. So don't share the Pickler.
-
- 28 Sep, 2002 1 commit
-
-
Guido van Rossum authored
says it should be cPickle.dumps(), so do that.
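For illustration, a minimal sketch of the module-level approach, assuming the marshalling helper simply wraps cPickle.dumps() (the real zrpc code may differ):

    import cPickle

    def encode(*args):
        # cPickle.dumps() builds a fresh pickler internally, so nothing is
        # shared between threads (unlike a single, shared Pickler instance).
        return cPickle.dumps(args, 1)   # 1 = binary pickle format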
-
- 27 Sep, 2002 9 commits
-
-
Guido van Rossum authored
asyncore.poll() with a timeout of 10 seconds. Change this to a variable timeout starting at 1 msec and doubling until 1 second.

While debugging Win2k crashes in the check4ExtStorageThread test from ZODB/tests/MTStorage.py, Tim noticed that there were frequent 10 second gaps in the log file where *nothing* happens. These were caused by the following scenario.

Suppose a ZEO client process has two threads using the same connection to the ZEO server, and there's no asyncore loop active. T1 makes a synchronous call, and enters the wait() function. Then T2 makes another synchronous call, and enters the wait() function. At this point, both are blocked in the select() call in asyncore.poll(), with a timeout of 10 seconds (in the old version). Now the replies for both calls arrive. Say T1 wakes up. The handle_read() method in smac.py calls self.recv(8096), so it gets both replies in its buffer, decodes both, and calls self.message_input() for both, which sticks both replies in the self.replies dict. Now T1 finds its response, and its wait() call returns with it. But T2 is still stuck in asyncore.poll(): its select() call never woke up, and it has to "sit out" the whole timeout of 10 seconds. (Good thing I added timeouts to everything! Or perhaps not, since it masked the problem.)

One other condition must be satisfied before this becomes a disaster: T2 must have started a transaction, and all other threads must be waiting to start another transaction. This is what I saw in the log. (Hmm, maybe a message should be logged when a thread is waiting to start a transaction this way.)

In a real Zope application, this won't happen, because there's a centralized asyncore loop in a separate thread (probably the client's main thread) and the various threads would be waiting on the condition variable; whenever a reply is inserted in the replies dict, all threads are notified. But in the test suite there's no asyncore loop, and I don't feel like adding one. So the exponential backoff seems the easiest "solution".
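For illustration, a hedged sketch of the variable-timeout loop described above (the real wait() in zrpc keeps more state; the names here are just illustrative):

    import asyncore

    def wait_for_reply(replies, msgid):
        # Poll with a growing timeout instead of a fixed 10 seconds, so a
        # thread whose reply was consumed by another thread's recv() doesn't
        # sit out a full 10-second select().
        delay = 0.001                    # start at 1 msec
        while msgid not in replies:
            asyncore.poll(delay)         # may return without this thread's socket firing
            delay = min(delay * 2, 1.0)  # double, capped at 1 second
        return replies.pop(msgid)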
-
Guido van Rossum authored
-
Guido van Rossum authored
'x' event for a wrapper and then close it.
-
Guido van Rossum authored
rid of the silly "smac" word.
-
Guido van Rossum authored
logged with the message_output.
-
Guido van Rossum authored
importing __builtin__, rather than using __main__.__builtins__.
-
Guido van Rossum authored
in load_class(). Found by pychecker.
-
Guido van Rossum authored
-
Guido van Rossum authored
be too long, but breaking these at an arbitrary character looks wrong (and can occasionally prevent you from finding a search string).
-
- 26 Sep, 2002 5 commits
-
-
Guido van Rossum authored
initialized from the StorageServer's read_only attribute; later, if the client registers in read-only mode, it may be set even if it was off initially. This attribute is tested by all write-ish operations.
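As a sketch of the guard described above (class and method names are stand-ins, not the actual ZEO server code):

    class ReadOnlyError(Exception):
        pass    # stand-in for the real read-only exception

    class ZEOStorageSketch:
        def __init__(self, server, storage):
            self.read_only = server.read_only   # initialized from the StorageServer
            self.__storage = storage

        def register(self, storage_id, read_only):
            if read_only:
                self.read_only = 1               # a client may turn this on, never off

        def store(self, oid, serial, data, version, id):
            if self.read_only:                   # every write-ish method checks first
                raise ReadOnlyError()
            return self.__storage.store(oid, serial, data, version, id)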
-
Guido van Rossum authored
-
Guido van Rossum authored
simple read-only tests. (These tests pass.)
-
Guido van Rossum authored
StorageServer as well as that of the storage itself. Currently the test fails.
-
Guido van Rossum authored
start_zeo_server() has changed dramatically.)
-
- 25 Sep, 2002 6 commits
-
-
Jeremy Hylton authored
-
Jeremy Hylton authored
Don't catch a specific set of errors; catch anything, log the message that failed, and re-raise the exception. Eliminate the unused class variable VERSION and the unused import of struct.
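A generic sketch of the log-and-re-raise pattern (function and helper names are assumptions, not the actual smac.py code):

    import sys, traceback

    def message_output(self, message):
        try:
            self._do_output(message)      # hypothetical helper that does the real work
        except:                           # catch anything, not a fixed list of errors
            sys.stderr.write("message_output failed for %s\n" % repr(message))
            traceback.print_exc()
            raise                         # re-raise so the caller still sees the failure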
-
Jeremy Hylton authored
If an exception occurs while decoding a message, there is really nothing the server can do to recover. If the message was a synchronous call, the client will wait forever for the reply. The server can't send the reply, because it couldn't unpickle the message id. Instead of trying to recover, just let the exception propagate up to asyncore, where the connection will be closed. As a result, eliminate DecodingError and the special case in handle_error() that handled flags == None.
-
Guido van Rossum authored
instead. return_error(): be more careful calling repr() on err_value.
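One plausible reading of "be more careful calling repr()", shown here only as an assumption about the intent (the actual return_error() change may differ): guard against a broken __repr__ on the exception value so that error reporting itself cannot fail.

    def safe_repr(obj):
        try:
            return repr(obj)
        except:
            return "<unreprable object of type %s>" % type(obj).__name__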
-
Barry Warsaw authored
which tests a particular combination of packing and transactional undo at the ZODB layer.
-
Barry Warsaw authored
FileStorage/Berkeley storage definition, and undoInfo(), and the storage interface definition.
-
- 24 Sep, 2002 2 commits
-
-
Jeremy Hylton authored
This is no longer an invariant. Storage methods like restore(), abortVersion(), and commitVersion() can result in a serialno that does not match the transaction id.
-
Guido van Rossum authored
Rather than blaming Windows for reporting success as an error, the else clause on the second try block should be an except clause.
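A generic illustration of the distinction (not the actual test code): an else clause on a try block runs only when no exception was raised, so error handling placed there never sees the error; it belongs in an except clause.

    def cleanup():
        pass    # hypothetical call that may fail on Windows

    try:
        cleanup()
    except OSError:
        pass    # the right place to tolerate the failure
    else:
        pass    # reached only on success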
-
- 23 Sep, 2002 5 commits
-
-
Guido van Rossum authored
ZODB/Connection.py -- the Connection class has a sync() method which calls the sync() method on the storage if it exists.
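A minimal sketch of the delegation described above (the real Connection.sync() does more than this):

    class Connection:
        def __init__(self, storage):
            self._storage = storage

        def sync(self):
            # Only storages that provide sync() (e.g. ClientStorage) get called.
            sync = getattr(self._storage, 'sync', None)
            if sync is not None:
                sync()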
-
Guido van Rossum authored
- Change pending() to use select.select() instead of select.poll(), so it'll work on Windows.
- Clarify comment to say that only Exceptions are propagated.
- Change some private variables to public (everything else is public).
- Remove XXX comment about logging at INFO level (we already do that now :-).
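A hedged sketch of a select-based pending() (the real method does more; names are illustrative):

    import select

    def pending(self, timeout=0):
        # select.select() works on Windows; select.poll() does not.
        r, w, x = select.select([self.fileno()], [], [], timeout)
        return len(r) > 0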
-
Guido van Rossum authored
-
Guido van Rossum authored
XXX This created two unused attributes, self._commit_lock_{acquire,release}. Why? I've gotten rid of them. The test suite succeeds. But they are created by BaseStorage; maybe they play a role in the standard storage API???
-
Guido van Rossum authored
-
- 20 Sep, 2002 3 commits
-
-
Guido van Rossum authored
-
Guido van Rossum authored
ClientStorage constructor called with both wait=1 and read_only_fallback=1 should return, indicating its readiness, when a read-only connection was made. This is done by calling connect(sync=1). Previously this waited for the ConnectThread to finish, but that thread doesn't finish until it's made a read-write connection, so a different mechanism is needed.

I ended up doing a major overhaul of the interfaces between ClientStorage, ConnectionManager, ConnectThread/ConnectWrapper, and even ManagedConnection. Changes:

ClientStorage.py (ClientStorage):
- testConnection() now returns just the preferred flag; stubs are cheap and I like to have the notifyConnected() signature be the same for clients and servers.
- notifyConnected() now takes a connection (to match the signature of this method in StorageServer), and creates a new stub. It also takes care of the reconnect business if the client was already connected, rather than the ClientManager. It stores the connection as self._connection so it can close the previous one. This is also reset by notifyDisconnected().

zrpc/client.py (ConnectionManager):
- Changed self.thread_lock into a condition variable. It now also protects self.connection. The condition is notified when self.connection is set to a non-None value in connect_done(); connect(sync=1) waits for it. The self.connected variable is no more; we test "self.connection is not None" instead.
- Tried to make close() reentrant. (There's a trick: you can't set self.connection to None; conn.close() ends up calling close_conn(), which does this.)
- Renamed notify_closed() to close_conn(), for symmetry with the StorageServer API.
- Added an is_connected() method so ConnectThread.try_connect() doesn't have to dig inside the manager's guts to find out if the manager is connected (important for the disposition of fallback wrappers).

zrpc/client.py (ConnectThread and ConnectWrapper):
- Follow the above changes in the ClientStorage and ConnectionManager APIs: don't close the manager's connection when reconnecting, but leave that up to notifyConnected(); ConnectWrapper no longer manages the stub.
- ConnectWrapper sets self.sock to None once it's created a ManagedConnection -- from there on the connection is in charge of closing the socket.

zrpc/connection.py (ManagedServerConnection):
- Changed the order in which close() calls things; super_close() should be last.

zrpc/connection.py (ManagedConnection):
- Ditto, and call the manager's close_conn() instead of notify_closed().

tests/testZEO.py:
- In checkReconnectSwitch(), we can now open the client storage with wait=1 and read_only_fallback=1.
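A hedged sketch of the condition-variable scheme described in the zrpc/client.py part of this change (the real ConnectionManager has much more state and error handling):

    import threading

    class ConnectionManagerSketch:
        def __init__(self):
            self.cond = threading.Condition()   # replaces the old thread_lock
            self.connection = None              # protected by self.cond

        def connect_done(self, conn, preferred):
            # Called (e.g. by the connect thread) when a usable connection is made.
            self.cond.acquire()
            try:
                self.connection = conn
                self.cond.notifyAll()           # wake any connect(sync=1) waiters
            finally:
                self.cond.release()

        def connect(self, sync=0):
            if not sync:
                return
            self.cond.acquire()
            try:
                while self.connection is None:  # a read-only fallback counts too
                    self.cond.wait(30)
            finally:
                self.cond.release()

        def is_connected(self):
            return self.connection is not None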
-
Guido van Rossum authored
constructor signature backwards compatible with ZEO 1. This means adding wait_for_server_on_startup and debug options. wait_for_server_on_startup is an alias for wait, which makes the argument decoding for these two a little tricky. debug is ignored. Also change the default of wait to True, like it was in ZEO 1. This is less likely to screw naive customers.
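A hedged sketch of the backward-compatible argument handling (the real ClientStorage constructor takes many more parameters, and the exact precedence rules here are an assumption):

    class ClientStorageSketch:
        def __init__(self, addr, wait=None, wait_for_server_on_startup=None, debug=0):
            if wait is None and wait_for_server_on_startup is None:
                wait = 1                           # new default: wait, as in ZEO 1
            elif wait is None:
                wait = wait_for_server_on_startup  # honor the ZEO 1 spelling
            self._wait = wait                      # an explicit wait=... wins
            # debug is accepted for compatibility but ignored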
-
- 19 Sep, 2002 8 commits
-
-
Guido van Rossum authored
-
Guido van Rossum authored
of 30 seconds. There's nothing wrong with the code, it's just slow. So increase the timeout to 60 seconds.
-
Guido van Rossum authored
On Windows, port+1 is used as well, so we don't want to accidentally allocate two adjacent ports when we ask for multiple ports.
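An illustrative sketch of such a port allocator (not necessarily the actual test helper): accept a port only if both port and port+1 can be bound.

    import random
    import socket

    def get_port():
        for i in range(10):
            port = random.randrange(20000, 30000)
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                try:
                    s.bind(('localhost', port))
                    s1.bind(('localhost', port + 1))
                    return port                    # both ports are free
                except socket.error:
                    pass                           # try another random port
            finally:
                s.close()
                s1.close()
        raise RuntimeError("can't find a pair of unused ports")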
-
Guido van Rossum authored
-
Guido van Rossum authored
"if wait" rather than "if wait is not None". Also change the default to 0.
-
Guido van Rossum authored
until I added an is_connected() test to testConnection() is solved.

After the ConnectThread has switched the client to the new, read-write connection, it closes the read-only connection(s) that it was saving up in case there was no read-write connection. But closing a ManagedConnection calls notify_closed() on the manager, which disconnected the manager and the client from its brand new read-write connection. The mistake here is that this should only be done when closing the manager's current connection!

The fix was to add an argument to notify_closed() that passes the connection object being closed; notify_closed() returns without doing a thing when that is not the current connection.

I presume this didn't happen on Linux because there the sockets happened to connect in a different order, and there was no read-only connection to close yet (just a socket trying to connect).

I'm taking out the previous "fix" to ClientStorage, because that only masked the problem in this relatively simple test case. The problem could still occur when both a read-only and a read-write server are up initially, and the read-only server connects first; once the read-write server connects, the read-write connection is installed, and then the saved read-only connection is closed, which would again mistakenly disconnect the read-write connection.

Another (related) fix is not to call self.mgr.notify_closed() but to call self.mgr.connection.close() when reconnecting. (Hmm, I wonder if it would make more sense to have an explicit reconnect callback to the manager and the client? Later.)
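A minimal sketch of the fix described above, assuming a manager shaped roughly like zrpc's (names and surrounding state are illustrative):

    def notify_closed(self, conn):
        if conn is not self.connection:
            return                          # a saved read-only connection being discarded
        # Only the current connection's close should disconnect the client.
        self.connection = None
        self.client.notifyDisconnected()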
-
Guido van Rossum authored
the socket's __str__ due to a __getattr__ method in asyncore's dispatcher base class that everybody hates but nobody dares take away.
-
Guido van Rossum authored
Add the pid to the label.
-