- 20 Apr, 2016 1 commit
-
-
Julien Muchembled authored
This fixes the following issue:

WARNING replication aborted for partition 1
DEBUG connection started for <ClientConnection(uuid=None, address=...:43776, handler=StorageOperationHandler, fd=10, on_close=onConnectionClosed, connecting, client) at 7f5d2067fdd0>
DEBUG connect failed for <SocketConnectorIPv6 at 0x7f5d2067fe10 fileno 10 ('::', 0), opened to ('...', 43776)>: ENETUNREACH (Network is unreachable)
WARNING replication aborted for partition 5
DEBUG connection started for <ClientConnection(uuid=None, address=...:43776, handler=StorageOperationHandler, fd=10, on_close=onConnectionClosed, connecting, client) at 7f5d1c409510>
PACKET #0x0000 RequestIdentification > None (...:43776) | (<EnumItem STORAGE (1)>, None, ('...', 60533), '...')
ERROR Pre-mortem data:
ERROR Traceback (most recent call last):
ERROR   File "neo/storage/app.py", line 157, in run
ERROR     self._run()
ERROR   File "neo/storage/app.py", line 197, in _run
ERROR     self.doOperation()
ERROR   File "neo/storage/app.py", line 285, in doOperation
ERROR     poll()
ERROR   File "neo/storage/app.py", line 95, in _poll
ERROR     self.em.poll(1)
ERROR   File "neo/lib/event.py", line 121, in poll
ERROR     self._poll(blocking)
ERROR   File "neo/lib/event.py", line 165, in _poll
ERROR     if conn.readable():
ERROR   File "neo/lib/connection.py", line 481, in readable
ERROR     self._closure()
ERROR   File "neo/lib/connection.py", line 539, in _closure
ERROR     self.close()
ERROR   File "neo/lib/connection.py", line 531, in close
ERROR     handler.connectionClosed(self)
ERROR   File "neo/lib/handler.py", line 135, in connectionClosed
ERROR     self.connectionLost(conn, NodeStates.TEMPORARILY_DOWN)
ERROR   File "neo/storage/handlers/storage.py", line 59, in connectionLost
ERROR     replicator.abort()
ERROR   File "neo/storage/replicator.py", line 339, in abort
ERROR     self._nextPartition()
ERROR   File "neo/storage/replicator.py", line 260, in _nextPartition
ERROR     None if name else app.uuid, app.server, name or app.name))
ERROR   File "neo/lib/connection.py", line 562, in ask
ERROR     raise ConnectionClosed
ERROR ConnectionClosed
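The traceback shows Replicator.abort() calling _nextPartition(), which raises ConnectionClosed because the connection it tries to reuse is itself in the middle of closing. The following is only an illustrative sketch with simplified stand-in classes, not the actual fix: it shows the general defensive pattern of treating that exception as "try again later" instead of letting it escape the event loop and kill the node.

    # Illustrative sketch only; all classes below are simplified stand-ins.
    class ConnectionClosed(Exception):
        """Raised when asking something on a connection that is already closed."""

    class ClosedConnection:
        def ask(self, request):
            raise ConnectionClosed

    class Replicator:
        def __init__(self, connection):
            self.connection = connection

        def _nextPartition(self):
            # The real code sends identification/replication requests;
            # here we only model the failure mode seen in the traceback.
            self.connection.ask("replicate next partition")

        def abort(self):
            try:
                self._nextPartition()
            except ConnectionClosed:
                # The connection closed while we were aborting; give up quietly
                # and rely on a later event (e.g. the close handler or the next
                # poll) to schedule another replication attempt.
                pass

    Replicator(ClosedConnection()).abort()   # no longer propagates the exception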
-
- 18 Apr, 2016 1 commit
-
-
Julien Muchembled authored
This fixes a lock leak on storages, causing further transactions to time out.
-
- 01 Apr, 2016 1 commit
-
-
Julien Muchembled authored
-
- 31 Mar, 2016 1 commit
-
-
Julien Muchembled authored
-
- 30 Mar, 2016 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 28 Mar, 2016 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 22 Mar, 2016 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 21 Mar, 2016 3 commits
-
-
Julien Muchembled authored
This fixes the following crash (for example when a client disconnects during tpc_finish):

Traceback (most recent call last):
  ...
  File "neo/master/handlers/storage.py", line 68, in answerInformationLocked
    self.app.tm.lock(ttid, conn.getUUID())
  File "neo/master/transactions.py", line 338, in lock
    if self._ttid_dict[ttid].lock(uuid) and self._queue[0][1] == ttid:
IndexError: list index out of range
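For illustration only, a minimal sketch with simplified stand-ins for the classes named in the traceback (not the actual fix): self._queue can already be empty by the time answerInformationLocked arrives if the client has disconnected, so the queue access needs a guard before indexing element 0.

    # Simplified stand-ins; the real fix may address the root cause elsewhere.
    class Transaction:
        def __init__(self):
            self._locked = set()

        def lock(self, uuid):
            self._locked.add(uuid)
            return True              # pretend all storages have now locked

    class TransactionManager:
        def __init__(self):
            self._queue = []         # [(client_uuid, ttid), ...] in commit order
            self._ttid_dict = {}     # ttid -> Transaction

        def lock(self, ttid, uuid):
            txn = self._ttid_dict.get(ttid)
            if txn is None:
                return False         # transaction already aborted or forgotten
            # Guard the queue access: if the client went away, the queue may be
            # empty and self._queue[0] would raise IndexError as in the traceback.
            return (txn.lock(uuid)
                    and bool(self._queue)
                    and self._queue[0][1] == ttid)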
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 09 Mar, 2016 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 08 Mar, 2016 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 04 Mar, 2016 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
Before this change, a storage node did 3 commits per transaction:
- once all data are stored,
- when locking the transaction,
- when unlocking the transaction.

The last one is not important for ACID: in case of a crash, the transaction is simply unlocked again (verification phase). By deferring it by 1 second, we only have 2 commits per transaction during high activity, because all pending changes are merged with the commits caused by other transactions.

This change compensates for the extra commit(s) per transaction that were introduced in commit 7eb7cf1b ("Minimize the amount of work during tpc_finish").
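As a minimal sketch of the idea, assuming only a database object exposing a commit() method (DeferredCommitter and its methods are hypothetical names, not NEO's API): a commit that is not needed for durability is given a deadline instead of being executed immediately, and any earlier immediate commit absorbs it.

    # Hypothetical sketch: batch non-critical commits so that, under load,
    # they merge with commits triggered by other transactions.
    import threading
    import time

    class DeferredCommitter:
        def __init__(self, db, delay=1.0):
            self._db = db            # assumed to expose a commit() method
            self._delay = delay      # how long a deferrable commit may wait
            self._deadline = None    # time by which a deferred commit is due
            self._lock = threading.Lock()

        def commit(self):
            """Commit immediately (e.g. after storing data, or when locking)."""
            with self._lock:
                self._db.commit()
                self._deadline = None    # any pending deferred commit is merged

        def commit_later(self):
            """Schedule a commit that only needs to happen within `delay` seconds
            (e.g. the unlock commit, which is not required for ACID)."""
            with self._lock:
                if self._deadline is None:
                    self._deadline = time.time() + self._delay

        def poll(self):
            """Called periodically by the event loop; flushes an overdue commit."""
            with self._lock:
                if self._deadline is not None and time.time() >= self._deadline:
                    self._db.commit()
                    self._deadline = None

    if __name__ == "__main__":
        class FakeDB:
            def commit(self):
                print("commit")

        c = DeferredCommitter(FakeDB(), delay=1.0)
        c.commit()        # immediate commit (data stored / transaction locked)
        c.commit_later()  # unlock commit: may wait up to 1 second
        time.sleep(1.1)
        c.poll()          # flushes the deferred commit if nothing merged it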
-
Julien Muchembled authored
-
- 02 Mar, 2016 1 commit
-
-
Julien Muchembled authored
Since commit d2d77437 ("client: make the cache tolerant to late invalidations when the entry is in the history queue"), invalidated items became current again when they were moved to the history queue, which was wrong for 2 reasons:
- only the last items of _oid_dict values may have next_tid=None,
- and for such items, they could be wrongly reused when caching the real current data.
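To make the invariant behind the first reason concrete, here is a hypothetical checker (only the _oid_dict values and the next_tid field come from the message; the helper itself is not part of NEO):

    # Hypothetical helper: for each oid, only the most recent cached revision
    # may be "current" (next_tid=None); every older revision kept in the
    # history queue must carry the real tid of the revision that replaced it.
    def check_cache_invariant(oid_dict):
        """oid_dict maps an oid to its cache items, ordered oldest to newest."""
        for oid, items in oid_dict.items():
            for item in items[:-1]:                 # all but the newest revision
                assert item.next_tid is not None, (
                    "%r: an old revision must not look current" % (oid,))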
-
- 01 Mar, 2016 1 commit
-
-
Julien Muchembled authored
-
- 26 Feb, 2016 4 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 05 Feb, 2016 1 commit
-
-
Julien Muchembled authored
This fixes the following scenario:
1. the master sends invalidations to clients, and unlocks to storages (oid1, tid1)
2. the storage receives/processes the unlock
3. the client asks data (oid1, tid0)
4. the storage returns tid1 as next tid, whereas it's still None in the cache (before, it caused an assertion failure)
6. the client processes invalidations
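A minimal sketch of the tolerated race, under assumed names (ObjectCache and store are not NEO's real cache API): when the load answer from step 4 arrives before the invalidation from step 6 has been processed, the cache simply adopts the newer next_tid instead of asserting that both sides agree.

    # Hypothetical sketch of the tolerance described above.
    class CacheItem:
        def __init__(self, tid, next_tid, data):
            self.tid, self.next_tid, self.data = tid, next_tid, data

    class ObjectCache:
        def __init__(self):
            self._items = {}                  # oid -> [CacheItem, ...] by tid

        def store(self, oid, data, tid, next_tid):
            for item in self._items.get(oid, ()):
                if item.tid == tid:
                    if next_tid is not None and item.next_tid is None:
                        # The answer overtook the invalidation: record the newer
                        # next_tid; the late invalidation then becomes a no-op.
                        item.next_tid = next_tid
                    return
            self._items.setdefault(oid, []).append(CacheItem(tid, next_tid, data))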
-
- 25 Jan, 2016 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 21 Jan, 2016 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 12 Jan, 2016 1 commit
-
-
Julien Muchembled authored
See commit c277ed20 ("client: really process all invalidations in poll thread").
-
- 16 Dec, 2015 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 13 Dec, 2015 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
This is a partial implementation. To truncate at a smaller tid, you must wait until data has been imported up to this tid, and then stop using the Importer backend.
-
Julien Muchembled authored
This backend does not support replication. Even if we implemented it, such a node could only be a source for other nodes, so we should never delete transactions.
-
- 12 Dec, 2015 1 commit
-
-
Julien Muchembled authored
-
- 11 Dec, 2015 1 commit
-
-
Julien Muchembled authored
-
- 09 Dec, 2015 1 commit
-
-
Julien Muchembled authored
This fixes a regression caused by commit eef52c27.
-