- 17 Jan, 2017 1 commit
-
-
Julien Muchembled authored
-
- 16 Jan, 2017 4 commits
-
-
Julien Muchembled authored
100 is too small for tests like testBasicStore (MySQL) and testDelayedStore.
-
Julien Muchembled authored
When receiving 1 byte, benchmarking shows no visible difference for buffer sizes between 4096 and 65536. With higher values, it becomes significantly slower. On the other hand, a 64k buffer is faster with bigger packets.

Time to run testBasicStore with MySQL:
          4096         65536
  real    0m51.115s    0m21.592s
  user    0m41.857s    0m13.540s
  sys     0m8.700s     0m2.687s
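For context, a minimal sketch of the kind of micro-benchmark behind such numbers: a POSIX-only toy that pumps bytes through a local socket pair and times recv() with each buffer size. None of this is NEO code; the helper name and sizes are illustrative only.

    import os, socket, time

    def bench_recv(bufsize, total=50 * 1024 * 1024, chunk=16384):
        # Pump 'total' bytes through a local socket pair and time the
        # reader, which calls recv() with the buffer size under test.
        reader, writer = socket.socketpair()
        pid = os.fork()
        if pid == 0:                       # child process: the writer
            reader.close()
            data = b'x' * chunk
            sent = 0
            while sent < total:
                sent += writer.send(data)
            writer.close()
            os._exit(0)
        writer.close()
        start = time.time()
        while reader.recv(bufsize):        # read until the writer closes
            pass
        elapsed = time.time() - start
        reader.close()
        os.waitpid(pid, 0)
        return elapsed

    for size in 4096, 65536:
        print("recv(%d): %.2fs" % (size, bench_recv(size)))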
-
Julien Muchembled authored
It started to fail with commit fd007f5d.
-
Julien Muchembled authored
It should have been removed with commit cf32e594 (see also c277ed20 and related commits).
-
- 13 Jan, 2017 4 commits
-
-
Julien Muchembled authored
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 12 Jan, 2017 1 commit
-
-
Julien Muchembled authored
Instances of NEOCluster were not deleted as soon as the only referrers were weak proxies (at least that's what a quick check with the 'gc' module showed at the beginning of tearDown). In some cases, __del__ was called while the next test was logging a message, which led to deadlocks. Without those proxies, it may be reliable, but only on CPython. See http://doc.pypy.org/en/latest/cpython_differences.html#differences-related-to-garbage-collection-strategies

Relying on __del__ to close a cluster was wrong. NEOCluster is now a context manager that closes the cluster explicitly at exit, in addition to automatically stopping it. The NEOCluster.stop method combines the previous stop/__del__/reset methods.

A new 'with_cluster' decorator is also added to avoid excessive indentation in tests. Unindentation of existing tests will be done later.
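A minimal sketch of the context-manager plus decorator pattern described above, with hypothetical names and behaviour (the real NEOCluster interface in the test framework has more methods and arguments and may differ):

    import functools

    class NEOCluster(object):
        # Hypothetical skeleton, only to illustrate the pattern.
        def __init__(self, **config):
            self.config = config

        def start(self):
            print("cluster started")

        def stop(self):
            print("cluster stopped")

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            # Explicit cleanup when the 'with' block exits, even on
            # error, instead of relying on __del__.
            self.stop()

    def with_cluster(**cluster_kw):
        # Decorator wrapping a test method so its body does not need an
        # extra level of 'with' indentation.
        def decorator(wrapped):
            @functools.wraps(wrapped)
            def wrapper(self, *args, **kw):
                with NEOCluster(**cluster_kw) as cluster:
                    cluster.start()
                    return wrapped(self, cluster, *args, **kw)
            return wrapper
        return decorator

    class SomeTest(object):
        @with_cluster()
        def testSomething(self, cluster):
            print("running test against", cluster.config)

    SomeTest().testSomething()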
-
- 11 Jan, 2017 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
This is important when using --loop; otherwise tearDown gets slower and slower at removing packets from the log.
-
Julien Muchembled authored
testExternalInvalidation is split to minimize reindentation.
-
- 09 Jan, 2017 1 commit
-
-
Julien Muchembled authored
-
- 06 Jan, 2017 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 04 Jan, 2017 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
It is extended to check that the storage is only notified about the transactions that existed at the time it asked for them. Otherwise, Replicator.transactionFinished would be called more than once, and `self.ttid_set.remove(ttid)` would raise KeyError. The functional version also contained an annoying 'sleep(10)'.
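The KeyError is simply the behaviour of set.remove when the same element is removed twice, as in this toy illustration (the names mirror the commit message, but none of this is actual NEO code):

    ttid_set = {0x0123, 0x0456}

    def transactionFinished(ttid):
        # A duplicate notification for the same transaction has nothing
        # left to remove, so set.remove raises KeyError.
        ttid_set.remove(ttid)

    transactionFinished(0x0123)      # first notification: fine
    try:
        transactionFinished(0x0123)  # second notification, same ttid
    except KeyError as e:
        print("duplicate notification:", e)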
-
- 03 Jan, 2017 1 commit
-
-
Julien Muchembled authored
-
- 30 Dec, 2016 1 commit
-
-
Julien Muchembled authored
Leaks in filter_queue caused deadlocks in the following threaded tests that filter connections.
-
- 28 Dec, 2016 6 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
The removed tests only covered this.
-
Julien Muchembled authored
The removed test_answerStoreObject_{1,2} only covered the 'raise NEOStorageError', which is already an assertion.
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 27 Dec, 2016 1 commit
-
-
Julien Muchembled authored
-
- 26 Dec, 2016 5 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 23 Dec, 2016 1 commit
-
-
Julien Muchembled authored
-
- 22 Dec, 2016 1 commit
-
-
Julien Muchembled authored
-
- 21 Dec, 2016 3 commits
-
-
Julien Muchembled authored
This fixes the following case when the backup is far behind the upstream DB and there are transactions being committed at the same time:
1. replicate partition 0
2. replicate partition 0
3. replicate partition 1
4. replicate partition 0
5. replicate partition 1
6. replicate partition 2
7. replicate partition 0
... and so on, in a quadratic way.

When the upstream activity was too high, the backup could even get stuck looping on the first partitions.
-
Julien Muchembled authored
The issue happens when there have been commits while the backup cluster was down: in that case, the master thinks that these commits are already replicated and reports a wrong backup_tid to neoctl.

It solves itself once:
- there are new commits triggering replication for all partitions;
- all storage nodes have really replicated.

This also resulted in an inconsistent database when leaving backup mode during this period.
-
Julien Muchembled authored
-
- 20 Dec, 2016 1 commit
-
-
Julien Muchembled authored
-
- 06 Dec, 2016 1 commit
-
-
Julien Muchembled authored
A backup master crashed with the following traceback after a reconnection:

  Traceback (most recent call last):
    File "neo/master/app.py", line 127, in run
      self._run()
    File "neo/master/app.py", line 147, in _run
      self.playPrimaryRole()
    File "neo/master/app.py", line 348, in playPrimaryRole
      self.backup_app.provideService())
    File "neo/master/backup_app.py", line 123, in provideService
      poll(1)
    File "neo/lib/event.py", line 126, in poll
      to_process.process()
    File "neo/lib/connection.py", line 500, in process
      self._handlers.handle(self, self._queue.pop(0))
    File "neo/lib/connection.py", line 110, in handle
      self._handle(connection, packet)
    File "neo/lib/connection.py", line 125, in _handle
      handler.packetReceived(connection, packet)
    File "neo/lib/handler.py", line 117, in packetReceived
      self.dispatch(*args)
    File "neo/lib/handler.py", line 66, in dispatch
      method(conn, *args, **kw)
    File "neo/master/handlers/backup.py", line 52, in invalidateObjects
      app.invalidatePartitions(tid, partition_set)
    File "neo/master/backup_app.py", line 257, in invalidatePartitions
      self.triggerBackup(node)
    File "neo/master/backup_app.py", line 281, in triggerBackup
      assert cell_list, offset
  AssertionError: 0
-