- 27 Apr, 2019 4 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
neoctl gets a new command to change the number of replicas. The number of replicas becomes a new partition table attribute and, like the PT id, it is stored in the config table. Conversely, the configuration value for the number of partitions is dropped, since it can be computed from the partition table, which is always stored in full. The -p/-r master options now only apply at database creation.

Some implementation notes:
- The protocol is slightly optimized in that the master now automatically sends the whole partition table to the admin & client nodes upon connection, as it already does for storage nodes. This makes the protocol more consistent, and the master is the only remaining node requesting partition tables, during recovery.
- Some parts become tricky because app.pt can be None in more cases. For example, the extra condition in NodeManager.update (before app.pt.dropNode) was added for this reason. Likewise for the 'loadPartitionTable' method (storage), which is not inlined because of unit tests. Overall, this commit simplifies more than it complicates.
- In the master handlers, we stop hijacking the 'connectionCompleted' method for tasks to be performed on handler switches (often sending the full partition table).
- The admin's 'bootstrapped' flag could have been removed earlier: race conditions can't happen since the AskNodeInformation packet was removed (commit d048a52d).
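As a rough illustration of the new invariant (a minimal Python sketch with hypothetical names, not NEO's actual API): since the partition table is always stored in full, both settings can be derived from it.

    # Minimal sketch; 'pt' is a list of rows, one per partition, each row
    # listing the cells (storage nodes) holding that partition.
    # Names are hypothetical, for illustration only.
    def num_partitions(pt):
        return len(pt)           # -p is implied by the stored table

    def num_replicas(pt):
        return len(pt[0]) - 1    # replicas = cells per partition - 1

    pt = [['S1', 'S2'], ['S2', 'S3'], ['S3', 'S1']]  # 3 partitions
    assert num_partitions(pt) == 3
    assert num_replicas(pt) == 1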
-
Julien Muchembled authored
It is often faster to set up replicas by stopping a node (and any underlying database server like MariaDB) and doing a raw copy of the database (e.g. with rsync). So far, this required stopping the whole cluster and using tools like 'mysql' or 'sqlite3' to edit:
- the 'pt' table in databases,
- the 'config.nid' values of the new nodes.
With this new option, if you already have 1 replica, you can set up new replicas with such a fast raw copy, and without interruption of service. Obviously, this implies less redundancy during the operation.
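For context, a rough sketch of the kind of manual edit that was previously required, assuming the SQLite backend and a 'config' table with name/value columns (an assumption; the exact schema may differ):

    # Hypothetical illustration only, run against the raw copy while the
    # node is stopped; the table and column names are assumptions.
    import sqlite3
    con = sqlite3.connect('neo-storage-copy.sqlite')
    # Give the copied database the NID of the new node:
    con.execute("UPDATE config SET value=? WHERE name='nid'", ('2',))
    con.commit()
    con.close()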
-
Julien Muchembled authored
-
- 26 Apr, 2019 4 commits
-
-
Julien Muchembled authored
--kill-mysqld should be combined with something like -f .3 -r .1 to give storage nodes enough time to recover, and also with -D 0 to focus testing on the storage backend rather than on NEO.
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 16 Apr, 2019 5 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
This also reverts commit 442bb43a.
-
- 05 Apr, 2019 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
This fixes up commit be839e92.
-
Julien Muchembled authored
-
- 01 Apr, 2019 1 commit
-
-
Julien Muchembled authored
-
- 21 Mar, 2019 2 commits
-
-
Julien Muchembled authored
This is not used currently.
-
Julien Muchembled authored
This breaks compatibility, but it was mentioned from the beginning that these options are only there for testing purposes.

TODO: rename all remaining occurrences of UUID to NID in the code
-
- 16 Mar, 2019 1 commit
-
-
Julien Muchembled authored
If the source DB is lost during the import and then restored from a backup, all new transactions have to be written back again on resume. This is the most common case in which the writeback hits the maximum number of transactions per partition to process at each iteration; the previous code was buggy in that it could skip transactions.
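The bug class is the usual pagination pitfall: resuming from a positional offset can skip rows. A minimal sketch of the safe pattern (hypothetical names, not the Importer's actual code), resuming strictly after the last processed transaction:

    # Paginate by 'last processed tid', not by offset, so that no
    # transaction is ever skipped across iterations. Names are hypothetical.
    MAX_TXN = 3  # maximum transactions per partition per iteration

    def writeback(fetch_some, process):
        last_tid = 0
        while True:
            batch = fetch_some(last_tid, MAX_TXN)  # tids > last_tid, ascending
            if not batch:
                break
            for tid in batch:
                process(tid)
            last_tid = batch[-1]  # resume strictly after what was processed

    txns = list(range(1, 11))
    fetch = lambda last, n: [t for t in txns if t > last][:n]
    out = []
    writeback(fetch, out.append)
    assert out == txns  # nothing skipped, even with MAX_TXN < len(txns)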
-
- 11 Mar, 2019 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 26 Feb, 2019 2 commits
-
-
Julien Muchembled authored
Example output:

    stress: yes (toggle with F1)
    cluster state: RUNNING
    last oid: 0x44c0
    last tid: 0x3cdee272ef19355 (2019-02-26 15:35:11.002419)
    clients: 2308, 2311, 2302, 2173, 2226, 2215, 2306, 2255, 2314, 2356 (+48)
    8m53.988s (42.633861/s)
    pt id: 4107
        RRRDDRRR
     0: OU......
     1: ..UO....
     2: ....OU..
     3: ......UU
     4: OU......
     5: ..UO....
     6: ....OU..
     7: ......UU
     8: OU......
     9: ..UO....
    10: ....OU..
    11: ......UU
    12: OU......
    13: ..UO....
    14: ....OU..
    15: ......UU
    16: OU......
    17: ..UO....
    18: ....OU..
    19: ......UU
    20: OU......
    21: ..UO....
    22: ....OU..
    23: ......UU
-
Julien Muchembled authored
-
- 25 Feb, 2019 1 commit
-
-
Julien Muchembled authored
getAddress (via __repr__) raised EBADF on closed connectors.
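A minimal sketch of the defensive pattern (assumed, not necessarily the actual fix): __repr__ must never propagate an error from a closed socket.

    # Hypothetical sketch: tolerate EBADF when formatting a connector.
    import socket

    class Connector(object):
        def __init__(self, sock):
            self.socket = sock

        def getAddress(self):
            try:
                return self.socket.getsockname()
            except socket.error:  # e.g. EBADF after close()
                return 'closed'

        def __repr__(self):
            return '<Connector %s>' % (self.getAddress(),)

    s = socket.socket()
    s.close()
    print(repr(Connector(s)))  # '<Connector closed>' instead of a traceback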
-
- 31 Dec, 2018 7 commits
-
-
Julien Muchembled authored
In functional tests (or anything reusing this framework), the mapping could be incorrect at the beginning of logs.
-
Julien Muchembled authored
Corrupted logs cause neolog to fail with the following error:

    AttributeError: 'Log' object has no attribute 'uuid_str'
-
Julien Muchembled authored
This makes commit 3c7a3160 (storage: speed up reads by indexing 'obj' primarily by 'oid') effective for SQLite. The fake changes in the test data are there because we don't force an upgrade for this optimization.
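The idea of the optimization, sketched for SQLite (table and column names are assumptions based on the commit message, not NEO's verified schema): keying 'obj' by oid before tid clusters all revisions of an object together, so per-oid reads become a single index range scan.

    # Illustrative sketch only; the real NEO schema may differ.
    import sqlite3
    con = sqlite3.connect(':memory:')
    con.execute("""CREATE TABLE obj (
        partition INTEGER NOT NULL,
        oid INTEGER NOT NULL,
        tid INTEGER NOT NULL,
        data BLOB,
        PRIMARY KEY (partition, oid, tid))""")  # oid before tid
    con.execute("INSERT INTO obj VALUES (0, 1, 10, x'00')")
    con.execute("INSERT INTO obj VALUES (0, 1, 20, x'01')")
    # Loading the latest revision of an oid walks one contiguous key range:
    row = con.execute("SELECT tid, data FROM obj WHERE partition=? AND oid=?"
                      " ORDER BY tid DESC LIMIT 1", (0, 1)).fetchone()
    assert row[0] == 20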
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
Commit aa4d621d broke log rotation, and neolog sometimes failed to read the new format.
-
- 05 Dec, 2018 1 commit
-
-
Julien Muchembled authored
neolog has new options: -N for the old behaviour, and -C to show the cluster name.
-
- 21 Nov, 2018 4 commits
-
-
Julien Muchembled authored
Since commit 50e7fe52, some code can be simplified.
-
Julien Muchembled authored
This fixes a bug that could manifest as follows:

    Traceback (most recent call last):
      File "neo/client/app.py", line 432, in load
        self._cache.store(oid, data, tid, next_tid)
      File "neo/client/cache.py", line 223, in store
        assert item.tid == tid, (item, tid)
    AssertionError: (<CacheItem oid='\x00\x00\x00\x00\x00\x00\x00\x01'
      tid='\x03\xcb\xc6\xca\xfd\xc7\xda\xee'
      next_tid='\x03\xcb\xc6\xca\xfd\xd8\t\x88' data='...' counter=1 level=1
      expire=10000 prev=<...> next=<...>>, '\x03\xcb\xc6\xca\xfd\xd8\t\x88')

The big changes in the threaded test framework are required because we need to reproduce a race condition between client threads, and this conflicts with the serialization of epoll events (deadlock).
-
Julien Muchembled authored
This was found when stress-testing a big cluster. 1 client node was stuck:

    (Pdb) pp app.dispatcher.__dict__
    {'lock_acquire': <built-in method acquire of thread.lock object at 0x7f788c6e4250>,
     'lock_release': <built-in method release of thread.lock object at 0x7f788c6e4250>,
     'message_table': {140155667614608: {},
                       140155668875280: {},
                       140155671145872: {},
                       140155672381008: {},
                       140155672381136: {},
                       140155672381456: {},
                       140155673002448: {},
                       140155673449680: {},
                       140155676093648: {170: <neo.lib.locking.SimpleQueue object at 0x7f788a109c58>},
                       140155677536464: {},
                       140155679224336: {},
                       140155679876496: {},
                       140155680702992: {},
                       140155681851920: {},
                       140155681852624: {},
                       140155682773584: {},
                       140155685988880: {},
                       140155693061328: {},
                       140155693062224: {},
                       140155693074960: {},
                       140155696334736: {278: <neo.lib.locking.SimpleQueue object at 0x7f788a109c58>},
                       140155696411408: {},
                       140155696414160: {},
                       140155696576208: {},
                       140155722373904: {}},
     'queue_dict': {140155673622936: 1, 140155689147480: 2}}

140155673622936 should not be in queue_dict.
-
Julien Muchembled authored
-
- 15 Nov, 2018 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-