- 21 Aug, 2020 2 commits
-
-
Julien Muchembled authored
It has never been enabled, and the code to drop partitions will be changed in a way that only 'trans' may still benefit from partitioning. We'll see in the future whether we have cases where 'trans' is too big to delete all rows (of a given partition) in a single query.
-
Julien Muchembled authored
Resetting a storage node could mark all test log entries as being emitted by this storage node. For example:

    16:18:12.9114 S2 #0x0007 AskStoreObject > S1 (...)
-
- 25 Jun, 2020 1 commit
-
-
Julien Muchembled authored
-
- 24 Jun, 2020 1 commit
-
-
Julien Muchembled authored
-
- 12 Jun, 2020 1 commit
-
-
Julien Muchembled authored
======================================================================
FAIL: check_tid_ordering_w_commit (neo.tests.zodb.testBasic.BasicTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "ZODB/tests/BasicStorage.py", line 397, in check_tid_ordering_w_commit
    self.assertEqual(results.pop('lastTransaction'), tids[1])
  File "neo/tests/__init__.py", line 301, in assertEqual
    return super(NeoTestBase, self).assertEqual(first, second, msg=msg)
failureException: '\x03\xd8\x85H\xbffp\xbb' != '\x03\xd8\x85H\xbfs\x0b\xdd'
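Not part of the commit, but a helpful way to read the failure above: a ZODB TID is a 64-bit timestamp, so the two compared values can be decoded with ZODB's TimeStamp class, for example:

    # Decode the two TIDs from the failure above; they differ by only a
    # fraction of a second.
    from ZODB.TimeStamp import TimeStamp

    got = b'\x03\xd8\x85H\xbffp\xbb'          # lastTransaction reported by NEO
    expected = b'\x03\xd8\x85H\xbfs\x0b\xdd'  # tids[1], the value the test expects

    for name, raw in ('got', got), ('expected', expected):
        print('%s: %.6f' % (name, TimeStamp(raw).timeTime()))  # seconds since epoch
    print('delta: %.6f s' % (TimeStamp(expected).timeTime() - TimeStamp(got).timeTime()))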
-
- 11 Jun, 2020 1 commit
-
-
Julien Muchembled authored
This requires ZODB >= 5.6.0
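A minimal, hypothetical way to make such a requirement explicit at runtime (this is not how the commit enforces it):

    # Hypothetical guard, not taken from the commit: fail early if the
    # installed ZODB is older than what this change needs.
    import pkg_resources

    pkg_resources.require('ZODB >= 5.6.0')  # raises VersionConflict if too old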
-
- 29 May, 2020 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 18 May, 2020 1 commit
-
-
Julien Muchembled authored
This fixes a bug where, with only email notification enabled, monitoring stopped checking whether backup clusters are lagging once the status was unchanged since the last check (for lagging, what is compared is the set of lagging backups), until another event woke monitoring up. The code is also simplified: for the moment, there's no need for a different timeout between the normal case and an SMTP failure.
-
- 20 Mar, 2020 1 commit
-
-
Vincent Pelletier authored
-
- 16 Mar, 2020 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 14 Feb, 2020 1 commit
-
-
Julien Muchembled authored
When concurrent transactions fail with different storage nodes (e.g. network issues only between C1-S2 and C2-S1), in such a way that each transaction can be committed but not both (otherwise the cluster would become non-operational), and the first transaction is aborted (between tpc_vote and tpc_finish), the second one wrongly failed with INCOMPLETE_TRANSACTION. And if both transactions could be committed (e.g. more than 1 replica), some nodes were disconnected for nothing.
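For context only (not code from the commit), the ZODB storage-level two-phase commit steps referred to above look like this; MappingStorage and the fake payload are just there to keep the sketch self-contained:

    # Illustration of the window mentioned above: the race that is fixed
    # happens when a voted transaction is aborted between tpc_vote and
    # tpc_finish while another transaction is being committed.
    from ZODB.MappingStorage import MappingStorage
    from ZODB.Connection import TransactionMetaData
    from ZODB.utils import z64

    storage = MappingStorage()
    t = TransactionMetaData()

    storage.tpc_begin(t)
    storage.store(z64, z64, b'fake pickle', '', t)  # stage data for oid 0
    storage.tpc_vote(t)    # storages have locked/validated the stores
    storage.tpc_abort(t)   # aborting here must not break other voted transactions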
-
- 21 Jan, 2020 1 commit
-
-
Julien Muchembled authored
This fixes:

Traceback (most recent call last):
  ...
  File "neo/admin/handler.py", line 200, in answerLastTransaction
    app.maybeNotify(name)
  File "neo/admin/app.py", line 380, in maybeNotify
    self._notify(False)
  File "neo/admin/app.py", line 302, in _notify
    body += '', name, ' ' + backup.formatSummary(upstream)[1]
  File "neo/admin/app.py", line 74, in formatSummary
    tid = self.backup_tid if backup else self.ltid
AttributeError: 'Backup' object has no attribute 'backup_tid'
-
- 10 Jan, 2020 1 commit
-
-
Julien Muchembled authored
This fixes:

Traceback (most recent call last):
  File "neo/master/app.py", line 172, in run
    self._run()
  File "neo/master/app.py", line 182, in _run
    self.playPrimaryRole()
  File "neo/master/app.py", line 314, in playPrimaryRole
    self.backup_app.provideService())
  File "neo/master/backup_app.py", line 101, in provideService
    app.changeClusterState(ClusterStates.STARTING_BACKUP)
  File "neo/master/app.py", line 474, in changeClusterState
    ) or not node.isClient(), (state, node)
AssertionError: (<EnumItem STARTING_BACKUP (4)>, <ClientNode(uuid=C1, state=RUNNING, connection=<ServerConnection(nid=C1, address=127.0.0.1:52430, handler=ClientReadOnlyServiceHandler, fd=59, on_close=onConnectionClosed, server) at 7f38f5628390>) at 7f38f5628ad0>)
-
- 07 Jan, 2020 1 commit
-
-
Julien Muchembled authored
In such a case, it didn't reconnect but thought it was still connected, which eventually led to crashes like:

Traceback (most recent call last):
  ...
  File "neo/admin/handler.py", line 130, in answerClusterState
    self.app.updateMonitorInformation(None, cluster_state=state)
  File "neo/admin/app.py", line 274, in updateMonitorInformation
    self.upstream_admin_conn.send(Packets.NotifyMonitorInformation(kw))
  File "neo/lib/connection.py", line 565, in send
    raise ConnectionClosed
neo.lib.connection.ConnectionClosed
-
- 26 Dec, 2019 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 13 Nov, 2019 1 commit
-
-
Julien Muchembled authored
This fixes:

Traceback (most recent call last):
  File "neo/scripts/neoadmin.py", line 31, in main
    app.run()
  File "neo/admin/app.py", line 179, in run
    self._run()
  File "neo/admin/app.py", line 199, in _run
    self.em.poll(1)
  File "neo/lib/event.py", line 155, in poll
    self._poll(blocking)
  File "neo/lib/event.py", line 220, in _poll
    if conn.readable():
  File "neo/lib/connection.py", line 487, in readable
    self._closure()
  File "neo/lib/connection.py", line 545, in _closure
    self.close()
  File "neo/lib/connection.py", line 534, in close
    handler.connectionFailed(self)
  File "neo/admin/handler.py", line 210, in connectionClosed
    app.connectToUpstreamAdmin()
  File "neo/admin/app.py", line 230, in connectToUpstreamAdmin
    None, None, self.name, None, {}))
  File "neo/lib/connection.py", line 574, in ask
    raise ConnectionClosed
neo.lib.connection.ConnectionClosed
-
- 22 Oct, 2019 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 17 Oct, 2019 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 14 Oct, 2019 5 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
- make the stress process log to stress.log
- log decisions to firewall/kill nodes
- new --backlog option
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
Stress code reuses the admin application class and the latter was changed in commit e434c253.
-
- 16 Aug, 2019 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
Same as commit a00ab78b. It was reverted mistakenly when switching to msgpack.
-
Julien Muchembled authored
This task is done by the admin node, in 2 possible ways:
- email notifications, as soon as some state changes;
- a new 'neoctl print summary' command that can be used periodically to check the health of the database.
They report the same information.

About backup clusters: the admin of the main cluster also monitors selected backup clusters, with the help of their admin nodes. Internally, when a backup master node connects to the upstream master node, it receives the address of the upstream admin node and forwards it to its admin node, which is therefore able to connect to the upstream admin node. So the 2 admin nodes remain connected and communicate in 2 ways:
- the backup node notifies upstream about the health of the backup cluster;
- the upstream node queries the backup node periodically to check whether replication is not too late.

TODO: A few things are hard-coded and we may want to configure them:
- backup lateness is checked every 10 min;
- a backup is expected to never be late.

There's also no delay to prevent 2 consecutive emails from having the same Date: (unfortunately, RFC 5322 does not allow sub-second precision), in which case the MUA may display them in random order. This is mostly confusing when one notification is OK and the other is not, because one may wonder whether there's a new problem.
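As a usage note (not part of the commit), the new command lends itself to an external periodic check; the admin address and the exact neoctl invocation below are assumptions for the sketch:

    # Hedged sketch: poll the admin node with 'neoctl print summary' to watch
    # cluster health. Adapt the address and invocation to the actual deployment.
    import subprocess
    import time

    ADMIN = '127.0.0.1:9999'  # address of the admin node (example value)

    while True:
        summary = subprocess.check_output(
            ['neoctl', '-a', ADMIN, 'print', 'summary'])
        print(summary.decode())  # or feed it to an external monitoring system
        time.sleep(600)          # the commit checks backup lateness every 10 min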
-
- 05 Jun, 2019 2 commits
-
-
Julien Muchembled authored
Explicit fields in RequestIdentification are only suitable for the actual identification, or for properties that most nodes have. But some current (and future) features require passing values (always, and as soon as possible) for tasks that are unrelated to identification.
-
Julien Muchembled authored
What Packet.setId does was overridden by Connection.answer, which would have broken concurrent queries to the admin node (something we currently don't do).
-
- 29 May, 2019 1 commit
-
-
Julien Muchembled authored
-
- 28 May, 2019 1 commit
-
-
Julien Muchembled authored
-
- 24 May, 2019 1 commit
-
-
Julien Muchembled authored
-
- 20 May, 2019 1 commit
-
-
Julien Muchembled authored
-