- 16 Oct, 2023 1 commit
Julien Muchembled authored
This is still pack without garbage collection, and without deleting any transaction metadata ('trans' table).

Partial pack means that the client can pass a list of oids: only these oids will be packed. No API is defined yet at IStorage level.

Storage nodes pack in the background, independently from other storage nodes, partition by partition, and calling IStorage.pack() returns immediately (though internally, NEO does have a mechanism to wait until it's done, which can be required for some ZODB unit tests).

This new implementation also introduces the concept of signing pack orders. The idea is that calling IStorage.pack() only records a pack order in the database, which can be reviewed/approved/rejected using a UI that is left to be done. For the moment, pack orders are automatically approved (by the master). Internally, pack orders are stored as extra metadata of a transaction. IOW, IStorage.pack() implies the commit of an (empty) transaction.

IStorage.pack() can be called without waiting for the previous one to complete. Pack orders are processed in the same order as they are requested:
- an unsigned pack order blocks the processing of any newer pack order;
- rejected pack orders are ignored.

Approving a pack order also triggers pack on backup clusters. That's the simplest way to have everything consistent. Maybe later we could identify scenarios where it would be ok to unsign pack orders during asynchronous replication.

The feature to check replicas is marked as experimental because it is not aware of differences that can happen during pack operations.

_______________________________________________________________________

About concurrency within the storage node, a first implementation extended what was done to delete partitions in background (see previous commit). But here, the job can't easily be split into slices that are never too big:
- it's simpler to never split the processing of an oid, but this can freeze the application for a long time when packing an oid that was modified many times (e.g. 30 min for an oid with 20 million historical records);
- an attempt to let an oid be processed in several passes then proved inefficient, maybe due to a limit in RocksDB (packing the oid in the above example would take days, during which NEO is significantly slower).

So background database jobs were moved to a separate thread, using a separate connection to the underlying database. This is obviously only useful for the MySQL backend. In order to share as much code as possible between backends, SQLite also does the work in a separate thread, but sharing the main connection instead of opening a separate one (so such a backend would not be suited in the above example).

But deleting raw data with a secondary connection is not possible without fsyncing too often (or transaction isolation issues...): these deletions are deferred by recording them in a new table, which is processed later with the main connection. This is not so bad because the actual deletion of raw data is usually more efficient this way (more sequential IO), as the sketch below illustrates.
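A minimal sketch of this deferred-deletion pattern, using sqlite3 for illustration; the 'todel' and 'data' table names and schemas are hypothetical, not NEO's actual ones:

```python
import sqlite3

def defer_deletion(secondary, data_ids):
    # Background thread: only record which raw data to delete.
    secondary.executemany("INSERT INTO todel VALUES (?)",
                          [(i,) for i in data_ids])
    secondary.commit()

def process_deferred(main, batch=1000):
    # Main connection, later: perform the actual deletions, which
    # tends to be more sequential (hence cheaper) IO.
    rows = main.execute("SELECT id FROM todel LIMIT ?",
                        (batch,)).fetchall()
    for row in rows:
        main.execute("DELETE FROM data WHERE id=?", row)
    main.executemany("DELETE FROM todel WHERE id=?", rows)
    main.commit()
    return len(rows)
```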
Here are a few numbers:
- without load: 10h45 (12h for the first reimplementation)
- with a load that normally takes 6h58:
  - load: 7h33 (so 8.4% slower)
  - pack: 15h36 (+4h51)

As explained above, the pack of a partition is split in 2 steps:
- the longest one (here 78% without load) should have negligible performance impact on the application, because the work is done in a separate thread with a secondary connection, and also with something to minimize GIL impact by prioritizing the main thread;
- the shortest one (22%) processes the deferred deletions, with even lower priority than replication: it tries to split the work into tasks that take ~10ms, roughly as sketched below.
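A hedged sketch of that splitting, reusing process_deferred from the sketch above; the generator-based yielding and the adaptive batch size are illustrative, not NEO's actual scheduler:

```python
import time

def deferred_deletion_task(main, target=0.01):
    # Generator that yields between batches, so the poll loop can run
    # higher-priority work (e.g. replication) in between.
    batch = 100
    while True:
        start = time.time()
        if not process_deferred(main, batch):
            return  # nothing left to delete
        elapsed = time.time() - start
        if elapsed > 0:
            # Rescale the batch so that one step lasts about 10ms.
            batch = max(1, int(batch * target / elapsed))
        yield
```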
- 25 Sep, 2020 1 commit
Julien Muchembled authored
The time complexity of the previous algorithm was too bad. With several tens of concurrent transactions, we saw commits take minutes to complete and the whole application looked frozen.

This new algorithm is much simpler. Instead of asking the oldest transaction to somewhat restart (we used the "rebase" term because the concept was similar to what git-rebase does), the storage gives it priority and the newest is asked to relock (this request is ignored if the vote already happened, which means there was actually no deadlock); a sketch follows below.

testLocklessWriteDuringConflictResolution was initially more complex because Transaction.written (client) ignored KeyError (which is not the case anymore since commit 8ef1ddba).
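An illustrative sketch of the priority rule, assuming TTIDs order transactions by age (smaller = older); this is not NEO's actual locking code:

```python
def lock_object(oid, ttid, locked, relock_requests):
    # 'locked' maps oid -> ttid of the transaction holding the lock.
    owner = locked.get(oid)
    if owner is None:
        locked[oid] = ttid  # no conflict
    elif ttid < owner:
        # Priority to the oldest transaction: the newest one, which
        # currently holds the lock, is asked to relock. The client
        # ignores this request if it has already voted, in which case
        # there was actually no deadlock.
        relock_requests.add(owner)
    # else: the requester is newer and simply waits for the lock.
```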
- 16 Aug, 2019 1 commit
Julien Muchembled authored
This task is done by the admin node, in 2 possible ways:
- email notifications, as soon as some state changes;
- a new 'neoctl print summary' command that can be used periodically to check the health of the database.

They report the same information.

About backup clusters: the admin of the main cluster also monitors selected backup clusters, with the help of their admin nodes. Internally, when a backup master node connects to the upstream master node, it receives the address of the upstream admin node and forwards it to its admin node, which is therefore able to connect to the upstream admin node. So the 2 admin nodes remain connected and communicate in 2 ways:
- the backup node notifies upstream about the health of the backup cluster;
- the upstream node queries the backup node periodically to check whether replication is not too late (roughly as sketched below).

TODO: A few things are hard-coded and we may want to configure them:
- backup lateness is checked every 10 min;
- backup is expected to never be late.

There's also no delay to prevent 2 consecutive emails from having the same Date: (unfortunately, RFC 5322 does not allow sub-second precision), in which case the MUA can display them in random order. This is mostly confusing when one notification is OK and the other is not, because one may wonder if there's a new problem.
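A hedged sketch of the periodic lateness check; the method names (ask_last_tid, last_tid, notify_late) are hypothetical:

```python
import threading

CHECK_PERIOD = 600  # the hard-coded 10 min mentioned in the TODO

def check_backup(upstream_admin, backup_admin):
    # Compare the last tid replicated by the backup with the last tid
    # committed upstream; any lag is reported, since the backup is
    # expected to never be late.
    if backup_admin.ask_last_tid() < upstream_admin.last_tid():
        upstream_admin.notify_late(backup_admin)
    threading.Timer(CHECK_PERIOD, check_backup,
                    (upstream_admin, backup_admin)).start()
```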
- 05 Jun, 2019 1 commit
Julien Muchembled authored
Explicit fields in RequestIdentification are only suitable for the actual identification, or for properties that most nodes have. But some current (and future) features require passing values (always and as soon as possible) for tasks that are unrelated to identification.
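A sketch of the resulting packet layout; every field name and type below is an assumption for illustration, not NEO's actual definition:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class RequestIdentification:
    # Explicit fields: identification proper, or properties that
    # most nodes have.
    node_type: int
    uuid: int
    address: Optional[Tuple[str, int]]
    name: str
    id_timestamp: float
    # Free-form mapping for feature-specific values that must reach
    # the peer as early as possible, avoiding one new explicit field
    # per feature.
    extra: dict
```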
- 20 May, 2019 1 commit
Julien Muchembled authored
- 28 Apr, 2019 1 commit
Julien Muchembled authored
With the switch to msgpack, there was no schema anymore, whereas it was sometimes used both for automatic conversion (e.g. the last argument of AskStoreTransaction must now be explicitly cast to list) and for type checking. This somewhat reintroduces a kind of schema (sketched below) that:
- is used by the test suite for type checking;
- can be generated automatically from the test suite when one changes the protocol.
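A minimal sketch of such a schema; the format and the AskStoreTransaction entry are assumptions for illustration, not NEO's actual protocol definition:

```python
# Maps a packet name to the expected type of each argument.
SCHEMA = {
    'AskStoreTransaction': (bytes, bytes, bytes, bytes, list),
}

def check_packet(name, args):
    # Type-check the decoded arguments of a packet (test suite only).
    expected = SCHEMA[name]
    assert len(args) == len(expected), (name, args)
    for arg, typ in zip(args, expected):
        assert isinstance(arg, typ), (name, arg, typ)

def record_packet(name, args, schema=SCHEMA):
    # To regenerate the schema after a protocol change: record the
    # types actually seen while running the test suite, then dump it.
    schema.setdefault(name, tuple(type(arg) for arg in args))
```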