- 24 May, 2018 7 commits
-
-
Julien Muchembled authored
It was confusing and there's already the 'Unlock TXN' log just before abort() is called (in this case, it's more a cleanup than an abort).
-
Julien Muchembled authored
Future migration steps are likely to alter tables, possibly with transformation of data, and this is complicated for both supported backends.
-
Julien Muchembled authored
-
Julien Muchembled authored
Some changes in the storage format are minor and applying them automatically would cost too much for big databases. Here, we apply them manually so that testStorageUpgrade will be able to compare dumps. We hope, however, that with improvements like https://jira.mariadb.org/browse/MDEV-12836 we'll be able to implement more migration steps and revert parts of this commit.
-
Julien Muchembled authored
These dumps were generated with an old version of NEO, plus a backport of the test that will use them. In MySQL dumps, --hex-blob was used only for inserts in the 'data' table.
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 17 May, 2018 1 commit
-
-
Julien Muchembled authored
- for FileStorage DB, make sure a transaction index is built at most once
- for other DB types, reopen the DB in the subprocess

Now that we have specific code for FileStorage, the generic case is not tested anymore. We should add a test using ZEO. Or better, and somewhat crazy, one with NEO, but one would need to fix a special case in getObject.
-
- 16 May, 2018 5 commits
-
-
Julien Muchembled authored
The protocol version is increased to ensure that client nodes are able to handle an empty 'extension' field in AnswerTransactionInformation. It also means that once new transactions are written, going back to a previous revision is not possible.
-
Julien Muchembled authored
The correct way to specify a start/stop tid is when constructing the 'source' object, hence the removal of the start/stop args. In fact, source.iterator() does not always take such args. On the other hand, when resuming an import, Application.importFrom must cope with an incomplete preindex.
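
A minimal sketch of the idea, assuming a ZODB-like source whose FileStorage.iterator(start, stop) accepts a range; the wrapper and its names are illustrative, not NEO's importer API:

    # Bind the start/stop tids when constructing the source, so that
    # callers only ever call iterator() without arguments.
    class BoundedSource(object):
        def __init__(self, storage, start=None, stop=None):
            self._storage = storage
            self._start = start
            self._stop = stop

        def iterator(self):
            # FileStorage.iterator(start, stop) takes a range; other
            # sources may not, which is why the range is bound here.
            return self._storage.iterator(self._start, self._stop)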
-
Julien Muchembled authored
Same as the previous commit: cosmetic only, so optional.
-
Julien Muchembled authored
'title' means both the process name and the command line. This is cosmetic, so it won't fail if the 'setproctitle' module is not available.
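
A minimal sketch of such optional dependency handling; the title string below is only an example:

    try:
        from setproctitle import setproctitle
    except ImportError:
        # cosmetic only: silently do nothing when the module is missing
        def setproctitle(title):
            pass

    setproctitle('neo: example title')  # hypothetical title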
-
Julien Muchembled authored
A new subprocess is used to:
- fetch data from the source DB
- repickle to change oids (when merging several DBs)
- compress
- checksum

This is mostly useful for the second step, which is by far the slowest one and does not release the GIL. By using a second CPU core, it is also often possible to use a better compression algorithm for free (e.g. zlib=9). Actually, smaller data can speed up the writing process. In addition to greatly speeding up the import by parallelizing fetch+process with write, it also makes the main process more reactive to queries from client nodes.
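
A rough sketch of such a producer/consumer split using the standard multiprocessing module, not NEO's actual implementation; fetch_records() and write() are placeholders:

    import hashlib
    import multiprocessing
    import zlib

    def fetch_records():
        # placeholder for reading (oid, data) pairs from the source DB
        return []

    def producer(queue):
        # fetch + process (compress, checksum) in a separate process,
        # leaving only the writing to the main process
        for oid, data in fetch_records():
            compressed = zlib.compress(data, 9)      # e.g. zlib level 9
            checksum = hashlib.sha1(compressed).digest()
            queue.put((oid, checksum, compressed))
        queue.put(None)                              # sentinel: end of stream

    def import_all(write):
        queue = multiprocessing.Queue(maxsize=1024)
        worker = multiprocessing.Process(target=producer, args=(queue,))
        worker.start()
        for item in iter(queue.get, None):           # main process only writes
            write(*item)
        worker.join()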
-
- 15 May, 2018 1 commit
-
-
Julien Muchembled authored
The work is done with secondary connections to the underlying databases, asynchronously and in a separate process, so it should have minimal impact on the performance of the storage node. Extra complexity comes from backends that may lose their connection to the database (here MySQL): this commit fully implements reconnection.
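
A simplified sketch of a reconnect-on-failure pattern, assuming the MySQLdb package; the retry policy and helper name are illustrative, not NEO's code:

    import time
    import MySQLdb

    def run_with_reconnect(connect, query, retries=5, delay=1):
        # execute a query on a secondary connection, reopening it when
        # the server goes away; 'connect' returns a new MySQLdb connection
        conn = connect()
        for _ in range(retries):
            try:
                cursor = conn.cursor()
                cursor.execute(query)
                return cursor.fetchall()
            except MySQLdb.OperationalError:
                try:
                    conn.close()
                except MySQLdb.Error:
                    pass
                time.sleep(delay)
                conn = connect()
        raise RuntimeError("could not reconnect to the MySQL server")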
-
- 11 May, 2018 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
For FileStorage DB, this avoids:
- keeping a lock on the source DB during the whole import,
- saving the whole index when the import was resumed.
-
Julien Muchembled authored
-
- 07 May, 2018 4 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 18 Apr, 2018 3 commits
-
-
Julien Muchembled authored
It was disabled by mistake in commit fd80cc30.
-
Julien Muchembled authored
- Stop using NEO source code as sample data.
- For ZODB5, add a test that does not merge several DBs.
-
Julien Muchembled authored
-
- 16 Apr, 2018 3 commits
-
-
Julien Muchembled authored
In the Importer storage backend, the repickler code never really worked with ZODB 5 (use of protocol > 1), and now the test does not pass anymore. The other issues caused by ZODB commit 12ee41c47310156027a674932df34b60de86ba36 are fixed:

  TypeError: list indices must be integers, not binary
  ValueError: unsupported pickle protocol: 3

Although not necessary as long as we don't support Python 3, this commit also replaces `str` by `bytes` in a few places.
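
For illustration only, here is a generic persistent-reference rewrite built on the stdlib pickle module; NEO's repickler works at the pickle opcode level and does not import application classes, so this is a simplification with hypothetical names:

    import io
    import pickle

    class _Ref(object):
        # wraps a persistent reference without resolving it
        def __init__(self, ref):
            self.ref = ref

    class _Unpickler(pickle.Unpickler):
        def persistent_load(self, ref):
            return _Ref(ref)

    class _Pickler(pickle.Pickler):
        def __init__(self, out, change_ref):
            pickle.Pickler.__init__(self, out, 1)  # repickle with protocol 1
            self.change_ref = change_ref

        def persistent_id(self, obj):
            if isinstance(obj, _Ref):
                return self.change_ref(obj.ref)    # e.g. remap the oid
            return None                            # pickle everything else as usual

    def repickle(data, change_ref):
        obj = _Unpickler(io.BytesIO(data)).load()
        out = io.BytesIO()
        _Pickler(out, change_ref).dump(obj)
        return out.getvalue()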
-
Julien Muchembled authored
-
Julien Muchembled authored
When importing a FileStorage DB without interruption and without having to serve client nodes, the index built by speedupFileStorageTxnLookup is useless. Such a case happens when doing simulation tests, and on a DB with many oids, building this index can take a lot of time and memory for nothing.
-
- 13 Apr, 2018 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
This was forgotten in commit 5de0ff3a.
-
- 12 Apr, 2018 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
The Importer storage backend already does this.
-
- 10 Apr, 2018 1 commit
-
-
Julien Muchembled authored
This fixes a random failure in testSafeTweak:

  failureException: 'UU.|U.U|.UU' != 'UU.|.UU|U.U'
-
- 29 Mar, 2018 2 commits
-
-
Julien Muchembled authored
This is a follow-up of commit 2ca7c335, which changed 'tweak' not to discard readable cells too quickly. The scenario of a storage being lost while it still has feeding cells was forgotten: these must be discarded immediately, otherwise we end up with more up-to-date cells than wanted. Without the change in outdate(), testSafeTweak would end with:

  UU.|U.U|UUU

Once replication is optimized not to always restart checking cells from the beginning:
- Remembering that an out-of-date cell was feeding could be a safer option, but it may not be worth the extra complexity.
- Another possibility may be to replace the FEEDING state with an automatic partial tweak that only discards excess up-to-date cells whenever a cell becomes up-to-date.
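
As a toy illustration of the rule above (states and names are schematic, not NEO's PartitionTable API): when the node owning a cell is lost, a feeding cell is dropped at once, while an up-to-date one is merely outdated.

    # schematic cell states, loosely matching the 'UU.|...' strings above:
    # 'U' up-to-date, 'O' out-of-date, 'F' feeding, '.' no cell
    def on_storage_lost(state):
        if state == 'F':
            return '.'   # a lost feeding cell is discarded immediately
        if state == 'U':
            return 'O'   # a lost readable cell only becomes out-of-date
        return state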
-
Julien Muchembled authored
-
- 20 Mar, 2018 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 14 Mar, 2018 1 commit
-
-
Julien Muchembled authored
For records that undo object creation, None values are used at the backend level whereas the protocol is not designed to serialize None for any field. Therefore, a dance is done in many places around packet serialization, using the specific 0/ZERO_HASH/'' triplet to represent a deleted oid. For replication, it was missing at the sender side, leading to the following crash:

  Traceback (most recent call last):
    File "neo/storage/app.py", line 147, in run
      self._run()
    File "neo/storage/app.py", line 178, in _run
      self.doOperation()
    File "neo/storage/app.py", line 257, in doOperation
      next(task_queue[-1]) or task_queue.rotate()
    File "neo/storage/handlers/storage.py", line 271, in push
      conn.send(Packets.AddObject(oid, *object), msg_id)
    File "neo/lib/protocol.py", line 234, in __init__
      self._fmt.encode(buf.write, args)
    File "neo/lib/protocol.py", line 345, in encode
      return self._trace(self._encode, writer, items)
    File "neo/lib/protocol.py", line 334, in _trace
      return method(*args)
    File "neo/lib/protocol.py", line 367, in _encode
      item.encode(writer, value)
    File "neo/lib/protocol.py", line 345, in encode
      return self._trace(self._encode, writer, items)
    File "neo/lib/protocol.py", line 342, in _trace
      raise ParseError(self, trace)
  ParseError: at add_object/checksum:
    File "neo/lib/protocol.py", line 553, in _encode
      assert len(checksum) == 20, (len(checksum), checksum)
  TypeError: object of type 'NoneType' has no len()
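
A small sketch of the sentinel convention mentioned above; the helper name and field order are hypothetical:

    ZERO_HASH = b'\0' * 20   # null SHA-1, matching the 20-byte checksum assert

    def serializable_object(compression, checksum, data):
        # the protocol cannot serialize None, so a record undoing object
        # creation is sent as the 0/ZERO_HASH/'' sentinel triplet
        if data is None:
            return 0, ZERO_HASH, b''
        return compression, checksum, data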
-
- 13 Mar, 2018 1 commit
-
-
Julien Muchembled authored
-
- 02 Mar, 2018 2 commits
-
-
Julien Muchembled authored
Before, it waited for upstream activity until all partitions were touched. However, when upstream is idle, the backup cluster could remain stuck forever if it was interrupted while some cells were still late.
-
Julien Muchembled authored
The 'min_tid < new_tid' assertion failed when jumping to the past.
-