- 15 Jun, 2023 6 commits
-
-
Levin Zimmermann authored
This patch's origin is !20. With the recent changes on kirr/neo!2 and levin.zimmermann/wendelin.core@f546d836 we can already use WCFS with multiple master nodes in combination with this patch. Until we proceed in !20, we can already use NEO/py with this patch in order to test WCFS on our wind clone.

---

Original patch content:

When a node tries to connect to another node, it initially sends a 'RequestIdentification' packet. The other node can reply either with 'AcceptIdentification' or, in the case of a secondary master, with 'NotPrimaryMaster'. In the second case the message id differs from the initial request's message id. This makes it difficult for a multi-threaded implementation to process this answer: due to the different msg/connection id, the multi-threaded implementation tries to process the incoming message in a different thread, while the requesting thread waits forever for its peer's reply.

The most straightforward solution is to use the same connection id for both possible answers to the 'RequestIdentification' packet. This doesn't break the existing NEO/py implementation and is only a small patch. A workaround in a multi-threaded implementation, on the other hand, seems to be much more complicated and time-consuming. Finally, it also makes sense semantically, because "Message IDs are used to identify response packets" and in the given context 'NotPrimaryMaster' *is* the de facto response to a 'RequestIdentification'.

Because the msgid is the same now, '_ask' uses the 'PrimaryBootstrapHandler' (if it's the same connection) [1], so it may also be necessary to move 'notPrimaryMaster' to 'PrimaryBootstrapHandler'.

[1] https://lab.nexedi.com/nexedi/neoppod/blob/ecc9c63c7/neo/lib/threaded_app.py#L142
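To make the threading issue concrete, here is a minimal sketch (not NEO's actual code, all names are illustrative) of a client that routes replies to waiting threads purely by msg_id; an answer sent back under a fresh id never reaches the thread blocked in ask():

```python
import queue
import threading

class Link:
    """Toy threaded request/reply matcher keyed by msg_id (illustration only)."""
    def __init__(self, send):
        self._send = send           # callable(msg_id, packet) that puts data on the wire
        self._waiters = {}          # msg_id -> Queue of the thread waiting for the answer
        self._lock = threading.Lock()

    def ask(self, msg_id, packet):
        q = queue.Queue()
        with self._lock:
            self._waiters[msg_id] = q
        self._send(msg_id, packet)  # e.g. RequestIdentification
        return q.get(timeout=10)    # blocks until dispatch() routes the answer back

    def dispatch(self, msg_id, packet):
        # Called by the receiving thread for every incoming packet.
        with self._lock:
            q = self._waiters.pop(msg_id, None)
        if q is None:
            # An answer sent under a *different* id (e.g. NotPrimaryMaster with a
            # fresh msg_id) ends up here, while ask() keeps waiting forever.
            raise LookupError("no thread is waiting for msg_id %r" % msg_id)
        q.put(packet)               # AcceptIdentification or NotPrimaryMaster
```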
-
Kirill Smelkov authored
-
Kirill Smelkov authored
parse_qsl no longer treats ';' as a valid query separator, for security reasons: most proxies did not do so, and it was possible to poison proxy caches due to the difference in query-separator handling (see bugs.python.org/issue42967). For credentials handling we don't have any proxy here, and it is still perfectly valid to use ';' as the credentials separator.

-> Fix it with a ';' -> '&' replace workaround before feeding the credentials string to parse_qsl.

Amends: b9a42957.
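The workaround boils down to something like the following sketch (the field names come from the URL change described below; the rest is illustrative):

```python
from urllib.parse import parse_qsl  # urlparse.parse_qsl on Python 2

cred = "ca=ca.crt;cert=node.crt;key=node.key"
# Since the fix for bugs.python.org/issue42967, ';' is no longer accepted as a
# query separator, so normalize it to '&' before parsing.
pairs = dict(parse_qsl(cred.replace(';', '&'), strict_parsing=True))
print(pairs)  # {'ca': 'ca.crt', 'cert': 'node.crt', 'key': 'node.key'}
```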
-
Kirill Smelkov authored
- Move ca/cert/key out of the query part into the credentials part of the URL.
- As a consequence, move the cluster name out of the credentials part into the path part.
- Allow both neo:// and neos:// schemes to be used. neo:// always means without SSL and neos:// always means with SSL. Even if a neos:// URL comes without credentials embedded in it, they can be obtained from elsewhere.

I need this change in wendelin.core 2, which needs to take a zurl and compute a mountpoint path that corresponds to it, so that several zopes that use the same NEO storage can be associated with the same single WCFS instance.
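A rough sketch of how such a zurl could be split, assuming (purely for illustration; the authoritative format is defined by NEO/wendelin.core, not here) that the credentials sit in the userinfo part and the cluster name in the path:

```python
from urllib.parse import urlsplit, parse_qsl

zurl = "neos://ca=ca.crt;cert=node.crt;key=node.key@127.0.0.1:2100/mycluster"
u = urlsplit(zurl)
ssl = (u.scheme == "neos")                    # neos:// always means SSL, neo:// never
cred = dict(parse_qsl((u.username or "").replace(';', '&')))  # see the parse_qsl workaround above
cluster = u.path.lstrip('/')                  # the cluster name now lives in the path
print(ssl, cred, cluster)
```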
-
Kirill Smelkov authored
This is the 2020 edition of my original patch from 2016 ( kirr/neo@dd3bb8b4 ). It was described in my NEO/go article ( https://navytux.spb.ru/~kirr/neo.html ) in the text quoted below:

Then comes the link layer, which provides the service to exchange messages over the network. In current NEO/py every message has a `msg_id` field that, similarly to ZEO/py, marks a request with a serial number, with the requester then waiting for the corresponding answer to come back with the same message id. Even though there might be several reply messages coming back to a single request, as e.g. in the NEO/py asynchronous replication code[0], this approach is still similar to the ZEO/py remote procedure call (RPC) model because of its single-request semantic.

One of the places where this limitation shows is the same replicator code, where transaction metadata is fetched first with a first series of RPC calls, and only then object data is fetched with a second series of RPC calls. This may not be very good e.g. when there is a lot of transactions/data to synchronize, because 1) it puts an assumption on, and so constrains, the storage backend model of how data is stored (separate SQL tables for metadata and data), and 2) no data will be synchronized at all until all transactions are synchronized first. The second point prevents, for example, the syncing storage from in turn providing service, even if read-only, for the already fetched data. What would maybe be more useful is for the requester to send a request saying that it wants to fetch ZODB data in the `tid_min..tid_max` range, and then the sender sending an intermixed stream of metadata/data in a zodbdump-like format.

Keeping in mind this, and other examples, NEO/go shifts from thinking about protocol logic as RPC to thinking of it as a more general network protocol, and settles on providing a general connection-oriented message exchange service[1]: whenever a message with a new `msg_id` is sent, a new connection is established, multiplexed on top of a single node-node TCP link. Then it is possible to send/receive arbitrary messages back and forth over it until the so-established connection is closed. This works transparently to NEO/py, which still thinks it operates in simple RPC mode, because of the way messages are put on the wire and because simple RPC is a subset of the general exchange. The `neonet` module also provides `DialLink` and `ListenLink` primitives[2] that work similarly to standard Go `net.Dial` and `net.Listen` but wrap the created link into the multiplexing layer. What is actually done this way is very similar to HTTP/2, which also multiplexes multiple general streams on top of a single TCP connection ([3], [4]).

However, if connection ids (sent in place of `msg_id` on the wire) are assigned arbitrarily, two nodes could try to initiate two different new connections to each other with the same connection id. To prevent this kind of conflict, a simple rule can be used: allocate connection ids either even or odd, depending on the role the peer played while establishing the link. HTTP/2 takes a similar approach[5], where "Streams initiated by a client MUST use odd-numbered stream identifiers; those initiated by the server MUST use even-numbered stream identifiers.", with NEO/go doing the same according to who was originally the dialer and who was the listener. However it requires a small patch to be applied on the NEO/py side to increment `msg_id` by 2 instead of 1.

NEO/py currently explicitly specifies `msg_id` for an answer in only a limited set of cases, by default assuming a reply comes to the last received message, whose `msg_id` it remembers globally per TCP link. This approach is error-prone and cannot generally work in cases where several simultaneous requests are received over a single link. For this reason NEO/go does not maintain any such global per-link knowledge and handles every request by always explicitly using the corresponding connection object created at request reception time.

[0] https://lab.nexedi.com/kirr/neo/blob/463ef9ad/neo/storage/replicator.py
[1] https://lab.nexedi.com/kirr/neo/blob/463ef9ad/go/neo/neonet/connection.go
[2] https://lab.nexedi.com/kirr/neo/blob/463ef9ad/go/neo/neonet/newlink.go
[3] https://tools.ietf.org/html/rfc7540#section-5
[4] https://http2.github.io/faq/#why-is-http2-multiplexed
[5] https://tools.ietf.org/html/rfc7540#section-5.1.1

It can be criticized, but the fact is:

- it does no harm to NEO/py and is backward-compatible: a NEO/py node without this patch can still successfully connect to and interoperate with another NEO/py node with this patch.
- it is required for NEO/go to be able to interoperate with NEO/py. Both client and server parts of NEO/go use the same neonet module to exchange messages.
- the NEO/go client is used by wendelin.core 2, which organizes access to on-ZODB ZBigFile data via the WCFS filesystem implemented in Go.

So on one side this patch is small, simple and does not do any harm to NEO/py. On the other side it is required for NEO/go and wendelin.core 2. To me this clearly indicates that there should be no good reason to reject inclusion of this patch into NEO/py.

--------

My original patch from 2016 came with corresponding adjustments to neo/tests/testConnection.py ( kirr/neo@dd3bb8b4 ), but commit f6eb02b4 (Remove packet timeouts; 2017-05-04) removed testConnection.py completely and, if I understand correctly, did not add any other test to compensate for that. So I'm not trying to restore my Connection tests either. Anyway, with this patch there is no regression in any other existing NEO/py test.

--------

My original patch description from 2016 follows:

- even for server-initiated streams
- odd for client-initiated streams

This way I will be able to use Pkt.msg_id as a real stream_id in go's Conn, because with the even/odd scheme there is no possibility of id conflicts between the two peers.
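A toy sketch of the even/odd id allocation described above (names and structure are illustrative, not the actual neonet or NEO/py API):

```python
import itertools

class IdAlloc:
    """Connection ids that can never collide between the two peers of a link."""
    def __init__(self, dialed):
        # One side starts odd, the other even; both then step by 2
        # (hence msg_id += 2 instead of += 1 on the NEO/py side).
        self._ids = itertools.count(1 if dialed else 0, 2)

    def next_id(self):
        return next(self._ids)

client, server = IdAlloc(dialed=True), IdAlloc(dialed=False)
print([client.next_id() for _ in range(3)])  # [1, 3, 5]
print([server.next_id() for _ in range(3)])  # [0, 2, 4]
```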
-
Levin Zimmermann authored
Between NEO/go and NEO/py there are various incompatibilities. Similarly to the 'wc2' branch [1], this branch aims to transparently bridge those incompatibilities. Unlike the 'wc2' branch, we apply those patches here on top of an up-to-date NEO/py. The background is that the current NEO/go version still uses the old pre-msgpack protocol and doesn't fully implement the new msgpack protocol yet. So the 'wc2' branch still builds upon an old NEO/py version (1.12, from the oldproto branch). The diff between the wc2-future branch and master, on the other hand, aims to be as minimal as possible and to contain only the compatibility patches. [1] https://lab.nexedi.com/nexedi/neoppod/tree/wc2 and 739096b7
-
- 04 Apr, 2023 4 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
Found by running testPruneOrphan many times. Once I even got:

SystemError: NULL result without error in PyObject_Call
-
- 09 Mar, 2023 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
This reverts a wrong change in commit 30a02bdc ("importer: new option to write back new transactions to the source database").
-
- 19 Feb, 2023 1 commit
-
-
Julien Muchembled authored
-
- 16 Feb, 2023 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 14 Feb, 2023 3 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
It's been many years since we last got 'array' objects; no idea exactly when that changed.
-
Julien Muchembled authored
-
- 10 Feb, 2023 1 commit
-
-
Julien Muchembled authored
Like in commit 243c1a0f ("sqlite: optimize storage of metadata"), the fake changes in the test data are there because we don't force an upgrade for this optimization.
-
- 02 Feb, 2022 1 commit
-
-
Kirill Smelkov authored
Starting from zodbpickle 2, its binary class does not allow users to set arbitrary attributes, so

    binary._pack = bytes.__str__

fails with

    TypeError: can't set attributes of built-in/extension type 'zodbpickle.binary'

-> Fix it by explicitly checking for the binary type on encoding instead of setting binary._pack.

See slapos@27f574bc for pre-history.

/cc @jerome
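A minimal sketch of checking for the binary type at encoding time instead of monkey-patching binary._pack (the helper name is made up for illustration; it is not NEO's actual encoder):

```python
from zodbpickle import binary

def pack_value(value):
    # zodbpickle >= 2 forbids setting attributes on the binary class, so the
    # old `binary._pack = bytes.__str__` trick no longer works; check the
    # type explicitly at encoding time instead.
    if isinstance(value, binary):
        return bytes(value)
    return value
```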
-
- 04 Jun, 2021 1 commit
-
-
Julien Muchembled authored
Traceback (most recent call last):
  ...
  File ".../neo/lib/handler.py", line 75, in dispatch
    method(conn, *args, **kw)
  File ".../neo/admin/handler.py", line 174, in wrapper
    return func(self, name, *args, **kw)
  File ".../neo/admin/handler.py", line 190, in notifyMonitorInformation
    self.app.updateMonitorInformation(name, **info)
  File ".../neo/admin/app.py", line 290, in updateMonitorInformation
    self._notify(self.operational)
  File ".../neo/admin/app.py", line 315, in _notify
    body += '', name, ' ' + backup.formatSummary(upstream)[1]
  File ".../neo/admin/app.py", line 83, in formatSummary
    tid = self.ltid
AttributeError: 'Backup' object has no attribute 'ltid'
-
- 11 May, 2021 1 commit
-
-
Julien Muchembled authored
-
- 02 Apr, 2021 5 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 22 Mar, 2021 1 commit
-
-
Julien Muchembled authored
-
- 04 Mar, 2021 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 15 Jan, 2021 2 commits
-
-
Julien Muchembled authored
The purpose of suppress_ragged_eofs=False was to micro-optimize the normal case: when there's no EOF. But commit 061cd5d8 showed that this option only turns a ragged EOF into an exception. It may be easier for alternate NEO implementations to close the SSL connection properly. Or the performance benefit was not worth the risk of freezing a NEO process.
-
Kirill Smelkov authored
Testing the NEO/go client against the NEO/py server revealed a bug in NEO/py SSL handling: a proper, non-ragged EOF from a peer is ignored and leads to a hang in an infinite loop inside _SSL.receive, with read_buf memory growing indefinitely. Details are below:

NEO/py wraps raw sockets with ssl.wrap_socket(suppress_ragged_eofs=False), which instructs the SSL layer to convert an unexpected EOF while receiving a TLS record into an SSLEOFError exception. However, when the remote peer properly closes its side of the connection, socket.read() still returns b'' to report a non-ragged, regular EOF:

https://github.com/python/cpython/blob/v2.7.18/Lib/ssl.py#L630-L650

The code was handling SSLEOFError but not a b'' return from socket recv. Thus after the NEO/go client disconnected and properly closed its side of the connection, the code started to loop indefinitely in _SSL.receive under `while 1`, with the b'' returned by self.socket.recv() appended to read_buf again and again.

-> Fix it by detecting non-ragged EOF as well and, similarly to how SSLEOFError is handled, converting it into self._error('recv', None).

See merge request nexedi/neoppod!17
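A simplified sketch of the distinction the fix relies on (not the actual _SSL.receive code): SSLEOFError signals a ragged EOF, while an empty read signals a proper EOF and must terminate the loop.

```python
import ssl

def recv_until_eof(sock):
    """Read from an SSL socket until the peer closes its side."""
    chunks = []
    while True:
        try:
            data = sock.recv(4096)
        except ssl.SSLEOFError:
            # Ragged EOF: the connection was cut in the middle of a TLS record
            # (this case was already handled before the fix).
            raise
        if not data:
            # b'' is a proper, non-ragged EOF: the peer closed its side cleanly.
            # This is the case the fix detects instead of appending b'' to the
            # read buffer forever.
            break
        chunks.append(data)
    return b"".join(chunks)
```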
-
- 11 Jan, 2021 4 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
-
Julien Muchembled authored
The scenario that was described in the comments was meaningless because S1 never goes out-of-date.
-
- 02 Oct, 2020 1 commit
-
-
Julien Muchembled authored
For the master, the purpose of -m/--masters is to specify the addresses of the other master nodes, since its own address is already known via -b/--bind. Therefore, an empty value for -m/--masters is valid. The user remains free to repeat the -b value in -m.

More generally, a node may choose to only specify master addresses via -D/--dynamic-master-list, so the check that at least one master address is specified is moved to where the NodeManager is expected to be initialized.
-
- 29 Sep, 2020 1 commit
-
-
Julien Muchembled authored
-
- 25 Sep, 2020 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
The time complexity of the previous one was too bad. With several tens of concurrent transactions, we saw commits take minutes to complete and the whole application looked frozen.

This new algorithm is much simpler. Instead of asking the oldest transaction to somewhat restart (we used the "rebase" term because the concept was similar to what git-rebase does), the storage gives it priority and the newest transaction is asked to relock (this request is ignored if the vote already happened, which means there was actually no deadlock).

testLocklessWriteDuringConflictResolution was initially more complex because Transaction.written (client) ignored KeyError (which is not the case anymore since commit 8ef1ddba).
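A very rough sketch of the priority rule described above (the data structures and names are invented for illustration; it only assumes that ttids order transactions by age):

```python
class LockTable:
    """Each oid has at most one write-lock holder, identified by its ttid."""
    def __init__(self):
        self.locked = {}  # oid -> ttid currently holding the write lock

    def lock(self, oid, ttid, ask_relock):
        holder = self.locked.get(oid)
        if holder is None:
            self.locked[oid] = ttid
            return True
        if ttid < holder:
            # The requester is older: give it priority and ask the newer
            # holder to relock (the holder ignores this if it already voted,
            # i.e. there was actually no deadlock).
            ask_relock(holder)
        return False  # the requester waits until the lock is released
```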
-