- 22 Jul, 2024 2 commits
-
-
Levin Zimmermann authored
We need to drop client-specific options so that NEO URIs which differ only in client options, while actually pointing to the same NEO server, are equal after normalization.
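As an illustration of the idea, a minimal normalization sketch follows; the option names and the helper are hypothetical, not the actual wendelin.core code:

    # Sketch: strip client-only query options from a neo:// zurl so that two
    # clients pointing to the same NEO cluster produce the same normalized URI.
    # The option set below is illustrative, not the authoritative list.
    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    CLIENT_ONLY_OPTIONS = {'read_only', 'compress', 'cache_size'}   # hypothetical

    def drop_client_options(zurl):
        scheme, netloc, path, query, frag = urlsplit(zurl)
        q = [(k, v) for k, v in parse_qsl(query) if k not in CLIENT_ONLY_OPTIONS]
        return urlunsplit((scheme, netloc, path, urlencode(q), frag))

    # drop_client_options("neo://cluster@master:2051?compress=true")
    # -> "neo://cluster@master:2051"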
-
Levin Zimmermann authored
NEO/go and NEO/py URI formats diverged over time: - kirr/neo@8c974485 However, with nexedi/neoppod!21 a common solution was found, and with kirr/neo!7 the NEO/go and NEO/py URI formats are in sync again. We therefore now need to update 'wendelin.core' to support the finally agreed-on URI format.
-
- 30 Jul, 2023 3 commits
-
-
Levin Zimmermann authored
If a NEO cluster has multiple master nodes, there is no agreed-on order in which the master node addresses appear in the URI. In order to ensure we always get the same normalized URI among different clients of a NEO cluster with more than one master node, this patch explicitly sorts the master node addresses. /reviewed-by @kirr /reviewed-on nexedi/wendelin.core!17
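A minimal sketch of such sorting, assuming the usual neo://cluster@master1,master2 zurl layout (the helper name is illustrative):

    # Sketch: normalize the order of NEO master addresses in the netloc so that
    # "neo://db@m2:2051,m1:2051" and "neo://db@m1:2051,m2:2051" normalize equally.
    from urllib.parse import urlsplit, urlunsplit

    def sort_neo_masters(zurl):
        scheme, netloc, path, query, frag = urlsplit(zurl)
        cluster, masters = netloc.split('@', 1)          # assumes cluster@masters form
        masters = ','.join(sorted(masters.split(',')))
        return urlunsplit((scheme, cluster + '@' + masters, path, query, frag))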
-
Levin Zimmermann authored
In the old source code we already filtered NEO URIs by dropping credentials, but we applied this filter to any URI, not only NEO ones. This patch adds a mechanism to apply different filters according to the specific storage type. Starting with this patch, 'zurl_normalize_main' also refuses to normalize a URI with an unknown scheme. /reviewed-by @kirr /reviewed-on nexedi/wendelin.core!17
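For illustration, a sketch of how such scheme-dependent dispatch could look; the function name, the filter bodies and the supported-scheme set are illustrative, not the actual zurl_normalize_main:

    from urllib.parse import urlsplit

    def _filter_file(zurl): return zurl          # nothing client-specific to strip
    def _filter_zeo(zurl):  return zurl          # e.g. drop credentials here
    def _filter_neo(zurl):  return zurl          # e.g. drop credentials + client options here

    _SCHEME_FILTERS = {'file': _filter_file, 'zeo': _filter_zeo, 'neo': _filter_neo}

    def zurl_normalize(zurl):
        scheme = urlsplit(zurl).scheme
        if scheme not in _SCHEME_FILTERS:
            raise ValueError("refusing to normalize zurl with unknown scheme: %r" % zurl)
        return _SCHEME_FILTERS[scheme](zurl)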
-
Levin Zimmermann authored
The WCFS mountpoint of any ZODB storage must be a unique, persistent, repeatable hash: any client which uses the same storage must always calculate the same WCFS mountpoint, independent of client-only parameters etc. Therefore the WCFS mountpoint calculation must be robust for all supported ZODB storage types (at least NEO, ZEO, FileStorage). It was recently decided [1] that, in order to provide this robustness, WCFS mountpoint calculation should filter the parsed URI to drop the parts which prevent repeatability/persistence across different clients (e.g. parts which can differ between clients even though the same storage is accessed). To make this filtering implementation a bit easier to read, and wcfs/__init__.py less dense, the first step is to move the zurl filtering ("normalization") into lib/zodb.py. This also makes sense because this normalization can be regarded as a general ZODB tool which may be useful for other solutions that use zodburi. [1] nexedi/neoppod!18 (comment 184671) /reviewed-by @kirr /reviewed-on nexedi/wendelin.core!17
-
- 19 Jun, 2023 1 commit
-
-
Levin Zimmermann authored
The old code raised an explicit exception when converting a NEO storage with more than one master node into a URI. Perhaps the rationale for this exception was that there is no agreed-on order of master nodes in a NEO URI, which means that building a URI from such a storage could potentially break the invariant that any client pointing to the same storage should arrive at the same WCFS mountpoint. With 6f5196fa we can now rely on WCFS mountpoint calculation to always return the same mountpoint even if the order of master node addresses differs. Therefore we can drop this exception and allow WCFS to support NEO clusters with more than one master. -------- kirr: support for multiple masters was simply not implemented because in a05db040 (lib/zodb: Teach zstor_2zurl about ZEO, NEO and Demo storages) I thought that we did not yet actually need it and wanted to have something minimal first. I agree that in the WCFS context it is ok and makes sense to normalize the zurl to have masters come in a particular order. But at the zstor_2zurl level we rely on the order of masters that app.nm.getMasterList gives us. The normalization is a separate function. /reviewed-by @kirr /reviewed-on nexedi/wendelin.core!17
-
- 26 Nov, 2022 1 commit
-
-
Levin Zimmermann authored
This patch allows using WCFS with a NEO or ZEO storage which is reachable by a URL that contains an IPv6 host. Without this patch the following example doesn't work:
    >>> from wendelin.lib.zodb import dbopen
    >>> root = dbopen("neo://cluster-name@[::1]:2051")
    >>> # "abc" points to a ZBigArray
    >>> root["abc"][0]
It doesn't work because the parser did not add square brackets around IPv6 hosts, so unparsing the resulting URL led to a wrong interpretation of where the port starts. This patch furthermore amends 'test_zstor_2zurl' to test ZEO and NEO storages with IPv6 hosts. --- /reviewed-by @kirr /reviewed-on nexedi/wendelin.core!13
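The essence of the fix can be sketched as follows (the helper name is illustrative; the actual change lives in the zurl building/parsing code):

    # When rebuilding a netloc from host and port, an IPv6 host must be wrapped
    # in [] - otherwise the colons inside the address make the port boundary ambiguous.
    def format_netloc(host, port=None):
        if ':' in host and not host.startswith('['):
            host = '[%s]' % host                 # e.g. ::1 -> [::1]
        return host if port is None else '%s:%s' % (host, port)

    # format_netloc('::1', 2051)        -> '[::1]:2051'
    # format_netloc('127.0.0.1', 2051)  -> '127.0.0.1:2051'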
-
- 16 Nov, 2021 1 commit
-
-
Kirill Smelkov authored
This way, on plain ZODB4 the following non-wcfs tests will continue to pass:
    test.py/fs-!wcfs
    test.py/zeo-!wcfs
    test.py/neo-!wcfs
instead of failing as e.g. here: https://nexedijs.erp5.net/#/test_result_module/20211116-123A66706 On plain ZODB4, WCFS-related functionality - which uses zconn_at - will continue to raise the corresponding assertion in WCFS-related tests, as e.g. in https://nexedijs.erp5.net/#/test_result_module/20211116-123A66706/6
-
- 08 Nov, 2021 1 commit
-
-
Kirill Smelkov authored
Fix how unpatched ZODB4 is reported to lack the required patch.

Before:

    Traceback (most recent call last):
      File "/home/kirr/src/wendelin/wendelin.core/lib/tests/test_zodb.py", line 251, in test_zconn_at
        assert zconn_at(conn1) == at0
      File "/home/kirr/src/wendelin/wendelin.core/lib/zodb.py", line 162, in zconn_at
        assert 'conn:MVCC-via-loadBefore-only' in ZODB.nxd_patches, \
    AttributeError: 'module' object has no attribute 'nxd_patches'

After:

    Traceback (most recent call last):
      File "/home/kirr/src/wendelin/wendelin.core/lib/tests/test_zodb.py", line 251, in test_zconn_at
        assert zconn_at(conn1) == at0
      File "/home/kirr/src/wendelin/wendelin.core/lib/zodb.py", line 163, in zconn_at
        "ZODB!1")
      File "/home/kirr/src/wendelin/wendelin.core/lib/zodb.py", line 191, in _zassertHasNXDPatch
        (zmajor, patch, details_link))
    AssertionError: ZODB4 is not patched with required Nexedi patch 'conn:MVCC-via-loadBefore-only' See ZODB!1 for details

Fixes 1f866c00 (lib/zodb: Teach zconn_at to work on ZODB4).
-
- 28 Oct, 2021 5 commits
-
-
Kirill Smelkov authored
It is not possible for WCFS to access data of the in-RAM storage of another process. But without an explicit explanation the error message is confusing - it was something like:

    NotImplementedError: don't know how to extract zurl from <ZODB.MappingStorage.MappingStorage object at 0x7f28f04cea10>

which suggests it was just not implemented.
-
Kirill Smelkov authored
In 3bd82127 (lib/zodb: Add zconn_at draft (ZODB5 only)) we added the zconn_at function to find out as of which state a ZODB connection is viewing the database. That was ZODB5-only, however. Let's add support for ZODB4 now - by requiring ZODB4-wc2 - a version of ZODB4 with MVCC backported from ZODB5: nexedi/ZODB!1 This makes wendelin.core work on either ZODB5 or ZODB4-wc2, but not plain ZODB4. However, as zconn_at will be used only for the WCFS integration, non-wcfs mode will continue to work on all of ZODB5, ZODB4-wc2 and plain ZODB4. ZBigFile + WCFS client integration will use zconn_at to open a WCFS connection that corresponds to the ZODB connection. Preliminary history: kirr/wendelin.core@1c3b7750 X zconn_at for ZODB4
-
Kirill Smelkov authored
Add a patch to ZODB.Connection to support a callback after the database is closed. ZBigFile + WCFS client integration will use this callback to close the WCFS connection when the corresponding ZODB.DB is closed. Preliminary history: kirr/wendelin.core@a26d9659 X lib/zodb: Connection += onShutdownCallback
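A sketch of how the WCFS client side could use this hook; onShutdownCallback is the registration method this patch adds, while the class, the wconn object and the callback method name are assumptions for illustration:

    class ZSyncShutdown(object):
        def __init__(self, zconn, wconn):
            self.wconn = wconn
            zconn.onShutdownCallback(self)       # hook added by this patch

        # invoked after the corresponding ZODB.DB has been closed
        # (callback method name is an assumption)
        def on_connection_shutdown(self):
            self.wconn.close()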
-
Kirill Smelkov authored
In 959ae2d0 (lib/zodb: Add patch to ZODB.Connection to support callback on connection DB view change) we added a patch for ZODB.Connection to support a callback when the database view of the connection changes. At that time the patch worked for ZODB5, and ZODB4 was a TODO. Let's add support for ZODB4 (both ZODB4 and ZODB4-wc2) now. As a reminder: ZBigFile + WCFS client integration will use this callback to keep the WCFS connection in sync with the ZODB connection. Preliminary history: kirr/wendelin.core@533a4cfa X onResyncCallback for ZODB4
-
Kirill Smelkov authored
In 6637d216 (lib/zodb: Add zstor_2zurl - way to convert a ZODB storage into URL to access it) we added the zstor_2zurl function to convert a ZODB storage client object into a URL to access the storage. At that time the function only knew how to understand FileStorage. Let's now add support for the other storages that WCFS will need. The NEO URI scheme matches the one currently used on the ZODB/go side. Semantically it needs nexedi/neoppod!18 to also be applied to the NEO/py side, but we do not care for now that that patch is not merged (yet, or forever), because the extracted zurl is used only with WCFS, which uses NEO/go. NEO support also depends on a custom patch to remember SSL credentials on the NEO Client: neo@a2f192cb
Some preliminary history:
    5cb39463  fixup! X wcfs/zeo started to work locally
    1cf3b228  X zstor_2zurl += NEO
    7f8fa32a  X lib/zodb: zstor_2zurl += NEO/SSL support
    e26524df  X wcfs, lib/zodb: DemoStorage support
-
- 08 Mar, 2021 2 commits
-
-
Kirill Smelkov authored
After switching to ZODB >= 4 in the previous commit, we can safely require zodbtools, because there is now no conflict between the ZODB3 and ZODB eggs.
-
Kirill Smelkov authored
It's been a while since the last ZODB3 3.10.7 release in 2016, and the last commit in the upstream ZODB3 repository (3.10 branch) is from 2017. The world has since switched to ZODB4 and then to ZODB5. We were still requiring ZODB3, because the ZODB3 3.11 egg was just a dependency on newer ZODB, ZEO, BTrees and persistent; this way we could support all of ZODB3.10.x, ZODB4 and ZODB5 via ZODB3.11. However, the upcoming Wendelin.core 2 needs, for its proper working, the MVCC semantics as implemented in ZODB5. This forces us, even for ZODB4, to backport non-trivial bits from ZODB5 (see [1]). Maintaining ZODB3 support at this point becomes impractical, because, to our knowledge, there is no wendelin.core user that plans to continue using ZODB3 without switching to at least ZODB4 in the near future. So goodbye ZODB3. Even though ZODB itself stays with us, it gives a feeling similar to [2], because in 2014, when I was myself learning ZODB, it was through ZODB3 - still at the time when all ZODB bits were living together in one place. [1] nexedi/ZODB!1 [2] https://lists.osuosl.org/pipermail/darcs-users/2008-September/014095.html
-
- 17 May, 2020 2 commits
-
-
Kirill Smelkov authored
-
Kirill Smelkov authored
It is hard for people to understand the current wording, so let's expand the zconn_at description to additionally explain what it provides with a second set of words, which hopefully lowers the potential ambiguity a bit. /reported-by @jwolf083
-
- 15 Apr, 2020 4 commits
-
-
Kirill Smelkov authored
Wendelin.core 2 will need to spawn a WCFS filesystem server that accesses the same ZODB database as the program that spawns it. The database argument is passed to WCFS in the form of a URL[1,2]. Even though zodburi provides a way to convert a URL into a ZODB storage instance, there is currently no way to do the reverse operation - to convert a ZODB storage instance into a URL to access it(*). So we have to build it on our own. Provide a zstor_2zurl stub that currently works for FileStorage only. ZEO and NEO support is TODO. In the future we might want to move this functionality into zodbtools/py. [1] https://lab.nexedi.com/nexedi/zodbtools/blob/a2e4dd23/zodbtools/help.py#L27-53 [2] https://lab.nexedi.com/kirr/neo/blob/3d909114/go/zodb/zodbtools/help.go#L25-51 (*) contrary to ZODB/go, where this functionality is provided out of the box: https://godoc.org/lab.nexedi.com/kirr/neo/go/zodb#IStorage
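A minimal sketch of such a FileStorage-only stub (the real zstor_2zurl in lib/zodb.py may differ in details, and later commits grew it to support more storage types):

    from ZODB.FileStorage import FileStorage

    def zstor_2zurl(zstor):
        # FileStorage keeps the path to its data file in ._file_name
        if isinstance(zstor, FileStorage):
            return "file://%s" % zstor._file_name
        # ZEO and NEO are TODO
        raise NotImplementedError("don't know how to extract zurl from %r" % zstor)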
-
Kirill Smelkov authored
Wendelin.core 2 will need to hook into the moment when a client ZODB.Connection changes its database view, and readjust the WCFS-level client connection accordingly. A ZODB.Connection can change its view either on connection reopen or, even without reopen, on the start of a new transaction. This patch implements ZODB.Connection.onResyncCallback for ZODB5 only. ZODB4 and ZODB3 support is TODO.
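A sketch of how the WCFS-level client could use this hook; onResyncCallback is the method this patch adds, while the ZSync class, the wconn API and the callback method name are illustrative:

    from wendelin.lib.zodb import zconn_at       # see the zconn_at commit below

    class ZSync(object):
        def __init__(self, zconn, wconn):
            self.zconn = zconn
            self.wconn = wconn
            zconn.onResyncCallback(self)         # hook added by this patch

        # invoked whenever zconn's database view changes
        # (callback method name is an assumption)
        def on_connection_resync(self):
            self.wconn.resync(zconn_at(self.zconn))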
-
Kirill Smelkov authored
For wendelin.core v2 we need a way to know at which particular database state an application-level ZODB connection is viewing the database. Knowing that state, the WCFS client library will interact with the WCFS filesystem server and, in simple terms, request the server to provide data as of that particular database state. Contrary to ZODB/go[1], ZODB/py does not provide the functionality to obtain the DB state of a connection view, so we have to build it ourselves. Let us call the function that, for a client ZODB connection, returns the database state corresponding to its database view, zconn_at. It is relatively easy to implement zconn_at for ZODB5, since ZODB5 adopted MVCC uniformly, and this patch does just that. However, even with ZODB5, all currently released ZODB5 versions have a race in Connection.open() vs invalidations[2], so the first ZODB5 release with which the zconn_at implemented here works reliably should be the upcoming ZODB 5.5.2. It is TODO to implement zconn_at for ZODB4 and ZODB3, which organize things differently. Please note what would happen if zconn_at gave an even slightly incorrect answer: the wcfs client would ask the wcfs server to provide array data as of a different database state compared to the current on-client ZODB connection. This would result in data accessed via ZBigArray not corresponding to all the other data accessed via the regular ZODB mechanism. It would be, in other words, data corruption. [1] https://godoc.org/lab.nexedi.com/kirr/neo/go/zodb#Connection [2] https://github.com/zopefoundation/ZODB/issues/290
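A sketch of the ZODB5 variant, assuming ZODB5 internals at the time of the commit, where the connection's MVCC storage adapter keeps its view boundary in _storage._start ("see data strictly before this tid"), so the viewed state is _start - 1; the real zconn_at in lib/zodb.py additionally dispatches on the ZODB version and guards against the race mentioned above:

    from ZODB.utils import p64, u64

    def zconn_at(zconn):  # -> tid of the database state zconn is viewing
        return p64(u64(zconn._storage._start) - 1)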
-
Kirill Smelkov authored
This will be needed in the following patches to know how to inject zconn_at or zconn resync functionality into particular ZODB version.
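A sketch of how the major ZODB version could be detected (the actual lib/zodb.py may obtain it differently):

    import pkg_resources

    # e.g. 3, 4 or 5 - used to pick the version-specific patching code path
    zmajor = int(pkg_resources.get_distribution('ZODB').version.split('.')[0])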
-
- 01 Apr, 2020 1 commit
-
-
Kirill Smelkov authored
-
- 18 Dec, 2019 2 commits
-
-
Kirill Smelkov authored
It was long ago marked as "XXX move to common place".
-
Kirill Smelkov authored
Add package-level documentation to:
    - bigfile/file_zodb.py,
    - bigarray/array_zodb.py, and
    - lib/zodb.py
The most interesting read is file_zodb.py. Slightly improve documentation for functions in a couple of places. Improving documentation was long overdue and it is improved only slightly by this commit.
-
- 21 Feb, 2018 1 commit
-
-
Kirill Smelkov authored
This allows e.g. opening `neo://cluster@master?compress=false` - in other words, URIs with options, which our current simplified opening code does not support. Keep the old dbstoropen around as a fallback for when zodbtools/zodburi are not available, since we still want to try to support ZODB 3.10.
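A sketch of the opening logic, assuming zodburi's resolve_uri API; the function name and the fallback hook stand in for the old simplified opener:

    def storage_from_uri(uri, fallback_open=None):
        try:
            import zodburi
        except ImportError:
            # e.g. on ZODB 3.10 setups without zodbtools/zodburi
            if fallback_open is None:
                raise
            return fallback_open(uri)
        storage_factory, dbkw = zodburi.resolve_uri(uri)   # dbkw goes to DB(), not the storage
        return storage_factory()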
-
- 24 Oct, 2017 1 commit
-
-
Kirill Smelkov authored
Relicense to GPLv3+ with wide exception for all Free Software / Open Source projects + Business options. Nexedi stack is licensed under Free Software licenses with various exceptions that cover three business cases:
    - Free Software
    - Proprietary Software
    - Rebranding
As long as one intends to develop Free Software based on Nexedi stack, no license cost is involved. Developing proprietary software based on Nexedi stack may require a proprietary exception license. Rebranding Nexedi stack is prohibited unless rebranding license is acquired. Through this licensing approach, Nexedi expects to encourage Free Software development without restrictions and at the same time create a framework for proprietary software to contribute to the long term sustainability of the Nexedi stack. Please see https://www.nexedi.com/licensing for details, rationale and options.
-
- 14 Aug, 2016 1 commit
-
-
Kirill Smelkov authored
13c0c17c (bigfile/zodb: Format #1 which is optimized for small changes) used a BTree to organize ZBlk1 block chunks, and for loadblkdata() added "TODO we are missing to free internal BTree structures on data load". nexedi/wendelin.core#3, besides other things, showed that even when we deactivate ZData objects, we are still keeping them as ghosts occupying memory, and the same happens for IOBucket objects. This all happens because there is no proper way to deactivate a whole btree - including the internal bucket objects. And since internal buckets are not deactivated, they stay in the pickle cache and thus hold references to ZData objects, and the ZData objects in turn, even if explicitly deactivated, stay in memory. We can fix this all by implementing a whole-btree deactivation procedure. To do so we need to iterate over all btree buckets recursively, but unfortunately there is no BTree API to access/iterate a btree's buckets. We can however still get references to the first top-level buckets via gc.get_referents(btree) and then scan buckets further without hacks. gc.get_referents(btree) is a hack, but:
    - it works in O(1) (we only get pointers from the btree, not scanning all gcable objects and deducing them)
    - it works reliably if we filter out non-interesting objects.
So in the end it works. Before the patch, loading more and more ZBlk1 data with objgraph instrumentation showed itself like:

                                          # Nobj     δ
    wendelin.bigfile.file_zodb.ZData        7168  +512
    BTrees.IOBTree.IOBucket                  238   +17
    BTrees.IOBTree.IOBTree                    14    +1

and after this patch we now have:

    BTrees.IOBTree.IOBTree                    14    +1

We cannot remove that "IOBTree +1", since ZBlk1 holds a direct reference to it (via .chunktab) and we have to keep ZBlk1 alive with ._v_zfile and ._v_zblk set for invalidation to work. The "+1 IOBtree" is however small - 144 bytes per 2M (= 0.006%) - so we can neglect it the same way we neglect keeping ZBlk1 alive for each block.
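A simplified sketch of such a whole-btree deactivation walk via gc.get_referents (value objects like ZData are assumed to be deactivated separately, as described above; the real code also filters referents more carefully):

    import gc
    from BTrees.IOBTree import IOBTree, IOBucket

    def deactivate_btree(btree):
        # gc.get_referents(btree) yields the objects the btree node references,
        # including its top-level buckets / sub-btrees.
        for obj in gc.get_referents(btree):
            if isinstance(obj, (IOBTree, IOBucket)):
                deactivate_btree(obj)
        btree._p_deactivate()                    # ghost this node itself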
-
- 25 Jun, 2015 3 commits
-
-
Kirill Smelkov authored
Done via a manual hacky way for now. The clean solution would be to reuse e.g. repoze.zodbconn[1] or zodburi[2] and teach them to support NEO. But for now we can't - those eggs depend on ZODB, while we still use ZODB3 to maintain compatibility with both ZODB3.10 and ZODB4. /cc @jm [1] https://pypi.python.org/pypi/repoze.zodbconn [2] https://pypi.python.org/pypi/zodburi
-
Kirill Smelkov authored
Factor out the routines to open a ZODB database to a common place. The reason for doing so is that we'll soon teach dbopen to automatically recognize several protocols, e.g. neo:// and zeo://, and this way clients who use dbopen() can automatically access storages besides FileStorage.
-
Kirill Smelkov authored
-
- 03 Apr, 2015 1 commit
-
-
Kirill Smelkov authored
This adds transactionality and, with e.g. NEO[1], allows distributing objects to nodes in a cluster. We hook into the ZODB two-phase commit process as a separate data manager, and synchronize changes in memory to changes of objects only at that time. The alternative would be to get notified on every page change and mark the appropriate object as dirty right at that moment. But I wanted to stay close to the filesystem design (we don't get a notification for every file change from the kernel) - that's why it is done the first way. [1] http://www.neoppod.org/
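A sketch of the data-manager hookup with the transaction package; the class, the method bodies and the sync helper are illustrative of the approach, not the actual bigfile code:

    import transaction

    class _FileHDataManager(object):
        def __init__(self, fileh):
            self.fileh = fileh
            transaction.get().join(self)          # take part in two-phase commit

        # propagate dirtied memory pages into ZODB objects only at commit time
        def commit(self, txn):
            self.fileh.sync_memory_to_zodb()      # hypothetical helper

        def tpc_begin(self, txn): pass
        def tpc_vote(self, txn): pass
        def tpc_finish(self, txn): pass
        def tpc_abort(self, txn): pass
        def abort(self, txn): pass
        def sortKey(self): return 'wendelin.core:%x' % id(self)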
-