- 04 May, 2021 3 commits
-
-
Kirill Smelkov authored
@d-maurer suggests keeping loadBefore without deprecation (https://github.com/zopefoundation/ZODB/pull/323#pullrequestreview-650963363): don't emit warnings about deprecating loadBefore, but keep the deprecation text in the loadBefore interface, since loadBeforeEx should in practice provide wider functionality without putting unnecessary constraints on storage implementations. In other words, the loadBefore deprecation is still there, but advertised less aggressively, with the idea of making the transition of outside-of-ZODB code to loadBeforeEx smoother and more gradual (we might want to reinstate the deprecation warnings at a later time).
-
Kirill Smelkov authored
Suggested by @d-maurer: https://github.com/zopefoundation/ZODB/pull/323#discussion_r625573381
-
Kirill Smelkov authored
@d-maurer suggests [1]:

    The ZODB logic relating to historical data (including MVCC) was largely
    centered around before. You have changed this to at - requiring wide
    spread modifications. I would much prefer to keep the before centered
    approach...

So let's change the "at"-based logic to "before"-based logic and rename the new method from loadAt to loadBeforeEx.

[1] https://github.com/zopefoundation/ZODB/pull/323#pullrequestreview-650963363
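For illustration only, a tiny hypothetical helper showing the relationship implied by the rename (p64/u64 are the real ZODB.utils converters; the helper itself is not part of the patch):

    from ZODB.utils import p64, u64

    def before_to_at(before):
        # "data as of the last transaction with tid < before" is the same as
        # "data as of tid <= before - 1", i.e. loadBeforeEx(oid, before)
        # answers the question the earlier loadAt(oid, before - 1) asked.
        return p64(u64(before) - 1)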
-
- 16 Mar, 2021 1 commit
-
-
Kirill Smelkov authored
loadAt is a new optional storage interface intended to replace loadBefore with a cleaner and more uniform semantic. Compared to loadBefore, loadAt:

1) returns data=None and the serial of the removal when the loaded object is found to be deleted. loadBefore returns only data=None in such a case. This loadAt property allows fixing the DemoStorage data corruption where whiteouts in the overlay part were not previously taken into account correctly: https://github.com/zopefoundation/ZODB/issues/318

2) for regular data records, does not require storages to return next_serial in addition to (data, serial). The loadBefore requirement to return both serial and next_serial constrains storages unnecessarily, and, while it is free to implement for FileStorage, for other storages it is not - for example for NEO and RelStorage, finding out next_serial after looking up the oid@at data record costs one more SQL query:

    https://lab.nexedi.com/nexedi/neoppod/blob/fb746e6b/neo/storage/database/mysqldb.py#L484-508
    https://lab.nexedi.com/nexedi/neoppod/blob/fb746e6b/neo/storage/database/mysqldb.py#L477-482
    https://github.com/zodb/relstorage/blob/3.1.1-1-ge7628f9/src/relstorage/storage/load.py#L259-L264
    https://github.com/zodb/relstorage/blob/3.1.1-1-ge7628f9/src/relstorage/adapters/mover.py#L177-L199

next_serial is not only about execution overhead - it is semantically redundant and can be removed from the load return. The reason next_serial can be removed is that in ZODB/py the only place I could find where next_serial is used on the client side is in the client cache (e.g. in the NEO client cache), and that cache can be remade to work without next_serial at all. In simple words, after a loadAt(oid, at) -> (data, serial) query, the cache can remember data for oid in the [serial, at] range. Next, when an invalidation message from the server is received, cache entries that had at == client_head are extended (at -> new_head) for oids that are not present in the invalidation message, while for oids that are present in the invalidation message no such extension is done. This allows the cache to be maintained in a correct state, invalidated when there is a need to invalidate, and not to throw away cache entries that should remain live. This of course requires the ZODB server to include both modified and just-created objects in invalidation messages (https://github.com/zopefoundation/ZEO/pull/160, https://github.com/zopefoundation/ZODB/pull/319).

Switching to loadAt should thus allow storages like NEO and, maybe, RelStorage to do 2x fewer SQL queries on every object access: https://github.com/zopefoundation/ZODB/issues/318#issuecomment-657685745

In other words, loadAt unifies the return signature to always be (data, serial) instead of

    POSKeyError                    object does not exist at all
    None                           object was removed
    (data, serial, next_serial)    regular data record

used by loadBefore.

This patch:

- introduces the new interface;
- introduces the ZODB.utils.loadAt helper, which uses either storage.loadAt or, if the storage does not implement the loadAt interface, tries to mimic loadAt semantics via storage.loadBefore to the extent possible and emits a corresponding warning (a rough sketch of such a fallback follows below);
- converts MVCCAdapter to use loadAt instead of loadBefore;
- changes DemoStorage to use loadAt, thereby fixing the above-mentioned data corruption issue; adds a corresponding test; converts DemoStorage.loadBefore into a wrapper around DemoStorage.loadAt;
- adds a loadAt implementation to FileStorage and MappingStorage;
- adapts other tests/code correspondingly.
/cc @jimfulton, @jamadden, @vpelletier, @jmuchemb, @arnaud-fontaine, @gidzit, @klawlf82, @hannosch
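For reference, a rough sketch - assumed code, not the actual ZODB.utils.loadAt helper - of how such a fallback can mimic loadAt on top of loadBefore, following the return-value table given in this commit message:

    from ZODB.utils import p64, u64

    def load_at_via_loadBefore(storage, oid, at):
        # loadAt(oid, at) asks for data as of tid <= at, i.e. the same
        # question as loadBefore(oid, at + 1).
        r = storage.loadBefore(oid, p64(u64(at) + 1))  # raises POSKeyError if
                                                       # the object never existed
        if r is None:
            # Object was removed (or not yet created as of `at`). loadBefore
            # cannot report the serial of the removal, so that information is
            # lost in the emulation and None is returned for the serial too.
            return None, None
        data, serial, _next_serial = r
        return data, serial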
-
- 19 Feb, 2021 1 commit
-
-
Michael Howitz authored
to trigger the new web hook.
-
- 28 Oct, 2020 2 commits
-
-
Jens Vagelpohl authored
Update badge URL for Travis
-
Jürgen Gmach authored
Committed via https://github.com/asottile/all-repos
-
- 23 Sep, 2020 1 commit
-
-
Philip Bauer authored
Co-authored-by: ale-rt <alessandro.pisa@gmail.com>
-
- 04 Sep, 2020 2 commits
-
-
Jérome Perrin authored
To manually mark an object as modified, the `_p_changed` attribute must be set, not `_p_changed__`.
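A minimal illustration (hypothetical class, not taken from the docs being fixed) of the idiom this docfix refers to - mutating a plain dict is invisible to the persistence machinery, so the object is marked as modified explicitly:

    import persistent

    class Scores(persistent.Persistent):
        def __init__(self):
            self.data = {}            # ordinary dict: mutations are not tracked

        def add(self, name, value):
            self.data[name] = value   # in-place mutation, not detected automatically
            self._p_changed = True    # manually mark the object as modified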
-
Jérome Perrin authored
setuptools 50.0.0 does not seem to be compatible with the pypy3 used on Travis.
-
- 31 Aug, 2020 2 commits
-
-
Kirill Smelkov authored
Currently the invalidate() documentation is not clear on whether it should be called for every transaction and whether it should include the full set of objects created/modified by that transaction.

Until now this worked relatively well for the sole purpose of invalidating the client ZEO cache, because for that particular task it is relatively OK not to include just-created objects in invalidation messages, and even to skip sending the invalidation entirely if a transaction only creates - does not modify - objects. Due to this, the workings of the client cache were indifferent to the ambiguity of the interface.

In 2016, skipping transactions with only created objects was reconsidered as a bug and fixed in ZEO5, because ZODB5 relies more heavily on MVCC semantics and needs to be notified about every transaction committed to storage in order to properly update the ZODB.Connection view:

    https://github.com/zopefoundation/ZEO/commit/02943acd#diff-52fb76aaf08a1643cdb8fdaf69e37802L889-R834
    https://github.com/zopefoundation/ZEO/commit/9613f09b

However, just-created objects were not included in invalidation messages until, hopefully, recently: https://github.com/zopefoundation/ZEO/pull/160

As ZODB starts to be used more widely in areas where it was not traditionally used before, the ambiguity in the invalidate interface, and the lack of a guarantee - for any storage - to be notified with the full set of information, creates at least the following problems:

- A ZODB client (not necessarily a native ZODB/py client) can maintain a raw cache for the storage. If such a client tries to load an oid at a database view where that object did not exist yet, it gets a "no object" reply and stores that information in the raw cache. To properly invalidate the cache it needs an invalidation message from the ZODB server that *includes* the created object (a rough sketch of such a cache follows after this commit message).

- Tools like `zodb watch` [1,2,3] don't work properly (give incorrect output) if not all objects modified/created by a transaction are included in invalidation messages.

- Similarly to `zodb watch`, a monitoring tool that wants to be notified of all created/modified objects won't see the full database-change picture, and so won't work properly, without knowing which objects were created.

- wendelin.core 2 - which builds data from ZODB BTrees and data objects into a virtual filesystem - needs to get invalidation messages with both modified and created objects to properly implement its own lazy invalidation and isolation protocol for file blocks in the OS cache: when a block of a file is accessed, all clients that have this block mmapped need to be notified and asked to remmap that block into a particular revision of the file, depending on the client's view of the filesystem and database [4,5]. To compute where a client needs to remmap the block, the WCFS server (which in turn acts as a ZODB client wrt the ZEO/NEO server) needs to be able to see whether the client's view of the filesystem is before the object's creation (and then ask that client to pin that block to a hole), or after creation (and then ask the client to pin that block to the corresponding revision). This computation needs the ZODB server to send invalidation messages in full: with both modified and just-created objects.

Also:

- The property that all objects - both modified and just-created - are included in invalidation messages is required and can help to remove `next_serial` from the `loadBefore` return in the future. This, in turn, can help to do 2x fewer SQL queries in loadBefore for NEO and RelStorage (and maybe other storages too): https://github.com/zopefoundation/ZODB/issues/318#issuecomment-657685745

Current state of storages with respect to the new requirements:

- ZEO: does not skip transactions, but includes only modified - not created - objects. This is fixed by https://github.com/zopefoundation/ZEO/pull/160
- NEO: already implements the requirements in full.
- RelStorage: already implements the requirements in full, if I understand correctly: https://github.com/zodb/relstorage/blob/3.1.2-1-gaf57d6c/src/relstorage/adapters/poller.py#L28-L145

While editing the invalidate documentation, use the occasion to document the recently added property that invalidate(tid) is always called before the storage starts to report its lastTransaction() ≥ tid - see 4a6b0283 (mvccadapter: check if the last TID changed without invalidation).

/cc @jimfulton, @jamadden, @jmuchemb, @vpelletier, @arnaud-fontaine, @gidzit, @klawlf82, @jwolf083
/reviewed-on https://github.com/zopefoundation/ZODB/pull/319
/reviewed-by @dataflake
/reviewed-by @jmuchemb

[1] https://lab.nexedi.com/kirr/neo/blob/049cb9a0/go/zodb/zodbtools/watch.go
[2] neo@e0d59f5d
[3] neo@c41c2907
[4] https://lab.nexedi.com/kirr/wendelin.core/blob/1efb5876/wcfs/wcfs.go#L94-182
[5] https://lab.nexedi.com/kirr/wendelin.core/blob/1efb5876/wcfs/client/wcfs.h#L20-71
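A rough sketch (assumed data structures, not NEO/ZEO cache code) of the client-cache logic mentioned above that depends on created objects being listed in invalidation messages:

    class RawCacheSketch:
        def __init__(self, head):
            self.head = head      # last tid this cache is consistent with
            self.entries = {}     # oid -> (data, serial, at): data valid on [serial, at]

        def remember(self, oid, data, serial, at):
            # after load(oid, at) -> (data, serial); data may be None, meaning
            # "object does not exist at this view"
            self.entries[oid] = (data, serial, at)

        def invalidate(self, tid, changed_or_created_oids):
            # The server reports transaction `tid` together with the full set of
            # modified and just-created oids.  Entries for untouched oids remain
            # valid through `tid`; entries for touched oids keep only their old
            # validity range.  If created objects were missing from the message,
            # a cached "no object" entry would be wrongly extended past the
            # object's creation.
            for oid, (data, serial, at) in self.entries.items():
                if at == self.head and oid not in changed_or_created_oids:
                    self.entries[oid] = (data, serial, tid)
            self.head = tid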
-
Jérome Perrin authored
Fix requirements for sphinx on python2
-
- 26 Aug, 2020 2 commits
-
-
Jérome Perrin authored
For consistency, define all version constraints in the versions section. Also don't mention python34, since ZODB no longer supports Python 3.4.
-
Jérome Perrin authored
Starting from 1.2.0, sphinxcontrib-websupport officially supports only Python 3. Since 1.2.4 it depends on sphinxcontrib-serializinghtml, which cannot even be imported on Python 2.
-
- 19 Aug, 2020 1 commit
-
-
Julien Muchembled authored
It is pointless for lastTransaction() to block until it is allowed to return the TID of a transaction that has just been committed, because it may still not be the real last TID (e.g. for some storage implementations, invalidations are received from a shared server via the network). While invalidations are still being processed, it's fine to return immediately with the previous last TID. This was clarified in commit 4a6b0283 ("mvccadapter: check if the last TID changed without invalidation"). See pull request #316
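A minimal sketch (hypothetical client-storage skeleton, not ZEO/NEO code) of the non-blocking behaviour described above:

    import threading

    class ClientStorageSketch:
        def __init__(self, db, last_tid):
            self._db = db                  # object providing invalidate(tid, oids)
            self._ltid = last_tid
            self._lock = threading.Lock()

        def lastTransaction(self):
            # Return immediately with the previous last TID, even if newer
            # invalidations are still on the wire or being processed.
            with self._lock:
                return self._ltid

        def _handle_invalidation(self, tid, oids):
            # Notification thread: deliver the invalidation first, then advance
            # the reported TID, so that lastTransaction() >= tid implies
            # invalidate(tid) has already been called (cf. commit 4a6b0283).
            self._db.invalidate(tid, oids)
            with self._lock:
                self._ltid = tid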
-
- 31 Jul, 2020 1 commit
-
-
Kirill Smelkov authored
In the early days, before MVCC was introduced, ZODB used to raise ReadConflictError on access to an object that was simultaneously changed by another client in a concurrent transaction. However, as doc/articles/ZODB-overview.rst says:

    Since Zope 2.8 ZODB has implemented **Multi Version Concurrency Control**.
    This means no more ReadConflictErrors, each transaction is guaranteed to be
    able to load any object as it was when the transaction begun.

So today the only ways to get a ReadConflictError should be 1) at commit time, for an object that was requested to stay unchanged via checkCurrentSerialInTransaction (a small illustration follows below), and 2) at plain access time, if a pack running simultaneously with the current transaction removes the object revision that we try to load.

The second point is a bit unfortunate: when load discovers that an object was deleted or not yet created, it is logically cleaner to raise POSKeyError. However, due to backward compatibility we still want to raise ReadConflictError in this case - please see the comments added to MVCCAdapter for details.

Anyway, let's remove the leftovers of handling regular read conflicts from the pre-MVCC era: adjust the docstring of ReadConflictError to explicitly describe that this error can only happen at commit time for objects requested to be current, or at plain access time if a pack is running simultaneously under the connection's feet. There were also leftover code, comment and test bits in Connection, interfaces, testmvcc and testZODB, which are corrected/removed correspondingly.

testZODB actually had ReadConflictTests that were completely deactivated: commit b0f992fd ("Removed the mvcc option..."; 2007) moved the read-conflict-on-access related tests out of ZODBTests, but did not activate the moved parts at all because, as that commit says, when MVCC is always on unconditionally there are no on-access conflicts:

    Removed the mvcc option. Everybody wants mvcc and removing us lets us
    simplify the code a little. (We'll be able to simplify more when we stop
    supporting versions.)

Today, if I try to manually activate those ReadConflictTests via

    @@ -637,6 +637,7 @@ def __init__(self, poisonedjar):
     def test_suite():
         return unittest.TestSuite((
             unittest.makeSuite(ZODBTests, 'check'),
    +        unittest.makeSuite(ReadConflictTests, 'check'),
             ))

     if __name__ == "__main__":

it fails in a dumb way, showing that these tests have been unmaintained for ages:

    Error in test checkReadConflict (ZODB.tests.testZODB.ReadConflictTests)
    Traceback (most recent call last):
      File "/usr/lib/python2.7/unittest/case.py", line 320, in run
        self.setUp()
      File "/home/kirr/src/wendelin/z/ZODB/src/ZODB/tests/testZODB.py", line 451, in setUp
        ZODB.tests.utils.TestCase.setUp(self)
    AttributeError: 'module' object has no attribute 'utils'

Since today ZODB always uses MVCC and there is no way to get a ReadConflictError on concurrent plain read/write access, those tests should also be gone, together with the old pre-MVCC way of handling concurrency.

/cc @jimfulton
/reviewed-on https://github.com/zopefoundation/ZODB/pull/320
/reviewed-by @jamadden
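A small illustration (hypothetical objects and attribute names) of the remaining commit-time ReadConflictError path: an object explicitly requested to stay unchanged via Connection.readCurrent(), which storages verify with checkCurrentSerialInTransaction() during vote:

    import transaction
    from ZODB.POSException import ReadConflictError

    def add_entry(conn, key, value):
        root = conn.root()
        index = root['index']          # hypothetical persistent mapping
        conn.readCurrent(index)        # commit must fail if `index` changed meanwhile
        root['entries'][key] = value   # hypothetical persistent container being updated
        try:
            transaction.commit()
        except ReadConflictError:
            transaction.abort()        # another transaction modified `index`
            raise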
-
- 12 Jun, 2020 2 commits
-
-
Jason Madden authored
Add a change note for #280
-
Jason Madden authored
ConnectionPool and ConnectionPool.map both had docstrings and were used by third-party code. People should be warned about this potentially breaking change.
-
- 11 Jun, 2020 2 commits
-
-
Julien Muchembled authored
-
Julien Muchembled authored
-
- 10 Jun, 2020 1 commit
-
-
Julien Muchembled authored
-
- 09 Jun, 2020 1 commit
-
-
Julien Muchembled authored
Since commit b5895a5c ("mvccadapter: fix race with invalidations when starting a new transaction"), a ZEO test fails as follows:

    File "src/ZEO/tests/drop_cache_rather_than_verify.txt", line 114, in drop_cache_rather_than_verify.txt
    Failed example:
        conn.root()[1].x
    Expected:
        6
    Got:
        1

Earlier in the test, the ZEO server is restarted and then another client commits. When disconnected, the first client does not receive invalidations anymore and the connection gets stuck in the past until there's a new commit after it reconnects. It was possible to make the test pass with the following patch:

    --- a/src/ZEO/ClientStorage.py
    +++ b/src/ZEO/ClientStorage.py
    @@ -357,6 +357,7 @@ def notify_connected(self, conn, info):

             # invalidate our db cache
             if self._db is not None:
    +            self._db.invalidate(self.lastTransaction(), ())
                 self._db.invalidateCache()

             logger.info("%s %s to storage: %s",

Other implementations like NEO are probably affected in the same way.

Rather than changing interfaces in a backward-incompatible way, this commit reverts to the original behaviour, and all the changes that were done in existing tests are reverted. However, the interfaces are clarified about the fact that storage implementations must update the value returned by lastTransaction() at a precise moment: just after invalidate() or the tpc_finish callback.
-
- 02 Jun, 2020 1 commit
-
-
Julien Muchembled authored
Fix inconsistent resolution order with zope.interface v5.
-
- 20 May, 2020 4 commits
-
-
Jan-Jaap Driessen authored
The j1m.sphinxautointerface dependency can be dropped; the functionality is now in https://pypi.org/project/sphinxcontrib-zopeext
-
Jan-Jaap Driessen authored
Leave the direct definition of IBlobStorage in place, and only expand the definition inside __init__ if necessary.
-
Jan-Jaap Driessen authored
-
Jan-Jaap Driessen authored
-
- 31 Mar, 2020 1 commit
-
-
Kirill Smelkov authored
ZODB tries to avoid saving empty transactions to storage on `transaction.commit()`. The way it works is: if no objects were changed during the ongoing transaction, ZODB.Connection does not join the current TransactionManager, and transaction.commit() performs the two-phase commit protocol only on the joined DataManagers. In other words, if no objects were changed, no tpc_*() methods are called at all on ZODB.Connection at transaction.commit() time.

This way application servers like Zope/ZServer/ERP5/... can have something like

    try:
        # process incoming request
        transaction.commit()    # processed ok
    except:
        transaction.abort()
        # problem: log + reraise

in top-level code, and process requests without creating many on-disk transactions with empty data changes just because read-only requests were served.

Everything is working as intended. However, at the storage level, FileStorage currently also checks whether the transaction being committed comes with empty data changes, and _skips_ saving the transaction to disk *at all* in such cases, even if it has been explicitly told to commit the transaction via two-phase commit protocol calls made at the storage level.

This creates a situation where, contrary to the promise in ZODB/interfaces.py(*), after successful tpc_begin/tpc_vote/tpc_finish() calls made at the storage level, the transaction is _not_ made permanent, despite the tid of the "committed" transaction being returned to the caller. In other words, FileStorage, when asked to commit a transaction, even one with empty data changes, reports "ok" and gives a transaction ID to the caller without creating the corresponding transaction record on disk.

This behaviour is a) redundant with the application-level avoidance of creating empty transactions on storage described in the beginning, and b) creates problems:

The first problem is that an application working at the storage level might be interested in persisting the transaction, even with empty changes to data, just because it wants to save the metadata, similarly to e.g. `git commit --allow-empty`.

The other problem is that the application's view and the data in the database become inconsistent: the application is told that a transaction was created with a corresponding transaction ID, but if the storage is actually inspected, e.g. by iteration, the transaction is not there. This, in particular, can create problems if the TID of the committed transaction is reported elsewhere and that second database client does not find the transaction it was told should exist.

I hit this particular problem with wendelin.core. In wendelin.core there is a custom virtual memory layer that keeps memory in sync with data in ZODB. At commit time, the memory is inspected for being dirtied, and if a page was changed, the virtual memory layer joins the current transaction _and_ forces the corresponding ZODB.Connection - via which it will be saving data into ZODB objects - to join the transaction too, because it would be too late to join ZODB.Connection after the 2PC process has begun(+). One of the formats in which data is saved tries to optimize disk space usage, and it can actually happen that even if data in RAM was dirtied, the data itself stayed the same and so nothing should be saved into ZODB. However, ZODB.Connection has already joined the transaction, and it is hard not to join it, because joining a DataManager when the 2PC is already ongoing does not work.

This used to work OK with wendelin.core 1, but with wendelin.core 2 - where a separate virtual filesystem is also connected to the database to provide the base layer for array mappings - this creates a problem: when wcfs (the filesystem) is told to synchronize to view the database @tid of the committed transaction, it can wait forever for that, or a later, transaction to appear on disk in the database, creating an application-level deadlock.

I agree that some more effort might be made on the wendelin.core side to avoid committing transactions with empty data at the storage level. However, the cleanest way to fix this problem, in my view, is to fix FileStorage itself: if at the storage level it was asked to commit something, it should not silently skip doing so, dropping even non-empty metadata while returning ok and a committed transaction ID to the caller. As described in the beginning, this should not create problems for application-level ZODB users, while at the storage level the implementation now consistently matches the interface and common sense. A rough sketch of driving such a metadata-only commit through the storage-level 2PC follows below.

----

(*) tpc_finish:

        Finish the transaction, making any transaction changes permanent.
        Changes must be made permanent at this point.
        ...

    https://github.com/zopefoundation/ZODB/blob/5.5.1-35-gb5895a5c2/src/ZODB/interfaces.py#L828-L831

(+) https://lab.nexedi.com/kirr/wendelin.core/blob/9ff5ed32/bigfile/file_zodb.py#L788-822
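A rough sketch (assumed usage; TransactionMetaData is the transaction-metadata object used for storage-level commits in ZODB 5) of committing a metadata-only transaction directly through the storage-level two-phase commit, which after this change FileStorage persists instead of silently skipping:

    from ZODB.FileStorage import FileStorage
    from ZODB.Connection import TransactionMetaData

    def commit_metadata_only(storage, description=u'metadata-only commit'):
        # No store() calls at all: the transaction carries only metadata,
        # similarly to `git commit --allow-empty`.
        txn = TransactionMetaData(description=description)
        storage.tpc_begin(txn)
        storage.tpc_vote(txn)
        return storage.tpc_finish(txn)   # tid of the committed (empty) transaction

    storage = FileStorage('data.fs')     # hypothetical data file
    tid = commit_metadata_only(storage)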
-
- 27 Mar, 2020 1 commit
-
-
sblondon authored
Update package links from pypi.python.org to pypi.org
-
- 26 Mar, 2020 1 commit
-
-
Stéphane Blondon authored
-
- 20 Mar, 2020 1 commit
-
-
Éloi Rivard authored
Added newtdb link
-
- 17 Mar, 2020 9 commits
-
-
Éloi Rivard authored
Removed log.ini
-
Éloi Rivard authored
Use python3 for coverage
-
Éloi Rivard authored
-
Éloi Rivard authored
-
Éloi Rivard authored
-
Éloi Rivard authored
Some documentation love
-
Éloi Rivard authored
-
Éloi Rivard authored
-
Éloi Rivard authored
-