1. 21 Jan, 2022 4 commits
    • wcfs: tests: Add test to demonstrate "at out of bounds" crash on readPinWatchers -> ΔFtail.BlkRevAt · 97ce5105
      Kirill Smelkov authored
      The codepath that sends pin messages to watchers on FUSE READ, similarly
      to what was shown in 339f1884, is also vulnerable to an "at out of
      bounds" panic if at=ΔFtail.tail:
      
          wcfs_test.py::test_wcfs_crash_old_data
          ---------------- live log call -----------------
          WARNING  ZODB.FileStorage:FileStorage.py:413 Ignoring index for /tmp/testdb_fs.nbSKXu/1.fs
      
          M: commit -> @at0 (03e5a31e5e5ef6bb)
      
          M: commit -> @at1 (03e5a31e5e63fa77)
          M:      f<0000000000000002>     [0]
          INFO     wcfs:__init__.py:293 starting for file:///tmp/testdb_fs.nbSKXu/1.fs ...
          I0120 16:50:22.136098  697106 wcfs.go:2393] start "/dev/shm/wcfs/93026d44ef96f87df2cc0e2e451c5aabee91b652" "file:///tmp/testdb_fs.nbSKXu/1.fs"
          I0120 16:50:22.136127  697106 wcfs.go:2399] (built with go1.17.6)
          W0120 16:50:22.136233  697106 storage.go:152] zodb: FIXME: open file:///tmp/testdb_fs.nbSKXu/1.fs: raw cache is not ready for invalidations -> NoCache forced
          INFO     wcfs:__init__.py:334 started pid697106 @ /dev/shm/wcfs/93026d44ef96f87df2cc0e2e451c5aabee91b652
      
          C: setup watch f<0000000000000002> @at1 (03e5a31e5e63fa77)
          #  pinok: {}
          panic: at out of bounds: at: @03e5a31e5e63fa77,  (tail, head] = (@03e5a31e5e63fa77, @03e5a31e5e63fa77]
      
          goroutine 7 [running]:
          lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/zdata.panicf(...)
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/zdata/misc.go:47
          lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/zdata.(*ΔFtail).BlkRevAt(0xc0000a5d40, {0x969718, 0xc000076140}, 0xc0001a22a0, 0xc0001c0200, 0x3e5a31e5e63fa77)
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/zdata/δftail.go:1077 +0xa45
          main.(*BigFile).readPinWatchers(0xc0001d0200, {0x969718, 0xc000076140}, 0x0, 0xffffffffffffffff)
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/wcfs.go:1559 +0x2a5
          main.(*BigFile).readBlk(0xc0001d0200, {0x969718, 0xc000076140}, 0x0, {0xc000320000, 0x200000, 0x0})
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/wcfs.go:1281 +0x4d2
          main.(*BigFile).Read.func1({0x969718, 0xc000076140})
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/wcfs.go:1223 +0x71
          lab.nexedi.com/kirr/go123/xsync.(*WorkGroup).Go.func1()
                  /home/kirr/src/neo/src/lab.nexedi.com/kirr/go123/xsync/xsync.go:86 +0x68
          created by lab.nexedi.com/kirr/go123/xsync.(*WorkGroup).Go
                  /home/kirr/src/neo/src/lab.nexedi.com/kirr/go123/xsync/xsync.go:83 +0x92
          >>> Change history by file:
      
          f<0000000000000002>:
                                          0 1 2 3 4 5 6 7
                                          a b c d e f g h
                  @at0 (03e5a31e5e5ef6bb)
                  @at1 (03e5a31e5e63fa77) 0
      
          ...
      
              @func
              def test_wcfs_crash_old_data():
                  # start wcfs with ΔFtail/ΔBtail not covering that initial data.
                  t = tDB(old_data=[{0:'a'}]); zf = t.zfile; at1 = t.head
                  defer(t.close)
      
                  f = t.open(zf)
      
                  # ΔFtail coverage is currently (at1,at1]
                  wl = t.openwatch()
                  wl.watch(zf, at1, {})
      
                  # wcfs is crashing on readPinWatcher -> ΔFtail.BlkRevAt with
                  #   "at out of bounds: at: @at1,  (tail,head] = (@at1,@at1]
                  # because BlkRevAt(at=tail) query was disallowed.
          >       f.assertBlk(0, 'a')          # [0] becomes tracked
      
      Still also crashing in test_wcfs_watch_setup.
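
      For reference, the failing check is a half-open range test: BlkRevAt
      only accepts at ∈ (tail, head]. A minimal Python sketch of that
      invariant (names assumed from the Go traceback; the real check lives in
      δftail.go):

          def _blkrevat_check(at, tail, head):
              # at must lie in the half-open interval (tail, head].
              # With ΔFtail coverage (at1, at1] any query @at1 therefore
              # fails: `tail < at` is false when at == tail.
              if not (tail < at <= head):
                  raise AssertionError(
                      "at out of bounds: at: @%s,  (tail, head] = (@%s, @%s]"
                      % (at, tail, head))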
    • wcfs: tests: Move tests for crashing WCFS due to old data to dedicated section · 67519be7
      Kirill Smelkov authored
      Soon this test will also exercise functionality from the isolation
      protocol, and so it will stop being basic.

      Move it and rename: test_wcfs_basic_invalidation_wo_dFtail_coverage ->
      test_wcfs_crash_old_data.
      
      Still crashing in test_wcfs_watch_setup.
    • wcfs: tests: Teach tDB to create database with initial ZBigFile changes before WCFS is started · 1da89b57
      Kirill Smelkov authored
      This semantically moves initialization code from
      test_wcfs_basic_invalidation_wo_dFtail_coverage (see a7bf0311 "wcfs: Fix
      crash if on invalidation handledδZ needs to access ZODB") to tDB itself,
      and will be useful to exercise similar scenarios in other tests.
      
      Still crashing in test_wcfs_watch_setup.
    • wcfs: tests: Always start tDB with ZBigFile pre-created before WCFS startup · 339f1884
      Kirill Smelkov authored
      This should hopefully exercise codepaths in wcfs.go a bit more for
      mistakes similar to a7bf0311 (wcfs: Fix crash if on invalidation
      handledδZ needs to access ZODB), where the code on the server side
      forgot to put zhead's transaction into the context.

      Currently, because watching @tail is disallowed, this leads to a panic triggered by test_wcfs_watch_setup:
      
          @at0 (03e59e3e606b89bb) -> @at1 (03e59e3e610692bb) -> @at2 (03e59e3e612a5811) -> @at3 (03e59e3e614fa9cc) -> @at4 (03e59e3e6189c3ee) -> @at5 (03e59e3e61af0baa)
      
          C: setup watch f<0000000000000002> @at0 (03e59e3e606b89bb)
          #  pinok: {0: @at0 (03e59e3e606b89bb), 2: @at0 (03e59e3e606b89bb), 3: @at0 (03e59e3e606b89bb), 5: @at0 (03e59e3e606b89bb)}
          panic: at out of bounds: at: @03e59e3e606b89bb,  (tail, head] = (@03e59e3e606b89bb, @03e59e3e61af0baa]
      
          goroutine 187 [running]:
          lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/zdata.panicf(...)
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/zdata/misc.go:47
          lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/zdata.(*ΔFtail).BlkRevAt(0xc000077d40, {0x969718, 0xc000062940}, 0xc0003060c0, 0x4174f4, 0x3e59e3e606b89bb)
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/zdata/δftail.go:1077 +0xa45
          main.(*WatchLink).setupWatch(0xc000108050, {0x969718, 0xc000062940}, 0x2, 0x3e59e3e606b89bb)
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/wcfs.go:1754 +0xe3f
          main.(*WatchLink)._handleWatch(0x0, {0x969718, 0xc000062940}, {0xc00001c812, 0xa00000})
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/wcfs.go:1973 +0x65
          main.(*WatchLink).handleWatch(0x74039b, {0x969718, 0xc000062940}, 0xc0000a4280, {0xc00001c812, 0x28})
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/wcfs.go:1955 +0x10c
          main.(*WatchLink)._serve.func3({0x969718, 0xc000062940})
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/wcfs.go:1944 +0x3c
          lab.nexedi.com/kirr/go123/xsync.(*WorkGroup).Go.func1()
                  /home/kirr/src/neo/src/lab.nexedi.com/kirr/go123/xsync/xsync.go:86 +0x68
          created by lab.nexedi.com/kirr/go123/xsync.(*WorkGroup).Go
                  /home/kirr/src/neo/src/lab.nexedi.com/kirr/go123/xsync/xsync.go:83 +0x92
          >>> Change history by file:
      
          f<0000000000000002>:
                                          0 1 2 3 4 5 6 7
                                          a b c d e f g h
                  @at0 (03e59e3e606b89bb)
                  @at1 (03e59e3e610692bb)     2
                  @at2 (03e59e3e612a5811)     2 3 4 5
                  @at3 (03e59e3e614fa9cc) 0   2     5
                  @at4 (03e59e3e6189c3ee)     2   4 5
                  @at5 (03e59e3e61af0baa)       3   5
      
      However, we will anyway need to allow setting up watches @tail, so we
      will fix this and other errors in followup commits.
      
      NOTE: we don't lose coverage for the case when ZBigFile is created after
      wcfs startup, because test_wcfs_watch_2files tests that scenario.
      
      ΔFtail/ΔBtail tests also exercise ZBigFile/BTree epochs
      (creation/deletion) well.
  2. 19 Jan, 2022 4 commits
  3. 18 Jan, 2022 1 commit
    • wcfs: Fix crash if on invalidation handledδZ needs to access ZODB · a7bf0311
      Kirill Smelkov authored
      The invalidation logic is generally right, but invalidateBlk ->
      ΔFtail.BlkRevAt was being called with a ctx that carried no
      transaction. As a result it was panicking with
      
          panic: transaction: no current transaction
      
          goroutine 41 [running]:
          lab.nexedi.com/kirr/neo/go/transaction.currentTxn({0x9696d8, 0xc0000d8080})
                  /home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/transaction/transaction.go:59 +0x77
          lab.nexedi.com/kirr/neo/go/transaction.Current(...)
                  /home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/transaction/api.go:206
          lab.nexedi.com/kirr/neo/go/zodb.(*Connection).checkTxnCtx(...)
                  /home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb/connection.go:374
          lab.nexedi.com/kirr/neo/go/zodb.(*Connection).Get(0xc00010c640, {0x9696d8, 0xc0000d8080}, 0x4)
                  /home/kirr/src/neo/src/lab.nexedi.com/kirr/neo/go/zodb/connection.go:331 +0x73
          lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/zdata.(*ΔFtail).BlkRevAt(0xc000077d40, {0x9696d8, 0xc0000d8080}, 0xc000064f60, 0x0, 0x3e5983329bbd100)
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/internal/zdata/δftail.go:1140 +0x39d
          main.(*BigFile).invalidateBlk.func1(0xc000164400, {0x9696d8, 0xc0000d8080}, 0xc0005a0000, 0x200000, 0x200000, {0xc0005a0000, 0x200000, 0x200000})
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/wcfs.go:1089 +0xb8
          main.(*BigFile).invalidateBlk(0xc000164400, {0x9696d8, 0xc0000d8080}, 0x0)
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/wcfs.go:1105 +0x3bb
          main.(*Root).handleδZ.func3({0x9696d8, 0xc0000d8080})
                  /home/kirr/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs/wcfs.go:898 +0x34
          lab.nexedi.com/kirr/go123/xsync.(*WorkGroup).Go.func1()
                  /home/kirr/src/neo/src/lab.nexedi.com/kirr/go123/xsync/xsync.go:86 +0x68
          created by lab.nexedi.com/kirr/go123/xsync.(*WorkGroup).Go
                  /home/kirr/src/neo/src/lab.nexedi.com/kirr/go123/xsync/xsync.go:83 +0x92
      
      on any new change to a tracked file block whose previous history is not covered by ΔFtail/ΔBtail.
      
      Problem reported by @Francois.
  4. 26 Nov, 2021 1 commit
    • t/qemu-runlinux: Use multidevs=remap for 9P setup · c9f64495
      Kirill Smelkov authored
      Fixes the following warning that started to appear:
      
          kirr@deca:~/src/wendelin/wendelin.core/t$ ./qemu-runlinux -g  /home/kirr/src/linux/obj-qemu_debug/arch/x86/boot/bzImage /bin/bash
          qemu-system-x86_64: warning: 9p: Multiple devices detected in same VirtFS export, which might lead to file ID collisions and severe misbehaviours on guest! You should either use a separate export for each device shared from host or use virtfs option 'multidevs=remap'!
      
      See https://wiki.qemu.org/Documentation/9psetup for documentation of
      multidevs option.
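
      For reference, multidevs is an option of the -virtfs specification; a
      minimal sketch of its use (the other parameters here are illustrative,
      the actual qemu-runlinux setup may differ):

          qemu-system-x86_64 ... \
              -virtfs local,path=/,mount_tag=hostshare,security_model=none,multidevs=remap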
  5. 23 Nov, 2021 4 commits
    • fixup! *: Use defer for dbclose & friends · b6916ca8
      Kirill Smelkov authored
      In 5c8340d2 we said:
      
          dbclose now uses defer almost everywhere - there are still few places in
          tests, where one test function is opening/closing test database multiple
          times - those were not (yet ?) converted.
      
      Let's convert those remaining places now, because when wendelin.core
      tests are run against plain ZODB4 (as opposed to ZODB4-wc2), many tests
      fail at fileh_open time, e.g.
      
              @func
              def test_bigfile_filezodb_fileh_gc():
                  root1= dbopen()
                  conn1= root1._p_jar
                  db   = conn1.db()
                  defer(db.close)
                  root1['zfile4'] = f1 = ZBigFile(blksize)
                  transaction.commit()
      
          >       fh1  = f1.fileh_open()
      
          bigfile/tests/test_filezodb.py:588:
          _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
          bigfile/file_zodb.py:603: in fileh_open
              fileh = _ZBigFileH(self, _use_wcfs)
          bigfile/file_zodb.py:664: in __init__
              self.zfileh = zfile._v_file.fileh_open(use_wcfs)
          bigfile/_file_zodb.pyx:112: in wendelin.bigfile._file_zodb._ZBigFile.fileh_open
              pywconn   = wczsync.pywconnOf(zconn)
          wcfs/client/_wczsync.pyx:56: in wendelin.wcfs.client._wczsync.pywconnOf
              wconn = wc.connect(zconn_at(zconn))
          lib/zodb.py:163: in zconn_at
              "nexedi/ZODB!1")
          _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
      
          patch = 'conn:MVCC-via-loadBefore-only', details_link = 'nexedi/ZODB!1'
      
              def _zassertHasNXDPatch(patch, details_link):
                  if not _zhasNXDPatch(patch):
                      raise AssertionError(
                          "ZODB%s is not patched with required Nexedi patch %r\n\tSee %s for details" %
          >               (zmajor, patch, details_link))
          E           AssertionError: ZODB4 is not patched with required Nexedi patch 'conn:MVCC-via-loadBefore-only'
          E               See nexedi/ZODB!1 for details
      
      and DB is left unclosed.
      
      This change should reduce, if not completely eliminate, the number of
      leaked /tmp/testdb_* directories for the Wendelin.core.UnitTest-ZODB4(xfail) testsuite.
    • wcfs: client: tests: Turn SIGSEGV in tMapping.assertBlk into exception · c5624fa9
      Kirill Smelkov authored
      When WCFS-mmapped memory is accessed, it can get SIGBUS on IO error (and
      automatically on WCFS crash), and SIGSEGV when the accessed client mapping is closed.
      
      tFile.assertBlk in wcfs_test.py already converts SIGSEGV into a Python
      exception when accessing an on-wcfs file's block. However
      tMapping.assertBlk was not doing so, which leads to test crashes,
      instead of proper details, if something goes wrong.
      
      For example, when wendelin.core tests are run against plain ZODB4 (as
      opposed to ZODB4-wc2, see ZODB!1 and
      slapos@e256ed97), it first fails in the
      pinner and then gets SIGSEGV on data access, because, to mimic SIGBUS on
      EIO, the pinner shuts down all mappings on its failure:
      
      https://lab.nexedi.com/nexedi/wendelin.core/blob/49f826b1/wcfs/client/wcfs.cpp#L477-501
      https://nexedijs.erp5.net/#/test_result_module/20211118-7C45220A/25
      
      -> Fix it by wrapping test block access with appropriate read_exfault
      variant.
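
      The wrapping can be sketched as follows (read_exfault_withgil is the
      variant visible in the traceback below; the surrounding tMapping
      details are assumed):

          # in tMapping.assertBlk: touch the block through read_exfault so
          # that SIGSEGV surfaces as a catchable SegmentationFault exception
          # instead of crashing the whole test process
          _ = read_exfault_withgil(blkview[0:1])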
      
      Before this patch:
      
          .../wendelin.core$ WENDELIN_CORE_TEST_DB='<zeo>' WENDELIN_CORE_VIRTMEM='r:wcfs+w:uvmm' python -m pytest -vsx wcfs/ -k test_wcfs_client
          ...
          wcfs/client/client_test.py::test_wcfs_client
          -------------------- live log call ---------------------
          INFO     wcfs:__init__.py:293 starting for zeo://localhost:28866 ...
          I1122 19:17:14.376182  110032 wcfs.go:2384] start "/dev/shm/wcfs/ef87339c054c3e0e48d494fa584bb209518844b2" "zeo://localhost:28866"
          I1122 19:17:14.376291  110032 wcfs.go:2390] (built with go1.17.3)
          W1122 19:17:14.380882  110032 storage.go:152] zodb: FIXME: open zeo://localhost:28866: raw cache is not ready for invalidations -> NoCache forced
          INFO     wcfs:__init__.py:334 started pid110032 @ /dev/shm/wcfs/ef87339c054c3e0e48d494fa584bb209518844b2
      
          M: commit -> @at0 (03e452313dddbc00)
      
          M: commit -> @at1 (03e452313e0f3b99)
          M:      f<0000000000000002>     [2, 3]
      
          M: commit -> @at2 (03e452313e1adb55)
          M:      f<0000000000000002>     [2]
      
          M: commit -> @at3 (03e452313e3be500)
          M:      f<0000000000000002>     [3, 4]
          W1122 19:17:14.597654  110032 wcfs.go:2050] /@03e452313d343c88/bigfile: lookup "0000000000000002": bigfopen 0000000000000002 @03e452313d343c88: invalid argument: Get 0000000000000002: Get 03e452313d343c88:0000000000000002: zeo://localhost:28866: load 03e452313d343c88:0000000000000002: 0000000000000002: no such object
          E1122 19:17:14.597759  110032 wcfs.go:1220] /head/bigfile/0000000000000002: readblk #4: pin watchers: wlink1: f<0000000000000002>: pin #4 @03e452313d343c88: expect "ack"; got "nak: _remmapblk #4 @03e452313d343c88: open /dev/shm/wcfs/ef87339c054c3e0e48d494fa584bb209518844b2/@03e452313d343c88/bigfile/0000000000000002: Invalid argument"
          F1122 19:17:14.597803  110050 wcfs/client/wcfs.cpp:487] CRITICAL: pinner: pin f<0000000000000002> #4 @03e452313d343c88: _remmapblk #4 @03e452313d343c88: open /dev/shm/wcfs/ef87339c054c3e0e48d494fa584bb209518844b2/@03e452313d343c88/bigfile/0000000000000002: Invalid argument
          F1122 19:17:14.597835  110050 wcfs/client/wcfs.cpp:488] CRITICAL: wcfs server will likely kill us soon.
          CRITICAL: pinner: pin f<0000000000000002> #4 @03e452313d343c88: _remmapblk #4 @03e452313d343c88: open /dev/shm/wcfs/ef87339c054c3e0e48d494fa584bb209518844b2/@03e452313d343c88/bigfile/0000000000000002: Invalid argument
          CRITICAL: wcfs server will likely kill us soon.
          Segmentation fault: read @00007ff7b9534000
          /home/kirr/src/wendelin/wendelin.core/wcfs/client/./../../bigfile/liblibvirtmem.so(dump_traceback+0x34)[0x7ff7d6b5c279]
          /home/kirr/src/wendelin/wendelin.core/wcfs/client/./../../bigfile/liblibvirtmem.so(+0x27b0)[0x7ff7d6b577b0]
          /lib/x86_64-linux-gnu/libpthread.so.0(+0x14140)[0x7ff7da078140]
          python(PyString_FromStringAndSize+0x228)[0x5627feb96b58]
          python(PyEval_EvalFrameEx+0x603e)[0x5627febb7a4e]
          python(PyEval_EvalCodeEx+0x57c)[0x5627febb03cc]
          ...
          python(PyObject_Call+0x43)[0x5627feb9d903]
          python(+0x18a7e1)[0x5627fec5d7e1]
          python(Py_Main+0x3ad)[0x5627fec4b8ed]
          /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xea)[0x7ff7d9d59d0a]
          python(_start+0x2a)[0x5627fec4b46a]
          Segmentation fault (core dumped)
      
      After this patch:
      
          .../wendelin.core$ WENDELIN_CORE_TEST_DB='<zeo>' WENDELIN_CORE_VIRTMEM='r:wcfs+w:uvmm' python -m pytest -vsx wcfs/ -k test_wcfs_client
          ...
          wcfs/client/client_test.py::test_wcfs_client
          -------------------- live log call ---------------------
          INFO     wcfs:__init__.py:293 starting for zeo://localhost:22854 ...
          I1122 18:17:22.486445  102541 wcfs.go:2384] start "/dev/shm/wcfs/c818c147676f8d6f3b408b02f727aca5e3229e98" "zeo://localhost:22854"
          I1122 18:17:22.486525  102541 wcfs.go:2390] (built with go1.17.3)
          W1122 18:17:22.489908  102541 storage.go:152] zodb: FIXME: open zeo://localhost:22854: raw cache is not ready for invalidations -> NoCache forced
          INFO     wcfs:__init__.py:334 started pid102541 @ /dev/shm/wcfs/c818c147676f8d6f3b408b02f727aca5e3229e98
      
          M: commit -> @at0 (03e451f560834477)
      
          M: commit -> @at1 (03e451f560a2aa77)
          M:      f<0000000000000002>     [2, 3]
      
          M: commit -> @at2 (03e451f560adafcc)
          M:      f<0000000000000002>     [2]
      
          M: commit -> @at3 (03e451f560d02111)
          M:      f<0000000000000002>     [3, 4]
          W1122 18:17:22.703710  102541 wcfs.go:2050] /@03e451f55fcc4c77/bigfile: lookup "0000000000000002": bigfopen 0000000000000002 @03e451f55fcc4c77: invalid argument: Get 0000000000000002: Get 03e451f55fcc4c77:0000000000000002: zeo://localhost:22854: load 03e451f55fcc4c77:0000000000000002: 0000000000000002: no such object
          E1122 18:17:22.703840  102541 wcfs.go:1220] /head/bigfile/0000000000000002: readblk #4: pin watchers: wlink1: f<0000000000000002>: pin #4 @03e451f55fcc4c77: expect "ack"; got "nak: _remmapblk #4 @03e451f55fcc4c77: open /dev/shm/wcfs/c818c147676f8d6f3b408b02f727aca5e3229e98/@03e451f55fcc4c77/bigfile/0000000000000002: Invalid argument"
          F1122 18:17:22.704380  102558 wcfs/client/wcfs.cpp:487] CRITICAL: pinner: pin f<0000000000000002> #4 @03e451f55fcc4c77: _remmapblk #4 @03e451f55fcc4c77: open /dev/shm/wcfs/c818c147676f8d6f3b408b02f727aca5e3229e98/@03e451f55fcc4c77/bigfile/0000000000000002: Invalid argument
          F1122 18:17:22.704639  102558 wcfs/client/wcfs.cpp:488] CRITICAL: wcfs server will likely kill us soon.
          CRITICAL: pinner: pin f<0000000000000002> #4 @03e451f55fcc4c77: _remmapblk #4 @03e451f55fcc4c77: open /dev/shm/wcfs/c818c147676f8d6f3b408b02f727aca5e3229e98/@03e451f55fcc4c77/bigfile/0000000000000002: Invalid argument
          CRITICAL: wcfs server will likely kill us soon.
          >>> Change history by file:
      
          f<0000000000000002>:
                                          0 1 2 3 4 5 6 7
                                          a b c d e f g h
                  @at0 (03e451f560834477)
                  @at1 (03e451f560a2aa77)     2 3
                  @at2 (03e451f560adafcc)     2
                  @at3 (03e451f560d02111)       3 4
      
          INFO     wcfs:__init__.py:400 unmount/stop wcfs pid102541 @ /dev/shm/wcfs/c818c147676f8d6f3b408b02f727aca5e3229e98
          I1122 18:17:22.728452  102541 wcfs.go:2560] stop "/dev/shm/wcfs/c818c147676f8d6f3b408b02f727aca5e3229e98" "zeo://localhost:22854"
          FAILED
      
          ======================= FAILURES =======================
          ___________________ test_wcfs_client ___________________
      
              @func
              def test_wcfs_client():
                  t = tDB(); zf = t.zfile; at0=t.at0
                  defer(t.close)
                  pinned = lambda fh: fhpinned(t, fh)
      
                  at1 = t.commit(zf, {2:'c1', 3:'d1'})
                  at2 = t.commit(zf, {2:'c2'})
      
                  wconn = t.wc.connect(at1)
                  defer(wconn.close)
      
                  fh = wconn.open(zf._p_oid)
                  defer(fh.close)
      
                  # create mmap with 1 block beyond file size
                  m1 = fh.mmap(2, 3)
                  defer(m1.unmap)
      
                  assert m1.blk_start == 2
                  assert m1.blk_stop  == 5
                  assert len(m1.mem)  == 3*zf.blksize
      
                  tm1 = tMapping(t, m1)
      
                  assert pinned(fh) == {}
      
                  # verify initial data reads
                  tm1.assertBlk(2, 'c1',  {2:at1})
                  tm1.assertBlk(3, 'd1',  {2:at1})
                  tm1.assertBlk(4, '',    {2:at1})
      
                  # commit with growing file size -> verify data read as the same, #3 pinned.
                  # (#4 is not yet pinned because it was not accessed)
                  at3 = t.commit(zf, {3:'d3', 4:'e3'})
                  assert pinned(fh) == {2:at1}
                  tm1.assertBlk(2, 'c1',  {2:at1})
                  tm1.assertBlk(3, 'd1',  {2:at1, 3:at1})
                  tm1.assertBlk(4, '',    {2:at1, 3:at1})
      
                  # resync at1 -> at2:    #2 must unpin to @head; #4 must stay as zero
                  wconn.resync(at2)
                  assert pinned(fh) == {3:at1}
                  tm1.assertBlk(2, 'c2',  {       3:at1})
                  tm1.assertBlk(3, 'd1',  {       3:at1})
          >       tm1.assertBlk(4, '',    {       3:at1,  4:at0})     # XXX at0->ø ?
      
          wcfs/client/client_test.py:158:
          _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
          wcfs/client/client_test.py:86: in assertBlk
              _ = read_exfault_withgil(blkview[0:1])
          wcfs/internal/wcfs_test.pyx:90: in wendelin.wcfs.internal.wcfs_test.read_exfault_withgil
              return _read_exfault(mem, withgil=True)
          _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
      
          >   raise SegmentationFault()
          E   SegmentationFault
      
          wcfs/internal/wcfs_test.pyx:120: SegmentationFault
          ------------------ Captured log call -------------------
          INFO     wcfs:__init__.py:293 starting for zeo://localhost:22854 ...
          INFO     wcfs:__init__.py:334 started pid102541 @ /dev/shm/wcfs/c818c147676f8d6f3b408b02f727aca5e3229e98
          INFO     wcfs:__init__.py:400 unmount/stop wcfs pid102541 @ /dev/shm/wcfs/c818c147676f8d6f3b408b02f727aca5e3229e98
    • wcfs: Server.stop: Don't log "after SIGTERM" when first wait for wcfs.go exit failed · 81274eb7
      Kirill Smelkov authored
      Here wcfs.go should have exited due to either an unmount request _or_
      SIGTERM.
    • wcfs: Server.stop: Don't report first unmount failure to outside · d0c4469a
      Kirill Smelkov authored
      If the first unmount fails, e.g. due to "device or resource is busy", we
      try to unmount the filesystem a second time after force
      kill/FUSE-abort (see 5f684a49 "wcfs: Server.stop: Make sure to remove
      mount entry even if we had to use FUSE abort").
      
      This way the caller of Server.stop gets an error only if that second
      unmount fails, not on an unmount-1 error, which should be considered
      internal to the Server.stop implementation.
      
      If we don't hide that unmount-1 error and raise it to the caller, from
      outside it can confusingly look like "the server was successfully
      stopped, but nevertheless an error was raised".
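
      The resulting control flow can be sketched as follows (helper names
      here are hypothetical; the real implementation is in Server.stop):

          def stop(self):
              try:
                  self._unmount()              # unmount-1
              except Exception:
                  # internal to stop: force-abort FUSE, kill wcfs.go, retry
                  self._abort_fuse_and_kill()
                  self._unmount(lazy=True)     # only unmount-2 errors propagate
              os.rmdir(self.mountpoint)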
  6. 16 Nov, 2021 3 commits
  7. 15 Nov, 2021 1 commit
    • wcfs: Server.stop: Make sure to remove mount entry even if we had to use FUSE abort · 5f684a49
      Kirill Smelkov authored
      Server.stop currently tries to unmount, and if that fails it invokes
      FUSE abort and kills wcfs.go. However it does not call unmount a second
      time after such an abort, and this way the filesystem remains mounted
      (in ENOTCONN state) and rmdir(mountpoint) fails.
      
      -> Fix it by calling unmount a second time if we had to abort the FUSE
      connection. In that second try use lazy unmounting, because a regular
      unmount can still fail with "Device or resource busy" since there could
      still be client file descriptors left pointing to the mounted
      filesystem. With lazy mode, unmounting + followup rmdir hopefully
      always succeeds.
      
      Here is an example test run where one test timed out and the FUSE
      connection was aborted, but neither was the filesystem unmounted, nor
      the mountpoint directory deleted, which led to all followup tests
      failing in the setup assert that the test mountpoint does not exist:
      
      https://nexedijs.erp5.net/#/test_result_module/20211112-1ACEA62D/22
      
      This patch should fix those followup failures + fix another leakage of
      WCFS mounts in real services.
  8. 12 Nov, 2021 4 commits
    • tests: Don't leak WCFS log files · 54f6e741
      Kirill Smelkov authored
      By default every WCFS run creates several files in /tmp/wcfs.*.log.*, and
      without explicit cleanup those files are left hanging on testnodes. Over
      the last ~6 months we accumulated ~300K such files.
      
      Don't allow those files to be leaked, by instructing WCFS to log to
      stderr during test runs. This should also be useful for seeing details
      in the test output.
    • tests: Remove test NEO database after test run is over · 49251408
      Kirill Smelkov authored
      With NEO we were creating the test database on /tmp but we were not
      deleting it in the end. As a result many non-empty /tmp/neo_XXXXXX
      directories were being leaked.
      
      -> Fix it by creating the testdb directory ourselves and removing it at
      the end, similarly to FileStorage and ZEO.
      
      Fixes: 7fc4ec66 (tests: Allow to test with ZEO & NEO ZODB storages)
    • nxdtest: Don't run test.go for multiple GOMAXPROCS · 45178531
      Kirill Smelkov authored
      We run tests with different GOMAXPROCS because some WCFS bugs are likely
      to trigger only when there are just 1 or 2 main OS threads in WCFS.
      
      However test.go does not exercise filesystem functionality - it runs
      unit tests for ZBlk decoding, ΔBtail and similar. At the same time
      test.go:* currently occupies ~50% of the whole time to run the full
      testsuite, with the main consumer being ΔBtail random testing.
      
      -> Run test.go only once. This should save ~1000s per run and lower the
      whole time to run the wendelin.core testsuite on a testnode from ~60
      minutes to ~40 minutes.
    • wcfs: Make sure to remove mountpoint directory on Server.stop · d2fd8b77
      Kirill Smelkov authored
      Else every time test.py/wcfs is run, several empty directories are left
      in /dev/shm/wcfs - each corresponding to a WCFS server that was
      automatically spawned and stopped at the end of the test. Over time
      this can accumulate to a big number: e.g. ~20000 such directories were
      left on the testnode during the last 6 months.
  9. 09 Nov, 2021 2 commits
    • nxdtest: Run WCFS-related tests in verbose mode on testnodes · 5c13cc82
      Kirill Smelkov authored
      These are the early days of WCFS - we want full details, which in the
      default configuration might not be available, to see why WCFS gets
      stuck for one reason or another. See added comments for details.
    • setup: Fix egg_info after addition of δbtail.go · d07824dc
      Kirill Smelkov authored
      `python setup.py egg_info` stopped working after we added non-ASCII
      files, e.g. δbtail.go in 2ab4be93 (wcfs: xbtree: ΔBtail) and δftail.go
      in f980471f (wcfs: zdata: ΔFtail):
      
          (neo) (z-dev) (g.env) kirr@deca:~/src/neo/src/lab.nexedi.com/nexedi/wendelin.core$ python setup.py egg_info
          running egg_info
          writing requirements to wendelin.core.egg-info/requires.txt
          writing wendelin.core.egg-info/PKG-INFO
          writing top-level names to wendelin.core.egg-info/top_level.txt
          writing dependency_links to wendelin.core.egg-info/dependency_links.txt
          writing entry points to wendelin.core.egg-info/entry_points.txt
          package init file '__init__.py' not found (or not a regular file)
          /usr/lib/python2.7/distutils/filelist.py:64: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
            sortable_files.sort()
          Traceback (most recent call last):
            File "setup.py", line 416, in <module>
              """.splitlines()]
            File "/home/kirr/src/tools/go/pygolang/golang/pyx/build.py", line 118, in setup
              setuptools_dso.setup(**kw)
            File "/home/kirr/src/wendelin/venv/z-dev/lib/python2.7/site-packages/setuptools_dso/__init__.py", line 37, in setup
              _setup(**kws)
            File "/home/kirr/src/wendelin/venv/z-dev/lib/python2.7/site-packages/setuptools/__init__.py", line 162, in setup
              return distutils.core.setup(**attrs)
            File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
              dist.run_commands()
            File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
              self.run_command(cmd)
            File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
              cmd_obj.run()
            File "/home/kirr/src/wendelin/venv/z-dev/lib/python2.7/site-packages/setuptools/command/egg_info.py", line 296, in run
              self.find_sources()
            File "/home/kirr/src/wendelin/venv/z-dev/lib/python2.7/site-packages/setuptools/command/egg_info.py", line 303, in find_sources
              mm.run()
            File "/home/kirr/src/wendelin/venv/z-dev/lib/python2.7/site-packages/setuptools/command/egg_info.py", line 538, in run
              self.filelist.sort()
            File "/usr/lib/python2.7/distutils/filelist.py", line 64, in sort
              sortable_files.sort()
          UnicodeDecodeError: 'ascii' codec can't decode byte 0xce in position 0: ordinal not in range(128)
      
      This happens because by default setuptools collects filenames as str,
      not unicode, while our git_lsfiles - also registered into the
      setuptools.file_finders entrypoint - collects filenames as unicode.
      Previously everything was working because there were no non-ASCII
      filenames, and so unicode vs str coercion worked automatically. But
      now, with a filename like 'δbtail.go', it stopped working and raises
      UnicodeDecodeError.
      
      -> Fix it by adjusting git_lsfiles to collect filenames as UTF-8 encoded
      strings instead of unicode.
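
      A sketch of that adjustment (git_lsfiles internals are assumed; the
      essential part is only the encode step on Python 2):

          def git_lsfiles(dirname):
              # hypothetical: previous behaviour yielded unicode filenames
              for path in _git_lsfiles_unicode(dirname):
                  yield path.encode('utf-8')  # str, so distutils sort() works again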
  10. 08 Nov, 2021 4 commits
    • fixup! wcfs: Handle ZODB invalidations · 083251b3
      Kirill Smelkov authored
      Fix a last-minute error that crept in during
      kirr/wendelin.core@4af54da9:
      
          (neo) (z-dev) (g.env) kirr@deca:~/src/neo/src/lab.nexedi.com/nexedi/wendelin.core/wcfs$ go test
          # lab.nexedi.com/nexedi/wendelin.core/wcfs
          ./wcfs.go:957:4: Errorf format %s has arg sk of wrong type *lab.nexedi.com/nexedi/wendelin.core/wcfs.FileSock
      
      Amends 4430de41.
    • wcfs/internal/mm: Complete the package · 482b1a10
      Kirill Smelkov authored
      Add two functions that were developed during wendelin.core 2 α to the
      package, for completeness:
      
      - map_zero_into_ro complements map_zero_ro, but mmaps into user-provided buffer.
      - sync calls msync on the provided memory.
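
      A usage sketch (the exact signatures are assumptions):

          from wendelin.wcfs.internal import mm

          mm.map_zero_into_ro(buf)   # mmap zeros into the provided buffer, read-only
          mm.sync(mem)               # msync the provided memory region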
    • fixup! wcfs: client: Provide client package to care about isolation protocol details · a49d737e
      Kirill Smelkov authored
      Remove an outdated TODO, because test_wcfs_watch_before_create passes
      these days. It was fixed after ΔFtail was taught about epochs, and the
      fix was reflected in kirr/wendelin.core@63ae8326.
      
      Amends 10f7153a.
    • lib/zodb: zconn_at: Fix how ZODB4 is asserted to be patched · fc0445c8
      Kirill Smelkov authored
      Fix how unpatched ZODB4 is reported to lack the required patch:
      
      Before:
      
          Traceback (most recent call last):
            File "/home/kirr/src/wendelin/wendelin.core/lib/tests/test_zodb.py", line 251, in test_zconn_at
              assert zconn_at(conn1) == at0
            File "/home/kirr/src/wendelin/wendelin.core/lib/zodb.py", line 162, in zconn_at
              assert 'conn:MVCC-via-loadBefore-only' in ZODB.nxd_patches, \
          AttributeError: 'module' object has no attribute 'nxd_patches'
      
      After:
      
          Traceback (most recent call last):
            File "/home/kirr/src/wendelin/wendelin.core/lib/tests/test_zodb.py", line 251, in test_zconn_at
              assert zconn_at(conn1) == at0
            File "/home/kirr/src/wendelin/wendelin.core/lib/zodb.py", line 163, in zconn_at
              "nexedi/ZODB!1")
            File "/home/kirr/src/wendelin/wendelin.core/lib/zodb.py", line 191, in _zassertHasNXDPatch
              (zmajor, patch, details_link))
          AssertionError: ZODB4 is not patched with required Nexedi patch 'conn:MVCC-via-loadBefore-only'
                  See nexedi/ZODB!1 for details
      
      Fixes 1f866c00 (lib/zodb: Teach zconn_at to work on ZODB4).
  11. 28 Oct, 2021 12 commits
    • lib/zodb: zstor_2zurl: Explicitly reject MappingStorage · fe9c46c9
      Kirill Smelkov authored
      It is not possible for WCFS to access the data of an in-RAM storage of
      another process. But without an explicit explanation the error message
      was confusing - something like:
      
          NotImplementedError: don't know how to extract zurl from <ZODB.MappingStorage.MappingStorage object at 0x7f28f04cea10>
      
      which suggests it was just not implemented.
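
      The explicit rejection can be sketched like this (zstor_2zurl internals
      are assumed):

          def zstor_2zurl(zstor):
              if isinstance(zstor, ZODB.MappingStorage.MappingStorage):
                  raise NotImplementedError(
                      "wcfs: cannot use in-RAM MappingStorage: its data is not"
                      " accessible from outside of this process")
              ...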
    • bigfile/zodb: Teach ZBigFile backend to use WCFS · c5e18c74
      Kirill Smelkov authored
      By using WCFS as mmap-overlay for the base data(*). WCFS-mode is still
      opt-in, with the default remaining the old full user-space virtual
      memory manager mode as initially introduced in 2015.
      
      Wendelin.core should now be usable, in draft form, in WCFS mode.
      
      This patch is organized as follows:
      
      - file_zodb.cpp provides mmap-overlay operations for WCFS implemented via
        WCFS client library.
      - file_zodb.py is adjusted accordingly to use WCFS if requested.
        Low-level things specific to gluing to file_zodb.cpp are moved to _file_zodb.pyx.
      - the rest of the changes are drive-bys induced by the main ones.
      
      (*) see the following patches for what is mmap-overlay:
      
      - fae045cc  (bigfile/virtmem: Introduce "mmap overlay" mode)
      - 23362204  (bigfile/py: Allow PyBigFile backend to expose "mmap overlay" functionality)
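
      WCFS-mode is selected the same way the test runs quoted earlier in this
      log do it - via the $WENDELIN_CORE_VIRTMEM environment variable:

          $ WENDELIN_CORE_VIRTMEM='r:wcfs+w:uvmm' python -m pytest ...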
      
      Some preliminary history:
      
      kirr/wendelin.core@01916f09    X Draft demo that reading data through wcfs works
      kirr/wendelin.core@fd58082a    X Fix build on old GCC
      kirr/wendelin.core@f622e751    X tests: Stop wcfs spawned during tests
      kirr/wendelin.core@f118617b    X tests: Don't try to stop wcfs that is already exited
    • wcfs: client: Provide virtmem integration · 986cf86e
      Kirill Smelkov authored
      Provide integration with virtmem, so that a WCFS Mapping can be
      associated with and managed under a virtmem VMA. In other words,
      provide support so that WCFS can be used as a ZBigFile backend in "mmap
      overlay" mode (see fae045cc "bigfile/virtmem: Introduce "mmap overlay"
      mode" for a description of mmap-overlay mode).
      
      We'll need this functionality for ZBigFile + WCFS client integration.
      
      Virtmem integration will be tested via running whole wendelin.core functional
      testsuite in wcfs-mode after the next patch.
      
      Quoting added description:
      
      ---- 8< ----
      
      Integration with wendelin.core virtmem layer
      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      
      This client package can be used standalone, but additionally provides
      integration with wendelin.core userspace virtual memory manager: when a
      Mapping is created, it can be associated as serving base layer for a
      particular virtmem VMA via FileH.mmap(vma=...). In that case, since virtmem
      itself adds another layer of dirty pages over read-only base provided by
      Mapping(+)
      
                       ┌──┐                      ┌──┐
                       │RW│                      │RW│    ← virtmem VMA dirty pages
                       └──┘                      └──┘
                                 +
                                                         VMA base = X@at view provided by Mapping:
      
                                                ___        /@revA/bigfile/X
              __                                           /@revB/bigfile/X
                     _                                     /@revC/bigfile/X
                                 +                         ...
           ───  ───── ──────────────────────────   ─────   /head/bigfile/X
      
      the Mapping will interact with virtmem layer to coordinate
      updates to mapping virtual memory.
      
      How it works
      ~~~~~~~~~~~~
      
      Wcfs client integrates with virtmem layer to support virtmem handle
      dirtying pages of read-only base-layer that wcfs client provides via
      isolated Mapping. For wcfs-backed bigfiles every virtmem VMA is interlinked
      with Mapping:
      
            VMA     -> BigFileH -> ZBigFile -----> Z
             ↑↓                                    O
           Mapping  -> FileH    -> wcfs server --> DB
      
      When a page is write-accessed, virtmem mmaps in a page of RAM in place of
      accessed virtual memory, copies base-layer content provided by Mapping into
      there, and marks that page as read-write.
      
      Upon receiving pin message, the pinner consults virtmem, whether
      corresponding page was already dirtied in virtmem's BigFileH (call to
      __fileh_page_isdirty), and if it was, the pinner does not remmap Mapping
      part to wcfs/@revX/f and just leaves dirty page in its place, remembering
      pin information in fileh._pinned.
      
      Once dirty pages are no longer needed (either after discard/abort or
      writeout/commit), virtmem asks wcfs client to remmap corresponding regions
      of Mapping in its place again via calls to Mapping.remmap_blk for previously
      dirtied blocks.
      
      The scheme outlined above does not need to split Mapping upon dirtying an
      inner page.
      
      See bigfile_ops interface (wendelin/bigfile/file.h) that explains base-layer
      and overlaying from virtmem point of view. For wcfs this interface is
      provided by small wcfs client wrapper in bigfile/file_zodb.cpp.
      
      (+) see bigfile_ops interface (wendelin/bigfile/file.h) that gives virtmem
          point of view on layering.
      
      ----------------------------------------
      
      Some preliminary history:
      
      kirr/wendelin.core@f330bd2f    X wcfs/client: Overview += interaction with virtmem layer
    • wcfs: client: Add wczsync package to maintain WCFS connection in sync to ZODB connection · e11edc70
      Kirill Smelkov authored
      For ZBigFile + WCFS client integration we'll need to open WCFS
      connections that observe the database at the same state as the current
      ZODB connection. Later that WCFS connection needs to adjust its on-WCFS
      view in accordance with how the ZODB connection adjusts its own.
      
      Wczsync provides a function to do so: pywconnOf(zconn) will open a WCFS
      connection and maintain it in sync with ZODB connection zconn.
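
      A usage sketch (mirroring how _file_zodb.pyx calls it, as seen in
      tracebacks elsewhere in this log):

          from wendelin.wcfs.client import _wczsync

          # WCFS connection viewing the database at the same state as zconn,
          # kept in sync with zconn from here on
          wconn = _wczsync.pywconnOf(zconn)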
      
      Some preliminary history:
      
      8bf8f23b    X bigfile/_file_zodb: Fix logic around ZSync usage
      571cb737    fixup! X bigfile/_file_zodb: Fix logic around ZSync usage
      a9a82d5a    X bigfile/_file_zodb: Fix ZSync to close not only wconn, but also wconn.wc through which wconn was created
      cf92937f    X wcfs: Move wconn<->zconn sync functionality into wcfs.client._wczsync
      7203d7ab    X wcfs: Fix ZSync to close wconn on zdb.close, even if zconn stays alive
    • lib/zodb: Teach zconn_at to work on ZODB4 · 1f866c00
      Kirill Smelkov authored
      In 3bd82127 (lib/zodb: Add zconn_at draft (ZODB5 only)) we added
      zconn_at function to find out as of which state a ZODB connection is
      viewing the database. That was ZODB5-only however.
      
      Let's add support for ZODB4 now - by requiring ZODB4-wc2 - a version of
      ZODB4 with MVCC backported from ZODB5: nexedi/ZODB!1
      
      This makes wendelin.core to work on either ZODB5 or ZODB4-wc2, but not
      plain ZODB4. However as zconn_at will be used only for WCFS-integration,
      non-wcfs mode will continue to work on all ZODB5, ZODB4-wc2 and plain
      ZODB4.
      
      ZBigFile + WCFS client integration will use zconn_at to open a WCFS
      connection that corresponds to the ZODB connection.
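
      A usage sketch (zconn_at as exercised by test_zconn_at in the
      tracebacks quoted elsewhere in this log):

          from wendelin.lib.zodb import zconn_at

          at = zconn_at(conn)   # tid of the database state the connection is viewing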
      
      Preliminary history:
      
      kirr/wendelin.core@1c3b7750    X zconn_at for ZODB4
    • lib/zodb: Add ZODB.Connection.onShutdownCallback · 1dba3a9a
      Kirill Smelkov authored
      Add a patch to ZODB.Connection to support a callback after the database
      is closed. ZBigFile + WCFS client integration will use this callback to
      close the WCFS connection when the corresponding ZODB.DB is closed.
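
      A usage sketch (the exact callback shape is an assumption):

          class ZSync(object):
              def on_connection_shutdown(self):
                  # called after the corresponding ZODB.DB was closed
                  self.wconn.close()

          zconn.onShutdownCallback(zsync)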
      
      Preliminary history:
      
      kirr/wendelin.core@a26d9659    X lib/zodb: Connection += onShutdownCallback
    • lib/zodb: Teach Connection.onResyncCallback to work on ZODB4 · ceadfcc7
      Kirill Smelkov authored
      In 959ae2d0 (lib/zodb: Add patch to ZODB.Connection to support callback
      on connection DB view change) we added a patch for ZODB.Connection to
      support a callback when the database view of the connection changes. At
      that time the patch worked for ZODB5, and ZODB4 was a TODO.
      Let's add support for ZODB4 (both ZODB4 and ZODB4-wc2) now.
      
      As a reminder: ZBigFile + WCFS client integration will use this callback
      to keep WCFS connection in sync with ZODB connection.
      
      Preliminary history:
      
      533a4cfa     X onResyncCallback for ZODB4
    • bigfile/py: Allow PyBigFile backend to expose "mmap overlay" functionality · 23362204
      Kirill Smelkov authored
      This patch logically continues the previous change `bigfile/virtmem:
      Introduce "mmap overlay" mode` and exposes mmap-overlay functionality
      to Python: if a PyBigFile backend provides a .blkmmapper PyCapsule, the
      mmap-related methods will be extracted from it and passed on through to
      virtmem - see _bigfile.h for details.
      
      ZBigFile will use this to hook into using WCFS.
    • bigfile/virtmem: Introduce "mmap overlay" mode · fae045cc
      Kirill Smelkov authored
      with the intention to later use WCFS through it.
      
      Before this patch virtmem had only one mode: a BigFile backend was
      providing loadblk and storeblk methods, and on every block access
      loadblk was called to load block data into an allocated RAM page.

      However with WCFS virtmem won't need to do anything to load data -
      because loading from head/bigfile/f mmapped through the OS will be
      handled by the OS directly. Thus for wcfs, that leaves virtmem to
      handle only dirtying and writeout.
      
      -> Introduce "mmap overlay" mode into virtmem to handle WCFS-like
      BigFile backends - that can provide read-only base layer suitable for
      mmapping.
      
      This patch is organized as follows:
      
      - fileh_open gains a flags argument to indicate which mode to use for the
        opened fileh. Correspondingly, BigFileH gains an .mmap_overlay bitfield.
        (virtmem.h)
      
      - struct bigfile_ops is extended with 3 optional methods that a BigFile
        backend might provide to support mmap-overlay mode:
      
        * mmap_setup_read,
        * remmap_blk_read, and
        * munmap
      
        (see file.h changes for documentation of this new interface)
      
      - if opened with MMAP_OVERLAY flag, virtmem is using those methods to
        organize VMA views backed by read-only base mmap layer and writeout
        for such VMAs (virtmem.c)
      
      - a test is added to exercise MMAP_OVERLAY virtmem mode (test_virtmem.c)
      
      - everything else, including bigfile.py, is switched to use
        DONT_MMAP_OVERLAY unconditionally for now.
      
      In internal comments inside virtmem the new mode is interchangeably
      called "mmap overlay" and "wcfs", even though wcfs is not yet hooked in
      to be used for mmap-overlaying.
      
      Some preliminary history:
      
      kirr/wendelin.core@fb6932a2    X Split PAGE_LOADED -> PAGE_LOADED, PAGE_LOADED_FOR_WRITE
      kirr/wendelin.core@4a20a573    X Settled on what should happen after writeout for wcfs case
      kirr/wendelin.core@f084ff9b    X Transition to all VMA under 1 fileh to be either all based on wcfs or all based on !wcfs
    • wcfs: client: Provide client package to care about isolation protocol details · 10f7153a
      Kirill Smelkov authored
      This patch follows up on the previous patch, which added the
      server-side part of isolation protocol handling, and adds a client
      package that takes care of WCFS isolation protocol details and provides
      to clients a simple interface to an isolated view of bigfile data on
      WCFS, similar to regular files: given a particular revision of database
      @at, it provides synthetic read-only bigfile memory mappings with data
      corresponding to the @at state, but using /head/bigfile/* most of the
      time to build and maintain the mappings.
      
      The patch is organized as follows:
      
      - wcfs.h and wcfs.cpp bring in usage documentation, an internal
        overview and the main part of the implementation.
      
      - wcfs/client/client_test.py provides the tests.
      
      - The rest of the changes in wcfs/client/ are to support the implementation and tests.
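
      A condensed usage sketch of the resulting client API (Python-level
      bindings assumed to mirror the C++ API; compare the API overview quoted
      below and the client_test.py excerpt earlier in this log):

          wc    = wcfs.join(zurl)              # filesystem-level connection (WCFS)
          wconn = wc.connect(at)               # logical connection viewing the DB @at
          fh    = wconn.open(foid)             # isolated view of one bigfile
          m     = fh.mmap(blk_start, blk_len)  # memory mapping of that view
          # ... read m.mem; later move the view to another database state:
          wconn.resync(at2)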
      
      Quoting package documentation for the reference:
      
      ---- 8< ----
      
      Package wcfs provides WCFS client.
      
      This client package takes care about WCFS isolation protocol details and
      provides to clients simple interface to isolated view of bigfile data on
      WCFS similar to regular files: given a particular revision of database @at,
      it provides synthetic read-only bigfile memory mappings with data
      corresponding to @at state, but using /head/bigfile/* most of the time to
      build and maintain the mappings.
      
      For its data a mapping to bigfile X mostly reuses kernel cache for
      /head/bigfile/X with amount of data not associated with kernel cache for
      /head/bigfile/X being proportional to δ(bigfile/X, at..head). In the usual
      case where many client workers simultaneously serve requests, their database
      views are a bit outdated, but close to head, which means that in practice
      the kernel cache for /head/bigfile/* is being used almost 100% of the time.
      
      A mapping for bigfile X@at is built from OS-level memory mappings of
      on-WCFS files as follows:
      
                                                ___        /@revA/bigfile/X
              __                                           /@revB/bigfile/X
                     _                                     /@revC/bigfile/X
                                 +                         ...
           ───  ───── ──────────────────────────   ─────   /head/bigfile/X
      
      where @revR mmaps are being dynamically added/removed by this client package
      to maintain X@at data view according to WCFS isolation protocol(*).
      
      API overview
      
       - `WCFS` represents filesystem-level connection to wcfs server.
       - `Conn` represents logical connection that provides view of data on wcfs
         filesystem as of particular database state.
       - `FileH` represent isolated file view under Conn.
       - `Mapping` represents one memory mapping of FileH.
      
      A path from WCFS to Mapping is as follows:
      
       WCFS.connect(at)                    -> Conn
       Conn.open(foid)                     -> FileH
       FileH.mmap([blk_start +blk_len))    -> Mapping
      
      A connection can be resynced to another database view via Conn.resync(at').
      
      Documentation for classes provides more thorough overview and API details.
      
      --------
      
      (*) see wcfs.go documentation for WCFS isolation protocol overview and details.
      
      .
      
      Wcfs client organization
      ~~~~~~~~~~~~~~~~~~~~~~~~
      
      Wcfs client provides to its users isolated bigfile views backed by data on
      WCFS filesystem. In the absence of Isolation property, wcfs client would
      reduce to just directly using OS-level file wcfs/head/f for a bigfile f. On
      the other hand there is a simple, but inefficient, way to support isolation:
      for @at database view of bigfile f - directly use OS-level file wcfs/@at/f.
      The latter works, but is very inefficient because OS-cache for f data is not
      shared in between two connections with @at1 and @at2 views. The cache is
      also lost when connection view of the database is resynced on transaction
      boundary. To support isolation efficiently, wcfs client uses wcfs/head/f
      most of the time, but injects wcfs/@revX/f parts into mappings to maintain
      f@at view driven by pin messages that wcfs server sends to client in
      accordance to WCFS isolation protocol(*).
      
      Wcfs server sends pin messages synchronously triggered by access to mmaped
      memory. That means that a client thread, that is accessing wcfs/head/f mmap,
      is completely blocked while wcfs server sends pins and waits to receive acks
      from all clients. In other words on-client handling of pins has to be done
      in separate thread, because wcfs server can also send pins to client that
      triggered the access.
      
      Wcfs client implements pins handling in so-called "pinner" thread(+). The
      pinner thread receives pin requests from wcfs server via watchlink handle
      opened through wcfs/head/watch. For every pin request the pinner finds
      corresponding Mappings and injects wcfs/@revX/f parts via Mapping._remmapblk
      appropriately.
      
      The same watchlink handle is used to send client-originated requests to wcfs
      server. The requests are sent to tell wcfs that client wants to observe a
      particular bigfile as of particular revision, or to stop watching it.
      Such requests originate from regular client threads - not pinner - via entry
      points like Conn.open, Conn.resync and FileH.close.
      
      Every FileH maintains fileh._pinned {} with currently pinned blk -> rev. This
      dict is updated by pinner driven by pin messages, and is used when
      new fileh Mapping is created (FileH.mmap).
      
      In wendelin.core a bigfile has semantic that it is infinite in size and
      reads as all zeros beyond region initialized with data. Memory-mapping of
      OS-level files can also go beyond file size, however accessing memory
      corresponding to file region after file.size triggers SIGBUS. To preserve
      wendelin.core semantic wcfs client mmaps-in zeros for Mapping regions after
      wcfs/head/f.size. For simplicity it is assumed that bigfiles only grow and
      never shrink. It is indeed currently so, but will have to be revisited
      if/when wendelin.core adds bigfile truncation. Wcfs client restats
      wcfs/head/f at every transaction boundary (Conn.resync) and remembers f.size
      in FileH._headfsize for use during one transaction(%).
      
      --------
      
      (*) see wcfs.go documentation for WCFS isolation protocol overview and details.
      (+) currently, for simplicity, there is one pinner thread for each connection.
          In the future, for efficiency, it might be reworked to be one pinner thread
          that serves all connections simultaneously.
      (%) see _headWait comments on how this has to be reworked.
      
      Wcfs client locking organization
      
      The wcfs client needs to synchronize regular user threads vs each
      other and vs the pinner. A major lock, Conn.atMu, protects changes
      to Conn's view of the database. Whenever atMu.W is taken, Conn.at
      is changing (Conn.resync); conversely, whenever atMu.R is taken,
      Conn.at is stable (roughly speaking, Conn.resync is not running).
      
      Similarly to wcfs.go(*), several locks that protect internal data
      structures are minor to Conn.atMu - they need to be taken only
      under atMu.R (to synchronize e.g. multiple fileh opens running
      simultaneously), but do not need to be taken at all if atMu.W is
      held. In data structures such locks are annotated as follows:
      
           sync::Mutex xMu;    // atMu.W  |  atMu.R + xMu
      
      After atMu, Conn.filehMu protects the registry of opened file
      handles (Conn._filehTab), and FileH.mmapMu protects the registry
      of created Mappings (FileH.mmaps) and FileH._pinned.
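
      Put together, the client-side structures and their locks look
      roughly like this (a sketch only; std::shared_mutex stands in for
      the RW mutex actually used, and the registries are shown as
      comments):

           #include <mutex>
           #include <shared_mutex>

           struct Conn {
               std::shared_mutex atMu;    // W: .at changes (resync) | R: .at stable
               std::shared_mutex filehMu; // atMu.W  |  atMu.R + filehMu
               // _filehTab: {} foid -> FileH     (protected by filehMu)
           };

           struct FileH {
               std::mutex mmapMu;         // atMu.W  |  atMu.R + mmapMu
               // mmaps:   [] of created Mappings (protected by mmapMu)
               // _pinned: {} blk -> rev          (protected by mmapMu)
           };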
      
      Several locks are RWMutex instead of just Mutex not only to allow
      more concurrency, but, in the first place, for correctness: the
      pinner thread, being a core element in handling the WCFS isolation
      protocol, is effectively invoked synchronously from other threads
      via messages coming through the wcfs server. For example
      Conn.resync sends a watch request to the wcfs server and waits for
      the answer. The wcfs server, in turn, might send corresponding pin
      messages to the pinner and _wait_ for the pinner's reply before
      answering resync:
      
             - - - - - -
            |       .···|·····.        ---->   = request
               pinner <------.↓        <····   = response
            |           |   wcfs
               resync -------^↓
            |      `····|·····
             - - - - - -
            client process
      
      This creates the necessity to use RWMutex for locks that the
      pinner and other parts of the code could be using at the same time
      in synchronous scenarios similar to the above. These locks are:
      
           - Conn.atMu
           - Conn.filehMu
      
      Note that FileH.mmapMu is a regular - not RW - mutex, since
      nothing in the wcfs client calls into the wcfs server via the
      watchlink while holding mmapMu.
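
      The scenario from the diagram, reduced to locks only. This is a
      sketch with std::shared_mutex standing in for Conn.atMu; with a
      plain mutex the two acquisitions below would deadlock:

           #include <shared_mutex>

           std::shared_mutex atMu;  // stand-in for Conn.atMu

           // resync: holds atMu.R while waiting for the server's "ok".
           void resync() {
               std::shared_lock<std::shared_mutex> r(atMu);
               // send "watch f @at" to wcfs; wcfs first pins via the
               // pinner, waits for its "ack", and only then replies "ok".
           }

           // pinner: woken by wcfs while resync still holds atMu.R.
           void pinner_handle_pin() {
               std::shared_lock<std::shared_mutex> r(atMu); // ok: R + R coexist
               // remap blk, update ._pinned, reply "ack".
           }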
      
      The ordering of locks is:
      
           Conn.atMu > Conn.filehMu > FileH.mmapMu
      
      The pinner takes the following locks:
      
           - wconn.atMu.R
           - wconn.filehMu.R
           - fileh.mmapMu (to read .mmaps  +  write ._pinned)
      
      (*) see "Wcfs locking organization" in wcfs.go
      
      Handling of fork
      
      When a process calls fork, the OS copies its memory and creates a
      child process with only one thread. That child inherits file
      descriptors and memory mappings from the parent. To correctly
      continue using Conn, FileH and Mappings, the child would have to
      recreate the pinner thread and reconnect to wcfs via a reopened
      watchlink. The reason is that without reconnection - by using the
      watchlink file descriptor inherited from the parent - the child
      would interfere with the parent-wcfs exchange, and neither parent
      nor child could continue normal protocol communication with WCFS.
      
      For simplicity, since fork is seldom used for things besides a
      followup exec, the wcfs client currently takes the straightforward
      approach of disabling mappings and detaching from the WCFS server
      in the child right after fork. This ensures that there is no
      interference with the parent-wcfs exchange should the child decide
      not to exec and continue running as the forked process. Without
      this protection the interference might even come automatically via
      e.g. Python GC -> PyFileH.__del__ -> FileH.close -> message to
      WCFS.
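
      One way to arrange such detaching is a pthread_atfork child
      handler. The sketch below is hypothetical - the actual mechanism
      in the wcfs client may differ - and the globals are stand-ins:

           #include <pthread.h>
           #include <unistd.h>

           static int  watchlink_fd  = -1;    // fd of opened wcfs/head/watch
           static bool wcfs_detached = false; // set in the child after fork

           // Runs in the child immediately after fork: silently detach
           // from wcfs without sending anything - the fd is shared with
           // the parent and any write from here would corrupt the
           // parent's protocol exchange.
           static void afterfork_child() {
               if (watchlink_fd != -1) {
                   close(watchlink_fd);       // close our copy only
                   watchlink_fd = -1;
               }
               wcfs_detached = true;          // FileH.close etc. become no-ops
           }

           static void setup_fork_handling() {
               pthread_atfork(/*prepare=*/nullptr, /*parent=*/nullptr,
                              afterfork_child);
           }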
      
      ----------------------------------------
      
      Some preliminary history:
      
      kirr/wendelin.core@a8fa9178    X wcfs: move client tests into client/
      kirr/wendelin.core@990afac1    X wcfs/client: Package overview (draft)
      kirr/wendelin.core@3f83469c    X wcfs: client: Handle fork
      kirr/wendelin.core@0ed6b8b6    fixup! X wcfs: client: Handle fork
      kirr/wendelin.core@24378c46    X wcfs: client: Provide Conn.at()
      10f7153a
    • Kirill Smelkov's avatar
      wcfs: Provide isolation to clients · 6f0cdaff
      Kirill Smelkov authored
      Via a custom isolation protocol that both the server and clients
      must cooperatively follow. This is the core change that enables
      the file cache to be practically shared while each client can
      still be provided with an isolated view of the database.
      
      This patch brings only server changes and tests, plus the minimum
      client bits to support the tests. The client library that will
      implement the isolation protocol on the client side will come
      next.
      
      This patch is organized as follows:
      
      - wcfs.go brings in a description of the protocol, an overview of
        how the server implements that protocol, and the implementation
        itself.
        See also notes.txt
      
      - wcfs_test.py brings in tests for server implementation.
        tWCFS._abort_ontimeout had to be moved, in nogil mode, into
        wcfs_test.pyx to avoid deadlock on the GIL (see comments in
        wcfs_test.pyx for details).
      
      - files added in wcfs/client/ are needed to provide the
        client-side implementation of WatchLink - the message exchange
        protocol over an opened head/watch file - for the tests. The
        client-side watchlink implementation lives in
        wcfs/client/wcfs_watchlink.{h,cpp}. The other additions in
        wcfs/client/ are there to support that and to expose the
        WatchLink to Python.
      
        Client-side bits are done in C++ right away because the upcoming
        WCFS client library will be implemented in C++ to work in nogil
        mode, in order to avoid deadlock on the GIL: the client-side
        pinner thread might be woken up synchronously by the WCFS server
        at any moment, including when another client thread already
        holds the GIL and is paused by WCFS.
      
      Some preliminary history:
      
      kirr/wendelin.core@9b4a42a3    X invalidation design draftly settled
      kirr/wendelin.core@27d91d47    X δFtail settled
      kirr/wendelin.core@c27c1940    X mmap over under pagefault to this mmapping works
      kirr/wendelin.core@d36b171f    X ptrace when client is under pagefault or syscall won't work
      kirr/wendelin.core@c1f5bb19    X notes on why lazy-invalidate approach was taken
      kirr/wendelin.core@4fbdd270    X Proof that that it is possible to change mmapping while under pagefault to it
      kirr/wendelin.core@33e0dfce    X ΔTail draftly done
      kirr/wendelin.core@12628943    X make sure "bye" is always processed immediately - even if a handleWatch is currently blocked
      kirr/wendelin.core@af0a64cb    X test for "bye" canceling blocked handlers
      kirr/wendelin.core@996dc6a8    X Fix race in test
      kirr/wendelin.core@43915fe9    X wcfs: Don't forbid simultaneous watch requests
      kirr/wendelin.core@941dc54b    X wcfs: threading.Lock -> sync.Mutex
      kirr/wendelin.core@d75b2304    X wcfs: Move _abort_ontimeout to pyx/nogil
      kirr/wendelin.core@79234659    X Notes on why eagier invalidation was rejected
      kirr/wendelin.core@f05271b1    X Test that sysread(/head/watch) can be interrupted
      kirr/wendelin.core@5ba816da    X restore test_wcfs_watch_robust after f05271b1.
      kirr/wendelin.core@4bd88564    X "Invalidation protocol" -> "Isolation protocol"
      kirr/wendelin.core@f7b54ca4    X avoid fmt::vsprintf  (now compils again with latest pygolang@master)
      kirr/wendelin.core@0a8fcd9d    X wcfs/client: Move EOF -> pygolang
      kirr/wendelin.core@153e02e6    X test_wcfs_watch_setup and test_wcfs_watch_setup_ahead work again
      kirr/wendelin.core@17f98edc    X wcfs: client: os: Factor syserr -> string into _sysErrString
      kirr/wendelin.core@7b0c301c    X wcfs: tests: Fix tFile.assertBlk not to segfault on a test failure
      kirr/wendelin.core@b74dda09    X Start switching Track from Track(key) to Track(keycov)
      kirr/wendelin.core@8b5d8523    X Move tracking of which blocks were accessed from wcfs to ΔFtail
      6f0cdaff
    • Kirill Smelkov's avatar
      wcfs: Handle ZODB invalidations · 4430de41
      Kirill Smelkov authored
      Use ΔFtail.Track on every READ, and, upon receiving a ZODB
      invalidation, query the accumulated ΔFtail about which blocks of
      which files have been changed. Then invalidate those blocks in the
      OS file cache.
      
      See added documentation to wcfs.go and notes.txt for details.
      
      Now the filesystem is no longer stale: it provides a view of data
      that is up to date wrt changes in the ZODB storage.
      
      Some preliminary history:
      
      kirr/wendelin.core@9b4a42a3    X invalidation design draftly settled
      kirr/wendelin.core@27d91d47    X δFtail settled
      kirr/wendelin.core@33e0dfce    X ΔTail draftly done
      kirr/wendelin.core@822366a7    X keeping fd to root opened prevents the filesystem from being unmounted
      kirr/wendelin.core@89ad3a79    X Don't keep ZBigFile activated during whole current transaction
      kirr/wendelin.core@245511ac    X Give pointer on from where to get nxd-fuse.ko
      kirr/wendelin.core@d1cd128c    X Hit FUSE-related deadlock
      kirr/wendelin.core@d134ee44    X FUSE lookup deadlock should be hopefully fixed
      kirr/wendelin.core@0e60e9ff    X wcfs: Don't noise ZWatcher trace logs with "select ..."
      kirr/wendelin.core@bf9a7405    X No longer rely on ZODB cache invariant for invalidations
      4430de41