1. 16 Oct, 2023 1 commit
    • Reimplement pack in a scalable way, partial pack & approval/reject of pack orders · 4c3b6c4d
      Julien Muchembled authored
      This is still pack without garbage collection, and without deleting
      any transaction metadata ('trans' table).
      
      Partial pack means that the client can pass a list of oids: only these
      oids will be packed. No API is defined yet at the IStorage level.
      
      Storage nodes pack in the background, independently of other storage
      nodes, partition by partition, and calling IStorage.pack() returns
      immediately (though internally, NEO does have a mechanism to wait
      until it's done, which can be required for some ZODB unit tests).
      
      This new implementation also introduces the concept of signing pack
      orders. The idea is that calling IStorage.pack() only records a pack
      order in the database, which can be reviewed/approved/rejected using
      a UI that is yet to be done. For the moment, pack orders are
      automatically approved (by the master).
      
      Internally, pack orders are stored as extra metadata of a transaction.
      IOW, IStorage.pack() implies the commit of an (empty) transaction.
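
      A minimal sketch of this recording step (helper names hypothetical,
      not NEO's real internals):

          def pack(storage, pack_tid, oids=None):
              # The pack order is only recorded here; the actual packing is
              # done later, in the background, once the order is approved.
              order = {'tid': pack_tid, 'oids': oids, 'approved': None}
              # Stored as extra metadata of an (empty) transaction.
              storage.commit_empty_transaction(extension={'pack': order})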
      
      IStorage.pack() can be called without waiting for the previous one
      to be completed. Pack orders are processed in the same order as they
      are requested (see the sketch below):
      - an unsigned pack order blocks the processing of any newer pack order;
      - rejected pack orders are ignored.
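
      A rough illustration of this rule (function names hypothetical):

          def process_pack_orders(orders):
              # Orders are iterated oldest first, i.e. in request order.
              for order in orders:
                  if order['approved'] is None:
                      break            # unsigned: blocks anything newer
                  if not order['approved']:
                      continue         # rejected: simply ignored
                  execute_pack(order)  # approved: actually pack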
      
      Approving a pack order also triggers pack on backup clusters.
      That's the simplest way to have everything consistent.
      Maybe later we could identify scenarios where it would be ok
      to unsign pack orders during asynchronous replication.
      
      The feature to check replicas is marked as experimental because it is
      not aware of differences that can happen during pack operations.
      _______________________________________________________________________
      
      About concurrency within the storage node, a first implementation
      extended what was done to delete partitions in the background (see the
      previous commit). But here, the job can't easily be split into slices
      that are never too big:
      - it's simpler to never split the processing of an oid, but this can
        freeze the application for a long time when packing an oid that was
        modified many times (e.g. 30 min for an oid with 20 million
        historical records);
      - a later attempt to process an oid in several passes turned out to be
        inefficient, maybe due to a limit in RocksDB (packing the oid in the
        above example would take days, during which NEO is significantly
        slower).
      
      So background database jobs were moved to a separate thread, using a
      separate connection to the underlying database. This is obviously
      only useful for the MySQL backend. In order to share as much code as
      possible between backends, SQLite also does the work in a separate
      thread but shares the main connection instead of opening a separate
      one (so this backend would not be suited to the above example).
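
      As a rough illustration (hypothetical structure, not the actual
      backend code), the shared logic could look like:

          import threading
          try:
              import queue
          except ImportError:  # Python 2
              import Queue as queue

          class BackgroundDatabaseJobs(object):
              """Run background pack jobs in a dedicated thread.

              MySQL gives this thread its own connection; SQLite passes
              its main connection instead (hence less suited here).
              """
              def __init__(self, connection):
                  self.conn = connection
                  self._jobs = queue.Queue()
                  t = threading.Thread(target=self._run)
                  t.daemon = True
                  t.start()

              def schedule(self, job):
                  self._jobs.put(job)

              def _run(self):
                  while True:
                      job = self._jobs.get()
                      job(self.conn)  # e.g. pack one partition step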
      
      But deleting raw data with a secondary connection is not possible
      without fsyncing too often (or running into transaction isolation
      issues): these
      deletions are deferred by recording them in a new table, which is
      processed later with the main connection. This is not so bad because
      the actual deletion of raw data is usually more efficient this way
      (more sequential IO).
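
      For example (hypothetical table and column names), the background
      thread only records what to delete, and the main connection performs
      the deletions later:

          # Background thread, secondary connection: only record the ids.
          def defer_deletion(secondary_conn, data_ids):
              secondary_conn.executemany(
                  "INSERT INTO deferred_deletion (data_id) VALUES (%s)",
                  [(i,) for i in data_ids])

          # Main connection, later: delete the raw data in batches, which
          # tends to produce more sequential IO.
          def flush_deferred_deletions(main_conn, batch=1000):
              cur = main_conn.cursor()
              cur.execute("SELECT data_id FROM deferred_deletion LIMIT %d"
                          % batch)
              ids = [row[0] for row in cur.fetchall()]
              if ids:
                  marks = ",".join(["%s"] * len(ids))
                  cur.execute("DELETE FROM data WHERE id IN (%s)" % marks, ids)
                  cur.execute("DELETE FROM deferred_deletion WHERE data_id"
                              " IN (%s)" % marks, ids)
                  main_conn.commit()
              return len(ids)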
      
      Here are a few numbers:
      - without load: 10h45 (12h for the first reimplementation)
      - with a load that normally takes 6h58:
        - load: 7h33 (so 8.4% slower)
        - pack: 15h36 (+4h51)
      
      As explained above, the pack of a partition is split into 2 steps:
      - the longest one (here 78% without load) should have negligible
        performance impact on the application because the work is done in a
        separate thread with a secondary connection, and also with something
        to minimize GIL impact by prioritizing the main thread;
      - the shortest one (22%) processes the deferred deletions, with even
        lower priority than replication: it tries to split the work into
        tasks that take ~10ms (sketched below).
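
      A sketch of such time-slicing (the ~10ms budget comes from the text
      above; the rest is hypothetical):

          import time

          def run_deferred_deletion_step(flush_one_batch, budget=0.01):
              # Delete small batches until ~10ms have elapsed, then yield
              # back so that replication and client requests go first.
              deadline = time.time() + budget
              while time.time() < deadline:
                  if not flush_one_batch():
                      return False  # nothing left to delete
              return True           # more work remains; reschedule later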
  2. 11 Oct, 2023 1 commit
  3. 04 Apr, 2023 9 commits
  4. 09 Mar, 2023 2 commits
  5. 19 Feb, 2023 1 commit
  6. 16 Feb, 2023 2 commits
  7. 14 Feb, 2023 3 commits
  8. 10 Feb, 2023 1 commit
  9. 02 Feb, 2022 1 commit
    • Fix breakage with zodbpickle >= 2 · d5afef8e
      Kirill Smelkov authored
      Starting from zodbpickle 2, its binary class no longer allows users to
      set arbitrary attributes, and so
      
      	binary._pack = bytes.__str__
      
      fails with
      
      	TypeError: can't set attributes of built-in/extension type 'zodbpickle.binary'
      
      -> Fix it by explicitly checking for binary type on encoding instead of
      setting binary._pack
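
      A minimal sketch of that approach (the encoding helper is made up;
      only the explicit isinstance check on zodbpickle.binary reflects the
      actual change):

          from zodbpickle import binary

          def encode(value):
              # Instead of monkey-patching binary._pack (impossible with
              # zodbpickle >= 2), check the type explicitly when encoding.
              if isinstance(value, binary):
                  return bytes(value)
              return value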
      
      See nexedi/slapos@27f574bc for pre-history.
      
      /cc @jerome
  10. 04 Jun, 2021 1 commit
    • admin: fix crash if not operational and a downstream cluster is RUNNING · 7f81ac2d
      Julien Muchembled authored
      Traceback (most recent call last):
        ...
        File ".../neo/lib/handler.py", line 75, in dispatch
          method(conn, *args, **kw)
        File ".../neo/admin/handler.py", line 174, in wrapper
          return func(self, name, *args, **kw)
        File ".../neo/admin/handler.py", line 190, in notifyMonitorInformation
          self.app.updateMonitorInformation(name, **info)
        File ".../neo/admin/app.py", line 290, in updateMonitorInformation
          self._notify(self.operational)
        File ".../neo/admin/app.py", line 315, in _notify
          body += '', name, '    ' + backup.formatSummary(upstream)[1]
        File ".../neo/admin/app.py", line 83, in formatSummary
          tid = self.ltid
      AttributeError: 'Backup' object has no attribute 'ltid'
  11. 11 May, 2021 1 commit
  12. 02 Apr, 2021 5 commits
  13. 22 Mar, 2021 1 commit
  14. 04 Mar, 2021 2 commits
  15. 15 Jan, 2021 2 commits
    • ssl: don't care whether EOF is ragged or not · d98205d0
      Julien Muchembled authored
      The purpose of suppress_ragged_eofs=False was to micro-optimize the
      normal case: when there's no EOF.
      
      But commit 061cd5d8 showed that this
      option only turns ragged EOF into an exception. It may be easier for
      alternate NEO implementations to close the SSL connection properly. Or
      the performance benefit was not worth the risk of freezing a NEO process.
    • ssl: Don't ignore non-ragged EOF · 061cd5d8
      Kirill Smelkov authored
      Testing the NEO/go client against the NEO/py server revealed a bug in
      NEO/py SSL handling: a proper, non-ragged EOF from a peer is ignored,
      which leads to a hang in an infinite loop inside _SSL.receive, with
      read_buf memory growing indefinitely. Details are below:
      
      NEO/py wraps raw sockets with
      
      	ssl.wrap_socket(suppress_ragged_eofs=False)
      
      which instructs the SSL layer to convert an unexpected EOF while
      receiving a TLS record into an SSLEOFError exception. However, when the
      remote peer properly closes its side of the connection, socket.read()
      still returns b'' to report a regular, non-ragged EOF:
      
      https://github.com/python/cpython/blob/v2.7.18/Lib/ssl.py#L630-L650
      
      The code was handling SSLEOFError but not the b'' return from socket
      recv. Thus, after the NEO/go client disconnected and properly closed its
      side of the connection, the code looped indefinitely in _SSL.receive
      under `while 1`, with the b'' returned by self.socket.recv() appended to
      read_buf again and again.
      
      -> Fix it by detecting non-ragged EOF as well and, similarly to how
      SSLEOFError is handled, converting it into self._error('recv', None).
      
      See merge request nexedi/neoppod!17
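
      A simplified sketch of the fixed receive step (structure approximated,
      not the exact NEO code; the real method loops to drain pending TLS
      records):

          import ssl

          def receive(self, read_buf):  # method of the connector class
              try:
                  data = self.socket.recv(4096)
              except ssl.SSLEOFError:
                  # ragged EOF (suppress_ragged_eofs=False)
                  return self._error('recv', None)
              if not data:
                  # b'': the peer properly closed its side (non-ragged EOF)
                  return self._error('recv', None)
              read_buf.append(data)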
  16. 11 Jan, 2021 4 commits
  17. 02 Oct, 2020 1 commit
    • Fix handling of -m/--masters arg · fa63d856
      Julien Muchembled authored
      For the master, the purpose of -m/--masters is to specify addresses
      of other master nodes, since its own address is already known via
      -b/--bind. Therefore, an empty value for -m/--masters is valid.
      The user remains free to repeat the -b value in -m.
      
      More generally, a node may choose to only specify master addresses
      via -D/--dynamic-master-list, so the check that at least one master
      address is specified is moved to where the NodeManager is expected to
      be initialized.
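
      Roughly (hypothetical code, not the actual option handling), the check
      now happens only once all sources of master addresses have been merged:

          def check_masters(bind_addr, masters, dynamic_master_list, is_master):
              # -m/--masters may be empty: a master already knows its own
              # address from -b/--bind, and any node may rely solely on
              # -D/--dynamic-master-list.
              known = set(masters) | set(dynamic_master_list)
              if is_master:
                  known.add(bind_addr)
              if not known:
                  raise ValueError("at least one master address is required")
              return known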
  18. 29 Sep, 2020 1 commit
  19. 25 Sep, 2020 1 commit