1. 29 May, 2019 1 commit
      Move mock.py to neo/tests to not conflict with https://pypi.python.org/pypi/mock · 80e5aa5c
      Julien Muchembled authored
      This commit is backported to a project branch.
      I am not sure about this cherry-pick conflict resolution; neoppod is not used
      in this project anyway, and what matters is to fix the conflict when importing
      mock.
      
       Conflicts:
      	.gitignore
      	neo/tests/client/testClientApp.py
      	neo/tests/client/testConnectionPool.py
      	neo/tests/storage/testClientHandler.py
      	neo/tests/storage/testStorageApp.py
      	neo/tests/storage/testStorageMySQL.py
      	neo/tests/storage/testTransactions.py
      	neo/tests/testNodes.py
      	setup.py
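      As a hint for resolving the import conflict: the point of the move is that test
      modules import the bundled copy instead of the PyPI package. A minimal sketch,
      assuming the file ends up importable as neo.tests.mock and exposes a Mock class
      (both are assumptions inferred only from the commit title):

        # Before: this picks up the PyPI "mock" package whenever it is installed,
        # shadowing the project's own helper.
        #     from mock import Mock

        # After: import the copy bundled with the test suite (assumed path).
        from neo.tests.mock import Mock

        conn = Mock()          # stub object standing in for a real connection
        conn.send("packet")    # handled by the stub instead of hitting the network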
  2. 09 Mar, 2016 2 commits
  3. 08 Mar, 2016 2 commits
  4. 04 Mar, 2016 3 commits
  5. 02 Mar, 2016 1 commit
      client: revert incorrect memory optimization · 763806e0
      Julien Muchembled authored
      Since commit d2d77437 ("client: make the cache
      tolerant to late invalidations when the entry is in the history queue"),
      invalidated items became current again when they were moved to the history
      queue, which was wrong for two reasons:
      - only the last items of _oid_dict values may have next_tid=None,
      - and such items could be wrongly reused when caching the real
        current data.
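      To make the invariant concrete, here is a small illustrative check with made-up
      structures (the real client cache is more involved): for a given oid, only the
      newest entry may have next_tid=None.

        from collections import namedtuple

        # Illustrative only: a cache entry with the two fields discussed above.
        CacheItem = namedtuple("CacheItem", "tid next_tid data")

        def check_oid_entries(items):
            # Entries for one oid are ordered from oldest to newest; only the
            # last (current) one may have next_tid=None.
            for item in items[:-1]:
                assert item.next_tid is not None, "only the newest entry may be current"

        history = [
            CacheItem(tid=1, next_tid=2, data=b"old"),
            CacheItem(tid=2, next_tid=None, data=b"current"),
        ]
        check_oid_entries(history)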
  6. 01 Mar, 2016 1 commit
  7. 26 Feb, 2016 4 commits
  8. 05 Feb, 2016 1 commit
  9. 25 Jan, 2016 2 commits
  10. 21 Jan, 2016 2 commits
  11. 12 Jan, 2016 1 commit
  12. 16 Dec, 2015 2 commits
  13. 13 Dec, 2015 3 commits
  14. 12 Dec, 2015 1 commit
  15. 11 Dec, 2015 1 commit
  16. 09 Dec, 2015 1 commit
  17. 02 Dec, 2015 1 commit
  18. 01 Dec, 2015 3 commits
      Safer DB truncation, new 'truncate' ctl command · d3c8b76d
      Julien Muchembled authored
      With the previous commit, the request to truncate the DB was not stored
      persistently, which means that this operation was still vulnerable to the case
      where the master is restarted after some nodes, but not all, have already
      truncated. The master did not have the information to fix this, and the result
      was a partially truncated DB.
      
      -> On a Truncate packet, a storage node only stores the tid somewhere, to send
         it back to the master, which stays in RECOVERING state as long as any node
         has a different value than that of the node with the latest partition table.
      
      We also want to make sure that there is no unfinished data, because a user may
      truncate at a tid higher than a locked one.
      
      -> Truncation is now effective at the end of the VERIFYING phase, just before
         returning the last ids to the master.
      
      Finally, all nodes should be truncated, to prevent an offline node from coming
      back with a different history. Currently, this would not be an issue since
      replication always restarts from the beginning, but later we would like nodes
      to remember where they stopped replicating.
      
      -> If a truncation is requested, the master waits for all nodes to be pending,
         even if it was previously started (the user can still force the cluster to
         start with neoctl). And any lost node during verification also causes the
         master to go back to recovery.
      
      Obviously, the protocol has been changed to split the LastIDs packet and
      introduce a new Recovery packet, since it no longer makes sense to ask for
      last ids during recovery.
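      As a rough illustration of the agreement step described above (all names and
      structures are invented for this sketch, not taken from the NEO source), the
      master would stay in RECOVERING until every storage node reports the same
      pending truncation tid as the node holding the most recent partition table:

        RECOVERING, VERIFYING = "RECOVERING", "VERIFYING"

        def next_state(storage_answers):
            # storage_answers maps a node name to (ptid, truncate_tid) as answered
            # to the master's Recovery packet.  Stay in RECOVERING until all nodes
            # agree with the node that has the latest partition table.
            reference = max(storage_answers, key=lambda n: storage_answers[n][0])
            reference_truncate_tid = storage_answers[reference][1]
            if any(truncate_tid != reference_truncate_tid
                   for _, truncate_tid in storage_answers.values()):
                return RECOVERING
            return VERIFYING  # truncation is then applied at the end of VERIFYING

        # Hypothetical answers: node "s2" has not recorded the truncation yet.
        print(next_state({"s1": (10, 0x123), "s2": (10, None)}))  # -> RECOVERING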
  19. 30 Nov, 2015 8 commits
      Perform DB truncation during recovery, send PT to storages before verification · 3e3eab5b
      Julien Muchembled authored
      Currently, the database may only be truncated when leaving backup mode, but
      the issue will be the same when neoctl gets a new command to truncate at an
      arbitrary tid: we want to be sure that all nodes are truncated before anything
      else.
      
      Therefore, we stop sending Truncate orders before stopping operation because
      nodes could fail/exit before actually processing them. Truncation must also
      happen before asking nodes their last ids.
      
      With this commit, if a truncation is requested:
      - this is always the first thing done when a storage node connects to the
        primary master during the RECOVERING phase,
      - and the cluster does not start automatically if there are missing nodes,
        unless an admin forces it.
      
      Other changes:
      - Connections to storage nodes don't need to be aborted anymore when leaving
        backup mode.
      - The master always initiates communication when a storage node identifies
        itself, which simplifies the code and reduces the number of exchanged packets.
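      A schematic view of the ordering described above, with invented names (the real
      packets and handlers differ):

        class Node:
            # Minimal stand-in for a storage connection; it just records what is sent.
            def __init__(self):
                self.sent = []
            def send(self, packet):
                self.sent.append(packet)

        def on_storage_identified(cluster_state, truncate_tid, node):
            # During RECOVERING, a pending truncation is the very first thing sent
            # to a storage node that has just identified itself; the master then
            # initiates the rest of the exchange.
            if cluster_state == "RECOVERING" and truncate_tid is not None:
                node.send(("Truncate", truncate_tid))
            node.send(("AskRecovery",))

        node = Node()
        on_storage_identified("RECOVERING", 0x123, node)
        print(node.sent)  # [('Truncate', 291), ('AskRecovery',)]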
      master: fix possible blockage during recovery after a storage disconnection · 2485f151
      Julien Muchembled authored
      At some point, the master asks a storage node its partition table. If this node
      is lost before getting an answer, another node (or the same one if it comes
      back) must be asked.
      
      Before this change, the master node had to be restarted.
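      A minimal sketch of the fix, with invented names (the real code is an
      asynchronous handler, not a loop): the master keeps asking storage nodes for
      their partition table until one of them answers, instead of blocking forever
      on the first node that disappears.

        def ask_partition_table(storage_nodes, ask):
            # ask(node) returns the node's partition table, or None if the node
            # was lost before answering.
            for node in storage_nodes:
                answer = ask(node)
                if answer is not None:
                    return answer
            return None  # nobody answered; keep waiting for new connections

        # Hypothetical run: the first node disconnects before answering.
        answers = {"s1": None, "s2": {"ptid": 42, "rows": []}}
        print(ask_partition_table(["s1", "s2"], answers.get))  # -> {'ptid': 42, ...}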
      master: last tid/oid after recovery/verification · dec81519
      Julien Muchembled authored
      The important bugfix is to update the last oid when the master verifies a
      transaction with new oids.
      
      By resetting the transaction manager at the beginning of the recovery phase,
      it becomes possible to avoid tid/oid holes:
      - by reallocating previously unused allocated oids
      - when going back "in the past", i.e. reverting to an older version of the
        database (with fewer oids) and/or adjusting the clock
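      A minimal sketch of the first bugfix, with an invented API (the real logic
      lives in the master's transaction manager): oids seen while verifying a
      transaction must push the allocator forward, so they cannot be reallocated.

        class ToyTransactionManager:
            def __init__(self):
                self.last_oid = 0
            def on_transaction_verified(self, oid_list):
                # New oids found during verification raise the last allocated oid.
                if oid_list:
                    self.last_oid = max(self.last_oid, max(oid_list))

        tm = ToyTransactionManager()
        tm.on_transaction_verified([5, 7, 12])
        print(tm.last_oid)  # 12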
      Go back/stay in RECOVERING state when the partition table can't be operational · e1f9a7da
      Julien Muchembled authored
      This fixes several cases where the partition table could become corrupt and
      the whole cluster get stuck in the VERIFYING state.
      
      This also reduces the probability of having out-of-date cells when restarting
      several storage nodes simultaneously.
      
      Finally, if a master node becomes primary again, the cluster must not be started
      automatically when nodes with readable cells are missing, in order to avoid
      a split of the database. This could happen if this master node had previously
      been forced to start it.
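      The rule can be summarized by a small sketch (invented names; the real checks
      use the partition table and the node manager):

        def target_cluster_state(pt_operational, all_readable_cells_present):
            # Stay in, or go back to, RECOVERING whenever the partition table
            # cannot be operational or readable cells are missing.
            if pt_operational and all_readable_cells_present:
                return "VERIFYING"
            return "RECOVERING"

        print(target_cluster_state(False, True))  # RECOVERING
        print(target_cluster_state(True, True))   # VERIFYING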
      Minimize the amount of work during tpc_finish · 7eb7cf1b
      Julien Muchembled authored
      NEO did not ensure that all data and metadata were written to disk before
      tpc_finish, and it was therefore vulnerable, for example, to ENOSPC errors.
      In other words, some work had to be moved to tpc_vote:
      
      - In tpc_vote, all involved storage nodes are now asked to write all metadata
        to ttrans/tobj and _commit_. Because the final tid is not known yet, the tid
        column of ttrans and tobj now contains NULL and the ttid respectively.
      
      - In tpc_finish, AskLockInformation is still required for read locking,
        ttrans.tid is updated with the final value and this change is _committed_.
      
      - The verification phase is greatly simplified, more reliable and faster. For
        all voted transactions, we can tell whether a tpc_finish was started by
        looking up the final tid from the ttid, either in ttrans or in trans. And we
        know that such transactions cannot be partial, so we do not need to check oids.
      
      So in addition to minimizing the risk of failures during tpc_finish, we also
      fix a bug causing the verification phase to discard transactions with objects
      for which readCurrent was called.
      
      On performance side:
      
      - Although tpc_vote now asks all involved storages, instead of only those
        storing the transaction metadata, the client has been improved to do this
        in parallel. The additional commits are also all done in parallel.
      
      - A possible improvement to compensate for the additional commits is to delay the
        commit done by the unlock.
      
      - By minimizing the time to lock transactions, objects are read-locked for a
        much shorter period. This is all the more important because locked
        transactions must be unlocked in the same order.
      
      Transactions with too many modified objects will now time out inside tpc_vote
      instead of tpc_finish. Of course, such transactions may still cause other
      transactions to time out in tpc_finish.
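      To make the division of work concrete, here is a schematic client-side flow
      under the scheme described above (all names are invented for this sketch; the
      real protocol packets and storage code differ):

        class FakeStorage:
            # Toy storage node: 'ttrans' maps a ttid to its final tid (None = NULL).
            def __init__(self):
                self.ttrans = {}
                self.commits = 0
            def write_temporary(self, ttid):
                self.ttrans[ttid] = None   # tid column stays NULL until tpc_finish
            def set_final_tid(self, ttid, tid):
                self.ttrans[ttid] = tid
            def commit(self):
                self.commits += 1

        def tpc_vote(storages, ttid):
            # Heavy work happens here: every involved node persists the transaction
            # under its ttid and commits, so tpc_finish can no longer fail on ENOSPC.
            for s in storages:             # the real client does this in parallel
                s.write_temporary(ttid)
                s.commit()

        def tpc_finish(storages, ttid, tid):
            # Cheap step: replace the NULL tid with the final one and commit again.
            for s in storages:
                s.set_final_tid(ttid, tid)
                s.commit()

        nodes = [FakeStorage(), FakeStorage()]
        tpc_vote(nodes, ttid=1)
        tpc_finish(nodes, ttid=1, tid=2)
        print(nodes[0].ttrans)  # {1: 2}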
      fixup! storage: fix pruning of data when deleting partial transactions during verification · cff279af
      Julien Muchembled authored
      This fixes a regression in commit 83fe64bf
      when ttrans has several rows referring to the same data_id.
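      As a rough illustration of the failure mode (schematic only, not the actual
      storage code): when the same data_id appears in several ttrans rows, it must
      be considered only once when deciding which data rows can be pruned.

        def data_ids_to_prune(ttrans_data_ids, still_referenced):
            # ttrans_data_ids: data_ids referenced by the partial transactions being
            # deleted (duplicates possible); still_referenced: data_ids used elsewhere.
            # Deduplicating first avoids handling the same data_id twice.
            return set(ttrans_data_ids) - set(still_referenced)

        print(sorted(data_ids_to_prune([1, 2, 2, 3], [3])))  # [1, 2]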