  1. 04 Mar, 2016 1 commit
    • storage: defer commit when unlocking a transaction (-> better performance) · eaa07e25
      Julien Muchembled authored
      Before this change, a storage node did 3 commits per transaction:
      - once all data are stored
      - when locking the transaction
      - when unlocking the transaction
      
      The last one is not important for ACID. In case of a crash, the transaction
      is unlocked again (verification phase). By deferring it by 1 second, we
      only have 2 commits per transaction during high activity because all pending
      changes are merged with the commits caused by other transactions.
      
      This change compensates for the extra commit(s) per transaction that were
      introduced in commit 7eb7cf1b
      ("Minimize the amount of work during tpc_finish").
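The deferred unlock commit can be pictured as a dirty flag plus a deadline; a minimal sketch, assuming a `db` object with a `commit()` method (all names here are illustrative, not NEO's actual API):

```python
import time

class DeferredCommitter(object):
    """Sketch: delay the unlock commit by up to 1 second so that, under
    high activity, it merges with commits caused by other transactions."""

    MAX_DELAY = 1  # seconds

    def __init__(self, db):
        self.db = db
        self._deadline = None  # None means no deferred commit pending

    def commit(self):
        # Immediate commit, e.g. once all data are stored or when locking;
        # it also flushes any pending deferred commit.
        self.db.commit()
        self._deadline = None

    def deferCommit(self):
        # Called when unlocking a transaction: only record a deadline.
        if self._deadline is None:
            self._deadline = time.time() + self.MAX_DELAY

    def poll(self):
        # Called regularly from the event loop: commit once the deadline
        # has passed. This is safe because, after a crash, the transaction
        # is simply unlocked again during the verification phase.
        if self._deadline is not None and time.time() >= self._deadline:
            self.commit()
```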
  2. 25 Jan, 2016 1 commit
  3. 01 Dec, 2015 1 commit
    • Safer DB truncation, new 'truncate' ctl command · d3c8b76d
      Julien Muchembled authored
      With the previous commit, the request to truncate the DB was not stored
      persistently, which means that this operation was still vulnerable to the
      case where the master is restarted after some nodes, but not all, have
      already truncated. The master did not have the information needed to fix
      this, and the result was a partially truncated DB.
      
      -> On a Truncate packet, a storage node only stores the tid somewhere, to
         send it back to the master, which stays in the RECOVERING state as
         long as any node reports a value different from that of the node with
         the latest partition table.
      
      We also want to make sure that there is no unfinished data, because a user may
      truncate at a tid higher than a locked one.
      
      -> Truncation is now effective at the end of the VERIFYING phase, just
         before returning the last ids to the master.
      
      Finally, all nodes should be truncated, to prevent an offline node from
      coming back with a different history. Currently, this would not be an
      issue since replication always restarts from the beginning, but later we
      would like nodes to remember where they stopped replicating.
      
      -> If a truncation is requested, the master waits for all nodes to be
         pending, even if the cluster was previously started (the user can
         still force the cluster to start with neoctl). Any node lost during
         verification also sends the master back to recovery.
      
      Obviously, the protocol has been changed: the LastIDs packet is split and
      a new Recovery packet is introduced, since it no longer makes sense to
      ask for last ids during recovery.
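The "stays in RECOVERING" rule above can be expressed as a small convergence check; a minimal sketch, where the node_info layout is an assumption, not NEO's actual structure:

```python
def may_leave_recovery(node_info):
    """Sketch: the master leaves RECOVERING only once every pending
    storage node reports the same truncate tid as the node holding the
    latest partition table (highest ptid)."""
    # node_info: {node_id: (ptid, truncate_tid)} -- illustrative layout.
    _, reference_tid = max(node_info.values())
    return all(truncate_tid == reference_tid
               for _, truncate_tid in node_info.values())
```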
  4. 30 Nov, 2015 1 commit
    • Perform DB truncation during recovery, send PT to storages before verification · 3e3eab5b
      Julien Muchembled authored
      Currently, the database may only be truncated when leaving backup mode, but
      the issue will be the same when neoctl gets a new command to truncate at an
      arbitrary tid: we want to be sure that all nodes are truncated before anything
      else.
      
      Therefore, Truncate orders are no longer sent before stopping operation,
      because nodes could fail or exit before actually processing them.
      Truncation must also happen before asking nodes for their last ids.
      
      With this commit, if a truncation is requested:
      - this is always the first thing done when a storage node connects to the
        primary master during the RECOVERING phase (see the sketch below),
      - and the cluster does not start automatically if there are missing
        nodes, unless an admin forces it.
      
      Other changes:
      - Connections to storage nodes don't need to be aborted anymore when leaving
        backup mode.
      - The master always initiates communication when a storage node
        identifies itself, which simplifies code and reduces the number of
        exchanged packets (the sketch below illustrates the resulting
        ordering).
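The truncate-first rule and master-initiated communication might combine as in the following sketch; every name (master.truncate_tid, conn.notify, conn.ask, the packet tuples) is illustrative rather than NEO's actual API:

```python
def on_storage_identified(master, conn):
    # Sketch of the ordering described above, for the RECOVERING phase.
    if master.truncate_tid is not None:
        # A pending truncation is the very first order sent to a newly
        # identified storage node.
        conn.notify(('Truncate', master.truncate_tid))
    # Only afterwards does the master query the node's state; the node
    # never has to speak first.
    conn.ask(('AskRecovery',))
```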
  5. 21 May, 2015 1 commit
  6. 20 Jun, 2014 1 commit
  7. 07 Jan, 2014 1 commit
  8. 21 Aug, 2012 1 commit
  9. 26 Jul, 2012 1 commit
  10. 20 Mar, 2012 1 commit
  11. 13 Mar, 2012 1 commit
  12. 24 Feb, 2012 1 commit
    • Implements backup using specialised storage nodes and relying on replication · 8e3c7b01
      Julien Muchembled authored
      Replication is also fully reimplemented:
      - It is not done anymore on whole partitions.
      - It runs at the lowest priority so as not to degrade performance for
        client nodes.
      
      The schema of the MySQL tables is changed to optimize the storage layout:
      rows are now grouped by age, for good partial-replication performance
      (sketched below). This certainly also speeds up simple loads/stores.
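"Grouped by age" can be illustrated with a clustered primary key whose leading columns follow commit order; the DDL below is only a sketch, and the column names, types and key order are assumptions, not NEO's actual schema:

```python
# Illustrative only: with InnoDB, the primary key clusters the rows, so
# leading it with (`partition`, tid) groups each partition's rows in
# commit order -- replicating a tid range then reads contiguous rows.
OBJ_TABLE_DDL = """
CREATE TABLE obj (
    `partition` SMALLINT UNSIGNED NOT NULL,
    tid BIGINT UNSIGNED NOT NULL,   -- commit order, i.e. age
    oid BIGINT UNSIGNED NOT NULL,
    data LONGBLOB,
    PRIMARY KEY (`partition`, tid, oid)
) ENGINE = InnoDB
"""
```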
  13. 17 Jan, 2012 1 commit
  14. 16 Jan, 2012 1 commit
  15. 26 Oct, 2011 1 commit
  16. 17 Jan, 2011 1 commit
  17. 29 Oct, 2010 3 commits
  18. 21 Jun, 2010 1 commit
    • Move stored OIDs check to master side. · 1f629dfc
      Grégory Wisniewski authored
      - Storage nodes no longer check the last OID during a store.
      - Storage nodes unconditionally store the last OID notified by the
        master.
      - The master checks whether a greater OID was used by a client (sketched
        below).
      - The master always notifies the last OID when a pool is generated or if
        the check above is true.
      - The master's transaction manager manages the last OID and the OID
        generator.
      
      git-svn-id: https://svn.erp5.org/repos/neo/trunk@2180 71dcc9de-d417-0410-9af5-da40c76e7ee4
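A minimal sketch of the master-side logic described in the list above; the class and method names are illustrative stand-ins, not NEO's actual code:

```python
class TransactionManager(object):
    """Sketch: the master's transaction manager owns the last OID and
    the OID generator."""

    def __init__(self):
        self._last_oid = 0

    def getOIDPool(self, size):
        # Generating a pool advances the last OID, which is then
        # notified to storage nodes.
        first = self._last_oid + 1
        self._last_oid += size
        return first, self._last_oid

    def checkOID(self, oid):
        # Called when a client used an OID on its own: if it is greater
        # than the last known one, adopt it and request a notification.
        if oid > self._last_oid:
            self._last_oid = oid
            return True  # the last OID must be (re)notified
        return False
```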
  19. 15 May, 2010 1 commit
    • Answer the partition table in one packet. · 39465fec
      Grégory Wisniewski authored
      The SendPartitionTable packet was sent between the Ask and Answer
      PartitionTable packets, as notifications. In this case, the only purpose
      of the 'Answer' was to check that the partition table was filled. The
      'Ask' also allowed requesting a part of the partitions, but this was
      unused and redundant with AskPartitionList for neoctl.
      
      This commit includes the following work:
      - The partition table is always sent in one packet.
      - The full partition table is always requested with AskPartitionTable.
      - The full partition table is notified with SendPartitionTable.
      - Client nodes process the answer in the bootstrap handler.
      - Admin nodes can receive answers *and* notifications for the partition
        table.
      - Move the log calls to the pt.py module.
      - Add pt.getRowList() to factorise the code (sketched below).
      - Build partition table packets outside the loop when possible.
      - Always load the partition table unconditionally in generic pt.py.
      
      git-svn-id: https://svn.erp5.org/repos/neo/trunk@2114 71dcc9de-d417-0410-9af5-da40c76e7ee4
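The factored-out pt.getRowList() can be pictured as flattening the whole partition table into one serializable list, which is what makes the single-packet answer possible; a sketch, where the row/cell layout is an assumption:

```python
def getRowList(partition_table):
    """Sketch: serialize the partition table as one list of rows, one
    row per partition, each cell being a (uuid, cell_state) pair."""
    # partition_table: list of rows, each row a list of (uuid, state)
    # pairs -- an illustrative layout, not NEO's internal structure.
    return [(offset, [(uuid, state) for uuid, state in row])
            for offset, row in enumerate(partition_table)]
```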
  20. 13 May, 2010 1 commit
  21. 30 Mar, 2010 1 commit
  22. 16 Feb, 2010 1 commit
  23. 10 Feb, 2010 1 commit
  24. 01 Feb, 2010 2 commits
  25. 13 Jan, 2010 1 commit
  26. 07 Oct, 2009 1 commit
  27. 05 Oct, 2009 1 commit
  28. 01 Oct, 2009 1 commit
  29. 30 Sep, 2009 1 commit
  30. 29 Sep, 2009 1 commit
  31. 06 Aug, 2009 1 commit
  32. 31 Jul, 2009 1 commit
  33. 29 Jul, 2009 1 commit
  34. 28 Jul, 2009 1 commit
  35. 22 Jul, 2009 1 commit
  36. 20 Jul, 2009 1 commit
  37. 14 Jul, 2009 1 commit