1. 05 Mar, 2019 5 commits
  2. 04 Mar, 2019 1 commit
    • 
      CMFActivity: new activate() parameter to prefer executing on the same node · 1bbd5e6e
      Julien Muchembled authored
      This implements a special case of node specialization, to make better
      use of the ZODB Storage cache. By default, a non-grouped message is
      marked to be executed by the same node that created it, if the object
      is not a tool and if it was not activated by path. This can be
      overridden (either forced or prevented) using a new 'node' activate()
      parameter. See message of the first merged commits for details, and
      also ActiveObject.activate() docstring. For SQLDict & SQLQueue only.
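      
      For illustration, a hedged usage sketch (the 'same' value also appears
      in a later commit in this log; the object and method names here are
      assumptions, not part of the commit):
      
        # Assumed usage; see the ActiveObject.activate() docstring for the
        # authoritative reference.
        doc = portal.foo_module.example_doc  # hypothetical document
        
        # Explicitly prefer the node that spawns the activity (the new
        # default for non-grouped messages):
        doc.activate(node='same').recursiveReindexObject()
        
        # Prevent any node preference ('' is assumed here to mean
        # "no preferred node"):
        doc.activate(node='').recursiveReindexObject()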
      
      In the future, the new 'node' argument could accept any other string
      value that refers to a group of nodes. Groups would be defined on the
      activity tool, and be assigned negative integers. Contrary to what is
      implemented here, such specialization would be strict, in that a node
      would never process a message for a group it does not belong to.
      
      /reviewed-on nexedi/erp5!836
  3. 01 Mar, 2019 1 commit
    • 
      erp5_core: Ignore non-existent transitions in Module_listWorkflowTransitionItemList · 14940e3f
      Georgios Dagkakis authored
      instead of crashing.
      
      This can happen in a workflow if:
      - transition_x is declared as possible in state_x
      - then transition_x gets deleted
      In this case, nothing is visible for state_x in manage_properties,
      which leaves the workflow inconsistent in its data, yet generally functional.
      
      Since it is easy to create such cases, just ignore them in Module_listWorkflowTransitionItemList
      so that mass state change can work.
  4. 27 Feb, 2019 1 commit
  5. 26 Feb, 2019 6 commits
  6. 25 Feb, 2019 3 commits
  7. 22 Feb, 2019 2 commits
  8. 21 Feb, 2019 4 commits
    • 
      CMFActivity: new activate() parameter to prefer executing on the same node · 301962ad
      Julien Muchembled authored
      The goal is to make better use of the ZODB Storage cache. It is common to do
      processing on a data set in several sequential transactions: in such case, by
      continuing execution of these messages on the same node, data is loaded from
      ZODB only once. Without this, and if there are many other messages to process,
      processing always continues on a random node, causing much more loading from the ZODB.
      
      To prevent nodes from having too much work to do, or too little compared to
      other nodes, this new parameter is only a hint for CMFActivity. It remains
      possible for a node to execute a message that was intended for another node.
      
      Before this commit, a processing node selects the first message(s) according to
      the following ordering:
      
        priority, date
      
      and now:
      
        priority, node_preference, date
      
      where node_preference is:
      
        -1 -> same node
         0 -> no preferred node
         1 -> another node
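      
      In Python terms, a minimal illustrative sketch of this preference (the
      real ordering is computed in SQL; names here are not from the code):
      
        # node_preference for a message, given the node recorded for it and
        # the node currently processing.
        def node_preference(message_node, current_node):
            if message_node == current_node:
                return -1  # same node: preferred
            if not message_node:
                return 0   # no preferred node
            return 1       # another node: ordered last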
      
      The implementation is tricky for 2 reasons:
      - MariaDB can't order this way in a single simple query, so we have 1
        subquery for each case, potentially getting 3 times the wanted maximum of
        messages, then order/filter on the resulting union.
      - MariaDB also can't filter efficiently messages for other nodes, so the 3rd
        subquery returns messages for any node, potentially duplicating results from
        the first 2 subqueries. This works because they'll be ordered last.
        Unfortunately, this requires extra indices.
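      
      A minimal Python sketch of this union-and-filter idea (illustrative
      names only; `fetch` stands for one hypothetical subquery, and the real
      code does all of this in SQL):
      
        from operator import itemgetter
        
        def select_messages(fetch, current_node, limit):
            # One "subquery" per preference level, each limited to `limit`
            # rows, tagged with its node_preference:
            candidates = (
                [(m.priority, -1, m.date, m) for m in fetch(node=current_node, limit=limit)]
              + [(m.priority, 0, m.date, m) for m in fetch(node=0, limit=limit)]
              # The third one may duplicate results of the first two; being
              # ordered last, the duplicates are filtered out below.
              + [(m.priority, 1, m.date, m) for m in fetch(limit=limit)]
            )
            # Order the union, then deduplicate and keep the wanted maximum.
            candidates.sort(key=itemgetter(0, 1, 2))
            result, seen = [], set()
            for _, _, _, m in candidates:
                if m.uid not in seen:
                    seen.add(m.uid)
                    result.append(m)
                    if len(result) == limit:
                        break
            return result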
      
      In any case, message reservation must be very efficient, or MariaDB deadlocks
      quickly happen, and locking an activity table during reservation reduces
      parallelism too much.
      
      In addition to better cache efficiency, this new feature can be used as a
      workaround for a bug affecting serialization_tag, causing IntegrityError when
      reindexing many new objects. If you have 2 recursive reindexations for both a
      document and one of its lines, and if you have so many messages that grouping
      is split between these 2 messages, then you end up with 2 nodes indexing the
      same line in parallel: for some tables, the pattern DELETE+INSERT conflicts
      since InnoDB does not take any lock when deleting a non-existent row.
      
      If you have many activities creating such documents, you can combine with
      grouping and appropriate priority to make sure that such a pair of messages won't
      be executed on different nodes, except maybe at the end (when there's no
      document to create anymore; then activity reexecution may be enough).
      For example:
      
        from Products.CMFActivity.ActivityTool import getCurrentNode
        portal.setPlacelessDefaultReindexParameters(
          activate_kw={'node': 'same', 'priority': priority},
          group_id=getCurrentNode())
      
      where `priority` is the same as the activity containing the above code, which
      can also use grouping without increasing the probability of IntegrityError.
    • 
      testnode: try much more aggressively to kill remaining processes (fixup) · ae0a20fc
      Sebastien Robin authored
      Kill any process having in its command line the path reserved for the
      unit test. This really allows killing any remaining process, if any.
      
      Do not check whether processes are children of testnode, because
      processes like mariadb and others do not run as children.
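      
      A minimal sketch of this behavior, assuming psutil is available (the
      actual testnode code may differ):
      
        import psutil
        
        def kill_remaining_processes(test_path):
            # Kill any process whose command line mentions the path
            # reserved for the unit test, regardless of its parent.
            for proc in psutil.process_iter(['pid', 'cmdline']):
                cmdline = proc.info['cmdline'] or []
                if any(test_path in arg for arg in cmdline):
                    try:
                        proc.kill()
                    except (psutil.NoSuchProcess, psutil.AccessDenied):
                        pass  # already gone, or not ours to kill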
    • Nicolas Wavrant authored
      8fa3d858
  9. 20 Feb, 2019 4 commits
  10. 18 Feb, 2019 3 commits
  11. 16 Feb, 2019 1 commit
  12. 15 Feb, 2019 7 commits
  13. 14 Feb, 2019 2 commits