    Ensure that we always run the update worker · 185fea33
    Alex Kalderimis authored
    Due to a bug in our sidekiq middleware, post-update actions were not
    being applied after changing assignees. This is fixed by changing
    the format of the arguments to the background worker.
    
    An unused worker is deleted, and we opt in to the use of specialized
    workers in the API, meaning we can delete some duplicated code.
    
    Details of the fix:
    
    This is vital to getting correct deduplication behavior in our sidekiq
    middleware. The current logic is as follows (a minimal sketch in code
    follows the list):
    
    - CLIENT creates a hash from the worker queue name and arguments (calling
      `#to_s`)
    - if a job matching that hash is already running (in the working queue),
      the job is not enqueued by the client
    - otherwise, the job is serialized and sent to the server
    - SERVER pops the job and deserializes it
    - the server runs the job
    - the server calculates the deduplication hash and deletes it, allowing
      similar jobs to be enqueued again
    
    This protocol relies on serialization being completely isomorphic, i.e.
    `load . serialize === id`. If it is not, the two hashes will not match,
    and the deduplication lock will remain in place rather than being taken
    down by the server.
    
    The workaround in our case is to use a round-trippable hashmap (i.e.
    string keys rather than symbols), since:
    
    ```
    JSON.load({ 'x' => true }.to_json) == { 'x' => true }
    ```
    
    but
    
    ```
    JSON.load({ x: true }.to_json) != { x: true }
    ```
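
    In practice this means building the worker arguments with string keys
    before enqueueing. A minimal sketch, assuming a hypothetical
    `IssuableUpdateWorker` (the real worker and argument names differ):

    ```
    # Symbol keys: the hash changes across the JSON round trip, so the
    # deduplication key computed by the server never matches the client's.
    IssuableUpdateWorker.perform_async(issue.id, { assignee_ids: [1, 2] })

    # String keys: the hash survives the round trip unchanged, so the
    # deduplication lock set by the client is cleared by the server.
    IssuableUpdateWorker.perform_async(issue.id, { 'assignee_ids' => [1, 2] })
    ```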
    
    See: https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/1090
    
    Changelog: fixed