  1. 12 Mar, 2021 1 commit
  2. 10 Mar, 2021 1 commit
  3. 08 Mar, 2021 1 commit
  4. 17 Feb, 2021 1 commit
  5. 16 Feb, 2021 1 commit
    • gitaly_client: Fix force-routing to primary with empty hook env · fa1ddf5c
      Patrick Steinhardt authored
      With commit edab619a (gitaly: Fix access checks with transactions and
      quarantine environments, 2021-02-05), we started injecting a flag into
      Gitaly requests to force-route to the primary Gitaly node in case a hook
      environment is set in order to not break access to quarantined objects.
      It turns out, though, that this change breaks read distribution, as all
      requests are now force-routed to the primary.
      
      The cause of this is trivial enough: the SafeRequestStore returns an
      empty hash if it wasn't set up to contain anything. Because the check
      for a HookEnv only looked at whether the SafeRequestStore returned
      anything at all, it always concluded that requests were running in the
      context of a HookEnv.
      
      Fix the issue by checking that the returned value is non-empty.
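      
      A minimal sketch of the resulting check (method and key names are
      assumptions for illustration, not the exact GitLab code): an empty hash
      coming back from the SafeRequestStore must no longer count as a hook
      environment.
      
          def hook_env_present?
            # SafeRequestStore may hand back {} when nothing was stored for
            # this request, so presence -- not mere existence -- is what counts.
            env = Gitlab::SafeRequestStore[:gitaly_hook_env]
            env.present? # {} and nil are both blank, so no force-routing without a real hook env
          end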
  6. 08 Feb, 2021 1 commit
    • gitaly: Fix access checks with transactions and quarantine environments · edab619a
      Patrick Steinhardt authored
      In order to check whether certain operations are allowed to be executed
      by a user, Gitaly POSTs to the `/internal/allowed` endpoint. The request
      includes information about what change the user wants to perform, but it
      also contains information about the environment the change is currently
      performed in.
      
      When a user performs a push, git will first store all pushed objects
      into a quarantine environment. This is a separate temporary directory
      containing all new objects such that if the push gets rejected, new
      objects will not persist in the repository. The crux is that in order to
      inspect these new objects, git needs to be told that such a quarantine
      environment exists. This is why Gitaly sends information about this
      quarantine environment to `/internal/allowed`, so that we can again
      relay this information back to Gitaly when we want to inspect newly
      pushed objects to determine whether they're allowed or not.
      
      While it's a leaky abstraction, it has worked fine until now. But with
      transactions, that's not true anymore: when multiple Gitaly nodes take
      part in a push, then they'll all generate a randomly named quarantine
      environment. But as only the primary node will inject its info into the
      request, we are not able to access the quarantine environments of secondary
      nodes. If we now route accessor requests to any of the secondary Gitaly
      nodes with the quarantine environment of the primary, then the request
      will fail as git cannot find quarantined objects.
      
      To fix this, Gitaly has recently grown a new GRPC header which allows us
      to force-route requests to the primary via 1102b0b67 (praefect:
      Implement force-routing to primary for node-manager router, 2021-02-03)
      and 4d877d7d5 (praefect: Implement force-routing to primary for per-repo
      router, 2021-02-03). So let's set that header if we know that we're
      being executed via a hook, which is the only case where a quarantine
      environment may exist.
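      
      Roughly, the routing decision then looks like the sketch below (the
      metadata key and value are assumptions for illustration, and
      hook_env_present? is the hypothetical helper from the sketch further up):
      
          metadata = {}
          # Ask Praefect to send this accessor RPC to the primary so that the
          # primary's quarantine environment (relayed via the hook env) is visible.
          metadata['gitaly-route-repository-accessor-policy'] = 'primary-only' if hook_env_present?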
  7. 26 Jan, 2021 1 commit
  8. 21 Dec, 2020 1 commit
  9. 02 Sep, 2020 1 commit
  10. 05 Aug, 2020 1 commit
  11. 23 Jul, 2020 1 commit
  12. 30 Jun, 2020 1 commit
    • Add instrumentation to Gitaly streamed responses · dfa80312
      Oswaldo Ferreira authored
      This is a stab at fixing the Gitaly timing in logs (gitaly_duration_s)
      for streamed responses using the same GitalyClient.call method.
      
      The problem of having a GitalyClient.call for non-streamed responses
      and GitalyClient.streaming_call (with a block) for streamed responses
      is that we'll need to rely mostly on documentation in order to
      get the timings right for new RPCs.
      
      In order to solve that, here we look further into the Gitaly response.
      If it's an Enumerator (that's what the Ruby implementation of gRPC
      streams return from the server https://grpc.io/docs/languages/ruby/basics/),
      we wrap that Enumerator into a custom enumerator, which instruments
      that stream consumption.
      
      Another advantage of that over wrapping the whole stream consumption
      in a block is that we won't add much Ruby CPU time to the measurement:
      only the response.next call is measured, which is the point of
      contact with Gitaly.
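      
      The idea can be sketched as follows (a simplified illustration, not the
      exact implementation): wrap the gRPC Enumerator so that only the time
      spent waiting in response.next is added to gitaly_duration_s.
      
          def instrument_stream(response, &add_duration)
            return response unless response.is_a?(Enumerator)
          
            Enumerator.new do |yielder|
              loop do
                started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
                begin
                  value = response.next # the only call that actually waits on Gitaly
                rescue StopIteration
                  break # stream fully consumed
                ensure
                  add_duration.call(Process.clock_gettime(Process::CLOCK_MONOTONIC) - started)
                end
                yielder << value
              end
            end
          end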
  13. 20 May, 2020 1 commit
  14. 30 Apr, 2020 1 commit
  15. 27 Apr, 2020 1 commit
    • Use microseconds precision for log timings · 5c2a5394
      Oswaldo Ferreira authored
      We already have 6-decimal (microsecond) precision for a few Go service
      timings, so it makes more sense for all existing *_duration_s fields on
      Rails/API/Sidekiq to use 6 decimal places instead of 2, and that's what
      we accomplish here.
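      
      In practice this just means rounding duration fields to six decimal
      places instead of two (variable names below are illustrative):
      
          # 0.012345 (microsecond precision) rather than 0.01
          gitaly_duration_s = (finish_time - start_time).round(6)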
  16. 16 Apr, 2020 1 commit
  17. 09 Mar, 2020 1 commit
  18. 05 Mar, 2020 1 commit
  19. 24 Jan, 2020 1 commit
    • Support retrieval of disk statistics from Gitaly · 81a54f29
      nnelson authored
      Add a rudimentary client implementation for the new Gitaly
      disk_statistics gRPC.
      
      Add unit tests.
      
      New gitaly server methods include disk_used and
      disk_available.  This commit adds unit testing
      for those methods.
      
      Add default server response on error
      
      When the remote service is unavailable, times out,
      or otherwise causes a service call failure, return
      a default response with trivial values set to its
      fields.
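      
      A sketch of the fallback behaviour described above, with assumed class
      and method names:
      
          def disk_statistics
            gitaly_server_client.disk_statistics
          rescue GRPC::BadStatus
            # Unavailable, deadline exceeded, etc.: return a response whose
            # numeric fields default to zero instead of raising.
            Gitaly::DiskStatisticsResponse.new
          end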
  20. 13 Jan, 2020 1 commit
    • Add deadlines based on the request to gitaly · 9e6c9ec4
      Bob Van Landuyt authored
      This makes sure that the deadline we set on a gitaly call never
      exceeds the request deadline.
      
      We also raise an exception if the request deadline has already been
      exceeded by the time we're trying to make a Gitaly call.
      
      These deadlines don't apply when calling gitaly from Sidekiq.
      
      We do this by storing the start time of the request in the request
      store on the thread (using the RequestStore).
      
      The maximum request duration defaults to 55 seconds, as the default
      worker timeout is 60 seconds, but it can be configured through
      gitlab.yml.
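      
      The mechanism can be sketched roughly like this (names are illustrative,
      and the stored start time is assumed to be a monotonic timestamp recorded
      when the request began):
      
          MAXIMUM_REQUEST_DURATION = 55 # seconds; the real value can be tuned via gitlab.yml
          
          def deadline_for_call(call_timeout)
            request_start = Gitlab::SafeRequestStore[:request_start_time]
            return call_timeout unless request_start # e.g. Sidekiq: no request deadline applies
          
            remaining = (request_start + MAXIMUM_REQUEST_DURATION) -
              Process.clock_gettime(Process::CLOCK_MONOTONIC)
            raise 'request deadline already exceeded' if remaining <= 0
          
            [call_timeout, remaining].min
          end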
  21. 27 Dec, 2019 1 commit
  22. 20 Dec, 2019 2 commits
  23. 16 Dec, 2019 1 commit
  24. 13 Dec, 2019 2 commits
    • Rename Gitlab::Sentry into Gitlab::ErrorTracking · 035e7359
      Aleksei Lipniagov authored
      Gitlab::Sentry does not reflect the purpose of the class; it needs a
      more generic name which is service-agnostic.
    • Refactor Sentry handling · 1ee162b6
      Kamil Trzciński authored
      Rename the methods of the Sentry class:
      - `track_acceptable_exception` => `track_exception`: we just want
        to capture the exception,
      - `track_exception` => `track_and_raise_for_dev_exception`: as the
        name says, it raises only for developers (development/test),
      - `track_and_raise_exception`: we want to capture
        and re-raise the exception.
      
      Update exception tracking
      
      - Remove `extra:` and instead accept hash,
      - Update documentation to include the best practices,
      - Remove manual logging of exceptions
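      
      After both renames in this pair of commits, call sites look roughly like
      this (the error and the extra hash are illustrative):
      
          error = StandardError.new('Gitaly node unreachable')
          
          # Capture only -- never raises:
          Gitlab::ErrorTracking.track_exception(error, storage: 'default')
          
          # Capture, and raise in development/test so bugs are noticed early:
          Gitlab::ErrorTracking.track_and_raise_for_dev_exception(error)
          
          # Capture and always re-raise:
          Gitlab::ErrorTracking.track_and_raise_exception(error)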
  25. 11 Dec, 2019 1 commit
    • Introduce `Runtime` class to identify runtime proc · 3a1ea22c
      Matthias Käppler authored
      This will serve as the new single access point into
      identifying which runtime is active.
      
      Add Process.max_threads method
      
      This will return the maximum concurrency for the given
      runtime environment.
      
      Revert to including `defined?` checks for Process
      
      This is based on a reference implementation by New Relic which they
      use to detect the current dispatcher.
      
      Add `name` method, throw if ambiguous
      
      This can be called from an initializer for instance.
      
      Log the current runtime in an initializer
      
      Add `multi_threaded?` and `app_server?` helpers
      
      To allow easier grouping of configuration
      
      Rename `Process` to `Runtime`
      
      And move it into its own file.
      
      Replace all remaining runtime checks with new API
      
      Including a commit body because the danger bot politely asked me
      to. There really is nothing else to say.
      
      Prefer `class << self` over `instance_eval`
      
      It seems to be the more widely adopted style.
      
      Simplify `has_instances?` helper method
      
      Fix rubocop offense
      
      Remove max_threads method
      
      This wasn't currently used anywhere and we should define this elsewhere.
      
      Remove references to NR library
      
      This caused some legal questions. We weren't using the instance lookup
      before, so it should be OK.
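      
      A condensed sketch of the resulting module, trimmed to the helpers
      mentioned above (the real class covers more runtimes and also exposes
      `name`, which raises when the runtime is ambiguous):
      
          module Gitlab
            module Runtime
              def self.puma?
                !!defined?(::Puma)
              end
          
              def self.unicorn?
                !!(defined?(::Unicorn) && defined?(::Unicorn::HttpServer))
              end
          
              def self.sidekiq?
                !!(defined?(::Sidekiq) && Sidekiq.server?)
              end
          
              def self.app_server?
                puma? || unicorn?
              end
          
              def self.multi_threaded?
                puma? || sidekiq?
              end
            end
          end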
  26. 01 Nov, 2019 1 commit
    • Extend gRPC timeouts for Rake tasks · 580fc82e
      Stan Hu authored
      In https://gitlab.com/gitlab-org/gitlab/merge_requests/16926 we added
      gRPC timeouts for calls that did not previously have timeouts to prevent
      Sidekiq queries from getting stuck. In addition, we also made long
      timeouts 55 seconds for non-Sidekiq requests, but this meant Rake tasks
      also fell into this bucket. Rake backup tasks with large repositories
      would fail because the CreateBundle RPC would time out after 55 seconds.
      
      To avoid this trap, we flip the logic of long_timeout: instead of
      checking for Sidekiq (or other background jobs), we only lower the
      timeout to 55 seconds if we're servicing a Web request in Puma or
      Unicorn.
      
      Closes https://gitlab.com/gitlab-org/gitlab/issues/22398
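      
      The flipped check reads roughly as follows (web_request? is a hypothetical
      stand-in for the real Puma/Unicorn detection):
      
          # Hypothetical predicate standing in for the real runtime check.
          def self.web_request?
            !!(defined?(::Puma) || defined?(::Unicorn))
          end
          
          def self.long_timeout
            return 55 if web_request? # seconds -- keep web requests tightly bounded
          
            6 * 60 * 60 # Rake tasks, Sidekiq, console: leave room for large repositories
          end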
  27. 25 Oct, 2019 1 commit
    • Fix Gitaly call duration measurements · ffd31cca
      Stan Hu authored
      In many cases, we were only measuring the time to return from the Gitaly
      RPC, not the total execution time of the RPC for streaming responses. To
      handle this, we wrap the RPC in a new method,
      `GitalyClient.streaming_call` that yields a response and consumes it.
      
      Only `CommitService` has been updated to use this new measurement.
      Other services should be updated in subsequent merge requests.
      
      Relates to https://gitlab.com/gitlab-org/gitlab/issues/30334
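      
      Schematically, the new method looks like this (simplified; helper names
      are assumptions):
      
          def self.streaming_call(storage, service, rpc, request)
            start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
            response = call(storage, service, rpc, request) # returns the stream Enumerator
            yield response # the caller consumes the stream inside the block
          ensure
            # The duration now covers consuming the stream, not just receiving it.
            add_query_time(Process.clock_gettime(Process::CLOCK_MONOTONIC) - start)
          end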
  28. 21 Oct, 2019 1 commit
  29. 15 Oct, 2019 1 commit
  30. 02 Oct, 2019 1 commit
  31. 19 Sep, 2019 1 commit
  32. 28 Aug, 2019 2 commits
    • Rename Labkit::Tracing::GRPCInterceptor to GRPC::ClientInterceptor · e369dee8
      Andrew Newdigate authored
      The original name has been deprecated
    • Make performance bar enabled checks consistent · f9c456bd
      Sean McGivern authored
      Previously, we called the `peek_enabled?` method like so:
      
          prepend_before_action :set_peek_request_id, if: :peek_enabled?
      
      Now we don't have a `set_peek_request_id` method, so we don't need that
      line. However, the `peek_enabled?` part had a side-effect: it would also
      populate the request store cache for whether the performance bar was
      enabled for the current request or not.
      
      This commit makes that side-effect explicit, and replaces all uses of
      `peek_enabled?` with the more explicit
      `Gitlab::PerformanceBar.enabled_for_request?`. There is one spec that
      still sets `SafeRequestStore[:peek_enabled]` directly, because it is
      contrasting behaviour with and without a request store enabled.
      
      The upshot is:
      
      1. We still set the value in one place. We make it more explicit that
         that's what we're doing.
      2. Reading that value uses a consistent method so it's easier to find in
         future.
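      
      The new read side then looks like this everywhere (the caller shown is
      hypothetical):
      
          # One explicit, consistently named check instead of the peek_enabled?
          # side-effect:
          record_peek_data if Gitlab::PerformanceBar.enabled_for_request?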
  33. 23 Aug, 2019 1 commit
  34. 09 Aug, 2019 1 commit
    • Add Gitaly and Rugged call timing in Sidekiq logs · a74396dc
      Stan Hu authored
      This will help identify Sidekiq jobs that invoke an excessive number of
      filesystem accesses.
      
      The timing data is stored in `RequestStore`, but this is only active
      within the middleware and is not directly accessible to the Sidekiq
      logger. However, it is possible for the middleware to modify the job
      hash to pass this data along to the logger.
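      
      A sketch of the mechanism described above (the job hash keys and reader
      methods are assumptions): the middleware copies the per-job timings out
      of the request store and into the job hash, where the Sidekiq logger can
      see them.
      
          class GitalyInstrumentationMiddleware
            def call(worker, job, queue)
              yield
            ensure
              # RequestStore is only alive inside the middleware, so hand the
              # numbers to the logger by mutating the job hash it already has.
              job['gitaly_calls'] = Gitlab::GitalyClient.get_request_count
              job['gitaly_duration_s'] = Gitlab::GitalyClient.query_time
            end
          end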
  35. 30 Jul, 2019 1 commit
  36. 19 Jul, 2019 1 commit
  37. 16 Jul, 2019 1 commit