- 12 Mar, 2021 1 commit
Alex Buijs authored
From Labkit::Context.current.to_h to Gitlab::ApplicationContext.current

- 10 Mar, 2021 1 commit
Stan Hu authored
This reverts merge request !56003

- 08 Mar, 2021 1 commit
Alex Buijs authored
From Labkit::Context.current.to_h to Gitlab::ApplicationContext.current

- 17 Feb, 2021 1 commit
Patrick Steinhardt authored
The check for whether the hook env is active needs to verify that the hook env is set at all and, if it is, that it carries any values. This is currently done in two calls, but can be improved to simply use `#blank?`.
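A minimal sketch of that simplification, assuming a hypothetical `fetch_hook_env` lookup rather than GitLab's actual SafeRequestStore plumbing:

```ruby
require 'active_support/core_ext/object/blank'

# Sketch only: `fetch_hook_env` stands in for however the hook environment
# is actually looked up (assumption for illustration).
def fetch_hook_env
  Thread.current[:gitaly_hook_env] # may be nil, {}, or a populated hash
end

# Before: two separate checks, one for nil and one for emptiness.
def hook_env_active_before?
  env = fetch_hook_env
  !env.nil? && !env.empty?
end

# After: ActiveSupport's #blank? covers nil and {} in a single call.
def hook_env_active_after?
  !fetch_hook_env.blank? # equivalently: fetch_hook_env.present?
end
```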

- 16 Feb, 2021 1 commit
Patrick Steinhardt authored
With commit edab619a (gitaly: Fix access checks with transactions and quarantine environments, 2021-02-05), we started injecting a flag into Gitaly requests to force-route to the primary Gitaly node in case a hook environment is set, in order to not break access to quarantined objects. It turns out that this change breaks read distribution, though, as now all requests are force-routed to the primary.

The cause is trivial enough: the SafeRequestStore returns an empty hash if it wasn't set up to contain anything. Given that the checks for whether a HookEnv was set up only verified that there was something in the SafeRequestStore, they always concluded that requests were running in the context of a HookEnv.

Fix the issue by checking that the returned value is non-empty.

- 08 Feb, 2021 1 commit
Patrick Steinhardt authored
In order to check whether certain operations are allowed to be executed by a user, Gitaly POSTs to the `/internal/allowed` endpoint. The request includes information about what change the user wants to perform, but it also contains information about the environment the change is currently performed in.

When a user performs a push, git will first store all pushed objects in a quarantine environment. This is a separate temporary directory containing all new objects, such that if the push gets rejected, the new objects will not persist in the repository. The crux is that in order to inspect these new objects, git needs to be told that such a quarantine environment exists. This is why Gitaly sends information about this quarantine environment to `/internal/allowed`, so that we can relay this information back to Gitaly when we want to inspect newly pushed objects to determine whether they're allowed or not.

While it's a leaky abstraction, it has worked fine until now. But with transactions, that's not true anymore: when multiple Gitaly nodes take part in a push, they'll all generate a randomly named quarantine environment. But as only the primary node injects its info into the request, we are not able to access quarantine environments of secondary nodes. If we now route accessor requests to any of the secondary Gitaly nodes with the quarantine environment of the primary, the request will fail as git cannot find the quarantined objects.

To fix this, Gitaly has recently grown a new gRPC header which allows us to force-route requests to the primary via 1102b0b67 (praefect: Implement force-routing to primary for node-manager router, 2021-02-03) and 4d877d7d5 (praefect: Implement force-routing to primary for per-repo router, 2021-02-03). So let's set that header if we know that we're being executed via a hook, which is the only case where a quarantine environment may exist.
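A rough sketch of setting such a header; the metadata key and values below are assumptions for illustration, not necessarily the exact header Gitaly/Praefect expose:

```ruby
require 'active_support/core_ext/object/blank'

# When a hook environment exists (i.e. we are running from a Git hook and
# quarantined objects may be involved), add a gRPC metadata entry that asks
# Praefect to route the request to the primary Gitaly node.
FORCE_PRIMARY_METADATA_KEY = 'gitaly-route-repository-accessor-policy'.freeze # assumed key

def request_metadata(hook_env)
  metadata = { 'authorization' => 'Bearer <token>' } # placeholder auth header
  metadata[FORCE_PRIMARY_METADATA_KEY] = 'primary-only' if hook_env.present?
  metadata
end

# Usage sketch: the hash would be passed as `metadata:` on the gRPC stub call.
request_metadata('GIT_OBJECT_DIRECTORY' => '/tmp/quarantine') # includes force-routing entry
request_metadata({})                                          # stays eligible for read distribution
```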

- 26 Jan, 2021 1 commit
Takuya Noguchi authored
EE port: gitlab-org/gitlab MR 52431
Signed-off-by: Takuya Noguchi <takninnovationresearch@gmail.com>

- 21 Dec, 2020 1 commit
Igor Wiedler authored

- 02 Sep, 2020 1 commit
Rajendra Kadam authored

- 05 Aug, 2020 1 commit
Doug Stull authored
- enforce standards...

- 23 Jul, 2020 1 commit
nmilojevic1 authored
- Refactor Git Blob
- Refactor Highlight Cache
- Refactor Method Call
- Refactor Gitlab Database
- Fix Specs

- 30 Jun, 2020 1 commit
Oswaldo Ferreira authored
This is a stab at fixing the Gitaly timing in logs (gitaly_duration_s) for streamed responses using the same GitalyClient.call method. The problem with having a GitalyClient.call for non-streamed responses and GitalyClient.streaming_call (with a block) for streamed responses is that we'd need to rely mostly on documentation in order to get the timings right for new RPCs.

In order to solve that, here we look further into the Gitaly response. If it's an Enumerator (that's what the Ruby implementation of gRPC streams returns from the server, see https://grpc.io/docs/languages/ruby/basics/), we wrap that Enumerator in a custom enumerator which instruments the stream consumption.

Another advantage over wrapping the whole stream consumption in a block is that we won't add much Ruby CPU time to the measurement: only the response.next call is measured, which is the point of contact with Gitaly.
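A rough sketch of the instrumented-enumerator idea; class and callback names are illustrative, not GitLab's actual implementation:

```ruby
# Wraps a streamed gRPC response (an Enumerator) so that only the time spent
# waiting on the server - each `next` call - is added to the duration,
# not the caller's own processing of each message.
class InstrumentedStream
  include Enumerable

  def initialize(stream, &add_duration)
    @stream = stream              # the Enumerator returned by the gRPC stub
    @add_duration = add_duration  # callback receiving seconds spent in Gitaly
  end

  def each
    loop do
      start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
      begin
        message = @stream.next
      rescue StopIteration
        break
      ensure
        @add_duration.call(Process.clock_gettime(Process::CLOCK_MONOTONIC) - start)
      end
      yield message
    end
  end
end

# Usage sketch: `stream` is what a streaming RPC returned.
# total = 0.0
# InstrumentedStream.new(stream) { |d| total += d }.each { |msg| process(msg) }
```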

- 20 May, 2020 1 commit
Jacob Vosmaer authored
Rounding causes a loss of information. We should only round numbers when we have to. This commit fixes instrumentation code that was unnecessarily rounding intermediate results.
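A small illustration of why rounding intermediate results is harmful:

```ruby
# Rounding each intermediate timing discards information that adds up.
timings = [0.004, 0.004, 0.004] # seconds spent in three calls

rounded_too_early  = timings.map { |t| t.round(2) }.sum # => 0.0  (all information lost)
rounded_at_the_end = timings.sum.round(2)               # => 0.01 (close to the true 0.012)
```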

- 30 Apr, 2020 1 commit
Robert May authored
Runs the Gitlab/Json cop autocorrector over the lib directory.

- 27 Apr, 2020 1 commit
Oswaldo Ferreira authored
We have 6-decimal (microsecond) precision for a few Go service timings, so it makes more sense for all existing *_duration_s fields on Rails/API/Sidekiq to use 6 decimal places instead of 2, and that's what we accomplish here.

- 16 Apr, 2020 1 commit
Oswaldo Ferreira authored
It makes the decision on how to log timings within JSON logs based on https://www.robustperception.io/who-wants-seconds.

- 09 Mar, 2020 1 commit
James Fargher authored

- 05 Mar, 2020 1 commit
Jacob Vosmaer authored

- 24 Jan, 2020 1 commit
nnelson authored
Add a rudimentary client implementation for the new Gitaly disk_statistics gRPC. The new Gitaly server methods include disk_used and disk_available; this commit adds unit tests for those methods.

Also add a default server response on error: when the remote service is unavailable, times out, or otherwise causes a service call failure, return a default response with trivial values set on its fields.
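A sketch of the default-response-on-error pattern, assuming the grpc gem; the client class and stub call are stand-ins, and only the field names (disk_used, disk_available) come from the commit message:

```ruby
require 'grpc'

# Default-response-on-error: rather than raising when the RPC fails, return a
# response object with trivial values.
DiskStatistics = Struct.new(:disk_used, :disk_available)

class DiskStatisticsClient
  def initialize(stub)
    @stub = stub # a gRPC stub exposing a disk statistics RPC (assumed)
  end

  def disk_statistics
    response = @stub.disk_statistics # hypothetical RPC call
    DiskStatistics.new(response.disk_used, response.disk_available)
  rescue GRPC::BadStatus
    # Remote service unavailable, timed out, or otherwise failed:
    # fall back to a default response with zeroed fields.
    DiskStatistics.new(0, 0)
  end
end
```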

- 13 Jan, 2020 1 commit
Bob Van Landuyt authored
This makes sure that the deadline we set on a Gitaly call never exceeds the request deadline. We also raise an exception if the request deadline has already been exceeded by the time we're trying to make a Gitaly call. These deadlines don't apply when calling Gitaly from Sidekiq.

We do this by storing the start time of the request in the request store on the thread (using the RequestStore). The maximum request duration defaults to 55 seconds, as the default worker timeout is 60 seconds, but it can be configured through gitlab.yml.
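A sketch of the deadline arithmetic described here; the constant, error class, and method names are illustrative, and the caller is assumed to have stored a monotonic start timestamp for the request:

```ruby
MAX_REQUEST_DURATION_SECONDS = 55 # default; would be configurable via gitlab.yml

RequestDeadlineExceeded = Class.new(StandardError)

# Returns the seconds left in the current request, nil if no request deadline
# applies (e.g. Sidekiq), and raises if the request deadline has already passed.
def remaining_request_time(request_start_time)
  return nil unless request_start_time

  elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - request_start_time
  remaining = MAX_REQUEST_DURATION_SECONDS - elapsed
  raise RequestDeadlineExceeded if remaining <= 0

  remaining
end

# The effective Gitaly deadline never exceeds what is left of the request.
def gitaly_deadline(rpc_timeout, request_start_time)
  remaining = remaining_request_time(request_start_time)
  remaining ? [rpc_timeout, remaining].min : rpc_timeout
end
```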

- 27 Dec, 2019 1 commit
Stan Hu authored
We call `GitLab::Profiler.clean_backtrace` in a lot of places where we aren't actually profiling. It makes sense to break this out into its own class method. This change also reduces memory usage and speeds up the backtrace cleaner since the regexp is computed once at load time. Closes https://gitlab.com/gitlab-org/gitlab/issues/36645
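A sketch of hoisting the backtrace-cleaning regexp into a constant so it is compiled once at load time; the filter pattern is illustrative, not GitLab's actual cleaner:

```ruby
module BacktraceCleaner
  # Built once when the file is loaded, instead of on every call.
  IGNORED_LINES = %r{gems/|lib/ruby/|bin/}.freeze

  def self.clean_backtrace(backtrace)
    Array(backtrace).reject { |line| line.match?(IGNORED_LINES) }
  end
end

# BacktraceCleaner.clean_backtrace(caller) # => only application frames remain
```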

- 20 Dec, 2019 2 commits
Matthias Kaeppler authored
Matthias Kaeppler authored
This reverts commit b0944b95.

- 16 Dec, 2019 1 commit
Michael Kozono authored
This reverts merge request !20294

- 13 Dec, 2019 2 commits
Aleksei Lipniagov authored
Gitlab::Sentry does not reflect the purpose of the class; it needs a more generic, service-agnostic name.
Kamil Trzciński authored
Rename methods of the Sentry class:
- `track_acceptable_exception` => `track_exception`: we just want to capture the exception
- `track_exception` => `track_and_raise_for_dev_exception`: as said
- `track_and_raise_exception`: we want to capture and re-raise the exception

Update exception tracking:
- Remove `extra:` and instead accept a hash
- Update documentation to include the best practices
- Remove manual logging of exceptions

- 11 Dec, 2019 1 commit
Matthias Käppler authored
This will serve as the new single access point into identifying which runtime is active.

- Add Process.max_threads method: this will return the maximum concurrency for the given runtime environment.
- Revert to including `defined?` checks for Process: this is based on a reference impl by New Relic which they use to detect the current dispatcher.
- Add `name` method, throw if ambiguous: this can be called from an initializer, for instance.
- Log the current runtime in an initializer.
- Add `multi_threaded?` and `app_server?` helpers, to allow easier grouping of configuration.
- Rename `Process` to `Runtime` and move it into its own file.
- Replace all remaining runtime checks with the new API. Including a commit body because the danger bot politely asked me to. There really is nothing else to say.
- Prefer `class << self` over `instance_eval`: it seems to be the more widely adopted style.
- Simplify `has_instances?` helper method.
- Fix rubocop offense.
- Remove max_threads method: this wasn't currently used anywhere and we should define this elsewhere.
- Remove references to NR library: this caused some legal questions. We weren't using the instance lookup before, so it should be OK.
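A simplified sketch of `defined?`-based runtime detection along these lines; it is not the full Gitlab::Runtime implementation:

```ruby
module Runtime
  AmbiguousProcessError = Class.new(StandardError)

  def self.puma?
    !!defined?(::Puma)
  end

  def self.unicorn?
    !!defined?(::Unicorn)
  end

  def self.sidekiq?
    !!defined?(::Sidekiq) && Sidekiq.server?
  end

  def self.console?
    !!defined?(::Rails::Console)
  end

  def self.app_server?
    puma? || unicorn?
  end

  def self.multi_threaded?
    puma? || sidekiq?
  end

  # Returns the current runtime's name, raising if more than one matches.
  def self.name
    matches = []
    matches << :puma if puma?
    matches << :unicorn if unicorn?
    matches << :sidekiq if sidekiq?
    matches << :console if console?

    raise AmbiguousProcessError, "found #{matches.size} matches: #{matches}" if matches.size > 1

    matches.first || :unknown
  end
end
```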

- 01 Nov, 2019 1 commit
Stan Hu authored
In https://gitlab.com/gitlab-org/gitlab/merge_requests/16926 we added gRPC timeouts for calls that did not previously have timeouts to prevent Sidekiq queries from getting stuck. In addition, we made the long timeout 55 seconds for non-Sidekiq requests, but this meant Rake tasks also fell into this bucket. Rake backup tasks with large repositories would fail because the CreateBundle RPC would time out after 55 seconds.

To avoid this trap, we flip the logic of long_timeout: instead of checking for Sidekiq (or other background jobs), we only lower the timeout to 55 seconds if we're servicing a Web request in Puma or Unicorn. Closes https://gitlab.com/gitlab-org/gitlab/issues/22398
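A sketch of the flipped long_timeout logic, assuming ActiveSupport; the background value is a placeholder, not necessarily GitLab's actual setting:

```ruby
require 'active_support/core_ext/numeric/time'

# Instead of asking "are we in Sidekiq?", ask "are we serving a web request?"
# and only then clamp to 55 seconds; Rake tasks and background jobs keep a
# long timeout.
def web_request?
  defined?(::Puma) || defined?(::Unicorn)
end

def long_timeout
  web_request? ? 55.seconds : 6.hours # 6.hours is illustrative only
end
```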

- 25 Oct, 2019 1 commit
Stan Hu authored
In many cases, we were only measuring the time to return from the Gitaly RPC, not the total execution time of the RPC for streaming responses. To handle this, we wrap the RPC in a new method, `GitalyClient.streaming_call` that yields a response and consumes it. Only `CommitService` has been updated to use this new measurement. Other services should be updated in subsequent merge requests. Relates to https://gitlab.com/gitlab-org/gitlab/issues/30334
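A sketch of the block-based wrapper this commit describes (the predecessor of the enumerator approach in the June 2020 commit above); helper names are illustrative:

```ruby
# The block issues the RPC and returns the streamed response; the wrapper
# consumes it so that the measured duration covers the whole stream, not just
# the initial return from the RPC.
def streaming_call(service, rpc)
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  messages = []
  response = yield                          # block performs the RPC, returns the stream
  response.each { |msg| messages << msg }   # consume inside the timing window
  messages
ensure
  add_gitaly_duration(service, rpc, Process.clock_gettime(Process::CLOCK_MONOTONIC) - start)
end

def add_gitaly_duration(service, rpc, seconds)
  # In GitLab this would be accumulated into gitaly_duration_s; here we print.
  puts format('%s#%s took %.6f s', service, rpc, seconds)
end

# Usage sketch:
# streaming_call('CommitService', 'ListCommits') do
#   stub.list_commits(request) # returns an Enumerator of response messages
# end
```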

- 21 Oct, 2019 1 commit
allison.browne authored

- 15 Oct, 2019 1 commit
Thong Kuah authored
There should be no cases where we need to inherit=true.

- 02 Oct, 2019 1 commit
Jacob Vosmaer authored

- 19 Sep, 2019 1 commit
Zeger-Jan van de Weg authored
Setting timeouts allows the client to give up on the RPC call, which protects both the client and the server. This commit also removes the implicit lack of a timeout when running on Sidekiq. This is a good thing: if one server is overloaded with work, it won't block the whole queue from processing. Part of: https://gitlab.com/groups/gitlab-org/-/epics/1737

- 28 Aug, 2019 2 commits
Andrew Newdigate authored
The original name has been deprecated
Sean McGivern authored
Previously, we called the `peek_enabled?` method like so:

    prepend_before_action :set_peek_request_id, if: :peek_enabled?

Now we don't have a `set_peek_request_id` method, so we don't need that line. However, the `peek_enabled?` part had a side-effect: it would also populate the request store cache for whether the performance bar was enabled for the current request or not.

This commit makes that side-effect explicit, and replaces all uses of `peek_enabled?` with the more explicit `Gitlab::PerformanceBar.enabled_for_request?`. There is one spec that still sets `SafeRequestStore[:peek_enabled]` directly, because it is contrasting behaviour with and without a request store enabled.

The upshot is:
1. We still set the value in one place. We make it more explicit that that's what we're doing.
2. Reading that value uses a consistent method so it's easier to find in future.

- 23 Aug, 2019 1 commit
John Cai authored

- 09 Aug, 2019 1 commit
Stan Hu authored
This will help identify Sidekiq jobs that invoke an excessive number of filesystem accesses. The timing data is stored in `RequestStore`, but this is only active within the middleware and is not directly accessible to the Sidekiq logger. However, it is possible for the middleware to modify the job hash to pass this data along to the logger.
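A sketch of the middleware idea; the storage and key names are illustrative:

```ruby
# A Sidekiq server middleware that copies the time the job spent in Gitaly
# (tracked in thread-local/request-store state during execution) into the job
# hash, so the structured Sidekiq logger can include it.
class GitalyTimingMiddleware
  def call(_worker, job, _queue)
    yield
  ensure
    # The request-store value is only visible here, so expose it by mutating
    # the job hash that the logger sees.
    job['gitaly_duration_s'] = Thread.current[:gitaly_duration_s].to_f
  end
end

# Sidekiq.configure_server do |config|
#   config.server_middleware { |chain| chain.add GitalyTimingMiddleware }
# end
```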

- 30 Jul, 2019 1 commit
Stan Hu authored
Under SELinux, the file cannot be written, and `Errno::EACCES`, not `Errno::ACCESS`, is raised. Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/65328

- 19 Jul, 2019 1 commit
Stan Hu authored
If `GitalyClient#can_use_disk?` returned `false`, it was never cached properly and led to an excessive number of Gitaly calls. Instead of using `cached_value.present?`, we need to check `cached_value.nil?`. Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/64802
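A sketch of the caching pitfall and the fix; the cache storage and method names are illustrative:

```ruby
# Using present? (or ||=) to decide whether a value is cached treats a cached
# `false` as a miss, so the expensive call repeats; checking for nil fixes it.
def can_use_disk?
  cache = Thread.current[:can_use_disk_cache] ||= {}

  # Buggy version: `false` is never considered cached, so the call repeats.
  # return cache[:value] if cache[:value].present?

  return cache[:value] unless cache[:value].nil?

  cache[:value] = expensive_disk_access_check
end

def expensive_disk_access_check
  false # stand-in for the Gitaly RPC that determines disk accessibility
end
```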

- 16 Jul, 2019 1 commit
John Cai authored
Whenever we use the rugged implementation, we are going straight to disk, so we want to bypass the disk access check.