- 07 Dec, 2020 1 commit
-
Jacob Vosmaer authored
-
- 02 Dec, 2020 5 commits
-
Alessio Caiazza authored
Support alternate document root directory. See merge request gitlab-org/gitlab-workhorse!626
-
Patrick Bajao authored
Do not resize when image is less than 8 bytes; add comments. See merge request gitlab-org/gitlab-workhorse!666
-
Aleksei Lipniagov authored
-
Patrick Bajao authored
Auto-register Prometheus metrics. Closes #326. See merge request gitlab-org/gitlab-workhorse!660
-
Ben Kochie authored
* Update to latest upstream Prometheus library.
* Update to latest upstream Prometheus gRPC library.
* Switch to `promauto` package to avoid missing metrics.

Closes: https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/326
Signed-off-by: Ben Kochie <bjk@gitlab.com>
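
As a sketch of the difference (metric names here are illustrative, not Workhorse's actual metrics):

```go
package main

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

// With the plain prometheus package, a metric stays invisible until it
// is explicitly registered; a forgotten MustRegister means a missing metric.
var manualCounter = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "example_manual_total", // illustrative name
	Help: "Counter that must be registered by hand.",
})

func init() {
	prometheus.MustRegister(manualCounter)
}

// promauto registers the metric with the default registry at
// construction time, so registration cannot be forgotten.
var autoCounter = promauto.NewCounter(prometheus.CounterOpts{
	Name: "example_auto_total", // illustrative name
	Help: "Counter registered automatically on creation.",
})

func main() {
	manualCounter.Inc()
	autoCounter.Inc()
}
```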
-
- 01 Dec, 2020 1 commit
-
Stan Hu authored
This will be useful for supporting no-downtime upgrades. Admins attempting to upgrade GitLab via our no-downtime upgrade procedure have found that CSS and JavaScript often don't load while the upgrade is in progress. This is because in a mixed deployment scenario with a load balancer, this can happen:

1. A user accesses node version N+1, which then makes a CSS/JS request that lands on version N.
2. A user accesses node version N, which then makes a CSS/JS request that lands on version N+1.

In both scenarios, the user gets a 404 since only one version of the assets exists on a given server. To fix this, we provide an alternate path where previous and future assets can be stored.

Relates to https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/304
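
A minimal sketch of the fallback idea, assuming hypothetical paths and handler names (this is not the actual Workhorse implementation):

```go
package main

import (
	"net/http"
	"os"
	"path/filepath"
)

// serveAsset tries the primary document root first, then falls back to
// an alternate root that can hold assets from the previous or next
// release during a rolling upgrade.
func serveAsset(w http.ResponseWriter, r *http.Request, mainRoot, altRoot string) {
	for _, root := range []string{mainRoot, altRoot} {
		path := filepath.Join(root, filepath.Clean(r.URL.Path))
		if info, err := os.Stat(path); err == nil && !info.IsDir() {
			http.ServeFile(w, r, path)
			return
		}
	}
	http.NotFound(w, r)
}

func main() {
	http.HandleFunc("/assets/", func(w http.ResponseWriter, r *http.Request) {
		serveAsset(w, r, "/srv/assets", "/srv/assets-alt") // hypothetical paths
	})
	http.ListenAndServe(":8080", nil)
}
```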
-
- 30 Nov, 2020 5 commits
-
Jacob Vosmaer authored
Fix uploader not returning 413 when artifact is too large. Closes #328. See merge request gitlab-org/gitlab-workhorse!663
-
Alessio Caiazza authored
-
Alessio Caiazza authored
[ci skip]
-
Alessio Caiazza authored
Add upload acceleration for Requirements import. See merge request gitlab-org/gitlab-workhorse!664
-
Eugenia Grieff authored
- Add spec
-
- 27 Nov, 2020 1 commit
-
Stan Hu authored
When an upload exceeds the maximum limit, `ErrEntityTooLarge` gets returned, but it is wrapped in multiple layers of errors by the time it is checked. As a result, when Google Cloud Storage was used to upload files, artifacts exceeding the maximum size would report a "500 Internal Server Error" instead of the correct "413 Request Entity Too Large" error. To fix this, we check the state of `hardLimitReader` at the end of `SaveFileFromReader` and set the error to `ErrEntityTooLarge` to ensure this error is returned. Closes https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/328
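
A sketch of the mechanism, assuming a hardLimitReader along these lines (field names are illustrative):

```go
package example

import (
	"errors"
	"io"
)

var ErrEntityTooLarge = errors.New("entity is too large")

// hardLimitReader fails the stream once the byte budget is exhausted
// and remembers that it did, because the storage client may bury the
// returned error under several layers of its own wrapping.
type hardLimitReader struct {
	r         io.Reader
	remaining int64
	exceeded  bool
}

func (h *hardLimitReader) Read(p []byte) (int, error) {
	n, err := h.r.Read(p)
	h.remaining -= int64(n)
	if h.remaining < 0 {
		h.exceeded = true
		return n, ErrEntityTooLarge
	}
	return n, err
}

// At the end of SaveFileFromReader, checking the reader's state lets the
// caller surface the right error no matter how the client wrapped it:
//
//	if hlr.exceeded {
//		err = ErrEntityTooLarge
//	}
```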
-
- 26 Nov, 2020 2 commits
-
Alessio Caiazza authored
-
Alessio Caiazza authored
[ci skip]
-
- 25 Nov, 2020 2 commits
-
Nick Thomas authored
Consistent logging in image resizer module. Closes #320. See merge request gitlab-org/gitlab-workhorse!652
-
Matthias Käppler authored
This allows us to log these errors consistently, with all labels applied.
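
As a rough illustration with logrus (the label names are made up):

```go
package example

import "github.com/sirupsen/logrus"

// logScalerError attaches the same label set to every error logged by
// the resizer, so log lines can be filtered and correlated uniformly.
func logScalerError(imagePath string, targetWidth int, err error) {
	logrus.WithFields(logrus.Fields{
		"subsystem":   "image-resizer", // illustrative labels
		"path":        imagePath,
		"targetWidth": targetWidth,
	}).WithError(err).Error("image scaling failed")
}
```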
-
- 24 Nov, 2020 2 commits
-
Jacob Vosmaer authored
Add Patrick Bajao as code owner. NO CHANGELOG. See merge request gitlab-org/gitlab-workhorse!661
-
Patrick Bajao authored
-
- 23 Nov, 2020 2 commits
-
Nick Thomas authored
Fix EXIF cleaning for S3-compatible Object Storage. See merge request gitlab-org/gitlab-workhorse!658
-
Nick Thomas authored
Update LabKit library to v1.0.0. See merge request gitlab-org/gitlab-workhorse!659
-
- 20 Nov, 2020 5 commits
-
Andrew Newdigate authored
LabKit has reached a 1.0.0 milestone 🎉 See https://gitlab.com/gitlab-org/labkit/-/releases/v1.0.0
-
Alessio Caiazza authored
-
Alessio Caiazza authored
-
Alessio Caiazza authored
When exiftool has already terminated, we no longer attempt to read from its stdout. Related to: https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/233
-
Alessio Caiazza authored
The EXIF cleaner does not support additional reads after the underlying process has completed. When the Object Storage configuration requires a MultipartUpload, workhorse loops over the input with a LimitReader; this loop calls Read one extra time to make sure the input was consumed entirely. Related to: https://gitlab.com/gitlab-org/gitlab-workhorse/-/issues/233
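
The fix can be pictured as a wrapper that keeps returning io.EOF once the process is done, rather than touching its closed stdout again (a sketch, not the actual implementation):

```go
package example

import "io"

// cleanedReader wraps exiftool's stdout. Once the process has exited
// and its output is drained, further Read calls keep returning io.EOF,
// because the MultipartUpload loop performs one extra Read to confirm
// the input was fully consumed.
type cleanedReader struct {
	stdout io.Reader
	eof    bool
}

func (c *cleanedReader) Read(p []byte) (int, error) {
	if c.eof {
		// The process is gone; don't attempt to read from it again.
		return 0, io.EOF
	}
	n, err := c.stdout.Read(p)
	if err == io.EOF {
		c.eof = true
	}
	return n, err
}
```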
-
- 19 Nov, 2020 5 commits
-
Alessio Caiazza authored
Return 413 HTTP status for S3 uploads if max upload limit is reached. See merge request gitlab-org/gitlab-workhorse!655
-
Alessio Caiazza authored
-
Alessio Caiazza authored
[ci skip]
-
Alessio Caiazza authored
Add metric image upload route for acceleration. See merge request gitlab-org/gitlab-workhorse!653
-
Sean Arnold authored
Add spec
-
- 17 Nov, 2020 5 commits
-
Nick Thomas authored
Enable Secret Detection in CI. NO CHANGELOG. See merge request gitlab-org/gitlab-workhorse!654
-
Michael Henriksen authored
-
Michael Henriksen authored
-
Nick Thomas authored
Add success-client-cache status for image scaler. See merge request gitlab-org/gitlab-workhorse!656
-
Matthias Käppler authored
Since we weren't counting cached responses as successes, they were showing up as `unknown` in Prometheus, which counted against the overall error budget. We need to follow up with a runbook change that also counts cached responses as successes.
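
A sketch of an outcome counter with a dedicated status for client-cache hits (metric and label values are illustrative):

```go
package example

import (
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var scalerRequests = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "example_image_scaler_requests_total", // illustrative name
		Help: "Image scaler outcomes, by status.",
	},
	[]string{"status"},
)

// recordOutcome gives 304 (Not Modified) responses their own status so
// dashboards can count them toward the success rate instead of lumping
// them in with "unknown".
func recordOutcome(statusCode int) {
	switch {
	case statusCode == 304:
		scalerRequests.WithLabelValues("success-client-cache").Inc()
	case statusCode >= 200 && statusCode < 300:
		scalerRequests.WithLabelValues("success").Inc()
	default:
		scalerRequests.WithLabelValues("unknown").Inc()
	}
}
```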
-
- 16 Nov, 2020 1 commit
-
Stan Hu authored
-
- 14 Nov, 2020 1 commit
-
Stan Hu authored
When an upload (e.g. a CI artifact) reaches the maximum file size limit, uploads via S3 would return a 500 error to the user. This made it difficult to understand why the upload failed. This was happening because the `hardLimitReader` was aborting the transfer with `ErrEntityTooLarge`, but this error was wrapped in layers of AWS errors. Since none of these AWS errors were understood by the file handler, a 500 error was returned.

To fix this, AWS provides a way to retrieve the original error, so we now recursively walk down the error stack to find the root cause. Note that there is an open issue to make this easier in the AWS SDK for Go (https://github.com/aws/aws-sdk-go/issues/2820).
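
The recursive walk can be sketched with the SDK's awserr package (the surrounding names are illustrative):

```go
package example

import (
	"errors"

	"github.com/aws/aws-sdk-go/aws/awserr"
)

var ErrEntityTooLarge = errors.New("entity is too large")

// unwrapAWSError walks nested awserr.Error values down to the
// innermost original error, where the real cause hides.
func unwrapAWSError(err error) error {
	if awsErr, ok := err.(awserr.Error); ok && awsErr.OrigErr() != nil {
		return unwrapAWSError(awsErr.OrigErr())
	}
	return err
}

// The caller can then map the root cause to the right HTTP status:
//
//	if unwrapAWSError(err) == ErrEntityTooLarge {
//		// respond with 413 Request Entity Too Large
//	}
```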
-
- 13 Nov, 2020 2 commits
-
Michael Henriksen authored
This helps detect accidental exposure of secrets, such as API tokens and cryptographic keys, in commits.
-
Alessio Caiazza authored
-