- 18 Oct, 2019 1 commit
James Fargher authored
Ran: `bundle exec rubocop --only RSpec/EmptyLineAfterSubject -a`
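For context, a small before/after illustration of what this cop autocorrects (the spec below is a made-up example, not code from the repository):

```ruby
# Before: RSpec/EmptyLineAfterSubject flags the missing blank line after `subject`.
RSpec.describe Project do
  subject { described_class.new }
  it { is_expected.to be_valid }
end

# After `rubocop -a`: a blank line separates the subject from the examples.
RSpec.describe Project do
  subject { described_class.new }

  it { is_expected.to be_valid }
end
```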
- 17 Oct, 2019 1 commit
Michael Kozono authored
It replaces everything before the repo-specific path.
- 16 Oct, 2019 1 commit
Marius Bobin authored
Allow jobs to use CI_JOB_TOKEN to trigger downstream pipelines
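As a hedged sketch of how a job could use this, here is a minimal Ruby script hitting the pipeline trigger API with `CI_JOB_TOKEN`; the host, downstream project ID, and ref are placeholders:

```ruby
require 'net/http'
require 'uri'

# Placeholders: adjust the GitLab host, downstream project ID, and ref.
uri = URI('https://gitlab.example.com/api/v4/projects/123/trigger/pipeline')

# The job token handed to the running CI job authorizes the trigger request.
response = Net::HTTP.post_form(uri,
  'token' => ENV['CI_JOB_TOKEN'],
  'ref'   => 'master')

puts "#{response.code} #{response.body}"
```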
- 15 Oct, 2019 1 commit
manojmj authored
This change removes the usage of the `access_requestable` trait from factories and specs.
- 11 Oct, 2019 1 commit
Thong Kuah authored
This association will be used for a cluster to indicate which project is used to manage it. Validate against duplicate scopes for the same project: if multiple clusters with the same scope point to the same `management_project`, it will be impossible to deterministically select a cluster.
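A hedged sketch of what such an association and guard could look like in a Rails model (the class, column, and scope names are assumptions, not the exact GitLab code):

```ruby
class Cluster < ApplicationRecord
  # The project that is used to manage this cluster.
  belongs_to :management_project, class_name: '::Project', optional: true

  # Disallow two clusters with the same scope pointing at the same
  # management project, so cluster selection stays deterministic.
  validates :management_project_id,
            uniqueness: { scope: :environment_scope },
            allow_nil: true
end
```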
- 10 Oct, 2019 2 commits
Aleksei Lipniagov authored
Aleksei Lipniagov authored
- 08 Oct, 2019 1 commit
John Cai authored
Creates a `repository_exists?` method that replaces the `exists?` method in `shell.rb` in order to deprecate the namespace service.
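A hedged sketch of the idea (not the actual `Gitlab::Shell` code): answer the existence question by asking Gitaly about the repository itself rather than going through the namespace service.

```ruby
# Hypothetical wrapper: delegate the existence check to Gitaly via the
# Gitlab::Git layer instead of inspecting namespace directories on disk.
def repository_exists?(repository)
  repository.raw_repository.exists?
end
```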
- 07 Oct, 2019 2 commits
Erick Bajao authored
Clean up specs and fix the logic for handling boolean-type settings. Move the responsibility of fetching the closest namespace setting out to the namespace model.
Erick Bajao authored
Now that we have project- and namespace-level settings for `max_artifacts_size`, we need to update the authorization for it in some of the runner-related API endpoints. This also adds a new helper method `#closest_setting`, which fetches the closest non-nil value for a given setting name. This will be useful for other settings like `max_pages_size`.
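A minimal sketch of the idea behind `#closest_setting` (the lookup order and helper names are assumptions, not the exact GitLab implementation): return the first non-nil value, walking from the project's own column to its namespace and finally to the instance-wide application setting.

```ruby
def closest_setting(name)
  setting = read_attribute(name)                              # project-level value
  setting = closest_namespace_setting(name) if setting.nil?   # hypothetical namespace lookup
  setting = application_setting_for(name)   if setting.nil?   # hypothetical instance-wide fallback
  setting
end
```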
- 03 Oct, 2019 2 commits
John Cai authored
Since `CreateRepository` will do a `mkdir -p` when creating a new repository, we do not need to call `AddNamespace` before creating the repository.
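For illustration, `mkdir -p` semantics in Ruby (the path is a placeholder): the call creates any missing parent directories and is a no-op when the directory already exists, which is why a separate namespace-creation step becomes unnecessary.

```ruby
require 'fileutils'

# Creates group/subgroup on the way if needed; does nothing if it all exists.
FileUtils.mkdir_p('/var/opt/repositories/group/subgroup/project.git')
```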
Brett Walker authored
as it's no longer needed
- 01 Oct, 2019 2 commits
Krasimir Angelov authored
Migrate `project_pages_metadata` on demand when a namespace or custom-domain virtual domain is requested. Related to https://gitlab.com/gitlab-org/gitlab/issues/28781#note_217282591.
Mark Chao authored
Add spec to test different combinations.
- 25 Sep, 2019 1 commit
Krasimir Angelov authored
Introduce new `project_pages_metadata` table and insert a new record on project creation. Update its `deployed` flag when pages are deployed/removed. Return only those projects from the namespace that have pages marked as deployed. On-demand and mass data migration will be handled in subsequent commits. Related to https://gitlab.com/gitlab-org/gitlab/issues/28781.
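A hedged sketch of what the table could look like as a Rails migration (column details beyond `project_id` and `deployed` are assumptions):

```ruby
class CreateProjectPagesMetadata < ActiveRecord::Migration[5.2]
  def change
    create_table :project_pages_metadata, id: false do |t|
      # One row per project, removed together with the project.
      t.references :project, null: false, index: { unique: true },
                             foreign_key: { on_delete: :cascade }
      # Flipped when pages are deployed or removed.
      t.boolean :deployed, null: false, default: false, index: true
    end
  end
end
```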
- 16 Sep, 2019 1 commit
Alex Ives authored
- Added `latest_pipeline_for_ref` method to project
- Updated pipelines_controller to use the `latest_pipeline_for_ref` method
- Added API endpoint to pipelines API for latest pipeline
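A hedged sketch of the lookup (not the exact GitLab code; it assumes the project's `ci_pipelines` association and `commit` helper):

```ruby
def latest_pipeline_for_ref(ref = default_branch)
  ref = ref.presence || default_branch
  sha = commit(ref)&.sha
  return unless sha

  # Most recent pipeline for the ref at its current head.
  ci_pipelines.where(ref: ref, sha: sha).order(id: :desc).first
end
```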
- 11 Sep, 2019 1 commit
Krasimir Angelov authored
- 05 Sep, 2019 2 commits
Fabio Pitino authored
When using a mirror for CI/CD only, we register a pull_request webhook. When a pull_request webhook is received, if the source branch SHA matches the actual head of the branch in the repository, we immediately create a new pipeline for the external pull request. Otherwise we store the pull request info for when the push webhook is received. When using `only/except: external_pull_requests` we can detect whether the pipeline has an open pull request on GitHub and create the job or not based on that. Feedback from review: split the big non-transactional migration into smaller ones, move methods to the related models, and return 422 when a webhook has unsupported actions.
Fabio Pitino authored
Detect if the pipeline runs for a GitHub pull request. When using a mirror for CI/CD only, we register a pull_request webhook. When a pull_request webhook is received, if the source branch SHA matches the actual head of the branch in the repository, we immediately create a new pipeline for the external pull request. Otherwise we store the pull request info for when the push webhook is received. When using `only/except: external_pull_requests` we can detect whether the pipeline has an open pull request on GitHub and create the job or not based on that.
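A hedged sketch of the decision described in these two commits (all method names here are hypothetical):

```ruby
def handle_pull_request_webhook(pull_request)
  mirrored_head = repository.commit(pull_request.source_branch)&.sha

  if mirrored_head == pull_request.source_sha
    # The mirror already has the PR's head: create the pipeline right away.
    create_external_pull_request_pipeline(pull_request)
  else
    # The branch hasn't been mirrored yet: remember the PR and create the
    # pipeline when the corresponding push webhook arrives.
    store_pull_request_info(pull_request)
  end
end
```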
- 30 Aug, 2019 1 commit
Manoj MJ authored
This change limits the number of emails for new access request notifications to the 10 most recently active owners/maintainers.
- 26 Aug, 2019 1 commit
Zeger-Jan van de Weg authored
The flag defaulted to true, so there's no change unless users turned it off. Given there's a lack of issues regarding object pools, this should be OK.
- 15 Aug, 2019 4 commits
Adam Hegyi authored
This change lays the foundation for customizable cycle analytics stages. The main reason for the change is to extract the event definitions into separate objects (start_event, end_event) so that they can be easily customized later on.
Adam Hegyi authored
This change lays the foundation for customizable cycle analytics stages. The main reason for the change is to extract the event definitions into separate objects (start_event, end_event) so that they can be easily customized later on.
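A hedged sketch of the start/end event split (class and method names are assumptions): a stage becomes just a pair of event objects, so custom stages can plug in different events later.

```ruby
class AnalyticsStage
  attr_reader :start_event, :end_event

  def initialize(start_event:, end_event:)
    @start_event = start_event
    @end_event = end_event
  end

  # Duration is computed the same way regardless of which events define the
  # stage; `timestamp_for` is a hypothetical event method.
  def duration_for(record)
    end_event.timestamp_for(record) - start_event.timestamp_for(record)
  end
end
```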
Brett Walker authored
- Adds UI to configure in group and project settings
- Removes notification configuration for users when disabled at group or project level
Brett Walker authored
- Adds UI to configure in group and project settings
- Removes notification configuration for users when disabled at group or project level
- 13 Aug, 2019 3 commits
Bob Van Landuyt authored
**Prevention of running 2 simultaneous updates**

Instead of using `RemoteMirror#update_status` and raising an error if an update is already running to prevent the same mirror from being updated at the same time, we now use `Gitlab::ExclusiveLease` for that. When we fail to obtain a lease in 3 tries, 30 seconds apart, we bail and reschedule. We'll reschedule faster for the protected branches. If the mirror already ran since it was scheduled, the job will be skipped.

**Error handling: Remote side**

When an update fails because of a `Gitlab::Git::CommandError`, we won't track this error in Sentry, as the cause could be on the remote side: for example, when branches have diverged. In this case, we'll try 3 times, scheduled 1 or 5 minutes apart. In between, the mirror is marked as "to_retry" and the error is visible to the user when they visit the settings page. After 3 tries we'll mark the mirror as failed and notify the user. We won't track this error in Sentry either, as it's not likely we can help it. The next event that triggers a refresh will start a new update.

**Error handling: our side**

If an unexpected error occurs, we mark the mirror as failed, but we'd still retry the job based on the regular Sidekiq retries with backoff, same as we used to. The error is reported in Sentry, since it's likely we need to do something about it.
Bob Van Landuyt authored
**Prevention of running 2 simultaneous updates**

Instead of using `RemoteMirror#update_status` and raising an error if an update is already running to prevent the same mirror from being updated at the same time, we now use `Gitlab::ExclusiveLease` for that. When we fail to obtain a lease in 3 tries, 30 seconds apart, we bail and reschedule. We'll reschedule faster for the protected branches. If the mirror already ran since it was scheduled, the job will be skipped.

**Error handling: Remote side**

When an update fails because of a `Gitlab::Git::CommandError`, we won't track this error in Sentry, as the cause could be on the remote side: for example, when branches have diverged. In this case, we'll try 3 times, scheduled 1 or 5 minutes apart. In between, the mirror is marked as "to_retry" and the error is visible to the user when they visit the settings page. After 3 tries we'll mark the mirror as failed and notify the user. We won't track this error in Sentry either, as it's not likely we can help it. The next event that triggers a refresh will start a new update.

**Error handling: our side**

If an unexpected error occurs, we mark the mirror as failed, but we'd still retry the job based on the regular Sidekiq retries with backoff, same as we used to. The error is reported in Sentry, since it's likely we need to do something about it.
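A hedged sketch of the lease-based guard described above (the key name, timeout, and retry-loop shape are assumptions; `Gitlab::ExclusiveLease` is the class the commit refers to):

```ruby
def update_with_lease(mirror)
  lease_key = "remote_mirror_update:#{mirror.id}"
  lease = Gitlab::ExclusiveLease.new(lease_key, timeout: 30.minutes)

  3.times do
    if (uuid = lease.try_obtain)
      begin
        update_mirror(mirror)                       # hypothetical update step
      ensure
        Gitlab::ExclusiveLease.cancel(lease_key, uuid)
      end
      return
    end

    sleep 30                                        # 30 seconds apart, as described
  end

  reschedule_update(mirror)                         # bail and reschedule after 3 failed tries
end
```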
Stan Hu authored
This commit reduces I/O load and memory utilization during PostReceive for the common case when no project hooks or services are set up. We saw a Gitaly N+1 issue in `CommitDelta` when many tags or branches are pushed. We can reduce this overhead in the common case because we observe that most new projects do not have any Web hooks or services, especially when they are first created.

Previously, `BaseHooksService` unconditionally iterated through the last 20 commits of each ref to build the `push_data` structure. The `push_data` structure was used in numerous places:

1. Building the push payload in `EventCreateService`
2. Creating a CI pipeline
3. Executing project Web or system hooks
4. Executing project services
5. As the return value of `BaseHooksService#execute`
6. `BranchHooksService#invalidated_file_types`

We only need to generate the full `push_data` for items 3, 4, and 6. Item 1: `EventCreateService` only needs the last commit and doesn't actually need the commit deltas. Item 2: `Ci::CreatePipelineService` only needed a subset of the parameters. Item 5: the return value of `BaseHooksService#execute` wasn't being used anywhere. Item 6: this is only used when pushing to the default branch, so if many tags are pushed we can save significant I/O here.

Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/65878
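A hedged illustration of the optimization (the builder and wrapper method names are assumptions based on this description): memoize the expensive payload and skip building it entirely when no hooks or services are configured.

```ruby
def push_data
  # Memoized: the 20-commits-per-ref payload is built at most once, and only
  # when a consumer below actually asks for it.
  @push_data ||= build_full_push_payload   # hypothetical expensive builder
end

def execute_hooks_and_services
  # Most new projects have neither hooks nor services configured, so this
  # returns early and the expensive payload is never built for them.
  return unless project.has_active_hooks? || project.has_active_services?

  project.execute_hooks(push_data)
  project.execute_services(push_data)
end
```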
- 08 Aug, 2019 2 commits
Hordur Freyr Yngvason authored
As decided in https://gitlab.com/gitlab-org/gitlab-ce/issues/53593
Hordur Freyr Yngvason authored
As decided in https://gitlab.com/gitlab-org/gitlab-ce/issues/53593
- 07 Aug, 2019 1 commit
Tiger Watson authored
Kubernetes deployments on new clusters will now have a separate namespace per project environment, instead of sharing a single namespace for the project. Behaviour of existing clusters is unchanged. All new functionality is controlled by the :kubernetes_namespace_per_environment feature flag, which is safe to enable/disable at any time.
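A hedged sketch of the flag-controlled behaviour (the namespace naming scheme shown is illustrative, not necessarily GitLab's exact format):

```ruby
def kubernetes_namespace_for(project, environment)
  if Feature.enabled?(:kubernetes_namespace_per_environment)
    # One namespace per project environment on new clusters.
    "#{project.path}-#{project.id}-#{environment.slug}"
  else
    # Previous behaviour: a single namespace shared by the whole project.
    "#{project.path}-#{project.id}"
  end
end
```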
- 06 Aug, 2019 1 commit
Matija Čupić authored
- 01 Aug, 2019 1 commit
Jason Goodman authored
Add spec for cases where `URI.join` does not work as expected
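One well-known case where `URI.join` surprises (shown here for context; the URLs are placeholders): without a trailing slash on the base, the last path segment is replaced rather than appended.

```ruby
require 'uri'

URI.join('https://gitlab.example.com/api', 'v4').to_s
# => "https://gitlab.example.com/v4"   (the "api" segment is dropped)

URI.join('https://gitlab.example.com/api/', 'v4').to_s
# => "https://gitlab.example.com/api/v4"
```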
- 31 Jul, 2019 1 commit
Tiger authored
All cluster resources are now created on demand when a deployment job starts.
- 25 Jul, 2019 4 commits
Matija Čupić authored
Matija Čupić authored
* Reword `Project#latest_successful_build_for` to `Project#latest_successful_build_for_ref`
* Reword `Ci::Pipeline#latest_successful_for` to `Ci::Pipeline#latest_successful_build_for_ref`
Heinrich Lee Yu authored
These are not required because MySQL is not supported anymore
Heinrich Lee Yu authored
These are not required because MySQL is not supported anymore
- 24 Jul, 2019 2 commits
Kamil Trzciński authored
- Fix `O(n)` complexity of `append_or_update_attribute`: we append objects to an array and re-save the project
- Remove the usage of `keys.include?` as it performs an `O(n)` search; use `.has_key?` instead
- Remove the usage of `.keys.first` as it performs a copy of all keys; use `.first.first` instead
Kamil Trzciński authored
- Fix `O(n)` complexity of `append_or_update_attribute`: we append objects to an array and re-save the project
- Remove the usage of `keys.include?` as it performs an `O(n)` search; use `.has_key?` instead
- Remove the usage of `.keys.first` as it performs a copy of all keys; use `.first.first` instead
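The lookup changes described above, illustrated on a plain Hash (the sample data is made up):

```ruby
attributes = { 'title' => 'demo', 'visibility_level' => 0 }

attributes.keys.include?('title')  # builds an Array of keys, then scans it: O(n)
attributes.has_key?('title')       # direct hash lookup: O(1)

attributes.keys.first              # copies every key just to read the first one
attributes.first.first             # first [key, value] pair, then its key
```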