  1. 18 Oct, 2019 1 commit
  2. 17 Oct, 2019 1 commit
  3. 16 Oct, 2019 1 commit
  4. 15 Oct, 2019 1 commit
  5. 11 Oct, 2019 1 commit
    • Add management_project_id to clusters · d0fb5fac
      Thong Kuah authored
      This association will be used for a cluster to indicate which project is
      used to manage it.
      
      Validate against a duplicate scope for the same project. If multiple
      clusters with the same scope point to the same management_project, it
      would be impossible to deterministically select a cluster.
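      A minimal sketch of what such a validation could look like (only
      management_project_id and the duplicate-scope rule come from the
      commit message; the validator shape is an assumption):

      ```ruby
      # Hypothetical duplicate-scope guard; the real validation may differ.
      class Cluster < ApplicationRecord
        belongs_to :management_project, class_name: '::Project', optional: true

        validate :no_duplicate_management_project_scope

        private

        def no_duplicate_management_project_scope
          return unless management_project_id

          duplicates = Cluster.where(management_project_id: management_project_id,
                                     environment_scope: environment_scope)
                              .where.not(id: id)

          if duplicates.exists?
            errors.add(:management_project_id,
                       'is already used by a cluster with the same scope')
          end
        end
      end
      ```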
  6. 10 Oct, 2019 2 commits
  7. 08 Oct, 2019 1 commit
  8. 07 Oct, 2019 2 commits
    • Fix closest_setting to properly support boolean types · 28fa95fe
      Erick Bajao authored
      Clean up specs and fix the logic for handling boolean-typed
      settings.
      
      Move the responsibility of fetching the closest namespace setting
      into the namespace model.
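      The boolean pitfall the fix addresses: `false` is a legitimate
      setting value, so an `a || b` fallback chain would incorrectly skip
      it. A sketch of a nil-aware lookup (hypothetical helper shape, not
      the actual implementation):

      ```ruby
      # Hypothetical nil-aware fallback: test `nil?` explicitly instead of
      # relying on `||`, so a stored `false` still wins over the default.
      def closest_setting(name)
        value = read_attribute(name)      # may legitimately be false
        return value unless value.nil?

        namespace&.closest_setting(name)  # walk up the hierarchy
      end
      ```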
    • Update artifact size authorization · 784612e2
      Erick Bajao authored
      Now that we have project- and namespace-level settings
      for max_artifacts_size, we need to update the authorization
      for it in some of the runner-related API endpoints.
      
      This also adds a new helper method `#closest_setting`
      which fetches the closest non-nil value for a given setting name.
      This will be useful for other settings like max_pages_size.
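      A rough sketch of how such a guard could use the helper in a
      runner-facing endpoint (hypothetical; `forbidden!` stands in for the
      API layer's rejection helper):

      ```ruby
      # Hypothetical artifact-size check using the closest non-nil setting.
      def authorize_artifact_size!(size_in_bytes)
        max_size_mb = project.closest_setting(:max_artifacts_size)
        return unless max_size_mb

        forbidden!('artifact exceeds the maximum allowed size') if size_in_bytes > max_size_mb.megabytes
      end
      ```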
  9. 03 Oct, 2019 2 commits
  10. 01 Oct, 2019 2 commits
  11. 25 Sep, 2019 1 commit
  12. 16 Sep, 2019 1 commit
    • Add latest pipelines link to api · 59db520f
      Alex Ives authored
      - Added latest_pipeline_for_ref method to project
      - Updated pipelines_controller to use latest_pipeline_for_ref method
      - Added api endpoint to pipelines api for latest pipeline
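      Only the method name comes from the commit message; a plausible
      shape of the lookup might be:

      ```ruby
      # Sketch: newest pipeline whose SHA is the current head of the ref.
      def latest_pipeline_for_ref(ref = default_branch)
        sha = commit(ref)&.sha
        return unless sha

        ci_pipelines.where(ref: ref, sha: sha).order(id: :desc).first
      end
      ```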
  13. 11 Sep, 2019 1 commit
  14. 05 Sep, 2019 2 commits
    • Detect if pipeline runs for a GitHub pull request · fd450ddc
      Fabio Pitino authored
      When using a mirror for CI/CD only, we register a pull_request
      webhook. When a pull_request webhook is received, if the
      source branch SHA matches the actual head of the branch in the
      repository, we immediately create a new pipeline for the
      external pull request. Otherwise we store the
      pull request info for when the push webhook is received.
      
      When using "only/except: external_pull_requests" we can detect
      whether the pipeline has an open pull request on GitHub and decide
      whether to create the job based on that.
      
      Feedback from review:

      - Split the big non-transactional migration into smaller ones
      - Refactored by moving methods to their respective models
      - Return 422 when the webhook has unsupported actions
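      The branching the message describes, sketched as a hypothetical
      handler (the handler and helper names beyond what the message states
      are assumptions):

      ```ruby
      # Sketch: create a pipeline only when the webhook SHA is still the
      # branch head; otherwise keep the PR info for the later push webhook.
      def handle_pull_request_webhook(params)
        pull_request = ExternalPullRequest.create_or_update_from_params(params)
        head_sha = project.repository.commit(pull_request.source_branch)&.sha

        return unless head_sha == pull_request.source_sha

        Ci::CreatePipelineService
          .new(project, current_user, ref: pull_request.source_branch)
          .execute(:external_pull_request_event, external_pull_request: pull_request)
      end
      ```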
    • CE port for pipelines for external pull requests · ca6a1f33
      Fabio Pitino authored
      Detect if pipeline runs for a GitHub pull request
      
      When using a mirror for CI/CD only, we register a pull_request
      webhook. When a pull_request webhook is received, if the
      source branch SHA matches the actual head of the branch in the
      repository, we immediately create a new pipeline for the
      external pull request. Otherwise we store the
      pull request info for when the push webhook is received.
      
      When using "only/except: external_pull_requests" we can detect
      whether the pipeline has an open pull request on GitHub and decide
      whether to create the job based on that.
  15. 30 Aug, 2019 1 commit
  16. 26 Aug, 2019 1 commit
  17. 15 Aug, 2019 4 commits
    • Migrations for Cycle Analytics backend · ca6cfde5
      Adam Hegyi authored
      This change lays the foundation for customizable cycle analytics stages.
      The main reason for the change is to extract the event definitions to
      separate objects (start_event, end_event) so that it could be easily
      customized later on.
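      A sketch of what the extracted event objects enable (illustrative
      only; names beyond start_event/end_event are assumptions):

      ```ruby
      # Hypothetical stage built from two event definitions; customizing a
      # stage later means swapping in different event objects.
      class AnalyticsStage
        attr_reader :start_event, :end_event

        def initialize(start_event:, end_event:)
          @start_event = start_event  # e.g. an "issue created" event object
          @end_event = end_event      # e.g. an "issue first mentioned" event
        end

        def duration_for(record)
          end_event.timestamp_for(record) - start_event.timestamp_for(record)
        end
      end
      ```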
    • Migrations for Cycle Analytics backend · 138964dd
      Adam Hegyi authored
      This change lays the foundation for customizable cycle analytics stages.
      The main reason for the change is to extract the event definitions to
      separate objects (start_event, end_event) so that it could be easily
      customized later on.
    • Allow disabling group/project email notifications · 3489dc3d
      Brett Walker authored
      - Adds UI to configure in group and project settings
      - Removes notification configuration for users when
      disabled at group or project level
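      The cascade the commit describes could look roughly like this
      (hypothetical helper; the flag and traversal names are assumptions):

      ```ruby
      # Hypothetical check: suppress emails when disabled on the project
      # itself or on any ancestor group.
      def emails_disabled?
        emails_disabled || group&.self_and_ancestors&.any?(&:emails_disabled)
      end
      ```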
    • Allow disabling group/project email notifications · 45c33bcb
      Brett Walker authored
      - Adds UI to configure in group and project settings
      - Removes notification configuration for users when
      disabled at group or project level
  18. 13 Aug, 2019 3 commits
    • Rework retry strategy for remote mirrors · 452bc36d
      Bob Van Landuyt authored
      **Prevention of running 2 simultaneous updates**
      
      Instead of using `RemoteMirror#update_status` and raising an error
      if it's already running, we now use `Gitlab::ExclusiveLease` to
      prevent the same mirror from being updated at the same time.
      
      When we fail to obtain a lease in 3 tries, 30 seconds apart, we bail
      and reschedule. We'll reschedule faster for the protected branches.
      
      If the mirror already ran since it was scheduled, the job will be
      skipped.
      
      **Error handling: Remote side**
      
      When an update fails because of a `Gitlab::Git::CommandError`, we
      won't track this error in Sentry, since the cause could be on the
      remote side: for example, when branches have diverged.
      
      In this case, we'll try 3 times scheduled 1 or 5 minutes apart.
      
      In between, the mirror is marked as "to_retry" and the error is
      visible to the user when they visit the settings page.
      
      After 3 tries we'll mark the mirror as failed and notify the user.
      
      We won't track this error in Sentry, as it's unlikely we can do
      anything about it on our side.

      The next event for the mirror will trigger a new refresh.
      
      **Error handling: our side**
      
      If an unexpected error occurs, we mark the mirror as failed, but
      we'd still retry the job based on the regular Sidekiq retries with
      backoff, same as we used to.

      The error would be reported in Sentry, since it's likely we need to
      do something about it.
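      A sketch of the lease-based guard, assuming the
      `Gitlab::ExclusiveLease` API (`try_obtain` returning a UUID,
      `cancel` releasing it); the retry counts mirror the message,
      everything else is illustrative:

      ```ruby
      # Sketch: serialize mirror updates with an exclusive lease; bail and
      # reschedule after 3 failed attempts, 30 seconds apart.
      LEASE_TIMEOUT = 30.seconds

      def update_mirror(mirror)
        key = "remote_mirror_update:#{mirror.id}"
        lease = Gitlab::ExclusiveLease.new(key, timeout: LEASE_TIMEOUT)

        3.times do
          if (uuid = lease.try_obtain)
            begin
              mirror.update_repository   # hypothetical update call
            ensure
              Gitlab::ExclusiveLease.cancel(key, uuid)
            end
            return
          end

          sleep LEASE_TIMEOUT
        end

        reschedule(mirror)  # hypothetical: try again later
      end
      ```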
    • Rework retry strategy for remote mirrors · 23e7a876
      Bob Van Landuyt authored
      **Prevention of running 2 simultaneous updates**
      
      Instead of using `RemoteMirror#update_status` and raising an error
      if it's already running, we now use `Gitlab::ExclusiveLease` to
      prevent the same mirror from being updated at the same time.
      
      When we fail to obtain a lease in 3 tries, 30 seconds apart, we bail
      and reschedule. We'll reschedule faster for the protected branches.
      
      If the mirror already ran since it was scheduled, the job will be
      skipped.
      
      **Error handling: Remote side**
      
      When an update fails because of a `Gitlab::Git::CommandError`, we
      won't track this error in Sentry, since the cause could be on the
      remote side: for example, when branches have diverged.
      
      In this case, we'll try 3 times scheduled 1 or 5 minutes apart.
      
      In between, the mirror is marked as "to_retry" and the error is
      visible to the user when they visit the settings page.
      
      After 3 tries we'll mark the mirror as failed and notify the user.
      
      We won't track this error in Sentry, as it's unlikely we can do
      anything about it on our side.

      The next event for the mirror will trigger a new refresh.
      
      **Error handling: our side**
      
      If an unexpected error occurs, we mark the mirror as failed, but
      we'd still retry the job based on the regular Sidekiq retries with
      backoff, same as we used to.

      The error would be reported in Sentry, since it's likely we need to
      do something about it.
    • Reduce Gitaly calls in PostReceive · 4e2bb4e5
      Stan Hu authored
      This commit reduces I/O load and memory utilization during PostReceive
      for the common case when no project hooks or services are set up.
      
      We saw a Gitaly N+1 issue in `CommitDelta` when many tags or branches
      are pushed. We can reduce this overhead in the common case because we
      observe that most new projects do not have any Web hooks or services,
      especially when they are first created. Previously, `BaseHooksService`
      unconditionally iterated through the last 20 commits of each ref to
      build the `push_data` structure. The `push_data` structure was used in
      numerous places:
      
      1. Building the push payload in `EventCreateService`
      2. Creating a CI pipeline
      3. Executing project Web or system hooks
      4. Executing project services
      5. As the return value of `BaseHooksService#execute`
      6. `BranchHooksService#invalidated_file_types`
      
      We only need to generate the full `push_data` for items 3, 4, and 6.
      
      Item 1: `EventCreateService` only needs the last commit and doesn't
      actually need the commit deltas.
      
      Item 2: `Ci::CreatePipelineService` only needed a subset of the
      parameters.
      
      Item 5: The return value of `BaseHooksService#execute` also wasn't being
      used anywhere.
      
      Item 6: This is only used when pushing to the default branch, so if
      many tags are pushed we can save significant I/O here.
      
      Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/65878
      
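      The shape of the optimisation, sketched (hypothetical method;
      `has_active_hooks?`/`has_active_services?` are assumed project
      predicates):

      ```ruby
      # Sketch: skip the expensive push_data construction entirely when no
      # hook or service would consume it.
      def execute_project_hooks
        return unless project.has_active_hooks? || project.has_active_services?

        push_data = build_push_data  # walks up to 20 commits per ref via Gitaly
        project.execute_hooks(push_data, :push_hooks)
        project.execute_services(push_data, :push_hooks)
      end
      ```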
  19. 08 Aug, 2019 2 commits
  20. 07 Aug, 2019 1 commit
    • Use separate Kubernetes namespaces per environment · 36a01a88
      Tiger Watson authored
      Kubernetes deployments on new clusters will now have
      a separate namespace per project environment, instead
      of sharing a single namespace for the project.
      
      Behaviour of existing clusters is unchanged.
      
      All new functionality is controlled by the
      :kubernetes_namespace_per_environment feature flag,
      which is safe to enable/disable at any time.
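      Roughly, the behaviour gated by the flag (hypothetical naming
      scheme; only the flag name comes from the message):

      ```ruby
      # Sketch: one namespace per environment on new clusters when the
      # feature flag is enabled, else the legacy per-project namespace.
      def kubernetes_namespace_for(project, environment)
        if Feature.enabled?(:kubernetes_namespace_per_environment, project)
          "#{project.path}-#{project.id}-#{environment.slug}"
        else
          "#{project.path}-#{project.id}"
        end
      end
      ```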
  21. 06 Aug, 2019 1 commit
  22. 01 Aug, 2019 1 commit
  23. 31 Jul, 2019 1 commit
  24. 25 Jul, 2019 4 commits
  25. 24 Jul, 2019 2 commits
    • Optimise import performance · 8bd18dff
      Kamil Trzciński authored
      - Fix `O(n)` complexity of `append_or_update_attribute`:
        we append objects to an array and re-save the project
      - Remove the usage of `keys.include?` as it performs an `O(n)`
        search; use `.has_key?` instead
      - Remove the usage of `.keys.first` as it performs a copy
        of all keys; use `.first.first` instead
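      The hash-access changes, illustrated on a toy hash:

      ```ruby
      relation_hash = { 'labels' => [], 'issues' => [] }

      # Before: copies every key into an array, then scans it linearly
      relation_hash.keys.include?('labels')  # O(n)
      relation_hash.keys.first               # copies all keys first

      # After: constant-time lookup, no intermediate array
      relation_hash.has_key?('labels')       # O(1)
      relation_hash.first.first              # => 'labels'
      ```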
    • Optimise import performance · 8d1e97fc
      Kamil Trzciński authored
      - Fix `O(n)` complexity of `append_or_update_attribute`:
        we append objects to an array and re-save the project
      - Remove the usage of `keys.include?` as it performs an `O(n)`
        search; use `.has_key?` instead
      - Remove the usage of `.keys.first` as it performs a copy
        of all keys; use `.first.first` instead