  1. 28 Mar, 2018 1 commit
  2. 20 Mar, 2018 1 commit
  3. 07 Mar, 2018 3 commits
  4. 01 Mar, 2018 1 commit
  5. 14 Feb, 2018 2 commits
      Simplify license generator error handling · 5b3b2b82
      Stan Hu authored
      Fix Error 500s loading repositories with no master branch · 35b3a0b9
      Stan Hu authored
      We removed the exception handling for Rugged errors in !16770, which
      revealed that the licensee gem attempts to retrieve a license file
      via Rugged in `refs/heads/master` by default. If that branch
      did not exist, a Rugged::ReferenceError would be thrown.
      
      There were two issues:
      
      1. Not every project uses `master` as the default branch. This
      change uses the head commit to identify the license.
      
      2. Removing the exception handling caused repositories to fail to load.
      We can safely catch and ignore any Rugged error, because it simply means
      we weren't able to load a license file.
      
      Closes #43268
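
      A minimal sketch of that pattern, assuming a `rugged` handle
      (Rugged::Repository) and the licensee gem's `GitProject` with a
      `revision:` option; the method name and structure are illustrative,
      not the actual GitLab code:

      ```ruby
      # Illustrative sketch, not GitLab's implementation.
      # `rugged` is assumed to be a Rugged::Repository for the project.
      def license_key
        return nil if rugged.head_unborn? # repository has no commits yet

        # Detect the license from the current HEAD commit instead of
        # assuming that refs/heads/master exists.
        Licensee::Projects::GitProject
          .new(rugged.path, revision: rugged.head.target_id)
          .license&.key
      rescue Rugged::Error, Rugged::ReferenceError
        # Any Rugged failure simply means we could not load a license file.
        nil
      end
      ```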
  6. 07 Feb, 2018 1 commit
  7. 02 Feb, 2018 1 commit
  8. 01 Feb, 2018 1 commit
      Client changes for Tag,BranchNamesContainingCommit · 0a47d192
      Zeger-Jan van de Weg authored
      As part of gitlab-org/gitaly#884, this commit contains the client
      implementation for both TagNamesContainingCommit and
      BranchNamesContainingCommit. The interface in the Repository model stays
      the same, but the implementation on the server side, i.e. Gitaly, uses
      `for-each-ref`, as opposed to `branch` or `tag`, neither of which is a
      plumbing command. The result stays the same.
      
      On the server side, we have the opportunity to limit the number of names
      to return. However, this is not supported on the frontend yet; my
      proposal to use this ability is gitlab-org/gitlab-ce#42581. For now, this
      ability is not used, as that would put more behaviour behind a feature
      flag, which might lead to unexpected changes on page refresh, for example.
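
      To illustrate the plumbing approach mentioned above, here is a rough
      sketch, not the actual Gitaly code, of listing branch names that contain
      a commit via `git for-each-ref`; the optional `limit` mirrors the
      server-side cap discussed in the message:

      ```ruby
      require 'open3'

      # Illustrative sketch of the server-side idea: use the for-each-ref
      # plumbing command instead of the branch/tag porcelain commands.
      def branch_names_containing_commit(repo_path, sha, limit: 0)
        args = %W[git -C #{repo_path} for-each-ref
                  --format=%(refname:strip=2) --contains #{sha}]
        args << "--count=#{limit}" if limit > 0 # optional cap on returned names
        args << 'refs/heads/'

        out, status = Open3.capture2(*args)
        status.success? ? out.split("\n") : []
      end
      ```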
  9. 30 Jan, 2018 1 commit
  10. 29 Jan, 2018 1 commit
  11. 25 Jan, 2018 1 commit
  12. 23 Jan, 2018 1 commit
  13. 16 Jan, 2018 2 commits
  14. 15 Jan, 2018 3 commits
  15. 11 Jan, 2018 1 commit
  16. 10 Jan, 2018 1 commit
  17. 05 Jan, 2018 2 commits
  18. 20 Dec, 2017 1 commit
  19. 19 Dec, 2017 1 commit
      Load commit in batches for pipelines#index · c6edae38
      Zeger-Jan van de Weg authored
      Uses `list_commits_by_oid` on the CommitService to request the commits
      needed for pipelines. These commits are needed to display the user who
      created the commit and the commit title.

      This includes fixes for failing tests that depended on the commit
      being `nil`. Now that the commits are batch loaded, that no longer
      happens and each commit is an instance of BatchLoader.
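
      A rough sketch of that batch-loading pattern using the batch-loader gem;
      `find_commits_by_oids` is a placeholder standing in for a bulk lookup
      such as the CommitService's `list_commits_by_oid`, not a real method:

      ```ruby
      require 'batch_loader'

      # Illustrative sketch: every caller immediately gets a BatchLoader
      # proxy, and the single batched fetch happens lazily on first access.
      def lazy_commit(oid)
        BatchLoader.for(oid).batch do |oids, loader|
          # find_commits_by_oids is a placeholder for the bulk lookup.
          find_commits_by_oids(oids).each { |commit| loader.call(commit.id, commit) }
        end
      end

      commits = shas.map { |sha| lazy_commit(sha) }
      commits.first.title # triggers one bulk fetch for all requested oids
      ```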
  20. 14 Dec, 2017 1 commit
  21. 13 Dec, 2017 2 commits
  22. 12 Dec, 2017 2 commits
  23. 08 Dec, 2017 1 commit
      Move the circuitbreaker check out in a separate process · f1ae1e39
      Bob Van Landuyt authored
      Moving the check out of the general requests makes sure we don't slow
      down regular requests.

      To keep the process performing these checks small, the check is still
      performed inside a Unicorn worker, but it is called from a process
      running on the same server.

      Because the checks are now done outside normal requests, we can use a
      simpler failure strategy:
      
      The check is now performed in the background every
      `circuitbreaker_check_interval`. Failures are logged in redis. The
      failures are reset when the check succeeds. Per check we will try
      `circuitbreaker_access_retries` times within
      `circuitbreaker_storage_timeout` seconds.
      
      When the number of failures exceeds
      `circuitbreaker_failure_count_threshold`, we will block access to the
      storage.
      
      After `failure_reset_time` of no checks, we will clear the stored
      failures. This could happen when the process that performs the checks
      is not running.
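
      A simplified sketch of that failure strategy; the setting names come
      from the message above, while the class, the Redis keys, and the
      `storage_reachable_within?` helper are purely illustrative:

      ```ruby
      # Illustrative sketch only, not GitLab's circuitbreaker implementation.
      class StorageCheck
        def initialize(storage, redis, settings)
          @storage, @redis, @settings = storage, redis, settings
        end

        # Runs in a separate background process, outside the request cycle.
        def run!
          loop do
            accessible? ? reset_failures : record_failure
            sleep @settings[:circuitbreaker_check_interval]
          end
        end

        # Consulted by web workers to decide whether to block the storage.
        def circuit_broken?
          failures >= @settings[:circuitbreaker_failure_count_threshold]
        end

        private

        def accessible?
          # Retry a few times, each attempt bounded by the storage timeout.
          @settings[:circuitbreaker_access_retries].times do
            # storage_reachable_within? is a hypothetical probe of the storage.
            return true if storage_reachable_within?(@settings[:circuitbreaker_storage_timeout])
          end
          false
        end

        def failures
          @redis.get("storage:#{@storage}:failures").to_i
        end

        def record_failure
          @redis.incr("storage:#{@storage}:failures")
        end

        def reset_failures
          @redis.del("storage:#{@storage}:failures")
        end
      end
      ```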
  24. 07 Dec, 2017 1 commit
  25. 05 Dec, 2017 1 commit
  26. 04 Dec, 2017 1 commit
  27. 23 Nov, 2017 1 commit
  28. 21 Nov, 2017 1 commit
  29. 03 Nov, 2017 1 commit
  30. 27 Oct, 2017 2 commits
      Fetch the merged branches at once · 57d7ed05
      Lin Jen-Shin (godfat) authored
      Cache commits on the repository model · 3411fef1
      Zeger-Jan van de Weg authored
      Currently, when requesting a commit from the Repository model, the result
      is not cached. This means we're fetching the same commit by oid multiple
      times during the same request. To prevent us from doing this, we now
      cache results. Caching is done only based on the object id (aka SHA).

      Given that we cache on the Repository model, results are scoped to the
      associated project, even though the chance of two repositories having the
      same oids for different commits is small.
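
      A minimal sketch of per-request commit caching keyed by oid, as described
      above; `fetch_commit` is a placeholder for the actual lookup and this is
      not the real Repository model:

      ```ruby
      # Illustrative sketch only.
      class Repository
        # Return a cached commit for this oid, fetching it at most once
        # per Repository instance (and therefore per associated project).
        def commit(oid)
          commits_cache[oid] ||= fetch_commit(oid) # fetch_commit is a placeholder
        end

        private

        def commits_cache
          @commits_cache ||= {}
        end
      end
      ```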