Commit 1218f7b4 authored by GitLab Bot

Automatic merge of gitlab-org/gitlab master

parents da34c8ed 9e44919c
......@@ -123,6 +123,8 @@ module Integrations
object_kind = 'job' if object_kind == 'build'
return unless supported_events.include?(object_kind)
data = data.with_retried_builds if data.respond_to?(:with_retried_builds)
execute_web_hook!(data, "#{object_kind} hook")
end
......
......@@ -419,9 +419,13 @@ configurations. Local configurations in the `.gitlab-ci.yml` file override inclu
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/284883) in GitLab 13.8.
> - [Feature flag removed](https://gitlab.com/gitlab-org/gitlab/-/issues/294294) in GitLab 13.9.
> - [Custom variable support added](https://gitlab.com/gitlab-org/gitlab/-/issues/219065) in GitLab 14.2.
You can [use some predefined variables in `include` sections](../variables/where_variables_can_be_used.md#gitlab-ciyml-file)
in your `.gitlab-ci.yml` file:
In `include` sections in your `.gitlab-ci.yml` file, you can use:
- Project [predefined variables](../variables/predefined_variables.md).
- [Custom instance, group, and project variables](../variables/index.md#custom-cicd-variables)
in GitLab 14.2 and later.
```yaml
include:
......
......@@ -8,7 +8,7 @@ info: "See the Technical Writers assigned to Development Guidelines: https://abo
# Background migrations
Background migrations should be used to perform data migrations whenever a
migration exceeds [the time limits in our guidelines](database_review.md#timing-guidelines-for-migrations). For example, you can use background
migration exceeds [the time limits in our guidelines](migration_style_guide.md#how-long-a-migration-should-take). For example, you can use background
migrations to migrate data that's stored in a single JSON column
to a separate table instead.
......@@ -18,7 +18,7 @@ migrations automatically reschedule themselves for a later point in time.
## When To Use Background Migrations
You should use a background migration when you migrate _data_ in tables that have
so many rows that the process would exceed [the time limits in our guidelines](database_review.md#timing-guidelines-for-migrations) if performed using a regular Rails migration.
so many rows that the process would exceed [the time limits in our guidelines](migration_style_guide.md#how-long-a-migration-should-take) if performed using a regular Rails migration.
- Background migrations should be used when migrating data in [high-traffic tables](migration_style_guide.md#high-traffic-tables).
- Background migrations may also be used when executing numerous single-row queries
......
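To make the guideline above concrete: a background migration is plain application code under `lib/gitlab/background_migration/`, enqueued in batches from a post-deployment migration. The sketch below is illustrative only. The class, table, column, and batch size are invented, and the scheduling helper is quoted from memory of `Gitlab::Database::MigrationHelpers`, so treat its exact signature as an assumption.

```ruby
# Illustrative sketch only; class, table, and column names are hypothetical.

# lib/gitlab/background_migration/backfill_example_column.rb
module Gitlab
  module BackgroundMigration
    class BackfillExampleColumn
      # Each Sidekiq job processes one bounded ID range, so every query stays
      # under the 1 second guideline even with cold caches.
      def perform(start_id, end_id)
        ActiveRecord::Base.connection.execute(<<~SQL)
          UPDATE examples
          SET new_column = old_column
          WHERE id BETWEEN #{Integer(start_id)} AND #{Integer(end_id)}
            AND new_column IS NULL
        SQL
      end
    end
  end
end

# db/post_migrate/20210801000000_schedule_backfill_example_column.rb
class ScheduleBackfillExampleColumn < ActiveRecord::Migration[6.1]
  include Gitlab::Database::MigrationHelpers

  disable_ddl_transaction!

  # Local model so the migration does not depend on application code.
  class Example < ActiveRecord::Base
    include EachBatch

    self.table_name = 'examples'
  end

  def up
    # Enqueue one job per 10,000-row range, spaced two minutes apart
    # (helper name and arguments assumed from Gitlab::Database::MigrationHelpers).
    queue_background_migration_jobs_by_range_at_intervals(
      Example,
      'BackfillExampleColumn',
      2.minutes,
      batch_size: 10_000
    )
  end

  def down
    # No-op: the backfilled data is left in place.
  end
end
```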
......@@ -216,7 +216,7 @@ test its execution using `CREATE INDEX CONCURRENTLY` in the `#database-lab` Slac
it's suggested to treat background migrations as post migrations:
place them in `db/post_migrate` instead of `db/migrate`. Keep in mind
that post migrations are executed post-deployment in production.
- Check [timing guidelines for migrations](#timing-guidelines-for-migrations)
- Check [timing guidelines for migrations](migration_style_guide.md#how-long-a-migration-should-take)
- Check migrations are reversible and implement a `#down` method
- Check data migrations:
- Establish a time estimate for execution on GitLab.com.
......@@ -234,18 +234,3 @@ test its execution using `CREATE INDEX CONCURRENTLY` in the `#database-lab` Slac
to queries (changing the query, schema or adding indexes and similar)
- The general guideline is for queries to come in below [100ms execution time](query_performance.md#timing-guidelines-for-queries)
- Avoid N+1 problems and minimize the [query count](merge_request_performance_guidelines.md#query-counts).
### Timing guidelines for migrations
In general, migrations for a single deploy shouldn't take longer than
1 hour for GitLab.com. The following guidelines are not hard rules; they were
estimated to keep migration timing to a minimum.
NOTE:
Keep in mind that all runtimes should be measured against GitLab.com.
| Migration Type | Recommended Execution Time | Notes |
|----|----|---|
| Regular migrations on `db/migrate` | `3 minutes` | A valid exception is index creation, as this can take a long time. |
| Post migrations on `db/post_migrate` | `10 minutes` | |
| Background migrations | --- | Since these are suitable for larger tables, it's not possible to set a precise timing guideline; however, any single query must stay below [`1 second` execution time](query_performance.md#timing-guidelines-for-queries) with cold caches. |
......@@ -31,11 +31,66 @@ Please don't depend on GitLab-specific code since it can change in future
versions. If needed, copy-paste GitLab code into the migration to make it forward
compatible.
For GitLab.com, please take into consideration that regular migrations (under `db/migrate`)
are run before [Canary is deployed](https://gitlab.com/gitlab-com/gl-infra/readiness/-/tree/master/library/canary/#configuration-and-deployment),
and [post-deployment migrations](post_deployment_migrations.md) (`db/post_migrate`) are run after the deployment to production has finished.
## Choose an appropriate migration type
The first step before adding a new migration should be to decide which type is most appropriate.
There are currently three kinds of migrations you can create, depending on the kind of
work the migration needs to perform and how long it takes to complete:
1. [**Regular schema migrations.**](#create-a-regular-schema-migration) These are traditional Rails migrations in `db/migrate` that run _before_ new application code is deployed
(for GitLab.com before [Canary is deployed](https://gitlab.com/gitlab-com/gl-infra/readiness/-/tree/master/library/canary/#configuration-and-deployment)).
This means that they should be relatively fast, no more than a few minutes, so as not to unnecessarily delay a deployment.
One exception is a migration that takes longer but is absolutely critical for the application to operate correctly.
For example, you might have indices that enforce unique tuples, or that are needed for query performance in critical parts of the application. In cases where the migration would be unacceptably slow, however, a better option might be to guard the feature with a [feature flag](feature_flags/index.md)
and perform a post-deployment migration instead. The feature can then be turned on after the migration finishes.
1. [**Post-deployment migrations.**](post_deployment_migrations.md) These are Rails migrations in `db/post_migrate` and
run _after_ new application code has been deployed (for GitLab.com after the production deployment has finished).
They can be used for schema changes that aren't critical for the application to operate, or data migrations that take at most a few minutes.
Common examples for schema changes that should run post-deploy include:
- Clean-ups, like removing unused columns.
- Adding non-critical indices on high-traffic tables.
- Adding non-critical indices that take a long time to create.
1. [**Background migrations.**](background_migrations.md) These aren't regular Rails migrations, but application code that is
executed via Sidekiq jobs, although a post-deployment migration is used to schedule them. Use them only for data migrations that
exceed the timing guidelines for post-deploy migrations. Background migrations should _not_ change the schema.
Use the following diagram to guide your decision, but keep in mind that it is just a tool, and
the final decision always depends on the specific changes being made:
```mermaid
graph LR
A{Schema<br/>changed?}
A -->|Yes| C{Critical to<br/>speed or<br/>behavior?}
A -->|No| D{Is it fast?}
C -->|Yes| H{Is it fast?}
C -->|No| F[Post-deploy migration]
H -->|Yes| E[Regular migration]
H -->|No| I[Post-deploy migration<br/>+ feature flag]
D -->|Yes| F[Post-deploy migration]
D -->|No| G[Background migration]
```
### How long a migration should take
In general, all migrations for a single deploy shouldn't take longer than
1 hour for GitLab.com. The following guidelines are not hard rules; they were
estimated to keep migration duration to a minimum.
NOTE:
Keep in mind that all durations should be measured against GitLab.com.
| Migration Type | Recommended Duration | Notes |
|----|----|---|
| Regular migrations | `<= 3 minutes` | A valid exception is a change without which application functionality or performance would be severely degraded and which cannot be delayed. |
| Post-deployment migrations | `<= 10 minutes` | A valid exception is a schema change, since schema changes must not happen in background migrations. |
| Background migrations | `> 10 minutes` | Since these are suitable for larger tables, it's not possible to set a precise timing guideline; however, any single query must stay below [`1 second` execution time](query_performance.md#timing-guidelines-for-queries) with cold caches. |
## Create database migrations
## Create a regular schema migration
To create a migration you can use the following Rails generator:
......
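As a rough companion to the decision guide above, the first two migration types could look like the following. These are hypothetical examples (table, column, and index names are invented); `add_concurrent_index` and `remove_concurrent_index_by_name` are the concurrent-index helpers provided by `Gitlab::Database::MigrationHelpers`.

```ruby
# Hypothetical examples; table, column, and index names are invented.

# db/migrate/20210801000001_add_example_setting_to_projects.rb
# Regular schema migration: runs before the new code is deployed, so it has to
# finish within a few minutes.
class AddExampleSettingToProjects < ActiveRecord::Migration[6.1]
  def change
    add_column :projects, :example_setting, :boolean, default: false, null: false
  end
end

# db/post_migrate/20210801000002_add_index_on_projects_example_setting.rb
# Post-deployment migration: a non-critical index on a high-traffic table is
# created concurrently, after the production deployment has finished.
class AddIndexOnProjectsExampleSetting < ActiveRecord::Migration[6.1]
  include Gitlab::Database::MigrationHelpers

  disable_ddl_transaction!

  INDEX_NAME = 'index_projects_on_example_setting'

  def up
    add_concurrent_index :projects, :example_setting, name: INDEX_NAME
  end

  def down
    remove_concurrent_index_by_name :projects, INDEX_NAME
  end
end
```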
......@@ -18,7 +18,7 @@ When you are optimizing your SQL queries, there are two dimensions to pay attent
| Query Type | Maximum Query Time | Notes |
|----|----|---|
| General queries | `100ms` | This is not a hard limit, but if a query is getting above it, it is important to spend time understanding why it can or cannot be optimized. |
| Queries in a migration | `100ms` | This is different than the total [migration time](database_review.md#timing-guidelines-for-migrations). |
| Queries in a migration | `100ms` | This is different than the total [migration time](migration_style_guide.md#how-long-a-migration-should-take). |
| Concurrent operations in a migration | `5min` | Concurrent operations do not block the database, but they block the GitLab update. This includes operations such as `add_concurrent_index` and `add_concurrent_foreign_key`. |
| Background migrations | `1s` | |
| Service Ping | `1s` | See the [Service Ping docs](service_ping/index.md#developing-and-testing-service-ping) for more details. |
......
......@@ -56,7 +56,7 @@ RSpec.describe Gitlab::Ci::Pipeline::Chain::Validate::External do
it 'does not fire N+1 SQL queries' do
stub_request(:post, validation_service_url)
expect { step.perform! }.not_to exceed_query_limit(4)
expect { step.perform! }.not_to exceed_query_limit(16)
end
context 'with a project in a subgroup' do
......
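For reference, the `exceed_query_limit` matcher used in this spec can also take an `ActiveRecord::QueryRecorder` control instead of a hard-coded number, which tends to age better than magic limits such as `4` or `16`. A sketch of that pattern; `request_pipeline_payload` is a hypothetical stand-in for whatever work the spec exercises:

```ruby
it 'does not add N+1 queries per build' do
  # Record a baseline, then add more records and assert the count stays flat.
  control = ActiveRecord::QueryRecorder.new { request_pipeline_payload }

  create_list(:ci_build, 3, pipeline: pipeline)

  expect { request_pipeline_payload }.not_to exceed_query_limit(control)
end
```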
......@@ -114,7 +114,22 @@ module Gitlab
sha: sha || find_sha(project),
user: user,
parent_pipeline: parent_pipeline,
variables: project&.predefined_variables&.to_runner_variables)
variables: build_variables(project: project, ref: sha))
end
def build_variables(project:, ref:)
Gitlab::Ci::Variables::Collection.new.tap do |variables|
break variables unless project
# The order of the next 4 lines is important, as the precedence of CI variables
# is defined globally within GitLab.
#
# See the docs for more detail: https://docs.gitlab.com/ee/ci/variables/#cicd-variable-precedence
variables.concat(project.predefined_variables)
variables.concat(project.ci_instance_variables_for(ref: ref))
variables.concat(project.group.ci_variables_for(ref, project)) if project.group
variables.concat(project.ci_variables_for(ref: ref))
end
end
def track_and_raise_for_dev_exception(error)
......
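Because `Gitlab::Ci::Variables::Collection#concat` appends in place, the order of the four `concat` calls above determines which value wins when the collection is later reduced to a hash: the project-level variable added last overrides the group-level one, which is what the new `FILENAME` spec further down asserts. A minimal sketch of that behavior (simplified; the hash shapes here are assumptions):

```ruby
variables = Gitlab::Ci::Variables::Collection.new

variables.concat([{ key: 'FILENAME', value: '.gitlab-ci.yml' }])       # group-level value
variables.concat([{ key: 'FILENAME', value: '.another-ci-file.yml' }]) # project-level value, added last

variables.to_hash[:FILENAME] # => ".another-ci-file.yml"
```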
......@@ -2,21 +2,35 @@
module Gitlab
module DataBuilder
module Pipeline
extend self
# Some callers want to include retried builds, so we wrap the payload hash
# in a SimpleDelegator with additional methods.
class Pipeline < SimpleDelegator
def self.build(pipeline)
new(pipeline)
end
def build(pipeline)
{
def initialize(pipeline)
@pipeline = pipeline
super(
object_kind: 'pipeline',
object_attributes: hook_attrs(pipeline),
merge_request: pipeline.merge_request && merge_request_attrs(pipeline.merge_request),
user: pipeline.user.try(:hook_attrs),
project: pipeline.project.hook_attrs(backward: false),
commit: pipeline.commit.try(:hook_attrs),
builds: pipeline.builds.latest.map(&method(:build_hook_attrs))
}
builds: Gitlab::Lazy.new { pipeline.builds.latest.map(&method(:build_hook_attrs)) }
)
end
def with_retried_builds
merge(
builds: Gitlab::Lazy.new { @pipeline.builds.map(&method(:build_hook_attrs)) }
)
end
private
def hook_attrs(pipeline)
{
id: pipeline.id,
......
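The wrapper above still behaves like the original payload hash, so existing consumers keep working, while callers that want retried builds (such as the Datadog integration change at the top of this commit) can opt in. A usage sketch, based on the specs in this commit and meant as an illustration rather than a reference:

```ruby
data = Gitlab::DataBuilder::Pipeline.build(pipeline)

data[:object_kind]                 # => "pipeline", delegated to the underlying hash
data[:builds]                      # latest builds only; Gitlab::Lazy defers the query until first use
data.with_retried_builds[:builds]  # includes retried builds as well

# Callers that may receive either a plain Hash or this wrapper can feature-test,
# as the Integrations hook code above does:
payload = data.respond_to?(:with_retried_builds) ? data.with_retried_builds : data
payload.to_json
```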
......@@ -286,7 +286,9 @@ RSpec.describe Gitlab::Ci::Config do
end
context "when using 'include' directive" do
let(:project) { create(:project, :repository) }
let(:group) { create(:group) }
let(:project) { create(:project, :repository, group: group) }
let(:main_project) { create(:project, :repository, :public, group: group) }
let(:remote_location) { 'https://gitlab.com/gitlab-org/gitlab-foss/blob/1234/.gitlab-ci-1.yml' }
let(:local_location) { 'spec/fixtures/gitlab/ci/external_files/.gitlab-ci-template-1.yml' }
......@@ -317,7 +319,9 @@ RSpec.describe Gitlab::Ci::Config do
include:
- #{local_location}
- #{remote_location}
- project: '$MAIN_PROJECT'
ref: '$REF'
file: '$FILENAME'
image: ruby:2.7
HEREDOC
end
......@@ -331,6 +335,26 @@ RSpec.describe Gitlab::Ci::Config do
allow(project.repository)
.to receive(:blob_data_at).and_return(local_file_content)
main_project.repository.create_file(
main_project.creator,
'.gitlab-ci.yml',
local_file_content,
message: 'Add README.md',
branch_name: 'master'
)
main_project.repository.create_file(
main_project.creator,
'.another-ci-file.yml',
local_file_content,
message: 'Add README.md',
branch_name: 'master'
)
create(:ci_variable, project: project, key: "REF", value: "HEAD")
create(:ci_group_variable, group: group, key: "FILENAME", value: ".gitlab-ci.yml")
create(:ci_instance_variable, key: 'MAIN_PROJECT', value: main_project.full_path)
end
context "when gitlab_ci_yml has valid 'include' defined" do
......@@ -344,6 +368,38 @@ RSpec.describe Gitlab::Ci::Config do
expect(config.to_hash).to eq(composed_hash)
end
context 'handling variables' do
it 'contains all project variables' do
ref = config.context.variables.find { |v| v[:key] == 'REF' }
expect(ref[:value]).to eq("HEAD")
end
it 'contains all group variables' do
filename = config.context.variables.find { |v| v[:key] == 'FILENAME' }
expect(filename[:value]).to eq(".gitlab-ci.yml")
end
it 'contains all instance variables' do
project = config.context.variables.find { |v| v[:key] == 'MAIN_PROJECT' }
expect(project[:value]).to eq(main_project.full_path)
end
context 'overriding a group variable at project level' do
before do
create(:ci_variable, project: project, key: "FILENAME", value: ".another-ci-file.yml")
end
it 'successfully overrides' do
filename = config.context.variables.to_hash[:FILENAME]
expect(filename).to eq('.another-ci-file.yml')
end
end
end
end
context "when gitlab_ci.yml has invalid 'include' defined" do
......
......@@ -107,6 +107,7 @@ RSpec.describe Gitlab::Ci::Pipeline::Chain::Populate do
context 'when ref is protected' do
before do
allow(project).to receive(:protected_for?).with('master').and_return(true)
allow(project).to receive(:protected_for?).with('b83d6e391c22777fca1ed3012fce84f633d7fed0').and_return(true)
allow(project).to receive(:protected_for?).with('refs/heads/master').and_return(true)
dependencies.map(&:perform!)
......
......@@ -3,10 +3,10 @@
require 'spec_helper'
RSpec.describe Gitlab::DataBuilder::Pipeline do
let(:user) { create(:user) }
let(:project) { create(:project, :repository) }
let_it_be(:user) { create(:user) }
let_it_be(:project) { create(:project, :repository) }
let(:pipeline) do
let_it_be_with_reload(:pipeline) do
create(:ci_pipeline,
project: project,
status: 'success',
......@@ -15,12 +15,12 @@ RSpec.describe Gitlab::DataBuilder::Pipeline do
user: user)
end
let!(:build) { create(:ci_build, pipeline: pipeline) }
let_it_be(:build) { create(:ci_build, pipeline: pipeline) }
describe '.build' do
let(:data) { described_class.build(pipeline) }
let(:attributes) { data[:object_attributes] }
let(:build_data) { data[:builds].first }
let(:build_data) { data[:builds].last }
let(:runner_data) { build_data[:runner] }
let(:project_data) { data[:project] }
......@@ -51,9 +51,9 @@ RSpec.describe Gitlab::DataBuilder::Pipeline do
end
context 'build with runner' do
let!(:build) { create(:ci_build, pipeline: pipeline, runner: ci_runner) }
let!(:tag_names) { %w(tag-1 tag-2) }
let(:ci_runner) { create(:ci_runner, tag_list: tag_names.map { |n| ActsAsTaggableOn::Tag.create!(name: n)}) }
let_it_be(:tag_names) { %w(tag-1 tag-2) }
let_it_be(:ci_runner) { create(:ci_runner, tag_list: tag_names.map { |n| ActsAsTaggableOn::Tag.create!(name: n)}) }
let_it_be(:build) { create(:ci_build, pipeline: pipeline, runner: ci_runner) }
it 'has runner attributes', :aggregate_failures do
expect(runner_data[:id]).to eq(ci_runner.id)
......@@ -73,18 +73,15 @@ RSpec.describe Gitlab::DataBuilder::Pipeline do
end
context 'pipeline with variables' do
let(:build) { create(:ci_build, pipeline: pipeline) }
let(:data) { described_class.build(pipeline) }
let(:attributes) { data[:object_attributes] }
let!(:pipeline_variable) { create(:ci_pipeline_variable, pipeline: pipeline, key: 'TRIGGER_KEY_1', value: 'TRIGGER_VALUE_1') }
let_it_be(:pipeline_variable) { create(:ci_pipeline_variable, pipeline: pipeline, key: 'TRIGGER_KEY_1', value: 'TRIGGER_VALUE_1') }
it { expect(attributes[:variables]).to be_a(Array) }
it { expect(attributes[:variables]).to contain_exactly({ key: 'TRIGGER_KEY_1', value: 'TRIGGER_VALUE_1' }) }
end
context 'when pipeline is a detached merge request pipeline' do
let(:merge_request) { create(:merge_request, :with_detached_merge_request_pipeline) }
let(:pipeline) { merge_request.all_pipelines.first }
let_it_be(:merge_request) { create(:merge_request, :with_detached_merge_request_pipeline) }
let_it_be(:pipeline) { merge_request.all_pipelines.first }
it 'returns a source ref' do
expect(attributes[:ref]).to eq(merge_request.source_branch)
......@@ -108,19 +105,25 @@ RSpec.describe Gitlab::DataBuilder::Pipeline do
end
context 'when pipeline has retried builds' do
before do
create(:ci_build, :retried, pipeline: pipeline)
end
let_it_be(:retried_build) { create(:ci_build, :retried, pipeline: pipeline) }
it 'does not contain retried builds in payload' do
expect(data[:builds].count).to eq(1)
expect(build_data[:id]).to eq(build.id)
builds = data[:builds]
expect(builds.pluck(:id)).to contain_exactly(build.id)
end
it 'contains retried builds if requested' do
builds = data.with_retried_builds[:builds]
expect(builds.pluck(:id)).to contain_exactly(build.id, retried_build.id)
end
end
context 'build with environment' do
let!(:build) { create(:ci_build, :environment_with_deployment_tier, :with_deployment, pipeline: pipeline) }
let!(:build_environment_data) { build_data[:environment] }
let_it_be(:build) { create(:ci_build, :environment_with_deployment_tier, :with_deployment, pipeline: pipeline) }
let(:build_environment_data) { build_data[:environment] }
it 'has environment attributes', :aggregate_failures do
expect(build_environment_data[:name]).to eq(build.expanded_environment_name)
......
......@@ -6,7 +6,8 @@ require 'spec_helper'
RSpec.describe Integrations::Datadog do
let_it_be(:project) { create(:project) }
let_it_be(:pipeline) { create(:ci_pipeline, project: project) }
let_it_be(:build) { create(:ci_build, project: project) }
let_it_be(:build) { create(:ci_build, pipeline: pipeline) }
let_it_be(:retried_build) { create(:ci_build, :retried, pipeline: pipeline) }
let(:active) { true }
let(:dd_site) { 'datadoghq.com' }
......@@ -159,6 +160,10 @@ RSpec.describe Integrations::Datadog do
end
describe '#execute' do
around do |example|
freeze_time { example.run }
end
before do
stub_request(:post, expected_hook_url)
saved_instance.execute(data)
......@@ -166,20 +171,18 @@ RSpec.describe Integrations::Datadog do
context 'with pipeline data' do
let(:data) { pipeline_data }
let(:expected_headers) do
{ WebHookService::GITLAB_EVENT_HEADER => 'Pipeline Hook' }
end
let(:expected_headers) { { WebHookService::GITLAB_EVENT_HEADER => 'Pipeline Hook' } }
let(:expected_body) { data.with_retried_builds.to_json }
it { expect(a_request(:post, expected_hook_url).with(headers: expected_headers)).to have_been_made }
it { expect(a_request(:post, expected_hook_url).with(headers: expected_headers, body: expected_body)).to have_been_made }
end
context 'with job data' do
let(:data) { build_data }
let(:expected_headers) do
{ WebHookService::GITLAB_EVENT_HEADER => 'Job Hook' }
end
let(:expected_headers) { { WebHookService::GITLAB_EVENT_HEADER => 'Job Hook' } }
let(:expected_body) { data.to_json }
it { expect(a_request(:post, expected_hook_url).with(headers: expected_headers)).to have_been_made }
it { expect(a_request(:post, expected_hook_url).with(headers: expected_headers, body: expected_body)).to have_been_made }
end
end
end