Commit 70dc8cea authored by Amy Qualls, committed by Evan Read

Tone and style revisions, Auto DevOps page

Cleans up the latter half of the Auto DevOps page for tone and style.
parent b9992b32
@@ -43,18 +43,18 @@ it will continue to be used, whether or not Auto DevOps is enabled.
## Quick start
If you are using GitLab.com, see the [quick start guide](quick_start_guide.md)
If you're using GitLab.com, see the [quick start guide](quick_start_guide.md)
for how to use Auto DevOps with GitLab.com and a Kubernetes cluster on Google Kubernetes
Engine (GKE).
If you are using a self-managed instance of GitLab, you will need to configure the
If you're using a self-managed instance of GitLab, you will need to configure the
[Google OAuth2 OmniAuth Provider](../../integration/google.md) before
you can configure a cluster on GKE. Once this is set up, you can follow the steps in the
[quick start guide](quick_start_guide.md) to get started.
## Comparison to application platforms and PaaS
Auto DevOps provides functionality that is often included in an application
Auto DevOps provides functionality that's often included in an application
platform or a Platform as a Service (PaaS). It takes inspiration from the
innovative work done by [Heroku](https://www.heroku.com/) and goes beyond it
in multiple ways:
@@ -120,15 +120,15 @@ To make full use of Auto DevOps, you will need:
For Kubernetes 1.16+ clusters, there is some additional configuration for [Auto Deploy for Kubernetes 1.16+](stages.md#kubernetes-116).
1. NGINX Ingress. You can deploy it to your Kubernetes cluster by installing
the [GitLab-managed app for Ingress](../../user/clusters/applications.md#ingress),
once you have configured GitLab's Kubernetes integration in the previous step.
once you've configured GitLab's Kubernetes integration in the previous step.
Alternatively, you can use the
[`nginx-ingress`](https://github.com/helm/charts/tree/master/stable/nginx-ingress)
Helm chart to install Ingress manually.
NOTE: **Note:**
If you are using your own Ingress instead of the one provided by GitLab's managed
apps, ensure you are running at least version 0.9.0 of NGINX Ingress and
If you're using your own Ingress instead of the one provided by GitLab's managed
apps, ensure you're running at least version 0.9.0 of NGINX Ingress and
[enable Prometheus metrics](https://github.com/helm/charts/tree/master/stable/nginx-ingress#prometheus-metrics)
in order for the response metrics to appear. You will also have to
[annotate](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/)
@@ -150,25 +150,25 @@ To make full use of Auto DevOps, you will need:
means using either the [Docker](https://docs.gitlab.com/runner/executors/docker.html)
or [Kubernetes](https://docs.gitlab.com/runner/executors/kubernetes.html) executors, with
[privileged mode enabled](https://docs.gitlab.com/runner/executors/docker.html#use-docker-in-docker-with-privileged-mode).
The Runners do not need to be installed in the Kubernetes cluster, but the
The Runners don't need to be installed in the Kubernetes cluster, but the
Kubernetes executor is easy to use and autoscales automatically.
Docker-based Runners can be configured to autoscale as well, using [Docker
Machine](https://docs.gitlab.com/runner/install/autoscaling.html).
If you have configured GitLab's Kubernetes integration in the first step, you
If you've configured GitLab's Kubernetes integration in the first step, you
can deploy it to your cluster by installing the
[GitLab-managed app for GitLab Runner](../../user/clusters/applications.md#gitlab-runner).
Runners should be registered as [shared Runners](../../ci/runners/README.md#registering-a-shared-runner)
for the entire GitLab instance, or [specific Runners](../../ci/runners/README.md#registering-a-specific-runner)
that are assigned to specific projects (the default if you have installed the
that are assigned to specific projects (the default if you've installed the
GitLab Runner managed application).
- **Prometheus** (for Auto Monitoring)
To enable Auto Monitoring, you will need Prometheus installed somewhere
(inside or outside your cluster) and configured to scrape your Kubernetes cluster. A minimal scrape configuration sketch appears after this list of requirements.
If you have configured GitLab's Kubernetes integration, you can deploy it to
If you've configured GitLab's Kubernetes integration, you can deploy it to
your cluster by installing the
[GitLab-managed app for Prometheus](../../user/clusters/applications.md#prometheus).
@@ -186,11 +186,11 @@ To make full use of Auto DevOps, you will need:
a native Kubernetes certificate management controller that helps with issuing certificates.
Installing cert-manager on your cluster issues certificates from
[Let’s Encrypt](https://letsencrypt.org/) and ensures that certificates are valid and up-to-date.
If you have configured GitLab's Kubernetes integration, you can deploy it to
If you've configured GitLab's Kubernetes integration, you can deploy it to
your cluster by installing the
[GitLab-managed app for cert-manager](../../user/clusters/applications.md#cert-manager).
If you do not have Kubernetes or Prometheus installed, then Auto Review Apps,
If you don't have Kubernetes or Prometheus installed, then Auto Review Apps,
Auto Deploy, and Auto Monitoring will be silently skipped.
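If you run your own Prometheus server instead of the GitLab-managed app, you need a scrape configuration for your cluster. The following is only a minimal sketch of one common approach (scraping pods annotated with `prometheus.io/scrape: "true"`), not the configuration GitLab ships:

```yaml
# Minimal sketch only: scrape any pod annotated with prometheus.io/scrape: "true",
# such as an NGINX Ingress controller deployed with Prometheus metrics enabled.
# The GitLab-managed Prometheus app ships its own, more complete configuration.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```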
Once all requirements are met, you can [enable Auto DevOps](#enablingdisabling-auto-devops).
@@ -212,54 +212,57 @@ as other environment [variables](../../ci/variables/README.md#priority-of-enviro
TIP: **Tip:**
If you're using the [GitLab managed app for Ingress](../../user/clusters/applications.md#ingress),
the URL endpoint should be automatically configured for you. All you have to do
is use its value for the `KUBE_INGRESS_BASE_DOMAIN` variable.
the URL endpoint should be automatically configured for you.
Use its value for the `KUBE_INGRESS_BASE_DOMAIN` variable.
NOTE: **Note:**
`AUTO_DEVOPS_DOMAIN` was [deprecated in GitLab 11.8](https://gitlab.com/gitlab-org/gitlab-foss/issues/52363)
and replaced with `KUBE_INGRESS_BASE_DOMAIN`. It was removed in
and replaced with `KUBE_INGRESS_BASE_DOMAIN`, and removed in
[GitLab 12.0](https://gitlab.com/gitlab-org/gitlab-foss/issues/56959).
A wildcard DNS A record matching the base domain(s) is required, for example,
given a base domain of `example.com`, you'd need a DNS entry like:
Auto DevOps requires a wildcard DNS A record matching the base domain(s). For
a base domain of `example.com`, you'd need a DNS entry like:
```text
*.example.com 3600 A 1.2.3.4
```
In this case, `example.com` is the domain name under which the deployed apps will be served,
and `1.2.3.4` is the IP address of your load balancer; generally NGINX
([see requirements](#requirements)). How to set up the DNS record is beyond
the scope of this document; you should check with your DNS provider.
In this case, the deployed applications are served from `example.com`, and `1.2.3.4`
is the IP address of your load balancer; generally NGINX ([see requirements](#requirements)).
Setting up the DNS record is beyond the scope of this document; check with your
DNS provider for information.
Alternatively you can use free public services like [nip.io](https://nip.io)
which provide automatic wildcard DNS without any configuration. Just set the
Auto DevOps base domain to `1.2.3.4.nip.io`.
Alternatively, you can use free public services like [nip.io](https://nip.io)
which provide automatic wildcard DNS without any configuration. For [nip.io](https://nip.io),
set the Auto DevOps base domain to `1.2.3.4.nip.io`.
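If you prefer to keep this setting in source control, you can also include the Auto DevOps template in a `.gitlab-ci.yml` and set `KUBE_INGRESS_BASE_DOMAIN` as a CI/CD variable there. The following is only a sketch; the nip.io value is the placeholder IP address from above:

```yaml
# Sketch of a .gitlab-ci.yml that includes the Auto DevOps template and sets
# the base domain as a pipeline variable. The domain value is a placeholder.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  KUBE_INGRESS_BASE_DOMAIN: 1.2.3.4.nip.io
```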
Once set up, all requests will hit the load balancer, which in turn will route
them to the Kubernetes pods that run your application(s).
After completing setup, all requests hit the load balancer, which routes requests
to the Kubernetes pods running your application.
## Enabling/Disabling Auto DevOps
When first using Auto DevOps, review the [requirements](#requirements) to ensure all necessary components to make
full use of Auto DevOps are available. If this is your fist time, we recommend you follow the
[quick start guide](quick_start_guide.md).
When first using Auto DevOps, review the [requirements](#requirements) to ensure
all the necessary components to make full use of Auto DevOps are available. First-time
users should follow the [quick start guide](quick_start_guide.md).
GitLab.com users can enable/disable Auto DevOps at the project-level only. Self-managed users
can enable/disable Auto DevOps at the project-level, group-level or instance-level.
GitLab.com users can enable or disable Auto DevOps only at the project level.
Self-managed users can enable or disable Auto DevOps at the project, group, or
instance level.
### At the project level
If enabling, check that your project doesn't have a `.gitlab-ci.yml`, or if one exists, remove it.
If enabling, confirm your project does not have a `.gitlab-ci.yml`. If one
exists, remove it.
1. Go to your project's **Settings > CI/CD > Auto DevOps**.
1. Toggle the **Default to Auto DevOps pipeline** checkbox (checked to enable, unchecked to disable)
1. When enabling, it's optional but recommended to add in the [base domain](#auto-devops-base-domain)
that will be used by Auto DevOps to [deploy your application](stages.md#auto-deploy)
1. Go to your project's **{settings}** **Settings > CI/CD > Auto DevOps**.
1. Select the **Default to Auto DevOps pipeline** checkbox to enable it.
1. (Optional, but recommended) When enabling, you can add in the
[base domain](#auto-devops-base-domain) Auto DevOps uses to
[deploy your application](stages.md#auto-deploy),
and choose the [deployment strategy](#deployment-strategy).
1. Click **Save changes** for the changes to take effect.
When the feature has been enabled, an Auto DevOps pipeline is triggered on the default branch.
After enabling the feature, an Auto DevOps pipeline is triggered on the default branch.
### At the group level
@@ -267,48 +270,50 @@ When the feature has been enabled, an Auto DevOps pipeline is triggered on the d
Only administrators and group owners can enable or disable Auto DevOps at the group level.
To enable or disable Auto DevOps at the group-level:
When enabling or disabling Auto DevOps at group level, group configuration is
implicitly used for the subgroups and projects inside that group, unless Auto DevOps
is specifically enabled or disabled on the subgroup or project.
1. Go to group's **Settings > CI/CD > Auto DevOps** page.
1. Toggle the **Default to Auto DevOps pipeline** checkbox (checked to enable, unchecked to disable).
1. Click **Save changes** button for the changes to take effect.
To enable or disable Auto DevOps at the group level:
When enabling or disabling Auto DevOps at group-level, group configuration will be implicitly used for
the subgroups and projects inside that group, unless Auto DevOps is specifically enabled or disabled on
the subgroup or project.
1. Go to your group's **{settings}** **Settings > CI/CD > Auto DevOps** page.
1. Select the **Default to Auto DevOps pipeline** checkbox to enable it.
1. Click **Save changes** for the changes to take effect.
### At the instance level (Administrators only)
Even when disabled at the instance level, group owners and project maintainers can still enable
Auto DevOps at the group and project level, respectively.
1. Go to **Admin Area > Settings > Continuous Integration and Deployment**.
1. Toggle the checkbox labeled **Default to Auto DevOps pipeline for all projects**.
1. If enabling, optionally set up the Auto DevOps [base domain](#auto-devops-base-domain) which will be used for Auto Deploy and Auto Review Apps.
1. Go to **{admin}** **Admin Area > Settings > Continuous Integration and Deployment**.
1. Select **Default to Auto DevOps pipeline for all projects** to enable it.
1. (Optional) You can set up the Auto DevOps [base domain](#auto-devops-base-domain),
for Auto Deploy and Auto Review Apps to use.
1. Click **Save changes** for the changes to take effect.
### Enable for a percentage of projects
There is also a feature flag to enable Auto DevOps by default to your chosen percentage of projects.
This can be enabled from the console with the following, which uses the example of 10%:
You can use a feature flag to enable Auto DevOps by default to your desired percentage
of projects. From the console, enter the following command, replacing `10` with
your desired percentage:
`Feature.get(:force_autodevops_on_by_default).enable_percentage_of_actors(10)`
```ruby
Feature.get(:force_autodevops_on_by_default).enable_percentage_of_actors(10)
```
### Deployment strategy
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/issues/38542) in GitLab 11.0.
You can change the deployment strategy used by Auto DevOps by going to your
project's **Settings > CI/CD > Auto DevOps**.
The available options are:
project's **{settings}** **Settings > CI/CD > Auto DevOps**. The following options
are available:
- **Continuous deployment to production**: Enables [Auto Deploy](stages.md#auto-deploy)
with `master` branch directly deployed to production.
- **Continuous deployment to production using timed incremental rollout**: Sets the
[`INCREMENTAL_ROLLOUT_MODE`](customize.md#timed-incremental-rollout-to-production-premium) variable
to `timed`, and production deployment will be executed with a 5 minute delay between
to `timed`. Production deployments execute with a 5 minute delay between
each increment in rollout.
- **Automatic deployment to staging, manual deployment to production**: Sets the
[`STAGING_ENABLED`](customize.md#deploy-policy-for-staging-and-production-environments) and
@@ -320,63 +325,60 @@ The available options are:
## Using multiple Kubernetes clusters **(PREMIUM)**
When using Auto DevOps, you may want to deploy different environments to
different Kubernetes clusters. This is possible due to the 1:1 connection that
[exists between them](../../user/project/clusters/index.md#multiple-kubernetes-clusters-premium).
When using Auto DevOps, you can deploy different environments to
different Kubernetes clusters, due to the 1:1 connection
[existing between them](../../user/project/clusters/index.md#multiple-kubernetes-clusters-premium).
In the [Auto DevOps template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml) (used behind the scenes by Auto DevOps), there
are currently 3 defined environment names that you need to know:
The [Auto DevOps template](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Auto-DevOps.gitlab-ci.yml) currently defines 3 environment names:
- `review/` (every environment starting with `review/`)
- `staging`
- `production`
Those environments are tied to jobs that use [Auto Deploy](stages.md#auto-deploy), so
except for the environment scope, they would also need to have a different
domain they would be deployed to. This is why you need to define a separate
`KUBE_INGRESS_BASE_DOMAIN` variable for all the above
Those environments are tied to jobs using [Auto Deploy](stages.md#auto-deploy), so
except for the environment scope, they must have a different deployment domain.
You must define a separate `KUBE_INGRESS_BASE_DOMAIN` variable for each of the above
[based on the environment](../../ci/variables/README.md#limiting-environment-scopes-of-environment-variables).
The following table is an example of how the three different clusters would
be configured.
The following table is an example of how to configure the three different clusters:
| Cluster name | Cluster environment scope | `KUBE_INGRESS_BASE_DOMAIN` variable value | Variable environment scope | Notes |
|--------------|---------------------------|-------------------------------------------|----------------------------|---|
| review | `review/*` | `review.example.com` | `review/*` | The review cluster which will run all [Review Apps](../../ci/review_apps/index.md). `*` is a wildcard, which means it will be used by every environment name starting with `review/`. |
| staging | `staging` | `staging.example.com` | `staging` | (Optional) The staging cluster which will run the deployments of the staging environments. You need to [enable it first](customize.md#deploy-policy-for-staging-and-production-environments). |
| production | `production` | `example.com` | `production` | The production cluster which will run the deployments of the production environment. You can use [incremental rollouts](customize.md#incremental-rollout-to-production-premium). |
| review | `review/*` | `review.example.com` | `review/*` | The review cluster which runs all [Review Apps](../../ci/review_apps/index.md). `*` is a wildcard, used by every environment name starting with `review/`. |
| staging | `staging` | `staging.example.com` | `staging` | (Optional) The staging cluster which runs the deployments of the staging environments. You must [enable it first](customize.md#deploy-policy-for-staging-and-production-environments). |
| production | `production` | `example.com` | `production` | The production cluster which runs the production environment deployments. You can use [incremental rollouts](customize.md#incremental-rollout-to-production-premium). |
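To see why these scopes matter, the deploy jobs in the Auto DevOps template target the three environment names above, roughly as in the following sketch. This is a heavily simplified illustration, not the real template; only the environment names matter here:

```yaml
# Heavily simplified sketch, not the actual Auto DevOps template. Each deploy
# job targets one of the three environment names, so an environment-scoped
# KUBE_INGRESS_BASE_DOMAIN variable applies only to the matching cluster.
stages: [review, staging, production]

review:
  stage: review
  script:
    - echo "Deploy a review app under review.example.com"
  environment:
    name: review/$CI_COMMIT_REF_NAME

staging:
  stage: staging
  script:
    - echo "Deploy to staging.example.com"
  environment:
    name: staging

production:
  stage: production
  script:
    - echo "Deploy to example.com"
  environment:
    name: production
```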
To add a different cluster for each environment:
1. Navigate to your project's **Operations > Kubernetes** and create the Kubernetes clusters
with their respective environment scope as described from the table above.
1. Navigate to your project's **{cloud-gear}** **Operations > Kubernetes**.
1. Create the Kubernetes clusters with their respective environment scope, as
described in the table above.
![Auto DevOps multiple clusters](img/autodevops_multiple_clusters.png)
1. After the clusters are created, navigate to each one and install Helm Tiller
1. After creating the clusters, navigate to each cluster and install Helm Tiller
and Ingress. Wait for the Ingress IP address to be assigned.
1. Make sure you have [configured your DNS](#auto-devops-base-domain) with the
1. Make sure your [DNS is configured](#auto-devops-base-domain) with the
specified Auto DevOps domains.
1. Navigate to each cluster's page, through **Operations > Kubernetes**,
1. Navigate to each cluster's page, through **{cloud-gear}** **Operations > Kubernetes**,
and add the domain based on its Ingress IP address.
Now that all is configured, you can test your setup by creating a merge request
and verifying that your app is deployed as a review app in the Kubernetes
After completing configuration, you can test your setup by creating a merge request
and verifying your app is deployed as a review app in the Kubernetes
cluster with the `review/*` environment scope. Similarly, you can check the
other environments.
## Currently supported languages
Note that not all buildpacks support Auto Test yet, as it's a relatively new
enhancement. All of Heroku's [officially supported
languages](https://devcenter.heroku.com/articles/heroku-ci#currently-supported-languages)
support it, and some third-party buildpacks as well e.g., Go, Node, Java, PHP,
Python, Ruby, Gradle, Scala, and Elixir all support Auto Test, but notably the
multi-buildpack does not.
Note that not all buildpacks support Auto Test yet, as it's a relatively new enhancement.
All of Heroku's [officially supported languages](https://devcenter.heroku.com/articles/heroku-ci#currently-supported-languages)
support Auto Test. Some third-party buildpacks, such as those for Go, Node, Java, PHP,
Python, Ruby, Gradle, Scala, and Elixir, also support Auto Test, but the
multi-buildpack notably does not.
As of GitLab 10.0, the supported buildpacks are:
```text
```plaintext
- heroku-buildpack-multi v1.0.0
- heroku-buildpack-ruby v168
- heroku-buildpack-nodejs v99
@@ -398,18 +400,18 @@ The following restrictions apply.
### Private registry support
There is no documented way of using private container registry with Auto DevOps.
We strongly advise using GitLab Container Registry with Auto DevOps in order to
No documented way of using a private container registry with Auto DevOps exists.
We strongly advise using GitLab Container Registry with Auto DevOps to
simplify configuration and prevent any unforeseen issues.
### Installing Helm behind a proxy
GitLab does not yet support installing [Helm as a GitLab-managed App](../../user/clusters/applications.md#helm) when
behind a proxy. Users who wish to do so must inject their proxy settings
into the installation pods at runtime, for example by using a
GitLab does not support installing [Helm as a GitLab-managed App](../../user/clusters/applications.md#helm) when
behind a proxy. Users who want to do so must inject their proxy settings
into the installation pods at runtime, such as by using a
[`PodPreset`](https://kubernetes.io/docs/concepts/workloads/pods/podpreset/):
```yml
```yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
@@ -439,12 +441,12 @@ spec:
- Your application may be missing the key files the buildpack is looking for. For
example, for Ruby applications you must have a `Gemfile` to be properly detected,
even though it is possible to write a Ruby app without a `Gemfile`.
even though it's possible to write a Ruby app without a `Gemfile`.
- There may be no buildpack for your application. Try specifying a
[custom buildpack](customize.md#custom-buildpacks), as shown in the sketch after this list.
- Auto Test may fail because of a mismatch between testing frameworks. In this
case, you may need to customize your `.gitlab-ci.yml` with your test commands.
- Auto Deploy will fail if GitLab can not create a Kubernetes namespace and
- Auto Deploy will fail if GitLab can't create a Kubernetes namespace and
service account for your project. For help debugging this issue, see
[Troubleshooting failed deployment jobs](../../user/project/clusters/index.md#troubleshooting).
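For the custom buildpack and test command cases above, a small override in `.gitlab-ci.yml` is usually enough. The following sketch is illustrative only; the buildpack URL and the test command are placeholders for your own values:

```yaml
# Illustrative sketch only: include the Auto DevOps template, point it at a
# custom Heroku-style buildpack, and override the test job's commands.
# BUILDPACK_URL and the test script below are placeholders.
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  BUILDPACK_URL: https://github.com/heroku/heroku-buildpack-ruby.git

test:
  script:
    - bundle exec rspec
```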