Commit eb38a133 authored by Achilleas Pipinellis

Move/redirect remaining HA components

The last step to complete the reference architecture
documentation is to move/redirect the remaining components
listed under administration/high_availability/ to new locations.
parent 0d013cc9
@@ -2,4 +2,4 @@
redirect_to: 'database.md'
---

This document was moved to [another location](../postgresql/index.md).

---
redirect_to: ../gitaly/index.md
---

This document was moved to [another location](../gitaly/index.md).

---
stage: Create
group: Gitaly
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
type: reference
---

# Configuring Gitaly for Scaled and High Availability
A [Gitaly Cluster](../gitaly/praefect.md) can be used to increase the fault
tolerance of Gitaly in high availability configurations.
This document is relevant for [scalable and highly available setups](../reference_architectures/index.md).
## Running Gitaly on its own server
See [Run Gitaly on its own server](../gitaly/index.md#run-gitaly-on-its-own-server)
in the Gitaly documentation.
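For orientation, a minimal `gitlab.rb` sketch of that split, assuming a Gitaly host reachable as `gitaly.internal` and a shared secret token (both placeholders); the linked Gitaly documentation remains the authoritative procedure:

```ruby
# On the Gitaly server
gitaly['listen_addr'] = '0.0.0.0:8075'
gitaly['auth_token'] = 'gitalysecret'
gitlab_rails['internal_api_url'] = 'https://gitlab.example.com'

# On each application node, point repository access at that server
git_data_dirs({
  'default' => { 'gitaly_address' => 'tcp://gitaly.internal:8075' },
})
gitlab_rails['gitaly_token'] = 'gitalysecret'
```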
Continue configuration of other components by going back to the
[reference architecture](../reference_architectures/index.md#configure-gitlab-to-scale) page.
## Enable Monitoring
> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/3786) in GitLab 12.0.
1. Make sure to collect [`CONSUL_SERVER_NODES`](../postgresql/replication_and_failover.md#consul-information), which are the IP addresses or DNS records of the Consul server nodes, for the next step. They are presented in the form `Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z`.
1. Create/edit `/etc/gitlab/gitlab.rb` and add the following configuration:
```ruby
# Enable service discovery for Prometheus
consul['enable'] = true
consul['monitoring_service_discovery'] = true
# Replace placeholders
# Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
# with the addresses of the Consul server nodes
consul['configuration'] = {
retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
}
# Set the network addresses that the exporters will listen on
node_exporter['listen_address'] = '0.0.0.0:9100'
gitaly['prometheus_listen_addr'] = "0.0.0.0:9236"
```
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
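The same pattern applies to the other node types; as a hedged sketch, a PostgreSQL node would expose its own exporters alongside the same Consul settings (addresses are placeholders):

```ruby
# Enable service discovery for Prometheus, as above
consul['enable'] = true
consul['monitoring_service_discovery'] = true
consul['configuration'] = {
  retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
}

# Exporters that run on a PostgreSQL node
node_exporter['listen_address'] = '0.0.0.0:9100'
postgres_exporter['listen_address'] = '0.0.0.0:9187'
```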
<!-- ## Troubleshooting
Include any troubleshooting steps that you can foresee. If you know beforehand what issues
one might have when setting this up, or when something is changed, or on upgrading, it's
important to describe those, too. Think of things that may go wrong and include them here.
This is important to minimize requests for support, and to avoid doc comments with
questions that you know someone might ask.
Each scenario can be a third-level heading, e.g. `### Getting error message X`.
If you have none to add when creating a doc, leave this section in place
but commented out to help encourage others to add to it in the future. -->

---
redirect_to: ../load_balancer.md
---

This document was moved to [another location](../load_balancer.md).

---
type: reference
---

# Load Balancer for multi-node GitLab
In a multi-node GitLab configuration, you need a load balancer to route
traffic to the application servers. The specifics on which load balancer to use,
or its exact configuration, are beyond the scope of GitLab documentation. We hope
that if you're managing HA systems like GitLab, you already have a load balancer of
choice. Some examples include HAProxy (open-source), F5 Big-IP LTM,
and Citrix NetScaler. This documentation outlines what ports and protocols
you need to use with GitLab.
## SSL
How will you handle SSL in your multi-node environment? There are several different
options:
- Each application node terminates SSL
- The load balancer(s) terminate SSL and communication is not secure between
the load balancer(s) and the application nodes
- The load balancer(s) terminate SSL and communication is *secure* between the
load balancer(s) and the application nodes
### Application nodes terminate SSL
Configure your load balancer(s) to pass connections on port 443 as 'TCP' rather
than 'HTTP(S)' protocol. This will pass the connection to the application node's
NGINX service untouched. NGINX will have the SSL certificate and listen on port 443.
See [NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
for details on managing SSL certificates and configuring NGINX.
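As a hedged illustration of the application-node side (the host name and certificate paths are placeholders; by default Omnibus looks for `/etc/gitlab/ssl/<hostname>.crt` and `.key`):

```ruby
# /etc/gitlab/gitlab.rb on each application node
external_url 'https://gitlab.example.com'

# Only needed if the certificate and key are not in the default location
nginx['ssl_certificate'] = '/etc/gitlab/ssl/gitlab.example.com.crt'
nginx['ssl_certificate_key'] = '/etc/gitlab/ssl/gitlab.example.com.key'
```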
### Load Balancer(s) terminate SSL without backend SSL
Configure your load balancer(s) to use the 'HTTP(S)' protocol rather than 'TCP'.
The load balancer(s) will then be responsible for managing SSL certificates and
terminating SSL.
Since communication between the load balancer(s) and GitLab will not be secure,
there is some additional configuration needed. See
[NGINX Proxied SSL documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl)
for details.
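A minimal sketch of the proxied-SSL settings on the application nodes (the host name is a placeholder; see the linked documentation for the complete list of options):

```ruby
# /etc/gitlab/gitlab.rb on each application node
external_url 'https://gitlab.example.com'

# The load balancer terminates SSL, so NGINX listens on plain HTTP
nginx['listen_port'] = 80
nginx['listen_https'] = false
```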
### Load Balancer(s) terminate SSL with backend SSL
Configure your load balancer(s) to use the 'HTTP(S)' protocol rather than 'TCP'.
The load balancer(s) will be responsible for managing SSL certificates that
end users will see.
Traffic will also be secure between the load balancer(s) and NGINX in this
scenario. There is no need to add configuration for proxied SSL since the
connection will be secure all the way. However, configuration will need to be
added to GitLab to configure SSL certificates. See
[NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
for details on managing SSL certificates and configuring NGINX.
## Ports
### Basic ports
| LB Port | Backend Port | Protocol |
| ------- | ------------ | ------------------------ |
| 80 | 80 | HTTP (*1*) |
| 443 | 443 | TCP or HTTPS (*1*) (*2*) |
| 22 | 22 | TCP |
- (*1*): [Web terminal](../../ci/environments/index.md#web-terminals) support requires
your load balancer to correctly handle WebSocket connections. When using
HTTP or HTTPS proxying, this means your load balancer must be configured
to pass through the `Connection` and `Upgrade` hop-by-hop headers. See the
[web terminal](../integration/terminal.md) integration guide for
more details.
- (*2*): When using HTTPS protocol for port 443, you will need to add an SSL
certificate to the load balancers. If you wish to terminate SSL at the
GitLab application server instead, use TCP protocol.
### GitLab Pages Ports
If you're using GitLab Pages with custom domain support, you will need some
additional port configurations.
GitLab Pages requires a separate virtual IP address. Configure DNS to point the
`pages_external_url` from `/etc/gitlab/gitlab.rb` at the new virtual IP address. See the
[GitLab Pages documentation](../pages/index.md) for more information.
| LB Port | Backend Port | Protocol |
| ------- | ------------- | --------- |
| 80 | Varies (*1*) | HTTP |
| 443 | Varies (*1*) | TCP (*2*) |
- (*1*): The backend port for GitLab Pages depends on the
`gitlab_pages['external_http']` and `gitlab_pages['external_https']`
setting. See [GitLab Pages documentation](../pages/index.md) for more details.
- (*2*): Port 443 for GitLab Pages should always use the TCP protocol. Users can
configure custom domains with custom SSL, which would not be possible
if SSL was terminated at the load balancer.
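For reference, a hedged `gitlab.rb` sketch of the Pages settings this table depends on (the IP address is a placeholder for the Pages virtual IP):

```ruby
pages_external_url 'http://pages.example.com'

# Listen addresses that receive traffic from the load balancer; these
# determine the backend ports shown in the table above
gitlab_pages['external_http'] = ['192.0.2.2:80']
gitlab_pages['external_https'] = ['192.0.2.2:443']
```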
### Alternate SSH Port
Some organizations have policies against opening SSH port 22. In this case,
it may be helpful to configure an alternate SSH hostname that allows users
to use SSH on port 443. An alternate SSH hostname requires a new virtual IP address,
separate from the one used for the GitLab HTTP configuration above.
Configure DNS for an alternate SSH hostname such as `altssh.gitlab.example.com`.
| LB Port | Backend Port | Protocol |
| ------- | ------------ | -------- |
| 443 | 22 | TCP |
## Readiness check
It is strongly recommended that multi-node deployments configure load balancers to use the [readiness check](../../user/admin_area/monitoring/health_check.md#readiness) (`/-/readiness`) to ensure a node is ready to accept traffic before routing traffic to it. This is especially important when using Puma, because there is a brief period during a restart when Puma does not accept requests.
---
Read more on high-availability configuration:
1. [Configure the database](../postgresql/replication_and_failover.md)
1. [Configure Redis](redis.md)
1. [Configure NFS](nfs.md)
1. [Configure the GitLab application servers](gitlab.md)
<!-- ## Troubleshooting
Include any troubleshooting steps that you can foresee. If you know beforehand what issues
one might have when setting this up, or when something is changed, or on upgrading, it's
important to describe those, too. Think of things that may go wrong and include them here.
This is important to minimize requests for support, and to avoid doc comments with
questions that you know someone might ask.
Each scenario can be a third-level heading, e.g. `### Getting error message X`.
If you have none to add when creating a doc, leave this section in place
but commented out to help encourage others to add to it in the future. -->

---
redirect_to: ../monitoring/prometheus/index.md#standalone-prometheus-using-omnibus-gitlab
---

This document was moved to [another location](../monitoring/prometheus/index.md#standalone-prometheus-using-omnibus-gitlab).

---
type: reference
---

# Configuring a Monitoring node for Scaling and High Availability
> [Introduced](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/3786) in GitLab 12.0.
You can configure a Prometheus node to monitor GitLab.
## Standalone Monitoring node using Omnibus GitLab
The Omnibus GitLab package can be used to configure a standalone Monitoring node running [Prometheus](../monitoring/prometheus/index.md) and [Grafana](../monitoring/performance/grafana_configuration.md).
The monitoring node is not highly available. See [Scaling and High Availability](../reference_architectures/index.md)
for an overview of GitLab scaling and high availability options.
The steps below are the minimum necessary to configure a Monitoring node running Prometheus and Grafana with
Omnibus:
1. SSH into the Monitoring node.
1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
package you want using **steps 1 and 2** from the GitLab downloads page.
- Do not complete any other steps on the download page.
1. Make sure to collect [`CONSUL_SERVER_NODES`](../postgresql/replication_and_failover.md#consul-information), which are the IP addresses or DNS records of the Consul server nodes, for the next step. They are presented in the form `Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z`.
1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
```ruby
external_url 'http://gitlab.example.com'
# Enable Prometheus
prometheus['enable'] = true
prometheus['listen_address'] = '0.0.0.0:9090'
prometheus['monitor_kubernetes'] = false
# Enable Login form
grafana['disable_login_form'] = false
# Enable Grafana
grafana['enable'] = true
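# Placeholder value - set a strong admin password of your own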
grafana['admin_password'] = 'toomanysecrets'
# Enable service discovery for Prometheus
consul['enable'] = true
consul['monitoring_service_discovery'] = true
# Replace placeholders
# Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z
# with the addresses of the Consul server nodes
consul['configuration'] = {
retry_join: %w(Y.Y.Y.Y consul1.gitlab.example.com Z.Z.Z.Z),
}
# Disable all other services
gitlab_rails['auto_migrate'] = false
alertmanager['enable'] = false
gitaly['enable'] = false
gitlab_exporter['enable'] = false
gitlab_workhorse['enable'] = false
nginx['enable'] = true
postgres_exporter['enable'] = false
postgresql['enable'] = false
redis['enable'] = false
redis_exporter['enable'] = false
sidekiq['enable'] = false
puma['enable'] = false
node_exporter['enable'] = false
gitlab_exporter['enable'] = false
```
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
The next step is to tell all the other nodes where the monitoring node is:
1. Edit `/etc/gitlab/gitlab.rb`, and add, or find and uncomment the following line:
```ruby
gitlab_rails['prometheus_address'] = '10.0.0.1:9090'
```
Where `10.0.0.1:9090` is the IP address and port of the Prometheus node.
1. Save the file and [reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to
take effect.
## Migrating to Service Discovery
Once monitoring using Service Discovery is enabled with `consul['monitoring_service_discovery'] = true`,
ensure that `prometheus['scrape_configs']` is not set in `/etc/gitlab/gitlab.rb`. Setting both
`consul['monitoring_service_discovery'] = true` and `prometheus['scrape_configs']` in `/etc/gitlab/gitlab.rb`
will result in errors.
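For illustration, a static scrape configuration of the kind that must be removed might look like this (the job name and targets are placeholders):

```ruby
# Remove or comment out blocks like this before enabling
# consul['monitoring_service_discovery'] = true
prometheus['scrape_configs'] = [
  {
    'job_name': 'node-exporters',
    'static_configs' => [
      'targets' => ['10.0.0.10:9100', '10.0.0.11:9100'],
    ],
  },
]
```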
<!-- ## Troubleshooting
Include any troubleshooting steps that you can foresee. If you know beforehand what issues
one might have when setting this up, or when something is changed, or on upgrading, it's
important to describe those, too. Think of things that may go wrong and include them here.
This is important to minimize requests for support, and to avoid doc comments with
questions that you know someone might ask.
Each scenario can be a third-level heading, e.g. `### Getting error message X`.
If you have none to add when creating a doc, leave this section in place
but commented out to help encourage others to add to it in the future. -->
---
type: reference
---
# Load Balancer for multi-node GitLab
In a multi-node GitLab configuration, you need a load balancer to route
traffic to the application servers. The specifics on which load balancer to use,
or its exact configuration, are beyond the scope of GitLab documentation. We hope
that if you're managing HA systems like GitLab, you already have a load balancer of
choice. Some examples include HAProxy (open-source), F5 Big-IP LTM,
and Citrix NetScaler. This documentation outlines what ports and protocols
you need to use with GitLab.
## SSL
How will you handle SSL in your multi-node environment? There are several different
options:
- Each application node terminates SSL
- The load balancer(s) terminate SSL and communication is not secure between
the load balancer(s) and the application nodes
- The load balancer(s) terminate SSL and communication is *secure* between the
load balancer(s) and the application nodes
### Application nodes terminate SSL
Configure your load balancer(s) to pass connections on port 443 as 'TCP' rather
than 'HTTP(S)' protocol. This will pass the connection to the application node's
NGINX service untouched. NGINX will have the SSL certificate and listen on port 443.
See [NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
for details on managing SSL certificates and configuring NGINX.
### Load Balancer(s) terminate SSL without backend SSL
Configure your load balancer(s) to use the 'HTTP(S)' protocol rather than 'TCP'.
The load balancer(s) will then be responsible for managing SSL certificates and
terminating SSL.
Since communication between the load balancer(s) and GitLab will not be secure,
there is some additional configuration needed. See
[NGINX Proxied SSL documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#supporting-proxied-ssl)
for details.
### Load Balancer(s) terminate SSL with backend SSL
Configure your load balancer(s) to use the 'HTTP(S)' protocol rather than 'TCP'.
The load balancer(s) will be responsible for managing SSL certificates that
end users will see.
Traffic will also be secure between the load balancer(s) and NGINX in this
scenario. There is no need to add configuration for proxied SSL since the
connection will be secure all the way. However, configuration will need to be
added to GitLab to configure SSL certificates. See
[NGINX HTTPS documentation](https://docs.gitlab.com/omnibus/settings/nginx.html#enable-https)
for details on managing SSL certificates and configuring NGINX.
## Ports
### Basic ports
| LB Port | Backend Port | Protocol |
| ------- | ------------ | ------------------------ |
| 80 | 80 | HTTP (*1*) |
| 443 | 443 | TCP or HTTPS (*1*) (*2*) |
| 22 | 22 | TCP |
- (*1*): [Web terminal](../ci/environments/index.md#web-terminals) support requires
your load balancer to correctly handle WebSocket connections. When using
HTTP or HTTPS proxying, this means your load balancer must be configured
to pass through the `Connection` and `Upgrade` hop-by-hop headers. See the
[web terminal](integration/terminal.md) integration guide for
more details.
- (*2*): When using HTTPS protocol for port 443, you will need to add an SSL
certificate to the load balancers. If you wish to terminate SSL at the
GitLab application server instead, use TCP protocol.
### GitLab Pages Ports
If you're using GitLab Pages with custom domain support, you will need some
additional port configurations.
GitLab Pages requires a separate virtual IP address. Configure DNS to point the
`pages_external_url` from `/etc/gitlab/gitlab.rb` at the new virtual IP address. See the
[GitLab Pages documentation](pages/index.md) for more information.
| LB Port | Backend Port | Protocol |
| ------- | ------------- | --------- |
| 80 | Varies (*1*) | HTTP |
| 443 | Varies (*1*) | TCP (*2*) |
- (*1*): The backend port for GitLab Pages depends on the
`gitlab_pages['external_http']` and `gitlab_pages['external_https']`
setting. See [GitLab Pages documentation](pages/index.md) for more details.
- (*2*): Port 443 for GitLab Pages should always use the TCP protocol. Users can
configure custom domains with custom SSL, which would not be possible
if SSL was terminated at the load balancer.
### Alternate SSH Port
Some organizations have policies against opening SSH port 22. In this case,
it may be helpful to configure an alternate SSH hostname that allows users
to use SSH on port 443. An alternate SSH hostname requires a new virtual IP address,
separate from the one used for the GitLab HTTP configuration above.
Configure DNS for an alternate SSH hostname such as `altssh.gitlab.example.com`.
| LB Port | Backend Port | Protocol |
| ------- | ------------ | -------- |
| 443 | 22 | TCP |
## Readiness check
It is strongly recommended that multi-node deployments configure load balancers to use the [readiness check](../user/admin_area/monitoring/health_check.md#readiness) (`/-/readiness`) to ensure a node is ready to accept traffic before routing traffic to it. This is especially important when using Puma, because there is a brief period during a restart when Puma does not accept requests.
<!-- ## Troubleshooting
Include any troubleshooting steps that you can foresee. If you know beforehand what issues
one might have when setting this up, or when something is changed, or on upgrading, it's
important to describe those, too. Think of things that may go wrong and include them here.
This is important to minimize requests for support, and to avoid doc comments with
questions that you know someone might ask.
Each scenario can be a third-level heading, e.g. `### Getting error message X`.
If you have none to add when creating a doc, leave this section in place
but commented out to help encourage others to add to it in the future. -->
@@ -109,6 +109,81 @@ prometheus['scrape_configs'] = [
]
```
### Standalone Prometheus using Omnibus GitLab
The Omnibus GitLab package can be used to configure a standalone Monitoring node running Prometheus and [Grafana](../performance/grafana_configuration.md).
The steps below are the minimum necessary to configure a Monitoring node running Prometheus and Grafana with Omnibus GitLab:
1. SSH into the Monitoring node.
1. [Install](https://about.gitlab.com/install/) the Omnibus GitLab
package you want using **steps 1 and 2** from the GitLab downloads page, but
do not follow the remaining steps.
1. Make sure to collect the IP addresses or DNS records of the Consul server nodes, for the next step.
1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
```ruby
external_url 'http://gitlab.example.com'
# Enable Prometheus
prometheus['enable'] = true
prometheus['listen_address'] = '0.0.0.0:9090'
prometheus['monitor_kubernetes'] = false
# Enable Login form
grafana['disable_login_form'] = false
# Enable Grafana
grafana['enable'] = true
grafana['admin_password'] = 'toomanysecrets'
# Enable service discovery for Prometheus
consul['enable'] = true
consul['monitoring_service_discovery'] = true
# The addresses can be IPs or FQDNs
consul['configuration'] = {
retry_join: %w(10.0.0.1 10.0.0.2 10.0.0.3),
}
# Disable all other services
gitlab_rails['auto_migrate'] = false
alertmanager['enable'] = false
gitaly['enable'] = false
gitlab_exporter['enable'] = false
gitlab_workhorse['enable'] = false
nginx['enable'] = true
postgres_exporter['enable'] = false
postgresql['enable'] = false
redis['enable'] = false
redis_exporter['enable'] = false
sidekiq['enable'] = false
puma['enable'] = false
node_exporter['enable'] = false
gitlab_exporter['enable'] = false
```
1. Run `sudo gitlab-ctl reconfigure` to compile the configuration.
The next step is to tell all the other nodes where the monitoring node is:
1. Edit `/etc/gitlab/gitlab.rb`, and add, or find and uncomment the following line:
```ruby
gitlab_rails['prometheus_address'] = '10.0.0.1:9090'
```
Where `10.0.0.1:9090` is the IP address and port of the Prometheus node.
1. Save the file and [reconfigure GitLab](../../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to
take effect.
NOTE: **Note:**
Once monitoring using Service Discovery is enabled with `consul['monitoring_service_discovery'] = true`,
ensure that `prometheus['scrape_configs']` is not set in `/etc/gitlab/gitlab.rb`. Setting both
`consul['monitoring_service_discovery'] = true` and `prometheus['scrape_configs']` in `/etc/gitlab/gitlab.rb`
will result in errors.
### Using an external Prometheus server

NOTE: **Note:**

@@ -128,14 +203,24 @@ To use an external Prometheus server:

1. Set each bundled service's [exporter](#bundled-software-metrics) to listen on a network address, for example:

```ruby
# Rails nodes
gitlab_exporter['listen_address'] = '0.0.0.0'
gitlab_exporter['listen_port'] = '9168'
node_exporter['listen_address'] = '0.0.0.0:9100'

# Sidekiq nodes
sidekiq['listen_address'] = '0.0.0.0'

# Redis nodes
redis_exporter['listen_address'] = '0.0.0.0:9121'

# PostgreSQL nodes
postgres_exporter['listen_address'] = '0.0.0.0:9187'

# Gitaly nodes
gitaly['prometheus_listen_addr'] = "0.0.0.0:9236"
gitlab_workhorse['prometheus_listen_addr'] = "0.0.0.0:9229"
```
1. Install and set up a dedicated Prometheus instance, if necessary, using the [official installation instructions](https://prometheus.io/docs/prometheus/latest/installation/).