nexedi / gitlab-ce / Commits / 8d3246cf

Commit 8d3246cf, authored Jul 17, 2020 by Achilleas Pipinellis

    Add all new sections

Parent: 6df61f1c

Showing 1 changed file with 620 additions and 521 deletions:

doc/administration/reference_architectures/10k_users.md (+620, −521)
````diff
@@ -8,12 +8,6 @@ This page describes GitLab reference architecture for up to 10,000 users.
 For a full list of reference architectures, see
 [Available reference architectures](index.md#available-reference-architectures).
 
-NOTE: **Note:**
-The 10,000-user reference architecture documented below is
-designed to help your organization achieve a highly-available GitLab deployment.
-If you do not have the expertise or need to maintain a highly-available
-environment, you can have a simpler and less costly-to-operate environment by
-following the [2,000-user reference architecture](2k_users.md).
-
 > - **Supported users (approximate):** 10,000
 > - **High Availability:** True
 > - **Test RPS rates:** API: 200 RPS, Web: 20 RPS, Git: 20 RPS
````
````diff
@@ -57,8 +51,10 @@ To set up GitLab and its components to accommodate up to 10,000 users:
 1. [Configure PostgreSQL](#configure-postgresql), the database for GitLab.
 1. [Configure PgBouncer](#configure-pgbouncer).
 1. [Configure the internal load balancing node](#configure-the-internal-load-balancer)
-1. [Configure Redis](#configure-redis).
-1. [Configure Sentinel](#configure-sentinel).
+1. [Configure Redis Cache](#configure-redis-cache).
+1. [Configure Redis Queues](#configure-redis-queues).
+1. [Configure Sentinel Cache](#configure-sentinel-cache).
+1. [Configure Sentinel Queues](#configure-sentinel-queues).
 1. [Configure Gitaly](#configure-gitaly),
    which provides access to the Git repositories.
 1. [Configure Sidekiq](#configure-sidekiq).
````
````diff
@@ -82,35 +78,35 @@ Here is a list and description of each machine and the assigned IP:
 - `10.6.0.11`: Consul 1
 - `10.6.0.12`: Consul 2
 - `10.6.0.13`: Consul 3
-- `10.6.0.31`: PostgreSQL primary
-- `10.6.0.32`: PostgreSQL secondary 1
-- `10.6.0.33`: PostgreSQL secondary 2
-- `10.6.0.21`: PgBouncer 1
-- `10.6.0.22`: PgBouncer 2
-- `10.6.0.23`: PgBouncer 3
-- `10.6.0.20`: Internal Load Balancer
-- `10.6.0.61`: Redis - Cache Primary
-- `10.6.0.62`: Redis - Cache Replica 1
-- `10.6.0.63`: Redis - Cache Replica 2
+- `10.6.0.21`: PostgreSQL primary
+- `10.6.0.22`: PostgreSQL secondary 1
+- `10.6.0.23`: PostgreSQL secondary 2
+- `10.6.0.31`: PgBouncer 1
+- `10.6.0.32`: PgBouncer 2
+- `10.6.0.33`: PgBouncer 3
+- `10.6.0.40`: Internal Load Balancer
+- `10.6.0.51`: Redis - Cache Primary
+- `10.6.0.52`: Redis - Cache Replica 1
+- `10.6.0.53`: Redis - Cache Replica 2
 - `10.6.0.61`: Redis - Queues Primary
 - `10.6.0.62`: Redis - Queues Replica 1
 - `10.6.0.63`: Redis - Queues Replica 2
-- `10.6.0.11`: Sentinel - Cache 1
-- `10.6.0.12`: Sentinel - Cache 2
-- `10.6.0.13`: Sentinel - Cache 3
-- `10.6.0.11`: Sentinel - Queues 1
-- `10.6.0.12`: Sentinel - Queues 2
-- `10.6.0.13`: Sentinel - Queues 3
-- `10.6.0.51`: Gitaly 1
-- `10.6.0.52`: Gitaly 2
-- `10.6.0.71`: Sidekiq 1
-- `10.6.0.72`: Sidekiq 2
-- `10.6.0.73`: Sidekiq 3
-- `10.6.0.74`: Sidekiq 4
-- `10.6.0.41`: GitLab application 1
-- `10.6.0.42`: GitLab application 2
-- `10.6.0.43`: GitLab application 3
-- `10.6.0.81`: Prometheus
+- `10.6.0.71`: Sentinel - Cache 1
+- `10.6.0.72`: Sentinel - Cache 2
+- `10.6.0.73`: Sentinel - Cache 3
+- `10.6.0.81`: Sentinel - Queues 1
+- `10.6.0.82`: Sentinel - Queues 2
+- `10.6.0.83`: Sentinel - Queues 3
+- `10.6.0.91`: Gitaly 1
+- `10.6.0.92`: Gitaly 2
+- `10.6.0.101`: Sidekiq 1
+- `10.6.0.102`: Sidekiq 2
+- `10.6.0.103`: Sidekiq 3
+- `10.6.0.104`: Sidekiq 4
+- `10.6.0.111`: GitLab application 1
+- `10.6.0.112`: GitLab application 2
+- `10.6.0.113`: GitLab application 3
+- `10.6.0.121`: Prometheus
 
 ## Configure the external load balancer
````
````diff
@@ -226,88 +222,28 @@ Configure DNS for an alternate SSH hostname such as `altssh.gitlab.example.com`.
     </a>
   </div>
 
-## Configure Redis
-
-Using [Redis](https://redis.io/) in scalable environment is possible using a **Primary** x **Replica**
-topology with a [Redis Sentinel](https://redis.io/topics/sentinel) service to watch and automatically
-start the failover procedure.
-
-Redis requires authentication if used with Sentinel. See
-[Redis Security](https://redis.io/topics/security) documentation for more
-information. We recommend using a combination of a Redis password and tight
-firewall rules to secure your Redis service.
-
-You are highly encouraged to read the [Redis Sentinel](https://redis.io/topics/sentinel) documentation
-before configuring Redis with GitLab to fully understand the topology and
-architecture.
-
-In this section, you'll be guided through configuring an external Redis instance
-to be used with GitLab. The following IPs will be used as an example:
-
-- `10.6.0.61`: Redis Primary
-- `10.6.0.62`: Redis Replica 1
-- `10.6.0.63`: Redis Replica 2
-
-### Provide your own Redis instance
-
-Managed Redis from cloud providers such as AWS ElastiCache will work. If these
-services support high availability, be sure it is **not** the Redis Cluster type.
-
-Redis version 5.0 or higher is required, as this is what ships with
-Omnibus GitLab packages starting with GitLab 13.0. Older Redis versions
-do not support an optional count argument to SPOP which is now required for
-[Merge Trains](../../ci/merge_request_pipelines/pipelines_for_merged_results/merge_trains/index.md).
-
-Note the Redis node's IP address or hostname, port, and password (if required).
-These will be necessary when configuring the
-[GitLab application servers](#configure-gitlab-rails) later.
-
-### Standalone Redis using Omnibus GitLab
-
-This is the section where we install and set up the new Redis instances.
-
-The requirements for a Redis setup are the following:
-
-1. All Redis nodes must be able to talk to each other and accept incoming
-   connections over Redis (`6379`) and Sentinel (`26379`) ports (unless you
-   change the default ones).
-1. The server that hosts the GitLab application must be able to access the
-   Redis nodes.
-1. Protect the nodes from access from external networks
-   ([Internet](https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png)),
-   using a firewall.
-
-NOTE: **Note:**
-Redis nodes (both primary and replica) will need the same password defined in
-`redis['password']`. At any time during a failover the Sentinels can
-reconfigure a node and change its status from primary to replica and vice versa.
-
-#### Configuring the primary Redis instance
-
-1. SSH into the **Primary** Redis server.
-1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
-   package you want using **steps 1 and 2** from the GitLab downloads page.
+## Configure Consul
+
+The following IPs will be used as an example:
+
+- `10.6.0.11`: Consul 1
+- `10.6.0.12`: Consul 2
+- `10.6.0.13`: Consul 3
+
+To configure Consul:
+
+1. SSH into the server that will host Consul.
+1. [Download/install](https://about.gitlab.com/install/) the
+   Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
+   GitLab downloads page.
    - Make sure you select the correct Omnibus package, with the same version
-     and type (Community, Enterprise editions) of your current install.
+     the GitLab application is running.
    - Do not complete any other steps on the download page.
 1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
 
    ```ruby
-   # Specify server role as 'redis_master_role'
-   roles ['redis_master_role']
-
-   # IP address pointing to a local IP that the other machines can reach to.
-   # You can also set bind to '0.0.0.0' which listen in all interfaces.
-   # If you really need to bind to an external accessible IP, make
-   # sure you add extra firewall rules to prevent unauthorized access.
-   redis['bind'] = '10.6.0.61'
-
-   # Define a port so Redis can listen for TCP requests which will allow other
-   # machines to connect to it.
-   redis['port'] = 6379
-
-   # Set up password authentication for Redis (use the same password in all nodes).
-   redis['password'] = 'redis-password-goes-here'
+   roles ['consul_role']
 
    ## Enable service discovery for Prometheus
    consul['enable'] = true
````
````diff
@@ -316,329 +252,146 @@ reconfigure a node and change its status from primary to replica and vice versa.
    ## The IPs of the Consul server nodes
    ## You can also use FQDNs and intermix them with IPs
    consul['configuration'] = {
+      server: true,
       retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
    }
 
    # Set the network addresses that the exporters will listen on
    node_exporter['listen_address'] = '0.0.0.0:9100'
-   redis_exporter['listen_address'] = '0.0.0.0:9121'
    ```
 
+1. Only the primary GitLab application server should handle migrations. To
+   prevent database migrations from running on upgrade, add the following
+   configuration to your `/etc/gitlab/gitlab.rb` file:
+
+   ```ruby
+   # Disable auto migrations
+   gitlab_rails['auto_migrate'] = false
+   ```
+
 1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
+1. Go through the steps again for all the other Consul nodes, and
+   make sure you set up the correct IPs.
 
 NOTE: **Note:**
-You can specify multiple roles like sentinel and Redis as:
-`roles ['redis_sentinel_role', 'redis_master_role']`.
-Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
+A Consul leader will be elected when the provisioning of the third Consul server is completed.
+Viewing the Consul logs `sudo gitlab-ctl tail consul` will display
+`...[INFO] consul: New leader elected: ...`
+
+You can list the current Consul members (server, client):
+
+```shell
+sudo /opt/gitlab/embedded/bin/consul members
+```
+
+You can verify the GitLab services are running:
+
+```shell
+sudo gitlab-ctl status
+```
+
+The output should be similar to the following:
+
+```plaintext
+run: consul: (pid 30074) 76834s; run: log: (pid 29740) 76844s
+run: logrotate: (pid 30925) 3041s; run: log: (pid 29649) 76861s
+run: node-exporter: (pid 30093) 76833s; run: log: (pid 29663) 76855s
+```
 
-#### Configuring the replica Redis instances
-
-1. SSH into the **replica** Redis server.
-1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
-   package you want using **steps 1 and 2** from the GitLab downloads page.
-   - Make sure you select the correct Omnibus package, with the same version
-     and type (Community, Enterprise editions) of your current install.
-   - Do not complete any other steps on the download page.
-1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
-
-   ```ruby
-   # Specify server role as 'redis_replica_role'
-   roles ['redis_replica_role']
-
-   # IP address pointing to a local IP that the other machines can reach to.
-   # You can also set bind to '0.0.0.0' which listen in all interfaces.
-   # If you really need to bind to an external accessible IP, make
-   # sure you add extra firewall rules to prevent unauthorized access.
-   redis['bind'] = '10.6.0.62'
-
-   # Define a port so Redis can listen for TCP requests which will allow other
-   # machines to connect to it.
-   redis['port'] = 6379
-
-   # The same password for Redis authentication you set up for the primary node.
-   redis['password'] = 'redis-password-goes-here'
-
-   # The IP of the primary Redis node.
-   redis['master_ip'] = '10.6.0.61'
-
-   # Port of primary Redis server, uncomment to change to non default. Defaults
-   # to `6379`.
-   #redis['master_port'] = 6379
-
-   ## Enable service discovery for Prometheus
-   consul['enable'] = true
-   consul['monitoring_service_discovery'] = true
-
-   ## The IPs of the Consul server nodes
-   ## You can also use FQDNs and intermix them with IPs
-   consul['configuration'] = {
-      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
-   }
-
-   # Set the network addresses that the exporters will listen on
-   node_exporter['listen_address'] = '0.0.0.0:9100'
-   redis_exporter['listen_address'] = '0.0.0.0:9121'
-   ```
-
-1. To prevent reconfigure from running automatically on upgrade, run:
-
-   ```shell
-   sudo touch /etc/gitlab/skip-auto-reconfigure
-   ```
-
-1. Go through the steps again for all the other replica nodes, and
-   make sure to set up the IPs correctly.
-
-NOTE: **Note:**
-You can specify multiple roles like sentinel and Redis as:
-`roles ['redis_sentinel_role', 'redis_master_role']`.
-Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
-
-These values don't have to be changed again in `/etc/gitlab/gitlab.rb` after
-a failover, as the nodes will be managed by the [Sentinels](#configure-consul-and-sentinel), and even after a
-`gitlab-ctl reconfigure`, they will get their configuration restored by
-the same Sentinels.
-
-Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/redis.html)
-are supported and can be added if needed.
-
-## Configure Consul and Sentinel
-
-NOTE: **Note:**
-If you are using an external Redis Sentinel instance, be sure
-to exclude the `requirepass` parameter from the Sentinel
-configuration. This parameter will cause clients to report `NOAUTH
-Authentication required.`. [Redis Sentinel 3.2.x does not support
-password authentication](https://github.com/antirez/redis/issues/3279).
-
-Now that the Redis servers are all set up, let's configure the Sentinel
-servers. The following IPs will be used as an example:
-
-- `10.6.0.11`: Consul/Sentinel 1
-- `10.6.0.12`: Consul/Sentinel 2
-- `10.6.0.13`: Consul/Sentinel 3
-
-To configure the Sentinel:
-
-1. SSH into the server that will host Consul/Sentinel.
-1. [Download/install](https://about.gitlab.com/install/) the
-   Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
-   GitLab downloads page.
-   - Make sure you select the correct Omnibus package, with the same version
-     the GitLab application is running.
-   - Do not complete any other steps on the download page.
-1. Edit `/etc/gitlab/gitlab.rb` and add the contents:
-
-   ```ruby
-   roles ['redis_sentinel_role', 'consul_role']
-
-   # Must be the same in every sentinel node
-   redis['master_name'] = 'gitlab-redis'
-
-   # The same password for Redis authentication you set up for the primary node.
-   redis['master_password'] = 'redis-password-goes-here'
-
-   # The IP of the primary Redis node.
-   redis['master_ip'] = '10.6.0.61'
-
-   # Define a port so Redis can listen for TCP requests which will allow other
-   # machines to connect to it.
-   redis['port'] = 6379
-
-   # Port of primary Redis server, uncomment to change to non default. Defaults
-   # to `6379`.
-   #redis['master_port'] = 6379
-
-   ## Configure Sentinel
-   sentinel['bind'] = '10.6.0.11'
-
-   # Port that Sentinel listens on, uncomment to change to non default. Defaults
-   # to `26379`.
-   # sentinel['port'] = 26379
-
-   ## Quorum must reflect the amount of voting sentinels it take to start a failover.
-   ## Value must NOT be greater then the amount of sentinels.
-   ##
-   ## The quorum can be used to tune Sentinel in two ways:
-   ## 1. If a the quorum is set to a value smaller than the majority of Sentinels
-   ##    we deploy, we are basically making Sentinel more sensible to primary failures,
-   ##    triggering a failover as soon as even just a minority of Sentinels is no longer
-   ##    able to talk with the primary.
-   ## 1. If a quorum is set to a value greater than the majority of Sentinels, we are
-   ##    making Sentinel able to failover only when there are a very large number (larger
-   ##    than majority) of well connected Sentinels which agree about the primary being down.s
-   sentinel['quorum'] = 2
-
-   ## Consider unresponsive server down after x amount of ms.
-   # sentinel['down_after_milliseconds'] = 10000
-
-   ## Specifies the failover timeout in milliseconds. It is used in many ways:
-   ##
-   ## - The time needed to re-start a failover after a previous failover was
-   ##   already tried against the same primary by a given Sentinel, is two
-   ##   times the failover timeout.
-   ##
-   ## - The time needed for a replica replicating to a wrong primary according
-   ##   to a Sentinel current configuration, to be forced to replicate
-   ##   with the right primary, is exactly the failover timeout (counting since
-   ##   the moment a Sentinel detected the misconfiguration).
-   ##
-   ## - The time needed to cancel a failover that is already in progress but
-   ##   did not produced any configuration change (REPLICAOF NO ONE yet not
-   ##   acknowledged by the promoted replica).
-   ##
-   ## - The maximum time a failover in progress waits for all the replica to be
-   ##   reconfigured as replicas of the new primary. However even after this time
-   ##   the replicas will be reconfigured by the Sentinels anyway, but not with
-   ##   the exact parallel-syncs progression as specified.
-   # sentinel['failover_timeout'] = 60000
-
-   ## Enable service discovery for Prometheus
-   consul['enable'] = true
-   consul['monitoring_service_discovery'] = true
-
-   ## The IPs of the Consul server nodes
-   ## You can also use FQDNs and intermix them with IPs
-   consul['configuration'] = {
-      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
-   }
-
-   # Set the network addresses that the exporters will listen on
-   node_exporter['listen_address'] = '0.0.0.0:9100'
-   redis_exporter['listen_address'] = '0.0.0.0:9121'
-
-   # Disable auto migrations
-   gitlab_rails['auto_migrate'] = false
-   ```
-
-1. To prevent database migrations from running on upgrade, run:
-
-   ```shell
-   sudo touch /etc/gitlab/skip-auto-reconfigure
-   ```
-
-   Only the primary GitLab application server should handle migrations.
-
-1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
-1. Go through the steps again for all the other Consul/Sentinel nodes, and
-   make sure you set up the correct IPs.
 
 <div align="right">
   <a type="button" class="btn btn-default" href="#setup-components">
     Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
   </a>
 </div>
````
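The Sentinel quorum guidance quoted in the removed comments above reduces to simple arithmetic. A minimal sketch, assuming the three-Sentinel layout used in this document (the variable names are illustrative, not Omnibus GitLab settings):

```shell
# Illustrative only: compare the configured quorum against the Sentinel
# majority for a 3-Sentinel deployment, as described in the comments above.
sentinels=3
quorum=2
majority=$(( sentinels / 2 + 1 ))

if [ "$quorum" -gt "$sentinels" ]; then
  echo "invalid: quorum must not be greater than the number of Sentinels"
else
  echo "failover starts when $quorum of $sentinels Sentinels agree (majority is $majority)"
fi
```

A quorum below the majority makes failover more sensitive; above it, more conservative, exactly as the configuration comments explain.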
````diff
 ## Configure PostgreSQL
 
 In this section, you'll be guided through configuring an external PostgreSQL database
 to be used with GitLab.
 
 ### Provide your own PostgreSQL instance
 
 If you're hosting GitLab on a cloud provider, you can optionally use a
 managed service for PostgreSQL. For example, AWS offers a managed Relational
 Database Service (RDS) that runs PostgreSQL.
 
 If you use a cloud-managed service, or provide your own PostgreSQL:
 
 1. Set up PostgreSQL according to the
    [database requirements document](../../install/requirements.md#database).
 1. Set up a `gitlab` username with a password of your choice. The `gitlab` user
    needs privileges to create the `gitlabhq_production` database.
 1. Configure the GitLab application servers with the appropriate details.
    This step is covered in [Configuring the GitLab Rails application](#configure-gitlab-rails).
 
 ### Standalone PostgreSQL using Omnibus GitLab
 
 The following IPs will be used as an example:
 
-- `10.6.0.31`: PostgreSQL primary
-- `10.6.0.32`: PostgreSQL secondary 1
-- `10.6.0.33`: PostgreSQL secondary 2
+- `10.6.0.21`: PostgreSQL primary
+- `10.6.0.22`: PostgreSQL secondary 1
+- `10.6.0.23`: PostgreSQL secondary 2
 
 First, make sure to [install](https://about.gitlab.com/install/)
 the Linux GitLab package **on each node**. Following the steps,
 install the necessary dependencies from step 1, and add the
 GitLab package repository from step 2. When installing GitLab
 in the second step, do not supply the `EXTERNAL_URL` value.
 
 #### PostgreSQL primary node
 
 1. SSH into the PostgreSQL primary node.
 1. Generate a password hash for the PostgreSQL username/password pair. This assumes you will use the default
    username of `gitlab` (recommended). The command will request a password
    and confirmation. Use the value that is output by this command in the next
    step as the value of `<postgresql_password_hash>`:
 
    ```shell
    sudo gitlab-ctl pg-password-md5 gitlab
    ```
 
 1. Generate a password hash for the PgBouncer username/password pair. This assumes you will use the default
    username of `pgbouncer` (recommended). The command will request a password
    and confirmation. Use the value that is output by this command in the next
    step as the value of `<pgbouncer_password_hash>`:
 
    ```shell
    sudo gitlab-ctl pg-password-md5 pgbouncer
    ```
 
 1. Generate a password hash for the Consul database username/password pair. This assumes you will use the default
    username of `gitlab-consul` (recommended). The command will request a password
    and confirmation. Use the value that is output by this command in the next
    step as the value of `<consul_password_hash>`:
 
    ```shell
    sudo gitlab-ctl pg-password-md5 gitlab-consul
    ```
 
 1. On the primary database node, edit `/etc/gitlab/gitlab.rb` replacing values noted in the
    `# START user configuration` section:
 
    ```ruby
    # Disable all components except PostgreSQL and Repmgr and Consul
    roles ['postgres_role']
 
    # PostgreSQL configuration
    postgresql['listen_address'] = '0.0.0.0'
    postgresql['hot_standby'] = 'on'
    postgresql['wal_level'] = 'replica'
    postgresql['shared_preload_libraries'] = 'repmgr_funcs'
 
    # Disable automatic database migrations
    gitlab_rails['auto_migrate'] = false
 
    # Configure the Consul agent
    consul['services'] = %w(postgresql)
 
    # START user configuration
    # Please set the real values as explained in Required Information section
    #
    # Replace PGBOUNCER_PASSWORD_HASH with a generated md5 value
    postgresql['pgbouncer_user_password'] = '<pgbouncer_password_hash>'
    # Replace POSTGRESQL_PASSWORD_HASH with a generated md5 value
    postgresql['sql_user_password'] = '<postgresql_password_hash>'
    # Set `max_wal_senders` to one more than the number of database nodes in the cluster.
    # This is used to prevent replication from using up all of the
    # available database connections.
    postgresql['max_wal_senders'] = 4
    postgresql['max_replication_slots'] = 4
 
    # Replace XXX.XXX.XXX.XXX/YY with Network Address
    postgresql['trust_auth_cidr_addresses'] = %w(10.6.0.0/24)
````
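The `gitlab-ctl pg-password-md5` commands above produce PostgreSQL-style MD5 password hashes: the literal prefix `md5` followed by the MD5 digest of the password concatenated with the username. A hand-rolled sketch for illustration only (use the `gitlab-ctl` helper on real nodes; the password value here is a placeholder):

```shell
# Compute a PostgreSQL-style md5 password hash for the 'gitlab' user.
# 'example-password' is a placeholder, not a recommended value.
username='gitlab'
password='example-password'
hash="md5$(printf '%s%s' "$password" "$username" | md5sum | awk '{print $1}')"
echo "$hash"
```

The result is always 35 characters: `md5` plus 32 hex digits.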
@@ -801,38 +554,367 @@ SSH into the **secondary node**:
...
@@ -801,38 +554,367 @@ SSH into the **secondary node**:
ok: run: repmgrd: (pid 19068) 0s
ok: run: repmgrd: (pid 19068) 0s
```
```
Before moving on, make sure the databases are configured correctly. Run the
Before moving on, make sure the databases are configured correctly. Run the
following command on the
**primary**
node to verify that replication is working
following command on the
**primary**
node to verify that replication is working
properly and the secondary nodes appear in the cluster:
properly and the secondary nodes appear in the cluster:
```
shell
gitlab-ctl repmgr cluster show
```
The output should be similar to the following:
```
plaintext
Role | Name | Upstream | Connection String
----------+---------|-----------|------------------------------------------------
* master | MASTER | | host=<primary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
standby | STANDBY | MASTER | host=<secondary_node_name> user=gitlab_repmgr dbname=gitlab_repmgr
```
If the 'Role' column for any node says "FAILED", check the
[
Troubleshooting section
](
troubleshooting.md
)
before proceeding.
Also, check that the
`repmgr-check-master`
command works successfully on each node:
```
shell
su - gitlab-consul
gitlab-ctl repmgr-check-master
||
echo
'This node is a standby repmgr node'
```
This command relies on exit codes to tell Consul whether a particular node is a master
or secondary. The most important thing here is that this command does not produce errors.
If there are errors it's most likely due to incorrect
`gitlab-consul`
database user permissions.
Check the
[
Troubleshooting section
](
troubleshooting.md
)
before proceeding.
<div
align=
"right"
>
<a
type=
"button"
class=
"btn btn-default"
href=
"#setup-components"
>
Back to setup components
<i
class=
"fa fa-angle-double-up"
aria-hidden=
"true"
></i>
</a>
</div>
## Configure PgBouncer
Now that the PostgreSQL servers are all set up, let's configure PgBouncer.
The following IPs will be used as an example:
-
`10.6.0.31`
: PgBouncer 1
-
`10.6.0.32`
: PgBouncer 2
-
`10.6.0.33`
: PgBouncer 3
1.
On each PgBouncer node, edit
`/etc/gitlab/gitlab.rb`
, and replace
`<consul_password_hash>`
and
`<pgbouncer_password_hash>`
with the
password hashes you
[
set up previously
](
#postgresql-primary-node
)
:
   ```ruby
   # Disable all components except Pgbouncer and Consul agent
   roles ['pgbouncer_role']

   # Configure PgBouncer
   pgbouncer['admin_users'] = %w(pgbouncer gitlab-consul)
   pgbouncer['users'] = {
     'gitlab-consul': {
       password: '<consul_password_hash>'
     },
     'pgbouncer': {
       password: '<pgbouncer_password_hash>'
     }
   }

   # Configure Consul agent
   consul['watchers'] = %w(postgresql)
   consul['enable'] = true
   consul['configuration'] = {
     retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13)
   }

   # Enable service discovery for Prometheus
   consul['monitoring_service_discovery'] = true

   # Set the network addresses that the exporters will listen on
   node_exporter['listen_address'] = '0.0.0.0:9100'
   ```
1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure)
   for the changes to take effect.
1. Create a `.pgpass` file so Consul is able to
   reload PgBouncer. Enter the PgBouncer password twice when asked:

   ```shell
   gitlab-ctl write-pgpass --host 127.0.0.1 --database pgbouncer --user pgbouncer --hostuser gitlab-consul
   ```
1. Ensure each node is talking to the current master:

   ```shell
   gitlab-ctl pgb-console
   # You will be prompted for PGBOUNCER_PASSWORD
   ```
   If there is an error `psql: ERROR: Auth failed` after typing in the
   password, ensure you previously generated the MD5 password hashes with the correct
   format. The correct format is to concatenate the password and the username:
   `PASSWORDUSERNAME`. For example, `Sup3rS3cr3tpgbouncer` would be the text
   needed to generate an MD5 password hash for the `pgbouncer` user.
1. Once the console prompt is available, run the following queries:

   ```shell
   show databases ; show clients ;
   ```
   The output should be similar to the following:

   ```plaintext
           name         |  host       | port |      database       | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
   ---------------------+-------------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
    gitlabhq_production | MASTER_HOST | 5432 | gitlabhq_production |            |        20 |            0 |           |               0 |                   0
    pgbouncer           |             | 6432 | pgbouncer           | pgbouncer  |         2 |            0 | statement |               0 |                   0
   (2 rows)

    type |   user    |      database       |  state  |   addr         | port  | local_addr | local_port |    connect_time     |    request_time     |    ptr    | link | remote_pid | tls
   ------+-----------+---------------------+---------+----------------+-------+------------+------------+---------------------+---------------------+-----------+------+------------+-----
    C    | pgbouncer | pgbouncer           | active  | 127.0.0.1      | 56846 | 127.0.0.1  |       6432 | 2017-08-21 18:09:59 | 2017-08-21 18:10:48 | 0x22b3880 |      |          0 |
   (2 rows)
   ```
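The `PASSWORDUSERNAME` concatenation rule described in the steps above can be sketched with standard tools. This is only an illustration with the example password from the text (assuming GNU coreutils `md5sum`); on Omnibus nodes, `gitlab-ctl pg-password-md5 pgbouncer` prompts for the password and prints the hash for you:

```shell
# Sketch: MD5 password hash for user `pgbouncer` with example password
# `Sup3rS3cr3t`. The hashed text is the password immediately followed by
# the username, with no separator.
printf '%s' 'Sup3rS3cr3tpgbouncer' | md5sum | cut -d ' ' -f 1
```

The result is a 32-character hex string, which is the value to use in place of `<pgbouncer_password_hash>`.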
<div align="right">
  <a type="button" class="btn btn-default" href="#setup-components">
    Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
  </a>
</div>
### Configure the internal load balancer
If you're running more than one PgBouncer node as recommended, then at this time you'll need to set
up a TCP internal load balancer to serve each correctly.
The following IP will be used as an example:
- `10.6.0.40`: Internal Load Balancer
Here's how you could do it with [HAProxy](https://www.haproxy.org/):
```plaintext
global
    log /dev/log local0
    log localhost local1 notice
    log stdout format raw local0

defaults
    log global
    default-server inter 10s fall 3 rise 2
    balance leastconn

frontend internal-pgbouncer-tcp-in
    bind *:6432
    mode tcp
    option tcplog

    default_backend pgbouncer

backend pgbouncer
    mode tcp
    option tcp-check

    server pgbouncer1 10.6.0.31:6432 check
    server pgbouncer2 10.6.0.32:6432 check
    server pgbouncer3 10.6.0.33:6432 check
```
Refer to your preferred Load Balancer's documentation for further guidance.
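The `check` health probes in the configuration above simply attempt a TCP connection to each backend. As a rough, hypothetical equivalent (using the bash-only `/dev/tcp` device, so this is a sketch rather than a production health check):

```shell
# Hypothetical probe: report "up" if a TCP connection to host:port can be
# opened, "down" otherwise. HAProxy's `option tcp-check` does essentially this.
probe() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo up || echo down
}

probe 127.0.0.1 1   # port 1 is closed on a typical host, so this reports "down"
```

A backend is removed from rotation after `fall 3` consecutive failed probes and re-added after `rise 2` successes, per the `default-server` line above.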
<div align="right">
  <a type="button" class="btn btn-default" href="#setup-components">
    Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
  </a>
</div>
## Configure Redis Cache
Using [Redis](https://redis.io/) in a scalable environment is possible using a **Primary** x **Replica**
topology with a [Redis Sentinel](https://redis.io/topics/sentinel) service to watch and automatically
start the failover procedure.

Redis requires authentication if used with Sentinel. See
[Redis Security](https://redis.io/topics/security) documentation for more
information. We recommend using a combination of a Redis password and tight
firewall rules to secure your Redis service.

You are highly encouraged to read the [Redis Sentinel](https://redis.io/topics/sentinel) documentation
before configuring Redis with GitLab to fully understand the topology and
architecture.

In this section, you'll be guided through configuring an external Redis instance
to be used with GitLab. The following IPs will be used as an example:

- `10.6.0.51`: Redis - Cache Primary
- `10.6.0.52`: Redis - Cache Replica 1
- `10.6.0.53`: Redis - Cache Replica 2
### Provide your own Redis instance
Managed Redis from cloud providers such as AWS ElastiCache will work. If these
services support high availability, be sure it is **not** the Redis Cluster type.

Redis version 5.0 or higher is required, as this is what ships with
Omnibus GitLab packages starting with GitLab 13.0. Older Redis versions
do not support an optional count argument to SPOP which is now required for
[Merge Trains](../../ci/merge_request_pipelines/pipelines_for_merged_results/merge_trains/index.md).

Note the Redis node's IP address or hostname, port, and password (if required).
These will be necessary when configuring the
[GitLab application servers](#configure-gitlab-rails) later.
### Standalone Redis using Omnibus GitLab
This is the section where we install and set up the new Redis instances.
The requirements for a Redis setup are the following:
1. All Redis nodes must be able to talk to each other and accept incoming
   connections over Redis (`6379`) and Sentinel (`26379`) ports (unless you
   change the default ones).
1. The server that hosts the GitLab application must be able to access the
   Redis nodes.
1. Protect the nodes from access from external networks
   ([Internet](https://gitlab.com/gitlab-org/gitlab-foss/uploads/c4cc8cd353604bd80315f9384035ff9e/The_Internet_IT_Crowd.png)),
   using a firewall.

NOTE: **Note:**
Redis nodes (both primary and replica) will need the same password defined in
`redis['password']`. At any time during a failover, the Sentinels can
reconfigure a node and change its status from primary to replica and vice versa.
#### Configuring the primary Redis instance
1. SSH into the **Primary** Redis server.
1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
   package you want using **steps 1 and 2** from the GitLab downloads page.
   - Make sure you select the correct Omnibus package, with the same version
     and type (Community, Enterprise editions) of your current install.
   - Do not complete any other steps on the download page.
1. Edit `/etc/gitlab/gitlab.rb` and add the contents:

   ```ruby
   # Specify server role as 'redis_master_role'
   roles ['redis_master_role']

   # IP address pointing to a local IP that the other machines can reach to.
   # You can also set bind to '0.0.0.0' which listens on all interfaces.
   # If you really need to bind to an externally accessible IP, make
   # sure you add extra firewall rules to prevent unauthorized access.
   redis['bind'] = '10.6.0.51'

   # Define a port so Redis can listen for TCP requests which will allow other
   # machines to connect to it.
   redis['port'] = 6379

   # Set up password authentication for Redis (use the same password in all nodes).
   redis['password'] = 'redis-password-goes-here'

   ## Enable service discovery for Prometheus
   consul['enable'] = true
   consul['monitoring_service_discovery'] = true

   ## The IPs of the Consul server nodes
   ## You can also use FQDNs and intermix them with IPs
   consul['configuration'] = {
      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
   }

   # Set the network addresses that the exporters will listen on
   node_exporter['listen_address'] = '0.0.0.0:9100'
   redis_exporter['listen_address'] = '0.0.0.0:9121'
   ```

1. Only the primary GitLab application server should handle migrations. To
   prevent database migrations from running on upgrade, add the following
   configuration to your `/etc/gitlab/gitlab.rb` file:

   ```ruby
   gitlab_rails['auto_migrate'] = false
   ```

1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure)
   for the changes to take effect.

NOTE: **Note:**
You can specify multiple roles like sentinel and Redis as:
`roles ['redis_sentinel_role', 'redis_master_role']`.
Read more about [roles](https://docs.gitlab.com/omnibus/roles/).
#### Configuring the replica Redis instances
1. SSH into the **replica** Redis server.
1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab
   package you want using **steps 1 and 2** from the GitLab downloads page.
   - Make sure you select the correct Omnibus package, with the same version
     and type (Community, Enterprise editions) of your current install.
   - Do not complete any other steps on the download page.
1. Edit `/etc/gitlab/gitlab.rb` and add the contents:

   ```ruby
   # Specify server role as 'redis_replica_role'
   roles ['redis_replica_role']

   # IP address pointing to a local IP that the other machines can reach to.
   # You can also set bind to '0.0.0.0' which listens on all interfaces.
   # If you really need to bind to an externally accessible IP, make
   # sure you add extra firewall rules to prevent unauthorized access.
   redis['bind'] = '10.6.0.52'

   # Define a port so Redis can listen for TCP requests which will allow other
   # machines to connect to it.
   redis['port'] = 6379

   # The same password for Redis authentication you set up for the primary node.
   redis['password'] = 'redis-password-goes-here'

   # The IP of the primary Redis node.
   redis['master_ip'] = '10.6.0.51'

   # Port of primary Redis server, uncomment to change to non default. Defaults
   # to `6379`.
   #redis['master_port'] = 6379

   ## Enable service discovery for Prometheus
   consul['enable'] = true
   consul['monitoring_service_discovery'] = true

   ## The IPs of the Consul server nodes
   ## You can also use FQDNs and intermix them with IPs
   consul['configuration'] = {
      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
   }

   # Set the network addresses that the exporters will listen on
   node_exporter['listen_address'] = '0.0.0.0:9100'
   redis_exporter['listen_address'] = '0.0.0.0:9121'
   ```

1. To prevent reconfigure from running automatically on upgrade, run:

   ```shell
   sudo touch /etc/gitlab/skip-auto-reconfigure
   ```

1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure)
   for the changes to take effect.

1. Go through the steps again for all the other replica nodes, and
   make sure to set up the IPs correctly.

NOTE: **Note:**
You can specify multiple roles like sentinel and Redis as:
`roles ['redis_sentinel_role', 'redis_master_role']`.
Read more about [roles](https://docs.gitlab.com/omnibus/roles/).

These values don't have to be changed again in `/etc/gitlab/gitlab.rb` after
a failover, as the nodes will be managed by the
[Sentinels](#configure-consul-and-sentinel), and even after a
`gitlab-ctl reconfigure`, they will get their configuration restored by
the same Sentinels.

Advanced [configuration options](https://docs.gitlab.com/omnibus/settings/redis.html)
are supported and can be added if needed.

<div align="right">
  <a type="button" class="btn btn-default" href="#setup-components">
    Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
  </a>
</div>
## Configure Redis Queues

The following IPs will be used as an example:

- `10.6.0.61`: Redis - Queues Primary
- `10.6.0.62`: Redis - Queues Replica 1
- `10.6.0.63`: Redis - Queues Replica 2

## Configure Sentinel Cache

NOTE: **Note:**
If you are using an external Redis Sentinel instance, be sure
to exclude the `requirepass` parameter from the Sentinel
configuration. This parameter will cause clients to report `NOAUTH
Authentication required.`.
[Redis Sentinel 3.2.x does not support
password authentication](https://github.com/antirez/redis/issues/3279).

Now that the Redis servers are all set up, let's configure the Sentinel
servers. The following IPs will be used as an example:

- `10.6.0.71`: Sentinel - Cache 1
- `10.6.0.72`: Sentinel - Cache 2
- `10.6.0.73`: Sentinel - Cache 3

To configure the Sentinel:

1. SSH into the server that will host Consul/Sentinel.
1. [Download/install](https://about.gitlab.com/install/) the
   Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
   GitLab downloads page.
   - Make sure you select the correct Omnibus package, with the same version
     the GitLab application is running.
   - Do not complete any other steps on the download page.
1. Edit `/etc/gitlab/gitlab.rb` and add the contents:

   ```ruby
   roles ['redis_sentinel_role', 'consul_role']

   # Must be the same in every sentinel node
   redis['master_name'] = 'gitlab-redis-cache'

   # The same password for Redis authentication you set up for the primary node.
   redis['master_password'] = 'redis-password-goes-here'

   # The IP of the primary Redis node.
   redis['master_ip'] = '10.6.0.51'

   # Define a port so Redis can listen for TCP requests which will allow other
   # machines to connect to it.
   redis['port'] = 6379

   # Port of primary Redis server, uncomment to change to non default. Defaults
   # to `6379`.
   #redis['master_port'] = 6379

   ## Configure Sentinel
   sentinel['bind'] = '10.6.0.71'

   # Port that Sentinel listens on, uncomment to change to non default. Defaults
   # to `26379`.
   # sentinel['port'] = 26379

   ## Quorum must reflect the number of voting Sentinels it takes to start a failover.
   ## Value must NOT be greater than the number of Sentinels.
   ##
   ## The quorum can be used to tune Sentinel in two ways:
   ## 1. If the quorum is set to a value smaller than the majority of Sentinels
   ##    we deploy, we are basically making Sentinel more sensitive to primary failures,
   ##    triggering a failover as soon as even just a minority of Sentinels is no longer
   ##    able to talk with the primary.
   ## 1. If the quorum is set to a value greater than the majority of Sentinels, we are
   ##    making Sentinel able to failover only when there are a very large number (larger
   ##    than majority) of well connected Sentinels which agree about the primary being down.
   sentinel['quorum'] = 2

   ## Consider unresponsive server down after x amount of ms.
   # sentinel['down_after_milliseconds'] = 10000

   ## Specifies the failover timeout in milliseconds. It is used in many ways:
   ##
   ## - The time needed to re-start a failover after a previous failover was
   ##   already tried against the same primary by a given Sentinel, is two
   ##   times the failover timeout.
   ##
   ## - The time needed for a replica replicating to a wrong primary according
   ##   to a Sentinel current configuration, to be forced to replicate
   ##   with the right primary, is exactly the failover timeout (counting since
   ##   the moment a Sentinel detected the misconfiguration).
   ##
   ## - The time needed to cancel a failover that is already in progress but
   ##   did not produce any configuration change (REPLICAOF NO ONE yet not
   ##   acknowledged by the promoted replica).
   ##
   ## - The maximum time a failover in progress waits for all the replicas to be
   ##   reconfigured as replicas of the new primary. However even after this time
   ##   the replicas will be reconfigured by the Sentinels anyway, but not with
   ##   the exact parallel-syncs progression as specified.
   # sentinel['failover_timeout'] = 60000

   ## Enable service discovery for Prometheus
   consul['enable'] = true
   consul['monitoring_service_discovery'] = true

   ## The IPs of the Consul server nodes
   ## You can also use FQDNs and intermix them with IPs
   consul['configuration'] = {
      retry_join: %w(10.6.0.11 10.6.0.12 10.6.0.13),
   }

   # Set the network addresses that the exporters will listen on
   node_exporter['listen_address'] = '0.0.0.0:9100'
   redis_exporter['listen_address'] = '0.0.0.0:9121'

   # Disable auto migrations
   gitlab_rails['auto_migrate'] = false
   ```

1. To prevent database migrations from running on upgrade, run:

   ```shell
   sudo touch /etc/gitlab/skip-auto-reconfigure
   ```

   Only the primary GitLab application server should handle migrations.

1. [Reconfigure Omnibus GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure)
   for the changes to take effect.

1. Go through the steps again for all the other Consul/Sentinel nodes, and
   make sure you set up the correct IPs.

<div align="right">
  <a type="button" class="btn btn-default" href="#setup-components">
    Back to setup components <i class="fa fa-angle-double-up" aria-hidden="true"></i>
  </a>
</div>
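To make the quorum guidance above concrete, here is a quick numeric sketch for a three-Sentinel deployment (the variable names are illustrative, not GitLab settings):

```shell
SENTINELS=3                         # e.g. Sentinel - Cache 1..3
QUORUM=2                            # sentinel['quorum'] = 2
MAJORITY=$(( SENTINELS / 2 + 1 ))   # votes needed to authorize a failover

# With quorum <= majority, a failover is *triggered* as soon as 2 Sentinels
# agree the primary is down; since the majority here is also 2, those same
# votes are enough to authorize the failover.
echo "quorum=$QUORUM majority=$MAJORITY"
```

Raising the quorum above the majority only makes triggering stricter; the majority requirement for authorizing the failover always applies.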
## Configure Sentinel Queues

The following IPs will be used as an example:

- `10.6.0.81`: Sentinel - Queues 1
- `10.6.0.82`: Sentinel - Queues 2
- `10.6.0.83`: Sentinel - Queues 3

## Configure Gitaly

Deploying Gitaly in its own server can benefit GitLab installations that are

...

Below we describe how to configure two Gitaly servers, with IPs and
domain names:

- `10.6.0.91`: Gitaly 1 (`gitaly1.internal`)
- `10.6.0.92`: Gitaly 2 (`gitaly2.internal`)

The secret token is assumed to be `gitalysecret` and that
your GitLab installation has three repository storages:

...

Sidekiq requires connection to the Redis, PostgreSQL and Gitaly instance.
The following IPs will be used as an example:

- `10.6.0.101`: Sidekiq 1
- `10.6.0.102`: Sidekiq 2
- `10.6.0.103`: Sidekiq 3
- `10.6.0.104`: Sidekiq 4

To configure the Sidekiq nodes, on each one:

...

accordingly where we've found 50% achieves a good balance but this is dependent
on workload.

This section describes how to configure the GitLab application (Rails) component.
The following IPs will be used as an example:

- `10.6.0.111`: GitLab application 1
- `10.6.0.112`: GitLab application 2
- `10.6.0.113`: GitLab application 3

On each node perform the following:

1. If you're [using NFS](#configure-nfs-optional):

...

The Omnibus GitLab package can be used to configure a standalone Monitoring node
running [Prometheus](../monitoring/prometheus/index.md) and
[Grafana](../monitoring/performance/grafana_configuration.md).

The following IP will be used as an example:

- `10.6.0.121`: Prometheus

To configure the Monitoring node:

1. SSH into the Monitoring node.
1. [Download/install](https://about.gitlab.com/install/) the Omnibus GitLab

...
Preview
Markdown
is supported
0%
Try again
or
attach a new file
Attach a file
Cancel
You are about to add
0
people
to the discussion. Proceed with caution.
Finish editing this message first!
Cancel
Please
register
or
sign in
to comment