# We don't support setting a permanent replication slot for logical replication type
patroni['replication_slots'] = {
  'geo_secondary' => { 'type' => 'physical' }
...
...
@@ -559,14 +559,14 @@ Leader instance**:
#### Step 2. Configure the internal load balancer on the primary site
To avoid reconfiguring the Standby Leader on the secondary site whenever a new
Leader is elected on the primary site, we need to set up a TCP internal load
balancer which gives a single endpoint for connecting to the Patroni
cluster's Leader.
The Omnibus GitLab packages do not include a Load Balancer. Here's how you
could do it with [HAProxy](https://www.haproxy.org/).
The following IPs and names are used as an example:
- `10.6.0.21`: Patroni 1 (`patroni1.internal`)
- `10.6.0.22`: Patroni 2 (`patroni2.internal`)
...
...
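As a sketch, an HAProxy configuration for this internal load balancer could look like the following. The health check queries Patroni's REST API (port 8008 by default), which returns HTTP 200 only on the current Leader, so connections to port 5000 are always routed to the Leader. The hostnames follow the example above; the IPs, port numbers, and `maxconn` value are assumptions to adapt to your environment:

```plaintext
global
    log /dev/log local0

defaults
    log global
    default-server inter 10s fall 3 rise 2
    balance leastconn

frontend internal-postgresql-tcp-in
    bind *:5000
    mode tcp
    option tcplog
    default_backend postgresql

backend postgresql
    mode tcp
    option httpchk
    http-check expect status 200

    server patroni1.internal 10.6.0.21:5432 maxconn 100 check port 8008
    server patroni2.internal 10.6.0.22:5432 maxconn 100 check port 8008
```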
@@ -662,7 +662,7 @@ Follow the minimal configuration for the PgBouncer node:
NOTE:
If you are converting a secondary site to a Patroni Cluster, you must start
on the PostgreSQL instance. It becomes the Patroni Standby Leader instance,
and then you can switch over to another replica if you need to.
For each Patroni instance on the secondary site:
...
...
@@ -734,7 +734,7 @@ For each Patroni instance on the secondary site:
1. Before migrating, it is recommended that there is no replication lag between the primary and secondary sites and that replication is paused. In GitLab 13.2 and later, you can pause and resume replication with `gitlab-ctl geo-replication-pause` and `gitlab-ctl geo-replication-resume` on a Geo secondary database node.
1. Follow the [instructions to migrate repmgr to Patroni](../../postgresql/replication_and_failover.md#switching-from-repmgr-to-patroni). When configuring Patroni on each primary site database node, add `patroni['replication_slots'] = { '<slot_name>' => 'physical' }`
to `gitlab.rb` where `<slot_name>` is the name of the replication slot for your Geo secondary. This ensures that Patroni recognizes the replication slot as permanent and does not drop it upon restarting.
1. If database replication to the secondary was paused before migration, resume replication once Patroni is confirmed working on the primary.
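The pause and resume steps above can be sketched as the following commands, run on a Geo secondary database node (GitLab 13.2 and later):

```plaintext
# Before migrating the primary site to Patroni, pause Geo database replication
sudo gitlab-ctl geo-replication-pause

# After Patroni is confirmed working on the primary site, resume replication
sudo gitlab-ctl geo-replication-resume
```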
### Migrating a single PostgreSQL node to Patroni
...
...
@@ -750,7 +750,7 @@ With Patroni it's now possible to support that. In order to migrate the existing
1. [Configure a Standby Cluster](#step-4-configure-a-standby-cluster-on-the-secondary-site)
on that single node machine.
You end up with a "Standby Cluster" with a single node. That allows you to add more Patroni nodes later
by following the same instructions above.
### Configuring Patroni cluster for the tracking PostgreSQL database
...
...
@@ -934,8 +934,8 @@ Patroni implementation on Omnibus that do not allow us to manage two different
clusters on the same machine, we recommend setting up a new Patroni cluster for
the tracking database by following the same instructions above.
The secondary nodes backfill the new tracking database, and no data
synchronization is required.