Starting with GitLab 13.5, Patroni is available for _experimental_ use with Geo
primary and secondary nodes. Due to its experimental nature, Patroni support is
subject to change without notice.

Patroni support is intended to replace `repmgr` as a [High Availability PostgreSQL solution](../../postgresql/replication_and_failover.md)
on the primary node, and can also be used for PostgreSQL HA on a secondary node.

This experimental implementation has the following limitations:

- Whenever a new Leader is elected, the PgBouncer instance must be reconfigured
  to point to the new Leader.
- Whenever a new Leader is elected on the primary node, the Standby Leader on
  the secondary needs to be reconfigured to point to the new Leader.
- Whenever `gitlab-ctl reconfigure` runs on a Patroni Leader instance, there's a
  chance the node will be demoted due to the brief restart that is required. To
  avoid this, you can pause auto-failover by running `gitlab-ctl patroni pause`,
  as shown below. After a reconfigure, it unpauses on its own.
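
For example, on the current Leader node a planned reconfigure could be wrapped
like this. This is a minimal sketch using only the commands mentioned above
(shown with `sudo`; adjust to how you normally run `gitlab-ctl`):

```shell
# Pause Patroni auto-failover so the brief restart does not demote the Leader
sudo gitlab-ctl patroni pause

# Apply the configuration change; auto-failover unpauses automatically afterwards
sudo gitlab-ctl reconfigure
```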

For instructions about how to set up Patroni on the primary node, see the
[PostgreSQL replication and failover with Omnibus GitLab](../../postgresql/replication_and_failover.md#patroni) page.

A production-ready and secure setup requires at least three Patroni instances on
the primary, and a similar configuration on the secondary nodes. Be sure to use
password credentials and other database best practices.
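
As one sketch of the password credentials point, these settings usually live in
`/etc/gitlab/gitlab.rb`. The attribute names, user names, and the
`gitlab-ctl pg-password-md5` command below are taken from the Omnibus
PostgreSQL replication documentation linked above; verify them against your
GitLab version before relying on them:

```ruby
# /etc/gitlab/gitlab.rb (sketch): store MD5 hashes, not plaintext passwords.
# Generate the hashes with, for example:
#   sudo gitlab-ctl pg-password-md5 gitlab
#   sudo gitlab-ctl pg-password-md5 gitlab_replicator
postgresql['sql_user_password'] = 'GITLAB_USER_PASSWORD_HASH'
postgresql['sql_replication_password'] = 'REPLICATION_USER_PASSWORD_HASH'
```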
Similar to `repmgr`, using Patroni on a secondary node is optional.

To set up database replication with Patroni on a secondary node, configure a
_permanent replication slot_ on the primary node's Patroni cluster, and ensure
password authentication is used.
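
A minimal sketch of what the permanent replication slot could look like,
assuming the Omnibus `patroni['replication_slots']` setting and an example slot
name of `geo_secondary` (adjust both to your environment):

```ruby
# /etc/gitlab/gitlab.rb on the primary node's Patroni instances (sketch)
patroni['replication_slots'] = {
  'geo_secondary' => { 'type' => 'physical' }
}
```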

'PATRONI_SECONDARY1_IP/32', 'PATRONI_SECONDARY2_IP/32', 'PATRONI_SECONDARY3_IP/32' # we list all secondary instances as they can all become a Standby Leader
# any other instance that needs access to the database as per documentation