Commit b2a0effa authored by Craig Norris

Update page to match 5K page styling

parent 9133aac7
---
reading_time: true
stage: Enablement
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
---
# Reference architecture: up to 50,000 users
This page describes the GitLab reference architecture for up to 50,000 users. For a
full list of reference architectures, see
[Available reference architectures](index.md#available-reference-architectures).
> - **Supported users (approximate):** 50,000
> - **High Availability:** Yes
> - **Test requests per second (RPS) rates:** API: 1000 RPS, Web: 100 RPS, Git: 100 RPS
| Service | Nodes | Configuration | GCP | AWS | Azure |
|-----------------------------------------|-------------|-------------------------|-----------------|--------------|----------|
| External load balancing node | 1 | 8 vCPU, 7.2GB memory | n1-highcpu-8 | c5.2xlarge | F8s v2 |
| Consul | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
| PostgreSQL | 3 | 16 vCPU, 60GB memory | n1-standard-16 | m5.4xlarge | D16s v3 |
| PgBouncer | 3 | 2 vCPU, 1.8GB memory | n1-highcpu-2 | c5.large | F2s v2 |
| Internal load balancing node | 1 | 8 vCPU, 7.2GB memory | n1-highcpu-8 | c5.2xlarge | F8s v2 |
| Redis - Cache | 3 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
| Redis - Queues / Shared State | 3 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
| Redis Sentinel - Cache | 3 | 1 vCPU, 1.7GB memory | g1-small | t2.small | B1MS |
| Redis Sentinel - Queues / Shared State | 3 | 1 vCPU, 1.7GB memory | g1-small | t2.small | B1MS |
| Gitaly | 2 (minimum) | 64 vCPU, 240GB memory | n1-standard-64 | m5.16xlarge | D64s v3 |
| Sidekiq | 4 | 4 vCPU, 15GB memory | n1-standard-4 | m5.xlarge | D4s v3 |
| GitLab Rails | 12 | 32 vCPU, 28.8GB memory | n1-highcpu-32 | c5.9xlarge | F32s v2 |
| Monitoring node | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
| Object Storage | n/a | n/a | n/a | n/a | n/a |
| NFS Server | 1 | 4 vCPU, 3.6GB memory | n1-highcpu-4 | c5.xlarge | F4s v2 |
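
In the previous table, Gitaly is listed with a two-node minimum. As a rough sketch of how the GitLab Rails nodes are typically pointed at multiple Gitaly storages in `/etc/gitlab/gitlab.rb` (the hostnames `gitaly1.internal` and `gitaly2.internal`, the storage names, and the token value are placeholders for illustration, not values prescribed by this architecture):

```ruby
# /etc/gitlab/gitlab.rb on the GitLab Rails (application) nodes.
# Illustration only: hostnames, storage names, and the token are placeholders.
git_data_dirs({
  'default'  => { 'gitaly_address' => 'tcp://gitaly1.internal:8075' },
  'storage1' => { 'gitaly_address' => 'tcp://gitaly2.internal:8075' },
})

# Token shared between the Rails nodes and the Gitaly nodes.
gitlab_rails['gitaly_token'] = 'gitaly-token-placeholder'
```

Each storage name maps to one Gitaly node, so growing beyond the two-node minimum is a matter of adding further entries. The authoritative settings for this architecture are in the Gitaly configuration steps later on this page.
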
The Google Cloud Platform (GCP) architectures were built and tested using the
[Intel Xeon E5 v3 (Haswell)](https://cloud.google.com/compute/docs/cpu-platforms)
CPU platform. On different hardware you may find that adjustments, either lower
or higher, are required for your CPU or node counts. For more information, see
our [Sysbench](https://github.com/akopytov/sysbench)-based
[CPU benchmark](https://gitlab.com/gitlab-org/quality/performance/-/wikis/Reference-Architectures/GCP-CPU-Benchmarks).
For data objects (such as LFS, Uploads, or Artifacts), an
[object storage service](#configure-the-object-storage) is recommended instead
of NFS where possible, due to better performance and availability. Since this
doesn't require a node to be set up, *Object Storage* is noted as not
applicable (n/a) in the previous table.
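
As a rough sketch of what object storage configuration can look like in `/etc/gitlab/gitlab.rb`, the following uses the consolidated object storage settings available in recent GitLab versions, with hypothetical Google Cloud Storage buckets. The provider details and bucket names are placeholders; the settings to use for this architecture are described in [Configure the object storage](#configure-the-object-storage).

```ruby
# /etc/gitlab/gitlab.rb — illustration only. The GCP project, key location,
# and bucket names below are placeholders, not values from this page.
gitlab_rails['object_store']['enabled'] = true
gitlab_rails['object_store']['connection'] = {
  'provider' => 'Google',
  'google_project' => 'example-project',
  'google_json_key_location' => '/etc/gitlab/gcs-key.json'
}
gitlab_rails['object_store']['objects']['artifacts']['bucket'] = 'example-gitlab-artifacts'
gitlab_rails['object_store']['objects']['lfs']['bucket'] = 'example-gitlab-lfs'
gitlab_rails['object_store']['objects']['uploads']['bucket'] = 'example-gitlab-uploads'
```
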
## Setup components
......