Information on deploying GitLab onto EKS can be found in [Provisioning GitLab Cloud Native Hybrid on AWS EKS](gitlab_hybrid_on_aws.md).
## Use `eksctl`
Using `eksctl` enables the following when building an EKS Cluster (a sample invocation follows the list below):
- It can be part of CloudFormation IaC or [CLI (`eksctl`)](https://eksctl.io/) automation
- You have various cluster configuration options:
- Selection of operating system: Amazon Linux 2, Windows, Bottlerocket
- Selection of Hardware Architecture: x86, ARM, GPU
- Selection of Fargate backend
- Selection of Kubernetes version (the GitLab-managed clusters for your project's applications have [specific Kubernetes version requirements](../../user/infrastructure/clusters/connect/index.md#supported-cluster-versions))
- It can deploy high value-add items to the cluster, including:
- A bastion host to keep the cluster endpoint private and possibly perform performance testing.
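For illustration only, a minimal `eksctl` invocation is sketched below. The cluster name, region, Kubernetes version, and node group sizing are placeholder values, not recommendations; align them with your requirements and with the supported Kubernetes versions linked above.

```shell
# Sketch only: create a small EKS cluster with a single managed node group.
# The cluster name, region, version, and node sizing are placeholder values.
eksctl create cluster \
  --name gitlab-apps \
  --region us-east-1 \
  --version 1.24 \
  --nodegroup-name workers \
  --node-type m5.xlarge \
  --nodes 3
```

For repeatable builds, the same options can be captured in an `eksctl` cluster configuration file and passed with `--config-file`.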
# GitLab Site Reliability Engineering for AWS **(FREE SELF)**
## AWS known issues list
Known issues are gathered from within GitLab and from customer-reported issues. Customers successfully implement GitLab with a variety of "as a Service" components that GitLab has not specifically been designed for, nor has ongoing testing for. While GitLab does take partner technologies very seriously, the highlighting of known issues here is a convenience for implementers and it does not imply that GitLab has targeted compatibility with, nor carries any type of guarantee of running on, the partner technology where the issues occur. Consult individual issues to understand GitLab's stance and plans on any given known issue.
## Gitaly SRE considerations
Gitaly is an embedded service for Git Repository Storage. Gitaly and Gitaly Cluster have been engineered by GitLab to overcome fundamental challenges with horizontal scaling of the open source Git binaries that must be used on the service side of GitLab. Here is in-depth technical reading on the topic:
### Why Gitaly was built
If you would like to understand the underlying rationale for why GitLab had to invest in creating Gitaly, read the following minimal list of topics:
- [Git characteristics that make horizontal scaling difficult](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/DESIGN.md#git-characteristics-that-make-horizontal-scaling-difficult)
- [Git architectural characteristics and assumptions](https://gitlab.com/gitlab-org/gitaly/-/blob/master/doc/DESIGN.md#git-architectural-characteristics-and-assumptions)
Complete performance metrics should be collected for Gitaly instances for identification of bottlenecks, as they could have to do with disk IO, network IO or memory.
### Gitaly performance guidelines
Gitaly functions as the primary Git Repository Storage in GitLab. However, it's not simply a streaming file server. It also does a lot of demanding computing work, such as preparing and caching Git pack files, which informs some of the performance recommendations below.
NOTE:
All recommendations are for production configurations, including performance testing. For test configurations, like training or functional testing, you can use less expensive options. However, you should adjust or rebuild if performance is an issue.
#### Overall recommendations
- Production-grade Gitaly must be implemented on instance compute due to all of the above and below characteristics.
- Never use [burstable instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html) (such as `t2`, `t3`, `t4g`) for Gitaly.
- Always use at least the [AWS Nitro generation of instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances) to ensure many of the below concerns are automatically handled.
- If GitLab backup scripts are used, they need a temporary space location large enough to hold twice the current size of the Git file system. If that will be done on Gitaly servers, separate volumes should be used.
- Use Amazon Linux 2 to ensure that all [AWS oriented hardware and OS optimizations](https://aws.amazon.com/amazon-linux-2/faqs/) are maximized without additional configuration or SRE management.
#### CPU and memory recommendations
- The general GitLab Gitaly node recommendations for CPU and Memory assume relatively even loading across repositories. GPT (GitLab Performance Tool) testing of any non-characteristic repositories and/or SRE monitoring of Gitaly metrics may inform when to choose memory and/or CPU higher than general recommendations.
**To accommodate:**
- Git Pack file operations are memory and CPU intensive.
- If repository commit traffic is dense, large, or very frequent, then more CPU and Memory are required to handle the load. Patterns such as storing binaries and/or busy or large monorepos are examples that can cause high loading.
#### Disk I/O recommendations
- Use only SSD storage and the [class of EBS storage](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-volume-types.html) that suits your durability and speed requirements.
- When not using provisioned EBS I/O, EBS volume size determines the I/O level, so provisioning volumes that are much larger than needed can be the least expensive way to improve EBS I/O.
- If Gitaly performance monitoring shows signs of disk stress, choose one of the provisioned IOPS volume types. EBS provisioned IOPS volumes also have enhanced durability, which may be appealing for some implementations aside from performance considerations. A sample AWS CLI volume creation command follows this section's lists.
**To accommodate:**
- Gitaly storage is expected to be local (not NFS of any type including EFS).
- Gitaly servers also need disk space for building and caching Git pack files. This is above and beyond the permanent storage of your Git Repositories.
- Git Pack files are cached in Gitaly. Creation of pack files in temporary disk benefits from fast disk, and disk caching of pack files benefits from ample disk space.
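As a sketch only, a `gp3` volume with IOPS and throughput provisioned above the size-based baseline can be created with the AWS CLI as shown below. The Availability Zone, size, IOPS, and throughput values are illustrative placeholders, not sizing recommendations.

```shell
# Sketch only: create a gp3 EBS volume with explicitly provisioned IOPS and throughput
# for a Gitaly data mount. The AZ, size, IOPS, and throughput values are placeholders.
aws ec2 create-volume \
  --availability-zone us-east-1a \
  --volume-type gp3 \
  --size 500 \
  --iops 6000 \
  --throughput 500
```

If monitoring shows sustained disk stress beyond what `gp3` can provide, the `io1`/`io2` volume types allow higher provisioned IOPS.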
#### Network I/O recommendations
- Use only instance types [from the list of ones that support ENA advanced networking](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#instance-type-summary-table) to ensure that cluster replication latency is not due to instance level network I/O bottlenecking.
- Choose instance sizes with more than 10 Gbps of network performance, but only if needed and only after monitoring and/or stress testing has proven a node-level network bottleneck. A sample AWS CLI query for these instance attributes follows this section.
**To accommodate:**
- Gitaly nodes do the main work of streaming repositories for push and pull operations (to add development endpoints, and to CI/CD).
- Gitaly servers need reasonably low latency between cluster nodes and with Praefect services in order for the cluster to maintain operational and data integrity.
- Gitaly nodes should be selected with network bottlenecking avoidance as a primary consideration.
- Gitaly nodes should be monitored for network saturation.
- Not all networking issues can be solved through optimizing the node level networking:
- Gitaly cluster node replication depends on all networking between nodes.
- Gitaly networking performance to pull and push endpoints depends on all networking in between.
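The following sketch shows one way to check ENA support and the rated network performance of a candidate instance type with the AWS CLI; the instance type shown is only an example.

```shell
# Sketch only: inspect ENA support and rated network performance for a candidate
# Gitaly instance type. Replace m5.4xlarge with the type you are evaluating.
aws ec2 describe-instance-types \
  --instance-types m5.4xlarge \
  --query "InstanceTypes[].NetworkInfo.{EnaSupport:EnaSupport,NetworkPerformance:NetworkPerformance}"
```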
### AWS Gitaly backup
Due to the nature of how Praefect tracks the replication metadata of Gitaly disk information, the best backup method is [the official backup and restore Rake tasks](../../raketasks/backup_restore.md).
### AWS Gitaly recovery
Gitaly Cluster does not support snapshot backups as these can cause issues where the Praefect database becomes out of sync with the disk storage. Due to the nature of how Praefect rebuilds the replication metadata of Gitaly disk information during a restore, the best recovery method is [the official backup and restore Rake tasks](../../raketasks/backup_restore.md).
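On an Omnibus installation, the official Rake tasks are invoked as in the minimal sketch below. The backup timestamp is a placeholder, and the usual pre-restore steps described in the backup and restore documentation (such as stopping Puma and Sidekiq) still apply.

```shell
# Sketch only: create a backup with the official Rake task (Omnibus installation).
sudo gitlab-backup create

# Sketch only: restore from a specific backup. The BACKUP timestamp is a placeholder.
sudo gitlab-backup restore BACKUP=11493107454_2018_04_25_10.6.4-ce
```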
# Installing a GitLab POC on Amazon Web Services (AWS) **(FREE SELF)**
This page offers a walkthrough of a common configuration for GitLab on AWS using the official GitLab Linux package. You should customize it to accommodate your needs.
NOTE:
For organizations with 1,000 users or less, the recommended AWS installation method is to launch an EC2 single box [Omnibus Installation](https://about.gitlab.com/install/) and implement a snapshot strategy for backing up the data. See the [1,000 user reference architecture](../../administration/reference_architectures/1k_users.md) for more information.
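As an illustration of such a snapshot strategy, an on-demand EBS snapshot of the data volume can be taken with the AWS CLI. The volume ID and description below are hypothetical placeholders, and production use would typically schedule snapshots (for example, with Amazon Data Lifecycle Manager).

```shell
# Sketch only: take an on-demand EBS snapshot of the GitLab data volume.
# The volume ID and description are hypothetical placeholders.
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "GitLab single-box snapshot"
```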
## Getting started for production-grade GitLab
NOTE:
The [GitLab Environment Toolkit (GET)](https://gitlab.com/gitlab-org/quality/gitlab-environment-toolkit/-/tree/master) is GitLab's internal effort to create a multi-cloud, multi-GitLab toolkit to provision GitLab. It can be used to deploy Omnibus GitLab on AWS. GET is developed by GitLab developers and is open to community contributions.
This document is an installation guide for a proof of concept instance. It is not a reference architecture and it does not result in a highly available configuration.
Following this guide exactly results in a proof of concept instance that roughly equates to a **scaled down** version of a **two availability zone implementation** of the **Non-HA** [Omnibus 2000 User Reference Architecture](../../administration/reference_architectures/2k_users.md). The 2K reference architecture is not HA because it is primarily intended to provide some scaling while keeping costs and complexity low. The [3000 User Reference Architecture](../../administration/reference_architectures/3k_users.md) is the smallest size that is GitLab HA. It has additional service roles to achieve HA, most notably it uses Gitaly Cluster to achieve HA for Git repository storage and specifies triple redundancy.
GitLab maintains and tests two main types of Reference Architectures. The **Omnibus architectures** are implemented on instance compute while **Cloud Native Hybrid architectures** maximize the use of a Kubernetes cluster. Cloud Native Hybrid reference architecture specifications are addendum sections to the Reference Architecture size pages that start by describing the Omnibus architecture. For example, the 3000 User Cloud Native Reference Architecture is in the subsection titled [Cloud Native Hybrid reference architecture with Helm Charts (alternative)](../../administration/reference_architectures/3k_users.md#cloud-native-hybrid-reference-architecture-with-helm-charts-alternative) in the 3000 User Reference Architecture page.
### Getting started for production-grade Omnibus GitLab
The Infrastructure as Code tooling [GitLab Environment Tool (GET)](https://gitlab.com/gitlab-org/quality/gitlab-environment-toolkit/-/tree/master) is the best place to start for building Omnibus GitLab on AWS, especially if you are targeting an HA setup. While it does not automate everything, it does complete complex setups like Gitaly Cluster for you. GET is open source, so anyone can build on top of it and contribute improvements to it.
### Getting started for production-grade Cloud Native Hybrid GitLab
For the Cloud Native Hybrid architectures, there are two Infrastructure as Code options, which are compared in the GitLab Cloud Native Hybrid on AWS EKS implementation pattern, in the section [Available Infrastructure as Code for GitLab Cloud Native Hybrid](gitlab_hybrid_on_aws.md#available-infrastructure-as-code-for-gitlab-cloud-native-hybrid). It compares the [GitLab Environment Toolkit](https://gitlab.com/gitlab-org/quality/gitlab-environment-toolkit/-/tree/master) to the AWS Quick Start for GitLab Cloud Native Hybrid on EKS, which was co-developed by GitLab and AWS. GET and the AWS Quick Start are both open source, so anyone can build on top of them and contribute improvements to them.