If you use your own Ingress instead of the one provided by GitLab Managed
Apps, ensure you're running at least version 0.9.0 of NGINX Ingress and
For metrics to appear when using the [Prometheus cluster integration](../../user/clusters/integrations.md#prometheus-cluster-integration), you must [enable Prometheus metrics](https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx#prometheus-metrics).
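For example, if you installed NGINX Ingress yourself with the community Helm chart, the metrics endpoint can usually be enabled through chart values. This is only a sketch: the release name, repository alias, and namespace are assumptions about your installation.

```shell
# Sketch: assumes the ingress-nginx Helm chart installed as release "ingress-nginx".
# controller.metrics.enabled exposes the Prometheus metrics endpoint, and the pod
# annotations let Prometheus discover and scrape it on port 10254.
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set controller.metrics.enabled=true \
  --set-string controller.podAnnotations."prometheus\.io/scrape"="true" \
  --set-string controller.podAnnotations."prometheus\.io/port"="10254"
```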
...
@@ -91,7 +91,7 @@ Here's an example setup flow from scratch:
1. Prepare an [Auto DevOps-enabled](../../topics/autodevops/index.md) project.
1. Set up a [Kubernetes Cluster](../../user/project/clusters/index.md) in your project.
1. Install [Ingress](../../user/clusters/applications.md#ingress) as a GitLab Managed App.
1. Install [NGINX Ingress](https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx) in your cluster.
1. Set up [the base domain](../../user/project/clusters/index.md#base-domain) based on the Ingress
Endpoint assigned above.
1. Check if [`v2.0.0+` of `auto-deploy-image` is used in your Auto DevOps pipelines](../../topics/autodevops/upgrading_auto_deploy_dependencies.md#verify-dependency-versions).
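For the base domain step above, you need the external address of your NGINX Ingress controller. One way to look it up is shown below; the service name and namespace are assumptions based on a default Helm installation, so adjust them to match yours.

```shell
# Prints the external IP of the controller service. On some cloud providers the
# address is a hostname instead, in which case query .ingress[0].hostname.
kubectl get service ingress-nginx-controller \
  --namespace ingress-nginx \
  --output jsonpath='{.status.loadBalancer.ingress[0].ip}'
```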
...
@@ -184,8 +184,7 @@ To create and add a new Kubernetes cluster to your project, group, or instance:
See the [Managed clusters section](index.md#gitlab-managed-clusters) for more information.
1. Finally, click the **Create Kubernetes cluster** button.
After about 10 minutes, your cluster is ready to go. You can now proceed
to install some [pre-defined applications](index.md#installing-applications).
After about 10 minutes, your cluster is ready to go.
NOTE:
If you have [installed and configured](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#get-started-kubectl) `kubectl` and you would like to manage your cluster with it, you must add your AWS external ID in the AWS configuration. For more information on how to configure the AWS CLI, see [using an IAM role in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html#cli-configure-role-xaccount).
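As an illustration only, a cross-account profile carrying the external ID could be set up as follows. The role ARN, external ID, and profile names are placeholders, not values provided by GitLab.

```shell
# Hypothetical profile: replace the ARN and external ID with the values for your role.
aws configure set role_arn arn:aws:iam::111122223333:role/eks-provisioning --profile gitlab-eks
aws configure set source_profile default --profile gitlab-eks
aws configure set external_id <external-id-from-gitlab> --profile gitlab-eks

# If your kubeconfig authenticates through the AWS CLI, point it at that profile.
AWS_PROFILE=gitlab-eks kubectl get nodes
```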
...
@@ -290,7 +289,7 @@ you've assigned the role the correct permissions.
### Key Pairs are not loaded
GitLab loads the key pairs from the **Cluster Region** specified. Ensure that key pair exists in that region.
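To confirm from the command line which key pairs exist in that region (the region below is only an example):

```shell
# Lists the key pairs available in the region selected as the Cluster Region.
aws ec2 describe-key-pairs --region us-west-2
```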
...
@@ -457,6 +446,6 @@ Automatically detect and monitor Kubernetes metrics. Automatic monitoring of
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/4701) in [GitLab Ultimate](https://about.gitlab.com/pricing/) 10.6.
> - [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/208224) to GitLab Free in 13.2.
When [Prometheus is deployed](#installing-applications), GitLab monitors the cluster's health. At the top of the cluster settings page, CPU and Memory utilization is displayed, along with the total amount available. Keeping an eye on cluster resources can be important: if the cluster runs out of memory, pods may be shut down or fail to start.
When [the Prometheus cluster integration is enabled](../../clusters/integrations.md#prometheus-cluster-integration), GitLab monitors the cluster's health. At the top of the cluster settings page, CPU and Memory utilization is displayed, along with the total amount available. Keeping an eye on cluster resources can be important: if the cluster runs out of memory, pods may be shut down or fail to start.
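If you want to check the same resource usage from the command line, and the cluster has the Kubernetes metrics server available, you can use `kubectl top`:

```shell
# Per-node CPU and memory usage (requires metrics-server in the cluster).
kubectl top nodes

# Per-pod usage across all namespaces, useful for spotting heavy consumers.
kubectl top pods --all-namespaces
```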
...
@@ -53,7 +53,7 @@ To run Knative on GitLab, you need:
The set of minimum recommended cluster specifications to run Knative is 3 nodes, 6 vCPUs, and 22.50 GB memory.
1. **GitLab Runner:** A runner is required to run the CI jobs that deploy serverless
applications or functions onto your cluster. You can install GitLab Runner
onto the existing Kubernetes cluster. See [Installing Applications](../index.md#installing-applications) for more information.
onto the [existing Kubernetes cluster](https://docs.gitlab.com/runner/install/kubernetes.html).
1. **Domain Name:** Knative provides its own load balancer using Istio, and an
external IP address or hostname for all the applications served by Knative. Enter a
wildcard domain to serve your applications. Configure your DNS server to use the
...
@@ -68,54 +68,18 @@ To run Knative on GitLab, you need:
`Dockerfile` in order to build your applications. It should be included at the root of your
project's repository and expose port `8080`. `Dockerfile` is not required if you plan to build serverless functions
using our [runtimes](https://gitlab.com/gitlab-org/serverless/runtimes).
1. **Prometheus** (optional): Installing Prometheus allows you to monitor the scale and traffic of your serverless function/application.
See [Installing Applications](../index.md#installing-applications) for more information.
1. **Prometheus** (optional): The [Prometheus cluster integration](../../../clusters/integrations.md#prometheus-cluster-integration)
allows you to monitor the scale and traffic of your serverless function/application.
1. **Logging** (optional): Configuring logging allows you to view and search request logs for your serverless function/application.
See [Configuring logging](#configuring-logging) for more information.
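For the `Dockerfile` requirement above, a quick local check that the image builds and actually serves on port `8080` could look like this; the image name is arbitrary.

```shell
# Build the image from the Dockerfile at the repository root.
docker build --tag my-serverless-app .

# Run it locally; the container is expected to listen on port 8080.
docker run --rm --publish 8080:8080 my-serverless-app

# In another terminal: the application should answer on the exposed port.
curl http://localhost:8080
```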
## Installing Knative via the GitLab Kubernetes integration
## Configuring Knative
The minimum recommended cluster size to run Knative is 3 nodes, 6 vCPUs, and 22.50 GB
memory. **RBAC must be enabled.**
1. [Add a Kubernetes cluster](../add_remove_clusters.md).
1. Select the **Applications** tab and scroll down to the Knative app section. Enter the domain to be used with
your application/functions (e.g. `example.com`) and click **Install**.
![install-knative](img/install-knative.png)
1. After the Knative installation has finished, you can wait for the IP address or hostname to be displayed in the
**Knative Endpoint** field or [retrieve the Istio Ingress Endpoint manually](../../../clusters/applications.md#determining-the-external-endpoint-manually).
NOTE:
Running `kubectl` commands on your cluster requires setting up access to the cluster first.
For clusters created on GKE, see [GKE Cluster Access](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl);
for other platforms, see [Install kubectl](https://kubernetes.io/docs/tasks/tools/).
1. The Ingress is now available at this address and routes incoming requests to the proper service based on the DNS
name in the request. To support this, a wildcard DNS record should be created for the desired domain name. For example,
if your Knative base domain is `knative.info` then you need to create an A record or CNAME record with domain `*.knative.info`
pointing to the IP address or hostname of the Ingress.
![DNS entry](img/dns-entry.png)
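A possible way to find the address the wildcard record should point to, and then verify the record, assuming a default Knative/Istio installation where the gateway service is `istio-ingressgateway` in the `istio-system` namespace:

```shell
# External IP (or hostname) of the Istio ingress gateway.
kubectl get service istio-ingressgateway \
  --namespace istio-system \
  --output jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Any name under the wildcard should resolve to that address
# ("knative.info" is the example domain used above).
dig +short myfunction.knative.info
```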
You can deploy either [functions](#deploying-functions) or [serverless applications](#deploying-serverless-applications)
on a given project, but not both. The current implementation makes use of a
`serverless.yml` file to signal a FaaS project.
## Using an existing installation of Knative
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/58941) in GitLab 12.0.
The _invocations_ monitoring feature of GitLab serverless is unavailable when
adding an existing installation of Knative.
It's also possible to use GitLab Serverless with an existing Kubernetes cluster
which already has Knative installed. You must do the following:
1. Follow the steps to deploy [functions](#deploying-functions)
or [serverless applications](#deploying-serverless-applications) onto your
cluster.
1. **Optional:** For invocation metrics to show in GitLab, additional Istio metrics need to be configured in your cluster. For example, with Knative v0.9.0, you can use [this manifest](https://gitlab.com/gitlab-org/charts/knative/-/raw/v0.10.0/vendor/istio-metrics.yml).
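For example, the optional Istio metrics configuration referenced above can be applied with `kubectl`, using the manifest linked in that step. Review the manifest against your Knative and Istio versions before applying it.

```shell
kubectl apply --filename https://gitlab.com/gitlab-org/charts/knative/-/raw/v0.10.0/vendor/istio-metrics.yml
```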
## Supported runtimes
Serverless functions for GitLab can be run using:
...
@@ -183,7 +151,7 @@ If a runtime is not available for the required programming language, consider de
### GitLab-managed runtimes
Currently the following GitLab-managed [runtimes](https://gitlab.com/gitlab-org/serverless/runtimes)
The following GitLab-managed [runtimes](https://gitlab.com/gitlab-org/serverless/runtimes)
are available:
- `go` (proof of concept)
...
@@ -499,10 +467,10 @@ rows to bring up the function details page.
The pod count gives you the number of pods running the serverless function instances on a given cluster.
For the Knative function invocations to appear,
[Prometheus must be installed](../index.md#installing-applications).
Once Prometheus is installed, a message may appear indicating that the metrics data _is
For the Knative function invocations to appear, the
[Prometheus cluster integration must be enabled](../../../clusters/integrations.md#prometheus-cluster-integration).
Once Prometheus is enabled, a message may appear indicating that the metrics data _is
loading or is not available at this time._ It appears upon the first access of the
page, but should go away after a few seconds. If the message does not disappear, then it
is possible that GitLab is unable to connect to the Prometheus instance running on the
...
@@ -27,28 +27,6 @@ NGINX Ingress versions prior to 0.16.0 offer an included [VTS Prometheus metrics
## Configuring NGINX Ingress monitoring
If you have deployed NGINX Ingress using the GitLab [Kubernetes cluster integration](../../clusters/index.md#installing-applications), Prometheus [automatically monitors it](#about-managed-nginx-ingress-deployments).
For other deployments, there is [some configuration](#manually-setting-up-nginx-ingress-for-prometheus-monitoring) required depending on your installation:
- NGINX Ingress should be version 0.16.0 or above, with metrics enabled.
- NGINX Ingress should be annotated for Prometheus monitoring.
- Prometheus should be configured to monitor annotated pods.
### About managed NGINX Ingress deployments
NGINX Ingress is deployed into the `gitlab-managed-apps` namespace, using the [official Helm chart](https://github.com/helm/charts/tree/master/stable/nginx-ingress). NGINX Ingress is [externally reachable via the Load Balancer's Endpoint](../../../clusters/applications.md#ingress).
NGINX is configured for Prometheus monitoring by setting:
- `enable-vts-status: "true"`, to export Prometheus metrics.
- `prometheus.io/scrape: "true"`, to enable automatic discovery.
- `prometheus.io/port: "10254"`, to specify the metrics port.
When used in conjunction with the GitLab deployed Prometheus service, response metrics are automatically collected.
### Manually setting up NGINX Ingress for Prometheus monitoring
Version 0.9.0 and above of [NGINX Ingress](https://github.com/kubernetes/ingress-nginx) have built-in support for exporting Prometheus metrics. To enable, a ConfigMap setting must be passed: `enable-vts-status: "true"`. Once enabled, a Prometheus metrics endpoint starts running on port 10254.
Next, the Ingress needs to be annotated for Prometheus monitoring. Two new annotations need to be added:
...
@@ -27,28 +27,6 @@ GitLab has support for automatically detecting and monitoring the Kubernetes NGI
## Configuring NGINX Ingress monitoring
If you have deployed NGINX Ingress using the GitLab [Kubernetes cluster integration](../../clusters/index.md#installing-applications), Prometheus [automatically monitors](#about-managed-nginx-ingress-deployments) it.
For other deployments, there is [some configuration](#manually-setting-up-nginx-ingress-for-prometheus-monitoring) required depending on your installation:
- NGINX Ingress should be version 0.9.0 or above, with metrics enabled.
- NGINX Ingress should be annotated for Prometheus monitoring.
- Prometheus should be configured to monitor annotated pods.
### About managed NGINX Ingress deployments
NGINX Ingress is deployed into the `gitlab-managed-apps` namespace, using the [official Helm chart](https://github.com/helm/charts/tree/master/stable/nginx-ingress). NGINX Ingress is [externally reachable via the Load Balancer's Endpoint](../../../clusters/applications.md#ingress).
NGINX is configured for Prometheus monitoring by setting:
- `enable-vts-status: "true"`, to export Prometheus metrics.
- `prometheus.io/scrape: "true"`, to enable automatic discovery.
- `prometheus.io/port: "10254"`, to specify the metrics port.
When used in conjunction with the GitLab deployed Prometheus service, response metrics are automatically collected.
### Manually setting up NGINX Ingress for Prometheus monitoring
Version 0.9.0 and above of [NGINX Ingress](https://github.com/kubernetes/ingress-nginx) has built-in support for exporting Prometheus metrics. To enable, a ConfigMap setting must be passed: `enable-vts-status: "true"`. Once enabled, a Prometheus metrics endpoint begins running on port 10254.
Next, the Ingress needs to be annotated for Prometheus monitoring. Two new annotations need to be added:
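Putting this together, a sketch of the manual setup could look like the following. The two annotations are the same `prometheus.io/scrape` and `prometheus.io/port` values described for the managed deployment above; the namespace and resource names are assumptions about your own installation.

```shell
# Enable the VTS metrics endpoint through the controller's ConfigMap.
kubectl --namespace ingress-nginx patch configmap nginx-ingress-controller \
  --type merge --patch '{"data":{"enable-vts-status":"true"}}'

# Annotate the controller pods so Prometheus discovers the endpoint on port 10254.
kubectl --namespace ingress-nginx patch deployment nginx-ingress-controller --patch \
  '{"spec":{"template":{"metadata":{"annotations":{"prometheus.io/scrape":"true","prometheus.io/port":"10254"}}}}}'
```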
msgid "PrometheusService|The ID of the IAP-secured resource."
msgstr ""
...
@@ -26483,7 +26471,7 @@ msgstr ""
msgid "PrometheusService|These metrics will only be monitored after your first deployment to an environment"
msgstr ""
msgid "PrometheusService|To enable the installation of Prometheus on your clusters, deactivate the manual configuration."
msgid "PrometheusService|To use a Prometheus installed on a cluster, deactivate the manual configuration."
msgstr ""
msgid "PrometheusService|Waiting for your first deployment to an environment to find common metrics"
...
@@ -26492,6 +26480,9 @@ msgstr ""
msgid "PrometheusService|You can now manage your Prometheus settings on the %{operations_link_start}Operations%{operations_link_end} page. Fields on this page has been deprecated."
msgstr ""
msgid "PrometheusService|You have a cluster with the Prometheus integration enabled."
msgid "ServerlessDetails|Function invocation metrics require Prometheus to be installed first."
msgid "ServerlessDetails|Configure cluster."
msgstr ""
msgid "ServerlessDetails|Install Prometheus"
msgid "ServerlessDetails|Function invocation metrics require the Prometheus cluster integration."
msgstr ""
msgid "ServerlessDetails|Invocation metrics loading or not available at this time."
...
@@ -29678,9 +29669,6 @@ msgstr ""
msgid "Serverless|In order to start using functions as a service, you must first install Knative on your Kubernetes cluster. %{linkStart}More information%{linkEnd}"
<p>In order to start using functions as a service, you must first install Knative on your Kubernetes cluster. <gl-link-stub href=\\"/help\\">More information</gl-link-stub>