- Are disabled if not defined globally or per job (using `cache:`).
- Are available for all jobs in your `.gitlab-ci.yml` if enabled globally.
- Can be used in subsequent pipelines by the same job in which the cache was created (if not defined globally).
- Are stored where GitLab Runner is installed **and** uploaded to S3 if [distributed cache is enabled](https://docs.gitlab.com/runner/configuration/autoscale.html#distributed-runners-caching).
- If defined per job, are used:
  - By the same job in a subsequent pipeline.
  - By subsequent jobs in the same pipeline, if they have identical dependencies.
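As a sketch of the per-job case above, a cache declared with `cache:` inside a single job might look like this (the job name, key, and paths are illustrative, not from the original):

```yaml
# Illustrative per-job cache: only this job (in subsequent pipelines)
# and jobs with identical dependencies reuse the archive.
test:
  script: bundle exec rspec
  cache:
    key: gems-cache      # hypothetical key name
    paths:
      - vendor/ruby
```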
...
## Good caching practices

There is the cache from the perspective of the developers (who consume a cache
within the job) and the cache from the perspective of the runner. Depending on
which type of runner you are using, the cache can act differently.

From the perspective of the developer, to ensure maximum availability of the
cache, when declaring `cache` in your jobs, use one or a mix of the following:
- [Tag your runners](../runners/README.md#use-tags-to-limit-the-number-of-jobs-using-the-runner) and use the tag on jobs
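For example, tagging might be combined with `cache` roughly like this (the `ruby` tag, job name, and paths are assumptions for illustration):

```yaml
# Illustrative: the job only runs on runners tagged `ruby`, making it
# more likely that subsequent runs find the cache on the same machine.
rspec:
  tags:
    - ruby
  cache:
    paths:
      - vendor/ruby
  script: bundle exec rspec
```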
| GitLab Runner executor | Where the cache is stored |
| ---------------------- | ------------------------- |
| [Shell](https://docs.gitlab.com/runner/executors/shell.html) | Locally, stored under the `gitlab-runner` user's home directory: `/home/gitlab-runner/cache/<user>/<project>/<cache-key>/cache.zip`. |
| [Docker](https://docs.gitlab.com/runner/executors/docker.html) | Locally, stored under [Docker volumes](https://docs.gitlab.com/runner/executors/docker.html#the-builds-and-cache-storage): `/var/lib/docker/volumes/<volume-id>/_data/<user>/<project>/<cache-key>/cache.zip`. |
| [Docker machine](https://docs.gitlab.com/runner/executors/docker_machine.html) (autoscale runners) | Behaves the same as the Docker executor. |
### How archiving and extracting works

In the simplest scenario, consider that you use only one machine where the
runner is installed, and all jobs of your project run on the same host.

Let's see the following example of two jobs that belong to two consecutive
stages:
...

Here's what happens behind the scenes:
1. `after_script` is executed.
1. `cache` runs and the `vendor/` directory is zipped into `cache.zip`.
   This file is then saved in the directory based on the
   [runner's setting](#where-the-caches-are-stored) and the `cache: key`.
1. `job B` runs.
1. The cache is extracted (if found).
1. `before_script` is executed.
1. `script` is executed.
1. Pipeline finishes.
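The sequence above assumes two jobs in consecutive stages sharing one cache definition; a minimal sketch (the job names follow the text, while the scripts, key, and paths are illustrative):

```yaml
stages:
  - build
  - test

job A:
  stage: build
  script: make vendor    # illustrative build step that populates vendor/
  cache:
    key: shared-key      # hypothetical key shared by both jobs
    paths:
      - vendor/

job B:
  stage: test
  script: make test      # illustrative test step that reuses vendor/
  cache:
    key: shared-key
    paths:
      - vendor/
```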
By using a single runner on a single machine, you'll not have the issue where
`job B` might execute on a runner different from `job A`, thus guaranteeing the
cache between stages. That will only work if the build goes from stage `build`
to `test` on the same runner/machine, otherwise, you [might not have the cache
available](#cache-mismatch).

During the caching process, there are also a couple of things to consider:
...

their cache.
- When extracting the cache from `cache.zip`, everything in the zip file is
  extracted in the job's working directory (usually the repository which is
  pulled down), and the runner doesn't mind if the archive of `job A` overwrites
  things in the archive of `job B`.

The reason why it works this way is because the cache created for one runner
often will not be valid when used by a different one which can run on a
**different architecture** (e.g., when the cache includes binary files). And
since the different steps might be executed by runners running on different
machines, it is a safe default.
### Cache mismatch
...

mismatch and a few ideas how to fix it.
| Reason for a cache mismatch | How to fix it |
| --------------------------- | ------------- |
| You use multiple standalone runners (not in autoscale mode) attached to one project without a shared cache | Use only one runner for your project or use multiple runners with distributed cache enabled |
| You use runners in autoscale mode without a distributed cache enabled | Configure the autoscale runners to use a distributed cache |
| The machine the runner is installed on is low on disk space or, if you've set up distributed cache, the S3 bucket where the cache is stored doesn't have enough space | Make sure you clear some space to allow new caches to be stored. Currently, there's no automatic way to do this. |
| You use the same `key` for jobs where they cache different paths | Use different cache keys so that the cache archive is stored in a different location and doesn't overwrite wrong caches |
Let's explore some examples.

#### Examples

Let's assume you have only one runner assigned to your project, so the cache
will be stored on the runner's machine by default. If two jobs, A and B,
have the same cache key, but they cache different paths, cache B would overwrite
cache A, even if their `paths` don't match:
...
To fix that, use different `keys` for each job.
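A minimal sketch of that fix, with one distinct `key` per job (the keys, scripts, and paths are illustrative):

```yaml
job A:
  script: make vendor
  cache:
    key: keyA            # distinct key, so job B can't overwrite this archive
    paths:
      - vendor/

job B:
  script: make test
  cache:
    key: keyB
    paths:
      - binaries/
```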
In another case, let's assume you have more than one runner assigned to your
project, but the distributed cache is not enabled. The second time the
pipeline is run, we want `job A` and `job B` to re-use their cache (which in this case
will be different):
...
In that case, even if the `key` is different (no fear of overwriting), you
might find that the cached files "get cleaned" before each stage if the
jobs run on different runners in the subsequent pipelines.
## Clearing the cache

Runners use [cache](../yaml/README.md#cache) to speed up the execution
of your jobs by reusing existing data. This, however, can sometimes lead to
inconsistent behavior.
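One way to start over with a fresh cache is to change the value of `cache: key` in `.gitlab-ci.yml`, since a new key points to a new (initially empty) archive. A sketch, where the version suffix is an arbitrary string you bump whenever you want to discard the old cache:

```yaml
test:
  cache:
    key: gems-v2         # was gems-v1; bumping the suffix discards the old cache
    paths:
      - vendor/ruby
  script: bundle exec rspec
```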
...

If you want to avoid editing `.gitlab-ci.yml`, you can easily clear the cache
via GitLab's UI:

1. Navigate to your project's **CI/CD > Pipelines** page.
1. Click on the **Clear runner caches** button to clean up the cache.