Commit b0be58a1 authored by Brett Walker, committed by Sean McGivern

Resolve "CE documentation is not CommonMark compliant"

parent 2d16f479
......@@ -36,7 +36,7 @@ _This notice should stay as the first item in the CONTRIBUTING.md file._
- [Release Scoping labels](#release-scoping-labels)
- [Priority labels](#priority-labels)
- [Severity labels](#severity-labels)
- [Severity impact guidance](#severity-impact-guidance)
- [Severity impact guidance](#severity-impact-guidance)
- [Label for community contributors](#label-for-community-contributors)
- [Implement design & UI elements](#implement-design--ui-elements)
- [Issue tracker](#issue-tracker)
......
......@@ -382,29 +382,30 @@ the configuration option `lowercase_usernames`. By default, this configuration o
1. Edit `/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['ldap_servers'] = YAML.load <<-EOS
main:
# snip...
lowercase_usernames: true
EOS
```
```ruby
gitlab_rails['ldap_servers'] = YAML.load <<-EOS
main:
# snip...
lowercase_usernames: true
EOS
```
2. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
1. [Reconfigure GitLab](../restart_gitlab.md#omnibus-gitlab-reconfigure) for the changes to take effect.
**Source configuration**
1. Edit `config/gitlab.yaml`:
```yaml
production:
ldap:
servers:
main:
# snip...
lowercase_usernames: true
```
2. [Restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect.
```yaml
production:
ldap:
servers:
main:
# snip...
lowercase_usernames: true
```
1. [Restart GitLab](../restart_gitlab.md#installations-from-source) for the changes to take effect.
## Encryption
......
# GitLab Container Registry administration
> **Notes:**
- [Introduced][ce-4040] in GitLab 8.8.
- Container Registry manifest `v1` support was added in GitLab 8.9 to support
Docker versions earlier than 1.10.
- This document is about the admin guide. To learn how to use GitLab Container
Registry [user documentation](../user/project/container_registry.md).
> - [Introduced][ce-4040] in GitLab 8.8.
> - Container Registry manifest `v1` support was added in GitLab 8.9 to support
> Docker versions earlier than 1.10.
> - This document is about the admin guide. To learn how to use the GitLab Container
> Registry, see the [user documentation](../user/project/container_registry.md).
With the Container Registry integrated into GitLab, every project can have its
own space to store its Docker images.
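
For example, once the Registry is enabled, pushing an image for a project might look like this (a sketch, assuming the Registry answers at `registry.example.com` and the project lives under `group/project`):

```bash
# Authenticate against the project's Registry, then build and push an image.
docker login registry.example.com
docker build -t registry.example.com/group/project .
docker push registry.example.com/group/project
```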
......@@ -203,10 +203,10 @@ If you have a [wildcard certificate][], you need to specify the path to the
certificate in addition to the URL, in this case `/etc/gitlab/gitlab.rb` will
look like:
>
```ruby
registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/certificate.pem"
registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/certificate.key"
```
> ```ruby
> registry_nginx['ssl_certificate'] = "/etc/gitlab/ssl/certificate.pem"
> registry_nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/certificate.key"
> ```
---
......@@ -375,7 +375,7 @@ Read more about the individual driver's config options in the
> **Warning** GitLab will not backup Docker images that are not stored on the
filesystem. Remember to enable backups with your object storage provider if
desired.
>
> **Important** Enabling a storage driver other than `filesystem` means
that your Docker client needs to be able to access the storage backend directly,
so you must use an address that resolves and is accessible outside the GitLab server.
......@@ -598,11 +598,11 @@ thus the error above.
While GitLab doesn't support using self-signed certificates with Container
Registry out of the box, it is possible to make it work if you follow
[Docker's documentation][docker-insecure]. You may find some additional
[Docker's documentation][docker-insecure-self-signed]. You may find some additional
information in [issue 18239][ce-18239].
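
As an illustration of the approach described in Docker's documentation, each Docker client host needs the certificate placed where the Docker daemon looks for registry CAs (a sketch; the hostname `registry.example.com:5050` and the certificate path are assumptions, adjust them to your setup):

```bash
# Make the Docker daemon trust the self-signed certificate for this Registry.
sudo mkdir -p /etc/docker/certs.d/registry.example.com:5050
sudo cp /etc/gitlab/ssl/registry.example.com.crt \
  /etc/docker/certs.d/registry.example.com:5050/ca.crt
```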
[ce-18239]: https://gitlab.com/gitlab-org/gitlab-ce/issues/18239
[docker-insecure]: https://docs.docker.com/registry/insecure/#using-self-signed-certificates
[docker-insecure-self-signed]: https://docs.docker.com/registry/insecure/#use-self-signed-certificates
[reconfigure gitlab]: restart_gitlab.md#omnibus-gitlab-reconfigure
[restart gitlab]: restart_gitlab.md#installations-from-source
[wildcard certificate]: https://en.wikipedia.org/wiki/Wildcard_certificate
......
# Custom Git Hooks
>
**Note:** Custom Git hooks must be configured on the filesystem of the GitLab
> **Note:** Custom Git hooks must be configured on the filesystem of the GitLab
server. Only GitLab server administrators will be able to complete these tasks.
Please explore [webhooks] and [CI] as options if you do not
have filesystem access. For a user-configurable Git hook interface, see
......
......@@ -101,8 +101,7 @@ documentation on configuring Gitaly
authentication](https://gitlab.com/gitlab-org/gitaly/blob/master/doc/configuration/README.md#authentication)
.
>
**NOTE:** In most or all cases the storage paths below end in `/repositories` which is
> **NOTE:** In most or all cases the storage paths below end in `/repositories`, which is
different from `path` in `git_data_dirs` of Omnibus installations. Check the
directory layout on your Gitaly server to be sure.
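
For illustration, a default Omnibus layout might look like the following (a sketch only; verify the actual paths on your own servers):

```ruby
# /etc/gitlab/gitlab.rb on the Gitaly server -- note the trailing
# /repositories, which git_data_dirs on the application server omits.
gitaly['storage'] = [
  { 'name' => 'default', 'path' => '/var/opt/gitlab/git-data/repositories' },
]
```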
......
......@@ -37,7 +37,7 @@ Follow the steps below to configure an active/active setup:
1. [Configure the database](database.md)
1. [Configure Redis](redis.md)
1. [Configure Redis for GitLab source installations](redis_source.md)
1. [Configure Redis for GitLab source installations](redis_source.md)
1. [Configure NFS](nfs.md)
1. [Configure the GitLab application servers](gitlab.md)
1. [Configure the load balancers](load_balancer.md)
......
......@@ -84,7 +84,7 @@ for each GitLab application server in your environment.
servers should point to the external url that users will use to access GitLab.
In a typical HA setup, this will be the url of the load balancer which will
route traffic to all GitLab application servers in the HA cluster.
>
> **Note:** When you specify `https` in the `external_url`, as in the example
above, GitLab assumes you have SSL certificates in `/etc/gitlab/ssl/`. If
certificates are not present, Nginx will fail to start. See
......
# Configuring Redis for GitLab HA
>
Experimental Redis Sentinel support was [Introduced][ce-1877] in GitLab 8.11.
> Experimental Redis Sentinel support was [Introduced][ce-1877] in GitLab 8.11.
Starting with 8.14, Redis Sentinel is no longer experimental.
If you've used it with versions `< 8.14` before, please check the updated
documentation here.
......@@ -15,20 +14,20 @@ a hosted cloud solution or you can use the one that comes bundled with
Omnibus GitLab packages.
> **Notes:**
- Redis requires authentication for High Availability. See
[Redis Security](http://redis.io/topics/security) documentation for more
information. We recommend using a combination of a Redis password and tight
firewall rules to secure your Redis service.
- You are highly encouraged to read the [Redis Sentinel][sentinel] documentation
before configuring Redis HA with GitLab to fully understand the topology and
architecture.
- This is the documentation for the Omnibus GitLab packages. For installations
from source, follow the [Redis HA source installation](redis_source.md) guide.
- Redis Sentinel daemon is bundled with Omnibus GitLab Enterprise Edition only.
For configuring Sentinel with the Omnibus GitLab Community Edition and
installations from source, read the
[Available configuration setups](#available-configuration-setups) section
below.
> - Redis requires authentication for High Availability. See
> [Redis Security](http://redis.io/topics/security) documentation for more
> information. We recommend using a combination of a Redis password and tight
> firewall rules to secure your Redis service.
> - You are highly encouraged to read the [Redis Sentinel][sentinel] documentation
> before configuring Redis HA with GitLab to fully understand the topology and
> architecture.
> - This is the documentation for the Omnibus GitLab packages. For installations
> from source, follow the [Redis HA source installation](redis_source.md) guide.
> - Redis Sentinel daemon is bundled with Omnibus GitLab Enterprise Edition only.
> For configuring Sentinel with the Omnibus GitLab Community Edition and
> installations from source, read the
> [Available configuration setups](#available-configuration-setups) section
> below.
## Overview
......@@ -55,11 +54,11 @@ components below.
### High Availability with Sentinel
>**Notes:**
- Starting with GitLab `8.11`, you can configure a list of Redis Sentinel
servers that will monitor a group of Redis servers to provide failover support.
- Starting with GitLab `8.14`, the Omnibus GitLab Enterprise Edition package
comes with Redis Sentinel daemon built-in.
> **Notes:**
> - Starting with GitLab `8.11`, you can configure a list of Redis Sentinel
> servers that will monitor a group of Redis servers to provide failover support.
> - Starting with GitLab `8.14`, the Omnibus GitLab Enterprise Edition package
> comes with Redis Sentinel daemon built-in.
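
On the GitLab application servers, such a Sentinel group is later referenced with settings along these lines (a sketch; the IP addresses, master name, and password are placeholders):

```ruby
# /etc/gitlab/gitlab.rb on the GitLab application servers -- assumes three
# Sentinels listening on the default port 26379.
redis['master_name'] = 'gitlab-redis'
redis['master_password'] = 'redis-password-goes-here'
gitlab_rails['redis_sentinels'] = [
  { 'host' => '10.0.0.1', 'port' => 26379 },
  { 'host' => '10.0.0.2', 'port' => 26379 },
  { 'host' => '10.0.0.3', 'port' => 26379 },
]
```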
High Availability with Redis requires a few things:
......@@ -231,13 +230,13 @@ Pick the one that suits your needs.
This is the section where we install and set up the new Redis instances.
>**Notes:**
- We assume that you have installed GitLab and all HA components from scratch. If you
already have it installed and running, read how to
[switch from a single-machine installation to Redis HA](#switching-from-an-existing-single-machine-installation-to-redis-ha).
- Redis nodes (both master and slaves) will need the same password defined in
`redis['password']`. At any time during a failover the Sentinels can
reconfigure a node and change its status from master to slave and vice versa.
> **Notes:**
> - We assume that you have installed GitLab and all HA components from scratch. If you
> already have it installed and running, read how to
> [switch from a single-machine installation to Redis HA](#switching-from-an-existing-single-machine-installation-to-redis-ha).
> - Redis nodes (both master and slaves) will need the same password defined in
> `redis['password']`. At any time during a failover the Sentinels can
> reconfigure a node and change its status from master to slave and vice versa.
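
In other words, every Redis node ends up with the same entry in its configuration (a sketch; use your own password):

```ruby
# /etc/gitlab/gitlab.rb on every Redis node, master and slaves alike.
redis['password'] = 'redis-password-goes-here'
```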
### Prerequisites
......@@ -383,9 +382,9 @@ multiple machines with the Sentinel daemon.
[Download/install](https://about.gitlab.com/downloads-ee) the
Omnibus GitLab Enterprise Edition package using **steps 1 and 2** from the
GitLab downloads page.
- Make sure you select the correct Omnibus package, with the same version
the GitLab application is running.
- Do not complete any other steps on the download page.
- Make sure you select the correct Omnibus package, with the same version
the GitLab application is running.
- Do not complete any other steps on the download page.
1. Edit `/etc/gitlab/gitlab.rb` and add the contents (if you are installing the
Sentinels in the same node as the other Redis instances, some values might
......
......@@ -135,15 +135,15 @@ created in snippets, wikis, and repos.
- [Monitoring GitLab](monitoring/index.md):
- [Monitoring uptime](../user/admin_area/monitoring/health_check.md): Check the server status using the health check endpoint.
- [IP whitelist](monitoring/ip_whitelist.md): Monitor endpoints that provide health check information when probed.
- [Monitoring GitHub imports](monitoring/github_imports.md): GitLab's GitHub Importer displays Prometheus metrics to monitor the health and progress of the importer.
- [IP whitelist](monitoring/ip_whitelist.md): Monitor endpoints that provide health check information when probed.
- [Monitoring GitHub imports](monitoring/github_imports.md): GitLab's GitHub Importer displays Prometheus metrics to monitor the health and progress of the importer.
### Performance Monitoring
- [GitLab Performance Monitoring](monitoring/performance/index.md):
- [Enable Performance Monitoring](monitoring/performance/gitlab_configuration.md): Enable GitLab Performance Monitoring.
- [GitLab performance monitoring with InfluxDB](monitoring/performance/influxdb_configuration.md): Configure GitLab and InfluxDB for measuring performance metrics.
- [InfluxDB Schema](monitoring/performance/influxdb_schema.md): Measurements stored in InfluxDB.
- [InfluxDB Schema](monitoring/performance/influxdb_schema.md): Measurements stored in InfluxDB.
- [GitLab performance monitoring with Prometheus](monitoring/prometheus/index.md): Configure GitLab and Prometheus for measuring performance metrics.
- [GitLab performance monitoring with Grafana](monitoring/performance/grafana_configuration.md): Configure GitLab to visualize time series metrics through graphs and dashboards.
- [Request Profiling](monitoring/performance/request_profiling.md): Get a detailed profile on slow requests.
......
# Koding & GitLab
>**Notes:**
- **As of GitLab 10.0, the Koding integration is deprecated and will be removed
in a future version. The option to configure it is removed from GitLab's admin
area.**
- [Introduced][ce-5909] in GitLab 8.11.
> **Notes:**
> - **As of GitLab 10.0, the Koding integration is deprecated and will be removed
> in a future version. The option to configure it is removed from GitLab's admin
> area.**
> - [Introduced][ce-5909] in GitLab 8.11.
This document will guide you through installing and configuring Koding with
GitLab.
......@@ -117,12 +117,11 @@ requests.
You need to enable the Koding integration from Settings under the Admin Area. To do
that, log in with an Admin account and do the following:
- open [http://127.0.0.1:3000/admin/application_settings](http://127.0.0.1:3000/admin/application_settings)
- scroll to bottom of the page until Koding section
- check `Enable Koding` checkbox
- provide GitLab team page for running Koding instance as `Koding URL`*
* For `Koding URL` you need to provide the gitlab integration enabled team on
- open [http://127.0.0.1:3000/admin/application_settings](http://127.0.0.1:3000/admin/application_settings)
- scroll to bottom of the page until Koding section
- check `Enable Koding` checkbox
- provide GitLab team page for running Koding instance as `Koding URL`*
* For `Koding URL` you need to provide the team with GitLab integration enabled on
your Koding installation. The team called `gitlab` has the integration on Koding out
of the box, so if you didn't change anything, your team on Koding should be
`gitlab`.
......
......@@ -74,28 +74,27 @@ our AsciiDoc snippets, wikis and repos using delimited blocks:
```plantuml
Bob -> Alice : hello
Alice -> Bob : Go Away
```
</pre>
```</pre>
- **AsciiDoc**
<pre>
```
[plantuml, format="png", id="myDiagram", width="200px"]
--
Bob->Alice : hello
Alice -> Bob : Go Away
--
</pre>
```
- **reStructuredText**
<pre>
```
.. plantuml::
:caption: Caption with **bold** and *italic*
Bob -> Alice: hello
Alice -> Bob: Go Away
</pre>
```
You can also use the `uml::` directive for compatibility with [sphinxcontrib-plantuml](https://pypi.python.org/pypi/sphinxcontrib-plantuml), but please note that we currently only support the `caption` option.
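
For example, the Sphinx-compatible form might look like the following (a sketch; only the `caption` option is honored):

```
.. uml::
   :caption: A simple greeting

   Bob -> Alice: hello
   Alice -> Bob: Go Away
```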
......
# Web terminals
>
[Introduced](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/7690)
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/7690)
in GitLab 8.15. Only project maintainers and owners can access web terminals.
With the introduction of the [Kubernetes integration](../../user/project/clusters/index.md),
......
......@@ -87,13 +87,13 @@ _The artifacts are stored by default in
### Using object storage
>**Notes:**
- [Introduced](https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/1762) in
[GitLab Premium](https://about.gitlab.com/pricing/) 9.4.
- Since version 9.5, artifacts are [browsable](../user/project/pipelines/job_artifacts.md#browsing-artifacts),
when object storage is enabled. 9.4 lacks this feature.
- Since version 10.6, available in [GitLab Core](https://about.gitlab.com/pricing/)
- Since version 11.0, we support `direct_upload` to S3.
> **Notes:**
> - [Introduced](https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/1762) in
> [GitLab Premium](https://about.gitlab.com/pricing/) 9.4.
> - Since version 9.5, artifacts are [browsable](../user/project/pipelines/job_artifacts.md#browsing-artifacts),
> when object storage is enabled. 9.4 lacks this feature.
> - Since version 10.6, available in [GitLab Core](https://about.gitlab.com/pricing/)
> - Since version 11.0, we support `direct_upload` to S3.
If you don't want to use the local disk where GitLab is installed to store the
artifacts, you can use an object storage like AWS S3 instead.
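
A minimal Omnibus configuration for this might look like the following (a sketch only; the bucket name, region, and credentials are placeholders, and the full set of options is covered in the steps of this section):

```ruby
# /etc/gitlab/gitlab.rb -- store new artifacts in an S3 bucket.
gitlab_rails['artifacts_object_store_enabled'] = true
gitlab_rails['artifacts_object_store_remote_directory'] = "artifacts"
gitlab_rails['artifacts_object_store_connection'] = {
  'provider' => 'AWS',
  'region' => 'eu-central-1',
  'aws_access_key_id' => 'AWS_ACCESS_KEY_ID',
  'aws_secret_access_key' => 'AWS_SECRET_ACCESS_KEY'
}
```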
......@@ -162,15 +162,15 @@ _The artifacts are stored by default in
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
1. Migrate any existing local artifacts to the object storage:
```bash
gitlab-rake gitlab:artifacts:migrate
```
```bash
gitlab-rake gitlab:artifacts:migrate
```
Currently this has to be executed manually and it will allow you to
migrate the existing artifacts to the object storage, but all new
artifacts will still be stored on the local disk. In the future
you will be given an option to define a default storage artifacts for all
new files.
Currently this has to be executed manually and it will allow you to
migrate the existing artifacts to the object storage, but all new
artifacts will still be stored on the local disk. In the future
you will be given an option to define a default storage artifacts for all
new files.
---
......@@ -198,15 +198,15 @@ _The artifacts are stored by default in
1. Save the file and [restart GitLab][] for the changes to take effect.
1. Migrate any existing local artifacts to the object storage:
```bash
sudo -u git -H bundle exec rake gitlab:artifacts:migrate RAILS_ENV=production
```
```bash
sudo -u git -H bundle exec rake gitlab:artifacts:migrate RAILS_ENV=production
```
Currently this has to be executed manually and it will allow you to
migrate the existing artifacts to the object storage, but all new
artifacts will still be stored on the local disk. In the future
you will be given an option to define a default storage artifacts for all
new files.
Currently this has to be executed manually and it will allow you to
migrate the existing artifacts to the object storage, but all new
artifacts will still be stored on the local disk. In the future
you will be given an option to define a default storage artifacts for all
new files.
## Expiring artifacts
......@@ -266,6 +266,7 @@ you can flip the feature flag from a Rails console.
```ruby
Feature.enable('ci_disable_validates_dependencies')
```
---
**In installations from source:**
......
......@@ -67,24 +67,24 @@ To archive those legacy job traces, please follow the instruction below.
1. Execute the following command
```bash
gitlab-rake gitlab:traces:archive
```
```bash
gitlab-rake gitlab:traces:archive
```
After you executed this task, GitLab instance queues up Sidekiq jobs (asynchronous processes)
for migrating job trace files from local storage to object storage.
It could take time to complete the all migration jobs. You can check the progress by the following command
After you executed this task, GitLab instance queues up Sidekiq jobs (asynchronous processes)
for migrating job trace files from local storage to object storage.
It could take time to complete the all migration jobs. You can check the progress by the following command
```bash
sudo gitlab-rails console
```
```bash
sudo gitlab-rails console
```
```bash
[1] pry(main)> Sidekiq::Stats.new.queues['pipeline_background:archive_trace']
=> 100
```
```bash
[1] pry(main)> Sidekiq::Stats.new.queues['pipeline_background:archive_trace']
=> 100
```
If the count becomes zero, the archiving processes are done
If the count becomes zero, the archiving processes are done
## How to migrate archived job traces to object storage
......@@ -95,9 +95,9 @@ If job traces have already been archived into local storage, and you want to mig
1. Ensure [Object storage integration for Job Artifacts](job_artifacts.md#object-storage-settings) is enabled
1. Execute the following command
```bash
gitlab-rake gitlab:traces:migrate
```
```bash
gitlab-rake gitlab:traces:migrate
```
## How to remove job traces
......@@ -185,15 +185,15 @@ with the legacy architecture.
In some cases, having data stored on Redis could incur data loss:
1. **Case 1: When all data in Redis are accidentally flushed**
- On going live traces could be recovered by re-sending traces (this is
supported by all versions of the GitLab Runner).
- Finished jobs which have not archived live traces will lose the last part
(~128KB) of trace data.
- On going live traces could be recovered by re-sending traces (this is
supported by all versions of the GitLab Runner).
- Finished jobs which have not archived live traces will lose the last part
(~128KB) of trace data.
1. **Case 2: When Sidekiq workers fail to archive (e.g., there was a bug that
prevents archiving process, Sidekiq inconsistency, etc.)**
- Currently all trace data in Redis will be deleted after one week. If the
Sidekiq workers can't finish by the expiry date, the part of trace data will be lost.
- Currently all trace data in Redis will be deleted after one week. If the
Sidekiq workers can't finish by the expiry date, the part of trace data will be lost.
Another issue that might arise is that it could consume all memory on the Redis
instance. If the number of jobs is 1000, 128MB (128KB * 1000) is consumed.
......
# GitLab Prometheus
>**Notes:**
- Prometheus and the various exporters listed in this page are bundled in the
Omnibus GitLab package. Check each exporter's documentation for the timeline
they got added. For installations from source you will have to install them
yourself. Over subsequent releases additional GitLab metrics will be captured.
- Prometheus services are on by default with GitLab 9.0.
- Prometheus and its exporters do not authenticate users, and will be available
to anyone who can access them.
> **Notes:**
> - Prometheus and the various exporters listed in this page are bundled in the
> Omnibus GitLab package. Check each exporter's documentation for when it
> was added. For installations from source you will have to install them
> yourself. Over subsequent releases additional GitLab metrics will be captured.
> - Prometheus services are on by default with GitLab 9.0.
> - Prometheus and its exporters do not authenticate users, and will be available
> to anyone who can access them.
[Prometheus] is a powerful time-series monitoring service, providing a flexible
platform for monitoring GitLab and other software products.
......@@ -107,7 +107,7 @@ Sample Prometheus queries:
> Introduced in GitLab 9.0.
> Pod monitoring introduced in GitLab 9.4.
If your GitLab server is running within Kubernetes, Prometheus will collect metrics from the Nodes and [annotated Pods](https://prometheus.io/docs/operating/configuration/#<kubernetes_sd_config>) in the cluster, including performance data on each container. This is particularly helpful if your CI/CD environments run in the same cluster, as you can use the [Prometheus project integration][] to monitor them.
If your GitLab server is running within Kubernetes, Prometheus will collect metrics from the Nodes and [annotated Pods](https://prometheus.io/docs/operating/configuration/#kubernetes_sd_config) in the cluster, including performance data on each container. This is particularly helpful if your CI/CD environments run in the same cluster, as you can use the [Prometheus project integration][] to monitor them.
To disable the monitoring of Kubernetes:
......
......@@ -5,13 +5,13 @@ description: 'Learn how to administer GitLab Pages.'
# GitLab Pages administration
> **Notes:**
- [Introduced][ee-80] in GitLab EE 8.3.
- Custom CNAMEs with TLS support were [introduced][ee-173] in GitLab EE 8.5.
- GitLab Pages [were ported][ce-14605] to Community Edition in GitLab 8.17.
- This guide is for Omnibus GitLab installations. If you have installed
GitLab from source, follow the [Pages source installation document](source.md).
- To learn how to use GitLab Pages, read the [user documentation][pages-userguide].
- Does NOT support subgroups. See [this issue](https://gitlab.com/gitlab-org/gitlab-ce/issues/30548) for more information and status.
> - [Introduced][ee-80] in GitLab EE 8.3.
> - Custom CNAMEs with TLS support were [introduced][ee-173] in GitLab EE 8.5.
> - GitLab Pages [were ported][ce-14605] to Community Edition in GitLab 8.17.
> - This guide is for Omnibus GitLab installations. If you have installed
> GitLab from source, follow the [Pages source installation document](source.md).
> - To learn how to use GitLab Pages, read the [user documentation][pages-userguide].
> - Does NOT support subgroups. See [this issue](https://gitlab.com/gitlab-org/gitlab-ce/issues/30548) for more information and status.
This document describes how to set up the _latest_ GitLab Pages feature. Make
sure to read the [changelog](#changelog) if you are upgrading to a new GitLab
......@@ -107,12 +107,12 @@ since that is needed in all configurations.
### Wildcard domains
>**Requirements:**
- [Wildcard DNS setup](#dns-configuration)
> **Requirements:**
> - [Wildcard DNS setup](#dns-configuration)
>
>---
> ---
>
URL scheme: `http://page.example.io`
> URL scheme: `http://page.example.io`
This is the minimum setup that you can use Pages with. It is the base for all
other setups as described below. Nginx will proxy all requests to the daemon.
......@@ -131,13 +131,13 @@ Watch the [video tutorial][video-admin] for this configuration.
### Wildcard domains with TLS support
>**Requirements:**
- [Wildcard DNS setup](#dns-configuration)
- Wildcard TLS certificate
> **Requirements:**
> - [Wildcard DNS setup](#dns-configuration)
> - Wildcard TLS certificate
>
>---
> ---
>
URL scheme: `https://page.example.io`
> URL scheme: `https://page.example.io`
Nginx will proxy all requests to the daemon. Pages daemon doesn't listen to the
outside world.
......@@ -168,13 +168,13 @@ you have IPv6 as well as IPv4 addresses, you can use them both.
### Custom domains
>**Requirements:**
- [Wildcard DNS setup](#dns-configuration)
- Secondary IP
> **Requirements:**
> - [Wildcard DNS setup](#dns-configuration)
> - Secondary IP
>
---
> ---
>
URL scheme: `http://page.example.io` and `http://domain.com`
> URL scheme: `http://page.example.io` and `http://domain.com`
In that case, the Pages daemon is running, Nginx still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
......@@ -197,14 +197,14 @@ world. Custom domains are supported, but no TLS.
### Custom domains with TLS support
>**Requirements:**
- [Wildcard DNS setup](#dns-configuration)
- Wildcard TLS certificate
- Secondary IP
> **Requirements:**
> - [Wildcard DNS setup](#dns-configuration)
> - Wildcard TLS certificate
> - Secondary IP
>
---
> ---
>
URL scheme: `https://page.example.io` and `https://domain.com`
> URL scheme: `https://page.example.io` and `https://domain.com`
In that case, the Pages daemon is running, Nginx still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
......@@ -251,9 +251,9 @@ Follow the steps below to configure verbose logging of GitLab Pages daemon.
If you wish to make it log events with level `DEBUG` you must configure this in
`/etc/gitlab/gitlab.rb`:
```shell
gitlab_pages['log_verbose'] = true
```
```shell
gitlab_pages['log_verbose'] = true
```
1. [Reconfigure GitLab][reconfigure]
......@@ -266,9 +266,9 @@ are stored.
If you wish to store them in another location you must set it up in
`/etc/gitlab/gitlab.rb`:
```shell
gitlab_rails['pages_path'] = "/mnt/storage/pages"
```
```shell
gitlab_rails['pages_path'] = "/mnt/storage/pages"
```
1. [Reconfigure GitLab][reconfigure]
......@@ -279,19 +279,19 @@ Omnibus GitLab 11.1.
1. By default the listener is configured to listen for requests on `localhost:8090`.
If you wish to disable it you must configure this in
`/etc/gitlab/gitlab.rb`:
If you wish to disable it you must configure this in
`/etc/gitlab/gitlab.rb`:
```shell
gitlab_pages['listen_proxy'] = nil
```
```shell
gitlab_pages['listen_proxy'] = nil
```
If you wish to make it listen on a different port you must configure this also in
`/etc/gitlab/gitlab.rb`:
If you wish to make it listen on a different port you must configure this also in
`/etc/gitlab/gitlab.rb`:
```shell
gitlab_pages['listen_proxy'] = "localhost:10080"
```
```shell
gitlab_pages['listen_proxy'] = "localhost:10080"
```
1. [Reconfigure GitLab][reconfigure]
......
......@@ -89,11 +89,11 @@ since that is needed in all configurations.
### Wildcard domains
>**Requirements:**
- [Wildcard DNS setup](#dns-configuration)
> - [Wildcard DNS setup](#dns-configuration)
>
>---
> ---
>
URL scheme: `http://page.example.io`
> URL scheme: `http://page.example.io`
This is the minimum setup that you can use Pages with. It is the base for all
other setups as described below. Nginx will proxy all requests to the daemon.
......@@ -111,24 +111,24 @@ The Pages daemon doesn't listen to the outside world.
1. Go to the GitLab installation directory:
```bash
cd /home/git/gitlab
```
```bash
cd /home/git/gitlab
```
1. Edit `gitlab.yml` and under the `pages` setting, set `enabled` to `true` and
the `host` to the FQDN under which GitLab Pages will be served:
```yaml
## GitLab Pages
pages:
enabled: true
# The location where pages are stored (default: shared/pages).
# path: shared/pages
```yaml
## GitLab Pages
pages:
enabled: true
# The location where pages are stored (default: shared/pages).
# path: shared/pages
host: example.io
port: 80
https: false
```
host: example.io
port: 80
https: false
```
1. Edit `/etc/default/gitlab` and set `gitlab_pages_enabled` to `true` in
order to enable the pages daemon. In `gitlab_pages_options` the
......@@ -151,13 +151,13 @@ The Pages daemon doesn't listen to the outside world.
### Wildcard domains with TLS support
>**Requirements:**
- [Wildcard DNS setup](#dns-configuration)
- Wildcard TLS certificate
> **Requirements:**
> - [Wildcard DNS setup](#dns-configuration)
> - Wildcard TLS certificate
>
>---
> ---
>
URL scheme: `https://page.example.io`
> URL scheme: `https://page.example.io`
Nginx will proxy all requests to the daemon. Pages daemon doesn't listen to the
outside world.
......@@ -174,17 +174,17 @@ outside world.
1. In `gitlab.yml`, set the port to `443` and https to `true`:
```bash
## GitLab Pages
pages:
enabled: true
# The location where pages are stored (default: shared/pages).
# path: shared/pages
host: example.io
port: 443
https: true
```
```bash
## GitLab Pages
pages:
enabled: true
# The location where pages are stored (default: shared/pages).
# path: shared/pages
host: example.io
port: 443
https: true
```
1. Edit `/etc/default/gitlab` and set `gitlab_pages_enabled` to `true` in
order to enable the pages daemon. In `gitlab_pages_options` the
......@@ -216,13 +216,13 @@ that without TLS certificates.
### Custom domains
>**Requirements:**
- [Wildcard DNS setup](#dns-configuration)
- Secondary IP
> **Requirements:**
> - [Wildcard DNS setup](#dns-configuration)
> - Secondary IP
>
---
> ---
>
URL scheme: `http://page.example.io` and `http://domain.com`
> URL scheme: `http://page.example.io` and `http://domain.com`
In that case, the pages daemon is running, Nginx still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
......@@ -243,18 +243,18 @@ world. Custom domains are supported, but no TLS.
`external_http` to the secondary IP on which the pages daemon will listen
for connections:
```yaml
pages:
enabled: true
# The location where pages are stored (default: shared/pages).
# path: shared/pages
```yaml
pages:
enabled: true
# The location where pages are stored (default: shared/pages).
# path: shared/pages
host: example.io
port: 80
https: false
host: example.io
port: 80
https: false
external_http: 192.0.2.2:80
```
external_http: 192.0.2.2:80
```
1. Edit `/etc/default/gitlab` and set `gitlab_pages_enabled` to `true` in
order to enable the pages daemon. In `gitlab_pages_options` the
......@@ -281,14 +281,14 @@ world. Custom domains are supported, but no TLS.
### Custom domains with TLS support
>**Requirements:**
- [Wildcard DNS setup](#dns-configuration)
- Wildcard TLS certificate
- Secondary IP
> **Requirements:**
> - [Wildcard DNS setup](#dns-configuration)
> - Wildcard TLS certificate
> - Secondary IP
>
---
> ---
>
URL scheme: `https://page.example.io` and `https://domain.com`
> URL scheme: `https://page.example.io` and `https://domain.com`
In that case, the pages daemon is running, Nginx still proxies requests to
the daemon but the daemon is also able to receive requests from the outside
......@@ -309,20 +309,20 @@ world. Custom domains and TLS are supported.
`external_http` and `external_https` to the secondary IP on which the pages
daemon will listen for connections:
```yaml
## GitLab Pages
pages:
enabled: true
# The location where pages are stored (default: shared/pages).
# path: shared/pages
```yaml
## GitLab Pages
pages:
enabled: true
# The location where pages are stored (default: shared/pages).
# path: shared/pages
host: example.io
port: 443
https: true
host: example.io
port: 443
https: true
external_http: 192.0.2.2:80
external_https: 192.0.2.2:443
```
external_http: 192.0.2.2:80
external_https: 192.0.2.2:443
```
1. Edit `/etc/default/gitlab` and set `gitlab_pages_enabled` to `true` in
order to enable the pages daemon. In `gitlab_pages_options` the
......@@ -358,9 +358,9 @@ are stored.
If you wish to store them in another location you must set it up in
`/etc/gitlab/gitlab.rb`:
```ruby
gitlab_rails['pages_path'] = "/mnt/storage/pages"
```
```ruby
gitlab_rails['pages_path'] = "/mnt/storage/pages"
```
1. [Reconfigure GitLab][reconfigure]
......@@ -400,12 +400,12 @@ are stored.
If you wish to store them in another location you must set it up in
`gitlab.yml` under the `pages` section:
```yaml
pages:
enabled: true
# The location where pages are stored (default: shared/pages).
path: /mnt/storage/pages
```
```yaml
pages:
enabled: true
# The location where pages are stored (default: shared/pages).
path: /mnt/storage/pages
```
1. [Restart GitLab][restart]
......
......@@ -5,36 +5,36 @@
GitLab allows you to define multiple repository storage paths to distribute the
storage load between several mount points.
>**Notes:**
> **Notes:**
>
- You must have at least one storage path called `default`.
- The paths are defined in key-value pairs. The key is an arbitrary name you
can pick to name the file path.
- The target directories and any of its subpaths must not be a symlink.
> - You must have at least one storage path called `default`.
> - The paths are defined in key-value pairs. The key is an arbitrary name you
> can pick to name the file path.
> - The target directories and any of its subpaths must not be a symlink.
## Configure GitLab
>**Warning:**
In order for [backups] to work correctly, the storage path must **not** be a
mount point and the GitLab user should have correct permissions for the parent
directory of the path. In Omnibus GitLab this is taken care of automatically,
but for source installations you should be extra careful.
> **Warning:**
> In order for [backups] to work correctly, the storage path must **not** be a
> mount point and the GitLab user should have correct permissions for the parent
> directory of the path. In Omnibus GitLab this is taken care of automatically,
> but for source installations you should be extra careful.
>
The thing is that for compatibility reasons `gitlab.yml` has a different
structure than Omnibus. In `gitlab.yml` you indicate the path for the
repositories, for example `/home/git/repositories`, while in Omnibus you
indicate `git_data_dirs`, which for the example above would be `/home/git`.
Then, Omnibus will create a `repositories` directory under that path to use with
`gitlab.yml`.
> The thing is that for compatibility reasons `gitlab.yml` has a different
> structure than Omnibus. In `gitlab.yml` you indicate the path for the
> repositories, for example `/home/git/repositories`, while in Omnibus you
> indicate `git_data_dirs`, which for the example above would be `/home/git`.
> Then, Omnibus will create a `repositories` directory under that path to use with
> `gitlab.yml`.
>
This little detail matters because while restoring a backup, the current
contents of `/home/git/repositories` [are moved to][raketask] `/home/git/repositories.old`,
so if `/home/git/repositories` is the mount point, then `mv` would be moving
things between mount points, and bad things could happen. Ideally,
`/home/git` would be the mount point, so then things would be moving within the
same mount point. This is guaranteed with Omnibus installations (because they
don't specify the full repository path but the parent path), but not for source
installations.
> This little detail matters because while restoring a backup, the current
> contents of `/home/git/repositories` [are moved to][raketask] `/home/git/repositories.old`,
> so if `/home/git/repositories` is the mount point, then `mv` would be moving
> things between mount points, and bad things could happen. Ideally,
> `/home/git` would be the mount point, so then things would be moving within the
> same mount point. This is guaranteed with Omnibus installations (because they
> don't specify the full repository path but the parent path), but not for source
> installations.
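
To make the difference concrete, here is a sketch of the Omnibus form (paths are examples only):

```ruby
# Omnibus, /etc/gitlab/gitlab.rb: point at the *parent* directory and
# Omnibus creates <path>/repositories for you.
git_data_dirs({
  "default" => { "path" => "/home/git" }
})
# Source installations instead set the full repositories path in
# gitlab.yml, e.g. /home/git/repositories.
```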
---
......
......@@ -8,15 +8,15 @@ may not show up and merge requests may not be updated. The following are some
troubleshooting steps that will help you diagnose the bottleneck.
> **Note:** GitLab administrators/users should consider working through these
debug steps with GitLab Support so the backtraces can be analyzed by our team.
It may reveal a bug or necessary improvement in GitLab.
> debug steps with GitLab Support so the backtraces can be analyzed by our team.
> It may reveal a bug or necessary improvement in GitLab.
>
> **Note:** In any of the backtraces, be wary of suspecting cases where every
thread appears to be waiting in the database, Redis, or waiting to acquire
a mutex. This **may** mean there's contention in the database, for example,
but look for one thread that is different than the rest. This other thread
may be using all available CPU, or have a Ruby Global Interpreter Lock,
preventing other threads from continuing.
> thread appears to be waiting in the database, Redis, or waiting to acquire
> a mutex. This **may** mean there's contention in the database, for example,
> but look for one thread that is different than the rest. This other thread
> may be using all available CPU, or have a Ruby Global Interpreter Lock,
> preventing other threads from continuing.
## Thread dump
......
......@@ -50,9 +50,10 @@ _The uploads are stored by default in
### Using object storage
>**Notes:**
- [Introduced][ee-3867] in [GitLab Enterprise Edition Premium][eep] 10.5.
- Since version 11.1, we support direct_upload to S3.
> **Notes:**
>
> - [Introduced][ee-3867] in [GitLab Enterprise Edition Premium][eep] 10.5.
> - Since version 11.1, we support direct_upload to S3.
If you don't want to use the local disk where GitLab is installed to store the
uploads, you can use an object storage provider like AWS S3 instead.
......@@ -105,8 +106,8 @@ _The uploads are stored by default in
}
```
>**Note:**
If you are using AWS IAM profiles, be sure to omit the AWS access key and secret access key/value pairs.
>**Note:**
If you are using AWS IAM profiles, be sure to omit the AWS access key and secret access key/value pairs.
```ruby
gitlab_rails['uploads_object_store_connection'] = {
......@@ -119,28 +120,28 @@ If you are using AWS IAM profiles, be sure to omit the AWS access key and secret
1. Save the file and [reconfigure GitLab][] for the changes to take effect.
1. Migrate any existing local uploads to the object storage:
>**Notes:**
These task complies with the `BATCH` environment variable to process uploads in batch (200 by default). All of the processing will be done in a background worker and requires **no downtime**.
```bash
# gitlab-rake gitlab:uploads:migrate[uploader_class, model_class, mount_point]
# Avatars
gitlab-rake "gitlab:uploads:migrate[AvatarUploader, Project, :avatar]"
gitlab-rake "gitlab:uploads:migrate[AvatarUploader, Group, :avatar]"
gitlab-rake "gitlab:uploads:migrate[AvatarUploader, User, :avatar]"
# Attachments
gitlab-rake "gitlab:uploads:migrate[AttachmentUploader, Note, :attachment]"
gitlab-rake "gitlab:uploads:migrate[AttachmentUploader, Appearance, :logo]"
gitlab-rake "gitlab:uploads:migrate[AttachmentUploader, Appearance, :header_logo]"
# Markdown
gitlab-rake "gitlab:uploads:migrate[FileUploader, Project]"
gitlab-rake "gitlab:uploads:migrate[PersonalFileUploader, Snippet]"
gitlab-rake "gitlab:uploads:migrate[NamespaceFileUploader, Snippet]"
gitlab-rake "gitlab:uploads:migrate[FileUploader, MergeRequest]"
```
> **Notes:**
> These task complies with the `BATCH` environment variable to process uploads in batch (200 by default). All of the processing will be done in a background worker and requires **no downtime**.
```bash
# gitlab-rake gitlab:uploads:migrate[uploader_class, model_class, mount_point]
# Avatars
gitlab-rake "gitlab:uploads:migrate[AvatarUploader, Project, :avatar]"
gitlab-rake "gitlab:uploads:migrate[AvatarUploader, Group, :avatar]"
gitlab-rake "gitlab:uploads:migrate[AvatarUploader, User, :avatar]"
# Attachments
gitlab-rake "gitlab:uploads:migrate[AttachmentUploader, Note, :attachment]"
gitlab-rake "gitlab:uploads:migrate[AttachmentUploader, Appearance, :logo]"
gitlab-rake "gitlab:uploads:migrate[AttachmentUploader, Appearance, :header_logo]"
# Markdown
gitlab-rake "gitlab:uploads:migrate[FileUploader, Project]"
gitlab-rake "gitlab:uploads:migrate[PersonalFileUploader, Snippet]"
gitlab-rake "gitlab:uploads:migrate[NamespaceFileUploader, Snippet]"
gitlab-rake "gitlab:uploads:migrate[FileUploader, MergeRequest]"
```
---
......@@ -167,32 +168,30 @@ _The uploads are stored by default in
1. Save the file and [restart GitLab][] for the changes to take effect.
1. Migrate any existing local uploads to the object storage:
>**Notes:**
- These task comply with the `BATCH` environment variable to process uploads in batch (200 by default). All of the processing will be done in a background worker and requires **no downtime**.
- To migrate in production use `RAILS_ENV=production` environment variable.
```bash
# sudo -u git -H bundle exec rake gitlab:uploads:migrate
# Avatars
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AvatarUploader, Project, :avatar]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AvatarUploader, Group, :avatar]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AvatarUploader, User, :avatar]"
# Attachments
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AttachmentUploader, Note, :attachment]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AttachmentUploader, Appearance, :logo]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AttachmentUploader, Appearance, :header_logo]"
# Markdown
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[FileUploader, Project]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[PersonalFileUploader, Snippet]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[NamespaceFileUploader, Snippet]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[FileUploader, MergeRequest]"
```
> **Notes:**
> - These task comply with the `BATCH` environment variable to process uploads in batch (200 by default). All of the processing will be done in a background worker and requires **no downtime**.
> - To migrate in production use `RAILS_ENV=production` environment variable.
```bash
# sudo -u git -H bundle exec rake gitlab:uploads:migrate
# Avatars
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AvatarUploader, Project, :avatar]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AvatarUploader, Group, :avatar]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AvatarUploader, User, :avatar]"
# Attachments
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AttachmentUploader, Note, :attachment]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AttachmentUploader, Appearance, :logo]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[AttachmentUploader, Appearance, :header_logo]"
# Markdown
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[FileUploader, Project]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[PersonalFileUploader, Snippet]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[NamespaceFileUploader, Snippet]"
sudo -u git -H bundle exec rake "gitlab:uploads:migrate[FileUploader, MergeRequest]"
```
[reconfigure gitlab]: restart_gitlab.md#omnibus-gitlab-reconfigure "How to reconfigure Omnibus GitLab"
[restart gitlab]: restart_gitlab.md#installations-from-source "How to restart GitLab"
......
......@@ -319,7 +319,8 @@ Example of response
## Get job artifacts
> **Notes**:
- [Introduced][ce-2893] in GitLab 8.5.
>
> - [Introduced][ce-2893] in GitLab 8.5.
Get job artifacts of a project.
......@@ -350,7 +351,8 @@ Response:
## Download the artifacts archive
> **Notes**:
- [Introduced][ce-5347] in GitLab 8.10.
>
> - [Introduced][ce-5347] in GitLab 8.10.
Download the artifacts archive from the given reference name and job provided the
job finished successfully.
......
......@@ -544,10 +544,10 @@ GET /projects/:id/services/jira
Set JIRA service for a project.
>**Notes:**
- Starting with GitLab 8.14, `api_url`, `issues_url`, `new_issue_url` and
`project_url` are replaced by `project_key`, `url`. If you are using an
older version, [follow this documentation][old-jira-api].
> **Notes:**
> - Starting with GitLab 8.14, `api_url`, `issues_url`, `new_issue_url` and
> `project_url` are replaced by `project_key`, `url`. If you are using an
> older version, [follow this documentation][old-jira-api].
```
PUT /projects/:id/services/jira
......
......@@ -76,8 +76,8 @@ Below are the changes made between V3 and V4.
- Simplify project payload exposed on Environment endpoints [!9675](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/9675)
- API uses merge request `IID`s (internal ID, as in the web UI) rather than `ID`s. This affects the merge requests, award emoji, todos, and time tracking APIs. [!9530](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/9530)
- API uses issue `IID`s (internal ID, as in the web UI) rather than `ID`s. This affects the issues, award emoji, todos, and time tracking APIs. [!9530](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/9530)
- Change initial page from `0` to `1` on `GET /projects/:id/repository/commits` (like on the rest of the API) [!9679] (https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/9679)
- Return correct `Link` header data for `GET /projects/:id/repository/commits` [!9679] (https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/9679)
- Change initial page from `0` to `1` on `GET /projects/:id/repository/commits` (like on the rest of the API) [!9679](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/9679)
- Return correct `Link` header data for `GET /projects/:id/repository/commits` [!9679](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/9679)
- Update endpoints for repository files [!9637](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/9637)
- Moved `GET /projects/:id/repository/files?file_path=:file_path` to `GET /projects/:id/repository/files/:file_path` (`:file_path` should be URL-encoded)
- `GET /projects/:id/repository/blobs/:sha` now returns JSON attributes for the blob identified by `:sha`, instead of finding the commit identified by `:sha` and returning the raw content of the blob in that commit identified by the required `?filepath=:filepath`
......
......@@ -381,17 +381,18 @@ environment = ["DOCKER_DRIVER=overlay2"]
If you're running multiple Runners you will have to modify all configuration files.
> **Notes:**
- More information about the Runner configuration is available in the [Runner documentation](https://docs.gitlab.com/runner/configuration/).
- For more information about using OverlayFS with Docker, you can read
[Use the OverlayFS storage driver](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/).
>
> - More information about the Runner configuration is available in the [Runner documentation](https://docs.gitlab.com/runner/configuration/).
> - For more information about using OverlayFS with Docker, you can read
> [Use the OverlayFS storage driver](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/).
## Using the GitLab Container Registry
> **Notes:**
- This feature requires GitLab 8.8 and GitLab Runner 1.2.
- Starting from GitLab 8.12, if you have [2FA] enabled in your account, you need
to pass a [personal access token][pat] instead of your password in order to
login to GitLab's Container Registry.
> - This feature requires GitLab 8.8 and GitLab Runner 1.2.
> - Starting from GitLab 8.12, if you have [2FA] enabled in your account, you need
> to pass a [personal access token][pat] instead of your password in order to
> login to GitLab's Container Registry.
Once you've built a Docker image, you can push it up to the built-in
[GitLab Container Registry](../../user/project/container_registry.md). For example,
......
......@@ -452,13 +452,14 @@ that runner.
## Define an image from a private Container Registry
> **Notes:**
- This feature requires GitLab Runner **1.8** or higher
- For GitLab Runner versions **>= 0.6, <1.8** there was a partial
support for using private registries, which required manual configuration
of credentials on runner's host. We recommend to upgrade your Runner to
at least version **1.8** if you want to use private registries.
- If the repository is private you need to authenticate your GitLab Runner in the
registry. Learn more about how [GitLab Runner works in this case][runner-priv-reg].
>
> - This feature requires GitLab Runner **1.8** or higher
> - For GitLab Runner versions **>= 0.6, <1.8** there was a partial
> support for using private registries, which required manual configuration
> of credentials on runner's host. We recommend to upgrade your Runner to
> at least version **1.8** if you want to use private registries.
> - If the repository is private you need to authenticate your GitLab Runner in the
> registry. Learn more about how [GitLab Runner works in this case][runner-priv-reg].
As an example, let's assume that you want to use the `registry.example.com/private/image:latest`
image which is private and requires you to login into a private container registry.
......@@ -475,57 +476,57 @@ To configure access for `registry.example.com`, follow these steps:
1. Find what the value of `DOCKER_AUTH_CONFIG` should be. There are two ways to
accomplish this:
- **First way -** Do a `docker login` on your local machine:
- **First way -** Do a `docker login` on your local machine:
```bash
docker login registry.example.com --username my_username --password my_password
```
```bash
docker login registry.example.com --username my_username --password my_password
```
Then copy the content of `~/.docker/config.json`.
- **Second way -** In some setups, it's possible that Docker client will use
Then copy the content of `~/.docker/config.json`.
- **Second way -** In some setups, it's possible that Docker client will use
the available system keystore to store the result of `docker login`. In
that case, it's impossible to read `~/.docker/config.json`, so you will
need to prepare the required base64-encoded version of
`${username}:${password}` manually. Open a terminal and execute the
following command:
```bash
echo -n "my_username:my_password" | base64
```bash
echo -n "my_username:my_password" | base64
# Example output to copy
bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=
```
# Example output to copy
bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=
```
1. Create a [variable] `DOCKER_AUTH_CONFIG` with the content of the
Docker configuration file as the value:
```json
{
"auths": {
"registry.example.com": {
"auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
}
}
}
```
```json
{
"auths": {
"registry.example.com": {
"auth": "bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ="
}
}
}
```
1. Optionally, if you followed the first way of finding the `DOCKER_AUTH_CONFIG`
value, do a `docker logout` on your computer if you don't need access to the
registry from it:
```bash
docker logout registry.example.com
```
```bash
docker logout registry.example.com
```
1. You can now use any private image from `registry.example.com` defined in
`image` and/or `services` in your `.gitlab-ci.yml` file:
```yaml
image: my.registry.tld:5000/namespace/image:tag
```
```yaml
image: my.registry.tld:5000/namespace/image:tag
```
In the example above, GitLab Runner will look at `my.registry.tld:5000` for the
image `namespace/image:tag`.
In the example above, GitLab Runner will look at `my.registry.tld:5000` for the
image `namespace/image:tag`.
You can add configuration for as many registries as you want, adding more
registries to the `"auths"` hash as described above.
......
......@@ -87,18 +87,18 @@ will later see, is exposed in various places within GitLab. Each time a job that
has an environment specified and succeeds, a deployment is recorded, remembering
the Git SHA and environment name.
>**Note:**
Starting with GitLab 8.15, the environment name is exposed to the Runner in
two forms: `$CI_ENVIRONMENT_NAME`, and `$CI_ENVIRONMENT_SLUG`. The first is
the name given in `.gitlab-ci.yml` (with any variables expanded), while the
second is a "cleaned-up" version of the name, suitable for use in URLs, DNS,
etc.
>**Note:**
Starting with GitLab 9.3, the environment URL is exposed to the Runner via
`$CI_ENVIRONMENT_URL`. The URL would be expanded from `.gitlab-ci.yml`, or if
the URL was not defined there, the external URL from the environment would be
used.
> **Note:**
> Starting with GitLab 8.15, the environment name is exposed to the Runner in
> two forms: `$CI_ENVIRONMENT_NAME`, and `$CI_ENVIRONMENT_SLUG`. The first is
> the name given in `.gitlab-ci.yml` (with any variables expanded), while the
> second is a "cleaned-up" version of the name, suitable for use in URLs, DNS,
> etc.
>
> **Note:**
> Starting with GitLab 9.3, the environment URL is exposed to the Runner via
> `$CI_ENVIRONMENT_URL`. The URL would be expanded from `.gitlab-ci.yml`, or if
> the URL was not defined there, the external URL from the environment would be
> used.
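
As an illustration (a sketch, separate from the pipeline discussed in this section; `deploy.sh` is a placeholder for your own deployment script), a job can consume these variables like so:

```yaml
deploy_review:
  stage: deploy
  script:
    # Both variables are available to the job's commands.
    - echo "Deploying $CI_ENVIRONMENT_NAME"
    - ./deploy.sh "$CI_ENVIRONMENT_SLUG"
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_ENVIRONMENT_SLUG.example.com
```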
To sum up, with the above `.gitlab-ci.yml` we have achieved that:
......@@ -134,14 +134,15 @@ There's a bunch of information there, specifically you can see:
- A button that re-deploys the latest deployment, meaning it runs the job
defined by the environment name for that specific commit
>**Notes:**
- While you can create environments manually in the web interface, we recommend
that you define your environments in `.gitlab-ci.yml` first. They will
be automatically created for you after the first deploy.
- The environments page can only be viewed by Reporters and above. For more
information on the permissions, see the [permissions documentation][permissions].
- Only deploys that happen after your `.gitlab-ci.yml` is properly configured
will show up in the "Environment" and "Last deployment" lists.
> **Notes:**
>
> - While you can create environments manually in the web interface, we recommend
> that you define your environments in `.gitlab-ci.yml` first. They will
> be automatically created for you after the first deploy.
> - The environments page can only be viewed by Reporters and above. For more
> information on the permissions, see the [permissions documentation][permissions].
> - Only deploys that happen after your `.gitlab-ci.yml` is properly configured
> will show up in the "Environment" and "Last deployment" lists.
The information shown in the Environments page is limited to the latest
deployments, but as you may have guessed an environment can have multiple
......@@ -563,13 +564,13 @@ exist, you should see something like:
## Monitoring environments
>**Notes:**
> **Notes:**
>
- For the monitoring dashboard to appear, you need to:
- Have enabled the [Prometheus integration][prom]
- Configured Prometheus to collect at least one [supported metric](../user/project/integrations/prometheus_library/metrics.md)
- With GitLab 9.2, all deployments to an environment are shown directly on the
monitoring dashboard
> - For the monitoring dashboard to appear, you need to:
> - Have enabled the [Prometheus integration][prom]
> - Configured Prometheus to collect at least one [supported metric](../user/project/integrations/prometheus_library/metrics.md)
> - With GitLab 9.2, all deployments to an environment are shown directly on the
> monitoring dashboard
If you have enabled [Prometheus for monitoring system and response metrics](https://docs.gitlab.com/ee/user/project/integrations/prometheus.html), you can monitor the performance behavior of your app running in each environment.
......
......@@ -85,7 +85,7 @@ When asked, answer `Y` to fetch and install dependencies.
If everything went fine, you'll get an output like this:
![`mix phoenix.new`](img/mix-phoenix-new.png)
![mix phoenix.new](img/mix-phoenix-new.png)
Now, our project is located inside the directory with the same name we passed to the `mix` command, for
example, `~/GitLab/hello_gitlab_ci`.
......@@ -145,7 +145,7 @@ Now, we have our app running locally. We can preview it directly on our browser.
not work, open [`127.0.0.1:4000`](http://127.0.0.1:4000) instead and later, configure your OS to
point `localhost` to `127.0.0.1`.
![`mix phoenix.server`](img/mix-phoenix-server.png)
![mix phoenix.server](img/mix-phoenix-server.png)
Great, now we have a local Phoenix Server running our app.
......
# Using Git submodules with GitLab CI
> **Notes:**
- GitLab 8.12 introduced a new [CI job permissions model][newperms] and you
are encouraged to upgrade your GitLab instance if you haven't done already.
If you are **not** using GitLab 8.12 or higher, you would need to work your way
around submodules in order to access the sources of e.g., `gitlab.com/group/project`
with the use of [SSH keys](ssh_keys/README.md).
- With GitLab 8.12 onward, your permissions are used to evaluate what a CI job
can access. More information about how this system works can be found in the
[Jobs permissions model](../user/permissions.md#job-permissions).
- The HTTP(S) Git protocol [must be enabled][gitpro] in your GitLab instance.
>
> - GitLab 8.12 introduced a new [CI job permissions model][newperms] and you
> are encouraged to upgrade your GitLab instance if you haven't done already.
> If you are **not** using GitLab 8.12 or higher, you would need to work your way
> around submodules in order to access the sources of e.g., `gitlab.com/group/project`
> with the use of [SSH keys](ssh_keys/README.md).
> - With GitLab 8.12 onward, your permissions are used to evaluate what a CI job
> can access. More information about how this system works can be found in the
> [Jobs permissions model](../user/permissions.md#job-permissions).
> - The HTTP(S) Git protocol [must be enabled][gitpro] in your GitLab instance.
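For orientation, a sketch of how a job typically fetches submodules on GitLab 8.12 or newer (the `GIT_SUBMODULE_STRATEGY` variable requires a sufficiently recent Runner, and the test command is a placeholder):

```yaml
variables:
  # Ask the Runner to check out submodules recursively before each job;
  # older Runners instead run `git submodule sync` and
  # `git submodule update --init` in before_script.
  GIT_SUBMODULE_STRATEGY: recursive

test:
  script:
    - run-tests   # placeholder for the project's real test command
```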
## Configuring the `.gitmodules` file
......
# Getting started with Review Apps
>
- [Introduced][ce-21971] in GitLab 8.12. Further additions were made in GitLab
8.13 and 8.14.
- Inspired by [Heroku's Review Apps][heroku-apps] which itself was inspired by
[Fourchette].
> - [Introduced][ce-21971] in GitLab 8.12. Further additions were made in GitLab
> 8.13 and 8.14.
> - Inspired by [Heroku's Review Apps][heroku-apps] which itself was inspired by
> [Fourchette].
The basis of Review Apps is the [dynamic environments] which allow you to create
a new environment (dynamically) for each one of your branches.
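A minimal sketch of such a dynamic environment (the job name, deployment command, and domain are placeholders):

```yaml
review:
  stage: deploy
  script:
    - deploy-review-app   # placeholder deployment command
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_ENVIRONMENT_SLUG.example.com
  only:
    - branches
  except:
    - master
```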
......
......@@ -144,9 +144,8 @@ An admin can enable/disable a specific Runner for projects:
## Protected Runners
>
[Introduced](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/13194)
in GitLab 10.0.
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/13194)
> in GitLab 10.0.
You can protect Runners from revealing sensitive information.
Whenever a Runner is protected, the Runner picks only jobs created on
......
......@@ -46,7 +46,7 @@ to access it. This is where an SSH key pair comes in handy.
1. You will first need to create an SSH key pair. For more information, follow
the instructions to [generate an SSH key](../../ssh/README.md#generating-a-new-ssh-key-pair).
**Do not** add a passphrase to the SSH key, or the `before_script` will\
**Do not** add a passphrase to the SSH key, or the `before_script` will
prompt for it.
1. Create a new [variable](../variables/README.md#variables).
......@@ -175,7 +175,7 @@ Now that the `SSH_KNOWN_HOSTS` variable is created, in addition to the
[content of `.gitlab-ci.yml`](#ssh-keys-when-using-the-docker-executor)
above, here's what more you need to add:
```yaml
```yaml
before_script:
##
## Assuming you created the SSH_KNOWN_HOSTS variable, uncomment the
......
# Triggering pipelines through the API
> **Notes**:
- [Introduced][ci-229] in GitLab CE 7.14.
- GitLab 8.12 has a completely redesigned job permissions system. Read all
about the [new model and its implications](../../user/project/new_ci_build_permissions_model.md#job-triggers).
>
> - [Introduced][ci-229] in GitLab CE 7.14.
> - GitLab 8.12 has a completely redesigned job permissions system. Read all
> about the [new model and its implications](../../user/project/new_ci_build_permissions_model.md#job-triggers).
Triggers can be used to force a pipeline rerun of a specific `ref` (branch or
tag) with an API call.
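For example, another project's pipeline can make that call from a job (the project ID, ref, and `TOKEN` below are placeholders):

```yaml
build_docs:
  stage: deploy
  script:
    # POST to the pipeline triggers endpoint of project 9 (placeholder ID)
    - "curl -X POST -F token=TOKEN -F ref=master https://gitlab.example.com/api/v4/projects/9/trigger/pipeline"
  only:
    - tags
```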
......@@ -49,11 +50,12 @@ The action is irreversible.
## Triggering a pipeline
> **Notes**:
- Valid refs are only the branches and tags. If you pass a commit SHA as a ref,
it will not trigger a job.
- If your project is public, passing the token in plain text is probably not the
wisest idea, so you might want to use a
[variable](../variables/README.md#variables) for that purpose.
>
> - Valid refs are only the branches and tags. If you pass a commit SHA as a ref,
> it will not trigger a job.
> - If your project is public, passing the token in plain text is probably not the
> wisest idea, so you might want to use a
> [variable](../variables/README.md#variables) for that purpose.
To trigger a job you need to send a `POST` request to GitLab's API endpoint:
......@@ -122,11 +124,12 @@ Now, whenever a new tag is pushed on project A, the job will run and the
## Triggering a pipeline from a webhook
> **Notes**:
- Introduced in GitLab 8.14.
- `ref` should be passed as part of the URL in order to take precedence over
`ref` from the webhook body that designates the branch ref that fired the
trigger in the source repository.
- `ref` should be URL-encoded if it contains slashes.
>
> - Introduced in GitLab 8.14.
> - `ref` should be passed as part of the URL in order to take precedence over
> `ref` from the webhook body that designates the branch ref that fired the
> trigger in the source repository.
> - `ref` should be URL-encoded if it contains slashes.
To trigger a job from a webhook of another project you need to add the following
webhook URL for Push and Tag events (change the project ID, ref and token):
......
......@@ -383,7 +383,7 @@ except master.
## `only` and `except` (complex)
> `refs` and `kubernetes` policies introduced in GitLab 10.0
>
> `variables` policy introduced in 10.7
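A hedged sketch of how these policies combine (the job name, refs, and variable are illustrative):

```yaml
deploy:
  script: deploy-to-staging   # placeholder
  only:
    refs:
      - master
      - schedules
    kubernetes: active
    variables:
      - $RELEASE == "staging"
```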
CAUTION: **Warning:**
......@@ -583,9 +583,10 @@ The above script will:
### `when:manual`
> **Notes:**
- Introduced in GitLab 8.10.
- Blocking manual actions were introduced in GitLab 9.0.
- Protected actions were introduced in GitLab 9.2.
>
> - Introduced in GitLab 8.10.
> - Blocking manual actions were introduced in GitLab 9.0.
> - Protected actions were introduced in GitLab 9.2.
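As a minimal, illustrative sketch of a blocking manual job (the job name and script are placeholders):

```yaml
deploy_prod:
  stage: deploy
  script:
    - ./deploy.sh production   # placeholder deployment script
  when: manual
  # `allow_failure: false` turns this into a blocking manual action (GitLab 9.0+)
  allow_failure: false
```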
Manual actions are a special type of job that are not executed automatically,
they need to be explicitly started by a user. An example usage of manual actions
......@@ -616,11 +617,11 @@ have ability to merge to this branch.
## `environment`
> **Notes:**
>
**Notes:**
- Introduced in GitLab 8.9.
- You can read more about environments and find more examples in the
[documentation about environments][environment].
> - Introduced in GitLab 8.9.
> - You can read more about environments and find more examples in the
> [documentation about environments][environment].
`environment` is used to define that a job deploys to a specific environment.
If `environment` is specified and no environment under that name exists, a new
......@@ -641,15 +642,15 @@ deployment to the `production` environment.
### `environment:name`
> **Notes:**
>
**Notes:**
- Introduced in GitLab 8.11.
- Before GitLab 8.11, the name of an environment could be defined as a string like
`environment: production`. The recommended way now is to define it under the
`name` keyword.
- The `name` parameter can use any of the defined CI variables,
including predefined, secure variables and `.gitlab-ci.yml` [`variables`](#variables).
You however cannot use variables defined under `script`.
> - Introduced in GitLab 8.11.
> - Before GitLab 8.11, the name of an environment could be defined as a string like
> `environment: production`. The recommended way now is to define it under the
> `name` keyword.
> - The `name` parameter can use any of the defined CI variables,
> including predefined, secure variables and `.gitlab-ci.yml` [`variables`](#variables).
> You however cannot use variables defined under `script`.
The `environment` name can contain:
......@@ -680,14 +681,14 @@ deploy to production:
### `environment:url`
> **Notes:**
>
**Notes:**
- Introduced in GitLab 8.11.
- Before GitLab 8.11, the URL could be added only in GitLab's UI. The
recommended way now is to define it in `.gitlab-ci.yml`.
- The `url` parameter can use any of the defined CI variables,
including predefined, secure variables and `.gitlab-ci.yml` [`variables`](#variables).
You however cannot use variables defined under `script`.
> - Introduced in GitLab 8.11.
> - Before GitLab 8.11, the URL could be added only in GitLab's UI. The
> recommended way now is to define it in `.gitlab-ci.yml`.
> - The `url` parameter can use any of the defined CI variables,
> including predefined, secure variables and `.gitlab-ci.yml` [`variables`](#variables).
> You however cannot use variables defined under `script`.
This is an optional value that when set, it exposes buttons in various places
in GitLab which when clicked take you to the defined URL.
......@@ -707,12 +708,12 @@ deploy to production:
### `environment:on_stop`
> **Notes:**
>
**Notes:**
- [Introduced][ce-6669] in GitLab 8.13.
- Starting with GitLab 8.14, when you have an environment that has a stop action
defined, GitLab will automatically trigger a stop action when the associated
branch is deleted.
> - [Introduced][ce-6669] in GitLab 8.13.
> - Starting with GitLab 8.14, when you have an environment that has a stop action
> defined, GitLab will automatically trigger a stop action when the associated
> branch is deleted.
Closing (stopping) environments can be achieved with the `on_stop` keyword defined under
`environment`. It declares a different job that runs in order to close
......@@ -763,13 +764,13 @@ The `stop_review_app` job is **required** to have the following keywords defined
### Dynamic environments
> **Notes:**
>
**Notes:**
- [Introduced][ce-6323] in GitLab 8.12 and GitLab Runner 1.6.
- The `$CI_ENVIRONMENT_SLUG` was [introduced][ce-7983] in GitLab 8.15.
- The `name` and `url` parameters can use any of the defined CI variables,
including predefined, secure variables and `.gitlab-ci.yml` [`variables`](#variables).
You however cannot use variables defined under `script`.
> - [Introduced][ce-6323] in GitLab 8.12 and GitLab Runner 1.6.
> - The `$CI_ENVIRONMENT_SLUG` was [introduced][ce-7983] in GitLab 8.15.
> - The `name` and `url` parameters can use any of the defined CI variables,
> including predefined, secure variables and `.gitlab-ci.yml` [`variables`](#variables).
> You however cannot use variables defined under `script`.
For example:
......@@ -799,13 +800,13 @@ as Review Apps. You can see a simple example using Review Apps at
## `cache`
> **Notes:**
>
**Notes:**
- Introduced in GitLab Runner v0.7.0.
- `cache` can be set globally and per-job.
- From GitLab 9.0, caching is enabled and shared between pipelines and jobs
by default.
- From GitLab 9.2, caches are restored before [artifacts](#artifacts).
> - Introduced in GitLab Runner v0.7.0.
> - `cache` can be set globally and per-job.
> - From GitLab 9.0, caching is enabled and shared between pipelines and jobs
> by default.
> - From GitLab 9.2, caches are restored before [artifacts](#artifacts).
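For orientation, a minimal cache configuration might look like this (the key and paths are illustrative):

```yaml
cache:
  key: "$CI_COMMIT_REF_SLUG"   # one cache per branch, as an illustrative choice
  paths:
    - node_modules/
```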
TIP: **Learn more:**
Read how caching works and find out some good practices in the
......@@ -967,13 +968,13 @@ skip the download step.
## `artifacts`
> **Notes:**
>
**Notes:**
- Introduced in GitLab Runner v0.7.0 for non-Windows platforms.
- Windows support was added in GitLab Runner v.1.0.0.
- From GitLab 9.2, caches are restored before artifacts.
- Not all executors are [supported](https://docs.gitlab.com/runner/executors/#compatibility-chart).
- Job artifacts are only collected for successful jobs by default.
> - Introduced in GitLab Runner v0.7.0 for non-Windows platforms.
> - Windows support was added in GitLab Runner v.1.0.0.
> - From GitLab 9.2, caches are restored before artifacts.
> - Not all executors are [supported](https://docs.gitlab.com/runner/executors/#compatibility-chart).
> - Job artifacts are only collected for successful jobs by default.
`artifacts` is used to specify a list of files and directories which should be
attached to the job after success.
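A minimal sketch (the job name, paths, and expiry are illustrative):

```yaml
build:
  script: make build   # placeholder
  artifacts:
    paths:
      - binaries/
    expire_in: 1 week
```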
......
......@@ -9,16 +9,16 @@ This merge is done automatically in a
## What to do if you are pinged in a `CE Upstream` merge request to resolve a conflict?
1. Please resolve the conflict as soon as possible or ask someone else to do it
- It's ok to resolve more conflicts than the one that you are asked to resolve.
In that case, it's a good habit to ask for a double-check on your resolution
by someone who is familiar with the code you touched.
- It's ok to resolve more conflicts than the one that you are asked to resolve.
In that case, it's a good habit to ask for a double-check on your resolution
by someone who is familiar with the code you touched.
1. Once you have resolved your conflicts, push to the branch (no force-push)
1. Assign the merge request to the next person that has to resolve a conflict
1. If all conflicts are resolved after your resolution is pushed, keep the merge
request assigned to you: **you are now responsible for the merge request to be
green**
request assigned to you: **you are now responsible for the merge request to be
green**
1. If you need any help, you can ping the current [release managers], or ask in
the `#ce-to-ee` Slack channel
the `#ce-to-ee` Slack channel
A few notes about the automatic CE->EE merge job:
......@@ -127,19 +127,19 @@ Now, every time you create an MR for CE and EE:
1. Open two terminal windows, one in CE, and another one in EE
1. In the CE terminal:
1. Create the CE branch, e.g., `branch-example`
1. Make your changes and push a commit (commit A)
1. Create the CE merge request in GitLab
1. Create the CE branch, e.g., `branch-example`
1. Make your changes and push a commit (commit A)
1. Create the CE merge request in GitLab
1. In the EE terminal:
1. Create the EE-equivalent branch ending with `-ee`, e.g.,
`git checkout -b branch-example-ee`
1. Fetch the CE branch: `git fetch ce branch-example`
1. Cherry-pick the commit A: `git cherry-pick commit-A-SHA`
1. If Git prompts you to fix the conflicts, do a `git status`
to check which files contain conflicts, fix them, save the files
1. Add the changes with `git add .` but **DO NOT commit** them
1. Continue cherry-picking: `git cherry-pick --continue`
1. Push to EE: `git push origin branch-example-ee`
1. Create the EE-equivalent branch ending with `-ee`, e.g.,
`git checkout -b branch-example-ee`
1. Fetch the CE branch: `git fetch ce branch-example`
1. Cherry-pick the commit A: `git cherry-pick commit-A-SHA`
1. If Git prompts you to fix the conflicts, do a `git status`
to check which files contain conflicts, fix them, save the files
1. Add the changes with `git add .` but **DO NOT commit** them
1. Continue cherry-picking: `git cherry-pick --continue`
1. Push to EE: `git push origin branch-example-ee`
1. Create the EE-equivalent MR and link to the CE MR from the
description "Ports [CE-MR-LINK] to EE"
1. Once all the jobs are passing in both CE and EE, you've addressed the
......
......@@ -10,10 +10,10 @@ migrations automatically reschedule themselves for a later point in time.
## When To Use Background Migrations
>**Note:**
When adding background migrations _you must_ make sure they are announced in the
monthly release post along with an estimate of how long it will take to complete
the migrations.
> **Note:**
> When adding background migrations _you must_ make sure they are announced in the
> monthly release post along with an estimate of how long it will take to complete
> the migrations.
In the vast majority of cases you will want to use a regular Rails migration
instead. Background migrations should _only_ be used when migrating _data_ in
......@@ -127,23 +127,23 @@ big JSON blob) to column `bar` (containing a string). The process for this would
roughly be as follows:
1. Release A:
1. Create a migration class that perform the migration for a row with a given ID.
1. Deploy the code for this release, this should include some code that will
schedule jobs for newly created data (e.g. using an `after_create` hook).
1. Schedule jobs for all existing rows in a post-deployment migration. It's
possible some newly created rows may be scheduled twice so your migration
should take care of this.
1. Create a migration class that performs the migration for a row with a given ID.
1. Deploy the code for this release, this should include some code that will
schedule jobs for newly created data (e.g. using an `after_create` hook).
1. Schedule jobs for all existing rows in a post-deployment migration. It's
possible some newly created rows may be scheduled twice so your migration
should take care of this.
1. Release B:
1. Deploy code so that the application starts using the new column and stops
scheduling jobs for newly created data.
1. In a post-deployment migration you'll need to ensure no jobs remain.
1. Use `Gitlab::BackgroundMigration.steal` to process any remaining
jobs in Sidekiq.
1. Reschedule the migration to be run directly (i.e. not through Sidekiq)
on any rows that weren't migrated by Sidekiq. This can happen if, for
instance, Sidekiq received a SIGKILL, or if a particular batch failed
enough times to be marked as dead.
1. Remove the old column.
1. Deploy code so that the application starts using the new column and stops
scheduling jobs for newly created data.
1. In a post-deployment migration you'll need to ensure no jobs remain.
1. Use `Gitlab::BackgroundMigration.steal` to process any remaining
jobs in Sidekiq.
1. Reschedule the migration to be run directly (i.e. not through Sidekiq)
on any rows that weren't migrated by Sidekiq. This can happen if, for
instance, Sidekiq received a SIGKILL, or if a particular batch failed
enough times to be marked as dead.
1. Remove the old column.
This may also require a bump to the [import/export version][import-export], if
importing a project from a prior version of GitLab requires the data to be in
......
......@@ -5,30 +5,30 @@
There are a few rules to get your merge request accepted:
1. Your merge request should only be **merged by a [maintainer][team]**.
1. If your merge request includes only backend changes [^1], it must be
**approved by a [backend maintainer][projects]**.
1. If your merge request includes only frontend changes [^1], it must be
**approved by a [frontend maintainer][projects]**.
1. If your merge request includes UX changes [^1], it must
be **approved by a [UX team member][team]**.
1. If your merge request includes adding a new JavaScript library [^1], it must be
**approved by a [frontend lead][team]**.
1. If your merge request includes adding a new UI/UX paradigm [^1], it must be
**approved by a [UX lead][team]**.
1. If your merge request includes frontend and backend changes [^1], it must
be **approved by a [frontend and a backend maintainer][projects]**.
1. If your merge request includes UX and frontend changes [^1], it must
be **approved by a [UX team member and a frontend maintainer][team]**.
1. If your merge request includes UX, frontend and backend changes [^1], it must
be **approved by a [UX team member, a frontend and a backend maintainer][team]**.
1. If your merge request includes a new dependency or a filesystem change, it must
be *approved by a [Distribution team member][team]*. See how to work with the [Distribution team for more details.](https://about.gitlab.com/handbook/engineering/dev-backend/distribution/)
1. If your merge request includes only backend changes [^1], it must be
**approved by a [backend maintainer][projects]**.
1. If your merge request includes only frontend changes [^1], it must be
**approved by a [frontend maintainer][projects]**.
1. If your merge request includes UX changes [^1], it must
be **approved by a [UX team member][team]**.
1. If your merge request includes adding a new JavaScript library [^1], it must be
**approved by a [frontend lead][team]**.
1. If your merge request includes adding a new UI/UX paradigm [^1], it must be
**approved by a [UX lead][team]**.
1. If your merge request includes frontend and backend changes [^1], it must
be **approved by a [frontend and a backend maintainer][projects]**.
1. If your merge request includes UX and frontend changes [^1], it must
be **approved by a [UX team member and a frontend maintainer][team]**.
1. If your merge request includes UX, frontend and backend changes [^1], it must
be **approved by a [UX team member, a frontend and a backend maintainer][team]**.
1. If your merge request includes a new dependency or a filesystem change, it must
be *approved by a [Distribution team member][team]*. See how to work with the [Distribution team](https://about.gitlab.com/handbook/engineering/dev-backend/distribution/) for more details.
1. To lower the amount of merge requests maintainers need to review, you can
ask or assign any [reviewers][projects] for a first review.
1. If you need some guidance (e.g. it's your first merge request), feel free
to ask one of the [Merge request coaches][team].
1. The reviewer will assign the merge request to a maintainer once the
reviewer is satisfied with the state of the merge request.
ask or assign any [reviewers][projects] for a first review.
1. If you need some guidance (e.g. it's your first merge request), feel free
to ask one of the [Merge request coaches][team].
1. The reviewer will assign the merge request to a maintainer once the
reviewer is satisfied with the state of the merge request.
1. Keep in mind that maintainers are also going to perform a final code review.
The ideal scenario is that the reviewer has already addressed any concerns
the maintainer would have found, and the maintainer only has to perform the
......@@ -160,41 +160,41 @@ Enterprise Edition instance. This has some implications:
1. **Query changes** should be tested to ensure that they don't result in worse
performance at the scale of GitLab.com:
1. Generating large quantities of data locally can help.
2. Asking for query plans from GitLab.com is the most reliable way to validate
these.
1. Generating large quantities of data locally can help.
2. Asking for query plans from GitLab.com is the most reliable way to validate
these.
2. **Database migrations** must be:
1. Reversible.
2. Performant at the scale of GitLab.com - ask a maintainer to test the
migration on the staging environment if you aren't sure.
3. Categorised correctly:
- Regular migrations run before the new code is running on the instance.
- [Post-deployment migrations](post_deployment_migrations.md) run _after_
the new code is deployed, when the instance is configured to do that.
- [Background migrations](background_migrations.md) run in Sidekiq, and
should only be done for migrations that would take an extreme amount of
time at GitLab.com scale.
1. Reversible.
2. Performant at the scale of GitLab.com - ask a maintainer to test the
migration on the staging environment if you aren't sure.
3. Categorised correctly:
- Regular migrations run before the new code is running on the instance.
- [Post-deployment migrations](post_deployment_migrations.md) run _after_
the new code is deployed, when the instance is configured to do that.
- [Background migrations](background_migrations.md) run in Sidekiq, and
should only be done for migrations that would take an extreme amount of
time at GitLab.com scale.
3. **Sidekiq workers**
[cannot change in a backwards-incompatible way](sidekiq_style_guide.md#removing-or-renaming-queues):
1. Sidekiq queues are not drained before a deploy happens, so there will be
workers in the queue from the previous version of GitLab.
2. If you need to change a method signature, try to do so across two releases,
and accept both the old and new arguments in the first of those.
3. Similarly, if you need to remove a worker, stop it from being scheduled in
one release, then remove it in the next. This will allow existing jobs to
execute.
4. Don't forget, not every instance will upgrade to every intermediate version
(some people may go from X.1.0 to X.10.0, or even try bigger upgrades!), so
try to be liberal in accepting the old format if it is cheap to do so.
1. Sidekiq queues are not drained before a deploy happens, so there will be
workers in the queue from the previous version of GitLab.
2. If you need to change a method signature, try to do so across two releases,
and accept both the old and new arguments in the first of those.
3. Similarly, if you need to remove a worker, stop it from being scheduled in
one release, then remove it in the next. This will allow existing jobs to
execute.
4. Don't forget, not every instance will upgrade to every intermediate version
(some people may go from X.1.0 to X.10.0, or even try bigger upgrades!), so
try to be liberal in accepting the old format if it is cheap to do so.
4. **Cached values** may persist across releases. If you are changing the type a
cached value returns (say, from a string or nil to an array), change the
cache key at the same time.
5. **Settings** should be added as a
[last resort](https://about.gitlab.com/handbook/product/#convention-over-configuration).
If you're adding a new setting in `gitlab.yml`:
1. Try to avoid that, and add to `ApplicationSetting` instead.
2. Ensure that it is also
[added to Omnibus](https://docs.gitlab.com/omnibus/settings/gitlab.yml.html#adding-a-new-setting-to-gitlab-yml).
1. Try to avoid that, and add to `ApplicationSetting` instead.
2. Ensure that it is also
[added to Omnibus](https://docs.gitlab.com/omnibus/settings/gitlab.yml.html#adding-a-new-setting-to-gitlab-yml).
6. **Filesystem access** can be slow, so try to avoid
[shared files](shared_files.md) when an alternative solution is available.
......
......@@ -16,7 +16,7 @@
- [Milestone labels](#milestone-labels)
- [Bug Priority labels](#bug-priority-labels)
- [Bug Severity labels](#bug-severity-labels)
- [Severity impact guidance](#severity-impact-guidance)
- [Severity impact guidance](#severity-impact-guidance)
- [Label for community contributors](#label-for-community-contributors)
- [Implement design & UI elements](#implement-design--ui-elements)
- [Issue tracker](#issue-tracker)
......
......@@ -9,7 +9,7 @@
- [Release Scoping labels](#release-scoping-labels)
- [Priority labels](#priority-labels)
- [Severity labels](#severity-labels)
- [Severity impact guidance](#severity-impact-guidance)
- [Severity impact guidance](#severity-impact-guidance)
- [Label for community contributors](#label-for-community-contributors)
- [Issue triaging](#issue-triaging)
- [Feature proposals](#feature-proposals)
......@@ -250,7 +250,7 @@ code snippet right after your description in a new line: `~"feature proposal"`.
Please keep feature proposals as small and simple as possible, complex ones
might be edited to make them small and simple.
Please submit Feature Proposals using the ['Feature Proposal' issue template](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/.gitlab/issue_templates/Feature Proposal.md) provided on the issue tracker.
Please submit Feature Proposals using the ['Feature Proposal' issue template](https://gitlab.com/gitlab-org/gitlab-ce/blob/master/.gitlab/issue_templates/Feature%20Proposal.md) provided on the issue tracker.
For changes in the interface, it is helpful to include a mockup. Issues that add to, or change, the interface should
be given the ~"UX" label. This will allow the UX team to provide input and guidance. You may
......
......@@ -55,30 +55,30 @@ request is as follows:
organized commits by [squashing them][git-squash]
1. Push the commit(s) to your fork
1. Submit a merge request (MR) to the `master` branch
1. Your merge request needs at least 1 approval but feel free to require more.
For instance if you're touching backend and frontend code, it's a good idea
to require 2 approvals: 1 from a backend maintainer and 1 from a frontend
maintainer
1. You don't have to select any approvers, but you can if you really want
specific people to approve your merge request
1. Your merge request needs at least 1 approval but feel free to require more.
For instance if you're touching backend and frontend code, it's a good idea
to require 2 approvals: 1 from a backend maintainer and 1 from a frontend
maintainer
1. You don't have to select any approvers, but you can if you really want
specific people to approve your merge request
1. The MR title should describe the change you want to make
1. The MR description should give a motive for your change and the method you
used to achieve it.
1. If you are contributing code, fill in the template already provided in the
"Description" field.
1. If you are contributing documentation, choose `Documentation` from the
"Choose a template" menu and fill in the template.
1. Mention the issue(s) your merge request solves, using the `Solves #XXX` or
`Closes #XXX` syntax to auto-close the issue(s) once the merge request will
be merged.
1. If you are contributing code, fill in the template already provided in the
"Description" field.
1. If you are contributing documentation, choose `Documentation` from the
"Choose a template" menu and fill in the template.
1. Mention the issue(s) your merge request solves, using the `Solves #XXX` or
`Closes #XXX` syntax to auto-close the issue(s) once the merge request will
be merged.
1. If you're allowed to, set a relevant milestone and labels
1. If the MR changes the UI it should include *Before* and *After* screenshots
1. If the MR changes CSS classes please include the list of affected pages,
`grep css-class ./app -R`
1. Be prepared to answer questions and incorporate feedback even if requests
for this arrive weeks or months after your MR submission
1. If a discussion has been addressed, select the "Resolve discussion" button
beneath it to mark it resolved.
1. If a discussion has been addressed, select the "Resolve discussion" button
beneath it to mark it resolved.
1. If your MR touches code that executes shell commands, reads or opens files or
handles paths to files on disk, make sure it adheres to the
[shell command guidelines](../shell_commands.md)
......@@ -138,7 +138,7 @@ When having your code reviewed and when reviewing merge requests please take the
making and testing future changes
1. Changes do not adversely degrade performance.
- Avoid repeated polling of endpoints that require a significant amount of overhead
- Check for N+1 queries via the SQL log or [`QueryRecorder`](https://docs.gitlab.com/ce/development/mer ge_request_performance_guidelines.html)
- Check for N+1 queries via the SQL log or [`QueryRecorder`](https://docs.gitlab.com/ce/development/merge_request_performance_guidelines.html)
- Avoid repeated access of filesystem
1. If you need polling to support real-time features, please use
[polling with ETag caching][polling-etag].
......
......@@ -26,11 +26,11 @@ In order to present diffs information on the Merge Request diffs page, we:
1. Fetch all diff files from database `merge_request_diff_files`
2. Fetch the _old_ and _new_ file blobs in batch to:
1. Highlight old and new file content
2. Know which viewer it should use for each file (text, image, deleted, etc)
3. Know if the file content changed
4. Know if it was stored externally
5. Know if it had storage errors
1. Highlight old and new file content
2. Know which viewer it should use for each file (text, image, deleted, etc)
3. Know if the file content changed
4. Know if it was stored externally
5. Know if it had storage errors
3. If the diff file is cacheable (text-based), it's cached on Redis
using `Gitlab::Diff::FileCollection::MergeRequestDiff`
......
......@@ -297,10 +297,10 @@ avoid duplication, link to the special document that can be found in
[`doc/administration/restart_gitlab.md`][doc-restart]. Usually the text will
read like:
```
Save the file and [reconfigure GitLab](../../administration/restart_gitlab.md)
for the changes to take effect.
```
```
Save the file and [reconfigure GitLab](../../administration/restart_gitlab.md)
for the changes to take effect.
```
If the document you are editing resides in a place other than the GitLab CE/EE
`doc/` directory, instead of the relative link, use the full path:
......
......@@ -166,47 +166,53 @@ There are a few gotchas with it:
to make it call the other method we want to extend, like a [template method
pattern](https://en.wikipedia.org/wiki/Template_method_pattern).
For example, given this base:
``` ruby
class Base
def execute
return unless enabled?
# ...
# ...
end
end
```
Instead of just overriding `Base#execute`, we should update it and extract
the behaviour into another method:
``` ruby
class Base
def execute
return unless enabled?
do_something
```ruby
class Base
def execute
return unless enabled?
# ...
# ...
end
end
```
private
Instead of just overriding `Base#execute`, we should update it and extract
the behaviour into another method:
def do_something
# ...
# ...
```ruby
class Base
def execute
return unless enabled?
do_something
end
private
def do_something
# ...
# ...
end
end
end
```
Then we're free to override that `do_something` without worrying about the
guards:
``` ruby
module EE::Base
extend ::Gitlab::Utils::Override
override :do_something
def do_something
# Follow the above pattern to call super and extend it
```
Then we're free to override that `do_something` without worrying about the
guards:
```ruby
module EE::Base
extend ::Gitlab::Utils::Override
override :do_something
def do_something
# Follow the above pattern to call super and extend it
end
end
end
```
This would require updating CE first, or make sure this is back ported to CE.
```
This would require updating CE first, or make sure this is back ported to CE.
When prepending, place them in the `ee/` specific sub-directory, and
wrap class or module in `module EE` to avoid naming conflicts.
......@@ -469,7 +475,7 @@ Put the EE module files following
For EE API routes, we put them in a `prepended` block:
``` ruby
```ruby
module EE
module API
module MergeRequests
......@@ -503,7 +509,7 @@ interface first here.
For example, suppose we have a few more optional params for EE, given this CE
API code:
``` ruby
```ruby
module API
class MergeRequests < Grape::API
# EE::API::MergeRequests would override the following helpers
......@@ -525,7 +531,7 @@ end
And then we could override it in EE module:
``` ruby
```ruby
module EE
module API
module MergeRequests
......@@ -552,7 +558,7 @@ To make it easy for an EE module to override the CE helpers, we need to define
those helpers we want to extend first. Try to do that immediately after the
class definition to make it easy and clear:
``` ruby
```ruby
module API
class JobArtifacts < Grape::API
# EE::API::JobArtifacts would override the following helpers
......@@ -569,7 +575,7 @@ end
And then we can follow regular object-oriented practices to override it:
``` ruby
```ruby
module EE
module API
module JobArtifacts
......@@ -596,7 +602,7 @@ therefore can't be simply overridden. We need to extract them into a standalone
method, or introduce some "hooks" where we could inject behavior in the CE
route. Something like this:
``` ruby
```ruby
module API
class MergeRequests < Grape::API
helpers do
......@@ -623,7 +629,7 @@ end
Note that `update_merge_request_ee` doesn't do anything in CE, but
then we could override it in EE:
``` ruby
```ruby
module EE
module API
module MergeRequests
......@@ -662,7 +668,7 @@ For example, in one place we need to pass an extra argument to
`at_least_one_of` so that the API could consider an EE-only argument as the
least argument. This is not quite beautiful but it's working:
``` ruby
```ruby
module API
class MergeRequests < Grape::API
def self.update_params_at_least_one_of
......@@ -683,7 +689,7 @@ end
And then we could easily extend that argument in the EE class method:
``` ruby
```ruby
module EE
module API
module MergeRequests
......
......@@ -74,7 +74,7 @@ bundle and included on the page.
> `content_for :page_specific_javascripts` within haml files, along with
> manually generated webpack bundles. However under this new system you should
> not ever need to manually add an entry point to the `webpack.config.js` file.
>
> **Tip:**
> If you are unsure what controller and action corresponds to a given page, you
> can find this out by inspecting `document.body.dataset.page` within your
......@@ -109,7 +109,6 @@ bundle and included on the page.
`my_widget.js` is only imported within `pages/widget/show/index.js`, you
should place the module at `pages/widget/show/my_widget.js` and import it
with a relative path (e.g. `import initMyWidget from './my_widget';`).
- If a class or module is _used by multiple routes_, place it within a
shared directory at the closest common parent directory for the entry
points that import it. For example, if `my_widget.js` is imported within
......
......@@ -17,74 +17,81 @@ at the top, but legacy files are a special case. Any time you develop a new fea
refactor an existing one, you should abide by the eslint rules.
1. **Never Ever EVER** disable eslint globally for a file
```javascript
// bad
/* eslint-disable */
// better
/* eslint-disable some-rule, some-other-rule */
// best
// nothing :)
```
```javascript
// bad
/* eslint-disable */
// better
/* eslint-disable some-rule, some-other-rule */
// best
// nothing :)
```
1. If you do need to disable a rule for a single violation, try to do it as locally as possible
```javascript
// bad
/* eslint-disable no-new */
import Foo from 'foo';
new Foo();
// better
import Foo from 'foo';
```javascript
// bad
/* eslint-disable no-new */
import Foo from 'foo';
new Foo();
// better
import Foo from 'foo';
// eslint-disable-next-line no-new
new Foo();
```
// eslint-disable-next-line no-new
new Foo();
```
1. There are few rules that we need to disable due to technical debt. Which are:
1. [no-new][eslint-new]
1. [class-methods-use-this][eslint-this]
1. [no-new][eslint-new]
1. [class-methods-use-this][eslint-this]
1. When they are needed _always_ place ESlint directive comment blocks on the first line of a script,
followed by any global declarations, then a blank newline prior to any imports or code.
```javascript
// bad
/* global Foo */
/* eslint-disable no-new */
import Bar from './bar';
// good
/* eslint-disable no-new */
/* global Foo */
import Bar from './bar';
```
```javascript
// bad
/* global Foo */
/* eslint-disable no-new */
import Bar from './bar';
// good
/* eslint-disable no-new */
/* global Foo */
import Bar from './bar';
```
1. **Never** disable the `no-undef` rule. Declare globals with `/* global Foo */` instead.
1. When declaring multiple globals, always use one `/* global [name] */` line per variable.
```javascript
// bad
/* globals Flash, Cookies, jQuery */
// good
/* global Flash */
/* global Cookies */
/* global jQuery */
```
```javascript
// bad
/* globals Flash, Cookies, jQuery */
// good
/* global Flash */
/* global Cookies */
/* global jQuery */
```
1. Use up to 3 parameters for a function or class. If you need more accept an Object instead.
```javascript
// bad
fn(p1, p2, p3, p4) {}
// good
fn(options) {}
```
```javascript
// bad
fn(p1, p2, p3, p4) {}
// good
fn(options) {}
```
#### Modules, Imports, and Exports
1. Use ES module syntax to import modules
```javascript
// bad
......@@ -178,109 +185,116 @@ Do not use them anymore and feel free to remove them when refactoring legacy cod
```
#### Data Mutation and Pure functions
1. Strive to write many small pure functions, and minimize where mutations occur.
```javascript
```javascript
// bad
const values = {foo: 1};
function impureFunction(items) {
const bar = 1;
items.foo = items.a * bar + 2;
return items.a;
}
const c = impureFunction(values);
// good
var values = {foo: 1};
function pureFunction (foo) {
var bar = 1;
foo = foo * bar + 2;
return foo;
}
var c = pureFunction(values.foo);
```
```
1. Avoid constructors with side-effects.
Although we aim for code without side-effects we need some side-effects for our code to run.
If the class won't do anything if we only instantiate it, it's ok to add side effects into the constructor (_Note:_ The following is just an example. If the only purpose of the class is to add an event listener and handle the callback a function will be more suitable.)
```javascript
// Bad
export class Foo {
constructor() {
this.init();
}
init() {
document.addEventListener('click', this.handleCallback)
},
handleCallback() {
}
}
// Good
export class Foo {
constructor() {
document.addEventListener()
}
handleCallback() {
}
}
```
On the other hand, if a class only needs to extend a third party/add event listeners in some specific cases, they should be initialized outside of the constructor.
Although we aim for code without side-effects we need some side-effects for our code to run.
1. Prefer `.map`, `.reduce` or `.filter` over `.forEach`
A forEach will most likely cause side effects, it will be mutating the array being iterated. Prefer using `.map`,
`.reduce` or `.filter`
```javascript
const users = [ { name: 'Foo' }, { name: 'Bar' } ];
If the class won't do anything if we only instantiate it, it's ok to add side effects into the constructor (_Note:_ The following is just an example. If the only purpose of the class is to add an event listener and handle the callback a function will be more suitable.)
// bad
users.forEach((user, index) => {
user.id = index;
});
```javascript
// Bad
export class Foo {
constructor() {
this.init();
}
init() {
document.addEventListener('click', this.handleCallback)
},
handleCallback() {
}
}
// Good
export class Foo {
constructor() {
document.addEventListener()
}
handleCallback() {
}
}
```
// good
const usersWithId = users.map((user, index) => {
return Object.assign({}, user, { id: index });
});
```
On the other hand, if a class only needs to extend a third party/add event listeners in some specific cases, they should be initialized outside of the constructor.
1. Prefer `.map`, `.reduce` or `.filter` over `.forEach`
A forEach will most likely cause side effects, it will be mutating the array being iterated. Prefer using `.map`,
`.reduce` or `.filter`
```javascript
const users = [ { name: 'Foo' }, { name: 'Bar' } ];
// bad
users.forEach((user, index) => {
user.id = index;
});
// good
const usersWithId = users.map((user, index) => {
return Object.assign({}, user, { id: index });
});
```
#### Parse Strings into Numbers
1. `parseInt()` is preferable over `Number()` or `+`
```javascript
// bad
+'10' // 10
// good
Number('10') // 10
1. `parseInt()` is preferable over `Number()` or `+`
// better
parseInt('10', 10);
```
```javascript
// bad
+'10' // 10
// good
Number('10') // 10
// better
parseInt('10', 10);
```
#### CSS classes used for JavaScript
1. If the class is being used in Javascript it needs to be prepended with `js-`
```html
// bad
<button class="add-user">
Add User
</button>
// good
<button class="js-add-user">
Add User
</button>
```
```html
// bad
<button class="add-user">
Add User
</button>
// good
<button class="js-add-user">
Add User
</button>
```
### Vue.js
......@@ -292,197 +306,211 @@ Please check this [rules][eslint-plugin-vue-rules] for more documentation.
1. The service has its own file
1. The store has its own file
1. Use a function in the bundle file to instantiate the Vue component:
```javascript
// bad
class {
init() {
new Component({})
}
}
// good
document.addEventListener('DOMContentLoaded', () => new Vue({
el: '#element',
components: {
componentName
},
render: createElement => createElement('component-name'),
}));
```
```javascript
// bad
class {
init() {
new Component({})
}
}
// good
document.addEventListener('DOMContentLoaded', () => new Vue({
el: '#element',
components: {
componentName
},
render: createElement => createElement('component-name'),
}));
```
1. Do not use a singleton for the service or the store
```javascript
// bad
class Store {
constructor() {
if (!this.prototype.singleton) {
// do something
```javascript
// bad
class Store {
constructor() {
if (!this.prototype.singleton) {
// do something
}
}
}
}
// good
class Store {
constructor() {
// do something
// good
class Store {
constructor() {
// do something
}
}
}
```
```
1. Use `.vue` for Vue templates. Do not use `%template` in HAML.
#### Naming
1. **Extensions**: Use `.vue` extension for Vue components. Do not use `.js` as file extension ([#34371]).
1. **Reference Naming**: Use PascalCase for their instances:
```javascript
// bad
import cardBoard from 'cardBoard.vue'
components: {
cardBoard,
};
// good
import CardBoard from 'cardBoard.vue'
components: {
CardBoard,
};
```
```javascript
// bad
import cardBoard from 'cardBoard.vue'
components: {
cardBoard,
};
// good
import CardBoard from 'cardBoard.vue'
components: {
CardBoard,
};
```
1. **Props Naming:** Avoid using DOM component prop names.
1. **Props Naming:** Use kebab-case instead of camelCase to provide props in templates.
```javascript
// bad
<component class="btn">
// good
<component css-class="btn">
// bad
<component myProp="prop" />
// good
<component my-prop="prop" />
```
```javascript
// bad
<component class="btn">
// good
<component css-class="btn">
// bad
<component myProp="prop" />
// good
<component my-prop="prop" />
```
[#34371]: https://gitlab.com/gitlab-org/gitlab-ce/issues/34371
#### Alignment
1. Follow these alignment styles for the template method:
1. With more than one attribute, all attributes should be on a new line:
```javascript
// bad
<component v-if="bar"
param="baz" />
<button class="btn">Click me</button>
// good
<component
v-if="bar"
param="baz"
/>
<button class="btn">
Click me
</button>
```
1. The tag can be inline if there is only one attribute:
```javascript
// good
<component bar="bar" />
// good
<component
bar="bar"
/>
// bad
<component
bar="bar" />
```
1. With more than one attribute, all attributes should be on a new line:
```javascript
// bad
<component v-if="bar"
param="baz" />
<button class="btn">Click me</button>
// good
<component
v-if="bar"
param="baz"
/>
<button class="btn">
Click me
</button>
```
1. The tag can be inline if there is only one attribute:
```javascript
// good
<component bar="bar" />
// good
<component
bar="bar"
/>
// bad
<component
bar="bar" />
```
#### Quotes
1. Always use double quotes `"` inside templates and single quotes `'` for all other JS.
```javascript
// bad
template: `
<button :class='style'>Button</button>
`
// good
template: `
<button :class="style">Button</button>
`
```
```javascript
// bad
template: `
<button :class='style'>Button</button>
`
// good
template: `
<button :class="style">Button</button>
`
```
#### Props
1. Props should be declared as an object
```javascript
// bad
props: ['foo']
// good
props: {
foo: {
type: String,
required: false,
default: 'bar'
1. Props should be declared as an object
```javascript
// bad
props: ['foo']
// good
props: {
foo: {
type: String,
required: false,
default: 'bar'
}
}
}
```
```
1. Required key should always be provided when declaring a prop
```javascript
// bad
props: {
foo: {
type: String,
}
}
// good
props: {
foo: {
type: String,
required: false,
default: 'bar'
```javascript
// bad
props: {
foo: {
type: String,
}
}
}
```
// good
props: {
foo: {
type: String,
required: false,
default: 'bar'
}
}
```
1. Default key should be provided if the prop is not required.
_Note:_ There are some scenarios where we need to check for the existence of the property.
On those a default key should not be provided.
```javascript
// good
props: {
foo: {
type: String,
required: false,
}
}
// good
props: {
foo: {
type: String,
required: false,
default: 'bar'
```javascript
// good
props: {
foo: {
type: String,
required: false,
}
}
}
// good
props: {
foo: {
type: String,
required: true
// good
props: {
foo: {
type: String,
required: false,
default: 'bar'
}
}
}
```
// good
props: {
foo: {
type: String,
required: true
}
}
```
#### Data
1. `data` method should always be a function
```javascript
......@@ -502,38 +530,41 @@ On those a default key should not be provided.
#### Directives
1. Shorthand `@` is preferable over `v-on`
```javascript
// bad
<component v-on:click="eventHandler"/>
// good
<component @click="eventHandler"/>
```
```javascript
// bad
<component v-on:click="eventHandler"/>
// good
<component @click="eventHandler"/>
```
1. Shorthand `:` is preferable over `v-bind`
```javascript
// bad
<component v-bind:class="btn"/>
// good
<component :class="btn"/>
```
```javascript
// bad
<component v-bind:class="btn"/>
// good
<component :class="btsn"/>
```
#### Closing tags
1. Prefer self closing component tags
```javascript
// bad
<component></component>
// good
<component />
```
```javascript
// bad
<component></component>
// good
<component />
```
#### Ordering
1. Tag order in `.vue` file
```
<script>
// ...
......@@ -550,12 +581,14 @@ On those a default key should not be provided.
```
1. Properties in a Vue Component:
Check [order of properties in components rule][vue-order].
Check [order of properties in components rule][vue-order].
#### `:key`
When using `v-for` you need to provide a *unique* `:key` attribute for each item.
1. If the elements of the array being iterated have a unique `id` it is advised to use it:
```html
<div
v-for="item in items"
......@@ -566,6 +599,7 @@ When using `v-for` you need to provide a *unique* `:key` attribute for each item
```
1. When the elements being iterated don't have a unique id, you can use the array index as the `:key` attribute
```html
<div
v-for="(item, index) in items"
......@@ -575,8 +609,8 @@ When using `v-for` you need to provide a *unique* `:key` attribute for each item
</div>
```
1. When using `v-for` with `template` and there is more than one child element, the `:key` values must be unique. It's advised to use `kebab-case` namespaces.
```html
<template v-for="(item, index) in items">
<span :key="`span-${index}`"></span>
......@@ -585,64 +619,69 @@ When using `v-for` you need to provide a *unique* `:key` attribute for each item
```
1. When dealing with nested `v-for` use the same guidelines as above.
```html
<div
v-for="item in items"
:key="item.id"
>
<span
v-for="element in array"
:key="element.id"
>
<!-- content -->
</span>
</div>
```
```html
<div
v-for="item in items"
:key="item.id"
>
<span
v-for="element in array"
:key="element.id"
>
<!-- content -->
</span>
</div>
```
Useful links:
1. [`key`](https://vuejs.org/v2/guide/list.html#key)
1. [Vue Style Guide: Keyed v-for](https://vuejs.org/v2/style-guide/#Keyed-v-for-essential)
#### Vue and Bootstrap
1. Tooltips: Do not rely on `has-tooltip` class name for Vue components
```javascript
// bad
<span
class="has-tooltip"
title="Some tooltip text">
Text
</span>
// good
<span
v-tooltip
title="Some tooltip text">
Text
</span>
```
```javascript
// bad
<span
class="has-tooltip"
title="Some tooltip text">
Text
</span>
// good
<span
v-tooltip
title="Some tooltip text">
Text
</span>
```
1. Tooltips: When using a tooltip, include the tooltip directive, `./app/assets/javascripts/vue_shared/directives/tooltip.js`
1. Don't change `data-original-title`.
```javascript
// bad
<span data-original-title="tooltip text">Foo</span>
// good
<span title="tooltip text">Foo</span>
$('span').tooltip('_fixTitle');
```
```javascript
// bad
<span data-original-title="tooltip text">Foo</span>
// good
<span title="tooltip text">Foo</span>
$('span').tooltip('_fixTitle');
```
### The Javascript/Vue Accord
The goal of this accord is to make sure we are all on the same page.
1. When writing Vue, you may not use jQuery in your application.
1. If you need to grab data from the DOM, you may query the DOM 1 time while bootstrapping your application to grab data attributes using `dataset`. You can do this without jQuery.
1. You may use a jQuery dependency in Vue.js following [this example from the docs](https://vuejs.org/v2/examples/select2.html).
1. If an outside jQuery Event needs to be listen to inside the Vue application, you may use jQuery event listeners.
1. We will avoid adding new jQuery events when they are not required. Instead of adding new jQuery events take a look at [different methods to do the same task](https://vuejs.org/v2/api/#vm-emit).
1. If you need to grab data from the DOM, you may query the DOM 1 time while bootstrapping your application to grab data attributes using `dataset`. You can do this without jQuery.
1. You may use a jQuery dependency in Vue.js following [this example from the docs](https://vuejs.org/v2/examples/select2.html).
1. If an outside jQuery Event needs to be listened to inside the Vue application, you may use jQuery event listeners.
1. We will avoid adding new jQuery events when they are not required. Instead of adding new jQuery events take a look at [different methods to do the same task](https://vuejs.org/v2/api/#vm-emit).
1. You may query the `window` object 1 time, while bootstrapping your application for application specific data (e.g. `scrollTo` is ok to access anytime). Do this access during the bootstrapping of your application.
1. You may have a temporary but immediate need to create technical debt by writing code that does not follow our standards, to be refactored later. Maintainers need to be ok with the tech debt in the first place. An issue should be created for that tech debt to evaluate it further and discuss. In the coming months you should fix that tech debt, with its priority to be determined by maintainers.
1. When creating tech debt you must write the tests for that code before hand and those tests may not be rewritten. e.g. jQuery tests rewritten to Vue tests.
......@@ -650,6 +689,7 @@ The goal of this accord is to make sure we are all on the same page.
1. Once you have chosen a centralized state management solution you must use it for your entire application. i.e. Don't mix and match your state management solutions.
## SCSS
- [SCSS](style_guide_scss.md)
[airbnb-js-style-guide]: https://github.com/airbnb/javascript
......
......@@ -290,23 +290,24 @@ export default {
```
### Vuex Gotchas
1. Do not call a mutation directly. Always use an action to commit a mutation. Doing so will keep consistency throughout the application. From Vuex docs:
> why don't we just call store.commit('action') directly? Well, remember that mutations must be synchronous? Actions aren't. We can perform asynchronous operations inside an action.
```javascript
// component.vue
// bad
created() {
this.$store.commit('mutation');
}
1. Do not call a mutation directly. Always use an action to commit a mutation. Doing so will keep consistency throughout the application. From Vuex docs:
// good
created() {
this.$store.dispatch('action');
}
```
> why don't we just call store.commit('action') directly? Well, remember that mutations must be synchronous? Actions aren't. We can perform asynchronous operations inside an action.
```javascript
// component.vue
// bad
created() {
this.$store.commit('mutation');
}
// good
created() {
this.$store.dispatch('action');
}
```
1. Use mutation types instead of hardcoding strings. It will be less error prone.
1. The State will be accessible in all components descending from the use where the store is instantiated.
......@@ -342,7 +343,7 @@ describe('component', () => {
name: 'Foo',
age: '30',
};
store = createStore();
// populate the store
......
......@@ -101,8 +101,10 @@ end
in a prepended module, which is very likely the case in EE. We could see
an error like this:
1.1) Failure/Error: allow_any_instance_of(ApplicationSetting).to receive_messages(messages)
Using `any_instance` to stub a method (elasticsearch_indexing) that has been defined on a prepended module (EE::ApplicationSetting) is not supported.
```
1.1) Failure/Error: allow_any_instance_of(ApplicationSetting).to receive_messages(messages)
Using `any_instance` to stub a method (elasticsearch_indexing) that has been defined on a prepended module (EE::ApplicationSetting) is not supported.
```
### Alternative: `expect_next_instance_of`
......
......@@ -65,7 +65,6 @@ are very appreciative of the work done by translators and proofreaders!
Add your language in alphabetical order, and add yourself to the list
including:
- name
- link to your GitLab profile
- link to your CrowdIn profile
......
......@@ -6,7 +6,7 @@
* **[Ruby unit tests](#ruby-unit-tests-spec-rb)** for models, controllers, helpers, etc. (`/spec/**/*.rb`)
* **[Full feature tests](#full-feature-tests-spec-features-rb)** (`/spec/features/**/*.rb`)
* **[Karma](#karma-tests-spec-javascripts-js)** (`/spec/javascripts/**/*.js`)
* ~~Spinach~~ — These have been removed from our codebase in May 2018. (`/features/`)
* <s>Spinach</s> — These have been removed from our codebase in May 2018. (`/features/`)
## RSpec: Ruby unit tests `/spec/**/*.rb`
......
......@@ -12,158 +12,157 @@ You can run eslint locally by running `yarn eslint`
<a name="avoid-foreach"></a><a name="1.1"></a>
- [1.1](#avoid-foreach) **Avoid `forEach` when mutating data** Use `map`, `reduce` or `filter` instead of `forEach` when mutating data. This will minimize mutations in functions ([which is aligned with Airbnb's style guide][airbnb-minimize-mutations])
```
// bad
users.forEach((user, index) => {
user.id = index;
});
// good
const usersWithId = users.map((user, index) => {
return Object.assign({}, user, { id: index });
});
```
```
// bad
users.forEach((user, index) => {
user.id = index;
});
// good
const usersWithId = users.map((user, index) => {
return Object.assign({}, user, { id: index });
});
```
## Functions
<a name="limit-params"></a><a name="2.1"></a>
- [2.1](#limit-params) **Limit number of parameters** If your function or method has more than 3 parameters, use an object as a parameter instead.
```
// bad
function a(p1, p2, p3) {
// ...
};
// good
function a(p) {
// ...
};
```
```
// bad
function a(p1, p2, p3) {
// ...
};
// good
function a(p) {
// ...
};
```
## Classes & constructors
<a name="avoid-constructor-side-effects"></a><a name="3.1"></a>
- [3.1](#avoid-constructor-side-effects) **Avoid side effects in constructors** Avoid performing operations such as asynchronous calls, API requests and DOM manipulations in the `constructor`. Prefer moving them into separate functions. This will make tests easier to write and the code easier to maintain.
```javascript
// bad
class myClass {
constructor(config) {
this.config = config;
axios.get(this.config.endpoint)
```javascript
// bad
class myClass {
constructor(config) {
this.config = config;
axios.get(this.config.endpoint)
}
}
}
// good
class myClass {
constructor(config) {
this.config = config;
// good
class myClass {
constructor(config) {
this.config = config;
}
makeRequest() {
axios.get(this.config.endpoint)
}
}
makeRequest() {
axios.get(this.config.endpoint)
}
}
const instance = new myClass();
instance.makeRequest();
```
const instance = new myClass();
instance.makeRequest();
```
<a name="avoid-classes-to-handle-dom-events"></a><a name="3.2"></a>
- [3.2](#avoid-classes-to-handle-dom-events) **Avoid classes to handle DOM events** If the only purpose of the class is to bind a DOM event and handle the callback, prefer using a function.
```
// bad
class myClass {
constructor(config) {
this.config = config;
}
init() {
document.addEventListener('click', () => {});
}
}
// good
const myFunction = () => {
document.addEventListener('click', () => {
// handle callback here
});
}
```
```
// bad
class myClass {
constructor(config) {
this.config = config;
}
init() {
document.addEventListener('click', () => {});
}
}
// good
const myFunction = () => {
document.addEventListener('click', () => {
// handle callback here
});
}
```
<a name="element-container"></a><a name="3.3"></a>
- [3.3](#element-container) **Pass element container to constructor** When your class manipulates the DOM, receive the element container as a parameter.
This is more maintainable and performant.
```
// bad
class a {
constructor() {
document.querySelector('.b');
}
}
// good
class a {
constructor(options) {
options.container.querySelector('.b');
}
}
```
```
// bad
class a {
constructor() {
document.querySelector('.b');
}
}
// good
class a {
constructor(options) {
options.container.querySelector('.b');
}
}
```
## Type Casting & Coercion
<a name="use-parseint"></a><a name="4.1"></a>
- [4.1](#use-parseint) **Use `parseInt`** Use `parseInt` when converting a numeric string into a number.
```
// bad
Number('10')
// good
parseInt('10', 10);
```
```
// bad
Number('10')
// good
parseInt('10', 10);
```
## CSS Selectors
<a name="use-js-prefix"></a><a name="5.1"></a>
- [5.1](#use-js-prefix) **Use js prefix** If a CSS class is only being used in JavaScript as a reference to the element, prefix the class name with `js-`
```
// bad
<button class="add-user"></button>
// good
<button class="js-add-user"></button>
```
```
// bad
<button class="add-user"></button>
// good
<button class="js-add-user"></button>
```
## Modules
<a name="use-absolute-paths"></a><a name="6.1"></a>
- [6.1](#use-absolute-paths) **Use relative paths for nearby modules** Use relative paths if the module you are importing is less than two levels up.
```
// bad
import GitLabStyleGuide from '~/guides/GitLabStyleGuide';
// good
import GitLabStyleGuide from '../GitLabStyleGuide';
```
```
// bad
import GitLabStyleGuide from '~/guides/GitLabStyleGuide';
// good
import GitLabStyleGuide from '../GitLabStyleGuide';
```
<a name="use-relative-paths"></a><a name="6.2"></a>
- [6.2](#use-relative-paths) **Use absolute paths for distant modules** If the module you are importing is two or more levels up, use an absolute path instead of a relative path.
```
// bad
import GitLabStyleGuide from '../../../guides/GitLabStyleGuide';
// good
import GitLabStyleGuide from '~/GitLabStyleGuide';
```
```
// bad
import GitLabStyleGuide from '../../../guides/GitLabStyleGuide';
// good
import GitLabStyleGuide from '~/GitLabStyleGuide';
```
<a name="global-namespace"></a><a name="6.3"></a>
- [6.3](#global-namespace) **Do not add to global namespace**
......
......@@ -10,7 +10,7 @@ def method
issue = Issue.new
issue.save
render json: issue
end
```
......@@ -20,7 +20,7 @@ end
def method
issue = Issue.new
issue.save
render json: issue
end
```
......
......@@ -3,8 +3,7 @@
## Log arguments to Sidekiq jobs
If you want to see what arguments are being passed to Sidekiq jobs you can set
the `SIDEKIQ_LOG_ARGUMENTS` [environment variable]
(https://docs.gitlab.com/omnibus/settings/environment-variables.html) to `1` (true).
the `SIDEKIQ_LOG_ARGUMENTS` [environment variable](https://docs.gitlab.com/omnibus/settings/environment-variables.html) to `1` (true).
Example:
......
......@@ -5,23 +5,23 @@
Our current CI parallelization setup is as follows:
1. The `knapsack` job in the prepare stage that is supposed to ensure we have a
`knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` file:
- The `knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` file is fetched
from S3, if it's not here we initialize the file with `{}`.
`knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` file:
- The `knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` file is fetched
from S3, if it's not here we initialize the file with `{}`.
1. Each `rspec x y` job is run with `knapsack rspec` and should have an evenly
distributed share of tests:
- It works because the jobs have access to the
`knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` since the "artifacts
from all previous stages are passed by default". [^1]
- the jobs set their own report path to
`KNAPSACK_REPORT_PATH=knapsack/${CI_PROJECT_NAME}/${JOB_NAME[0]}_node_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json`.
- if knapsack is doing its job, test files that are run should be listed under
`Report specs`, not under `Leftover specs`.
distributed share of tests:
- It works because the jobs have access to the
`knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` since the "artifacts
from all previous stages are passed by default".
- the jobs set their own report path to
`KNAPSACK_REPORT_PATH=knapsack/${CI_PROJECT_NAME}/${JOB_NAME[0]}_node_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json`.
- if knapsack is doing its job, test files that are run should be listed under
`Report specs`, not under `Leftover specs`.
1. The `update-knapsack` job takes all the
`knapsack/${CI_PROJECT_NAME}/${JOB_NAME[0]}_node_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json`
files from the `rspec x y` jobs and merges them all together into a single
`knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` file that is then
uploaded to S3.
`knapsack/${CI_PROJECT_NAME}/${JOB_NAME[0]}_node_${CI_NODE_INDEX}_${CI_NODE_TOTAL}_report.json`
files from the `rspec x y` jobs and merges them all together into a single
`knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` file that is then
uploaded to S3.
After that, the next pipeline will use the up-to-date
`knapsack/${CI_PROJECT_NAME}/rspec_report-master.json` file.
......
......@@ -153,7 +153,7 @@ trade-off:
- Unit tests are usually cheap, and you should consider them like the basement
of your house: you need them to be confident that your code is behaving
correctly. However if you run only unit tests without integration / system
tests, you might [miss] the [big] [picture]!
tests, you might [miss] the [big] / [picture] !
- Integration tests are a bit more expensive, but don't abuse them. A system test
is often better than an integration test that is stubbing a lot of internals.
- System tests are expensive (compared to unit tests), even more if they require
......
# How to create a project in GitLab
>**Notes:**
- For a list of words that are not allowed to be used as project names see the
[reserved names][reserved].
> **Notes:**
> - For a list of words that are not allowed to be used as project names see the
> [reserved names][reserved].
1. In your dashboard, click the green **New project** button or use the plus
icon in the upper right corner of the navigation bar.
......
......@@ -71,7 +71,7 @@ The first items we need to configure are the basic settings of the underlying vi
1. Enter a `User name` - e.g. **"gitlab-admin"**
1. Select an `Authentication type`, either **SSH public key** or **Password**:
>**Note:** if you're unsure which authentication type to use, select **Password**
> **Note:** if you're unsure which authentication type to use, select **Password**
1. If you chose **SSH public key** - enter your `SSH public key` into the field provided
_(read the [SSH documentation][GitLab-Docs-SSH] to learn more about how to set up SSH
......@@ -80,8 +80,10 @@ The first items we need to configure are the basic settings of the underlying vi
will use later in this tutorial to [SSH] into the VM, so make sure it's a strong password/passphrase)_
1. Choose the appropriate `Subscription` tier for your Azure account
1. Choose an existing `Resource Group` or create a new one - e.g. **"GitLab-CE-Azure"**
>**Note:** a "Resource group" is a way to group related resources together for easier administration.
We chose "GitLab-CE-Azure", but your resource group can have the same name as your VM.
> **Note:** a "Resource group" is a way to group related resources together for easier administration.
> We chose "GitLab-CE-Azure", but your resource group can have the same name as your VM.
1. Choose a `Location` - if you're unsure, select the default location
Here are the settings we've used:
......@@ -95,7 +97,7 @@ Check the settings you have entered, and then click **"OK"** when you're ready t
Next, you need to choose the size of your VM - selecting features such as the number of CPU cores,
the amount of RAM, the size of storage (and its speed), etc.
>**Note:** in common with other cloud vendors, Azure operates a resource/usage pricing model, i.e.
> **Note:** in common with other cloud vendors, Azure operates a resource/usage pricing model, i.e.
the more resources your VM consumes the more it will cost you to run, so make your selection
carefully. You'll see that Azure provides an _estimated_ monthly cost beneath each VM Size to help
guide your selection.
......@@ -106,7 +108,7 @@ ahead and select this one, but please choose the size which best meets your own
![Azure - Create Virtual Machine - Size](img/azure-create-virtual-machine-size.png)
>**Note:** be aware that whilst your VM is active (known as "allocated"), it will incur
> **Note:** be aware that whilst your VM is active (known as "allocated"), it will incur
"compute charges" which, ultimately, you will be billed for. So, even if you're using the
free trial credits, you'll likely want to learn
[how to properly shutdown an Azure VM to save money][Azure-Properly-Shutdown-VM].
......@@ -132,7 +134,7 @@ new VM. You'll be billed only for the VM itself (e.g. "Standard DS1 v2") because
![Azure - Create Virtual Machine - Purchase](img/azure-create-virtual-machine-purchase.png)
>**Note:** at this stage, you can review and modify any of the settings you have made during all
> **Note:** at this stage, you can review and modify any of the settings you have made during all
previous steps, just click on any of the four steps to re-open them.
When you have read and agreed to the terms of use and are ready to proceed, click **"Purchase"**.
......@@ -174,7 +176,7 @@ _(the full domain name of your own VM will be different, of course)_.
Click **"Save"** for the changes to take effect.
>**Note:** if you want to use your own domain name, you will need to add a DNS `A` record at your
> **Note:** if you want to use your own domain name, you will need to add a DNS `A` record at your
domain registrar which points to the public IP address of your Azure VM. If you do this, you'll need
to make sure your VM is configured to use a _static_ public IP address (i.e. not a _dynamic_ one)
or you will have to reconfigure the DNS `A` record each time Azure reassigns your VM a new public IP
......@@ -190,7 +192,7 @@ Ports are opened by adding _security rules_ to the **"Network security group"**
has been assigned to. If you followed the process above, then Azure will have automatically created
an NSG named `GitLab-CE-nsg` and assigned the `GitLab-CE` VM to it.
>**Note:** if you gave your VM a different name then the NSG automatically created by Azure will
> **Note:** if you gave your VM a different name then the NSG automatically created by Azure will
also have a different name - the name you gave your VM, with `-nsg` appended to it.
You can navigate to the NSG settings via many different routes in the Azure Portal, but one of the
......@@ -321,7 +323,7 @@ Under the **"Components"** section, we can see that our VM is currently running
GitLab. This is the version of GitLab which was contained in the Azure Marketplace
**"GitLab Community Edition"** offering we used to build the VM when we wrote this tutorial.
>**Note:** The version of GitLab in your own VM instance may well be different, but the update
> **Note:** The version of GitLab in your own VM instance may well be different, but the update
process will still be the same.
### Connect via SSH
......@@ -333,11 +335,11 @@ connect to it using SSH ([Secure Shell][SSH]).
If you're running Windows, you'll need to connect using [PuTTY] or an equivalent Windows SSH client.
If you're running Linux or macOS, then you already have an SSH client installed.
>**Note:**
- Remember that you will need to login with the username and password you specified
[when you created](#basics) your Azure VM
- If you need to reset your VM password, read
[how to reset SSH credentials for a user on an Azure VM][Azure-Troubleshoot-SSH-Connection].
> **Note:**
> - Remember that you will need to login with the username and password you specified
> [when you created](#basics) your Azure VM
> - If you need to reset your VM password, read
> [how to reset SSH credentials for a user on an Azure VM][Azure-Troubleshoot-SSH-Connection].
#### SSH from the command-line
......
# Database MySQL
>**Note:**
- We do not recommend using MySQL due to various issues. For example, case
[(in)sensitivity](https://dev.mysql.com/doc/refman/5.0/en/case-sensitivity.html)
and [problems](https://bugs.mysql.com/bug.php?id=65830) that
[suggested](https://bugs.mysql.com/bug.php?id=50909)
[fixes](https://bugs.mysql.com/bug.php?id=65830) [have](https://bugs.mysql.com/bug.php?id=63164).
> **Note:**
> - We do not recommend using MySQL due to various issues. For example, case
[(in)sensitivity](https://dev.mysql.com/doc/refman/5.0/en/case-sensitivity.html)
and [problems](https://bugs.mysql.com/bug.php?id=65830) that
[suggested](https://bugs.mysql.com/bug.php?id=50909)
[fixes](https://bugs.mysql.com/bug.php?id=65830) [have](https://bugs.mysql.com/bug.php?id=63164).
## Initial database setup
......@@ -146,10 +146,12 @@ Congrats, your GitLab database uses the right InnoDB tablespace format.
However, you must still ensure that any **future tables** created by GitLab will still use the right format:
- If `SELECT @@innodb_file_per_table` returned **1** previously, your server is running correctly.
> It's however a requirement to check *now* that this setting is indeed persisted in your [my.cnf](https://dev.mysql.com/doc/refman/5.7/en/tablespace-enabling.html) file!
> It's however a requirement to check *now* that this setting is indeed persisted in your [my.cnf](https://dev.mysql.com/doc/refman/5.7/en/tablespace-enabling.html) file!
- If `SELECT @@innodb_file_per_table` returned **0** previously, your server is not running correctly.
> [Enable innodb_file_per_table](https://dev.mysql.com/doc/refman/5.7/en/tablespace-enabling.html) by running in a MySQL session as root the command `SET GLOBAL innodb_file_per_table=1, innodb_file_format=Barracuda;` and persist the two settings in your [my.cnf](https://dev.mysql.com/doc/refman/5.7/en/tablespace-enabling.html) file
> [Enable innodb_file_per_table](https://dev.mysql.com/doc/refman/5.7/en/tablespace-enabling.html) by running in a MySQL session as root the command `SET GLOBAL innodb_file_per_table=1, innodb_file_format=Barracuda;` and persist the two settings in your [my.cnf](https://dev.mysql.com/doc/refman/5.7/en/tablespace-enabling.html) file
Now, if the 2 commands above returned **different results**, it means you have a **mix of table formats** in use in your GitLab database. This can happen if your MySQL server has had different values for `innodb_file_per_table` over its lifetime and you updated GitLab at different moments with those inconsistent values. So keep reading.
......
......@@ -40,9 +40,9 @@ In order to deploy GitLab on Kubernetes, the following are required:
1. `helm` and `kubectl` [installed on your computer](preparation/tools_installation.md).
1. A Kubernetes cluster, version 1.8 or higher; 6 vCPUs and 16 GB of RAM are recommended.
- [Google GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster)
- [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
- [Microsoft AKS](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal)
- [Google GKE](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-container-cluster)
- [Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
- [Microsoft AKS](https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal)
1. A [wildcard DNS entry and external IP address](preparation/networking.md)
1. [Authenticate and connect](preparation/connect.md) to the cluster
1. Configure and initialize [Helm Tiller](preparation/tiller.md).
......
......@@ -63,16 +63,18 @@ what we will use in this tutorial.
In short:
1. Open a terminal and in a new directory run:
```sh
vagrant init openshift/origin-all-in-one
```
```sh
vagrant init openshift/origin-all-in-one
```
1. This will generate a Vagrantfile based on the all-in-one VM image
1. In the same directory where you generated the Vagrantfile
enter:
```sh
vagrant up
```
```sh
vagrant up
```
This will download the VirtualBox image and fire up the VM with some preconfigured
values as you can see in the Vagrantfile. As you may have noticed, you need
......@@ -187,22 +189,22 @@ In that case, the OpenShift service might not be running, so in order to fix it:
1. SSH into the VM by going to the directory where the Vagrantfile is and then
run:
```sh
vagrant ssh
```
```sh
vagrant ssh
```
1. Run `systemctl` and verify by the output that the `openshift` service is not
running (it will be in red color). If that's the case start the service with:
```sh
sudo systemctl start openshift
```
```sh
sudo systemctl start openshift
```
1. Verify the service is up with:
```sh
systemctl status openshift -l
```
```sh
systemctl status openshift -l
```
Now you will be able to login using `oc` (like we did before) and visit the web
console.
......@@ -385,55 +387,55 @@ Let's see how to do that using the following steps.
1. Make sure you are in the `gitlab` project:
```sh
oc project gitlab
```
```sh
oc project gitlab
```
1. See what services are used for this project:
```sh
oc get svc
```
```sh
oc get svc
```
The output will be similar to:
The output will be similar to:
```
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gitlab-ce 172.30.243.177 <none> 22/TCP,80/TCP 5d
gitlab-ce-postgresql 172.30.116.75 <none> 5432/TCP 5d
gitlab-ce-redis 172.30.105.88 <none> 6379/TCP 5d
```
```
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gitlab-ce 172.30.243.177 <none> 22/TCP,80/TCP 5d
gitlab-ce-postgresql 172.30.116.75 <none> 5432/TCP 5d
gitlab-ce-redis 172.30.105.88 <none> 6379/TCP 5d
```
1. We need to see the replication controllers of the `gitlab-ce` service.
Get a detailed view of the current ones:
```sh
oc describe rc gitlab-ce
```
```sh
oc describe rc gitlab-ce
```
This will return a large detailed list of the current replication controllers.
Search for the name of the GitLab controller, usually `gitlab-ce-1` or if
that failed at some point and you spawned another one, it will be named
`gitlab-ce-2`.
This will return a large detailed list of the current replication controllers.
Search for the name of the GitLab controller, usually `gitlab-ce-1` or if
that failed at some point and you spawned another one, it will be named
`gitlab-ce-2`.
1. Scale GitLab using the previous information:
```sh
oc scale --replicas=2 replicationcontrollers gitlab-ce-2
```
```sh
oc scale --replicas=2 replicationcontrollers gitlab-ce-2
```
1. Get the new replicas number to make sure scaling worked:
```sh
oc get rc gitlab-ce-2
```
```sh
oc get rc gitlab-ce-2
```
which will return something like:
which will return something like:
```
NAME DESIRED CURRENT AGE
gitlab-ce-2 2 2 5d
```
```
NAME DESIRED CURRENT AGE
gitlab-ce-2 2 2 5d
```
And that's it! We successfully scaled the replicas to 2 using the CLI.
......@@ -469,9 +471,10 @@ GitLab service account to the `anyuid` [Security Context Constraints][scc].
For OpenShift v3.0, you will need to do this manually:
1. Edit the Security Context:
```sh
oc edit scc anyuid
```
```sh
oc edit scc anyuid
```
1. Add `system:serviceaccount:<project>:gitlab-ce-user` to the `users` section.
If you changed the Application Name from the default the user will
......
......@@ -2,7 +2,7 @@
To enable the Microsoft Azure OAuth2 OmniAuth provider you must register your application with Azure. Azure will generate a client ID and secret key for you to use.
1. Sign in to the [Azure Management Portal](https://manage.windowsazure.com>).
1. Sign in to the [Azure Management Portal](https://manage.windowsazure.com).
1. Select "Active Directory" on the left and choose the directory you want to use to register GitLab.
......
......@@ -24,11 +24,11 @@ This strategy is designed to allow configuration of the simple OmniAuth SSO proc
1. Register your application in the OAuth2 provider you wish to authenticate with.
The redirect URI you provide when registering the application should be:
The redirect URI you provide when registering the application should be:
```
http://your-gitlab.host.com/users/auth/oauth2_generic/callback
```
```
http://your-gitlab.host.com/users/auth/oauth2_generic/callback
```
1. You should now be able to get a Client ID and Client Secret.
Where this shows up will differ for each provider.
......@@ -36,18 +36,18 @@ This strategy is designed to allow configuration of the simple OmniAuth SSO proc
1. On your GitLab server, open the configuration file.
For Omnibus package:
For Omnibus package:
```sh
sudo editor /etc/gitlab/gitlab.rb
```
```sh
sudo editor /etc/gitlab/gitlab.rb
```
For installations from source:
For installations from source:
```sh
cd /home/git/gitlab
sudo -u git -H editor config/gitlab.yml
```
```sh
cd /home/git/gitlab
sudo -u git -H editor config/gitlab.yml
```
1. See [Initial OmniAuth Configuration](omniauth.md#initial-omniauth-configuration) for initial settings
......
......@@ -50,16 +50,16 @@ that are in common for all providers that we need to consider.
be blocked by default and will have to be unblocked by an administrator before
they are able to sign in.
>**Note:**
If you set `block_auto_created_users` to `false`, make sure to only
define providers under `allow_single_sign_on` that you are able to control, like
SAML, Shibboleth, Crowd or Google, or set it to `false`; otherwise any user on
the Internet will be able to successfully sign in to your GitLab without
administrative approval.
>**Note:**
`auto_link_ldap_user` requires the `uid` of the user to be the same in both LDAP
and the OmniAuth provider.
> **Note:**
> If you set `block_auto_created_users` to `false`, make sure to only
> define providers under `allow_single_sign_on` that you are able to control, like
> SAML, Shibboleth, Crowd or Google, or set it to `false`; otherwise any user on
> the Internet will be able to successfully sign in to your GitLab without
> administrative approval.
>
> **Note:**
> `auto_link_ldap_user` requires the `uid` of the user to be the same in both LDAP
> and the OmniAuth provider.
To change these settings:
......@@ -233,15 +233,15 @@ You can enable profile syncing from selected OmniAuth providers and for all or f
When authenticating using LDAP, the user's email is always synced.
```ruby
gitlab_rails['sync_profile_from_provider'] = ['twitter', 'google_oauth2']
gitlab_rails['sync_profile_attributes'] = ['name', 'email', 'location']
```ruby
gitlab_rails['sync_profile_from_provider'] = ['twitter', 'google_oauth2']
gitlab_rails['sync_profile_attributes'] = ['name', 'email', 'location']
```
**For installations from source**
**For installations from source**
```yaml
omniauth:
sync_profile_from_provider: ['twitter', 'google_oauth2']
sync_profile_attributes: ['email', 'location']
```
```yaml
omniauth:
sync_profile_from_provider: ['twitter', 'google_oauth2']
sync_profile_attributes: ['email', 'location']
```
......@@ -123,9 +123,10 @@ in your SAML IdP:
To ease configuration, most IdPs accept a metadata URL for the application to provide
configuration information to the IdP. To build the metadata URL for GitLab, append
`users/auth/saml/metadata` to the HTTPS URL of your GitLab installation, for instance:
```
https://gitlab.example.com/users/auth/saml/metadata
```
```
https://gitlab.example.com/users/auth/saml/metadata
```
At a minimum the IdP *must* provide a claim containing the user's email address, using
claim name `email` or `mail`. The email will be used to automatically generate the GitLab
......
......@@ -41,10 +41,9 @@ Here's what we'll cover in this tutorial:
- [Without history modification](#undo-remote-changes-without-changing-history) (preferred way)
- [With history modification](#undo-remote-changes-with-modifying-history) (requires
coordination with team and force pushes).
- [Usecases when modifying history is generally acceptable](#where-modifying-history-is-generally-acceptable)
- [How to modify history](#how-modifying-history-is-done)
- [How to remove sensitive information from repository](#deleting-sensitive-information-from-commits)
- [Usecases when modifying history is generally acceptable](#where-modifying-history-is-generally-acceptable)
- [How to modify history](#how-modifying-history-is-done)
- [How to remove sensitive information from repository](#deleting-sensitive-information-from-commits)
### Branching strategy
......@@ -101,24 +100,23 @@ no changes added to commit (use "git add" and/or "git commit -a")
At this point there are 3 options to undo the local changes you have:
- Discard all local changes, but save them for possible re-use [later](#quickly-save-local-changes)
```shell
git stash
```
- Discard all local changes, but save them for possible re-use [later](#quickly-save-local-changes)
- Discarding local changes (permanently) to a file
```shell
git stash
```
```shell
git checkout -- <file>
```
- Discarding local changes (permanently) to a file
- Discard all local changes to all files permanently
```shell
git checkout -- <file>
```
```shell
git reset --hard
```
- Discard all local changes to all files permanently
```shell
git reset --hard
```
Before executing `git reset --hard`, keep in mind that there is also a way to
just temporarily store the changes without committing them using `git stash`.
......@@ -140,10 +138,10 @@ them. This is achieved by Git stashing command `git stash`, which in fact saves
current work and runs `git reset --hard`, but it also has various
additional options like:
- `git stash save`, which lets you include a temporary commit message to help you identify the changes, among other options
- `git stash list`, which lists all previously stashed commits (yes, there can be more) that were not `pop`ed
- `git stash pop`, which redoes previously stashed changes and removes them from the stash list
- `git stash apply`, which redoes previously stashed changes, but keeps them on the stash list
- `git stash save`, which lets you include a temporary commit message to help you identify the changes, among other options
- `git stash list`, which lists all previously stashed commits (yes, there can be more) that were not `pop`ed
- `git stash pop`, which redoes previously stashed changes and removes them from the stash list
- `git stash apply`, which redoes previously stashed changes, but keeps them on the stash list
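For instance, a typical round-trip with these commands might look like the following sketch (the message and ordering are only illustrative):

```shell
# Save the current work with a descriptive message
git stash save "WIP: rework signup validation"

# See what is currently stashed
git stash list

# Re-apply the most recent stash and drop it from the list
git stash pop
```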
### Staged local changes (before you commit)
......@@ -174,29 +172,29 @@ Changes to be committed:
Now you have 4 options to undo your changes:
- Unstage the file to current commit (HEAD)
- Unstage the file to current commit (HEAD)
```shell
git reset HEAD <file>
```
```shell
git reset HEAD <file>
```
- Unstage everything - retain changes
- Unstage everything - retain changes
```shell
git reset
```
```shell
git reset
```
- Discard all local changes, but save them for [later](#quickly-save-local-changes)
- Discard all local changes, but save them for [later](#quickly-save-local-changes)
```shell
git stash
```
```shell
git stash
```
- Discard everything permanently
- Discard everything permanently
```shell
git reset --hard
```
```shell
git reset --hard
```
## Committed local changes
......@@ -232,42 +230,42 @@ In our example we will end up with commit `B`, that introduced bug/error. We hav
- Undo (swap additions and deletions) changes introduced by commit `B`.
```shell
git revert commit-B-id
```
```shell
git revert commit-B-id
```
- Undo changes on a single file or directory from commit `B`, but retain them in the staged state
```shell
git checkout commit-B-id <file>
```
```shell
git checkout commit-B-id <file>
```
- Undo changes on a single file or directory from commit `B`, but retain them in the unstaged state
```shell
git reset commit-B-id <file>
```
- There is one command we also must not forget: **creating a new branch**
from the point where changes are not applicable or where the development has hit a
dead end. For example you have done commits `A-B-C-D` on your feature-branch
and then you figure `C` and `D` are wrong. At this point you either reset to `B`
and do commit `F` (which will cause problems with pushing and, if force-pushed, also with other developers)
since the branch now looks like `A-B-F`, which clashes with what other developers have locally (you will
[change history](#with-history-modification)), or you simply check out commit `B`, create
a new branch and do commit `F`. In the last case, everyone else can still do their work while you
have your new way to get it right and merge it back in later. Alternatively, with GitLab,
you can [cherry-pick](../../../user/project/merge_requests/cherry_pick_changes.md#cherry-picking-a-commit)
that commit into a new merge request.
![Create a new branch to avoid clashing](img/branching.png)
```shell
git checkout commit-B-id
git checkout -b new-path-of-feature
# Create <commit F>
git commit -a
```
```shell
git reset commit-B-id <file>
```
- There is one command we also must not forget: **creating a new branch**
from the point where changes are not applicable or where the development has hit a
dead end. For example you have done commits `A-B-C-D` on your feature-branch
and then you figure `C` and `D` are wrong. At this point you either reset to `B`
and do commit `F` (which will cause problems with pushing and, if force-pushed, also with other developers)
since the branch now looks like `A-B-F`, which clashes with what other developers have locally (you will
[change history](#with-history-modification)), or you simply check out commit `B`, create
a new branch and do commit `F`. In the last case, everyone else can still do their work while you
have your new way to get it right and merge it back in later. Alternatively, with GitLab,
you can [cherry-pick](../../../user/project/merge_requests/cherry_pick_changes.md#cherry-picking-a-commit)
that commit into a new merge request.
![Create a new branch to avoid clashing](img/branching.png)
```shell
git checkout commit-B-id
git checkout -b new-path-of-feature
# Create <commit F>
git commit -a
```
### With history modification
......@@ -287,9 +285,9 @@ delete commit `B`.
- Rebase the range from current commit D to A:
```shell
git rebase -i A
```
```shell
git rebase -i A
```
- Command opens your favorite editor where you write `drop` in front of commit
`B`, but you leave default `pick` with all other commits. Save and exit the
......
......@@ -73,10 +73,10 @@ The curriculum is composed of GitLab videos, screencasts, presentations, project
#### 1.7 Community and Support
1. [Getting Help](https://about.gitlab.com/getting-help/)
- Proposing Features and Reporting and Tracking bugs for GitLab
- The GitLab IRC channel, Gitter Chat Room, Community Forum and Mailing List
- Getting Technical Support
- Being part of our Great Community and Contributing to GitLab
- Proposing Features and Reporting and Tracking bugs for GitLab
- The GitLab IRC channel, Gitter Chat Room, Community Forum and Mailing List
- Getting Technical Support
- Being part of our Great Community and Contributing to GitLab
1. [Getting Started with the GitLab Development Kit (GDK)](https://about.gitlab.com/2016/06/08/getting-started-with-gitlab-development-kit/)
1. [Contributing Technical Articles to the GitLab Blog](https://about.gitlab.com/2016/01/26/call-for-writers/)
1. [GitLab Training Workshops](https://docs.gitlab.com/ce/university/training/end-user/)
......
......@@ -395,5 +395,5 @@ some redundancy options but it might also imply Geographic replication.
There is a lot of ground yet to cover so have a read through these other
resources and feel free to open an issue to request additional material.
* [GitLab High Availability](http://docs.gitlab.com/ce/administration/high_availability/README.html#sts=High Availability)
* [GitLab Geo](http://docs.gitlab.com/ee/gitlab-geo/README.html)
* [GitLab High Availability](http://docs.gitlab.com/ce/administration/high_availability/README.html#sts=High%20Availability)
* [GitLab Geo](http://docs.gitlab.com/ee/gitlab-geo/README.html)
......@@ -55,13 +55,13 @@ Sometimes we need to upgrade customers from old versions of GitLab to latest, so
- Keep this up-to-date as patch and version releases become available, just like our customers would
- Try out the following installation path
- [Install GitLab 4.2 from source](https://gitlab.com/gitlab-org/gitlab-ce/blob/d67117b5a185cfb15a1d7e749588ff981ffbf779/doc/install/installation.md)
- External MySQL database
- External NGINX
- External MySQL database
- External NGINX
- Create some test data
- Populated Repos
- Users
- Groups
- Projects
- Populated Repos
- Users
- Groups
- Projects
- [Backup using our Backup rake task](https://docs.gitlab.com/ce/raketasks/backup_restore.html#create-a-backup-of-the-gitlab-system)
- [Upgrade to 5.0 source using our Upgrade documentation](https://gitlab.com/gitlab-org/gitlab-ee/blob/master/doc/update/4.2-to-5.0.md)
- [Upgrade to 5.1 source](https://gitlab.com/gitlab-org/gitlab-ee/blob/master/doc/update/5.0-to-5.1.md)
......@@ -72,7 +72,7 @@ Sometimes we need to upgrade customers from old versions of GitLab to latest, so
- [Upgrade to Omnibus 7.14](https://docs.gitlab.com/omnibus/update/README.html#upgrading-from-a-non-omnibus-installation-to-an-omnibus-installation)
- [Restore backup using our Restore rake task](https://docs.gitlab.com/ce/raketasks/backup_restore.html#restore-a-previously-created-backup)
- [Upgrade to latest EE](https://about.gitlab.com/downloads-ee)
- (GitLab inc. only) Acquire and apply a license for the Enterprise Edition product, ask in #support
- (GitLab inc. only) Acquire and apply a license for the Enterprise Edition product, ask in #support
- Perform a downgrade from [EE to CE](https://docs.gitlab.com/ee/downgrade_ee_to_ce/README.html)
#### Start to learn about some of the integrations that we support
......@@ -98,9 +98,9 @@ Our integrations add great value to GitLab. User questions often relate to integ
- [Environment Information and maintenance checks](https://docs.gitlab.com/ce/raketasks/maintenance.html)
- [GitLab check](https://docs.gitlab.com/ce/raketasks/check.html)
- Omnibus commands
- [Status](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/maintenance/README.md#get-service-status)
- [Starting and stopping services](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/maintenance/README.md#starting-and-stopping)
- [Starting a rails console](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/maintenance/README.md#invoking-rake-tasks)
- [Status](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/maintenance/README.md#get-service-status)
- [Starting and stopping services](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/maintenance/README.md#starting-and-stopping)
- [Starting a rails console](https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/maintenance/README.md#invoking-rake-tasks)
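As a rough cheat sheet, the commands behind those links look like this on an Omnibus installation (the service name is only an example):

```sh
# Check the status of all GitLab services
sudo gitlab-ctl status

# Restart a single service, or everything
sudo gitlab-ctl restart nginx
sudo gitlab-ctl restart

# Open a Rails console for debugging
sudo gitlab-rails console
```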
#### Learn about the Support process
......@@ -118,16 +118,16 @@ Zendesk is our Support Centre and our main communication line with our Customers
- Here you will find a large variety of queries mainly from our Users who are self hosting GitLab CE
- Understand the questions that are asked and dig in to try to find a solution
- [Proceed on to the GitLab.com Support Forum](https://about.gitlab.com/handbook/support/#gitlabcom-support-trackera-namesupp-foruma)
- Here you will find queries regarding our own GitLab.com
- Helping Users here will give you an understanding of our Admin interface and other tools
- Here you will find queries regarding our own GitLab.com
- Helping Users here will give you an understanding of our Admin interface and other tools
- [Proceed on to the Twitter tickets in Zendesk](https://about.gitlab.com/handbook/support/#twitter)
- Here you will gain a great insight into our userbase
- Learn from any complaints and problems and feed them back to the team
- Tweets can range from help needed with GitLab installations, the API and just general queries
- Here you will gain a great insight into our userbase
- Learn from any complaints and problems and feed them back to the team
- Tweets can range from help needed with GitLab installations, the API and just general queries
- [Proceed on to Regular email Support tickets](https://about.gitlab.com/handbook/support/#regular-zendesk-tickets-a-nameregulara)
- Here you will find tickets from our GitLab EE Customers and GitLab CE Users
- Tickets here are extremely varied and often very technical
- You should be prepared for these tickets, given the knowledge gained from previous tiers and your training
- Here you will find tickets from our GitLab EE Customers and GitLab CE Users
- Tickets here are extremely varied and often very technical
- You should be prepared for these tickets, given the knowledge gained from previous tiers and your training
- Check out your colleagues' responses
- Hop on to the #support-live-feed channel in Slack and see the tickets as they come in and are updated
- Read through old tickets that your colleagues have worked on
......@@ -135,10 +135,10 @@ Zendesk is our Support Centre and our main communication line with our Customers
- [Learn about Cisco WebEx](https://about.gitlab.com/handbook/support/onboarding/#webexa-namewebexa)
- Training calls
- Information gathering calls
- It's good to find out how new and prospective customers are going to be using the product and how they will set up their infrastructure
- It's good to find out how new and prospective customers are going to be using the product and how they will set up their infrastructure
- Diagnosis calls
- When email isn't enough, we may need to hop on a call and do some debugging alongside the customer
- These paired calls are a great learning experience
- When email isn't enough, we may need to hop on a call and do some debugging alongside the customer
- These paired calls are a great learning experience
- Upgrade calls
- Emergency calls
......
......@@ -10,7 +10,7 @@ comments: false
- Find a commit that introduced a bug
- Works through a process of elimination
- Specify a known good and bad revision to begin
- Specify a known good and bad revision to begin
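A minimal bisect session, assuming the last known good revision is tagged `v1.0`, looks roughly like this:

```bash
git bisect start
git bisect bad                # the current HEAD is broken
git bisect good v1.0          # the last revision known to work
# Git checks out a revision in between; test it, then mark it:
git bisect good               # or: git bisect bad
# Repeat until Git reports the first bad commit, then clean up:
git bisect reset
```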
----------
......
......@@ -16,10 +16,10 @@ comments: false
- **Linux**
```bash
sudo yum install git-all
sudo yum install git-all
```
```bash
sudo apt-get install git-all
sudo apt-get install git-all
```
----------
......
......@@ -9,13 +9,15 @@ comments: false
## Instantiating Repositories
* Create a new repository by instantiating it through
```bash
git init
```
```bash
git init
```
* Copy an existing project by cloning the repository through
```bash
git clone <url>
```
```bash
git clone <url>
```
----------
......@@ -24,17 +26,18 @@ git clone <url>
* To instantiate a central repository a `--bare` flag is required.
* Bare repositories don't allow file editing or committing changes.
* Create a bare repo with
```bash
git init --bare project-name.git
```
```bash
git init --bare project-name.git
```
----------
## Instantiate workflow with clone
1. Create a project in your user namespace
- Choose to import from 'Any Repo by URL' and use
https://gitlab.com/gitlab-org/training-examples.git
- Choose to import from 'Any Repo by URL' and use
https://gitlab.com/gitlab-org/training-examples.git
2. Create a '`Workspace`' directory in your home directory.
3. Clone the '`training-examples`' project
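On the command line, steps 2 and 3 boil down to something like the following (the directory layout is just a suggestion):

```bash
cd ~
mkdir Workspace
cd Workspace
git clone https://gitlab.com/gitlab-org/training-examples.git
```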
......
......@@ -11,27 +11,35 @@ comments: false
Adds content to the index or staging area.
* Adds a list of files
```bash
git add <files>
```
```bash
git add <files>
```
* Adds all files including deleted ones
```bash
git add -A
```
```bash
git add -A
```
----------
## Git add continued
* Add all text files in current dir
```bash
git add *.txt
```
```bash
git add *.txt
```
* Add all text files in the project
```bash
git add "*.txt*"
```
```bash
git add "*.txt*"
```
* Adds all files in directory
```bash
git add views/layouts/
```
```bash
git add views/layouts/
```
......@@ -9,31 +9,36 @@ comments: false
Git log lists commit history. It allows searching and filtering.
* Initiate log
```
git log
```
```
git log
```
* Retrieve set number of records:
```
git log -n 2
```
```
git log -n 2
```
* Search commits by author. Allows user name or a regular expression.
```
git log --author="user_name"
```
```
git log --author="user_name"
```
----------
* Search by commit message.
```
git log --grep="<pattern>"
```
```
git log --grep="<pattern>"
```
* Search by date
```
git log --since=1.month.ago --until=3.weeks.ago
```
```
git log --since=1.month.ago --until=3.weeks.ago
```
----------
......
......@@ -9,26 +9,30 @@ comments: false
## Undo Commits
* Undo last commit putting everything back into the staging area.
```
git reset --soft HEAD^
```
```
git reset --soft HEAD^
```
* Add files and change message with:
```
git commit --amend -m "New Message"
```
```
git commit --amend -m "New Message"
```
----------
* Undo the last commit and remove its changes
```
git reset --hard HEAD^
```
```
git reset --hard HEAD^
```
* Same as last one but for two commits back
```
git reset --hard HEAD^^
```
```
git reset --hard HEAD^^
```
**Don't reset after pushing**
......
......@@ -10,50 +10,52 @@ We use git stash to store our changes when they are not ready to be committed
and we need to change to a different branch.
* Stash
```
git stash save
# or
git stash
# or with a message
git stash save "this is a message to display on the list"
```
```
git stash save
# or
git stash
# or with a message
git stash save "this is a message to display on the list"
```
* Apply stash to keep working on it
```
git stash apply
# or apply a specific one from our stack
git stash apply stash@{3}
```
```
git stash apply
# or apply a specific one from our stack
git stash apply stash@{3}
```
----------
* Every time we save a stash it gets stacked, so by using `list` we can see all our
stashes.
```
git stash list
# or for more information (log methods)
git stash list --stat
```
```
git stash list
# or for more information (log methods)
git stash list --stat
```
* To clean our stack, we need to remove stashes manually.
```
# drop top stash
git stash drop
# or
git stash drop <name>
# to clear all history we can use
git stash clear
```
```
# drop top stash
git stash drop
# or
git stash drop <name>
# to clear all history we can use
git stash clear
```
----------
* Apply and drop on one command
```
git stash pop
```
```
git stash pop
```
* If we meet conflicts, we need to either reset or commit our changes.
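A rough sketch of both ways out after a conflicting `git stash pop` (the file name is just an example):

```
# Keep the stashed changes: resolve the conflict, then stage and commit
git add conflicted_file.rb
git commit -m "Apply stashed changes"

# Or throw the attempt away and return to a clean working tree
git reset --hard
```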
......
......@@ -10,26 +10,27 @@ comments: false
* To remove files from the staging area, use `git reset HEAD`, where `HEAD` is the last commit of the current branch.
```bash
git reset HEAD <file>
```
```bash
git reset HEAD <file>
```
* This will unstage the file but maintain the modifications. To revert the file back to the state it was in before the changes we can use:
```bash
git checkout -- <file>
```
```bash
git checkout -- <file>
```
----------
* To remove a file from disk and the repository, use `git rm`. To remove a directory, use the `-r` flag.
```
git rm '*.txt'
git rm -r <dirname>
```
```
git rm '*.txt'
git rm -r <dirname>
```
* If we want to remove a file from the repository but keep it on disk, say we forgot to add it to our `.gitignore` file, then use `--cached`.
```
git rm <filename> --cached
```
```
git rm <filename> --cached
```
......@@ -150,48 +150,48 @@ Update your current configuration as follows, replacing with your storages names
1. Update your `gitlab.yml`, from
```yaml
repositories:
storages: # You must have at least a 'default' storage path.
default: /home/git/repositories
nfs: /mnt/nfs/repositories
cephfs: /mnt/cephfs/repositories
```
to
```yaml
repositories:
storages: # You must have at least a 'default' storage path.
default:
path: /home/git/repositories
nfs:
path: /mnt/nfs/repositories
cephfs:
path: /mnt/cephfs/repositories
```
```yaml
repositories:
storages: # You must have at least a 'default' storage path.
default: /home/git/repositories
nfs: /mnt/nfs/repositories
cephfs: /mnt/cephfs/repositories
```
to
```yaml
repositories:
storages: # You must have at least a 'default' storage path.
default:
path: /home/git/repositories
nfs:
path: /mnt/nfs/repositories
cephfs:
path: /mnt/cephfs/repositories
```
**For Omnibus installations**
1. Update your `/etc/gitlab/gitlab.rb`, from
```ruby
git_data_dirs({
"default" => "/var/opt/gitlab/git-data",
"nfs" => "/mnt/nfs/git-data",
"cephfs" => "/mnt/cephfs/git-data"
})
```
to
```ruby
git_data_dirs({
"default" => { "path" => "/var/opt/gitlab/git-data" },
"nfs" => { "path" => "/mnt/nfs/git-data" },
"cephfs" => { "path" => "/mnt/cephfs/git-data" }
})
```
```ruby
git_data_dirs({
"default" => "/var/opt/gitlab/git-data",
"nfs" => "/mnt/nfs/git-data",
"cephfs" => "/mnt/cephfs/git-data"
})
```
to
```ruby
git_data_dirs({
"default" => { "path" => "/var/opt/gitlab/git-data" },
"nfs" => { "path" => "/mnt/nfs/git-data" },
"cephfs" => { "path" => "/mnt/cephfs/git-data" }
})
```
#### Git configuration
......
......@@ -150,48 +150,48 @@ Update your current configuration as follows, replacing with your storages names
1. Update your `gitlab.yml`, from
```yaml
repositories:
storages: # You must have at least a 'default' storage path.
default: /home/git/repositories
nfs: /mnt/nfs/repositories
cephfs: /mnt/cephfs/repositories
```
to
```yaml
repositories:
storages: # You must have at least a 'default' storage path.
default:
path: /home/git/repositories
nfs:
path: /mnt/nfs/repositories
cephfs:
path: /mnt/cephfs/repositories
```
```yaml
repositories:
storages: # You must have at least a 'default' storage path.
default: /home/git/repositories
nfs: /mnt/nfs/repositories
cephfs: /mnt/cephfs/repositories
```
to
```yaml
repositories:
storages: # You must have at least a 'default' storage path.
default:
path: /home/git/repositories
nfs:
path: /mnt/nfs/repositories
cephfs:
path: /mnt/cephfs/repositories
```
**For Omnibus installations**
1. Update your `/etc/gitlab/gitlab.rb`, from
```ruby
git_data_dirs({
"default" => "/var/opt/gitlab/git-data",
"nfs" => "/mnt/nfs/git-data",
"cephfs" => "/mnt/cephfs/git-data"
})
```
to
```ruby
git_data_dirs({
"default" => { "path" => "/var/opt/gitlab/git-data" },
"nfs" => { "path" => "/mnt/nfs/git-data" },
"cephfs" => { "path" => "/mnt/cephfs/git-data" }
})
```
```ruby
git_data_dirs({
"default" => "/var/opt/gitlab/git-data",
"nfs" => "/mnt/nfs/git-data",
"cephfs" => "/mnt/cephfs/git-data"
})
```
to
```ruby
git_data_dirs({
"default" => { "path" => "/var/opt/gitlab/git-data" },
"nfs" => { "path" => "/mnt/nfs/git-data" },
"cephfs" => { "path" => "/mnt/cephfs/git-data" }
})
```
#### Git configuration
......
# Health Check
>**Notes:**
- Liveness and readiness probes were [introduced][ce-10416] in GitLab 9.1.
- The `health_check` endpoint was [introduced][ce-3888] in GitLab 8.8 and will
be deprecated in GitLab 9.1. Read more in the [old behavior](#old-behavior)
section.
- [Access token](#access-token) has been deprecated in GitLab 9.4
in favor of [IP whitelist](#ip-whitelist)
> **Notes:**
> - Liveness and readiness probes were [introduced][ce-10416] in GitLab 9.1.
> - The `health_check` endpoint was [introduced][ce-3888] in GitLab 8.8 and will
> be deprecated in GitLab 9.1. Read more in the [old behavior](#old-behavior)
> section.
> - [Access token](#access-token) has been deprecated in GitLab 9.4
> in favor of [IP whitelist](#ip-whitelist)
GitLab provides liveness and readiness probes to indicate service health and
reachability to required services. These probes report on the status of the
......
# Award emoji
>**Notes:**
- First [introduced][1825] in GitLab 8.2.
- GitLab 9.0 [introduced][ce-9570] the usage of native emojis if the platform
supports them and falls back to images or CSS sprites. This change greatly
improved the award emoji performance overall.
> **Notes:**
> - First [introduced][1825] in GitLab 8.2.
> - GitLab 9.0 [introduced][ce-9570] the usage of native emojis if the platform
> supports them and falls back to images or CSS sprites. This change greatly
> improved the award emoji performance overall.
When you're collaborating online, you get fewer opportunities for high-fives
and thumbs-ups. Emoji can be awarded to issues, merge requests, snippets, and
......
......@@ -23,9 +23,9 @@ in the form of a resolvable or threaded discussion.
## Resolvable discussions
>**Notes:**
- The main feature was [introduced][ce-5022] in GitLab 8.11.
- Resolvable discussions can be added only to merge request diffs.
> **Notes:**
> - The main feature was [introduced][ce-5022] in GitLab 8.11.
> - Resolvable discussions can be added only to merge request diffs.
Discussion resolution helps keep track of progress during planning or code review.
Resolving comments prevents you from forgetting to address feedback and lets you
......
......@@ -22,14 +22,14 @@ group and grant access to all their projects at once
- Create a group, include members of your team, and make it easier to
`@mention` all the team at once in issues and merge requests
- Create a group for your company members, and create [subgroups](subgroups/index.md)
for each individual team. Let's say you create a group called `company-team`, and among others,
you created subgroups in this group for each individual team `backend-team`,
`frontend-team`, and `production-team`:
1. When you start a new implementation from an issue, you add a comment:
for each individual team. Let's say you create a group called `company-team`, and among others,
you created subgroups in this group for each individual team `backend-team`,
`frontend-team`, and `production-team`:
1. When you start a new implementation from an issue, you add a comment:
_"`@company-team`, let's do it! `@company-team/backend-team` you're good to go!"_
1. When your backend team needs help from frontend, they add a comment:
1. When your backend team needs help from frontend, they add a comment:
_"`@company-team/frontend-team` could you help us here please?"_
1. When the frontend team completes their implementation, they comment:
1. When the frontend team completes their implementation, they comment:
_"`@company-team/backend-team`, it's done! Let's ship it `@company-team/production-team`!"_
## Namespaces
......@@ -64,8 +64,8 @@ together in a single list view.
## Create a new group
> **Notes:**
- For a list of words that are not allowed to be used as group names see the
[reserved names](../reserved_names.md).
> - For a list of words that are not allowed to be used as group names see the
> [reserved names](../reserved_names.md).
You can create a group in GitLab from:
......
# Subgroups
>**Notes:**
- [Introduced][ce-2772] in GitLab 9.0.
- Not available when using MySQL as external database (support removed in
GitLab 9.3 [due to performance reasons][issue]).
> **Notes:**
> - [Introduced][ce-2772] in GitLab 9.0.
> - Not available when using MySQL as external database (support removed in
> GitLab 9.3 [due to performance reasons][issue]).
With subgroups (aka nested groups or hierarchical groups) you can have
up to 20 levels of nested groups, which among other things can help you to:
......@@ -79,14 +79,14 @@ structure.
## Creating a subgroup
>**Notes:**
- You need to be an Owner of a group in order to be able to create
a subgroup. For more information check the [permissions table][permissions].
- For a list of words that are not allowed to be used as group names see the
[reserved names][reserved].
- Users can always create subgroups if they are explicitly added as an Owner to
a parent group even if group creation is disabled by an administrator in their
settings.
> **Notes:**
> - You need to be an Owner of a group in order to be able to create
> a subgroup. For more information check the [permissions table][permissions].
> - For a list of words that are not allowed to be used as group names see the
> [reserved names][reserved].
> - Users can always create subgroups if they are explicitly added as an Owner to
> a parent group even if group creation is disabled by an administrator in their
> settings.
To create a subgroup:
......
......@@ -66,15 +66,15 @@ GFM honors the markdown specification in how [paragraphs and line breaks are han
A paragraph is simply one or more consecutive lines of text, separated by one or more blank lines.
Line-breaks, or soft returns, are rendered if you end a line with two or more spaces:
[//]: # (Do *NOT* remove the two ending whitespaces in the following line.)
[//]: # (They are needed for the Markdown text to render correctly.)
<!-- (Do *NOT* remove the two ending whitespaces in the following line.) -->
<!-- (They are needed for the Markdown text to render correctly.) -->
Roses are red [followed by two or more spaces]
Violets are blue
Sugar is sweet
[//]: # (Do *NOT* remove the two ending whitespaces in the following line.)
[//]: # (They are needed for the Markdown text to render correctly.)
<!-- (Do *NOT* remove the two ending whitespaces in the following line.) -->
<!-- (They are needed for the Markdown text to render correctly.) -->
Roses are red
Violets are blue
......@@ -472,7 +472,7 @@ Become:
> [Introduced](https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/15107) in
GitLab 10.3.
>
> If this is not rendered correctly, see
https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/user/markdown.md#mermaid
......
......@@ -59,8 +59,8 @@ of recovery codes.
### Enable 2FA via U2F device
> **Notes:**
- GitLab officially only supports [Yubikey] U2F devices.
- Support for U2F devices was added in GitLab 8.8.
> - GitLab officially only supports [Yubikey] U2F devices.
> - Support for U2F devices was added in GitLab 8.8.
**In GitLab:**
......@@ -145,7 +145,7 @@ codes. If you saved these codes, you can use one of them to sign in.
To use a recovery code, enter your username/email and password on the GitLab
sign-in page. When prompted for a two-factor code, enter the recovery code.
>**Note:**
> **Note:**
Once you use a recovery code, you cannot re-use it. You can still use the other
recovery codes you saved.
......@@ -187,7 +187,7 @@ a new set of recovery codes with SSH.
When prompted for a two-factor code, enter one of the recovery codes obtained
from the command-line output.
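The command-line step referenced above can be sketched as follows, where `gitlab.example.com` is a placeholder for your GitLab host:

```bash
# Regenerates a new set of 2FA recovery codes over SSH; assumes an SSH key
# is already registered with your account.
ssh git@gitlab.example.com 2fa_recovery_codes
```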
>**Note:**
> **Note:**
After signing in, visit your **Profile settings > Account** immediately to set
up two-factor authentication with a new device.
......
# Bulk editing issues and merge requests
>
**Notes:**
- A permission level of `Reporter` or higher is required in order to manage
issues.
- A permission level of `Developer` or higher is required in order to manage
merge requests.
> **Notes:**
> - A permission level of `Reporter` or higher is required in order to manage
> issues.
> - A permission level of `Developer` or higher is required in order to manage
> merge requests.
Attributes can be updated simultaneously across multiple issues or merge requests
by using the bulk editing feature.
......
......@@ -43,9 +43,9 @@ From the left side bar, hover over `Operations` and select `Kubernetes`, then cl
A few details from the EKS cluster will be required to connect it to GitLab.
1. A valid Kubernetes certificate and token are needed to authenticate to the EKS cluster. A pair is created by default, which can be used. Open a shell and use `kubectl` to retrieve them:
* List the secrets with `kubectl get secrets`, and one should named similar to `default-token-xxxxx`. Copy that token name for use below.
* Get the certificate with `kubectl get secret <secret name> -o jsonpath="{['data']['ca\.crt']}" | base64 -D`
* Retrieve the token with `kubectl get secret <secret name> -o jsonpath="{['data']['token']}" | base64 -D`.
* List the secrets with `kubectl get secrets`, and one should be named similar to `default-token-xxxxx`. Copy that token name for use below.
* Get the certificate with `kubectl get secret <secret name> -o jsonpath="{['data']['ca\.crt']}" | base64 -D`
* Retrieve the token with `kubectl get secret <secret name> -o jsonpath="{['data']['token']}" | base64 -D`.
1. The API server endpoint is also required, so GitLab can connect to the cluster. This is displayed on the AWS EKS console, when viewing the EKS cluster details.
You now have all the information needed to connect the EKS cluster:
......
......@@ -54,16 +54,16 @@ new Kubernetes cluster to your project:
1. Connect your Google account if you haven't done so already by clicking the
**Sign in with Google** button.
1. From there on, choose your cluster's settings:
- **Kubernetes cluster name** - The name you wish to give the cluster.
- **Environment scope** - The [associated environment](#setting-the-environment-scope) to this cluster.
- **Google Cloud Platform project** - Choose the project you created in your GCP
console that will host the Kubernetes cluster. Learn more about
[Google Cloud Platform projects](https://cloud.google.com/resource-manager/docs/creating-managing-projects).
- **Zone** - Choose the [region zone](https://cloud.google.com/compute/docs/regions-zones/)
under which the cluster will be created.
- **Number of nodes** - Enter the number of nodes you wish the cluster to have.
- **Machine type** - The [machine type](https://cloud.google.com/compute/docs/machine-types)
of the Virtual Machine instance that the cluster will be based on.
- **Kubernetes cluster name** - The name you wish to give the cluster.
- **Environment scope** - The [associated environment](#setting-the-environment-scope) to this cluster.
- **Google Cloud Platform project** - Choose the project you created in your GCP
console that will host the Kubernetes cluster. Learn more about
[Google Cloud Platform projects](https://cloud.google.com/resource-manager/docs/creating-managing-projects).
- **Zone** - Choose the [region zone](https://cloud.google.com/compute/docs/regions-zones/)
under which the cluster will be created.
- **Number of nodes** - Enter the number of nodes you wish the cluster to have.
- **Machine type** - The [machine type](https://cloud.google.com/compute/docs/machine-types)
of the Virtual Machine instance that the cluster will be based on.
1. Finally, click the **Create Kubernetes cluster** button.
After a couple of minutes, your cluster will be ready to go. You can now proceed
......
# GitLab Container Registry
>**Notes:**
> **Notes:**
> [Introduced][ce-4040] in GitLab 8.8.
- Docker Registry manifest `v1` support was added in GitLab 8.9 to support Docker
versions earlier than 1.10.
- This document is about the user guide. To learn how to enable GitLab Container
Registry across your GitLab instance, visit the
[administrator documentation](../../administration/container_registry.md).
- Starting from GitLab 8.12, if you have 2FA enabled in your account, you need
to pass a [personal access token][pat] instead of your password in order to
login to GitLab's Container Registry.
- Multiple level image names support was added in GitLab 9.1
> - Docker Registry manifest `v1` support was added in GitLab 8.9 to support Docker
> versions earlier than 1.10.
> - This document is about the user guide. To learn how to enable GitLab Container
> Registry across your GitLab instance, visit the
> [administrator documentation](../../administration/container_registry.md).
> - Starting from GitLab 8.12, if you have 2FA enabled in your account, you need
> to pass a [personal access token][pat] instead of your password in order to
> log in to GitLab's Container Registry.
> - Multiple level image names support was added in GitLab 9.1
With the Docker Container Registry integrated into GitLab, every project can
have its own space to store its Docker images.
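As an illustrative sketch only (the registry hostname and image path below are placeholders), building and pushing an image typically looks like the following; with 2FA enabled, the password must be a personal access token, as noted above:

```bash
# Placeholder registry host and image path -- adjust to your instance/project.
docker login registry.example.com
docker build -t registry.example.com/group/project .
docker push registry.example.com/group/project
```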
......@@ -40,12 +40,12 @@ to enable it.
## Build and push images
>**Notes:**
- Moving or renaming existing container registry repositories is not supported
once you have pushed images because the images are signed, and the
signature includes the repository name.
- To move or rename a repository with a container registry you will have to
delete all existing images.
> **Notes:**
> - Moving or renaming existing container registry repositories is not supported
> once you have pushed images because the images are signed, and the
> signature includes the repository name.
> - To move or rename a repository with a container registry you will have to
> delete all existing images.
If you visit the **Registry** link under your project's menu, you can see the
......
......@@ -72,7 +72,7 @@ Here's a little explanation of how this works behind the scenes:
`<issue, merge request>` pair, the merge request has the [issue closing pattern]
for the corresponding issue. All other issues and merge requests are **not**
considered.
1. Then the <issue, merge request> pairs are filtered out by last XX days (specified
1. Then the `<issue, merge request>` pairs are filtered out by last XX days (specified
by the UI - default is 90 days). So it prohibits these pairs from being considered.
1. For the remaining `<issue, merge request>` pairs, we check the information that
we need for the stages, like issue creation date, merge request merge time,
......
......@@ -57,6 +57,6 @@ service in GitLab.
If builds are not triggered, ensure you entered the right GitLab IP address in
Bamboo under 'Trigger IP addresses'.
>**Note:**
- Starting with GitLab 8.14.0, builds are triggered on push events.
> **Note:**
> - Starting with GitLab 8.14.0, builds are triggered on push events.
......@@ -92,15 +92,15 @@ password as they will be needed when configuring GitLab in the next section.
### Configuring GitLab
>**Notes:**
- The currently supported JIRA versions are `v6.x` and `v7.x.`. GitLab 7.8 or
higher is required.
- GitLab 8.14 introduced a new way to integrate with JIRA which greatly simplified
the configuration options you have to enter. If you are using an older version,
[follow this documentation][jira-repo-old-docs].
- In order to support Oracle's Access Manager, GitLab will send additional cookies
to enable Basic Auth. The cookie being added to each request is `OBBasicAuth` with
a value of `fromDialog`.
> **Notes:**
> - The currently supported JIRA versions are `v6.x` and `v7.x`. GitLab 7.8 or
> higher is required.
> - GitLab 8.14 introduced a new way to integrate with JIRA which greatly simplified
> the configuration options you have to enter. If you are using an older version,
> [follow this documentation][jira-repo-old-docs].
> - In order to support Oracle's Access Manager, GitLab will send additional cookies
> to enable Basic Auth. The cookie being added to each request is `OBBasicAuth` with
> a value of `fromDialog`.
To enable JIRA integration in a project, navigate to the
[Integrations page](project_services.md#accessing-the-project-services), click
......@@ -182,11 +182,11 @@ the same goal:
where `PROJECT-1` is the issue ID of the JIRA project.
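For example, assuming the default issue closing pattern is in use, a commit on the project's default branch with a message like the hypothetical one below would reference and close `PROJECT-1`:

```bash
# Hypothetical commit message; "Closes PROJECT-1" matches the default
# issue closing pattern.
git commit -m "Fix broken login redirect

Closes PROJECT-1"
```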
>**Note:**
- Only commits and merges into the project's default branch (usually **master**) will
close an issue in Jira. You can change your projects default branch under
[project settings](img/jira_project_settings.png).
- The JIRA issue will not be transitioned if it has a resolution.
> **Notes:**
> - Only commits and merges into the project's default branch (usually **master**) will
> close an issue in Jira. You can change your project's default branch under
> [project settings](img/jira_project_settings.png).
> - The JIRA issue will not be transitioned if it has a resolution.
### JIRA issue closing example
......
# Webhooks
>**Note:**
Starting from GitLab 8.5:
- the `repository` key is deprecated in favor of the `project` key
- the `project.ssh_url` key is deprecated in favor of the `project.git_ssh_url` key
- the `project.http_url` key is deprecated in favor of the `project.git_http_url` key
>**Note:**
Starting from GitLab 11.1, the logs of web hooks are automatically removed after
one month.
>**Note**
Starting from GitLab 11.2:
- The `description` field for issues, merge requests, comments, and wiki pages
is rewritten so that simple Markdown image references (like
`![](/uploads/...)`) have their target URL changed to an absolute URL. See
[image URL rewriting](#image-url-rewriting) for more details.
> **Note:**
> Starting from GitLab 8.5:
> - the `repository` key is deprecated in favor of the `project` key
> - the `project.ssh_url` key is deprecated in favor of the `project.git_ssh_url` key
> - the `project.http_url` key is deprecated in favor of the `project.git_http_url` key
>
> **Note:**
> Starting from GitLab 11.1, the logs of web hooks are automatically removed after
> one month.
>
> **Note:**
> Starting from GitLab 11.2:
> - The `description` field for issues, merge requests, comments, and wiki pages
> is rewritten so that simple Markdown image references (like
> `![](/uploads/...)`) have their target URL changed to an absolute URL. See
> [image URL rewriting](#image-url-rewriting) for more details.
Project webhooks allow you to trigger a URL if for example new code is pushed or
a new issue is created. You can configure webhooks to listen for specific events
......@@ -320,7 +320,7 @@ X-Gitlab-Event: Issue Hook
}
```
**Note**: `assignee` and `assignee_id` keys are deprecated and now show the first assignee only.
> **Note**: `assignee` and `assignee_id` keys are deprecated and now show the first assignee only.
### Comment events
......@@ -619,7 +619,7 @@ X-Gitlab-Event: Note Hook
}
```
**Note**: `assignee_id` field is deprecated and now shows the first assignee only.
> **Note**: `assignee_id` field is deprecated and now shows the first assignee only.
#### Comment on code snippet
......@@ -1174,7 +1174,7 @@ On this page, you can see data that GitLab sends (request headers and body) and
From this page, you can repeat delivery with the same data by clicking the `Resend Request` button.
>**Note:** If URL or secret token of the webhook were updated, data will be delivered to the new address.
> **Note:** If URL or secret token of the webhook were updated, data will be delivered to the new address.
### Receiving duplicate or multiple web hook requests triggered by one event
......
......@@ -353,23 +353,23 @@ To remove an assignee list, just as with a label list, click the trash icon.
When dragging issues between lists, different behavior occurs depending on the source list and the target list.
| | To Open | To Closed | To label `B` list | To assignee `Bob` list |
| --- | --- | --- | --- | --- |
| From Open | - | Issue closed | `B` added | `Bob` assigned |
| From Closed | Issue reopened | - | Issue reopened<br/>`B` added | Issue reopened<br/>`Bob` assigned |
| From label `A` list | `A` removed | Issue closed | `A` removed<br/>`B` added | `Bob` assigned |
| From assignee `Alice` list | `Alice` unassigned | Issue closed | `B` added | `Alice` unassigned<br/>`Bob` assigned |
| | To Open | To Closed | To label `B` list | To assignee `Bob` list |
|----------------------------|--------------------|--------------|------------------------------|---------------------------------------|
| From Open | - | Issue closed | `B` added | `Bob` assigned |
| From Closed | Issue reopened | - | Issue reopened<br/>`B` added | Issue reopened<br/>`Bob` assigned |
| From label `A` list | `A` removed | Issue closed | `A` removed<br/>`B` added | `Bob` assigned |
| From assignee `Alice` list | `Alice` unassigned | Issue closed | `B` added | `Alice` unassigned<br/>`Bob` assigned |
## Features per tier
Different issue board features are available in different [GitLab tiers](https://about.gitlab.com/pricing/), as shown in the following table:
| Tier | Number of Project Issue Boards | Number of Group Issue Boards | Configurable Issue Boards | Assignee Lists
| --- | --- | --- | --- | --- | --- |
| Core | 1 | 1 | No | No |
| Starter | Multiple | 1 | Yes | No |
| Premium | Multiple | Multiple | Yes | Yes |
| Ultimate | Multiple | Multiple | Yes | Yes |
| Tier | Number of Project Issue Boards | Number of Group Issue Boards | Configurable Issue Boards | Assignee Lists |
|----------|--------------------------------|------------------------------|---------------------------|----------------|
| Core | 1 | 1 | No | No |
| Starter | Multiple | 1 | Yes | No |
| Premium | Multiple | Multiple | Yes | Yes |
| Ultimate | Multiple | Multiple | Yes | Yes |
## Tips
......
# Koding integration
>**Notes:**
- **As of GitLab 10.0, the Koding integration is deprecated and will be removed
in a future version.**
- [Introduced][ce-5909] in GitLab 8.11.
> **Notes:**
> - **As of GitLab 10.0, the Koding integration is deprecated and will be removed
> in a future version.**
> - [Introduced][ce-5909] in GitLab 8.11.
This document will guide you through using Koding integration on GitLab in
detail. For configuring and installing please follow the
......
# Merge requests versions
>**Notes:**
- [Introduced][ce-5467] in GitLab 8.12.
- Comments are disabled while viewing outdated merge versions or comparing to
versions other than base.
- Merge request versions are based on push not on commit. So, if you pushed 5
commits in a single push, it will be a single option in the dropdown. If you
pushed 5 times, that will count for 5 options.
> **Notes:**
> - [Introduced][ce-5467] in GitLab 8.12.
> - Comments are disabled while viewing outdated merge versions or comparing to
> versions other than base.
> - Merge request versions are based on push, not on commit. So, if you pushed 5
> commits in a single push, it will be a single option in the dropdown. If you
> pushed 5 times, that will count for 5 options.
Every time you push to a branch that is tied to a merge request, a new version
of merge request diff is created. When you visit a merge request that contains
......
......@@ -205,16 +205,16 @@ With the update permission model we also extended the support for accessing
Container Registries for private projects.
> **Notes:**
- GitLab Runner versions prior to 1.8 don't incorporate the introduced changes
for permissions. This makes the `image:` directive to not work with private
projects automatically and it needs to be configured manually on Runner's host
with a predefined account (for example administrator's personal account with
access token created explicitly for this purpose). This issue is resolved with
latest changes in GitLab Runner 1.8 which receives GitLab credentials with
build data.
- Starting from GitLab 8.12, if you have [2FA] enabled in your account, you need
to pass a [personal access token][pat] instead of your password in order to
login to GitLab's Container Registry.
> - GitLab Runner versions prior to 1.8 don't incorporate the introduced changes
> for permissions. This prevents the `image:` directive from working with private
> projects automatically; it needs to be configured manually on the Runner's host
> with a predefined account (for example, an administrator's personal account with
> an access token created explicitly for this purpose). This issue is resolved with
> the latest changes in GitLab Runner 1.8, which receives GitLab credentials with
> build data.
> - Starting from GitLab 8.12, if you have [2FA] enabled in your account, you need
> to pass a [personal access token][pat] instead of your password in order to
> log in to GitLab's Container Registry.
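As a sketch of how a job script might use those credentials (the predefined variable names below are assumptions and depend on your GitLab and Runner versions):

```bash
# Assumed predefined CI variables: CI_REGISTRY, CI_REGISTRY_USER,
# CI_JOB_TOKEN and CI_REGISTRY_IMAGE.
docker login -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
docker pull "$CI_REGISTRY_IMAGE:latest"
```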
Your jobs can access all container images that you would normally have access
to. The only implication is that you can push to the Container Registry of the
......
......@@ -12,16 +12,16 @@ With GitLab Pages it's easy to publish your project website. GitLab Pages is a h
to get you started quickly, or,
alternatively, start from an existing project as follows:
- 1. [Fork](../../../gitlab-basics/fork-project.md#how-to-fork-a-project) an [example project](https://gitlab.com/pages):
by forking a project, you create a copy of the codebase you're forking from to start from a template instead of starting from scratch.
- 2. Change a file to trigger a GitLab CI/CD pipeline: GitLab CI/CD will build and deploy your site to GitLab Pages.
- 3. Visit your project's **Settings > Pages** to see your **website link**, and click on it. Bam! Your website is live! :)
1. [Fork](../../../gitlab-basics/fork-project.md#how-to-fork-a-project) an [example project](https://gitlab.com/pages):
by forking a project, you create a copy of the codebase you're forking from to start from a template instead of starting from scratch.
2. Change a file to trigger a GitLab CI/CD pipeline: GitLab CI/CD will build and deploy your site to GitLab Pages.
3. Visit your project's **Settings > Pages** to see your **website link**, and click on it. Bam! Your website is live! :)
_Further steps (optional):_
_Further steps (optional):_
- 4. Remove the [fork relationship](getting_started_part_two.md#fork-a-project-to-get-started-from)
(_You don't need the relationship unless you intent to contribute back to the example project you forked from_).
- 5. Make it a [user/group website](getting_started_part_one.md#user-and-group-websites)
4. Remove the [fork relationship](getting_started_part_two.md#fork-a-project-to-get-started-from)
(_You don't need the relationship unless you intend to contribute back to the example project you forked from_).
5. Make it a [user/group website](getting_started_part_one.md#user-and-group-websites)
**Watch a video with the steps above: https://www.youtube.com/watch?v=TWqh9MtT4Bg**
......
# Introduction to job artifacts
>**Notes:**
>- Since GitLab 8.2 and GitLab Runner 0.7.0, job artifacts that are created by
GitLab Runner are uploaded to GitLab and are downloadable as a single archive
(`tar.gz`) using the GitLab UI.
>- Starting with GitLab 8.4 and GitLab Runner 1.0, the artifacts archive format
changed to `ZIP`, and it is now possible to browse its contents, with the added
ability of downloading the files separately.
>- Starting with GitLab 8.17, builds are renamed to jobs.
>- The artifacts browser will be available only for new artifacts that are sent
to GitLab using GitLab Runner version 1.0 and up. It will not be possible to
browse old artifacts already uploaded to GitLab.
> **Notes:**
> - Since GitLab 8.2 and GitLab Runner 0.7.0, job artifacts that are created by
> GitLab Runner are uploaded to GitLab and are downloadable as a single archive
> (`tar.gz`) using the GitLab UI.
> - Starting with GitLab 8.4 and GitLab Runner 1.0, the artifacts archive format
> changed to `ZIP`, and it is now possible to browse its contents, with the added
> ability of downloading the files separately.
> - Starting with GitLab 8.17, builds are renamed to jobs.
> - The artifacts browser will be available only for new artifacts that are sent
> to GitLab using GitLab Runner version 1.0 and up. It will not be possible to
> browse old artifacts already uploaded to GitLab.
>- This is the user documentation. For the administration guide see
[administration/job_artifacts](../../../administration/job_artifacts.md).
> [administration/job_artifacts](../../../administration/job_artifacts.md).
Artifacts is a list of files and directories which are attached to a job
after it completes successfully. This feature is enabled by default in all
......@@ -46,14 +46,14 @@ For more examples on artifacts, follow the [artifacts reference in
## Browsing artifacts
>**Note:**
With GitLab 9.2, PDFs, images, videos and other formats can be previewed
directly in the job artifacts browser without the need to download them.
>**Note:**
With [GitLab 10.1][ce-14399], HTML files in a public project can be previewed
directly in a new tab without the need to download them when
[GitLab Pages](../../../administration/pages/index.md) is enabled
> **Note:**
> With GitLab 9.2, PDFs, images, videos and other formats can be previewed
> directly in the job artifacts browser without the need to download them.
>
> **Note:**
> With [GitLab 10.1][ce-14399], HTML files in a public project can be previewed
> directly in a new tab without the need to download them when
> [GitLab Pages](../../../administration/pages/index.md) is enabled.
After a job finishes, if you visit the job's specific page, there are three
buttons. You can download the artifacts archive or browse its contents, whereas
......
# Pipeline Schedules
> **Notes**:
- This feature was introduced in 9.1 as [Trigger Schedule][ce-10533].
- In 9.2, the feature was [renamed to Pipeline Schedule][ce-10853].
- Cron notation is parsed by [Rufus-Scheduler](https://github.com/jmettraux/rufus-scheduler).
> - This feature was introduced in 9.1 as [Trigger Schedule][ce-10533].
> - In 9.2, the feature was [renamed to Pipeline Schedule][ce-10853].
> - Cron notation is parsed by [Rufus-Scheduler](https://github.com/jmettraux/rufus-scheduler).
Pipeline schedules can be used to run a pipeline at specific intervals, for example every
month on the 22nd for a certain branch.
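For instance, the "every month on the 22nd" example could be expressed with a cron entry like the one below (the time of day is an arbitrary assumption):

```
# minute hour day-of-month month day-of-week
0 4 22 * *
```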
......@@ -19,7 +19,7 @@ In order to schedule a pipeline:
![New Schedule Form](img/pipeline_schedules_new_form.png)
>**Attention:**
> **Attention:**
The pipelines won't run at exactly the scheduled time, because schedules are
handled by Sidekiq, which runs according to its own interval.
See [advanced admin configuration](#advanced-admin-configuration) for more
......@@ -83,7 +83,7 @@ The next time a pipeline is scheduled, your credentials will be used.
![Schedules list](img/pipeline_schedules_ownership.png)
>**Note:**
> **Note:**
If the owner of the schedule no longer has the ability to create pipelines, for
example because they were blocked or removed from the project, or they lack
the permission to run on protected branches or tags, the
......
......@@ -76,7 +76,7 @@ You can specify a wildcard protected branch, which will protect all branches
matching the wildcard. For example:
| Wildcard Protected Branch | Matching Branches |
|---------------------------+--------------------------------------------------------|
|---------------------------|--------------------------------------------------------|
| `*-stable` | `production-stable`, `staging-stable` |
| `production/*` | `production/app-server`, `production/load-balancer` |
| `*gitlab*` | `gitlab`, `gitlab/staging`, `master/gitlab/production` |
......
......@@ -37,7 +37,7 @@ You can specify a wildcard protected tag, which will protect all tags
matching the wildcard. For example:
| Wildcard Protected Tag | Matching Tags |
|------------------------+-------------------------------|
|------------------------|-------------------------------|
| `v*` | `v1.0.0`, `version-9.1` |
| `*-deploy` | `march-deploy`, `1.0-deploy` |
| `*gitlab*` | `gitlab`, `gitlab/v1` |
......
......@@ -36,15 +36,16 @@ to be met:
## Generating a GPG key
>**Notes:**
- If your Operating System has `gpg2` installed, replace `gpg` with `gpg2` in
the following commands.
- If Git is using `gpg` and you get errors like `secret key not available` or
`gpg: signing failed: secret key not available`, run the following command to
change to `gpg2`:
```
git config --global gpg.program gpg2
```
> **Notes:**
> - If your Operating System has `gpg2` installed, replace `gpg` with `gpg2` in
> the following commands.
> - If Git is using `gpg` and you get errors like `secret key not available` or
> `gpg: signing failed: secret key not available`, run the following command to
> change to `gpg2`:
>
> ```
> git config --global gpg.program gpg2
> ```
If you don't already have a GPG key, the following steps will help you get
started:
......
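As a rough sketch only, assuming `gpg2` as in the note above and using a placeholder key ID, generating a key and wiring it into Git usually looks like:

```bash
# Generate a new key pair interactively.
gpg2 --full-generate-key

# Find the long ID of the key you just created...
gpg2 --list-secret-keys --keyid-format LONG

# ...then tell Git to sign commits with it (the ID below is a placeholder).
git config --global user.signingkey 30F2B65B9246B6CA
git config --global commit.gpgsign true
```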
......@@ -33,11 +33,10 @@ following method.
## Using `git filter-branch` to purge files
>
**Warning:**
Make sure to first make a copy of your repository since rewriting history will
purge the files and information you are about to delete. Also make sure to
inform any collaborators to not use `pull` after your changes, but use `rebase`.
> **Warning:**
> Make sure to first make a copy of your repository since rewriting history will
> purge the files and information you are about to delete. Also make sure to
> inform any collaborators to not use `pull` after your changes, but use `rebase`.
1. Navigate to your repository:
......@@ -71,10 +70,10 @@ inform any collaborators to not use `pull` after your changes, but use `rebase`.
Your repository should now be below the size limit.
>**Note:**
As an alternative to `filter-branch`, you can use the `bfg` tool with a
command like: `bfg --delete-files path/to/big_file.mpg`. Read the
[BFG Repo-Cleaner][bfg] documentation for more information.
> **Note:**
> As an alternative to `filter-branch`, you can use the `bfg` tool with a
> command like: `bfg --delete-files path/to/big_file.mpg`. Read the
> [BFG Repo-Cleaner][bfg] documentation for more information.
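A hedged sketch of the `filter-branch` approach, using a placeholder file path; the warning above about first making a copy of the repository still applies:

```bash
# Rewrite all branches and tags, dropping the large file from every commit.
git filter-branch --force --index-filter \
  'git rm --cached --ignore-unmatch path/to/big_file.mpg' \
  --prune-empty --tag-name-filter cat -- --all

# Drop the backup refs and garbage-collect before force-pushing the result.
rm -rf .git/refs/original/
git reflog expire --expire=now --all
git gc --prune=now --aggressive
```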
[admin-repo-size]: https://docs.gitlab.com/ee/user/admin_area/settings/account_and_limit_settings.html#repository-size-limit
[bfg]: https://rtyley.github.io/bfg-repo-cleaner/
......
......@@ -13,7 +13,7 @@ For a list of words that are not allowed to be used as group or project names, s
It is currently not possible to create a project with the following names:
- -
- \-
- badges
- blame
- blob
......@@ -40,7 +40,7 @@ It is currently not possible to create a project with the following names:
Currently the following names are reserved as top level groups:
- 503.html
- -
- \-
- .well-known
- 404.html
- 422.html
......@@ -88,7 +88,7 @@ Currently the following names are reserved as top level groups:
These group names are unavailable as subgroup names:
- -
- \-
- activity
- analytics
- audit_events
......
......@@ -61,11 +61,12 @@ git commit -am "Added Debian iso" # commit the file meta data
git push origin master # sync the git repo and large file to the GitLab server
```
>**Note**: Make sure that `.gitattributes` is tracked by git. Otherwise Git
LFS will not be working properly for people cloning the project.
```bash
git add .gitattributes
```
> **Note**: Make sure that `.gitattributes` is tracked by Git. Otherwise Git
> LFS will not work properly for people cloning the project.
>
> ```bash
> git add .gitattributes
> ```
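A quick way to verify both points, assuming the `git-lfs` client is installed:

```bash
# List the patterns Git LFS tracks, as recorded in .gitattributes.
git lfs track

# Exits with an error if .gitattributes itself is not under version control.
git ls-files --error-unmatch .gitattributes
```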
Cloning the repository works the same as before. Git automatically detects the
LFS-tracked files and clones them via HTTP. If you performed the git clone
......
......@@ -77,10 +77,8 @@ In most of the below cases, the notification will be sent to:
- the author and assignee of the issue/merge request
- authors of comments on the issue/merge request
- anyone mentioned by `@username` in the issue/merge request title or description
- anyone mentioned by `@username` in any of the comments on the issue/merge request
...with notification level "Participating" or higher
- anyone mentioned by `@username` in any of the comments on the issue/merge request
...with notification level "Participating" or higher
- Watchers: users with notification level "Watch"
- Subscribers: anyone who manually subscribed to the issue/merge request
- Custom: Users with notification level "custom" who turned on notifications for any of the events present in the table below
......