Commit ac39fa2b authored by Achilleas Pipinellis

Merge branch 'docs/add-mdl-and-rule-032' into 'master'

Add Markdown linting and one rule

See merge request gitlab-org/gitlab-ce!29970
parents cf291a11 e1282393
@@ -66,6 +66,10 @@ docs lint:
- scripts/lint-changelog-yaml
- mv doc/ /tmp/gitlab-docs/content/$DOCS_GITLAB_REPO_SUFFIX
- cd /tmp/gitlab-docs
# Lint Markdown
# https://github.com/markdownlint/markdownlint/blob/master/docs/RULES.md
- bundle exec mdl content/$DOCS_GITLAB_REPO_SUFFIX/**/*.md --rules \
MD032
# Build HTML from Markdown
- bundle exec nanoc
# Check the internal links
...
@@ -144,20 +144,20 @@ for more details:
If you're having trouble, here are some tips:
1. Ensure `discovery` is set to `true`. Setting it to `false` requires
specifying all the URLs and keys required to make OpenID work.
1. Check your system clock to ensure the time is synchronized properly.
1. As mentioned in [the
documentation](https://github.com/m0n9oose/omniauth_openid_connect),
make sure `issuer` corresponds to the base URL of the Discovery URL. For
example, `https://accounts.google.com` is used for the URL
`https://accounts.google.com/.well-known/openid-configuration`.
1. The OpenID Connect client uses HTTP Basic Authentication to send the
OAuth2 access token. For example, if you are seeing 401 errors upon
retrieving the `userinfo` endpoint, you may want to check your OpenID
Web server configuration. For example, for
[oauth2-server-php](https://github.com/bshaffer/oauth2-server-php), you
may need to [add a configuration parameter to
Apache](https://github.com/bshaffer/oauth2-server-php/issues/926#issuecomment-387502778).
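For reference, a minimal `/etc/gitlab/gitlab.rb` provider entry tying these tips together could look like the sketch below. The issuer, identifier, and secret values are placeholders; adjust them to your identity provider.

```ruby
# Illustrative sketch only; adjust to your identity provider.
gitlab_rails['omniauth_providers'] = [
  {
    'name' => 'openid_connect',
    'label' => 'OpenID Connect',
    'args' => {
      'name' => 'openid_connect',
      'scope' => ['openid', 'profile'],
      'response_type' => 'code',
      # `issuer` must be the base URL of the Discovery URL (see the tip above).
      'issuer' => 'https://accounts.google.com',
      # Keep discovery enabled unless you are prepared to specify every URL and key by hand.
      'discovery' => true,
      'client_options' => {
        'identifier' => '<YOUR_CLIENT_ID>',
        'secret' => '<YOUR_CLIENT_SECRET>',
        'redirect_uri' => 'https://gitlab.example.com/users/auth/openid_connect/callback'
      }
    }
  }
]
```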
@@ -6,8 +6,8 @@ The requirements are listed [on the index page](index.md#requirements-for-runnin
## How does Geo know which projects to sync?
On each **secondary** node, there is a read-only replicated copy of the GitLab database.
A **secondary** node also has a tracking database where it stores which projects have been synced.
Geo compares the two databases to find projects that are not yet tracked.
At the start, this tracking database is empty, so Geo will start trying to update from every project that it can see in the GitLab database.
@@ -15,19 +15,19 @@ At the start, this tracking database is empty, so Geo will start trying to updat
For each project to sync:
1. Geo will issue a `git fetch geo --mirror` to get the latest information from the **primary** node.
If there are no changes, the sync will be fast and end quickly. Otherwise, it will pull the latest commits.
1. The **secondary** node will update the tracking database to store the fact that it has synced projects A, B, C, etc.
1. Repeat until all projects are synced.
When someone pushes a commit to the **primary** node, it generates an event in the GitLab database that the repository has changed.
The **secondary** node sees this event, marks the project in question as dirty, and schedules the project to be resynced.
To ensure that problems with pipelines (for example, syncs failing too many times or jobs being lost) don't permanently stop projects syncing, Geo also periodically checks the tracking database for projects that are marked as dirty. This check happens when
the number of concurrent syncs falls below `repos_max_capacity` and there are no new projects waiting to be synced.
Geo also has a checksum feature which computes a SHA256 sum across all the Git references and the SHA values they point to.
If the refs don't match between the **primary** node and the **secondary** node, then the **secondary** node will mark that project as dirty and try to resync it.
So even if we have an outdated tracking database, the validation should activate and find discrepancies in the repository state and resync.
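As a rough illustration of that checksum idea (this is not the code Geo itself runs), a comparable fingerprint of a repository's refs can be produced like this:

```ruby
# Illustrative only: fingerprint every ref name and the SHA it points to.
require 'digest'

# The repository path is a placeholder.
refs = `git --git-dir=/path/to/repository.git for-each-ref --format='%(refname) %(objectname)'`
puts Digest::SHA256.hexdigest(refs.lines.sort.join)
# Running the same command against the primary and secondary copies and
# comparing the digests shows whether the project needs a resync.
```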
## Can I use Geo in a disaster recovery situation?
...
@@ -331,7 +331,7 @@ There are a few key points to remember:
1. The FDW settings are configured on the Geo **tracking** database.
1. The configured foreign server enables a login to the Geo
**secondary**, read-only database.
By default, the Geo secondary and tracking databases are running on the
same host on different ports. That is, 5432 and 5431 respectively.
@@ -350,7 +350,7 @@ To check the configuration:
```
1. Check whether any tables are present. If everything is working, you
should see something like this:
```sql
gitlabhq_geo_production=# SELECT * from information_schema.foreign_tables;
...
@@ -83,7 +83,7 @@ deploy the bundled PostgreSQL.
plain text password. These will be necessary when configuring the GitLab
application servers later.
1. [Enable monitoring](#enable-monitoring)
Advanced configuration options are supported and can be added if
needed.
@@ -204,9 +204,9 @@ Few notes on the service itself:
- The service runs under a system account, by default `gitlab-consul`.
- If you are using a different username, you will have to specify it. We
will refer to it with `CONSUL_USERNAME`.
- There will be a database user created with read-only access to the repmgr
database.
- Passwords will be stored in the following locations:
- `/etc/gitlab/gitlab.rb`: hashed
- `/var/opt/gitlab/pgbouncer/pg_auth`: hashed
...
@@ -285,7 +285,7 @@ Example response:
### Scope: wiki_blobs **[STARTER]**
This scope is available only if [Elasticsearch](../integration/elasticsearch.md) is enabled.
```bash
curl --request GET --header "PRIVATE-TOKEN: <your_access_token>" https://gitlab.example.com/api/v4/search?scope=wiki_blobs&search=bye
@@ -346,6 +346,7 @@ Example response:
This scope is available only if [Elasticsearch](../integration/elasticsearch.md) is enabled.
Filters are available for this scope:
- filename
- path
- extension
@@ -679,6 +680,7 @@ Example response:
This scope is available only if [Elasticsearch](../integration/elasticsearch.md) is enabled.
Filters are available for this scope:
- filename
- path
- extension
...
@@ -489,6 +489,7 @@ it's provided as an environment variable. This is because GitLab Runner uses **
runtime.
### Using statically-defined credentials
As an example, let's assume that you want to use the `registry.example.com:5000/private/image:latest`
image, which is private and requires you to log in to a private container registry.
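As a sketch of what that login amounts to (the username, password, and registry below are placeholders), the `DOCKER_AUTH_CONFIG` variable holds a JSON document keyed by the registry host and port, with a Base64-encoded `user:password` pair:

```ruby
# Illustrative helper to build a DOCKER_AUTH_CONFIG value for a private registry.
require 'base64'
require 'json'

auth = Base64.strict_encode64('my_username:my_password')
puts JSON.pretty_generate('auths' => { 'registry.example.com:5000' => { 'auth' => auth } })
# Paste the printed JSON into a DOCKER_AUTH_CONFIG variable; the registry key
# must match the image's registry, including the port (see below).
```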
@@ -566,7 +567,6 @@ for the Runner to match the `DOCKER_AUTH_CONFIG`. For example, if
then the `DOCKER_AUTH_CONFIG` must also specify `registry.example.com:5000`.
Specifying only `registry.example.com` will not work.
### Using Credentials Store
> Support for using Credentials Store was added in GitLab Runner 9.5.
@@ -574,7 +574,7 @@ Specifying only `registry.example.com` will not work.
To configure credentials store, follow these steps:
1. To use a credentials store, you need an external helper program to interact with a specific keychain or external store.
Make sure the helper program is available in the GitLab Runner `$PATH`.
1. Make GitLab Runner use it. There are two ways to accomplish this. Either:
- Create a
...
@@ -47,10 +47,10 @@ deploy:
In the above configuration:
- The `before_script` installs [SBT](http://www.scala-sbt.org/) and
displays the version that is being used.
- The `test` stage executes SBT to compile and test the project.
- [sbt-scoverage](https://github.com/scoverage/sbt-scoverage) is used as an SBT
plugin to measure test coverage.
- The `deploy` stage automatically deploys the project to Heroku using dpl.
You can use other versions of Scala and SBT by defining them in
...
@@ -339,7 +339,7 @@ Group-level variables can be added by:
1. Navigating to your group's **Settings > CI/CD** page.
1. Inputting variable types, keys, and values in the **Variables** section.
Any variables of [subgroups](../../user/group/subgroups/index.md) will be inherited recursively.
Once you set them, they will be available for all subsequent pipelines.
...
@@ -198,9 +198,9 @@ abilities as in the Rails app.
If the:
- Currently authenticated user fails the authorization, the authorized
resource will be returned as `null`.
- Resource is part of a collection, the collection will be filtered to
exclude the objects that the user's authorization checks failed against (see the sketch below).
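A minimal sketch of declaring such a check on a type (the type and ability names here are only illustrative):

```ruby
# Illustrative: a GraphQL type that requires an ability on the resolved object.
module Types
  class IssueType < BaseObject
    graphql_name 'Issue'

    # If the current user lacks :read_issue on the object, the field resolves
    # to null, or the object is dropped from a collection.
    authorize :read_issue
  end
end
```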
TIP: **Tip:**
Try to load only what the currently authenticated user is allowed to
@@ -496,4 +496,4 @@ it 'returns a successful response' do
expect(response).to have_gitlab_http_status(:success)
expect(graphql_mutation_response(:merge_request_set_wip)['errors']).to be_empty
end
```
@@ -126,16 +126,16 @@ When writing commit messages, please follow the guidelines below:
- The commit subject must contain at least 3 words.
- The commit subject should ideally contain up to 50 characters,
and must not be longer than 72 characters.
- The commit subject must start with a capital letter.
- The commit subject must not end with a period.
- The commit subject and body must be separated by a blank line.
- The commit body must not contain more than 72 characters per line.
- Commits that change 30 or more lines across at least 3 files must
describe these changes in the commit body.
- The commit subject or body must not contain Emojis.
- Use issues and merge requests' full URLs instead of short references,
as they are displayed as plain text outside of GitLab.
- The merge request must not contain more than 10 commit messages.
If the guidelines are not met, the MR will not pass the
...
@@ -76,10 +76,10 @@ After a given documentation path is aligned across CE and EE, all merge requests
affecting that path must be submitted to CE, regardless of the content it has.
This means that:
- For **EE-only docs changes**, you only have to submit a CE MR.
- For **EE-only features** that touch both the code and the docs, you have to submit
an EE MR containing all changes, and a CE MR containing only the docs changes
and without a changelog entry.
This might seem like a duplicate effort, but it's only for the short term.
A list of the already aligned docs can be found in
...
@@ -165,8 +165,8 @@ The table below shows what kind of documentation goes where.
`doc/topics/topic-name/subtopic-name/index.md` when subtopics become necessary.
General user- and admin-related documentation should be placed accordingly.
1. The directories `/workflow/`, `/university/`, and `/articles/` have
been **deprecated** and the majority of their docs have been moved to their correct location
in small iterations.
If you are unsure where a document or a content addition should live, this should
not stop you from authoring and contributing. You can use your best judgment and
...
@@ -909,11 +909,12 @@ import bundle from 'ee_else_ce/protected_branches/protected_branches_bundle.js';
See the frontend guide [performance section](fe_guide/performance.md) for
information on managing page-specific JavaScript within EE.
## Vue code in `assets/javascript`
### script tag
#### Child Component only used in EE
To separate Vue template differences we should [async import the components](https://vuejs.org/v2/guide/components-dynamic-async.html#Async-Components).
Doing this allows us to load the correct component in EE whilst in CE
@@ -937,10 +938,12 @@ export default {
```
#### For JS code that is EE only, like props, computed properties, methods, etc, we will keep the current approach
- Since we [can't async load a mixin](https://github.com/vuejs/vue-loader/issues/418#issuecomment-254032223) we will use the [`ee_else_ce`](../development/ee_features.md#javascript-code-in-assetsjavascripts) alias we already have for webpack.
- This means all the props, computed properties, methods, etc. that are EE-only should be in a mixin in the `ee/` folder, and we need to create a CE counterpart of the mixin
##### Example:
```javascript
import mixin from 'ee_else_ce/path/mixin';
@@ -955,6 +958,7 @@ import mixin from 'ee_else_ce/path/mixin';
- You can see an MR with an example [here](https://gitlab.com/gitlab-org/gitlab-ee/merge_requests/9762)
#### `template` tag
* **EE Child components**
- Since we are using the async loading to check which component to load, we'd still use the component's name, check [this example](#child-component-only-used-in-ee).
@@ -962,11 +966,12 @@ import mixin from 'ee_else_ce/path/mixin';
- For the templates that have extra HTML in EE we should move it into a new component and use the `ee_else_ce` dynamic import
### Non Vue Files
For regular JS files, the approach is similar.
1. We will keep using the [`ee_else_ce`](../development/ee_features.md#javascript-code-in-assetsjavascripts) helper; this means that EE-only code should be inside the `ee/` folder.
1. An EE file should be created with the EE-only code, and it should extend the CE counterpart.
1. For code inside functions that can't be extended, the code should be moved into a new file and we should use the `ee_else_ce` helper:
##### Example:
@@ -996,6 +1001,7 @@ to isolate such ruleset from rest of CE rules (along with adding comment describ
to avoid conflicts during CE to EE merge.
#### Bad
```scss
.section-body {
.section-title {
@@ -1011,6 +1017,7 @@ to avoid conflicts during CE to EE merge.
```
#### Good
```scss
.section-body {
.section-title {
...
@@ -64,20 +64,25 @@ All indexing after the initial one is done via `ElasticIndexerWorker` (sidekiq j
Search queries are generated by the concerns found in [ee/app/models/concerns/elastic](https://gitlab.com/gitlab-org/gitlab-ee/tree/master/ee/app/models/concerns/elastic). These concerns are also in charge of access control, and have been a historic source of security bugs so please pay close attention to them!
## Existing Analyzers/Tokenizers/Filters
These are all defined in https://gitlab.com/gitlab-org/gitlab-ee/blob/master/ee/lib/elasticsearch/git/model.rb
### Analyzers
#### `path_analyzer`
Used when indexing blobs' paths. Uses the `path_tokenizer` and the `lowercase` and `asciifolding` filters.
Please see the `path_tokenizer` explanation below for an example.
#### `sha_analyzer`
Used in blobs and commits. Uses the `sha_tokenizer` and the `lowercase` and `asciifolding` filters.
Please see the `sha_tokenizer` explanation later below for an example.
#### `code_analyzer`
Used when indexing a blob's filename and content. Uses the `whitespace` tokenizer and the filters: `code`, `edgeNGram_filter`, `lowercase`, and `asciifolding`
The `whitespace` tokenizer was selected in order to have more control over how tokens are split. For example the string `Foo::bar(4)` needs to generate tokens like `Foo` and `bar(4)` in order to be properly searched.
@@ -85,15 +90,19 @@ The `whitespace` tokenizer was selected in order to have more control over how t
Please see the `code` filter for an explanation on how tokens are split.
#### `code_search_analyzer`
Not directly used for indexing, but rather used to transform a search input. Uses the `whitespace` tokenizer and the `lowercase` and `asciifolding` filters.
### Tokenizers
#### `sha_tokenizer`
This is a custom tokenizer that uses the [`edgeNGram` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-edgengram-tokenizer.html) to allow SHAs to be searchable by any subset of them (minimum of 5 chars).
Example:
`240c29dc7e` becomes:
- `240c2`
- `240c29`
- `240c29d`
@@ -102,21 +111,26 @@ example:
- `240c29dc7e`
#### `path_tokenizer`
This is a custom tokenizer that uses the [`path_hierarchy` tokenizer](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-pathhierarchy-tokenizer.html) with `reverse: true` in order to allow searches to find paths no matter how much or how little of the path is given as input.
Example:
`'/some/path/application.js'` becomes:
- `'/some/path/application.js'`
- `'some/path/application.js'`
- `'path/application.js'`
- `'application.js'`
### Filters
#### `code`
Uses a [Pattern Capture token filter](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-pattern-capture-tokenfilter.html) to split tokens into more easily searched versions of themselves.
Patterns:
- `"(\\p{Ll}+|\\p{Lu}\\p{Ll}+|\\p{Lu}+)"`: captures CamelCased and lowerCamelCased strings as separate tokens
- `"(\\d+)"`: extracts digits
- `"(?=([\\p{Lu}]+[\\p{L}]+))"`: captures CamelCased strings recursively. Ex: `ThisIsATest` => `[ThisIsATest, IsATest, ATest, Test]` (see the sketch after this list)
@@ -126,6 +140,7 @@ Patterns:
- `'\/?([^\/]+)(?=\/|\b)'`: separate path terms `like/this/one`
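The recursive CamelCase pattern above can be verified directly in Ruby with a small, standalone snippet:

```ruby
# Illustrative: the lookahead pattern captures nested CamelCase substrings.
pattern = /(?=([\p{Lu}]+[\p{L}]+))/
p 'ThisIsATest'.scan(pattern).flatten
# => ["ThisIsATest", "IsATest", "ATest", "Test"]
```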
#### `edgeNGram_filter`
Uses an [Edge NGram token filter](https://www.elastic.co/guide/en/elasticsearch/reference/5.5/analysis-edgengram-tokenfilter.html) to allow inputs with only parts of a token to find the token. For example, it would turn `glasses` into permutations starting with `gl` and ending with `glasses`, which would allow a search for "`glass`" to find the original token `glasses`.
## Gotchas
@@ -140,13 +155,13 @@ Uses an [Edge NGram token filter](https://www.elastic.co/guide/en/elasticsearch/
You might get an error such as
```
[2018-10-31T15:54:19,762][WARN ][o.e.c.r.a.DiskThresholdMonitor] [pval5Ct]
flood stage disk watermark [95%] exceeded on
[pval5Ct7SieH90t5MykM5w][pval5Ct][/usr/local/var/lib/elasticsearch/nodes/0] free: 56.2gb[3%],
all indices on this node will be marked read-only
```
This is because you've exceeded the disk space threshold - it thinks you don't have enough disk space left, based on the default 95% threshold.
In addition, the `read_only_allow_delete` setting will be set to `true`. It will block indexing, `forcemerge`, etc
@@ -158,16 +173,16 @@ Add this to your `elasticsearch.yml` file:
```
# turn off the disk allocator
cluster.routing.allocation.disk.threshold_enabled: false
```
_or_
```
# set your own limits
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 5gb # ES 6.x only
cluster.routing.allocation.disk.watermark.low: 15gb
cluster.routing.allocation.disk.watermark.high: 10gb
```
...
@@ -14,10 +14,10 @@ Geo handles replication for different components:
- [Database](#database-replication): includes the entire application, except cache and jobs.
- [Git repositories](#repository-replication): includes both projects and wikis.
- [Uploaded blobs](#uploads-replication): includes anything from images attached on issues
to raw logs and assets from CI.
With the exception of the Database replication, on a *secondary* node, everything is coordinated
by the [Geo Log Cursor](#geo-log-cursor).
### Geo Log Cursor daemon
@@ -31,8 +31,8 @@ picks the event up and schedules a `Geo::ProjectSyncWorker` job which will
use the `Geo::RepositorySyncService` and `Geo::WikiSyncService` classes
to update the repository and the wiki respectively.
The Geo Log Cursor daemon can operate in High Availability mode automatically.
The daemon will try to acquire a lock from time to time and once acquired, it
will behave as the *active* daemon.
Any additional running daemons on the same node will be in standby
@@ -164,20 +164,20 @@ The Git Push Proxy exists as a functionality built inside the `gitlab-shell` com
It is active on a **secondary** node only. It allows the user that has cloned a repository
from the secondary node to push to the same URL.
Git `push` requests directed to a **secondary** node will be sent over to the **primary** node,
while `pull` requests will continue to be served by the **secondary** node for maximum efficiency.
HTTPS and SSH requests are handled differently:
- With HTTPS, we will give the user an `HTTP 302 Redirect` pointing to the project on the **primary** node.
The git client is wise enough to understand that status code and process the redirection.
- With SSH, because there is no equivalent way to perform a redirect, we have to proxy the request.
This is done inside [`gitlab-shell`](https://gitlab.com/gitlab-org/gitlab-shell), by first translating the request
to the HTTP protocol, and then proxying it to the **primary** node.
The [`gitlab-shell`](https://gitlab.com/gitlab-org/gitlab-shell) daemon knows when to proxy based on the response
from `/api/v4/allowed`. A special `HTTP 300` status code is returned and we execute a "custom action",
specified in the response body. The response contains additional data that allows the proxied `push` operation
to happen on the **primary** node.
## Using the Tracking Database
@@ -229,17 +229,17 @@ named `gitlab_secondary`. This configuration exists within the database's user
context only. To access the `gitlab_secondary`, GitLab needs to use the
same database user that had previously been configured.
The Geo Tracking Database accesses the read-only database replica via FDW as a regular user,
limited by its own restrictions. The credentials are configured as a
`USER MAPPING` associated with the `SERVER` mapped previously
(`gitlab_secondary`).
FDW configuration and credentials definition are managed automatically by the
Omnibus GitLab `gitlab-ctl reconfigure` command.
#### Refreshing the Foreign Tables
Whenever a new Geo node is configured or the database schema changes on the
**primary** node, you must refresh the foreign tables on the **secondary** node
by running the following:
@@ -279,11 +279,11 @@ on the Tracking Database:
SELECT project_registry.*
FROM project_registry
JOIN gitlab_secondary.projects
ON (project_registry.project_id = gitlab_secondary.projects.id
AND gitlab_secondary.projects.archived IS FALSE)
```
At the ActiveRecord level, we have additional Models that represent the
foreign tables. They must be mapped in a slightly different way, and they are read-only.
Check the existing FDW models in `ee/app/models/geo/fdw` for reference.
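A bare-bones sketch of such a read-only model follows; the class, base class, and table names are illustrative, and the real implementations live under `ee/app/models/geo/fdw`:

```ruby
# Illustrative: an ActiveRecord model backed by a foreign (FDW) table.
module Geo
  module Fdw
    class Project < ApplicationRecord # in GitLab this inherits from a base class bound to the tracking database connection
      self.table_name = 'gitlab_secondary.projects'

      def readonly?
        true # the FDW schema is a read-only window into the secondary database
      end
    end
  end
end
```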
...
@@ -5,8 +5,8 @@ We devised a solution to solve common test automation problems such as the dread
Other problems that dynamic element validations solve are...
- When we perform an action with the mouse, we expect something to occur.
- When our test is navigating to (or from) a page, we ensure that we are on the page we expect before
test continuation.
## How it works
@@ -19,7 +19,7 @@ We interpret user actions on the page to have some sort of effect. These actions
When a page is navigated to, there are elements that will always appear on the page unconditionally.
Dynamic element validation is instituted when using
```ruby
Runtime::Browser.visit(:gitlab, Some::Page)
@@ -27,7 +27,7 @@ Runtime::Browser.visit(:gitlab, Some::Page)
### Clicks
When we perform a click within our tests, we expect something to occur. That something could be a component to now
appear on the webpage, or the test to navigate away from the page entirely.
Dynamic element validation is instituted when using
@@ -71,7 +71,7 @@ class MyPage < Page::Base
element :another_element, required: true
element :conditional_element
end
def open_layer
click_element :my_element, Layer::MyLayer
end
@@ -95,7 +95,7 @@ execute_stuff
```
will invoke GitLab QA to scan `MyPage` for `my_element` and `another_element` to be on the page before continuing to
`execute_stuff`
### Clicking
...
@@ -82,7 +82,7 @@ module Page
end
# ...
end
end
end
```
@@ -134,7 +134,7 @@ for each element defined.
In our case, `qa-login-field`, `qa-password-field` and `qa-sign-in-button`
**app/views/my/view.html.haml**
```haml
= f.text_field :login, class: "form-control top qa-login-field", autofocus: "autofocus", autocapitalize: "off", autocorrect: "off", required: true, title: "This field is required."
@@ -146,7 +146,7 @@ Things to note:
- The CSS class must be `kebab-cased` (separated with hyphens "`-`")
- If the element appears on the page unconditionally, add `required: true` to the element. See
[Dynamic element validation](dynamic_element_validation.md)
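For illustration, a page object that wires those `qa-` classes up might look like the following sketch (module and element names mirror the view above and are not real GitLab QA files):

```ruby
# Illustrative: declare the elements that the view above exposes via `qa-` classes.
module QA
  module Page
    module My
      class View < Page::Base
        view 'app/views/my/view.html.haml' do
          element :login_field, required: true    # maps to .qa-login-field
          element :password_field, required: true # maps to .qa-password-field
          element :sign_in_button                  # maps to .qa-sign-in-button
        end
      end
    end
  end
end
```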
## Running the test locally
...
@@ -25,22 +25,22 @@ and [Migrating from Jenkins to GitLab](https://www.youtube.com/watch?v=RlEVGOpYF
## Use cases
- Suppose you are new to GitLab, and want to keep using Jenkins until you prepare
your projects to build with [GitLab CI/CD](../ci/README.md). You set up the
integration between GitLab and Jenkins, then you migrate to GitLab CI later. While
you organize yourself and your team to onboard GitLab, you keep your pipelines
running with Jenkins, but view the results in your project's repository in GitLab.
- Your team uses [Jenkins Plugins](https://plugins.jenkins.io/) for other purposes,
therefore, you opt to keep using Jenkins to build your apps. Show the results of your
pipelines directly in GitLab.
For a real use case, read the blog post [Continuous integration: From Jenkins to GitLab using Docker](https://about.gitlab.com/2017/07/27/docker-my-precious/).
## Requirements
- [Jenkins GitLab Plugin](https://wiki.jenkins.io/display/JENKINS/GitLab+Plugin)
- [Jenkins Git Plugin](https://wiki.jenkins.io/display/JENKINS/Git+Plugin)
- Git clone access for Jenkins from the GitLab repository
- GitLab API access to report build status
## Configure GitLab users
@@ -65,7 +65,7 @@ Go to Manage Jenkins -> Configure System and scroll down to the 'GitLab' section
Enter the GitLab server URL in the 'GitLab host URL' field and paste the API token
copied earlier in the 'API Token' field.
For more information, see GitLab Plugin documentation about
[Jenkins-to-GitLab authentication](https://github.com/jenkinsci/gitlab-plugin#jenkins-to-gitlab-authentication)
![Jenkins GitLab plugin configuration](img/jenkins_gitlab_plugin_config.png)
@@ -76,8 +76,8 @@ Follow the GitLab Plugin documentation about [Jenkins Job Configuration](https:/
NOTE: **Note:**
Be sure to include the steps about [Build status configuration](https://github.com/jenkinsci/gitlab-plugin#build-status-configuration).
The 'Publish build status to GitLab' post-build step is required to view
Jenkins build status in GitLab Merge Requests.
## Configure a GitLab project
@@ -114,21 +114,21 @@ and storing build status for Commits and Merge Requests.
All steps are implemented using AJAX requests on the merge request page.
1. In order to display the build status in a merge request you must create a project service in GitLab.
1. Your project service will do a (JSON) query to a URL of the CI tool with the SHA1 of the commit.
1. The project service builds this URL and payload based on project service settings and knowledge of the CI tool.
1. The response is parsed to give a response in GitLab (success/failed/pending).
## Troubleshooting
### Error in merge requests - "Could not connect to the CI server"
This integration relies on Jenkins reporting the build status back to GitLab via
the [Commit Status API](../api/commits.md#commit-status).
The error 'Could not connect to the CI server' usually means that GitLab did not
receive a build status update via the API. Either Jenkins was not properly
configured or there was an error reporting the status via the API.
1. [Configure the Jenkins server](#configure-the-jenkins-server) for GitLab API access
1. [Configure a Jenkins project](#configure-a-jenkins-project), including the
'Publish build status to GitLab' post-build action.
@@ -14,7 +14,7 @@ Learn how GitLab helps you in the stages of the DevOps lifecycle by learning mor
### Self-managed: Install GitLab
Take a look at [installing GitLab](https://about.gitlab.com/install/) and our [administrator documentation](../administration/index.md). Then, follow the instructions below under [Your subscription](#your-subscription) to apply your license file.
### GitLab.com: Create a user and group
@@ -74,11 +74,11 @@ Please note that you need to be a group owner to associate a group to your subsc
To see the status of your GitLab.com subscription, you can click on the Billings
section of the relevant namespace:
- For individuals, this is located at https://gitlab.com/profile/billings under
your Settings.
- For groups, this is located under the group's Settings dropdown, under Billing.
For groups, you can see details of your subscription - including your current
plan - in the included table:
![Billing table](billing_table.png)
@@ -86,11 +86,11 @@ plan - in the included table:
| Field | Description |
| ------ | ------ |
| Seats in subscription | If this is a paid plan, this represents the number of seats you've paid to support in your group. |
| Seats currently in use | The number of active seats currently in use. |
| Max seats used | The highest number of seats you've used. If this exceeds the seats in subscription, you may owe an additional fee for the additional users. |
| Seats owed | If your max seats used exceeds the seats in your subscription, you'll owe an additional fee for the users you've added. |
| Subscription start date | The date your subscription started. If this is for a Free plan, this is the date you transitioned off your group's paid plan. |
| Subscription end date | The date your current subscription will end. This does not apply to Free plans. |
### Subscription changes and your data
...
...@@ -186,7 +186,7 @@ the sort order to *Last Contacted* from the dropdown beside the search field. ...@@ -186,7 +186,7 @@ the sort order to *Last Contacted* from the dropdown beside the search field.
To search Runners' descriptions: To search Runners' descriptions:
1. In the **Search or filter results...** field, type the description of the Runner you want to 1. In the **Search or filter results...** field, type the description of the Runner you want to
find. find.
1. Press Enter. 1. Press Enter.
You can also filter Runners by status, type, and tag. To filter: You can also filter Runners by status, type, and tag. To filter:
......
...@@ -94,7 +94,6 @@ a group in the **Usage Quotas** page available to the group page settings list. ...@@ -94,7 +94,6 @@ a group in the **Usage Quotas** page available to the group page settings list.
![Group pipelines quota](img/group_pipelines_quota.png) ![Group pipelines quota](img/group_pipelines_quota.png)
## Extra Shared Runners pipeline minutes quota ## Extra Shared Runners pipeline minutes quota
NOTE: **Note:** NOTE: **Note:**
...@@ -110,27 +109,27 @@ In order to purchase additional minutes, you should follow these steps: ...@@ -110,27 +109,27 @@ In order to purchase additional minutes, you should follow these steps:
![Buy additional minutes](img/buy_btn.png) ![Buy additional minutes](img/buy_btn.png)
1. Locate the subscription card that is linked to your group on GitLab.com, 1. Locate the subscription card that is linked to your group on GitLab.com,
click on **Buy more CI minutes**, and complete the details about the transaction. click on **Buy more CI minutes**, and complete the details about the transaction.
![Buy additional minutes](img/buy_minutes_card.png) ![Buy additional minutes](img/buy_minutes_card.png)
1. Once we have processed your payment, the extra CI minutes 1. Once we have processed your payment, the extra CI minutes
will be synced to your Group and you can view them on the will be synced to your Group and you can view them on the
**Group > Settings > Pipelines quota** page: **Group > Settings > Pipelines quota** page:
![Additional minutes](img/additional_minutes.png) ![Additional minutes](img/additional_minutes.png)
Be aware that: Be aware that:
1. If you have purchased extra CI minutes before the purchase of a paid plan, 1. If you have purchased extra CI minutes before the purchase of a paid plan,
we will calculate a pro-rated charge for your paid plan. That means you may we will calculate a pro-rated charge for your paid plan. That means you may
be charged for less than one year since your subscription was previously be charged for less than one year since your subscription was previously
created with the extra CI minutes. created with the extra CI minutes.
1. Once the extra CI minutes have been assigned to a Group, they cannot be transferred 1. Once the extra CI minutes have been assigned to a Group, they cannot be transferred
to a different Group. to a different Group.
1. If you have some minutes used over your default quota, these minutes will 1. If you have some minutes used over your default quota, these minutes will
be deducted from your Additional Minutes quota immediately after your purchase of additional be deducted from your Additional Minutes quota immediately after your purchase of additional
minutes. minutes.
## What happens when my CI minutes quota runs out ## What happens when my CI minutes quota runs out
......
...@@ -17,7 +17,7 @@ To enforce acceptance of a Terms of Service and Privacy Policy: ...@@ -17,7 +17,7 @@ To enforce acceptance of a Terms of Service and Privacy Policy:
1. Go to **Admin Area > Settings > General**. 1. Go to **Admin Area > Settings > General**.
1. Expand the **Terms of Service and Privacy Policy** section. 1. Expand the **Terms of Service and Privacy Policy** section.
1. Check the **Require all users to accept Terms of Service and Privacy Policy when they access 1. Check the **Require all users to accept Terms of Service and Privacy Policy when they access
GitLab.** checkbox. GitLab.** checkbox.
1. Input the text of the **Terms of Service and Privacy Policy**. Markdown formatting can be used in this input box. 1. Input the text of the **Terms of Service and Privacy Policy**. Markdown formatting can be used in this input box.
1. Click **Save changes**. 1. Click **Save changes**.
1. When you are presented with the **Terms of Service** statement, click **Accept terms**. 1. When you are presented with the **Terms of Service** statement, click **Accept terms**.
......
...@@ -21,9 +21,9 @@ page. ...@@ -21,9 +21,9 @@ page.
## Use cases ## Use cases
- Analyze your team's contributions over a period of time, and offer a bonus for the top - Analyze your team's contributions over a period of time, and offer a bonus for the top
contributors. contributors.
- Identify opportunities for improvement with group members who may benefit from additional - Identify opportunities for improvement with group members who may benefit from additional
support. support.
## Using Contribution Analytics ## Using Contribution Analytics
......
...@@ -202,8 +202,8 @@ You may also consult the [group permissions table][permissions]. ...@@ -202,8 +202,8 @@ You may also consult the [group permissions table][permissions].
## Thread ## Thread
- Comments: collaborate on that epic by posting comments in its thread. - Comments: collaborate on that epic by posting comments in its thread.
These text fields also fully support These text fields also fully support
[GitLab Flavored Markdown](../../markdown.md#gitlab-flavored-markdown-gfm). [GitLab Flavored Markdown](../../markdown.md#gitlab-flavored-markdown-gfm).
## Comment, or start a discussion ## Comment, or start a discussion
...@@ -216,7 +216,7 @@ Once you wrote your comment, you can either: ...@@ -216,7 +216,7 @@ Once you wrote your comment, you can either:
- You can [award an emoji](../../award_emojis.md) to that epic or its comments. - You can [award an emoji](../../award_emojis.md) to that epic or its comments.
## Notifications ## Notifications
- [Receive notifications](../../../workflow/notifications.md) for epic events. - [Receive notifications](../../../workflow/notifications.md) for epic events.
......
...@@ -41,12 +41,13 @@ You can create groups for numerous reasons. To name a couple: ...@@ -41,12 +41,13 @@ You can create groups for numerous reasons. To name a couple:
- Make it easier to `@mention` all of your team at once in issues and merge requests by creating a group and including the appropriate members. - Make it easier to `@mention` all of your team at once in issues and merge requests by creating a group and including the appropriate members.
For example, you could create a group for your company members, and create a [subgroup](subgroups/index.md) for each individual team. Let's say you create a group called `company-team`, and you create subgroups in this group for the individual teams `backend-team`, `frontend-team`, and `production-team`. For example, you could create a group for your company members, and create a [subgroup](subgroups/index.md) for each individual team. Let's say you create a group called `company-team`, and you create subgroups in this group for the individual teams `backend-team`, `frontend-team`, and `production-team`.
- When you start a new implementation from an issue, you add a comment:
_"`@company-team`, let's do it! `@company-team/backend-team` you're good to go!"_ - When you start a new implementation from an issue, you add a comment:
- When your backend team needs help from frontend, they add a comment: _"`@company-team`, let's do it! `@company-team/backend-team` you're good to go!"_
_"`@company-team/frontend-team` could you help us here please?"_ - When your backend team needs help from frontend, they add a comment:
- When the frontend team completes their implementation, they comment: _"`@company-team/frontend-team` could you help us here please?"_
_"`@company-team/backend-team`, it's done! Let's ship it `@company-team/production-team`!"_ - When the frontend team completes their implementation, they comment:
_"`@company-team/backend-team`, it's done! Let's ship it `@company-team/production-team`!"_
## Namespaces ## Namespaces
......
...@@ -24,27 +24,27 @@ The following identity providers are supported: ...@@ -24,27 +24,27 @@ The following identity providers are supported:
## Requirements ## Requirements
- [Group SSO](index.md) needs to be configured. - [Group SSO](index.md) needs to be configured.
- The `scim_group` feature flag must be enabled: - The `scim_group` feature flag must be enabled:
Run the following commands in a Rails console: Run the following commands in a Rails console:
```sh ```sh
# Omnibus GitLab # Omnibus GitLab
gitlab-rails console gitlab-rails console
# Installation from source # Installation from source
cd /home/git/gitlab cd /home/git/gitlab
sudo -u git -H bin/rails console RAILS_ENV=production sudo -u git -H bin/rails console RAILS_ENV=production
``` ```
To enable SCIM for a group named `group_name`: To enable SCIM for a group named `group_name`:
```ruby ```ruby
group = Group.find_by_full_path('group_name') group = Group.find_by_full_path('group_name')
Feature.enable(:group_scim, group) Feature.enable(:group_scim, group)
``` ```
### GitLab configuration ### GitLab configuration
Once [Single sign-on](index.md) has been configured, we can: Once [Single sign-on](index.md) has been configured, we can:
...@@ -53,7 +53,7 @@ Once [Single sign-on](index.md) has been configured, we can: ...@@ -53,7 +53,7 @@ Once [Single sign-on](index.md) has been configured, we can:
1. Click on the **Generate a SCIM token** button. 1. Click on the **Generate a SCIM token** button.
1. Save the token and URL so they can be used in the next step. 1. Save the token and URL so they can be used in the next step.
![SCIM token configuration](img/scim_token.png) ![SCIM token configuration](img/scim_token.png)
## SCIM IdP configuration ## SCIM IdP configuration
...@@ -63,15 +63,15 @@ In the [Single sign-on](index.md) configuration for the group, make sure ...@@ -63,15 +63,15 @@ In the [Single sign-on](index.md) configuration for the group, make sure
that the **Name identifier value** (NameID) points to a unique identifier, such that the **Name identifier value** (NameID) points to a unique identifier, such
as the `user.objectid`. This will match the `extern_uid` used on GitLab. as the `user.objectid`. This will match the `extern_uid` used on GitLab.
The GitLab app in Azure needs to be configured following The GitLab app in Azure needs to be configured following
[Azure's SCIM setup](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/use-scim-to-provision-users-and-groups#getting-started). [Azure's SCIM setup](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/use-scim-to-provision-users-and-groups#getting-started).
Note the following: Note the following:
- The `Tenant URL` and `secret token` are the ones retrieved in the - The `Tenant URL` and `secret token` are the ones retrieved in the
[previous step](#gitlab-configuration). [previous step](#gitlab-configuration).
- Should there be any problems with the availability of GitLab or similar - Should there be any problems with the availability of GitLab or similar
errors, the notification email you set will receive them. errors, the notification email you set will receive them.
- For mappings, we will only leave `Synchronize Azure Active Directory Users to AppName` enabled. - For mappings, we will only leave `Synchronize Azure Active Directory Users to AppName` enabled.
You can then test the connection by clicking on `Test Connection`. You can then test the connection by clicking on `Test Connection`.
...@@ -79,14 +79,14 @@ You can then test the connection clicking on `Test Connection`. ...@@ -79,14 +79,14 @@ You can then test the connection clicking on `Test Connection`.
### Synchronize Azure Active Directory users ### Synchronize Azure Active Directory users
1. Click on `Synchronize Azure Active Directory Users to AppName` to configure 1. Click on `Synchronize Azure Active Directory Users to AppName` to configure
the attribute mapping. the attribute mapping.
1. Select the unique identifier (in the example `objectId`) as the `id` and `externalId`, 1. Select the unique identifier (in the example `objectId`) as the `id` and `externalId`,
and enable the `Create`, `Update`, and `Delete` actions. and enable the `Create`, `Update`, and `Delete` actions.
1. Map the `userPrincipalName` to `emails[type eq "work"].value` and `mailNickname` to 1. Map the `userPrincipalName` to `emails[type eq "work"].value` and `mailNickname` to
`userName`. `userName`.
Example configuration: Example configuration:
![Azure's attribute mapping configuration](img/scim_attribute_mapping.png) ![Azure's attribute mapping configuration](img/scim_attribute_mapping.png)
1. Click on **Show advanced options > Edit attribute list for AppName**. 1. Click on **Show advanced options > Edit attribute list for AppName**.
...@@ -95,11 +95,11 @@ and enable the `Create`, `Update`, and `Delete` actions. ...@@ -95,11 +95,11 @@ and enable the `Create`, `Update`, and `Delete` actions.
NOTE: **Note:** NOTE: **Note:**
`username` should neither be primary nor required as we don't support `username` should neither be primary nor required as we don't support
that field on GitLab SCIM yet. that field on GitLab SCIM yet.
![Azure's attribute advanced configuration](img/scim_advanced.png) ![Azure's attribute advanced configuration](img/scim_advanced.png)
1. Save all the screens and, in the **Provisioning** step, set 1. Save all the screens and, in the **Provisioning** step, set
the `Provisioning Status` to `ON`. the `Provisioning Status` to `ON`.
NOTE: **Note:** NOTE: **Note:**
You can control what is actually synced by selecting the `Scope`. For example, You can control what is actually synced by selecting the `Scope`. For example,
......
...@@ -12,9 +12,9 @@ By displaying the logs directly in GitLab, developers can avoid having to manage ...@@ -12,9 +12,9 @@ By displaying the logs directly in GitLab, developers can avoid having to manage
1. Go to **Operations > Environments** and find the environment which contains the desired pod, like `production`. 1. Go to **Operations > Environments** and find the environment which contains the desired pod, like `production`.
1. On the **Environments** page, you should see the status of the environment's pods with [Deploy Boards](../deploy_boards.md). 1. On the **Environments** page, you should see the status of the environment's pods with [Deploy Boards](../deploy_boards.md).
1. When mousing over the list of pods, a tooltip will appear with the exact pod name and status. 1. When mousing over the list of pods, a tooltip will appear with the exact pod name and status.
![Deploy Boards pod list](img/pod_logs_deploy_board.png) ![Deploy Boards pod list](img/pod_logs_deploy_board.png)
1. Click on the desired pod to bring up the logs view, which will contain the last 500 lines for that pod. Support for pods with multiple containers is coming [in a future release](https://gitlab.com/gitlab-org/gitlab-ee/issues/6502). 1. Click on the desired pod to bring up the logs view, which will contain the last 500 lines for that pod. Support for pods with multiple containers is coming [in a future release](https://gitlab.com/gitlab-org/gitlab-ee/issues/6502).
![Deploy Boards pod list](img/kubernetes_pod_logs.png) ![Deploy Boards pod list](img/kubernetes_pod_logs.png)
## Requirements ## Requirements
......
...@@ -25,14 +25,14 @@ templates of the default branch will be taken into account. ...@@ -25,14 +25,14 @@ templates of the default branch will be taken into account.
## Use-cases ## Use-cases
- Add a template to be used in every issue for a specific project, - Add a template to be used in every issue for a specific project,
giving instructions and guidelines, and requiring information specific to that subject. giving instructions and guidelines, and requiring information specific to that subject.
For example, if you have a project for tracking new blog posts, you can require the For example, if you have a project for tracking new blog posts, you can require the
title, outlines, author name, author social media information, etc. title, outlines, author name, author social media information, etc.
- Following the previous example, you can make a template for every MR submitted - Following the previous example, you can make a template for every MR submitted
with a new blog post, requiring information about the post date, frontmatter data, with a new blog post, requiring information about the post date, frontmatter data,
images guidelines, link to the related issue, reviewer name, etc. images guidelines, link to the related issue, reviewer name, etc.
- You can also create issues and merge request templates for different - You can also create issues and merge request templates for different
stages of your workflow, e.g., feature proposal, feature improvement, bug report, etc. stages of your workflow, e.g., feature proposal, feature improvement, bug report, etc.
## Creating issue templates ## Creating issue templates
......
...@@ -23,7 +23,7 @@ allow GitLab to send messages only to *one* room. ...@@ -23,7 +23,7 @@ allow GitLab to send messages only to *one* room.
1. Find "Build Your Own!" and click "Create". 1. Find "Build Your Own!" and click "Create".
1. Select the desired room, name the integration "GitLab", and click "Create". 1. Select the desired room, name the integration "GitLab", and click "Create".
1. In the "Send messages to this room by posting this URL" column, you should 1. In the "Send messages to this room by posting this URL" column, you should
see a URL in the format: see a URL in the format:
``` ```
https://api.hipchat.com/v2/room/<room>/notification?auth_token=<token> https://api.hipchat.com/v2/room/<room>/notification?auth_token=<token>
......
...@@ -134,4 +134,4 @@ For more information, see [Crosslinking issues](crosslinking_issues.md). ...@@ -134,4 +134,4 @@ For more information, see [Crosslinking issues](crosslinking_issues.md).
- [Export issues](csv_export.md) **[STARTER]** - [Export issues](csv_export.md) **[STARTER]**
- [Issues API](../../../api/issues.md) - [Issues API](../../../api/issues.md)
- Configure an [external issue tracker](../../../integration/external-issue-tracker.md) such as Jira, Redmine, - Configure an [external issue tracker](../../../integration/external-issue-tracker.md) such as Jira, Redmine,
or Bugzilla. or Bugzilla.
...@@ -12,9 +12,9 @@ to approve a merge request before it can be unblocked for merging. ...@@ -12,9 +12,9 @@ to approve a merge request before it can be unblocked for merging.
## Use cases ## Use cases
1. Enforcing review of all code that gets merged into a repository. 1. Enforcing review of all code that gets merged into a repository.
2. Specifying code maintainers for an entire repository. 1. Specifying code maintainers for an entire repository.
3. Specifying reviewers for a given proposed code change. 1. Specifying reviewers for a given proposed code change.
4. Specifying categories of reviewers, such as BE, FE, QA, DB, etc., for all proposed code changes. 1. Specifying categories of reviewers, such as BE, FE, QA, DB, etc., for all proposed code changes.
## Enabling the new approvals interface ## Enabling the new approvals interface
...@@ -246,7 +246,7 @@ restrictions (compared to [GitLab Starter](#overriding-the-merge-request-approva ...@@ -246,7 +246,7 @@ restrictions (compared to [GitLab Starter](#overriding-the-merge-request-approva
- Approval rules can be added to an MR with no restriction. - Approval rules can be added to an MR with no restriction.
- For project sourced approval rules, editing and removing approvers is not allowed. - For project sourced approval rules, editing and removing approvers is not allowed.
- The approvals required for all approval rules are configurable, but if a rule is backed by a project rule, then it is restricted - The approvals required for all approval rules are configurable, but if a rule is backed by a project rule, then it is restricted
to the minimum approvals required set in the project's corresponding rule. to the minimum approvals required set in the project's corresponding rule.
## Resetting approvals on push ## Resetting approvals on push
......
...@@ -77,10 +77,10 @@ containing the most popular SSGs templates to get you started. ...@@ -77,10 +77,10 @@ containing the most popular SSGs templates to get you started.
1. [Fork](../../../gitlab-basics/fork-project.md) a sample project from the [GitLab Pages examples](https://gitlab.com/pages) group. 1. [Fork](../../../gitlab-basics/fork-project.md) a sample project from the [GitLab Pages examples](https://gitlab.com/pages) group.
1. From the left sidebar, navigate to your project's **CI/CD > Pipelines** 1. From the left sidebar, navigate to your project's **CI/CD > Pipelines**
and click **Run pipeline** to trigger GitLab CI/CD to build and deploy your and click **Run pipeline** to trigger GitLab CI/CD to build and deploy your
site to the server. site to the server.
1. Once the pipeline has finished successfully, find the link to visit your 1. Once the pipeline has finished successfully, find the link to visit your
website from your project's **Settings > Pages**. website from your project's **Settings > Pages**.
You can also take some **optional** further steps: You can also take some **optional** further steps:
...@@ -89,14 +89,14 @@ You can also take some **optional** further steps: ...@@ -89,14 +89,14 @@ You can also take some **optional** further steps:
![remove fork relationship](img/remove_fork_relationship.png) ![remove fork relationship](img/remove_fork_relationship.png)
- _Make it a user or group website._ To turn a **project website** forked - _Make it a user or group website._ To turn a **project website** forked
from the Pages group into a **user/group** website, you'll need to: from the Pages group into a **user/group** website, you'll need to:
- Rename it to `namespace.gitlab.io`: go to your project's - Rename it to `namespace.gitlab.io`: go to your project's
**Settings > General** and expand **Advanced**. Scroll down to **Settings > General** and expand **Advanced**. Scroll down to
**Rename repository** and change the path to `namespace.gitlab.io`. **Rename repository** and change the path to `namespace.gitlab.io`.
- Adjust your SSG's [base URL](#urls-and-baseurls) from `"project-name"` to - Adjust your SSG's [base URL](#urls-and-baseurls) from `"project-name"` to
`""`. This setting will be at a different place for each SSG, as each of them `""`. This setting will be at a different place for each SSG, as each of them
has its own structure and file tree. Most likely, it will be in the SSG's has its own structure and file tree. Most likely, it will be in the SSG's
config file, as shown in the sketch below. config file, as shown in the sketch below.
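For instance, if the forked sample uses Jekyll, this setting typically lives in `_config.yml`; a minimal sketch, assuming Jekyll's standard keys rather than anything specific to this project:

```yaml
# _config.yml (Jekyll): hypothetical values for a user or group website.
# Serve the site from the root of the domain instead of the /project-name subpath.
baseurl: ""
url: "https://namespace.gitlab.io"
```

Other SSGs such as Hugo or Hexo expose an equivalent setting under a different key, so check the config file your chosen SSG generates.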
### Create a project from scratch ### Create a project from scratch
......
...@@ -12,7 +12,6 @@ type: index, reference ...@@ -12,7 +12,6 @@ type: index, reference
> - Support for subgroup project's websites was [introduced](https://gitlab.com/gitlab-org/gitlab-ce/issues/30548) in GitLab 11.8. > - Support for subgroup project's websites was [introduced](https://gitlab.com/gitlab-org/gitlab-ce/issues/30548) in GitLab 11.8.
> - Bundled project templates were [introduced](https://gitlab.com/gitlab-org/gitlab-ce/issues/47857) in GitLab 11.8. > - Bundled project templates were [introduced](https://gitlab.com/gitlab-org/gitlab-ce/issues/47857) in GitLab 11.8.
**GitLab Pages is a feature that allows you to publish static websites **GitLab Pages is a feature that allows you to publish static websites
directly from a repository in GitLab.** directly from a repository in GitLab.**
...@@ -105,10 +104,10 @@ To get started with GitLab Pages, you can either: ...@@ -105,10 +104,10 @@ To get started with GitLab Pages, you can either:
![Project templates for Pages](img/pages_project_templates_11-8.png) ![Project templates for Pages](img/pages_project_templates_11-8.png)
1. From the left sidebar, navigate to your project's **CI/CD > Pipelines** 1. From the left sidebar, navigate to your project's **CI/CD > Pipelines**
and click **Run pipeline** to trigger GitLab CI/CD to build and deploy your and click **Run pipeline** to trigger GitLab CI/CD to build and deploy your
site to the server. site to the server.
1. Once the pipeline has finished successfully, find the link to visit your 1. Once the pipeline has finished successfully, find the link to visit your
website from your project's **Settings > Pages**. website from your project's **Settings > Pages**.
Your website is then visible on your domain, and you can modify your files Your website is then visible on your domain, and you can modify your files
as you wish. For every modification pushed to your repository, GitLab CI/CD as you wish. For every modification pushed to your repository, GitLab CI/CD
......
...@@ -13,17 +13,17 @@ To familiarize yourself with GitLab Pages first: ...@@ -13,17 +13,17 @@ To familiarize yourself with GitLab Pages first:
- Read an [introduction to GitLab Pages](index.md#overview). - Read an [introduction to GitLab Pages](index.md#overview).
- Learn [how to get started with Pages](index.md#getting-started). - Learn [how to get started with Pages](index.md#getting-started).
- Learn how to enable GitLab Pages - Learn how to enable GitLab Pages
across your GitLab instance on the [administrator documentation](../../../administration/pages/index.md). across your GitLab instance on the [administrator documentation](../../../administration/pages/index.md).
## GitLab Pages requirements ## GitLab Pages requirements
In brief, this is what you need to upload your website in GitLab Pages: In brief, this is what you need to upload your website in GitLab Pages:
1. Domain of the instance: domain name that is used for GitLab Pages 1. Domain of the instance: domain name that is used for GitLab Pages
(ask your administrator). (ask your administrator).
1. GitLab CI/CD: a `.gitlab-ci.yml` file with a specific job named [`pages`][pages] in the root directory of your repository (see the sketch after this list). 1. GitLab CI/CD: a `.gitlab-ci.yml` file with a specific job named [`pages`][pages] in the root directory of your repository (see the sketch after this list).
1. A directory called `public` in your site's repo containing the content 1. A directory called `public` in your site's repo containing the content
to be published. to be published.
1. GitLab Runner enabled for the project. 1. GitLab Runner enabled for the project.
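As a minimal sketch of the `.gitlab-ci.yml` requirement above, assuming the site is plain HTML already committed to the repository (an SSG project would build into `public` instead of copying files):

```yaml
# Sketch of a minimal `pages` job: collect the repository contents into the
# `public` directory and publish it as an artifact for GitLab Pages to serve.
pages:
  stage: deploy
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
  artifacts:
    paths:
      - public
  only:
    - master
```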
## GitLab Pages on GitLab.com ## GitLab Pages on GitLab.com
......
...@@ -26,10 +26,10 @@ Set up your project's access, [visibility](../../../public_access/public_access. ...@@ -26,10 +26,10 @@ Set up your project's access, [visibility](../../../public_access/public_access.
![projects sharing permissions](img/sharing_and_permissions_settings.png) ![projects sharing permissions](img/sharing_and_permissions_settings.png)
If Issues are disabled, or you can't access Issues because you're not a project member, then Labels and Milestones If Issues are disabled, or you can't access Issues because you're not a project member, then Labels and Milestones
links will be missing from the sidebar UI. links will be missing from the sidebar UI.
You can still access them with direct links if you can access Merge Requests. This is deliberate: if you can see You can still access them with direct links if you can access Merge Requests. This is deliberate: if you can see
Issues or Merge Requests, both of which use Labels and Milestones, then you shouldn't be denied access to Labels and Milestones pages. Issues or Merge Requests, both of which use Labels and Milestones, then you shouldn't be denied access to Labels and Milestones pages.
### Issue settings ### Issue settings
...@@ -109,8 +109,8 @@ You can transfer an existing project into a [group](../../group/index.md) if: ...@@ -109,8 +109,8 @@ You can transfer an existing project into a [group](../../group/index.md) if:
1. You have at least **Maintainer** [permissions] to that group. 1. You have at least **Maintainer** [permissions] to that group.
1. The project is in a subgroup you own. 1. The project is in a subgroup you own.
1. You are at least a **Maintainer** of the project under your personal namespace. 1. You are at least a **Maintainer** of the project under your personal namespace.
Similarly, if you are an owner of a group, you can transfer any of its projects Similarly, if you are an owner of a group, you can transfer any of its projects
under your own user. under your own user.
To transfer a project: To transfer a project:
......
...@@ -171,13 +171,13 @@ syntax but with some restrictions: ...@@ -171,13 +171,13 @@ syntax but with some restrictions:
- No global blocks can be defined (i.e., `before_script` or `after_script`) - No global blocks can be defined (i.e., `before_script` or `after_script`)
- Only one job named `terminal` can be added to this file. - Only one job named `terminal` can be added to this file.
- Only the keywords `image`, `services`, `tags`, `before_script`, `script`, and - Only the keywords `image`, `services`, `tags`, `before_script`, `script`, and
`variables` are allowed to be used to configure the job. `variables` are allowed to be used to configure the job.
- To connect to the interactive terminal, the `terminal` job must still be alive - To connect to the interactive terminal, the `terminal` job must still be alive
and running; otherwise, the terminal won't be able to connect to the job's session. and running; otherwise, the terminal won't be able to connect to the job's session.
By default the `script` keyword has the value `sleep 60` to prevent By default the `script` keyword has the value `sleep 60` to prevent
the job from ending and to give the Web IDE enough time to connect. This means the job from ending and to give the Web IDE enough time to connect. This means
that, if you override the default `script` value, you'll have to add a command that, if you override the default `script` value, you'll have to add a command
which would keep the job running, like `sleep`. which would keep the job running, like `sleep`.
In the code below there is an example of this configuration file: In the code below there is an example of this configuration file:
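A rough sketch of such a file, assuming it lives at `.gitlab/.gitlab-webide.yml` (the path is an assumption, not confirmed by this section) and using only the allowed keywords:

```yaml
# Hypothetical .gitlab/.gitlab-webide.yml sketch: a single `terminal` job
# limited to the allowed keywords listed above.
terminal:
  image: alpine:3.9
  before_script:
    - apk add --no-cache git
  script: sleep 3600   # keep the job alive long enough for the Web IDE to connect
  variables:
    NODE_ENV: "test"
```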
......