Commit b7892c51 authored by Suzanne Selhorn

Merge branch 'axil-enablement-new-menu' into 'master'

Update admin docs with new admin area access info

See merge request gitlab-org/gitlab!64317
parents 2db78598 fb4a0046
@@ -53,17 +53,16 @@ helpful:
 you can create an Auditor user and then share the credentials with those users
 to which you want to grant access.

-## Adding an Auditor user
+## Add an Auditor user

-To create a new Auditor user:
+To create an Auditor user:

-1. Create a new user or edit an existing one by navigating to
-   **Admin Area > Users**. The option of the access level is located in
-   the 'Access' section.
-
-   ![Admin Area Form](img/auditor_access_form.png)
-
-1. Select **Save changes** or **Create user** for the changes to take effect.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Overview > Users**.
+1. Create a new user or edit an existing one, and in the **Access** section
+   select Auditor.
+1. Select **Create user** or **Save changes** if you created a new user or
+   edited an existing one respectively.

 To revoke Auditor permissions from a user, make them a regular user by
 following the previous steps.
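Editor's note: for scripted setups, the same result can likely be achieved through the Users API, which accepts an `auditor` flag on supported tiers. This is a sketch only; confirm the attribute against the API reference for your GitLab version.

```shell
# Create an Auditor user via the REST API (illustrative values).
# The `auditor` attribute is assumed to be available on your tier/version.
curl --request POST \
  --header "PRIVATE-TOKEN: <your_admin_token>" \
  --data "email=audit@example.com&username=auditor&name=Audit Bot&password=<password>&auditor=true" \
  "https://gitlab.example.com/api/v4/users"
```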
@@ -58,19 +58,25 @@ Feature.enable('geo_repository_verification')

 ## Repository verification

-Go to the **Admin Area > Geo** dashboard on the **primary** node and expand
-the **Verification information** section for that node to view automatic checksumming
-status for each data type. Successes are shown in green, pending work
-in gray, and failures in red.
-
-![Verification status](img/verification_status_primary_v14_0.png)
-
-Go to the **Admin Area > Geo** dashboard on the **secondary** node and expand
-the **Verification information** section for that node to view automatic verification
-status for each data type. As with checksumming, successes are shown in
-green, pending work in gray, and failures in red.
-
-![Verification status](img/verification_status_secondary_v14_0.png)
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Expand **Verification information** tab for that node to view automatic checksumming
+   status for repositories and wikis. Successes are shown in green, pending work
+   in gray, and failures in red.
+
+   ![Verification status](img/verification_status_primary_v14_0.png)
+
+On the **secondary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Expand **Verification information** tab for that node to view automatic checksumming
+   status for repositories and wikis. Successes are shown in green, pending work
+   in gray, and failures in red.
+
+   ![Verification status](img/verification_status_secondary_v14_0.png)

 ## Using checksums to compare Geo nodes
@@ -92,11 +98,14 @@ data. The default and recommended re-verification interval is 7 days, though
 an interval as short as 1 day can be set. Shorter intervals reduce risk but
 increase load and vice versa.

-Go to the **Admin Area > Geo** dashboard on the **primary** node, and
-click the **Edit** button for the **primary** node to customize the minimum
-re-verification interval:
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Edit** for the **primary** node to customize the minimum
+   re-verification interval:

 ![Re-verification interval](img/reverification-interval.png)

 The automatic background re-verification is enabled by default, but you can
 disable if you need. Run the following commands in a Rails console on the
@@ -141,17 +150,19 @@ sudo gitlab-rake geo:verification:wiki:reset

 If the **primary** and **secondary** nodes have a checksum verification mismatch, the cause may not be apparent. To find the cause of a checksum mismatch:

-1. Go to the **Admin Area > Overview > Projects** dashboard on the **primary** node, find the
-   project that you want to check the checksum differences and click on the
-   **Edit** button:
-
-   ![Projects dashboard](img/checksum-differences-admin-projects.png)
-
-1. On the project administration page get the **Gitaly storage name**, and **Gitaly relative path**:
+1. On the **primary** node:
+   1. On the top bar, select **Menu >** **{admin}** **Admin**.
+   1. On the left sidebar, select **Overview > Projects**.
+   1. Find the project that you want to check the checksum differences and
+      select its name.
+   1. On the project administration page get the **Gitaly storage name**,
+      and **Gitaly relative path**.

    ![Project administration page](img/checksum-differences-admin-project-page.png)

 1. Go to the project's repository directory on both **primary** and **secondary** nodes
    (the path is usually `/var/opt/gitlab/git-data/repositories`). Note that if `git_data_dirs`
-   is customized, check the directory layout on your server to be sure.
+   is customized, check the directory layout on your server to be sure:

    ```shell
    cd /var/opt/gitlab/git-data/repositories
@@ -109,13 +109,16 @@ The maintenance window won't end until Geo replication and verification is
 completely finished. To keep the window as short as possible, you should
 ensure these processes are close to 100% as possible during active use.

-Go to the **Admin Area > Geo** dashboard on the **secondary** node to
-review status. Replicated objects (shown in green) should be close to 100%,
-and there should be no failures (shown in red). If a large proportion of
-objects aren't yet replicated (shown in gray), consider giving the node more
-time to complete
-
-![Replication status](../replication/img/geo_node_dashboard_v14_0.png)
+On the **secondary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+
+Replicated objects (shown in green) should be close to 100%,
+and there should be no failures (shown in red). If a large proportion of
+objects aren't yet replicated (shown in gray), consider giving the node more
+time to complete.
+
+![Replication status](../replication/img/geo_node_dashboard_v14_0.png)

 If any objects are failing to replicate, this should be investigated before
 scheduling the maintenance window. Following a planned failover, anything that
@@ -134,23 +137,26 @@ This [content was moved to another location](background_verification.md).

 ### Notify users of scheduled maintenance

-On the **primary** node, navigate to **Admin Area > Messages**, add a broadcast
-message. You can check under **Admin Area > Geo** to estimate how long it
-takes to finish syncing. An example message would be:
-
-> A scheduled maintenance takes place at XX:XX UTC. We expect it to take
-> less than 1 hour.
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Messages**.
+1. Add a message notifying users on the maintenance window.
+   You can check under **Geo > Nodes** to estimate how long it
+   takes to finish syncing.
+1. Select **Add broadcast message**.

 ## Prevent updates to the **primary** node

 To ensure that all data is replicated to a secondary site, updates (write requests) need to
-be disabled on the primary site:
+be disabled on the **primary** site:

-1. Enable [maintenance mode](../../maintenance_mode/index.md).
-
-1. Disable non-Geo periodic background jobs on the **primary** node by navigating
-   to **Admin Area > Monitoring > Background Jobs > Cron**, pressing `Disable All`,
-   and then pressing `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
+1. Enable [maintenance mode](../../maintenance_mode/index.md) on the **primary** node.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Monitoring > Background Jobs**.
+1. On the Sidekiq dashboard, select **Cron**.
+1. Select `Disable All` to disable non-Geo periodic background jobs.
+1. Select `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
    This job re-enables several other cron jobs that are essential for planned
    failover to complete successfully.
@@ -158,23 +164,28 @@ be disabled on the primary site:

 1. If you are manually replicating any data not managed by Geo, trigger the
    final replication process now.
-1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
-   and wait for all queues except those with `geo` in the name to drop to 0.
-   These queues contain work that has been submitted by your users; failing over
-   before it is completed, causes the work to be lost.
-1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
-   following conditions to be true of the **secondary** node you are failing over to:
-
-   - All replication meters to each 100% replicated, 0% failures.
-   - All verification meters reach 100% verified, 0% failures.
-   - Database replication lag is 0ms.
-   - The Geo log cursor is up to date (0 events behind).
-
-1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
-   and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
-1. On the **secondary** node, use [these instructions](../../raketasks/check.md)
-   to verify the integrity of CI artifacts, LFS objects, and uploads in file
-   storage.
+1. On the **primary** node:
+   1. On the top bar, select **Menu >** **{admin}** **Admin**.
+   1. On the left sidebar, select **Monitoring > Background Jobs**.
+   1. On the Sidekiq dashboard, select **Queues**, and wait for all queues except
+      those with `geo` in the name to drop to 0.
+      These queues contain work that has been submitted by your users; failing over
+      before it is completed, causes the work to be lost.
+   1. On the left sidebar, select **Geo > Nodes** and wait for the
+      following conditions to be true of the **secondary** node you are failing over to:

+      - All replication meters reach 100% replicated, 0% failures.
+      - All verification meters reach 100% verified, 0% failures.
+      - Database replication lag is 0ms.
+      - The Geo log cursor is up to date (0 events behind).
+
+1. On the **secondary** node:
+   1. On the top bar, select **Menu >** **{admin}** **Admin**.
+   1. On the left sidebar, select **Monitoring > Background Jobs**.
+   1. On the Sidekiq dashboard, select **Queues**, and wait for all the `geo`
+      queues to drop to 0 queued and 0 running jobs.
+1. [Run an integrity check](../../raketasks/check.md) to verify the integrity
+   of CI artifacts, LFS objects, and uploads in file storage.

 At this point, your **secondary** node contains an up-to-date copy of everything the
 **primary** node has, meaning nothing was lost when you fail over.
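Editor's note: the integrity check linked above maps to Rake tasks along these lines. This is a sketch; confirm the exact task names in the check Rake tasks documentation for your GitLab version.

```shell
# Run on the secondary node (Omnibus installation assumed).
sudo gitlab-rake gitlab:artifacts:check   # CI artifacts
sudo gitlab-rake gitlab:lfs:check         # LFS objects
sudo gitlab-rake gitlab:uploads:check     # uploads in file storage
```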
@@ -63,13 +63,16 @@ Before following any of those steps, make sure you have `root` access to the
 **secondary** to promote it, since there isn't provided an automated way to
 promote a Geo replica and perform a failover.

-On the **secondary** node, navigate to the **Admin Area > Geo** dashboard to
-review its status. Replicated objects (shown in green) should be close to 100%,
-and there should be no failures (shown in red). If a large proportion of
-objects aren't yet replicated (shown in gray), consider giving the node more
-time to complete.
-
-![Replication status](../../replication/img/geo_node_dashboard_v14_0.png)
+On the **secondary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes** to see its status.
+
+Replicated objects (shown in green) should be close to 100%,
+and there should be no failures (shown in red). If a large proportion of
+objects aren't yet replicated (shown in gray), consider giving the node more
+time to complete.
+
+![Replication status](../../replication/img/geo_node_dashboard_v14_0.png)

 If any objects are failing to replicate, this should be investigated before
 scheduling the maintenance window. After a planned failover, anything that
@@ -126,11 +129,14 @@ follow these steps to avoid unnecessary data loss:
    existing Git repository with an SSH remote URL. The server should refuse
    connection.

-1. On the **primary** node, disable non-Geo periodic background jobs by navigating
-   to **Admin Area > Monitoring > Background Jobs > Cron**, clicking `Disable All`,
-   and then clicking `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
-   This job will re-enable several other cron jobs that are essential for planned
-   failover to complete successfully.
+1. On the **primary** node:
+   1. On the top bar, select **Menu >** **{admin}** **Admin**.
+   1. On the left sidebar, select **Monitoring > Background Jobs**.
+   1. On the Sidekiq dashboard, select **Cron**.
+   1. Select `Disable All` to disable any non-Geo periodic background jobs.
+   1. Select `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
+      This job will re-enable several other cron jobs that are essential for planned
+      failover to complete successfully.

 1. Finish replicating and verifying all data:
@@ -141,22 +147,28 @@ follow these steps to avoid unnecessary data loss:

 1. If you are manually replicating any
    [data not managed by Geo](../../replication/datatypes.md#limitations-on-replicationverification),
    trigger the final replication process now.
-1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
-   and wait for all queues except those with `geo` in the name to drop to 0.
-   These queues contain work that has been submitted by your users; failing over
-   before it is completed will cause the work to be lost.
-1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
-   following conditions to be true of the **secondary** node you are failing over to:
-   - All replication meters to each 100% replicated, 0% failures.
-   - All verification meters reach 100% verified, 0% failures.
-   - Database replication lag is 0ms.
-   - The Geo log cursor is up to date (0 events behind).
-
-1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
-   and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
-1. On the **secondary** node, use [these instructions](../../../raketasks/check.md)
-   to verify the integrity of CI artifacts, LFS objects, and uploads in file
-   storage.
+1. On the **primary** node:
+   1. On the top bar, select **Menu >** **{admin}** **Admin**.
+   1. On the left sidebar, select **Monitoring > Background Jobs**.
+   1. On the Sidekiq dashboard, select **Queues**, and wait for all queues except
+      those with `geo` in the name to drop to 0.
+      These queues contain work that has been submitted by your users; failing over
+      before it is completed, causes the work to be lost.
+   1. On the left sidebar, select **Geo > Nodes** and wait for the
+      following conditions to be true of the **secondary** node you are failing over to:
+
+      - All replication meters reach 100% replicated, 0% failures.
+      - All verification meters reach 100% verified, 0% failures.
+      - Database replication lag is 0ms.
+      - The Geo log cursor is up to date (0 events behind).
+
+1. On the **secondary** node:
+   1. On the top bar, select **Menu >** **{admin}** **Admin**.
+   1. On the left sidebar, select **Monitoring > Background Jobs**.
+   1. On the Sidekiq dashboard, select **Queues**, and wait for all the `geo`
+      queues to drop to 0 queued and 0 running jobs.
+1. [Run an integrity check](../../../raketasks/check.md) to verify the integrity
+   of CI artifacts, LFS objects, and uploads in file storage.

 At this point, your **secondary** node will contain an up-to-date copy of everything the
 **primary** node has, meaning nothing will be lost when you fail over.
@@ -114,11 +114,14 @@ follow these steps to avoid unnecessary data loss:
    existing Git repository with an SSH remote URL. The server should refuse
    connection.

-1. On the **primary** node, disable non-Geo periodic background jobs by navigating
-   to **Admin Area > Monitoring > Background Jobs > Cron**, clicking `Disable All`,
-   and then clicking `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
-   This job will re-enable several other cron jobs that are essential for planned
-   failover to complete successfully.
+1. On the **primary** node:
+   1. On the top bar, select **Menu >** **{admin}** **Admin**.
+   1. On the left sidebar, select **Monitoring > Background Jobs**.
+   1. On the Sidekiq dashboard, select **Cron**.
+   1. Select `Disable All` to disable any non-Geo periodic background jobs.
+   1. Select `Enable` for the `geo_sidekiq_cron_config_worker` cron job.
+      This job will re-enable several other cron jobs that are essential for planned
+      failover to complete successfully.

 1. Finish replicating and verifying all data:
@@ -129,22 +132,28 @@ follow these steps to avoid unnecessary data loss:

 1. If you are manually replicating any
    [data not managed by Geo](../../replication/datatypes.md#limitations-on-replicationverification),
    trigger the final replication process now.
-1. On the **primary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
-   and wait for all queues except those with `geo` in the name to drop to 0.
-   These queues contain work that has been submitted by your users; failing over
-   before it is completed will cause the work to be lost.
-1. On the **primary** node, navigate to **Admin Area > Geo** and wait for the
-   following conditions to be true of the **secondary** node you are failing over to:
-   - All replication meters to each 100% replicated, 0% failures.
-   - All verification meters reach 100% verified, 0% failures.
-   - Database replication lag is 0ms.
-   - The Geo log cursor is up to date (0 events behind).
-
-1. On the **secondary** node, navigate to **Admin Area > Monitoring > Background Jobs > Queues**
-   and wait for all the `geo` queues to drop to 0 queued and 0 running jobs.
-1. On the **secondary** node, use [these instructions](../../../raketasks/check.md)
-   to verify the integrity of CI artifacts, LFS objects, and uploads in file
-   storage.
+1. On the **primary** node:
+   1. On the top bar, select **Menu >** **{admin}** **Admin**.
+   1. On the left sidebar, select **Monitoring > Background Jobs**.
+   1. On the Sidekiq dashboard, select **Queues**, and wait for all queues except
+      those with `geo` in the name to drop to 0.
+      These queues contain work that has been submitted by your users; failing over
+      before it is completed, causes the work to be lost.
+   1. On the left sidebar, select **Geo > Nodes** and wait for the
+      following conditions to be true of the **secondary** node you are failing over to:
+
+      - All replication meters reach 100% replicated, 0% failures.
+      - All verification meters reach 100% verified, 0% failures.
+      - Database replication lag is 0ms.
+      - The Geo log cursor is up to date (0 events behind).
+
+1. On the **secondary** node:
+   1. On the top bar, select **Menu >** **{admin}** **Admin**.
+   1. On the left sidebar, select **Monitoring > Background Jobs**.
+   1. On the Sidekiq dashboard, select **Queues**, and wait for all the `geo`
+      queues to drop to 0 queued and 0 running jobs.
+1. [Run an integrity check](../../../raketasks/check.md) to verify the integrity
+   of CI artifacts, LFS objects, and uploads in file storage.

 At this point, your **secondary** node will contain an up-to-date copy of everything the
 **primary** node has, meaning nothing will be lost when you fail over.
@@ -196,9 +196,9 @@ keys must be manually replicated to the **secondary** node.
    gitlab-ctl reconfigure
    ```

-1. Visit the **primary** node's **Admin Area > Geo**
-   (`/admin/geo/nodes`) in your browser.
-1. Click the **New node** button.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **New node**.

    ![Add secondary node](img/adding_a_secondary_node_v13_3.png)

 1. Fill in **Name** with the `gitlab_rails['geo_node_name']` in
    `/etc/gitlab/gitlab.rb`. These values must always match *exactly*, character
@@ -209,7 +209,7 @@ keys must be manually replicated to the **secondary** node.
 1. Optionally, choose which groups or storage shards should be replicated by the
    **secondary** node. Leave blank to replicate all. Read more in
    [selective synchronization](#selective-synchronization).
-1. Click the **Add node** button to add the **secondary** node.
+1. Select **Add node** to add the **secondary** node.
 1. SSH into your GitLab **secondary** server and restart the services:

    ```shell
@@ -252,18 +252,22 @@ on the **secondary** node.
 Geo synchronizes repositories over HTTP/HTTPS, and therefore requires this clone
 method to be enabled. This is enabled by default, but if converting an existing node to Geo it should be checked:

-1. Go to **Admin Area > Settings** (`/admin/application_settings/general`) on the **primary** node.
-1. Expand "Visibility and access controls".
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Settings > General**.
+1. Expand **Visibility and access controls**.
 1. Ensure "Enabled Git access protocols" is set to either "Both SSH and HTTP(S)" or "Only HTTP(S)".

 ### Step 6. Verify proper functioning of the **secondary** node

-Your **secondary** node is now configured!
-
-You can sign in to the _secondary_ node with the same credentials you used with
-the _primary_ node. Visit the _secondary_ node's **Admin Area > Geo**
-(`/admin/geo/nodes`) in your browser to determine if it's correctly identified
-as a _secondary_ Geo node, and if Geo is enabled.
+You can sign in to the **secondary** node with the same credentials you used with
+the **primary** node. After you sign in:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Verify that it's correctly identified as a **secondary** Geo node, and that
+   Geo is enabled.

 The initial replication, or 'backfill', is probably still in progress. You
 can monitor the synchronization process on each Geo node from the **primary**
@@ -33,9 +33,12 @@ to do that.

 ## Remove the primary site from the UI

-1. Go to **Admin Area > Geo** (`/admin/geo/nodes`).
-1. Click the **Remove** button for the **primary** node.
-1. Confirm by clicking **Remove** when the prompt appears.
+To remove the **primary** site:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Remove** for the **primary** node.
+1. Confirm by selecting **Remove** when the prompt appears.

 ## Remove secondary replication slots
@@ -127,7 +127,10 @@ For each application and Sidekiq node on the **secondary** site:

 ### Verify replication

-To verify Container Registry replication is working, go to **Admin Area > Geo**
-(`/admin/geo/nodes`) on the **secondary** site.
-The initial replication, or "backfill", is probably still in progress.
+To verify Container Registry replication is working, on the **secondary** site:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+
+The initial replication, or "backfill", is probably still in progress.
 You can monitor the synchronization process on each Geo site from the **primary** site's **Geo Nodes** dashboard in your browser.
@@ -21,7 +21,7 @@ To have:

 [Read more about using object storage with GitLab](../../object_storage.md).

-## Enabling GitLab managed object storage replication
+## Enabling GitLab-managed object storage replication

 > [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/10586) in GitLab 12.4.
@@ -31,10 +31,11 @@ This is a [**beta** feature](https://about.gitlab.com/handbook/product/#beta) an
 **Secondary** sites can replicate files stored on the **primary** site regardless of
 whether they are stored on the local file system or in object storage.

-To enable GitLab replication, you must:
+To enable GitLab replication:

-1. Go to **Admin Area > Geo**.
-1. Press **Edit** on the **secondary** site.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Edit** on the **secondary** site.
 1. In the **Synchronization Settings** section, find the **Allow this secondary node to replicate content on Object Storage**
    checkbox to enable it.
@@ -9,7 +9,8 @@ type: howto

 **Secondary** sites can be removed from the Geo cluster using the Geo administration page of the **primary** site. To remove a **secondary** site:

-1. Go to **Admin Area > Geo** (`/admin/geo/nodes`).
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
 1. Select the **Remove** button for the **secondary** site you want to remove.
 1. Confirm by selecting **Remove** when the prompt appears.
@@ -25,8 +25,12 @@ Before attempting more advanced troubleshooting:

 ### Check the health of the **secondary** node

-Visit the **primary** node's **Admin Area > Geo** (`/admin/geo/nodes`) in
-your browser. We perform the following health checks on each **secondary** node
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+
+We perform the following health checks on each **secondary** node
 to help identify if something is wrong:

 - Is the node running?
@@ -129,7 +133,8 @@ Geo finds the current machine's Geo node name in `/etc/gitlab/gitlab.rb` by:

 - Using the `gitlab_rails['geo_node_name']` setting.
 - If that is not defined, using the `external_url` setting.

-This name is used to look up the node with the same **Name** in **Admin Area > Geo**.
+This name is used to look up the node with the same **Name** in the **Geo Nodes**
+dashboard.

 To check if the current machine has a node name that matches a node in the
 database, run the check task:
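Editor's note: the check task referenced here is the Geo health-check Rake task. A sketch, assuming an Omnibus installation; confirm the task name for your version.

```shell
# Compares the machine's Geo node name against the nodes registered in the database
sudo gitlab-rake gitlab:geo:check
```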
@@ -739,8 +744,11 @@ If you are able to log in to the **primary** node, but you receive this error
 when attempting to log into a **secondary**, you should check that the Geo
 node's URL matches its external URL.

-1. On the primary, visit **Admin Area > Geo**.
-1. Find the affected **secondary** and click **Edit**.
+On the **primary** node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Find the affected **secondary** site and select **Edit**.
 1. Ensure the **URL** field matches the value found in `/etc/gitlab/gitlab.rb`
    in `external_url "https://gitlab.example.com"` on the frontend server(s) of
    the **secondary** node.
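Editor's note: to read the value you need to compare against, a quick check on the **secondary** node's frontend server (sketch; standard Omnibus paths assumed):

```shell
# Show the configured external URL on the secondary's frontend server
sudo grep "^external_url" /etc/gitlab/gitlab.rb

# If you change it, reconfigure so the new value takes effect
sudo gitlab-ctl reconfigure
```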
@@ -7,20 +7,28 @@ type: howto

 # Tuning Geo **(PREMIUM SELF)**

-## Changing the sync/verification capacity values
+You can limit the number of concurrent operations the nodes can run
+in the background.

-In **Admin Area > Geo** (`/admin/geo/nodes`),
-there are several variables that can be tuned to improve performance of Geo:
+## Changing the sync/verification concurrency values

-- Repository sync capacity
-- File sync capacity
-- Container repositories sync capacity
-- Verification capacity
+On the **primary** site:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Edit** of the secondary node you want to tune.
+1. Under **Tuning settings**, there are several variables that can be tuned to
+   improve the performance of Geo:
+
+   - Repository synchronization concurrency limit
+   - File synchronization concurrency limit
+   - Container repositories synchronization concurrency limit
+   - Verification concurrency limit

-Increasing capacity values will increase the number of jobs that are scheduled.
+Increasing the concurrency values will increase the number of jobs that are scheduled.
 However, this may not lead to more downloads in parallel unless the number of
-available Sidekiq threads is also increased. For example, if repository sync
-capacity is increased from 25 to 50, you may also want to increase the number
+available Sidekiq threads is also increased. For example, if repository synchronization
+concurrency is increased from 25 to 50, you may also want to increase the number
 of Sidekiq threads from 25 to 50. See the
 [Sidekiq concurrency documentation](../../operations/extra_sidekiq_processes.md#number-of-threads)
 for more details.
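Editor's note: raising the Sidekiq thread count mentioned above is done in `/etc/gitlab/gitlab.rb` on an Omnibus installation. A sketch, assuming the `sidekiq['max_concurrency']` setting applies to your version; check the Sidekiq documentation linked above before using it.

```shell
# /etc/gitlab/gitlab.rb (assumed setting name; verify for your version):
#   sidekiq['max_concurrency'] = 50

# Apply the change
sudo gitlab-ctl reconfigure
```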
@@ -9,25 +9,27 @@ info: To determine the technical writer assigned to the Stage/Group associated w

 GitLab supports and automates housekeeping tasks within your current repository,
 such as compressing file revisions and removing unreachable objects.

-## Automatic housekeeping
+## Configure housekeeping

 GitLab automatically runs `git gc` and `git repack` on repositories
-after Git pushes. You can change how often this happens or turn it off in
-**Admin Area > Settings > Repository** (`/admin/application_settings/repository`).
+after Git pushes.

-## Manual housekeeping
+You can change how often this happens or turn it off:

-The housekeeping function runs `repack` or `gc` depending on the
-**Housekeeping** settings configured in **Admin Area > Settings > Repository**.
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Settings > Repository**.
+1. Expand **Repository maintenance**.
+1. Configure the Housekeeping options.
+1. Select **Save changes**.

-For example in the following scenario a `git repack -d` will be executed:
+For example, in the following scenario a `git repack -d` will be executed:

 - Project: pushes since GC counter (`pushes_since_gc`) = `10`
 - Git GC period = `200`
 - Full repack period = `50`

 When the `pushes_since_gc` value is 50 a `repack -A -d --pack-kept-objects` runs, similarly when
-the `pushes_since_gc` value is 200 a `git gc` runs.
+the `pushes_since_gc` value is 200 a `git gc` runs:

 - `git gc` ([man page](https://mirrors.edge.kernel.org/pub/software/scm/git/docs/git-gc.html)) runs a number of housekeeping tasks,
   such as compressing file revisions (to reduce disk space and increase performance)
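Editor's note: the threshold behavior in the example above can be summarized roughly as follows. This is an illustration only, assuming the default incremental repack period of 10 pushes; it is not GitLab's actual implementation.

```shell
# Which housekeeping task would run for a given pushes_since_gc value (sketch)
pushes_since_gc=10
gc_period=200            # Git GC period
full_repack_period=50    # Full repack period
incremental_period=10    # assumed default incremental repack period

if   [ $((pushes_since_gc % gc_period)) -eq 0 ]; then
  echo "git gc"
elif [ $((pushes_since_gc % full_repack_period)) -eq 0 ]; then
  echo "git repack -A -d --pack-kept-objects"
elif [ $((pushes_since_gc % incremental_period)) -eq 0 ]; then
  echo "git repack -d"
fi
```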
@@ -38,12 +40,6 @@ the `pushes_since_gc` value is 200 a `git gc` runs.

 Housekeeping also [removes unreferenced LFS files](../raketasks/cleanup.md#remove-unreferenced-lfs-files)
 from your project on the same schedule as the `git gc` operation, freeing up storage space for your project.

-To manually start the housekeeping process:
-
-1. In your project, go to **Settings > General**.
-1. Expand the **Advanced** section.
-1. Select **Run housekeeping**.
-
 ## How housekeeping handles pool repositories

 Housekeeping for pool repositories is handled differently from standard repositories.
@@ -21,10 +21,11 @@ Maintenance Mode allows most external actions that do not change internal state.

 There are three ways to enable Maintenance Mode as an administrator:

 - **Web UI**:
-  1. Go to **Admin Area > Settings > General**, expand **Maintenance Mode**, and toggle **Enable Maintenance Mode**.
+  1. On the top bar, select **Menu >** **{admin}** **Admin**.
+  1. On the left sidebar, select **Settings > General**.
+  1. Expand **Maintenance Mode**, and toggle **Enable Maintenance Mode**.
      You can optionally add a message for the banner as well.
-  1. Click **Save** for the changes to take effect.
+  1. Select **Save changes**.

 - **API**:
@@ -44,9 +45,11 @@ There are three ways to enable Maintenance Mode as an administrator:

 There are three ways to disable Maintenance Mode:

 - **Web UI**:
-  1. Go to **Admin Area > Settings > General**, expand **Maintenance Mode**, and toggle **Enable Maintenance Mode**.
-  1. Click **Save** for the changes to take effect.
+  1. On the top bar, select **Menu >** **{admin}** **Admin**.
+  1. On the left sidebar, select **Settings > General**.
+  1. Expand **Maintenance Mode**, and toggle **Enable Maintenance Mode**.
+     You can optionally add a message for the banner as well.
+  1. Select **Save changes**.

 - **API**:
@@ -166,7 +169,10 @@ Background jobs (cron jobs, Sidekiq) continue running as is, because background

 [During a planned Geo failover](../geo/disaster_recovery/planned_failover.md#prevent-updates-to-the-primary-node),
 it is recommended that you disable all cron jobs except for those related to Geo.

-You can monitor queues and disable jobs in **Admin Area > Monitoring > Background Jobs**.
+To monitor queues and disable jobs:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Monitoring > Background Jobs**.

 ### Incident management
@@ -87,10 +87,10 @@ To start multiple processes:

    sudo gitlab-ctl reconfigure
    ```

-After the extra Sidekiq processes are added, navigate to
-**Admin Area > Monitoring > Background Jobs** (`/admin/background_jobs`) in GitLab.
+To view the Sidekiq processes in GitLab:

-![Multiple Sidekiq processes](img/sidekiq-cluster.png)
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Monitoring > Background Jobs**.

 ## Negate settings
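Editor's note: to confirm the extra processes from the command line first, a quick check on an Omnibus installation (sketch):

```shell
# List the Sidekiq services managed by Omnibus GitLab
sudo gitlab-ctl status | grep sidekiq

# Show the running Sidekiq worker processes
ps -ef | grep '[s]idekiq'
```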
@@ -104,11 +104,13 @@ In the case of lookup failures (which are common), the `authorized_keys`

 file is still scanned. So Git SSH performance would still be slow for many
 users as long as a large file exists.

-You can disable any more writes to the `authorized_keys` file by unchecking
-`Write to "authorized_keys" file` in the **Admin Area > Settings > Network > Performance optimization** of your GitLab
-installation.
-
-![Write to authorized keys setting](img/write_to_authorized_keys_setting.png)
+To disable any more writes to the `authorized_keys` file:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Settings > Network**.
+1. Expand **Performance optimization**.
+1. Clear the **Write to "authorized_keys" file** checkbox.
+1. Select **Save changes**.

 Again, confirm that SSH is working by removing your user's SSH key in the UI,
 adding a new one, and attempting to pull a repository.
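Editor's note: a quick way to confirm SSH access still works after the change, using generic Git/SSH commands; substitute your own host and project path.

```shell
# Should greet you by username if SSH authentication against GitLab succeeds
ssh -T git@gitlab.example.com

# Should list remote refs without prompting for a password
git ls-remote git@gitlab.example.com:my-group/my-project.git
```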
@@ -9,23 +9,24 @@ info: To determine the technical writer assigned to the Stage/Group associated w

 The GitLab UI polls for updates for different resources (issue notes, issue
 titles, pipeline statuses, etc.) on a schedule appropriate to the resource.

-In **[Admin Area](../user/admin_area/index.md) > Settings > Preferences > Real-time features**,
-you can configure "Polling
-interval multiplier". This multiplier is applied to all resources at once,
-and decimal values are supported. For the sake of the examples below, we will
-say that issue notes poll every 2 seconds, and issue titles poll every 5
-seconds; these are _not_ the actual values.
-
-- 1 is the default, and recommended for most installations. (Issue notes poll
-  every 2 seconds, and issue titles poll every 5 seconds.)
-- 0 disables UI polling completely. (On the next poll, clients stop
-  polling for updates.)
-- A value greater than 1 slows polling down. If you see issues with
-  database load from lots of clients polling for updates, increasing the
-  multiplier from 1 can be a good compromise, rather than disabling polling
-  completely. (For example: If this is set to 2, then issue notes poll every 4
-  seconds, and issue titles poll every 10 seconds.)
-- A value between 0 and 1 makes the UI poll more frequently (so updates
-  show in other sessions faster), but is **not recommended**. 1 should be
-  fast enough. (For example, if this is set to 0.5, then issue notes poll every
-  1 second, and issue titles poll every 2.5 seconds.)
+To configure the polling interval multiplier:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Settings > Preferences**.
+1. Expand **Real-time features**.
+1. Set a value for the polling interval multiplier. This multiplier is applied
+   to all resources at once, and decimal values are supported:
+
+   - `1.0` is the default, and recommended for most installations.
+   - `0` disables UI polling completely. On the next poll, clients stop
+     polling for updates.
+   - A value greater than `1` slows polling down. If you see issues with
+     database load from lots of clients polling for updates, increasing the
+     multiplier from 1 can be a good compromise, rather than disabling polling
+     completely. For example, if you set the value to `2`, all polling intervals
+     are multiplied by 2, which means that polling happens half as frequently.
+   - A value between `0` and `1` makes the UI poll more frequently (so updates
+     show in other sessions faster), but is **not recommended**. `1` should be
+     fast enough.
+
+1. Select **Save changes**.
@@ -207,8 +207,7 @@ above.

 ### Dangling commits

 `gitlab:git:fsck` can find dangling commits. To fix them, try
-[manually triggering housekeeping](../housekeeping.md#manual-housekeeping)
-for the affected project(s).
+[enabling housekeeping](../housekeeping.md).

 If the issue persists, try triggering `gc` via the
 [Rails Console](../operations/rails_console.md#starting-a-rails-console-session):
@@ -50,8 +50,13 @@ Note the following:

 - Importing is only possible if the version of the import and export GitLab instances are
   compatible as described in the [Version history](../../user/project/settings/import_export.md#version-history).
-- The project import option must be enabled in
-  application settings (`/admin/application_settings/general`) under **Import sources**, which is available
-  under **Admin Area > Settings > Visibility and access controls**.
+- The project import option must be enabled:
+
+  1. On the top bar, select **Menu >** **{admin}** **Admin**.
+  1. On the left sidebar, select **Settings > General**.
+  1. Expand **Visibility and access controls**.
+  1. Under **Import sources**, check the "Project export enabled" option.
+  1. Select **Save changes**.
+
 - The exports are stored in a temporary directory and are deleted every
   24 hours by a specific worker.
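Editor's note: if you prefer to verify the current state from a script, the application settings API exposes these options; attribute names such as `import_sources` and `project_export_enabled` are assumptions to confirm against the API reference for your version.

```shell
# Fetch current application settings and check the import/export options
# in the JSON response (requires an admin token)
curl --header "PRIVATE-TOKEN: <your_admin_token>" \
  "https://gitlab.example.com/api/v4/application/settings"
```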
@@ -107,12 +107,15 @@ to project IDs 50 to 100 in an Omnibus GitLab installation:

 sudo gitlab-rake gitlab:storage:migrate_to_hashed ID_FROM=50 ID_TO=100
 ```

-You can monitor the progress in the **Admin Area > Monitoring > Background Jobs** page.
-There is a specific queue you can watch to see how long it will take to finish:
-`hashed_storage:hashed_storage_project_migrate`.
+To monitor the progress in GitLab:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Monitoring > Background Jobs**.
+1. Watch how long the `hashed_storage:hashed_storage_project_migrate` queue
+   will take to finish. After it reaches zero, you can confirm every project
+   has been migrated by running the commands above.

-After it reaches zero, you can confirm every project has been migrated by running the commands above.
-If you find it necessary, you can run this migration script again to schedule missing projects.
+If you find it necessary, you can run the previous migration script again to schedule missing projects.

 Any error or warning is logged in Sidekiq's log file.
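Editor's note: the confirmation commands referred to above are the storage listing Rake tasks earlier in this document; roughly as follows (task names as documented for recent versions — verify against your instance).

```shell
# Count and list projects still on legacy storage; both should come back empty
# once the migration has finished (Omnibus installation assumed).
sudo gitlab-rake gitlab:storage:legacy_projects
sudo gitlab-rake gitlab:storage:list_legacy_projects
```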
@@ -120,7 +123,7 @@ If [Geo](../geo/index.md) is enabled, each project that is successfully migrated

 generates an event to replicate the changes on any **secondary** nodes.

 You only need the `gitlab:storage:migrate_to_hashed` Rake task to migrate your repositories, but there are
-[additional commands(#list-projects-and-attachments) to help you inspect projects and attachments in both legacy and hashed storage.
+[additional commands](#list-projects-and-attachments) to help you inspect projects and attachments in both legacy and hashed storage.

 ## Rollback from hashed storage to legacy storage
@@ -238,9 +238,11 @@ in this section whenever you need to update GitLab.

 ### Check the current version

-To determine the version of GitLab you're currently running,
-go to the **{admin}** **Admin Area**, and find the version
-under the **Components** table.
+To determine the version of GitLab you're currently running:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Overview > Dashboard**.
+1. Find the version under the **Components** table.

 If there's a newer available version of GitLab that contains one or more
 security fixes, GitLab displays an **Update asap** notification message that
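Editor's note: the version can also be read without the UI; two common checks (sketch — the `/version` API endpoint requires authentication):

```shell
# On the server (Omnibus installation): prints a "GitLab information" section with the version
sudo gitlab-rake gitlab:env:info

# Or over the API from anywhere
curl --header "PRIVATE-TOKEN: <your_token>" "https://gitlab.example.com/api/v4/version"
```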
@@ -10,7 +10,10 @@ type: howto

 You can configure various settings for GitLab Geo nodes. For more information, see
 [Geo documentation](../../administration/geo/index.md).

-On the primary node, go to **Admin Area > Geo**. On secondary nodes, go to **Admin Area > Geo > Nodes**.
+On either the primary or secondary node:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.

 ## Common settings
@@ -61,8 +64,13 @@ The **primary** node's Internal URL is used by **secondary** nodes to contact it
 [External URL](https://docs.gitlab.com/omnibus/settings/configuration.html#configuring-the-external-url-for-gitlab)
 which is used by users. Internal URL does not need to be a private address.

-Internal URL defaults to External URL, but you can customize it under
-**Admin Area > Geo > Nodes**.
+Internal URL defaults to external URL, but you can also customize it:
+
+1. On the top bar, select **Menu >** **{admin}** **Admin**.
+1. On the left sidebar, select **Geo > Nodes**.
+1. Select **Edit** on the node you want to customize.
+1. Edit the internal URL.
+1. Select **Save changes**.

 WARNING:
 We recommend using an HTTPS connection while configuring the Geo nodes. To avoid
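Editor's note: for automation, the Geo Nodes REST API can likely set the same field through an `internal_url` attribute. A sketch only; confirm the endpoint and attribute in the Geo Nodes API reference for your version.

```shell
# Update the internal URL of Geo node 2 (illustrative IDs and URLs)
curl --request PUT \
  --header "PRIVATE-TOKEN: <your_admin_token>" \
  "https://primary.example.com/api/v4/geo_nodes/2?internal_url=https://internal.example.com"
```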