Commit cbe30081 authored by Suzanne Selhorn

Merge branch 'CMDale-master-patch-98067' into 'master'

Fix Vale Issues - Update sidekiq.md

See merge request gitlab-org/gitlab!60196
parents eb501c6e ed220e9d
@@ -11,7 +11,7 @@ tasks. When things go wrong it can be difficult to troubleshoot. These
 situations also tend to be high-pressure because a production system job queue
 may be filling up. Users will notice when this happens because new branches
 may not show up and merge requests may not be updated. The following are some
-troubleshooting steps that will help you diagnose the bottleneck.
+troubleshooting steps to help you diagnose the bottleneck.
 GitLab administrators/users should consider working through these
 debug steps with GitLab Support so the backtraces can be analyzed by our team.
@@ -42,7 +42,7 @@ Example log output:
 When using [Sidekiq JSON logging](../logs.md#sidekiqlog),
 arguments logs are limited to a maximum size of 10 kilobytes of text;
-any arguments after this limit will be discarded and replaced with a
+any arguments after this limit are discarded and replaced with a
 single argument containing the string `"..."`.
 You can set `SIDEKIQ_LOG_ARGUMENTS` [environment variable](https://docs.gitlab.com/omnibus/settings/environment-variables.html)
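
A minimal sketch of setting that variable on an Omnibus installation, assuming the `gitlab_rails['env']` hash described on the linked environment-variables page; the value `1` follows the GitLab 13.5-and-earlier note in the next hunk:

```ruby
# /etc/gitlab/gitlab.rb (sketch, assuming an Omnibus installation).
# Run `sudo gitlab-ctl reconfigure` and restart Sidekiq after editing.
gitlab_rails['env'] = { "SIDEKIQ_LOG_ARGUMENTS" => "1" }
```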
@@ -58,7 +58,7 @@ In GitLab 13.5 and earlier, set `SIDEKIQ_LOG_ARGUMENTS` to `1` to start logging
 ## Thread dump
-Send the Sidekiq process ID the `TTIN` signal and it will output thread
+Send the Sidekiq process ID the `TTIN` signal to output thread
 backtraces in the log file.
 ```shell
@@ -66,7 +66,7 @@ kill -TTIN <sidekiq_pid>
 ```
 Check in `/var/log/gitlab/sidekiq/current` or `$GITLAB_HOME/log/sidekiq.log` for
-the backtrace output. The backtraces will be lengthy and generally start with
+the backtrace output. The backtraces are lengthy and generally start with
 several `WARN` level messages. Here's an example of a single thread's backtrace:
 ```plaintext
@@ -88,8 +88,8 @@ Move on to other troubleshooting methods if this happens.
 ## Process profiling with `perf`
 Linux has a process profiling tool called `perf` that is helpful when a certain
-process is eating up a lot of CPU. If you see high CPU usage and Sidekiq won't
-respond to the `TTIN` signal, this is a good next step.
+process is eating up a lot of CPU. If you see high CPU usage and Sidekiq isn't
+responding to the `TTIN` signal, this is a good next step.
 If `perf` is not installed on your system, install it with `apt-get` or `yum`:
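
As a rough sketch of the profiling step itself (the placeholder PID and the 30-second sampling window are assumptions, not part of the documented procedure):

```shell
# Sample the Sidekiq process for 30 seconds, recording call graphs (-g).
sudo perf record -g -p <sidekiq_pid> -- sleep 30

# Inspect where CPU time was spent; reads perf.data from the current directory.
sudo perf report
```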
@@ -134,8 +134,8 @@ corresponding Ruby code where this is happening.
 `gdb` can be another effective tool for debugging Sidekiq. It gives you a little
 more interactive way to look at each thread and see what's causing problems.
-Attaching to a process with `gdb` will suspends the normal operation
-of the process (Sidekiq will not process jobs while `gdb` is attached).
+Attaching to a process with `gdb` suspends the normal operation
+of the process (Sidekiq does not process jobs while `gdb` is attached).
 Start by attaching to the Sidekiq PID:
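
A sketch of that attach-and-inspect flow, assuming `gdb` is installed and the PID placeholder is filled in; the indented lines are commands typed at the `(gdb)` prompt, shown here as comments:

```shell
# Attach to the Sidekiq process; it stops processing jobs while gdb is attached.
sudo gdb -p <sidekiq_pid>

# At the (gdb) prompt:
#   thread apply all bt   # dump a backtrace for every thread
#   detach                # release the process so Sidekiq resumes
#   quit
```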
@@ -285,7 +285,7 @@ end
 ### Remove Sidekiq jobs for given parameters (destructive)
 The general method to kill jobs conditionally is the following command, which
-will remove jobs that are queued but not started. Running jobs will not be killed.
+removes jobs that are queued but not started. Running jobs can not be killed.
 ```ruby
 queue = Sidekiq::Queue.new('<queue name>')
@@ -294,7 +294,7 @@ queue.each { |job| job.delete if <condition>}
 Have a look at the section below for cancelling running jobs.
-In the method above, `<queue-name>` is the name of the queue that contains the job(s) you want to delete and `<condition>` will decide which jobs get deleted.
+In the method above, `<queue-name>` is the name of the queue that contains the job(s) you want to delete and `<condition>` decides which jobs get deleted.
 Commonly, `<condition>` references the job arguments, which depend on the type of job in question. To find the arguments for a specific queue, you can have a look at the `perform` function of the related worker file, commonly found at `/app/workers/<queue-name>_worker.rb`.
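
To make `<condition>` concrete, a hypothetical example using the same Sidekiq API as the snippet above; the queue name and the assumption that the worker's first `perform` argument is a project ID are illustrative only:

```ruby
# Hypothetical: delete queued (not yet running) jobs for project ID 1234.
# Check the worker's `perform` signature before relying on args[0].
queue = Sidekiq::Queue.new('post_receive')
queue.each { |job| job.delete if job.args[0] == 1234 }
```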