Commit 3c8efb6a authored by Stan Hu

Move feature flag list into process cache

When we switched from a single-threaded application server (Unicorn) to
a multithreaded one (Puma), we did not realize that Puma often reaps
threads after a request is done and recreates them later. This makes the
thread-local cache ineffective, as the cache does not store anything
beyond the lifetime of the thread.
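
To illustrate the problem, here is a minimal sketch (illustrative only, not GitLab code) of how a thread-local cache loses its contents across Puma's thread churn, while a process-wide `ActiveSupport::Cache::MemoryStore` keeps them:

```ruby
require 'active_support'
require 'active_support/cache'

# Process-wide store: lives as long as the Puma worker process does.
PROCESS_CACHE = ActiveSupport::Cache::MemoryStore.new

# First "request" runs on a thread that is later reaped.
Thread.new do
  Thread.current[:feature_names] = %w[ci_foo]      # thread-local cache
  PROCESS_CACHE.write('feature_names', %w[ci_foo]) # process-wide cache
end.join

# Next "request" runs on a freshly created thread.
Thread.new do
  Thread.current[:feature_names]      # => nil, thread-local storage starts empty again
  PROCESS_CACHE.read('feature_names') # => ["ci_foo"], still cached
end.join
```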

Since `ActiveSupport::Cache::MemoryStore` is thread-safe, we should be
able to switch the L1 cache for the feature flag list to this store and
reduce load on Redis.

Since read and write access is synchronized, this does have the side
effect of adding contention when feature flags are accessed.
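
The resulting pattern looks roughly like the sketch below. `PROCESS_CACHE`
and `expensive_persisted_names_lookup` are illustrative names only; the
real code uses `Gitlab::ProcessMemoryCache.cache_backend`, as shown in the
diff further down:

```ruby
require 'active_support'
require 'active_support/cache'
require 'active_support/time' # for 1.minute

# One MemoryStore per process, shared by all Puma threads.
PROCESS_CACHE = ActiveSupport::Cache::MemoryStore.new

def persisted_feature_names
  # #fetch is synchronized internally, so threads briefly contend on the
  # store's lock, but the expensive lookup runs at most once per minute.
  PROCESS_CACHE.fetch('flipper:persisted_names', expires_in: 1.minute) do
    expensive_persisted_names_lookup # hypothetical stand-in for FlipperFeature.feature_names
  end
end
```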

We made a similar change in
https://gitlab.com/gitlab-org/gitlab/-/merge_requests/26935, and this
seems to be working fine.

Discovered in
https://gitlab.com/gitlab-com/gl-infra/infrastructure/-/issues/9414
parent b2f799c3
---
title: Move feature flag list into process cache
merge_request: 27511
author:
type: performance
@@ -38,7 +38,7 @@ class Feature
       begin
         # We saw on GitLab.com, this database request was called 2300
         # times/s. Let's cache it for a minute to avoid that load.
-        Gitlab::ThreadMemoryCache.cache_backend.fetch('flipper:persisted_names', expires_in: 1.minute) do
+        Gitlab::ProcessMemoryCache.cache_backend.fetch('flipper:persisted_names', expires_in: 1.minute) do
           FlipperFeature.feature_names
         end
       end
@@ -42,7 +42,7 @@ describe Feature do
         .once
         .and_call_original
-      expect(Gitlab::ThreadMemoryCache.cache_backend)
+      expect(Gitlab::ProcessMemoryCache.cache_backend)
         .to receive(:fetch)
         .once
         .with('flipper:persisted_names', expires_in: 1.minute)
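
For context, `Gitlab::ProcessMemoryCache.cache_backend` (introduced in the earlier merge request referenced above) is presumably just a memoized, process-wide `ActiveSupport::Cache::MemoryStore`. A sketch of that idea, not the actual implementation:

```ruby
require 'active_support'
require 'active_support/cache'

# Sketch only: the real class lives in the GitLab codebase and may differ.
module Gitlab
  class ProcessMemoryCache
    def self.cache_backend
      # Memoized once per process, so every thread in the Puma worker
      # shares the same thread-safe in-memory store.
      @cache_backend ||= ActiveSupport::Cache::MemoryStore.new
    end
  end
end
```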