Commit 33b2d630 authored by Suren Baghdasaryan, committed by Linus Torvalds

psi: introduce state_mask to represent stalled psi states

Patch series "psi: pressure stall monitors", v6.

This is a respin of:
  https://lwn.net/ml/linux-kernel/20190308184311.144521-1-surenb%40google.com/

Android is adopting psi to detect and remedy memory pressure that
results in stuttering and decreased responsiveness on mobile devices.

Psi gives us the stall information, but because we're dealing with
latencies in the millisecond range, periodically reading the pressure
files to detect stalls in a timely fashion is not feasible.  Psi also
doesn't aggregate its averages at a high-enough frequency right now.

This patch series extends the psi interface such that users can
configure sensitive latency thresholds and use poll() and friends to be
notified when these are breached.

As high-frequency aggregation is costly, it implements an aggregation
method that is optimized for fast, short-interval averaging, and makes
the aggregation frequency adaptive, such that high-frequency updates
only happen while monitored stall events are actively occurring.

With these patches applied, Android can monitor for, and ward off,
mounting memory shortages before they cause problems for the user.  For
example, using memory stall monitors in the userspace low memory killer
daemon (lmkd), we can detect mounting pressure and kill less important
processes before the device becomes visibly sluggish.  In our memory
stress testing, psi memory monitors produce roughly 10x fewer false
positives than vmpressure signals.  The ability to specify multiple
triggers for the same psi metric allows other parts of the Android
framework to monitor the memory state of the device and act accordingly.

The new interface is straightforward.  The user opens one of the
pressure files for writing and writes a trigger description into the
file descriptor.  The trigger defines the stall state (some or full)
and the maximum stall time over a given time window.  E.g.:

        /*
         * Signal when "full" stall time exceeds 100ms of a 1s window.
         * Error handling elided for brevity.
         */
        struct pollfd pfd;
        char trigger[] = "full 100000 1000000";

        pfd.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
        write(pfd.fd, trigger, strlen(trigger) + 1);
        pfd.events = POLLPRI;
        while (poll(&pfd, 1, -1) >= 0) {
                ...
        }
        close(pfd.fd);

When the monitored stall state is entered, psi adapts its aggregation
frequency according to what the configured time window requires in order
to emit event signals in a timely fashion.  Once the stalling subsides,
aggregation reverts to normal.

The trigger is associated with the open file descriptor.  To stop
monitoring, the user only needs to close the file descriptor and the
trigger is discarded.

Patches 1-6 prepare the psi code for polling support.  Patch 7
implements the adaptive polling logic, the pressure growth detection
optimized for short intervals, and hooks up write() and poll() on the
pressure files.

The patches were developed in collaboration with Johannes Weiner.

This patch (of 7):

The psi monitoring patches will need to determine the same states as
record_times().  To avoid calculating them twice, maintain a state mask
that can be consulted cheaply.  Do this in a separate patch to keep the
churn in the main feature patch at a minimum.

This adds a 4-byte state_mask member to the psi_group_cpu struct, which
results in its first cacheline-aligned part becoming 52 bytes long.  Add
explicit values to the enumeration element counters that affect the
psi_group_cpu struct size.

Link: http://lkml.kernel.org/r/20190124211518.244221-4-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 136ac591
--- a/include/linux/psi_types.h
+++ b/include/linux/psi_types.h
@@ -11,7 +11,7 @@ enum psi_task_count {
 	NR_IOWAIT,
 	NR_MEMSTALL,
 	NR_RUNNING,
-	NR_PSI_TASK_COUNTS,
+	NR_PSI_TASK_COUNTS = 3,
 };
 
 /* Task state bitmasks */
@@ -24,7 +24,7 @@ enum psi_res {
 	PSI_IO,
 	PSI_MEM,
 	PSI_CPU,
-	NR_PSI_RESOURCES,
+	NR_PSI_RESOURCES = 3,
 };
 
 /*
@@ -41,7 +41,7 @@ enum psi_states {
 	PSI_CPU_SOME,
 	/* Only per-CPU, to weigh the CPU in the global average: */
 	PSI_NONIDLE,
-	NR_PSI_STATES,
+	NR_PSI_STATES = 6,
 };
 
 struct psi_group_cpu {
@@ -53,6 +53,9 @@ struct psi_group_cpu {
 	/* States of the tasks belonging to this group */
 	unsigned int tasks[NR_PSI_TASK_COUNTS];
 
+	/* Aggregate pressure state derived from the tasks */
+	u32 state_mask;
+
 	/* Period time sampling buckets for each state of interest (ns) */
 	u32 times[NR_PSI_STATES];
--- a/kernel/sched/psi.c
+++ b/kernel/sched/psi.c
@@ -213,17 +213,17 @@ static bool test_state(unsigned int *tasks, enum psi_states state)
 static void get_recent_times(struct psi_group *group, int cpu, u32 *times)
 {
 	struct psi_group_cpu *groupc = per_cpu_ptr(group->pcpu, cpu);
-	unsigned int tasks[NR_PSI_TASK_COUNTS];
 	u64 now, state_start;
+	enum psi_states s;
 	unsigned int seq;
-	int s;
+	u32 state_mask;
 
 	/* Snapshot a coherent view of the CPU state */
 	do {
 		seq = read_seqcount_begin(&groupc->seq);
 		now = cpu_clock(cpu);
 		memcpy(times, groupc->times, sizeof(groupc->times));
-		memcpy(tasks, groupc->tasks, sizeof(groupc->tasks));
+		state_mask = groupc->state_mask;
 		state_start = groupc->state_start;
 	} while (read_seqcount_retry(&groupc->seq, seq));
 
@@ -239,7 +239,7 @@ static void get_recent_times(struct psi_group *group, int cpu, u32 *times)
 		 * (u32) and our reported pressure close to what's
 		 * actually happening.
 		 */
-		if (test_state(tasks, s))
+		if (state_mask & (1 << s))
 			times[s] += now - state_start;
 
 		delta = times[s] - groupc->times_prev[s];
@@ -407,15 +407,15 @@ static void record_times(struct psi_group_cpu *groupc, int cpu,
 	delta = now - groupc->state_start;
 	groupc->state_start = now;
 
-	if (test_state(groupc->tasks, PSI_IO_SOME)) {
+	if (groupc->state_mask & (1 << PSI_IO_SOME)) {
 		groupc->times[PSI_IO_SOME] += delta;
-		if (test_state(groupc->tasks, PSI_IO_FULL))
+		if (groupc->state_mask & (1 << PSI_IO_FULL))
 			groupc->times[PSI_IO_FULL] += delta;
 	}
 
-	if (test_state(groupc->tasks, PSI_MEM_SOME)) {
+	if (groupc->state_mask & (1 << PSI_MEM_SOME)) {
 		groupc->times[PSI_MEM_SOME] += delta;
-		if (test_state(groupc->tasks, PSI_MEM_FULL))
+		if (groupc->state_mask & (1 << PSI_MEM_FULL))
 			groupc->times[PSI_MEM_FULL] += delta;
 		else if (memstall_tick) {
 			u32 sample;
@@ -436,10 +436,10 @@ static void record_times(struct psi_group_cpu *groupc, int cpu,
 		}
 	}
 
-	if (test_state(groupc->tasks, PSI_CPU_SOME))
+	if (groupc->state_mask & (1 << PSI_CPU_SOME))
 		groupc->times[PSI_CPU_SOME] += delta;
 
-	if (test_state(groupc->tasks, PSI_NONIDLE))
+	if (groupc->state_mask & (1 << PSI_NONIDLE))
 		groupc->times[PSI_NONIDLE] += delta;
 }
 
@@ -448,6 +448,8 @@ static void psi_group_change(struct psi_group *group, int cpu,
 {
 	struct psi_group_cpu *groupc;
 	unsigned int t, m;
+	enum psi_states s;
+	u32 state_mask = 0;
 
 	groupc = per_cpu_ptr(group->pcpu, cpu);
 
@@ -480,6 +482,13 @@ static void psi_group_change(struct psi_group *group, int cpu,
 		if (set & (1 << t))
 			groupc->tasks[t]++;
 
+	/* Calculate state mask representing active states */
+	for (s = 0; s < NR_PSI_STATES; s++) {
+		if (test_state(groupc->tasks, s))
+			state_mask |= (1 << s);
+	}
+	groupc->state_mask = state_mask;
+
 	write_seqcount_end(&groupc->seq);
 }