Commit 408587ba authored by Shakeel Butt, committed by Andrew Morton

mm: page_counter: rearrange struct page_counter fields

With memcg v2 enabled, memcg->memory.usage is a very hot member for
workloads that do memcg charging on multiple CPUs concurrently,
particularly network-intensive workloads.  In addition, there is false
cache sharing between memory.usage and memory.high on the charge path.
This patch moves 'usage' into a separate cacheline and moves all the
read-mostly fields into another separate cacheline.
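
The layout change is easiest to see in a standalone sketch.  The
following is a minimal userspace illustration of the same padding idiom,
not the kernel code itself: it assumes a 64-byte cacheline (the kernel
derives the alignment from ____cacheline_internodealigned_in_smp, which
is architecture-dependent) and uses hypothetical type and field names.
Build with gcc or clang, since zero-length arrays are a GNU extension,
as in the kernel.

#include <stdatomic.h>
#include <stddef.h>
#include <stdio.h>

#define CACHELINE 64    /* assumed cacheline size for this sketch */

/* Zero-size type whose only job is to carry cacheline alignment. */
struct pc_padding {
        char x[0];
} __attribute__((aligned(CACHELINE)));

struct counter {
        atomic_long usage;              /* hot: written on every charge */
        struct pc_padding _pad1_;       /* 0 bytes, but cacheline-aligned */
        long high;                      /* read-mostly limits */
        long max;
};

/* The padding member pushes the limits onto the next cacheline, so
 * stores to 'usage' no longer invalidate the line that readers of
 * 'high' and 'max' keep shared. */
_Static_assert(offsetof(struct counter, high) == CACHELINE,
               "read-mostly fields start on their own cacheline");

int main(void)
{
        printf("usage @ %zu, high @ %zu, sizeof = %zu\n",
               offsetof(struct counter, usage),
               offsetof(struct counter, high),
               sizeof(struct counter));
        return 0;
}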

To evaluate the impact of this optimization, on a 72-CPU machine, we ran
the following workload in a three-level cgroup hierarchy.

 $ netserver -6
 # 36 instances of netperf with the following params
 $ netperf -6 -H ::1 -l 60 -t TCP_SENDFILE -- -m 10K

Results (average throughput of netperf):
Without (6.0-rc1)	10482.7 Mbps
With patch		12413.7 Mbps (18.4% improvement)

One side effect of this patch is an increase in the size of struct
mem_cgroup.  For example, with this patch on a 64-bit build, the size of
struct mem_cgroup increases from 4032 bytes to 4416 bytes.  However, the
additional size is worth it for the performance improvement.  In
addition, there are opportunities to shrink struct mem_cgroup later,
such as deprecating the kmem and tcpmem page counters and packing the
struct better.
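
For reference, a size like the one quoted above can be read off any
kernel built with debug info; pahole prints the struct layout along with
a trailing summary of its size, cachelines, and holes:

 $ pahole -C mem_cgroup vmlinux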

Link: https://lkml.kernel.org/r/20220825000506.239406-3-shakeelb@google.com
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Reviewed-by: Feng Tang <feng.tang@intel.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: "Michal Koutný" <mkoutny@suse.com>
Cc: Muchun Song <songmuchun@bytedance.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent cfdab60b
include/linux/page_counter.h
@@ -3,15 +3,26 @@
 #define _LINUX_PAGE_COUNTER_H
 
 #include <linux/atomic.h>
+#include <linux/cache.h>
 #include <linux/kernel.h>
 #include <asm/page.h>
 
+#if defined(CONFIG_SMP)
+struct pc_padding {
+        char x[0];
+} ____cacheline_internodealigned_in_smp;
+#define PC_PADDING(name)        struct pc_padding name
+#else
+#define PC_PADDING(name)
+#endif
+
 struct page_counter {
+        /*
+         * Make sure 'usage' does not share cacheline with any other field. The
+         * memcg->memory.usage is a hot member of struct mem_cgroup.
+         */
         atomic_long_t usage;
-        unsigned long min;
-        unsigned long low;
-        unsigned long high;
-        unsigned long max;
+        PC_PADDING(_pad1_);
 
         /* effective memory.min and memory.min usage tracking */
         unsigned long emin;
@@ -23,18 +34,18 @@ struct page_counter {
         atomic_long_t low_usage;
         atomic_long_t children_low_usage;
 
-        /* legacy */
         unsigned long watermark;
         unsigned long failcnt;
 
-        /*
-         * 'parent' is placed here to be far from 'usage' to reduce
-         * cache false sharing, as 'usage' is written mostly while
-         * parent is frequently read for cgroup's hierarchical
-         * counting nature.
-         */
+        /* Keep all the read-mostly fields in a separate cacheline. */
+        PC_PADDING(_pad2_);
+
+        unsigned long min;
+        unsigned long low;
+        unsigned long high;
+        unsigned long max;
         struct page_counter *parent;
-};
+} ____cacheline_internodealigned_in_smp;
 
 #if BITS_PER_LONG == 32
 #define PAGE_COUNTER_MAX LONG_MAX
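
Two details of the PC_PADDING() idiom above are worth noting.  First,
struct pc_padding has zero size, so the gaps come entirely from its
cacheline alignment (the compiler inserts the padding implicitly) rather
than from explicit filler bytes.  Second, on !CONFIG_SMP builds the
macro expands to nothing: a uniprocessor kernel cannot suffer false
sharing, so it pays no size cost.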