Commit 5d0ce359 authored by Jiebin Sun, committed by Andrew Morton

percpu: add percpu_counter_add_local and percpu_counter_sub_local

Patch series "ipc/msg: mitigate the lock contention in ipc/msg", v6.

Here are two patches to mitigate the lock contention in ipc/msg.

The 1st patch adds the new interfaces percpu_counter_add_local and
percpu_counter_sub_local.  In the heavy-write, rare-read case, the batch
size passed to percpu_counter_add_batch should be very large.  The
"_local" versions mostly update the local per-CPU count, reducing global
updates and mitigating lock contention on the write path.

The 2nd patch uses percpu_counter instead of atomic updates in ipc/msg.
The msg_bytes and msg_hdrs atomic counters are frequently updated when the
IPC msg queue is in heavy use, causing heavy cache bouncing and overhead.
Changing them to percpu_counter greatly improves performance.  Since there
is one percpu struct per namespace, the additional memory cost is minimal.
The count is read only in the msgctl call, which is infrequent, so the
need to sum up the per-CPU counts is also infrequent.
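As a rough illustration of the 2nd patch's approach (a hedged sketch only;
the structure and function names below are made up and are not the actual
ipc/msg code), converting per-namespace atomic counters to percpu_counter
looks roughly like this:

  /*
   * Hedged sketch: per-namespace counters converted from atomic_long_t
   * to percpu_counter.  Names are illustrative, not the real ipc fields.
   */
  struct msg_ns_counters {
  	struct percpu_counter msg_bytes;	/* was atomic_long_t */
  	struct percpu_counter msg_hdrs;		/* was atomic_long_t */
  };

  static int msg_ns_counters_init(struct msg_ns_counters *c)
  {
  	int err;

  	err = percpu_counter_init(&c->msg_bytes, 0, GFP_KERNEL);
  	if (err)
  		return err;
  	err = percpu_counter_init(&c->msg_hdrs, 0, GFP_KERNEL);
  	if (err)
  		percpu_counter_destroy(&c->msg_bytes);
  	return err;
  }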


This patch (of 2):

In the heavy-write, rare-read case, the batch size passed to
percpu_counter_add_batch should be very large.  Add the "_local" versions,
which mostly update the local per-CPU count, reducing global updates and
mitigating lock contention on the write path.
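For illustration only (not part of this patch; "stats", "hot_path_update"
and "rare_read" are invented names), a write-heavy, rarely-read counter
would use the new interface roughly like this:

  /* Hedged sketch of the intended usage pattern; names are invented. */
  static struct percpu_counter stats;

  static void hot_path_update(s64 nbytes)
  {
  	/* Frequent writer: almost always stays in the local per-CPU count. */
  	percpu_counter_add_local(&stats, nbytes);
  }

  static s64 rare_read(void)
  {
  	/*
  	 * Rare reader: must use percpu_counter_sum(), not
  	 * percpu_counter_read(), because with an INT_MAX batch most
  	 * updates never reach fbc->count.
  	 */
  	return percpu_counter_sum(&stats);
  }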

Link: https://lkml.kernel.org/r/20220913192538.3023708-1-jiebin.sun@intel.com
Link: https://lkml.kernel.org/r/20220913192538.3023708-2-jiebin.sun@intel.com
Signed-off-by: Jiebin Sun <jiebin.sun@intel.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Alexander Mikhalitsyn <alexander.mikhalitsyn@virtuozzo.com>
Cc: Alexey Gladkov <legion@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Dennis Zhou <dennis@kernel.org>
Cc: "Eric W . Biederman" <ebiederm@xmission.com>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vasily Averin <vasily.averin@linux.dev>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent e77999c1
include/linux/percpu_counter.h

@@ -15,6 +15,9 @@
 #include <linux/types.h>
 #include <linux/gfp.h>
 
+/* percpu_counter batch for local add or sub */
+#define PERCPU_COUNTER_LOCAL_BATCH	INT_MAX
+
 #ifdef CONFIG_SMP
 
 struct percpu_counter {
@@ -56,6 +59,22 @@ static inline void percpu_counter_add(struct percpu_counter *fbc, s64 amount)
 	percpu_counter_add_batch(fbc, amount, percpu_counter_batch);
 }
 
+/*
+ * With percpu_counter_add_local() and percpu_counter_sub_local(), counts
+ * are accumulated in local per cpu counter and not in fbc->count until
+ * local count overflows PERCPU_COUNTER_LOCAL_BATCH. This makes counter
+ * write efficient.
+ * But percpu_counter_sum(), instead of percpu_counter_read(), needs to be
+ * used to add up the counts from each CPU to account for all the local
+ * counts. So percpu_counter_add_local() and percpu_counter_sub_local()
+ * should be used when a counter is updated frequently and read rarely.
+ */
+static inline void
+percpu_counter_add_local(struct percpu_counter *fbc, s64 amount)
+{
+	percpu_counter_add_batch(fbc, amount, PERCPU_COUNTER_LOCAL_BATCH);
+}
+
 static inline s64 percpu_counter_sum_positive(struct percpu_counter *fbc)
 {
 	s64 ret = __percpu_counter_sum(fbc);
@@ -138,6 +157,13 @@ percpu_counter_add(struct percpu_counter *fbc, s64 amount)
 	preempt_enable();
 }
 
+/* non-SMP percpu_counter_add_local is the same with percpu_counter_add */
+static inline void
+percpu_counter_add_local(struct percpu_counter *fbc, s64 amount)
+{
+	percpu_counter_add(fbc, amount);
+}
+
 static inline void
 percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
 {
@@ -193,4 +219,10 @@ static inline void percpu_counter_sub(struct percpu_counter *fbc, s64 amount)
 	percpu_counter_add(fbc, -amount);
 }
 
+static inline void
+percpu_counter_sub_local(struct percpu_counter *fbc, s64 amount)
+{
+	percpu_counter_add_local(fbc, -amount);
+}
+
 #endif /* _LINUX_PERCPU_COUNTER_H */