Commit 2e12072c authored by Abel Wu, committed by Paolo Abeni

sock: Doc behaviors for pressure heuristics

There are now two accounting infrastructures for skmem, while the
heuristics in __sk_mem_raise_allocated() were introduced before
memcg existed.

Add some comments to clarify whether they can be applied to both
infrastructures or not.
Suggested-by: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
Acked-by: Shakeel Butt <shakeelb@google.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://lore.kernel.org/r/20231019120026.42215-2-wuyun.abel@bytedance.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
parent 2def8ff3
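
For context, the per-socket pressure check consults both accounting
infrastructures: memcg first, then the global per-protocol flag. A
simplified sketch of sk_under_memory_pressure() in include/net/sock.h,
paraphrased from kernels of roughly this vintage (exact details may
differ between versions):

static inline bool sk_under_memory_pressure(const struct sock *sk)
{
        /* Protocols without a pressure flag are never "under pressure". */
        if (!sk->sk_prot->memory_pressure)
                return false;

        /* memcg-based accounting: pressure signalled by the socket's cgroup. */
        if (mem_cgroup_sockets_enabled && sk->sk_memcg &&
            mem_cgroup_under_socket_pressure(sk->sk_memcg))
                return true;

        /* Global, pre-memcg accounting: per-protocol pressure flag. */
        return !!READ_ONCE(*sk->sk_prot->memory_pressure);
}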
@@ -3067,7 +3067,14 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 	if (allocated > sk_prot_mem_limits(sk, 2))
 		goto suppress_allocation;
 
-	/* guarantee minimum buffer size under pressure */
+	/* Guarantee minimum buffer size under pressure (either global
+	 * or memcg) to make sure features described in RFC 7323 (TCP
+	 * Extensions for High Performance) work properly.
+	 *
+	 * This rule does NOT stand when exceeding the global or memcg
+	 * hard limit, or else a DoS attack could take place by spawning
+	 * lots of sockets whose usage stays under the minimum buffer size.
+	 */
 	if (kind == SK_MEM_RECV) {
 		if (atomic_read(&sk->sk_rmem_alloc) < sk_get_rmem0(sk, prot))
 			return 1;
@@ -3088,6 +3095,11 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 		if (!sk_under_memory_pressure(sk))
 			return 1;
 
+		/* Try to be fair among all the sockets under global
+		 * pressure by allowing the ones below average usage
+		 * to raise.
+		 */
 		alloc = sk_sockets_allocated_read_positive(sk);
 		if (sk_prot_mem_limits(sk, 2) > alloc *
 		    sk_mem_pages(sk->sk_wmem_queued +
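
The fairness check in the second hunk lets a socket keep growing only
while its own usage, scaled by the number of sockets of this protocol,
stays below the protocol's hard limit, i.e. while it uses less than the
average per-socket share. A minimal userspace sketch of that arithmetic
(hypothetical helper name and example values, not kernel code):

#include <stdbool.h>
#include <stdio.h>

/* Models the condition:
 *     sk_prot_mem_limits(sk, 2) > alloc * sk_mem_pages(usage)
 * The socket may still raise its allocation while its usage (in pages),
 * multiplied by the number of sockets, stays below the hard limit --
 * equivalently, while it uses less than its fair share.
 */
static bool below_fair_share(unsigned long hard_limit_pages,
                             unsigned long nr_sockets,
                             unsigned long usage_pages)
{
        return hard_limit_pages > nr_sockets * usage_pages;
}

int main(void)
{
        /* A hard limit of 4096 pages shared by 128 sockets gives a fair
         * share of 32 pages: a socket using 10 pages may raise, one
         * using 80 pages may not.
         */
        printf("%d\n", below_fair_share(4096, 128, 10)); /* prints 1 */
        printf("%d\n", below_fair_share(4096, 128, 80)); /* prints 0 */
        return 0;
}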