Commit 9f907439 authored by Tonghao Zhang, committed by Martin KaFai Lau

bpf: hash map, avoid deadlock with suitable hash mask

A deadlock may still occur when the same bucket is accessed from both NMI and non-NMI context on the same CPU, because the NMI path can hash into the same bucket while using a different map_locked index.

For example, with .max_entries = 2, an update on one CPU with key = 4 may be interrupted by a bpf prog running in nmi_handle() that updates the map with key = 20. The two operations hit the same bucket index but different map_locked indices, so the per-CPU map_locked counter does not detect the re-entrancy and the NMI path spins on the bucket lock already held by the interrupted context.

To fix this, mask the hash with the minimum of HASHTAB_MAP_LOCK_MASK and (n_buckets - 1), so that hashes that select the same bucket always select the same map_locked index.
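To make the index arithmetic concrete, here is a minimal user-space sketch (not kernel code). It assumes HASHTAB_MAP_LOCK_MASK == 7 (eight per-CPU map_locked counters) and n_buckets == 2, and uses the hypothetical hash values 4 and 6 in place of the real bucket hashes of the two keys:

/*
 * Standalone illustration of the old and new map_locked masking.
 * The hash values below are hypothetical stand-ins chosen so that
 * they collide on the bucket index but not on the old lock index.
 */
#include <stdio.h>

#define HASHTAB_MAP_LOCK_MASK	7	/* HASHTAB_MAP_LOCK_COUNT - 1 */

static unsigned int min_u32(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned int n_buckets = 2;	/* roundup_pow_of_two(max_entries) */
	unsigned int hashes[] = { 4, 6 };

	for (int i = 0; i < 2; i++) {
		unsigned int hash = hashes[i];
		unsigned int bucket  = hash & (n_buckets - 1);
		unsigned int old_idx = hash & HASHTAB_MAP_LOCK_MASK;
		unsigned int new_idx = hash & min_u32(HASHTAB_MAP_LOCK_MASK,
						      n_buckets - 1);

		printf("hash %u: bucket %u, map_locked old %u, new %u\n",
		       hash, bucket, old_idx, new_idx);
	}
	/*
	 * Both hashes select bucket 0, but the old masking yields
	 * map_locked indices 4 and 6: the NMI path increments a
	 * different counter, passes the re-entrancy check, and then
	 * spins on the bucket lock already held on this CPU. With the
	 * min mask both hashes yield index 0, so the check trips and
	 * -EBUSY is returned instead of deadlocking.
	 */
	return 0;
}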

Fixes: 20b6cc34 ("bpf: Avoid hashtab deadlock with map_locked")
Signed-off-by: Tonghao Zhang <tong@infragraf.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Andrii Nakryiko <andrii@kernel.org>
Cc: Martin KaFai Lau <martin.lau@linux.dev>
Cc: Song Liu <song@kernel.org>
Cc: Yonghong Song <yhs@fb.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: KP Singh <kpsingh@kernel.org>
Cc: Stanislav Fomichev <sdf@google.com>
Cc: Hao Luo <haoluo@google.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Hou Tao <houtao1@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20230111092903.92389-1-tong@infragraf.org
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
parent e7895f01
@@ -152,7 +152,7 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
 {
 	unsigned long flags;
 
-	hash = hash & HASHTAB_MAP_LOCK_MASK;
+	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
 
 	preempt_disable();
 	if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
@@ -171,7 +171,7 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
 				      struct bucket *b, u32 hash,
 				      unsigned long flags)
 {
-	hash = hash & HASHTAB_MAP_LOCK_MASK;
+	hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
 	raw_spin_unlock_irqrestore(&b->raw_lock, flags);
 	__this_cpu_dec(*(htab->map_locked[hash]));
 	preempt_enable();