Commit 43ca5bc4 authored by Michal Hocko, committed by Linus Torvalds

lib/rhashtable.c: simplify a strange allocation pattern

The allocation pattern in alloc_bucket_locks is quite unusual.  We
prefer vmalloc when CONFIG_NUMA is enabled.  The rationale is that
vmalloc will respect the memory policy of the current process, so the
backing memory will be distributed over multiple nodes if the requester
is configured properly.  At least that is the intention; in reality the
rhashtable is shrunk and expanded from a kernel worker, so no mempolicy
can be assumed.

Let's just simplify the code and use the kvmalloc helper, which
transparently uses kmalloc with a vmalloc fallback when the caller is
allowed to block, and plain kmalloc with the given gfp flags otherwise.
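For illustration only (not part of this patch): a caller-side sketch of
the resulting pattern.  The helper name alloc_locks and its parameters
are hypothetical; kvmalloc(), kvfree(), gfpflags_allow_blocking() and
kmalloc_array() are the existing kernel APIs.

	#include <linux/mm.h>		/* kvmalloc(), kvfree() */
	#include <linux/slab.h>		/* kmalloc_array() */
	#include <linux/gfp.h>		/* gfpflags_allow_blocking() */
	#include <linux/spinlock.h>

	/* Hypothetical caller sketch mirroring the new pattern below. */
	static spinlock_t *alloc_locks(unsigned int nr, gfp_t gfp)
	{
		spinlock_t *locks;

		if (gfpflags_allow_blocking(gfp))
			/* kmalloc first, transparent vmalloc fallback */
			locks = kvmalloc(nr * sizeof(spinlock_t), gfp);
		else
			/* atomic callers must stay with kmalloc */
			locks = kmalloc_array(nr, sizeof(spinlock_t), gfp);

		return locks;	/* free with kvfree(), which handles both */
	}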

Link: http://lkml.kernel.org/r/20170306103032.2540-4-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Tom Herbert <tom@herbertland.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 6c5ab651
@@ -86,16 +86,9 @@ static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl,
 	size = min(size, 1U << tbl->nest);
 
 	if (sizeof(spinlock_t) != 0) {
-		tbl->locks = NULL;
-#ifdef CONFIG_NUMA
-		if (size * sizeof(spinlock_t) > PAGE_SIZE &&
-		    gfp == GFP_KERNEL)
-			tbl->locks = vmalloc(size * sizeof(spinlock_t));
-#endif
-		if (gfp != GFP_KERNEL)
-			gfp |= __GFP_NOWARN | __GFP_NORETRY;
-		if (!tbl->locks)
+		if (gfpflags_allow_blocking(gfp))
+			tbl->locks = kvmalloc(size * sizeof(spinlock_t), gfp);
+		else
 			tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
 						   gfp);
 		if (!tbl->locks)
...