Commit 3f804920 authored by Sebastian Andrzej Siewior, committed by Andrew Morton

mm/vmalloc: use raw_cpu_ptr() for vmap_block_queue access

The per-CPU resource vmap_block_queue is accessed via get_cpu_var().  That
macro disables preemption and then returns the pointer to the current CPU's
instance.

This doesn't work on PREEMPT_RT because spinlock_t is a sleeping lock there,
and it is later acquired within the preempt-disabled section.  Sleeping with
preemption disabled is not allowed.
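
For illustration, a condensed sketch of the old pattern, with simplified
declarations borrowed from mm/vmalloc.c (this is not the exact macro
expansion, just the effective behaviour):

	struct vmap_block_queue {
		spinlock_t lock;
		struct list_head free;	/* free vmap blocks of this CPU */
	};
	static DEFINE_PER_CPU(struct vmap_block_queue, vmap_block_queue);

	/* get_cpu_var() ~= preempt_disable() + this_cpu_ptr() */
	vbq = &get_cpu_var(vmap_block_queue);
	spin_lock(&vbq->lock);		/* sleeping lock on PREEMPT_RT */
	list_add_tail_rcu(&vb->free_list, &vbq->free);
	spin_unlock(&vbq->lock);
	put_cpu_var(vmap_block_queue);	/* preempt_enable() */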

There is no need to disable preemption while accessing the per-CPU struct
vmap_block_queue because the list is protected with a spinlock_t.  The
per-CPU struct is also accessed cross-CPU in purge_fragmented_blocks().

It is possible that by using raw_cpu_ptr() the task migrates to another
CPU and uses the struct from another CPU.  This is fine because the list is
protected by the lock and the locked section is very short.
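
A sketch of the same path after the change (same simplified declarations
as above):

	/*
	 * raw_cpu_ptr() does not disable preemption, so the task may
	 * migrate after the load and then queue onto another CPU's
	 * list.  That is safe: vbq->lock serializes all users.
	 */
	vbq = raw_cpu_ptr(&vmap_block_queue);
	spin_lock(&vbq->lock);	/* fine on RT, preemption stays enabled */
	list_add_tail_rcu(&vb->free_list, &vbq->free);
	spin_unlock(&vbq->lock);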

Use raw_cpu_ptr() to access vmap_block_queue.

Link: https://lkml.kernel.org/r/YnKx3duAB53P7ojN@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent fe573327
mm/vmalloc.c
@@ -1938,11 +1938,10 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 		return ERR_PTR(err);
 	}
 
-	vbq = &get_cpu_var(vmap_block_queue);
+	vbq = raw_cpu_ptr(&vmap_block_queue);
 	spin_lock(&vbq->lock);
 	list_add_tail_rcu(&vb->free_list, &vbq->free);
 	spin_unlock(&vbq->lock);
-	put_cpu_var(vmap_block_queue);
 
 	return vaddr;
 }
@@ -2021,7 +2020,7 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
 	order = get_order(size);
 
 	rcu_read_lock();
-	vbq = &get_cpu_var(vmap_block_queue);
+	vbq = raw_cpu_ptr(&vmap_block_queue);
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
 		unsigned long pages_off;
 
@@ -2044,7 +2043,6 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
 		break;
 	}
 
-	put_cpu_var(vmap_block_queue);
 	rcu_read_unlock();
 
 	/* Allocate new block if nothing was found */