Commit ef932473 authored by Joonsoo Kim, committed by Linus Torvalds

mm, vmalloc: change iterating a vmlist to find_vm_area()

This patchset removes vm_struct list management after vmalloc
initialization.  Adding or removing an entry on the vmlist takes linear
time, which is inefficient: as long as we maintain this list, the
overall time complexity of adding and removing an area in vmalloc space
is O(N), even though we use an rbtree to find a vacant place in just
O(logN).

Moreover, vmlist and vmlist_lock are used in many places outside of
vmalloc.c.  It is preferable to hide these raw data structures and
provide well-defined functions for accessing them, so that callers
cannot misuse the structures and the vmalloc layer becomes easier to
maintain.

For kexec and makedumpfile, I export vmap_area_list instead of vmlist.
This follows Atsushi's recommendation.  For more information, please
refer to the link below.  https://lkml.org/lkml/2012/12/6/184

This patch:

The only purpose of iterating the vmlist is to find the vm area with a
specific virtual address.  find_vm_area() serves exactly this purpose
and is more efficient, because it uses an rbtree.  So use it instead.
Signed-off-by: Joonsoo Kim <js1304@gmail.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn>
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Atsushi Kumagai <kumagai-atsushi@mxc.nes.nec.co.jp>
Cc: Dave Anderson <anderson@redhat.com>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 71368511
@@ -592,12 +592,7 @@ void iounmap(volatile void __iomem *addr_in)
 	   in parallel. Reuse of the virtual address is prevented by
 	   leaving it in the global lists until we're done with it.
 	   cpa takes care of the direct mappings. */
-	read_lock(&vmlist_lock);
-	for (p = vmlist; p; p = p->next) {
-		if (p->addr == addr)
-			break;
-	}
-	read_unlock(&vmlist_lock);
+	p = find_vm_area((void *)addr);
 
 	if (!p) {
 		pr_err("iounmap: bad address %p\n", addr);
...
@@ -235,7 +235,7 @@ EXPORT_SYMBOL(__uc32_ioremap_cached);
 void __uc32_iounmap(volatile void __iomem *io_addr)
 {
 	void *addr = (void *)(PAGE_MASK & (unsigned long)io_addr);
-	struct vm_struct **p, *tmp;
+	struct vm_struct *vm;
 
 	/*
 	 * If this is a section based mapping we need to handle it
...
@@ -244,17 +244,10 @@ void __uc32_iounmap(volatile void __iomem *io_addr)
 	 * all the mappings before the area can be reclaimed
 	 * by someone else.
 	 */
-	write_lock(&vmlist_lock);
-	for (p = &vmlist ; (tmp = *p) ; p = &tmp->next) {
-		if ((tmp->flags & VM_IOREMAP) && (tmp->addr == addr)) {
-			if (tmp->flags & VM_UNICORE_SECTION_MAPPING) {
-				unmap_area_sections((unsigned long)tmp->addr,
-						    tmp->size);
-			}
-			break;
-		}
-	}
-	write_unlock(&vmlist_lock);
+	vm = find_vm_area(addr);
+	if (vm && (vm->flags & VM_IOREMAP) &&
+	    (vm->flags & VM_UNICORE_SECTION_MAPPING))
+		unmap_area_sections((unsigned long)vm->addr, vm->size);
 
 	vunmap(addr);
 }
...
@@ -282,12 +282,7 @@ void iounmap(volatile void __iomem *addr)
 	   in parallel. Reuse of the virtual address is prevented by
 	   leaving it in the global lists until we're done with it.
 	   cpa takes care of the direct mappings. */
-	read_lock(&vmlist_lock);
-	for (p = vmlist; p; p = p->next) {
-		if (p->addr == (void __force *)addr)
-			break;
-	}
-	read_unlock(&vmlist_lock);
+	p = find_vm_area((void __force *)addr);
 
 	if (!p) {
 		printk(KERN_ERR "iounmap: bad address %p\n", addr);
...