Commit 153090f2 authored by Baoquan He, committed by Andrew Morton (akpm)

mm/vmalloc: add code comment for find_vmap_area_exceed_addr()

Its behaviour is like find_vma(), which finds an area above the specified
address. Add a comment to make it easier to understand.

Also fix a grammar mistake and a typo in two places.

Link: https://lkml.kernel.org/r/20220607105958.382076-5-bhe@redhat.com
Signed-off-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent baa468a6
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -790,6 +790,7 @@ unsigned long vmalloc_nr_pages(void)
 	return atomic_long_read(&nr_vmalloc_pages);
 }
 
+/* Look up the first VA which satisfies addr < va_end, NULL if none. */
 static struct vmap_area *find_vmap_area_exceed_addr(unsigned long addr)
 {
 	struct vmap_area *va = NULL;
@@ -929,7 +930,7 @@ link_va(struct vmap_area *va, struct rb_root *root,
 	 * Some explanation here. Just perform simple insertion
 	 * to the tree. We do not set va->subtree_max_size to
 	 * its current size before calling rb_insert_augmented().
-	 * It is because of we populate the tree from the bottom
+	 * It is because we populate the tree from the bottom
 	 * to parent levels when the node _is_ in the tree.
 	 *
 	 * Therefore we set subtree_max_size to zero after insertion,
@@ -1655,7 +1656,7 @@ static atomic_long_t vmap_lazy_nr = ATOMIC_LONG_INIT(0);
 
 /*
  * Serialize vmap purging. There is no actual critical section protected
- * by this look, but we want to avoid concurrent calls for performance
+ * by this lock, but we want to avoid concurrent calls for performance
  * reasons and to make the pcpu_get_vm_areas more deterministic.
  */
 static DEFINE_MUTEX(vmap_purge_lock);