Commit b944afc9 authored by Christoph Hellwig, committed by Linus Torvalds

mm: add a VM_MAP_PUT_PAGES flag for vmap

Add a flag so that vmap takes ownership of the passed-in page array.  When
vfree is called on such an allocation, it will put one reference on each
page and free the page array itself.
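
For context, and not part of this commit: a minimal sketch of how a caller might hand a page array to vmap() under the new flag. example_map_pages() is a hypothetical helper, and its error path assumes ownership only transfers to vmap() when the mapping succeeds.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *example_map_pages(unsigned int count)
{
	struct page **pages;
	unsigned int i;
	void *vaddr;

	/* The array itself must be kmalloc or vmalloc memory (kvmalloc works). */
	pages = kvmalloc_array(count, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	for (i = 0; i < count; i++) {
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			goto err_free;
	}

	/*
	 * With VM_MAP_PUT_PAGES, a successful vmap() takes over both the
	 * page references and the pages array; a later vfree(vaddr) puts
	 * one reference on each page and frees the array.
	 */
	vaddr = vmap(pages, count, VM_MAP | VM_MAP_PUT_PAGES, PAGE_KERNEL);
	if (!vaddr)
		goto err_free;	/* assumption: caller still owns everything on failure */
	return vaddr;

err_free:
	while (i--)
		__free_page(pages[i]);
	kvfree(pages);
	return NULL;
}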
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Nitin Gupta <ngupta@vflare.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Uladzislau Rezki (Sony) <urezki@gmail.com>
Link: https://lkml.kernel.org/r/20201002122204.1534411-3-hch@lst.de
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent fa307474
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -24,6 +24,7 @@ struct notifier_block;		/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040	/* don't add guard page */
 #define VM_KASAN		0x00000080	/* has allocated kasan shadow memory */
+#define VM_MAP_PUT_PAGES	0x00000100	/* put pages and free array in vfree */
 
 /*
  * VM_KASAN is used slighly differently depending on CONFIG_KASAN_VMALLOC.
...
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2377,8 +2377,11 @@ EXPORT_SYMBOL(vunmap);
  * @flags: vm_area->flags
  * @prot: page protection for the mapping
  *
- * Maps @count pages from @pages into contiguous kernel virtual
- * space.
+ * Maps @count pages from @pages into contiguous kernel virtual space.
+ * If @flags contains %VM_MAP_PUT_PAGES the ownership of the pages array itself
+ * (which must be kmalloc or vmalloc memory) and one reference per pages in it
+ * are transferred from the caller to vmap(), and will be freed / dropped when
+ * vfree() is called on the return value.
  *
  * Return: the address of the area or %NULL on failure
  */
@@ -2404,6 +2407,8 @@ void *vmap(struct page **pages, unsigned int count,
 		return NULL;
 	}
 
+	if (flags & VM_MAP_PUT_PAGES)
+		area->pages = pages;
 	return area->addr;
 }
 EXPORT_SYMBOL(vmap);
...