- 14 Dec, 2013 40 commits
-
Rebecca Schultz Zavin authored
Add the ability for a heap to free buffers asynchronously. Freed buffers are placed on a free list and freed from a low priority background thread. If allocations from a particular heap fail, the free list is drained. This patch also enables asynchronous frees from the chunk heap.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
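A minimal sketch of the deferred-free idea, using illustrative names (`my_heap`, `my_buffer`, `heap_freelist_*`) rather than the actual staging code: frees go on a list, a low-priority kthread drains it, and an allocation-failure path can drain it synchronously.

```c
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

struct my_buffer {
	struct list_head list;
};

struct my_heap {
	struct list_head free_list;
	spinlock_t free_lock;
	wait_queue_head_t waitqueue;
	struct task_struct *task;
	void (*free)(struct my_heap *heap, struct my_buffer *buffer);
};

/* Called from the normal free path: defer the real work. */
static void heap_freelist_add(struct my_heap *heap, struct my_buffer *buffer)
{
	spin_lock(&heap->free_lock);
	list_add(&buffer->list, &heap->free_list);
	spin_unlock(&heap->free_lock);
	wake_up(&heap->waitqueue);
}

/* Called on allocation failure (and by the thread): free everything now. */
static void heap_freelist_drain(struct my_heap *heap)
{
	struct my_buffer *buffer;

	spin_lock(&heap->free_lock);
	while (!list_empty(&heap->free_list)) {
		buffer = list_first_entry(&heap->free_list,
					  struct my_buffer, list);
		list_del(&buffer->list);
		spin_unlock(&heap->free_lock);
		heap->free(heap, buffer);
		spin_lock(&heap->free_lock);
	}
	spin_unlock(&heap->free_lock);
}

static int heap_deferred_free(void *data)
{
	struct my_heap *heap = data;

	while (!kthread_should_stop()) {
		wait_event_interruptible(heap->waitqueue,
					 !list_empty(&heap->free_list));
		heap_freelist_drain(heap);
	}
	return 0;
}

static int heap_init_deferred_free(struct my_heap *heap)
{
	INIT_LIST_HEAD(&heap->free_list);
	spin_lock_init(&heap->free_lock);
	init_waitqueue_head(&heap->waitqueue);
	heap->task = kthread_run(heap_deferred_free, heap, "heap-free");
	if (IS_ERR(heap->task))
		return PTR_ERR(heap->task);
	set_user_nice(heap->task, 19);	/* lowest priority */
	return 0;
}
```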
-
Johan Mossberg authored
Currently ion can only share buffers via dma-buf fds. Fds cannot be used inside the kernel because they are process specific, so support for sharing buffers via dma-buf kernel handles is needed to cover kernel-only use cases. An example use case could be a GPU driver using ion that wants to share its output buffers with a 3rd-party display controller driver supporting dma-buf.
Signed-off-by: Johan Mossberg <johan.mossberg@stericsson.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
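For the kernel-only case, the consumer works with a `struct dma_buf *` directly through the standard dma-buf kernel API instead of an fd. A hedged sketch of the consumer side; the exporter helper name `my_export_dma_buf()` is an assumption, not the actual ion function:

```c
#include <linux/device.h>
#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Assumed exporter helper returning a kernel dma-buf handle. */
struct dma_buf *my_export_dma_buf(void *priv);

static int consume_shared_buffer(struct device *dev, void *priv)
{
	struct dma_buf *dmabuf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	dmabuf = my_export_dma_buf(priv);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	/* Attach the importing device and map the buffer for DMA. */
	attach = dma_buf_attach(dmabuf, dev);
	if (IS_ERR(attach)) {
		dma_buf_put(dmabuf);
		return PTR_ERR(attach);
	}

	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(dmabuf, attach);
		dma_buf_put(dmabuf);
		return PTR_ERR(sgt);
	}

	/* ... program the device with sgt ... */

	dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
	dma_buf_detach(dmabuf, attach);
	dma_buf_put(dmabuf);
	return 0;
}
```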
-
Rebecca Schultz Zavin authored
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
Refactor the code in the system heap used to map and zero the buffers into a separate utility so it can be called from other heaps. Use it from the chunk heap.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
vmap/vunmap spend a significant amount of time allocating the address space to map into. Rather than allocating address space for each page, allocate it once for the entire allocation and then just map and unmap each page into that address space.
Signed-off-by: Rebecca Schultz Zavin <rschultz@google.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
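A simplified illustration of the cost being avoided, with a hypothetical zeroing helper: mapping the whole page array with a single vmap() call so the address-space allocation happens once, versus the slow pattern of one vmap/vunmap per page. (The real patch maps pages into one pre-allocated area; this sketch only shows why repeated address-space allocation hurts.)

```c
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

/* Zero a buffer described by an array of pages using a single mapping. */
static int zero_pages_once(struct page **pages, int npages)
{
	void *addr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);

	if (!addr)
		return -ENOMEM;
	memset(addr, 0, (size_t)npages * PAGE_SIZE);
	vunmap(addr);
	return 0;
}

/* The slower pattern: one address-space allocation per page. */
static void zero_pages_one_by_one(struct page **pages, int npages)
{
	int i;

	for (i = 0; i < npages; i++) {
		void *addr = vmap(&pages[i], 1, VM_MAP, PAGE_KERNEL);

		if (!addr)
			continue;
		memset(addr, 0, PAGE_SIZE);
		vunmap(addr);
	}
}
```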
-
Rebecca Schultz Zavin authored
The heapmask in the client generally wasn't being used. This patch removes it.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Johan Mossberg authored
This will enable modules to allocate memory with ion.
Signed-off-by: Johan Mossberg <johan.mossberg@stericsson.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
There is some confusion about when to use the heap type and when to use the id. This patch clarifies this by using clearer variable names and describing the intention in the comments. It also fixes the client debug code to print heaps by id instead of type.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
This patch adds support for a chunk heap that allows for buffers that are made up of a list of fixed size chunks taken from a carveout. Chunk sizes are configured when the heaps are created by passing the chunk size in the priv field of the heap platform data.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
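A sketch of how a carveout can be carved into fixed-size chunks with the kernel's genalloc pool and described as a scatterlist with one entry per chunk. The names and the ion plumbing are assumptions; only the gen_pool and scatterlist calls are the real kernel APIs.

```c
#include <linux/genalloc.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>

struct chunk_heap {
	struct gen_pool *pool;		/* backed by the carveout region */
	unsigned long chunk_size;	/* from the heap platform data priv field */
};

/* Build a buffer as a list of fixed-size chunks, one sg entry per chunk. */
static int chunk_heap_allocate(struct chunk_heap *heap,
			       struct sg_table *table, unsigned long size)
{
	unsigned long nchunks = DIV_ROUND_UP(size, heap->chunk_size);
	struct scatterlist *sg;
	unsigned long i;
	int j, ret;

	ret = sg_alloc_table(table, nchunks, GFP_KERNEL);
	if (ret)
		return ret;

	sg = table->sgl;
	for (i = 0; i < nchunks; i++) {
		unsigned long paddr = gen_pool_alloc(heap->pool,
						     heap->chunk_size);
		if (!paddr)
			goto err;
		sg_set_page(sg, pfn_to_page(paddr >> PAGE_SHIFT),
			    heap->chunk_size, 0);
		sg = sg_next(sg);
	}
	return 0;

err:
	/* Give back the chunks taken so far, then drop the table. */
	for_each_sg(table->sgl, sg, i, j)
		gen_pool_free(heap->pool, sg_phys(sg), heap->chunk_size);
	sg_free_table(table);
	return -ENOMEM;
}
```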
-
Rebecca Schultz Zavin authored
The system heap contained several general purpose functions to map buffers to the kernel and userspace. This patch refactors those into ion_heap.c so they can be used by other heaps.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
Switches the rbtree of heaps to a plist. This significantly simplifies the code, and since the list is small and modified only at first boot, the rbtree is unnecessary. This also switches the traversal of the heap list to go from highest to lowest id. This allows allocations to pass a heap mask that falls back on the system heap -- typically id 0, which is the common case.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
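A sketch of the plist pattern with illustrative types (not the actual ion structures): each heap carries a plist_node keyed by its id, the device holds a plist_head, and allocation walks the list in priority order. plist sorts ascending, so the sketch negates the id to visit highest ids first, as the text describes.

```c
#include <linux/plist.h>

struct my_heap {
	unsigned int id;
	struct plist_node node;
};

struct my_device {
	struct plist_head heaps;	/* kept sorted by heap id */
};

static void device_init(struct my_device *dev)
{
	plist_head_init(&dev->heaps);
}

static void device_add_heap(struct my_device *dev, struct my_heap *heap)
{
	/* Negate the id so the walk below visits highest ids first. */
	plist_node_init(&heap->node, -((int)heap->id));
	plist_add(&heap->node, &dev->heaps);
}

static struct my_heap *device_find_heap(struct my_device *dev,
					unsigned int heap_mask)
{
	struct my_heap *heap;

	/* Highest ids first; the system heap (typically id 0) is the fallback. */
	plist_for_each_entry(heap, &dev->heaps, node)
		if (heap_mask & (1U << heap->id))
			return heap;
	return NULL;
}
```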
-
Rebecca Schultz Zavin authored
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
This patch allows you to specify a heap that requires carveout memory but that doesn't specify a start address. For these heaps, memblock_alloc will be called to find a location.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Benjamin Gaignard authored
Use atomic_read to get the refcount value to avoid a compilation warning.
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Benjamin Gaignard authored
When using the carveout heap, the ion_buffer_create function failed because the map_dma and unmap_dma operations aren't set by the carveout heap.
Signed-off-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
Pages are zeroed for security purposes when returned to the ion heap. There was a bug in this code preventing this from happening.
Bug: 7573871
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
When the requested mmap length was not an integer number of chunks of the buffer, or if an offset was provided, a bug would cause extra or incorrect pages of the buffer to be mapped.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
This will prevent the kernel from kicking off compaction when higher order allocations are made. Instead we will get these high order allocations only if they are readily available.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
Removes contention for the lock between allocate and free by reducing the length of time the lock is held. Split out a separate lock to protect the list of heaps and replace it with an rwsem since the list will most likely only be updated during initialization.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
The single shrink function will free lower order pages first. This enables compaction to work properly.
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
Currently the mutex is held while kmalloc is called; under a low memory condition this might trigger the shrinker, which also takes this mutex. Refactor so the mutex is not held during allocation.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
Split out low and high mem pages so they are correctly reported when the shrinker is called. Fix a potential deadlock caused by holding the page pool lock while allocating and also needing that lock from the shrink function.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
When allocations larger than order 4 are made, use __GFP_NORETRY and __GFP_NO_KSWAPD so kswapd doesn't get kicked off to reclaim these larger chunks. For smaller allocations, these are unnecessary, as the system should be able to reclaim these.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
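A sketch of that gfp policy. The order-4 threshold and the flag names come from the text; the helper name is illustrative. __GFP_NO_KSWAPD existed in kernels of that era and was later removed, so it appears here only as a comment.

```c
#include <linux/gfp.h>
#include <linux/mm.h>

/* Older kernels also OR'd in __GFP_NO_KSWAPD for the high-order case. */
#define HIGH_ORDER_GFP	(GFP_HIGHUSER | __GFP_ZERO | __GFP_NOWARN | \
			 __GFP_NORETRY)
#define LOW_ORDER_GFP	(GFP_HIGHUSER | __GFP_ZERO)

/* For order > 4, don't kick off reclaim just to satisfy the large chunk. */
static struct page *alloc_buffer_page(unsigned int order)
{
	gfp_t gfp = (order > 4) ? HIGH_ORDER_GFP : LOW_ORDER_GFP;

	return alloc_pages(gfp, order);
}
```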
-
Rebecca Schultz Zavin authored
With this change the system heap will use pagepools to avoid having to invalidate memory when it is allocated, a significant performance improvement on some systems.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
This patch adds a new utility heaps can use to manage memory. In the past we have found it can be very expensive to manage the caches when allocating memory, but it is impossible to know whether a previous user of a given memory allocation had a cached mapping. This patch adds the ability to store a pool of pages that were previously used uncached so that cache maintenance only needs to be done when growing this pool. The pool also contains a shrinker so memory from the pool can be recovered in low memory conditions.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
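A minimal model of the pool described above, with illustrative names and the shrinker hookup and per-order pools omitted: pages returned to the pool are handed out again with no cache maintenance, and maintenance is only needed when a page first comes from the page allocator.

```c
#include <linux/gfp.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/mutex.h>

struct page_pool {
	struct mutex lock;
	struct list_head items;		/* clean pages kept for reuse */
	int count;
	unsigned int order;
	gfp_t gfp_mask;
};

static struct page *pool_fetch(struct page_pool *pool)
{
	struct page *page = NULL;

	mutex_lock(&pool->lock);
	if (!list_empty(&pool->items)) {
		page = list_first_entry(&pool->items, struct page, lru);
		list_del(&page->lru);
		pool->count--;
	}
	mutex_unlock(&pool->lock);
	return page;
}

static struct page *pool_alloc(struct page_pool *pool)
{
	struct page *page = pool_fetch(pool);

	if (page)
		return page;	/* already clean: no cache maintenance */

	page = alloc_pages(pool->gfp_mask, pool->order);
	/*
	 * Cache maintenance would happen only here, when the pool grows;
	 * the arch-specific flush/invalidate call is elided. A shrinker
	 * would walk 'items' and __free_pages() them under memory pressure.
	 */
	return page;
}

static void pool_free(struct page_pool *pool, struct page *page)
{
	mutex_lock(&pool->lock);
	list_add_tail(&page->lru, &pool->items);
	pool->count++;
	mutex_unlock(&pool->lock);
}
```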
-
Rebecca Schultz Zavin authored
When ion_map_kernel is executed, the system must allocate an array large enough to hold a pointer to each page in the buffer. If the buffer is very large and system memory has become very fragmented, there may not be sufficient high order allocations available from kmalloc. Use vmalloc instead.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
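A sketch of the pattern, with an assumed page-lookup callback standing in for the heap's own bookkeeping: the pointer array itself is allocated with vmalloc, which only needs order-0 pages, so a fragmented system can still build the mapping. (Newer kernels wrap this choice in kvmalloc()/kvfree().)

```c
#include <linux/err.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

/* Map a buffer made of 'npages' scattered pages into the kernel. */
static void *map_buffer_kernel(struct page *(*lookup_page)(void *priv,
							    unsigned long i),
			       void *priv, unsigned long npages)
{
	/* vmalloc, not kmalloc: this array can itself be very large. */
	struct page **pages = vmalloc(npages * sizeof(struct page *));
	void *vaddr;
	unsigned long i;

	if (!pages)
		return ERR_PTR(-ENOMEM);
	for (i = 0; i < npages; i++)
		pages[i] = lookup_page(priv, i);
	vaddr = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
	vfree(pages);	/* only needed while building the mapping */
	return vaddr ? vaddr : ERR_PTR(-ENOMEM);
}
```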
-
Rebecca Schultz Zavin authored
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
With this patch the system heap will only try to allocate from each order as long as allocations succeed. If it fails to obtain a higher order allocation, it doesn't retry that order.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
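A sketch of the "don't retry a failed order" policy; the order list, gfp flags, and callback are illustrative, and the size is assumed page aligned. Once an order fails, the next pass starts at the order that last succeeded.

```c
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/mm.h>

static const unsigned int orders[] = { 8, 4, 0 };

static int allocate_pages(unsigned long size,
			  void (*add_page)(struct page *page,
					   unsigned int order))
{
	unsigned long remaining = size;	/* assumed page aligned */
	unsigned int start = 0;

	while (remaining) {
		struct page *page = NULL;
		unsigned int i;

		for (i = start; i < ARRAY_SIZE(orders); i++) {
			unsigned int order = orders[i];

			if (remaining < (PAGE_SIZE << order))
				continue;
			page = alloc_pages(GFP_HIGHUSER | __GFP_ZERO |
					   __GFP_NOWARN | __GFP_NORETRY,
					   order);
			if (page) {
				add_page(page, order);
				remaining -= PAGE_SIZE << order;
				start = i;	/* never retry higher orders */
				break;
			}
		}
		if (!page)
			return -ENOMEM;
	}
	return 0;
}
```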
-
Rebecca Schultz Zavin authored
If a buffer's user mappings are not going to be faulted in, it need not be allocated page-wise. We can optimize this common case by allocating an sglist of larger chunks rather than creating an entry for each page in the allocation.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
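If mappings won't be faulted in page by page, the table can carry one entry per higher-order chunk instead of one per page. A sketch under assumed types (`struct chunk` and the chunk list are not the actual ion structures):

```c
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

struct chunk {
	struct list_head list;
	struct page *page;
	unsigned int order;
};

/* One sg entry per chunk, rather than one entry per PAGE_SIZE page. */
static struct sg_table *chunks_to_sg_table(struct list_head *chunks,
					   unsigned int nchunks)
{
	struct sg_table *table;
	struct scatterlist *sg;
	struct chunk *c;

	table = kmalloc(sizeof(*table), GFP_KERNEL);
	if (!table)
		return NULL;
	if (sg_alloc_table(table, nchunks, GFP_KERNEL)) {
		kfree(table);
		return NULL;
	}

	sg = table->sgl;
	list_for_each_entry(c, chunks, list) {
		sg_set_page(sg, c->page, PAGE_SIZE << c->order, 0);
		sg = sg_next(sg);
	}
	return table;
}
```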
-
Rebecca Schultz Zavin authored
We have found that faulting in the mappings for cached allocations has a significant performance impact and is only a benefit if only a small part of the buffer is touched by the cpu (an uncommon case for software rendering). This patch introduces an ION_FLAG_CACHED_NEEDS_SYNC flag which determines whether a mapping should be created by faulting or at mmap time. If this flag is set, userspace must manage the caches explicitly using the SYNC ioctl.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
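A sketch of the mmap-time decision only, with assumed flag names, an empty vm_operations_struct, and an assumed helper that maps the whole buffer up front; it is not the actual ion mmap path.

```c
#include <linux/mm.h>

#define MY_FLAG_CACHED			(1 << 0)
#define MY_FLAG_CACHED_NEEDS_SYNC	(1 << 1)

struct my_buffer {
	unsigned long flags;
};

/* .fault handler elided in this sketch. */
static const struct vm_operations_struct my_faulting_vm_ops;

/* Assumed helper that maps the whole buffer into the vma at mmap time. */
int my_map_all_pages(struct my_buffer *buffer, struct vm_area_struct *vma);

static int my_buffer_mmap(struct my_buffer *buffer, struct vm_area_struct *vma)
{
	if ((buffer->flags & MY_FLAG_CACHED) &&
	    !(buffer->flags & MY_FLAG_CACHED_NEEDS_SYNC)) {
		/* Cached, flag clear: fault pages in so they can be synced. */
		vma->vm_private_data = buffer;
		vma->vm_ops = &my_faulting_vm_ops;
		return 0;
	}
	/*
	 * Flag set (or uncached): map everything at mmap time; userspace is
	 * responsible for cache maintenance via the SYNC ioctl.
	 */
	return my_map_all_pages(buffer, vma);
}
```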
-
Rebecca Schultz Zavin authored
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
It is possible for a buffer to exist only as a dma_buf file descriptor without it being held in any handles. When this occurs it is impossible to track where the buffer is in the system (without traversing every process in the system and inspecting its file table). When buffers are orphaned like this, copy the task comm and pid of the last client to hold them into the buffer so we have a debugging hint as to where the buffer came from. In practice this will probably be the process that allocated the buffer.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
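Recording the hint is just a comm/pid copy at the point where the last handle goes away; a sketch with an assumed buffer type:

```c
#include <linux/sched.h>

struct my_buffer {
	/* Filled in when the last handle goes away but a dma-buf fd remains. */
	char task_comm[TASK_COMM_LEN];
	pid_t pid;
};

/* Record who last held a handle, as a debugging hint for orphaned buffers. */
static void buffer_record_last_owner(struct my_buffer *buffer)
{
	get_task_comm(buffer->task_comm, current->group_leader);
	buffer->pid = task_pid_nr(current->group_leader);
}
```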
-
Rebecca Schultz Zavin authored
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
If preempted during ion_free after the refcount is updated but before the handle can be removed from the rb_tree, import might find that handle in the tree and try to reuse it. When execution returns to free, the handle will be cleaned up, leaving the caller of import with a corrupt handle. This patch modifies the locking to protect against this race.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
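The shape of the fix, sketched with assumed client/handle types: the refcount drop and the rbtree removal happen under the same client lock, so a concurrent import can never look up a handle whose refcount has already reached zero.

```c
#include <linux/kref.h>
#include <linux/mutex.h>
#include <linux/rbtree.h>
#include <linux/slab.h>

struct my_client {
	struct mutex lock;
	struct rb_root handles;
};

struct my_handle {
	struct kref ref;
	struct rb_node node;
	struct my_client *client;
};

static void handle_destroy(struct kref *kref)
{
	struct my_handle *handle = container_of(kref, struct my_handle, ref);

	/* Called with client->lock held: safe to unlink from the tree. */
	rb_erase(&handle->node, &handle->client->handles);
	kfree(handle);
}

static int handle_put(struct my_handle *handle)
{
	struct my_client *client = handle->client;
	int ret;

	/* Drop the ref and unlink under one lock to close the race. */
	mutex_lock(&client->lock);
	ret = kref_put(&handle->ref, handle_destroy);
	mutex_unlock(&client->lock);
	return ret;
}
```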
-
Greg Hackmann authored
Signed-off-by: Greg Hackmann <ghackmann@google.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
Previously, metadata was stored in the allocated pages themselves during allocation. However, the system can only have a limited number of kmapped pages; a very large allocation might exceed this limit.
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Rebecca Schultz Zavin authored
Signed-off-by: Rebecca Schultz Zavin <rebecca@android.com>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Laura Abbott authored
If dma_buf_fd fails, the dma_buf needs to be cleaned up by calling dma_buf_put. dma_buf_put will call ion_dma_buf_release which in turn calls ion_buffer_put to clean up the buffer reference. Calling ion_buffer_put after dma_buf_put drops the reference count by one more, which is incorrect. Fix this by getting rid of the extra ion_buffer_put call.
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
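The corrected pattern, sketched with a hypothetical exporter helper: once the buffer's reference has been handed to the dma_buf, a dma_buf_fd() failure is unwound with dma_buf_put() alone, whose release callback already puts the underlying buffer.

```c
#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/fcntl.h>

/* Assumed helper: wraps the buffer in a dma_buf, taking one reference. */
struct dma_buf *my_buffer_export(void *buffer);

static int my_share_fd(void *buffer)
{
	struct dma_buf *dmabuf;
	int fd;

	dmabuf = my_buffer_export(buffer);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	fd = dma_buf_fd(dmabuf, O_CLOEXEC);
	if (fd < 0)
		/*
		 * dma_buf_put() drops the dma_buf, and its release callback
		 * puts the underlying buffer; an extra buffer put here would
		 * drop the refcount one time too many.
		 */
		dma_buf_put(dmabuf);

	return fd;
}
```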
-
Olav Haugan authored
ION_IOC_MAP, ION_IOC_SHARE, and ION_IOC_IMPORT may return success when an error occurs. Add correct error handling to ION_IOC_MAP, ION_IOC_SHARE, and ION_IOC_IMPORT.
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
[jstultz: modified patch to apply to staging directory]
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
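The shape of the fix, sketched generically with assumed names rather than the actual ion ioctl code: the fd-producing helper's return value is checked and propagated instead of being copied out unconditionally.

```c
#include <linux/uaccess.h>

struct share_data {
	int fd;
};

/* Assumed helper that creates and returns a dma-buf fd, or a -errno value. */
int my_share_dma_buf_fd(void *handle);

static long my_ioctl_share(void *handle, void __user *argp)
{
	struct share_data data;

	data.fd = my_share_dma_buf_fd(handle);
	if (data.fd < 0)
		return data.fd;		/* propagate the error to userspace */
	if (copy_to_user(argp, &data, sizeof(data)))
		return -EFAULT;
	return 0;
}
```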
-