Commit 792fc497 authored by Aneesh Kumar K.V, committed by Alexander Graf

KVM: PPC: BOOK3S: HV: Prefer CMA region for hash page table allocation

Today, when KVM tries to reserve memory for the hash page table, it
allocates from the normal page allocator first. If that fails, it
falls back to CMA's reserved region. A side effect of this is that
we can exhaust the page allocator and drive Linux into OOM conditions
while plenty of space is still available in CMA.

This patch addresses the issue by first trying the hash page table
allocation from CMA's reserved region, and only then falling back to the
normal page allocator. So if we run out of memory, we really are out of memory.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
parent 9916d57e
@@ -52,7 +52,7 @@ static void kvmppc_rmap_reset(struct kvm *kvm);
 
 long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
 {
-	unsigned long hpt;
+	unsigned long hpt = 0;
 	struct revmap_entry *rev;
 	struct page *page = NULL;
 	long order = KVM_DEFAULT_HPT_ORDER;
@@ -64,22 +64,11 @@ long kvmppc_alloc_hpt(struct kvm *kvm, u32 *htab_orderp)
 	}
 
 	kvm->arch.hpt_cma_alloc = 0;
-	/*
-	 * try first to allocate it from the kernel page allocator.
-	 * We keep the CMA reserved for failed allocation.
-	 */
-	hpt = __get_free_pages(GFP_KERNEL | __GFP_ZERO | __GFP_REPEAT |
-			       __GFP_NOWARN, order - PAGE_SHIFT);
-
-	/* Next try to allocate from the preallocated pool */
-	if (!hpt) {
-		VM_BUG_ON(order < KVM_CMA_CHUNK_ORDER);
-		page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
-		if (page) {
-			hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
-			kvm->arch.hpt_cma_alloc = 1;
-		} else
-			--order;
-	}
+	VM_BUG_ON(order < KVM_CMA_CHUNK_ORDER);
+	page = kvm_alloc_hpt(1 << (order - PAGE_SHIFT));
+	if (page) {
+		hpt = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
+		kvm->arch.hpt_cma_alloc = 1;
+	}
 
 	/* Lastly try successively smaller sizes from the page allocator */
...