Commit d670ffd8 authored by Aneesh Kumar K.V, committed by Linus Torvalds

mm/thp/pagecache/collapse: free the pte page table on collapse for thp page cache.

With THP page cache, when trying to build a huge page from regular pte
pages, we just clear the pmd entry.  We will then take another fault, at
which point we will find the huge page in the radix tree and use it to
complete the page fault.

The second fault path will allocate the needed pgtable_t page for archs
like ppc64, so there is no need to deposit one in the collapse path.
Depositing it there results in a pgtable_t memory leak and produces
errors like

  BUG: non-zero nr_ptes on freeing mm: 3
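
The message comes from the sanity check that runs when an mm is freed,
which expects every pte page table charged to the mm to have been
released again.  A hedged sketch of the relevant check, roughly as it
appears in check_mm() in kernel/fork.c of this era:

	if (atomic_long_read(&mm->nr_ptes))
		pr_alert("BUG: non-zero nr_ptes on freeing mm: %ld\n",
			 atomic_long_read(&mm->nr_ptes));

Because the table deposited in retract_page_tables() was never withdrawn
and freed, nr_ptes stayed elevated and the check fired at exit.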

Fixes: 953c66c2 ("mm: THP page cache support for ppc64")
Link: http://lkml.kernel.org/r/20161212163428.6780-2-aneesh.kumar@linux.vnet.ibm.com
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 965d004a
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1242,7 +1242,6 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 	struct vm_area_struct *vma;
 	unsigned long addr;
 	pmd_t *pmd, _pmd;
-	bool deposited = false;
 
 	i_mmap_lock_write(mapping);
 	vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
@@ -1267,26 +1266,10 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 			spinlock_t *ptl = pmd_lock(vma->vm_mm, pmd);
 			/* assume page table is clear */
 			_pmd = pmdp_collapse_flush(vma, addr, pmd);
-			/*
-			 * now deposit the pgtable for arch that need it
-			 * otherwise free it.
-			 */
-			if (arch_needs_pgtable_deposit()) {
-				/*
-				 * The deposit should be visibile only after
-				 * collapse is seen by others.
-				 */
-				smp_wmb();
-				pgtable_trans_huge_deposit(vma->vm_mm, pmd,
-							   pmd_pgtable(_pmd));
-				deposited = true;
-			}
 			spin_unlock(ptl);
 			up_write(&vma->vm_mm->mmap_sem);
-			if (!deposited) {
-				atomic_long_dec(&vma->vm_mm->nr_ptes);
-				pte_free(vma->vm_mm, pmd_pgtable(_pmd));
-			}
+			atomic_long_dec(&vma->vm_mm->nr_ptes);
+			pte_free(vma->vm_mm, pmd_pgtable(_pmd));
 		}
 	}
 	i_mmap_unlock_write(mapping);
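
For reference, the deposit interface removed above is meant to be paired
with a later withdraw; a table that is deposited but never withdrawn is
never freed.  The relevant prototypes (as declared around this era, shown
here as a hedged reference):

	extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
					       pgtable_t pgtable);
	extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm,
						     pmd_t *pmdp);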