Commit 67b10813 authored by Benjamin Herrenschmidt, committed by Linus Torvalds

[PATCH] ppc64: Fix huge pages MMU mapping bug

The current kernel has a couple of sneaky bugs in the ppc64 hugetlb code that
can leave huge pages stale in the hash table and TLBs (improperly
invalidated), with all the nasty consequences that can have.

One is that we forgot to set the "secondary" bit in the hash PTEs when
hashing a huge page in the secondary bucket (fortunately very rare).
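For context, the ppc64 hash page table probes a primary bucket and, if that is
full, a secondary bucket derived from the inverted hash; the HPTE must record
which bucket it landed in so a later invalidation can locate and clear it.
Below is a minimal userspace sketch of that scheme, not kernel code: the tiny
table, the flag values, and names like try_insert() are all made up for
illustration.

/*
 * Sketch: primary/secondary bucket insertion. If the entry does not
 * record that it went to the secondary bucket (V_SECONDARY, analogous
 * to HPTE_V_SECONDARY), a later invalidation recomputes the wrong slot
 * and the entry is left stale -- the first bug this patch fixes.
 */
#include <stdio.h>
#include <stdint.h>

#define GROUPS          8       /* hypothetical tiny hash table */
#define SLOTS_PER_GROUP 2
#define V_VALID         0x1UL
#define V_SECONDARY     0x2UL   /* "which bucket" marker */

static uint64_t table[GROUPS * SLOTS_PER_GROUP];

/* Try one bucket; return the slot index, or -1 if the bucket is full. */
static int try_insert(unsigned long group, uint64_t flags, uint64_t tag)
{
        for (int i = 0; i < SLOTS_PER_GROUP; i++) {
                unsigned long slot = group * SLOTS_PER_GROUP + i;
                if (!(table[slot] & V_VALID)) {
                        table[slot] = tag | flags | V_VALID;
                        return (int)slot;
                }
        }
        return -1;
}

static int insert(unsigned long hash, uint64_t tag)
{
        int slot = try_insert(hash % GROUPS, 0, tag);
        if (slot < 0)
                /* Secondary bucket: recording the flag is the fix. */
                slot = try_insert(~hash % GROUPS, V_SECONDARY, tag);
        return slot;
}

int main(void)
{
        unsigned long hash = 5;
        insert(hash, 0x100);            /* fills primary slot 0 */
        insert(hash, 0x200);            /* fills primary slot 1 */
        int slot = insert(hash, 0x300); /* overflows to secondary */
        if (slot >= 0)
                printf("slot %d, secondary=%d\n",
                       slot, !!(table[slot] & V_SECONDARY));
        return 0;
}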

The other is that on non-LPAR machines (like Apple G5s), flush_hash_range(),
which is used to flush a batch of PTEs, simply did not work for huge pages.
Historically, our huge page code didn't batch, but this was changed without
fixing this routine.  This patch fixes both.
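The flush_hash_range() fix is to derive the huge flag per PTE inside the flush
loop instead of hard-coding large = 0 for the whole batch, so each entry's
virtual page number is computed with the right shift. Below is a minimal
standalone sketch of that loop, assuming a made-up batch layout
(tlb_batch_demo, with a huge[] array standing in for pte_huge(batch->pte[i]));
only the shift-selection logic mirrors the real routine.

/*
 * Sketch of the second bug: the old code kept a single `large = 0`
 * for the whole batch, so every huge-page VA was shifted by PAGE_SHIFT
 * instead of HPAGE_SHIFT and the wrong virtual page number was
 * invalidated. Recomputing the flag per entry fixes that.
 */
#include <stdio.h>

#define PAGE_SHIFT  12          /* 4 KB base pages  */
#define HPAGE_SHIFT 24          /* 16 MB huge pages */

struct tlb_batch_demo {
        int index;
        unsigned long addr[8];
        int huge[8];            /* stands in for pte_huge(batch->pte[i]) */
};

static void flush_batch(struct tlb_batch_demo *batch)
{
        for (int i = 0; i < batch->index; i++) {
                /* Per entry, not hoisted: a batch can mix page sizes. */
                int large = batch->huge[i];
                unsigned long vpn = batch->addr[i] >>
                        (large ? HPAGE_SHIFT : PAGE_SHIFT);
                printf("invalidate vpn 0x%lx (%s page)\n",
                       vpn, large ? "huge" : "normal");
        }
}

int main(void)
{
        struct tlb_batch_demo b = {
                .index = 2,
                .addr  = { 0x10002000, 0x3000000 },
                .huge  = { 0, 1 },
        };
        flush_batch(&b);
        return 0;
}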
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 2601c2e2
@@ -343,9 +343,7 @@ static void native_flush_hash_range(unsigned long context,
 	hpte_t *hptep;
 	unsigned long hpte_v;
 	struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);
-
-	/* XXX fix for large ptes */
-	unsigned long large = 0;
+	unsigned long large;
 
 	local_irq_save(flags);
 
@@ -358,6 +356,7 @@ static void native_flush_hash_range(unsigned long context,
 
 		va = (vsid << 28) | (batch->addr[i] & 0x0fffffff);
 		batch->vaddr[j] = va;
+		large = pte_huge(batch->pte[i]);
 		if (large)
 			vpn = va >> HPAGE_SHIFT;
 		else
...
@@ -710,10 +710,13 @@ int hash_huge_page(struct mm_struct *mm, unsigned long access,
 		hpte_group = ((~hash & htab_hash_mask) *
 			      HPTES_PER_GROUP) & ~0x7UL;
 		slot = ppc_md.hpte_insert(hpte_group, va, prpn,
-					  HPTE_V_LARGE, rflags);
+					  HPTE_V_LARGE |
+					  HPTE_V_SECONDARY,
+					  rflags);
 		if (slot == -1) {
 			if (mftb() & 0x1)
-				hpte_group = ((hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;
+				hpte_group = ((hash & htab_hash_mask) *
+					      HPTES_PER_GROUP)&~0x7UL;
 
 			ppc_md.hpte_remove(hpte_group);
 			goto repeat;
...