Commit d00ef7a7 authored by Oliver O'Halloran, committed by Stefan Bader

powerpc/eeh: Handle hugepages in ioremap space

BugLink: https://bugs.launchpad.net/bugs/1840081

[ Upstream commit 33439620 ]

In commit 4a7b06c157a2 ("powerpc/eeh: Handle hugepages in ioremap
space") support for using hugepages in the vmalloc and ioremap areas was
enabled for radix. Unfortunately this broke EEH MMIO error checking.

Detection works by inserting a hook which checks the results of the
ioreadXX() set of functions. When a read returns a 0xFFs response we
need to check for an error, which we do by mapping the (virtual) MMIO
address back to a physical address, then mapping that physical address
to a PCI device via an interval tree.
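
For illustration, the following is a minimal userspace sketch of that
detection flow (hypothetical names like checked_readl() and
addr_maps_to_frozen_pe(); not the kernel's actual EEH hooks): the read
wrapper only does the expensive virt -> phys -> device lookup when the
value read back is all-ones.

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Hypothetical stand-in for the kernel side that would translate the
     * MMIO address to a physical address and map it to a PCI device via
     * the interval tree. This sketch always reports "no error".
     */
    static int addr_maps_to_frozen_pe(const volatile uint32_t *addr)
    {
            (void)addr;
            return 0;
    }

    /*
     * Sketch of an ioread32()-style wrapper: 0xFFFFFFFF may be a
     * legitimate value or a dead device, so only then do the error check.
     */
    static uint32_t checked_readl(const volatile uint32_t *addr)
    {
            uint32_t val = *addr;

            if (val == 0xffffffffu && addr_maps_to_frozen_pe(addr))
                    fprintf(stderr, "possible EEH MMIO error\n");

            return val;
    }

    int main(void)
    {
            uint32_t fake_bar = 0xffffffffu;        /* pretend BAR register */

            printf("read 0x%08x\n", checked_readl(&fake_bar));
            return 0;
    }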

When translating virt -> phys we currently assume the ioremap space is
only populated by PAGE_SIZE mappings. If a hugepage mapping is found we
emit a WARN_ON(), but otherwise handle the check as though a normal
page was found. In pathological cases, such as copying a buffer
containing a lot of 0xFFs from BAR memory, this can result in the system
not booting because it's too busy printing WARN_ON()s.

There's no real reason to assume huge pages can't be present and we're
perfectly capable of handling them, so do that.
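
As a worked example of the arithmetic used in the hunk below (a
standalone sketch with made-up pfn/shift values, not read from a real
PTE): for a hugepage mapping the pfn is shifted by hugepage_shift and
the low hugepage_shift bits of the virtual address are kept as the
offset, rather than always using PAGE_SHIFT and PAGE_SIZE.

    #include <stdio.h>

    int main(void)
    {
            unsigned long token = 0xc008000000123456ul; /* made-up ioremap address */
            unsigned long pfn   = 0x80000ul;            /* made-up pte_pfn() result */
            unsigned int hugepage_shift = 21;           /* e.g. a 2MB radix mapping */
            unsigned long pa = pfn;

            if (hugepage_shift) {
                    /* hugepage: keep hugepage_shift bits of offset */
                    pa <<= hugepage_shift;
                    pa |= token & ((1ul << hugepage_shift) - 1);
            } else {
                    /* normal page: PAGE_SHIFT assumed to be 12 (4K) here */
                    pa <<= 12;
                    pa |= token & ((1ul << 12) - 1);
            }

            printf("token 0x%lx -> pa 0x%lx\n", token, pa);
            return 0;
    }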

Fixes: 4a7b06c157a2 ("powerpc/eeh: Handle hugepages in ioremap space")
Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Tested-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190710150517.27114-1-oohall@gmail.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Connor Kuehl <connor.kuehl@canonical.com>
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
@@ -363,10 +363,19 @@ static inline unsigned long eeh_token_to_phys(unsigned long token)
 					   NULL, &hugepage_shift);
 	if (!ptep)
 		return token;
-	WARN_ON(hugepage_shift);
-	pa = pte_pfn(*ptep) << PAGE_SHIFT;
-	return pa | (token & (PAGE_SIZE-1));
+	pa = pte_pfn(*ptep);
+
+	/* On radix we can do hugepage mappings for io, so handle that */
+	if (hugepage_shift) {
+		pa <<= hugepage_shift;
+		pa |= token & ((1ul << hugepage_shift) - 1);
+	} else {
+		pa <<= PAGE_SHIFT;
+		pa |= token & (PAGE_SIZE - 1);
+	}
+
+	return pa;
 }
 
 /*