Commit bba42aff authored by Juergen Gross, committed by Thomas Gleixner

x86/mm: Fix dump_pagetables with Xen PV

Commit 2ae27137 ("x86: mm: convert dump_pagetables to use
walk_page_range") broke Xen PV guests as the hypervisor reserved hole in
the memory map was not taken into account.

Fix that by starting the kernel range only at GUARD_HOLE_END_ADDR.

Fixes: 2ae27137 ("x86: mm: convert dump_pagetables to use walk_page_range")
Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Julien Grall <julien@xen.org>
Link: https://lkml.kernel.org/r/20200221103851.7855-1-jgross@suse.com
parent 99bcd4a6
@@ -363,13 +363,8 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m,
 {
 	const struct ptdump_range ptdump_ranges[] = {
 #ifdef CONFIG_X86_64
-
-#define normalize_addr_shift (64 - (__VIRTUAL_MASK_SHIFT + 1))
-#define normalize_addr(u) ((signed long)((u) << normalize_addr_shift) >> \
-			   normalize_addr_shift)
-
 	{0, PTRS_PER_PGD * PGD_LEVEL_MULT / 2},
-	{normalize_addr(PTRS_PER_PGD * PGD_LEVEL_MULT / 2), ~0UL},
+	{GUARD_HOLE_END_ADDR, ~0UL},
 #else
 	{0, ~0UL},
 #endif