Commit e3a1f6ca authored by David Vrabel, committed by Linus Torvalds

x86: pte_protnone() and pmd_protnone() must check entry is not present

Since _PAGE_PROTNONE aliases _PAGE_GLOBAL it is only valid if
_PAGE_PRESENT is clear.  Make pte_protnone() and pmd_protnone() check
for this.

This fixes a 64-bit Xen PV guest regression introduced by 8a0516ed
("mm: convert p[te|md]_numa users to p[te|md]_protnone_numa").  Any
userspace process would endlessly fault.

In a 64-bit PV guest, userspace page table entries have _PAGE_GLOBAL set
by the hypervisor.  This meant that any fault on a present userspace
entry (e.g., a write to a read-only mapping) would be misinterpreted as
a NUMA hinting fault and the fault would not be correctly handled,
resulting in the access endlessly faulting.
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 2b9fb532
...
@@ -476,12 +476,14 @@ static inline int pmd_present(pmd_t pmd)
  */
 static inline int pte_protnone(pte_t pte)
 {
-	return pte_flags(pte) & _PAGE_PROTNONE;
+	return (pte_flags(pte) & (_PAGE_PROTNONE | _PAGE_PRESENT))
+		== _PAGE_PROTNONE;
 }
 
 static inline int pmd_protnone(pmd_t pmd)
 {
-	return pmd_flags(pmd) & _PAGE_PROTNONE;
+	return (pmd_flags(pmd) & (_PAGE_PROTNONE | _PAGE_PRESENT))
+		== _PAGE_PROTNONE;
 }
 #endif /* CONFIG_NUMA_BALANCING */
...