Commit cca10df1 authored by David Hildenbrand, committed by Andrew Morton

sh/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE

Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by using bit 6 in the PTE,
reducing the swap type in the !CONFIG_X2TLB case to 5 bits.  Generic MM
currently only uses 5 bits for the type (MAX_SWAPFILES_SHIFT), so the
stolen bit is effectively unused.

Interestingly, the swap type in the !CONFIG_X2TLB case could currently
overlap with the _PAGE_PRESENT bit, because there is a sneaky shift by 1
in __pte_to_swp_entry() and __swp_entry_to_pte(). Bits 0-7 in the
architecture-specific swap PTE would get shifted to bits 1-8 in the PTE.
As generic MM only uses 5 bits for the type, this hasn't mattered so far.

While at it, mask the type in __swp_entry().
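
To make the overlap concrete, here is a minimal userspace sketch (not
kernel code) of the old !CONFIG_X2TLB encoding. The constants and helper
names below are hypothetical stand-ins that mirror the layout described
above:

#include <assert.h>
#include <stdio.h>

#define PAGE_PRESENT	(1UL << 8)	/* _PAGE_PRESENT lives at PTE bit 8 */
#define EXCL_BIT	(1UL << 6)	/* the borrowed marker bit (_PAGE_USER) */

/* Old layout: type in swp-entry bits 0-7, offset in bits 10 and up. */
static unsigned long old_swp_entry(unsigned long type, unsigned long offset)
{
	return type | (offset << 10);
}

/* New layout: the type is masked to 5 bits, as __swp_entry() now does. */
static unsigned long new_swp_entry(unsigned long type, unsigned long offset)
{
	return (type & 0x1f) | (offset << 10);
}

/* The sneaky shift: swp-entry bits 0-7 become PTE bits 1-8. */
static unsigned long to_pte(unsigned long val)
{
	return val << 1;
}

int main(void)
{
	/* An 8-bit type such as 0x80 would land on _PAGE_PRESENT... */
	assert(to_pte(old_swp_entry(0x80, 0)) & PAGE_PRESENT);

	/* ...but generic MM only ever hands out 5-bit types (<= 0x1f). */
	for (unsigned long type = 0; type <= 0x1f; type++)
		assert(!(to_pte(old_swp_entry(type, 0)) & PAGE_PRESENT));

	/* Masking keeps an oversized type out of the marker bit, too. */
	assert(!(to_pte(new_swp_entry(0x20, 0)) & EXCL_BIT));

	printf("5-bit swap types never collide with _PAGE_PRESENT\n");
	return 0;
}

With the type masked to 5 bits, the worst an oversized type can do is be
truncated, rather than silently setting the exclusive-marker bit.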

Link: https://lkml.kernel.org/r/20230113171026.582290-21-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Rich Felker <dalias@libc.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
parent 51a1007d
arch/sh/include/asm/pgtable_32.h
@@ -423,40 +423,70 @@ static inline unsigned long pmd_page_vaddr(pmd_t pmd)
 #endif

 /*
- * Encode and de-code a swap entry
+ * Encode/decode swap entries and swap PTEs. Swap PTEs are all PTEs that
+ * are !pte_none() && !pte_present().
  *
  * Constraints:
  *	_PAGE_PRESENT at bit 8
  *	_PAGE_PROTNONE at bit 9
  *
- * For the normal case, we encode the swap type into bits 0:7 and the
- * swap offset into bits 10:30. For the 64-bit PTE case, we keep the
- * preserved bits in the low 32-bits and use the upper 32 as the swap
- * offset (along with a 5-bit type), following the same approach as x86
- * PAE. This keeps the logic quite simple.
+ * For the normal case, we encode the swap type and offset into the swap PTE
+ * such that bits 8 and 9 stay zero. For the 64-bit PTE case, we use the
+ * upper 32 for the swap offset and swap type, following the same approach as
+ * x86 PAE. This keeps the logic quite simple.
  *
  * As is evident by the Alpha code, if we ever get a 64-bit unsigned
  * long (swp_entry_t) to match up with the 64-bit PTEs, this all becomes
  * much cleaner..
- *
- * NOTE: We should set ZEROs at the position of _PAGE_PRESENT
- *       and _PAGE_PROTNONE bits
  */
+
 #ifdef CONFIG_X2TLB
+/*
+ * Format of swap PTEs:
+ *
+ *   6 6 6 6 5 5 5 5 5 5 5 5 5 5 4 4 4 4 4 4 4 4 4 4 3 3 3 3 3 3 3 3
+ *   3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2
+ *   <--------------------- offset ----------------------> < type ->
+ *
+ *   3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1
+ *   1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ *   <------------------- zeroes --------------------> E 0 0 0 0 0 0
+ *
+ *   E is the exclusive marker that is not stored in swap entries.
+ */
 #define __swp_type(x)			((x).val & 0x1f)
 #define __swp_offset(x)			((x).val >> 5)
-#define __swp_entry(type, offset)	((swp_entry_t){ (type) | (offset) << 5})
+#define __swp_entry(type, offset)	((swp_entry_t){ ((type) & 0x1f) | (offset) << 5})
 #define __pte_to_swp_entry(pte)		((swp_entry_t){ (pte).pte_high })
 #define __swp_entry_to_pte(x)		((pte_t){ 0, (x).val })
+
 #else
-#define __swp_type(x)			((x).val & 0xff)
+/*
+ * Format of swap PTEs:
+ *
+ *   3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1
+ *   1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ *   <--------------- offset ----------------> 0 0 0 0 E < type -> 0
+ *
+ *   E is the exclusive marker that is not stored in swap entries.
+ */
+#define __swp_type(x)			((x).val & 0x1f)
 #define __swp_offset(x)			((x).val >> 10)
-#define __swp_entry(type, offset)	((swp_entry_t){(type) | (offset) <<10})
+#define __swp_entry(type, offset)	((swp_entry_t){((type) & 0x1f) | (offset) << 10})
 #define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val(pte) >> 1 })
 #define __swp_entry_to_pte(x)		((pte_t) { (x).val << 1 })
 #endif

+/* In both cases, we borrow bit 6 to store the exclusive marker in swap PTEs. */
+#define _PAGE_SWP_EXCLUSIVE	_PAGE_USER
+
+#define __HAVE_ARCH_PTE_SWP_EXCLUSIVE
+static inline int pte_swp_exclusive(pte_t pte)
+{
+	return pte.pte_low & _PAGE_SWP_EXCLUSIVE;
+}
+
+PTE_BIT_FUNC(low, swp_mkexclusive, |= _PAGE_SWP_EXCLUSIVE);
+PTE_BIT_FUNC(low, swp_clear_exclusive, &= ~_PAGE_SWP_EXCLUSIVE);
+
 #endif /* __ASSEMBLY__ */
 #endif /* __ASM_SH_PGTABLE_32_H */
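
For reference, here is a hedged round-trip of the new !CONFIG_X2TLB
encoding. The encode/decode expressions mirror the patch; the scaffolding
(struct definitions, constants, main) is a hypothetical userspace harness,
and the two helpers stand in for what PTE_BIT_FUNC(low, ...) generates:

#include <assert.h>
#include <stdio.h>

typedef struct { unsigned long val; } swp_entry_t;
typedef struct { unsigned long pte_low; } pte_t;

#define _PAGE_PRESENT		(1UL << 8)
#define _PAGE_PROTNONE		(1UL << 9)
#define _PAGE_SWP_EXCLUSIVE	(1UL << 6)	/* borrowed _PAGE_USER bit */

#define __swp_type(x)		((x).val & 0x1f)
#define __swp_offset(x)		((x).val >> 10)
#define __swp_entry(type, offset) \
	((swp_entry_t){ ((type) & 0x1f) | (offset) << 10 })
#define __pte_to_swp_entry(p)	((swp_entry_t){ (p).pte_low >> 1 })
#define __swp_entry_to_pte(x)	((pte_t){ (x).val << 1 })

/* Stand-ins for PTE_BIT_FUNC(low, swp_mkexclusive/swp_clear_exclusive). */
static pte_t pte_swp_mkexclusive(pte_t pte)
{
	pte.pte_low |= _PAGE_SWP_EXCLUSIVE;
	return pte;
}

static pte_t pte_swp_clear_exclusive(pte_t pte)
{
	pte.pte_low &= ~_PAGE_SWP_EXCLUSIVE;
	return pte;
}

int main(void)
{
	pte_t pte = __swp_entry_to_pte(__swp_entry(0x1f, 0x12345));

	pte = pte_swp_mkexclusive(pte);

	/* A swap PTE must never look present or prot-none... */
	assert(!(pte.pte_low & (_PAGE_PRESENT | _PAGE_PROTNONE)));

	/* ...and the marker must not disturb the type and offset. */
	assert(__swp_type(__pte_to_swp_entry(pte)) == 0x1f);
	assert(__swp_offset(__pte_to_swp_entry(pte)) == 0x12345);

	/* Clearing the marker restores the original encoding exactly. */
	pte = pte_swp_clear_exclusive(pte);
	assert(__pte_to_swp_entry(pte).val == __swp_entry(0x1f, 0x12345).val);

	printf("swap-PTE round trip OK\n");
	return 0;
}

The invariants exercised here are exactly what the format comments in the
patch promise: bits 8 and 9 stay zero in every swap PTE, and setting or
clearing the exclusive marker leaves the encoded type and offset intact.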