Commit 007ccec5 authored by Martin Schwidefsky

s390/pageattr: do a single TLB flush for change_page_attr

Changing the access rights for an address range in the kernel
address space is currently done with a loop of IPTE + a store of the
modified PTE. Between the IPTE and the store the PTE is invalid;
this intermediate state can cause problems with concurrent accesses.
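
For illustration, the problematic window in the old per-page sequence
looks roughly like this (a simplified sketch of the removed code, not
the verbatim source):

	pte = *ptep;              /* read the current PTE */
	pte = set(pte);           /* compute the new access rights */
	__ptep_ipte(addr, ptep);  /* invalidate the PTE and flush the TLB entry */
	                          /* ... PTE is invalid here; a concurrent  */
	                          /* ... access to addr takes an exception  */
	*ptep = pte;              /* store the modified PTE; window closes */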

Consider a change of a kernel area from read-write to read-only: a
concurrent reader of that area should be fine, but with the invalid
PTE it might get an unexpected exception.

Remove the IPTE for each PTE and do a single global TLB flush after
all PTEs have been modified.
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
parent 2cfc5f9c
@@ -65,19 +65,17 @@ static pte_t *walk_page_table(unsigned long addr)
 static void change_page_attr(unsigned long addr, int numpages,
			     pte_t (*set) (pte_t))
 {
-	pte_t *ptep, pte;
+	pte_t *ptep;
 	int i;
 
 	for (i = 0; i < numpages; i++) {
 		ptep = walk_page_table(addr);
 		if (WARN_ON_ONCE(!ptep))
 			break;
-		pte = *ptep;
-		pte = set(pte);
-		__ptep_ipte(addr, ptep);
-		*ptep = pte;
+		*ptep = set(*ptep);
 		addr += PAGE_SIZE;
 	}
+	__tlb_flush_kernel();
 }
 
 int set_memory_ro(unsigned long addr, int numpages)
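
For comparison, the new flow keeps every PTE valid throughout and
defers invalidation to one flush at the end; a minimal sketch of the
resulting loop, condensed from the hunk above:

	for (i = 0; i < numpages; i++) {
		ptep = walk_page_table(addr);
		if (WARN_ON_ONCE(!ptep))
			break;
		*ptep = set(*ptep);   /* PTE stays valid; only access rights change */
		addr += PAGE_SIZE;
	}
	__tlb_flush_kernel();         /* single global flush for the whole range */

Until the flush, concurrent accesses may still use stale read-write
translations from the TLB, which is harmless for a transition to
read-only, while the invalid-PTE exception window is gone entirely.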