Commit 2faee8ff authored by Dominik Dingel, committed by Martin Schwidefsky

s390/mm: prevent and break zero page mappings in case of storage keys

As soon as storage keys are enabled, we need to stop using zero page
mappings to prevent inconsistencies between storage keys and pgste.

Otherwise the following data corruption could happen:
1) guest enables storage keys
2) guest sets storage key for unmapped page X
   -> change goes to PGSTE
3) guest reads from page X
   -> as X was not dirty before, the page will be zero page backed,
      storage key from PGSTE for X will go to storage key for zero page
4) guest sets storage key for unmapped page Y (same logic as above)
5) guest reads from page Y
   -> as Y was not dirty before, the page will be zero page backed,
      storage key from PGSTE for Y will go to storage key for zero page,
      overwriting storage key for X
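
Steps 3) and 5) collide because every read-only anonymous fault maps the
same physical zero page, and a physical page frame has exactly one storage
key. A minimal user-space sketch that makes the sharing visible (a
hypothetical demo for illustration, not part of this patch; it assumes
/proc/self/pagemap reporting, and recent kernels only expose PFNs to root):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

/* Hypothetical demo, not patch code: read the page frame number backing
 * 'addr' from /proc/self/pagemap (one 64-bit entry per virtual page,
 * PFN in bits 0-54). */
static uint64_t pfn_of(int fd, void *addr, long pagesize)
{
	uint64_t entry = 0;

	pread(fd, &entry, sizeof(entry),
	      (uintptr_t)addr / pagesize * sizeof(entry));
	return entry & ((1ULL << 55) - 1);
}

int main(void)
{
	long pagesize = sysconf(_SC_PAGESIZE);
	int fd = open("/proc/self/pagemap", O_RDONLY);
	char *x = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	char *y = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* Read faults only: both X and Y stay zero page backed. */
	*(volatile char *)x;
	*(volatile char *)y;

	/* Prints the same PFN twice: one frame, hence one storage key. */
	printf("pfn(X)=%llx pfn(Y)=%llx\n",
	       (unsigned long long)pfn_of(fd, x, pagesize),
	       (unsigned long long)pfn_of(fd, y, pagesize));
	return 0;
}

A write to either page would break COW and hand it a private frame; the
bug is precisely that setting a key before the first write has no such
break, so both guests' keys funnel into the single zero page key.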

While holding the mmap_sem, we are safe against changes on entries we
already fixed, as every fault would need to take the mmap_sem (read).

Other vCPUs executing storage key instructions will get a one-time
interception and are also serialized via the mmap_sem.
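
A rough sketch of that serialization claim (my summary of the locking,
not text or code from the patch):

/*
 * Sketch (my summary, not patch code) of why the write-held mmap_sem
 * makes the conversion safe:
 *
 *   vCPU fault / intercepted skey insn     s390_enable_skey()
 *   ----------------------------------     ------------------
 *   down_read(&mm->mmap_sem);              down_write(&mm->mmap_sem);
 *   handle_mm_fault(...);                  ... fix ptes and pgstes ...
 *   up_read(&mm->mmap_sem);                up_write(&mm->mmap_sem);
 *
 * No fault can observe a half-converted address space, and a second
 * vCPU entering s390_enable_skey() blocks on the semaphore and then
 * bails out early once mm_use_skey() already reads true.
 */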
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
parent 593befa6
arch/s390/include/asm/pgtable.h
@@ -479,6 +479,11 @@ static inline int mm_has_pgste(struct mm_struct *mm)
 	return 0;
 }
 
+/*
+ * In the case that a guest uses storage keys
+ * faults should no longer be backed by zero pages
+ */
+#define mm_forbids_zeropage mm_use_skey
 static inline int mm_use_skey(struct mm_struct *mm)
 {
 #ifdef CONFIG_PGSTE
...
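
The #define above only has an effect because the parent commit 593befa6
("mm: introduce mm_forbids_zeropage function") gave the generic mm code a
weak default that architectures may override; approximately (paraphrased
from that parent commit, not from this one):

/* include/linux/mm.h, added by parent commit 593befa6 (paraphrased):
 * an architecture can define mm_forbids_zeropage to veto zero page use
 * per mm; the generic fallback never forbids it. */
#ifndef mm_forbids_zeropage
#define mm_forbids_zeropage(X)	(0)
#endif

With the s390 override in place, mm_forbids_zeropage(mm) flips to true the
moment use_skey is set, and the generic fault paths stop handing out zero
page mappings for this mm.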
arch/s390/mm/pgtable.c
@@ -1256,6 +1256,15 @@ static int __s390_enable_skey(pte_t *pte, unsigned long addr,
 	pgste_t pgste;
 
 	pgste = pgste_get_lock(pte);
+	/*
+	 * Remove all zero page mappings,
+	 * after establishing a policy to forbid zero page mappings
+	 * following faults for that page will get fresh anonymous pages
+	 */
+	if (is_zero_pfn(pte_pfn(*pte))) {
+		ptep_flush_direct(walk->mm, addr, pte);
+		pte_val(*pte) = _PAGE_INVALID;
+	}
 	/* Clear storage key */
 	pgste_val(pgste) &= ~(PGSTE_ACC_BITS | PGSTE_FP_BIT |
 			      PGSTE_GR_BIT | PGSTE_GC_BIT);
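
Invalidating the pte is only half of the fix; the other half is what
happens on the next access. Because mm_forbids_zeropage(mm) is now true,
the anonymous read fault takes the allocation path instead of the zero
page path. Simplified from the do_anonymous_page() logic of that kernel
generation (a paraphrased sketch, not part of this diff):

/* mm/memory.c, do_anonymous_page(), simplified sketch: */
/* Use the zero page for reads, unless the mm forbids it */
if (!(flags & FAULT_FLAG_WRITE) && !mm_forbids_zeropage(mm)) {
	entry = pte_mkspecial(pfn_pte(my_zero_pfn(address),
				      vma->vm_page_prot));
} else {
	/* with use_skey set: a fresh anonymous page with its own,
	 * independently settable storage key */
	page = alloc_zeroed_user_highpage_movable(vma, address);
	/* ... map 'page' into the page table ... */
}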
@@ -1274,9 +1283,11 @@ void s390_enable_skey(void)
 	down_write(&mm->mmap_sem);
 	if (mm_use_skey(mm))
 		goto out_up;
+
+	mm->context.use_skey = 1;
+
 	walk.mm = mm;
 	walk_page_range(0, TASK_SIZE, &walk);
-	mm->context.use_skey = 1;
 
 out_up:
 	up_write(&mm->mmap_sem);
...
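
One detail of the last hunk deserves a remark: mm->context.use_skey is now
set before the page table walk instead of after it. My reading (an
inference, not stated in the commit message): the ban must be published
before any zero page mapping is broken, so that once the mmap_sem is
dropped no fault can reinstate a zero page for an entry the walk already
fixed:

	mm->context.use_skey = 1;		/* publish the zero page ban ... */
	walk.mm = mm;
	walk_page_range(0, TASK_SIZE, &walk);	/* ... then break existing mappings */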