Commit 4a5e7e38 authored by David Hildenbrand, committed by Christian Borntraeger

KVM: s390: cmma: don't check entry content

We should never inject an exception after we have manually rewound the PSW
(to retry the ESSA instruction in this case), as that would mess up the PSW.
So this never worked as intended and never really triggered.

Looking at the details, we don't even have to perform any validity checks
(see the sketch after this list):
1. Bits 52-63 of an entry are stored as 0 by the hardware.
2. We are dealing with absolute addresses, but only check for the prefix
   starting at absolute address 0. This isn't correct and doesn't make much
   sense: CPUs could still zap the prefix of other CPUs. But as prefix pages
   cannot be swapped out without a notifier being called for the affected
   VCPU, a zap can never remove a protected prefix.
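
For illustration only, here is a minimal, self-contained sketch (plain userspace
C, not kernel code) of why the removed entry check cannot fire for
hardware-created CBRL entries. The example entry value and the PAGE_* macro
definitions are assumptions chosen for a 4 KB page size, not taken from the
kernel sources:

```c
/* cbrl_check_sketch.c - standalone illustration, not kernel code */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

int main(void)
{
	/* Hypothetical CBRL entry as stored by the hardware: an absolute
	 * page address whose bits 52-63 (the low 12 bits, i.e. the page
	 * offset) are 0.
	 */
	uint64_t cbrle = 0x12345000UL;

	/* First half of the removed check: with bits 52-63 always 0,
	 * "cbrle & ~PAGE_MASK" is 0 for every hardware-created entry,
	 * so it can never flag an entry as invalid.
	 */
	assert((cbrle & ~PAGE_MASK) == 0);

	/* Second half: "cbrle < 2 * PAGE_SIZE" only catches a prefix that
	 * starts at absolute address 0; prefixes of other CPUs at other
	 * absolute addresses were never caught, so the check gave no real
	 * protection anyway.
	 */
	printf("entry %#llx %s the old low-address check\n",
	       (unsigned long long)cbrle,
	       cbrle < 2 * PAGE_SIZE ? "trips" : "passes");
	return 0;
}
```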
Reviewed-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
parent 05b1159e
@@ -744,7 +744,7 @@ static int handle_essa(struct kvm_vcpu *vcpu)
 {
 	/* entries expected to be 1FF */
 	int entries = (vcpu->arch.sie_block->cbrlo & ~PAGE_MASK) >> 3;
-	unsigned long *cbrlo, cbrle;
+	unsigned long *cbrlo;
 	struct gmap *gmap;
 	int i;
@@ -765,17 +765,9 @@ static int handle_essa(struct kvm_vcpu *vcpu)
 	vcpu->arch.sie_block->cbrlo &= PAGE_MASK;	/* reset nceo */
 	cbrlo = phys_to_virt(vcpu->arch.sie_block->cbrlo);
 	down_read(&gmap->mm->mmap_sem);
-	for (i = 0; i < entries; ++i) {
-		cbrle = cbrlo[i];
-		if (unlikely(cbrle & ~PAGE_MASK || cbrle < 2 * PAGE_SIZE))
-			/* invalid entry */
-			break;
-		/* try to free backing */
-		__gmap_zap(gmap, cbrle);
-	}
+	for (i = 0; i < entries; ++i)
+		__gmap_zap(gmap, cbrlo[i]);
 	up_read(&gmap->mm->mmap_sem);
-	if (i < entries)
-		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
 	return 0;
 }