- 15 Jun, 2017 5 commits
-
-
Marc Zyngier authored
In order to start handling guest access to GICv3 system registers, let's add a hook that will get called when we trap a system register access. This is gated by a new static key (vgic_v3_cpuif_trap). Tested-by: Alexander Graf <agraf@suse.de> Acked-by: David Daney <david.daney@cavium.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
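A minimal sketch of the gating pattern described here, assuming a hypothetical handler name (vgic_v3_handle_cpuif_access); the static key keeps the common exit path free of overhead on systems that never enable trapping:

    /* Sketch only: the hook runs solely when the static key is flipped on. */
    DEFINE_STATIC_KEY_FALSE(vgic_v3_cpuif_trap);

    static int handle_gicv3_sysreg_trap(struct kvm_vcpu *vcpu)
    {
            if (static_branch_unlikely(&vgic_v3_cpuif_trap))
                    return vgic_v3_handle_cpuif_access(vcpu); /* hypothetical handler */
            return 0; /* not a trap we handle here */
    }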
-
Marc Zyngier authored
As we're about to trap CP15 accesses and handle them at EL2, we need to evaluate whether or not the condition flags are valid, as an implementation is allowed to trap despite the condition not being met. Tagging the function as __hyp_text allows this. We still rely on the cc_map array to be mapped at EL2 by virtue of being "const", and the linker to only emit relative references. Tested-by: Alexander Graf <agraf@suse.de> Acked-by: David Daney <david.daney@cavium.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Marc Zyngier authored
As we're about to access the Active Priority registers a lot more, let's define accessors that take the register number as a parameter. Tested-by: Alexander Graf <agraf@suse.de> Acked-by: David Daney <david.daney@cavium.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
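Because MRS/MSR encode the system register as an immediate, a parameterized accessor has to dispatch on the index; a sketch of the shape such an accessor takes, assuming the hyp code's read_gicreg() helper:

    /* Map a register index onto the fixed ICH_AP1Rn_EL2 encodings. */
    static u32 __vgic_v3_read_ap1rn(int n)
    {
            switch (n) {
            case 0: return read_gicreg(ICH_AP1R0_EL2);
            case 1: return read_gicreg(ICH_AP1R1_EL2);
            case 2: return read_gicreg(ICH_AP1R2_EL2);
            default: return read_gicreg(ICH_AP1R3_EL2);
            }
    }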
-
Marc Zyngier authored
It is often useful to compare an ESR syndrome reporting the trapping of a system register with a value matching that system register. Since encoding both the sysreg and the ESR version seems to be a bit overkill, let's add a set of macros that convert an ESR value into the corresponding sysreg encoding. We handle both AArch32 and AArch64, taking advantage of identical encodings between system registers and CP15 accessors. Tested-by: Alexander Graf <agraf@suse.de> Acked-by: David Daney <david.daney@cavium.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
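A sketch of such a conversion macro, assuming the architected ISS layout for 64-bit sysreg traps (Op0 in ESR[21:20], Op2 in [19:17], Op1 in [16:14], CRn in [13:10], CRm in [4:1]) and a sys_reg() helper that packs the fields into the table encoding; the macro name is illustrative:

    #define esr_to_sysreg(e)                                \
            sys_reg(((e) >> 20) & 0x3,      /* Op0 */       \
                    ((e) >> 14) & 0x7,      /* Op1 */       \
                    ((e) >> 10) & 0xf,      /* CRn */       \
                    ((e) >>  1) & 0xf,      /* CRm */       \
                    ((e) >> 17) & 0x7)      /* Op2 */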
-
Marc Zyngier authored
-
- 08 Jun, 2017 9 commits
-
-
Christoffer Dall authored
The PMU IRQ number is set through the VCPU device's KVM_SET_DEVICE_ATTR ioctl handler for the KVM_ARM_VCPU_PMU_V3_IRQ attribute, but there is no enforced or stated requirement that this must happen after initializing the VGIC. As a result, calling vgic_valid_spi() which relies on the nr_spis being set during the VGIC init can incorrectly fail. Introduce irq_is_spi, which determines if an IRQ number is within the SPI range without verifying it against the actual VGIC properties. Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
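A sketch of the check, assuming the existing VGIC_NR_PRIVATE_IRQS (32) and VGIC_MAX_SPI (1019) constants; it is a pure range test on the architectural SPI window, with no dependency on VGIC initialization:

    /* SPIs occupy INTIDs 32..1019 regardless of how many this VM has. */
    #define irq_is_spi(irq) ((irq) >= VGIC_NR_PRIVATE_IRQS && \
                             (irq) <= VGIC_MAX_SPI)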
-
Christoffer Dall authored
When injecting an IRQ to the VGIC, you now have to present an owner token for that IRQ line to show that you are the owner of that line. IRQ lines driven from userspace or via an irqfd do not have an owner and will simply pass a NULL pointer. Also get rid of the unused kvm_vgic_inject_mapped_irq prototype. Signed-off-by: Christoffer Dall <cdall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
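The resulting call shape, roughly; the owner parameter is an opaque cookie compared by pointer identity:

    /* In-kernel devices pass the token they claimed the line with;
     * userspace and irqfd injection simply pass NULL. */
    int kvm_vgic_inject_irq(struct kvm *kvm, int cpuid, unsigned int intid,
                            bool level, void *owner);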
-
Christoffer Dall authored
Where possible, we check whether another in-kernel device has already been connected to the GIC for a particular interrupt line. For the PMU, we can do this whenever setting the PMU interrupt number from userspace. For the timers, we have to wait until we try to enable the timer, because we have a concept of default IRQ numbers that userspace shouldn't have to work around in the initialization phase. Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
Having multiple devices able to signal the same interrupt line is very confusing and almost certainly indicates a configuration error. Therefore, introduce a very simple allocator which allows a device to claim an interrupt line from the vgic for a given VM. Signed-off-by: Christoffer Dall <cdall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
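A sketch of the shape such a claim function might take, simplified from the description above (validity checks and locking details are assumptions):

    int kvm_vgic_set_owner(struct kvm_vcpu *vcpu, unsigned int intid, void *owner)
    {
            struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid);
            int ret = 0;

            spin_lock(&irq->irq_lock);
            if (irq->owner && irq->owner != owner)
                    ret = -EEXIST;  /* line already claimed by another device */
            else
                    irq->owner = owner;
            spin_unlock(&irq->irq_lock);

            return ret;
    }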
-
Christoffer Dall authored
First we define an ABI using the vcpu devices that lets userspace set the interrupt numbers for the various timers on both the 32-bit and 64-bit KVM/ARM implementations. Second, we add the definitions for the groups and attributes introduced by the above ABI. (We add the PMU define on the 32-bit side as well for symmetry and it may get used some day.) Third, we set up the arch-specific vcpu device operation handlers to call into the timer code for anything related to the KVM_ARM_VCPU_TIMER_CTRL group. Fourth, we implement support for getting and setting the timer interrupt numbers using the above defined ABI in the arch timer code. Fifth, we introduce error checking upon enabling the arch timer (which happens when a VCPU first runs) to check that all VCPUs are configured to use the same PPI for the timer (as mandated by the architecture) and that the virtual and physical timers are not configured to use the same IRQ number. Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
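Hypothetical userspace use of the ABI described above, setting the virtual timer PPI on a VCPU fd before the first KVM_RUN (the fd variable and the chosen PPI number are assumptions):

    /* needs <sys/ioctl.h>, <err.h> and <linux/kvm.h> */
    int irq = 27;   /* a typical virtual timer PPI */
    struct kvm_device_attr attr = {
            .group = KVM_ARM_VCPU_TIMER_CTRL,
            .attr  = KVM_ARM_VCPU_TIMER_IRQ_VTIMER,
            .addr  = (__u64)(unsigned long)&irq,
    };
    if (ioctl(vcpu_fd, KVM_SET_DEVICE_ATTR, &attr))
            err(1, "KVM_SET_DEVICE_ATTR");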
-
Christoffer Dall authored
We currently initialize the arch timer IRQ numbers from the reset code, presumably because we once intended to model multiple CPU or SoC types from within the kernel and have hard-coded reset values in the reset code. As we are moving towards userspace being in charge of more fine-grained CPU emulation and stitching together the pieces needed to emulate a particular type of CPU, we should no longer have a tight coupling between resetting a VCPU and setting IRQ numbers. Therefore, move the logic to define and use the default IRQ numbers to the timer code and set the IRQ number immediately when creating the VCPU. Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
We are about to need this define in the arch timer code as well so move it to a common location. Signed-off-by: Christoffer Dall <cdall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
As we are about to support VCPU attributes to set the timer IRQ numbers in guest.c, move the static inlines for the VCPU attributes handlers from the header file to guest.c. Signed-off-by: Christoffer Dall <cdall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
Since we got support for devices in userspace which allows reporting the PMU overflow output status to userspace, we should actually allow creating the PMU on systems without an in-kernel irqchip, which in turn requires us to slightly clarify error codes for the ABI and move things around for the initialization phase. Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 06 Jun, 2017 5 commits
-
-
Marc Zyngier authored
We currently have the HSCTLR.A bit set, trapping unaligned accesses at HYP, but we're not really prepared to deal with it. Since the rest of the kernel is pretty happy about that, let's follow its example and set HSCTLR.A to zero. Modern CPUs don't really care. Cc: stable@vger.kernel.org Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Marc Zyngier authored
We currently have the SCTLR_EL2.A bit set, trapping unaligned accesses at EL2, but we're not really prepared to deal with it. So far, this has been unnoticed, until GCC 7 started emitting those (in particular 64bit writes on a 32bit boundary). Since the rest of the kernel is pretty happy about that, let's follow its example and set SCTLR_EL2.A to zero. Modern CPUs don't really care. Cc: stable@vger.kernel.org Reported-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Marc Zyngier authored
__do_hyp_init has the rather bad habit of ignoring RES1 bits and writing them back as zero. On a v8.0-8.2 CPU, this doesn't do anything bad, but may end up being pretty nasty on future revisions of the architecture. Let's preserve those bits so that we don't have to fix this later on. Cc: stable@vger.kernel.org Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
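A C rendering of the idea (the real code lives in the __do_hyp_init assembly; the mask macros here follow arm64 header conventions but should be read as a sketch):

    u64 val = read_sysreg(sctlr_el2);

    val &= SCTLR_EL2_RES1;      /* start from the RES1 bits we read back */
    val |= SCTLR_ELx_FLAGS;     /* then set only the bits we mean to set */
    write_sysreg(val, sctlr_el2);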
-
Marc Zyngier authored
Under memory pressure, we start ageing pages, which amounts to parsing the page tables. Since we don't want to allocate any extra level, we pass NULL for our private allocation cache. Which means that stage2_get_pud() is allowed to fail. This results in the following splat: [ 1520.409577] Unable to handle kernel NULL pointer dereference at virtual address 00000008 [ 1520.417741] pgd = ffff810f52fef000 [ 1520.421201] [00000008] *pgd=0000010f636c5003, *pud=0000010f56f48003, *pmd=0000000000000000 [ 1520.429546] Internal error: Oops: 96000006 [#1] PREEMPT SMP [ 1520.435156] Modules linked in: [ 1520.438246] CPU: 15 PID: 53550 Comm: qemu-system-aar Tainted: G W 4.12.0-rc4-00027-g1885c397eaec #7205 [ 1520.448705] Hardware name: FOXCONN R2-1221R-A4/C2U4N_MB, BIOS G31FB12A 10/26/2016 [ 1520.463726] task: ffff800ac5fb4e00 task.stack: ffff800ce04e0000 [ 1520.469666] PC is at stage2_get_pmd+0x34/0x110 [ 1520.474119] LR is at kvm_age_hva_handler+0x44/0xf0 [ 1520.478917] pc : [<ffff0000080b137c>] lr : [<ffff0000080b149c>] pstate: 40000145 [ 1520.486325] sp : ffff800ce04e33d0 [ 1520.489644] x29: ffff800ce04e33d0 x28: 0000000ffff40064 [ 1520.494967] x27: 0000ffff27e00000 x26: 0000000000000000 [ 1520.500289] x25: ffff81051ba65008 x24: 0000ffff40065000 [ 1520.505618] x23: 0000ffff40064000 x22: 0000000000000000 [ 1520.510947] x21: ffff810f52b20000 x20: 0000000000000000 [ 1520.516274] x19: 0000000058264000 x18: 0000000000000000 [ 1520.521603] x17: 0000ffffa6fe7438 x16: ffff000008278b70 [ 1520.526940] x15: 000028ccd8000000 x14: 0000000000000008 [ 1520.532264] x13: ffff7e0018298000 x12: 0000000000000002 [ 1520.537582] x11: ffff000009241b93 x10: 0000000000000940 [ 1520.542908] x9 : ffff0000092ef800 x8 : 0000000000000200 [ 1520.548229] x7 : ffff800ce04e36a8 x6 : 0000000000000000 [ 1520.553552] x5 : 0000000000000001 x4 : 0000000000000000 [ 1520.558873] x3 : 0000000000000000 x2 : 0000000000000008 [ 1520.571696] x1 : ffff000008fd5000 x0 : ffff0000080b149c [ 1520.577039] Process qemu-system-aar (pid: 53550, stack limit = 0xffff800ce04e0000) [...] [ 1521.510735] [<ffff0000080b137c>] stage2_get_pmd+0x34/0x110 [ 1521.516221] [<ffff0000080b149c>] kvm_age_hva_handler+0x44/0xf0 [ 1521.522054] [<ffff0000080b0610>] handle_hva_to_gpa+0xb8/0xe8 [ 1521.527716] [<ffff0000080b3434>] kvm_age_hva+0x44/0xf0 [ 1521.532854] [<ffff0000080a58b0>] kvm_mmu_notifier_clear_flush_young+0x70/0xc0 [ 1521.539992] [<ffff000008238378>] __mmu_notifier_clear_flush_young+0x88/0xd0 [ 1521.546958] [<ffff00000821eca0>] page_referenced_one+0xf0/0x188 [ 1521.552881] [<ffff00000821f36c>] rmap_walk_anon+0xec/0x250 [ 1521.558370] [<ffff000008220f78>] rmap_walk+0x78/0xa0 [ 1521.563337] [<ffff000008221104>] page_referenced+0x164/0x180 [ 1521.569002] [<ffff0000081f1af0>] shrink_active_list+0x178/0x3b8 [ 1521.574922] [<ffff0000081f2058>] shrink_node_memcg+0x328/0x600 [ 1521.580758] [<ffff0000081f23f4>] shrink_node+0xc4/0x328 [ 1521.585986] [<ffff0000081f2718>] do_try_to_free_pages+0xc0/0x340 [ 1521.592000] [<ffff0000081f2a64>] try_to_free_pages+0xcc/0x240 [...] The trivial fix is to handle this NULL pud value early, rather than dereferencing it blindly. Cc: stable@vger.kernel.org Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
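The essential shape of the fix inside stage2_get_pmd(), as a sketch:

    pud_t *pud = stage2_get_pud(kvm, cache, addr);

    /* With cache == NULL (e.g. the ageing walk), stage2_get_pud() is
     * allowed to fail; don't dereference the result blindly. */
    if (!pud)
            return NULL;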
-
Christoffer Dall authored
We used to extract PRIbits from ICH_VTR_EL2, which was the upper field in the register word, so a mask wasn't necessary. But as we switched to looking at PREbits, which is bits 26 through 28, with the PRIbits field above it potentially non-zero, we really need to mask off the field value, otherwise fun things may happen. Signed-off-by: Christoffer Dall <cdall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
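The shape of the corrected extraction: PRIbits sits in ICH_VTR_EL2[31:29], directly above PREbits in [28:26], so an unmasked shift drags PRIbits into the result.

    #define vtr_to_nr_pre_bits(v)   ((((u32)(v) >> 26) & 7) + 1)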
-
- 04 Jun, 2017 12 commits
-
-
Andrew Jones authored
The timer work is only scheduled for a VCPU when that VCPU is blocked. This means we only need to wake it up, not kick (IPI) it. While calling kvm_vcpu_kick() would just do the wake up, and not kick, anyway, let's change this to avoid request-less vcpu kicks, as they're generally not a good idea (see "Request-less VCPU Kicks" in Documentation/virtual/kvm/vcpu-requests.rst). Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Andrew Jones authored
Refactor PMU overflow handling in order to remove the request-less vcpu kick. Now, since kvm_vgic_inject_irq() uses vcpu requests, there should be no chance that a kick sent at just the wrong time (between the VCPU's call to kvm_pmu_flush_hwstate() and its entry into guest mode) results in a failure for the guest to see updated GIC state until its next exit some time later for some other reason. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Andrew Jones authored
Don't use request-less VCPU kicks when injecting IRQs, as a VCPU kick meant to trigger the interrupt injection could be sent while the VCPU is outside guest mode (so no IPI is sent) and after it has called kvm_vgic_flush_hwstate(), meaning it won't see the updated GIC state until its next exit some time later for some other reason. The receiving VCPU only needs to check this request in VCPU RUN to handle it. By checking it, if it's pending, a memory barrier will be issued that ensures all state is visible. See "Ensuring Requests Are Seen" of Documentation/virtual/kvm/vcpu-requests.rst. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
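The pattern, in sketch form, on both sides (request name per the description above):

    /* Sender: pair the kick with a request so it cannot be lost. */
    kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
    kvm_vcpu_kick(vcpu);

    /* Receiver, in the VCPU RUN loop: kvm_check_request() clears the
     * bit and implies the barrier that makes the GIC state visible. */
    if (kvm_check_request(KVM_REQ_IRQ_PENDING, vcpu)) {
            /* nothing more to do: the vgic flush below sees the update */
    }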
-
Andrew Jones authored
A request called EXIT is too generic. All requests are meant to cause exits, but different requests have different flags. Let's not make it difficult to decide if the EXIT request is correct for some case by just always providing unique requests for each case. This patch changes EXIT to SLEEP, because that's what the request is asking the VCPU to do. Signed-off-by: Andrew Jones <drjones@redhat.com> Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Andrew Jones authored
We can make a small optimization by not checking the state of the power_off field on each run. This is done by treating power_off like pause, only checking it when we get the EXIT VCPU request. When a VCPU powers off another VCPU, the EXIT request is already made, so we just need to make sure the request is also made on self power off. kvm_vcpu_kick() isn't necessary for these cases, as the VCPU would just be kicking itself, but we add it anyway as a self kick doesn't cost much, and it makes the code more future-proof. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Andrew Jones authored
System shutdown is currently using request-less VCPU kicks. This leaves open a tiny race window, as it doesn't ensure the state change to power_off is seen by a VCPU just about to enter guest mode. VCPU requests, OTOH, are guaranteed to be seen (see "Ensuring Requests Are Seen" of Documentation/virtual/kvm/vcpu-requests.rst). This patch applies the EXIT request used by pause to power_off, fixing the race. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Andrew Jones authored
The current use of KVM_REQ_VCPU_EXIT for pause is fine. Even the requester clearing the request is OK, as this is the special case where the sole requesting thread and receiving VCPU are executing synchronously (see "Clearing Requests" in Documentation/virtual/kvm/vcpu-requests.rst) However, that's about to change, so let's ensure only the receiving VCPU clears the request. Additionally, by guaranteeing KVM_REQ_VCPU_EXIT is always set when pause is, we can avoid checking pause directly in VCPU RUN. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Andrew Jones authored
arm/arm64 already has one VCPU request used when setting pause, but it doesn't properly check requests in VCPU RUN. Check it and also make sure we set vcpu->mode at the appropriate time (before the check) and with the appropriate barriers. See Documentation/virtual/kvm/vcpu-requests.rst. Also make sure we don't leave set in the request bitmap any VCPU requests we don't intend to handle later; if we don't clear them, kvm_request_pending() may return true when it shouldn't. Using VCPU requests properly fixes a small race where pause could get set just as a VCPU was entering guest mode. Signed-off-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
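A sketch of the entry-time ordering this describes: a requester either observes IN_GUEST_MODE (and sends the IPI), or the entering VCPU observes the pending request and aborts the entry.

    vcpu->mode = IN_GUEST_MODE;
    smp_mb();   /* order the mode write against the request read */
    if (kvm_request_pending(vcpu)) {
            vcpu->mode = OUTSIDE_GUEST_MODE;
            /* bail out of the entry and go handle the requests */
    }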
-
Andrew Jones authored
Signed-off-by: Andrew Jones <drjones@redhat.com> Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Radim Krčmář authored
A first step in vcpu->requests encapsulation. Additionally, we now use READ_ONCE() when accessing vcpu->requests, which ensures we always load vcpu->requests when it's accessed. This is important as other threads can change it any time. Also, READ_ONCE() documents that vcpu->requests is used with other threads, likely requiring memory barriers, which it does. Signed-off-by: Radim Krčmář <rkrcmar@redhat.com> [ Documented the new use of READ_ONCE() and converted another check in arch/mips/kvm/vz.c ] Signed-off-by: Andrew Jones <drjones@redhat.com> Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
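The encapsulation amounts to a one-line helper, roughly:

    static inline bool kvm_request_pending(struct kvm_vcpu *vcpu)
    {
            /* READ_ONCE: other threads may set request bits at any time */
            return READ_ONCE(vcpu->requests);
    }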
-
Andrew Jones authored
Marc Zyngier suggested that we define the arch specific VCPU request base, rather than requiring each arch to remember to start from 8. That suggestion, along with Radim Krcmar's recent VCPU request flag addition, snowballed into defining something of an arch VCPU request defining API. No functional change. (Looks like x86 is running out of arch VCPU request bits. Maybe someday we'll need to extend to 64.) Signed-off-by: Andrew Jones <drjones@redhat.com> Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
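A simplified sketch of the defining API (the real macros also carry per-request flags and build-time range checks):

    #define KVM_REQUEST_ARCH_BASE   8
    #define KVM_ARCH_REQ(nr)        ((nr) + KVM_REQUEST_ARCH_BASE)

    /* An arch then numbers its requests relative to the base, e.g.: */
    #define KVM_REQ_SLEEP           KVM_ARCH_REQ(0)
    #define KVM_REQ_IRQ_PENDING     KVM_ARCH_REQ(1)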
-
Christoffer Dall authored
We recently rewrote the sactive and cactive handlers to take the kvm lock for guest accesses to these registers. However, when accessed from userspace this lock is already held. Unfortunately we forgot to change the private accessors for GICv3, because these are redistributor registers and not distributor registers. Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
- 24 May, 2017 1 commit
-
-
Christoffer Dall authored
We have been a little loose with our intermediate VMCR representation where we had a 'ctlr' field, but we failed to differentiate between the GICv2 GICC_CTLR and ICC_CTLR_EL1 layouts, and therefore ended up mapping the wrong bits into the individual fields of the ICH_VMCR_EL2 when emulating a GICv2 on a GICv3 system. Fix this by using explicit fields for the VMCR bits instead. Cc: Eric Auger <eric.auger@redhat.com> Reported-by: wanghaibin <wanghaibin.wang@huawei.com> Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com>
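The resulting representation, roughly: one field per logical control bit, so each GIC variant maps its own register layout in and out.

    struct vgic_vmcr {
            u32 grpen0;     /* group 0 enable */
            u32 grpen1;     /* group 1 enable */
            u32 ackctl;
            u32 fiqen;
            u32 cbpr;
            u32 eoim;
            u32 abpr;
            u32 bpr;
            u32 pmr;        /* priority mask */
    };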
-
- 23 May, 2017 3 commits
-
-
Christoffer Dall authored
We don't need to stop a specific VCPU when changing the active state, because private IRQs can only be modified by a running VCPU for the VCPU itself and it is therefore already stopped. However, it is also possible for two VCPUs to be modifying the active state of SPIs at the same time, which can leave a thread stuck in the loop that checks other VCPU threads for a potentially very long time, or lead to modifying the active state of a running VCPU. Fix this by serializing all accesses to setting and clearing the active state of interrupts using the KVM mutex. Reported-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
Factor out the core register-modifier functionality from the entry points in the register description table, and only call the prepare/finish functions from the guest path, not the uaccess path. Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
We are about to differentiate between writes from a VCPU and from userspace to the GIC's GICD_ISACTIVER and GICD_ICACTIVER registers due to different synchronization requirements. Expand the macro to define a register description for the GIC to take uaccess functions as well. Signed-off-by: Christoffer Dall <cdall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 18 May, 2017 2 commits
-
-
Christoffer Dall authored
We were not holding the kvm->slots_lock as required when calling kvm_io_bus_unregister_dev(). This only affects the error path, but still, let's do our due diligence. Reported-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com>
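The shape of the fix on that error path, as a sketch (the iodev variable is illustrative):

    mutex_lock(&kvm->slots_lock);
    kvm_io_bus_unregister_dev(kvm, KVM_MMIO_BUS, &iodev->dev);
    mutex_unlock(&kvm->slots_lock);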
-
Christoffer Dall authored
If userspace creates the VCPUs after initializing the VGIC, then we end up in a situation where we trigger a bug in kvm_vcpu_get_idx(), because it is called prior to adding the VCPU into the vcpus array on the VM. There is no tight coupling between the VCPU index and the area of the redistributor region used for the VCPU, so we can simply ensure that all creations of redistributors are serialized per VM, and increment an offset when we successfully add a redistributor. The vgic_register_redist_iodev() function can be called from two paths: vgic_register_all_redist_iodevs(), which is called via the kvm_vgic_addr() device attribute handler; this path already holds the kvm->lock mutex. The other path is via kvm_vgic_vcpu_init(), which is called through a longer chain from kvm_vm_ioctl_create_vcpu(), which releases the kvm->lock mutex just before calling kvm_arch_vcpu_create(), so we can simply take this mutex again later for our purposes. Fixes: ab6f468c10 ("KVM: arm/arm64: Register iodevs when setting redist base and creating VCPUs") Signed-off-by: Christoffer Dall <cdall@linaro.org> Tested-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Reviewed-by: Eric Auger <eric.auger@redhat.com>
-
- 16 May, 2017 3 commits
-
-
Suzuki K Poulose authored
We yield the kvm->mmu_lock occasionally while performing an operation (e.g., unmap or permission changes) on a large area of stage2 mappings. However this could possibly cause another thread to clear and free up the stage2 page tables while we were waiting to regain the lock, and thus the original thread could end up accessing memory that was freed. This patch fixes the problem by making sure that the stage2 page table is still valid after we regain the lock. The fact that mmu_notifier->release() could be called twice (via __mmu_notifier_release and mmu_notifier_unregister) enhances the possibility of hitting this race where two threads are trying to unmap the entire guest shadow pages. While at it, clean up the redundant checks around cond_resched_lock in stage2_wp_range(), as cond_resched_lock already does the same checks. Cc: Mark Rutland <mark.rutland@arm.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: andreyknvl@google.com Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: stable@vger.kernel.org Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
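A sketch of the re-validation in the unmap walk (names follow the stage2 helpers this code already uses):

    do {
            /* The tables may have been freed while cond_resched_lock()
             * below had dropped mmu_lock; re-check before walking on. */
            if (!READ_ONCE(kvm->arch.pgd))
                    break;
            next = stage2_pgd_addr_end(addr, end);
            if (!stage2_pgd_none(*pgd))
                    unmap_stage2_puds(kvm, pgd, addr, next);
            if (next != end)
                    cond_resched_lock(&kvm->mmu_lock);
    } while (pgd++, addr = next, addr != end);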
-
Suzuki K Poulose authored
Make sure we don't use a cached value of the KVM stage2 PGD while resetting the PGD. Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: stable@vger.kernel.org Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
James Morse authored
When KVM panics, it hurriedly restores the host context and parachutes into the host's panic() code. At some point panic() touches the physical timer/counter. Unless we are an arm64 system with VHE, this traps back to EL2. If we're lucky, we panic again. Add a __timer_save_state() call to KVM's hyp_panic() path; this saves the guest registers and disables the traps for the host. Fixes: 53fd5b64 ("arm64: KVM: Add panic handling") Signed-off-by: James Morse <james.morse@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-