- 10 Nov, 2017 3 commits
-
-
Marc Zyngier authored
Add a new has_gicv4 field in the global VGIC state that indicates whether the HW is GICv4 capable, as well as a per-VM predicate indicating if there is a possibility for a VM to support direct injection (the above being true and the VM having an ITS). Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
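For illustration, a minimal sketch of what this adds, assuming the field and helper names used by the mainline vgic code (abridged):

```c
/* include/kvm/arm_vgic.h (abridged) */
struct vgic_global {
	/* ... existing fields ... */

	/* Hardware is GICv4 capable */
	bool		has_gicv4;
};

/* virt/kvm/arm/vgic/vgic.h (abridged): the per-VM predicate */
static inline bool vgic_supports_direct_msis(struct kvm *kvm)
{
	/* GICv4-capable host *and* the VM actually has an ITS */
	return kvm_vgic_global_state.has_gicv4 && vgic_has_its(kvm);
}
```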
-
Marc Zyngier authored
In order to help integrating the vITS code with GICv4, let's add a new helper that deals with updating the affinity of an LPI, which will later be augmented with super duper extra GICv4 goodness. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Marc Zyngier authored
The whole MSI injection process is fairly monolithic. An MSI write gets turned into an injected LPI in one swift go. But this is actually a more fine-grained process: - First, a virtual ITS gets selected using the doorbell address - Then the DevID/EventID pair gets translated into an LPI - Finally the LPI is injected Since the GICv4 code needs the first two steps in order to match an IRQ routing entry to an LPI, let's expose them as helpers, and refactor the existing code to use them. Reviewed-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
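A hedged sketch of the split described above; the helper prototypes and the refactored injection path follow the three steps in the message (names abridged from the vgic-its code):

```c
/* Step 1: select the virtual ITS from the MSI doorbell address */
struct vgic_its *vgic_msi_to_its(struct kvm *kvm, struct kvm_msi *msi);

/* Step 2: translate the DevID/EventID pair into the LPI's struct vgic_irq */
int vgic_its_resolve_lpi(struct kvm *kvm, struct vgic_its *its,
			 u32 devid, u32 eventid, struct vgic_irq **irq);

/* Step 3: the existing injection path, refactored on top of the helpers */
int vgic_its_inject_msi(struct kvm *kvm, struct kvm_msi *msi)
{
	struct vgic_its *its;
	int ret;

	its = vgic_msi_to_its(kvm, msi);
	if (IS_ERR(its))
		return PTR_ERR(its);

	mutex_lock(&its->its_lock);
	ret = vgic_its_trigger_msi(kvm, its, msi->devid, msi->data);
	mutex_unlock(&its->its_lock);

	return ret;
}
```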
-
- 06 Nov, 2017 31 commits
-
-
Marc Zyngier authored
The way we call kvm_vgic_destroy is a bit bizarre. We call it *after* having freed the vcpus, which sort of defeats the point of cleaning up things before that point. Let's move kvm_vgic_destroy towards the beginning of kvm_arch_destroy_vm, which seems more sensible. Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Marc Zyngier authored
The GICv4 support introduces a hard dependency between the KVM core and the ITS infrastructure. arm64 already selects it at the architecture level, but 32bit doesn't. In order to avoid littering the kernel with #ifdefs, let's just select the whole of the GICv3 support code. You know you want it. Acked-by: Christoffer Dall <cdall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Eric Auger authored
We want to reuse the core of the map/unmap functions for IRQ forwarding. Let's move the computation of the hwirq into kvm_vgic_map_phys_irq and pass the Linux IRQ as a parameter. The host_irq is added to struct vgic_irq. We introduce kvm_vgic_map/unmap_irq which take a struct vgic_irq handle as a parameter. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
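A sketch of the hwirq computation moved into the map path, roughly as the message describes (abridged, error handling trimmed):

```c
#include <linux/irq.h>

static int kvm_vgic_map_irq(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
			    unsigned int host_irq)
{
	struct irq_desc *desc;
	struct irq_data *data;

	/* Derive the physical hwirq from the Linux IRQ number */
	desc = irq_to_desc(host_irq);
	if (!desc)
		return -EINVAL;

	data = irq_desc_get_irq_data(desc);
	while (data->parent_data)
		data = data->parent_data;

	irq->hw = true;
	irq->host_irq = host_irq;	/* new field in struct vgic_irq */
	irq->hwintid = data->hwirq;

	return 0;
}
```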
-
Eric Auger authored
This patch selects IRQ_BYPASS_MANAGER and HAVE_KVM_IRQ_BYPASS configs for ARM/ARM64. kvm_arch_has_irq_bypass() is now implemented and returns true. As a consequence the irq bypass consumer will be registered for ARM/ARM64 with the forwarding callbacks: - stop/start: halt/resume guest execution - add/del_producer: set/unset forwarding at vgic/irqchip level We don't have any actual support yet, so nothing actually gets forwarded. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Eric Auger <eric.auger@redhat.com> [maz: dropped the DEOI stuff for the time being in order to reduce the dependency chain, amended commit message] Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
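Roughly, the consumer callbacks this wires up look like the following (a sketch; the producer hooks are left as stubs since nothing is forwarded yet):

```c
#include <linux/irqbypass.h>
#include <linux/kvm_irqfd.h>

bool kvm_arch_has_irq_bypass(void)
{
	return true;
}

void kvm_arch_irq_bypass_stop(struct irq_bypass_consumer *cons)
{
	struct kvm_kernel_irqfd *irqfd =
		container_of(cons, struct kvm_kernel_irqfd, consumer);

	kvm_arm_halt_guest(irqfd->kvm);		/* stop: halt guest execution */
}

void kvm_arch_irq_bypass_start(struct irq_bypass_consumer *cons)
{
	struct kvm_kernel_irqfd *irqfd =
		container_of(cons, struct kvm_kernel_irqfd, consumer);

	kvm_arm_resume_guest(irqfd->kvm);	/* start: resume guest execution */
}

int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
				     struct irq_bypass_producer *prod)
{
	/* Set forwarding at the vgic/irqchip level; a no-op for now */
	return 0;
}

void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
				      struct irq_bypass_producer *prod)
{
	/* Unset forwarding; likewise a no-op until GICv4 support lands */
}
```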
-
-
Dongjiu Geng authored
kvm_vcpu_dabt_isextabt() tries to match a full fault syndrome, but calls kvm_vcpu_trap_get_fault_type() that only returns the fault class, thus reducing the scope of the check. This doesn't cause any observable bug yet as we end up matching a closely related syndrome for which we return the same value. Using kvm_vcpu_trap_get_fault() instead fixes it for good. Signed-off-by: Dongjiu Geng <gengdongjiu@huawei.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
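The arm64 side of the fix looks roughly like this (a sketch; the FSC_* constants are the external-abort syndromes from the era's kvm_arm headers, and the arm side is analogous):

```c
static inline bool kvm_vcpu_dabt_isextabt(const struct kvm_vcpu *vcpu)
{
	switch (kvm_vcpu_trap_get_fault(vcpu)) {	/* was: ..._get_fault_type() */
	case FSC_SEA:
	case FSC_SEA_TTW0:
	case FSC_SEA_TTW1:
	case FSC_SEA_TTW2:
	case FSC_SEA_TTW3:
	case FSC_SECC:
	case FSC_SECC_TTW0:
	case FSC_SECC_TTW1:
	case FSC_SECC_TTW2:
	case FSC_SECC_TTW3:
		return true;
	default:
		return false;
	}
}
```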
-
Marc Zyngier authored
Both arm and arm64 implementations are capable of injecting faults, and yet have completely divergent implementations, leading to different bugs and reduced maintainability. Let's elect the arm64 version as the canonical one and move it into aarch32.c, which is common to both architectures. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Eric Auger authored
On reset we clear the valid bits of GITS_CBASER and GITS_BASER<n>. We also clear command queue registers and free the cache (device, collection, and lpi lists). As we need to take the same locks as save/restore functions, we create a vgic_its_ctrl() wrapper that handles KVM_DEV_ARM_VGIC_GRP_CTRL group functions. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Eric Auger authored
At the moment, the in-kernel emulated ITS is not properly reset. On guest restart/reset some registers keep their old values and internal structures like the device, ITE, and collection lists are not freed. This may lead to various bugs, among them an incorrect state backup or a failure when saving the ITS state at an early guest boot stage. This patch documents a new attribute, KVM_DEV_ARM_ITS_CTRL_RESET, in the KVM_DEV_ARM_VGIC_GRP_CTRL group. Upon this action, we reset the registers, especially those pointing to tables previously allocated by the guest, and free the internal data structures storing the list of devices, collections and LPIs. The usual approach for device reset of having userspace write the reset values of the registers to the kernel via the register read/write APIs doesn't work for the ITS because it has some internal state (caches) which is not exposed as registers, and there is no register interface for "drop cached data without writing it back to RAM". So we need a KVM API which mimics the hardware's reset line, to provide the equivalent behaviour to a "pull the power cord out of the back of the machine" reset. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Eric Auger <eric.auger@redhat.com> Reported-by: wanghaibin <wanghaibin.wang@huawei.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
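From userspace, triggering the reset is a single device-attr ioctl; a minimal sketch, assuming its_fd is the ITS device fd returned by KVM_CREATE_DEVICE:

```c
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Reset the in-kernel ITS, mimicking the hardware reset line */
static int kvm_its_reset(int its_fd)
{
	struct kvm_device_attr attr = {
		.group = KVM_DEV_ARM_VGIC_GRP_CTRL,
		.attr  = KVM_DEV_ARM_ITS_CTRL_RESET,
	};

	return ioctl(its_fd, KVM_SET_DEVICE_ATTR, &attr);
}
```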
-
Eric Auger authored
When the GITS_BASER<n>.Valid gets cleared, the data structures in guest RAM are not valid anymore. The device, collection and LPI lists stored in the in-kernel ITS represent the same information in some form of cache. So let's void the cache. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
wanghaibin authored
We create two new functions that free the device and collection lists. They are currently called by vgic_its_destroy() and other callers will be added in subsequent patches. We also remove the check on its->device_list.next. Lists are initialized in vgic_create_its() and the device is added to the device list only if the latter succeeds. vgic_its_destroy is the device destroy op; it is called by kvm_destroy_devices(), which loops over all created devices, so at that point the list is initialized. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: wanghaibin <wanghaibin.wang@huawei.com> Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
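The two helpers are essentially list teardown loops; a sketch close to what the message describes, assuming the existing per-entry free routines of the vgic-its code:

```c
static void vgic_its_free_device_list(struct kvm *kvm, struct vgic_its *its)
{
	struct its_device *cur, *temp;

	list_for_each_entry_safe(cur, temp, &its->device_list, dev_list)
		vgic_its_free_device(kvm, cur);
}

static void vgic_its_free_collection_list(struct kvm *kvm, struct vgic_its *its)
{
	struct its_collection *cur, *temp;

	list_for_each_entry_safe(cur, temp, &its->collection_list, coll_list)
		vgic_its_free_collection(its, cur->collection_id);
}
```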
-
Eric Auger authored
Let's remove kvm_its_unmap_device and use kvm_its_free_device as both functions are identical. Signed-off-by: Eric Auger <eric.auger@redhat.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Christoffer Dall authored
After being lazy with saving/restoring the timer state, we defer that work to vcpu_load and vcpu_put, which ensure that the timer state is loaded on the hardware timers whenever the VCPU runs. Unfortunately, we are failing to do that the first time vcpu_load() runs, because the timer has not yet been enabled at that time. As long as the initialized timer state matches what happens to be in the hardware (a disabled timer, because we never leave the timer screaming), this does not show up as a problem, but is nevertheless incorrect. The solution is simple; disable preemption while setting the timer to be enabled, and call the timer load function when first enabling the timer. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
Christoffer Dall authored
kvm_timer_should_fire() can be called in two different situations from kvm_vcpu_block(). The first case is before calling kvm_timer_schedule(), used for wait polling, and in this case the VCPU thread is running and the timer state is loaded onto the hardware so all we have to do is check if the virtual interrupt lines are asserted, because the timer interrupt handler functions will raise those lines as appropriate. The second case is inside the wait loop of kvm_vcpu_block(), where we have already called kvm_timer_schedule() and therefore the hardware will be disabled and the software view of the timer state is up to date (timer->loaded is false), and so we can simply check if the timer should fire by looking at the software state. Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
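The two situations map directly onto the loaded flag; an illustrative sketch only, where the two helper names are hypothetical stand-ins for the real code:

```c
bool kvm_timer_should_fire(struct kvm_vcpu *vcpu)
{
	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;

	if (timer->loaded) {
		/*
		 * Case 1: wait polling, before kvm_timer_schedule(). The
		 * hardware timers are live and the interrupt handlers keep
		 * the virtual interrupt lines up to date, so just check them.
		 */
		return timer_irq_lines_asserted(vcpu);		/* hypothetical */
	}

	/*
	 * Case 2: inside the wait loop, after kvm_timer_schedule(). The
	 * hardware is disabled and the software view is current, so compare
	 * the compare values against the counter.
	 */
	return timer_should_fire_from_sw_state(vcpu);		/* hypothetical */
}
```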
-
Christoffer Dall authored
Now that both the vtimer and the ptimer, whether using the in-kernel vgic emulation or a userspace IRQ chip, are driven by the timer signals and at the vcpu load/put boundaries instead of recomputing the timer state at every entry/exit to/from the guest, we can get rid of the flush hwstate function entirely. Signed-off-by: Christoffer Dall <cdall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
There is no need to schedule and cancel a hrtimer when entering and exiting the guest, because we know when the physical timer is going to fire when the guest programs it, and we can simply program the hrtimer at that point. Now when the register modifications from the guest go through the kvm_arm_timer_set/get_reg functions, which always call kvm_timer_update_state(), we can simply consider the timer state in this function and schedule and cancel the timers as needed. This avoids looking at the physical timer emulation state when entering and exiting the VCPU, allowing for faster servicing of the VM when needed. Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
We are about to call phys_timer_emulate() from kvm_timer_update_state() and modify phys_timer_emulate() at the same time. Moving the function and modifying it in a single patch makes the diff hard to read, so do this separately first. No functional change. Signed-off-by: Christoffer Dall <cdall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
When trapping on a guest access to one of the timer registers, we were messing with the internals of the timer state from the sysregs handling code, and that logic was about to receive more added complexity when optimizing the timer handling code. Therefore, since we already have timer register access functions (to access registers from userspace), reuse those for the timer register traps from a VM and let the timer code maintain its own consistency. Signed-off-by: Christoffer Dall <cdall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
Add support for the physical timer registers in kvm_arm_timer_set_reg and kvm_arm_timer_get_reg so that these functions can be reused to interact with the rest of the system. Note that this paves part of the way for the physical timer state save/restore, but we still need to add those registers to KVM_GET_REG_LIST before we support migrating the physical timer state. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Christoffer Dall authored
We don't need to save and restore the hardware timer state and examine if it generates interrupts on every entry/exit to the guest. The timer hardware is perfectly capable of telling us when it has expired by signaling interrupts. When taking a vtimer interrupt in the host, we don't want to mess with the timer configuration, we just want to forward the physical interrupt to the guest as a virtual interrupt. We can use the split priority drop and deactivate feature of the GIC to do this, which leaves an EOI'ed interrupt active on the physical distributor, making sure we don't keep taking timer interrupts which would prevent the guest from running. We can then forward the physical interrupt to the VM using the HW bit in the LR of the GIC, like we do already, which lets the guest directly deactivate both the physical and virtual timer simultaneously, allowing the timer hardware to exit the VM and generate a new physical interrupt when the timer output is again asserted later on. We do need to capture this state when migrating VCPUs between physical CPUs, however, which we use the vcpu put/load functions for, which are called through preempt notifiers whenever the thread is scheduled away from the CPU or called directly if we return from the ioctl to userspace. One caveat is that we have to save and restore the timer state in both kvm_timer_vcpu_[put/load] and kvm_timer_[schedule/unschedule], because we can have the following flows: 1. kvm_vcpu_block 2. kvm_timer_schedule 3. schedule 4. kvm_timer_vcpu_put (preempt notifier) 5. schedule (vcpu thread gets scheduled back) 6. kvm_timer_vcpu_load (preempt notifier) 7. kvm_timer_unschedule And a version where we don't actually call schedule: 1. kvm_vcpu_block 2. kvm_timer_schedule 7. kvm_timer_unschedule Since kvm_timer_[schedule/unschedule] may not be followed by put/load, but put/load also may be called independently, we call the timer save/restore functions from both paths. Since they rely on the loaded flag to never save/restore when unnecessary, this doesn't cause any harm, and we ensure that all invocations of either set of functions work as intended. An added benefit beyond not having to read and write the timer sysregs on every entry and exit is that we no longer have to actively write the active state to the physical distributor, because we configured the irq for the vtimer to only get a priority drop when handling the interrupt in the GIC driver (we called irq_set_vcpu_affinity()), and the interrupt stays active after firing on the host. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
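A condensed sketch of how both paths share the save/restore helpers through the loaded flag (helper names and bodies are abridged and illustrative, not the exact upstream code):

```c
static void timer_save_state(struct kvm_vcpu *vcpu)
{
	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;

	if (!timer->loaded)
		return;				/* nothing live on the hardware */

	/* ... read CNTV_CTL/CNTV_CVAL back into the vtimer context ... */
	timer->loaded = false;
}

static void timer_restore_state(struct kvm_vcpu *vcpu)
{
	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;

	if (timer->loaded)
		return;				/* already live */

	/* ... program CNTV_CVAL/CNTV_CTL from the vtimer context ... */
	timer->loaded = true;
}

/* Both the preempt-notifier path and the block path funnel through these */
void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu)    { timer_save_state(vcpu); }
void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu)   { timer_restore_state(vcpu); }
void kvm_timer_schedule(struct kvm_vcpu *vcpu)    { timer_save_state(vcpu);    /* + arm bg_timer */ }
void kvm_timer_unschedule(struct kvm_vcpu *vcpu)  { timer_restore_state(vcpu); /* + cancel bg_timer */ }
```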
-
Christoffer Dall authored
As we are about to take physical interrupts for the virtual timer on the host but want to leave those active while running the VM (and let the VM deactivate them), we need to set the vtimer PPI affinity accordingly. Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
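Concretely, this amounts to one call during timer initialization; a sketch where the wrapper function is illustrative and the irq_set_vcpu_affinity() call is the substance of the change:

```c
/* Associate the vtimer PPI with the running VCPUs so the GIC driver only
 * performs a priority drop (no deactivate) when handling it in the host. */
static int vtimer_set_ppi_vcpu_affinity(unsigned int host_vtimer_irq)
{
	int err;

	err = irq_set_vcpu_affinity(host_vtimer_irq, kvm_get_running_vcpus());
	if (err)
		kvm_err("kvm_arch_timer: error setting vcpu affinity\n");

	return err;
}
```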
-
Christoffer Dall authored
As we are about to be lazy with saving and restoring the timer registers, we prepare by moving all possible timer configuration logic out of the hyp code. All virtual timer registers can be programmed from EL1 and since the arch timer is always a level triggered interrupt we can safely do this with interrupts disabled in the host kernel on the way to the guest without taking vtimer interrupts in the host kernel (yet). The downside is that the cntvoff register can only be programmed from hyp mode, so we jump into hyp mode and back to program it. This is also safe, because the host kernel doesn't use the virtual timer in the KVM code. It may add a small performance penalty, but only until following commits where we move this operation to vcpu load/put. Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
We were using the same hrtimer for emulating the physical timer and for making sure a blocking VCPU thread would be eventually woken up. That worked fine in the previous arch timer design, but as we are about to actually use the soft timer expire function for the physical timer emulation, change the logic to use a dedicated hrtimer. This has the added benefit of not having to cancel any work in the sync path, which in turn allows us to run the flush and sync with IRQs disabled. Note that the hrtimer used to program the host kernel's timer to generate an exit from the guest when the emulated physical timer fires never has to inject any work, and to share the soft_timer_cancel() function with the bg_timer, we change the function to only cancel any pending work if the pointer to the work struct is not null. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
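The shared cancel helper then becomes a small guarded routine; a sketch of what the message describes:

```c
static void soft_timer_cancel(struct hrtimer *hrt, struct work_struct *work)
{
	hrtimer_cancel(hrt);

	/* The emulated-ptimer hrtimer passes a NULL work pointer */
	if (work)
		cancel_work_sync(work);
}
```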
-
Christoffer Dall authored
As we are about to play tricks with the timer to be more lazy in saving and restoring state, we need to move the timer sync and flush functions under a disabled irq section and since we have to flush the vgic state after the timer and PMU state, we do the whole flush/sync sequence with disabled irqs. The only downside is a slightly longer delay before being able to process hardware interrupts and run softirqs. Signed-off-by: Christoffer Dall <cdall@linaro.org> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
As we are about to introduce a separate hrtimer for the physical timer, call this timer bg_timer, because we refer to this timer as the background timer in the code and comments elsewhere. Signed-off-by: Christoffer Dall <cdall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
We are about to add an additional soft timer to the arch timer state for a VCPU and would like to be able to reuse the functions to program and cancel a timer, so we make them slightly more generic and rename to make it more clear that these functions work on soft timers and not the hardware resource that this code is managing. The armed flag on the timer state is only used to assert a condition, and we don't rely on this assertion in any meaningful way, so we can simply get rid of this flag and slightly reduce complexity. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Christoffer Dall authored
Some systems without proper firmware and/or hardware description data don't support the split EOI and deactivate operation. On such systems, we cannot leave the physical interrupt active after the timer handler on the host has run, so we cannot support KVM with an in-kernel GIC with the timer changes we are about to introduce. This patch makes sure that trying to initialize the KVM GIC code will fail on such systems. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Christoffer Dall authored
We are about to optimize our timer handling logic which involves injecting irqs to the vgic directly from the irq handler. Unfortunately, the injection path can take any AP list lock and irq lock and we must therefore make sure to use spin_lock_irqsave wherever interrupts are enabled and we are taking any of those locks, to avoid deadlocking between process context and the ISR. This changes a lot of the VGIC code, but the good news is that the changes are mostly mechanical. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
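The mechanical pattern repeated throughout the vgic looks like this (the enclosing function is only a placeholder for illustration):

```c
static void vgic_irq_update_example(struct vgic_irq *irq)
{
	unsigned long flags;

	spin_lock_irqsave(&irq->irq_lock, flags);	/* was: spin_lock() */

	/* ... inspect/modify irq state, possibly grabbing an AP list lock ... */

	spin_unlock_irqrestore(&irq->irq_lock, flags);	/* was: spin_unlock() */
}
```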
-
Christoffer Dall authored
If the vgic is not initialized, don't try to grab its spinlocks or traverse its data structures. This is important because we soon have to start considering the active state of virtual interrupts when doing vcpu_load, which may happen early on before the vgic is initialized. Signed-off-by: Christoffer Dall <cdall@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
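The guard itself is a one-liner at the top of the affected entry points; a sketch (the exact placement is illustrative):

```c
void kvm_vgic_load(struct kvm_vcpu *vcpu)
{
	if (unlikely(!vgic_initialized(vcpu->kvm)))
		return;

	/* ... only now is it safe to take vgic locks and walk its lists ... */
}
```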
-
Christoffer Dall authored
Using the physical counter allows KVM to retain the offset between the virtual and physical counter as long as it is actively running a VCPU. As soon as a VCPU is released, another thread is scheduled or we start running userspace applications, we reset the offset to 0, so that userspace accessing the virtual timer can still read the virtual counter and get the same view of time as the kernel. This opens up potential improvements for KVM performance, but we have to make a few adjustments to preserve system consistency. Currently get_cycles() is hardwired to arch_counter_get_cntvct() on arm64, but as we move to using the physical timer for the in-kernel time-keeping on systems that boot in EL2, we should use the same counter for get_cycles() as for other in-kernel timekeeping operations. Similarly, the implementation of arch_timer_set_next_event_phys() is modified to use the counter specific to the timer being programmed. VHE kernels or kernels continuing to use the virtual timer are unaffected. Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@linaro.org>
-
Christoffer Dall authored
As we are about to use the physical counter on arm64 systems that have KVM support, implement arch_counter_get_cntpct() and the associated errata workaround functionality for stable timer reads. Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
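On arm64 this boils down to reading CNTPCT_EL0 through the same stable-read machinery already used for the virtual counter; a sketch of the accessor, assuming the arch_timer_reg_read_stable() helper handles the errata workaround:

```c
static __always_inline u64 arch_counter_get_cntpct(void)
{
	isb();
	/* arch_timer_reg_read_stable() applies the per-CPU errata workaround */
	return arch_timer_reg_read_stable(cntpct_el0);
}
```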
-
- 02 Nov, 2017 6 commits
-
-
Thomas Gleixner authored
Merge tag 'irqchip-4.15-2' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into irq/core Pull the second batch of irqchip updates for 4.15 from Marc Zyngier: - A number of MIPS GIC updates and cleanups - One GICv4 update - Another firmware workaround for GICv2 - Support for Meson8 GPIOs - Tiny documentation fix
-
Paul Burton authored
We have 2 bitmaps used to keep track of interrupts dedicated to IPIs in the MIPS GIC irqchip driver. These bitmaps are only used from the one compilation unit of that driver, and so can be made static. Do so in order to avoid polluting the symbol table & global namespace. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-mips@linux-mips.org Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Paul Burton authored
The gic_set_type() function included writes to the MIPS GIC polarity, trigger & dual-trigger registers in each case of a switch statement determining the IRQ's type. This is all well & good when we only have a single cluster & thus a single GIC whose registers we want to update. It will lead to significant duplication once we have multi-cluster support & multiple GICs to update. Refactor this such that we determine values for the polarity, trigger & dual-trigger registers and then have a single set of register writes following the switch statement. This will allow us to write the same values to each GIC in a multi-cluster system in a later patch, rather than needing to duplicate more register writes in each case. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-mips@linux-mips.org Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
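The resulting shape of gic_set_type() is roughly the following (a sketch only: the pol/trig/dual constants and the change_gic_*() write helpers are approximations of the driver's accessors, and most switch cases are elided):

```c
static int gic_set_type(struct irq_data *d, unsigned int type)
{
	irq_hw_number_t irq = GIC_HWIRQ_TO_SHARED(d->hwirq);
	unsigned long flags;
	u32 pol, trig, dual;
	bool is_edge;

	spin_lock_irqsave(&gic_lock, flags);

	/* The switch now only picks register values */
	switch (type & IRQ_TYPE_SENSE_MASK) {
	case IRQ_TYPE_EDGE_FALLING:
		pol = GIC_POL_FALLING_EDGE;
		trig = GIC_TRIG_EDGE;
		dual = GIC_DUAL_SINGLE;
		is_edge = true;
		break;
	/* ... the other sense types set pol/trig/dual the same way ... */
	default:
		spin_unlock_irqrestore(&gic_lock, flags);
		return -EINVAL;
	}

	/* One set of register writes after the switch, easy to repeat per GIC */
	change_gic_pol(irq, pol);
	change_gic_trig(irq, trig);
	change_gic_dual(irq, dual);

	/* ... update the flow handler for edge vs level as before ... */
	spin_unlock_irqrestore(&gic_lock, flags);

	return 0;
}
```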
-
Paul Burton authored
Following the past few patches nothing uses the gic_vpes variable any longer. Remove the dead code. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-mips@linux-mips.org Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Paul Burton authored
Reserving a number of IPIs based upon the number of VPs reported by the GIC makes little sense for a few reasons: - The kernel may have been configured with NR_CPUS less than the number of VPs in the cluster, in which case using gic_vpes causes us to reserve more interrupts for IPIs than we will possibly use. - If a kernel is configured without support for multi-threading & runs on a system with multi-threading & multiple VPs per core then we'll similarly reserve more interrupts for IPIs than we will possibly use. - In systems with multiple clusters the GIC can only provide us with the number of VPs in its cluster, not across all clusters. In this case we'll reserve fewer interrupts for IPIs than we need. Fix these issues by using num_possible_cpus() instead, which in all cases is actually indicative of how many IPIs we may need. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-mips@linux-mips.org Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
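The reservation then keys off the possible-CPU count; a sketch under the assumption that two shared interrupts are reserved per CPU, where the wrapper is illustrative and the bitmap/symbol names follow the driver:

```c
/* Illustrative helper; in the driver this lives in the probe path */
static void gic_reserve_ipi_range(void)
{
	/* Two IPIs (e.g. call-function + reschedule) per possible CPU */
	unsigned int num_ipis = 2 * num_possible_cpus();

	bitmap_set(ipi_resrv, gic_shared_intrs - num_ipis, num_ipis);
}
```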
-
Paul Burton authored
Rather than configuring EIC mode for all CPUs during boot, configure it locally on each when they come online. This will become important with multi-cluster support, since clusters may be powered on & off (for example via hotplug) and would lose the EIC configuration when powered off. Signed-off-by: Paul Burton <paul.burton@mips.com> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-mips@linux-mips.org Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-