19 Mar, 2018 (40 commits)
-
Marc Zyngier authored
So far, the branch from the vector slots to the main vectors can at most be 4GB from the main vectors (the reach of ADRP), and this distance is known at compile time. If we were to remap the slots to an unrelated VA, things would break badly. A way to achieve VA independence would be to load the absolute address of the vectors (__kvm_hyp_vector), either using a constant pool or a series of movs, followed by an indirect branch. This patch implements the latter solution, using another instance of a patching callback. Note that since we have to save a register pair on the stack, we branch to the *second* instruction in the vectors in order to compensate for it. This also results in having to adjust this balance in the invalid vector entry point. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
So far, we only reserve a single instruction in the BPI template in order to branch to the vectors. As we're going to stuff a few more instructions there, let's reserve a total of 5 instructions, which we're going to patch later on as required. We also introduce a small refactor of the vectors themselves, so that we stop carrying the target branch around. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
There is no reason why the BP hardening vectors shouldn't be part of the HYP text at compile time, rather than being mapped at runtime. Also introduce a new config symbol that controls the compilation of bpi.S. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
All our useful entry points into the hypervisor start by saving x0 and x1 on the stack. Let's move those saves into the vectors by introducing macros that annotate whether a vector is valid or not, thus indicating whether we want to stash registers or not. The only drawback is that we now also stash registers for el2_error, but this should never happen, and we pop them back right at the start of the handling sequence. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
We currently provide the hyp-init code with a kernel VA, and expect it to turn it into a HYP VA by itself. As we're about to provide the hypervisor with mappings that are not necessarily in the memory range, let's move the kern_hyp_va macro to kvm_get_hyp_vector. No functional change. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
Update the documentation to reflect the new tricks we play on the EL2 mappings... Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
The main idea behind randomising the EL2 VA is that we usually have a few spare bits between the most significant bit of the VA mask and the most significant bit of the linear mapping. Those bits could be a bunch of zeroes, and could be useful to move things around a bit. Of course, the more memory you have, the less randomisation you get... Alternatively, these bits could be the result of KASLR, in which case they are already random. But it would be nice to have a *different* randomisation, just to make the job of a potential attacker a bit more difficult. Inserting these random bits is a bit involved. We don't have a spare register (short of rewriting all the kern_hyp_va call sites), and the immediate we want to insert is too random to be used with the ORR instruction. The best option I could come up with is the following sequence:

    and x0, x0, #va_mask
    ror x0, x0, #first_random_bit
    add x0, x0, #(random & 0xfff)
    add x0, x0, #(random >> 12), lsl #12
    ror x0, x0, #(63 - first_random_bit)

making it a fairly long sequence, but one that a decent CPU should be able to execute without breaking a sweat. It is of course NOPed out on VHE. The last 4 instructions can also be turned into NOPs if it appears that there are no free bits to use. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
As we're moving towards a much more dynamic way to compute our HYP VA, let's express the mask in a slightly different way. Instead of comparing the idmap position to the "low" VA mask, we directly compute the mask by taking into account the idmap's (VA_BITS - 1) bit. No functional change. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
The encoder for ADD/SUB (immediate) can only cope with 12-bit immediates, while there is an encoding for a 12-bit immediate shifted by 12 bits to the left. Let's fix this small oversight by allowing the LSL_12 bit to be set. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
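As an illustration, the existing encoder in arch/arm64/kernel/insn.c can now be handed the pre-shifted immediate and set the LSL #12 bit itself; a hedged usage sketch (treat the exact in-encoder handling as an assumption):

    /*
     * Encode "add x0, x0, #0x123, lsl #12": the immediate has its low
     * 12 bits clear, so the encoder can shift it down and set LSL_12
     * instead of rejecting it.
     */
    u32 insn = aarch64_insn_gen_add_sub_imm(AARCH64_INSN_REG_0,
                                            AARCH64_INSN_REG_0,
                                            0x123 << 12,
                                            AARCH64_INSN_VARIANT_64BIT,
                                            AARCH64_INSN_ADSB_ADD);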
-
Marc Zyngier authored
Add an encoder for the EXTR instruction, which also implements the ROR variant (where Rn == Rm). Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
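Since "ror Rd, Rn, #lsb" is simply "extr Rd, Rn, Rn, #lsb", a call encoding "ror x0, x1, #8" could look like this (argument order assumed from the style of the surrounding insn API):

    u32 insn = aarch64_insn_gen_extr(AARCH64_INSN_VARIANT_64BIT,
                                     AARCH64_INSN_REG_1,   /* Rm */
                                     AARCH64_INSN_REG_1,   /* Rn == Rm: ROR */
                                     AARCH64_INSN_REG_0,   /* Rd */
                                     8);                   /* lsb */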
-
Marc Zyngier authored
We so far mapped our HYP IO (which is essentially the GICv2 control registers) using the same method as for memory. It recently appeared that this is a bit unsafe: we compute the HYP VA using the kern_hyp_va helper, but that helper is only designed to deal with kernel VAs coming from the linear map, and not from the vmalloc region... This could in turn cause some bad aliasing between the two, amplified by the upcoming VA randomisation. A solution is to come up with our very own basic VA allocator for MMIO. Since half of the HYP address space only contains a single page (the idmap), we have plenty to borrow from. Let's use the idmap as a base, and allocate downwards from it. GICv2 now lives on the other side of the great VA barrier. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
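A minimal sketch of such a downward allocator, under the assumption of a simple bump pointer starting at the idmap base (the in-tree version also deals with alignment and serialisation):

    static unsigned long io_map_base;   /* initialised to the idmap base VA */

    /* Carve the next IO mapping out of the space below the idmap page. */
    static int hyp_alloc_io_va(size_t size, unsigned long *haddr)
    {
        unsigned long base;

        size = PAGE_ALIGN(size);
        base = io_map_base - size;      /* grow downwards */
        if (base >= io_map_base)        /* wrapped: space exhausted */
            return -ENOMEM;

        io_map_base = base;
        *haddr = base;
        return 0;
    }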
-
Marc Zyngier authored
Unmapping the idmap range using 52bit PA is quite broken, as we don't take into account the right number of PGD entries, and rely on PTRS_PER_PGD. The result is that pgd_index() truncates the address, and we end up in the weeds. Let's introduce a new unmap_hyp_idmap_range() that knows about this, together with a kvm_pgd_index() helper, which hides a bit of the complexity of the issue. Fixes: 98732d1b ("KVM: arm/arm64: fix HYP ID map extension to 52 bits") Reported-by: James Morse <james.morse@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
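The gist of the helper, as a sketch (the real version derives the entry count from the extended idmap configuration rather than taking it as a parameter):

    /*
     * Index a pgd by the number of entries it actually has instead of
     * the compile-time PTRS_PER_PGD, so that 52-bit idmap addresses
     * are not silently truncated.
     */
    #define kvm_pgd_index(addr, ptrs_per_pgd) \
        (((addr) >> PGDIR_SHIFT) & ((ptrs_per_pgd) - 1))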
-
Marc Zyngier authored
Although the idmap section of KVM can only be at most 4kB and must be aligned on a 4kB boundary, the rest of the code expects it to be page aligned. Things get messy when tearing down the HYP page tables when PAGE_SIZE is 64K, and the idmap section isn't 64K aligned. Let's fix this by computing aligned boundaries that the HYP code will use. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: James Morse <james.morse@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
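The boundary computation amounts to something like this (variable names are assumptions; the section symbols are the real linker-script ones):

    /* Page-align the idmap section limits for map/unmap purposes. */
    hyp_idmap_start = __pa_symbol(__hyp_idmap_text_start) & PAGE_MASK;
    hyp_idmap_end   = PAGE_ALIGN(__pa_symbol(__hyp_idmap_text_end));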
-
Marc Zyngier authored
As we're about to change the way we map devices at HYP, we need to move away from kern_hyp_va on an IO address. One way of achieving this is to store the VAs in kvm_vgic_global_state, and use that directly from the HYP code. This requires a small change to create_hyp_io_mappings so that it can also return a HYP VA. We take this opportunity to nuke the vctrl_base field in the emulated distributor, as it is not used anymore. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
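The resulting interface could be summarised as follows (prototype reconstructed from the description, so treat it as an approximation):

    /*
     * Map a device at HYP: ioremap it, create the HYP mapping, and hand
     * back both the kernel VA and the HYP VA so callers can stash the
     * latter (e.g. in kvm_vgic_global_state) for direct use at EL2.
     */
    int create_hyp_io_mappings(phys_addr_t phys_addr, size_t size,
                               void __iomem **kaddr,
                               void __iomem **haddr);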
-
Marc Zyngier authored
Both HYP IO mappings call ioremap, followed by create_hyp_io_mappings. Let's move the ioremap call into create_hyp_io_mappings itself, which simplifies the code a bit and allows for further refactoring. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
Displaying the HYP VA information is slightly counterproductive when using VA randomization. Turn it into a debug feature only, and adjust the last displayed value to reflect the top of RAM instead of ~0. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
kvm_vgic_global_state is part of the read-only section, and is usually accessed using a PC-relative address generation (adrp + add). It is thus useless to use kern_hyp_va() on it, and actively problematic if kern_hyp_va() becomes non-idempotent. On the other hand, there is no way that the compiler is going to guarantee that such access is always PC relative. So let's bite the bullet and provide our own accessor. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
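The accessor is essentially a macro that forces an adrp/add pair; this mirrors the hyp_symbol_addr() pattern introduced by this series (modulo details):

    /*
     * Obtain the address of a symbol via PC-relative address
     * generation, regardless of how the compiler would have
     * materialised it.
     */
    #define hyp_symbol_addr(s)                              \
        ({                                                  \
            typeof(s) *addr;                                \
            asm("adrp %0, %1\n"                             \
                "add  %0, %0, :lo12:%1\n"                   \
                : "=r" (addr) : "S" (&s));                  \
            addr;                                           \
        })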
-
Marc Zyngier authored
Now that we can dynamically compute the kernel/hyp VA mask, there is no need for a feature flag to trigger the alternative patching. Let's drop the flag and everything that depends on it. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
So far, we're using a complicated sequence of alternatives to patch the kernel/hyp VA mask on non-VHE, and NOP out the masking altogether when on VHE. The newly introduced dynamic patching gives us the opportunity to simplify that code by patching a single instruction with the correct mask (instead of the mind-bending cumulative masking we have at the moment) or even a single NOP on VHE. This also adds some initial code that will allow the patching callback to switch to a more complex patching sequence. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
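A condensed sketch of the callback (structure per the description; the helper usage is an assumption based on the arm64 insn API):

    /*
     * Patch each site with either a NOP (VHE) or a single AND with the
     * run-time computed mask, preserving the original register fields.
     */
    void __init kvm_update_va_mask(struct alt_instr *alt, __le32 *origptr,
                                   __le32 *updptr, int nr_inst)
    {
        int i;

        for (i = 0; i < nr_inst; i++) {
            u32 rd, rn, insn, oinsn;

            oinsn = le32_to_cpu(origptr[i]);
            if (has_vhe()) {
                insn = aarch64_insn_gen_nop();
            } else {
                rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, oinsn);
                rn = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RN, oinsn);
                insn = aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_AND,
                                                          AARCH64_INSN_VARIANT_64BIT,
                                                          rn, rd, va_mask);
            }
            updptr[i] = cpu_to_le32(insn);
        }
    }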
-
Marc Zyngier authored
We lack a way to encode operations such as AND, ORR, EOR that take an immediate value. Doing so is quite involved, and is all about reverse engineering the decoding algorithm described in the pseudocode function DecodeBitMasks(). This has been tested by feeding it all the possible literal values and comparing the output with that of GAS. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
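A typical call into the new encoder (register choice and literal are illustrative; the value must be one of the repeating-pattern literals DecodeBitMasks() can represent):

    /*
     * Encode "and x0, x1, #0xffff0000ffff0000". An unencodable literal
     * (such as 0 or ~0) yields AARCH64_BREAK_FAULT instead.
     */
    u32 insn = aarch64_insn_gen_logical_immediate(AARCH64_INSN_LOGIC_AND,
                                                  AARCH64_INSN_VARIANT_64BIT,
                                                  AARCH64_INSN_REG_1,   /* Rn */
                                                  AARCH64_INSN_REG_0,   /* Rd */
                                                  0xffff0000ffff0000);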
-
Marc Zyngier authored
We're missing a way to generate the encoding of the N immediate, which is only a single bit used in a number of instructions that take an immediate. Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Marc Zyngier authored
We've so far relied on a patching infrastructure that only gave us a single alternative, without any way to provide a range of potential replacement instructions. For a single feature, this is an all or nothing thing. It would be interesting to have a more fine-grained way of patching the kernel though, where we could dynamically tune the code that gets injected. In order to achieve this, let's introduce a new form of dynamic patching, associating a callback with a patching site. This callback gets source and target locations of the patching request, as well as the number of instructions to be patched. Dynamic patching is declared with the new ALTERNATIVE_CB and alternative_cb directives:

    asm volatile(ALTERNATIVE_CB("mov %0, #0\n", callback)
                 : "r" (v));

or

    alternative_cb callback
        mov x0, #0
    alternative_cb_end

where callback is the C function computing the alternative. Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
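The callback's shape, reconstructed from the description above (the exact typedef in the patch may differ slightly):

    /*
     * A callback receives the source and target locations plus the
     * number of instructions, and writes the replacement itself.
     */
    typedef void (*alternative_cb_t)(struct alt_instr *alt,
                                     __le32 *origptr, __le32 *updptr,
                                     int nr_inst);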
-
Christoffer Dall authored
We can finally get rid of all calls to the VGICv3 save/restore functions when the AP lists are empty on VHE systems. This requires carefully factoring out trap configuration from saving and restoring state, and carefully choosing what to do on the VHE and non-VHE path. One of the challenges is that we cannot save/restore the VMCR lazily because we can only write the VMCR when ICC_SRE_EL1.SRE is cleared when emulating a GICv2-on-GICv3, since otherwise all Group-0 interrupts end up being delivered as FIQ. To solve this problem, and still provide fast performance in the fast path of exiting a VM when no interrupts are pending (which also optimizes the latency for actually delivering virtual interrupts coming from physical interrupts), we orchestrate a dance of only doing the activate/deactivate traps in vgic load/put for VHE systems (which can have ICC_SRE_EL1.SRE cleared when running in the host), and doing the configuration on every round-trip on non-VHE systems. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
The APRs can only have bits set when the guest acknowledges an interrupt in the LR and can only have a bit cleared when the guest EOIs an interrupt in the LR. Therefore, if we have no LRs with any pending/active interrupts, the APR cannot change value and there is no need to clear it on every exit from the VM (hint: it will have already been cleared when we exited the guest the last time with the LRs all EOIed). The only case we need to take care of is when we migrate the VCPU away from a CPU or migrate a new VCPU onto a CPU, or when we return to userspace to capture the state of the VCPU for migration. To make sure this works, factor out the APR save/restore functionality into separate functions called from the VCPU (and by extension VGIC) put/load hooks. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
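A sketch of the factored-out hooks for the GICv2 case (GICH_APR is the real register offset; the function bodies are trimmed approximations):

    /* Restore the active priorities when the VCPU is loaded on a CPU... */
    void vgic_v2_load(struct kvm_vcpu *vcpu)
    {
        struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
        void __iomem *base = kvm_vgic_global_state.vctrl_base;

        writel_relaxed(cpu_if->vgic_apr, base + GICH_APR);
    }

    /* ...and save them when it is put (migration, userspace access). */
    void vgic_v2_put(struct kvm_vcpu *vcpu)
    {
        struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
        void __iomem *base = kvm_vgic_global_state.vctrl_base;

        cpu_if->vgic_apr = readl_relaxed(base + GICH_APR);
    }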
-
Christoffer Dall authored
Just like we can program the GICv2 hypervisor control interface directly from the core vgic code, we can do the same for the GICv3 hypervisor control interface on VHE systems. We do this by simply calling the save/restore functions when we have VHE and we can then get rid of the save/restore function calls from the VHE world switch function. One caveat is that we now write GICv3 system register state before the potential early exit path in the run loop, and because we sync back state in the early exit path, we have to ensure that we read a consistent GIC state from the sync path, even though we have never actually run the guest with the newly written GIC state. We solve this by inserting an ISB in the early exit path. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
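Schematically, the ordering concern looks like this (control flow and names are illustrative, not the patch's code):

    __vgic_v3_restore_state(vcpu);          /* GIC state written early */

    if (need_early_exit) {
        /*
         * The sysreg writes above may not have taken effect yet;
         * synchronise before the sync path reads the state back.
         */
        isb();
        goto sync_state_and_return;
    }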
-
Christoffer Dall authored
The vgic-v2-sr.c file now only contains the logic to replay unaligned accesses to the virtual CPU interface on 16K and 64K page systems, which is only relevant on 64-bit platforms. Therefore move this file to the arm64 KVM tree, remove the compile directive from the 32-bit side makefile, and remove the ifdef in the C file. Since this file also no longer saves/restores anything, rename the file to vgic-v2-cpuif-proxy.c to more accurately describe the logic in this file. Reviewed-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
We can program the GICv2 hypervisor control interface logic directly from the core vgic code, and do the save/restore directly from the flush/sync functions instead of the world switch, which can lead to a number of future optimizations. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
There is really no need to store the vgic_elrsr on the VGIC data structures, as the only need we have for the elrsr is to figure out if an LR is inactive when we save the VGIC state upon returning from the guest. We might as well store this in a temporary local variable. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
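A sketch of the resulting GICv2 save path (register offsets are the real GICH definitions; details are trimmed):

    /*
     * Read ELRSR once into a local; only LRs still holding state need
     * a save, the rest are simply flagged inactive in memory.
     */
    static void save_lrs(struct kvm_vcpu *vcpu, void __iomem *base)
    {
        struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
        u64 used_lrs = vcpu->arch.vgic_cpu.used_lrs;
        u64 elrsr;
        int i;

        elrsr = readl_relaxed(base + GICH_ELRSR0);
        if (unlikely(used_lrs > 32))
            elrsr |= ((u64)readl_relaxed(base + GICH_ELRSR1)) << 32;

        for (i = 0; i < used_lrs; i++) {
            if (elrsr & (1UL << i))
                cpu_if->vgic_lr[i] &= ~GICH_LR_STATE;
            else
                cpu_if->vgic_lr[i] = readl_relaxed(base + GICH_LR0 + (i * 4));
        }
    }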
-
Christoffer Dall authored
To make the code more readable and to avoid the overhead of a function call, let's get rid of a pair of the alternative function selectors and explicitly call the VHE and non-VHE functions using the has_vhe() static key based selector instead, telling the compiler to try to inline the static function if it can. Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
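The pattern boils down to the following (callee names are illustrative):

    /*
     * Select explicitly on the has_vhe() static key; both callees are
     * static functions the compiler can inline, unlike the old
     * alternative-patched indirection.
     */
    static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
    {
        if (has_vhe())
            activate_traps_vhe(vcpu);
        else
            __activate_traps_nvhe(vcpu);
    }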
-
Christoffer Dall authored
We do not have to change the c15 trap setting on each switch to/from the guest on VHE systems, because this setting only affects guest EL1/EL0 (and therefore not the VHE host). The PMU and debug trap configuration can also be done on vcpu load/put instead, because they don't affect how the VHE host kernel can access the debug registers while executing KVM kernel code. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
There is no longer a need for an alternative to choose the right function to tell us whether or not FPSIMD was enabled for the VM, because we can simply call the appropriate functions directly from within the _vhe and _nvhe run functions. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
As we are about to be more lazy with some of the trap configuration register read/writes for VHE systems, move the logic that is currently shared between VHE and non-VHE into a separate function which can be called from either the world-switch path or from vcpu_load/vcpu_put. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
When running a 32-bit VM (EL1 in AArch32), the AArch32 system registers can be deferred to vcpu load/put on VHE systems because neither the host kernel nor host userspace uses these registers. Note that we can't save DBGVCR32_EL2 conditionally based on the state of the debug dirty flag on VHE after this change, because during vcpu_load() we haven't calculated a valid debug flag yet, and when we've restored the register during vcpu_load() we also have to save it during vcpu_put(). This means that we'll always restore/save the register for VHE on load/put, but luckily vcpu load/put are called rarely, so saving an extra register unconditionally shouldn't significantly hurt performance. We can also not defer saving FPEXC32_EL2, because this register only holds a guest-valid value for 32-bit guests during the exit path, when the guest has used FPSIMD registers and the register was restored in the early assembly handler taking the EL2 fault; we therefore have to check whether FPSIMD is enabled for the guest in the exit path and save the register there, for both VHE and non-VHE guests. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
32-bit registers are not used by a 64-bit host kernel and can be deferred, but we need to rework the accesses to these registers to access the latest values depending on whether or not guest system registers are loaded on the CPU or only reside in memory. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
Some system registers do not affect the host kernel's execution and can therefore be loaded when we are about to run a VCPU, and we don't have to restore the host state to the hardware before the time when we are actually about to return to userspace or schedule out the VCPU thread. The EL1 system registers and the userspace state registers only affecting EL0 execution do not need to be saved and restored on every switch between the VM and the host, because they don't affect the host kernel's execution. We mark all registers which are now deferred as such in the vcpu_{read,write}_sys_reg accessors in sys_regs.c to ensure the most up-to-date copy is always accessed. Note that MPIDR_EL1 (controlled via VMPIDR_EL2) is accessed from other vcpu threads, for example via the GIC emulation, and therefore must be declared as immediate, which is fine as the guest cannot modify this value. The 32-bit sysregs can also be deferred, but we do this in a separate patch as it requires a bit more infrastructure. Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
ELR_EL1 is not used by a VHE host kernel and can be deferred, but we need to rework the accesses to this register to access the latest value depending on whether or not guest system registers are loaded on the CPU or only reside in memory. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
SPSR_EL1 is not used by a VHE host kernel and can be deferred, but we need to rework the accesses to this register to access the latest value depending on whether or not guest system registers are loaded on the CPU or only reside in memory. The handling of accessing the various banked SPSRs for 32-bit VMs is a bit clunky, but this will be improved in following patches which will first prepare and subsequently implement deferred save/restore of the 32-bit registers, including the 32-bit SPSRs. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
We are about to defer saving and restoring some groups of system registers to vcpu_put and vcpu_load on supported systems. This means that we need some infrastructure to access system registers which supports either accessing the memory backing of the register or directly accessing the system registers, depending on the state of the system when we access the register. We do this by defining read/write accessor functions, which can handle both "immediate" and "deferrable" system registers. Immediate registers are always saved/restored in the world-switch path, but deferrable registers are only saved/restored in vcpu_put/vcpu_load when supported, and sysregs_loaded_on_cpu will be set in that case. Note that we don't use the deferred mechanism yet in this patch, but only introduce infrastructure. This is to improve convenience of review in the subsequent patches where it is clear which registers become deferred. Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
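A sketch of the read side (the SCTLR_EL1 case is a placeholder for the deferred registers later patches add; this patch itself defers nothing yet):

    /*
     * Return the live CPU copy for a deferrable register when the
     * guest context is loaded, otherwise the memory copy.
     */
    u64 vcpu_read_sys_reg(struct kvm_vcpu *vcpu, int reg)
    {
        if (vcpu->arch.sysregs_loaded_on_cpu) {
            switch (reg) {
            case SCTLR_EL1:         /* example deferred register */
                return read_sysreg_el1(sctlr);
            /* ... one case per deferred register ... */
            }
        }

        /* immediate register, or state not loaded on the CPU */
        return __vcpu_sys_reg(vcpu, reg);
    }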
-
Christoffer Dall authored
Currently we access the system registers array via the vcpu_sys_reg() macro. However, we are about to change the behavior to sometimes modify the register file directly, so let's change this to two primitives:

    * Accessor macros vcpu_write_sys_reg() and vcpu_read_sys_reg()
    * Direct array access macro __vcpu_sys_reg()

The accessor macros should be used in places where the code needs to access the currently loaded VCPU's state as observed by the guest. For example, when trapping on cache related registers, a write to a system register should go directly to the VCPU version of the register. The direct array access macro can be used in places where the VCPU is known to never be running (for example userspace access) or for registers which are never context switched (for example all the PMU system registers). This rewrites all users of vcpu_sys_regs to one of the macros described above. No functional change. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Andrew Jones <drjones@redhat.com> Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
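Usage, schematically (the registers picked are just examples):

    /* Guest-observed state that may be live on the CPU: accessors. */
    vcpu_write_sys_reg(vcpu, val, SCTLR_EL1);
    val = vcpu_read_sys_reg(vcpu, SCTLR_EL1);

    /*
     * Registers never context-switched (e.g. the PMU set) or a VCPU
     * known not to be running: direct array access is fine.
     */
    __vcpu_sys_reg(vcpu, PMCR_EL0) = val;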
-
Christoffer Dall authored
We currently handle 32-bit accesses to trapped VM system registers using the 32-bit index into the coproc array on the vcpu structure, which is a union of the coproc array and the sysreg array. Since all the 32-bit coproc indices are created to correspond to the architectural mapping between 64-bit system registers and 32-bit coprocessor registers, and because the AArch64 system registers are double the size of the AArch32 coprocessor registers, we can always find the system register entry that we must update by dividing the 32-bit coproc index by 2. This is going to make our lives much easier when we have to start accessing system registers that use deferred save/restore and might have to be read directly from the physical CPU. Reviewed-by: Andrew Jones <drjones@redhat.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
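In code terms, the 32-bit handler ends up doing something like this (a sketch following the sys_regs handler conventions; the masking details are assumptions):

    /*
     * A 32-bit coproc access targets the low half of the 64-bit system
     * register found at half the coproc index.
     */
    const int sysreg = r->reg / 2;
    u64 val = vcpu_sys_reg(vcpu, sysreg);

    val &= GENMASK_ULL(63, 32);             /* keep the upper half */
    val |= lower_32_bits(p->regval);        /* insert the 32-bit value */
    vcpu_sys_reg(vcpu, sysreg) = val;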
-