- 25 May, 2018 32 commits
-
-
Eric Auger authored
Let's raise the number of supported vcpus along with vgic v3 now that hardware with more physical CPUs is on the horizon. Signed-off-by: Eric Auger <eric.auger@redhat.com> Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Eric Auger authored
Now that all the internals are ready to handle multiple redistributor regions, let's allow userspace to register them. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Eric Auger authored
This new attribute allows the userspace to set the base address of a redistributor region, relaxing the constraint of having all consecutive redistributor frames contiguous. Signed-off-by: Eric Auger <eric.auger@redhat.com> Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Eric Auger authored
By the time the first vcpu runs, we finally know the actual number of vcpus, so this is used as a synchronization point to check that all redistributors were assigned. In kvm_vgic_map_resources() we check that both the distributor and redistributor bases were set, and also check for potential base address inconsistencies. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Eric Auger authored
As we are going to register several redist regions, vgic_register_all_redist_iodevs() may be called several times. We need to register a redist_iodev for a given vcpu only once, so let's check whether the base address has already been set. Initialize that base address in kvm_vgic_vcpu_init(). Signed-off-by: Eric Auger <eric.auger@redhat.com> Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Eric Auger authored
kvm_vgic_vcpu_early_init() gets called after kvm_vgic_vcpu_init(), which is confusing. The call path is as follows:

kvm_vm_ioctl_create_vcpu
  |_ kvm_arch_vcpu_create
     |_ kvm_vcpu_init
        |_ kvm_arch_vcpu_init
           |_ kvm_vgic_vcpu_init
  |_ kvm_arch_vcpu_postcreate
     |_ kvm_vgic_vcpu_early_init

Static initialization currently done in kvm_vgic_vcpu_early_init() can be moved to kvm_vgic_vcpu_init(). So let's move the code and remove kvm_vgic_vcpu_early_init(); kvm_arch_vcpu_postcreate() then does nothing. Signed-off-by: Eric Auger <eric.auger@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Eric Auger authored
We introduce a new helper that creates and inserts a new redistributor region into the rdist region list. This helper both handles the case where the redistributor region size is known at registration time and the legacy case where it is not (eventually depending on the number of online vcpus). Depending on pfns, we perform all the possible checks that we can do:

- end of memory crossing
- incorrect alignment of the base address
- collision with the distributor region, if already defined
- collision with already registered rdist regions
- check of the new index

Rdist regions must be inserted by increasing order of indices. Indices must be contiguous. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
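A rough sketch of the shape of those checks is shown below. It is illustrative only: the struct fields and the sketch_*() helpers are made up for this example, and the real kernel helper differs in detail.

  struct vgic_redist_region {
          struct list_head list;
          u32 index;
          gpa_t base;
          u32 count;      /* number of redistributors; 0 for the legacy region */
  };

  static int sketch_insert_redist_region(struct kvm *kvm, u32 index,
                                         gpa_t base, u32 count)
  {
          size_t size = count * KVM_VGIC_V3_REDIST_SIZE;

          /* incorrect alignment of the base address */
          if (!IS_ALIGNED(base, SZ_64K))
                  return -EINVAL;

          /* end of the guest physical address space crossing */
          if (base + size < base)
                  return -EINVAL;

          /* indices must be registered in increasing, contiguous order */
          if (index != sketch_next_expected_index(kvm))
                  return -EINVAL;

          /* collision with the distributor or already registered regions */
          if (sketch_overlaps_dist(kvm, base, size) ||
              sketch_overlaps_registered_rdists(kvm, base, size))
                  return -EINVAL;

          return sketch_append_region(kvm, index, base, count);
  }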
-
Eric Auger authored
vgic_v3_check_base() currently only handles the case of a unique legacy redistributor region whose size is not explicitly set but inferred, instead, from the number of online vcpus. We adapt it to handle the case of multiple redistributor regions with explicitly defined size. We rely on two new helpers:

- vgic_v3_rdist_overlap() is used to detect overlap with the dist region, if defined
- vgic_v3_rd_region_size() computes the size of the redist region, whether it is a legacy unique region or a new explicitly sized region.

Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Eric Auger authored
The TYPER of a redistributor reflects whether the rdist is the last one of the redistributor region. Let's compare the TYPER GPA against the address of the last occupied slot within the redistributor region. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Eric Auger authored
We introduce vgic_v3_rdist_free_slot to help identify where we can place a new 2x64KB redistributor. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
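The idea can be sketched as below, assuming each region tracks a free_index cursor counting how many rdist frames have already been placed in it (the real struct vgic_redist_region has such a field, if I recall correctly, but this is not the exact kernel code):

  static struct vgic_redist_region *
  sketch_rdist_free_slot(struct list_head *rd_regions)
  {
          struct vgic_redist_region *rdreg;

          list_for_each_entry(rdreg, rd_regions, list) {
                  /* the legacy region (count == 0) is never considered full;
                   * otherwise a slot is free while fewer rdists are placed
                   * than the region can hold */
                  if (!rdreg->count || rdreg->free_index < rdreg->count)
                          return rdreg;
          }
          return NULL;
  }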
-
Eric Auger authored
At the moment KVM supports a single rdist region. We want to support several separate rdist regions so let's introduce a list of them. This patch currently only cares about a single entry in this list as the functionality to register several redist regions is not yet there. So this only translates the existing code into something functionally similar using that new data struct. The redistributor region handle is stored in the vgic_cpu structure to allow later computation of the TYPER last bit. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Eric Auger authored
We introduce a new KVM_VGIC_V3_ADDR_TYPE_REDIST_REGION attribute in the KVM_DEV_ARM_VGIC_GRP_ADDR group. It allows userspace to provide the base address and size of a redistributor region. Compared to KVM_VGIC_V3_ADDR_TYPE_REDIST, this new attribute allows userspace to declare several separate redistributor regions, so the whole redist space does not need to be contiguous anymore. Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Eric Auger authored
In case kvm_vgic_map_resources() fails, typically if the vgic distributor is not defined, __kvm_vgic_destroy will be called several times. Indeed kvm_vgic_map_resources() is called on first vcpu run. As a result dist->spis is freed more than once, and the second time it causes a "kernel BUG at mm/slub.c:3912!" Set dist->spis to NULL to avoid the crash. Fixes: ad275b8b ("KVM: arm/arm64: vgic-new: vgic_init: implement vgic_init") Signed-off-by: Eric Auger <eric.auger@redhat.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
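The fix boils down to the usual free-and-poison pattern in the vgic teardown path, sketched here:

  kfree(dist->spis);
  dist->spis = NULL;      /* a second pass through the destroy path is now a no-op */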
-
Dave Martin authored
The conversion of the FPSIMD context switch trap code to C has added some overhead to calling it, due to the need to save registers that the procedure call standard defines as caller-saved. So, perhaps it is no longer worth invoking this trap handler quite so early. Instead, we can invoke it from fixup_guest_exit(), with little likelihood of increasing the overhead much further. As a convenience, this patch gives __hyp_switch_fpsimd() the same return semantics as fixup_guest_exit(). For now there is no possibility of a spurious FPSIMD trap, so the function always returns true, but this allows it to be tail-called with a single return statement. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Dave Martin authored
The entire tail of fixup_guest_exit() is contained in if statements of the form if (x && *exit_code == ARM_EXCEPTION_TRAP). As a result, we can check just once and bail out of the function early, allowing the remaining if conditions to be simplified. The only awkward case is where *exit_code is changed to ARM_EXCEPTION_EL1_SERROR in the case of an illegal GICv2 CPU interface access: in that case, the GICv3 trap handling code is skipped using a goto. This avoids pointlessly evaluating the static branch check for the GICv3 case, even though we can't have vgic_v2_cpuif_trap and vgic_v3_cpuif_trap true simultaneously unless we have a GICv3 and GICv2 on the host: that sounds stupid, but I haven't satisfied myself that it can't happen. No functional change. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Dave Martin authored
In fixup_guest_exit(), there are a couple of cases where after checking what the exit code was, we assign it explicitly with the value it already had. Assuming this is not indicative of a bug, these assignments are not needed. This patch removes the redundant assignments, and simplifies some if-nesting that becomes trivial as a result. No functional change. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Christoffer Dall <christoffer.dall@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Dave Martin authored
Now that the host SVE context can be saved on demand from Hyp, there is no longer any need to save this state in advance before entering the guest. This patch removes the relevant call to kvm_fpsimd_flush_cpu_state(). Since the problem that function was intended to solve now no longer exists, the function and its dependencies are also deleted. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Dave Martin authored
This patch adds SVE context saving to the hyp FPSIMD context switch path. This means that it is no longer necessary to save the host SVE state in advance of entering the guest when SVE is in use. In order to avoid adding pointless complexity to the code, VHE is assumed if SVE is in use. VHE is an architectural prerequisite for SVE, so there is no good reason to turn CONFIG_ARM64_VHE off in kernels that support both SVE and KVM. Historically, software models exist that can expose the architecturally invalid configuration of SVE without VHE, so if this situation is detected at kvm_init() time then KVM will be disabled. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
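The init-time detection presumably amounts to something like the following check; this is a sketch only, and the exact message and placement in the kvm_init() path may differ:

  if (system_supports_sve() && !has_vhe()) {
          /* SVE without VHE is architecturally invalid; refuse to init KVM */
          kvm_pr_unimpl("SVE system without VHE unsupported, not initializing\n");
          return -ENODEV;
  }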
-
Dave Martin authored
In order to make sve_save_state()/sve_load_state() more easily reusable and to get rid of a potential branch on context switch critical paths, this patch makes sve_pffr() inline and moves it to fpsimd.h. <asm/processor.h> must be included in fpsimd.h in order to make this work, and this creates an #include cycle that is tricky to avoid without modifying core code, due to the way the PR_SVE_*() prctl helpers are included in the core prctl implementation. Instead of breaking the cycle, this patch defers inclusion of <asm/fpsimd.h> in <asm/processor.h> until the point where it is actually needed: i.e., immediately before the prctl definitions. No functional change. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Dave Martin authored
sve_pffr(), which is used to derive the base address used for low-level SVE save/restore routines, currently takes the relevant task_struct as an argument. The only accessed fields are actually part of thread_struct, so this patch changes the argument type accordingly. This is done in preparation for moving this function to a header, where we do not want to have to include <linux/sched.h> due to the consequent circular #include problems. No functional change. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
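After the change the helper only needs a thread_struct, along the lines of the sketch below (close to, but not guaranteed to be, the exact kernel code):

  static inline void *sve_pffr(struct thread_struct *thread)
  {
          /* FFR is stored after the Z/P registers within sve_state */
          return (char *)thread->sve_state + sve_ffr_offset(thread->sve_vl);
  }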
-
Dave Martin authored
Having read_zcr_features() inline in cpufeature.h results in that header requiring #includes which make it hard to include <asm/fpsimd.h> elsewhere without triggering header inclusion cycles. This is not a hot-path function and arguably should not be in cpufeature.h in the first place, so this patch moves it to fpsimd.c, compiled conditionally if CONFIG_ARM64_SVE=y. This allows some SVE-related #includes to be dropped from cpufeature.h, which will ease future maintenance. A couple of missing #includes of <asm/fpsimd.h> are exposed by this change under arch/arm64/. This patch adds the missing #includes as necessary. No functional change. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Dave Martin authored
This patch refactors KVM to align the host and guest FPSIMD save/restore logic with each other for arm64. This reduces the number of redundant save/restore operations that must occur, and reduces the common-case IRQ blackout time during guest exit storms by saving the host state lazily and optimising away the need to restore the host state before returning to the run loop.

Four hooks are defined in order to enable this:

* kvm_arch_vcpu_run_map_fp(): Called on PID change to map necessary bits of current to Hyp.

* kvm_arch_vcpu_load_fp(): Set up FP/SIMD for entering the KVM run loop (parse as "vcpu_load fp").

* kvm_arch_vcpu_ctxsync_fp(): Get FP/SIMD into a safe state for re-enabling interrupts after a guest exit back to the run loop. For arm64 specifically, this involves updating the host kernel's FPSIMD context tracking metadata so that kernel-mode NEON use will cause the vcpu's FPSIMD state to be saved back correctly into the vcpu struct. This must be done before re-enabling interrupts because kernel-mode NEON may be used by softirqs.

* kvm_arch_vcpu_put_fp(): Save guest FP/SIMD state back to memory and dissociate from the CPU ("vcpu_put fp").

Also, the arm64 FPSIMD context switch code is updated to enable it to save back FPSIMD state for a vcpu, not just current. A few helpers drive this:

* fpsimd_bind_state_to_cpu(struct user_fpsimd_state *fp): mark this CPU as having context fp (which may belong to a vcpu) currently loaded in its registers. This is the non-task equivalent of the static function fpsimd_bind_to_cpu() in fpsimd.c.

* task_fpsimd_save(): exported to allow KVM to save the guest's FPSIMD state back to memory on exit from the run loop.

* fpsimd_flush_state(): invalidate any context's FPSIMD state that is currently loaded. Used to disassociate the vcpu from the CPU regs on run loop exit.

These changes allow the run loop to enable interrupts (and thus softirqs that may use kernel-mode NEON) without having to save the guest's FPSIMD state eagerly.

Some new vcpu_arch fields are added to make all this work. Because host FPSIMD state can now be saved back directly into current's thread_struct as appropriate, host_cpu_context is no longer used for preserving the FPSIMD state. However, it is still needed for preserving other things such as the host's system registers. To avoid ABI churn, the redundant storage space in host_cpu_context is not removed for now.

arch/arm is not addressed by this patch and continues to use its current save/restore logic. It could provide implementations of the helpers later if desired.

Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
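To make the sequencing concrete, here is a heavily simplified sketch of where three of the hooks sit relative to the run loop; enter_guest() is a made-up stand-in for the real world-switch path, and the real call sites in virt/kvm and arch/arm64/kvm differ:

  static int sketch_vcpu_run(struct kvm_vcpu *vcpu)
  {
          int ret = 1;

          kvm_arch_vcpu_load_fp(vcpu);            /* entering the run loop */

          while (ret > 0) {
                  local_irq_disable();
                  ret = enter_guest(vcpu);        /* hypothetical: guest entry/exit */
                  kvm_arch_vcpu_ctxsync_fp(vcpu); /* make FP state safe before IRQs return */
                  local_irq_enable();
          }

          kvm_arch_vcpu_put_fp(vcpu);             /* leaving: save guest FP, unbind from CPU */
          return ret;
  }

kvm_arch_vcpu_run_map_fp() is not shown; per the description above it runs when the thread driving the vcpu changes, so that the relevant parts of current are mapped into Hyp before the loop is entered.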
-
Dave Martin authored
In struct vcpu_arch, the debug_flags field is used to store debug-related flags about the vcpu state. Since we are about to add some more flags related to FPSIMD and SVE, it makes sense to add them to the existing flags field rather than adding new fields. Since there is only one debug_flags flag defined so far, there is plenty of free space for expansion. In preparation for adding more flags, this patch renames the debug_flags field to simply "flags", and updates comments appropriately. The flag definitions are also moved to <asm/kvm_host.h>, since their presence in <asm/kvm_asm.h> was for purely historical reasons: these definitions are not used from asm any more, and not very likely to be as more Hyp asm is migrated to C. KVM_ARM64_DEBUG_DIRTY_SHIFT has not been used since commit 1ea66d27 ("arm64: KVM: Move away from the assembly version of the world switch"), so this patch gets rid of that too. No functional change. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Christoffer Dall <christoffer.dall@arm.com> [maz: fixed minor conflict] Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Dave Martin authored
In preparation for optimising the way KVM manages switching the guest and host FPSIMD state, it is necessary to provide a means for code outside arch/arm64/kernel/fpsimd.c to restore the user trap configuration for SVE correctly for the current task. Rather than requiring external code to duplicate the maintenance explicitly, this patch moves the trap maintenance to fpsimd_bind_to_cpu(), since it is logically part of the work of associating the current task with the cpu. Because fpsimd_bind_to_cpu() is rather a cryptic name to publish alongside fpsimd_bind_state_to_cpu(), the former function is renamed to fpsimd_bind_task_to_cpu() to make its purpose more explicit. This patch makes appropriate changes to ensure that fpsimd_bind_task_to_cpu() is always called alongside task_fpsimd_load(), so that the trap maintenance continues to be done in every situation where it was done prior to this patch. As a side-effect, the metadata updates done by fpsimd_bind_task_to_cpu() now change from conditional to unconditional in the "already bound" case of sigreturn. This is harmless, and a couple of extra stores on this slow path will not impact performance. I consider this a reasonable price to pay for a slightly cleaner interface. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Dave Martin authored
Currently the FPSIMD handling code uses the condition task->mm == NULL as a hint that task has no FPSIMD register context. The ->mm check is only there to filter out tasks that cannot possibly have FPSIMD context loaded, for optimisation purposes. Also, TIF_FOREIGN_FPSTATE must always be checked anyway before saving FPSIMD context back to memory. For these reasons, the ->mm checks are not useful, providing that TIF_FOREIGN_FPSTATE is maintained in a consistent way for all threads. The context switch logic is already deliberately optimised to defer reloads of the regs until ret_to_user (or sigreturn as a special case), and save them only if they have been previously loaded. These paths are the only places where the wrong_task and wrong_cpu conditions can be made false, by calling fpsimd_bind_task_to_cpu(). Kernel threads by definition never reach these paths. As a result, the wrong_task and wrong_cpu tests in fpsimd_thread_switch() will always yield true for kernel threads. This patch removes the redundant checks and special-case code, ensuring that TIF_FOREIGN_FPSTATE is set whenever a kernel thread is scheduled in, and ensures that this flag is set for the init task. The fpsimd_flush_task_state() call already present in copy_thread() ensures the same for any new task. With TIF_FOREIGN_FPSTATE always set for kernel threads, this patch ensures that no extra context save work is added for kernel threads, and eliminates the redundant context saving that may currently occur for kernel threads that have acquired an mm via use_mm(). Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
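A condensed sketch of the resulting context-switch logic is shown below; the field names reflect my reading of the arm64 code of this era and may not match the tree exactly:

  void sketch_fpsimd_thread_switch(struct task_struct *next)
  {
          bool wrong_task, wrong_cpu;

          /* write the outgoing context back if it is live in the registers */
          fpsimd_save();

          /*
           * The registers only hold next's state if this CPU was the last to
           * load it and nothing else has been loaded since; otherwise mark
           * the state as foreign so it gets reloaded on the way to userspace.
           */
          wrong_task = __this_cpu_read(fpsimd_last_state.st) !=
                       &next->thread.uw.fpsimd_state;
          wrong_cpu = next->thread.fpsimd_cpu != smp_processor_id();

          update_tsk_thread_flag(next, TIF_FOREIGN_FPSTATE,
                                 wrong_task || wrong_cpu);
  }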
-
Dave Martin authored
The init task is started with thread_flags equal to 0, which means that TIF_FOREIGN_FPSTATE is initially clear. It is theoretically possible (if unlikely) that the init task could reach userspace without ever being scheduled out. If this occurs, data left in the FPSIMD registers by the kernel could be exposed. This patch fixes this anomaly by ensuring that the init task's initial TIF_FOREIGN_FPSTATE is set. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Fixes: 005f78cd ("arm64: defer reloading a task's FPSIMD state to userland resume") Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
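If I read the fix correctly, the guarantee comes from seeding the init task's thread flags at build time, roughly as sketched here (other initializers elided; the exact hunk may differ):

  /* arch/arm64/include/asm/thread_info.h (sketch) */
  #define INIT_THREAD_INFO(tsk)                                   \
  {                                                               \
          .flags = _TIF_FOREIGN_FPSTATE,                          \
          /* ... remaining initializers unchanged ... */          \
  }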
-
Dave Martin authored
In preparation for allowing non-task (i.e., KVM vcpu) FPSIMD contexts to be handled by the fpsimd common code, this patch adapts task_fpsimd_save() to save back the currently loaded context, removing the explicit dependency on current. The relevant storage to write back to in memory is now found by examining the fpsimd_last_state percpu struct. fpsimd_save() does nothing unless TIF_FOREIGN_FPSTATE is clear, and fpsimd_last_state is updated under local_bh_disable() or local_irq_disable() everywhere that TIF_FOREIGN_FPSTATE is cleared: thus, fpsimd_save() will write back to the correct storage for the loaded context. No functional change. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
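Sketched, the save path now keys off the per-cpu pointer rather than current (simplified, with the SVE handling omitted; not the exact kernel code):

  static void sketch_fpsimd_save(void)
  {
          /* whatever context is loaded in the regs: a task's or a vcpu's */
          struct user_fpsimd_state *st = __this_cpu_read(fpsimd_last_state.st);

          if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
                  fpsimd_save_state(st);
  }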
-
Dave Martin authored
To make the lazy FPSIMD context switch trap code easier to hack on, this patch converts it to C. This is not amazingly efficient, but the trap should typically only be taken once per host context switch. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Christoffer Dall authored
KVM/ARM differs from other architectures in having to maintain an additional virtual address space from that of the host and the guest, because we split the execution of KVM across both EL1 and EL2. This results in a need to explicitly map data structures into EL2 (hyp) which are accessed from the hyp code. As we are about to be more clever with our FPSIMD handling on arm64, which stores data in the task struct and uses thread_info flags, we will have to map parts of the currently executing task struct into the EL2 virtual address space. However, we don't want to do this on every KVM_RUN, because it is a fairly expensive operation to walk the page tables, and the common execution mode is to map a single thread to a VCPU. By introducing a hook that architectures can select with HAVE_KVM_VCPU_RUN_PID_CHANGE, we do not introduce overhead for other architectures, but have a simple way to only map the data we need when required for arm64. This patch introduces the framework only, and wires it up in the arm/arm64 KVM common code. No functional change. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
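The framework part is essentially an opt-in arch hook with a no-op fallback, roughly as sketched below; the generic KVM_RUN path then calls it only when the thread driving the vcpu has changed since the last run:

  #ifdef CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE
  /* architectures selecting the Kconfig symbol provide the real hook */
  int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu);
  #else
  /* everyone else pays nothing */
  static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
  {
          return 0;
  }
  #endif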
-
Dave Martin authored
This patch uses the new update_thread_flag() helpers to simplify a couple of if () set; else clear; constructs. No functional change. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Dave Martin authored
There are a number of bits of code sprinkled around the kernel to set a thread flag if a certain condition is true, and clear it otherwise. To help make those call sites terser and less cumbersome, this patch adds a new family of thread flag manipulators update*_thread_flag([...,] flag, cond) which do the equivalent of: if (cond) set*_thread_flag([...,] flag); else clear*_thread_flag([...,] flag); Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
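For reference, the non-atomic variants are essentially the following; TIF_FOO and some_condition() in the usage line are made up for illustration:

  static inline void update_thread_flag(int flag, bool value)
  {
          if (value)
                  set_thread_flag(flag);
          else
                  clear_thread_flag(flag);
  }

  static inline void update_tsk_thread_flag(struct task_struct *tsk,
                                            int flag, bool value)
  {
          if (value)
                  set_tsk_thread_flag(tsk, flag);
          else
                  clear_tsk_thread_flag(tsk, flag);
  }

  /* call sites collapse from an if/else pair to one line, e.g.: */
  update_thread_flag(TIF_FOO, some_condition());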
-
Dave Martin authored
fpsimd_last_state.st is set to NULL as a way of indicating that current's FPSIMD registers are no longer loaded in the cpu. In particular, this is done when the kernel temporarily uses or clobbers the FPSIMD registers for its own purposes, as in CPU PM or kernel-mode NEON, resulting in them being populated with garbage data not belonging to a task. Commit 17eed27b ("arm64/sve: KVM: Prevent guests from using SVE") factors this operation out as a new helper fpsimd_flush_cpu_state() to make it clearer what is being done here, and on SVE systems this helper is now used, via kvm_fpsimd_flush_cpu_state(), to invalidate the registers after KVM has run a vcpu. The reason for this is that KVM does not yet understand how to restore the full host SVE registers itself after loading the guest FPSIMD context into them. This exposes a particular problem: if fpsimd_last_state.st is set to NULL without also setting TIF_FOREIGN_FPSTATE, the kernel may continue to think that current's FPSIMD registers are live even though they have actually been clobbered. Prior to the aforementioned commit, the only path where fpsimd_last_state.st is set to NULL without setting TIF_FOREIGN_FPSTATE is when kernel_neon_begin() is called by a kernel thread (where current->mm can be NULL). This does not matter, because the only harm is that at context-switch time fpsimd_thread_switch() may unnecessarily save the FPSIMD registers back to current's thread_struct (even though kernel threads are not considered to have any FPSIMD context of their own and the registers will never be reloaded). Note that although CPU_PM_ENTER lacks the TIF_FOREIGN_FPSTATE setting, every CPU passing through that path must subsequently pass through CPU_PM_EXIT before it can re-enter the kernel proper. CPU_PM_EXIT sets the flag. The sve_flush_cpu_state() function added by commit 17eed27b also lacks the proper maintenance of TIF_FOREIGN_FPSTATE. This may cause the bits of a host task's SVE registers that do not alias the FPSIMD register file to spontaneously appear zeroed if a KVM vcpu runs in the same task in the meantime. Although this effect is hidden by the fact that the non-FPSIMD bits of the SVE registers are zeroed by a syscall anyway, it is doubtless a bad idea to rely on these different code paths interacting correctly under future maintenance. This patch makes TIF_FOREIGN_FPSTATE an unconditional side-effect of fpsimd_flush_cpu_state(), and removes the set_thread_flag() calls that become redundant as a result. This ensures that TIF_FOREIGN_FPSTATE cannot remain clear if the FPSIMD state in the FPSIMD registers is invalid. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
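After this change the invalidation helper should look roughly like the sketch below, so the two operations can no longer be separated:

  void fpsimd_flush_cpu_state(void)
  {
          /* forget whose state is in the regs... */
          __this_cpu_write(fpsimd_last_state.st, NULL);
          /* ...and make sure current knows its register contents are stale */
          set_thread_flag(TIF_FOREIGN_FPSTATE);
  }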
-
- 20 May, 2018 1 commit
-
-
Mark Rutland authored
For historical reasons, we open-code lm_alias() in kvm_ksym_ref(). Let's use lm_alias() to avoid duplication and make things clearer. As we have to pull this from <linux/mm.h> (which is not safe for inclusion in assembly), we may as well move the kvm_ksym_ref() definition into the existing !__ASSEMBLY__ block. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Christoffer Dall <christoffer.dall@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: kvmarm@lists.cs.columbia.edu Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
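The resulting definition should be roughly as below (a sketch from memory of arch/arm64/include/asm/kvm_asm.h; check the tree for the exact form):

  #define kvm_ksym_ref(sym)                                       \
          ({                                                      \
                  void *val = &sym;                               \
                  if (!is_kernel_in_hyp_mode())                   \
                          val = lm_alias(&sym);   /* linear-map alias for nVHE */ \
                  val;                                            \
           })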
-
- 07 May, 2018 1 commit
-
-
Linus Torvalds authored
-
- 06 May, 2018 6 commits
-
-
git://git.kernel.org/pub/scm/virt/kvm/kvm
Linus Torvalds authored
Pull KVM fixes from Radim Krčmář:

 "ARM:
   - Fix proxying of GICv2 CPU interface accesses
   - Fix crash when switching to BE
   - Track source vcpu for GICv2 SGIs
   - Fix an outdated bit of documentation

  x86:
   - Speed up injection of expired timers (for stable)"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: x86: remove APIC Timer periodic/oneshot spikes
  arm64: vgic-v2: Fix proxying of cpuif access
  KVM: arm/arm64: vgic_init: Cleanup reference to process_maintenance
  KVM: arm64: Fix order of vcpu_write_sys_reg() arguments
  KVM: arm/arm64: vgic: Fix source vcpu issues for GICv2 SGI
-
git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Linus Torvalds authored
Pull iommu fixes from Joerg Roedel:

 - fix a compile warning in the AMD IOMMU driver with irq remapping disabled

 - fix for VT-d interrupt remapping and invalidation size (caused a BUG_ON when trying to invalidate more than 4GB)

 - build fix and a regression fix for broken graphics with old DTS for the rockchip iommu driver

 - a revert in the PCI window reservation code which fixes a regression with VFIO

* tag 'iommu-fixes-v4.17-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu:
  iommu: rockchip: fix building without CONFIG_OF
  iommu/vt-d: Use WARN_ON_ONCE instead of BUG_ON in qi_flush_dev_iotlb()
  iommu/vt-d: fix shift-out-of-bounds in bug checking
  iommu/dma: Move PCI window region reservation back into dma specific path.
  iommu/rockchip: Make clock handling optional
  iommu/amd: Hide unused iommu_table_lock
  iommu/vt-d: Fix usage of force parameter in intel_ir_reconfigure_irte()
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull x86 fix from Thomas Gleixner:
 "Unbreak the CPUID CPUID_8000_0008_EBX reload which got dropped when the evaluation of physical and virtual bits which uses the same CPUID leaf was moved out of get_cpu_cap()"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/cpu: Restore CPUID_8000_0008_EBX reload
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull clocksource fixes from Thomas Gleixner:
 "The recent addition of the early TSC clocksource breaks on machines which have an unstable TSC because in case that TSC is disabled, then the clocksource selection logic falls back to the early TSC which is obviously bogus. That also unearthed a few robustness issues in the clocksource derating code which are addressed as well"

* 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  clocksource: Rework stale comment
  clocksource: Consistent de-rate when marking unstable
  x86/tsc: Fix mark_tsc_unstable()
  clocksource: Initialize cs->wd_list
  clocksource: Allow clocksource_mark_unstable() on unregistered clocksources
  x86/tsc: Always unregister clocksource_tsc_early
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull irq fix from Thomas Gleixner:
 "A single fix to prevent false positives in the spurious interrupt detector when more than a single demultiplex register is evaluated in the Qualcom irq combiner driver"

* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/qcom: Fix check for spurious interrupts
-
git://git.infradead.org/linux-platform-drivers-x86
Linus Torvalds authored
Pull x86 platform driver fixes from Darren Hart:

 - We missed a case in the Dell config dependencies resulting in a possible bad configuration, resolve it by giving up on trying to keep DELL_LAPTOP visible in the menu and make it depend on DELL_SMBIOS

 - Fix a null pointer dereference at module unload for the asus-wireless driver

* tag 'platform-drivers-x86-v4.17-2' of git://git.infradead.org/linux-platform-drivers-x86:
  platform/x86: Kconfig: Fix dell-laptop dependency chain.
  platform/x86: asus-wireless: Fix NULL pointer dereference
-