- 24 May, 2023 1 commit
Marc Zyngier authored
We appear to have missed the Set/Way CMOs when adding MTE support. Not that we really expect anyone to use them, but you never know what stupidity some people can come up with... Treat these mostly like we deal with the classic S/W CMOs, only with an additional check that MTE really is enabled.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Link: https://lore.kernel.org/r/20230515204601.1270428-3-maz@kernel.org

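The patch itself is not shown on this page; the following is only a rough sketch of the described check, modelled on the existing access_dcsw() handler in sys_regs.c, with every name here to be read as an assumption rather than the actual diff:

    /* Sketch only: tag-aware Set/Way CMOs are UNDEFined unless the VM
     * actually has MTE; otherwise they get the classic S/W treatment. */
    static bool access_dcgsw(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
                             const struct sys_reg_desc *r)
    {
        if (!kvm_has_mte(vcpu->kvm)) {
            kvm_inject_undefined(vcpu);
            return false;
        }

        /* Same clean/invalidate-by-Set/Way handling as the non-tagged ops. */
        return access_dcsw(vcpu, p, r);
    }
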
- 13 Apr, 2023 1 commit
Marc Zyngier authored
When CNTPOFF isn't implemented and we have a non-zero counter offset, CNTPCT and CNTPCTSS are trapped. We properly handle the former, but not the latter, as it is not present in the sysreg table (despite being actually handled in the code). Bummer. Just populate the cp15_64 table with the missing register.

Reported-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>

- 30 Mar, 2023 2 commits
Marc Zyngier authored
CNTPOFF_EL2 is awesome, but it is mostly vapourware, and no publicly available implementation has it. So for the common mortals, let's implement the emulated version of this thing. It means trapping accesses to the physical counter and timer, and emulate some of it as necessary. As for CNTPOFF_EL2, nobody sets the offset yet.

Reviewed-by: Colton Lewis <coltonlewis@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230330174800.2677007-6-maz@kernel.org

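As a plain, self-contained illustration of what the emulation boils down to (not KVM code; numbers and names are made up), a trapped guest read of CNTPCT simply has the per-VM offset subtracted from the host's physical count:

    #include <stdint.h>
    #include <stdio.h>

    /* Guest view of the physical counter when the offset is applied in software. */
    static uint64_t emulated_cntpct(uint64_t host_cntpct, uint64_t cntpoff)
    {
        return host_cntpct - cntpoff;
    }

    int main(void)
    {
        /* Hypothetical host count and per-VM offset. */
        printf("guest reads %llu\n",
               (unsigned long long)emulated_cntpct(1000000, 250000));
        return 0;
    }
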
Reiji Watanabe authored
Currently, with VHE, KVM enables EL0 event counting for the guest on vcpu_load(), or as part of the PMU register emulation process when needed. However, in the migration case (with VHE), the same handling is lacking, as the vPMU register values restored by userspace haven't been propagated yet (the PMU events haven't been created) at vcpu load time on the first KVM_RUN (kvm_vcpu_pmu_restore_guest() called from vcpu_load() on the first KVM_RUN won't do anything, as events_{guest,host} of kvm_pmu_events are still zero).

So, with VHE, enable the guest's EL0 event counting on the first KVM_RUN (after the migration) when needed. More specifically, have kvm_pmu_handle_pmcr() call kvm_vcpu_pmu_restore_guest() so that kvm_pmu_handle_pmcr() on the first KVM_RUN can take care of it.

Fixes: d0c94c49 ("KVM: arm64: Restore PMU configuration on first run")
Cc: stable@vger.kernel.org
Reviewed-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Link: https://lore.kernel.org/r/20230329023944.2488484-1-reijiw@google.com
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

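A minimal sketch of the call flow described above; the two function names come from the commit text, everything else is elided:

    #include <linux/kvm_host.h>

    /* Sketch only: simplified shape of the fix, not the actual implementation. */
    void kvm_pmu_handle_pmcr(struct kvm_vcpu *vcpu, u64 val)
    {
        /* ... existing PMCR_EL0 write handling ... */

        /*
         * Re-evaluate which events count at EL0 for the guest, so vPMU state
         * restored by userspace takes effect on the first KVM_RUN even though
         * vcpu_load() ran before the perf events were created.
         */
        kvm_vcpu_pmu_restore_guest(vcpu);
    }
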
- 13 Mar, 2023 1 commit
Reiji Watanabe authored
Have KVM_GET_ONE_REG for vPMU counter (vPMC) registers (PMCCNTR_EL0 and PMEVCNTR<n>_EL0) return the sum of the register value in the sysreg file and the current perf event counter value. Values of vPMC registers are saved in sysreg files on certain occasions. These saved values don't represent the current values of the vPMC registers if the perf events for the vPMCs count events after the save. The current values of those registers are the sum of the sysreg file value and the current perf event counter value. But, when userspace reads those registers (using KVM_GET_ONE_REG), KVM returns the sysreg file value to userspace (not the sum value). Fix this to return the sum value for KVM_GET_ONE_REG.

Fixes: 051ff581 ("arm64: KVM: Add access handler for event counter register")
Cc: stable@vger.kernel.org
Reviewed-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Reiji Watanabe <reijiw@google.com>
Link: https://lore.kernel.org/r/20230313033208.1475499-1-reijiw@google.com
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

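The arithmetic itself is simple; here is a standalone illustration (not KVM code, all names made up):

    #include <stdint.h>
    #include <stdio.h>

    /* What userspace should see: the value last saved in the sysreg file plus
     * whatever the backing perf event has counted since that save. */
    static uint64_t vpmc_read(uint64_t sysreg_file_val, uint64_t perf_delta)
    {
        return sysreg_file_val + perf_delta;
    }

    int main(void)
    {
        uint64_t saved = 1000;              /* saved at the last vcpu_put() */
        uint64_t counted_since_save = 250;  /* the perf event kept counting */

        /* Before the fix, KVM_GET_ONE_REG reported only 'saved' (1000). */
        printf("KVM_GET_ONE_REG should report %llu\n",
               (unsigned long long)vpmc_read(saved, counted_since_save));
        return 0;
    }
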
- 11 Feb, 2023 5 commits
Marc Zyngier authored
As there are a number of features that we either can't support, or don't want to support right away with NV, let's add some basic filtering so that we don't advertise silly things to the EL2 guest. Whilst we are at it, advertise FEAT_TTL as well as FEAT_GTG, which the NV implementation will implement.

Reviewed-by: Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230209175820.1939006-18-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

Jintack Lim authored
With the HCR_EL2.NV bit set, accesses to EL12 registers in the virtual EL2 trap to EL2. Handle those traps just like we do for EL1 registers. One exception is CNTKCTL_EL12. We don't trap on CNTKCTL_EL1 for non-VHE virtual EL2 because we don't have to. However, accessing CNTKCTL_EL12 will trap since it's one of the EL12 registers controlled by the HCR_EL2.NV bit. Therefore, add a handler for it and don't treat it as a non-trapped register when preparing a shadow context. These registers, being only a view on their EL1 counterpart, are permanently hidden from userspace.

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
[maz: EL12_REG(), register visibility]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230209175820.1939006-17-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

Marc Zyngier authored
So far, we never needed to distinguish between registers hidden from userspace and registers hidden from a guest (they are always either visible to both, or hidden from both). With NV, we have the ugly case of the EL02 and EL12 registers, which are only a view on the EL0 and EL1 registers. It makes absolutely no sense to expose them to userspace, since it already has the canonical view. Add a new visibility flag (REG_HIDDEN_USER) and a new helper that checks for it as well as REG_HIDDEN when deciding whether to expose a sysreg to userspace. Subsequent patches will make use of it.

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230209175820.1939006-16-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

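A minimal sketch of what such a helper could look like; REG_HIDDEN_USER is named in the commit, while the helper name and the shape of the visibility() callback are assumptions:

    /* Sketch only: a register stays out of the userspace view if it is hidden
     * outright (REG_HIDDEN) or hidden from userspace only (REG_HIDDEN_USER). */
    static inline bool sysreg_hidden_from_user(const struct kvm_vcpu *vcpu,
                                               const struct sys_reg_desc *rd)
    {
        if (!rd->visibility)
            return false;

        return rd->visibility(vcpu, rd) & (REG_HIDDEN | REG_HIDDEN_USER);
    }
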
Jintack Lim authored
For the same reason we trap virtual memory register accesses at virtual EL2, we need to trap SPSR_EL1, ELR_EL1 and VBAR_EL1 accesses. ARM v8.3 introduces the HCR_EL2.NV1 bit to be able to trap on those register accesses in EL1. Do not set this bit until the whole nesting support is completed, which happens further down the line...

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230209175820.1939006-14-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

Jintack Lim authored
ARM v8.3 introduces a new bit in the HCR_EL2, which is the NV bit. When this bit is set, accessing EL2 registers in EL1 traps to EL2. In addition, executing the following instructions in EL1 will trap to EL2: tlbi, at, eret, and msr/mrs instructions to access SP_EL1. Most of the instructions that trap to EL2 with the NV bit were undef at EL1 prior to ARM v8.3. The only instruction that was not undef is eret. This patch sets up a handler for EL2 registers and SP_EL1 register accesses at EL1. The host hypervisor keeps those register values in memory, and will emulate their behavior. This patch doesn't set the NV bit yet. It will be set in a later patch once nested virtualization support is completed.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Jintack Lim <jintack.lim@linaro.org>
[maz: EL2_REG() macros]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230209175820.1939006-9-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

- 07 Feb, 2023 1 commit
Oliver Upton authored
Generally speaking, any memory allocations that can be associated with a particular VM should be charged to the cgroup of its process. Nonetheless, there are a couple spots in KVM/arm64 that aren't currently accounted:
- the ccsidr array containing the virtualized cache hierarchy
- the cpumask of supported cpus, for use of the vPMU on heterogeneous systems
Go ahead and set __GFP_ACCOUNT for these allocations.

Reviewed-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Link: https://lore.kernel.org/r/20230206235229.4174711-1-oliver.upton@linux.dev
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

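The mechanism is the standard __GFP_ACCOUNT one; a minimal kernel-style sketch (the function is made up, GFP_KERNEL_ACCOUNT is the real flag combination):

    #include <linux/slab.h>

    /* Sketch only: charge a VM-associated allocation to the VM owner's cgroup. */
    static u32 *alloc_ccsidr_array(size_t nr_entries)
    {
        /*
         * GFP_KERNEL_ACCOUNT is GFP_KERNEL | __GFP_ACCOUNT, so the memory is
         * accounted to the memory cgroup of the process that owns the VM.
         */
        return kcalloc(nr_entries, sizeof(u32), GFP_KERNEL_ACCOUNT);
    }
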
- 31 Jan, 2023 1 commit
Jiapeng Chong authored
No functional modification involved. ./arch/arm64/kvm/sys_regs.c:80:2-9: code aligned with following code on line 82.

Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3897
Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com>
Link: https://lore.kernel.org/r/20230131082703.118101-1-jiapeng.chong@linux.alibaba.com
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

- 26 Jan, 2023 1 commit
Marc Zyngier authored
Although not handling a trap is a pretty bad situation to be in, panicking the kernel isn't useful and provides no valuable information to help debugging the situation. Instead, dump the encoding of the unhandled sysreg, and inject an UNDEF into the guest. At least this gives a user an opportunity to report the issue with some information to help debugging it.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20230112123829.458912-4-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

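A rough sketch of the described behaviour (the log text and function name are illustrative; kvm_inject_undefined() is KVM/arm64's existing injection helper):

    /* Sketch only: log the unhandled encoding and UNDEF into the guest
     * instead of bringing the host down. */
    static int emulate_unknown_sysreg(struct kvm_vcpu *vcpu, u64 esr)
    {
        kvm_err("unhandled system register access, ESR_EL2 = 0x%llx\n", esr);
        kvm_inject_undefined(vcpu);   /* the guest takes an UNDEF exception */
        return 1;                     /* handled, from the host's point of view */
    }
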
- 21 Jan, 2023 1 commit
Akihiko Odaki authored
Before this change, the cache configuration of the physical CPU was exposed to vcpus. This is problematic because the cache configuration a vcpu sees varies when it migrates between physical CPUs with different cache configurations. Fabricate the cache configuration from the sanitized value, which holds the CTR_EL0 value userspace sees regardless of which physical CPU it resides on. CLIDR_EL1 and CCSIDR_EL1 are now writable from userspace so that the VMM can restore the values saved with the old kernel.

Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Link: https://lore.kernel.org/r/20230112023852.42012-8-akihiko.odaki@daynix.com
[ Oliver: Squash Marc's fix for CCSIDR_EL1.LineSize when set from userspace ]
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

- 12 Jan, 2023 1 commit
Akihiko Odaki authored
The CCSIDR access handler masks the associativity bits according to the bit layout for processors without FEAT_CCIDX. KVM also assumes CCSIDR_EL1 is 32-bit, whereas it becomes 64-bit if FEAT_CCIDX is enabled. Mask out FEAT_CCIDX so that these assumptions hold.

Suggested-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Link: https://lore.kernel.org/r/20230112023852.42012-7-akihiko.odaki@daynix.com
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>

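A minimal sketch of what masking FEAT_CCIDX amounts to, assuming the guest's view of ID_AA64MMFR2_EL1 (whose CCIDX field sits at bits [23:20]) is filtered before being exposed; the function name is made up:

    #include <linux/bits.h>
    #include <linux/types.h>

    /* Sketch only: clear ID_AA64MMFR2_EL1.CCIDX so the guest keeps using the
     * 32-bit, non-CCIDX layout of CCSIDR_EL1 that KVM assumes. */
    static u64 hide_feat_ccidx(u64 id_aa64mmfr2)
    {
        return id_aa64mmfr2 & ~GENMASK_ULL(23, 20);
    }
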
- 29 Dec, 2022 1 commit
Sean Christopherson authored
Define pr_fmt using KBUILD_MODNAME for all KVM x86 code so that printks use consistent formatting across common x86, Intel, and AMD code. In addition to providing consistent print formatting, using KBUILD_MODNAME, e.g. kvm_amd and kvm_intel, allows referencing SVM and VMX (and SEV and SGX and ...) as technologies without generating weird messages, and without causing naming conflicts with other kernel code, e.g. "SEV: ", "tdx: ", "sgx: " etc. are all used by the kernel for non-KVM subsystems.

Opportunistically move away from printk() for prints that need to be modified anyways, e.g. to drop a manual "kvm: " prefix. Opportunistically convert a few SGX WARNs that are similarly modified to WARN_ONCE; in the very unlikely event that the WARNs fire, odds are good that they would fire repeatedly and spam the kernel log without providing unique information in each print.

Note, defining pr_fmt yields undesirable results for code that uses KVM's printk wrappers, e.g. vcpu_unimpl(). But, that's a pre-existing problem as SVM/kvm_amd already defines a pr_fmt, and thankfully use of KVM's wrappers is relatively limited in KVM x86 code.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Message-Id: <20221130230934.1014142-35-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

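The pattern itself is the usual kernel one; a small self-contained illustration (the module and message text are made up):

    /* Must be defined before any #include so the pr_*() helpers pick it up. */
    #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

    #include <linux/module.h>
    #include <linux/printk.h>

    static int __init example_init(void)
    {
        /* Built as kvm_intel.ko, this logs "kvm_intel: module loaded". */
        pr_info("module loaded\n");
        return 0;
    }
    module_init(example_init);

    MODULE_LICENSE("GPL");
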
- 12 Dec, 2022 1 commit
James Clark authored
ARMV8_PMU_PMCR_N_MASK is an unshifted value, which results in the wrong reset value for PMCR_EL0, so shift it to fix it. This fixes the following error when running qemu:

    $ qemu-system-aarch64 -cpu host -machine type=virt,accel=kvm -kernel ...
    target/arm/helper.c:1813: pmevcntr_rawwrite: Assertion `counter < pmu_num_counters(env)' failed.

Fixes: 292e8f14 ("KVM: arm64: PMU: Simplify PMCR_EL0 reset handling")
Signed-off-by: James Clark <james.clark@arm.com>
Reviewed-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221209164446.1972014-2-james.clark@arm.com

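A standalone illustration of why the unshifted mask produces the wrong reset value (PMCR_EL0.N lives in bits [15:11]; the mask/shift values mirror the kernel's, everything else is made up). A guest that reads N=0 is the kind of mismatch the qemu assertion above trips over.

    #include <stdint.h>
    #include <stdio.h>

    #define PMCR_N_SHIFT 11
    #define PMCR_N_MASK  0x1f   /* unshifted, so it must be shifted before use */

    int main(void)
    {
        uint64_t host_pmcr = 6ULL << PMCR_N_SHIFT;  /* host advertises 6 counters */

        /* Buggy reset value: masks bits [4:0], so N collapses to 0. */
        uint64_t bad  = host_pmcr & PMCR_N_MASK;
        /* Fixed reset value: shift the mask so the N field is preserved. */
        uint64_t good = host_pmcr & (PMCR_N_MASK << PMCR_N_SHIFT);

        printf("buggy N=%llu, fixed N=%llu\n",
               (unsigned long long)(bad >> PMCR_N_SHIFT),
               (unsigned long long)(good >> PMCR_N_SHIFT));
        return 0;
    }
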
- 01 Dec, 2022 1 commit
James Morse authored
To convert the 32bit id registers to use the sysreg generation, they must first have a regular pattern, to match the symbols the script generates. Ensure symbols for the ID_DFR0_EL1 register have an _EL1 suffix, and use lower-case for feature names where the arm-arm does the same. The arm-arm has feature names for some of the ID_DFR0_EL1.PerfMon encodings. Use these feature names in preference to the '8_4' indication of the architecture version they were introduced in. No functional change.

Signed-off-by: James Morse <james.morse@arm.com>
Link: https://lore.kernel.org/r/20221130171637.718182-12-james.morse@arm.com
Signed-off-by: Will Deacon <will@kernel.org>

- 28 Nov, 2022 2 commits
Marc Zyngier authored
Userspace can play some dirty tricks on us by selecting a given PMU version (such as PMUv3p5), restoring a PMCR_EL0 value that has PMCR_EL0.LP set, and then switching the PMU version to PMUv3p1, for example. In this situation, we end up with PMCR_EL0.LP being set and spreading havoc in the PMU emulation. This is especially hard as the first two steps can be done on one vcpu and the third step on another, meaning that we need to sanitise *all* vcpus when the PMU version is changed. In order to avoid a pretty complicated locking situation, defer the sanitisation of PMCR_EL0 to the point where the vcpu is actually run for the first time, using the existing KVM_REQ_RELOAD_PMU request that calls into kvm_pmu_handle_pmcr(). There is still an obscure corner case where userspace could do the above trick, and then save the VM without running it. They would then observe an inconsistent state (PMUv3.1 + LP set), but that state will be fixed on the first run anyway whenever the guest gets restored on a host.

Reported-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>

Marc Zyngier authored
Resetting PMCR_EL0 is a pretty involved process that includes poisoning some of the writable bits, just because we can. It makes it hard to reason about what gets configured, and just resetting things to 0 seems like a much saner option. Reduce reset_pmcr() to just preserving PMCR_EL0.N from the host, and setting PMCR_EL0.LC if we don't support AArch32.

Signed-off-by: Marc Zyngier <maz@kernel.org>

- 19 Nov, 2022 4 commits
Marc Zyngier authored
PMUv3p5 (which is mandatory with ARMv8.5) comes with some extra features:
- All counters are 64bit
- The overflow point is controlled by the PMCR_EL0.LP bit
Add the required checks in the helpers that control counter width and overflow, as well as the sysreg handling for the LP bit. A new kvm_pmu_is_3p5() helper makes it easy to spot the PMUv3p5 specific handling.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221113163832.3154370-14-maz@kernel.org

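As a rough, standalone illustration of the width/overflow rule described above (not KVM code; the structure and names are made up, and the real logic has more cases):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct pmu_cfg {
        bool pmuv3p5;  /* guest PMU is v3p5 or newer */
        bool pmcr_lp;  /* PMCR_EL0.LP: 64-bit overflow for event counters */
        bool pmcr_lc;  /* PMCR_EL0.LC: 64-bit overflow for the cycle counter */
    };

    /* Overflow point for counter 'idx' (31 being the cycle counter). */
    static uint64_t overflow_mask(const struct pmu_cfg *c, int idx)
    {
        bool cycle  = (idx == 31);
        bool full64 = cycle ? c->pmcr_lc : (c->pmuv3p5 && c->pmcr_lp);

        return full64 ? UINT64_MAX : UINT32_MAX;
    }

    int main(void)
    {
        struct pmu_cfg c = { .pmuv3p5 = true, .pmcr_lp = true, .pmcr_lc = true };

        printf("event counter overflows at %#llx\n",
               (unsigned long long)overflow_mask(&c, 0));
        return 0;
    }
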
Marc Zyngier authored
Allow userspace to write ID_DFR0_EL1, on the condition that only the PerfMon field can be altered and be something that is compatible with what was computed for the AArch64 view of the guest.

Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221113163832.3154370-13-maz@kernel.org

Marc Zyngier authored
Allow userspace to write ID_AA64DFR0_EL1, on the condition that only the PMUVer field can be altered and be at most the one that was initially computed for the guest.

Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221113163832.3154370-12-maz@kernel.org

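A standalone illustration of the acceptance rule (PMUVer is the 4-bit field at bits [11:8] of ID_AA64DFR0_EL1; everything else is made up, and the real code also has to special-case the IMPLEMENTATION DEFINED encoding, which is ignored here):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static unsigned int pmuver(uint64_t dfr0)
    {
        return (dfr0 >> 8) & 0xf;
    }

    /* Accept a userspace write only if every field matches what KVM computed,
     * except PMUVer, which may be lowered but never raised. */
    static bool id_aa64dfr0_write_ok(uint64_t kvm_val, uint64_t user_val)
    {
        uint64_t mask = ~(0xfULL << 8);   /* everything but PMUVer */

        if ((kvm_val & mask) != (user_val & mask))
            return false;

        return pmuver(user_val) <= pmuver(kvm_val);
    }

    int main(void)
    {
        uint64_t kvm_val = 0x6ULL << 8;   /* PMUv3p5 */

        printf("downgrade to PMUv3p1: %s\n",
               id_aa64dfr0_write_ok(kvm_val, 0x4ULL << 8) ? "accepted" : "rejected");
        return 0;
    }
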
Marc Zyngier authored
As further patches will enable the selection of a PMU revision from userspace, sample the supported PMU revision at VM creation time, rather than building it each time the ID_AA64DFR0_EL1 register is accessed. This shouldn't result in any change in behaviour.

Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20221113163832.3154370-11-maz@kernel.org

- 16 Sep, 2022 3 commits
Mark Brown authored
Currently the kernel refers to the versions of the PMU and SPE features by the version of the architecture where those features were updated, but the ARM refers to them using the FEAT_ names for the features. To improve consistency, help with updating for newer features, and since v9 will make our current naming scheme a bit more confusing, update the macros identifying features to use the FEAT_ based scheme.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-4-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Mark Brown authored
Normally we include the full register name in the defines for fields within registers but this has not been followed for ID registers. In preparation for automatic generation of defines add the _EL1s into the defines for ID_AA64DFR0_EL1 to follow the convention. No functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-3-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Mark Brown authored
The naming scheme the architecture uses for the fields in ID_AA64DFR0_EL1 does not align well with kernel conventions, using as it does a lot of MixedCase in various arrangements. In preparation for automatically generating the defines for this register rename the defines used to match what is in the architecture.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20220910163354.860255-2-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 14 Sep, 2022 5 commits
Oliver Upton authored
One of the oddities of the architecture is that the AArch64 views of the AArch32 ID registers are UNKNOWN if AArch32 isn't implemented at any EL. Nonetheless, KVM exposes these registers to userspace for the sake of save/restore. It is possible that the UNKNOWN value could differ between systems, leading to a rejected write from userspace. Avoid the issue altogether by handling the AArch32 ID registers as RAZ/WI when on an AArch64-only system.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220913094441.3957645-7-oliver.upton@linux.dev

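A minimal sketch of how such a visibility hook could look; the flag and helper names here are assumptions based on the series description (reads as zero, userspace writes ignored) rather than the exact patch:

    /* Sketch only: AArch32 ID registers on an AArch64-only system. */
    static unsigned int aa32_id_visibility(const struct kvm_vcpu *vcpu,
                                           const struct sys_reg_desc *rd)
    {
        if (!kvm_supports_32bit_el0())
            return REG_RAZ | REG_USER_WI;   /* read as zero, ignore user writes */

        return 0;                           /* normal handling otherwise */
    }
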
Oliver Upton authored
We're about to ignore writes to AArch32 ID registers on AArch64-only systems. Add a bit to indicate a register is handled as write ignore when accessed from userspace.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220913094441.3957645-6-oliver.upton@linux.dev

Oliver Upton authored
There is no longer a need for caller-specified RAZ visibility. Hoist the call to sysreg_visible_as_raz() into read_id_reg() and drop the parameter. No functional change intended.

Suggested-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220913094441.3957645-4-oliver.upton@linux.dev

Oliver Upton authored
The internal accessors are only ever called once. Dump out their contents in the caller. No functional change intended.

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220913094441.3957645-3-oliver.upton@linux.dev

Oliver Upton authored
The generic id reg accessors already handle RAZ registers by way of the visibility hook. Add a visibility hook that returns REG_RAZ unconditionally and throw out the RAZ specific accessors.

Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220913094441.3957645-2-oliver.upton@linux.dev

- 09 Sep, 2022 3 commits
Kristina Martsenko authored
In preparation for converting the ID_AA64MMFR1_EL1 system register defines to automatic generation, rename them to follow the conventions used by other automatically generated registers:
* Add _EL1 in the register name.
* Rename fields to match the names in the ARM ARM:
  * LOR -> LO
  * HPD -> HPDS
  * VHE -> VH
  * HADBS -> HAFDBS
  * SPECSEI -> SpecSEI
  * VMIDBITS -> VMIDBits
There should be no functional change as a result of this patch.

Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-11-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Mark Brown authored
Our standard is to include the _EL1 in the constant names for registers, but we did not do that for ID_AA64PFR1_EL1; update it to do so in preparation for conversion to automatic generation. No functional change.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-8-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Mark Brown authored
Normally we include the full register name in the defines for fields within registers but this has not been followed for ID registers. In preparation for automatic generation of defines add the _EL1s into the defines for ID_AA64PFR0_EL1 to follow the convention. No functional changes.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Kristina Martsenko <kristina.martsenko@arm.com>
Link: https://lore.kernel.org/r/20220905225425.1871461-7-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 17 Aug, 2022 1 commit
Oliver Upton authored
KVM does not support AArch32 on asymmetric systems. To that end, enforce AArch64-only behavior on PMCR_EL1.LC when on an asymmetric system.

Fixes: 2122a833 ("arm64: Allow mismatched 32-bit EL0 support")
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220816192554.1455559-2-oliver.upton@linux.dev

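A minimal sketch of what enforcing AArch64-only behaviour means here, assuming the LC bit (bit 6 of PMCR_EL0) is simply forced to 1 whenever 32-bit EL0 cannot run; the names are made up:

    #include <linux/bits.h>
    #include <linux/types.h>

    #define PMCR_LC BIT(6)   /* PMCR_EL0.LC: cycle counter overflows at 64 bits */

    /* Sketch only: with no usable 32-bit EL0, LC behaves as RES1. */
    static u64 sanitise_pmcr_lc(u64 pmcr, bool guest_has_32bit_el0)
    {
        if (!guest_has_32bit_el0)
            pmcr |= PMCR_LC;

        return pmcr;
    }
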
- 17 Jul, 2022 4 commits
Marc Zyngier authored
Once upon a time, the 32bit KVM/arm port was the reference, while the arm64 version was the new kid on the block, without a clear future... This was a long time ago. "The times, they are a-changing."

Signed-off-by: Marc Zyngier <maz@kernel.org>

Marc Zyngier authored
This helper doesn't have a user anymore, let's get rid of it.

Signed-off-by: Marc Zyngier <maz@kernel.org>

Marc Zyngier authored
These helpers are only used by the invariant stuff now, and while they pretend to support non-64bit registers, this only serves as a way to scare the casual reviewer... Replace these helpers with our good friends get/put_user(), and don't look back.

Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>

Marc Zyngier authored
Until now, the .set_user and .get_user callbacks had to implement (directly or not) the userspace memory accesses. Although this gives us maximum flexibility, this is also a maintenance burden, making it hard to audit, and I'd feel much better if it was all located in a single place. So let's do just that, simplifying most of the function signatures in the process (the callbacks are now only concerned with the data itself, and not with userspace).

Reviewed-by: Reiji Watanabe <reijiw@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>

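A sketch of the resulting shape (simplified; the wrapper name and callback signature are assumptions): the callback only produces a u64, and the single userspace access lives in common code where it is easy to audit:

    /* Sketch only: one place performs the userspace access for every sysreg. */
    static int kvm_sys_reg_get_user(struct kvm_vcpu *vcpu,
                                    const struct kvm_one_reg *reg,
                                    const struct sys_reg_desc *rd)
    {
        u64 __user *uaddr = (u64 __user *)(unsigned long)reg->addr;
        u64 val;
        int ret;

        /* The callback fills in 'val' and never touches userspace itself. */
        ret = rd->get_user(vcpu, rd, &val);
        if (ret)
            return ret;

        return put_user(val, uaddr);
    }
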