- 12 Feb, 2021 6 commits
-
-
Marc Zyngier authored
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
KVM/arm64 fixes for 5.11, take #2

- Don't allow tagged pointers to point to memslots
- Filter out ARMv8.1+ PMU events on v8.0 hardware
- Hide PMU registers from userspace when no PMU is configured
- More PMU cleanups
- Don't try to handle broken PSCI firmware
- More sys_reg() to reg_to_encoding() conversions

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 03 Feb, 2021 10 commits
-
-
Quentin Perret authored
In order to ensure the module loader does not get confused if a symbol is exported in EL2 nVHE code (as will be the case when we compile e.g. lib/memset.S into the EL2 object), make sure to stub all exports using __DISABLE_EXPORTS in the nvhe folder.

Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210203141931.615898-3-qperret@google.com
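For illustration, the stubbing mechanism boils down to a conditional in include/linux/export.h; a simplified sketch (the real macros carry more plumbing):

  /* With __DISABLE_EXPORTS defined (as it now is for objects in the
   * nvhe folder), EXPORT_SYMBOL() expands to nothing: no __ksymtab
   * entries are emitted, so the module loader never sees EL2 symbols. */
  #if defined(__DISABLE_EXPORTS)
  #define EXPORT_SYMBOL(sym)
  #define EXPORT_SYMBOL_GPL(sym)
  #else
  #define EXPORT_SYMBOL(sym)		__EXPORT_SYMBOL(sym, "")
  #define EXPORT_SYMBOL_GPL(sym)	__EXPORT_SYMBOL(sym, "_gpl")
  #endif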
-
Quentin Perret authored
It is currently possible to stub EXPORT_SYMBOL() macros in C code using __DISABLE_EXPORTS, which is necessary to run in constrained environments such as the EFI stub or the decompressor. But this currently doesn't apply to exports from assembly, which can lead to somewhat confusing situations. Consolidate the __DISABLE_EXPORTS infrastructure by checking it from asm-generic/export.h as well.

Signed-off-by: Quentin Perret <qperret@google.com>
Acked-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210203141931.615898-2-qperret@google.com
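The assembly side gains the same guard; a sketch of the shape in asm-generic/export.h (macro body elided, exact guard spelling assumed):

  .macro ___EXPORT_SYMBOL name,val,sec
  #if defined(CONFIG_MODULES) && !defined(__DISABLE_EXPORTS)
	/* emit the __ksymtab entry, as before */
  #endif
  .endm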
-
Alexandru Elisei authored
The AArch32 debug ID register is called DBG*D*IDR (emphasis added), not DBGIDR; use the correct spelling.

Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210128132823.35067-1-alexandru.elisei@arm.com
-
Marc Zyngier authored
Instead of using a bunch of magic numbers, use the existing definitions that have been added since 8673e02e ("arm64: perf: Add support for ARMv8.5-PMU 64-bit counters").

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
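As an illustration (a sketch, not the exact hunk), event-width selection by PMU version goes from raw numbers to the names from asm/sysreg.h:

  /* before: magic values for the ID_AA64DFR0_EL1.PMUVer field */
  switch (pmuver) {
  case 1:				/* ARMv8.0 */
	return GENMASK(9, 0);
  case 4:				/* ARMv8.1 */
  case 5:				/* ARMv8.4 */
  case 6:				/* ARMv8.5 */
	return GENMASK(15, 0);
  }

  /* after: the definitions added by 8673e02e */
  switch (pmuver) {
  case ID_AA64DFR0_PMUVER_8_0:
	return GENMASK(9, 0);		/* 10-bit event space */
  case ID_AA64DFR0_PMUVER_8_1:
  case ID_AA64DFR0_PMUVER_8_4:
  case ID_AA64DFR0_PMUVER_8_5:
	return GENMASK(15, 0);		/* 16-bit event space */
  }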
-
Marc Zyngier authored
Upgrading the PMU code from ARMv8.1 to ARMv8.4 turns out to be pretty easy. All that is required is support for PMMIR_EL1, which is read-only, and for which returning 0 is a valid option as long as we don't advertise STALL_SLOT as an implemented event. Let's just do that and adjust what we return to the guest.

Signed-off-by: Marc Zyngier <maz@kernel.org>
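Since an all-zeroes PMMIR_EL1 is architecturally fine here, the register can be wired to KVM's existing read-as-zero/write-ignore accessor; plausibly a one-line sys_regs.c entry (table position assumed):

  /* PMMIR_EL1: RAZ/WI is valid as long as STALL_SLOT isn't advertised */
  { SYS_DESC(SYS_PMMIR_EL1), trap_raz_wi },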
-
Marc Zyngier authored
Let's not pretend we support anything but ARMv8.0 as far as the debug architecture is concerned.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
Our current ID register filtering is starting to be a mess of if() statements, and isn't going to get any saner. Let's turn it into a switch(), which has a chance of being more readable, and introduce a FEATURE() macro that allows easy generation of feature masks. No functional change intended.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
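The resulting shape is roughly this (a sketch; FEATURE() builds the 4-bit field mask from an existing *_SHIFT definition, and the function name is assumed):

  #define FEATURE(x)	GENMASK_ULL(x##_SHIFT + 3, x##_SHIFT)

  static u64 filter_id_reg(const struct kvm_vcpu *vcpu, u32 id, u64 val)
  {
	switch (id) {
	case SYS_ID_AA64PFR0_EL1:
		/* hide SVE from the guest if it isn't enabled */
		if (!vcpu_has_sve(vcpu))
			val &= ~FEATURE(ID_AA64PFR0_SVE);
		break;
	default:
		break;
	}
	return val;
  }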
-
Marc Zyngier authored
Despite advertising support for AArch32 PMUv3p1, we fail to handle the PMCEID{2,3} registers, which conveniently alias with the top bits of PMCEID{0,1}_EL1. Implement these registers with the usual AA32(HI/LO) aliasing mechanism.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
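The aliasing amounts to returning one half of the 64-bit EL1 view; a sketch (helper name assumed, kvm_pmu_get_pmceid() being the existing 64-bit accessor):

  static u32 read_pmceid_aa32(struct kvm_vcpu *vcpu, bool pmceid1, bool hi)
  {
	u64 mask = kvm_pmu_get_pmceid(vcpu, pmceid1);

	/* PMCEID{2,3} are the top halves of PMCEID{0,1}_EL1 */
	return hi ? upper_32_bits(mask) : lower_32_bits(mask);
  }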
-
Marc Zyngier authored
We shouldn't expose *any* PMU capability when no PMU has been configured for this VM.

Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Marc Zyngier authored
The AArch32 CP14 DBGDIDR has bit 15 set to RES1, which our current emulation doesn't set. Just add the missing bit.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 01 Feb, 2021 1 commit
-
-
Marc Zyngier authored
gen-hyprel is, for better or worse, a native-endian program: it assumes that the ELF data structures are in the host's endianness, and in one particular case even assumes that the compiled kernel is little-endian. None of these assumptions holds true, though: people actually build (use?) BE arm64 kernels, and seem to avoid doing so on BE hosts. Madness! In order to solve this, wrap each access to the ELF data structures with the required byte-swapping magic. This requires obtaining the endianness of the kernel object and providing per-endianness wrappers. The result is a kernel that links and even boots in a model.

Fixes: 8c49b5d4 ("KVM: arm64: Generate hyp relocation data")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
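gen-hyprel being a host tool, the wrappers are plain C; a minimal sketch (names assumed), using the byte-order helpers from <endian.h>:

  #include <elf.h>
  #include <endian.h>
  #include <stdint.h>

  static int elf_is_be;	/* set from ehdr->e_ident[EI_DATA] at startup */

  /* Convert an ELF field to host order, whatever the object's order. */
  static inline uint64_t elf64toh(uint64_t x)
  {
	return elf_is_be ? be64toh(x) : le64toh(x);
  }

  static inline uint32_t elf32toh(uint32_t x)
  {
	return elf_is_be ? be32toh(x) : le32toh(x);
  }

  /* usage: every raw field access gets wrapped, e.g.
   *	uint64_t off = elf64toh(shdr->sh_offset);
   */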
-
- 27 Jan, 2021 1 commit
-
-
Marc Zyngier authored
Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 25 Jan, 2021 5 commits
-
-
Ard Biesheuvel authored
Provide a hypervisor implementation of the ARM architected TRNG firmware interface described in ARM spec DEN0098. All function IDs are implemented, including both 32-bit and 64-bit versions of the TRNG_RND service, which is the centerpiece of the API. The API is backed by the kernel's entropy pool only, to avoid guests draining more precious direct entropy sources.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
[Andre: minor fixes, drop arch_get_random() usage]
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210106103453.152275-6-andre.przywara@arm.com
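A sketch of the central TRNG_RND64 handling under these constraints (structure and error-code names assumed; the real handler lives in arch/arm64/kvm/trng.c):

  static void kvm_trng_rnd64(struct kvm_vcpu *vcpu)
  {
	u32 bits = smccc_get_arg1(vcpu);	/* requested entropy, in bits */
	u64 val[3] = {};

	if (!bits || bits > 3 * 64) {
		smccc_set_retval(vcpu, SMCCC_RET_INVALID_PARAMETER, 0, 0, 0);
		return;
	}

	/* backed by the kernel entropy pool only, as noted above */
	get_random_bytes(val, DIV_ROUND_UP(bits, 8));
	smccc_set_retval(vcpu, SMCCC_RET_SUCCESS, val[0], val[1], val[2]);
  }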
-
Yanan Wang authored
We currently set the pfn dirty and mark the page dirty before calling the fault handlers in user_mem_abort(), so we might end up with spurious dirty pages if the update of permissions or the mapping has failed. Let's move these two operations after the fault handlers; they will then be done only if the fault has been handled successfully. When an -EAGAIN errno is returned from the map handler, we want the vcpu to enter the guest directly instead of exiting back to userspace, so adjust the return value at the end of the function.

Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210114121350.123684-4-wangyanan55@huawei.com
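The reordering amounts to something like this at the tail of user_mem_abort() (a sketch, arguments abbreviated to the relevant ones):

  ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
			       pfn << PAGE_SHIFT, prot, memcache);

  /* only dirty the page once the mapping was actually installed */
  if (!ret && writable) {
	kvm_set_pfn_dirty(pfn);
	mark_page_dirty(kvm, gfn);
  }

  /* -EAGAIN means "re-enter the guest and retry", not an error */
  return ret != -EAGAIN ? ret : 0;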
-
Yanan Wang authored
(1) While a VM with many vCPUs is running, if some vCPUs access the same GPA almost at the same time and the stage-2 mapping of the GPA has not yet been built, they will all cause translation faults. The first vCPU builds the mapping, and the following ones end up updating the valid leaf PTE. Note that these vCPUs might want different access permissions (RO, RW, RX, RWX, etc.).

(2) It's inevitable that we will sometimes update an existing valid leaf PTE in the map path, and we perform break-before-make in this case. More unnecessary translation faults can then be caused if the *break stage* of BBM is caught by other vCPUs.

With (1) and (2), something unsatisfactory can happen: vCPU A causes a translation fault and builds the mapping with RW permissions; vCPU B then updates the valid leaf PTE with break-before-make, and the permissions are set back to RO. In addition, the *break stage* of BBM may trigger more translation faults, so some useless small loops can occur.

We can optimize away both problems: when we need to update a valid leaf PTE in the map path, filter out the case where the update only changes access permissions, and don't touch the valid leaf PTE there. Instead, let the vCPU re-enter the guest; if it still wants more permissions, it will exit next time and go through the relax_perms path, without break-before-make.

Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210114121350.123684-3-wangyanan55@huawei.com
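The filter can be expressed as a small predicate in the stage-2 map walker; a sketch (the attribute mask name is assumed):

  static bool stage2_pte_needs_update(kvm_pte_t old, kvm_pte_t new)
  {
	/* installing or tearing down a mapping always needs an update */
	if (!kvm_pte_valid(old) || !kvm_pte_valid(new))
		return true;

	/* if only the access permissions differ, leave the PTE alone and
	 * let the vCPU fault into the relax-perms path on its next exit */
	return !!((old ^ new) & ~KVM_PTE_LEAF_ATTR_S2_PERMS);
  }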
-
Yanan Wang authored
The procedures for the hyp stage-1 map and the guest stage-2 map are quite different, but they are tied closely together by the function kvm_set_valid_leaf_pte(). Adjust the relevant code for ease of future maintenance.

Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210114121350.123684-2-wangyanan55@huawei.com
-
Andrew Scull authored
The arguments for __do_hyp_init are now passed in a pointer to a struct, which means there are scratch registers available for use. Thanks to this, we no longer need the clever, but hard to read, tricks that avoid the need for scratch registers when checking for the __kvm_hyp_init HVC.

Tested-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Andrew Scull <ascull@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210125145415.122439-2-ascull@google.com
-
- 23 Jan, 2021 8 commits
-
-
David Brazdil authored
Hyp code used the hyp_symbol_addr helper to force PC-relative addressing, because absolute addressing results in kernel VAs due to the way hyp code is linked. This is not true anymore, so remove the helper and update all of its users.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210105180541.65031-9-dbrazdil@google.com
-
David Brazdil authored
Storing a function pointer in hyp now generates relocation information used at early boot to convert the address to hyp VA. The existing alternative-based conversion mechanism is therefore obsolete. Remove it and simplify its users.

Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210105180541.65031-8-dbrazdil@google.com
-
David Brazdil authored
Hyp code uses absolute addressing to obtain a kimg VA of a small number of kernel symbols. Since the kernel now converts constant pool addresses to hyp VAs, this trick does not work anymore. Change the helpers to convert from hyp VA back to kimg VA or PA, as needed, and rework the callers accordingly.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210105180541.65031-7-dbrazdil@google.com
-
David Brazdil authored
KVM nVHE code runs under a different VA mapping than the kernel, hence so far it avoided using absolute addressing because the VA in a constant pool is relocated by the linker to a kernel VA (see hyp_symbol_addr). Now the kernel has access to a list of positions that contain a kimg VA but will be accessed only in hyp execution context. These are generated by the gen-hyprel build-time tool and stored in .hyp.reloc. Add an early boot pass over the entries that converts the kimg VAs to hyp VAs. Note that this requires the .hyp* ELF sections to be mapped read-write at that point.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210105180541.65031-6-dbrazdil@google.com
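Conceptually, the boot-time pass is a short loop (a sketch; symbol and helper names assumed):

  static void __init kvm_apply_hyp_relocations(void)
  {
	int32_t *rel = (int32_t *)__hyp_reloc_begin;
	int32_t *end = (int32_t *)__hyp_reloc_end;

	for (; rel < end; rel++) {
		/* each entry is a 32-bit self-relative offset */
		u64 *ptr = (u64 *)((u64)rel + *rel);

		/* patch the kimg VA stored there into a hyp VA */
		*ptr = __early_kern_hyp_va(*ptr);
	}
  }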
-
David Brazdil authored
Add a post-processing step to the compilation of KVM nVHE hyp code which calls a custom host tool (gen-hyprel) on the partially linked object file (hyp sections' names prefixed). The tool lists all R_AARCH64_ABS64 data relocations targeting hyp sections and generates an assembly file that will form a new section, .hyp.reloc, in the kernel binary. The new section contains an array of 32-bit offsets to the positions targeted by these relocations. Since the addresses of those positions will not be determined until the linking of `vmlinux`, each 32-bit entry carries a R_AARCH64_PREL32 relocation with addend <section_base_sym> + <r_offset>. The linker of `vmlinux` will therefore fill the slot accordingly. This relocation data will be used at runtime to convert the kernel VAs at those positions to hyp VAs.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210105180541.65031-5-dbrazdil@google.com
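Entry for entry, the generated assembly has this shape (illustrative; anchor symbol naming and offset are assumed):

  .section .hyp.reloc, "a"
  .align 2
  /* one entry per R_AARCH64_ABS64 relocation found in a hyp section;
   * "sym + offset - ." makes the assembler emit the R_AARCH64_PREL32
   * relocation that the vmlinux link will resolve */
  .long __hyp_section_.hyp.text + 0x1a8 - .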
-
David Brazdil authored
Generating hyp relocations will require referencing positions at a given offset from the beginning of hyp sections. Since the final layout will not be determined until the linking of `vmlinux`, modify the hyp linker script to insert a symbol at the first byte of each hyp section to use as an anchor. The linker of `vmlinux` will place the symbols together with the sections.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210105180541.65031-4-dbrazdil@google.com
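After preprocessing, each hyp output section in the linker script conceptually looks like this (a sketch, symbol naming assumed):

  .hyp.text : {
	__hyp_section_.hyp.text = .;	/* anchor: first byte of the section */
	*(.hyp.text)
  }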
-
David Brazdil authored
We will need to recognize pointers in .rodata specific to hyp, so establish a .hyp.rodata ELF section. Merge it with the existing .hyp.data..ro_after_init, as they are treated the same at runtime.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210105180541.65031-3-dbrazdil@google.com
-
David Brazdil authored
So far, hyp-init.S created a .hyp.idmap.text section directly, without relying on the hyp linker script to prefix its name. Change it to create .idmap.text and add a HYP_SECTION entry to hyp.lds.S. This way all .hyp* sections go through the linker script and can be instrumented there.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210105180541.65031-2-dbrazdil@google.com
-
- 21 Jan, 2021 3 commits
-
-
Marc Zyngier authored
The use of a tagged address could be pretty confusing for the whole memslot infrastructure as well as the MMU notifiers. Forbid it altogether, as it never quite worked in the first place.

Cc: stable@vger.kernel.org
Reported-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
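The check itself is small; a sketch of the memslot-preparation path, using the kernel's existing untagged_addr() helper:

  /* a tagged userspace address must never reach the memslot code */
  if (mem->userspace_addr != untagged_addr(mem->userspace_addr))
	return -EINVAL;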
-
Marc Zyngier authored
When running on v8.0 HW, make sure we don't try to advertise events in the 0x4000-0x403f range.

Cc: stable@vger.kernel.org
Fixes: 88865bec ("KVM: arm64: Mask out filtered events in PCMEID{0,1}_EL1")
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210121105636.1478491-1-maz@kernel.org
-
Steven Price authored
KASAN in HW_TAGS mode will store MTE tags in the top byte of the pointer. When computing the offset for TPIDR_EL2 we don't want anything in the top byte, so remove the tag to ensure the computation is correct no matter what the tag is.

Fixes: 94ab5b61 ("kasan, arm64: enable CONFIG_KASAN_HW_TAGS")
Signed-off-by: Steven Price <steven.price@arm.com>
[maz: added comment]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210108161254.53674-1-steven.price@arm.com
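A sketch of the idea (helper names assumed here, treat as illustrative): strip the tag before the pointer arithmetic, so the per-CPU offset written to TPIDR_EL2 is tag-free:

  /* kasan_reset_tag() clears the top-byte tag; without it the
   * subtraction below would inherit whatever tag KASAN assigned */
  params->tpidr_el2 =
	(unsigned long)kasan_reset_tag(per_cpu_ptr_nvhe_sym(__per_cpu_start, cpu)) -
	(unsigned long)kvm_ksym_ref(CHOOSE_NVHE_SYM(__per_cpu_start));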
-
- 20 Jan, 2021 2 commits
-
-
Ard Biesheuvel authored
The ARM architected TRNG firmware interface, described in ARM spec DEN0098, defines an ARM SMCCC based interface to a true random number generator provided by firmware. Add the definitions of the SMCCC functions as defined by the spec.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Link: https://lore.kernel.org/r/20210106103453.152275-2-andre.przywara@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
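The new definitions follow the existing ARM_SMCCC_* pattern in linux/arm-smccc.h; for example (a sketch, function IDs per DEN0098):

  /* TRNG version query: fast call, SMC32, standard service owner */
  #define ARM_SMCCC_TRNG_VERSION					\
	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
			   ARM_SMCCC_SMC_32,				\
			   ARM_SMCCC_OWNER_STANDARD, 0x50)

  /* 64-bit entropy request, the centerpiece of the interface */
  #define ARM_SMCCC_TRNG_RND64					\
	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,			\
			   ARM_SMCCC_SMC_64,				\
			   ARM_SMCCC_OWNER_STANDARD, 0x53)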
-
Marc Zyngier authored
Since GCC < 5.1 has been shown to be unsuitable for the arm64 kernel, let's drop the workaround for the 'S' asm constraint that GCC 4.9 doesn't always grok. This is effectively a revert of 9fd339a4 ("arm64: Work around broken GCC 4.9 handling of "S" constraint").

Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20210118130129.2875949-1-maz@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
- 18 Jan, 2021 1 commit
-
-
Linus Torvalds authored
-
- 17 Jan, 2021 3 commits
-
-
Merge tag 'perf-tools-fixes-2021-01-17' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
Linus Torvalds authored
Pull perf tools fixes from Arnaldo Carvalho de Melo:

- Fix 'CPU too large' error in Intel PT
- Correct event attribute sizes in 'perf inject'
- Sync build_bug.h and kvm.h kernel copies
- Fix bpf.h header include directive in 5sec.c 'perf trace' bpf example
- libbpf tests fixes
- Fix shadow stat 'perf test' for non-bash shells
- Take cgroups into account for shadow stats in 'perf stat'

* tag 'perf-tools-fixes-2021-01-17' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux:
  perf inject: Correct event attribute sizes
  perf intel-pt: Fix 'CPU too large' error
  perf stat: Take cgroups into account for shadow stats
  perf stat: Introduce struct runtime_stat_data
  libperf tests: Fail when failing to get a tracepoint id
  libperf tests: If a test fails return non-zero
  libperf tests: Avoid uninitialized variable warning
  perf test: Fix shadow stat test for non-bash shells
  tools headers: Syncronize linux/build_bug.h with the kernel sources
  tools headers UAPI: Sync kvm.h headers with the kernel sources
  perf bpf examples: Fix bpf.h header include directive in 5sec.c example
-
Merge tag 'powerpc-5.11-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Linus Torvalds authored
Pull powerpc fixes from Michael Ellerman:
 "One fix for a lack of alignment in our linker script, that can lead
  to crashes depending on configuration etc.

  One fix for the 32-bit VDSO after the C VDSO conversion.

  Thanks to Andreas Schwab, Ariel Marcovitch, and Christophe Leroy"

* tag 'powerpc-5.11-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/vdso: Fix clock_gettime_fallback for vdso32
  powerpc: Fix alignment bug within the init sections
-
Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Linus Torvalds authored
Pull misc vfs fixes from Al Viro:
 "Several assorted fixes. I still think that the audit ->d_name race is
  better fixed this way for the benefit of backports, with any possibly
  fancier variants done on top of it"

* 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  dump_common_audit_data(): fix racy accesses to ->d_name
  iov_iter: fix the uaccess area in copy_compat_iovec_from_user
  umount(2): move the flag validity checks first
-