- 25 Aug, 2023 8 commits
-
-
Will Deacon authored
* for-next/perf:
  drivers/perf: hisi: Update HiSilicon PMU maintainers
  arm_pmu: acpi: Add a representative platform device for TRBE
  arm_pmu: acpi: Refactor arm_spe_acpi_register_device()
  hw_breakpoint: fix single-stepping when using bpf_overflow_handler
  perf/imx_ddr: don't enable counter0 if none of 4 counters are used
  perf/imx_ddr: speed up overflow frequency of cycle
  drivers/perf: hisi: Schedule perf session according to locality
  perf/arm-dmc620: Fix dmc620_pmu_irqs_lock/cpu_hotplug_lock circular lock dependency
  perf/smmuv3: Add MODULE_ALIAS for module auto loading
  perf/smmuv3: Enable HiSilicon Erratum 162001900 quirk for HIP08/09
  perf: pmuv3: Remove comments from armv8pmu_[enable|disable]_event()
  perf/arm-cmn: Add CMN-700 r3 support
  perf/arm-cmn: Refactor HN-F event selector macros
  perf/arm-cmn: Remove spurious event aliases
  drivers/perf: Explicitly include correct DT includes
  perf: pmuv3: Add Cortex A520, A715, A720, X3 and X4 PMUs
  dt-bindings: arm: pmu: Add Cortex A520, A715, A720, X3, and X4
  perf/smmuv3: Remove build dependency on ACPI
  perf: xgene_pmu: Convert to devm_platform_ioremap_resource()
  driver/perf: Add identifier sysfs file for Yitian 710 DDR
-
Will Deacon authored
* for-next/mm:
  arm64: fix build warning for ARM64_MEMSTART_SHIFT
  arm64: Remove unsued extern declaration init_mem_pgprot()
  arm64/mm: Set only the PTE_DIRTY bit while preserving the HW dirty state
  arm64/mm: Add pte_rdonly() helper
  arm64/mm: Directly use ID_AA64MMFR2_EL1_VARange_MASK
  arm64/mm: Replace an open coding with ID_AA64MMFR1_EL1_HAFDBS_MASK
-
Will Deacon authored
* for-next/misc:
  arm64/sysreg: refactor deprecated strncpy
  arm64: sysreg: Generate C compiler warnings on {read,write}_sysreg_s arguments
  arm64: sdei: abort running SDEI handlers during crash
  arm64: Explicitly include correct DT includes
  arm64/Kconfig: Sort the RCpc feature under the ARMv8.3 features menu
  arm64: vdso: remove two .altinstructions related symbols
  arm64/ptrace: Clean up error handling path in sve_set_common()
-
Will Deacon authored
* for-next/errata: arm64: errata: Group all Cortex-A510 errata together
-
Will Deacon authored
* for-next/entry: arm64: syscall: unmask DAIF earlier for SVCs
-
Will Deacon authored
* for-next/docs: Documentation: arm64: Correct SME ZA macros name
-
Will Deacon authored
* for-next/cpufeature:
  arm64/fpsimd: Only provide the length to cpufeature for xCR registers
  selftests/arm64: add HWCAP2_HBC test
  arm64: add HWCAP for FEAT_HBC (hinted conditional branches)
  arm64/cpufeature: Use ARM64_CPUID_FIELD() to match EVT
-
Jijie Shao authored
Since Guangbin and Shaokun have left HiSilicon and will no longer maintain the drivers, update the maintainer information and thank them for their work.

Signed-off-by: Jijie Shao <shaojijie@huawei.com>
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Acked-by: Yicong Yang <yangyicong@hisilicon.com>
Link: https://lore.kernel.org/r/20230824024135.1291459-1-shaojijie@huawei.com
[will: left the HNS3 title as-is to avoid the churn of resorting the entries]
Signed-off-by: Will Deacon <will@kernel.org>
-
- 18 Aug, 2023 3 commits
-
-
Anshuman Khandual authored
ACPI TRBE does not have a HID for identification that could otherwise be used to create and add a platform device to the platform bus. Without a platform device, TRBE cannot be probed and bound to a platform driver. Create a dummy platform device for TRBE after ascertaining that ACPI provides the required interrupts uniformly across all CPUs on the system. The device is created inside drivers/perf/arm_pmu_acpi.c to accommodate TRBE being built as a module.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20230817055405.249630-3-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
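A minimal sketch of how such a dummy device could be registered from arm_pmu_acpi.c; the device name and the elided interrupt-homogeneity check are assumptions for illustration, not the exact upstream code:

  #include <linux/acpi.h>
  #include <linux/err.h>
  #include <linux/platform_device.h>

  /* Hypothetical sketch: register a dummy TRBE platform device once the
   * per-CPU TRBE interrupt has been confirmed to be uniform (check elided).
   */
  static int __init arm_trbe_acpi_register(void)
  {
          struct platform_device *pdev;

          if (acpi_disabled)
                  return 0;

          /* "arm,trbe-acpi" is an illustrative name, not the upstream one */
          pdev = platform_device_register_simple("arm,trbe-acpi", -1, NULL, 0);
          return PTR_ERR_OR_ZERO(pdev);
  }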
-
Anshuman Khandual authored
Sanity-checking all the GICC tables for the same interrupt number, to ensure a homogeneous ACPI-based machine, could be useful for other platform devices as well. Hence, refactor arm_spe_acpi_register_device() into a common helper, arm_acpi_register_pmu_device().

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Co-developed-by: Will Deacon <will@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20230817055405.249630-2-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Tomislav Novak authored
Arm platforms use is_default_overflow_handler() to determine if the hw_breakpoint code should single-step over the breakpoint trigger or let the custom handler deal with it. Since bpf_overflow_handler() currently isn't recognized as a default handler, attaching a BPF program to a PERF_TYPE_BREAKPOINT event causes it to keep firing (the instruction triggering the data abort exception is never skipped). For example:

  # bpftrace -e 'watchpoint:0x10000:4:w { print("hit") }' -c ./test
  Attaching 1 probe...
  hit
  hit
  [...]
  ^C

(./test performs a single 4-byte store to 0x10000)

This patch replaces the check with uses_default_overflow_handler(), which accounts for the bpf_overflow_handler() case by also testing if one of the perf_event_output functions gets invoked indirectly, via orig_default_handler.

Signed-off-by: Tomislav Novak <tnovak@meta.com>
Tested-by: Samuel Gosselin <sgosselin@google.com> # arm64
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/linux-arm-kernel/20220923203644.2731604-1-tnovak@fb.com/
Link: https://lore.kernel.org/r/20230605191923.1219974-1-tnovak@meta.com
Signed-off-by: Will Deacon <will@kernel.org>
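The usage in the arm64 watchpoint handler ends up looking roughly like this (simplified excerpt; surrounding context omitted):

  perf_bp_event(wp, regs);

  /* Do we need to step over the watchpointed access ourselves?
   * With uses_default_overflow_handler(), a BPF-attached breakpoint event
   * is also treated as "default", so the faulting instruction gets
   * single-stepped instead of re-triggering forever.
   */
  if (uses_default_overflow_handler(wp))
          step = 1;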
-
- 16 Aug, 2023 6 commits
-
-
Justin Stitt authored
`strncpy` is deprecated for use on NUL-terminated destination strings [1], which seems to be the case here given the forceful setting of `buf`'s tail to 0. A suitable replacement is `strscpy` [2] because it guarantees NUL-termination of its destination buffer argument, which is _not_ the case for `strncpy`! In this case, we can simplify the logic and also check for any silent truncation by using `strscpy`'s return value. This should have no functional change and yet uses a more robust and less ambiguous interface whilst reducing code complexity.

Link: www.kernel.org/doc/html/latest/process/deprecated.html#strncpy-on-nul-terminated-strings [1]
Link: https://manpages.debian.org/testing/linux-manual-4.8/strscpy.9.en.html [2]
Link: https://github.com/KSPP/linux/issues/90
Suggested-by: Kees Cook <keescook@chromium.org>
Cc: linux-hardening@vger.kernel.org
Signed-off-by: Justin Stitt <justinstitt@google.com>
Link: https://lore.kernel.org/r/20230811-strncpy-arch-arm64-v2-1-ba84eabffadb@google.com
Signed-off-by: Will Deacon <will@kernel.org>
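As an illustration of the pattern (the buffer and source names are hypothetical, not the actual sysreg code):

  char buf[32];

  /* before: strncpy() may leave buf unterminated, so the tail is forced to 0 */
  strncpy(buf, name, sizeof(buf) - 1);
  buf[sizeof(buf) - 1] = '\0';

  /* after: strscpy() always NUL-terminates and returns -E2BIG on truncation */
  if (strscpy(buf, name, sizeof(buf)) < 0)
          pr_warn("field name '%s' truncated\n", name);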
-
James Clark authored
Evaluate the register before the asm section so that the C compiler generates warnings when there is an issue with the register argument. This will prevent possible future issues such as the one seen here [1] where a missing bracket caused the shift and addition operators to be evaluated in the wrong order, but no warning was emitted. The GNU assembler has no warning for when expressions evaluate differently to C due to different operator precedence, but the C compiler has some warnings that may suggest something is wrong. For example, in this case the following warning would have been emitted:

  error: operator '>>' has lower precedence than '+'; '+' will be evaluated first [-Werror,-Wshift-op-parentheses]

There are currently no existing warnings that need to be fixed.

[1]: https://lore.kernel.org/linux-perf-users/20230728162011.GA22050@willie-the-truck/

Signed-off-by: James Clark <james.clark@arm.com>
Link: https://lore.kernel.org/r/20230815140639.614769-1-james.clark@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
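The mechanism is roughly along these lines (a sketch, not the exact macro): the register encoding is first assigned to an unused C variable so that the compiler, rather than the assembler, evaluates the expression.

  #define read_sysreg_s(r) ({                                             \
          u64 __val;                                                      \
          /* evaluated in C so precedence bugs in 'r' generate warnings */\
          u32 __maybe_unused __check_r = (u32)(r);                        \
          asm volatile(__mrs_s("%0", r) : "=r" (__val));                  \
          __val;                                                          \
  })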
-
Xu Yang authored
In the current driver, counter0 will be enabled after ddr_perf_pmu_enable() is called even when none of the 4 counters are used. This causes counter0 to keep counting until ddr_perf_pmu_disable() is called. If the PMU is never disabled, the PMU interrupt will be asserted from time to time because counter0 overflows and the IRQ handler clears it. This is not the expected behavior. Don't enable counter0 if none of the 4 counters are used.

Fixes: 9a66d36c ("drivers/perf: imx_ddr: Add DDR performance counter support to perf")
Signed-off-by: Xu Yang <xu.yang_2@nxp.com>
Reviewed-by: Frank Li <Frank.Li@nxp.com>
Link: https://lore.kernel.org/r/20230811015438.1999307-2-xu.yang_2@nxp.com
Signed-off-by: Will Deacon <will@kernel.org>
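Conceptually the fix boils down to something like this; structure, field and helper names are assumptions for illustration, not the actual driver code:

  /* Hypothetical sketch of the idea */
  static void ddr_perf_pmu_enable(struct pmu *pmu)
  {
          struct ddr_pmu *ddr_pmu = to_ddr_pmu(pmu);

          /* Only start the cycle counter (counter0) when at least one of the
           * four counters is actually in use; otherwise it would free-run and
           * keep raising overflow interrupts for no reason.
           */
          if (ddr_pmu->active_events > 0)
                  ddr_perf_counter_enable(ddr_pmu, EVENT_CYCLES_ID,
                                          EVENT_CYCLES_COUNTER, true);
  }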
-
Xu Yang authored
For i.MX8MP, we cannot ensure that the cycle counter overflows at least 4 times as often as the other events. Because the byte counters count for any configured event, they overflow more often, and when a byte counter overflows the related counters stop, since they share COUNTER_CNTL. We can speed up the cycle counter's overflow frequency by setting the counter parameter (CP) field of the cycle counter. This way the byte counters no longer stop counting while waiting for an interrupt, and they can be fetched or updated on each cycle counter overflow interrupt. Because we initialize the CP field to shorten counter0's overflow time, the cycle counter starts counting from a fixed/base value each time; we need to subtract that base from the result as well, so that the cycle counter yields a precise result.

Signed-off-by: Xu Yang <xu.yang_2@nxp.com>
Reviewed-by: Frank Li <Frank.Li@nxp.com>
Link: https://lore.kernel.org/r/20230811015438.1999307-1-xu.yang_2@nxp.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Yicong Yang authored
The PCIe PMUs are located on different NUMA nodes, but currently we don't take that into account and are likely to stack all the sessions on the same CPU:

  [root@localhost tmp]# cat /sys/devices/hisi_pcie*/cpumask
  0
  0
  0
  0
  0
  0

This can be optimized a bit by using a CPU local to the PMU.

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Link: https://lore.kernel.org/r/20230815131010.2147-1-yangyicong@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
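The usual way to pick a NUMA-local CPU for such a device looks like this (a sketch with assumed field names, not necessarily the driver's exact code):

  /* Prefer a CPU on the PMU's own NUMA node; cpumask_local_spread() falls
   * back to any online CPU if the node has none.
   */
  int cpu = cpumask_local_spread(0, dev_to_node(&pdev->dev));

  pcie_pmu->on_cpu = cpu;
  WARN_ON(irq_set_affinity(pcie_pmu->irq, cpumask_of(cpu)));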
-
Waiman Long authored
The following circular locking dependency was reported when running a CPU online/offline test on an arm64 system:

  [ 84.195923] Chain exists of:
                 dmc620_pmu_irqs_lock --> cpu_hotplug_lock --> cpuhp_state-down
  [ 84.207305] Possible unsafe locking scenario:
  [ 84.213212]       CPU0                    CPU1
  [ 84.217729]       ----                    ----
  [ 84.222247]  lock(cpuhp_state-down);
  [ 84.225899]                              lock(cpu_hotplug_lock);
  [ 84.232068]                              lock(cpuhp_state-down);
  [ 84.238237]  lock(dmc620_pmu_irqs_lock);
  [ 84.242236] *** DEADLOCK ***

The following locking order happens when dmc620_pmu_get_irq() calls cpuhp_state_add_instance_nocalls():

  lock(dmc620_pmu_irqs_lock) --> lock(cpu_hotplug_lock)

On the other hand, the calling sequence cpuhp_thread_fun() => cpuhp_invoke_callback() => dmc620_pmu_cpu_teardown() leads to the locking sequence:

  lock(cpuhp_state-down) => lock(dmc620_pmu_irqs_lock)

Here dmc620_pmu_irqs_lock protects both the dmc620_pmu_irqs list and the pmus_node lists in the various dmc620_pmu instances. dmc620_pmu_get_irq() requires protected access to dmc620_pmu_irqs, whereas dmc620_pmu_cpu_teardown() only needs protection of the pmus_node lists. Break this circular locking dependency by using two separate locks to protect the dmc620_pmu_irqs list and the pmus_node lists respectively.

Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Waiman Long <longman@redhat.com>
Link: https://lore.kernel.org/r/20230812235549.494174-1-longman@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
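The shape of the fix is simply two independent locks instead of one (a sketch following the description above; the exact scope of each lock is simplified):

  /* Protects the global dmc620_pmu_irqs list; taken around
   * cpuhp_state_add_instance_nocalls(), i.e. under cpu_hotplug_lock.
   */
  static DEFINE_MUTEX(dmc620_pmu_irqs_lock);

  /* Protects the per-IRQ pmus_node lists, which is all the hotplug
   * teardown callback needs, so it no longer nests inside the lock
   * above and the circular dependency is broken.
   */
  static DEFINE_MUTEX(dmc620_pmu_node_lock);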
-
- 15 Aug, 2023 2 commits
-
-
Yicong Yang authored
On my ACPI based arm64 server, if the SMMUv3 PMU is configured as a module it won't be loaded automatically after booting, even though the device has already been scanned and added. This is because the module lacks a platform alias; the uevent mechanism and userspace tools like udevd rely on it to find the target driver module for the device. Add the missing platform alias so that the module is loaded automatically when the device exists.

Before this patch:

  [root@localhost tmp]# modinfo arm_smmuv3_pmu | grep alias
  alias:          of:N*T*Carm,smmu-v3-pmcgC*
  alias:          of:N*T*Carm,smmu-v3-pmcg

After this patch:

  [root@localhost tmp]# modinfo arm_smmuv3_pmu | grep alias
  alias:          platform:arm-smmu-v3-pmcg
  alias:          of:N*T*Carm,smmu-v3-pmcgC*
  alias:          of:N*T*Carm,smmu-v3-pmcg

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Link: https://lore.kernel.org/r/20230814131642.65263-1-yangyicong@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
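The addition itself is a one-liner of this form (the driver may spell the name via an existing #define rather than a string literal):

  MODULE_ALIAS("platform:arm-smmu-v3-pmcg");

With the alias in place, the MODALIAS=platform:arm-smmu-v3-pmcg uevent emitted for the platform device matches the module, and udev/modprobe can load it automatically.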
-
Yicong Yang authored
Some HiSilicon SMMU PMCG implementations suffer from erratum 162001900, whereby the PMU disable control sometimes fails to disable the counters. This leads to erroneous or inaccurate data, since before we enable the counters they may still be counting the event used in the last perf session. Fix this by hardening the global disable process: before disabling the PMU, write an invalid event type (0xffff) to forcibly stop the counters, and correspondingly restore each event in pmu::pmu_enable().

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Link: https://lore.kernel.org/r/20230814124012.58013-1-yangyicong@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 11 Aug, 2023 1 commit
-
-
Mark Rutland authored
For a number of historical reasons, when handling SVCs we don't unmask DAIF in el0_svc() or el0_svc_compat(), and instead do so later in el0_svc_common(). This is unfortunate and makes it harder to make changes to the DAIF management in entry-common.c as we'd like to do as cleanup and preparation for FEAT_NMI support. We can move the DAIF unmasking to entry-common.c as long as we also hoist the fp_user_discard() logic, as reasoned below.

We converted the syscall trace logic from assembly to C in commit:

  f37099b6 ("arm64: convert syscall trace logic to C")

... which was intended to have no functional change, and mirrored the existing assembly logic to avoid the risk of any functional regression.

With the logic in C, it's clear that there is currently no reason to unmask DAIF so late within el0_svc_common():

* The thread flags are read prior to unmasking DAIF, but are not consumed until after DAIF is unmasked, and we don't perform a read-modify-write sequence of the thread flags for which we might need to serialize against an IPI modifying the flags. Similarly, for any thread flags set by other threads, whether DAIF is masked or not has no impact. The read_thread_flags() helpers perform a single-copy-atomic read of the flags, and so this can safely be moved after unmasking DAIF.

* The pt_regs::orig_x0 and pt_regs::syscallno fields are neither consumed nor modified by the handler for any DAIF exception (e.g. these do not exist in the `perf_event_arm_regs` enum and are not sampled by perf in its IRQ handler). Thus, the manipulation of pt_regs::orig_x0 and pt_regs::syscallno can safely be moved after unmasking DAIF.

Given the above, we can safely hoist unmasking of DAIF out of el0_svc_common(), and into its immediate callers: do_el0_svc() and do_el0_svc_compat(). Further:

* In do_el0_svc(), we sample the syscall number from pt_regs::regs[8]. This is not modified by the handler for any DAIF exception, and thus can safely be moved after unmasking DAIF.

  As fp_user_discard() operates on the live FP/SVE/SME register state, this needs to occur before we clear DAIF.IF, as interrupts could result in preemption which would cause this state to become foreign. As fp_user_discard() is the first function called within do_el0_svc(), it has no dependency on other parts of do_el0_svc() and can be moved earlier so long as it is called prior to unmasking DAIF.IF.

* In do_el0_svc_compat(), we sample the syscall number from pt_regs::regs[7]. This is not modified by the handler for any DAIF exception, and thus can safely be moved after unmasking DAIF. Compat threads cannot use SVE or SME, so there's no need for el0_svc_compat() to call fp_user_discard().

Given the above, we can safely hoist the unmasking of DAIF out of do_el0_svc() and do_el0_svc_compat(), and into their immediate callers: el0_svc() and el0_svc_compat(), so long as we also hoist fp_user_discard() into el0_svc().

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20230808101148.1064172-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
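After the change, el0_svc() ends up with roughly this shape (heavily simplified sketch; erratum workarounds and other details omitted):

  static void noinstr el0_svc(struct pt_regs *regs)
  {
          enter_from_user_mode(regs);
          fp_user_discard();                /* must run before DAIF.IF is cleared */
          local_daif_restore(DAIF_PROCCTX); /* unmask DAIF here, not in el0_svc_common() */
          do_el0_svc(regs);
          exit_to_user_mode(regs);
  }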
-
- 10 Aug, 2023 1 commit
-
-
Mark Brown authored
For both SVE and SME we abuse the generic register field comparison support in the cpufeature code as part of our detection of unsupported variations in the vector lengths available to PEs, reporting the maximum vector lengths via ZCR_EL1.LEN and SMCR_EL1.LEN. Since these are configuration registers rather than identification registers, the assumptions the cpufeature code makes about how unknown bitfields behave are invalid, leading to warnings when SME features like FA64 are enabled and we hotplug a CPU:

  CPU features: SANITY CHECK: Unexpected variation in SYS_SMCR_EL1. Boot CPU: 0x0000000000000f, CPU3: 0x0000008000000f
  CPU features: Unsupported CPU feature variation detected.

SVE has no controls other than the vector length so it is not yet impacted, but the same issue will apply there if any are defined.

Since the only field we are interested in having the cpufeature code handle is the length field, and we use a custom read function to obtain the value, we can avoid these warnings by filtering out all other bits when we return the register value. If we're doing that, we don't need to bother reading the register at all and can simply use the RDVL/RDSVL value we were filling in instead.

Fixes: 2e0f2478 ("arm64/sve: Probe SVE capabilities and usable vector lengths")
Fixes: b42990d3 ("arm64/sme: Identify supported SME vector lengths at boot")
Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20230731-arm64-sme-fa64-hotplug-v2-1-7714c00dd902@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
- 04 Aug, 2023 5 commits
-
-
D Scott Phillips authored
Interrupts are blocked in SDEI context, per the SDEI spec: "The client interrupts cannot preempt the event handler." If we crashed in the SDEI handler-running context (as with ACPI's AGDI) then we need to clean up the SDEI state before proceeding to the crash kernel so that the crash kernel can have working interrupts. Track the active SDEI handler per-cpu so that we can COMPLETE_AND_RESUME the handler, discarding the interrupted context.

Fixes: f5df2696 ("arm64: kernel: Add arch-specific SDEI entry code and CPU masking")
Signed-off-by: D Scott Phillips <scott@os.amperecomputing.com>
Cc: stable@vger.kernel.org
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: Mihai Carabas <mihai.carabas@oracle.com>
Link: https://lore.kernel.org/r/20230627002939.2758-1-scott@os.amperecomputing.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Joey Gouly authored
Add a test for the newly added HWCAP2_HBC.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230804143746.3900803-3-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Joey Gouly authored
Add a HWCAP for FEAT_HBC, so that userspace can make a decision on using this feature.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20230804143746.3900803-2-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
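From userspace, the decision looks like this (a sketch; the fallback bit value below is an assumption and should come from <asm/hwcap.h> in practice):

  #include <stdio.h>
  #include <sys/auxv.h>

  #ifndef HWCAP2_HBC
  #define HWCAP2_HBC (1UL << 44)  /* assumed value; use the uapi header when available */
  #endif

  int main(void)
  {
          if (getauxval(AT_HWCAP2) & HWCAP2_HBC)
                  printf("FEAT_HBC: hinted conditional branches supported\n");
          else
                  printf("FEAT_HBC not supported\n");
          return 0;
  }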
-
Anshuman Khandual authored
The comments in armv8pmu_[enable|disable]_event() are blindingly obvious and do not contribute to making things any better. Let's drop them. No functional change intended.

Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20230802090853.1190391-1-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Zhang Jianhua authored
When building with W=1, the following warning occurs:

  arch/arm64/include/asm/kernel-pgtable.h:129:41: error: "PUD_SHIFT" is not defined, evaluates to 0 [-Werror=undef]
    129 | #define ARM64_MEMSTART_SHIFT            PUD_SHIFT
        |                                         ^~~~~~~~~
  arch/arm64/include/asm/kernel-pgtable.h:142:5: note: in expansion of macro 'ARM64_MEMSTART_SHIFT'
    142 | #if ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
        |     ^~~~~~~~~~~~~~~~~~~~

The generic PUD_SHIFT is defined in include/asm-generic/pgtable-nopud.h, but the #ifndef __ASSEMBLY__ guard in that header makes it unavailable to assembly files, so whenever a .S file includes <asm/kernel-pgtable.h> the build warning occurs. Move the macros ARM64_MEMSTART_SHIFT and ARM64_MEMSTART_ALIGN to arch/arm64/mm/init.c, the only place they are used, to avoid this issue.

Signed-off-by: Zhang Jianhua <chris.zjh@huawei.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20230804075615.3334756-1-chris.zjh@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 31 Jul, 2023 1 commit
-
-
Rob Herring authored
Remove unused 'of*.h' header inclusions from the arm64 arch code to allow for the eventual untangling of 'of_device.h' and 'of_platform.h', which currently include each other.

Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20230714174021.4039807-1-robh@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
- 28 Jul, 2023 3 commits
-
-
Robin Murphy authored
CMN-700 r3 has a special configuration option for a so-called "Super Home Node", which is a superset of the standard HN-F that also manages remote-chip coherency for multi-chip setups. As such it has a similar but expanded set of PMU events compared to HN-F, with some additional filtering options to boot.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/49153b72253f6af0e625cb55b9e1b825b110c49c.1688746690.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Robin Murphy authored
Refactor the macros for defining HN-F events with additional selectors, so they can be shared with another upcoming similar-but-distinct HN type. No functional change intended.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/0f05327941e06c665dbfd47e03fad29276b9e63c.1688746690.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Robin Murphy authored
As the name suggests, the "partial DAT flit" event is only counted for the DAT channel, and furthermore is only applicable to device ports, not mesh links (strictly it's only device ports with CHI-A requesters connected, but detecting that degree of detail is more bother than it's worth). Stop generating spurious event aliases for other combinations which aren't meaningful.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/b01a58e3ff05c322547fbfd015f6dbfedf555ed3.1688746690.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
-
- 27 Jul, 2023 10 commits
-
-
Zeng Heng authored
Moving the LDAPR detection config under the ARMv8.3 menu is more reasonable than leaving it under ARMv8.1, since this feature was released together with the ARMv8.3 feature list.

Signed-off-by: Zeng Heng <zengheng4@huawei.com>
Link: https://lore.kernel.org/r/20230727020324.2149960-1-zengheng4@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-
David Spickett authored
It should be ZA_PT_ZA*. ZA_PT_ZA_OFFSET is one example. It is not ZA_PT_ZA_* because there is one macro ZA_PT_ZAV_OFFSET that doesn't fit that pattern.

Fixes: 96d32e63 ("arm64/sme: Provide ABI documentation for SME")
Signed-off-by: David Spickett <david.spickett@linaro.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
-
Rob Herring authored
The DT of_device.h and of_platform.h headers date back to the separate of_platform_bus_type before it was merged into the regular platform bus. As part of that merge prepping Arm DT support 13 years ago, they "temporarily" include each other. They also include platform_device.h and of.h. As a result, there's a pretty much random mix of those include files used throughout the tree. In order to detangle these headers and replace the implicit includes with struct declarations, users need to explicitly include the correct headers.

Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20230714174832.4061752-1-robh@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
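In practice this means each file spells out the headers it actually uses, for example (illustrative):

  #include <linux/of.h>                 /* of_property_read_*(), struct device_node */
  #include <linux/platform_device.h>    /* struct platform_device */
  /* ...instead of relying on of_device.h/of_platform.h dragging these in. */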
-
Rob Herring authored
Add support for the Arm Cortex-A520, Cortex-A715, Cortex-A720, Cortex-X3, and Cortex-X4 CPU PMUs. They are straightforward additions with just new compatible strings.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20230706205505.308523-2-robh@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Rob Herring authored
Add compatible strings for the Arm Cortex-A520, Cortex-A715, Cortex-A720, Cortex-X3, and Cortex-X4 CPU PMUs.

Acked-by: Conor Dooley <conor.dooley@microchip.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://lore.kernel.org/r/20230706205505.308523-1-robh@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
Vincent Whitchurch authored
This driver supports working without ACPI since commit 3f7be435 ("perf/smmuv3: Add devicetree support"), so remove the build dependency.

Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20230706-smmuv3-pmu-noacpi-v1-1-7083ef189158@axis.com
Signed-off-by: Will Deacon <will@kernel.org>
-
Yangtao Li authored
Use devm_platform_ioremap_resource() to simplify code.

Signed-off-by: Yangtao Li <frank.li@vivo.com>
Link: https://lore.kernel.org/r/20230704093556.17926-1-frank.li@vivo.com
Signed-off-by: Will Deacon <will@kernel.org>
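The conversion follows the usual pattern (a sketch of the before/after shape, not the driver's exact code):

  /* before: fetch the MEM resource, then map it */
  res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
  base = devm_ioremap_resource(&pdev->dev, res);

  /* after: one call does both */
  base = devm_platform_ioremap_resource(pdev, 0);
  if (IS_ERR(base))
          return PTR_ERR(base);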
-
Jing Zhang authored
To allow userspace to identify the specific implementation of the device, add an "identifier" sysfs file. The perf tool can match the Yitian 710 DDR metric through the identifier.

Signed-off-by: Jing Zhang <renyu.zj@linux.alibaba.com>
Acked-by: Ian Rogers <irogers@google.com>
Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
Link: https://lore.kernel.org/r/1687245156-61215-2-git-send-email-renyu.zj@linux.alibaba.com
Signed-off-by: Will Deacon <will@kernel.org>
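A typical implementation of such a read-only attribute looks like this (a sketch with assumed function names and identifier string, not necessarily the driver's exact code):

  static ssize_t ali_drw_pmu_identifier_show(struct device *dev,
                                             struct device_attribute *attr,
                                             char *buf)
  {
          /* "ali_drw_pmu" is a placeholder identifier string */
          return sysfs_emit(buf, "%s\n", "ali_drw_pmu");
  }

  static DEVICE_ATTR(identifier, 0444, ali_drw_pmu_identifier_show, NULL);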
-
Jisheng Zhang authored
The two symbols __alt_instructions and __alt_instructions_end are not used, since the vDSO patching code looks for the '.altinstructions' ELF section directly. Remove the unused linker symbols.

Fixes: 4e3bca8f ("arm64: alternative: patch alternatives in the vDSO")
Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
Link: https://lore.kernel.org/r/20230726173619.3732-1-jszhang@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
-
YueHaibing authored
Commit a501e324 ("arm64: Clean up the default pgprot setting") left this declaration behind; remove it.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20230720143555.26044-1-yuehaibing@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
-