- 30 Oct, 2017 1 commit
-
-
Catalin Marinas authored
The generic pte_access_permitted() implementation only checks pte_present() (together with the write permission where applicable). However, on arm64 pte_present() also returns true for kernel ptes and PROT_NONE mappings, even though such mappings are not user accessible. Additionally, arm64 now supports execute-only user mappings (PROT_EXEC only), which are implemented by clearing the PTE_USER bit. With this patch the arm64 implementation of pte_access_permitted() checks for the PTE_VALID and PTE_USER bits, together with writable access if applicable. Cc: <stable@vger.kernel.org> Reported-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
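For illustration, a minimal sketch of the check described above (close in spirit to, but not necessarily identical with, the arm64 code):

    /*
     * A pte is only user-accessible if both PTE_VALID and PTE_USER are set;
     * write access additionally requires the pte to be writable.
     */
    #define pte_valid_user(pte) \
            ((pte_val(pte) & (PTE_VALID | PTE_USER)) == (PTE_VALID | PTE_USER))

    #define pte_access_permitted(pte, write) \
            (pte_valid_user(pte) && (!(write) || pte_write(pte)))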
-
- 27 Oct, 2017 4 commits
-
-
Will Deacon authored
PSTATE.Q only exists for AArch32, which can be referred to using COMPAT_PSR_Q_BIT. Remove PSR_Q_BIT, since the native bit doesn't exist in the architecture. Tested-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
We can decode the PSTATE easily enough, so pretty-print it in register dumps. Tested-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
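A hedged sketch of the kind of decoding this enables (the helper name and exact output format are illustrative, not the patch's code):

    /* Print the NZCV condition flags from PSTATE, upper-case when set. */
    static void print_pstate_flags(const struct pt_regs *regs)
    {
            u64 pstate = regs->pstate;

            printk("pstate: %08llx (%c%c%c%c)\n", pstate,
                   pstate & PSR_N_BIT ? 'N' : 'n',
                   pstate & PSR_Z_BIT ? 'Z' : 'z',
                   pstate & PSR_C_BIT ? 'C' : 'c',
                   pstate & PSR_V_BIT ? 'V' : 'v');
    }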
-
Will Deacon authored
Printing raw pointer values in backtraces has potential security implications and are of questionable value anyway. This patch follows x86's lead and removes the "Exception stack:" dump from kernel backtraces, as well as converting PC/LR values to symbols such as "sysrq_handle_crash+0x20/0x30". Tested-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
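The symbolic form comes from the kernel's %pS pointer format specifier; a small illustrative example:

    static void print_return_addresses(const struct pt_regs *regs)
    {
            /* %pS prints "symbol+offset/size", e.g. sysrq_handle_crash+0x20/0x30 */
            printk("pc : %pS\n", (void *)regs->pc);
            printk("lr : %pS\n", (void *)regs->regs[30]);   /* x30 is the link register */
    }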
-
Mark Rutland authored
When we take a fault we can't handle, we try to dump some relevant information, but we're not consistent about doing so. In do_mem_abort(), we log the full ESR, but don't dump a page table walk. In __do_kernel_fault, we dump an attempted decoding of the ESR (but not the ESR itself) along with a page table walk. Let's try to make things more consistent by dumping the full ESR in mem_abort_decode(), and having do_mem_abort dump a page table walk. The existing dump of the ESR in do_mem_abort() is rendered redundant, and removed. Tested-by: Laura Abbott <labbott@redhat.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Julien Thierry <julien.thierry@arm.com> Cc: Kristina Martsenko <kristina.martsenko@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 25 Oct, 2017 3 commits
-
-
Dave Martin authored
Currently ASM_BUG() and its constituent macros define local assembler labels 0, 1 and 2 internally, which carries a high risk of clash with callers' labels and consequent mis-assembly. This patch gives the labels a big random offset to minimise the chance of such errors. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Julien Thierry authored
Software Step exception is missing after stepping a trapped instruction. Ensure SPSR.SS gets set to 0 after emulating/skipping a trapped instruction before doing ERET. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> [will: replaced AARCH32_INSN_SIZE with 4] Signed-off-by: Will Deacon <will.deacon@arm.com>
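A hedged sketch of the idea (the helper name is illustrative, not the exact function added by the patch):

    /*
     * After emulating/skipping a trapped instruction, clear SPSR.SS so that
     * the pending software step exception is taken on the instruction
     * following ERET instead of being lost.
     */
    static void skip_emulated_instruction(struct pt_regs *regs, unsigned long size)
    {
            regs->pc += size;

            if (user_mode(regs) && test_thread_flag(TIF_SINGLESTEP))
                    regs->pstate &= ~DBG_SPSR_SS;
    }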
-
Julien Thierry authored
Literal values are being used to set single stepping in mdscr from assembly code. There are already existing defines representing those values; use those instead of the literal values. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 24 Oct, 2017 4 commits
-
-
Mark Salyzyn authored
__memcpy_{to,from}io fall back to byte-at-a-time copying if both the source and destination pointers are not 8-byte aligned. Since one of the pointers always points at normal memory, this is unnecessary and detrimental to performance, so only do byte copying until we hit an 8-byte boundary for the device pointer. This change was motivated by performance issues in the pstore driver. On a test platform, measuring pstore probe time with a console buffer size of 1/4MB and a pmsg buffer of 1/2MB gave times in the 90-107ms region. This change reduced that to 10-25ms, an improvement in boot time. Cc: Kees Cook <keescook@chromium.org> Cc: Anton Vorontsov <anton@enomsg.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Mark Salyzyn <salyzyn@android.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
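A hedged sketch of the read direction (simplified, but close in spirit to the resulting routine; the __memcpy_toio direction is analogous):

    /*
     * Only byte-copy until the *device* pointer is 8-byte aligned, then switch
     * to 64-bit accesses; the normal-memory side tolerates unaligned accesses,
     * so its alignment is irrelevant.
     */
    void __memcpy_fromio(void *to, const volatile void __iomem *from, size_t count)
    {
            while (count && !IS_ALIGNED((unsigned long)from, 8)) {
                    *(u8 *)to = __raw_readb(from);
                    from++;
                    to++;
                    count--;
            }

            while (count >= 8) {
                    *(u64 *)to = __raw_readq(from);
                    from += 8;
                    to += 8;
                    count -= 8;
            }

            while (count) {
                    *(u8 *)to = __raw_readb(from);
                    from++;
                    to++;
                    count--;
            }
    }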
-
Will Deacon authored
Merge in ARM PMU and perf updates for 4.15: - Support for the Statistical Profiling Extension - Support for Hisilicon's SoC PMU Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Julien Thierry authored
arm_pmu interrupts are marked as PERCPU even when they are not local physical interrupts to a single CPU. When using non-local interrupts, interrupts marked as PERCPU will not get freed nor disabled properly by the PMU driver. Check whether interrupts are local to a single CPU with PERCPU_DEVID, since this is what the PMU driver really needs to know. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
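A hedged illustration of the resulting free path (simplified; not the exact arm_pmu hunk):

    /*
     * Only an interrupt requested with request_percpu_irq() (i.e. one that is
     * PERCPU_DEVID) may be freed with free_percpu_irq(); everything else is a
     * normal IRQ that just happens to be affine to one CPU.
     */
    static void pmu_free_irq(int irq, void __percpu *percpu_dev_id, void *dev_id)
    {
            if (irq_is_percpu_devid(irq))
                    free_percpu_irq(irq, percpu_dev_id);
            else
                    free_irq(irq, dev_id);
    }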
-
Julien Thierry authored
irq_is_percpu indicates whether an irq should only target a single cpu. The PERCPU_DEVID flag indicates that an irq can be configured differently on each cpu it can target. Provide a function to check whether an irq is PERCPU_DEVID. Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Julien Thierry <julien.thierry@arm.com>
-
- 19 Oct, 2017 8 commits
-
-
Suzuki K Poulose authored
Now that the ARM ARM clearly specifies the rules for inferring the values of the ID register fields, fix the types of the feature bits we have in the kernel. As per ARM ARM DDI0487B.b, section D10.1.4 "Principles of the ID scheme for fields in ID registers" lists the registers to which the scheme applies along with the exceptions. This patch changes the relevant feature bits from FTR_EXACT to FTR_LOWER_SAFE to select the safer value. This will enable an older kernel running on a new CPU to detect the safer option rather than completely disabling the feature. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Dave Martin <dave.martin@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
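For illustration, an entry in the arm64 feature tables changes roughly along these lines (the register field shown is representative, not an exact hunk from the patch):

    /* Before: any mismatch between CPUs is treated as a conflict. */
    ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, ID_AA64ISAR0_AES_SHIFT, 4, 0),

    /* After: the lowest value seen across CPUs is the safe one to advertise. */
    ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_AES_SHIFT, 4, 0),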
-
Shaokun Zhang authored
Add support for the HiSilicon SoC uncore PMU driver. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Shaokun Zhang authored
This patch adds support for the DDRC PMU driver in the HiSilicon SoC chip. Each DDRC has its own control, counter and interrupt registers and is a separate PMU. Each DDRC PMU has 8 fixed-purpose counters which are mapped to 8 events by hardware, so the driver assumes that the counter index is equal to the event code (0 - 7). An interrupt is supported to handle counter (32-bit) overflow. Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Signed-off-by: Anurup M <anurup.m@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Shaokun Zhang authored
L3 cache coherence is maintained by the Hydra Home Agent (HHA) in the HiSilicon SoC. This patch adds support for the HHA PMU driver. Each HHA has its own control, counter and interrupt registers and is a separate PMU. Each HHA PMU has 16 programmable counters and each counter is free-running. An interrupt is supported to handle counter (48-bit) overflow. Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Signed-off-by: Anurup M <anurup.m@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Shaokun Zhang authored
This patch adds support for the L3C PMU driver in the HiSilicon SoC chip. Each L3C has its own control, counter and interrupt registers and is a separate PMU. Each L3C PMU has 8 programmable counters and each counter is free-running. An interrupt is supported to handle counter (48-bit) overflow. Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Signed-off-by: Anurup M <anurup.m@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Shaokun Zhang authored
This patch adds the HiSilicon SoC uncore PMU driver framework and interfaces. Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Signed-off-by: Anurup M <anurup.m@huawei.com> [will: Fix leader accounting in uncore group validation] Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Shaokun Zhang authored
This patch adds documentation for the uncore PMUs on HiSilicon SoC. Acked-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com> Signed-off-by: Anurup M <anurup.m@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Julien Thierry authored
Based on: ARM Architecture Reference Manual, ARMv8 (DDI 0487B.b). ARMv8.1 introduces the optional feature ARMv8.1-TTHM which can trigger a new type of memory abort. This exception is triggered when hardware update of page table flags is not atomic with regard to other memory accesses. Replace the corresponding unknown entry with a more accurate one. Cf: Section D10.2.28 ESR_ELx, Exception Syndrome Register (p D10-2381), section D4.4.11 Restriction on memory types for hardware updates on page tables (p D4-2116 - D4-2117). ARMv8.2 does not add new exception types; however, it is worth mentioning that when the RAS feature (obligatory in ARMv8.2, optional for ARMv8.{0,1}) is implemented, exceptions related to "Synchronous parity or ECC error on memory access, not on translation table walk" become reserved and should not occur. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 18 Oct, 2017 7 commits
-
-
Will Deacon authored
The ARMv8.2 architecture introduces the optional Statistical Profiling Extension (SPE). SPE can be used to profile a population of operations in the CPU pipeline after instruction decode. These are either architected instructions (i.e. a dynamic instruction trace) or CPU-specific uops and the choice is fixed statically in the hardware and advertised to userspace via caps/. Sampling is controlled using a sampling interval, similar to a regular PMU counter, but also with an optional random perturbation to avoid falling into patterns where you continuously profile the same instruction in a hot loop. After each operation is decoded, the interval counter is decremented. When it hits zero, an operation is chosen for profiling and tracked within the pipeline until it retires. Along the way, information such as TLB lookups, cache misses, time spent to issue etc is captured in the form of a sample. The sample is then filtered according to certain criteria (e.g. load latency) that can be specified in the event config (described under format/) and, if the sample satisfies the filter, it is written out to memory as a record, otherwise it is discarded. Only one operation can be sampled at a time.

The in-memory buffer is linear and virtually addressed, raising an interrupt when it fills up. The PMU driver handles these interrupts to give the appearance of a ring buffer, as expected by the AUX code.

The in-memory trace-like format is self-describing (though not parseable in reverse) and written as a series of records, with each record corresponding to a sample and consisting of a sequence of packets. These packets are defined by the architecture, although some have CPU-specific fields for recording information specific to the microarchitecture. As a simple example, a record generated for a branch instruction may consist of the following packets:

    0 (Address) : Virtual PC of the branch instruction
    1 (Type)    : Conditional direct branch
    2 (Counter) : Number of cycles taken from Dispatch to Issue
    3 (Address) : Virtual branch target + condition flags
    4 (Counter) : Number of cycles taken from Dispatch to Complete
    5 (Events)  : Mispredicted as not-taken
    6 (END)     : End of record

It is also possible to toggle properties such as timestamp packets in each record. This patch adds support for SPE in the form of a new perf driver. Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
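A hedged userspace sketch of opening an SPE event (the dynamic PMU type and the attr.config filter layout must be discovered from sysfs at runtime; the values below are placeholders):

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* spe_pmu_type: read from /sys/bus/event_source/devices/arm_spe_0/type */
    static int open_spe_event(int cpu, unsigned int spe_pmu_type)
    {
            struct perf_event_attr attr = {
                    .size          = sizeof(attr),
                    .type          = spe_pmu_type,
                    .sample_period = 1024,  /* operations between samples */
                    .disabled      = 1,
            };

            /* pid == -1, cpu >= 0: profile the whole CPU (requires privileges) */
            return syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
    }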
-
Will Deacon authored
This patch documents the devicetree binding in use for ARM SPE. Cc: Rob Herring <robh@kernel.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
When booting at EL2, ensure that we permit the EL1 host to sample physical addresses and physical counter values using SPE. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
SPE is part of the v8.2 architecture, so move its system register and field definitions into sysreg.h and the new PSB barrier into barrier.h. Finally, move KVM over to using the generic definitions so that it doesn't have to open-code its own versions. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
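For reference, the new barrier is a hint-space instruction, so CPUs without SPE execute it as a NOP; its definition is presumably along these lines:

    /* PSB CSYNC: ensure prior profiling operations have completed (hint #17). */
    #define psb_csync()     asm volatile("hint #17" : : : "memory")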
-
Will Deacon authored
The ARM SPE architecture permits an implementation to ignore a sample if the sample is due to be taken whilst another sample is already being produced. In this case, it is desirable to report the collision to userspace, as they may want to lower the sample period. This patch adds a PERF_AUX_FLAG_COLLISION flag, so that such events can be relayed to userspace. Acked-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
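A hedged sketch of how a driver can relay the condition (the exact call site in the SPE driver may differ):

    /* PERF_AUX_FLAG_COLLISION sits alongside TRUNCATED/OVERWRITE/PARTIAL in the uapi. */
    static void report_collision(struct perf_output_handle *handle, bool collided)
    {
            if (collided)
                    perf_aux_output_flag(handle, PERF_AUX_FLAG_COLLISION);
    }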
-
Will Deacon authored
Perf PMU drivers using AUX buffers cannot be built as modules unless the AUX helpers are exported. This patch exports perf_aux_output_{begin,end,skip} and perf_get_aux to modules. Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
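Presumably the usual export markers on the helpers in question, e.g.:

    EXPORT_SYMBOL_GPL(perf_aux_output_begin);
    EXPORT_SYMBOL_GPL(perf_aux_output_end);
    EXPORT_SYMBOL_GPL(perf_aux_output_skip);
    EXPORT_SYMBOL_GPL(perf_get_aux);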
-
Will Deacon authored
Any modular driver using cluster-affine PPIs needs to be able to call irq_get_percpu_devid_partition so that it can enable the IRQ on the correct subset of CPUs. This patch exports the symbol so that it can be called from within a module. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 17 Oct, 2017 1 commit
-
-
Will Deacon authored
Merge tag 'acpi/iort-for-v4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/lpieralisi/linux into aarch64/for-next/core Pull arm64 ACPI IORT updates from Lorenzo Pieralisi: - Code clean-ups (A.Yadav, L.Pieralisi) - Platform devices initialization rework in preparation for IORT PMCG handling (L.Pieralisi) - Mapping API rework to enable MSIs for IORT components as defined in IORT specification issue C (H.Guo, L.Pieralisi) Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 16 Oct, 2017 8 commits
-
-
Lorenzo Pieralisi authored
ITS specific mappings for SMMUv3/PMCG components can be retrieved through special index mapping entries introduced in IORT revision C. Introduce a new API iort_set_device_domain() to set the MSI domain for SMMUv3/PMCG nodes (extendable to any future IORT node requiring special index ITS mapping entries) that represent MSI through special index mappings in order to enable MSI support for the devices their nodes represent. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
-
Hanjun Guo authored
IORT revision C introduced a mapping entry binding to describe ITS device ID mapping for SMMUv3 MSI interrupts. Enable the single mapping flag (ie that is used by SMMUv3 component for its special index mappings) for the SMMUv3 node in the IORT mapping API and add IORT code to handle special index mapping entry for the SMMUv3 IORT nodes to enable their MSI interrupts. In case the ACPICA for SMMUv3 device ID mapping is not ready, use the ACPICA version as a guard for function iort_get_id_mapping_index(). Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> [lorenzo.pieralisi@arm.com: patch split, typos fixing, rewrote the log] Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Hanjun Guo authored
IORT revision C introduced SMMUv3 and PMCG MSI support by adding specific mapping entries in the SMMUv3/PMCG subtables to retrieve the device ID and the ITS group it maps to for a given SMMUv3/PMCG IORT node. Introduce a mapping function (ie iort_get_id_mapping_index()), that for a given IORT node looks up if an ITS specific ID mapping entry exists and if so retrieve the corresponding mapping index in the IORT node mapping array. Since an ITS specific index mapping can be present for an IORT node that is not a leaf node (eg SMMUv3 - to describe its own ITS device ID) special handling is required for two steps mapping cases such as PCI/NamedComponent--->SMMUv3--->ITS because the SMMUv3 ITS specific index mapping entry should be skipped to prevent the IORT API from considering the mapping entry as a regular mapping one. If we take the following IORT topology example:

    |----------------------|
    |  Root Complex Node   |
    |----------------------|
    |     map entry[x]     |
    |----------------------|
    |       id value       |
    |   output_reference   |
    |---|------------------|
        |
        |   |----------------------|
        |-->|        SMMUv3        |
            |----------------------|
            |     SMMUv3 dev ID    |
            |    mapping index 0   |
            |----------------------|
            |     map entry[0]     |
            |----------------------|
            |       id value       |
            |   output_reference-----------> ITS 1 (SMMU MSI domain)
            |----------------------|
            |     map entry[1]     |
            |----------------------|
            |       id value       |
            |   output_reference-----------> ITS 2 (PCI MSI domain)
            |----------------------|

where the SMMUv3 ITS specific mapping entry is index 0 and it represents the SMMUv3 ITS specific index mapping entry (describing its own ITS device ID), we need to skip that mapping entry while carrying out the Root Complex Node regular mappings to prevent erroneous translations. Reuse the iort_get_id_mapping_index() function to detect the ITS specific mapping index for a specific IORT node and skip it in the IORT mapping API (ie iort_node_map_id()) loop to prevent considering it a normal PCI/Named Component ID mapping entry. Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> [lorenzo.pieralisi@arm.com: split patch/rewrote commit log] Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Hanjun Guo authored
Current IORT code provides a function (ie iort_get_fwnode()) which looks up a struct fwnode_handle pointer through a struct acpi_iort_node pointer for SMMU components but it lacks a function that implements the reverse look-up, namely struct fwnode_handle* -> struct acpi_iort_node*. Devices that are not IORT named components cannot be retrieved through their associated IORT named component scan interface because they just are not represented in the ACPI namespace; the reverse look-up is therefore required for all platform devices that represent IORT nodes (eg SMMUs) so that the struct acpi_iort_node* can be retrieved from the struct device->fwnode pointer. Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> [lorenzo.pieralisi@arm.com: re-indented/rewrote the commit log] Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
-
Lorenzo Pieralisi authored
The way current IORT code initializes platform devices for SMMU nodes is somewhat tied (mostly for naming convention) to the SMMU nodes themselves but it need not be in that it is completely generic and can easily be made so by structures renaming and code reshuffling. Rework IORT platform devices initialization code to make the functions and data structures SMMU agnostic. No functional changes intended. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Hanjun Guo <hanjun.guo@linaro.org> Cc: Sudeep Holla <sudeep.holla@arm.com>
-
Lorenzo Pieralisi authored
Some function definitions use an indentation style that is frowned upon, with the return value type/storage class specifier on a separate line. Reindent the function definitions to fix them. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Hanjun Guo <hanjun.guo@linaro.org> Cc: Sudeep Holla <sudeep.holla@arm.com>
-
Lorenzo Pieralisi authored
The conditional ACPI_IORT_SMMU_V3_PXM_VALID guard around arm_smmu_v3_set_proximity() was added to manage a cross tree ACPICA merge dependency; with ACPICA changes merged in: commit c9442300 ("ACPICA: iasl: Update to IORT SMMUv3 disassembling") the guard has become useless. Remove it. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Hanjun Guo <hanjun.guo@linaro.org> Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: Ganapatrao Kulkarni <ganapatrao.kulkarni@cavium.com>
-
Arvind Yadav authored
pr_err() messages should be terminated with a new-line to avoid other messages being concatenated onto the end. Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
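For example (the message text is illustrative):

    /* Without the trailing '\n', a later unrelated printk can be glued onto this line. */
    pr_err("iort: no matching ITS node found for the device\n");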
-
- 13 Oct, 2017 2 commits
-
-
Julien Thierry authored
The current delay implementation uses the yield instruction, which is a hint that it is beneficial to schedule another thread. As this is a hint, it may be implemented as a NOP, causing all delays to be busy loops. This is the case for many existing CPUs. Taking advantage of the generic timer sending periodic events to all cores, we can use WFE during delays to reduce power consumption. This is beneficial only for delays longer than the period of the timer event stream. If the timer event stream is not enabled, delays will behave as yield/busy loops. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
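A hedged sketch of the reworked delay loop; whether the event stream is enabled and its period are passed as parameters here to keep the sketch self-contained, since the real helpers live in the timer driver:

    static void __delay_sketch(unsigned long cycles, bool evtstrm_enabled,
                               cycles_t evtstrm_period)
    {
            cycles_t start = get_cycles();

            if (evtstrm_enabled) {
                    /* Sleep in WFE while at least one full event period remains. */
                    while ((get_cycles() - start + evtstrm_period) < cycles)
                            wfe();
            }

            /* Spin for the remainder (yield may be a NOP, hence the WFE above). */
            while ((get_cycles() - start) < cycles)
                    cpu_relax();
    }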
-
Julien Thierry authored
The arch timer configuration for a CPU might get reset after suspending said CPU. In order to reliably use the event stream in the kernel (e.g. for delays), we keep track of the state where we can safely consider the event stream as properly configured. After writing to cntkctl, we issue an ISB to ensure that subsequent delay loops can rely on the event stream being enabled. Signed-off-by: Julien Thierry <julien.thierry@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Russell King <linux@armlinux.org.uk> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
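A hedged arm64-flavoured sketch of the idea (the divider/trigger field selection is omitted; EVNTEN is bit 2 of CNTKCTL_EL1):

    static void enable_evtstream_sketch(void)
    {
            u64 cntkctl = read_sysreg(cntkctl_el1);

            cntkctl |= BIT(2);              /* CNTKCTL_EL1.EVNTEN */
            write_sysreg(cntkctl, cntkctl_el1);
            isb();                          /* subsequent delay loops can rely on the event stream */
    }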
-
- 11 Oct, 2017 2 commits
-
-
Mark Rutland authored
We don't document our ELF hwcaps, leaving developers to interpret them according to hearsay, guesswork, or (in exceptional cases) inspection of the current kernel code. This is less than optimal, and it would be far better if we had some definitive description of each of the ELF hwcaps that developers could refer to. This patch adds a document describing the (native) arm64 ELF hwcaps. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Dave Martin <Dave.Martin@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> [ Updated new hwcap entries in the document ] Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Suzuki K Poulose authored
ARMv8-A adds a few optional features for ARMv8.2 and ARMv8.3. Expose them to userspace via HWCAPs and mrs emulation.

    SHA2-512 - Instruction support for the SHA512 hash algorithm (e.g. SHA512H, SHA512H2, SHA512SU0, SHA512SU1)
    SHA3     - SHA3 crypto instructions (EOR3, RAX1, XAR, BCAX)
    SM3      - Instruction support for the Chinese cryptography algorithm SM3
    SM4      - Instruction support for the Chinese cryptography algorithm SM4
    DP       - Dot Product instructions (UDOT, SDOT)

Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Dave Martin <dave.martin@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
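From userspace these show up as HWCAP bits; a hedged example of checking for them (the HWCAP_* constants require sufficiently new kernel headers):

    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>          /* HWCAP_SHA512, HWCAP_SM3, HWCAP_ASIMDDP, ... */

    int main(void)
    {
            unsigned long hwcaps = getauxval(AT_HWCAP);

            if (hwcaps & HWCAP_SHA512)
                    printf("SHA-512 instructions available\n");
            if (hwcaps & HWCAP_ASIMDDP)
                    printf("Dot Product instructions available\n");

            return 0;
    }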
-