- 27 Jul, 2015 39 commits
-
Dave P Martin authored
Currently, the minimal default BUG() implementation from asm-generic is used for arm64. This patch uses the BRK software breakpoint instruction to generate a trap instead, similarly to most other arches, with the generic BUG code generating the dmesg boilerplate. This allows bug metadata to be moved to a separate table and reduces the amount of inline code at BUG and WARN sites. This also avoids clobbering any registers before they can be dumped. To mitigate the size of the bug table further, this patch makes use of the existing infrastructure for encoding addresses within the bug table as 32-bit offsets instead of absolute pointers. (Note that this limits the kernel size to 2GB.) Traps are registered at arch_initcall time for aarch64, but BUG has minimal real dependencies and it is desirable to be able to generate bug splats as early as possible. This patch redirects all debug exceptions caused by BRK directly to bug_handler() until the full debug exception support has been initialised. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
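The 32-bit offset encoding for the bug table is easiest to see in isolation. Below is a minimal userspace C sketch of the idea (not the kernel's __bug_table layout; the struct and field names are invented for illustration): each entry records the signed distance to the address it describes, so a 4-byte field stands in for a 64-bit pointer as long as the table and the addresses it refers to stay within 2GB of each other.

  #include <stdint.h>
  #include <stdio.h>

  struct bug_entry {
          int32_t addr_offset;            /* target address relative to this field */
  };

  static int dummy_bug_site;              /* stand-in for a BUG() site address */
  static struct bug_entry entry;          /* stand-in for a bug table slot */

  static uintptr_t bug_addr(const struct bug_entry *e)
  {
          /* recover the full address: entry location + stored offset */
          return (uintptr_t)&e->addr_offset + (intptr_t)e->addr_offset;
  }

  int main(void)
  {
          entry.addr_offset = (int32_t)((intptr_t)&dummy_bug_site -
                                        (intptr_t)&entry.addr_offset);

          printf("recovered %p, actual %p\n",
                 (void *)bug_addr(&entry), (void *)&dummy_bug_site);
          return 0;
  }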
-
Dave P Martin authored
<asm/debug-monitors.h> relies on <asm/ptrace.h>, but doesn't declare this dependency. This becomes a problem once debug-monitors.h starts getting included all over the place to get the BRK immediates. The missing include of <asm/memory.h> (for UL()) in <asm/esr.h> is also added. The series no longer relies on this, but I spotted it during development and it may as well get fixed. No functional change. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Dave P Martin authored
The way the KGDB_DYN_BRK_INS_BYTEx macros are declared is more complex than it needs to be. Also, the macros are only used in one place, which is arch-specific anyway. This patch refactors the macros to simplify them, and exposes an argument so that we can have a single macro instead of four. As a side effect, this patch also fixes some anomalous spellings of "KGDB". These changes alter the compile-time types of some integer constants; this is harmless but triggers truncation warnings in gcc when assigning to 32-bit variables, so an explicit cast is added for the affected cases. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
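As a rough illustration of the refactoring (the constant and macro names below are placeholders, not the real arm64 kgdb definitions), a single macro taking the byte index can replace four near-identical BYTE0..BYTE3 macros, with an explicit cast to keep gcc's truncation warnings quiet:

  #include <stdint.h>
  #include <stdio.h>

  /* placeholder instruction word; the real value comes from the BRK encoding */
  #define EXAMPLE_BRK_INSN        UINT32_C(0xd4200000)

  /* one parameterised macro instead of four, byte index as the argument */
  #define EXAMPLE_BRK_INS_BYTE(x) \
          ((uint8_t)((EXAMPLE_BRK_INSN >> ((x) * 8)) & 0xff))

  int main(void)
  {
          for (int i = 0; i < 4; i++)
                  printf("byte %d: 0x%02x\n", i, EXAMPLE_BRK_INS_BYTE(i));
          return 0;
  }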
-
Dave P Martin authored
It makes sense to keep all the architectural exception syndrome definitions in the same place. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Dave P Martin authored
The naming of DBG_ESR_VAL_BRK is inconsistent with the way other similar macros are named. This patch makes the naming more consistent, and appends "64" as a reminder that this ESR pattern only matches from AArch64 state. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Dave P Martin authored
<asm/esr.h> has perfectly good constants for defining ESR values already. Let's use them. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Dave P Martin authored
There are only 16 comment bits in a BRK instruction, which correspond to ESR bits 15:0. Bits 24:16 of the ESR are RES0, and might have weird meanings in the future. This patch inserts 16 bits of comment into the ESR value instead of 20 (the 20 was almost certainly a typo in the original code). Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
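A hedged sketch of the masking this describes, with the field constants written out for the example rather than copied from the kernel headers:

  #include <stdint.h>
  #include <stdio.h>

  #define ESR_EC_SHIFT            26
  #define ESR_EC_BRK64            UINT64_C(0x3c)  /* BRK taken from AArch64 state */
  #define ESR_IL                  (UINT64_C(1) << 25)
  #define BRK_COMMENT_MASK        0xffffu         /* 16 bits, not 0xfffff */

  /* build an ESR-style value for a BRK with the given 16-bit comment */
  static uint64_t brk64_esr(unsigned int comment)
  {
          return (ESR_EC_BRK64 << ESR_EC_SHIFT) | ESR_IL |
                 (comment & BRK_COMMENT_MASK);
  }

  int main(void)
  {
          printf("ESR for BRK #0x401: 0x%016llx\n",
                 (unsigned long long)brk64_esr(0x401));
          return 0;
  }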
-
Dave P Martin authored
The size of an A64 BRK instruction is the same as the size of all other A64 instructions, because all A64 instructions are the same size. BREAK_INSTR_SIZE is retained for readability, but it should not be an independent constant from AARCH64_INSN_SIZE. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
yalin wang authored
A small change to the patch_map() function: use set_fixmap_offset() to make the code clearer. Signed-off-by: yalin wang <yalin.wang2010@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Ard Biesheuvel authored
When allocating memory for the kernel image, try the AllocatePages() boot service to obtain memory at the preferred offset of 'dram_base + TEXT_OFFSET', and only revert to efi_low_alloc() if that fails. This is the only way to allocate at the base of DRAM if DRAM starts at 0x0, since efi_low_alloc() refuses to allocate at 0x0. Tested-by: Haojian Zhuang <haojian.zhuang@linaro.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Sudeep Holla authored
Since both CONFIG_ACPI and CONFIG_OF are enabled when booting using ACPI tables on ARM64 platforms, we get a few device tree warnings which are not valid for ACPI boot. We can use of_have_populated_dt to check whether the device tree is populated before throwing out those errors. This patch uses of_have_populated_dt to remove these spurious device tree warnings when booting using ACPI tables. Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
There are cases where we want to compile out both versions of an alternative code block, so add an enable parameter to the new conditional alternative assembly macros in the same way as alternative_insn. Signed-off-by: Will Deacon <will.deacon@arm.com>
-
James Morse authored
'Privileged Access Never' is a new ARMv8.1 feature which prevents privileged code from accessing any virtual address where read or write access is also permitted at EL0. This patch enables the PAN feature on all CPUs, and modifies the {get,put}_user helpers to temporarily permit access. This will catch kernel bugs where user memory is accessed directly. 'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: James Morse <james.morse@arm.com> [will: use ALTERNATIVE in asm and tidy up pan_enable check] Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Suzuki K. Poulose authored
The system register encoding generated by sys_reg() works only for MRS/MSR(Register) operations, as we hardcode Bit 20 to 1 in the mrs_s/msr_s mask. This makes it unusable for generating instructions that access registers with Op0 < 2 (e.g., PSTATE.x with Op0 = 0). As per the ARMv8 ARM (Ref: ARMv8 ARM, Section: "System instruction class encoding overview", C5.2, version: ARM DDI 0487A.f), the instruction encoding reserves bits [20-19] for Op0. This patch generalises the sys_reg, mrs_s and msr_s macros, so that we can use them to access any of the supported system registers. Cc: James Morse <james.morse@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
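A sketch of the generalised encoding, following the field positions given in the text (Op0 in bits [20:19]); treat it as an illustration rather than a copy of the kernel's sys_reg() macro:

  #include <stdint.h>
  #include <stdio.h>

  /* pack Op0/Op1/CRn/CRm/Op2 into the MRS/MSR-style encoding described above */
  static uint32_t example_sys_reg(unsigned op0, unsigned op1, unsigned crn,
                                  unsigned crm, unsigned op2)
  {
          return ((op0 & 0x3) << 19) | ((op1 & 0x7) << 16) |
                 ((crn & 0xf) << 12) | ((crm & 0xf) << 8) |
                 ((op2 & 0x7) << 5);
  }

  int main(void)
  {
          /* an Op0 == 0 encoding, which the previous hard-coded form
           * (Bit 20 forced to 1) could not express */
          printf("0x%08x\n", example_sys_reg(0, 0, 4, 0, 4));
          return 0;
  }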
-
James Morse authored
Some uses of ALTERNATIVE() may depend on a feature that is disabled at compile time by a Kconfig option. In this case the unused alternative instructions waste space, and if the original instruction is a nop, it wastes time and space. This patch adds an optional 'config' option to ALTERNATIVE() and alternative_insn that allows the compiler to remove both the original and alternative instructions if the config option is not defined. Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
James Morse authored
When a new cpu feature is available, the cpu feature bits will have some initial value, which is incremented when the feature is updated. This patch changes 'register_value' to be 'min_field_value', and checks that the feature bits value (interpreted as a signed int) is greater than this minimum. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
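A minimal sketch of the check being described, with invented names; the use of >= (rather than a strict comparison) is an assumption about the exact test:

  #include <stdbool.h>
  #include <stdio.h>

  struct example_cpu_capability {
          const char *desc;
          int min_field_value;    /* minimum feature field value required */
  };

  /* 'field' is the already-extracted, sign-extended feature field */
  static bool feature_matches(int field, const struct example_cpu_capability *cap)
  {
          return field >= cap->min_field_value;
  }

  int main(void)
  {
          struct example_cpu_capability cap = { "example feature", 1 };

          printf("field 2  -> %d\n", feature_matches(2, &cap));   /* newer revision still matches */
          printf("field -1 -> %d\n", feature_matches(-1, &cap));  /* e.g. 0xf: not implemented */
          return 0;
  }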
-
James Morse authored
This patch adds an 'enable()' callback to cpu capability/feature detection, allowing features that require some setup or configuration to get this opportunity once the feature has been detected. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
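The shape of the callback is roughly as below (names invented for the sketch): each capability may carry an optional enable() hook that is invoked once the capability has been detected:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdio.h>

  struct example_capability {
          const char *desc;
          bool (*matches)(void);  /* detection */
          void (*enable)(void);   /* optional setup once detected */
  };

  static bool always_present(void) { return true; }
  static void setup_feature(void)  { printf("feature configured\n"); }

  static const struct example_capability caps[] = {
          { "feature needing setup", always_present, setup_feature },
          { "feature needing none",  always_present, NULL },
  };

  int main(void)
  {
          for (size_t i = 0; i < sizeof(caps) / sizeof(caps[0]); i++) {
                  if (!caps[i].matches())
                          continue;
                  printf("detected: %s\n", caps[i].desc);
                  if (caps[i].enable)
                          caps[i].enable();
          }
          return 0;
  }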
-
James Morse authored
Later patches need config_sctlr_el1 to set/clear bits in the sctlr_el1 register. This patch moves this function into a header file. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: James Morse <james.morse@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
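The helper's shape is a plain read-modify-write with separate clear and set masks. The sketch below models the SCTLR_EL1 MRS/MSR accesses with an ordinary variable, so it is only an illustration of the interface, not the real function:

  #include <stdint.h>
  #include <stdio.h>

  static uint32_t fake_sctlr_el1 = 0x30d00800;    /* arbitrary starting value */

  static void example_config_sctlr(uint32_t clear, uint32_t set)
  {
          uint32_t val = fake_sctlr_el1;  /* stands in for an MRS */

          val &= ~clear;
          val |= set;
          fake_sctlr_el1 = val;           /* stands in for an MSR */
  }

  int main(void)
  {
          example_config_sctlr(1u << 1, 1u << 19);        /* clear bit 1, set bit 19 */
          printf("0x%08x\n", fake_sctlr_el1);
          return 0;
  }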
-
Daniel Thompson authored
Convert the dynamic patching for ARM64_HAS_SYSREG_GIC_CPUIF over to the newly added alternative assembler macros. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Daniel Thompson authored
Convert the dynamic patching for ARM64_WORKAROUND_845719 over to the newly added alternative assembler macros. Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Daniel Thompson authored
Convert the dynamic patching for ARM64_WORKAROUND_CLEAN_CACHE over to the newly added alternative assembler macros. Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Daniel Thompson authored
The existing alternative_insn macro has some limitations that make it hard to work with. In particular, the fact that it takes instructions from its own macro arguments means it doesn't play very nicely with C pre-processor macros, because the macro arguments look like a string to the C pre-processor. Workarounds are (probably) possible but things start to look ugly. Introduce an alternative set of macros that allows instructions to be presented to the assembler as normal, and switch everything over to the new macros. Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
James Morse authored
Based on arch/arm/include/asm/cputype.h, this function does the shifting and sign extension necessary when accessing cpu feature fields. Signed-off-by: James Morse <james.morse@arm.com> Suggested-by: Russell King <linux@arm.linux.org.uk> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
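A userspace sketch of such an accessor (the function name and the assumption of 4-bit fields are illustrative): shift the field down and sign-extend it, so that a field value of 0xf in an ID register compares as negative:

  #include <stdint.h>
  #include <stdio.h>

  /* extract the 4-bit feature field at 'shift', sign-extended */
  static int example_feature_extract_field(uint64_t reg, unsigned shift)
  {
          return (int64_t)(reg << (64 - 4 - shift)) >> (64 - 4);
  }

  int main(void)
  {
          uint64_t idreg = 0x000000000000f011ULL; /* made-up ID register value */

          printf("field at 0:  %d\n", example_feature_extract_field(idreg, 0));
          printf("field at 12: %d\n", example_feature_extract_field(idreg, 12));
          return 0;
  }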
-
Mark Rutland authored
Most of the cache events an architecture might support do not map well to those provided by the ARM architecture, and as such most entries in the event number maps are *_UNSUPPORTED. Unfortunately, as 0 is a valid physical event identifier, the *_UNSUPPORTED macros expand to a non-zero value and thus each unsupported event must be explicitly initialised as such. This leads to large diffs when adding support for a new CPU, and makes it difficult to spot the important information. This patch follows arch/arm/ in making use of PERF_*_ALL_UNSUPPORTED macros to initialise all entries to *_UNSUPPORTED before overriding this for the specific events we actually support, resulting in a significant source code reduction. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
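The pattern relies on GNU C range designated initialisers; a self-contained illustration with invented event names and table size (not the kernel's perf map definitions) looks like this:

  #include <stdio.h>

  #define EXAMPLE_EVENT_UNSUPPORTED       0xffff
  #define EXAMPLE_NR_EVENTS               8

  /* default every entry to the "unsupported" sentinel in one go */
  #define EXAMPLE_MAP_ALL_UNSUPPORTED \
          [0 ... EXAMPLE_NR_EVENTS - 1] = EXAMPLE_EVENT_UNSUPPORTED

  static const unsigned int example_event_map[EXAMPLE_NR_EVENTS] = {
          EXAMPLE_MAP_ALL_UNSUPPORTED,
          /* only the events that actually exist are listed explicitly */
          [0] = 0x11,     /* cycles */
          [1] = 0x08,     /* instructions */
  };

  int main(void)
  {
          for (int i = 0; i < EXAMPLE_NR_EVENTS; i++)
                  printf("event %d -> 0x%x\n", i, example_event_map[i]);
          return 0;
  }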
-
Jisheng Zhang authored
Remove paragraph about writing to the Free Software Foundation's mailing address from GPL notice. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Robin Murphy authored
The default dma_common_get_sgtable() implementation relies on the CPU address of the buffer being a regular lowmem address. This is not always the case on arm64, since allocations from the various DMA pools may have remapped vmalloc addresses, rendering the use of virt_to_page() invalid. Fix this by providing our own implementation based on the fact that we can safely derive a physical address from the DMA address in both cases. CC: Jon Medhurst <tixy@linaro.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> [will: made static] Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Nobody seems to be producing !SMP systems anymore, so this is just becoming a source of kernel bugs, particularly if people want to use coherent DMA with non-shared pages. This patch forces CONFIG_SMP=y for arm64, removing a modest amount of code in the process. Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
We currently bundle the callchain handling code with the PMU code, despite the fact the two are distinct, and the former can be useful even in the absence of the latter. Follow the example of arch/arm and factor the callchain handling into its own file dependent on CONFIG_PERF_EVENTS rather than CONFIG_HW_PERF_EVENTS. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
The AArch64 instruction set contains load/store pair memory accessors, so use these in our copy_*_user routines to transfer 16 bytes per iteration. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
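A conceptual C version of the 16-bytes-per-iteration idea is below; it is only an illustration (the real copy_*_user routines are assembly and must also handle faults), but a compiler can lower the two 64-bit halves to an LDP/STP pair on AArch64:

  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  static void copy16(void *dst, const void *src, size_t n)
  {
          unsigned char *d = dst;
          const unsigned char *s = src;

          while (n >= 16) {
                  uint64_t lo, hi;

                  /* load a 16-byte pair, then store it */
                  memcpy(&lo, s, 8);
                  memcpy(&hi, s + 8, 8);
                  memcpy(d, &lo, 8);
                  memcpy(d + 8, &hi, 8);
                  s += 16;
                  d += 16;
                  n -= 16;
          }
          while (n--)             /* byte-wise tail */
                  *d++ = *s++;
  }

  int main(void)
  {
          char src[37] = "sixteen bytes per iteration example.";
          char dst[37];

          copy16(dst, src, sizeof(src));
          printf("%s\n", dst);
          return 0;
  }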
-
Catalin Marinas authored
The compat ptrace interface allows access to the TLS register, hardware breakpoints and watchpoints, and the syscall number. However, a native task using the native ptrace interface to debug compat tasks (e.g. multi-arch gdb) only has access to the general and VFP register sets; the compat ptrace interface cannot be accessed from a native task. This patch adds a new user_aarch32_ptrace_view which contains the TLS, hardware breakpoint/watchpoint and syscall number regsets in addition to the existing GPR and VFP regsets. This view is backwards compatible with previous kernels. Core dumping of 32-bit tasks and compat ptrace are not affected, since the original user_aarch32_view is preserved. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Yao Qi <yao.qi@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Olof Johansson authored
Plumb up Makefile arguments for the already supported formats in the kbuild system: lz4, bzip2, lzma, and lzo. Note that just as with Image.gz, these images are not self-decompressing and the booting firmware still needs to handle decompression before launching the kernel image. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Catalin Marinas authored
The ARMv8.1 architecture extensions introduce support for hardware updates of the access and dirty information in page table entries. With TCR_EL1.HA enabled, when the CPU accesses an address with the PTE_AF bit cleared in the page table, instead of raising an access flag fault the CPU sets the bit in the page table entry itself. To ensure that kernel modifications to the page tables do not inadvertently revert a change introduced by hardware updates, the exclusive monitor (ldxr/stxr) is adopted in the pte accessors. When TCR_EL1.HD is enabled, a write access to a memory location with the DBM (Dirty Bit Management) bit set in the corresponding pte automatically clears the read-only bit (AP[2]). This DBM bit maps onto the Linux PTE_WRITE bit, and to check whether a writable (DBM set) page is dirty, the kernel tests the PTE_RDONLY bit. In order to allow read-only and dirty pages, the kernel needs to preserve the software dirty bit. The hardware dirty status is transferred to the software dirty bit in ptep_set_wrprotect() (using a load/store-exclusive loop) and pte_modify(). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
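The ldxr/stxr usage can be pictured as a compare-and-swap retry loop that never discards bits the hardware may have set concurrently. The userspace analogue below uses C11 atomics and invented bit names, so it is a sketch of the idea rather than the kernel's pte accessors:

  #include <stdatomic.h>
  #include <stdint.h>
  #include <stdio.h>

  #define EX_PTE_AF       (UINT64_C(1) << 10)     /* access flag (illustrative) */
  #define EX_PTE_RDONLY   (UINT64_C(1) << 7)      /* AP[2] (illustrative) */

  static _Atomic uint64_t example_pte;

  static void example_wrprotect(_Atomic uint64_t *ptep)
  {
          uint64_t old = atomic_load(ptep), new;

          /* retry if the pte changed under us, so concurrently set bits survive */
          do {
                  new = old | EX_PTE_RDONLY;
          } while (!atomic_compare_exchange_weak(ptep, &old, new));
  }

  int main(void)
  {
          atomic_store(&example_pte, EX_PTE_AF);  /* pretend hardware set AF */
          example_wrprotect(&example_pte);
          printf("pte: 0x%llx\n", (unsigned long long)atomic_load(&example_pte));
          return 0;
  }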
-
Mark Salter authored
Commit 68234df4 ("arm64: kill flush_cache_all()") removed soft_reset() from the kernel. This was the only caller of setup_mm_for_reboot(), so remove that also. Signed-off-by: Mark Salter <msalter@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Mark Brown reported an allnoconfig build failure in -next:

  Today's linux-next fails to build an arm64 allnoconfig due to "mm: make GUP handle pfn mapping unless FOLL_GET is requested" which causes:

  > arm64-allnoconfig
  > ../mm/gup.c:51:4: error: implicit declaration of function 'update_mmu_cache' [-Werror=implicit-function-declaration]

Fix the error by moving the function to asm/pgtable.h, as is the case for most other architectures. Reported-by: Mark Brown <broonie@kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Robin Murphy authored
Since commit 9d3bfbb4 ("arm64: Combine coherent and non-coherent swiotlb dma_ops"), __dma_common_mmap is no longer shared between two callers, so roll it into the remaining one. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Daniel Thompson authored
Commit 68234df4 ("arm64: kill flush_cache_all()") removed the only users of these macros. Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Rohit Thapliyal authored
On a 64-bit kernel, dump_mem() prints 32-bit addresses in the stack dump. This gives a disorganised view of the 64-bit values on the stack. Hence, it is modified to produce a complete 64-bit memory dump. With the patch:

[ 93.534801] Process insmod (pid: 1587, stack limit = 0xffffffc976be4058)
[ 93.541441] Stack: (0xffffffc976be7cf0 to 0xffffffc976be8000)
[ 93.547136] 7ce0: ffffffc976be7d00 ffffffc00008163c
[ 93.554898] 7d00: ffffffc976be7d40 ffffffc0000f8a44 ffffffc00098ef38 ffffffbffc000088
[ 93.562659] 7d20: ffffffc00098ef50 ffffffbffc0000c0 0000000000000001 ffffffbffc000070
[ 93.570419] 7d40: ffffffc976be7e40 ffffffc0000f935c 0000000000000000 000000002b424090
[ 93.578179] 7d60: 000000002b424010 0000007facc555f4 0000000080000000 0000000000000015
[ 93.585937] 7d80: 0000000000000116 0000000000000069 ffffffc00097b000 ffffffc976be4000
[ 93.593694] 7da0: 0000000000000064 0000000000000072 000000000000006e 000000000000003f
[ 93.601453] 7dc0: 000000000000feff 000000000000fff1 ffffffbffc002028 0000000000000124
[ 93.609211] 7de0: ffffffc976be7e10 0000000000000001 ffffff8000000000 ffffffbbffff0000
[ 93.616969] 7e00: ffffffc976be7e60 0000000000000000 0000000000000000 0000000000000000
[ 93.624726] 7e20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 93.632484] 7e40: 0000007fcc474550 ffffffc0000841ec 000000002b424010 0000007facda0710
[ 93.640241] 7e60: ffffffffffffffff ffffffc0000be6dc ffffff80007d2000 000000000001c010
[ 93.647999] 7e80: ffffff80007e0ae0 ffffff80007e09d0 ffffff80007edf70 0000000000000288
[ 93.655757] 7ea0: 00000000000002e8 0000000000000000 0000000000000000 0000001c0000001b
[ 93.663514] 7ec0: 0000000000000009 0000000000000007 000000002b424090 000000000001c010
[ 93.671272] 7ee0: 000000002b424010 0000007faccd3a48 0000000000000000 0000000000000000
[ 93.679030] 7f00: 0000007fcc4743f8 0000007fcc4743f8 0000000000000069 0000000000000003
[ 93.686787] 7f20: 0101010101010101 0000000000000004 0000000000000020 00000000000003f3
[ 93.694544] 7f40: 0000007facb95664 0000007facda7030 0000007facc555d0 0000000000498378
[ 93.702301] 7f60: 0000000000000000 000000002b424010 0000007facda0710 000000002b424090
[ 93.710058] 7f80: 0000007fcc474698 0000000000498000 0000007fcc474ebb 0000000000474f58
[ 93.717815] 7fa0: 0000000000498000 0000000000000000 0000000000000000 0000007fcc474550
[ 93.725573] 7fc0: 00000000004104bc 0000007fcc474430 0000007facc555f4 0000000080000000
[ 93.733330] 7fe0: 000000002b424090 0000000000000069 0950020128000244 4104000008000004
[ 93.741084] Call trace:

The above output makes a debugger's life a lot easier. Signed-off-by: Rohit Thapliyal <r.thapliyal@samsung.com> Signed-off-by: Maninder Singh <maninder1.s@samsung.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Sudeep Holla authored
arch_find_n_match_cpu_physical_id parses the device tree to get the device node for a given logical cpu index. However, since ARM PMUs get probed after the CPU device nodes are stashed while registering the cpus, we can use of_cpu_device_node_get to avoid another DT parse. This patch replaces arch_find_n_match_cpu_physical_id with of_cpu_device_node_get to reuse the stashed value directly instead. Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Suzuki K. Poulose authored
The ARM64 PMU prints an error message in event_init() when no hardware PMU is available. This is pretty annoying, as it keeps printing the message for every single attempt, unnecessarily flooding the kernel logs. The return code is sufficient for the user to figure out the reason. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 26 Jul, 2015 1 commit
-
Linus Torvalds authored
-