- 07 Nov, 2013 1 commit
-
-
Sudeep KarkadaNagesha authored
The non-IPI interrupts are displayed only for the online CPUs by show_interrupts() in kernel/irq/proc.c before arch_show_interrupts() is called. As a result, the column headers and the IPI counts don't match if any CPU is offline. This patch fixes show_ipi_list to display IPIs for online CPUs only. Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
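A minimal sketch of the fix, assuming the show_ipi_list() structure of that era's arch/arm64/kernel/smp.c (helper names per that file):

    static void show_ipi_list(struct seq_file *p, int prec)
    {
            unsigned int cpu, i;

            for (i = 0; i < NR_IPI; i++) {
                    seq_printf(p, "%*s%u:%s", prec - 1, "IPI", i,
                               prec >= 4 ? " " : "");
                    for_each_online_cpu(cpu)    /* was for_each_present_cpu() */
                            seq_printf(p, "%10u ",
                                       __get_irq_stat(cpu, ipi_irqs[i]));
                    seq_printf(p, "      %s\n", ipi_types[i]);
            }
    }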
-
- 06 Nov, 2013 3 commits
-
-
Catalin Marinas authored
Commit 52ea2a56 (arm64: locks: introduce ticket-based spinlock implementation) introduced the arch_spin_is_contended() function, making CONFIG_GENERIC_LOCKBREAK unnecessary. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will.deacon@arm.com>
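For reference, a sketch of the contended check that the ticket-lock commit provides:

    static inline int arch_spin_is_contended(arch_spinlock_t *lock)
    {
            arch_spinlock_t lockval = ACCESS_ONCE(*lock);

            /* More than one outstanding ticket means someone is waiting. */
            return (lockval.next - lockval.owner) > 1;
    }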
-
Marc Zyngier authored
Ensure that accesses to the GICH_* registers are byteswapped when the kernel is compiled as big-endian. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
Force SCTLR_EL2.EE to 1 if the kernel is compiled as BE. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 05 Nov, 2013 5 commits
-
-
T.J. Purtell authored
The ARM Architecture Reference Manual specifies that the IT state bits in the PSR must be all zeros in ARM mode or behavior is unspecified. If an ARM function is registered as a signal handler, and that signal is delivered inside a block of instructions following an IT instruction, some of the instructions at the beginning of the signal handler may be skipped if the IT state bits of the Program Status Register are not cleared by the kernel. Signed-off-by: T.J. Purtell <tj@mobisocial.us> [catalin.marinas@arm.com: code comment and commit log updated] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
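A minimal sketch of the fix in the compat signal-setup path (the variable name is an assumption for illustration):

    /*
     * Clear any in-progress IT block before entering the handler, so
     * its first instructions are not conditionally skipped; the IT
     * bits must be zero in ARM mode anyway.
     */
    spsr &= ~COMPAT_PSR_IT_MASK;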
-
Catalin Marinas authored
This patch expands VA_BITS to 42 when the 64K page configuration is enabled, allowing a 2TB kernel linear mapping. Linux still uses 2 levels of page tables in this configuration, with the pgd now being a full page. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com>
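A sketch of the resulting definition:

    #ifdef CONFIG_ARM64_64K_PAGES
    #define VA_BITS         (42)    /* 64K pages: 2TB linear mapping */
    #else
    #define VA_BITS         (39)    /* 4K pages */
    #endif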
-
Will Deacon authored
Relocations that require an instruction immediate to be re-encoded must ensure that the instruction pattern is represented in a little-endian format for the manipulation code to work correctly. This patch converts the loaded instruction to native endianness prior to encoding and then converts back to little-endian byte order before updating memory. Signed-off-by: Will Deacon <will.deacon@arm.com> Tested-by: Matthew Leach <matthew.leach@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
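A sketch of the conversion around the immediate fix-up (a fragment; the helper and surrounding variables are illustrative):

    u32 insn = le32_to_cpu(*(__le32 *)place);       /* LE in memory -> native */

    insn = encode_insn_immediate(type, insn, imm);  /* patch the immediate */

    *(__le32 *)place = cpu_to_le32(insn);           /* native -> LE in memory */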
-
Catalin Marinas authored
This way we can spot early bugs when just testing with the default config. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Marc Zyngier authored
preempt_count is defined as an int. Oddly enough, we access it as a 64-bit value. Things become interesting when running a BE kernel and looking at the current CPU number, which is stored as an int next to preempt_count. Like in a per-cpu interrupt handler, for example... Using a 32-bit access fixes the issue for good. Cc: Matthew Leach <matthew.leach@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
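An illustration of why the wide access breaks (the struct name here is hypothetical; the adjacency matches thread_info as described):

    /*
     * preempt_count and cpu are adjacent 32-bit fields. A 64-bit load
     * at &preempt_count also reads cpu, and which half of the result
     * holds which field depends on endianness; a 32-bit load doesn't.
     */
    struct thread_info_slice {
            int preempt_count;
            int cpu;
    };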
-
- 04 Nov, 2013 2 commits
-
-
Marc Zyngier authored
Commit 53ae3acd (arm64: Only enable local interrupts after the CPU is marked online) moved the enabling of the GIC after the CPUs are marked online. This has an interesting side effect:

[...]
[<ffffffc0002eefd8>] gic_raise_softirq+0xf8/0x160
[<ffffffc000088f58>] smp_send_reschedule+0x38/0x40
[<ffffffc0000c8728>] resched_task+0x84/0xc0
[<ffffffc0000c8cdc>] check_preempt_curr+0x58/0x98
[<ffffffc0000c8d38>] ttwu_do_wakeup+0x1c/0xf4
[<ffffffc0000c8f90>] ttwu_do_activate.constprop.84+0x64/0x70
[<ffffffc0000cad30>] try_to_wake_up+0x1d4/0x2b4
[<ffffffc0000cae6c>] default_wake_function+0x10/0x18
[<ffffffc0000c5ca4>] __wake_up_common+0x60/0xa0
[<ffffffc0000c7784>] complete+0x48/0x64
[<ffffffc000088bec>] secondary_start_kernel+0xe8/0x110
[...]

Here, we end up calling gic_raise_softirq without having initialized the interrupt controller for this CPU. While this goes unnoticed with GICv2 (the distributor is always accessible), it explodes with GICv3. The fix is to move the call to notify_cpu_starting before we set the secondary CPU online. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
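A sketch of the resulting ordering in secondary_start_kernel():

    /*
     * Run the CPU_STARTING notifiers (which set up the GIC CPU
     * interface) before this CPU is visible as online and can be
     * the target of an IPI.
     */
    notify_cpu_starting(cpu);
    set_cpu_online(cpu, true);
    local_irq_enable();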
-
Mark Salter authored
The .data section in the arm64 linker script currently lacks a definition for page-aligned data. This leads to a .page_aligned section being placed between the end of data and start of bss. This patch corrects that by using the generic RW_DATA_SECTION macro which includes support for page-aligned data. Signed-off-by: Mark Salter <msalter@redhat.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 31 Oct, 2013 1 commit
-
-
Catalin Marinas authored
Commit e8765b26 (arm64: read enable-method for CPU0) introduced checks for the enable method on CPU0 (to be later used with CPU suspend). However, if the kernel is compiled for UP and a DT file is used with a method like 'spin-table', Linux complains about 'invalid enable method'. This patch turns it into an 'unsupported enable method' warning. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 30 Oct, 2013 3 commits
-
-
Sudeep KarkadaNagesha authored
Once the cpu_logical_map for any logical CPU is populated with the corresponding physical identifier (i.e. MPIDR), its device node can be retrieved using the DT helper 'of_get_cpu_node'. Currently, the device tree parsing code to get the boot CPU node is duplicated in 'cpu_read_bootcpu_ops'. This patch replaces the code parsing the device tree for the boot CPU with of_get_cpu_node. Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
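A sketch of the simplified lookup (the surrounding function body is an assumption):

    struct device_node *dn = of_get_cpu_node(0, NULL);

    if (!dn) {
            pr_err("Failed to find device node for boot cpu\n");
            return;
    }
    /* read the enable-method etc. from dn as before */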
-
Sudeep KarkadaNagesha authored
The OF/DT core library provides an architecture-specific hook to match the logical CPU index with the corresponding physical identifier. On ARM64, the MPIDR_EL1 register contains specific bitfields (MPIDR_EL1.Aff{3..0}) which uniquely identify a CPU, in addition to some non-identifying information and reserved bits. The ARM cpu binding defines the 'reg' property to only contain the affinity bits, and any cpu nodes with other bits set in their 'reg' entry are skipped. This patch overrides the weak definition of arch_match_cpu_phys_id with an ARM64-specific version using MPIDR_EL1.Aff{3..0} as cpu physical identifiers. Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
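A sketch of the override (assuming cpu_logical_map() already holds only the affinity bits, per the entry above):

    bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
    {
            return phys_id == cpu_logical_map(cpu);
    }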
-
Mark Salter authored
Some drivers (ACPI notably) use ioremap_cache() to map an area which could either be outside of kernel RAM or in an already mapped reserved area of RAM. To avoid aliases with different caching attributes, ioremap() does not allow RAM to be remapped. But for ioremap_cache(), the existing kernel mapping may be used. Signed-off-by: Mark Salter <msalter@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
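A sketch of the behaviour described (names follow the arm64 ioremap code of the time; treat details as illustrative):

    void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
    {
            /*
             * For normal RAM we already have a cacheable alias: the
             * kernel linear mapping. Reuse it rather than creating a
             * second mapping with potentially different attributes.
             */
            if (pfn_valid(__phys_to_pfn(phys_addr)))
                    return (void __iomem *)__phys_to_virt(phys_addr);

            return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
                                    __builtin_return_address(0));
    }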
-
- 28 Oct, 2013 1 commit
-
-
Robin Murphy authored
This patch updates the barrier semantics in the kuser helper functions to take advantage of the ARMv8 additions to AArch32, which are guaranteed to be available in situations where these functions will be called. Note that this slightly changes the cmpxchg functions in that they are no longer necessarily full barriers if they return 1. However, the documentation only states they include their own barriers "as needed", not that they are obligated to act as a full barrier for the caller. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> CC: Matthew Leach <matthew.leach@arm.com> CC: Dave Martin <dave.martin@arm.com> CC: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 Oct, 2013 20 commits
-
-
Vinayak Kale authored
This patch fixes the ARMV8_EVTYPE_* macros, since the evtCount (event number) field is 10 bits wide in the event selection register. Signed-off-by: Vinayak Kale <vkale@apm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
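A sketch of the corrected definitions:

    /* evtCount occupies PMXEVTYPER_EL0[9:0]: 10 bits, not 8. */
    #define ARMV8_EVTYPE_MASK       0xc80003ff      /* writable bits */
    #define ARMV8_EVTYPE_EVENT      0x3ff           /* event number */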
-
Will Deacon authored
This patch wires up CONFIG_CPU_BIG_ENDIAN for the AArch64 kernel configuration. Selecting this option builds a big-endian kernel which can boot into a big-endian userspace. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
The owner and next members of the arch_spinlock_t structure need to be swapped when compiling for big endian. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Matthew Leach <matthew.leach@arm.com> Acked-by: Will Deacon <will.deacon@arm.com>
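The swapped layout, sketched:

    typedef struct {
    #ifdef __AARCH64EB__
            u16 next;
            u16 owner;
    #else
            u16 owner;
            u16 next;
    #endif
    } __aligned(4) arch_spinlock_t;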
-
Matthew Leach authored
Currently, when CPUs are brought online via a spin-table, the address they should jump to is written to the cpu-release-addr in the kernel's native endianness. As the kernel may switch endianness, secondaries might read the value byte-reversed from what was intended, and they would jump to the wrong address. As the only current arm64 spin-table implementations are little-endian, tighten the arm64 spin-table definition such that the value written to cpu-release-addr is _always_ little-endian regardless of the endianness of any CPU. To handle this, a CPU spinning big-endian must byte-reverse the value before jumping to it. Signed-off-by: Matthew Leach <matthew.leach@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
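A sketch of the publishing side (variable names per the spin-table code of that era; treat as illustrative):

    /*
     * Write the release address as LE regardless of the native
     * endianness of the kernel; a BE reader must byte-reverse it
     * before use, as mandated by the boot protocol.
     */
    release_addr[0] = (void *)cpu_to_le64(__pa(secondary_holding_pen));
    __flush_dcache_area(release_addr, sizeof(release_addr[0]));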
-
Matthew Leach authored
The endianness of memory accesses at EL2 and EL1 are configured by SCTLR_EL2.EE and SCTLR_EL1.EE respectively. When the kernel is booted, the state of SCTLR_EL{2,1}.EE is unknown, and thus the kernel must ensure that they are set before performing any memory accesses. This patch ensures that SCTLR_EL{2,1} are configured appropriately at boot for kernels of either endianness. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Matthew Leach <matthew.leach@arm.com> [catalin.marinas@arm.com: fix SCTLR_EL1.E0E bit setting in head.S] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Matthew Leach authored
Currently, the code for setting the __cpu_boot_mode flag is munged in with el2_setup. This makes things difficult on a BE bring-up, as a memory access has to occur before el2_setup, which is where we'd like to set the endianness of the current EL. Create a new function for setting __cpu_boot_mode and have el2_setup return the mode the CPU booted in. Also define a new constant in virt.h, BOOT_CPU_MODE_EL1, for readability. Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Matthew Leach <matthew.leach@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Matthew Leach authored
Add CPU_LE and CPU_BE to select assembler code in little and big endian configurations respectively. Signed-off-by: Matthew Leach <matthew.leach@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
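A sketch of the macros (C preprocessor definitions visible to assembly files):

    #ifdef CONFIG_CPU_BIG_ENDIAN
    #define CPU_BE(code...) code
    #define CPU_LE(code...)
    #else
    #define CPU_BE(code...)
    #define CPU_LE(code...) code
    #endif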
-
Matthew Leach authored
Currently the sigreturn compat code is copied to an offset in the vectors table. When using a BE kernel this data will be stored in the wrong endianness, so when returning from a signal on a 32-bit BE system, arbitrary code will be executed. Instead of declaring the code inside a struct and copying that, use the assembler's .byte directives to store the code in the correct endianness regardless of platform endianness. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Matthew Leach <matthew.leach@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Matthew Leach authored
The arm64 port contains wrappers for arm32 syscalls that pass 64-bit values. These wrappers concatenate the two registers to hold a 64-bit value in a single X register. On BE, however, the lower and higher words are swapped. Create a new assembler macro, regs_to_64, that swaps the two source registers in the orr instruction when building for BE. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Matthew Leach <matthew.leach@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
This patch adds support for BE8 AArch32 tasks to the compat layer. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
uname -m reports the machine field from the current utsname, which should reflect the endianness of the system. This patch reports ELF_PLATFORM for the field, so that everything appears consistent from userspace. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
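A sketch of the platform strings involved (assuming the usual arm64 ELF_PLATFORM definitions):

    #ifdef CONFIG_CPU_BIG_ENDIAN
    #define ELF_PLATFORM    ("aarch64_be")
    #else
    #define ELF_PLATFORM    ("aarch64")
    #endif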
-
Will Deacon authored
This patch adds support for the aarch64_be ELF format to the AArch64 ELF loader. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
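A sketch of the loader-side check:

    /* ELF headers must match the kernel's endianness to be runnable. */
    #ifdef __AARCH64EB__
    #define ELF_DATA        ELFDATA2MSB
    #else
    #define ELF_DATA        ELFDATA2LSB
    #endif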
-
Will Deacon authored
For big-endian processors, we must include linux/byteorder/big_endian.h to get the relevant definitions for swabbing between CPU order and a defined endianness. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
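A sketch of the resulting byteorder header:

    #ifdef __AARCH64EB__
    #include <linux/byteorder/big_endian.h>
    #else
    #include <linux/byteorder/little_endian.h>
    #endif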
-
Will Deacon authored
This patch adds big-endian support to the AArch64 top-level Makefile. This currently just passes the relevant flags to the toolchain and is predicated on a Kconfig option that will be introduced later on. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
This patch adds support for using PSCI CPU_OFF calls for CPU hotplug. With this code it is possible to hot unplug CPUs with "psci" as their boot-method, as long as there's an appropriate cpu_off function id specified in the psci node. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
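A sketch of the cpu_die hook (struct and constant names are assumptions based on the PSCI code of that era):

    static void cpu_psci_cpu_die(unsigned int cpu)
    {
            struct psci_power_state state = {
                    .type = PSCI_POWER_STATE_TYPE_POWER_DOWN,
            };
            int ret = psci_ops.cpu_off(state);

            /* cpu_off only returns on failure */
            pr_crit("unable to power off CPU%u (%d)\n", cpu, ret);
    }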
-
Mark Rutland authored
This patch adds the basic infrastructure necessary to support CPU_HOTPLUG on arm64, based on the arm implementation. Actual hotplug support will depend on an implementation's cpu_operations (e.g. PSCI). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
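The hotplug-facing hooks this infrastructure expects from a cpu_operations implementation, sketched as struct fields (names assumed; see the cpu_operations entries below):

    #ifdef CONFIG_HOTPLUG_CPU
            int     (*cpu_disable)(unsigned int cpu);  /* may veto the unplug */
            void    (*cpu_die)(unsigned int cpu);      /* does not return */
    #endif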
-
Mark Rutland authored
With the advent of CPU_HOTPLUG, the enable-method property for CPU0 may tell us something useful (i.e. how to hotplug it back on), so we must read it along with the enable methods of all the other CPUs. Even on UP the enable-method may tell us useful information (e.g. if a core has some mechanism that might be usable for cpuidle), so we should always read it. This patch factors out the reading of the enable method, and ensures that CPU0's enable method is read regardless of whether the kernel is built with SMP support. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
The arm64 kernel has an internal holding pen, which is necessary for some systems where we can't bring CPUs online individually and must hold multiple CPUs in a safe area until the kernel is able to handle them. The current SMP infrastructure for arm64 is closely coupled to this holding pen, and alternative boot methods must launch CPUs into the pen, where they sit before they are launched into the kernel proper. With PSCI (and possibly other future boot methods), we can bring CPUs online individually, and need not perform the secondary_holding_pen dance. Instead, this patch factors the holding pen management code out to the spin-table boot method code, as it is the only boot method requiring the pen. A new entry point for secondaries, secondary_entry, is added for other boot methods to use, which bypasses the holding pen and its associated overhead when bringing CPUs online. The smp.pen.text section is also removed, as the pen can live in head.text without problem. The cpu_operations structure is extended with two new functions, cpu_boot and cpu_postboot, for bringing a CPU into the kernel and performing any post-boot cleanup required by a boot method (e.g. resetting the secondary_holding_pen_release to INVALID_HWID). Documentation is added for cpu_operations. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
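A sketch of the extended structure (cpu_boot/cpu_postboot per this entry; the remaining fields are assumptions from the surrounding entries):

    struct cpu_operations {
            const char      *name;
            int             (*cpu_init)(struct device_node *, unsigned int);
            int             (*cpu_prepare)(unsigned int);
            int             (*cpu_boot)(unsigned int);
            void            (*cpu_postboot)(void);
    };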
-
Mark Rutland authored
For hotplug support, we're going to want a place to store operations that do more than bring CPUs online, and it makes sense to group these with our current smp_enable_ops. For cpuidle support, we'll want to group additional functions, and we may want them even for UP kernels. This patch renames smp_enable_ops to the more general cpu_operations, and pulls the definitions out of smp code such that they can be used in UP kernels. While we're at it, fix up instances of the cpu parameter to be an unsigned int, drop the init markings and rename the *_cpu functions to cpu_* to reduce future churn when cpu_operations is extended. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
The functions in psci.c are only used from smp_psci.c, and smp_psci cannot function without psci.c. Additionally psci.c is built when !SMP, where it's expected that cpu_suspend may be useful. This patch unifies the two files, removing pointless duplication and paving the way for PSCI support in UP systems. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 24 Oct, 2013 4 commits
-
-
Mark Rutland authored
There are a few points in the arm64 booting document which are unclear (such as the initial state of secondary CPUs), and/or have not been documented (PSCI is a supported mechanism for booting secondary CPUs). This patch amends the arm64 boot document to better express the (existing) requirements, and to describe PSCI as a supported booting mechanism. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Dave Martin <dave.martin@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Fu Wei <tekkamanninja@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
This function may be called from loadable modules, so it needs exporting. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Loc Ho <lho@apm.com>
-
Will Deacon authored
This patch introduces cmpxchg64_relaxed for arm64 using the existing cmpxchg_local macro, which performs a cmpxchg operation (up to 64 bits) without barrier semantics. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
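A sketch of the mapping:

    /* cmpxchg_local is already an unbarriered cmpxchg of up to 64 bits,
     * so the relaxed 64-bit variant can simply reuse it. */
    #define cmpxchg64_relaxed(ptr, o, n)    cmpxchg_local((ptr), (o), (n))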
-
Will Deacon authored
Our spinlocks are only 32-bit (2x16-bit tickets) and our cmpxchg can deal with 8 bytes (as one would hope!). This patch wires up the cmpxchg-based lockless lockref implementation for arm64. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
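Alongside selecting ARCH_USE_CMPXCHG_LOCKREF in Kconfig, the lockless code needs a cheap "is it unlocked" test on the ticket lock; a sketch:

    /* A ticket lock is unlocked when no tickets are outstanding. */
    #define arch_spin_value_unlocked(lock)  ((lock).owner == (lock).next)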
-