- 24 Jun, 2015 1 commit
-
-
Guenter Roeck authored
mips:allmodconfig fails to build because the driver depends on CLKDEV_LOOKUP, which is not configured there:
    drivers/clocksource/timer-sp804.c: In function '__sp804_clocksource_and_sched_clock_init':
    drivers/clocksource/timer-sp804.c:88:3: error: implicit declaration of function 'clk_get_sys'
Fixes: 0b7402dc ("ARM: 8366/1: move Dual-Timer SP804 driver to drivers/clocksource") Acked-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
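For context, a minimal sketch of the kind of call inside the SP804 driver that creates the CLKDEV_LOOKUP dependency; the lookup names here are illustrative rather than copied from the driver:

    #include <linux/clkdev.h>   /* clk_get_sys() is provided by the clkdev layer (CLKDEV_LOOKUP) */
    #include <linux/clk.h>
    #include <linux/err.h>

    /* Illustrative only: look up the timer clock by a system-wide name rather
     * than by struct device, which is why the driver needs CLKDEV_LOOKUP. */
    static struct clk *example_sp804_get_clock(const char *name)
    {
            struct clk *clk;

            clk = clk_get_sys("sp804", name);       /* undeclared when CLKDEV_LOOKUP=n */
            if (IS_ERR(clk))
                    return NULL;

            if (clk_prepare_enable(clk)) {
                    clk_put(clk);
                    return NULL;
            }
            return clk;
    }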
-
- 12 Jun, 2015 9 commits
-
-
Russell King authored
Commit 32e55a77 ("ARM: 8389/1: Add cpu_resume_arm() for firmwares that resume in ARM state") needed to introduce a new usage of BSYM() to fix a problem with a previous patch. This in turn causes a conflict with the "bsym" branch which removes this symbol, replacing it with a 'badr' assembly macro. Fix this up. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
-
Russell King authored
Conflicts: arch/arm/kernel/perf_event_cpu.c
-
Stefan Agner authored
In Thumb2 mode, the stack register r13 is deprecated if the destination register is the program counter (r15). Similar to head.S, head-nommu.S uses r13 to store the return address used after configuring the CPU's CP15 register. However, since we do not enable an MMU, there will be no address switch and it is possible to use a branch-with-link instruction to call __after_proc_init. Avoid using r13 completely by using bl to call __after_proc_init and get rid of __secondary_switched. Besides removing unnecessary complexity, this also fixes a compiler warning when compiling a !MMU kernel: Warning: Use of r13 as a source register is deprecated when r15 is the destination register. Tested-by: Maxime Coquelin <mcoquelin.stm32@gmail.com> Signed-off-by: Stefan Agner <stefan@agner.ch> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
Conflicts: arch/arm/kernel/head.S
-
Russell King authored
-
Russell King authored
-
Russell King authored
Fix:
    arch/arm/kernel/sleep.S:121: Error: selected processor does not support ARM opcodes
    arch/arm/kernel/sleep.S:123: Error: attempt to use an ARM instruction on a Thumb-only processor -- `adr r9,1f+1'
    arch/arm/kernel/sleep.S:124: Error: attempt to use an ARM instruction on a Thumb-only processor -- `bx r9'
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Stephen Boyd authored
Some platforms always enter the kernel in the ARM state even if the kernel is compiled for THUMB2. Add a small wrapper on top of cpu_resume() that switches into THUMB2 state. This provides the functionality to fix a problem reported by Kevin Hilman on next-20150601 where the ifc6410 fails to boot a THUMB2 kernel because the platform's firmware always enters the kernel in ARM mode from deep idle states. (rmk: tweaked to work without BSYM->badr changes.) Reported-by: Kevin Hilman <khilman@linaro.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Lina Iyer <lina.iyer@linaro.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
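A purely illustrative sketch of the intended use; the firmware call below is a made-up placeholder for whatever warm-boot-address interface the platform actually has, and the cpu_resume_arm() declaration is assumed to come with this change:

    #include <linux/types.h>
    #include <linux/init.h>
    #include <asm/memory.h>             /* virt_to_phys() */

    extern void cpu_resume_arm(void);   /* new wrapper: entered in ARM state, switches to Thumb, then cpu_resume() */

    /* Placeholder for the platform's real firmware interface (hypothetical). */
    static int firmware_set_warm_boot_addr(phys_addr_t addr)
    {
            (void)addr;
            return 0;
    }

    static int __init example_pm_init(void)
    {
            /* Hand the firmware cpu_resume_arm(), not cpu_resume(), because this
             * firmware always re-enters the kernel in ARM state from deep idle. */
            return firmware_set_warm_boot_addr(virt_to_phys(cpu_resume_arm));
    }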
-
- 10 Jun, 2015 2 commits
-
-
Hauke Mehrtens authored
These options make it possible to overwrite the data and instruction prefetching behavior of the ARM PL310 cache controller. Signed-off-by: Hauke Mehrtens <hauke@hauke-m.de> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Daniel Thompson authored
Commit cb1293e2 ("ARM: 8375/1: disable some options on ARMv7-M") causes the build to fail on ARMv7-M machines:
    CC      arch/arm/kernel/asm-offsets.s
    In file included from include/linux/sem.h:5:0,
                     from include/linux/sched.h:35,
                     from arch/arm/kernel/asm-offsets.c:14:
    include/linux/rcupdate.h: In function 'rcu_read_lock_sched_held':
    include/linux/rcupdate.h:539:2: error: implicit declaration of function 'arch_irqs_disabled' [-Werror=implicit-function-declaration]
      return preempt_count() != 0 || irqs_disabled();
asm-generic/irqflags.h provides an implementation of arch_irqs_disabled(). Let's grab an implementation from there! Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Acked-by: Maxime Coquelin <maxime.coquelin@st.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
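For reference, the generic fallback in asm-generic/irqflags.h is essentially the following (sketched and simplified); an architecture only needs to supply arch_local_save_flags() and arch_irqs_disabled_flags():

    /* Simplified sketch of the asm-generic fallback. */
    #ifndef arch_irqs_disabled
    static inline int arch_irqs_disabled(void)
    {
            return arch_irqs_disabled_flags(arch_local_save_flags());
    }
    #endif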
-
- 06 Jun, 2015 5 commits
-
-
Mike Looijmans authored
When dma-coherent transfers are enabled, the mmap call must not change the pg_prot flags in the vma struct. Split arm_dma_mmap into a common part and a specific part, and add an "arm_coherent_dma_mmap" implementation that does not alter the page protection flags. Tested on a topic-miami board (Zynq) using the ACP port to transfer data between FPGA and CPU using the Dyplo framework. Without this patch, byte-wise access to mmapped coherent DMA memory was about 20x slower because the memory was marked as non-cacheable, and transfer speeds would not exceed 240MB/s. After this patch, the mapped memory is cacheable and the transfer speed is again 600MB/s (limited by the FPGA) when the data is in the L2 cache, while data integrity is maintained. The patch has no effect on non-coherent DMA. Signed-off-by: Mike Looijmans <mike.looijmans@topic.nl> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
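A minimal sketch of the split described above, written as if inside arch/arm/mm/dma-mapping.c of that era (helpers such as dma_to_pfn() and __get_dma_pgprot() already live there; bounds checks and error handling are omitted):

    /* Common mmap helper: maps the DMA buffer into userspace using whatever
     * protection bits the vma already carries. */
    static int __arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
                              void *cpu_addr, dma_addr_t dma_addr, size_t size,
                              struct dma_attrs *attrs)
    {
            unsigned long pfn = dma_to_pfn(dev, dma_addr);
            unsigned long off = vma->vm_pgoff;

            return remap_pfn_range(vma, vma->vm_start, pfn + off,
                                   vma->vm_end - vma->vm_start,
                                   vma->vm_page_prot);
    }

    /* Non-coherent path: downgrade the mapping (e.g. to non-cacheable). */
    int arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
                     void *cpu_addr, dma_addr_t dma_addr, size_t size,
                     struct dma_attrs *attrs)
    {
            vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot);
            return __arm_dma_mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
    }

    /* Coherent path: leave vm_page_prot alone so the mapping stays cacheable. */
    static int arm_coherent_dma_mmap(struct device *dev, struct vm_area_struct *vma,
                                     void *cpu_addr, dma_addr_t dma_addr, size_t size,
                                     struct dma_attrs *attrs)
    {
            return __arm_dma_mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
    }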
-
Michael van der Westhuizen authored
Fixes the TCM initialisation code to handle TCM banks that are present but inaccessible due to TrustZone configuration. This is the default case when enabling the non-secure world. It may also be the case that the user decided to use TCM for TrustZone. This change has exposed a bug in the handling of TCM where no TCM bank was usable (the 0-size TCM case); this change addresses the resulting hang. This code only handles the ARMv6 TCMTR register format, and will not work correctly on boards that use the ARMv7 (or any other) format. This is handled by performing an early exit from the initialisation function when the TCMTR reports any format other than v6. Signed-off-by: Michael van der Westhuizen <michael@smart-africa.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Nathan Lynch authored
When using a toolchain with gold as the default linker, the VDSO build fails:
    VDSO    arch/arm/vdso/vdso.so.raw
    HOSTCC  arch/arm/vdso/vdsomunge
    MUNGE   arch/arm/vdso/vdso.so.dbg
    OBJCOPY arch/arm/vdso/vdso.so
    BFD: arch/arm/vdso/vdso.so: Not enough room for program headers, try linking with -N
For whatever reason, ld.gold is omitting an exidx program header that ld.bfd emits, and even when I work around that, I don't get a working VDSO. For now, instead of supporting gold (which will fail to link the kernel anyway since it does not implement --pic-veneer), direct the compiler to use the traditional bfd linker. This is accomplished by using -fuse-ld, which is implemented in GCC 4.8 and later. Note: one limitation of this is that if the toolchain is configured to use gold by default, and the bfd linker is not in $PATH, the VDSO build will fail:
    VDSO    arch/arm/vdso/vdso.so.raw
    collect2: fatal error: cannot find 'ld'
This will happen if CROSS_COMPILE begins with a path such as /opt/bin/arm-linux-gnu- but /opt/bin is not in $PATH. This is considered an acceptable corner-case limitation and is easily worked around. Additional note: we use cc-option instead of cc-ldoption so that -fuse-ld=bfd is placed in the command line if the compiler recognizes the option. Using cc-ldoption results in an attempt to link, which fails in the situation just described, causing -fuse-ld=bfd to be omitted and gold to be used for the VDSO link, which is what we're trying to prevent. Reported-by: Stefan Agner <stefan@agner.ch> Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Nathan Lynch authored
Currently the VDSO's link options are kind of a mess spread between ccflags-y and cmd_vdsold. Collect linker directives into one variable, VDSO_LDFLAGS, and use that in cmd_vdsold. Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
Merge branch 'for-rmk/perf' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable There's quite a lot here, most of it from Mark Rutland, who has been working on big.LITTLE PMU support for a while now. His work also brings us significantly closer to moving the bulk of the CPU PMU driver out into drivers/, where it can be shared with arm64. As part of this work, there is a small patch to perf/core, which has been Acked-by PeterZ and doesn't conflict with tip/perf/core at present. I've kept that patch on a separate branch, merged in here, so that the tip guys can pull it too if any unexpected issues crop up. Please note that there is a conflict with mainline, since we remove perf_event_cpu.c. The correct resolution is also to remove the file, since the changes there are already reflected in the rework (and this resolution is already included in linux-next).
-
- 02 Jun, 2015 9 commits
-
-
Russell King authored
A recent change in kernel/acct.c added a new warning for many configurations on ARM: kernel/acct.c: In function 'acct_pin_kill': arch/arm/include/asm/cmpxchg.h:122:3: warning: value computed is not used [-Wunused-value] The code is in fact correct, it's just a cmpxchg() call that intentionally ignores the result, and no other code does that. The warning does not show up on x86 because of the way that its cmpxchg() macro is written. This changes the ARM implementation to use a similar construct with a compound expression instead of a typecast, which causes the compiler to not complain about an unused result. Fix the other macros in this file in a similar way, and place them just below their function implementations. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
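The shape of the fix, sketched rather than quoted from the kernel header: wrap the call in a GCC statement expression instead of only casting the result, so a caller that discards the value no longer trips -Wunused-value. __cmpxchg_mb() is the existing ARM helper in asm/cmpxchg.h; the _old/_new names are only for side-by-side comparison.

    /* Before (sketch): a bare cast still warns "value computed is not used". */
    #define cmpxchg_old(ptr, o, n)                                          \
            ((__typeof__(*(ptr)))__cmpxchg_mb((ptr), (unsigned long)(o),    \
                                              (unsigned long)(n),           \
                                              sizeof(*(ptr))))

    /* After (sketch): a statement expression; an unused result is fine. */
    #define cmpxchg_new(ptr, o, n)  ({                                      \
            (__typeof__(*(ptr)))__cmpxchg_mb((ptr), (unsigned long)(o),     \
                                             (unsigned long)(n),            \
                                             sizeof(*(ptr)));               \
    })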
-
Russell King authored
We want link errors if xchg() is called for a variable size we do not support. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
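The standard kernel trick for this, sketched: route unsupported sizes to a function that is declared but never defined, so misuse shows up as a link failure rather than silently doing the wrong thing. The case bodies below are placeholders for the real exclusive load/store sequences.

    extern void __bad_xchg(volatile void *ptr, int size);  /* deliberately has no definition anywhere */

    static inline unsigned long __example_xchg(unsigned long x, volatile void *ptr, int size)
    {
            unsigned long ret;

            switch (size) {
            case 1:
            case 4:
                    ret = x;                /* placeholder: real code uses ldrexb/strexb or ldrex/strex loops */
                    break;
            default:
                    __bad_xchg(ptr, size);  /* unsupported size: the caller fails at link time */
                    ret = 0;
            }
            return ret;
    }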
-
Stefan Agner authored
Vybrid has 112 peripheral interrupts which can be routed to the Cortex-M4's NVIC interrupt controller. Signed-off-by: Stefan Agner <stefan@agner.ch> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Sudeep Holla authored
Commit 5261ef2ea836 ("ARM: 8366/1: move Dual-Timer SP804 driver to drivers/clocksource") moved SP804 to drivers/clocksource, making it selectable on platforms/architectures that do not enable GENERIC_SCHED_CLOCK. That results in the following build failure (e.g. x86_64 allmodconfig):
    drivers/built-in.o: In function `__sp804_clocksource_and_sched_clock_init':
    (.init.text+0x1a0e7): undefined reference to `sched_clock_register'
This patch fixes the build by making ARM_TIMER_SP804 depend on GENERIC_SCHED_CLOCK. Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Thomas Gleixner <tglx@linutronix.de> Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
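The undefined symbol comes from the driver's sched_clock hook; a sketch modelled on the SP804 code, with the register offset and helper names assumed rather than quoted:

    #include <linux/sched_clock.h>      /* sched_clock_register() only exists when GENERIC_SCHED_CLOCK=y */
    #include <linux/init.h>
    #include <linux/io.h>

    #define EXAMPLE_TIMER_VALUE 0x04    /* current-count register (assumed offset) */

    static void __iomem *example_sched_clock_base;

    static u64 notrace example_sp804_sched_clock_read(void)
    {
            /* The SP804 counts down, so invert the value to get a monotonic clock. */
            return ~readl_relaxed(example_sched_clock_base + EXAMPLE_TIMER_VALUE);
    }

    static void __init example_sp804_sched_clock_init(void __iomem *base, unsigned long rate)
    {
            example_sched_clock_base = base;
            /* This is the call that becomes an undefined reference without GENERIC_SCHED_CLOCK. */
            sched_clock_register(example_sp804_sched_clock_read, 32, rate);
    }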
-
Sudeep Holla authored
The ARM Dual-Timer SP804 module is a peripheral found not only on ARM32 platforms but also on ARM64 platforms. This patch moves the driver out of arch/arm to drivers/clocksource so that it can be used on ARM64 platforms as well. Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Rob Herring <robh@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Olof Johansson <olof@lixom.net> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Sudeep Holla authored
The header asm/hardware/arm_timer.h is included in various machine-specific files to access TIMER_CTRL and initialise the timers to a known state. This patch introduces a new function, sp804_timer_disable, to disable the SP804 timers and uses it to initialise the timers to a known (off) state, thereby removing the dependency on the header asm/hardware/arm_timer.h. This change is in preparation for moving SP804 timer support out of arch/arm so that it can be used on ARM64 platforms. Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Olof Johansson <olof@lixom.net> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
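The new helper is tiny; a sketch of what it amounts to (register name per the SP804 documentation, offset assumed):

    #include <linux/init.h>
    #include <linux/io.h>

    #define EXAMPLE_TIMER_CTRL 0x08     /* SP804 control register (assumed offset) */

    /* Turn the timer off so platform code no longer needs asm/hardware/arm_timer.h. */
    void __init example_sp804_timer_disable(void __iomem *base)
    {
            writel(0, base + EXAMPLE_TIMER_CTRL);
    }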
-
Arnd Bergmann authored
The new veneer support for loadable modules on ARM uses the __opcode_to_mem_thumb32() function to count R_ARM_THM_CALL and R_ARM_THM_JUMP24 relocations. However, this function is not defined for big-endian kernels on ARMv5 or before, causing a compile-time error:
    arch/arm/kernel/module-plts.c: In function 'count_plts':
    arch/arm/kernel/module-plts.c:124:9: error: implicit declaration of function '__opcode_to_mem_thumb32' [-Werror=implicit-function-declaration]
             __opcode_to_mem_thumb32(0x07ff2fff)))
             ^
As we know that this part of the function is only needed for Thumb2 kernels, and that those can never happen with BE32, we can avoid the error by enclosing the code in an #ifdef. Fixes: 7d485f64 ("ARM: 8220/1: allow modules outside of bl range") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
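A sketch of the guard described above; the relocation-counting logic is heavily abridged and the function name is illustrative:

    #include <linux/elf.h>

    /* Abridged sketch of guarding the Thumb2 relocation handling. */
    static unsigned int example_count_plts(const Elf32_Rel *rel, int num)
    {
            unsigned int ret = 0;
            int i;

            for (i = 0; i < num; i++) {
                    switch (ELF32_R_TYPE(rel[i].r_info)) {
                    case R_ARM_CALL:
                    case R_ARM_PC24:
                    case R_ARM_JUMP24:
                            ret++;          /* abridged: the real code also inspects the opcode */
                            break;
    #ifdef CONFIG_THUMB2_KERNEL
                    /* Thumb2 relocations cannot occur in a BE32 (ARMv5 or older) build,
                     * and __opcode_to_mem_thumb32() does not exist there, so keep this
                     * whole block out of such configurations. */
                    case R_ARM_THM_CALL:
                    case R_ARM_THM_JUMP24:
                            ret++;          /* abridged likewise */
                            break;
    #endif
                    }
            }
            return ret;
    }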
-
Yingjoe Chen authored
Put the secondary_startup_arm() prototype in arch/arm/include/asm/smp.h so users don't have to add an extern prototype in their own code. Signed-off-by: Yingjoe Chen <yingjoe.chen@mediatek.com> Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
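The addition itself is just a declaration; a sketch of what lands in the header:

    /* arch/arm/include/asm/smp.h (sketch): platforms that start secondary
     * cores in ARM state can now reference this without a local extern. */
    extern void secondary_startup_arm(void);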
-
Yingjoe Chen authored
secondary_startup_arm is used as the ARM-mode secondary startup function when the kernel is compiled in THUMB mode; however, the label itself is still in .thumb mode. readelf shows:
    160979: c020a581   120 FUNC    GLOBAL DEFAULT    2 secondary_startup_arm
Make sure the label is in ARM mode as well. Signed-off-by: Yingjoe Chen <yingjoe.chen@mediatek.com> Tested-by: Matthias Brugger <matthias.bgg@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 01 Jun, 2015 11 commits
-
-
Russell King authored
Document that r13 is not a stack in the initialisation function, in case anyone gets other ideas. Document the registers available for the errata workarounds, and specifically which registers contain parts of the MIDR register, as well as which registers must be preserved. Lastly, use the lowest numbered available register (r0) rather than r10 for temporary storage. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
We already have the main ID register available in r9, there's no need to refetch it. Use the saved value. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
Rather than having a long sprawling __v7_setup function, which is hard to maintain properly, move the CPU errata out of line. While doing this, it was discovered that the Cortex-A15 errata had been incorrectly added:

        ldr     r10, =0x00000c08        @ Cortex-A8 primary part number
        teq     r0, r10
        bne     2f
        /* Cortex-A8 errata */
        b       3f
    2:  ldr     r10, =0x00000c09        @ Cortex-A9 primary part number
        teq     r0, r10
        bne     3f
        /* Cortex-A9 errata */
    3:  ldr     r10, =0x00000c0f        @ Cortex-A15 primary part number
        teq     r0, r10
        bne     4f
        /* Cortex-A15 errata */
    4:

This results in the Cortex-A15 test always being executed after the Cortex-A8 and Cortex-A9 errata, which is obviously not what is intended. The 'b 3f' labels should have been updated to 'b 4f'. The new structure of:

        /* Cortex-A8 Errata */
        ldr     r10, =0x00000c08        @ Cortex-A8 primary part number
        teq     r0, r10
        beq     __ca8_errata
        /* Cortex-A9 Errata */
        ldr     r10, =0x00000c09        @ Cortex-A9 primary part number
        teq     r0, r10
        beq     __ca9_errata
        /* Cortex-A15 Errata */
        ldr     r10, =0x00000c0f        @ Cortex-A15 primary part number
        teq     r0, r10
        beq     __ca15_errata
    __errata_finish:

is much cleaner and makes it easier to see that this kind of thing doesn't happen. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
Re-engineer the LPAE TTBR setup code. Rather than passing some shifted address in order to fit in a CPU register, pass either a full physical address (in the case of r4, r5 for TTBR0) or a PFN (for TTBR1). This removes the ARCH_PGD_SHIFT hack, and the last dangerous user of cpu_set_ttbr() in the secondary CPU startup code path (which was there to re-set TTBR1 to the appropriate high physical address space on Keystone2.) Tested-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
Eliminate the needless nommu version of this function, and get rid of the proc_info_list structure argument - we no longer need this in order to fix up the page table entries. Acked-by: Santosh Shilimkar <ssantosh@kernel.org> Tested-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
Re-implement the physical address space switching to be architecturally compliant. This involves flushing the caches, disabling the MMU, and only then updating the page tables. Once that is complete, the system can be brought back up again. Since we disable the MMU, we need to do the update in assembly code. Luckily, the entries which need updating are fairly trivial, and are all setup by the early assembly code. We can merely adjust each entry by the delta required. Not only does this fix the code to be architecturally compliant, but it fixes a couple of bugs too: 1. The original code would only ever update the first L2 entry covering a fraction of the kernel; the remainder were left untouched. 2. The L2 entries covering the DTB blob were likewise untouched. This solution fixes up all entries. Tested-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
The init_meminfo() method is not about initialising meminfo - it's about fixing up the physical to virtual translation so that we use a different physical address space, possibly above the 4GB physical address space. Therefore, the name "init_meminfo()" is confusing. Rename it to pv_fixup() instead. Acked-by: Santosh Shilimkar <ssantosh@kernel.org> Tested-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
There is no point in platform code doing this; let's move it into the generic code so it doesn't get duplicated. Acked-by: Santosh Shilimkar <ssantosh@kernel.org> Tested-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
Make the init_meminfo function return the offset to be applied to the phys-to-virt translation constants. This allows us to move the update into generic code, along with the requirements for this update. This avoids platforms having to know the details of the phys-to-virt translation support. Acked-by: Santosh Shilimkar <ssantosh@kernel.org> Tested-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
We ideally want the init_meminfo function to do nothing but return the delta to be applied to PHYS_OFFSET - it should do nothing else. As we can detect in platform init code whether we are running in the high physical address space, move the platform notifier initialisation entirely into the platform init code. Acked-by: Santosh Shilimkar <ssantosh@kernel.org> Tested-by: Murali Karicheri <m-karicheri2@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Russell King authored
All ARMv5 and older CPUs invalidate their caches in the early assembly setup function, prior to enabling the MMU. This is because the L1 cache should not contain any data relevant to the execution of the kernel at this point; all data should have been flushed out to memory. This requirement should also be true for ARMv6 and ARMv7 CPUs - indeed, these typically do not search their caches when caching is disabled (as it needs to be when the MMU is disabled) so this change should be safe. ARMv7 allows there to be CPUs which search their caches while caching is disabled, and it's permitted that the cache is uninitialised at boot; for these, the architecture reference manual requires that an implementation specific code sequence is used immediately after reset to ensure that the cache is placed into a sane state. Such functionality is definitely outside the remit of the Linux kernel, and must be done by the SoC's firmware before _any_ CPU gets to the Linux kernel. Changing the data cache clean+invalidate to a mere invalidate allows us to get rid of a lot of platform specific hacks around this issue for their secondary CPU bringup paths - some of which were buggy. Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Heiko Stuebner <heiko@sntech.de> Tested-by: Dinh Nguyen <dinguyen@opensource.altera.com> Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Tested-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Acked-by: Shawn Guo <shawn.guo@linaro.org> Tested-by: Thierry Reding <treding@nvidia.com> Acked-by: Thierry Reding <treding@nvidia.com> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Tested-by: Michal Simek <michal.simek@xilinx.com> Tested-by: Wei Xu <xuwei5@hisilicon.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 28 May, 2015 3 commits
-
-
Mark Rutland authored
Now that the arm_pmu framework is only used for CPU PMUs, there's no reason to keep the pseudo-generic and CPU-specific framework portions separate. This patch folds the two into perf_event.c. Signed-off-by: Mark Rutland <mark.rutland@arm.com> [will: fixed up irq cfg to match upstream] Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Now that the core arm perf code maintains no global state and all microarchitecture-specific PMU data can be fed in through the shared probe function, it's possible to use it as a library and get rid of the C file includes we have currently. This patch factors the ARMv7-specific portions out into the ARMv7 driver. For the moment this is always built if perf event support is enabled, but the preprocessor guards will leave behind an empty file. Now that perf_event_cpu.c contains no microarchitecture-specific data, the associated probing code is removed, completing its relegation to a library file. The vestigial "arm-pmu" platform device ID is removed in this patch, as it has been unused since platform files were updated to specify a more specific PMU variant. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
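After this change the per-microarchitecture driver just feeds its tables into the shared library probe; a sketch of the shape (table contents abridged, init functions and arm_pmu_device_probe() assumed to come from the arm_pmu framework headers of this series):

    #include <linux/init.h>
    #include <linux/of.h>
    #include <linux/platform_device.h>

    static const struct of_device_id armv7_pmu_of_device_ids[] = {
            { .compatible = "arm,cortex-a9-pmu", .data = armv7_a9_pmu_init },
            { .compatible = "arm,cortex-a8-pmu", .data = armv7_a8_pmu_init },
            /* ... abridged ... */
            { },
    };

    static int armv7_pmu_device_probe(struct platform_device *pdev)
    {
            /* Generic library probe: matches the DT node and calls the init hook.
             * The real driver also passes a pmu_probe_info table for non-DT boots;
             * NULL here keeps the sketch short. */
            return arm_pmu_device_probe(pdev, armv7_pmu_of_device_ids, NULL);
    }

    static struct platform_driver armv7_pmu_driver = {
            .driver = {
                    .name           = "armv7-pmu",
                    .of_match_table = armv7_pmu_of_device_ids,
            },
            .probe  = armv7_pmu_device_probe,
    };

    static int __init register_armv7_pmu_driver(void)
    {
            return platform_driver_register(&armv7_pmu_driver);
    }
    device_initcall(register_armv7_pmu_driver);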
-
Mark Rutland authored
Now that the core arm perf code maintains no global state and all microarchitecture-specific PMU data can be fed in through the shared probe function, it's possible to use it as a library and get rid of the C file includes we have currently. This patch factors the ARMv6-specific portions out into the ARMv6 driver. For the moment this is always built if perf event support is enabled, but the preprocessor guards will leave behind an empty file. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-