- 29 May, 2014 3 commits
-
-
AKASHI Takahiro authored
The recordmcount utility under scripts/ is run after compiling each object to find all the locations where _mcount() is called and record them in a specific section named __mcount_loc. The linker then collects all of this information into a table in the kernel image (between __start_mcount_loc and __stop_mcount_loc) for later use by ftrace. This patch adds the arm64-specific definitions needed to identify such locations. There are two implementations, one in C and one in Perl. On arm64, only the C version is used to build the kernel, since CONFIG_HAVE_C_RECORDMCOUNT is enabled, but the Perl version is also maintained. This patch also contains a workaround for the case where the host's elf.h lacks definitions of EM_AARCH64 and R_AARCH64_ABS64; without them, compiling the C version of recordmcount would fail. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
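The host-side workaround amounts to supplying fallback definitions; a sketch using the standard ELF constants (EM_AARCH64 is 183 and R_AARCH64_ABS64 is 257), only compiled in when the host's elf.h omits them:

    /* Fallbacks for older host elf.h headers. */
    #ifndef EM_AARCH64
    #define EM_AARCH64      183     /* ARM 64-bit architecture */
    #endif

    #ifndef R_AARCH64_ABS64
    #define R_AARCH64_ABS64 257     /* direct 64-bit relocation */
    #endif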
-
AKASHI Takahiro authored
walk_stackframe() calls unwind_frame(), and if walk_stackframe() is "notrace", unwind_frame() should also be "notrace". Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
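For reference, notrace expands to the GCC attribute that suppresses mcount instrumentation; a minimal sketch of the resulting shape:

    /* include/linux/compiler.h */
    #define notrace __attribute__((no_instrument_function))

    /* Must not be traced itself, since the notrace walk_stackframe()
     * calls it on every unwind step. */
    int notrace unwind_frame(struct stackframe *frame)
    {
            /* ... frame-pointer walking, invisible to ftrace ... */
            return 0;
    }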
-
AKASHI Takahiro authored
Since insn.h is indirectly included in asm/entry-ftrace.S, we need to hide its C-only declarations behind an __ASSEMBLY__ guard. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
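The guard follows the usual kernel header pattern; a sketch (the prototype shown is one example of the C-only declarations in insn.h):

    #ifndef __ASSEMBLY__
    /* C prototypes live here; the assembler never sees them when
     * entry-ftrace.S indirectly includes this header. */
    int aarch64_insn_patch_text(void *addrs[], u32 insns[], int cnt);
    #endif /* __ASSEMBLY__ */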
-
- 28 May, 2014 1 commit
-
-
Will Deacon authored
Merge branch 'ftrace/arm64' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace into for-next/core Core ftrace changes from Steve Rostedt, required by the arm64 support code.
-
- 23 May, 2014 7 commits
-
-
Geoff Levand authored
Change the arm64 linker script ENTRY() command to define _text as the kernel entry point. The arm64 boot protocol specifies that the kernel must be entered at the beginning of the kernel image. The existing ENTRY() command defined the symbol stext as the entry point; this emitted an incorrect entry point but caused no runtime error, because the existing entry code immediately jumps to stext. Signed-off-by: Geoff Levand <geoff@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
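The change itself is a one-line edit to the ENTRY() command in arch/arm64/kernel/vmlinux.lds.S; a before/after sketch:

    /* Before: the ELF entry point was stext, not the image start. */
    ENTRY(stext)

    /* After: _text is the first byte of the image, as the arm64 boot
     * protocol requires. */
    ENTRY(_text)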
-
zhichang.yuan authored
This patch, based on Linaro's Cortex Strings library, adds assembly-optimized strlen() and strnlen() functions. Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org> Signed-off-by: Deepak Saxena <dsaxena@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
zhichang.yuan authored
This patch, based on Linaro's Cortex Strings library, adds assembly-optimized strcmp() and strncmp() functions. Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org> Signed-off-by: Deepak Saxena <dsaxena@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
zhichang.yuan authored
This patch, based on Linaro's Cortex Strings library, adds an assembly-optimized memcmp() function. Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org> Signed-off-by: Deepak Saxena <dsaxena@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
zhichang.yuan authored
This patch, based on Linaro's Cortex Strings library, improves the performance of the assembly-optimized memset() function. Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org> Signed-off-by: Deepak Saxena <dsaxena@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
zhichang.yuan authored
This patch, based on Linaro's Cortex Strings library, improves the performance of the assembly-optimized memmove() function. Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org> Signed-off-by: Deepak Saxena <dsaxena@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
zhichang.yuan authored
This patch, based on Linaro's Cortex Strings library, improves the performance of the assembly-optimized memcpy() function. Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org> Signed-off-by: Deepak Saxena <dsaxena@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 22 May, 2014 1 commit
-
-
Will Deacon authored
Whilst our defconfig is certainly usable, there are a few extra features we can enable to make it considerably more useful, particularly if people are using it for testing:

- KVM
- SWAP
- Hugepages
- ARMv8 crypto

This patch enables these options in our defconfig. Note that the ordering has changed slightly, since this is the result of a new savedefconfig make target. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
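For concreteness, the additions are of roughly this shape; a sketch only, since savedefconfig output may order and spell entries differently (SWAP, for instance, is enabled by removing a '# CONFIG_SWAP is not set' line rather than by adding one):

    CONFIG_KVM=y
    CONFIG_TRANSPARENT_HUGEPAGE=y
    CONFIG_CRYPTO_SHA1_ARM64_CE=y
    CONFIG_CRYPTO_SHA2_ARM64_CE=y
    CONFIG_CRYPTO_GHASH_ARM64_CE=y
    CONFIG_CRYPTO_AES_ARM64_CE_CCM=y
    CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
    CONFIG_CRYPTO_AES_ARM64_NEON_BLK=y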
-
- 21 May, 2014 1 commit
-
-
AKASHI Takahiro authored
Most archs with HAVE_ARCH_CALLER_ADDR have pretty much the same definitions of CALLER_ADDRx(n). Instead of duplicating the code for all the archs, define ftrace_return_address0() and ftrace_return_address(n) macros that can be overridden by the archs if they need to do something different. Instead of 7 macros in every arch, we now have at most 2 (and in practice only 1, as ftrace_return_address0() should be the same for all archs). CALLER_ADDRx(n) will now be defined in linux/ftrace.h and use the ftrace_return_address*(n?) macros. This removes a lot of the duplicate code. Link: http://lkml.kernel.org/p/1400585464-30333-1-git-send-email-takahiro.akashi@linaro.org Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
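A sketch of the resulting generic side in linux/ftrace.h, mirroring the description above:

    /* Arch-overridable; the generic fallbacks use the compiler builtin. */
    #ifndef ftrace_return_address0
    # define ftrace_return_address0 __builtin_return_address(0)
    #endif

    #ifndef ftrace_return_address
    # define ftrace_return_address(n) __builtin_return_address(n)
    #endif

    #define CALLER_ADDR0 ((unsigned long)ftrace_return_address0)
    #define CALLER_ADDR1 ((unsigned long)ftrace_return_address(1))
    #define CALLER_ADDR2 ((unsigned long)ftrace_return_address(2))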
-
- 16 May, 2014 8 commits
-
-
Arun KS authored
If one process calls sys_reboot and that process then stops other CPUs while those CPUs are within a spin_lock() region, we can potentially encounter a deadlock scenario like the one below.

    CPU 0                         CPU 1
    -----                         -----
                                  spin_lock(my_lock)
    smp_send_stop()
     <send IPI>                   handle_IPI()
                                   disable_preemption/irqs
                                    while(1);
    <PREEMPT>
    spin_lock(my_lock) <--- Waits forever

We shouldn't attempt to run any other tasks after we send a stop IPI to a CPU, so disable preemption so that this task runs to completion. We use local_irq_disable() here for cross-arch consistency with x86. Based-on-work-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Arun KS <getarunks@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
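A sketch of the fix in one of the reboot paths (the restart hook at the end is illustrative of the arm64 code of the era):

    void machine_restart(char *cmd)
    {
            /* Disable IRQs first: once the other CPUs are parked in
             * their stop loop, nothing may preempt this task, or the
             * preempting task could wait forever on a lock that a
             * stopped CPU still holds. */
            local_irq_disable();
            smp_send_stop();

            /* Hand over to the platform restart hook. */
            if (arm_pm_restart)
                    arm_pm_restart(reboot_mode, cmd);
    }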
-
Arun KS authored
This patch ports most of commit 19ab428f "ARM: 7759/1: decouple CPU offlining from reboot/shutdown" by Stephen Warren from arch/arm to arch/arm64. machine_shutdown() is a hook for kexec. Add a comment saying so, since it isn't obvious from the function name. Halt, power-off, and restart have different requirements for stopping secondary CPUs than kexec does. The former simply require the secondary CPUs to be quiesced somehow, whereas kexec requires them to be completely non-operational, so that no matter where the kexec target images are written in RAM, they won't influence the operation of the secondary CPUs, which could happen if the CPUs were still executing some kind of spin loop. To this end, modify machine_halt, power_off, and restart to call smp_send_stop() directly, rather than calling machine_shutdown(). In machine_shutdown(), replace the call to smp_send_stop() with a call to disable_nonboot_cpus(). This completely disables all but one CPU, thus satisfying the kexec requirements described a couple of paragraphs above. Signed-off-by: Arun KS <getarunks@gmail.com> Acked-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
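The resulting split looks roughly like this (a sketch mirroring the arm commit being ported):

    /* kexec hook: secondary CPUs must be taken fully offline. */
    void machine_shutdown(void)
    {
            disable_nonboot_cpus();
    }

    /* Halt/power-off/restart: quiescing the secondaries is enough. */
    void machine_halt(void)
    {
            local_irq_disable();
            smp_send_stop();
            while (1);
    }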
-
Larry Bassel authored
Support for arch_irq_work_raise() was missing from arm64 (a prerequisite for FULL_NOHZ). This patch is based on the arm32 patch, commit bf18525f ("ARM: 7872/1: Support arch_irq_work_raise() via self IPIs") by Stephen Boyd, dated Tue Oct 29 20:32:56 2013 +0100: By default, IRQ work is run from the tick interrupt (see irq_work_run() in update_process_times()). When we're in full NOHZ mode, restarting the tick requires the use of IRQ work, and if the only place we run IRQ work is in the tick interrupt, we have an unbreakable cycle. Implement arch_irq_work_raise() via self IPIs to break this cycle and get the tick started again. Note that we implement this via IPIs, which are only available on SMP builds. This shouldn't be a problem because full NOHZ is only supported on SMP builds anyway. Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Reviewed-by: Kevin Hilman <khilman@linaro.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Larry Bassel <larry.bassel@linaro.org> Reviewed-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
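On arm64 the implementation amounts to a self-IPI; a sketch of the shape it takes:

    #ifdef CONFIG_IRQ_WORK
    void arch_irq_work_raise(void)
    {
            /* IPI ourselves so irq_work_run() is entered from interrupt
             * context even when the tick is stopped. */
            smp_cross_call(cpumask_of(smp_processor_id()), IPI_IRQ_WORK);
    }
    #endif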
-
Mark Brown authored
Add support for parsing the explicit topology bindings to discover the topology of the system. Since it is not currently clear how to map multi-level clusters for the scheduler all leaf clusters are presented to the scheduler at the same level. This should be enough to provide good support for current systems. Signed-off-by: Mark Brown <broonie@linaro.org> Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
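The bindings in question are the devicetree cpu-map nodes; a minimal sketch for a single cluster of two cores (CPU0/CPU1 are assumed cpu-node labels):

    cpus {
            cpu-map {
                    cluster0 {
                            core0 { cpu = <&CPU0>; };
                            core1 { cpu = <&CPU1>; };
                    };
            };
    };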
-
Mark Brown authored
As a legacy of the way 32-bit ARM did things, the topology code uses a null topology map by default and then overwrites it later, mapping cores with no information to a cluster by themselves. In order to make it simpler to reset things as part of recovering from parse failures in firmware information, directly set this configuration on init. A core will always be its own sibling, so there should be no risk of confusion with firmware-provided information. Signed-off-by: Mark Brown <broonie@linaro.org> Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
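A sketch of the init-time default, close in shape to the arm64 topology code of the time:

    static void __init reset_cpu_topology(void)
    {
            unsigned int cpu;

            for_each_possible_cpu(cpu) {
                    struct cpu_topology *cpu_topo = &cpu_topology[cpu];

                    cpu_topo->thread_id = -1;
                    cpu_topo->core_id = 0;
                    cpu_topo->cluster_id = -1;

                    /* A core is always its own sibling. */
                    cpumask_clear(&cpu_topo->core_sibling);
                    cpumask_set_cpu(cpu, &cpu_topo->core_sibling);
                    cpumask_clear(&cpu_topo->thread_sibling);
                    cpumask_set_cpu(cpu, &cpu_topo->thread_sibling);
            }
    }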
-
Zi Shen Lim authored
Remove unused and deprecated mc_capable() and smt_capable(). Both were added recently by f6e763b9 ("arm64: topology: Implement basic CPU topology support"). Uses of both were removed by 8e7fbcbc ("sched: Remove stale power aware scheduling remnants and dysfunctional knobs"). Signed-off-by: Zi Shen Lim <zlim@broadcom.com> Signed-off-by: Mark Brown <broonie@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
This reverts commit bc07c2c6. While the aim is increased security for --x memory maps, it does not protect against kernel level reads. Until SECCOMP is implemented for arm64, revert this patch to avoid giving a false idea of execute-only mappings. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
FPSIMD register bank context switching and crypto algorithms optimisations for arm64 from Ard Biesheuvel.

* tag 'for-3.16' of git://git.linaro.org/people/ard.biesheuvel/linux-arm:
  arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions
  arm64: pull in <asm/simd.h> from asm-generic
  arm64/crypto: AES in CCM mode using ARMv8 Crypto Extensions
  arm64/crypto: AES using ARMv8 Crypto Extensions
  arm64/crypto: GHASH secure hash using ARMv8 Crypto Extensions
  arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions
  arm64/crypto: SHA-1 using ARMv8 Crypto Extensions
  arm64: add support for kernel mode NEON in interrupt context
  arm64: defer reloading a task's FPSIMD state to userland resume
  arm64: add abstractions for FPSIMD state manipulation
  asm-generic: allow generic unaligned access if the arch supports it

Conflicts:
  arch/arm64/include/asm/thread_info.h
-
- 14 May, 2014 7 commits
-
-
Ard Biesheuvel authored
This adds ARMv8 implementations of AES in ECB, CBC, CTR and XTS modes, both for ARMv8 with Crypto Extensions and for plain ARMv8 NEON. The Crypto Extensions version can only run on ARMv8 implementations that have support for these optional extensions. The plain NEON version is a table-based yet time-invariant implementation. All S-box substitutions are performed in parallel, leveraging the wide range of ARMv8's tbl/tbx instructions, and the huge NEON register file, which can comfortably hold the entire S-box and still have room to spare for doing the actual computations. The key expansion routines were borrowed from aes_generic. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Ard Biesheuvel authored
This patch adds support for the AES-CCM encryption algorithm for CPUs that have support for the AES part of the ARM v8 Crypto Extensions. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
This patch adds support for the AES symmetric encryption algorithm for CPUs that have support for the AES part of the ARM v8 Crypto Extensions. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
This is a port to ARMv8 (Crypto Extensions) of the Intel implementation of the GHASH Secure Hash (used in the Galois/Counter chaining mode). It relies on the optional PMULL/PMULL2 instruction (polynomial multiply long, what Intel call carry-less multiply). Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
This patch adds support for the SHA-224 and SHA-256 Secure Hash Algorithms for CPUs that have support for the SHA-2 part of the ARM v8 Crypto Extensions. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
This patch adds support for the SHA-1 Secure Hash Algorithm for CPUs that have support for the SHA-1 part of the ARM v8 Crypto Extensions. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 12 May, 2014 5 commits
-
-
AKASHI Takahiro authored
Some kernel files may include both linux/compat.h and asm/compat.h, directly or indirectly. Since both header files contain is_compat_task() under !CONFIG_COMPAT, compiling them with !CONFIG_COMPAT will eventually fail. Such files include kernel/auditsc.c, kernel/seccomp.c and init/do_mounts.c (do_mounts.c may read asm/compat.h via asm/ftrace.h once ftrace is implemented). So this patch proactively 1) removes is_compat_task() under !CONFIG_COMPAT from asm/compat.h and 2) replaces asm/compat.h with linux/compat.h in kernel/*.c, but asm/compat.h is still necessary in ptrace.c and process.c because they use is_compat_thread(). Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
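A sketch of the asm/compat.h side after the change:

    /* arch/arm64/include/asm/compat.h */
    #ifdef CONFIG_COMPAT
    static inline int is_compat_task(void)
    {
            return test_thread_flag(TIF_32BIT);
    }
    #endif
    /* No !CONFIG_COMPAT stub here any more: callers pick up the single
     * definition from linux/compat.h, so including both headers no
     * longer produces a redefinition. */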
-
AKASHI Takahiro authored
This macro, regs_return_value, is used mainly by audit to record a system call's result, but it may also be used in test_kprobes.c. Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Richard Guy Briggs <rgb@redhat.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
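On arm64 this is a one-liner over pt_regs, since a system call's return value is delivered in x0:

    /* arch/arm64/include/asm/ptrace.h */
    #define regs_return_value(regs)  ((regs)->regs[0])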
-
AKASHI Takahiro authored
As done on arm, this change makes it easy to confirm that we invoke syscall-related hooks, including the syscall tracepoint, audit and seccomp (which will be implemented later), in the correct order: operations are undone on exit in the opposite order to that in which they were done on entry. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
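A sketch of the mirrored ordering, written against the generic tracehook/audit helpers (the arm64 file wraps these slightly differently, so treat this as illustrative):

    asmlinkage int syscall_trace_enter(struct pt_regs *regs)
    {
            /* Entry: report to the tracer first, audit last. */
            if (test_thread_flag(TIF_SYSCALL_TRACE))
                    tracehook_report_syscall_entry(regs);
            /* audit_syscall_entry(...) would be invoked here. */
            return regs->syscallno;
    }

    asmlinkage void syscall_trace_exit(struct pt_regs *regs)
    {
            /* Exit: the same hooks, undone in the opposite order. */
            audit_syscall_exit(regs);
            if (test_thread_flag(TIF_SYSCALL_TRACE))
                    tracehook_report_syscall_exit(regs, 0);
    }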
-
AKASHI Takahiro authored
Currently, syscall_trace() is called only for ptrace. With additional TIF_xx flags defined, it is now called in all of the audit, ftrace and seccomp cases, in addition to ptrace. Acked-by: Richard Guy Briggs <rgb@redhat.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
Since mdscr_el1 is part of the debug register group, it is highly likely to be trapped by a hypervisor to prevent virtual machines from debugging (buggering?) each other. Unfortunately, this absolutely destroys our performance, since we access the register on many of our low-level fault handling paths to keep track of the various debug state machines. This patch removes our dependency on mdscr_el1 in the case that debugging is not being used. More specifically we:

- Use TIF_SINGLESTEP to indicate that a task is stepping at EL0 and avoid disabling step in the MDSCR when we don't need to. MDSCR_EL1.SS handling is moved to kernel_entry, when trapping from userspace.

- Ensure debug exceptions are re-enabled on *all* exception entry paths, even the debug exception handling path (where we re-enable exceptions after invoking the handler). Since we can now rely on MDSCR_EL1.SS being cleared by the entry code, exception handlers can usually enable debug immediately before enabling interrupts.

- Remove all debug exception unmasking from ret_to_user and el1_preempt, since we will never get here with debug exceptions masked.

This results in a slight change to kernel debug behaviour, where we now step into interrupt handlers and data aborts from EL1 when debugging the kernel, which is actually a useful thing to do. A side-effect of this is that it *does* potentially prevent stepping off {break,watch}points when there is a high-frequency interrupt source (e.g. a timer), so a debugger would need to use either breakpoints or manually disable interrupts to get around this issue. With this patch applied, guest performance is restored under KVM when debug register accesses are trapped (and we get a measurable performance increase on the host on Cortex-A57 too). Cc: Ian Campbell <ian.campbell@citrix.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 09 May, 2014 7 commits
-
-
Linus Torvalds authored
-
Linus Torvalds authored
Pull x86 fixes from Peter Anvin: "A somewhat unpleasantly large collection of small fixes. The big ones are the __visible tree sweep and a fix for 'earlyprintk=efi,keep'. It was using __init functions with predictably suboptimal results. Another key fix is a build fix which would produce output that simply would not decompress correctly in some configurations, due to the existing Makefiles picking up an unfortunate local label and mistaking it for the global symbol _end. Additional fixes include the handling of 64-bit numbers when setting the vdso data page (a latent bug which became manifest when i386 started exporting a vdso with time functions), a fix to the new MSR manipulation accessors which would cause features to not get properly unblocked, a build fix for 32-bit userland, and a few new platform quirks"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86, vdso, time: Cast tv_nsec to u64 for proper shifting in update_vsyscall()
  x86: Fix typo in MSR_IA32_MISC_ENABLE_LIMIT_CPUID macro
  x86: Fix typo preventing msr_set/clear_bit from having an effect
  x86/intel: Add quirk to disable HPET for the Baytrail platform
  x86/hpet: Make boot_hpet_disable extern
  x86-64, build: Fix stack protector Makefile breakage with 32-bit userland
  x86/reboot: Add reboot quirk for Certec BPC600
  asmlinkage: Add explicit __visible to drivers/*, lib/*, kernel/*
  asmlinkage, x86: Add explicit __visible to arch/x86/*
  asmlinkage: Revert "lto: Make asmlinkage __visible"
  x86, build: Don't get confused by local symbols
  x86/efi: earlyprintk=efi,keep fix
-
Will Deacon authored
In order to ensure ordering and completion of inner-shareable maintenance instructions (cache and TLB) on AArch64, we can use the -ish suffix to the dmb and dsb instructions respectively. This patch updates our low-level cache and tlb maintenance routines to use the inner-shareable barrier variants where appropriate. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
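As an example of the effect, the TLB flush routines end up in roughly this shape (a sketch consistent with the commit's description):

    static inline void flush_tlb_all(void)
    {
            dsb(ishst);                  /* order prior page-table writes */
            asm("tlbi vmalle1is");       /* inner-shareable TLB invalidate */
            dsb(ish);                    /* complete in inner-shareable domain */
            isb();
    }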
-
Will Deacon authored
In order to ensure completion of inner-shareable maintenance instructions (cache and TLB) on AArch64, we can use the -ish suffix to the dsb instruction. This patch relaxes our dsb sy instructions to dsb ish where possible. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
set_cpu_boot_mode_flag is used to identify which exception levels are encountered across the system by CPUs trying to enter the kernel. The basic algorithm is: if a CPU is booting at EL2, it will set a flag at an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable. Otherwise, a flag is set at an offset of zero into the same cacheline. This enables us to check that all CPUs booted at the same exception level. This cacheline is written with the stage-1 MMU off (that is, via a strongly-ordered mapping) and will bypass any clean lines in the cache, leading to potential coherence problems when the variable is later checked via the normal, cacheable mapping of the kernel image. This patch reworks the broken flushing code so that we:

(1) Use a DMB to order the strongly-ordered write of the cacheline against the subsequent cache-maintenance operation (by-VA operations only hazard against normal, cacheable accesses).

(2) Use a single dc ivac instruction to invalidate any clean lines containing a stale copy of the line after it has been updated.

Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
The recently introduced acquire/release accessors refer to smp_mb() in the !CONFIG_SMP case. This is confusing when reading the code, so use barrier() directly when we know we're UP. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
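A sketch of the resulting UP/SMP split in asm/barrier.h:

    #ifdef CONFIG_SMP
    #define smp_mb()        dmb(ish)
    #else
    #define smp_mb()        barrier()   /* compiler barrier suffices on UP */
    #endif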
-
Will Deacon authored
Now that all callers of the barrier macros are updated to pass the mandatory options, update the macros so the option is actually used. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
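With the option now mandatory, the macros take roughly this form (a sketch matching the arm64 definitions of the time):

    #define isb()           asm volatile("isb" : : : "memory")
    #define dmb(opt)        asm volatile("dmb " #opt : : : "memory")
    #define dsb(opt)        asm volatile("dsb " #opt : : : "memory")

    #define mb()            dsb(sy)
    #define rmb()           dsb(ld)
    #define wmb()           dsb(st)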
-