Commit 141b9743 authored by Russell King

Merge branches 'debug-choice', 'devel-stable' and 'misc' into for-linus

......@@ -18,7 +18,8 @@ following:
2. Initialise one serial port.
3. Detect the machine type.
4. Setup the kernel tagged list.
5. Call the kernel image.
5. Load initramfs.
6. Call the kernel image.
1. Setup and initialise RAM
......@@ -120,12 +121,27 @@ tagged list.
The boot loader must pass at a minimum the size and location of the
system memory, and the root filesystem location. The dtb must be
placed in a region of memory where the kernel decompressor will not
overwrite it. The recommended placement is in the first 16KiB of RAM
with the caveat that it may not be located at physical address 0 since
the kernel interprets a value of 0 in r2 to mean neither a tagged list
nor a dtb were passed.
overwrite it, whilst remaining within the region which will be covered
by the kernel's low-memory mapping.
5. Calling the kernel image
A safe location is just above the 128MiB boundary from the start of RAM.
5. Load initramfs.
------------------
Existing boot loaders: OPTIONAL
New boot loaders: OPTIONAL
If an initramfs is in use then, as with the dtb, it must be placed in
a region of memory where the kernel decompressor will not overwrite it
while also within the region which will be covered by the kernel's
low-memory mapping.
A safe location is just above the device tree blob which itself will
be loaded just above the 128MiB boundary from the start of RAM as
recommended above.
6. Calling the kernel image
---------------------------
Existing boot loaders: MANDATORY
......@@ -136,11 +152,17 @@ is stored in flash, and is linked correctly to be run from flash,
then it is legal for the boot loader to call the zImage in flash
directly.
The zImage may also be placed in system RAM (at any location) and
called there. Note that the kernel uses 16K of RAM below the image
to store page tables. The recommended placement is 32KiB into RAM.
The zImage may also be placed in system RAM and called there. The
kernel should be placed in the first 128MiB of RAM. It is recommended
that it is loaded above 32MiB in order to avoid the need to relocate
prior to decompression, which will make the boot process slightly
faster.
When booting a raw (non-zImage) kernel the constraints are tighter.
In this case the kernel must be loaded at an offset into system RAM equal
to TEXT_OFFSET - PAGE_OFFSET.
In either case, the following conditions must be met:
In any case, the following conditions must be met:
- Quiesce all DMA capable devices so that memory does not get
corrupted by bogus network packets or disk data. This will save
......
Kernel mode NEON
================
TL;DR summary
-------------
* Use only NEON instructions, or VFP instructions that don't rely on support
code
* Isolate your NEON code in a separate compilation unit, and compile it with
'-mfpu=neon -mfloat-abi=softfp'
* Put kernel_neon_begin() and kernel_neon_end() calls around the calls into your
NEON code
* Don't sleep in your NEON code, and be aware that it will be executed with
preemption disabled
Introduction
------------
It is possible to use NEON instructions (and in some cases, VFP instructions) in
code that runs in kernel mode. However, for performance reasons, the NEON/VFP
register file is not preserved and restored at every context switch or taken
exception like the normal register file is, so some manual intervention is
required. Furthermore, special care is required for code that may sleep [i.e.,
may call schedule()], as NEON or VFP instructions will be executed in a
non-preemptible section for reasons outlined below.
Lazy preserve and restore
-------------------------
The NEON/VFP register file is managed using lazy preserve (on UP systems) and
lazy restore (on both SMP and UP systems). This means that the register file is
kept 'live', and is only preserved and restored when multiple tasks are
contending for the NEON/VFP unit (or, in the SMP case, when a task migrates to
another core). Lazy restore is implemented by disabling the NEON/VFP unit after
every context switch, resulting in a trap when subsequently a NEON/VFP
instruction is issued, allowing the kernel to step in and perform the restore if
necessary.
Any use of the NEON/VFP unit in kernel mode should not interfere with this, so
it is required to do an 'eager' preserve of the NEON/VFP register file, and
enable the NEON/VFP unit explicitly so no exceptions are generated on first
subsequent use. This is handled by the function kernel_neon_begin(), which
should be called before any kernel mode NEON or VFP instructions are issued.
Likewise, the NEON/VFP unit should be disabled again after use to make sure user
mode will hit the lazy restore trap upon next use. This is handled by the
function kernel_neon_end().
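
A minimal sketch of the caller side, assuming a hypothetical my_xor_neon()
routine that lives in a separate compilation unit built with
'-mfpu=neon -mfloat-abi=softfp', plus a plain C fallback my_xor_scalar()
(both names are assumptions for illustration only):

  #include <linux/hardirq.h>      /* in_interrupt() */
  #include <asm/neon.h>           /* kernel_neon_begin/end(), cpu_has_neon() */

  /* Assumed helpers: my_xor_neon() is defined in a separate unit built with
   * '-mfpu=neon -mfloat-abi=softfp'; my_xor_scalar() is a plain C fallback. */
  void my_xor_neon(unsigned long bytes, unsigned long *p1, unsigned long *p2);
  void my_xor_scalar(unsigned long bytes, unsigned long *p1, unsigned long *p2);

  static void my_xor(unsigned long bytes, unsigned long *p1, unsigned long *p2)
  {
          if (!cpu_has_neon() || in_interrupt()) {
                  /* No NEON, or interrupt context: use the scalar fallback. */
                  my_xor_scalar(bytes, p1, p2);
                  return;
          }

          kernel_neon_begin();    /* eager preserve, NEON/VFP unit enabled */
          my_xor_neon(bytes, p1, p2);
          kernel_neon_end();      /* unit disabled; user space restores lazily */
  }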
Interruptions in kernel mode
----------------------------
For reasons of performance and simplicity, it was decided that there shall be no
preserve/restore mechanism for the kernel mode NEON/VFP register contents. This
implies that interruptions of a kernel mode NEON section can only be allowed if
they are guaranteed not to touch the NEON/VFP registers. For this reason, the
following rules and restrictions apply in the kernel:
* NEON/VFP code is not allowed in interrupt context;
* NEON/VFP code is not allowed to sleep;
* NEON/VFP code is executed with preemption disabled.
If latency is a concern, it is possible to put back-to-back calls to
kernel_neon_end() and kernel_neon_begin() in places in your code where none of
the NEON registers are live. (Additional calls to kernel_neon_begin() should be
reasonably cheap if no context switch occurred in the meantime.)
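
For instance, a long-running loop could open a preemption window between
chunks; this is only a sketch, and struct my_block and
my_neon_process_block() are assumed names:

  #include <linux/types.h>
  #include <asm/neon.h>

  struct my_block { u8 data[64]; };                 /* assumed data layout */
  void my_neon_process_block(struct my_block *b);   /* assumed NEON helper, separate unit */

  static void my_process_all(struct my_block *blk, int nr_blocks)
  {
          int i;

          kernel_neon_begin();
          for (i = 0; i < nr_blocks; i++) {
                  my_neon_process_block(&blk[i]);
                  if ((i & 15) == 15) {
                          /* No NEON registers are live here: briefly allow preemption. */
                          kernel_neon_end();
                          kernel_neon_begin();      /* cheap if no context switch occurred */
                  }
          }
          kernel_neon_end();
  }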
VFP and support code
--------------------
Earlier versions of VFP (prior to version 3) rely on software support for things
like IEEE-754 compliant underflow handling etc. When the VFP unit needs such
software assistance, it signals the kernel by raising an undefined instruction
exception. The kernel responds by inspecting the VFP control registers and the
current instruction and arguments, and emulates the instruction in software.
Such software assistance is currently not implemented for VFP instructions
executed in kernel mode. If such a condition is encountered, the kernel will
fail and generate an OOPS.
Separating NEON code from ordinary code
---------------------------------------
The compiler is not aware of the special significance of kernel_neon_begin() and
kernel_neon_end(), i.e., that it is only allowed to issue NEON/VFP instructions
between calls to these respective functions. Furthermore, GCC may generate NEON
instructions of its own at -O3 level if -mfpu=neon is selected, and even if the
kernel is currently compiled at -O2, future changes may result in NEON/VFP
instructions appearing in unexpected places if no special care is taken.
Therefore, the recommended and only supported way of using NEON/VFP in the
kernel is by adhering to the following rules:
* isolate the NEON code in a separate compilation unit and compile it with
'-mfpu=neon -mfloat-abi=softfp';
* issue the calls to kernel_neon_begin(), kernel_neon_end() as well as the calls
into the unit containing the NEON code from a compilation unit which is *not*
built with the GCC flag '-mfpu=neon' set.
As the kernel is compiled with '-msoft-float', the above will guarantee that
both NEON and VFP instructions will only ever appear in designated compilation
units at any optimization level.
NEON assembler
--------------
NEON assembler is supported with no additional caveats as long as the rules
above are followed.
NEON code generated by GCC
--------------------------
The GCC option -ftree-vectorize (implied by -O3) tries to exploit implicit
parallelism, and generates NEON code from ordinary C source code. This is fully
supported as long as the rules above are followed.
NEON intrinsics
---------------
NEON intrinsics are also supported. However, as code using NEON intrinsics
relies on the GCC header <arm_neon.h> (which #includes <stdint.h>), you should
observe the following in addition to the rules above:
* Compile the unit containing the NEON intrinsics with '-ffreestanding' so GCC
uses its builtin version of <stdint.h> (this is a C99 header which the kernel
does not supply);
* Include <arm_neon.h> last, or at least after <linux/types.h> (see the sketch below).
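
Putting these rules together, the NEON-intrinsics unit itself (built with
'-mfpu=neon -mfloat-abi=softfp -ffreestanding' and only ever called between
kernel_neon_begin()/kernel_neon_end() from another compilation unit) might
look roughly like this; the function name is an assumption for illustration:

  /* Built with '-mfpu=neon -mfloat-abi=softfp -ffreestanding'. */
  #include <linux/types.h>        /* include before arm_neon.h, see asm/types.h */
  #include <arm_neon.h>

  void my_neon_add_u32(u32 *dst, const u32 *a, const u32 *b, int nr)
  {
          int i;

          /* Four 32-bit lanes per iteration; nr is assumed to be a multiple of 4. */
          for (i = 0; i < nr; i += 4) {
                  uint32x4_t va = vld1q_u32(a + i);
                  uint32x4_t vb = vld1q_u32(b + i);

                  vst1q_u32(dst + i, vaddq_u32(va, vb));
          }
  }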
......@@ -16,9 +16,11 @@ Required properties:
performs the same operation).
"marvell,"aurora-outer-cache: Marvell Controller designed to be
compatible with the ARM one with outer cache mode.
"bcm,bcm11351-a2-pl310-cache": For Broadcom bcm11351 chipset where an
"brcm,bcm11351-a2-pl310-cache": For Broadcom bcm11351 chipset where an
offset needs to be added to the address before passing down to the L2
cache controller
"bcm,bcm11351-a2-pl310-cache": DEPRECATED by
"brcm,bcm11351-a2-pl310-cache"
- cache-unified : Specifies the cache is a unified cache.
- cache-level : Should be set to 2 for a level 2 cache.
- reg : Physical base address and size of cache controller's memory mapped
......
......@@ -52,6 +52,7 @@ config ARM
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_SYSCALL_TRACEPOINTS
select HAVE_UID16
select IRQ_FORCED_THREADING
select KTIME_SCALAR
select PERF_USE_VMALLOC
select RTC_LIB
......@@ -1372,6 +1373,15 @@ config ARM_ERRATA_798181
which sends an IPI to the CPUs that are running the same ASID
as the one being invalidated.
config ARM_ERRATA_773022
bool "ARM errata: incorrect instructions may be executed from loop buffer"
depends on CPU_V7
help
This option enables the workaround for the 773022 Cortex-A15
(up to r0p4) erratum. In certain rare sequences of code, the
loop buffer may deliver incorrect instructions. This
workaround disables the loop buffer to avoid the erratum.
endmenu
source "arch/arm/common/Kconfig"
......@@ -1613,13 +1623,49 @@ config ARCH_NR_GPIO
source kernel/Kconfig.preempt
config HZ
config HZ_FIXED
int
default 200 if ARCH_EBSA110 || ARCH_S3C24XX || ARCH_S5P64X0 || \
ARCH_S5PV210 || ARCH_EXYNOS4
default AT91_TIMER_HZ if ARCH_AT91
default SHMOBILE_TIMER_HZ if ARCH_SHMOBILE
default 100
choice
depends on !HZ_FIXED
prompt "Timer frequency"
config HZ_100
bool "100 Hz"
config HZ_200
bool "200 Hz"
config HZ_250
bool "250 Hz"
config HZ_300
bool "300 Hz"
config HZ_500
bool "500 Hz"
config HZ_1000
bool "1000 Hz"
endchoice
config HZ
int
default HZ_FIXED if HZ_FIXED
default 100 if HZ_100
default 200 if HZ_200
default 250 if HZ_250
default 300 if HZ_300
default 500 if HZ_500
default 1000
config SCHED_HRTICK
def_bool HIGH_RES_TIMERS
config SCHED_HRTICK
def_bool HIGH_RES_TIMERS
......@@ -1756,6 +1802,9 @@ config HAVE_ARCH_TRANSPARENT_HUGEPAGE
def_bool y
depends on ARM_LPAE
config ARCH_WANT_GENERAL_HUGETLB
def_bool y
source "mm/Kconfig"
config FORCE_MAX_ZONEORDER
......@@ -2175,6 +2224,13 @@ config NEON
Say Y to include support code for NEON, the ARMv7 Advanced SIMD
Extension.
config KERNEL_MODE_NEON
bool "Support for NEON in kernel mode"
default n
depends on NEON
help
Say Y to include support for NEON in kernel mode.
endmenu
menu "Userspace binary formats"
......@@ -2199,7 +2255,7 @@ source "kernel/power/Kconfig"
config ARCH_SUSPEND_POSSIBLE
depends on !ARCH_S5PC100
depends on CPU_ARM920T || CPU_ARM926T || CPU_SA1100 || \
depends on CPU_ARM920T || CPU_ARM926T || CPU_FEROCEON || CPU_SA1100 || \
CPU_V6 || CPU_V6K || CPU_V7 || CPU_XSC3 || CPU_XSCALE || CPU_MOHAWK
def_bool y
......
......@@ -151,7 +151,7 @@ mcpm_setup_leave:
mov r0, #INBOUND_NOT_COMING_UP
strb r0, [r8, #MCPM_SYNC_CLUSTER_INBOUND]
dsb
dsb st
sev
mov r0, r11
......
......@@ -42,7 +42,7 @@
dmb
mov \rscratch, #0
strb \rscratch, [\rbase, \rcpu]
dsb
dsb st
sev
.endm
......@@ -102,7 +102,7 @@ ENTRY(vlock_unlock)
dmb
mov r1, #VLOCK_OWNER_NONE
strb r1, [r0, #VLOCK_OWNER_OFFSET]
dsb
dsb st
sev
bx lr
ENDPROC(vlock_unlock)
......@@ -220,9 +220,9 @@
#ifdef CONFIG_SMP
#if __LINUX_ARM_ARCH__ >= 7
.ifeqs "\mode","arm"
ALT_SMP(dmb)
ALT_SMP(dmb ish)
.else
ALT_SMP(W(dmb))
ALT_SMP(W(dmb) ish)
.endif
#elif __LINUX_ARM_ARCH__ == 6
ALT_SMP(mcr p15, 0, r0, c7, c10, 5) @ dmb
......
......@@ -14,27 +14,27 @@
#endif
#if __LINUX_ARM_ARCH__ >= 7
#define isb() __asm__ __volatile__ ("isb" : : : "memory")
#define dsb() __asm__ __volatile__ ("dsb" : : : "memory")
#define dmb() __asm__ __volatile__ ("dmb" : : : "memory")
#define isb(option) __asm__ __volatile__ ("isb " #option : : : "memory")
#define dsb(option) __asm__ __volatile__ ("dsb " #option : : : "memory")
#define dmb(option) __asm__ __volatile__ ("dmb " #option : : : "memory")
#elif defined(CONFIG_CPU_XSC3) || __LINUX_ARM_ARCH__ == 6
#define isb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \
#define isb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \
: : "r" (0) : "memory")
#define dsb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 4" \
#define dsb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 4" \
: : "r" (0) : "memory")
#define dmb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 5" \
#define dmb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 5" \
: : "r" (0) : "memory")
#elif defined(CONFIG_CPU_FA526)
#define isb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \
#define isb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c5, 4" \
: : "r" (0) : "memory")
#define dsb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 4" \
#define dsb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 4" \
: : "r" (0) : "memory")
#define dmb() __asm__ __volatile__ ("" : : : "memory")
#define dmb(x) __asm__ __volatile__ ("" : : : "memory")
#else
#define isb() __asm__ __volatile__ ("" : : : "memory")
#define dsb() __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 4" \
#define isb(x) __asm__ __volatile__ ("" : : : "memory")
#define dsb(x) __asm__ __volatile__ ("mcr p15, 0, %0, c7, c10, 4" \
: : "r" (0) : "memory")
#define dmb() __asm__ __volatile__ ("" : : : "memory")
#define dmb(x) __asm__ __volatile__ ("" : : : "memory")
#endif
#ifdef CONFIG_ARCH_HAS_BARRIERS
......@@ -42,7 +42,7 @@
#elif defined(CONFIG_ARM_DMA_MEM_BUFFERABLE) || defined(CONFIG_SMP)
#define mb() do { dsb(); outer_sync(); } while (0)
#define rmb() dsb()
#define wmb() mb()
#define wmb() do { dsb(st); outer_sync(); } while (0)
#else
#define mb() barrier()
#define rmb() barrier()
......@@ -54,9 +54,9 @@
#define smp_rmb() barrier()
#define smp_wmb() barrier()
#else
#define smp_mb() dmb()
#define smp_rmb() dmb()
#define smp_wmb() dmb()
#define smp_mb() dmb(ish)
#define smp_rmb() smp_mb()
#define smp_wmb() dmb(ishst)
#endif
#define read_barrier_depends() do { } while(0)
......
......@@ -268,8 +268,7 @@ extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr
* Harvard caches are synchronised for the user space address range.
* This is used for the ARM private sys_cacheflush system call.
*/
#define flush_cache_user_range(start,end) \
__cpuc_coherent_user_range((start) & PAGE_MASK, PAGE_ALIGN(end))
#define flush_cache_user_range(s,e) __cpuc_coherent_user_range(s,e)
/*
* Perform necessary cache operations to ensure that data previously
......@@ -352,7 +351,7 @@ static inline void flush_cache_vmap(unsigned long start, unsigned long end)
* set_pte_at() called from vmap_pte_range() does not
* have a DSB after cleaning the cache line.
*/
dsb();
dsb(ishst);
}
static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
......
......@@ -65,12 +65,12 @@ struct machine_desc {
/*
* Current machine - only accessible during boot.
*/
extern struct machine_desc *machine_desc;
extern const struct machine_desc *machine_desc;
/*
* Machine type table - also only accessible during boot
*/
extern struct machine_desc __arch_info_begin[], __arch_info_end[];
extern const struct machine_desc __arch_info_begin[], __arch_info_end[];
#define for_each_machine_desc(p) \
for (p = __arch_info_begin; p < __arch_info_end; p++)
......
......@@ -4,8 +4,7 @@
struct meminfo;
struct machine_desc;
extern void arm_memblock_init(struct meminfo *, struct machine_desc *);
void arm_memblock_init(struct meminfo *, const struct machine_desc *);
phys_addr_t arm_memblock_steal(phys_addr_t size, phys_addr_t align);
#endif
......@@ -12,6 +12,8 @@ enum {
ARM_SEC_CORE,
ARM_SEC_EXIT,
ARM_SEC_DEVEXIT,
ARM_SEC_HOT,
ARM_SEC_UNLIKELY,
ARM_SEC_MAX,
};
......
/*
* linux/arch/arm/include/asm/neon.h
*
* Copyright (C) 2013 Linaro Ltd <ard.biesheuvel@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <asm/hwcap.h>
#define cpu_has_neon() (!!(elf_hwcap & HWCAP_NEON))
#ifdef __ARM_NEON__
/*
* If you are affected by the BUILD_BUG below, it probably means that you are
* using NEON code /and/ calling the kernel_neon_begin() function from the same
* compilation unit. To prevent issues that may arise from GCC reordering or
* generating(1) NEON instructions outside of these begin/end functions, the
* only supported way of using NEON code in the kernel is by isolating it in a
* separate compilation unit, and calling it from another unit from inside a
* kernel_neon_begin/kernel_neon_end pair.
*
* (1) Current GCC (4.7) might generate NEON instructions at O3 level if
* -mfpu=neon is set.
*/
#define kernel_neon_begin() \
BUILD_BUG_ON_MSG(1, "kernel_neon_begin() called from NEON code")
#else
void kernel_neon_begin(void);
#endif
void kernel_neon_end(void);
......@@ -100,7 +100,7 @@ extern pgprot_t pgprot_s2_device;
#define PAGE_HYP _MOD_PROT(pgprot_kernel, L_PTE_HYP)
#define PAGE_HYP_DEVICE _MOD_PROT(pgprot_hyp_device, L_PTE_HYP)
#define PAGE_S2 _MOD_PROT(pgprot_s2, L_PTE_S2_RDONLY)
#define PAGE_S2_DEVICE _MOD_PROT(pgprot_s2_device, L_PTE_USER | L_PTE_S2_RDONLY)
#define PAGE_S2_DEVICE _MOD_PROT(pgprot_s2_device, L_PTE_S2_RDWR)
#define __PAGE_NONE __pgprot(_L_PTE_DEFAULT | L_PTE_RDONLY | L_PTE_XN | L_PTE_NONE)
#define __PAGE_SHARED __pgprot(_L_PTE_DEFAULT | L_PTE_USER | L_PTE_XN)
......
......@@ -15,13 +15,13 @@
#ifdef CONFIG_OF
extern struct machine_desc *setup_machine_fdt(unsigned int dt_phys);
extern const struct machine_desc *setup_machine_fdt(unsigned int dt_phys);
extern void arm_dt_memblock_reserve(void);
extern void __init arm_dt_init_cpu_maps(void);
#else /* CONFIG_OF */
static inline struct machine_desc *setup_machine_fdt(unsigned int dt_phys)
static inline const struct machine_desc *setup_machine_fdt(unsigned int dt_phys)
{
return NULL;
}
......
......@@ -46,7 +46,7 @@ static inline void dsb_sev(void)
{
#if __LINUX_ARM_ARCH__ >= 7
__asm__ __volatile__ (
"dsb\n"
"dsb ishst\n"
SEV
);
#else
......
......@@ -3,6 +3,16 @@
#include <linux/thread_info.h>
/*
* For v7 SMP cores running a preemptible kernel we may be pre-empted
* during a TLB maintenance operation, so execute an inner-shareable dsb
* to ensure that the maintenance completes in case we migrate to another
* CPU.
*/
#if defined(CONFIG_PREEMPT) && defined(CONFIG_SMP) && defined(CONFIG_CPU_V7)
#define finish_arch_switch(prev) dsb(ish)
#endif
/*
* switch_to(prev, next) should switch from task `prev' to `next'
* `prev' will never be the same as `next'. schedule() itself
......
......@@ -43,6 +43,16 @@ struct cpu_context_save {
__u32 extra[2]; /* Xscale 'acc' register, etc */
};
struct arm_restart_block {
union {
/* For user cache flushing */
struct {
unsigned long start;
unsigned long end;
} cache;
};
};
/*
* low level task data that entry.S needs immediate access to.
* __switch_to() assumes cpu_context follows immediately after cpu_domain.
......@@ -68,6 +78,7 @@ struct thread_info {
unsigned long thumbee_state; /* ThumbEE Handler Base register */
#endif
struct restart_block restart_block;
struct arm_restart_block arm_restart_block;
};
#define INIT_THREAD_INFO(tsk) \
......
......@@ -319,67 +319,110 @@ extern struct cpu_tlb_fns cpu_tlb;
#define tlb_op(f, regs, arg) __tlb_op(f, "p15, 0, %0, " regs, arg)
#define tlb_l2_op(f, regs, arg) __tlb_op(f, "p15, 1, %0, " regs, arg)
static inline void local_flush_tlb_all(void)
static inline void __local_flush_tlb_all(void)
{
const int zero = 0;
const unsigned int __tlb_flag = __cpu_tlb_flags;
if (tlb_flag(TLB_WB))
dsb();
tlb_op(TLB_V4_U_FULL | TLB_V6_U_FULL, "c8, c7, 0", zero);
tlb_op(TLB_V4_D_FULL | TLB_V6_D_FULL, "c8, c6, 0", zero);
tlb_op(TLB_V4_I_FULL | TLB_V6_I_FULL, "c8, c5, 0", zero);
tlb_op(TLB_V7_UIS_FULL, "c8, c3, 0", zero);
}
static inline void local_flush_tlb_all(void)
{
const int zero = 0;
const unsigned int __tlb_flag = __cpu_tlb_flags;
if (tlb_flag(TLB_WB))
dsb(nshst);
__local_flush_tlb_all();
tlb_op(TLB_V7_UIS_FULL, "c8, c7, 0", zero);
if (tlb_flag(TLB_BARRIER)) {
dsb();
dsb(nsh);
isb();
}
}
static inline void local_flush_tlb_mm(struct mm_struct *mm)
static inline void __flush_tlb_all(void)
{
const int zero = 0;
const int asid = ASID(mm);
const unsigned int __tlb_flag = __cpu_tlb_flags;
if (tlb_flag(TLB_WB))
dsb();
dsb(ishst);
__local_flush_tlb_all();
tlb_op(TLB_V7_UIS_FULL, "c8, c3, 0", zero);
if (tlb_flag(TLB_BARRIER)) {
dsb(ish);
isb();
}
}
static inline void __local_flush_tlb_mm(struct mm_struct *mm)
{
const int zero = 0;
const int asid = ASID(mm);
const unsigned int __tlb_flag = __cpu_tlb_flags;
if (possible_tlb_flags & (TLB_V4_U_FULL|TLB_V4_D_FULL|TLB_V4_I_FULL)) {
if (cpumask_test_cpu(get_cpu(), mm_cpumask(mm))) {
if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) {
tlb_op(TLB_V4_U_FULL, "c8, c7, 0", zero);
tlb_op(TLB_V4_D_FULL, "c8, c6, 0", zero);
tlb_op(TLB_V4_I_FULL, "c8, c5, 0", zero);
}
put_cpu();
}
tlb_op(TLB_V6_U_ASID, "c8, c7, 2", asid);
tlb_op(TLB_V6_D_ASID, "c8, c6, 2", asid);
tlb_op(TLB_V6_I_ASID, "c8, c5, 2", asid);
}
static inline void local_flush_tlb_mm(struct mm_struct *mm)
{
const int asid = ASID(mm);
const unsigned int __tlb_flag = __cpu_tlb_flags;
if (tlb_flag(TLB_WB))
dsb(nshst);
__local_flush_tlb_mm(mm);
tlb_op(TLB_V7_UIS_ASID, "c8, c7, 2", asid);
if (tlb_flag(TLB_BARRIER))
dsb(nsh);
}
static inline void __flush_tlb_mm(struct mm_struct *mm)
{
const unsigned int __tlb_flag = __cpu_tlb_flags;
if (tlb_flag(TLB_WB))
dsb(ishst);
__local_flush_tlb_mm(mm);
#ifdef CONFIG_ARM_ERRATA_720789
tlb_op(TLB_V7_UIS_ASID, "c8, c3, 0", zero);
tlb_op(TLB_V7_UIS_ASID, "c8, c3, 0", 0);
#else
tlb_op(TLB_V7_UIS_ASID, "c8, c3, 2", asid);
tlb_op(TLB_V7_UIS_ASID, "c8, c3, 2", ASID(mm));
#endif
if (tlb_flag(TLB_BARRIER))
dsb();
dsb(ish);
}
static inline void
local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
{
const int zero = 0;
const unsigned int __tlb_flag = __cpu_tlb_flags;
uaddr = (uaddr & PAGE_MASK) | ASID(vma->vm_mm);
if (tlb_flag(TLB_WB))
dsb();
if (possible_tlb_flags & (TLB_V4_U_PAGE|TLB_V4_D_PAGE|TLB_V4_I_PAGE|TLB_V4_I_FULL) &&
cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) {
tlb_op(TLB_V4_U_PAGE, "c8, c7, 1", uaddr);
......@@ -392,6 +435,36 @@ local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
tlb_op(TLB_V6_U_PAGE, "c8, c7, 1", uaddr);
tlb_op(TLB_V6_D_PAGE, "c8, c6, 1", uaddr);
tlb_op(TLB_V6_I_PAGE, "c8, c5, 1", uaddr);
}
static inline void
local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
{
const unsigned int __tlb_flag = __cpu_tlb_flags;
uaddr = (uaddr & PAGE_MASK) | ASID(vma->vm_mm);
if (tlb_flag(TLB_WB))
dsb(nshst);
__local_flush_tlb_page(vma, uaddr);
tlb_op(TLB_V7_UIS_PAGE, "c8, c7, 1", uaddr);
if (tlb_flag(TLB_BARRIER))
dsb(nsh);
}
static inline void
__flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
{
const unsigned int __tlb_flag = __cpu_tlb_flags;
uaddr = (uaddr & PAGE_MASK) | ASID(vma->vm_mm);
if (tlb_flag(TLB_WB))
dsb(ishst);
__local_flush_tlb_page(vma, uaddr);
#ifdef CONFIG_ARM_ERRATA_720789
tlb_op(TLB_V7_UIS_PAGE, "c8, c3, 3", uaddr & PAGE_MASK);
#else
......@@ -399,19 +472,14 @@ local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
#endif
if (tlb_flag(TLB_BARRIER))
dsb();
dsb(ish);
}
static inline void local_flush_tlb_kernel_page(unsigned long kaddr)
static inline void __local_flush_tlb_kernel_page(unsigned long kaddr)
{
const int zero = 0;
const unsigned int __tlb_flag = __cpu_tlb_flags;
kaddr &= PAGE_MASK;
if (tlb_flag(TLB_WB))
dsb();
tlb_op(TLB_V4_U_PAGE, "c8, c7, 1", kaddr);
tlb_op(TLB_V4_D_PAGE, "c8, c6, 1", kaddr);
tlb_op(TLB_V4_I_PAGE, "c8, c5, 1", kaddr);
......@@ -421,26 +489,75 @@ static inline void local_flush_tlb_kernel_page(unsigned long kaddr)
tlb_op(TLB_V6_U_PAGE, "c8, c7, 1", kaddr);
tlb_op(TLB_V6_D_PAGE, "c8, c6, 1", kaddr);
tlb_op(TLB_V6_I_PAGE, "c8, c5, 1", kaddr);
}
static inline void local_flush_tlb_kernel_page(unsigned long kaddr)
{
const unsigned int __tlb_flag = __cpu_tlb_flags;
kaddr &= PAGE_MASK;
if (tlb_flag(TLB_WB))
dsb(nshst);
__local_flush_tlb_kernel_page(kaddr);
tlb_op(TLB_V7_UIS_PAGE, "c8, c7, 1", kaddr);
if (tlb_flag(TLB_BARRIER)) {
dsb(nsh);
isb();
}
}
static inline void __flush_tlb_kernel_page(unsigned long kaddr)
{
const unsigned int __tlb_flag = __cpu_tlb_flags;
kaddr &= PAGE_MASK;
if (tlb_flag(TLB_WB))
dsb(ishst);
__local_flush_tlb_kernel_page(kaddr);
tlb_op(TLB_V7_UIS_PAGE, "c8, c3, 1", kaddr);
if (tlb_flag(TLB_BARRIER)) {
dsb();
dsb(ish);
isb();
}
}
/*
* Branch predictor maintenance is paired with full TLB invalidation, so
* there is no need for any barriers here.
*/
static inline void __local_flush_bp_all(void)
{
const int zero = 0;
const unsigned int __tlb_flag = __cpu_tlb_flags;
if (tlb_flag(TLB_V6_BP))
asm("mcr p15, 0, %0, c7, c5, 6" : : "r" (zero));
}
static inline void local_flush_bp_all(void)
{
const int zero = 0;
const unsigned int __tlb_flag = __cpu_tlb_flags;
__local_flush_bp_all();
if (tlb_flag(TLB_V7_UIS_BP))
asm("mcr p15, 0, %0, c7, c1, 6" : : "r" (zero));
else if (tlb_flag(TLB_V6_BP))
asm("mcr p15, 0, %0, c7, c5, 6" : : "r" (zero));
}
if (tlb_flag(TLB_BARRIER))
isb();
static inline void __flush_bp_all(void)
{
const int zero = 0;
const unsigned int __tlb_flag = __cpu_tlb_flags;
__local_flush_bp_all();
if (tlb_flag(TLB_V7_UIS_BP))
asm("mcr p15, 0, %0, c7, c1, 6" : : "r" (zero));
}
#include <asm/cputype.h>
......@@ -461,7 +578,7 @@ static inline void dummy_flush_tlb_a15_erratum(void)
* Dummy TLBIMVAIS. Using the unmapped address 0 and ASID 0.
*/
asm("mcr p15, 0, %0, c8, c3, 1" : : "r" (0));
dsb();
dsb(ish);
}
#else
static inline int erratum_a15_798181(void)
......@@ -495,7 +612,7 @@ static inline void flush_pmd_entry(void *pmd)
tlb_l2_op(TLB_L2CLEAN_FR, "c15, c9, 1 @ L2 flush_pmd", pmd);
if (tlb_flag(TLB_WB))
dsb();
dsb(ishst);
}
static inline void clean_pmd_entry(void *pmd)
......
#ifndef _ASM_TYPES_H
#define _ASM_TYPES_H
#include <asm-generic/int-ll64.h>
/*
* The C99 types uintXX_t that are usually defined in 'stdint.h' are not as
* unambiguous on ARM as you would expect. For the types below, there is a
* difference on ARM between GCC built for bare metal ARM, GCC built for glibc
* and the kernel itself, which results in build errors if you try to build with
* -ffreestanding and include 'stdint.h' (such as when you include 'arm_neon.h'
* in order to use NEON intrinsics)
*
* As the typedefs for these types in 'stdint.h' are based on builtin defines
* supplied by GCC, we can tweak these to align with the kernel's idea of those
* types, so 'linux/types.h' and 'stdint.h' can be safely included from the same
* source file (provided that -ffreestanding is used).
*
*                  int32_t        uint32_t        uintptr_t
* bare metal GCC   long           unsigned long   unsigned int
* glibc GCC        int            unsigned int    unsigned int
* kernel           int            unsigned int    unsigned long
*/
#ifdef __INT32_TYPE__
#undef __INT32_TYPE__
#define __INT32_TYPE__ int
#endif
#ifdef __UINT32_TYPE__
#undef __UINT32_TYPE__
#define __UINT32_TYPE__ unsigned int
#endif
#ifdef __UINTPTR_TYPE__
#undef __UINTPTR_TYPE__
#define __UINTPTR_TYPE__ unsigned long
#endif
#endif /* _ASM_TYPES_H */
......@@ -15,6 +15,10 @@
#define V7M_SCB_VTOR 0x08
#define V7M_SCB_AIRCR 0x0c
#define V7M_SCB_AIRCR_VECTKEY (0x05fa << 16)
#define V7M_SCB_AIRCR_SYSRESETREQ (1 << 2)
#define V7M_SCB_SCR 0x10
#define V7M_SCB_SCR_SLEEPDEEP (1 << 2)
......@@ -42,3 +46,11 @@
*/
#define EXC_RET_STACK_MASK 0x00000004
#define EXC_RET_THREADMODE_PROCESSSTACK 0xfffffffd
#ifndef __ASSEMBLY__
enum reboot_mode;
void armv7m_restart(enum reboot_mode mode, const char *cmd);
#endif /* __ASSEMBLY__ */
......@@ -7,7 +7,10 @@
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/hardirq.h>
#include <asm-generic/xor.h>
#include <asm/hwcap.h>
#include <asm/neon.h>
#define __XOR(a1, a2) a1 ^= a2
......@@ -138,4 +141,74 @@ static struct xor_block_template xor_block_arm4regs = {
xor_speed(&xor_block_arm4regs); \
xor_speed(&xor_block_8regs); \
xor_speed(&xor_block_32regs); \
NEON_TEMPLATES; \
} while (0)
#ifdef CONFIG_KERNEL_MODE_NEON
extern struct xor_block_template const xor_block_neon_inner;
static void
xor_neon_2(unsigned long bytes, unsigned long *p1, unsigned long *p2)
{
if (in_interrupt()) {
xor_arm4regs_2(bytes, p1, p2);
} else {
kernel_neon_begin();
xor_block_neon_inner.do_2(bytes, p1, p2);
kernel_neon_end();
}
}
static void
xor_neon_3(unsigned long bytes, unsigned long *p1, unsigned long *p2,
unsigned long *p3)
{
if (in_interrupt()) {
xor_arm4regs_3(bytes, p1, p2, p3);
} else {
kernel_neon_begin();
xor_block_neon_inner.do_3(bytes, p1, p2, p3);
kernel_neon_end();
}
}
static void
xor_neon_4(unsigned long bytes, unsigned long *p1, unsigned long *p2,
unsigned long *p3, unsigned long *p4)
{
if (in_interrupt()) {
xor_arm4regs_4(bytes, p1, p2, p3, p4);
} else {
kernel_neon_begin();
xor_block_neon_inner.do_4(bytes, p1, p2, p3, p4);
kernel_neon_end();
}
}
static void
xor_neon_5(unsigned long bytes, unsigned long *p1, unsigned long *p2,
unsigned long *p3, unsigned long *p4, unsigned long *p5)
{
if (in_interrupt()) {
xor_arm4regs_5(bytes, p1, p2, p3, p4, p5);
} else {
kernel_neon_begin();
xor_block_neon_inner.do_5(bytes, p1, p2, p3, p4, p5);
kernel_neon_end();
}
}
static struct xor_block_template xor_block_neon = {
.name = "neon",
.do_2 = xor_neon_2,
.do_3 = xor_neon_3,
.do_4 = xor_neon_4,
.do_5 = xor_neon_5
};
#define NEON_TEMPLATES \
do { if (cpu_has_neon()) xor_speed(&xor_block_neon); } while (0)
#else
#define NEON_TEMPLATES
#endif
/*
* arch/arm/include/asm/hardware/debug-8250.S
* arch/arm/include/debug/8250.S
*
* Copyright (C) 1994-1999 Russell King
* Copyright (C) 1994-2013 Russell King
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
......@@ -9,20 +9,45 @@
*/
#include <linux/serial_reg.h>
.macro addruart, rp, rv, tmp
ldr \rp, =CONFIG_DEBUG_UART_PHYS
ldr \rv, =CONFIG_DEBUG_UART_VIRT
.endm
#ifdef CONFIG_DEBUG_UART_8250_WORD
.macro store, rd, rx:vararg
str \rd, \rx
.endm
.macro load, rd, rx:vararg
ldr \rd, \rx
.endm
#else
.macro store, rd, rx:vararg
strb \rd, \rx
.endm
.macro load, rd, rx:vararg
ldrb \rd, \rx
.endm
#endif
#define UART_SHIFT CONFIG_DEBUG_UART_8250_SHIFT
.macro senduart,rd,rx
strb \rd, [\rx, #UART_TX << UART_SHIFT]
store \rd, [\rx, #UART_TX << UART_SHIFT]
.endm
.macro busyuart,rd,rx
1002: ldrb \rd, [\rx, #UART_LSR << UART_SHIFT]
1002: load \rd, [\rx, #UART_LSR << UART_SHIFT]
and \rd, \rd, #UART_LSR_TEMT | UART_LSR_THRE
teq \rd, #UART_LSR_TEMT | UART_LSR_THRE
bne 1002b
.endm
.macro waituart,rd,rx
#ifdef FLOW_CONTROL
1001: ldrb \rd, [\rx, #UART_MSR << UART_SHIFT]
#ifdef CONFIG_DEBUG_UART_8250_FLOW_CONTROL
1001: load \rd, [\rx, #UART_MSR << UART_SHIFT]
tst \rd, #UART_MSR_CTS
beq 1001b
#endif
......
/*
* Copyright (c) 2011 Picochip Ltd., Jamie Iles
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Derived from arch/arm/mach-davinci/include/mach/debug-macro.S to use 32-bit
* accesses to the 8250.
*/
#include <linux/serial_reg.h>
.macro senduart,rd,rx
str \rd, [\rx, #UART_TX << UART_SHIFT]
.endm
.macro busyuart,rd,rx
1002: ldr \rd, [\rx, #UART_LSR << UART_SHIFT]
and \rd, \rd, #UART_LSR_TEMT | UART_LSR_THRE
teq \rd, #UART_LSR_TEMT | UART_LSR_THRE
bne 1002b
.endm
/* The UARTs don't have any flow control IOs wired up. */
.macro waituart,rd,rx
.endm
/*
* Debugging macro include header
*
* Copyright (C) 2010 Broadcom
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#define BCM2835_DEBUG_PHYS 0x20201000
#define BCM2835_DEBUG_VIRT 0xf0201000
.macro addruart, rp, rv, tmp
ldr \rp, =BCM2835_DEBUG_PHYS
ldr \rv, =BCM2835_DEBUG_VIRT
.endm
#include <asm/hardware/debug-pl01x.S>
/*
* Debugging macro include header
*
* Copyright 1994-1999 Russell King
* Copyright 2008 Cavium Networks
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This file is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, Version 2, as
* published by the Free Software Foundation.
*/
.macro addruart,rp,rv,tmp
mov \rp, #0x00009000
orr \rv, \rp, #0xf0000000 @ virtual base
orr \rp, \rp, #0x10000000
.endm
#include <asm/hardware/debug-pl01x.S>
/*
* Debugging macro include header
*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
.macro addruart,rp,rv,tmp
ldr \rv, =0xfee36000
ldr \rp, =0xfff36000
.endm
#include <asm/hardware/debug-pl01x.S>
/*
* Early serial debug output macro for Keystone SOCs
*
* Copyright 2013 Texas Instruments, Inc.
* Santosh Shilimkar <santosh.shilimkar@ti.com>
*
* Based on RMKs low level debug code.
* Copyright (C) 1994-1999 Russell King
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/serial_reg.h>
#define UART_SHIFT 2
#if defined(CONFIG_DEBUG_KEYSTONE_UART0)
#define UART_PHYS 0x02530c00
#define UART_VIRT 0xfeb30c00
#elif defined(CONFIG_DEBUG_KEYSTONE_UART1)
#define UART_PHYS 0x02531000
#define UART_VIRT 0xfeb31000
#endif
.macro addruart, rp, rv, tmp
ldr \rv, =UART_VIRT @ virtual base address
ldr \rp, =UART_PHYS @ physical base address
.endm
.macro senduart,rd,rx
str \rd, [\rx, #UART_TX << UART_SHIFT]
.endm
.macro busyuart,rd,rx
1002: ldr \rd, [\rx, #UART_LSR << UART_SHIFT]
and \rd, \rd, #UART_LSR_TEMT | UART_LSR_THRE
teq \rd, #UART_LSR_TEMT | UART_LSR_THRE
bne 1002b
.endm
.macro waituart,rd,rx
.endm
/*
* Early serial output macro for Marvell SoC
*
* Copyright (C) 2012 Marvell
*
* Lior Amsalem <alior@marvell.com>
* Gregory Clement <gregory.clement@free-electrons.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifdef CONFIG_DEBUG_MVEBU_UART_ALTERNATE
#define ARMADA_370_XP_REGS_PHYS_BASE 0xf1000000
#else
#define ARMADA_370_XP_REGS_PHYS_BASE 0xd0000000
#endif
#define ARMADA_370_XP_REGS_VIRT_BASE 0xfec00000
.macro addruart, rp, rv, tmp
ldr \rp, =ARMADA_370_XP_REGS_PHYS_BASE
ldr \rv, =ARMADA_370_XP_REGS_VIRT_BASE
orr \rp, \rp, #0x00012000
orr \rv, \rv, #0x00012000
.endm
#define UART_SHIFT 2
#include <asm/hardware/debug-8250.S>
/* arch/arm/mach-mxs/include/mach/debug-macro.S
*
* Debugging macro include header
*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#ifdef CONFIG_DEBUG_IMX23_UART
#define UART_PADDR 0x80070000
#elif defined (CONFIG_DEBUG_IMX28_UART)
#define UART_PADDR 0x80074000
#endif
#define UART_VADDR 0xfe100000
.macro addruart, rp, rv, tmp
ldr \rp, =UART_PADDR @ physical
ldr \rv, =UART_VADDR @ virtual
.endm
#include <asm/hardware/debug-pl01x.S>
/*
* Debugging macro include header
*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
.macro addruart, rp, rv, tmp
mov \rp, #0x00100000
add \rp, \rp, #0x000fb000
add \rv, \rp, #0xf0000000 @ virtual base
add \rp, \rp, #0x10000000 @ physical base address
.endm
#include <asm/hardware/debug-pl01x.S>
/*
* linux/arch/arm/include/debug/nspire.S
*
* Copyright (C) 2013 Daniel Tang <tangrs@tangrs.id.au>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2, as
* published by the Free Software Foundation.
*
*/
#define NSPIRE_EARLY_UART_PHYS_BASE 0x90020000
#define NSPIRE_EARLY_UART_VIRT_BASE 0xfee20000
.macro addruart, rp, rv, tmp
ldr \rp, =(NSPIRE_EARLY_UART_PHYS_BASE) @ physical base address
ldr \rv, =(NSPIRE_EARLY_UART_VIRT_BASE) @ virtual base address
.endm
#ifdef CONFIG_DEBUG_NSPIRE_CX_UART
#include <asm/hardware/debug-pl01x.S>
#endif
#ifdef CONFIG_DEBUG_NSPIRE_CLASSIC_UART
#define UART_SHIFT 2
#include <asm/hardware/debug-8250.S>
#endif
/*
* Copyright (c) 2011 Picochip Ltd., Jamie Iles
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#define UART_SHIFT 2
#define PICOXCELL_UART1_BASE 0x80230000
#define PHYS_TO_IO(x) (((x) & 0x00ffffff) | 0xfe000000)
.macro addruart, rp, rv, tmp
ldr \rv, =PHYS_TO_IO(PICOXCELL_UART1_BASE)
ldr \rp, =PICOXCELL_UART1_BASE
.endm
#include "8250_32.S"
/* arch/arm/include/asm/hardware/debug-pl01x.S
/* arch/arm/include/debug/pl01x.S
*
* Debugging macro include header
*
......@@ -12,6 +12,13 @@
*/
#include <linux/amba/serial.h>
#ifdef CONFIG_DEBUG_UART_PHYS
.macro addruart, rp, rv, tmp
ldr \rp, =CONFIG_DEBUG_UART_PHYS
ldr \rv, =CONFIG_DEBUG_UART_VIRT
.endm
#endif
.macro senduart,rd,rx
strb \rd, [\rx, #UART01x_DR]
.endm
......
/*
* Early serial output macro for Marvell PXA/MMP SoC
*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* Copyright (C) 2013 Haojian Zhuang
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#if defined(CONFIG_DEBUG_PXA_UART1)
#define PXA_UART_REG_PHYS_BASE 0x40100000
#define PXA_UART_REG_VIRT_BASE 0xf2100000
#elif defined(CONFIG_DEBUG_MMP_UART2)
#define PXA_UART_REG_PHYS_BASE 0xd4017000
#define PXA_UART_REG_VIRT_BASE 0xfe017000
#elif defined(CONFIG_DEBUG_MMP_UART3)
#define PXA_UART_REG_PHYS_BASE 0xd4018000
#define PXA_UART_REG_VIRT_BASE 0xfe018000
#else
#error "Select uart for DEBUG_LL"
#endif
.macro addruart, rp, rv, tmp
ldr \rp, =PXA_UART_REG_PHYS_BASE
ldr \rv, =PXA_UART_REG_VIRT_BASE
.endm
#define UART_SHIFT 2
#include <asm/hardware/debug-8250.S>
/*
* Early serial output macro for Rockchip SoCs
*
* Copyright (C) 2012 Maxime Ripard
*
* Maxime Ripard <maxime.ripard@free-electrons.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#if defined(CONFIG_DEBUG_RK29_UART0)
#define ROCKCHIP_UART_DEBUG_PHYS_BASE 0x20060000
#define ROCKCHIP_UART_DEBUG_VIRT_BASE 0xfed60000
#elif defined(CONFIG_DEBUG_RK29_UART1)
#define ROCKCHIP_UART_DEBUG_PHYS_BASE 0x20064000
#define ROCKCHIP_UART_DEBUG_VIRT_BASE 0xfed64000
#elif defined(CONFIG_DEBUG_RK29_UART2)
#define ROCKCHIP_UART_DEBUG_PHYS_BASE 0x20068000
#define ROCKCHIP_UART_DEBUG_VIRT_BASE 0xfed68000
#elif defined(CONFIG_DEBUG_RK3X_UART0)
#define ROCKCHIP_UART_DEBUG_PHYS_BASE 0x10124000
#define ROCKCHIP_UART_DEBUG_VIRT_BASE 0xfeb24000
#elif defined(CONFIG_DEBUG_RK3X_UART1)
#define ROCKCHIP_UART_DEBUG_PHYS_BASE 0x10126000
#define ROCKCHIP_UART_DEBUG_VIRT_BASE 0xfeb26000
#elif defined(CONFIG_DEBUG_RK3X_UART2)
#define ROCKCHIP_UART_DEBUG_PHYS_BASE 0x20064000
#define ROCKCHIP_UART_DEBUG_VIRT_BASE 0xfed64000
#elif defined(CONFIG_DEBUG_RK3X_UART3)
#define ROCKCHIP_UART_DEBUG_PHYS_BASE 0x20068000
#define ROCKCHIP_UART_DEBUG_VIRT_BASE 0xfed68000
#endif
.macro addruart, rp, rv, tmp
ldr \rp, =ROCKCHIP_UART_DEBUG_PHYS_BASE
ldr \rv, =ROCKCHIP_UART_DEBUG_VIRT_BASE
.endm
#define UART_SHIFT 2
#include <asm/hardware/debug-8250.S>
/*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#define UART_SHIFT 2
#define DEBUG_LL_UART_OFFSET 0x00002000
.macro addruart, rp, rv, tmp
mov \rp, #DEBUG_LL_UART_OFFSET
orr \rp, \rp, #0x00c00000
orr \rv, \rp, #0xfe000000 @ virtual base
orr \rp, \rp, #0xff000000 @ physical base
.endm
#include "8250_32.S"
/*
* Early serial output macro for Allwinner A1X SoCs
*
* Copyright (C) 2012 Maxime Ripard
*
* Maxime Ripard <maxime.ripard@free-electrons.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#if defined(CONFIG_DEBUG_SUNXI_UART0)
#define SUNXI_UART_DEBUG_PHYS_BASE 0x01c28000
#define SUNXI_UART_DEBUG_VIRT_BASE 0xf1c28000
#elif defined(CONFIG_DEBUG_SUNXI_UART1)
#define SUNXI_UART_DEBUG_PHYS_BASE 0x01c28400
#define SUNXI_UART_DEBUG_VIRT_BASE 0xf1c28400
#endif
.macro addruart, rp, rv, tmp
ldr \rp, =SUNXI_UART_DEBUG_PHYS_BASE
ldr \rv, =SUNXI_UART_DEBUG_VIRT_BASE
.endm
#define UART_SHIFT 2
#include <asm/hardware/debug-8250.S>
......@@ -221,3 +221,32 @@
1002:
#endif
.endm
/*
* Storage for the state maintained by the macros above.
*
* In the kernel proper, this data is located in arch/arm/mach-tegra/common.c.
* That's because this header is included from multiple files, and we only
* want a single copy of the data. In particular, the UART probing code above
* assumes it's running using physical addresses. This is true when this file
* is included from head.o, but not when included from debug.o. So we need
* to share the probe results between the two copies, rather than having
* to re-run the probing again later.
*
* In the decompressor, we put the symbol/storage right here, since common.c
* isn't included in the decompressor build. This symbol gets put in .text
* even though it's really data, since .data is discarded from the
* decompressor. Luckily, .text is writeable in the decompressor, unless
* CONFIG_ZBOOT_ROM. That dependency is handled in arch/arm/Kconfig.debug.
*/
#if defined(ZIMAGE)
tegra_uart_config:
/* Debug UART initialization required */
.word 1
/* Debug UART physical address */
.word 0
/* Debug UART virtual address */
.word 0
/* Scratch space for debug macro */
.word 0
#endif
/*
* Copyright (C) 2006-2013 ST-Ericsson AB
* License terms: GNU General Public License (GPL) version 2
* Debugging macro include header.
* Author: Linus Walleij <linus.walleij@stericsson.com>
*/
#define U300_SLOW_PER_PHYS_BASE 0xc0010000
#define U300_SLOW_PER_VIRT_BASE 0xff000000
.macro addruart, rp, rv, tmp
/* If we move the address using MMU, use this. */
ldr \rp, = U300_SLOW_PER_PHYS_BASE @ MMU off, physical address
ldr \rv, = U300_SLOW_PER_VIRT_BASE @ MMU on, virtual address
orr \rp, \rp, #0x00003000
orr \rv, \rv, #0x00003000
.endm
#include <asm/hardware/debug-pl01x.S>
......@@ -45,4 +45,4 @@
ldr \rv, =UART_VIRT_BASE @ yes, virtual address
.endm
#include <asm/hardware/debug-pl01x.S>
#include <debug/pl01x.S>
......@@ -47,51 +47,5 @@
.endm
#include <asm/hardware/debug-pl01x.S>
#elif defined(CONFIG_DEBUG_VEXPRESS_UART0_CA9)
.macro addruart,rp,rv,tmp
mov \rp, #DEBUG_LL_UART_OFFSET
orr \rv, \rp, #DEBUG_LL_VIRT_BASE
orr \rp, \rp, #DEBUG_LL_PHYS_BASE
.endm
#include <asm/hardware/debug-pl01x.S>
#elif defined(CONFIG_DEBUG_VEXPRESS_UART0_RS1)
.macro addruart,rp,rv,tmp
mov \rp, #DEBUG_LL_UART_OFFSET_RS1
orr \rv, \rp, #DEBUG_LL_VIRT_BASE
orr \rp, \rp, #DEBUG_LL_PHYS_BASE_RS1
.endm
#include <asm/hardware/debug-pl01x.S>
#elif defined(CONFIG_DEBUG_VEXPRESS_UART0_CRX)
.macro addruart,rp,tmp,tmp2
ldr \rp, =DEBUG_LL_UART_PHYS_CRX
.endm
#include <asm/hardware/debug-pl01x.S>
#else /* CONFIG_DEBUG_LL_UART_NONE */
.macro addruart, rp, rv, tmp
/* Safe dummy values */
mov \rp, #0
mov \rv, #DEBUG_LL_VIRT_BASE
.endm
.macro senduart,rd,rx
.endm
.macro waituart,rd,rx
.endm
.macro busyuart,rd,rx
.endm
#include <debug/pl01x.S>
#endif
......@@ -24,7 +24,7 @@ obj-$(CONFIG_ATAGS_PROC) += atags_proc.o
obj-$(CONFIG_DEPRECATED_PARAM_STRUCT) += atags_compat.o
ifeq ($(CONFIG_CPU_V7M),y)
obj-y += entry-v7m.o
obj-y += entry-v7m.o v7m.o
else
obj-y += entry-armv.o
endif
......
......@@ -7,9 +7,10 @@ static inline void save_atags(struct tag *tags) { }
void convert_to_tag_list(struct tag *tags);
#ifdef CONFIG_ATAGS
struct machine_desc *setup_machine_tags(phys_addr_t __atags_pointer, unsigned int machine_nr);
const struct machine_desc *setup_machine_tags(phys_addr_t __atags_pointer,
unsigned int machine_nr);
#else
static inline struct machine_desc *
static inline const struct machine_desc *
setup_machine_tags(phys_addr_t __atags_pointer, unsigned int machine_nr)
{
early_print("no ATAGS support: can't continue\n");
......
......@@ -178,11 +178,11 @@ static void __init squash_mem_tags(struct tag *tag)
tag->hdr.tag = ATAG_NONE;
}
struct machine_desc * __init setup_machine_tags(phys_addr_t __atags_pointer,
unsigned int machine_nr)
const struct machine_desc * __init
setup_machine_tags(phys_addr_t __atags_pointer, unsigned int machine_nr)
{
struct tag *tags = (struct tag *)&default_tags;
struct machine_desc *mdesc = NULL, *p;
const struct machine_desc *mdesc = NULL, *p;
char *from = default_command_line;
default_tags.mem.start = PHYS_OFFSET;
......
......@@ -176,10 +176,10 @@ void __init arm_dt_init_cpu_maps(void)
* If a dtb was passed to the kernel in r2, then use it to choose the
* correct machine_desc and to setup the system.
*/
struct machine_desc * __init setup_machine_fdt(unsigned int dt_phys)
const struct machine_desc * __init setup_machine_fdt(unsigned int dt_phys)
{
struct boot_param_header *devtree;
struct machine_desc *mdesc, *mdesc_best = NULL;
const struct machine_desc *mdesc, *mdesc_best = NULL;
unsigned int score, mdesc_score = ~1;
unsigned long dt_root;
const char *model;
......@@ -188,7 +188,7 @@ struct machine_desc * __init setup_machine_fdt(unsigned int dt_phys)
DT_MACHINE_START(GENERIC_DT, "Generic DT based system")
MACHINE_END
mdesc_best = (struct machine_desc *)&__mach_desc_GENERIC_DT;
mdesc_best = &__mach_desc_GENERIC_DT;
#endif
if (!dt_phys)
......
......@@ -442,10 +442,10 @@ local_restart:
ldrcc pc, [tbl, scno, lsl #2] @ call sys_* routine
add r1, sp, #S_OFF
2: mov why, #0 @ no longer a real syscall
cmp scno, #(__ARM_NR_BASE - __NR_SYSCALL_BASE)
eor r0, scno, #__NR_SYSCALL_BASE @ put OS number back
bcs arm_syscall
bcs arm_syscall
2: mov why, #0 @ no longer a real syscall
b sys_ni_syscall @ not private func
#if defined(CONFIG_OABI_COMPAT) || !defined(CONFIG_AEABI)
......
......@@ -292,12 +292,20 @@ int module_finalize(const Elf32_Ehdr *hdr, const Elf_Shdr *sechdrs,
maps[ARM_SEC_CORE].unw_sec = s;
else if (strcmp(".ARM.exidx.exit.text", secname) == 0)
maps[ARM_SEC_EXIT].unw_sec = s;
else if (strcmp(".ARM.exidx.text.unlikely", secname) == 0)
maps[ARM_SEC_UNLIKELY].unw_sec = s;
else if (strcmp(".ARM.exidx.text.hot", secname) == 0)
maps[ARM_SEC_HOT].unw_sec = s;
else if (strcmp(".init.text", secname) == 0)
maps[ARM_SEC_INIT].txt_sec = s;
else if (strcmp(".text", secname) == 0)
maps[ARM_SEC_CORE].txt_sec = s;
else if (strcmp(".exit.text", secname) == 0)
maps[ARM_SEC_EXIT].txt_sec = s;
else if (strcmp(".text.unlikely", secname) == 0)
maps[ARM_SEC_UNLIKELY].txt_sec = s;
else if (strcmp(".text.hot", secname) == 0)
maps[ARM_SEC_HOT].txt_sec = s;
}
for (i = 0; i < ARM_SEC_MAX; i++)
......
......@@ -118,7 +118,8 @@ static int cpu_pmu_request_irq(struct arm_pmu *cpu_pmu, irq_handler_t handler)
continue;
}
err = request_irq(irq, handler, IRQF_NOBALANCING, "arm-pmu",
err = request_irq(irq, handler,
IRQF_NOBALANCING | IRQF_NO_THREAD, "arm-pmu",
cpu_pmu);
if (err) {
pr_err("unable to request IRQ%d for ARM PMU counters\n",
......
......@@ -72,10 +72,10 @@ static int __init fpe_setup(char *line)
__setup("fpe=", fpe_setup);
#endif
extern void paging_init(struct machine_desc *desc);
extern void paging_init(const struct machine_desc *desc);
extern void sanity_check_meminfo(void);
extern enum reboot_mode reboot_mode;
extern void setup_dma_zone(struct machine_desc *desc);
extern void setup_dma_zone(const struct machine_desc *desc);
unsigned int processor_id;
EXPORT_SYMBOL(processor_id);
......@@ -139,7 +139,7 @@ EXPORT_SYMBOL(elf_platform);
static const char *cpu_name;
static const char *machine_name;
static char __initdata cmd_line[COMMAND_LINE_SIZE];
struct machine_desc *machine_desc __initdata;
const struct machine_desc *machine_desc __initdata;
static union { char c[4]; unsigned long l; } endian_test __initdata = { { 'l', '?', '?', 'b' } };
#define ENDIANNESS ((char)endian_test.l)
......@@ -607,7 +607,7 @@ static void __init setup_processor(void)
void __init dump_machine_table(void)
{
struct machine_desc *p;
const struct machine_desc *p;
early_print("Available machine support:\n\nID (hex)\tNAME\n");
for_each_machine_desc(p)
......@@ -694,7 +694,7 @@ static int __init early_mem(char *p)
}
early_param("mem", early_mem);
static void __init request_standard_resources(struct machine_desc *mdesc)
static void __init request_standard_resources(const struct machine_desc *mdesc)
{
struct memblock_region *region;
struct resource *res;
......@@ -852,7 +852,7 @@ void __init hyp_mode_check(void)
void __init setup_arch(char **cmdline_p)
{
struct machine_desc *mdesc;
const struct machine_desc *mdesc;
setup_processor();
mdesc = setup_machine_fdt(__atags_pointer);
......@@ -994,15 +994,6 @@ static int c_show(struct seq_file *m, void *v)
seq_printf(m, "model name\t: %s rev %d (%s)\n",
cpu_name, cpuid & 15, elf_platform);
#if defined(CONFIG_SMP)
seq_printf(m, "BogoMIPS\t: %lu.%02lu\n",
per_cpu(cpu_data, i).loops_per_jiffy / (500000UL/HZ),
(per_cpu(cpu_data, i).loops_per_jiffy / (5000UL/HZ)) % 100);
#else
seq_printf(m, "BogoMIPS\t: %lu.%02lu\n",
loops_per_jiffy / (500000/HZ),
(loops_per_jiffy / (5000/HZ)) % 100);
#endif
/* dump out the processor features */
seq_puts(m, "Features\t: ");
......
......@@ -398,17 +398,8 @@ asmlinkage void secondary_start_kernel(void)
void __init smp_cpus_done(unsigned int max_cpus)
{
int cpu;
unsigned long bogosum = 0;
for_each_online_cpu(cpu)
bogosum += per_cpu(cpu_data, cpu).loops_per_jiffy;
printk(KERN_INFO "SMP: Total of %d processors activated "
"(%lu.%02lu BogoMIPS).\n",
num_online_cpus(),
bogosum / (500000/HZ),
(bogosum / (5000/HZ)) % 100);
printk(KERN_INFO "SMP: Total of %d processors activated.\n",
num_online_cpus());
hyp_mode_check();
}
......
......@@ -104,7 +104,7 @@ void flush_tlb_all(void)
if (tlb_ops_need_broadcast())
on_each_cpu(ipi_flush_tlb_all, NULL, 1);
else
local_flush_tlb_all();
__flush_tlb_all();
broadcast_tlb_a15_erratum();
}
......@@ -113,7 +113,7 @@ void flush_tlb_mm(struct mm_struct *mm)
if (tlb_ops_need_broadcast())
on_each_cpu_mask(mm_cpumask(mm), ipi_flush_tlb_mm, mm, 1);
else
local_flush_tlb_mm(mm);
__flush_tlb_mm(mm);
broadcast_tlb_mm_a15_erratum(mm);
}
......@@ -126,7 +126,7 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
on_each_cpu_mask(mm_cpumask(vma->vm_mm), ipi_flush_tlb_page,
&ta, 1);
} else
local_flush_tlb_page(vma, uaddr);
__flush_tlb_page(vma, uaddr);
broadcast_tlb_mm_a15_erratum(vma->vm_mm);
}
......@@ -137,7 +137,7 @@ void flush_tlb_kernel_page(unsigned long kaddr)
ta.ta_start = kaddr;
on_each_cpu(ipi_flush_tlb_kernel_page, &ta, 1);
} else
local_flush_tlb_kernel_page(kaddr);
__flush_tlb_kernel_page(kaddr);
broadcast_tlb_a15_erratum();
}
......@@ -173,5 +173,5 @@ void flush_bp_all(void)
if (tlb_ops_need_broadcast())
on_each_cpu(ipi_flush_bp_all, NULL, 1);
else
local_flush_bp_all();
__flush_bp_all();
}
......@@ -497,28 +497,64 @@ static int bad_syscall(int n, struct pt_regs *regs)
return regs->ARM_r0;
}
static long do_cache_op_restart(struct restart_block *);
static inline int
do_cache_op(unsigned long start, unsigned long end, int flags)
__do_cache_op(unsigned long start, unsigned long end)
{
struct mm_struct *mm = current->active_mm;
struct vm_area_struct *vma;
int ret;
unsigned long chunk = PAGE_SIZE;
do {
if (signal_pending(current)) {
struct thread_info *ti = current_thread_info();
ti->restart_block = (struct restart_block) {
.fn = do_cache_op_restart,
};
ti->arm_restart_block = (struct arm_restart_block) {
{
.cache = {
.start = start,
.end = end,
},
},
};
return -ERESTART_RESTARTBLOCK;
}
ret = flush_cache_user_range(start, start + chunk);
if (ret)
return ret;
cond_resched();
start += chunk;
} while (start < end);
return 0;
}
static long do_cache_op_restart(struct restart_block *unused)
{
struct arm_restart_block *restart_block;
restart_block = &current_thread_info()->arm_restart_block;
return __do_cache_op(restart_block->cache.start,
restart_block->cache.end);
}
static inline int
do_cache_op(unsigned long start, unsigned long end, int flags)
{
if (end < start || flags)
return -EINVAL;
down_read(&mm->mmap_sem);
vma = find_vma(mm, start);
if (vma && vma->vm_start < end) {
if (start < vma->vm_start)
start = vma->vm_start;
if (end > vma->vm_end)
end = vma->vm_end;
if (!access_ok(VERIFY_READ, start, end - start))
return -EFAULT;
up_read(&mm->mmap_sem);
return flush_cache_user_range(start, end);
}
up_read(&mm->mmap_sem);
return -EINVAL;
return __do_cache_op(start, end);
}
/*
......
/*
* Copyright (C) 2013 Uwe Kleine-Koenig for Pengutronix
*
* This program is free software; you can redistribute it and/or modify it under
* the terms of the GNU General Public License version 2 as published by the
* Free Software Foundation.
*/
#include <linux/io.h>
#include <linux/reboot.h>
#include <asm/barrier.h>
#include <asm/v7m.h>
void armv7m_restart(enum reboot_mode mode, const char *cmd)
{
dsb();
__raw_writel(V7M_SCB_AIRCR_VECTKEY | V7M_SCB_AIRCR_SYSRESETREQ,
BASEADDR_V7M_SCB + V7M_SCB_AIRCR);
dsb();
}
......@@ -142,7 +142,7 @@ target: @ We're now in the trampoline code, switch page tables
@ Invalidate the old TLBs
mcr p15, 4, r0, c8, c7, 0 @ TLBIALLH
dsb
dsb ish
eret
......
......@@ -55,7 +55,7 @@ ENTRY(__kvm_tlb_flush_vmid_ipa)
mcrr p15, 6, r2, r3, c2 @ Write VTTBR
isb
mcr p15, 0, r0, c8, c3, 0 @ TLBIALLIS (rt ignored)
dsb
dsb ish
isb
mov r2, #0
mov r3, #0
......@@ -79,7 +79,7 @@ ENTRY(__kvm_flush_vm_context)
mcr p15, 4, r0, c8, c3, 4
/* Invalidate instruction caches Inner Shareable (ICIALLUIS) */
mcr p15, 0, r0, c7, c1, 0
dsb
dsb ish
isb @ Not necessary if followed by eret
bx lr
......
......@@ -489,7 +489,6 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
for (addr = guest_ipa; addr < end; addr += PAGE_SIZE) {
pte_t pte = pfn_pte(pfn, PAGE_S2_DEVICE);
kvm_set_s2pte_writable(&pte);
ret = mmu_topup_memory_cache(&cache, 2, 2);
if (ret)
......
......@@ -45,3 +45,9 @@ lib-$(CONFIG_ARCH_SHARK) += io-shark.o
$(obj)/csumpartialcopy.o: $(obj)/csumpartialcopygeneric.S
$(obj)/csumpartialcopyuser.o: $(obj)/csumpartialcopygeneric.S
ifeq ($(CONFIG_KERNEL_MODE_NEON),y)
NEON_FLAGS := -mfloat-abi=softfp -mfpu=neon
CFLAGS_xor-neon.o += $(NEON_FLAGS)
lib-$(CONFIG_XOR_BLOCKS) += xor-neon.o
endif
/*
* linux/arch/arm/lib/xor-neon.c
*
* Copyright (C) 2013 Linaro Ltd <ard.biesheuvel@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/raid/xor.h>
#ifndef __ARM_NEON__
#error You should compile this file with '-mfloat-abi=softfp -mfpu=neon'
#endif
/*
* Pull in the reference implementations while instructing GCC (through
* -ftree-vectorize) to attempt to exploit implicit parallelism and emit
* NEON instructions.
*/
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)
#pragma GCC optimize "tree-vectorize"
#else
/*
* While older versions of GCC do not generate incorrect code, they fail to
* recognize the parallel nature of these functions, and emit plain ARM code,
* which is known to be slower than the optimized ARM code in asm-arm/xor.h.
*/
#warning This code requires at least version 4.6 of GCC
#endif
#pragma GCC diagnostic ignored "-Wunused-variable"
#include <asm-generic/xor.h>
struct xor_block_template const xor_block_neon_inner = {
.name = "__inner_neon__",
.do_2 = xor_8regs_2,
.do_3 = xor_8regs_3,
.do_4 = xor_8regs_4,
.do_5 = xor_8regs_5,
};
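For context, callers of this template are expected to wrap the NEON path in kernel_neon_begin()/kernel_neon_end() and to fall back to a scalar routine when NEON cannot be used, since NEON is not allowed in interrupt context. A minimal sketch, assuming a scalar fallback named xor_arm4regs_2() (the fallback name is illustrative, not defined by this file):

static void xor_neon_2(unsigned long bytes, unsigned long *p1, unsigned long *p2)
{
	if (in_interrupt()) {
		/* NEON must not be used in interrupt context; use a scalar fallback */
		xor_arm4regs_2(bytes, p1, p2);
	} else {
		kernel_neon_begin();
		xor_block_neon_inner.do_2(bytes, p1, p2);
		kernel_neon_end();
	}
}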
/*
* Debugging macro for DaVinci
*
* Author: Kevin Hilman, MontaVista Software, Inc. <source@mvista.com>
*
* 2007 (c) MontaVista Software, Inc. This file is licensed under
* the terms of the GNU General Public License version 2. This program
* is licensed "as is" without any warranty of any kind, whether express
* or implied.
*/
/* Modifications
* Jan 2009 Chaithrika U S Added senduart, busyuart, waituart
* macros, based on debug-8250.S file
* but using 32-bit accesses required for
* some davinci devices.
*/
#include <linux/serial_reg.h>
#include <mach/serial.h>
#define UART_SHIFT 2
#if defined(CONFIG_DEBUG_DAVINCI_DMx_UART0)
#define UART_BASE DAVINCI_UART0_BASE
#elif defined(CONFIG_DEBUG_DAVINCI_DA8XX_UART1)
#define UART_BASE DA8XX_UART1_BASE
#elif defined(CONFIG_DEBUG_DAVINCI_DA8XX_UART2)
#define UART_BASE DA8XX_UART2_BASE
#elif defined(CONFIG_DEBUG_DAVINCI_TNETV107X_UART1)
#define UART_BASE TNETV107X_UART2_BASE
#define UART_VIRTBASE TNETV107X_UART2_VIRT
#else
#error "Select a specifc port for DEBUG_LL"
#endif
#ifndef UART_VIRTBASE
#define UART_VIRTBASE IO_ADDRESS(UART_BASE)
#endif
.macro addruart, rp, rv, tmp
ldr \rp, =UART_BASE
ldr \rv, =UART_VIRTBASE
.endm
.macro senduart,rd,rx
str \rd, [\rx, #UART_TX << UART_SHIFT]
.endm
.macro busyuart,rd,rx
1002: ldr \rd, [\rx, #UART_LSR << UART_SHIFT]
and \rd, \rd, #UART_LSR_TEMT | UART_LSR_THRE
teq \rd, #UART_LSR_TEMT | UART_LSR_THRE
bne 1002b
.endm
.macro waituart,rd,rx
#ifdef FLOW_CONTROL
1001: ldr \rd, [\rx, #UART_MSR << UART_SHIFT]
tst \rd, #UART_MSR_CTS
beq 1001b
#endif
.endm
/*
* arch/arm/mach-dove/include/mach/debug-macro.S
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <mach/bridge-regs.h>
.macro addruart, rp, rv, tmp
ldr \rp, =DOVE_SB_REGS_PHYS_BASE
ldr \rv, =DOVE_SB_REGS_VIRT_BASE
orr \rp, \rp, #0x00012000
orr \rv, \rv, #0x00012000
.endm
#define UART_SHIFT 2
#include <asm/hardware/debug-8250.S>
/* arch/arm/mach-ebsa110/include/mach/debug-macro.S
*
* Debugging macro include header
*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
**/
.macro addruart, rp, rv, tmp
mov \rp, #0xf0000000
orr \rp, \rp, #0x00000be0
mov \rv, \rp
.endm
#define UART_SHIFT 2
#define FLOW_CONTROL
#include <asm/hardware/debug-8250.S>
......@@ -194,20 +194,6 @@ config MACH_VISION_EP9307
Say 'Y' here if you want your kernel to support the
Vision Engraving Systems EP9307 SoM.
choice
prompt "Select a UART for early kernel messages"
config EP93XX_EARLY_UART1
bool "UART1"
config EP93XX_EARLY_UART2
bool "UART2"
config EP93XX_EARLY_UART3
bool "UART3"
endchoice
endmenu
endif
/*
* arch/arm/mach-ep93xx/include/mach/debug-macro.S
* Debugging macro include header
*
* Copyright (C) 2006 Lennert Buytenhek <buytenh@wantstofly.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or (at
* your option) any later version.
*/
#include <mach/ep93xx-regs.h>
.macro addruart, rp, rv, tmp
ldr \rp, =EP93XX_APB_PHYS_BASE @ Physical base
ldr \rv, =EP93XX_APB_VIRT_BASE @ virtual base
orr \rp, \rp, #0x000c0000
orr \rv, \rv, #0x000c0000
.endm
#include <asm/hardware/debug-pl01x.S>
......@@ -31,18 +31,8 @@ static void __raw_writel(unsigned int value, unsigned int ptr)
*((volatile unsigned int *)ptr) = value;
}
#if defined(CONFIG_EP93XX_EARLY_UART1)
#define UART_BASE EP93XX_UART1_PHYS_BASE
#elif defined(CONFIG_EP93XX_EARLY_UART2)
#define UART_BASE EP93XX_UART2_PHYS_BASE
#elif defined(CONFIG_EP93XX_EARLY_UART3)
#define UART_BASE EP93XX_UART3_PHYS_BASE
#else
#define UART_BASE EP93XX_UART1_PHYS_BASE
#endif
#define PHYS_UART_DATA (UART_BASE + 0x00)
#define PHYS_UART_FLAG (UART_BASE + 0x18)
#define PHYS_UART_DATA (CONFIG_DEBUG_UART_PHYS + 0x00)
#define PHYS_UART_FLAG (CONFIG_DEBUG_UART_PHYS + 0x18)
#define UART_FLAG_TXFF 0x20
static inline void putc(int c)
......
......@@ -13,20 +13,6 @@
#include <asm/hardware/dec21285.h>
#ifndef CONFIG_DEBUG_DC21285_PORT
/* For NetWinder debugging */
.macro addruart, rp, rv, tmp
mov \rp, #0x000003f8
orr \rv, \rp, #0xfe000000 @ virtual
orr \rv, \rv, #0x00e00000 @ virtual
orr \rp, \rp, #0x7c000000 @ physical
.endm
#define UART_SHIFT 0
#define FLOW_CONTROL
#include <asm/hardware/debug-8250.S>
#else
#include <mach/hardware.h>
/* For EBSA285 debugging */
.equ dc21285_high, ARMCSR_BASE & 0xff000000
......@@ -54,4 +40,3 @@
.macro waituart,rd,rx
.endm
#endif
/*
* Debugging macro include header
*
* Copyright (C) 1994-1999 Russell King
* Copyright (C) 2001-2006 Storlink, Corp.
* Copyright (C) 2008-2009 Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <mach/hardware.h>
.macro addruart, rp, rv, tmp
ldr \rp, =GEMINI_UART_BASE @ physical
ldr \rv, =IO_ADDRESS(GEMINI_UART_BASE) @ virtual
.endm
#define UART_SHIFT 2
#define FLOW_CONTROL
#include <asm/hardware/debug-8250.S>
/* arch/arm/mach-integrator/include/mach/debug-macro.S
*
* Debugging macro include header
*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
.macro addruart, rp, rv, tmp
mov \rp, #0x16000000 @ physical base address
mov \rv, #0xf0000000 @ virtual base
add \rv, \rv, #0x16000000 >> 4
.endm
#include <asm/hardware/debug-pl01x.S>
/*
* arch/arm/mach-iop13xx/include/mach/debug-macro.S
*
* Debugging macro include header
*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
.macro addruart, rp, rv, tmp
mov \rp, #0x00002300
orr \rp, \rp, #0x00000040
orr \rv, \rp, #0xfe000000 @ virtual
orr \rv, \rv, #0x00e80000
orr \rp, \rp, #0xff000000 @ physical
orr \rp, \rp, #0x00d80000
.endm
#define UART_SHIFT 2
#include <asm/hardware/debug-8250.S>
/*
* arch/arm/mach-iop32x/include/mach/debug-macro.S
*
* Debugging macro include header
*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
.macro addruart, rp, rv, tmp
mov \rp, #0xfe000000 @ physical as well as virtual
orr \rp, \rp, #0x00800000 @ location of the UART
mov \rv, \rp
.endm
#define UART_SHIFT 0
#include <asm/hardware/debug-8250.S>
/*
* arch/arm/mach-iop33x/include/mach/debug-macro.S
*
* Debugging macro include header
*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
.macro addruart, rp, rv, tmp
mov \rp, #0x00ff0000
orr \rp, \rp, #0x0000f700
orr \rv, \rp, #0xfe000000 @ virtual
orr \rp, \rp, #0xff000000 @ physical
.endm
#define UART_SHIFT 2
#include <asm/hardware/debug-8250.S>
/* arch/arm/mach-ixp4xx/include/mach/debug-macro.S
*
* Debugging macro include header
*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
.macro addruart, rp, rv, tmp
#ifdef __ARMEB__
mov \rp, #3 @ UART regs are at an offset of 3 if
@ byte writes are used - Big Endian.
#else
mov \rp, #0
#endif
orr \rv, \rp, #0xfe000000 @ virtual
orr \rv, \rv, #0x00f00000
orr \rp, \rp, #0xc8000000 @ physical
.endm
#define UART_SHIFT 2
#include <asm/hardware/debug-8250.S>
/*
* arch/arm/mach-kirkwood/include/mach/debug-macro.S
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <mach/bridge-regs.h>
.macro addruart, rp, rv, tmp
ldr \rp, =KIRKWOOD_REGS_PHYS_BASE
ldr \rv, =KIRKWOOD_REGS_VIRT_BASE
orr \rp, \rp, #0x00012000
orr \rv, \rv, #0x00012000
.endm
#define UART_SHIFT 2
#include <asm/hardware/debug-8250.S>
/*
* arch/arm/mach-lpc32xx/include/mach/debug-macro.S
*
* Author: Kevin Wells <kevin.wells@nxp.com>
*
* Copyright (C) 2010 NXP Semiconductors
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
/*
* Debug output is hardcoded to standard UART 5
*/
.macro addruart, rp, rv, tmp
ldreq \rp, =0x40090000
ldrne \rv, =0xF4090000
.endm
#define UART_SHIFT 2
#include <asm/hardware/debug-8250.S>
/*
* arch/arm/mach-mv78xx0/include/mach/debug-macro.S
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <mach/mv78xx0.h>
.macro addruart, rp, rv, tmp
ldr \rp, =MV78XX0_REGS_PHYS_BASE
ldr \rv, =MV78XX0_REGS_VIRT_BASE
orr \rp, \rp, #0x00012000
orr \rv, \rv, #0x00012000
.endm
#define UART_SHIFT 2
#include <asm/hardware/debug-8250.S>
/*
* arch/arm/mach-orion5x/include/mach/debug-macro.S
*
* Debugging macro include header
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <mach/orion5x.h>
.macro addruart, rp, rv, tmp
ldr \rp, =ORION5X_REGS_PHYS_BASE
ldr \rv, =ORION5X_REGS_VIRT_BASE
orr \rp, \rp, #0x00012000
orr \rv, \rv, #0x00012000
.endm
#define UART_SHIFT 2
#include <asm/hardware/debug-8250.S>
/* arch/arm/mach-realview/include/mach/debug-macro.S
*
* Debugging macro include header
*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifdef CONFIG_DEBUG_REALVIEW_STD_PORT
#define DEBUG_LL_UART_OFFSET 0x00009000
#elif defined(CONFIG_DEBUG_REALVIEW_PB1176_PORT)
#define DEBUG_LL_UART_OFFSET 0x0010c000
#endif
#ifndef DEBUG_LL_UART_OFFSET
#error "Unknown RealView platform"
#endif
.macro addruart, rp, rv, tmp
mov \rp, #DEBUG_LL_UART_OFFSET
orr \rv, \rp, #0xfb000000 @ virtual base
orr \rp, \rp, #0x10000000 @ physical base
.endm
#include <asm/hardware/debug-pl01x.S>
/* arch/arm/mach-rpc/include/mach/debug-macro.S
*
* Debugging macro include header
*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
.macro addruart, rp, rv, tmp
mov \rp, #0x00010000
orr \rp, \rp, #0x00000fe0
orr \rv, \rp, #0xe0000000 @ virtual
orr \rp, \rp, #0x03000000 @ physical
.endm
#define UART_SHIFT 2
#define FLOW_CONTROL
#include <asm/hardware/debug-8250.S>
/*
* arch/arm/plat-spear/include/plat/debug-macro.S
*
* Debugging macro include header for spear platform
*
* Copyright (C) 2009 ST Microelectronics
* Viresh Kumar <viresh.linux@gmail.com>
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
* warranty of any kind, whether express or implied.
*/
#include <linux/amba/serial.h>
#include <mach/spear.h>
.macro addruart, rp, rv, tmp
mov \rp, #SPEAR_DBG_UART_BASE @ Physical base
mov \rv, #VA_SPEAR_DBG_UART_BASE @ Virtual base
.endm
.macro senduart, rd, rx
strb \rd, [\rx, #UART01x_DR] @ ASC_TX_BUFFER
.endm
.macro waituart, rd, rx
1001: ldr \rd, [\rx, #UART01x_FR] @ FLAG REGISTER
tst \rd, #UART01x_FR_TXFF @ TX_FULL
bne 1001b
.endm
.macro busyuart, rd, rx
1002: ldr \rd, [\rx, #UART01x_FR] @ FLAG REGISTER
tst \rd, #UART011_FR_TXFE @ TX_EMPTY
beq 1002b
.endm
......@@ -39,7 +39,6 @@
/* Debug uart for linux, will be used for debug and uncompress messages */
#define SPEAR_DBG_UART_BASE SPEAR_ICM1_UART_BASE
#define VA_SPEAR_DBG_UART_BASE VA_SPEAR_ICM1_UART_BASE
/* Sysctl base for spear platform */
#define SPEAR_SYS_CTRL_BASE SPEAR_ICM3_SYS_CTRL_BASE
......@@ -86,7 +85,6 @@
/* Debug uart for linux, will be used for debug and uncompress messages */
#define SPEAR_DBG_UART_BASE UART_BASE
#define VA_SPEAR_DBG_UART_BASE VA_UART_BASE
#endif /* SPEAR13XX */
......
/* arch/arm/mach-versatile/include/mach/debug-macro.S
*
* Debugging macro include header
*
* Copyright (C) 1994-1999 Russell King
* Moved from linux/arch/arm/kernel/debug.S by Ben Dooks
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
.macro addruart, rp, rv, tmp
mov \rp, #0x001F0000
orr \rp, \rp, #0x00001000
orr \rv, \rp, #0xf1000000 @ virtual base
orr \rp, \rp, #0x10000000 @ physical base
.endm
#include <asm/hardware/debug-pl01x.S>
......@@ -290,7 +290,7 @@ static void l2x0_disable(void)
raw_spin_lock_irqsave(&l2x0_lock, flags);
__l2x0_flush_all();
writel_relaxed(0, l2x0_base + L2X0_CTRL);
dsb();
dsb(st);
raw_spin_unlock_irqrestore(&l2x0_lock, flags);
}
......@@ -417,9 +417,9 @@ void __init l2x0_init(void __iomem *base, u32 aux_val, u32 aux_mask)
outer_cache.disable = l2x0_disable;
}
printk(KERN_INFO "%s cache controller enabled\n", type);
printk(KERN_INFO "l2x0: %d ways, CACHE_ID 0x%08x, AUX_CTRL 0x%08x, Cache size: %d B\n",
ways, cache_id, aux, l2x0_size);
pr_info("%s cache controller enabled\n", type);
pr_info("l2x0: %d ways, CACHE_ID 0x%08x, AUX_CTRL 0x%08x, Cache size: %d kB\n",
ways, cache_id, aux, l2x0_size >> 10);
}
#ifdef CONFIG_OF
......@@ -929,7 +929,9 @@ static const struct of_device_id l2x0_ids[] __initconst = {
.data = (void *)&aurora_no_outer_data},
{ .compatible = "marvell,aurora-outer-cache",
.data = (void *)&aurora_with_outer_data},
{ .compatible = "bcm,bcm11351-a2-pl310-cache",
{ .compatible = "brcm,bcm11351-a2-pl310-cache",
.data = (void *)&bcm_l2x0_data},
{ .compatible = "bcm,bcm11351-a2-pl310-cache", /* deprecated name */
.data = (void *)&bcm_l2x0_data},
{}
};
......
......@@ -282,7 +282,7 @@ ENTRY(v7_coherent_user_range)
add r12, r12, r2
cmp r12, r1
blo 1b
dsb
dsb ishst
icache_line_size r2, r3
sub r3, r2, #1
bic r12, r0, r3
......@@ -294,7 +294,7 @@ ENTRY(v7_coherent_user_range)
mov r0, #0
ALT_SMP(mcr p15, 0, r0, c7, c1, 6) @ invalidate BTB Inner Shareable
ALT_UP(mcr p15, 0, r0, c7, c5, 6) @ invalidate BTB
dsb
dsb ishst
isb
mov pc, lr
......
......@@ -162,10 +162,7 @@ static void flush_context(unsigned int cpu)
}
/* Queue a TLB invalidate and flush the I-cache if necessary. */
if (!tlb_ops_need_broadcast())
cpumask_set_cpu(cpu, &tlb_flush_pending);
else
cpumask_setall(&tlb_flush_pending);
cpumask_setall(&tlb_flush_pending);
if (icache_is_vivt_asid_tagged())
__flush_icache_all();
......@@ -245,8 +242,6 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending)) {
local_flush_bp_all();
local_flush_tlb_all();
if (erratum_a15_798181())
dummy_flush_tlb_a15_erratum();
}
atomic64_set(&per_cpu(active_asids, cpu), asid);
......
......@@ -455,7 +455,6 @@ static void __dma_remap(struct page *page, size_t size, pgprot_t prot)
unsigned end = start + size;
apply_to_page_range(&init_mm, start, size, __dma_update_pte, &prot);
dsb();
flush_tlb_kernel_range(start, end);
}
......
......@@ -36,22 +36,6 @@
* of type casting from pmd_t * to pte_t *.
*/
pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
{
pgd_t *pgd;
pud_t *pud;
pmd_t *pmd = NULL;
pgd = pgd_offset(mm, addr);
if (pgd_present(*pgd)) {
pud = pud_offset(pgd, addr);
if (pud_present(*pud))
pmd = pmd_offset(pud, addr);
}
return (pte_t *)pmd;
}
struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
int write)
{
......@@ -68,33 +52,6 @@ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
return 0;
}
pte_t *huge_pte_alloc(struct mm_struct *mm,
unsigned long addr, unsigned long sz)
{
pgd_t *pgd;
pud_t *pud;
pte_t *pte = NULL;
pgd = pgd_offset(mm, addr);
pud = pud_alloc(mm, pgd, addr);
if (pud)
pte = (pte_t *)pmd_alloc(mm, pud, addr);
return pte;
}
struct page *
follow_huge_pmd(struct mm_struct *mm, unsigned long address,
pmd_t *pmd, int write)
{
struct page *page;
page = pte_page(*(pte_t *)pmd);
if (page)
page += ((address & ~PMD_MASK) >> PAGE_SHIFT);
return page;
}
int pmd_huge(pmd_t pmd)
{
return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
......
......@@ -231,7 +231,7 @@ static void __init arm_adjust_dma_zone(unsigned long *size, unsigned long *hole,
}
#endif
void __init setup_dma_zone(struct machine_desc *mdesc)
void __init setup_dma_zone(const struct machine_desc *mdesc)
{
#ifdef CONFIG_ZONE_DMA
if (mdesc->dma_zone_size) {
......@@ -335,7 +335,8 @@ phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align)
return phys;
}
void __init arm_memblock_init(struct meminfo *mi, struct machine_desc *mdesc)
void __init arm_memblock_init(struct meminfo *mi,
const struct machine_desc *mdesc)
{
int i;
......
......@@ -1186,7 +1186,7 @@ void __init arm_mm_memblock_reserve(void)
* called function. This means you can't use any function or debugging
* method which may touch any device, otherwise the kernel _will_ crash.
*/
static void __init devicemaps_init(struct machine_desc *mdesc)
static void __init devicemaps_init(const struct machine_desc *mdesc)
{
struct map_desc map;
unsigned long addr;
......@@ -1319,7 +1319,7 @@ static void __init map_lowmem(void)
* paging_init() sets up the page tables, initialises the zone memory
* maps, and sets up the zero page, bad page and bad page tables.
*/
void __init paging_init(struct machine_desc *mdesc)
void __init paging_init(const struct machine_desc *mdesc)
{
void *zero_page;
......
......@@ -299,7 +299,7 @@ void __init sanity_check_meminfo(void)
* paging_init() sets up the page tables, initialises the zone memory
* maps, and sets up the zero page, bad page and bad page tables.
*/
void __init paging_init(struct machine_desc *mdesc)
void __init paging_init(const struct machine_desc *mdesc)
{
early_trap_init((void *)CONFIG_VECTORS_BASE);
mpu_setup();
......
......@@ -514,6 +514,32 @@ ENTRY(cpu_feroceon_set_pte_ext)
#endif
mov pc, lr
/* Suspend/resume support: taken from arch/arm/mm/proc-arm926.S */
.globl cpu_feroceon_suspend_size
.equ cpu_feroceon_suspend_size, 4 * 3
#ifdef CONFIG_ARM_CPU_SUSPEND
ENTRY(cpu_feroceon_do_suspend)
stmfd sp!, {r4 - r6, lr}
mrc p15, 0, r4, c13, c0, 0 @ PID
mrc p15, 0, r5, c3, c0, 0 @ Domain ID
mrc p15, 0, r6, c1, c0, 0 @ Control register
stmia r0, {r4 - r6}
ldmfd sp!, {r4 - r6, pc}
ENDPROC(cpu_feroceon_do_suspend)
ENTRY(cpu_feroceon_do_resume)
mov ip, #0
mcr p15, 0, ip, c8, c7, 0 @ invalidate I+D TLBs
mcr p15, 0, ip, c7, c7, 0 @ invalidate I+D caches
ldmia r0, {r4 - r6}
mcr p15, 0, r4, c13, c0, 0 @ PID
mcr p15, 0, r5, c3, c0, 0 @ Domain ID
mcr p15, 0, r1, c2, c0, 0 @ TTB address
mov r0, r6 @ control register
b cpu_resume_mmu
ENDPROC(cpu_feroceon_do_resume)
#endif
.type __feroceon_setup, #function
__feroceon_setup:
mov r0, #0
......
......@@ -83,7 +83,7 @@ ENTRY(cpu_v7_dcache_clean_area)
add r0, r0, r2
subs r1, r1, r2
bhi 2b
dsb
dsb ishst
mov pc, lr
ENDPROC(cpu_v7_dcache_clean_area)
......@@ -330,7 +330,19 @@ __v7_setup:
1:
#endif
3: mov r10, #0
/* Cortex-A15 Errata */
3: ldr r10, =0x00000c0f @ Cortex-A15 primary part number
teq r0, r10
bne 4f
#ifdef CONFIG_ARM_ERRATA_773022
cmp r6, #0x4 @ only present up to r0p4
mrcle p15, 0, r10, c1, c0, 1 @ read aux control register
orrle r10, r10, #1 << 1 @ disable loop buffer
mcrle p15, 0, r10, c1, c0, 1 @ write aux control register
#endif
4: mov r10, #0
mcr p15, 0, r10, c7, c5, 0 @ I+BTB cache invalidate
dsb
#ifdef CONFIG_MMU
......
......@@ -35,7 +35,7 @@
ENTRY(v7wbi_flush_user_tlb_range)
vma_vm_mm r3, r2 @ get vma->vm_mm
mmid r3, r3 @ get vm_mm->context.id
dsb
dsb ish
mov r0, r0, lsr #PAGE_SHIFT @ align address
mov r1, r1, lsr #PAGE_SHIFT
asid r3, r3 @ mask ASID
......@@ -56,7 +56,7 @@ ENTRY(v7wbi_flush_user_tlb_range)
add r0, r0, #PAGE_SZ
cmp r0, r1
blo 1b
dsb
dsb ish
mov pc, lr
ENDPROC(v7wbi_flush_user_tlb_range)
......@@ -69,7 +69,7 @@ ENDPROC(v7wbi_flush_user_tlb_range)
* - end - end address (exclusive, may not be aligned)
*/
ENTRY(v7wbi_flush_kern_tlb_range)
dsb
dsb ish
mov r0, r0, lsr #PAGE_SHIFT @ align address
mov r1, r1, lsr #PAGE_SHIFT
mov r0, r0, lsl #PAGE_SHIFT
......@@ -84,7 +84,7 @@ ENTRY(v7wbi_flush_kern_tlb_range)
add r0, r0, #PAGE_SZ
cmp r0, r1
blo 1b
dsb
dsb ish
isb
mov pc, lr
ENDPROC(v7wbi_flush_kern_tlb_range)
......
......@@ -78,6 +78,11 @@
ENTRY(vfp_support_entry)
DBGSTR3 "instr %08x pc %08x state %p", r0, r2, r10
ldr r3, [sp, #S_PSR] @ Neither lazy restore nor FP exceptions
and r3, r3, #MODE_MASK @ are supported in kernel mode
teq r3, #USR_MODE
bne vfp_kmode_exception @ Returns through lr
VFPFMRX r1, FPEXC @ Is the VFP enabled?
DBGSTR1 "fpexc %08x", r1
tst r1, #FPEXC_EN
......
......@@ -20,6 +20,7 @@
#include <linux/init.h>
#include <linux/uaccess.h>
#include <linux/user.h>
#include <linux/export.h>
#include <asm/cp15.h>
#include <asm/cputype.h>
......@@ -648,6 +649,72 @@ static int vfp_hotplug(struct notifier_block *b, unsigned long action,
return NOTIFY_OK;
}
void vfp_kmode_exception(void)
{
/*
* If we reach this point, a floating point exception has been raised
* while running in kernel mode. If the NEON/VFP unit was enabled at the
* time, it means a VFP instruction has been issued that requires
* software assistance to complete, something which is not currently
* supported in kernel mode.
* If the NEON/VFP unit was disabled, and the location pointed to below
* is properly preceded by a call to kernel_neon_begin(), something has
* caused the task to be scheduled out and back in again. In this case,
* rebuilding and running with CONFIG_DEBUG_ATOMIC_SLEEP enabled should
* be helpful in localizing the problem.
*/
if (fmrx(FPEXC) & FPEXC_EN)
pr_crit("BUG: unsupported FP instruction in kernel mode\n");
else
pr_crit("BUG: FP instruction issued in kernel mode with FP unit disabled\n");
}
#ifdef CONFIG_KERNEL_MODE_NEON
/*
* Kernel-side NEON support functions
*/
void kernel_neon_begin(void)
{
struct thread_info *thread = current_thread_info();
unsigned int cpu;
u32 fpexc;
/*
* Kernel mode NEON is only allowed outside of interrupt context
* with preemption disabled. This will make sure that the kernel
* mode NEON register contents never need to be preserved.
*/
BUG_ON(in_interrupt());
cpu = get_cpu();
fpexc = fmrx(FPEXC) | FPEXC_EN;
fmxr(FPEXC, fpexc);
/*
* Save the userland NEON/VFP state. Under UP,
* the owner could be a task other than 'current'
*/
if (vfp_state_in_hw(cpu, thread))
vfp_save_state(&thread->vfpstate, fpexc);
#ifndef CONFIG_SMP
else if (vfp_current_hw_state[cpu] != NULL)
vfp_save_state(vfp_current_hw_state[cpu], fpexc);
#endif
vfp_current_hw_state[cpu] = NULL;
}
EXPORT_SYMBOL(kernel_neon_begin);
void kernel_neon_end(void)
{
/* Disable the NEON/VFP unit. */
fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_EN);
put_cpu();
}
EXPORT_SYMBOL(kernel_neon_end);
#endif /* CONFIG_KERNEL_MODE_NEON */
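A minimal usage sketch of the pair above: my_neon_routine() is a hypothetical helper that must live in a separate compilation unit built with '-mfloat-abi=softfp -mfpu=neon', and the region between the two calls must not sleep:

#include <asm/neon.h>

/* hypothetical NEON helper, compiled in its own unit with the NEON flags */
void my_neon_routine(void *dst, const void *src, int len);

static void process_buffer(void *dst, const void *src, int len)
{
	kernel_neon_begin();		/* enables the NEON/VFP unit, disables preemption */
	my_neon_routine(dst, src, len);	/* NEON code only, no sleeping */
	kernel_neon_end();		/* disables the unit and re-enables preemption */
}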
/*
* VFP support code initialisation.
*/
......@@ -731,4 +798,4 @@ static int __init vfp_init(void)
return 0;
}
late_initcall(vfp_init);
core_initcall(vfp_init);
......@@ -114,6 +114,11 @@ extern const struct raid6_recov_calls raid6_recov_intx1;
extern const struct raid6_recov_calls raid6_recov_ssse3;
extern const struct raid6_recov_calls raid6_recov_avx2;
extern const struct raid6_calls raid6_neonx1;
extern const struct raid6_calls raid6_neonx2;
extern const struct raid6_calls raid6_neonx4;
extern const struct raid6_calls raid6_neonx8;
/* Algorithm list */
extern const struct raid6_calls * const raid6_algos[];
extern const struct raid6_recov_calls *const raid6_recov_algos[];
......
......@@ -2,3 +2,4 @@ mktables
altivec*.c
int*.c
tables.c
neon?.c
......@@ -5,6 +5,7 @@ raid6_pq-y += algos.o recov.o tables.o int1.o int2.o int4.o \
raid6_pq-$(CONFIG_X86) += recov_ssse3.o recov_avx2.o mmx.o sse1.o sse2.o avx2.o
raid6_pq-$(CONFIG_ALTIVEC) += altivec1.o altivec2.o altivec4.o altivec8.o
raid6_pq-$(CONFIG_KERNEL_MODE_NEON) += neon.o neon1.o neon2.o neon4.o neon8.o
hostprogs-y += mktables
......@@ -16,6 +17,21 @@ ifeq ($(CONFIG_ALTIVEC),y)
altivec_flags := -maltivec -mabi=altivec
endif
# The GCC option -ffreestanding is required in order to compile code containing
# ARM/NEON intrinsics in a non C99-compliant environment (such as the kernel)
ifeq ($(CONFIG_KERNEL_MODE_NEON),y)
NEON_FLAGS := -ffreestanding
ifeq ($(ARCH),arm)
NEON_FLAGS += -mfloat-abi=softfp -mfpu=neon
endif
ifeq ($(ARCH),arm64)
CFLAGS_REMOVE_neon1.o += -mgeneral-regs-only
CFLAGS_REMOVE_neon2.o += -mgeneral-regs-only
CFLAGS_REMOVE_neon4.o += -mgeneral-regs-only
CFLAGS_REMOVE_neon8.o += -mgeneral-regs-only
endif
endif
targets += int1.c
$(obj)/int1.c: UNROLL := 1
$(obj)/int1.c: $(src)/int.uc $(src)/unroll.awk FORCE
......@@ -70,6 +86,30 @@ $(obj)/altivec8.c: UNROLL := 8
$(obj)/altivec8.c: $(src)/altivec.uc $(src)/unroll.awk FORCE
$(call if_changed,unroll)
CFLAGS_neon1.o += $(NEON_FLAGS)
targets += neon1.c
$(obj)/neon1.c: UNROLL := 1
$(obj)/neon1.c: $(src)/neon.uc $(src)/unroll.awk FORCE
$(call if_changed,unroll)
CFLAGS_neon2.o += $(NEON_FLAGS)
targets += neon2.c
$(obj)/neon2.c: UNROLL := 2
$(obj)/neon2.c: $(src)/neon.uc $(src)/unroll.awk FORCE
$(call if_changed,unroll)
CFLAGS_neon4.o += $(NEON_FLAGS)
targets += neon4.c
$(obj)/neon4.c: UNROLL := 4
$(obj)/neon4.c: $(src)/neon.uc $(src)/unroll.awk FORCE
$(call if_changed,unroll)
CFLAGS_neon8.o += $(NEON_FLAGS)
targets += neon8.c
$(obj)/neon8.c: UNROLL := 8
$(obj)/neon8.c: $(src)/neon.uc $(src)/unroll.awk FORCE
$(call if_changed,unroll)
quiet_cmd_mktable = TABLE $@
cmd_mktable = $(obj)/mktables > $@ || ( rm -f $@ && exit 1 )
......
......@@ -70,6 +70,12 @@ const struct raid6_calls * const raid6_algos[] = {
&raid6_intx2,
&raid6_intx4,
&raid6_intx8,
#ifdef CONFIG_KERNEL_MODE_NEON
&raid6_neonx1,
&raid6_neonx2,
&raid6_neonx4,
&raid6_neonx8,
#endif
NULL
};
......
/*
* linux/lib/raid6/neon.c - RAID6 syndrome calculation using ARM NEON intrinsics
*
* Copyright (C) 2013 Linaro Ltd <ard.biesheuvel@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/raid/pq.h>
#ifdef __KERNEL__
#include <asm/neon.h>
#else
#define kernel_neon_begin()
#define kernel_neon_end()
#define cpu_has_neon() (1)
#endif
/*
* There are 2 reasons these wrappers are kept in a separate compilation unit
* from the actual implementations in neonN.c (generated from neon.uc by
* unroll.awk):
* - the actual implementations use NEON intrinsics, and the GCC support header
* (arm_neon.h) is not fully compatible (type wise) with the kernel;
* - the neonN.c files are compiled with -mfpu=neon and optimization enabled,
* and we have to make sure that we never use *any* NEON/VFP instructions
* outside a kernel_neon_begin()/kernel_neon_end() pair.
*/
#define RAID6_NEON_WRAPPER(_n) \
static void raid6_neon ## _n ## _gen_syndrome(int disks, \
size_t bytes, void **ptrs) \
{ \
void raid6_neon ## _n ## _gen_syndrome_real(int, \
unsigned long, void**); \
kernel_neon_begin(); \
raid6_neon ## _n ## _gen_syndrome_real(disks, \
(unsigned long)bytes, ptrs); \
kernel_neon_end(); \
} \
struct raid6_calls const raid6_neonx ## _n = { \
raid6_neon ## _n ## _gen_syndrome, \
raid6_have_neon, \
"neonx" #_n, \
0 \
}
static int raid6_have_neon(void)
{
return cpu_has_neon();
}
RAID6_NEON_WRAPPER(1);
RAID6_NEON_WRAPPER(2);
RAID6_NEON_WRAPPER(4);
RAID6_NEON_WRAPPER(8);
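For readability, the first invocation above, RAID6_NEON_WRAPPER(1), expands to roughly the following:

static void raid6_neon1_gen_syndrome(int disks, size_t bytes, void **ptrs)
{
	void raid6_neon1_gen_syndrome_real(int, unsigned long, void **);

	kernel_neon_begin();
	raid6_neon1_gen_syndrome_real(disks, (unsigned long)bytes, ptrs);
	kernel_neon_end();
}
struct raid6_calls const raid6_neonx1 = {
	raid6_neon1_gen_syndrome,
	raid6_have_neon,
	"neonx1",
	0
};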