Commit bb0fd7ab authored by Linus Torvalds's avatar Linus Torvalds

Merge branch 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm

Pull ARM updates from Russell King:
 "Included in this update are both some long term fixes and some new
  features.

  Fixes:

   - An integer overflow in the calculation of ELF_ET_DYN_BASE.

   - Avoiding OOMs for high-order IOMMU allocations

   - SMP requires the data cache to be enabled for synchronisation
     primitives to work, so prevent the CPU_DCACHE_DISABLE option being
     visible on SMP builds.

   - A bug going back 10+ years in the noMMU ARM94* CPU support code,
     where it corrupts registers.  Found by folk getting Linux running
     on their cameras.

   - Versatile Express needs an errata workaround enabled for CPU
     hot-unplug to work.

  Features:

   - Clean up module linker by handling out of range relocations
     separately from relocation cases we don't handle.

   - Fix a long-term bug in the pci_mmap_page_range() code, which we
     hope won't impact userspace (we hope there are no users of the
     existing broken interface).

   - Don't map DMA coherent allocations when we don't have a MMU.

   - Drop experimental status for SMP_ON_UP.

   - Warn when DT doesn't specify ePAPR mandatory cache properties.

   - Add documentation concerning how we find the start of physical
     memory for AUTO_ZRELADDR kernels, detailing why we have chosen the
     mask and the implications of changing it.

   - Updates from Ard Biesheuvel to address some issues with large
     kernels (such as allyesconfig) failing to link.

   - Allow hibernation to work on modern (ARMv7) CPUs - this appears to
     have never worked in the past on these CPUs.

   - Enable IRQ_SHOW_LEVEL, which changes the /proc/interrupts output
     format (hopefully without userspace breaking...  let's hope that if
     it causes someone a problem, they tell us.)

   - Fix tegra-ahb DT offsets.

   - Rework the ARM errata 643719 code (and the ARMv7 flush_cache_louis()/
     flush_dcache_all() code) to be more efficient, and enable this
     errata workaround by default for ARMv7+SMP CPUs.  This complements
     the Versatile Express fix above.

   - Rework ARMv7 context code for errata 430973, so that only Cortex A8
     CPUs are impacted by the branch target buffer flush when this
     errata is enabled.  Also update the help text to indicate that all
     r1p* A8 CPUs are impacted.

   - Switch ARM to the generic show_mem() implementation, it conveys all
     the information which we were already reporting.

   - Prevent slow timer sources being used for udelay() - timers running
     at less than 1MHz are not useful for this, and can cause udelay()
     to return immediately, without any wait.  Using such a slow timer
     is silly.

   - VDSO support for 32-bit ARM, mainly for gettimeofday() using the
     ARM architected timer.

   - Perf support for Scorpion performance monitoring units"

vdso semantic conflict fixed up as per linux-next.

* 'for-linus' of git://ftp.arm.linux.org.uk/~rmk/linux-arm: (52 commits)
  ARM: update errata 430973 documentation to cover Cortex A8 r1p*
  ARM: ensure delay timer has sufficient accuracy for delays
  ARM: switch to use the generic show_mem() implementation
  ARM: proc-v7: avoid errata 430973 workaround for non-Cortex A8 CPUs
  ARM: enable ARM errata 643719 workaround by default
  ARM: cache-v7: optimise test for Cortex A9 r0pX devices
  ARM: cache-v7: optimise branches in v7_flush_cache_louis
  ARM: cache-v7: consolidate initialisation of cache level index
  ARM: cache-v7: shift CLIDR to extract appropriate field before masking
  ARM: cache-v7: use movw/movt instructions
  ARM: allow 16-bit instructions in ALT_UP()
  ARM: proc-arm94*.S: fix setup function
  ARM: vexpress: fix CPU hotplug with CT9x4 tile.
  ARM: 8276/1: Make CPU_DCACHE_DISABLE depend on !SMP
  ARM: 8335/1: Documentation: DT bindings: Tegra AHB: document the legacy base address
  ARM: 8334/1: amba: tegra-ahb: detect and correct bogus base address
  ARM: 8333/1: amba: tegra-ahb: fix register offsets in the macros
  ARM: 8339/1: Enable CONFIG_GENERIC_IRQ_SHOW_LEVEL
  ARM: 8338/1: kexec: Relax SMP validation to improve DT compatibility
  ARM: 8337/1: mm: Do not invoke OOM for higher order IOMMU DMA allocations
  ...
parents bdfa54df 4b2f8838
@@ -18,6 +18,8 @@ Required properties:
	"arm,arm11mpcore-pmu"
	"arm,arm1176-pmu"
	"arm,arm1136-pmu"
	"qcom,scorpion-pmu"
	"qcom,scorpion-mp-pmu"
	"qcom,krait-pmu"
- interrupts : 1 combined interrupt or 1 per core. If the interrupt is a per-cpu
  interrupt (PPI) then 1 interrupt should be specified.
......
@@ -5,9 +5,12 @@ Required properties:
  Tegra30, must contain "nvidia,tegra30-ahb".  Otherwise, must contain
  '"nvidia,<chip>-ahb", "nvidia,tegra30-ahb"' where <chip> is tegra124,
  tegra132, or tegra210.
- reg : Should contain 1 register ranges(address and length). For
  Tegra20, Tegra30, and Tegra114 chips, the value must be <0x6000c004
  0x10c>. For Tegra124, Tegra132 and Tegra210 chips, the value should
  be <0x6000c000 0x150>.

Example (for a Tegra20 chip):
	ahb: ahb@6000c004 {
		compatible = "nvidia,tegra20-ahb";
		reg = <0x6000c004 0x10c>; /* AHB Arbitration + Gizmo Controller */
......
@@ -21,6 +21,7 @@ config ARM
	select GENERIC_IDLE_POLL_SETUP
	select GENERIC_IRQ_PROBE
	select GENERIC_IRQ_SHOW
	select GENERIC_IRQ_SHOW_LEVEL
	select GENERIC_PCI_IOMAP
	select GENERIC_SCHED_CLOCK
	select GENERIC_SMP_IDLE_THREAD
@@ -1063,7 +1064,7 @@ config ARM_ERRATA_430973
	depends on CPU_V7
	help
	  This option enables the workaround for the 430973 Cortex-A8
-	  (r1p0..r1p2) erratum. If a code sequence containing an ARM/Thumb
+	  r1p* erratum. If a code sequence containing an ARM/Thumb
	  interworking branch is replaced with another code sequence at the
	  same virtual address, whether due to self-modifying code or virtual
	  to physical address re-mapping, Cortex-A8 does not recover from the
@@ -1132,6 +1133,7 @@ config ARM_ERRATA_742231
config ARM_ERRATA_643719
	bool "ARM errata: LoUIS bit field in CLIDR register is incorrect"
	depends on CPU_V7 && SMP
	default y
	help
	  This option enables the workaround for the 643719 Cortex-A9 (prior to
	  r1p0) erratum. On affected cores the LoUIS bit field of the CLIDR
@@ -1349,7 +1351,7 @@ config SMP
	  If you don't know what to do here, say N.

config SMP_ON_UP
-	bool "Allow booting SMP kernel on uniprocessor systems (EXPERIMENTAL)"
+	bool "Allow booting SMP kernel on uniprocessor systems"
	depends on SMP && !XIP_KERNEL && MMU
	default y
	help
......
@@ -13,7 +13,7 @@
# Ensure linker flags are correct
LDFLAGS		:=
-LDFLAGS_vmlinux	:=-p --no-undefined -X
+LDFLAGS_vmlinux	:=-p --no-undefined -X --pic-veneer
ifeq ($(CONFIG_CPU_ENDIAN_BE8),y)
LDFLAGS_vmlinux	+= --be8
LDFLAGS_MODULE	+= --be8
@@ -264,6 +264,7 @@ core-$(CONFIG_FPE_FASTFPE)	+= $(FASTFPE_OBJ)
core-$(CONFIG_VFP)		+= arch/arm/vfp/
core-$(CONFIG_XEN)		+= arch/arm/xen/
core-$(CONFIG_KVM_ARM_HOST)	+= arch/arm/kvm/
core-$(CONFIG_VDSO)		+= arch/arm/vdso/

# If we have a machine-specific directory, then include it in the build.
core-y				+= arch/arm/kernel/ arch/arm/mm/ arch/arm/common/
@@ -321,6 +322,12 @@ dtbs: prepare scripts
dtbs_install:
	$(Q)$(MAKE) $(dtbinst)=$(boot)/dts

PHONY += vdso_install
vdso_install:
ifeq ($(CONFIG_VDSO),y)
	$(Q)$(MAKE) $(build)=arch/arm/vdso $@
endif

# We use MRPROPER_FILES and CLEAN_FILES now
archclean:
	$(Q)$(MAKE) $(clean)=$(boot)
@@ -345,4 +352,5 @@ define archhelp
  echo  '  Install using (your) ~/bin/$(INSTALLKERNEL) or'
  echo  '  (distribution) /sbin/$(INSTALLKERNEL) or'
  echo  '  install to $$(INSTALL_PATH) and run lilo'
  echo  '  vdso_install  - Install unstripped vdso.so to $$(INSTALL_MOD_PATH)/vdso'
endef
@@ -10,8 +10,11 @@
 */
#include <linux/linkage.h>
#include <asm/assembler.h>
#include <asm/v7m.h>
AR_CLASS( .arch armv7-a )
M_CLASS( .arch armv7-m )
-		.arch	armv7-a
/*
 * Debugging stuff
 *
@@ -114,7 +117,12 @@
 * sort out different calling conventions
 */
		.align
-		.arm				@ Always enter in ARM state
		/*
* Always enter in ARM state for CPUs that support the ARM ISA.
* As of today (2014) that's exactly the members of the A and R
* classes.
*/
AR_CLASS( .arm )
start:
		.type	start,#function
		.rept	7
@@ -133,13 +141,14 @@ start:
 THUMB(		.thumb			)
1:
 ARM_BE8(	setend	be		)	@ go BE8 if compiled for BE8
-		mrs	r9, cpsr
+ AR_CLASS(	mrs	r9, cpsr	)
#ifdef CONFIG_ARM_VIRT_EXT
		bl	__hyp_stub_install	@ get into SVC mode, reversibly
#endif
		mov	r7, r1			@ save architecture ID
		mov	r8, r2			@ save atags pointer
#ifndef CONFIG_CPU_V7M
		/*
		 * Booting from Angel - need to enter SVC mode and disable
		 * FIQs/IRQs (numeric definitions from angel arm.h source).
@@ -155,6 +164,7 @@ not_angel:
		safe_svcmode_maskall r0
		msr	spsr_cxsf, r9		@ Save the CPU boot mode in
						@ SPSR
#endif
		/*
		 * Note that some cache flushing and other stuff may
		 * be needed here - is there an Angel SWI call for this?
@@ -168,9 +178,26 @@ not_angel:
		.text
#ifdef CONFIG_AUTO_ZRELADDR
-		@ determine final kernel image address
		/*
* Find the start of physical memory. As we are executing
* without the MMU on, we are in the physical address space.
* We just need to get rid of any offset by aligning the
* address.
*
* This alignment is a balance between the requirements of
* different platforms - we have chosen 128MB to allow
* platforms which align the start of their physical memory
* to 128MB to use this feature, while allowing the zImage
* to be placed within the first 128MB of memory on other
* platforms. Increasing the alignment means we place
* stricter alignment requirements on the start of physical
* memory, but relaxing it means that we break people who
* are already placing their zImage in (eg) the top 64MB
* of this range.
*/
		mov	r4, pc
		and	r4, r4, #0xf8000000
/* Determine final kernel image address. */
		add	r4, r4, #TEXT_OFFSET
#else
		ldr	r4, =zreladdr
@@ -810,6 +837,16 @@ __common_mmu_cache_on:
call_cache_fn:	adr	r12, proc_types
#ifdef CONFIG_CPU_CP15
		mrc	p15, 0, r9, c0, c0	@ get processor ID
#elif defined(CONFIG_CPU_V7M)
/*
* On v7-M the processor id is located in the V7M_SCB_CPUID
* register, but as cache handling is IMPLEMENTATION DEFINED on
* v7-M (if existant at all) we just return early here.
* If V7M_SCB_CPUID were used the cpu ID functions (i.e.
* __armv7_mmu_cache_{on,off,flush}) would be selected which
* use cp15 registers that are not implemented on v7-M.
*/
bx lr
#else
		ldr	r9, =CONFIG_PROCESSOR_ID
#endif
@@ -1311,7 +1348,8 @@ __hyp_reentry_vectors:
__enter_kernel:
		mov	r0, #0			@ must be 0
 ARM(		mov	pc, r4		)	@ call kernel
-THUMB(		bx	r4	)		@ entry point is always ARM
+ M_CLASS(	add	r4, r4, #1	)	@ enter in Thumb mode for M class
 THUMB(		bx	r4	)		@ entry point is always ARM for A/R classes
reloc_code_end:
......
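The new AUTO_ZRELADDR comment above documents the 0xf8000000 mask; a quick
worked example (the PC value is made up, not taken from the source) of how
masking the running PC recovers the start of a 128MB-aligned memory bank:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical PC value while the decompressor runs from RAM
	 * that starts at 0x80000000 (a 128MB-aligned bank). */
	uint32_t pc = 0x80008040;

	/* Same mask as the assembly above: clear the low 27 bits,
	 * i.e. round down to a 128MB boundary. */
	uint32_t phys_base = pc & 0xf8000000;

	printf("phys_base = 0x%08x\n", phys_base);	/* prints 0x80000000 */
	return 0;
}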
generic-y += auxvec.h
generic-y += bitsperlong.h
generic-y += cputime.h
generic-y += current.h
......
@@ -237,6 +237,9 @@
	.pushsection ".alt.smp.init", "a"		;\
	.long	9998b					;\
9997:	instr						;\
	.if . - 9997b == 2				;\
		nop					;\
	.endif						;\
	.if . - 9997b != 4				;\
		.error "ALT_UP() content must assemble to exactly 4 bytes";\
	.endif						;\
......
#include <uapi/asm/auxvec.h>
@@ -253,4 +253,20 @@ static inline int cpu_is_pj4(void)
#else
#define cpu_is_pj4()	0
#endif

static inline int __attribute_const__ cpuid_feature_extract_field(u32 features,
								   int field)
{
	int feature = (features >> field) & 15;

	/* feature registers are signed values */
	if (feature > 8)
		feature -= 16;

	return feature;
}

#define cpuid_feature_extract(reg, field) \
	cpuid_feature_extract_field(read_cpuid_ext(reg), field)

#endif
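cpuid_feature_extract_field() above decodes each 4-bit ID register field as a
signed value, so 0xf reads back as -1 rather than 15. A standalone sketch of
the same decode (plain C, with illustrative register values):

#include <stdio.h>

/* Mirrors the helper above: extract a signed 4-bit feature field. */
static int feature_field(unsigned int reg, int shift)
{
	int f = (reg >> shift) & 15;

	if (f > 8)		/* 0x9..0xf encode negative values */
		f -= 16;
	return f;
}

int main(void)
{
	unsigned int isar0 = 0x02101110;	/* made-up ID_ISAR0 value */

	/* Bits [27:24] describe the divide instructions; 2 or more means
	 * both ARM and Thumb SDIV/UDIV are implemented (see setup.c). */
	printf("Divide field: %d\n", feature_field(isar0, 24));

	/* A field of 0xf sign-extends to -1 ("not implemented"). */
	printf("0xf decodes to %d\n", feature_field(0xf0000000, 28));
	return 0;
}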
#ifndef __ASMARM_ELF_H
#define __ASMARM_ELF_H

#include <asm/auxvec.h>
#include <asm/hwcap.h>
#include <asm/vdso_datapage.h>

/*
 * ELF register definitions..
@@ -115,7 +117,7 @@ int dump_task_regs(struct task_struct *t, elf_gregset_t *elfregs);
   the loader. We need to make sure that it is out of the way of the program
   that it will "exec", and that there is sufficient room for the brk. */

-#define ELF_ET_DYN_BASE	(2 * TASK_SIZE / 3)
+#define ELF_ET_DYN_BASE	(TASK_SIZE / 3 * 2)

/* When the program starts, a1 contains a pointer to a function to be
   registered with atexit, as per the SVR4 ABI. A value of 0 means we
@@ -126,6 +128,13 @@ extern void elf_set_personality(const struct elf32_hdr *);
#define SET_PERSONALITY(ex)	elf_set_personality(&(ex))

#ifdef CONFIG_MMU
#ifdef CONFIG_VDSO
#define ARCH_DLINFO						\
do {								\
	NEW_AUX_ENT(AT_SYSINFO_EHDR,				\
		    (elf_addr_t)current->mm->context.vdso);	\
} while (0)
#endif
#define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
struct linux_binprm;
int arch_setup_additional_pages(struct linux_binprm *, int);
......
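The ELF_ET_DYN_BASE change is the integer-overflow fix called out in the merge
description: with a 32-bit TASK_SIZE above 2GB, 2 * TASK_SIZE wraps before the
division. A small demonstration (the TASK_SIZE value here is illustrative):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Typical 3GB/1GB user/kernel split; illustrative value only. */
	uint32_t task_size = 0xbf000000;

	/* Both expressions model the 32-bit unsigned kernel arithmetic. */
	uint32_t old_base = 2 * task_size / 3;	/* 2*TASK_SIZE wraps past 4GB */
	uint32_t new_base = task_size / 3 * 2;	/* divide first, no overflow */

	printf("old: 0x%08x\n", old_base);	/* 0x2a000000 - far too low */
	printf("new: 0x%08x\n", new_base);	/* 0x7f555554 - ~2/3 of TASK_SIZE */
	return 0;
}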
@@ -13,7 +13,7 @@
	"	.align	3\n"			\
	"	.long	1b, 4f, 2b, 4f\n"	\
	"	.popsection\n"			\
-	"	.pushsection .fixup,\"ax\"\n"	\
+	"	.pushsection .text.fixup,\"ax\"\n"	\
	"	.align	2\n"			\
	"4:	mov	%0, " err_reg "\n"	\
	"	b	3b\n"			\
......
@@ -11,6 +11,9 @@ typedef struct {
#endif
	unsigned int	vmalloc_seq;
	unsigned long	sigpage;
#ifdef CONFIG_VDSO
	unsigned long	vdso;
#endif
} mm_context_t;

#ifdef CONFIG_CPU_HAS_ASID
......
@@ -92,6 +92,7 @@ struct pmu_hw_events {
struct arm_pmu {
	struct pmu	pmu;
	cpumask_t	active_irqs;
	int		*irq_affinity;
	char		*name;
	irqreturn_t	(*handle_irq)(int irq_num, void *dev);
	void		(*enable)(struct perf_event *event);
......
@@ -104,6 +104,7 @@ static inline u32 mpidr_hash_size(void)
	return 1 << mpidr_hash.bits;
}

extern int platform_can_secondary_boot(void);
extern int platform_can_cpu_hotplug(void);

#endif
...@@ -315,7 +315,7 @@ do { \ ...@@ -315,7 +315,7 @@ do { \
__asm__ __volatile__( \ __asm__ __volatile__( \
"1: " TUSER(ldrb) " %1,[%2],#0\n" \ "1: " TUSER(ldrb) " %1,[%2],#0\n" \
"2:\n" \ "2:\n" \
" .pushsection .fixup,\"ax\"\n" \ " .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \ " .align 2\n" \
"3: mov %0, %3\n" \ "3: mov %0, %3\n" \
" mov %1, #0\n" \ " mov %1, #0\n" \
...@@ -351,7 +351,7 @@ do { \ ...@@ -351,7 +351,7 @@ do { \
__asm__ __volatile__( \ __asm__ __volatile__( \
"1: " TUSER(ldr) " %1,[%2],#0\n" \ "1: " TUSER(ldr) " %1,[%2],#0\n" \
"2:\n" \ "2:\n" \
" .pushsection .fixup,\"ax\"\n" \ " .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \ " .align 2\n" \
"3: mov %0, %3\n" \ "3: mov %0, %3\n" \
" mov %1, #0\n" \ " mov %1, #0\n" \
...@@ -397,7 +397,7 @@ do { \ ...@@ -397,7 +397,7 @@ do { \
__asm__ __volatile__( \ __asm__ __volatile__( \
"1: " TUSER(strb) " %1,[%2],#0\n" \ "1: " TUSER(strb) " %1,[%2],#0\n" \
"2:\n" \ "2:\n" \
" .pushsection .fixup,\"ax\"\n" \ " .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \ " .align 2\n" \
"3: mov %0, %3\n" \ "3: mov %0, %3\n" \
" b 2b\n" \ " b 2b\n" \
...@@ -430,7 +430,7 @@ do { \ ...@@ -430,7 +430,7 @@ do { \
__asm__ __volatile__( \ __asm__ __volatile__( \
"1: " TUSER(str) " %1,[%2],#0\n" \ "1: " TUSER(str) " %1,[%2],#0\n" \
"2:\n" \ "2:\n" \
" .pushsection .fixup,\"ax\"\n" \ " .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \ " .align 2\n" \
"3: mov %0, %3\n" \ "3: mov %0, %3\n" \
" b 2b\n" \ " b 2b\n" \
...@@ -458,7 +458,7 @@ do { \ ...@@ -458,7 +458,7 @@ do { \
THUMB( "1: " TUSER(str) " " __reg_oper1 ", [%1]\n" ) \ THUMB( "1: " TUSER(str) " " __reg_oper1 ", [%1]\n" ) \
THUMB( "2: " TUSER(str) " " __reg_oper0 ", [%1, #4]\n" ) \ THUMB( "2: " TUSER(str) " " __reg_oper0 ", [%1, #4]\n" ) \
"3:\n" \ "3:\n" \
" .pushsection .fixup,\"ax\"\n" \ " .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \ " .align 2\n" \
"4: mov %0, %3\n" \ "4: mov %0, %3\n" \
" b 3b\n" \ " b 3b\n" \
......
@@ -24,6 +24,14 @@
	.syntax unified
#endif

#ifdef CONFIG_CPU_V7M
#define AR_CLASS(x...)
#define M_CLASS(x...)	x
#else
#define AR_CLASS(x...)	x
#define M_CLASS(x...)
#endif

#ifdef CONFIG_THUMB2_KERNEL
#if __GNUC__ < 4
......
#ifndef __ASM_VDSO_H
#define __ASM_VDSO_H
#ifdef __KERNEL__
#ifndef __ASSEMBLY__
struct mm_struct;
#ifdef CONFIG_VDSO
void arm_install_vdso(struct mm_struct *mm, unsigned long addr);
extern char vdso_start, vdso_end;
extern unsigned int vdso_total_pages;
#else /* CONFIG_VDSO */
static inline void arm_install_vdso(struct mm_struct *mm, unsigned long addr)
{
}
#define vdso_total_pages 0
#endif /* CONFIG_VDSO */
#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* __ASM_VDSO_H */
/*
* Adapted from arm64 version.
*
* Copyright (C) 2012 ARM Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __ASM_VDSO_DATAPAGE_H
#define __ASM_VDSO_DATAPAGE_H
#ifdef __KERNEL__
#ifndef __ASSEMBLY__
#include <asm/page.h>
/* Try to be cache-friendly on systems that don't implement the
* generic timer: fit the unconditionally updated fields in the first
* 32 bytes.
*/
struct vdso_data {
u32 seq_count; /* sequence count - odd during updates */
u16 tk_is_cntvct; /* fall back to syscall if false */
u16 cs_shift; /* clocksource shift */
u32 xtime_coarse_sec; /* coarse time */
u32 xtime_coarse_nsec;
u32 wtm_clock_sec; /* wall to monotonic offset */
u32 wtm_clock_nsec;
u32 xtime_clock_sec; /* CLOCK_REALTIME - seconds */
u32 cs_mult; /* clocksource multiplier */
u64 cs_cycle_last; /* last cycle value */
u64 cs_mask; /* clocksource mask */
u64 xtime_clock_snsec; /* CLOCK_REALTIME sub-ns base */
u32 tz_minuteswest; /* timezone info for gettimeofday(2) */
u32 tz_dsttime;
};
union vdso_data_store {
struct vdso_data data;
u8 page[PAGE_SIZE];
};
#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* __ASM_VDSO_DATAPAGE_H */
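seq_count is the usual seqlock generation counter shared between the kernel
updater and the userspace vDSO reader. A minimal sketch of the reader side
(the struct is reduced to the fields used here, and the barrier choice is
illustrative - the real vDSO code issues dmb barriers directly):

#include <stdatomic.h>
#include <stdint.h>

/* Reduced copy of the data page layout declared above; only the
 * fields this sketch touches (a coarse CLOCK_REALTIME sample). */
struct vdso_data {
	_Atomic uint32_t seq_count;	/* odd while the kernel updates */
	uint32_t xtime_coarse_sec;
	uint32_t xtime_coarse_nsec;
};

/* Typical seqcount reader: retry whenever an update raced with us. */
static void read_coarse_time(const struct vdso_data *vd,
			     uint32_t *sec, uint32_t *nsec)
{
	uint32_t seq;

	do {
		seq = atomic_load_explicit(&vd->seq_count,
					   memory_order_acquire);
		*sec  = vd->xtime_coarse_sec;
		*nsec = vd->xtime_coarse_nsec;
		atomic_thread_fence(memory_order_acquire);
	} while ((seq & 1) ||
		 atomic_load_explicit(&vd->seq_count,
				      memory_order_relaxed) != seq);
}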
...@@ -71,7 +71,7 @@ static inline unsigned long load_unaligned_zeropad(const void *addr) ...@@ -71,7 +71,7 @@ static inline unsigned long load_unaligned_zeropad(const void *addr)
asm( asm(
"1: ldr %0, [%2]\n" "1: ldr %0, [%2]\n"
"2:\n" "2:\n"
" .pushsection .fixup,\"ax\"\n" " .pushsection .text.fixup,\"ax\"\n"
" .align 2\n" " .align 2\n"
"3: and %1, %2, #0x3\n" "3: and %1, %2, #0x3\n"
" bic %2, %2, #0x3\n" " bic %2, %2, #0x3\n"
......
# UAPI Header export list
include include/uapi/asm-generic/Kbuild.asm
header-y += auxvec.h
header-y += byteorder.h
header-y += fcntl.h
header-y += hwcap.h
......
#ifndef __ASM_AUXVEC_H
#define __ASM_AUXVEC_H
/* VDSO location */
#define AT_SYSINFO_EHDR 33
#endif
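Userspace (normally the C library) finds the newly mapped vDSO through this
aux vector entry. A minimal example using glibc's getauxval(), assuming a
32-bit process:

#include <stdio.h>
#include <sys/auxv.h>	/* getauxval(), AT_SYSINFO_EHDR */
#include <elf.h>

int main(void)
{
	/* Address of the vDSO's ELF header; 0 if no vDSO was mapped. */
	unsigned long vdso = getauxval(AT_SYSINFO_EHDR);

	if (!vdso) {
		puts("no vDSO mapped");
		return 0;
	}

	const Elf32_Ehdr *ehdr = (const Elf32_Ehdr *)vdso;
	printf("vDSO at %#lx, %u program headers\n",
	       vdso, (unsigned)ehdr->e_phnum);
	return 0;
}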
...@@ -16,7 +16,7 @@ CFLAGS_REMOVE_return_address.o = -pg ...@@ -16,7 +16,7 @@ CFLAGS_REMOVE_return_address.o = -pg
# Object file lists. # Object file lists.
obj-y := elf.o entry-common.o irq.o opcodes.o \ obj-y := elf.o entry-common.o irq.o opcodes.o \
process.o ptrace.o return_address.o \ process.o ptrace.o reboot.o return_address.o \
setup.o signal.o sigreturn_codes.o \ setup.o signal.o sigreturn_codes.o \
stacktrace.o sys_arm.o time.o traps.o stacktrace.o sys_arm.o time.o traps.o
...@@ -75,6 +75,7 @@ obj-$(CONFIG_HW_PERF_EVENTS) += perf_event.o perf_event_cpu.o ...@@ -75,6 +75,7 @@ obj-$(CONFIG_HW_PERF_EVENTS) += perf_event.o perf_event_cpu.o
CFLAGS_pj4-cp0.o := -marm CFLAGS_pj4-cp0.o := -marm
AFLAGS_iwmmxt.o := -Wa,-mcpu=iwmmxt AFLAGS_iwmmxt.o := -Wa,-mcpu=iwmmxt
obj-$(CONFIG_ARM_CPU_TOPOLOGY) += topology.o obj-$(CONFIG_ARM_CPU_TOPOLOGY) += topology.o
obj-$(CONFIG_VDSO) += vdso.o
ifneq ($(CONFIG_ARCH_EBSA110),y) ifneq ($(CONFIG_ARCH_EBSA110),y)
obj-y += io.o obj-y += io.o
...@@ -86,7 +87,7 @@ obj-$(CONFIG_EARLY_PRINTK) += early_printk.o ...@@ -86,7 +87,7 @@ obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
obj-$(CONFIG_ARM_VIRT_EXT) += hyp-stub.o obj-$(CONFIG_ARM_VIRT_EXT) += hyp-stub.o
ifeq ($(CONFIG_ARM_PSCI),y) ifeq ($(CONFIG_ARM_PSCI),y)
obj-y += psci.o obj-y += psci.o psci-call.o
obj-$(CONFIG_SMP) += psci_smp.o obj-$(CONFIG_SMP) += psci_smp.o
endif endif
......
...@@ -25,6 +25,7 @@ ...@@ -25,6 +25,7 @@
#include <asm/memory.h> #include <asm/memory.h>
#include <asm/procinfo.h> #include <asm/procinfo.h>
#include <asm/suspend.h> #include <asm/suspend.h>
#include <asm/vdso_datapage.h>
#include <asm/hardware/cache-l2x0.h> #include <asm/hardware/cache-l2x0.h>
#include <linux/kbuild.h> #include <linux/kbuild.h>
...@@ -205,6 +206,10 @@ int main(void) ...@@ -205,6 +206,10 @@ int main(void)
DEFINE(KVM_TIMER_ENABLED, offsetof(struct kvm, arch.timer.enabled)); DEFINE(KVM_TIMER_ENABLED, offsetof(struct kvm, arch.timer.enabled));
DEFINE(KVM_VGIC_VCTRL, offsetof(struct kvm, arch.vgic.vctrl_base)); DEFINE(KVM_VGIC_VCTRL, offsetof(struct kvm, arch.vgic.vctrl_base));
DEFINE(KVM_VTTBR, offsetof(struct kvm, arch.vttbr)); DEFINE(KVM_VTTBR, offsetof(struct kvm, arch.vttbr));
#endif
BLANK();
#ifdef CONFIG_VDSO
DEFINE(VDSO_DATA_SIZE, sizeof(union vdso_data_store));
#endif #endif
return 0; return 0;
} }
@@ -618,21 +618,15 @@ int pcibios_enable_device(struct pci_dev *dev, int mask)
int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
			enum pci_mmap_state mmap_state, int write_combine)
{
-	struct pci_sys_data *root = dev->sysdata;
-	unsigned long phys;
-	if (mmap_state == pci_mmap_io) {
+	if (mmap_state == pci_mmap_io)
		return -EINVAL;
-	} else {
-		phys = vma->vm_pgoff + (root->mem_offset >> PAGE_SHIFT);
-	}
	/*
	 * Mark this as IO
	 */
	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
-	if (remap_pfn_range(vma, vma->vm_start, phys,
+	if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
			    vma->vm_end - vma->vm_start,
			    vma->vm_page_prot))
		return -EAGAIN;
......
@@ -545,7 +545,7 @@ ENDPROC(__und_usr)
/*
 * The out of line fixup for the ldrt instructions above.
 */
-	.pushsection .fixup, "ax"
+	.pushsection .text.fixup, "ax"
	.align	2
4:	str	r4, [sp, #S_PC]			@ retry current instruction
	ret	r9
......
...@@ -138,9 +138,9 @@ ENTRY(stext) ...@@ -138,9 +138,9 @@ ENTRY(stext)
@ mmu has been enabled @ mmu has been enabled
adr lr, BSYM(1f) @ return (PIC) address adr lr, BSYM(1f) @ return (PIC) address
mov r8, r4 @ set TTBR1 to swapper_pg_dir mov r8, r4 @ set TTBR1 to swapper_pg_dir
ARM( add pc, r10, #PROCINFO_INITFUNC ) ldr r12, [r10, #PROCINFO_INITFUNC]
THUMB( add r12, r10, #PROCINFO_INITFUNC ) add r12, r12, r10
THUMB( ret r12 ) ret r12
1: b __enable_mmu 1: b __enable_mmu
ENDPROC(stext) ENDPROC(stext)
.ltorg .ltorg
...@@ -386,10 +386,10 @@ ENTRY(secondary_startup) ...@@ -386,10 +386,10 @@ ENTRY(secondary_startup)
ldr r8, [r7, lr] @ get secondary_data.swapper_pg_dir ldr r8, [r7, lr] @ get secondary_data.swapper_pg_dir
adr lr, BSYM(__enable_mmu) @ return address adr lr, BSYM(__enable_mmu) @ return address
mov r13, r12 @ __secondary_switched address mov r13, r12 @ __secondary_switched address
ARM( add pc, r10, #PROCINFO_INITFUNC ) @ initialise processor ldr r12, [r10, #PROCINFO_INITFUNC]
add r12, r12, r10 @ initialise processor
@ (return control reg) @ (return control reg)
THUMB( add r12, r10, #PROCINFO_INITFUNC ) ret r12
THUMB( ret r12 )
ENDPROC(secondary_startup) ENDPROC(secondary_startup)
ENDPROC(secondary_startup_arm) ENDPROC(secondary_startup_arm)
......
...@@ -22,6 +22,7 @@ ...@@ -22,6 +22,7 @@
#include <asm/suspend.h> #include <asm/suspend.h>
#include <asm/memory.h> #include <asm/memory.h>
#include <asm/sections.h> #include <asm/sections.h>
#include "reboot.h"
int pfn_is_nosave(unsigned long pfn) int pfn_is_nosave(unsigned long pfn)
{ {
...@@ -61,7 +62,7 @@ static int notrace arch_save_image(unsigned long unused) ...@@ -61,7 +62,7 @@ static int notrace arch_save_image(unsigned long unused)
ret = swsusp_save(); ret = swsusp_save();
if (ret == 0) if (ret == 0)
soft_restart(virt_to_phys(cpu_resume)); _soft_restart(virt_to_phys(cpu_resume), false);
return ret; return ret;
} }
...@@ -86,7 +87,7 @@ static void notrace arch_restore_image(void *unused) ...@@ -86,7 +87,7 @@ static void notrace arch_restore_image(void *unused)
for (pbe = restore_pblist; pbe; pbe = pbe->next) for (pbe = restore_pblist; pbe; pbe = pbe->next)
copy_page(pbe->orig_address, pbe->address); copy_page(pbe->orig_address, pbe->address);
soft_restart(virt_to_phys(cpu_resume)); _soft_restart(virt_to_phys(cpu_resume), false);
} }
static u64 resume_stack[PAGE_SIZE/2/sizeof(u64)] __nosavedata; static u64 resume_stack[PAGE_SIZE/2/sizeof(u64)] __nosavedata;
...@@ -99,7 +100,6 @@ static u64 resume_stack[PAGE_SIZE/2/sizeof(u64)] __nosavedata; ...@@ -99,7 +100,6 @@ static u64 resume_stack[PAGE_SIZE/2/sizeof(u64)] __nosavedata;
*/ */
int swsusp_arch_resume(void) int swsusp_arch_resume(void)
{ {
extern void call_with_stack(void (*fn)(void *), void *arg, void *sp);
call_with_stack(arch_restore_image, 0, call_with_stack(arch_restore_image, 0,
resume_stack + ARRAY_SIZE(resume_stack)); resume_stack + ARRAY_SIZE(resume_stack));
return 0; return 0;
......
@@ -46,7 +46,8 @@ int machine_kexec_prepare(struct kimage *image)
	 * and implements CPU hotplug for the current HW. If not, we won't be
	 * able to kexec reliably, so fail the prepare operation.
	 */
-	if (num_possible_cpus() > 1 && !platform_can_cpu_hotplug())
+	if (num_possible_cpus() > 1 && platform_can_secondary_boot() &&
+	    !platform_can_cpu_hotplug())
		return -EINVAL;

	/*
......
@@ -98,14 +98,19 @@ apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex,
		case R_ARM_PC24:
		case R_ARM_CALL:
		case R_ARM_JUMP24:
			if (sym->st_value & 3) {
				pr_err("%s: section %u reloc %u sym '%s': unsupported interworking call (ARM -> Thumb)\n",
				       module->name, relindex, i, symname);
				return -ENOEXEC;
			}

			offset = __mem_to_opcode_arm(*(u32 *)loc);
			offset = (offset & 0x00ffffff) << 2;
			if (offset & 0x02000000)
				offset -= 0x04000000;
			offset += sym->st_value - loc;

-			if (offset & 3 ||
-			    offset <= (s32)0xfe000000 ||
+			if (offset <= (s32)0xfe000000 ||
			    offset >= (s32)0x02000000) {
				pr_err("%s: section %u reloc %u sym '%s': relocation %u out of range (%#lx -> %#x)\n",
				       module->name, relindex, i, symname,
@@ -155,6 +160,22 @@ apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex,
#ifdef CONFIG_THUMB2_KERNEL
		case R_ARM_THM_CALL:
		case R_ARM_THM_JUMP24:
/*
* For function symbols, only Thumb addresses are
* allowed (no interworking).
*
* For non-function symbols, the destination
* has no specific ARM/Thumb disposition, so
* the branch is resolved under the assumption
* that interworking is not required.
*/
if (ELF32_ST_TYPE(sym->st_info) == STT_FUNC &&
!(sym->st_value & 1)) {
pr_err("%s: section %u reloc %u sym '%s': unsupported interworking call (Thumb -> ARM)\n",
module->name, relindex, i, symname);
return -ENOEXEC;
}
			upper = __mem_to_opcode_thumb16(*(u16 *)loc);
			lower = __mem_to_opcode_thumb16(*(u16 *)(loc + 2));
@@ -182,18 +203,7 @@ apply_relocate(Elf32_Shdr *sechdrs, const char *strtab, unsigned int symindex,
				offset -= 0x02000000;
			offset += sym->st_value - loc;

-			/*
-			 * For function symbols, only Thumb addresses are
-			 * allowed (no interworking).
-			 *
-			 * For non-function symbols, the destination
-			 * has no specific ARM/Thumb disposition, so
-			 * the branch is resolved under the assumption
-			 * that interworking is not required.
-			 */
-			if ((ELF32_ST_TYPE(sym->st_info) == STT_FUNC &&
-			     !(offset & 1)) ||
-			    offset <= (s32)0xff000000 ||
+			if (offset <= (s32)0xff000000 ||
			    offset >= (s32)0x01000000) {
				pr_err("%s: section %u reloc %u sym '%s': relocation %u out of range (%#lx -> %#x)\n",
				       module->name, relindex, i, symname,
......
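The ARM branch relocations handled above carry a signed 24-bit word offset. A
small standalone sketch (plain C) of the same decode and the out-of-range test
that is now reported separately from the interworking error:

#include <stdint.h>
#include <stdio.h>

/* Decode the signed 24-bit immediate of an ARM B/BL opcode into a
 * byte offset, as in the relocation code above. */
static int32_t arm_branch_offset(uint32_t opcode)
{
	int32_t offset = (opcode & 0x00ffffff) << 2;

	if (offset & 0x02000000)	/* sign bit of the 26-bit result */
		offset -= 0x04000000;
	return offset;
}

int main(void)
{
	uint32_t opcode = 0xebfffffe;	/* "bl ." - immediate is -2 words */
	int32_t offset = arm_branch_offset(opcode);

	/* Same range check as apply_relocate(): the reachable window is
	 * roughly -32MB..+32MB around the branch instruction. */
	if (offset <= (int32_t)0xfe000000 || offset >= (int32_t)0x02000000)
		printf("offset %d out of range\n", offset);
	else
		printf("offset %d is reachable\n", offset);
	return 0;
}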
...@@ -259,20 +259,29 @@ armpmu_add(struct perf_event *event, int flags) ...@@ -259,20 +259,29 @@ armpmu_add(struct perf_event *event, int flags)
} }
static int static int
validate_event(struct pmu_hw_events *hw_events, validate_event(struct pmu *pmu, struct pmu_hw_events *hw_events,
struct perf_event *event) struct perf_event *event)
{ {
struct arm_pmu *armpmu = to_arm_pmu(event->pmu); struct arm_pmu *armpmu;
if (is_software_event(event)) if (is_software_event(event))
return 1; return 1;
/*
* Reject groups spanning multiple HW PMUs (e.g. CPU + CCI). The
* core perf code won't check that the pmu->ctx == leader->ctx
* until after pmu->event_init(event).
*/
if (event->pmu != pmu)
return 0;
if (event->state < PERF_EVENT_STATE_OFF) if (event->state < PERF_EVENT_STATE_OFF)
return 1; return 1;
if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec) if (event->state == PERF_EVENT_STATE_OFF && !event->attr.enable_on_exec)
return 1; return 1;
armpmu = to_arm_pmu(event->pmu);
return armpmu->get_event_idx(hw_events, event) >= 0; return armpmu->get_event_idx(hw_events, event) >= 0;
} }
...@@ -288,15 +297,15 @@ validate_group(struct perf_event *event) ...@@ -288,15 +297,15 @@ validate_group(struct perf_event *event)
*/ */
memset(&fake_pmu.used_mask, 0, sizeof(fake_pmu.used_mask)); memset(&fake_pmu.used_mask, 0, sizeof(fake_pmu.used_mask));
if (!validate_event(&fake_pmu, leader)) if (!validate_event(event->pmu, &fake_pmu, leader))
return -EINVAL; return -EINVAL;
list_for_each_entry(sibling, &leader->sibling_list, group_entry) { list_for_each_entry(sibling, &leader->sibling_list, group_entry) {
if (!validate_event(&fake_pmu, sibling)) if (!validate_event(event->pmu, &fake_pmu, sibling))
return -EINVAL; return -EINVAL;
} }
if (!validate_event(&fake_pmu, event)) if (!validate_event(event->pmu, &fake_pmu, event))
return -EINVAL; return -EINVAL;
return 0; return 0;
......
...@@ -92,11 +92,16 @@ static void cpu_pmu_free_irq(struct arm_pmu *cpu_pmu) ...@@ -92,11 +92,16 @@ static void cpu_pmu_free_irq(struct arm_pmu *cpu_pmu)
free_percpu_irq(irq, &hw_events->percpu_pmu); free_percpu_irq(irq, &hw_events->percpu_pmu);
} else { } else {
for (i = 0; i < irqs; ++i) { for (i = 0; i < irqs; ++i) {
if (!cpumask_test_and_clear_cpu(i, &cpu_pmu->active_irqs)) int cpu = i;
if (cpu_pmu->irq_affinity)
cpu = cpu_pmu->irq_affinity[i];
if (!cpumask_test_and_clear_cpu(cpu, &cpu_pmu->active_irqs))
continue; continue;
irq = platform_get_irq(pmu_device, i); irq = platform_get_irq(pmu_device, i);
if (irq >= 0) if (irq >= 0)
free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, i)); free_irq(irq, per_cpu_ptr(&hw_events->percpu_pmu, cpu));
} }
} }
} }
...@@ -128,32 +133,37 @@ static int cpu_pmu_request_irq(struct arm_pmu *cpu_pmu, irq_handler_t handler) ...@@ -128,32 +133,37 @@ static int cpu_pmu_request_irq(struct arm_pmu *cpu_pmu, irq_handler_t handler)
on_each_cpu(cpu_pmu_enable_percpu_irq, &irq, 1); on_each_cpu(cpu_pmu_enable_percpu_irq, &irq, 1);
} else { } else {
for (i = 0; i < irqs; ++i) { for (i = 0; i < irqs; ++i) {
int cpu = i;
err = 0; err = 0;
irq = platform_get_irq(pmu_device, i); irq = platform_get_irq(pmu_device, i);
if (irq < 0) if (irq < 0)
continue; continue;
if (cpu_pmu->irq_affinity)
cpu = cpu_pmu->irq_affinity[i];
/* /*
* If we have a single PMU interrupt that we can't shift, * If we have a single PMU interrupt that we can't shift,
* assume that we're running on a uniprocessor machine and * assume that we're running on a uniprocessor machine and
* continue. Otherwise, continue without this interrupt. * continue. Otherwise, continue without this interrupt.
*/ */
if (irq_set_affinity(irq, cpumask_of(i)) && irqs > 1) { if (irq_set_affinity(irq, cpumask_of(cpu)) && irqs > 1) {
pr_warn("unable to set irq affinity (irq=%d, cpu=%u)\n", pr_warn("unable to set irq affinity (irq=%d, cpu=%u)\n",
irq, i); irq, cpu);
continue; continue;
} }
err = request_irq(irq, handler, err = request_irq(irq, handler,
IRQF_NOBALANCING | IRQF_NO_THREAD, "arm-pmu", IRQF_NOBALANCING | IRQF_NO_THREAD, "arm-pmu",
per_cpu_ptr(&hw_events->percpu_pmu, i)); per_cpu_ptr(&hw_events->percpu_pmu, cpu));
if (err) { if (err) {
pr_err("unable to request IRQ%d for ARM PMU counters\n", pr_err("unable to request IRQ%d for ARM PMU counters\n",
irq); irq);
return err; return err;
} }
cpumask_set_cpu(i, &cpu_pmu->active_irqs); cpumask_set_cpu(cpu, &cpu_pmu->active_irqs);
} }
} }
...@@ -243,6 +253,8 @@ static const struct of_device_id cpu_pmu_of_device_ids[] = { ...@@ -243,6 +253,8 @@ static const struct of_device_id cpu_pmu_of_device_ids[] = {
{.compatible = "arm,arm1176-pmu", .data = armv6_1176_pmu_init}, {.compatible = "arm,arm1176-pmu", .data = armv6_1176_pmu_init},
{.compatible = "arm,arm1136-pmu", .data = armv6_1136_pmu_init}, {.compatible = "arm,arm1136-pmu", .data = armv6_1136_pmu_init},
{.compatible = "qcom,krait-pmu", .data = krait_pmu_init}, {.compatible = "qcom,krait-pmu", .data = krait_pmu_init},
{.compatible = "qcom,scorpion-pmu", .data = scorpion_pmu_init},
{.compatible = "qcom,scorpion-mp-pmu", .data = scorpion_mp_pmu_init},
{}, {},
}; };
...@@ -289,6 +301,48 @@ static int probe_current_pmu(struct arm_pmu *pmu) ...@@ -289,6 +301,48 @@ static int probe_current_pmu(struct arm_pmu *pmu)
return ret; return ret;
} }
static int of_pmu_irq_cfg(struct platform_device *pdev)
{
int i;
int *irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL);
if (!irqs)
return -ENOMEM;
for (i = 0; i < pdev->num_resources; ++i) {
struct device_node *dn;
int cpu;
dn = of_parse_phandle(pdev->dev.of_node, "interrupt-affinity",
i);
if (!dn) {
pr_warn("Failed to parse %s/interrupt-affinity[%d]\n",
of_node_full_name(dn), i);
break;
}
for_each_possible_cpu(cpu)
if (arch_find_n_match_cpu_physical_id(dn, cpu, NULL))
break;
of_node_put(dn);
if (cpu >= nr_cpu_ids) {
pr_warn("Failed to find logical CPU for %s\n",
dn->name);
break;
}
irqs[i] = cpu;
}
if (i == pdev->num_resources)
cpu_pmu->irq_affinity = irqs;
else
kfree(irqs);
return 0;
}
static int cpu_pmu_device_probe(struct platform_device *pdev) static int cpu_pmu_device_probe(struct platform_device *pdev)
{ {
const struct of_device_id *of_id; const struct of_device_id *of_id;
...@@ -313,6 +367,9 @@ static int cpu_pmu_device_probe(struct platform_device *pdev) ...@@ -313,6 +367,9 @@ static int cpu_pmu_device_probe(struct platform_device *pdev)
if (node && (of_id = of_match_node(cpu_pmu_of_device_ids, pdev->dev.of_node))) { if (node && (of_id = of_match_node(cpu_pmu_of_device_ids, pdev->dev.of_node))) {
init_fn = of_id->data; init_fn = of_id->data;
ret = of_pmu_irq_cfg(pdev);
if (!ret)
ret = init_fn(pmu); ret = init_fn(pmu);
} else { } else {
ret = probe_current_pmu(pmu); ret = probe_current_pmu(pmu);
......
...@@ -17,12 +17,9 @@ ...@@ -17,12 +17,9 @@
#include <linux/stddef.h> #include <linux/stddef.h>
#include <linux/unistd.h> #include <linux/unistd.h>
#include <linux/user.h> #include <linux/user.h>
#include <linux/delay.h>
#include <linux/reboot.h>
#include <linux/interrupt.h> #include <linux/interrupt.h>
#include <linux/kallsyms.h> #include <linux/kallsyms.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/cpu.h>
#include <linux/elfcore.h> #include <linux/elfcore.h>
#include <linux/pm.h> #include <linux/pm.h>
#include <linux/tick.h> #include <linux/tick.h>
...@@ -31,16 +28,14 @@ ...@@ -31,16 +28,14 @@
#include <linux/random.h> #include <linux/random.h>
#include <linux/hw_breakpoint.h> #include <linux/hw_breakpoint.h>
#include <linux/leds.h> #include <linux/leds.h>
#include <linux/reboot.h>
#include <asm/cacheflush.h>
#include <asm/idmap.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/thread_notify.h> #include <asm/thread_notify.h>
#include <asm/stacktrace.h> #include <asm/stacktrace.h>
#include <asm/system_misc.h> #include <asm/system_misc.h>
#include <asm/mach/time.h> #include <asm/mach/time.h>
#include <asm/tls.h> #include <asm/tls.h>
#include <asm/vdso.h>
#ifdef CONFIG_CC_STACKPROTECTOR #ifdef CONFIG_CC_STACKPROTECTOR
#include <linux/stackprotector.h> #include <linux/stackprotector.h>
...@@ -59,69 +54,6 @@ static const char *isa_modes[] __maybe_unused = { ...@@ -59,69 +54,6 @@ static const char *isa_modes[] __maybe_unused = {
"ARM" , "Thumb" , "Jazelle", "ThumbEE" "ARM" , "Thumb" , "Jazelle", "ThumbEE"
}; };
extern void call_with_stack(void (*fn)(void *), void *arg, void *sp);
typedef void (*phys_reset_t)(unsigned long);
/*
* A temporary stack to use for CPU reset. This is static so that we
* don't clobber it with the identity mapping. When running with this
* stack, any references to the current task *will not work* so you
* should really do as little as possible before jumping to your reset
* code.
*/
static u64 soft_restart_stack[16];
static void __soft_restart(void *addr)
{
phys_reset_t phys_reset;
/* Take out a flat memory mapping. */
setup_mm_for_reboot();
/* Clean and invalidate caches */
flush_cache_all();
/* Turn off caching */
cpu_proc_fin();
/* Push out any further dirty data, and ensure cache is empty */
flush_cache_all();
/* Switch to the identity mapping. */
phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
phys_reset((unsigned long)addr);
/* Should never get here. */
BUG();
}
void soft_restart(unsigned long addr)
{
u64 *stack = soft_restart_stack + ARRAY_SIZE(soft_restart_stack);
/* Disable interrupts first */
raw_local_irq_disable();
local_fiq_disable();
/* Disable the L2 if we're the last man standing. */
if (num_online_cpus() == 1)
outer_disable();
/* Change to the new stack and continue with the reset. */
call_with_stack(__soft_restart, (void *)addr, (void *)stack);
/* Should never get here. */
BUG();
}
/*
* Function pointers to optional machine specific functions
*/
void (*pm_power_off)(void);
EXPORT_SYMBOL(pm_power_off);
void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
/* /*
* This is our default idle handler. * This is our default idle handler.
*/ */
...@@ -166,79 +98,6 @@ void arch_cpu_idle_dead(void) ...@@ -166,79 +98,6 @@ void arch_cpu_idle_dead(void)
} }
#endif #endif
/*
* Called by kexec, immediately prior to machine_kexec().
*
* This must completely disable all secondary CPUs; simply causing those CPUs
* to execute e.g. a RAM-based pin loop is not sufficient. This allows the
* kexec'd kernel to use any and all RAM as it sees fit, without having to
* avoid any code or data used by any SW CPU pin loop. The CPU hotplug
* functionality embodied in disable_nonboot_cpus() to achieve this.
*/
void machine_shutdown(void)
{
disable_nonboot_cpus();
}
/*
* Halting simply requires that the secondary CPUs stop performing any
* activity (executing tasks, handling interrupts). smp_send_stop()
* achieves this.
*/
void machine_halt(void)
{
local_irq_disable();
smp_send_stop();
local_irq_disable();
while (1);
}
/*
* Power-off simply requires that the secondary CPUs stop performing any
* activity (executing tasks, handling interrupts). smp_send_stop()
* achieves this. When the system power is turned off, it will take all CPUs
* with it.
*/
void machine_power_off(void)
{
local_irq_disable();
smp_send_stop();
if (pm_power_off)
pm_power_off();
}
/*
* Restart requires that the secondary CPUs stop performing any activity
* while the primary CPU resets the system. Systems with a single CPU can
* use soft_restart() as their machine descriptor's .restart hook, since that
* will cause the only available CPU to reset. Systems with multiple CPUs must
* provide a HW restart implementation, to ensure that all CPUs reset at once.
* This is required so that any code running after reset on the primary CPU
* doesn't have to co-ordinate with other CPUs to ensure they aren't still
* executing pre-reset code, and using RAM that the primary CPU's code wishes
* to use. Implementing such co-ordination would be essentially impossible.
*/
void machine_restart(char *cmd)
{
local_irq_disable();
smp_send_stop();
if (arm_pm_restart)
arm_pm_restart(reboot_mode, cmd);
else
do_kernel_restart(cmd);
/* Give a grace period for failure to restart of 1s */
mdelay(1000);
/* Whoops - the platform was unable to reboot. Tell the user! */
printk("Reboot failed -- System halted\n");
local_irq_disable();
while (1);
}
void __show_regs(struct pt_regs *regs) void __show_regs(struct pt_regs *regs)
{ {
unsigned long flags; unsigned long flags;
...@@ -475,7 +334,7 @@ const char *arch_vma_name(struct vm_area_struct *vma) ...@@ -475,7 +334,7 @@ const char *arch_vma_name(struct vm_area_struct *vma)
} }
/* If possible, provide a placement hint at a random offset from the /* If possible, provide a placement hint at a random offset from the
* stack for the signal page. * stack for the sigpage and vdso pages.
*/ */
static unsigned long sigpage_addr(const struct mm_struct *mm, static unsigned long sigpage_addr(const struct mm_struct *mm,
unsigned int npages) unsigned int npages)
...@@ -519,6 +378,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp) ...@@ -519,6 +378,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
{ {
struct mm_struct *mm = current->mm; struct mm_struct *mm = current->mm;
struct vm_area_struct *vma; struct vm_area_struct *vma;
unsigned long npages;
unsigned long addr; unsigned long addr;
unsigned long hint; unsigned long hint;
int ret = 0; int ret = 0;
...@@ -528,9 +388,12 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp) ...@@ -528,9 +388,12 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
if (!signal_page) if (!signal_page)
return -ENOMEM; return -ENOMEM;
npages = 1; /* for sigpage */
npages += vdso_total_pages;
down_write(&mm->mmap_sem); down_write(&mm->mmap_sem);
hint = sigpage_addr(mm, 1); hint = sigpage_addr(mm, npages);
addr = get_unmapped_area(NULL, hint, PAGE_SIZE, 0, 0); addr = get_unmapped_area(NULL, hint, npages << PAGE_SHIFT, 0, 0);
if (IS_ERR_VALUE(addr)) { if (IS_ERR_VALUE(addr)) {
ret = addr; ret = addr;
goto up_fail; goto up_fail;
...@@ -547,6 +410,12 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp) ...@@ -547,6 +410,12 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
mm->context.sigpage = addr; mm->context.sigpage = addr;
/* Unlike the sigpage, failure to install the vdso is unlikely
* to be fatal to the process, so no error check needed
* here.
*/
arm_install_vdso(mm, addr + PAGE_SIZE);
up_fail: up_fail:
up_write(&mm->mmap_sem); up_write(&mm->mmap_sem);
return ret; return ret;
......
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* Copyright (C) 2015 ARM Limited
*
* Author: Mark Rutland <mark.rutland@arm.com>
*/
#include <linux/linkage.h>
#include <asm/opcodes-sec.h>
#include <asm/opcodes-virt.h>
/* int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1, u32 arg2) */
ENTRY(__invoke_psci_fn_hvc)
__HVC(0)
bx lr
ENDPROC(__invoke_psci_fn_hvc)
/* int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1, u32 arg2) */
ENTRY(__invoke_psci_fn_smc)
__SMC(0)
bx lr
ENDPROC(__invoke_psci_fn_smc)
...@@ -23,8 +23,6 @@ ...@@ -23,8 +23,6 @@
#include <asm/compiler.h> #include <asm/compiler.h>
#include <asm/errno.h> #include <asm/errno.h>
#include <asm/opcodes-sec.h>
#include <asm/opcodes-virt.h>
#include <asm/psci.h> #include <asm/psci.h>
#include <asm/system_misc.h> #include <asm/system_misc.h>
...@@ -33,6 +31,9 @@ struct psci_operations psci_ops; ...@@ -33,6 +31,9 @@ struct psci_operations psci_ops;
static int (*invoke_psci_fn)(u32, u32, u32, u32); static int (*invoke_psci_fn)(u32, u32, u32, u32);
typedef int (*psci_initcall_t)(const struct device_node *); typedef int (*psci_initcall_t)(const struct device_node *);
asmlinkage int __invoke_psci_fn_hvc(u32, u32, u32, u32);
asmlinkage int __invoke_psci_fn_smc(u32, u32, u32, u32);
enum psci_function { enum psci_function {
PSCI_FN_CPU_SUSPEND, PSCI_FN_CPU_SUSPEND,
PSCI_FN_CPU_ON, PSCI_FN_CPU_ON,
...@@ -71,40 +72,6 @@ static u32 psci_power_state_pack(struct psci_power_state state) ...@@ -71,40 +72,6 @@ static u32 psci_power_state_pack(struct psci_power_state state)
& PSCI_0_2_POWER_STATE_AFFL_MASK); & PSCI_0_2_POWER_STATE_AFFL_MASK);
} }
/*
* The following two functions are invoked via the invoke_psci_fn pointer
* and will not be inlined, allowing us to piggyback on the AAPCS.
*/
static noinline int __invoke_psci_fn_hvc(u32 function_id, u32 arg0, u32 arg1,
u32 arg2)
{
asm volatile(
__asmeq("%0", "r0")
__asmeq("%1", "r1")
__asmeq("%2", "r2")
__asmeq("%3", "r3")
__HVC(0)
: "+r" (function_id)
: "r" (arg0), "r" (arg1), "r" (arg2));
return function_id;
}
static noinline int __invoke_psci_fn_smc(u32 function_id, u32 arg0, u32 arg1,
u32 arg2)
{
asm volatile(
__asmeq("%0", "r0")
__asmeq("%1", "r1")
__asmeq("%2", "r2")
__asmeq("%3", "r3")
__SMC(0)
: "+r" (function_id)
: "r" (arg0), "r" (arg1), "r" (arg2));
return function_id;
}
static int psci_get_version(void) static int psci_get_version(void)
{ {
int err; int err;
......
/*
* Copyright (C) 1996-2000 Russell King - Converted to ARM.
* Original Copyright (C) 1995 Linus Torvalds
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/cpu.h>
#include <linux/delay.h>
#include <linux/reboot.h>
#include <asm/cacheflush.h>
#include <asm/idmap.h>
#include "reboot.h"
typedef void (*phys_reset_t)(unsigned long);
/*
* Function pointers to optional machine specific functions
*/
void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
void (*pm_power_off)(void);
EXPORT_SYMBOL(pm_power_off);
/*
* A temporary stack to use for CPU reset. This is static so that we
* don't clobber it with the identity mapping. When running with this
* stack, any references to the current task *will not work* so you
* should really do as little as possible before jumping to your reset
* code.
*/
static u64 soft_restart_stack[16];
static void __soft_restart(void *addr)
{
phys_reset_t phys_reset;
/* Take out a flat memory mapping. */
setup_mm_for_reboot();
/* Clean and invalidate caches */
flush_cache_all();
/* Turn off caching */
cpu_proc_fin();
/* Push out any further dirty data, and ensure cache is empty */
flush_cache_all();
/* Switch to the identity mapping. */
phys_reset = (phys_reset_t)(unsigned long)virt_to_phys(cpu_reset);
phys_reset((unsigned long)addr);
/* Should never get here. */
BUG();
}
void _soft_restart(unsigned long addr, bool disable_l2)
{
u64 *stack = soft_restart_stack + ARRAY_SIZE(soft_restart_stack);
/* Disable interrupts first */
raw_local_irq_disable();
local_fiq_disable();
/* Disable the L2 if we're the last man standing. */
if (disable_l2)
outer_disable();
/* Change to the new stack and continue with the reset. */
call_with_stack(__soft_restart, (void *)addr, (void *)stack);
/* Should never get here. */
BUG();
}
void soft_restart(unsigned long addr)
{
_soft_restart(addr, num_online_cpus() == 1);
}
/*
* Called by kexec, immediately prior to machine_kexec().
*
* This must completely disable all secondary CPUs; simply causing those CPUs
* to execute e.g. a RAM-based pin loop is not sufficient. This allows the
* kexec'd kernel to use any and all RAM as it sees fit, without having to
* avoid any code or data used by any SW CPU pin loop. The CPU hotplug
* functionality embodied in disable_nonboot_cpus() to achieve this.
*/
void machine_shutdown(void)
{
disable_nonboot_cpus();
}
/*
* Halting simply requires that the secondary CPUs stop performing any
* activity (executing tasks, handling interrupts). smp_send_stop()
* achieves this.
*/
void machine_halt(void)
{
local_irq_disable();
smp_send_stop();
local_irq_disable();
while (1);
}
/*
* Power-off simply requires that the secondary CPUs stop performing any
* activity (executing tasks, handling interrupts). smp_send_stop()
* achieves this. When the system power is turned off, it will take all CPUs
* with it.
*/
void machine_power_off(void)
{
local_irq_disable();
smp_send_stop();
if (pm_power_off)
pm_power_off();
}
/*
* Restart requires that the secondary CPUs stop performing any activity
* while the primary CPU resets the system. Systems with a single CPU can
* use soft_restart() as their machine descriptor's .restart hook, since that
* will cause the only available CPU to reset. Systems with multiple CPUs must
* provide a HW restart implementation, to ensure that all CPUs reset at once.
* This is required so that any code running after reset on the primary CPU
* doesn't have to co-ordinate with other CPUs to ensure they aren't still
* executing pre-reset code, and using RAM that the primary CPU's code wishes
* to use. Implementing such co-ordination would be essentially impossible.
*/
void machine_restart(char *cmd)
{
local_irq_disable();
smp_send_stop();
if (arm_pm_restart)
arm_pm_restart(reboot_mode, cmd);
else
do_kernel_restart(cmd);
/* Give a grace period for failure to restart of 1s */
mdelay(1000);
/* Whoops - the platform was unable to reboot. Tell the user! */
printk("Reboot failed -- System halted\n");
local_irq_disable();
while (1);
}
#ifndef REBOOT_H
#define REBOOT_H
extern void call_with_stack(void (*fn)(void *), void *arg, void *sp);
extern void _soft_restart(unsigned long addr, bool disable_l2);
#endif
@@ -56,8 +56,6 @@ void *return_address(unsigned int level)
	return NULL;
}

-#else /* if defined(CONFIG_FRAME_POINTER) && !defined(CONFIG_ARM_UNWIND) */
-#endif /* if defined(CONFIG_FRAME_POINTER) && !defined(CONFIG_ARM_UNWIND) / else */
+#endif /* if defined(CONFIG_FRAME_POINTER) && !defined(CONFIG_ARM_UNWIND) */

EXPORT_SYMBOL_GPL(return_address);
...@@ -372,30 +372,48 @@ void __init early_print(const char *str, ...) ...@@ -372,30 +372,48 @@ void __init early_print(const char *str, ...)
static void __init cpuid_init_hwcaps(void) static void __init cpuid_init_hwcaps(void)
{ {
unsigned int divide_instrs, vmsa; int block;
u32 isar5;
if (cpu_architecture() < CPU_ARCH_ARMv7) if (cpu_architecture() < CPU_ARCH_ARMv7)
return; return;
divide_instrs = (read_cpuid_ext(CPUID_EXT_ISAR0) & 0x0f000000) >> 24; block = cpuid_feature_extract(CPUID_EXT_ISAR0, 24);
if (block >= 2)
switch (divide_instrs) {
case 2:
elf_hwcap |= HWCAP_IDIVA; elf_hwcap |= HWCAP_IDIVA;
case 1: if (block >= 1)
elf_hwcap |= HWCAP_IDIVT; elf_hwcap |= HWCAP_IDIVT;
}
/* LPAE implies atomic ldrd/strd instructions */ /* LPAE implies atomic ldrd/strd instructions */
vmsa = (read_cpuid_ext(CPUID_EXT_MMFR0) & 0xf) >> 0; block = cpuid_feature_extract(CPUID_EXT_MMFR0, 0);
if (vmsa >= 5) if (block >= 5)
elf_hwcap |= HWCAP_LPAE; elf_hwcap |= HWCAP_LPAE;
/* check for supported v8 Crypto instructions */
isar5 = read_cpuid_ext(CPUID_EXT_ISAR5);
block = cpuid_feature_extract_field(isar5, 4);
if (block >= 2)
elf_hwcap2 |= HWCAP2_PMULL;
if (block >= 1)
elf_hwcap2 |= HWCAP2_AES;
block = cpuid_feature_extract_field(isar5, 8);
if (block >= 1)
elf_hwcap2 |= HWCAP2_SHA1;
block = cpuid_feature_extract_field(isar5, 12);
if (block >= 1)
elf_hwcap2 |= HWCAP2_SHA2;
block = cpuid_feature_extract_field(isar5, 16);
if (block >= 1)
elf_hwcap2 |= HWCAP2_CRC32;
} }
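The hwcap detection above now reads individual 4-bit fields out of the CPUID feature registers through small helpers instead of open-coded shift/mask pairs. As a rough standalone illustration of what such an extraction amounts to (a sketch only, not the kernel's definition; the function name and the signed-field handling are assumptions here):

static int feature_extract_field(unsigned int reg, int field)
{
	/* Each CPUID feature field is 4 bits wide and architecturally signed,
	 * so a raw value of 0xf reads back as -1 ("not implemented"). */
	int val = (reg >> field) & 0xf;

	return val >= 8 ? val - 16 : val;
}

/* Example: an ISAR5[7:4] value of 2 would advertise both HWCAP2_PMULL and HWCAP2_AES. */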
static void __init elf_hwcap_fixup(void) static void __init elf_hwcap_fixup(void)
{ {
unsigned id = read_cpuid_id(); unsigned id = read_cpuid_id();
unsigned sync_prim;
/* /*
* HWCAP_TLS is available only on 1136 r1p0 and later, * HWCAP_TLS is available only on 1136 r1p0 and later,
...@@ -416,9 +434,9 @@ static void __init elf_hwcap_fixup(void) ...@@ -416,9 +434,9 @@ static void __init elf_hwcap_fixup(void)
* avoid advertising SWP; it may not be atomic with * avoid advertising SWP; it may not be atomic with
* multiprocessing cores. * multiprocessing cores.
*/ */
sync_prim = ((read_cpuid_ext(CPUID_EXT_ISAR3) >> 8) & 0xf0) | if (cpuid_feature_extract(CPUID_EXT_ISAR3, 12) > 1 ||
((read_cpuid_ext(CPUID_EXT_ISAR4) >> 20) & 0x0f); (cpuid_feature_extract(CPUID_EXT_ISAR3, 12) == 1 &&
if (sync_prim >= 0x13) cpuid_feature_extract(CPUID_EXT_ISAR3, 20) >= 3))
elf_hwcap &= ~HWCAP_SWP; elf_hwcap &= ~HWCAP_SWP;
} }
......
...@@ -116,14 +116,7 @@ cpu_resume_after_mmu: ...@@ -116,14 +116,7 @@ cpu_resume_after_mmu:
ldmfd sp!, {r4 - r11, pc} ldmfd sp!, {r4 - r11, pc}
ENDPROC(cpu_resume_after_mmu) ENDPROC(cpu_resume_after_mmu)
/* .text
* Note: Yes, part of the following code is located into the .data section.
* This is to allow sleep_save_sp to be accessed with a relative load
* while we can't rely on any MMU translation. We could have put
* sleep_save_sp in the .text section as well, but some setups might
* insist on it to be truly read-only.
*/
.data
.align .align
ENTRY(cpu_resume) ENTRY(cpu_resume)
ARM_BE8(setend be) @ ensure we are in BE mode ARM_BE8(setend be) @ ensure we are in BE mode
...@@ -145,6 +138,8 @@ ARM_BE8(setend be) @ ensure we are in BE mode ...@@ -145,6 +138,8 @@ ARM_BE8(setend be) @ ensure we are in BE mode
compute_mpidr_hash r1, r4, r5, r6, r0, r3 compute_mpidr_hash r1, r4, r5, r6, r0, r3
1: 1:
adr r0, _sleep_save_sp adr r0, _sleep_save_sp
ldr r2, [r0]
add r0, r0, r2
ldr r0, [r0, #SLEEP_SAVE_SP_PHYS] ldr r0, [r0, #SLEEP_SAVE_SP_PHYS]
ldr r0, [r0, r1, lsl #2] ldr r0, [r0, r1, lsl #2]
...@@ -156,10 +151,12 @@ THUMB( bx r3 ) ...@@ -156,10 +151,12 @@ THUMB( bx r3 )
ENDPROC(cpu_resume) ENDPROC(cpu_resume)
.align 2 .align 2
_sleep_save_sp:
.long sleep_save_sp - .
mpidr_hash_ptr: mpidr_hash_ptr:
.long mpidr_hash - . @ mpidr_hash struct offset .long mpidr_hash - . @ mpidr_hash struct offset
.data
.type sleep_save_sp, #object .type sleep_save_sp, #object
ENTRY(sleep_save_sp) ENTRY(sleep_save_sp)
_sleep_save_sp:
.space SLEEP_SAVE_SP_SZ @ struct sleep_save_sp .space SLEEP_SAVE_SP_SZ @ struct sleep_save_sp
...@@ -145,6 +145,11 @@ void __init smp_init_cpus(void) ...@@ -145,6 +145,11 @@ void __init smp_init_cpus(void)
smp_ops.smp_init_cpus(); smp_ops.smp_init_cpus();
} }
int platform_can_secondary_boot(void)
{
return !!smp_ops.smp_boot_secondary;
}
int platform_can_cpu_hotplug(void) int platform_can_cpu_hotplug(void)
{ {
#ifdef CONFIG_HOTPLUG_CPU #ifdef CONFIG_HOTPLUG_CPU
......
...@@ -42,7 +42,7 @@ ...@@ -42,7 +42,7 @@
" cmp %0, #0\n" \ " cmp %0, #0\n" \
" movne %0, %4\n" \ " movne %0, %4\n" \
"2:\n" \ "2:\n" \
" .section .fixup,\"ax\"\n" \ " .section .text.fixup,\"ax\"\n" \
" .align 2\n" \ " .align 2\n" \
"3: mov %0, %5\n" \ "3: mov %0, %5\n" \
" b 2b\n" \ " b 2b\n" \
......
/*
* Adapted from arm64 version.
*
* Copyright (C) 2012 ARM Limited
* Copyright (C) 2015 Mentor Graphics Corporation.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/elf.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/of.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/timekeeper_internal.h>
#include <linux/vmalloc.h>
#include <asm/arch_timer.h>
#include <asm/barrier.h>
#include <asm/cacheflush.h>
#include <asm/page.h>
#include <asm/vdso.h>
#include <asm/vdso_datapage.h>
#include <clocksource/arm_arch_timer.h>
#define MAX_SYMNAME 64
static struct page **vdso_text_pagelist;
/* Total number of pages needed for the data and text portions of the VDSO. */
unsigned int vdso_total_pages __read_mostly;
/*
* The VDSO data page.
*/
static union vdso_data_store vdso_data_store __page_aligned_data;
static struct vdso_data *vdso_data = &vdso_data_store.data;
static struct page *vdso_data_page;
static struct vm_special_mapping vdso_data_mapping = {
.name = "[vvar]",
.pages = &vdso_data_page,
};
static struct vm_special_mapping vdso_text_mapping = {
.name = "[vdso]",
};
struct elfinfo {
Elf32_Ehdr *hdr; /* ptr to ELF */
Elf32_Sym *dynsym; /* ptr to .dynsym section */
unsigned long dynsymsize; /* size of .dynsym section */
char *dynstr; /* ptr to .dynstr section */
};
/* Cached result of boot-time check for whether the arch timer exists,
* and if so, whether the virtual counter is useable.
*/
static bool cntvct_ok __read_mostly;
static bool __init cntvct_functional(void)
{
struct device_node *np;
bool ret = false;
if (!IS_ENABLED(CONFIG_ARM_ARCH_TIMER))
goto out;
/* The arm_arch_timer core should export
* arch_timer_use_virtual or similar so we don't have to do
* this.
*/
np = of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
if (!np)
goto out_put;
if (of_property_read_bool(np, "arm,cpu-registers-not-fw-configured"))
goto out_put;
ret = true;
out_put:
of_node_put(np);
out:
return ret;
}
static void * __init find_section(Elf32_Ehdr *ehdr, const char *name,
unsigned long *size)
{
Elf32_Shdr *sechdrs;
unsigned int i;
char *secnames;
/* Grab section headers and strings so we can tell who is who */
sechdrs = (void *)ehdr + ehdr->e_shoff;
secnames = (void *)ehdr + sechdrs[ehdr->e_shstrndx].sh_offset;
/* Find the section they want */
for (i = 1; i < ehdr->e_shnum; i++) {
if (strcmp(secnames + sechdrs[i].sh_name, name) == 0) {
if (size)
*size = sechdrs[i].sh_size;
return (void *)ehdr + sechdrs[i].sh_offset;
}
}
if (size)
*size = 0;
return NULL;
}
static Elf32_Sym * __init find_symbol(struct elfinfo *lib, const char *symname)
{
unsigned int i;
for (i = 0; i < (lib->dynsymsize / sizeof(Elf32_Sym)); i++) {
char name[MAX_SYMNAME], *c;
if (lib->dynsym[i].st_name == 0)
continue;
strlcpy(name, lib->dynstr + lib->dynsym[i].st_name,
MAX_SYMNAME);
c = strchr(name, '@');
if (c)
*c = 0;
if (strcmp(symname, name) == 0)
return &lib->dynsym[i];
}
return NULL;
}
static void __init vdso_nullpatch_one(struct elfinfo *lib, const char *symname)
{
Elf32_Sym *sym;
sym = find_symbol(lib, symname);
if (!sym)
return;
sym->st_name = 0;
}
static void __init patch_vdso(void *ehdr)
{
struct elfinfo einfo;
einfo = (struct elfinfo) {
.hdr = ehdr,
};
einfo.dynsym = find_section(einfo.hdr, ".dynsym", &einfo.dynsymsize);
einfo.dynstr = find_section(einfo.hdr, ".dynstr", NULL);
/* If the virtual counter is absent or non-functional we don't
* want programs to incur the slight additional overhead of
* dispatching through the VDSO only to fall back to syscalls.
*/
if (!cntvct_ok) {
vdso_nullpatch_one(&einfo, "__vdso_gettimeofday");
vdso_nullpatch_one(&einfo, "__vdso_clock_gettime");
}
}
static int __init vdso_init(void)
{
unsigned int text_pages;
int i;
if (memcmp(&vdso_start, "\177ELF", 4)) {
pr_err("VDSO is not a valid ELF object!\n");
return -ENOEXEC;
}
text_pages = (&vdso_end - &vdso_start) >> PAGE_SHIFT;
pr_debug("vdso: %i text pages at base %p\n", text_pages, &vdso_start);
/* Allocate the VDSO text pagelist */
vdso_text_pagelist = kcalloc(text_pages, sizeof(struct page *),
GFP_KERNEL);
if (vdso_text_pagelist == NULL)
return -ENOMEM;
/* Grab the VDSO data page. */
vdso_data_page = virt_to_page(vdso_data);
/* Grab the VDSO text pages. */
for (i = 0; i < text_pages; i++) {
struct page *page;
page = virt_to_page(&vdso_start + i * PAGE_SIZE);
vdso_text_pagelist[i] = page;
}
vdso_text_mapping.pages = vdso_text_pagelist;
vdso_total_pages = 1; /* for the data/vvar page */
vdso_total_pages += text_pages;
cntvct_ok = cntvct_functional();
patch_vdso(&vdso_start);
return 0;
}
arch_initcall(vdso_init);
static int install_vvar(struct mm_struct *mm, unsigned long addr)
{
struct vm_area_struct *vma;
vma = _install_special_mapping(mm, addr, PAGE_SIZE,
VM_READ | VM_MAYREAD,
&vdso_data_mapping);
return IS_ERR(vma) ? PTR_ERR(vma) : 0;
}
/* assumes mmap_sem is write-locked */
void arm_install_vdso(struct mm_struct *mm, unsigned long addr)
{
struct vm_area_struct *vma;
unsigned long len;
mm->context.vdso = 0;
if (vdso_text_pagelist == NULL)
return;
if (install_vvar(mm, addr))
return;
/* Account for vvar page. */
addr += PAGE_SIZE;
len = (vdso_total_pages - 1) << PAGE_SHIFT;
vma = _install_special_mapping(mm, addr, len,
VM_READ | VM_EXEC | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC,
&vdso_text_mapping);
if (!IS_ERR(vma))
mm->context.vdso = addr;
}
static void vdso_write_begin(struct vdso_data *vdata)
{
++vdso_data->seq_count;
smp_wmb(); /* Pairs with smp_rmb in vdso_read_retry */
}
static void vdso_write_end(struct vdso_data *vdata)
{
smp_wmb(); /* Pairs with smp_rmb in vdso_read_begin */
++vdso_data->seq_count;
}
static bool tk_is_cntvct(const struct timekeeper *tk)
{
if (!IS_ENABLED(CONFIG_ARM_ARCH_TIMER))
return false;
if (strcmp(tk->tkr_mono.clock->name, "arch_sys_counter") != 0)
return false;
return true;
}
/**
* update_vsyscall - update the vdso data page
*
* Increment the sequence counter, making it odd, indicating to
* userspace that an update is in progress. Update the fields used
* for coarse clocks and, if the architected system timer is in use,
* the fields used for high precision clocks. Increment the sequence
* counter again, making it even, indicating to userspace that the
* update is finished.
*
* Userspace is expected to sample seq_count before reading any other
* fields from the data page. If seq_count is odd, userspace is
* expected to wait until it becomes even. After copying data from
* the page, userspace must sample seq_count again; if it has changed
* from its previous value, userspace must retry the whole sequence.
*
* Calls to update_vsyscall are serialized by the timekeeping core.
*/
void update_vsyscall(struct timekeeper *tk)
{
struct timespec xtime_coarse;
struct timespec64 *wtm = &tk->wall_to_monotonic;
if (!cntvct_ok) {
/* The entry points have been zeroed, so there is no
* point in updating the data page.
*/
return;
}
vdso_write_begin(vdso_data);
xtime_coarse = __current_kernel_time();
vdso_data->tk_is_cntvct = tk_is_cntvct(tk);
vdso_data->xtime_coarse_sec = xtime_coarse.tv_sec;
vdso_data->xtime_coarse_nsec = xtime_coarse.tv_nsec;
vdso_data->wtm_clock_sec = wtm->tv_sec;
vdso_data->wtm_clock_nsec = wtm->tv_nsec;
if (vdso_data->tk_is_cntvct) {
vdso_data->cs_cycle_last = tk->tkr_mono.cycle_last;
vdso_data->xtime_clock_sec = tk->xtime_sec;
vdso_data->xtime_clock_snsec = tk->tkr_mono.xtime_nsec;
vdso_data->cs_mult = tk->tkr_mono.mult;
vdso_data->cs_shift = tk->tkr_mono.shift;
vdso_data->cs_mask = tk->tkr_mono.mask;
}
vdso_write_end(vdso_data);
flush_dcache_page(virt_to_page(vdso_data));
}
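The comment above describes the seq_count protocol from the kernel (writer) side; the VDSO's gettimeofday/clock_gettime implement the mirror image in userspace. A minimal sketch of such a reader loop, assuming a data-page view with just the sequence counter (names are illustrative, not the actual VDSO code, and the required read barriers are noted as comments rather than implemented):

struct vdso_data_view {
	unsigned int seq_count;
	/* ... clock fields protected by seq_count ... */
};

static unsigned int seq_begin(const volatile struct vdso_data_view *d)
{
	unsigned int seq;

	/* Spin while an update is in progress (odd count). */
	do {
		seq = d->seq_count;
	} while (seq & 1);
	/* read barrier: sample seq_count before the data it protects */
	return seq;
}

static int seq_retry(const volatile struct vdso_data_view *d, unsigned int start)
{
	/* read barrier: read the data before re-checking seq_count */
	return d->seq_count != start;	/* non-zero: redo the whole read */
}

A caller samples seq_begin(), copies the fields it needs, then repeats the sequence for as long as seq_retry() reports that the counter moved underneath it.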
void update_vsyscall_tz(void)
{
vdso_data->tz_minuteswest = sys_tz.tz_minuteswest;
vdso_data->tz_dsttime = sys_tz.tz_dsttime;
flush_dcache_page(virt_to_page(vdso_data));
}
...@@ -74,7 +74,7 @@ SECTIONS ...@@ -74,7 +74,7 @@ SECTIONS
ARM_EXIT_DISCARD(EXIT_DATA) ARM_EXIT_DISCARD(EXIT_DATA)
EXIT_CALL EXIT_CALL
#ifndef CONFIG_MMU #ifndef CONFIG_MMU
*(.fixup) *(.text.fixup)
*(__ex_table) *(__ex_table)
#endif #endif
#ifndef CONFIG_SMP_ON_UP #ifndef CONFIG_SMP_ON_UP
...@@ -100,6 +100,7 @@ SECTIONS ...@@ -100,6 +100,7 @@ SECTIONS
.text : { /* Real text segment */ .text : { /* Real text segment */
_stext = .; /* Text and read-only data */ _stext = .; /* Text and read-only data */
IDMAP_TEXT
__exception_text_start = .; __exception_text_start = .;
*(.exception.text) *(.exception.text)
__exception_text_end = .; __exception_text_end = .;
...@@ -108,10 +109,6 @@ SECTIONS ...@@ -108,10 +109,6 @@ SECTIONS
SCHED_TEXT SCHED_TEXT
LOCK_TEXT LOCK_TEXT
KPROBES_TEXT KPROBES_TEXT
IDMAP_TEXT
#ifdef CONFIG_MMU
*(.fixup)
#endif
*(.gnu.warning) *(.gnu.warning)
*(.glue_7) *(.glue_7)
*(.glue_7t) *(.glue_7t)
......
...@@ -47,7 +47,7 @@ USER( strnebt r2, [r0]) ...@@ -47,7 +47,7 @@ USER( strnebt r2, [r0])
ENDPROC(__clear_user) ENDPROC(__clear_user)
ENDPROC(__clear_user_std) ENDPROC(__clear_user_std)
.pushsection .fixup,"ax" .pushsection .text.fixup,"ax"
.align 0 .align 0
9001: ldmfd sp!, {r0, pc} 9001: ldmfd sp!, {r0, pc}
.popsection .popsection
......
...@@ -100,7 +100,7 @@ WEAK(__copy_to_user) ...@@ -100,7 +100,7 @@ WEAK(__copy_to_user)
ENDPROC(__copy_to_user) ENDPROC(__copy_to_user)
ENDPROC(__copy_to_user_std) ENDPROC(__copy_to_user_std)
.pushsection .fixup,"ax" .pushsection .text.fixup,"ax"
.align 0 .align 0
copy_abort_preamble copy_abort_preamble
ldmfd sp!, {r1, r2, r3} ldmfd sp!, {r1, r2, r3}
......
...@@ -68,7 +68,7 @@ ...@@ -68,7 +68,7 @@
* so properly, we would have to add in whatever registers were loaded before * so properly, we would have to add in whatever registers were loaded before
* the fault, which, with the current asm above is not predictable. * the fault, which, with the current asm above is not predictable.
*/ */
.pushsection .fixup,"ax" .pushsection .text.fixup,"ax"
.align 4 .align 4
9001: mov r4, #-EFAULT 9001: mov r4, #-EFAULT
ldr r5, [sp, #8*4] @ *err_ptr ldr r5, [sp, #8*4] @ *err_ptr
......
...@@ -83,6 +83,12 @@ void __init register_current_timer_delay(const struct delay_timer *timer) ...@@ -83,6 +83,12 @@ void __init register_current_timer_delay(const struct delay_timer *timer)
NSEC_PER_SEC, 3600); NSEC_PER_SEC, 3600);
res = cyc_to_ns(1ULL, new_mult, new_shift); res = cyc_to_ns(1ULL, new_mult, new_shift);
if (res > 1000) {
pr_err("Ignoring delay timer %ps, which has insufficient resolution of %lluns\n",
timer, res);
return;
}
if (!delay_calibrated && (!delay_res || (res < delay_res))) { if (!delay_calibrated && (!delay_res || (res < delay_res))) {
pr_info("Switching to timer-based delay loop, resolution %lluns\n", res); pr_info("Switching to timer-based delay loop, resolution %lluns\n", res);
delay_timer = timer; delay_timer = timer;
......
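For context on the 1000 ns cut-off added above: the resolution being tested is essentially the duration of one timer tick (cyc_to_ns(1ULL, mult, shift)), so the check refuses anything slower than roughly 1 MHz. A quick back-of-the-envelope version in plain C (illustrative only; the kernel's mult/shift arithmetic may round slightly differently):

#include <stdio.h>

int main(void)
{
	unsigned long long freqs[] = { 32768, 1000000, 24000000 };	/* Hz */

	for (int i = 0; i < 3; i++) {
		unsigned long long ns_per_tick = 1000000000ULL / freqs[i];

		/* 32.768 kHz -> 30517 ns (rejected), 1 MHz -> 1000 ns (kept),
		 * 24 MHz -> 41 ns (kept). */
		printf("%llu Hz: %llu ns/tick %s\n", freqs[i], ns_per_tick,
		       ns_per_tick > 1000 ? "rejected" : "accepted");
	}
	return 0;
}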
...@@ -23,14 +23,7 @@ ...@@ -23,14 +23,7 @@
#define CPU_MASK 0xff0ffff0 #define CPU_MASK 0xff0ffff0
#define CPU_CORTEX_A9 0x410fc090 #define CPU_CORTEX_A9 0x410fc090
/* .text
* The following code is located into the .data section. This is to
* allow l2x0_regs_phys to be accessed with a relative load while we
* can't rely on any MMU translation. We could have put l2x0_regs_phys
* in the .text section as well, but some setups might insist on it to
* be truly read-only. (Reference from: arch/arm/kernel/sleep.S)
*/
.data
.align .align
/* /*
...@@ -69,10 +62,12 @@ ENTRY(exynos_cpu_resume_ns) ...@@ -69,10 +62,12 @@ ENTRY(exynos_cpu_resume_ns)
cmp r0, r1 cmp r0, r1
bne skip_cp15 bne skip_cp15
adr r0, cp15_save_power adr r0, _cp15_save_power
ldr r1, [r0] ldr r1, [r0]
adr r0, cp15_save_diag ldr r1, [r0, r1]
adr r0, _cp15_save_diag
ldr r2, [r0] ldr r2, [r0]
ldr r2, [r0, r2]
mov r0, #SMC_CMD_C15RESUME mov r0, #SMC_CMD_C15RESUME
dsb dsb
smc #0 smc #0
...@@ -118,14 +113,20 @@ skip_l2x0: ...@@ -118,14 +113,20 @@ skip_l2x0:
skip_cp15: skip_cp15:
b cpu_resume b cpu_resume
ENDPROC(exynos_cpu_resume_ns) ENDPROC(exynos_cpu_resume_ns)
.align
_cp15_save_power:
.long cp15_save_power - .
_cp15_save_diag:
.long cp15_save_diag - .
#ifdef CONFIG_CACHE_L2X0
1: .long l2x0_saved_regs - .
#endif /* CONFIG_CACHE_L2X0 */
.data
.globl cp15_save_diag .globl cp15_save_diag
cp15_save_diag: cp15_save_diag:
.long 0 @ cp15 diagnostic .long 0 @ cp15 diagnostic
.globl cp15_save_power .globl cp15_save_power
cp15_save_power: cp15_save_power:
.long 0 @ cp15 power control .long 0 @ cp15 power control
#ifdef CONFIG_CACHE_L2X0
.align
1: .long l2x0_saved_regs - .
#endif /* CONFIG_CACHE_L2X0 */
...@@ -14,7 +14,7 @@ ...@@ -14,7 +14,7 @@
#include <linux/linkage.h> #include <linux/linkage.h>
.data .text
.align .align
/* /*
......
...@@ -42,6 +42,7 @@ if ARCH_VEXPRESS ...@@ -42,6 +42,7 @@ if ARCH_VEXPRESS
config ARCH_VEXPRESS_CORTEX_A5_A9_ERRATA config ARCH_VEXPRESS_CORTEX_A5_A9_ERRATA
bool "Enable A5 and A9 only errata work-arounds" bool "Enable A5 and A9 only errata work-arounds"
default y default y
select ARM_ERRATA_643719 if SMP
select ARM_ERRATA_720789 select ARM_ERRATA_720789
select PL310_ERRATA_753970 if CACHE_L2X0 select PL310_ERRATA_753970 if CACHE_L2X0
help help
......
...@@ -738,7 +738,7 @@ config CPU_ICACHE_DISABLE ...@@ -738,7 +738,7 @@ config CPU_ICACHE_DISABLE
config CPU_DCACHE_DISABLE config CPU_DCACHE_DISABLE
bool "Disable D-Cache (C-bit)" bool "Disable D-Cache (C-bit)"
depends on CPU_CP15 depends on CPU_CP15 && !SMP
help help
Say Y here to disable the processor data cache. Unless Say Y here to disable the processor data cache. Unless
you have a reason not to or are unsure, say N. you have a reason not to or are unsure, say N.
...@@ -825,6 +825,20 @@ config KUSER_HELPERS ...@@ -825,6 +825,20 @@ config KUSER_HELPERS
Say N here only if you are absolutely certain that you do not Say N here only if you are absolutely certain that you do not
need these helpers; otherwise, the safe option is to say Y. need these helpers; otherwise, the safe option is to say Y.
config VDSO
bool "Enable VDSO for acceleration of some system calls"
depends on AEABI && MMU
default y if ARM_ARCH_TIMER
select GENERIC_TIME_VSYSCALL
help
Place in the process address space an ELF shared object
providing fast implementations of gettimeofday and
clock_gettime. Systems that implement the ARM architected
timer will receive maximum benefit.
You must have glibc 2.22 or later for programs to seamlessly
take advantage of this.
config DMA_CACHE_RWFO config DMA_CACHE_RWFO
bool "Enable read/write for ownership DMA cache maintenance" bool "Enable read/write for ownership DMA cache maintenance"
depends on CPU_V6K && SMP depends on CPU_V6K && SMP
......
...@@ -201,7 +201,7 @@ union offset_union { ...@@ -201,7 +201,7 @@ union offset_union {
THUMB( "1: "ins" %1, [%2]\n" ) \ THUMB( "1: "ins" %1, [%2]\n" ) \
THUMB( " add %2, %2, #1\n" ) \ THUMB( " add %2, %2, #1\n" ) \
"2:\n" \ "2:\n" \
" .pushsection .fixup,\"ax\"\n" \ " .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \ " .align 2\n" \
"3: mov %0, #1\n" \ "3: mov %0, #1\n" \
" b 2b\n" \ " b 2b\n" \
...@@ -261,7 +261,7 @@ union offset_union { ...@@ -261,7 +261,7 @@ union offset_union {
" mov %1, %1, "NEXT_BYTE"\n" \ " mov %1, %1, "NEXT_BYTE"\n" \
"2: "ins" %1, [%2]\n" \ "2: "ins" %1, [%2]\n" \
"3:\n" \ "3:\n" \
" .pushsection .fixup,\"ax\"\n" \ " .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \ " .align 2\n" \
"4: mov %0, #1\n" \ "4: mov %0, #1\n" \
" b 3b\n" \ " b 3b\n" \
...@@ -301,7 +301,7 @@ union offset_union { ...@@ -301,7 +301,7 @@ union offset_union {
" mov %1, %1, "NEXT_BYTE"\n" \ " mov %1, %1, "NEXT_BYTE"\n" \
"4: "ins" %1, [%2]\n" \ "4: "ins" %1, [%2]\n" \
"5:\n" \ "5:\n" \
" .pushsection .fixup,\"ax\"\n" \ " .pushsection .text.fixup,\"ax\"\n" \
" .align 2\n" \ " .align 2\n" \
"6: mov %0, #1\n" \ "6: mov %0, #1\n" \
" b 5b\n" \ " b 5b\n" \
......
...@@ -1647,6 +1647,7 @@ int __init l2x0_of_init(u32 aux_val, u32 aux_mask) ...@@ -1647,6 +1647,7 @@ int __init l2x0_of_init(u32 aux_val, u32 aux_mask)
struct device_node *np; struct device_node *np;
struct resource res; struct resource res;
u32 cache_id, old_aux; u32 cache_id, old_aux;
u32 cache_level = 2;
np = of_find_matching_node(NULL, l2x0_ids); np = of_find_matching_node(NULL, l2x0_ids);
if (!np) if (!np)
...@@ -1679,6 +1680,12 @@ int __init l2x0_of_init(u32 aux_val, u32 aux_mask) ...@@ -1679,6 +1680,12 @@ int __init l2x0_of_init(u32 aux_val, u32 aux_mask)
if (!of_property_read_bool(np, "cache-unified")) if (!of_property_read_bool(np, "cache-unified"))
pr_err("L2C: device tree omits to specify unified cache\n"); pr_err("L2C: device tree omits to specify unified cache\n");
if (of_property_read_u32(np, "cache-level", &cache_level))
pr_err("L2C: device tree omits to specify cache-level\n");
if (cache_level != 2)
pr_err("L2C: device tree specifies invalid cache level\n");
/* Read back current (default) hardware configuration */ /* Read back current (default) hardware configuration */
if (data->save) if (data->save)
data->save(l2x0_base); data->save(l2x0_base);
......
...@@ -36,10 +36,10 @@ ENTRY(v7_invalidate_l1) ...@@ -36,10 +36,10 @@ ENTRY(v7_invalidate_l1)
mcr p15, 2, r0, c0, c0, 0 mcr p15, 2, r0, c0, c0, 0
mrc p15, 1, r0, c0, c0, 0 mrc p15, 1, r0, c0, c0, 0
ldr r1, =0x7fff movw r1, #0x7fff
and r2, r1, r0, lsr #13 and r2, r1, r0, lsr #13
ldr r1, =0x3ff movw r1, #0x3ff
and r3, r1, r0, lsr #3 @ NumWays - 1 and r3, r1, r0, lsr #3 @ NumWays - 1
add r2, r2, #1 @ NumSets add r2, r2, #1 @ NumSets
...@@ -90,21 +90,20 @@ ENDPROC(v7_flush_icache_all) ...@@ -90,21 +90,20 @@ ENDPROC(v7_flush_icache_all)
ENTRY(v7_flush_dcache_louis) ENTRY(v7_flush_dcache_louis)
dmb @ ensure ordering with previous memory accesses dmb @ ensure ordering with previous memory accesses
mrc p15, 1, r0, c0, c0, 1 @ read clidr, r0 = clidr mrc p15, 1, r0, c0, c0, 1 @ read clidr, r0 = clidr
ALT_SMP(ands r3, r0, #(7 << 21)) @ extract LoUIS from clidr ALT_SMP(mov r3, r0, lsr #20) @ move LoUIS into position
ALT_UP(ands r3, r0, #(7 << 27)) @ extract LoUU from clidr ALT_UP( mov r3, r0, lsr #26) @ move LoUU into position
ands r3, r3, #7 << 1 @ extract LoU*2 field from clidr
bne start_flush_levels @ LoU != 0, start flushing
#ifdef CONFIG_ARM_ERRATA_643719 #ifdef CONFIG_ARM_ERRATA_643719
ALT_SMP(mrceq p15, 0, r2, c0, c0, 0) @ read main ID register ALT_SMP(mrc p15, 0, r2, c0, c0, 0) @ read main ID register
ALT_UP(reteq lr) @ LoUU is zero, so nothing to do ALT_UP( ret lr) @ LoUU is zero, so nothing to do
ldreq r1, =0x410fc090 @ ID of ARM Cortex A9 r0p? movw r1, #:lower16:(0x410fc090 >> 4) @ ID of ARM Cortex A9 r0p?
biceq r2, r2, #0x0000000f @ clear minor revision number movt r1, #:upper16:(0x410fc090 >> 4)
teqeq r2, r1 @ test for errata affected core and if so... teq r1, r2, lsr #4 @ test for errata affected core and if so...
orreqs r3, #(1 << 21) @ fix LoUIS value (and set flags state to 'ne') moveq r3, #1 << 1 @ fix LoUIS value
beq start_flush_levels @ start flushing cache levels
#endif #endif
ALT_SMP(mov r3, r3, lsr #20) @ r3 = LoUIS * 2 ret lr
ALT_UP(mov r3, r3, lsr #26) @ r3 = LoUU * 2
reteq lr @ return if level == 0
mov r10, #0 @ r10 (starting level) = 0
b flush_levels @ start flushing cache levels
ENDPROC(v7_flush_dcache_louis) ENDPROC(v7_flush_dcache_louis)
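The reworked v7_flush_dcache_louis above boils down to extracting a 3-bit level field from CLIDR and keeping it scaled by two, because the flush loop counts cache levels in steps of 2. In C terms, the level selection it performs is roughly the following (field positions per the ARMv7 CLIDR layout; this is an explanatory sketch, not a drop-in replacement for the assembly):

static unsigned int louis_times_two(unsigned int clidr, int smp, int errata_643719_a9_r0)
{
	/* LoUIS lives in CLIDR[23:21], LoUU in CLIDR[29:27]. */
	unsigned int lou = smp ? (clidr >> 21) & 7 : (clidr >> 27) & 7;

	/* Errata 643719: Cortex-A9 r0pX may report LoUIS == 0; treat it as 1. */
	if (!lou && smp && errata_643719_a9_r0)
		lou = 1;

	return lou << 1;	/* level counter used by the flush loop */
}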
/* /*
...@@ -119,9 +118,10 @@ ENDPROC(v7_flush_dcache_louis) ...@@ -119,9 +118,10 @@ ENDPROC(v7_flush_dcache_louis)
ENTRY(v7_flush_dcache_all) ENTRY(v7_flush_dcache_all)
dmb @ ensure ordering with previous memory accesses dmb @ ensure ordering with previous memory accesses
mrc p15, 1, r0, c0, c0, 1 @ read clidr mrc p15, 1, r0, c0, c0, 1 @ read clidr
ands r3, r0, #0x7000000 @ extract loc from clidr mov r3, r0, lsr #23 @ move LoC into position
mov r3, r3, lsr #23 @ left align loc bit field ands r3, r3, #7 << 1 @ extract LoC*2 from clidr
beq finished @ if loc is 0, then no need to clean beq finished @ if loc is 0, then no need to clean
start_flush_levels:
mov r10, #0 @ start clean at cache level 0 mov r10, #0 @ start clean at cache level 0
flush_levels: flush_levels:
add r2, r10, r10, lsr #1 @ work out 3x current cache level add r2, r10, r10, lsr #1 @ work out 3x current cache level
...@@ -140,10 +140,10 @@ flush_levels: ...@@ -140,10 +140,10 @@ flush_levels:
#endif #endif
and r2, r1, #7 @ extract the length of the cache lines and r2, r1, #7 @ extract the length of the cache lines
add r2, r2, #4 @ add 4 (line length offset) add r2, r2, #4 @ add 4 (line length offset)
ldr r4, =0x3ff movw r4, #0x3ff
ands r4, r4, r1, lsr #3 @ find maximum number on the way size ands r4, r4, r1, lsr #3 @ find maximum number on the way size
clz r5, r4 @ find bit position of way size increment clz r5, r4 @ find bit position of way size increment
ldr r7, =0x7fff movw r7, #0x7fff
ands r7, r7, r1, lsr #13 @ extract max number of the index size ands r7, r7, r1, lsr #13 @ extract max number of the index size
loop1: loop1:
mov r9, r7 @ create working copy of max index mov r9, r7 @ create working copy of max index
......

...@@ -86,55 +86,6 @@ static int __init parse_tag_initrd2(const struct tag *tag) ...@@ -86,55 +86,6 @@ static int __init parse_tag_initrd2(const struct tag *tag)
__tagtable(ATAG_INITRD2, parse_tag_initrd2); __tagtable(ATAG_INITRD2, parse_tag_initrd2);
/*
* This keeps memory configuration data used by a couple memory
* initialization functions, as well as show_mem() for the skipping
* of holes in the memory map. It is populated by arm_add_memory().
*/
void show_mem(unsigned int filter)
{
int free = 0, total = 0, reserved = 0;
int shared = 0, cached = 0, slab = 0;
struct memblock_region *reg;
printk("Mem-info:\n");
show_free_areas(filter);
for_each_memblock (memory, reg) {
unsigned int pfn1, pfn2;
struct page *page, *end;
pfn1 = memblock_region_memory_base_pfn(reg);
pfn2 = memblock_region_memory_end_pfn(reg);
page = pfn_to_page(pfn1);
end = pfn_to_page(pfn2 - 1) + 1;
do {
total++;
if (PageReserved(page))
reserved++;
else if (PageSwapCache(page))
cached++;
else if (PageSlab(page))
slab++;
else if (!page_count(page))
free++;
else
shared += page_count(page) - 1;
pfn1++;
page = pfn_to_page(pfn1);
} while (pfn1 < pfn2);
}
printk("%d pages of RAM\n", total);
printk("%d free pages\n", free);
printk("%d reserved pages\n", reserved);
printk("%d slab pages\n", slab);
printk("%d pages shared\n", shared);
printk("%d pages swap cached\n", cached);
}
static void __init find_limits(unsigned long *min, unsigned long *max_low, static void __init find_limits(unsigned long *min, unsigned long *max_low,
unsigned long *max_high) unsigned long *max_high)
{ {
......
...@@ -507,7 +507,7 @@ cpu_arm1020_name: ...@@ -507,7 +507,7 @@ cpu_arm1020_name:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.type __arm1020_proc_info,#object .type __arm1020_proc_info,#object
__arm1020_proc_info: __arm1020_proc_info:
...@@ -519,7 +519,7 @@ __arm1020_proc_info: ...@@ -519,7 +519,7 @@ __arm1020_proc_info:
.long PMD_TYPE_SECT | \ .long PMD_TYPE_SECT | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __arm1020_setup initfn __arm1020_setup, __arm1020_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB
......
...@@ -465,7 +465,7 @@ arm1020e_crval: ...@@ -465,7 +465,7 @@ arm1020e_crval:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.type __arm1020e_proc_info,#object .type __arm1020e_proc_info,#object
__arm1020e_proc_info: __arm1020e_proc_info:
...@@ -479,7 +479,7 @@ __arm1020e_proc_info: ...@@ -479,7 +479,7 @@ __arm1020e_proc_info:
PMD_BIT4 | \ PMD_BIT4 | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __arm1020e_setup initfn __arm1020e_setup, __arm1020e_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_EDSP .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_EDSP
......
...@@ -448,7 +448,7 @@ arm1022_crval: ...@@ -448,7 +448,7 @@ arm1022_crval:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.type __arm1022_proc_info,#object .type __arm1022_proc_info,#object
__arm1022_proc_info: __arm1022_proc_info:
...@@ -462,7 +462,7 @@ __arm1022_proc_info: ...@@ -462,7 +462,7 @@ __arm1022_proc_info:
PMD_BIT4 | \ PMD_BIT4 | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __arm1022_setup initfn __arm1022_setup, __arm1022_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_EDSP .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_EDSP
......
...@@ -442,7 +442,7 @@ arm1026_crval: ...@@ -442,7 +442,7 @@ arm1026_crval:
string cpu_arm1026_name, "ARM1026EJ-S" string cpu_arm1026_name, "ARM1026EJ-S"
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.type __arm1026_proc_info,#object .type __arm1026_proc_info,#object
__arm1026_proc_info: __arm1026_proc_info:
...@@ -456,7 +456,7 @@ __arm1026_proc_info: ...@@ -456,7 +456,7 @@ __arm1026_proc_info:
PMD_BIT4 | \ PMD_BIT4 | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __arm1026_setup initfn __arm1026_setup, __arm1026_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP|HWCAP_JAVA .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP|HWCAP_JAVA
......
...@@ -186,7 +186,7 @@ arm720_crval: ...@@ -186,7 +186,7 @@ arm720_crval:
* See <asm/procinfo.h> for a definition of this structure. * See <asm/procinfo.h> for a definition of this structure.
*/ */
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.macro arm720_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, cpu_flush:req .macro arm720_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, cpu_flush:req
.type __\name\()_proc_info,#object .type __\name\()_proc_info,#object
...@@ -203,7 +203,7 @@ __\name\()_proc_info: ...@@ -203,7 +203,7 @@ __\name\()_proc_info:
PMD_BIT4 | \ PMD_BIT4 | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b \cpu_flush @ cpu_flush initfn \cpu_flush, __\name\()_proc_info @ cpu_flush
.long cpu_arch_name @ arch_name .long cpu_arch_name @ arch_name
.long cpu_elf_name @ elf_name .long cpu_elf_name @ elf_name
.long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB @ elf_hwcap .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB @ elf_hwcap
......
...@@ -132,14 +132,14 @@ __arm740_setup: ...@@ -132,14 +132,14 @@ __arm740_setup:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.type __arm740_proc_info,#object .type __arm740_proc_info,#object
__arm740_proc_info: __arm740_proc_info:
.long 0x41807400 .long 0x41807400
.long 0xfffffff0 .long 0xfffffff0
.long 0 .long 0
.long 0 .long 0
b __arm740_setup initfn __arm740_setup, __arm740_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_26BIT .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_26BIT
......
...@@ -76,7 +76,7 @@ __arm7tdmi_setup: ...@@ -76,7 +76,7 @@ __arm7tdmi_setup:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.macro arm7tdmi_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, \ .macro arm7tdmi_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, \
extra_hwcaps=0 extra_hwcaps=0
...@@ -86,7 +86,7 @@ __\name\()_proc_info: ...@@ -86,7 +86,7 @@ __\name\()_proc_info:
.long \cpu_mask .long \cpu_mask
.long 0 .long 0
.long 0 .long 0
b __arm7tdmi_setup initfn __arm7tdmi_setup, __\name\()_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_26BIT | ( \extra_hwcaps ) .long HWCAP_SWP | HWCAP_26BIT | ( \extra_hwcaps )
......
...@@ -448,7 +448,7 @@ arm920_crval: ...@@ -448,7 +448,7 @@ arm920_crval:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.type __arm920_proc_info,#object .type __arm920_proc_info,#object
__arm920_proc_info: __arm920_proc_info:
...@@ -464,7 +464,7 @@ __arm920_proc_info: ...@@ -464,7 +464,7 @@ __arm920_proc_info:
PMD_BIT4 | \ PMD_BIT4 | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __arm920_setup initfn __arm920_setup, __arm920_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB
......
...@@ -426,7 +426,7 @@ arm922_crval: ...@@ -426,7 +426,7 @@ arm922_crval:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.type __arm922_proc_info,#object .type __arm922_proc_info,#object
__arm922_proc_info: __arm922_proc_info:
...@@ -442,7 +442,7 @@ __arm922_proc_info: ...@@ -442,7 +442,7 @@ __arm922_proc_info:
PMD_BIT4 | \ PMD_BIT4 | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __arm922_setup initfn __arm922_setup, __arm922_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB
......
...@@ -494,7 +494,7 @@ arm925_crval: ...@@ -494,7 +494,7 @@ arm925_crval:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.macro arm925_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, cache .macro arm925_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, cache
.type __\name\()_proc_info,#object .type __\name\()_proc_info,#object
...@@ -510,7 +510,7 @@ __\name\()_proc_info: ...@@ -510,7 +510,7 @@ __\name\()_proc_info:
PMD_BIT4 | \ PMD_BIT4 | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __arm925_setup initfn __arm925_setup, __\name\()_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB
......
...@@ -474,7 +474,7 @@ arm926_crval: ...@@ -474,7 +474,7 @@ arm926_crval:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.type __arm926_proc_info,#object .type __arm926_proc_info,#object
__arm926_proc_info: __arm926_proc_info:
...@@ -490,7 +490,7 @@ __arm926_proc_info: ...@@ -490,7 +490,7 @@ __arm926_proc_info:
PMD_BIT4 | \ PMD_BIT4 | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __arm926_setup initfn __arm926_setup, __arm926_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP|HWCAP_JAVA .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP|HWCAP_JAVA
......
...@@ -297,26 +297,16 @@ __arm940_setup: ...@@ -297,26 +297,16 @@ __arm940_setup:
mcr p15, 0, r0, c6, c0, 1 mcr p15, 0, r0, c6, c0, 1
ldr r0, =(CONFIG_DRAM_BASE & 0xFFFFF000) @ base[31:12] of RAM ldr r0, =(CONFIG_DRAM_BASE & 0xFFFFF000) @ base[31:12] of RAM
ldr r1, =(CONFIG_DRAM_SIZE >> 12) @ size of RAM (must be >= 4KB) ldr r7, =CONFIG_DRAM_SIZE >> 12 @ size of RAM (must be >= 4KB)
mov r2, #10 @ 11 is the minimum (4KB) pr_val r3, r0, r7, #1
1: add r2, r2, #1 @ area size *= 2 mcr p15, 0, r3, c6, c1, 0 @ set area 1, RAM
mov r1, r1, lsr #1 mcr p15, 0, r3, c6, c1, 1
bne 1b @ count not zero r-shift
orr r0, r0, r2, lsl #1 @ the area register value
orr r0, r0, #1 @ set enable bit
mcr p15, 0, r0, c6, c1, 0 @ set area 1, RAM
mcr p15, 0, r0, c6, c1, 1
ldr r0, =(CONFIG_FLASH_MEM_BASE & 0xFFFFF000) @ base[31:12] of FLASH ldr r0, =(CONFIG_FLASH_MEM_BASE & 0xFFFFF000) @ base[31:12] of FLASH
ldr r1, =(CONFIG_FLASH_SIZE >> 12) @ size of FLASH (must be >= 4KB) ldr r7, =CONFIG_FLASH_SIZE @ size of FLASH (must be >= 4KB)
mov r2, #10 @ 11 is the minimum (4KB) pr_val r3, r0, r6, #1
1: add r2, r2, #1 @ area size *= 2 mcr p15, 0, r3, c6, c2, 0 @ set area 2, ROM/FLASH
mov r1, r1, lsr #1 mcr p15, 0, r3, c6, c2, 1
bne 1b @ count not zero r-shift
orr r0, r0, r2, lsl #1 @ the area register value
orr r0, r0, #1 @ set enable bit
mcr p15, 0, r0, c6, c2, 0 @ set area 2, ROM/FLASH
mcr p15, 0, r0, c6, c2, 1
mov r0, #0x06 mov r0, #0x06
mcr p15, 0, r0, c2, c0, 0 @ Region 1&2 cacheable mcr p15, 0, r0, c2, c0, 0 @ Region 1&2 cacheable
...@@ -354,14 +344,14 @@ __arm940_setup: ...@@ -354,14 +344,14 @@ __arm940_setup:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.type __arm940_proc_info,#object .type __arm940_proc_info,#object
__arm940_proc_info: __arm940_proc_info:
.long 0x41009400 .long 0x41009400
.long 0xff00fff0 .long 0xff00fff0
.long 0 .long 0
b __arm940_setup initfn __arm940_setup, __arm940_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB
......
...@@ -343,24 +343,14 @@ __arm946_setup: ...@@ -343,24 +343,14 @@ __arm946_setup:
mcr p15, 0, r0, c6, c0, 0 @ set region 0, default mcr p15, 0, r0, c6, c0, 0 @ set region 0, default
ldr r0, =(CONFIG_DRAM_BASE & 0xFFFFF000) @ base[31:12] of RAM ldr r0, =(CONFIG_DRAM_BASE & 0xFFFFF000) @ base[31:12] of RAM
ldr r1, =(CONFIG_DRAM_SIZE >> 12) @ size of RAM (must be >= 4KB) ldr r7, =CONFIG_DRAM_SIZE @ size of RAM (must be >= 4KB)
mov r2, #10 @ 11 is the minimum (4KB) pr_val r3, r0, r7, #1
1: add r2, r2, #1 @ area size *= 2 mcr p15, 0, r3, c6, c1, 0
mov r1, r1, lsr #1
bne 1b @ count not zero r-shift
orr r0, r0, r2, lsl #1 @ the region register value
orr r0, r0, #1 @ set enable bit
mcr p15, 0, r0, c6, c1, 0 @ set region 1, RAM
ldr r0, =(CONFIG_FLASH_MEM_BASE & 0xFFFFF000) @ base[31:12] of FLASH ldr r0, =(CONFIG_FLASH_MEM_BASE & 0xFFFFF000) @ base[31:12] of FLASH
ldr r1, =(CONFIG_FLASH_SIZE >> 12) @ size of FLASH (must be >= 4KB) ldr r7, =CONFIG_FLASH_SIZE @ size of FLASH (must be >= 4KB)
mov r2, #10 @ 11 is the minimum (4KB) pr_val r3, r0, r7, #1
1: add r2, r2, #1 @ area size *= 2 mcr p15, 0, r3, c6, c2, 0
mov r1, r1, lsr #1
bne 1b @ count not zero r-shift
orr r0, r0, r2, lsl #1 @ the region register value
orr r0, r0, #1 @ set enable bit
mcr p15, 0, r0, c6, c2, 0 @ set region 2, ROM/FLASH
mov r0, #0x06 mov r0, #0x06
mcr p15, 0, r0, c2, c0, 0 @ region 1,2 d-cacheable mcr p15, 0, r0, c2, c0, 0 @ region 1,2 d-cacheable
...@@ -409,14 +399,14 @@ __arm946_setup: ...@@ -409,14 +399,14 @@ __arm946_setup:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.type __arm946_proc_info,#object .type __arm946_proc_info,#object
__arm946_proc_info: __arm946_proc_info:
.long 0x41009460 .long 0x41009460
.long 0xff00fff0 .long 0xff00fff0
.long 0 .long 0
.long 0 .long 0
b __arm946_setup initfn __arm946_setup, __arm946_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB
......
...@@ -70,7 +70,7 @@ __arm9tdmi_setup: ...@@ -70,7 +70,7 @@ __arm9tdmi_setup:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.macro arm9tdmi_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req .macro arm9tdmi_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req
.type __\name\()_proc_info, #object .type __\name\()_proc_info, #object
...@@ -79,7 +79,7 @@ __\name\()_proc_info: ...@@ -79,7 +79,7 @@ __\name\()_proc_info:
.long \cpu_mask .long \cpu_mask
.long 0 .long 0
.long 0 .long 0
b __arm9tdmi_setup initfn __arm9tdmi_setup, __\name\()_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_THUMB | HWCAP_26BIT .long HWCAP_SWP | HWCAP_THUMB | HWCAP_26BIT
......
...@@ -190,7 +190,7 @@ fa526_cr1_set: ...@@ -190,7 +190,7 @@ fa526_cr1_set:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.type __fa526_proc_info,#object .type __fa526_proc_info,#object
__fa526_proc_info: __fa526_proc_info:
...@@ -206,7 +206,7 @@ __fa526_proc_info: ...@@ -206,7 +206,7 @@ __fa526_proc_info:
PMD_BIT4 | \ PMD_BIT4 | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __fa526_setup initfn __fa526_setup, __fa526_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_HALF .long HWCAP_SWP | HWCAP_HALF
......
...@@ -584,7 +584,7 @@ feroceon_crval: ...@@ -584,7 +584,7 @@ feroceon_crval:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.macro feroceon_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, cache:req .macro feroceon_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, cache:req
.type __\name\()_proc_info,#object .type __\name\()_proc_info,#object
...@@ -601,7 +601,8 @@ __\name\()_proc_info: ...@@ -601,7 +601,8 @@ __\name\()_proc_info:
PMD_BIT4 | \ PMD_BIT4 | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __feroceon_setup initfn __feroceon_setup, __\name\()_proc_info
.long __feroceon_setup
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP
......
...@@ -331,3 +331,31 @@ ENTRY(\name\()_tlb_fns) ...@@ -331,3 +331,31 @@ ENTRY(\name\()_tlb_fns)
.globl \x .globl \x
.equ \x, \y .equ \x, \y
.endm .endm
.macro initfn, func, base
.long \func - \base
.endm
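The initfn macro above replaces the old embedded "b <setup>" branch in each proc_info record with a position-independent word: the setup function's address minus the record's own address. That keeps ".proc.info.init" free of code and lets very large kernels link even when .init ends up far away. A sketch of how such an entry would be turned back into a callable pointer, with hypothetical structure and field names (the real consumer is assembly in head.S, so the C here only shows the arithmetic):

struct proc_info_stub {
	long initfunc_offset;		/* emitted by: initfn func, base  ->  .long func - base */
	/* ... remaining proc_info fields ... */
};

typedef void (*cpu_setup_fn)(void);

static cpu_setup_fn proc_info_setup(struct proc_info_stub *info)
{
	/* Rebase the stored offset against the record's runtime address. */
	return (cpu_setup_fn)((unsigned long)info + info->initfunc_offset);
}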
/*
* Macro to calculate the log2 size for the protection region
* registers. This calculates rd = log2(size) - 1. tmp must
* not be the same register as rd.
*/
.macro pr_sz, rd, size, tmp
mov \tmp, \size, lsr #12
mov \rd, #11
1: movs \tmp, \tmp, lsr #1
addne \rd, \rd, #1
bne 1b
.endm
/*
* Macro to generate a protection region register value
* given a pre-masked address, size, and enable bit.
* Corrupts size.
*/
.macro pr_val, dest, addr, size, enable
pr_sz \dest, \size, \size @ calculate log2(size) - 1
orr \dest, \addr, \dest, lsl #1 @ mask in the region size
orr \dest, \dest, \enable
.endm
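pr_sz derives log2(size) - 1 by counting right shifts of size >> 12, starting at 11 so the 4 KB architectural minimum encodes as 11, and pr_val then folds that into a protection-region register value: base | (log2(size) - 1) << 1 | enable. A small C rendition of the same arithmetic, handy for sanity-checking a region configuration (illustrative only):

#include <stdio.h>

/* Mirror of pr_sz/pr_val: region value = base | (log2(size) - 1) << 1 | enable. */
static unsigned int pr_val(unsigned int base, unsigned int size, unsigned int enable)
{
	unsigned int sz = size >> 12;
	unsigned int enc = 11;			/* 11 encodes the 4 KB minimum */

	while (sz >>= 1)
		enc++;

	return base | (enc << 1) | enable;
}

int main(void)
{
	/* e.g. 64 MB of DRAM at 0x00000000, enabled: enc = 25, value = 0x33. */
	printf("0x%08x\n", pr_val(0x00000000, 64 << 20, 1));
	return 0;
}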
...@@ -427,7 +427,7 @@ mohawk_crval: ...@@ -427,7 +427,7 @@ mohawk_crval:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.type __88sv331x_proc_info,#object .type __88sv331x_proc_info,#object
__88sv331x_proc_info: __88sv331x_proc_info:
...@@ -443,7 +443,7 @@ __88sv331x_proc_info: ...@@ -443,7 +443,7 @@ __88sv331x_proc_info:
PMD_BIT4 | \ PMD_BIT4 | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __mohawk_setup initfn __mohawk_setup, __88sv331x_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP
......
...@@ -199,7 +199,7 @@ sa110_crval: ...@@ -199,7 +199,7 @@ sa110_crval:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.type __sa110_proc_info,#object .type __sa110_proc_info,#object
__sa110_proc_info: __sa110_proc_info:
...@@ -213,7 +213,7 @@ __sa110_proc_info: ...@@ -213,7 +213,7 @@ __sa110_proc_info:
.long PMD_TYPE_SECT | \ .long PMD_TYPE_SECT | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __sa110_setup initfn __sa110_setup, __sa110_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_HALF | HWCAP_26BIT | HWCAP_FAST_MULT .long HWCAP_SWP | HWCAP_HALF | HWCAP_26BIT | HWCAP_FAST_MULT
......
...@@ -242,7 +242,7 @@ sa1100_crval: ...@@ -242,7 +242,7 @@ sa1100_crval:
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
.macro sa1100_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req .macro sa1100_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req
.type __\name\()_proc_info,#object .type __\name\()_proc_info,#object
...@@ -257,7 +257,7 @@ __\name\()_proc_info: ...@@ -257,7 +257,7 @@ __\name\()_proc_info:
.long PMD_TYPE_SECT | \ .long PMD_TYPE_SECT | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __sa1100_setup initfn __sa1100_setup, __\name\()_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_HALF | HWCAP_26BIT | HWCAP_FAST_MULT .long HWCAP_SWP | HWCAP_HALF | HWCAP_26BIT | HWCAP_FAST_MULT
......
...@@ -264,7 +264,7 @@ v6_crval: ...@@ -264,7 +264,7 @@ v6_crval:
string cpu_elf_name, "v6" string cpu_elf_name, "v6"
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
/* /*
* Match any ARMv6 processor core. * Match any ARMv6 processor core.
...@@ -287,7 +287,7 @@ __v6_proc_info: ...@@ -287,7 +287,7 @@ __v6_proc_info:
PMD_SECT_XN | \ PMD_SECT_XN | \
PMD_SECT_AP_WRITE | \ PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ PMD_SECT_AP_READ
b __v6_setup initfn __v6_setup, __v6_proc_info
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
/* See also feat_v6_fixup() for HWCAP_TLS */ /* See also feat_v6_fixup() for HWCAP_TLS */
......
...@@ -37,15 +37,18 @@ ...@@ -37,15 +37,18 @@
* It is assumed that: * It is assumed that:
* - we are not using split page tables * - we are not using split page tables
*/ */
ENTRY(cpu_v7_switch_mm) ENTRY(cpu_ca8_switch_mm)
#ifdef CONFIG_MMU #ifdef CONFIG_MMU
mov r2, #0 mov r2, #0
mmid r1, r1 @ get mm->context.id
ALT_SMP(orr r0, r0, #TTB_FLAGS_SMP)
ALT_UP(orr r0, r0, #TTB_FLAGS_UP)
#ifdef CONFIG_ARM_ERRATA_430973 #ifdef CONFIG_ARM_ERRATA_430973
mcr p15, 0, r2, c7, c5, 6 @ flush BTAC/BTB mcr p15, 0, r2, c7, c5, 6 @ flush BTAC/BTB
#endif #endif
#endif
ENTRY(cpu_v7_switch_mm)
#ifdef CONFIG_MMU
mmid r1, r1 @ get mm->context.id
ALT_SMP(orr r0, r0, #TTB_FLAGS_SMP)
ALT_UP(orr r0, r0, #TTB_FLAGS_UP)
#ifdef CONFIG_PID_IN_CONTEXTIDR #ifdef CONFIG_PID_IN_CONTEXTIDR
mrc p15, 0, r2, c13, c0, 1 @ read current context ID mrc p15, 0, r2, c13, c0, 1 @ read current context ID
lsr r2, r2, #8 @ extract the PID lsr r2, r2, #8 @ extract the PID
...@@ -61,6 +64,7 @@ ENTRY(cpu_v7_switch_mm) ...@@ -61,6 +64,7 @@ ENTRY(cpu_v7_switch_mm)
#endif #endif
bx lr bx lr
ENDPROC(cpu_v7_switch_mm) ENDPROC(cpu_v7_switch_mm)
ENDPROC(cpu_ca8_switch_mm)
/* /*
* cpu_v7_set_pte_ext(ptep, pte) * cpu_v7_set_pte_ext(ptep, pte)
......
...@@ -152,6 +152,21 @@ ENTRY(cpu_v7_do_resume) ...@@ -152,6 +152,21 @@ ENTRY(cpu_v7_do_resume)
ENDPROC(cpu_v7_do_resume) ENDPROC(cpu_v7_do_resume)
#endif #endif
/*
* Cortex-A8
*/
globl_equ cpu_ca8_proc_init, cpu_v7_proc_init
globl_equ cpu_ca8_proc_fin, cpu_v7_proc_fin
globl_equ cpu_ca8_reset, cpu_v7_reset
globl_equ cpu_ca8_do_idle, cpu_v7_do_idle
globl_equ cpu_ca8_dcache_clean_area, cpu_v7_dcache_clean_area
globl_equ cpu_ca8_set_pte_ext, cpu_v7_set_pte_ext
globl_equ cpu_ca8_suspend_size, cpu_v7_suspend_size
#ifdef CONFIG_ARM_CPU_SUSPEND
globl_equ cpu_ca8_do_suspend, cpu_v7_do_suspend
globl_equ cpu_ca8_do_resume, cpu_v7_do_resume
#endif
/* /*
* Cortex-A9 processor functions * Cortex-A9 processor functions
*/ */
...@@ -451,7 +466,10 @@ __v7_setup_stack: ...@@ -451,7 +466,10 @@ __v7_setup_stack:
@ define struct processor (see <asm/proc-fns.h> and proc-macros.S) @ define struct processor (see <asm/proc-fns.h> and proc-macros.S)
define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 define_processor_functions v7, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
#ifndef CONFIG_ARM_LPAE
define_processor_functions ca8, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 define_processor_functions ca9mp, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
#endif
#ifdef CONFIG_CPU_PJ4B #ifdef CONFIG_CPU_PJ4B
define_processor_functions pj4b, dabort=v7_early_abort, pabort=v7_pabort, suspend=1 define_processor_functions pj4b, dabort=v7_early_abort, pabort=v7_pabort, suspend=1
#endif #endif
...@@ -462,19 +480,19 @@ __v7_setup_stack: ...@@ -462,19 +480,19 @@ __v7_setup_stack:
string cpu_elf_name, "v7" string cpu_elf_name, "v7"
.align .align
.section ".proc.info.init", #alloc, #execinstr .section ".proc.info.init", #alloc
/* /*
* Standard v7 proc info content * Standard v7 proc info content
*/ */
.macro __v7_proc initfunc, mm_mmuflags = 0, io_mmuflags = 0, hwcaps = 0, proc_fns = v7_processor_functions .macro __v7_proc name, initfunc, mm_mmuflags = 0, io_mmuflags = 0, hwcaps = 0, proc_fns = v7_processor_functions
ALT_SMP(.long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_AP_READ | \ ALT_SMP(.long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_AP_READ | \
PMD_SECT_AF | PMD_FLAGS_SMP | \mm_mmuflags) PMD_SECT_AF | PMD_FLAGS_SMP | \mm_mmuflags)
ALT_UP(.long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_AP_READ | \ ALT_UP(.long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_AP_READ | \
PMD_SECT_AF | PMD_FLAGS_UP | \mm_mmuflags) PMD_SECT_AF | PMD_FLAGS_UP | \mm_mmuflags)
.long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | \ .long PMD_TYPE_SECT | PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ | PMD_SECT_AF | \io_mmuflags PMD_SECT_AP_READ | PMD_SECT_AF | \io_mmuflags
W(b) \initfunc initfn \initfunc, \name
.long cpu_arch_name .long cpu_arch_name
.long cpu_elf_name .long cpu_elf_name
.long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_FAST_MULT | \ .long HWCAP_SWP | HWCAP_HALF | HWCAP_THUMB | HWCAP_FAST_MULT | \
...@@ -494,7 +512,7 @@ __v7_setup_stack: ...@@ -494,7 +512,7 @@ __v7_setup_stack:
__v7_ca5mp_proc_info: __v7_ca5mp_proc_info:
.long 0x410fc050 .long 0x410fc050
.long 0xff0ffff0 .long 0xff0ffff0
__v7_proc __v7_ca5mp_setup __v7_proc __v7_ca5mp_proc_info, __v7_ca5mp_setup
.size __v7_ca5mp_proc_info, . - __v7_ca5mp_proc_info .size __v7_ca5mp_proc_info, . - __v7_ca5mp_proc_info
/* /*
...@@ -504,9 +522,19 @@ __v7_ca5mp_proc_info: ...@@ -504,9 +522,19 @@ __v7_ca5mp_proc_info:
__v7_ca9mp_proc_info: __v7_ca9mp_proc_info:
.long 0x410fc090 .long 0x410fc090
.long 0xff0ffff0 .long 0xff0ffff0
__v7_proc __v7_ca9mp_setup, proc_fns = ca9mp_processor_functions __v7_proc __v7_ca9mp_proc_info, __v7_ca9mp_setup, proc_fns = ca9mp_processor_functions
.size __v7_ca9mp_proc_info, . - __v7_ca9mp_proc_info .size __v7_ca9mp_proc_info, . - __v7_ca9mp_proc_info
/*
* ARM Ltd. Cortex A8 processor.
*/
.type __v7_ca8_proc_info, #object
__v7_ca8_proc_info:
.long 0x410fc080
.long 0xff0ffff0
__v7_proc __v7_ca8_proc_info, __v7_setup, proc_fns = ca8_processor_functions
.size __v7_ca8_proc_info, . - __v7_ca8_proc_info
#endif /* CONFIG_ARM_LPAE */ #endif /* CONFIG_ARM_LPAE */
/* /*
...@@ -517,7 +545,7 @@ __v7_ca9mp_proc_info: ...@@ -517,7 +545,7 @@ __v7_ca9mp_proc_info:
__v7_pj4b_proc_info: __v7_pj4b_proc_info:
.long 0x560f5800 .long 0x560f5800
.long 0xff0fff00 .long 0xff0fff00
__v7_proc __v7_pj4b_setup, proc_fns = pj4b_processor_functions __v7_proc __v7_pj4b_proc_info, __v7_pj4b_setup, proc_fns = pj4b_processor_functions
.size __v7_pj4b_proc_info, . - __v7_pj4b_proc_info .size __v7_pj4b_proc_info, . - __v7_pj4b_proc_info
#endif #endif
...@@ -528,7 +556,7 @@ __v7_pj4b_proc_info: ...@@ -528,7 +556,7 @@ __v7_pj4b_proc_info:
__v7_cr7mp_proc_info: __v7_cr7mp_proc_info:
.long 0x410fc170 .long 0x410fc170
.long 0xff0ffff0 .long 0xff0ffff0
__v7_proc __v7_cr7mp_setup __v7_proc __v7_cr7mp_proc_info, __v7_cr7mp_setup
.size __v7_cr7mp_proc_info, . - __v7_cr7mp_proc_info .size __v7_cr7mp_proc_info, . - __v7_cr7mp_proc_info
/* /*
...@@ -538,7 +566,7 @@ __v7_cr7mp_proc_info: ...@@ -538,7 +566,7 @@ __v7_cr7mp_proc_info:
__v7_ca7mp_proc_info: __v7_ca7mp_proc_info:
.long 0x410fc070 .long 0x410fc070
.long 0xff0ffff0 .long 0xff0ffff0
__v7_proc __v7_ca7mp_setup __v7_proc __v7_ca7mp_proc_info, __v7_ca7mp_setup
.size __v7_ca7mp_proc_info, . - __v7_ca7mp_proc_info .size __v7_ca7mp_proc_info, . - __v7_ca7mp_proc_info
/* /*
...@@ -548,7 +576,7 @@ __v7_ca7mp_proc_info: ...@@ -548,7 +576,7 @@ __v7_ca7mp_proc_info:
__v7_ca12mp_proc_info: __v7_ca12mp_proc_info:
.long 0x410fc0d0 .long 0x410fc0d0
.long 0xff0ffff0 .long 0xff0ffff0
__v7_proc __v7_ca12mp_setup __v7_proc __v7_ca12mp_proc_info, __v7_ca12mp_setup
.size __v7_ca12mp_proc_info, . - __v7_ca12mp_proc_info .size __v7_ca12mp_proc_info, . - __v7_ca12mp_proc_info
/* /*
...@@ -558,7 +586,7 @@ __v7_ca12mp_proc_info: ...@@ -558,7 +586,7 @@ __v7_ca12mp_proc_info:
__v7_ca15mp_proc_info: __v7_ca15mp_proc_info:
.long 0x410fc0f0 .long 0x410fc0f0
.long 0xff0ffff0
-__v7_proc __v7_ca15mp_setup
+__v7_proc __v7_ca15mp_proc_info, __v7_ca15mp_setup
.size __v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info
/*
...@@ -568,7 +596,7 @@ __v7_ca15mp_proc_info:
__v7_b15mp_proc_info:
.long 0x420f00f0
.long 0xff0ffff0
-__v7_proc __v7_b15mp_setup
+__v7_proc __v7_b15mp_proc_info, __v7_b15mp_setup
.size __v7_b15mp_proc_info, . - __v7_b15mp_proc_info
/*
...@@ -578,7 +606,7 @@ __v7_b15mp_proc_info:
__v7_ca17mp_proc_info:
.long 0x410fc0e0
.long 0xff0ffff0
-__v7_proc __v7_ca17mp_setup
+__v7_proc __v7_ca17mp_proc_info, __v7_ca17mp_setup
.size __v7_ca17mp_proc_info, . - __v7_ca17mp_proc_info
/*
...@@ -594,7 +622,7 @@ __krait_proc_info:
* do support them. They also don't indicate support for fused multiply
* instructions even though they actually do support them.
*/
-__v7_proc __v7_setup, hwcaps = HWCAP_IDIV | HWCAP_VFPv4
+__v7_proc __krait_proc_info, __v7_setup, hwcaps = HWCAP_IDIV | HWCAP_VFPv4
.size __krait_proc_info, . - __krait_proc_info
/*
...@@ -604,5 +632,5 @@ __krait_proc_info:
__v7_proc_info:
.long 0x000f0000 @ Required ID value
.long 0x000f0000 @ Mask for ID
-__v7_proc __v7_setup
+__v7_proc __v7_proc_info, __v7_setup
.size __v7_proc_info, . - __v7_proc_info
...@@ -135,7 +135,7 @@ __v7m_setup_stack_top:
string cpu_elf_name "v7m"
string cpu_v7m_name "ARMv7-M"
-.section ".proc.info.init", #alloc, #execinstr
+.section ".proc.info.init", #alloc
/*
* Match any ARMv7-M processor core.
...@@ -146,7 +146,7 @@ __v7m_proc_info:
.long 0x000f0000 @ Mask for ID
.long 0 @ proc_info_list.__cpu_mm_mmu_flags
.long 0 @ proc_info_list.__cpu_io_mmu_flags
-b __v7m_setup @ proc_info_list.__cpu_flush
+initfn __v7m_setup, __v7m_proc_info @ proc_info_list.__cpu_flush
.long cpu_arch_name
.long cpu_elf_name
.long HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT
...
...@@ -499,7 +499,7 @@ xsc3_crval:
.align
-.section ".proc.info.init", #alloc, #execinstr
+.section ".proc.info.init", #alloc
.macro xsc3_proc_info name:req, cpu_val:req, cpu_mask:req
.type __\name\()_proc_info,#object
...@@ -514,7 +514,7 @@ __\name\()_proc_info:
.long PMD_TYPE_SECT | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
-b __xsc3_setup
+initfn __xsc3_setup, __\name\()_proc_info
.long cpu_arch_name
.long cpu_elf_name
.long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP
...
...@@ -612,7 +612,7 @@ xscale_crval:
.align
-.section ".proc.info.init", #alloc, #execinstr
+.section ".proc.info.init", #alloc
.macro xscale_proc_info name:req, cpu_val:req, cpu_mask:req, cpu_name:req, cache
.type __\name\()_proc_info,#object
...@@ -627,7 +627,7 @@ __\name\()_proc_info:
.long PMD_TYPE_SECT | \
PMD_SECT_AP_WRITE | \
PMD_SECT_AP_READ
-b __xscale_setup
+initfn __xscale_setup, __\name\()_proc_info
.long cpu_arch_name
.long cpu_elf_name
.long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP
...
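The proc_info hunks above replace the embedded "b __xxx_setup" branch with an "initfn" macro that also receives the record's own __xxx_proc_info label, and __v7_proc likewise gains the proc_info label as a new first argument. The macro bodies are not part of these hunks, but passing the record's own label suggests the setup routine is now recorded as an offset relative to the record rather than as a branch instruction, so the reference stays valid even when a very large kernel places .proc.info.init beyond branch range of the setup code. A minimal C sketch of that relative-offset idea, purely illustrative and not the kernel's actual definitions:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical record layout: the init routine is stored as a signed
 * offset relative to the record itself, not as an absolute pointer. */
struct proc_info {
	uint32_t cpu_val;
	uint32_t cpu_mask;
	long init_offset;   /* address of setup function minus address of record */
};

static void v7_setup(void)
{
	puts("setup called");
}

static struct proc_info v7_proc_info = { 0x000f0000, 0x000f0000, 0 };

int main(void)
{
	/* Producer side: what an "initfn func, base" style macro would emit,
	 * the distance from the record to the function. */
	v7_proc_info.init_offset =
		(long)((uintptr_t)v7_setup - (uintptr_t)&v7_proc_info);

	/* Consumer side: rebuild the absolute address from the record's own
	 * address plus the stored offset, then call through it. */
	void (*setup)(void) =
		(void (*)(void))((uintptr_t)&v7_proc_info + v7_proc_info.init_offset);
	setup();
	return 0;
}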
...@@ -113,7 +113,7 @@ next:
@ to fault. Emit the appropriate exception gunk to fix things up.
@ ??? For some reason, faults can happen at .Lx2 even with a
@ plain LDR instruction. Weird, but it seems harmless.
-.pushsection .fixup,"ax"
+.pushsection .text.fixup,"ax"
.align 2
.Lfix: ret r9 @ let the user eat segfaults
.popsection
...
hostprogs-y := vdsomunge
obj-vdso := vgettimeofday.o datapage.o
# Build rules
targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.so.raw vdso.lds
obj-vdso := $(addprefix $(obj)/, $(obj-vdso))
ccflags-y := -shared -fPIC -fno-common -fno-builtin -fno-stack-protector
ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 -DDISABLE_BRANCH_PROFILING
ccflags-y += -Wl,--no-undefined $(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
obj-y += vdso.o
extra-y += vdso.lds
CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
CFLAGS_REMOVE_vdso.o = -pg
# Force -O2 to avoid libgcc dependencies
CFLAGS_REMOVE_vgettimeofday.o = -pg -Os
CFLAGS_vgettimeofday.o = -O2
# Disable gcov profiling for VDSO code
GCOV_PROFILE := n
# Force dependency
$(obj)/vdso.o : $(obj)/vdso.so
# Link rule for the .so file
$(obj)/vdso.so.raw: $(src)/vdso.lds $(obj-vdso) FORCE
$(call if_changed,vdsold)
$(obj)/vdso.so.dbg: $(obj)/vdso.so.raw $(obj)/vdsomunge FORCE
$(call if_changed,vdsomunge)
# Strip rule for the .so file
$(obj)/%.so: OBJCOPYFLAGS := -S
$(obj)/%.so: $(obj)/%.so.dbg FORCE
$(call if_changed,objcopy)
# Actual build commands
quiet_cmd_vdsold = VDSO $@
cmd_vdsold = $(CC) $(c_flags) -Wl,-T $(filter %.lds,$^) $(filter %.o,$^) \
$(call cc-ldoption, -Wl$(comma)--build-id) \
-Wl,-Bsymbolic -Wl,-z,max-page-size=4096 \
-Wl,-z,common-page-size=4096 -o $@
quiet_cmd_vdsomunge = MUNGE $@
cmd_vdsomunge = $(objtree)/$(obj)/vdsomunge $< $@
#
# Install the unstripped copy of vdso.so.dbg. If our toolchain
# supports build-id, install .build-id links as well.
#
# Cribbed from arch/x86/vdso/Makefile.
#
quiet_cmd_vdso_install = INSTALL $<
define cmd_vdso_install
cp $< "$(MODLIB)/vdso/vdso.so"; \
if readelf -n $< | grep -q 'Build ID'; then \
buildid=`readelf -n $< |grep 'Build ID' |sed -e 's/^.*Build ID: \(.*\)$$/\1/'`; \
first=`echo $$buildid | cut -b-2`; \
last=`echo $$buildid | cut -b3-`; \
mkdir -p "$(MODLIB)/vdso/.build-id/$$first"; \
ln -sf "../../vdso.so" "$(MODLIB)/vdso/.build-id/$$first/$$last.debug"; \
fi
endef
$(MODLIB)/vdso: FORCE
@mkdir -p $(MODLIB)/vdso
PHONY += vdso_install
vdso_install: $(obj)/vdso.so.dbg $(MODLIB)/vdso FORCE
$(call cmd,vdso_install)
#include <linux/linkage.h>
#include <asm/asm-offsets.h>
.align 2
.L_vdso_data_ptr:
.long _start - . - VDSO_DATA_SIZE
ENTRY(__get_datapage)
.fnstart
adr r0, .L_vdso_data_ptr
ldr r1, [r0]
add r0, r0, r1
bx lr
.fnend
ENDPROC(__get_datapage)
/*
* Adapted from arm64 version.
*
* Copyright (C) 2012 ARM Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
* Author: Will Deacon <will.deacon@arm.com>
*/
#include <linux/init.h>
#include <linux/linkage.h>
#include <linux/const.h>
#include <asm/page.h>
__PAGE_ALIGNED_DATA
.globl vdso_start, vdso_end
.balign PAGE_SIZE
vdso_start:
.incbin "arch/arm/vdso/vdso.so"
.balign PAGE_SIZE
vdso_end:
.previous
/*
* Adapted from arm64 version.
*
* GNU linker script for the VDSO library.
*
* Copyright (C) 2012 ARM Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
* Author: Will Deacon <will.deacon@arm.com>
* Heavily based on the vDSO linker scripts for other archs.
*/
#include <linux/const.h>
#include <asm/page.h>
#include <asm/vdso.h>
OUTPUT_FORMAT("elf32-littlearm", "elf32-bigarm", "elf32-littlearm")
OUTPUT_ARCH(arm)
SECTIONS
{
PROVIDE(_start = .);
. = SIZEOF_HEADERS;
.hash : { *(.hash) } :text
.gnu.hash : { *(.gnu.hash) }
.dynsym : { *(.dynsym) }
.dynstr : { *(.dynstr) }
.gnu.version : { *(.gnu.version) }
.gnu.version_d : { *(.gnu.version_d) }
.gnu.version_r : { *(.gnu.version_r) }
.note : { *(.note.*) } :text :note
.eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
.eh_frame : { KEEP (*(.eh_frame)) } :text
.dynamic : { *(.dynamic) } :text :dynamic
.rodata : { *(.rodata*) } :text
.text : { *(.text*) } :text =0xe7f001f2
.got : { *(.got) }
.rel.plt : { *(.rel.plt) }
/DISCARD/ : {
*(.note.GNU-stack)
*(.data .data.* .gnu.linkonce.d.* .sdata*)
*(.bss .sbss .dynbss .dynsbss)
}
}
/*
* We must supply the ELF program headers explicitly to get just one
* PT_LOAD segment, and set the flags explicitly to make segments read-only.
*/
PHDRS
{
text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
note PT_NOTE FLAGS(4); /* PF_R */
eh_frame_hdr PT_GNU_EH_FRAME;
}
VERSION
{
LINUX_2.6 {
global:
__vdso_clock_gettime;
__vdso_gettimeofday;
local: *;
};
}
/*
* Copyright 2015 Mentor Graphics Corporation.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; version 2 of the
* License.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*
*
* vdsomunge - Host program which produces a shared object
* architecturally specified to be usable by both soft- and hard-float
* programs.
*
* The Procedure Call Standard for the ARM Architecture (ARM IHI
* 0042E) says:
*
* 6.4.1 VFP and Base Standard Compatibility
*
* Code compiled for the VFP calling standard is compatible with
* the base standard (and vice-versa) if no floating-point or
* containerized vector arguments or results are used.
*
* And ELF for the ARM Architecture (ARM IHI 0044E) (Table 4-2) says:
*
* If both EF_ARM_ABI_FLOAT_XXXX bits are clear, conformance to the
* base procedure-call standard is implied.
*
* The VDSO is built with -msoft-float, as with the rest of the ARM
* kernel, and uses no floating point arguments or results. The build
* process will produce a shared object that may or may not have the
* EF_ARM_ABI_FLOAT_SOFT flag set (it seems to depend on the binutils
* version; binutils starting with 2.24 appears to set it). The
* EF_ARM_ABI_FLOAT_HARD flag should definitely not be set, and this
* program will error out if it is.
*
* If the soft-float flag is set, this program clears it. That's all
* it does.
*/
#define _GNU_SOURCE
#include <byteswap.h>
#include <elf.h>
#include <errno.h>
#include <error.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
#define HOST_ORDER ELFDATA2LSB
#elif __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define HOST_ORDER ELFDATA2MSB
#endif
/* Some of the ELF constants we'd like to use were added to <elf.h>
* relatively recently.
*/
#ifndef EF_ARM_EABI_VER5
#define EF_ARM_EABI_VER5 0x05000000
#endif
#ifndef EF_ARM_ABI_FLOAT_SOFT
#define EF_ARM_ABI_FLOAT_SOFT 0x200
#endif
#ifndef EF_ARM_ABI_FLOAT_HARD
#define EF_ARM_ABI_FLOAT_HARD 0x400
#endif
static const char *outfile;
static void cleanup(void)
{
if (error_message_count > 0 && outfile != NULL)
unlink(outfile);
}
static Elf32_Word read_elf_word(Elf32_Word word, bool swap)
{
return swap ? bswap_32(word) : word;
}
static Elf32_Half read_elf_half(Elf32_Half half, bool swap)
{
return swap ? bswap_16(half) : half;
}
static void write_elf_word(Elf32_Word val, Elf32_Word *dst, bool swap)
{
*dst = swap ? bswap_32(val) : val;
}
int main(int argc, char **argv)
{
const Elf32_Ehdr *inhdr;
bool clear_soft_float;
const char *infile;
Elf32_Word e_flags;
const void *inbuf;
struct stat stat;
void *outbuf;
bool swap;
int outfd;
int infd;
atexit(cleanup);
if (argc != 3)
error(EXIT_FAILURE, 0, "Usage: %s [infile] [outfile]", argv[0]);
infile = argv[1];
outfile = argv[2];
infd = open(infile, O_RDONLY);
if (infd < 0)
error(EXIT_FAILURE, errno, "Cannot open %s", infile);
if (fstat(infd, &stat) != 0)
error(EXIT_FAILURE, errno, "Failed stat for %s", infile);
inbuf = mmap(NULL, stat.st_size, PROT_READ, MAP_PRIVATE, infd, 0);
if (inbuf == MAP_FAILED)
error(EXIT_FAILURE, errno, "Failed to map %s", infile);
close(infd);
inhdr = inbuf;
if (memcmp(&inhdr->e_ident, ELFMAG, SELFMAG) != 0)
error(EXIT_FAILURE, 0, "Not an ELF file");
if (inhdr->e_ident[EI_CLASS] != ELFCLASS32)
error(EXIT_FAILURE, 0, "Unsupported ELF class");
swap = inhdr->e_ident[EI_DATA] != HOST_ORDER;
if (read_elf_half(inhdr->e_type, swap) != ET_DYN)
error(EXIT_FAILURE, 0, "Not a shared object");
if (read_elf_half(inhdr->e_machine, swap) != EM_ARM) {
error(EXIT_FAILURE, 0, "Unsupported architecture %#x",
inhdr->e_machine);
}
e_flags = read_elf_word(inhdr->e_flags, swap);
if (EF_ARM_EABI_VERSION(e_flags) != EF_ARM_EABI_VER5) {
error(EXIT_FAILURE, 0, "Unsupported EABI version %#x",
EF_ARM_EABI_VERSION(e_flags));
}
if (e_flags & EF_ARM_ABI_FLOAT_HARD)
error(EXIT_FAILURE, 0,
"Unexpected hard-float flag set in e_flags");
clear_soft_float = !!(e_flags & EF_ARM_ABI_FLOAT_SOFT);
outfd = open(outfile, O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
if (outfd < 0)
error(EXIT_FAILURE, errno, "Cannot open %s", outfile);
if (ftruncate(outfd, stat.st_size) != 0)
error(EXIT_FAILURE, errno, "Cannot truncate %s", outfile);
outbuf = mmap(NULL, stat.st_size, PROT_READ | PROT_WRITE, MAP_SHARED,
outfd, 0);
if (outbuf == MAP_FAILED)
error(EXIT_FAILURE, errno, "Failed to map %s", outfile);
close(outfd);
memcpy(outbuf, inbuf, stat.st_size);
if (clear_soft_float) {
Elf32_Ehdr *outhdr;
outhdr = outbuf;
e_flags &= ~EF_ARM_ABI_FLOAT_SOFT;
write_elf_word(e_flags, &outhdr->e_flags, swap);
}
if (msync(outbuf, stat.st_size, MS_SYNC) != 0)
error(EXIT_FAILURE, errno, "Failed to sync %s", outfile);
return EXIT_SUCCESS;
}
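Outside the kernel build, the effect of vdsomunge on its output can be double-checked by reading back e_flags from the produced shared object, for example with readelf -h, or with a small standalone helper along these lines (a hypothetical checker, assuming the host and the file use the same byte order):

/* check_eflags.c - print the ARM EABI float flags of a 32-bit ELF file. */
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef EF_ARM_ABI_FLOAT_SOFT
#define EF_ARM_ABI_FLOAT_SOFT 0x200
#endif
#ifndef EF_ARM_ABI_FLOAT_HARD
#define EF_ARM_ABI_FLOAT_HARD 0x400
#endif

int main(int argc, char **argv)
{
	Elf32_Ehdr ehdr;
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "Usage: %s <elf-file>\n", argv[0]);
		return EXIT_FAILURE;
	}
	f = fopen(argv[1], "rb");
	if (!f || fread(&ehdr, sizeof(ehdr), 1, f) != 1) {
		perror(argv[1]);
		return EXIT_FAILURE;
	}
	fclose(f);
	/* Both flags should read back as 0 after munging. */
	printf("e_flags = %#lx, soft=%d hard=%d\n",
	       (unsigned long)ehdr.e_flags,
	       !!(ehdr.e_flags & EF_ARM_ABI_FLOAT_SOFT),
	       !!(ehdr.e_flags & EF_ARM_ABI_FLOAT_HARD));
	return EXIT_SUCCESS;
}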
/*
* Copyright 2015 Mentor Graphics Corporation.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; version 2 of the
* License.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <linux/compiler.h>
#include <linux/hrtimer.h>
#include <linux/time.h>
#include <asm/arch_timer.h>
#include <asm/barrier.h>
#include <asm/bug.h>
#include <asm/page.h>
#include <asm/unistd.h>
#include <asm/vdso_datapage.h>
#ifndef CONFIG_AEABI
#error This code depends on AEABI system call conventions
#endif
extern struct vdso_data *__get_datapage(void);
static notrace u32 __vdso_read_begin(const struct vdso_data *vdata)
{
u32 seq;
repeat:
seq = ACCESS_ONCE(vdata->seq_count);
if (seq & 1) {
cpu_relax();
goto repeat;
}
return seq;
}
static notrace u32 vdso_read_begin(const struct vdso_data *vdata)
{
u32 seq;
seq = __vdso_read_begin(vdata);
smp_rmb(); /* Pairs with smp_wmb in vdso_write_end */
return seq;
}
static notrace int vdso_read_retry(const struct vdso_data *vdata, u32 start)
{
smp_rmb(); /* Pairs with smp_wmb in vdso_write_begin */
return vdata->seq_count != start;
}
static notrace long clock_gettime_fallback(clockid_t _clkid,
struct timespec *_ts)
{
register struct timespec *ts asm("r1") = _ts;
register clockid_t clkid asm("r0") = _clkid;
register long ret asm ("r0");
register long nr asm("r7") = __NR_clock_gettime;
asm volatile(
" swi #0\n"
: "=r" (ret)
: "r" (clkid), "r" (ts), "r" (nr)
: "memory");
return ret;
}
static notrace int do_realtime_coarse(struct timespec *ts,
struct vdso_data *vdata)
{
u32 seq;
do {
seq = vdso_read_begin(vdata);
ts->tv_sec = vdata->xtime_coarse_sec;
ts->tv_nsec = vdata->xtime_coarse_nsec;
} while (vdso_read_retry(vdata, seq));
return 0;
}
static notrace int do_monotonic_coarse(struct timespec *ts,
struct vdso_data *vdata)
{
struct timespec tomono;
u32 seq;
do {
seq = vdso_read_begin(vdata);
ts->tv_sec = vdata->xtime_coarse_sec;
ts->tv_nsec = vdata->xtime_coarse_nsec;
tomono.tv_sec = vdata->wtm_clock_sec;
tomono.tv_nsec = vdata->wtm_clock_nsec;
} while (vdso_read_retry(vdata, seq));
ts->tv_sec += tomono.tv_sec;
timespec_add_ns(ts, tomono.tv_nsec);
return 0;
}
#ifdef CONFIG_ARM_ARCH_TIMER
static notrace u64 get_ns(struct vdso_data *vdata)
{
u64 cycle_delta;
u64 cycle_now;
u64 nsec;
cycle_now = arch_counter_get_cntvct();
cycle_delta = (cycle_now - vdata->cs_cycle_last) & vdata->cs_mask;
nsec = (cycle_delta * vdata->cs_mult) + vdata->xtime_clock_snsec;
nsec >>= vdata->cs_shift;
return nsec;
}
static notrace int do_realtime(struct timespec *ts, struct vdso_data *vdata)
{
u64 nsecs;
u32 seq;
do {
seq = vdso_read_begin(vdata);
if (!vdata->tk_is_cntvct)
return -1;
ts->tv_sec = vdata->xtime_clock_sec;
nsecs = get_ns(vdata);
} while (vdso_read_retry(vdata, seq));
ts->tv_nsec = 0;
timespec_add_ns(ts, nsecs);
return 0;
}
static notrace int do_monotonic(struct timespec *ts, struct vdso_data *vdata)
{
struct timespec tomono;
u64 nsecs;
u32 seq;
do {
seq = vdso_read_begin(vdata);
if (!vdata->tk_is_cntvct)
return -1;
ts->tv_sec = vdata->xtime_clock_sec;
nsecs = get_ns(vdata);
tomono.tv_sec = vdata->wtm_clock_sec;
tomono.tv_nsec = vdata->wtm_clock_nsec;
} while (vdso_read_retry(vdata, seq));
ts->tv_sec += tomono.tv_sec;
ts->tv_nsec = 0;
timespec_add_ns(ts, nsecs + tomono.tv_nsec);
return 0;
}
#else /* CONFIG_ARM_ARCH_TIMER */
static notrace int do_realtime(struct timespec *ts, struct vdso_data *vdata)
{
return -1;
}
static notrace int do_monotonic(struct timespec *ts, struct vdso_data *vdata)
{
return -1;
}
#endif /* CONFIG_ARM_ARCH_TIMER */
notrace int __vdso_clock_gettime(clockid_t clkid, struct timespec *ts)
{
struct vdso_data *vdata;
int ret = -1;
vdata = __get_datapage();
switch (clkid) {
case CLOCK_REALTIME_COARSE:
ret = do_realtime_coarse(ts, vdata);
break;
case CLOCK_MONOTONIC_COARSE:
ret = do_monotonic_coarse(ts, vdata);
break;
case CLOCK_REALTIME:
ret = do_realtime(ts, vdata);
break;
case CLOCK_MONOTONIC:
ret = do_monotonic(ts, vdata);
break;
default:
break;
}
if (ret)
ret = clock_gettime_fallback(clkid, ts);
return ret;
}
static notrace long gettimeofday_fallback(struct timeval *_tv,
struct timezone *_tz)
{
register struct timezone *tz asm("r1") = _tz;
register struct timeval *tv asm("r0") = _tv;
register long ret asm ("r0");
register long nr asm("r7") = __NR_gettimeofday;
asm volatile(
" swi #0\n"
: "=r" (ret)
: "r" (tv), "r" (tz), "r" (nr)
: "memory");
return ret;
}
notrace int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz)
{
struct timespec ts;
struct vdso_data *vdata;
int ret;
vdata = __get_datapage();
ret = do_realtime(&ts, vdata);
if (ret)
return gettimeofday_fallback(tv, tz);
if (tv) {
tv->tv_sec = ts.tv_sec;
tv->tv_usec = ts.tv_nsec / 1000;
}
if (tz) {
tz->tz_minuteswest = vdata->tz_minuteswest;
tz->tz_dsttime = vdata->tz_dsttime;
}
return ret;
}
/* Avoid unresolved references emitted by GCC */
void __aeabi_unwind_cpp_pr0(void)
{
}
void __aeabi_unwind_cpp_pr1(void)
{
}
void __aeabi_unwind_cpp_pr2(void)
{
}
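Userspace normally does not call __vdso_clock_gettime directly; the C library locates the vDSO through the AT_SYSINFO_EHDR auxiliary vector entry and resolves the symbol from its dynamic symbol table, falling back to the swi path otherwise. A rough way to see whether the fast path is in use is to count how many clock_gettime() calls complete per second (illustrative only; the threshold depends on the hardware and the C library):

/* Rough benchmark: clock_gettime() is markedly cheaper when it is served
 * by the vDSO than when every call falls back to a system call. */
#include <stdio.h>
#include <time.h>

int main(void)
{
	struct timespec start, now;
	long calls = 0;

	clock_gettime(CLOCK_MONOTONIC, &start);
	do {
		clock_gettime(CLOCK_MONOTONIC, &now);
		calls++;
	} while (now.tv_sec - start.tv_sec < 1);

	/* Tens of millions of calls per second usually indicates the vDSO
	 * path; orders of magnitude fewer suggests syscall fallback. */
	printf("%ld calls/second\n", calls);
	return 0;
}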
...@@ -25,49 +25,50 @@
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/io.h>
+#include <linux/of.h>
#include <soc/tegra/ahb.h>
#define DRV_NAME "tegra-ahb"
-#define AHB_ARBITRATION_DISABLE 0x00
+#define AHB_ARBITRATION_DISABLE 0x04
-#define AHB_ARBITRATION_PRIORITY_CTRL 0x04
+#define AHB_ARBITRATION_PRIORITY_CTRL 0x08
#define AHB_PRIORITY_WEIGHT(x) (((x) & 0x7) << 29)
#define PRIORITY_SELECT_USB BIT(6)
#define PRIORITY_SELECT_USB2 BIT(18)
#define PRIORITY_SELECT_USB3 BIT(17)
-#define AHB_GIZMO_AHB_MEM 0x0c
+#define AHB_GIZMO_AHB_MEM 0x10
#define ENB_FAST_REARBITRATE BIT(2)
#define DONT_SPLIT_AHB_WR BIT(7)
-#define AHB_GIZMO_APB_DMA 0x10
+#define AHB_GIZMO_APB_DMA 0x14
-#define AHB_GIZMO_IDE 0x18
+#define AHB_GIZMO_IDE 0x1c
-#define AHB_GIZMO_USB 0x1c
+#define AHB_GIZMO_USB 0x20
-#define AHB_GIZMO_AHB_XBAR_BRIDGE 0x20
+#define AHB_GIZMO_AHB_XBAR_BRIDGE 0x24
-#define AHB_GIZMO_CPU_AHB_BRIDGE 0x24
+#define AHB_GIZMO_CPU_AHB_BRIDGE 0x28
-#define AHB_GIZMO_COP_AHB_BRIDGE 0x28
+#define AHB_GIZMO_COP_AHB_BRIDGE 0x2c
-#define AHB_GIZMO_XBAR_APB_CTLR 0x2c
+#define AHB_GIZMO_XBAR_APB_CTLR 0x30
-#define AHB_GIZMO_VCP_AHB_BRIDGE 0x30
+#define AHB_GIZMO_VCP_AHB_BRIDGE 0x34
-#define AHB_GIZMO_NAND 0x3c
+#define AHB_GIZMO_NAND 0x40
-#define AHB_GIZMO_SDMMC4 0x44
+#define AHB_GIZMO_SDMMC4 0x48
-#define AHB_GIZMO_XIO 0x48
+#define AHB_GIZMO_XIO 0x4c
-#define AHB_GIZMO_BSEV 0x60
+#define AHB_GIZMO_BSEV 0x64
-#define AHB_GIZMO_BSEA 0x70
+#define AHB_GIZMO_BSEA 0x74
-#define AHB_GIZMO_NOR 0x74
+#define AHB_GIZMO_NOR 0x78
-#define AHB_GIZMO_USB2 0x78
+#define AHB_GIZMO_USB2 0x7c
-#define AHB_GIZMO_USB3 0x7c
+#define AHB_GIZMO_USB3 0x80
#define IMMEDIATE BIT(18)
-#define AHB_GIZMO_SDMMC1 0x80
+#define AHB_GIZMO_SDMMC1 0x84
-#define AHB_GIZMO_SDMMC2 0x84
+#define AHB_GIZMO_SDMMC2 0x88
-#define AHB_GIZMO_SDMMC3 0x88
+#define AHB_GIZMO_SDMMC3 0x8c
-#define AHB_MEM_PREFETCH_CFG_X 0xd8
+#define AHB_MEM_PREFETCH_CFG_X 0xdc
-#define AHB_ARBITRATION_XBAR_CTRL 0xdc
+#define AHB_ARBITRATION_XBAR_CTRL 0xe0
-#define AHB_MEM_PREFETCH_CFG3 0xe0
+#define AHB_MEM_PREFETCH_CFG3 0xe4
-#define AHB_MEM_PREFETCH_CFG4 0xe4
+#define AHB_MEM_PREFETCH_CFG4 0xe8
-#define AHB_MEM_PREFETCH_CFG1 0xec
+#define AHB_MEM_PREFETCH_CFG1 0xf0
-#define AHB_MEM_PREFETCH_CFG2 0xf0
+#define AHB_MEM_PREFETCH_CFG2 0xf4
#define PREFETCH_ENB BIT(31)
#define MST_ID(x) (((x) & 0x1f) << 26)
#define AHBDMA_MST_ID MST_ID(5)
...@@ -77,10 +78,20 @@
#define ADDR_BNDRY(x) (((x) & 0xf) << 21)
#define INACTIVITY_TIMEOUT(x) (((x) & 0xffff) << 0)
-#define AHB_ARBITRATION_AHB_MEM_WRQUE_MST_ID 0xf8
+#define AHB_ARBITRATION_AHB_MEM_WRQUE_MST_ID 0xfc
#define AHB_ARBITRATION_XBAR_CTRL_SMMU_INIT_DONE BIT(17)
+/*
+ * INCORRECT_BASE_ADDR_LOW_BYTE: Legacy kernel DT files for Tegra SoCs
+ * prior to Tegra124 generally use a physical base address ending in
+ * 0x4 for the AHB IP block. According to the TRM, the low byte
+ * should be 0x0. During device probing, this macro is used to detect
+ * whether the passed-in physical address is incorrect, and if so, to
+ * correct it.
+ */
+#define INCORRECT_BASE_ADDR_LOW_BYTE 0x4
static struct platform_driver tegra_ahb_driver;
static const u32 tegra_ahb_gizmo[] = {
...@@ -257,6 +268,15 @@ static int tegra_ahb_probe(struct platform_device *pdev)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+/* Correct the IP block base address if necessary */
+if (res &&
+(res->start & INCORRECT_BASE_ADDR_LOW_BYTE) ==
+INCORRECT_BASE_ADDR_LOW_BYTE) {
+dev_warn(&pdev->dev, "incorrect AHB base address in DT data - enabling workaround\n");
+res->start -= INCORRECT_BASE_ADDR_LOW_BYTE;
+}
ahb->regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(ahb->regs))
return PTR_ERR(ahb->regs);
...
...@@ -405,7 +405,7 @@
#define TEXT_TEXT \
ALIGN_FUNCTION(); \
*(.text.hot) \
-*(.text) \
+*(.text .text.fixup) \
*(.ref.text) \
MEM_KEEP(init.text) \
MEM_KEEP(exit.text) \
...