Commit 05ad391d authored by Linus Torvalds

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64

Pull ARM64 update from Catalin Marinas:
 "Main features:
   - Ticket-based spinlock implementation and lockless lockref support
   - Big endian support
   - CPU hotplug support, currently for PSCI (Power State Coordination
     Interface) capable firmware
   - Virtual address space extended to 42-bit in the 64K page
     configuration (maximum VA space with 2 levels of page tables)
   - Compat (AArch32) kuser helpers updated to ARMv8 (make use of
     load-acquire/store-release instructions)
   - Code cleanup, defconfig update and minor fixes"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/linux-aarch64: (43 commits)
  ARM64: /proc/interrupts: display IPIs of online CPUs only
  arm64: locks: Remove CONFIG_GENERIC_LOCKBREAK
  arm64: KVM: vgic: byteswap GICv2 access on world switch if BE
  arm64: KVM: initialize HYP mode following the kernel endianness
  arm64: compat: Clear the IT state independent of the 32-bit ARM or Thumb-2 mode
  arm64: Use 42-bit address space with 64K pages
  arm64: module: ensure instruction is little-endian before manipulation
  arm64: defconfig: Enable CONFIG_PREEMPT by default
  arm64: fix access to preempt_count from assembly code
  arm64: move enabling of GIC before CPUs are set online
  arm64: use generic RW_DATA_SECTION macro in linker script
  arm64: Slightly improve the warning on CPU0 enable-method
  ARM64: simplify cpu_read_bootcpu_ops using OF/DT helper
  ARM64: DT: define ARM64 specific arch_match_cpu_phys_id
  arm64: allow ioremap_cache() to use existing RAM mappings
  arm64: update 32-bit kuser helpers to ARMv8
  arm64: perf: fix event number mask
  arm64: kconfig: allow CPU_BIG_ENDIAN to be selected
  arm64: Fix the endianness of arch_spinlock_t
  arm64: big-endian: write CPU holding pen address as LE
  ...
parents 8b5baa46 67317c26
...@@ -115,9 +115,10 @@ Before jumping into the kernel, the following conditions must be met:
External caches (if present) must be configured and disabled.
- Architected timers
CNTFRQ must be programmed with the timer frequency and CNTVOFF must
be programmed with a consistent value on all CPUs. If entering the
kernel at EL1, CNTHCTL_EL2 must have EL1PCTEN (bit 0) set where
available.
- Coherency
All CPUs to be booted by the kernel must be part of the same coherency
...@@ -130,30 +131,46 @@ Before jumping into the kernel, the following conditions must be met:
the kernel image will be entered must be initialised by software at a
higher exception level to prevent execution in an UNKNOWN state.
The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs. All CPUs must
enter the kernel in the same exception level.
The boot loader is expected to enter the kernel on each CPU in the
following manner:
- The primary CPU must jump directly to the first instruction of the
kernel image. The device tree blob passed by this CPU must contain
an 'enable-method' property for each cpu node. The supported
enable-methods are described below.
1. An 'enable-method' property. Currently, the only supported value
for this field is the string "spin-table".
2. A 'cpu-release-addr' property identifying a 64-bit,
zero-initialised memory location.
It is expected that the bootloader will generate these device tree
properties and insert them into the blob prior to kernel entry.
- CPUs with a "spin-table" enable-method must have a 'cpu-release-addr'
property in their cpu node. This property identifies a
naturally-aligned 64-bit zero-initialised memory location.
These CPUs should spin outside of the kernel in a reserved area of
memory (communicated to the kernel by a /memreserve/ region in the
device tree) polling their cpu-release-addr location, which must be
contained in the reserved region. A wfe instruction may be inserted
to reduce the overhead of the busy-loop and a sev will be issued by
the primary CPU. When a read of the location pointed to by the
cpu-release-addr returns a non-zero value, the CPU must jump to this
value. The value will be written as a single 64-bit little-endian
value, so CPUs must convert the read value to their native endianness
before jumping to it (a rough C sketch of this loop follows this
section).
- CPUs with a "psci" enable method should remain outside of
the kernel (i.e. outside of the regions of memory described to the
kernel in the memory node, or in a reserved area of memory described
to the kernel by a /memreserve/ region in the device tree). The
kernel will issue CPU_ON calls as described in ARM document number ARM
DEN 0022A ("Power State Coordination Interface System Software on ARM
processors") to bring CPUs into the kernel.
The device tree should contain a 'psci' node, as described in
Documentation/devicetree/bindings/arm/psci.txt.
- Secondary CPU general-purpose register settings
x0 = 0 (reserved for future use)
......
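As an editorial aside, the spin-table protocol above can be sketched in C roughly as follows. This is illustrative only (a real boot loader would implement it in assembly); the function names are made up, and only the little-endian-to-native conversion and the wfe/sev pairing come from the text above.

    #include <stdint.h>

    /* Read the 64-bit little-endian release value and convert it to the
     * CPU's native endianness, as required by the protocol above. */
    static uint64_t read_release_addr(const volatile uint64_t *cpu_release_addr)
    {
        uint64_t v = *cpu_release_addr;
    #ifdef __AARCH64EB__
        v = __builtin_bswap64(v);   /* big-endian CPU: byte-swap the LE value */
    #endif
        return v;
    }

    /* Spin in the /memreserve/ region until the kernel publishes our entry
     * point, then jump to it. The wfe is the optional low-power wait; the
     * primary CPU issues a sev after writing the value. */
    static void secondary_spin(volatile uint64_t *cpu_release_addr)
    {
        uint64_t entry;

        while ((entry = read_release_addr(cpu_release_addr)) == 0)
            __asm__ volatile("wfe");

        ((void (*)(void))(uintptr_t)entry)();
    }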
...@@ -21,7 +21,7 @@ The swapper_pgd_dir address is written to TTBR1 and never written to
TTBR0.
AArch64 Linux memory layout with 4KB pages:
Start End Size Use
-----------------------------------------------------------------------
...@@ -39,13 +39,38 @@ ffffffbffbc00000 ffffffbffbdfffff 2MB earlyprintk device
ffffffbffbe00000 ffffffbffbe0ffff 64KB PCI I/O space
ffffffbffbe10000 ffffffbcffffffff ~2MB [guard]
ffffffbffc000000 ffffffbfffffffff 64MB modules
ffffffc000000000 ffffffffffffffff 256GB kernel logical memory map
AArch64 Linux memory layout with 64KB pages:
Start End Size Use
-----------------------------------------------------------------------
0000000000000000 000003ffffffffff 4TB user
fffffc0000000000 fffffdfbfffeffff ~2TB vmalloc
fffffdfbffff0000 fffffdfbffffffff 64KB [guard page]
fffffdfc00000000 fffffdfdffffffff 8GB vmemmap
fffffdfe00000000 fffffdfffbbfffff ~8GB [guard, future vmemmap]
fffffdfffbc00000 fffffdfffbdfffff 2MB earlyprintk device
fffffdfffbe00000 fffffdfffbe0ffff 64KB PCI I/O space
fffffdfffbe10000 fffffdfffbffffff ~2MB [guard]
fffffdfffc000000 fffffdffffffffff 64MB modules
fffffe0000000000 ffffffffffffffff 2TB kernel logical memory map
Translation table lookup with 4KB pages:
+--------+--------+--------+--------+--------+--------+--------+--------+
......
config ARM64
def_bool y
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select ARCH_USE_CMPXCHG_LOCKREF
select ARCH_WANT_OPTIONAL_GPIOLIB
select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
select ARCH_WANT_FRAME_POINTERS
...@@ -61,10 +62,6 @@ config LOCKDEP_SUPPORT
config TRACE_IRQFLAGS_SUPPORT
def_bool y
config GENERIC_LOCKBREAK
def_bool y
depends on SMP && PREEMPT
config RWSEM_GENERIC_SPINLOCK
def_bool y
...@@ -138,6 +135,11 @@ config ARM64_64K_PAGES
look-up. AArch32 emulation is not available when this feature
is enabled.
config CPU_BIG_ENDIAN
bool "Build big-endian kernel"
help
Say Y if you plan on running a kernel in big-endian mode.
config SMP
bool "Symmetric Multi-Processing"
select USE_GENERIC_SMP_HELPERS
...@@ -160,6 +162,13 @@ config NR_CPUS
default "8" if ARCH_XGENE
default "4"
config HOTPLUG_CPU
bool "Support for hot-pluggable CPUs"
depends on SMP
help
Say Y here to experiment with turning CPUs off and on. CPUs
can be controlled through /sys/devices/system/cpu.
source kernel/Kconfig.preempt
config HZ
......
...@@ -20,9 +20,15 @@ LIBGCC := $(shell $(CC) $(KBUILD_CFLAGS) -print-libgcc-file-name)
KBUILD_DEFCONFIG := defconfig
KBUILD_CFLAGS += -mgeneral-regs-only
ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
KBUILD_CPPFLAGS += -mbig-endian
AS += -EB
LD += -EB
else
KBUILD_CPPFLAGS += -mlittle-endian
AS += -EL
LD += -EL
endif
comma = ,
......
...@@ -26,7 +26,7 @@ CONFIG_MODULE_UNLOAD=y
CONFIG_ARCH_VEXPRESS=y
CONFIG_ARCH_XGENE=y
CONFIG_SMP=y
CONFIG_PREEMPT=y
CONFIG_CMDLINE="console=ttyAMA0"
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_COMPAT=y
......
...@@ -115,3 +115,34 @@ lr .req x30 // link register
.align 7
b \label
.endm
/*
* Select code when configured for BE.
*/
#ifdef CONFIG_CPU_BIG_ENDIAN
#define CPU_BE(code...) code
#else
#define CPU_BE(code...)
#endif
/*
* Select code when configured for LE.
*/
#ifdef CONFIG_CPU_BIG_ENDIAN
#define CPU_LE(code...)
#else
#define CPU_LE(code...) code
#endif
/*
* Define a macro that constructs a 64-bit value by concatenating two
* 32-bit registers. Note that on big endian systems the order of the
* registers is swapped.
*/
#ifndef CONFIG_CPU_BIG_ENDIAN
.macro regs_to_64, rd, lbits, hbits
#else
.macro regs_to_64, rd, hbits, lbits
#endif
orr \rd, \lbits, \hbits, lsl #32
.endm
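The regs_to_64 macro above is easier to see in C; the sketch below is purely illustrative (the macro itself operates on registers at assembly time) and the parameter names are invented:

    #include <stdint.h>

    /* Combine the two 32-bit halves of a register pair into a 64-bit value.
     * A 32-bit caller passes the pair in a fixed register order, but which
     * half is the low word depends on the endianness the kernel is built
     * for, hence the swapped macro arguments on big-endian. */
    static inline uint64_t regs_to_64_c(uint32_t first, uint32_t second)
    {
    #ifdef __AARCH64EB__
        uint32_t hi = first, lo = second;   /* big-endian: high word first */
    #else
        uint32_t lo = first, hi = second;   /* little-endian: low word first */
    #endif
        return ((uint64_t)hi << 32) | lo;   /* same effect as: orr rd, lo, hi, lsl #32 */
    }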
...@@ -173,4 +173,6 @@ static inline unsigned long __cmpxchg_mb(volatile void *ptr, unsigned long old,
#define cmpxchg64(ptr,o,n) cmpxchg((ptr),(o),(n))
#define cmpxchg64_local(ptr,o,n) cmpxchg_local((ptr),(o),(n))
#define cmpxchg64_relaxed(ptr,o,n) cmpxchg_local((ptr),(o),(n))
#endif /* __ASM_CMPXCHG_H */
...@@ -26,7 +26,11 @@
#include <linux/ptrace.h>
#define COMPAT_USER_HZ 100
#ifdef __AARCH64EB__
#define COMPAT_UTS_MACHINE "armv8b\0\0"
#else
#define COMPAT_UTS_MACHINE "armv8l\0\0"
#endif
typedef u32 compat_size_t;
typedef s32 compat_ssize_t;
...@@ -73,13 +77,23 @@ struct compat_timeval {
};
struct compat_stat {
#ifdef __AARCH64EB__
short st_dev;
short __pad1;
#else
compat_dev_t st_dev;
#endif
compat_ino_t st_ino;
compat_mode_t st_mode;
compat_ushort_t st_nlink;
__compat_uid16_t st_uid;
__compat_gid16_t st_gid;
#ifdef __AARCH64EB__
short st_rdev;
short __pad2;
#else
compat_dev_t st_rdev;
#endif
compat_off_t st_size;
compat_off_t st_blksize;
compat_off_t st_blocks;
......
/*
* Copyright (C) 2013 ARM Ltd.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef __ASM_CPU_OPS_H
#define __ASM_CPU_OPS_H
#include <linux/init.h>
#include <linux/threads.h>
struct device_node;
/**
* struct cpu_operations - Callback operations for hotplugging CPUs.
*
* @name: Name of the property as appears in a devicetree cpu node's
* enable-method property.
* @cpu_init: Reads any data necessary for a specific enable-method from the
* devicetree, for a given cpu node and proposed logical id.
* @cpu_prepare: Early one-time preparation step for a cpu. If there is a
* mechanism for doing so, tests whether it is possible to boot
* the given CPU.
* @cpu_boot: Boots a cpu into the kernel.
* @cpu_postboot: Optionally, perform any post-boot cleanup or necessary
* synchronisation. Called from the cpu being booted.
* @cpu_disable: Prepares a cpu to die. May fail for some mechanism-specific
* reason, which will cause the hot unplug to be aborted. Called
* from the cpu to be killed.
* @cpu_die: Makes a cpu leave the kernel. Must not fail. Called from the
* cpu being killed.
*/
struct cpu_operations {
const char *name;
int (*cpu_init)(struct device_node *, unsigned int);
int (*cpu_prepare)(unsigned int);
int (*cpu_boot)(unsigned int);
void (*cpu_postboot)(void);
#ifdef CONFIG_HOTPLUG_CPU
int (*cpu_disable)(unsigned int cpu);
void (*cpu_die)(unsigned int cpu);
#endif
};
extern const struct cpu_operations *cpu_ops[NR_CPUS];
extern int __init cpu_read_ops(struct device_node *dn, int cpu);
extern void __init cpu_read_bootcpu_ops(void);
#endif /* ifndef __ASM_CPU_OPS_H */
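To make the callback structure above concrete, a minimal (and entirely hypothetical) enable method might look like the sketch below; only struct cpu_operations and its fields come from the header above, everything else is invented for illustration:

    #include <asm/cpu_ops.h>
    #include <linux/errno.h>
    #include <linux/of.h>

    static int my_method_cpu_init(struct device_node *dn, unsigned int cpu)
    {
        /* Parse any per-cpu properties needed later from 'dn'. */
        return 0;
    }

    static int my_method_cpu_prepare(unsigned int cpu)
    {
        return 0;                       /* nothing to verify before boot */
    }

    static int my_method_cpu_boot(unsigned int cpu)
    {
        /* Kick the CPU towards the kernel's secondary entry point here. */
        return -EOPNOTSUPP;
    }

    const struct cpu_operations my_method_ops = {
        .name        = "my-method",     /* matches the DT enable-method string */
        .cpu_init    = my_method_cpu_init,
        .cpu_prepare = my_method_cpu_prepare,
        .cpu_boot    = my_method_cpu_boot,
    };

In this series the table of recognised methods lives in cpu_ops.c (spin-table and psci), so a real implementation would also have to be listed there.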
...@@ -90,11 +90,24 @@ typedef struct user_fpsimd_state elf_fpregset_t;
* These are used to set parameters in the core dumps.
*/
#define ELF_CLASS ELFCLASS64
#ifdef __AARCH64EB__
#define ELF_DATA ELFDATA2MSB
#else
#define ELF_DATA ELFDATA2LSB
#endif
#define ELF_ARCH EM_AARCH64
/*
* This yields a string that ld.so will use to load implementation
* specific libraries for optimization. This is more specific in
* intent than poking at uname or /proc/cpuinfo.
*/
#define ELF_PLATFORM_SIZE 16
#ifdef __AARCH64EB__
#define ELF_PLATFORM ("aarch64_be")
#else
#define ELF_PLATFORM ("aarch64")
#endif
/*
* This is used to ensure we don't load something for the wrong architecture.
...@@ -149,7 +162,12 @@ extern unsigned long arch_randomize_brk(struct mm_struct *mm);
#define arch_randomize_brk arch_randomize_brk
#ifdef CONFIG_COMPAT
#ifdef __AARCH64EB__
#define COMPAT_ELF_PLATFORM ("v8b")
#else
#define COMPAT_ELF_PLATFORM ("v8l")
#endif
#define COMPAT_ELF_ET_DYN_BASE (randomize_et_dyn(2 * TASK_SIZE_32 / 3))
......
...@@ -224,6 +224,7 @@ extern void __memset_io(volatile void __iomem *, int, size_t);
*/
extern void __iomem *__ioremap(phys_addr_t phys_addr, size_t size, pgprot_t prot);
extern void __iounmap(volatile void __iomem *addr);
extern void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size);
#define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_DIRTY)
#define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_ATTRINDX(MT_DEVICE_nGnRE))
...@@ -233,7 +234,6 @@ extern void __iounmap(volatile void __iomem *addr);
#define ioremap(addr, size) __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))
#define ioremap_nocache(addr, size) __ioremap((addr), (size), __pgprot(PROT_DEVICE_nGnRE))
#define ioremap_wc(addr, size) __ioremap((addr), (size), __pgprot(PROT_NORMAL_NC))
#define ioremap_cached(addr, size) __ioremap((addr), (size), __pgprot(PROT_NORMAL))
#define iounmap __iounmap
#define PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF)
......
...@@ -4,6 +4,7 @@
#include <asm-generic/irq.h>
extern void (*handle_arch_irq)(struct pt_regs *);
extern void migrate_irqs(void);
extern void set_handle_irq(void (*handle_irq)(struct pt_regs *));
#endif
...@@ -33,18 +33,23 @@
#define UL(x) _AC(x, UL)
/*
* PAGE_OFFSET - the virtual address of the start of the kernel image (top
* (VA_BITS - 1))
* VA_BITS - the maximum number of bits for virtual addresses.
* TASK_SIZE - the maximum size of a user space task.
* TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area.
* The module space lives between the addresses given by TASK_SIZE
* and PAGE_OFFSET - it must be within 128MB of the kernel text.
*/
#ifdef CONFIG_ARM64_64K_PAGES
#define VA_BITS (42)
#else
#define VA_BITS (39)
#endif
#define PAGE_OFFSET (UL(0xffffffffffffffff) << (VA_BITS - 1))
#define MODULES_END (PAGE_OFFSET)
#define MODULES_VADDR (MODULES_END - SZ_64M)
#define EARLYCON_IOBASE (MODULES_VADDR - SZ_4M)
#define VA_BITS (39)
#define TASK_SIZE_64 (UL(1) << VA_BITS)
#ifdef CONFIG_COMPAT
......
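As a quick sanity check of the PAGE_OFFSET formula above (an editorial aside, not part of the patch), shifting an all-ones 64-bit value left by VA_BITS - 1 reserves the top half of the kernel's virtual address space for the linear map:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* VA_BITS = 39 (4KB pages)  -> PAGE_OFFSET = 0xffffffc000000000
         * VA_BITS = 42 (64KB pages) -> PAGE_OFFSET = 0xfffffe0000000000 */
        for (int va_bits = 39; va_bits <= 42; va_bits += 3) {
            uint64_t page_offset = UINT64_MAX << (va_bits - 1);
            printf("VA_BITS=%d PAGE_OFFSET=0x%016llx\n",
                   va_bits, (unsigned long long)page_offset);
        }
        return 0;
    }

The two results match the 256GB and 2TB "kernel logical memory map" entries in the memory layout tables earlier in this diff.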
...@@ -21,10 +21,10 @@
* 8192 entries of 8 bytes each, occupying a 64KB page. Levels 0 and 1 are not
* used. The 2nd level table (PGD for Linux) can cover a range of 4TB, each
* entry representing 512MB. The user and kernel address spaces are limited to
* 4TB in the 64KB page configuration.
*/
#define PTRS_PER_PTE 8192
#define PTRS_PER_PGD 8192
/*
* PGDIR_SHIFT determines the size a top-level page table entry can map.
......
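The arithmetic behind the updated comment, spelled out (an editorial aside):

    one PGD entry : PTRS_PER_PTE x page size = 8192 x 64KB  = 512MB
    whole PGD     : PTRS_PER_PGD x 512MB     = 8192 x 512MB = 4TB  = 2^42 bytes

which is why the 64KB page configuration can now use VA_BITS = 42 with a full 8192-entry PGD, instead of the previous 1024 entries (512GB).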
...@@ -33,7 +33,7 @@
/*
* VMALLOC and SPARSEMEM_VMEMMAP ranges.
*/
#define VMALLOC_START (UL(0xffffffffffffffff) << VA_BITS)
#define VMALLOC_END (PAGE_OFFSET - UL(0x400000000) - SZ_64K)
#define vmemmap ((struct page *)(VMALLOC_END + SZ_64K))
......
...@@ -107,6 +107,11 @@ static inline void compat_start_thread(struct pt_regs *regs, unsigned long pc,
regs->pstate = COMPAT_PSR_MODE_USR;
if (pc & 1)
regs->pstate |= COMPAT_PSR_T_BIT;
#ifdef __AARCH64EB__
regs->pstate |= COMPAT_PSR_E_BIT;
#endif
regs->compat_sp = sp;
}
#endif
......
...@@ -14,25 +14,6 @@
#ifndef __ASM_PSCI_H
#define __ASM_PSCI_H
#define PSCI_POWER_STATE_TYPE_STANDBY 0
#define PSCI_POWER_STATE_TYPE_POWER_DOWN 1
struct psci_power_state {
u16 id;
u8 type;
u8 affinity_level;
};
struct psci_operations {
int (*cpu_suspend)(struct psci_power_state state,
unsigned long entry_point);
int (*cpu_off)(struct psci_power_state state);
int (*cpu_on)(unsigned long cpuid, unsigned long entry_point);
int (*migrate)(unsigned long cpuid);
};
extern struct psci_operations psci_ops;
int psci_init(void);
#endif /* __ASM_PSCI_H */
...@@ -42,6 +42,7 @@
#define COMPAT_PSR_MODE_UND 0x0000001b
#define COMPAT_PSR_MODE_SYS 0x0000001f
#define COMPAT_PSR_T_BIT 0x00000020
#define COMPAT_PSR_E_BIT 0x00000200
#define COMPAT_PSR_F_BIT 0x00000040
#define COMPAT_PSR_I_BIT 0x00000080
#define COMPAT_PSR_A_BIT 0x00000100
......
...@@ -60,21 +60,14 @@ struct secondary_data {
void *stack;
};
extern struct secondary_data secondary_data;
extern void secondary_entry(void);
extern volatile unsigned long secondary_holding_pen_release;
extern void arch_send_call_function_single_ipi(int cpu);
extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
extern int __cpu_disable(void);
extern void __cpu_die(unsigned int cpu);
extern void cpu_die(void);
int (*init_cpu)(struct device_node *, int);
int (*prepare_cpu)(int);
};
extern const struct smp_enable_ops smp_spin_table_ops;
extern const struct smp_enable_ops smp_psci_ops;
#endif /* ifndef __ASM_SMP_H */
...@@ -22,17 +22,10 @@
/*
* Spinlock implementation.
*
* The old value is read exclusively and the new one, if unlocked, is written
* exclusively. In case of failure, the loop is restarted.
*
* The memory barriers are implicit with the load-acquire and store-release
* instructions.
*
* Unlocked value: 0
* Locked value: 1
*/
#define arch_spin_is_locked(x) ((x)->lock != 0)
#define arch_spin_unlock_wait(lock) \
do { while (arch_spin_is_locked(lock)) cpu_relax(); } while (0)
...@@ -41,32 +34,51 @@
static inline void arch_spin_lock(arch_spinlock_t *lock)
{
unsigned int tmp;
arch_spinlock_t lockval, newval;
asm volatile(
/* Atomically increment the next ticket. */
" prfm pstl1strm, %3\n"
"1: ldaxr %w0, %3\n"
" add %w1, %w0, %w5\n"
" stxr %w2, %w1, %3\n"
" cbnz %w2, 1b\n"
/* Did we get the lock? */
" eor %w1, %w0, %w0, ror #16\n"
" cbz %w1, 3f\n"
/*
* No: spin on the owner. Send a local event to avoid missing an
* unlock before the exclusive load.
*/
" sevl\n"
"2: wfe\n"
" ldaxrh %w2, %4\n"
" eor %w1, %w2, %w0, lsr #16\n"
" cbnz %w1, 2b\n"
/* We got the lock. Critical section starts here. */
"3:"
: "=&r" (lockval), "=&r" (newval), "=&r" (tmp), "+Q" (*lock)
: "Q" (lock->owner), "I" (1 << TICKET_SHIFT)
: "memory");
}
static inline int arch_spin_trylock(arch_spinlock_t *lock)
{
unsigned int tmp;
arch_spinlock_t lockval;
asm volatile(
" prfm pstl1strm, %2\n"
"1: ldaxr %w0, %2\n"
" eor %w1, %w0, %w0, ror #16\n"
" cbnz %w1, 2f\n"
" add %w0, %w0, %3\n"
" stxr %w1, %w0, %2\n"
" cbnz %w1, 1b\n"
"2:"
: "=&r" (lockval), "=&r" (tmp), "+Q" (*lock)
: "I" (1 << TICKET_SHIFT)
: "memory");
return !tmp;
}
...@@ -74,9 +86,28 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
asm volatile(
" stlrh %w1, %0\n"
: "=Q" (lock->owner)
: "r" (lock->owner + 1)
: "memory");
}
static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
{
return lock.owner == lock.next;
}
static inline int arch_spin_is_locked(arch_spinlock_t *lock)
{
return !arch_spin_value_unlocked(ACCESS_ONCE(*lock));
}
static inline int arch_spin_is_contended(arch_spinlock_t *lock)
{
arch_spinlock_t lockval = ACCESS_ONCE(*lock);
return (lockval.next - lockval.owner) > 1;
} }
#define arch_spin_is_contended arch_spin_is_contended
/*
* Write lock implementation.
......
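For readers who prefer not to decode the inline assembly, the ticket lock above behaves roughly like the following C model. This is illustrative only: the real code relies on the exclusive load/store and acquire/release instructions shown above, not on compiler builtins, and the type below merely mirrors spinlock_types.h.

    #include <stdint.h>

    typedef struct {
        uint16_t owner;     /* ticket currently being served */
        uint16_t next;      /* next ticket to hand out */
    } ticket_lock_t;

    static void ticket_lock(ticket_lock_t *lock)
    {
        /* ldaxr/stxr loop: atomically take the next ticket. */
        uint16_t my_ticket = __atomic_fetch_add(&lock->next, 1, __ATOMIC_ACQUIRE);

        /* sevl/wfe loop: wait until our ticket is being served. */
        while (__atomic_load_n(&lock->owner, __ATOMIC_ACQUIRE) != my_ticket)
            ;   /* wfe in the real implementation */
    }

    static void ticket_unlock(ticket_lock_t *lock)
    {
        /* stlrh: pass the lock on with release semantics. */
        __atomic_store_n(&lock->owner, lock->owner + 1, __ATOMIC_RELEASE);
    }

    static int ticket_is_locked(ticket_lock_t lock)
    {
        return lock.owner != lock.next;     /* cf. arch_spin_value_unlocked() */
    }

Being able to test "unlocked" on a plain copy of the whole lock word is what the lockless lockref support (ARCH_USE_CMPXCHG_LOCKREF in the Kconfig hunk above) relies on.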
...@@ -20,14 +20,19 @@
# error "please don't include this file directly"
#endif
#define TICKET_SHIFT 16
#define __lock_aligned
typedef struct {
#ifdef __AARCH64EB__
u16 next;
u16 owner;
#else
u16 owner;
u16 next;
#endif
} __aligned(4) arch_spinlock_t;
#define __ARCH_SPIN_LOCK_UNLOCKED { 0 , 0 }
typedef struct {
volatile unsigned int lock;
......
...@@ -59,6 +59,9 @@ static inline void syscall_get_arguments(struct task_struct *task,
unsigned int i, unsigned int n,
unsigned long *args)
{
if (n == 0)
return;
if (i + n > SYSCALL_MAX_ARGS) {
unsigned long *args_bad = args + SYSCALL_MAX_ARGS - i;
unsigned int n_bad = n + i - SYSCALL_MAX_ARGS;
...@@ -82,6 +85,9 @@ static inline void syscall_set_arguments(struct task_struct *task,
unsigned int i, unsigned int n,
const unsigned long *args)
{
if (n == 0)
return;
if (i + n > SYSCALL_MAX_ARGS) {
pr_warning("%s called with max args %d, handling only %d\n",
__func__, i + n, SYSCALL_MAX_ARGS);
......
...@@ -18,7 +18,8 @@
#ifndef __ASM__VIRT_H
#define __ASM__VIRT_H
#define BOOT_CPU_MODE_EL1 (0xe11)
#define BOOT_CPU_MODE_EL2 (0xe12)
#ifndef __ASSEMBLY__
#include <asm/cacheflush.h>
......
...@@ -16,6 +16,10 @@
#ifndef __ASM_BYTEORDER_H
#define __ASM_BYTEORDER_H
#ifdef __AARCH64EB__
#include <linux/byteorder/big_endian.h>
#else
#include <linux/byteorder/little_endian.h>
#endif
#endif /* __ASM_BYTEORDER_H */
...@@ -9,12 +9,12 @@ AFLAGS_head.o := -DTEXT_OFFSET=$(TEXT_OFFSET)
arm64-obj-y := cputable.o debug-monitors.o entry.o irq.o fpsimd.o \
entry-fpsimd.o process.o ptrace.o setup.o signal.o \
sys.o stacktrace.o time.o traps.o io.o vdso.o \
hyp-stub.o psci.o cpu_ops.o
arm64-obj-$(CONFIG_COMPAT) += sys32.o kuser32.o signal32.o \
sys_compat.o
arm64-obj-$(CONFIG_MODULES) += arm64ksyms.o module.o
arm64-obj-$(CONFIG_SMP) += smp.o smp_spin_table.o
arm64-obj-$(CONFIG_HW_PERF_EVENTS) += perf_event.o
arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)+= hw_breakpoint.o
arm64-obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
......
...@@ -39,6 +39,7 @@ EXPORT_SYMBOL(clear_page);
EXPORT_SYMBOL(__copy_from_user);
EXPORT_SYMBOL(__copy_to_user);
EXPORT_SYMBOL(__clear_user);
EXPORT_SYMBOL(__copy_in_user);
/* physical memory */
EXPORT_SYMBOL(memstart_addr);
......
/*
* CPU kernel entry/exit control
*
* Copyright (C) 2013 ARM Ltd.
*
...@@ -16,38 +16,72 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <asm/cpu_ops.h>
#include <asm/smp_plat.h>
#include <linux/errno.h>
#include <linux/of.h>
#include <linux/string.h>
extern const struct cpu_operations smp_spin_table_ops;
extern const struct cpu_operations cpu_psci_ops;
const struct cpu_operations *cpu_ops[NR_CPUS];
static const struct cpu_operations *supported_cpu_ops[] __initconst = {
#ifdef CONFIG_SMP
&smp_spin_table_ops,
&cpu_psci_ops,
#endif
NULL,
};
static const struct cpu_operations * __init cpu_get_ops(const char *name)
{
const struct cpu_operations **ops = supported_cpu_ops;
while (*ops) {
if (!strcmp(name, (*ops)->name))
return *ops;
ops++;
}
return NULL;
}
/*
* Read a cpu's enable method from the device tree and record it in cpu_ops.
*/
int __init cpu_read_ops(struct device_node *dn, int cpu)
{
const char *enable_method = of_get_property(dn, "enable-method", NULL);
if (!enable_method) {
/*
* The boot CPU may not have an enable method (e.g. when
* spin-table is used for secondaries). Don't warn spuriously.
*/
if (cpu != 0)
pr_err("%s: missing enable-method property\n",
dn->full_name);
return -ENOENT;
}
cpu_ops[cpu] = cpu_get_ops(enable_method);
if (!cpu_ops[cpu]) {
pr_warn("%s: unsupported enable-method property: %s\n",
dn->full_name, enable_method);
return -EOPNOTSUPP;
}
return 0;
}
void __init cpu_read_bootcpu_ops(void)
{
struct device_node *dn = of_get_cpu_node(0, NULL);
if (!dn) {
pr_err("Failed to find device node for boot cpu\n");
return;
}
cpu_read_ops(dn, 0);
}
...@@ -22,7 +22,7 @@
extern unsigned long __cpu_setup(void);
struct cpu_info cpu_table[] = {
{
.cpu_id_val = 0x000f0000,
.cpu_id_mask = 0x000f0000,
......
...@@ -311,14 +311,14 @@ el1_irq:
#endif
#ifdef CONFIG_PREEMPT
get_thread_info tsk
ldr w24, [tsk, #TI_PREEMPT] // get preempt count
add w0, w24, #1 // increment it
str w0, [tsk, #TI_PREEMPT]
#endif
irq_handler
#ifdef CONFIG_PREEMPT
str w24, [tsk, #TI_PREEMPT] // restore preempt count
cbnz w24, 1f // preempt count != 0
ldr x0, [tsk, #TI_FLAGS] // get flags
tbz x0, #TIF_NEED_RESCHED, 1f // needs rescheduling?
bl el1_preempt
...@@ -509,15 +509,15 @@ el0_irq_naked:
#endif
get_thread_info tsk
#ifdef CONFIG_PREEMPT
ldr w24, [tsk, #TI_PREEMPT] // get preempt count
add w23, w24, #1 // increment it
str w23, [tsk, #TI_PREEMPT]
#endif
irq_handler
#ifdef CONFIG_PREEMPT
ldr w0, [tsk, #TI_PREEMPT]
str w24, [tsk, #TI_PREEMPT]
cmp w0, w23
b.eq 1f
mov x1, #0
str x1, [x1] // BUG
......
...@@ -123,8 +123,9 @@
ENTRY(stext)
mov x21, x0 // x21=FDT
bl el2_setup // Drop to EL1, w20=cpu_boot_mode
bl __calc_phys_offset // x24=PHYS_OFFSET, x28=PHYS_OFFSET-PAGE_OFFSET
bl set_cpu_boot_mode_flag
mrs x22, midr_el1 // x22=cpuid
mov x0, x22
bl lookup_processor_type
...@@ -150,21 +151,30 @@ ENDPROC(stext)
/*
* If we're fortunate enough to boot at EL2, ensure that the world is
* sane before dropping to EL1.
*
* Returns either BOOT_CPU_MODE_EL1 or BOOT_CPU_MODE_EL2 in x20 if
* booted in EL1 or EL2 respectively.
*/
ENTRY(el2_setup)
mrs x0, CurrentEL
cmp x0, #PSR_MODE_EL2t
ccmp x0, #PSR_MODE_EL2h, #0x4, ne
b.ne 1f
mrs x0, sctlr_el2
CPU_BE( orr x0, x0, #(1 << 25) ) // Set the EE bit for EL2
CPU_LE( bic x0, x0, #(1 << 25) ) // Clear the EE bit for EL2
msr sctlr_el2, x0
b 2f
1: mrs x0, sctlr_el1
CPU_BE( orr x0, x0, #(3 << 24) ) // Set the EE and E0E bits for EL1
CPU_LE( bic x0, x0, #(3 << 24) ) // Clear the EE and E0E bits for EL1
msr sctlr_el1, x0
mov w20, #BOOT_CPU_MODE_EL1 // This cpu booted in EL1
isb
ret
/* Hyp configuration. */
2: mov x0, #(1 << 31) // 64-bit EL1
str w1, [x0, #4] // This CPU has EL2
mov x0, #(1 << 31) // 64-bit EL1
msr hcr_el2, x0
/* Generic timers. */
...@@ -181,7 +191,8 @@ ENTRY(el2_setup)
/* sctlr_el1 */
mov x0, #0x0800 // Set/clear RES{1,0} bits
CPU_BE( movk x0, #0x33d0, lsl #16 ) // Set EE and E0E on BE systems
CPU_LE( movk x0, #0x30d0, lsl #16 ) // Clear EE and E0E on LE systems
msr sctlr_el1, x0
/* Coprocessor traps. */
...@@ -204,9 +215,24 @@ ENTRY(el2_setup)
PSR_MODE_EL1h)
msr spsr_el2, x0
msr elr_el2, lr
mov w20, #BOOT_CPU_MODE_EL2 // This CPU booted in EL2
eret
ENDPROC(el2_setup)
/*
* Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
* in x20. See arch/arm64/include/asm/virt.h for more info.
*/
ENTRY(set_cpu_boot_mode_flag)
ldr x1, =__boot_cpu_mode // Compute __boot_cpu_mode
add x1, x1, x28
cmp w20, #BOOT_CPU_MODE_EL2
b.ne 1f
add x1, x1, #4
1: str w20, [x1] // This CPU has booted in EL1
ret
ENDPROC(set_cpu_boot_mode_flag)
/*
* We need to find out the CPU boot mode long after boot, so we need to
* store it in a writable variable.
...@@ -225,7 +251,6 @@ ENTRY(__boot_cpu_mode)
.quad PAGE_OFFSET
#ifdef CONFIG_SMP
.pushsection .smp.pen.text, "ax"
.align 3
1: .quad .
.quad secondary_holding_pen_release
...@@ -235,8 +260,9 @@ ENTRY(__boot_cpu_mode)
* cores are held until we're ready for them to initialise.
*/
ENTRY(secondary_holding_pen)
bl el2_setup // Drop to EL1, w20=cpu_boot_mode
bl __calc_phys_offset // x24=PHYS_OFFSET, x28=PHYS_OFFSET-PAGE_OFFSET
bl set_cpu_boot_mode_flag
mrs x0, mpidr_el1
ldr x1, =MPIDR_HWID_BITMASK
and x0, x0, x1
...@@ -250,7 +276,16 @@ pen: ldr x4, [x3]
wfe
b pen
ENDPROC(secondary_holding_pen)
.popsection
/*
* Secondary entry point that jumps straight into the kernel. Only to
* be used where CPUs are brought online dynamically by the kernel.
*/
ENTRY(secondary_entry)
bl __calc_phys_offset // x2=phys offset
bl el2_setup // Drop to EL1
b secondary_startup
ENDPROC(secondary_entry)
ENTRY(secondary_startup)
/*
......
...@@ -81,3 +81,64 @@ void __init init_IRQ(void)
if (!handle_arch_irq)
panic("No interrupt controller found.");
}
#ifdef CONFIG_HOTPLUG_CPU
static bool migrate_one_irq(struct irq_desc *desc)
{
struct irq_data *d = irq_desc_get_irq_data(desc);
const struct cpumask *affinity = d->affinity;
struct irq_chip *c;
bool ret = false;
/*
* If this is a per-CPU interrupt, or the affinity does not
* include this CPU, then we have nothing to do.
*/
if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity))
return false;
if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) {
affinity = cpu_online_mask;
ret = true;
}
c = irq_data_get_irq_chip(d);
if (!c->irq_set_affinity)
pr_debug("IRQ%u: unable to set affinity\n", d->irq);
else if (c->irq_set_affinity(d, affinity, true) == IRQ_SET_MASK_OK && ret)
cpumask_copy(d->affinity, affinity);
return ret;
}
/*
* The current CPU has been marked offline. Migrate IRQs off this CPU.
* If the affinity settings do not allow other CPUs, force them onto any
* available CPU.
*
* Note: we must iterate over all IRQs, whether they have an attached
* action structure or not, as we need to get chained interrupts too.
*/
void migrate_irqs(void)
{
unsigned int i;
struct irq_desc *desc;
unsigned long flags;
local_irq_save(flags);
for_each_irq_desc(i, desc) {
bool affinity_broken;
raw_spin_lock(&desc->lock);
affinity_broken = migrate_one_irq(desc);
raw_spin_unlock(&desc->lock);
if (affinity_broken)
pr_warn_ratelimited("IRQ%u no longer affine to CPU%u\n",
i, smp_processor_id());
}
local_irq_restore(flags);
}
#endif /* CONFIG_HOTPLUG_CPU */
...@@ -27,6 +27,9 @@
*
* See Documentation/arm/kernel_user_helpers.txt for formal definitions.
*/
#include <asm/unistd32.h>
.align 5
.globl __kuser_helper_start
__kuser_helper_start:
...@@ -35,33 +38,30 @@ __kuser_cmpxchg64: // 0xffff0f60
.inst 0xe92d00f0 // push {r4, r5, r6, r7}
.inst 0xe1c040d0 // ldrd r4, r5, [r0]
.inst 0xe1c160d0 // ldrd r6, r7, [r1]
.inst 0xe1b20e9f // 1: ldaexd r0, r1, [r2]
.inst 0xe1b20f9f // 1: ldrexd r0, r1, [r2]
.inst 0xe0303004 // eors r3, r0, r4
.inst 0x00313005 // eoreqs r3, r1, r5
.inst 0x01a23e96 // stlexdeq r3, r6, [r2]
.inst 0x03330001 // teqeq r3, #1
.inst 0x0afffff9 // beq 1b
.inst 0xf57ff05f // dmb sy
.inst 0xe2730000 // rsbs r0, r3, #0
.inst 0xe8bd00f0 // pop {r4, r5, r6, r7}
.inst 0xe12fff1e // bx lr
.align 5
__kuser_memory_barrier: // 0xffff0fa0
.inst 0xf57ff05b // dmb ish
.inst 0xe12fff1e // bx lr
.align 5
__kuser_cmpxchg: // 0xffff0fc0
.inst 0xe1923e9f // 1: ldaex r3, [r2]
.inst 0xe1923f9f // 1: ldrex r3, [r2]
.inst 0xe0533000 // subs r3, r3, r0
.inst 0x01823e91 // stlexeq r3, r1, [r2]
.inst 0x03330001 // teqeq r3, #1
.inst 0x0afffffa // beq 1b
.inst 0xe2730000 // rsbs r0, r3, #0
.inst 0xe12fff1e // bx lr
.align 5
__kuser_get_tls: // 0xffff0fe0
...@@ -75,3 +75,42 @@ __kuser_helper_version: // 0xffff0ffc
.word ((__kuser_helper_end - __kuser_helper_start) >> 5)
.globl __kuser_helper_end
__kuser_helper_end:
/*
* AArch32 sigreturn code
*
* For ARM syscalls, the syscall number has to be loaded into r7.
* We do not support an OABI userspace.
*
* For Thumb syscalls, we also pass the syscall number via r7. We therefore
* need two 16-bit instructions.
*/
.globl __aarch32_sigret_code_start
__aarch32_sigret_code_start:
/*
* ARM Code
*/
.byte __NR_compat_sigreturn, 0x70, 0xa0, 0xe3 // mov r7, #__NR_compat_sigreturn
.byte __NR_compat_sigreturn, 0x00, 0x00, 0xef // svc #__NR_compat_sigreturn
/*
* Thumb code
*/
.byte __NR_compat_sigreturn, 0x27 // svc #__NR_compat_sigreturn
.byte __NR_compat_sigreturn, 0xdf // mov r7, #__NR_compat_sigreturn
/*
* ARM code
*/
.byte __NR_compat_rt_sigreturn, 0x70, 0xa0, 0xe3 // mov r7, #__NR_compat_rt_sigreturn
.byte __NR_compat_rt_sigreturn, 0x00, 0x00, 0xef // svc #__NR_compat_rt_sigreturn
/*
* Thumb code
*/
.byte __NR_compat_rt_sigreturn, 0x27 // svc #__NR_compat_rt_sigreturn
.byte __NR_compat_rt_sigreturn, 0xdf // mov r7, #__NR_compat_rt_sigreturn
.globl __aarch32_sigret_code_end
__aarch32_sigret_code_end:
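For reference, the __kuser_cmpxchg helper patched above keeps its long-standing calling convention (documented in Documentation/arm/kernel_user_helpers.txt): r0 = expected value, r1 = new value, r2 = pointer, and r0 is zero on success. With this change the ordering comes from the AArch32 acquire/release encodings (ldaex/stlex) instead of explicit dmb instructions. A rough usage sketch from 32-bit userspace, for illustration only:

    typedef int (*kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
    #define __kuser_cmpxchg (*(kuser_cmpxchg_t)0xffff0fc0)

    /* Atomically add 'amount' to '*counter' using the helper; returns the
     * new value. Loops until the compare-and-swap succeeds. */
    static int compat_atomic_add(volatile int *counter, int amount)
    {
        int old, new;

        do {
            old = *counter;
            new = old + amount;
        } while (__kuser_cmpxchg(old, new, counter) != 0);

        return new;
    }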
...@@ -111,6 +111,9 @@ static u32 encode_insn_immediate(enum aarch64_imm_type type, u32 insn, u64 imm)
u32 immlo, immhi, lomask, himask, mask;
int shift;
/* The instruction stream is always little endian. */
insn = le32_to_cpu(insn);
switch (type) {
case INSN_IMM_MOVNZ:
/*
...@@ -179,7 +182,7 @@ static u32 encode_insn_immediate(enum aarch64_imm_type type, u32 insn, u64 imm)
insn &= ~(mask << shift);
insn |= (imm & mask) << shift;
return cpu_to_le32(insn);
}
static int reloc_insn_movw(enum aarch64_reloc_op op, void *place, u64 val,
......
...@@ -784,8 +784,8 @@ static const unsigned armv8_pmuv3_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
/*
* PMXEVTYPER: Event selection reg
*/
#define ARMV8_EVTYPE_MASK 0xc80003ff /* Mask for writable bits */
#define ARMV8_EVTYPE_EVENT 0x3ff /* Mask for EVENT bits */
/*
* Event filters for PMUv3
...@@ -1175,7 +1175,8 @@ static void armv8pmu_reset(void *info)
static int armv8_pmuv3_map_event(struct perf_event *event)
{
return map_cpu_event(event, &armv8_pmuv3_perf_map,
&armv8_pmuv3_perf_cache_map,
ARMV8_EVTYPE_EVENT);
}
static struct arm_pmu armv8pmu = {
......
...@@ -102,6 +102,13 @@ void arch_cpu_idle(void)
local_irq_enable();
}
#ifdef CONFIG_HOTPLUG_CPU
void arch_cpu_idle_dead(void)
{
cpu_die();
}
#endif
void machine_shutdown(void)
{
#ifdef CONFIG_SMP
......
...@@ -17,12 +17,32 @@
#include <linux/init.h>
#include <linux/of.h>
#include <linux/smp.h>
#include <asm/compiler.h>
#include <asm/cpu_ops.h>
#include <asm/errno.h>
#include <asm/psci.h>
#include <asm/smp_plat.h>
#define PSCI_POWER_STATE_TYPE_STANDBY 0
#define PSCI_POWER_STATE_TYPE_POWER_DOWN 1
struct psci_power_state {
u16 id;
u8 type;
u8 affinity_level;
};
struct psci_operations {
int (*cpu_suspend)(struct psci_power_state state,
unsigned long entry_point);
int (*cpu_off)(struct psci_power_state state);
int (*cpu_on)(unsigned long cpuid, unsigned long entry_point);
int (*migrate)(unsigned long cpuid);
};
static struct psci_operations psci_ops;
static int (*invoke_psci_fn)(u64, u64, u64, u64);
...@@ -209,3 +229,68 @@ int __init psci_init(void)
of_node_put(np);
return err;
}
#ifdef CONFIG_SMP
static int __init cpu_psci_cpu_init(struct device_node *dn, unsigned int cpu)
{
return 0;
}
static int __init cpu_psci_cpu_prepare(unsigned int cpu)
{
if (!psci_ops.cpu_on) {
pr_err("no cpu_on method, not booting CPU%d\n", cpu);
return -ENODEV;
}
return 0;
}
static int cpu_psci_cpu_boot(unsigned int cpu)
{
int err = psci_ops.cpu_on(cpu_logical_map(cpu), __pa(secondary_entry));
if (err)
pr_err("psci: failed to boot CPU%d (%d)\n", cpu, err);
return err;
}
#ifdef CONFIG_HOTPLUG_CPU
static int cpu_psci_cpu_disable(unsigned int cpu)
{
/* Fail early if we don't have CPU_OFF support */
if (!psci_ops.cpu_off)
return -EOPNOTSUPP;
return 0;
}
static void cpu_psci_cpu_die(unsigned int cpu)
{
int ret;
/*
* There are no known implementations of PSCI actually using the
* power state field, pass a sensible default for now.
*/
struct psci_power_state state = {
.type = PSCI_POWER_STATE_TYPE_POWER_DOWN,
};
ret = psci_ops.cpu_off(state);
pr_crit("psci: unable to power off CPU%u (%d)\n", cpu, ret);
}
#endif
const struct cpu_operations cpu_psci_ops = {
.name = "psci",
.cpu_init = cpu_psci_cpu_init,
.cpu_prepare = cpu_psci_cpu_prepare,
.cpu_boot = cpu_psci_cpu_boot,
#ifdef CONFIG_HOTPLUG_CPU
.cpu_disable = cpu_psci_cpu_disable,
.cpu_die = cpu_psci_cpu_die,
#endif
};
#endif
...@@ -45,6 +45,7 @@
#include <asm/cputype.h>
#include <asm/elf.h>
#include <asm/cputable.h>
#include <asm/cpu_ops.h>
#include <asm/sections.h>
#include <asm/setup.h>
#include <asm/smp_plat.h>
...@@ -97,6 +98,11 @@ void __init early_print(const char *str, ...)
printk("%s", buf);
}
bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
{
return phys_id == cpu_logical_map(cpu);
}
static void __init setup_processor(void)
{
struct cpu_info *cpu_info;
...@@ -118,7 +124,7 @@ static void __init setup_processor(void)
printk("CPU: %s [%08x] revision %d\n",
cpu_name, read_cpuid_id(), read_cpuid_id() & 15);
sprintf(init_utsname()->machine, ELF_PLATFORM);
elf_hwcap = 0;
}
...@@ -264,6 +270,7 @@ void __init setup_arch(char **cmdline_p)
psci_init();
cpu_logical_map(0) = read_cpuid_mpidr() & MPIDR_HWID_BITMASK;
cpu_read_bootcpu_ops();
#ifdef CONFIG_SMP
smp_init_cpus();
#endif
......
...@@ -100,34 +100,6 @@ struct compat_rt_sigframe {
#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
/*
* For ARM syscalls, the syscall number has to be loaded into r7.
* We do not support an OABI userspace.
*/
#define MOV_R7_NR_SIGRETURN (0xe3a07000 | __NR_compat_sigreturn)
#define SVC_SYS_SIGRETURN (0xef000000 | __NR_compat_sigreturn)
#define MOV_R7_NR_RT_SIGRETURN (0xe3a07000 | __NR_compat_rt_sigreturn)
#define SVC_SYS_RT_SIGRETURN (0xef000000 | __NR_compat_rt_sigreturn)
/*
* For Thumb syscalls, we also pass the syscall number via r7. We therefore
* need two 16-bit instructions.
*/
#define SVC_THUMB_SIGRETURN (((0xdf00 | __NR_compat_sigreturn) << 16) | \
0x2700 | __NR_compat_sigreturn)
#define SVC_THUMB_RT_SIGRETURN (((0xdf00 | __NR_compat_rt_sigreturn) << 16) | \
0x2700 | __NR_compat_rt_sigreturn)
const compat_ulong_t aarch32_sigret_code[6] = {
/*
* AArch32 sigreturn code.
* We don't construct an OABI SWI - instead we just set the imm24 field
* to the EABI syscall number so that we create a sane disassembly.
*/
MOV_R7_NR_SIGRETURN, SVC_SYS_SIGRETURN, SVC_THUMB_SIGRETURN,
MOV_R7_NR_RT_SIGRETURN, SVC_SYS_RT_SIGRETURN, SVC_THUMB_RT_SIGRETURN,
};
static inline int put_sigset_t(compat_sigset_t __user *uset, sigset_t *set)
{
compat_sigset_t cset;
...@@ -474,12 +446,13 @@ static void compat_setup_return(struct pt_regs *regs, struct k_sigaction *ka,
/* Check if the handler is written for ARM or Thumb */
thumb = handler & 1;
if (thumb)
spsr |= COMPAT_PSR_T_BIT;
else
spsr &= ~COMPAT_PSR_T_BIT;
/* The IT state must be cleared for both ARM and Thumb-2 */
spsr &= ~COMPAT_PSR_IT_MASK;
if (ka->sa.sa_flags & SA_RESTORER) {
retcode = ptr_to_compat(ka->sa.sa_restorer);
......
...@@ -39,6 +39,7 @@
#include <asm/atomic.h>
#include <asm/cacheflush.h>
#include <asm/cputype.h>
#include <asm/cpu_ops.h>
#include <asm/mmu_context.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
...@@ -54,7 +55,6 @@
* where to place its SVC stack
*/
struct secondary_data secondary_data;
volatile unsigned long secondary_holding_pen_release = INVALID_HWID;
enum ipi_msg_type {
IPI_RESCHEDULE,
...@@ -63,61 +63,16 @@ enum ipi_msg_type {
IPI_CPU_STOP,
};
static DEFINE_RAW_SPINLOCK(boot_lock);
/*
* Write secondary_holding_pen_release in a way that is guaranteed to be
* visible to all observers, irrespective of whether they're taking part
* in coherency or not. This is necessary for the hotplug code to work
* reliably.
*/
static void write_pen_release(u64 val)
{
void *start = (void *)&secondary_holding_pen_release;
unsigned long size = sizeof(secondary_holding_pen_release);
secondary_holding_pen_release = val;
__flush_dcache_area(start, size);
}
/*
* Boot a secondary CPU, and assign it the specified idle task.
* This also gives us the initial stack to use for this CPU.
*/
static int boot_secondary(unsigned int cpu, struct task_struct *idle)
{
if (cpu_ops[cpu]->cpu_boot)
return cpu_ops[cpu]->cpu_boot(cpu);
/*
* Set synchronisation state between this boot processor
* and the secondary one
*/
raw_spin_lock(&boot_lock);
/*
* Update the pen release flag.
*/
write_pen_release(cpu_logical_map(cpu));
/*
* Send an event, causing the secondaries to read pen_release.
*/
sev();
timeout = jiffies + (1 * HZ);
while (time_before(jiffies, timeout)) {
if (secondary_holding_pen_release == INVALID_HWID)
break;
udelay(10);
}
/*
* Now the secondary core is starting up let it run its
* calibrations, then wait for it to finish
*/
raw_spin_unlock(&boot_lock);
return secondary_holding_pen_release != INVALID_HWID ? -ENOSYS : 0; return -EOPNOTSUPP;
} }
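Note: boot_secondary() now just delegates to the per-CPU operations table chosen from the device tree enable-method. A sketch of the structure implied by the callbacks used in this series (field order and anything beyond the callbacks exercised here are assumptions; the authoritative definition is the new asm/cpu_ops.h header):

	struct device_node;

	struct cpu_operations {
		const char	*name;					/* matched against "enable-method" */
		int		(*cpu_init)(struct device_node *, unsigned int);
		int		(*cpu_prepare)(unsigned int);		/* before bring-up, on the boot CPU */
		int		(*cpu_boot)(unsigned int);		/* kick the CPU into the kernel */
		void		(*cpu_postboot)(void);			/* runs on the freshly booted CPU */
	#ifdef CONFIG_HOTPLUG_CPU
		int		(*cpu_disable)(unsigned int cpu);	/* may veto hot-unplug */
		void		(*cpu_die)(unsigned int cpu);		/* final shutdown, must not return */
	#endif
	};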
static DECLARE_COMPLETION(cpu_running); static DECLARE_COMPLETION(cpu_running);
...@@ -187,17 +142,13 @@ asmlinkage void secondary_start_kernel(void) ...@@ -187,17 +142,13 @@ asmlinkage void secondary_start_kernel(void)
preempt_disable(); preempt_disable();
trace_hardirqs_off(); trace_hardirqs_off();
/* if (cpu_ops[cpu]->cpu_postboot)
* Let the primary processor know we're out of the cpu_ops[cpu]->cpu_postboot();
* pen, then head off into the C entry point
*/
write_pen_release(INVALID_HWID);
/* /*
* Synchronise with the boot thread. * Enable GIC and timers.
*/ */
raw_spin_lock(&boot_lock); notify_cpu_starting(cpu);
raw_spin_unlock(&boot_lock);
/* /*
* OK, now it's safe to let the boot CPU continue. Wait for * OK, now it's safe to let the boot CPU continue. Wait for
...@@ -207,11 +158,6 @@ asmlinkage void secondary_start_kernel(void) ...@@ -207,11 +158,6 @@ asmlinkage void secondary_start_kernel(void)
set_cpu_online(cpu, true); set_cpu_online(cpu, true);
complete(&cpu_running); complete(&cpu_running);
/*
* Enable GIC and timers.
*/
notify_cpu_starting(cpu);
local_irq_enable(); local_irq_enable();
local_fiq_enable(); local_fiq_enable();
...@@ -221,39 +167,113 @@ asmlinkage void secondary_start_kernel(void) ...@@ -221,39 +167,113 @@ asmlinkage void secondary_start_kernel(void)
cpu_startup_entry(CPUHP_ONLINE); cpu_startup_entry(CPUHP_ONLINE);
} }
void __init smp_cpus_done(unsigned int max_cpus) #ifdef CONFIG_HOTPLUG_CPU
static int op_cpu_disable(unsigned int cpu)
{ {
pr_info("SMP: Total of %d processors activated.\n", num_online_cpus()); /*
* If we don't have a cpu_die method, abort before we reach the point
 * of no return. CPU0 may not have a cpu_ops, so test for it.
*/
if (!cpu_ops[cpu] || !cpu_ops[cpu]->cpu_die)
return -EOPNOTSUPP;
/*
* We may need to abort a hot unplug for some other mechanism-specific
* reason.
*/
if (cpu_ops[cpu]->cpu_disable)
return cpu_ops[cpu]->cpu_disable(cpu);
return 0;
} }
void __init smp_prepare_boot_cpu(void) /*
 * __cpu_disable runs on the processor to be shut down.
*/
int __cpu_disable(void)
{ {
} unsigned int cpu = smp_processor_id();
int ret;
static void (*smp_cross_call)(const struct cpumask *, unsigned int); ret = op_cpu_disable(cpu);
if (ret)
return ret;
static const struct smp_enable_ops *enable_ops[] __initconst = { /*
&smp_spin_table_ops, * Take this CPU offline. Once we clear this, we can't return,
&smp_psci_ops, * and we must not schedule until we're ready to give up the cpu.
NULL, */
}; set_cpu_online(cpu, false);
/*
* OK - migrate IRQs away from this CPU
*/
migrate_irqs();
static const struct smp_enable_ops *smp_enable_ops[NR_CPUS]; /*
* Remove this CPU from the vm mask set of all processes.
*/
clear_tasks_mm_cpumask(cpu);
static const struct smp_enable_ops * __init smp_get_enable_ops(const char *name) return 0;
{ }
const struct smp_enable_ops **ops = enable_ops;
while (*ops) { static DECLARE_COMPLETION(cpu_died);
if (!strcmp(name, (*ops)->name))
return *ops;
ops++; /*
* called on the thread which is asking for a CPU to be shutdown -
* waits until shutdown has completed, or it is timed out.
*/
void __cpu_die(unsigned int cpu)
{
if (!wait_for_completion_timeout(&cpu_died, msecs_to_jiffies(5000))) {
pr_crit("CPU%u: cpu didn't die\n", cpu);
return;
} }
pr_notice("CPU%u: shutdown\n", cpu);
}
/*
* Called from the idle thread for the CPU which has been shutdown.
*
* Note that we disable IRQs here, but do not re-enable them
* before returning to the caller. This is also the behaviour
* of the other hotplug-cpu capable cores, so presumably coming
* out of idle fixes this.
*/
void cpu_die(void)
{
unsigned int cpu = smp_processor_id();
idle_task_exit();
local_irq_disable();
/* Tell __cpu_die() that this CPU is now safe to dispose of */
complete(&cpu_died);
/*
 * Actually shut down the CPU. This must never fail. The specific hotplug
* mechanism must perform all required cache maintenance to ensure that
* no dirty lines are lost in the process of shutting down the CPU.
*/
cpu_ops[cpu]->cpu_die(cpu);
BUG();
}
#endif
void __init smp_cpus_done(unsigned int max_cpus)
{
pr_info("SMP: Total of %d processors activated.\n", num_online_cpus());
}
return NULL; void __init smp_prepare_boot_cpu(void)
{
} }
static void (*smp_cross_call)(const struct cpumask *, unsigned int);
/* /*
* Enumerate the possible CPU set from the device tree and build the * Enumerate the possible CPU set from the device tree and build the
* cpu logical map array containing MPIDR values related to logical * cpu logical map array containing MPIDR values related to logical
...@@ -261,9 +281,8 @@ static const struct smp_enable_ops * __init smp_get_enable_ops(const char *name) ...@@ -261,9 +281,8 @@ static const struct smp_enable_ops * __init smp_get_enable_ops(const char *name)
*/ */
void __init smp_init_cpus(void) void __init smp_init_cpus(void)
{ {
const char *enable_method;
struct device_node *dn = NULL; struct device_node *dn = NULL;
int i, cpu = 1; unsigned int i, cpu = 1;
bool bootcpu_valid = false; bool bootcpu_valid = false;
while ((dn = of_find_node_by_type(dn, "cpu"))) { while ((dn = of_find_node_by_type(dn, "cpu"))) {
...@@ -332,25 +351,10 @@ void __init smp_init_cpus(void) ...@@ -332,25 +351,10 @@ void __init smp_init_cpus(void)
if (cpu >= NR_CPUS) if (cpu >= NR_CPUS)
goto next; goto next;
/* if (cpu_read_ops(dn, cpu) != 0)
* We currently support only the "spin-table" enable-method.
*/
enable_method = of_get_property(dn, "enable-method", NULL);
if (!enable_method) {
pr_err("%s: missing enable-method property\n",
dn->full_name);
goto next;
}
smp_enable_ops[cpu] = smp_get_enable_ops(enable_method);
if (!smp_enable_ops[cpu]) {
pr_err("%s: invalid enable-method property: %s\n",
dn->full_name, enable_method);
goto next; goto next;
}
if (smp_enable_ops[cpu]->init_cpu(dn, cpu)) if (cpu_ops[cpu]->cpu_init(dn, cpu))
goto next; goto next;
pr_debug("cpu logical map 0x%llx\n", hwid); pr_debug("cpu logical map 0x%llx\n", hwid);
...@@ -380,8 +384,8 @@ void __init smp_init_cpus(void) ...@@ -380,8 +384,8 @@ void __init smp_init_cpus(void)
void __init smp_prepare_cpus(unsigned int max_cpus) void __init smp_prepare_cpus(unsigned int max_cpus)
{ {
int cpu, err; int err;
unsigned int ncores = num_possible_cpus(); unsigned int cpu, ncores = num_possible_cpus();
/* /*
* are we trying to boot more cores than exist? * are we trying to boot more cores than exist?
...@@ -408,10 +412,10 @@ void __init smp_prepare_cpus(unsigned int max_cpus) ...@@ -408,10 +412,10 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
if (cpu == smp_processor_id()) if (cpu == smp_processor_id())
continue; continue;
if (!smp_enable_ops[cpu]) if (!cpu_ops[cpu])
continue; continue;
err = smp_enable_ops[cpu]->prepare_cpu(cpu); err = cpu_ops[cpu]->cpu_prepare(cpu);
if (err) if (err)
continue; continue;
...@@ -451,7 +455,7 @@ void show_ipi_list(struct seq_file *p, int prec) ...@@ -451,7 +455,7 @@ void show_ipi_list(struct seq_file *p, int prec)
for (i = 0; i < NR_IPI; i++) { for (i = 0; i < NR_IPI; i++) {
seq_printf(p, "%*s%u:%s", prec - 1, "IPI", i + IPI_RESCHEDULE, seq_printf(p, "%*s%u:%s", prec - 1, "IPI", i + IPI_RESCHEDULE,
prec >= 4 ? " " : ""); prec >= 4 ? " " : "");
for_each_present_cpu(cpu) for_each_online_cpu(cpu)
seq_printf(p, "%10u ", seq_printf(p, "%10u ",
__get_irq_stat(cpu, ipi_irqs[i])); __get_irq_stat(cpu, ipi_irqs[i]));
seq_printf(p, " %s\n", ipi_types[i]); seq_printf(p, " %s\n", ipi_types[i]);
......
...@@ -16,15 +16,39 @@ ...@@ -16,15 +16,39 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>. * along with this program. If not, see <http://www.gnu.org/licenses/>.
*/ */
#include <linux/delay.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/of.h> #include <linux/of.h>
#include <linux/smp.h> #include <linux/smp.h>
#include <asm/cacheflush.h> #include <asm/cacheflush.h>
#include <asm/cpu_ops.h>
#include <asm/cputype.h>
#include <asm/smp_plat.h>
extern void secondary_holding_pen(void);
volatile unsigned long secondary_holding_pen_release = INVALID_HWID;
static phys_addr_t cpu_release_addr[NR_CPUS]; static phys_addr_t cpu_release_addr[NR_CPUS];
static DEFINE_RAW_SPINLOCK(boot_lock);
/*
* Write secondary_holding_pen_release in a way that is guaranteed to be
* visible to all observers, irrespective of whether they're taking part
* in coherency or not. This is necessary for the hotplug code to work
* reliably.
*/
static void write_pen_release(u64 val)
{
void *start = (void *)&secondary_holding_pen_release;
unsigned long size = sizeof(secondary_holding_pen_release);
static int __init smp_spin_table_init_cpu(struct device_node *dn, int cpu) secondary_holding_pen_release = val;
__flush_dcache_area(start, size);
}
static int smp_spin_table_cpu_init(struct device_node *dn, unsigned int cpu)
{ {
/* /*
* Determine the address from which the CPU is polling. * Determine the address from which the CPU is polling.
...@@ -40,7 +64,7 @@ static int __init smp_spin_table_init_cpu(struct device_node *dn, int cpu) ...@@ -40,7 +64,7 @@ static int __init smp_spin_table_init_cpu(struct device_node *dn, int cpu)
return 0; return 0;
} }
static int __init smp_spin_table_prepare_cpu(int cpu) static int smp_spin_table_cpu_prepare(unsigned int cpu)
{ {
void **release_addr; void **release_addr;
...@@ -48,7 +72,16 @@ static int __init smp_spin_table_prepare_cpu(int cpu) ...@@ -48,7 +72,16 @@ static int __init smp_spin_table_prepare_cpu(int cpu)
return -ENODEV; return -ENODEV;
release_addr = __va(cpu_release_addr[cpu]); release_addr = __va(cpu_release_addr[cpu]);
release_addr[0] = (void *)__pa(secondary_holding_pen);
/*
* We write the release address as LE regardless of the native
 * endianness of the kernel. Therefore, any boot-loaders that
 * read this address need to convert it to the
 * boot-loader's endianness before jumping. This is mandated by
* the boot protocol.
*/
release_addr[0] = (void *) cpu_to_le64(__pa(secondary_holding_pen));
__flush_dcache_area(release_addr, sizeof(release_addr[0])); __flush_dcache_area(release_addr, sizeof(release_addr[0]));
/* /*
...@@ -59,8 +92,60 @@ static int __init smp_spin_table_prepare_cpu(int cpu) ...@@ -59,8 +92,60 @@ static int __init smp_spin_table_prepare_cpu(int cpu)
return 0; return 0;
} }
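Note: the consumer of this little-endian release address is the firmware/boot-loader parked at the cpu-release-addr location. A hedged sketch of that side of the protocol (all names invented; the wfe/sev pairing follows the arm64 boot protocol):

	#include <stdint.h>

	static inline uint64_t le64_to_native(uint64_t v)	/* invented helper name */
	{
	#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
		return __builtin_bswap64(v);	/* big-endian loader: swap the LE value */
	#else
		return v;
	#endif
	}

	void spin_table_wait(volatile uint64_t *release_addr)	/* invented function name */
	{
		uint64_t entry;

		while ((entry = *release_addr) == 0)
			__asm__ volatile("wfe" ::: "memory");	/* kernel issues sev() after writing */

		((void (*)(void))(uintptr_t)le64_to_native(entry))();	/* jump to the published entry */
	}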
const struct smp_enable_ops smp_spin_table_ops __initconst = { static int smp_spin_table_cpu_boot(unsigned int cpu)
{
unsigned long timeout;
/*
* Set synchronisation state between this boot processor
* and the secondary one
*/
raw_spin_lock(&boot_lock);
/*
* Update the pen release flag.
*/
write_pen_release(cpu_logical_map(cpu));
/*
* Send an event, causing the secondaries to read pen_release.
*/
sev();
timeout = jiffies + (1 * HZ);
while (time_before(jiffies, timeout)) {
if (secondary_holding_pen_release == INVALID_HWID)
break;
udelay(10);
}
/*
 * Now the secondary core is starting up, let it run its
* calibrations, then wait for it to finish
*/
raw_spin_unlock(&boot_lock);
return secondary_holding_pen_release != INVALID_HWID ? -ENOSYS : 0;
}
void smp_spin_table_cpu_postboot(void)
{
/*
* Let the primary processor know we're out of the pen.
*/
write_pen_release(INVALID_HWID);
/*
* Synchronise with the boot thread.
*/
raw_spin_lock(&boot_lock);
raw_spin_unlock(&boot_lock);
}
const struct cpu_operations smp_spin_table_ops = {
.name = "spin-table", .name = "spin-table",
.init_cpu = smp_spin_table_init_cpu, .cpu_init = smp_spin_table_cpu_init,
.prepare_cpu = smp_spin_table_prepare_cpu, .cpu_prepare = smp_spin_table_cpu_prepare,
.cpu_boot = smp_spin_table_cpu_boot,
.cpu_postboot = smp_spin_table_cpu_postboot,
}; };
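Note: taken together with the smp.c changes above, these hooks are driven in a fixed order for each secondary CPU. The wrapper below is purely illustrative; only the callback names and the callers mentioned in the comments come from the hunks above.

	#include <asm/cpu_ops.h>	/* struct cpu_operations (added in this series) */
	#include <linux/of.h>		/* struct device_node */

	extern const struct cpu_operations smp_spin_table_ops;	/* defined above */

	static int example_bring_up_cpu(struct device_node *dn, unsigned int cpu)
	{
		int err;

		err = smp_spin_table_ops.cpu_init(dn, cpu);	/* smp_init_cpus(): record cpu-release-addr */
		if (err)
			return err;

		err = smp_spin_table_ops.cpu_prepare(cpu);	/* smp_prepare_cpus(): publish LE entry point */
		if (err)
			return err;

		err = smp_spin_table_ops.cpu_boot(cpu);		/* boot_secondary(): pen release + sev() */
		if (err)
			return err;

		/* secondary_start_kernel() then calls cpu_postboot() on the new CPU. */
		return 0;
	}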
...@@ -59,48 +59,48 @@ ENDPROC(compat_sys_fstatfs64_wrapper) ...@@ -59,48 +59,48 @@ ENDPROC(compat_sys_fstatfs64_wrapper)
* extension. * extension.
*/ */
compat_sys_pread64_wrapper: compat_sys_pread64_wrapper:
orr x3, x4, x5, lsl #32 regs_to_64 x3, x4, x5
b sys_pread64 b sys_pread64
ENDPROC(compat_sys_pread64_wrapper) ENDPROC(compat_sys_pread64_wrapper)
compat_sys_pwrite64_wrapper: compat_sys_pwrite64_wrapper:
orr x3, x4, x5, lsl #32 regs_to_64 x3, x4, x5
b sys_pwrite64 b sys_pwrite64
ENDPROC(compat_sys_pwrite64_wrapper) ENDPROC(compat_sys_pwrite64_wrapper)
compat_sys_truncate64_wrapper: compat_sys_truncate64_wrapper:
orr x1, x2, x3, lsl #32 regs_to_64 x1, x2, x3
b sys_truncate b sys_truncate
ENDPROC(compat_sys_truncate64_wrapper) ENDPROC(compat_sys_truncate64_wrapper)
compat_sys_ftruncate64_wrapper: compat_sys_ftruncate64_wrapper:
orr x1, x2, x3, lsl #32 regs_to_64 x1, x2, x3
b sys_ftruncate b sys_ftruncate
ENDPROC(compat_sys_ftruncate64_wrapper) ENDPROC(compat_sys_ftruncate64_wrapper)
compat_sys_readahead_wrapper: compat_sys_readahead_wrapper:
orr x1, x2, x3, lsl #32 regs_to_64 x1, x2, x3
mov w2, w4 mov w2, w4
b sys_readahead b sys_readahead
ENDPROC(compat_sys_readahead_wrapper) ENDPROC(compat_sys_readahead_wrapper)
compat_sys_fadvise64_64_wrapper: compat_sys_fadvise64_64_wrapper:
mov w6, w1 mov w6, w1
orr x1, x2, x3, lsl #32 regs_to_64 x1, x2, x3
orr x2, x4, x5, lsl #32 regs_to_64 x2, x4, x5
mov w3, w6 mov w3, w6
b sys_fadvise64_64 b sys_fadvise64_64
ENDPROC(compat_sys_fadvise64_64_wrapper) ENDPROC(compat_sys_fadvise64_64_wrapper)
compat_sys_sync_file_range2_wrapper: compat_sys_sync_file_range2_wrapper:
orr x2, x2, x3, lsl #32 regs_to_64 x2, x2, x3
orr x3, x4, x5, lsl #32 regs_to_64 x3, x4, x5
b sys_sync_file_range2 b sys_sync_file_range2
ENDPROC(compat_sys_sync_file_range2_wrapper) ENDPROC(compat_sys_sync_file_range2_wrapper)
compat_sys_fallocate_wrapper: compat_sys_fallocate_wrapper:
orr x2, x2, x3, lsl #32 regs_to_64 x2, x2, x3
orr x3, x4, x5, lsl #32 regs_to_64 x3, x4, x5
b sys_fallocate b sys_fallocate
ENDPROC(compat_sys_fallocate_wrapper) ENDPROC(compat_sys_fallocate_wrapper)
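Note: the old "orr Xd, Xn, Xm, lsl #32" sequences assumed the low 32 bits of a 64-bit compat argument always arrive in the even-numbered register; the AArch32 convention presents register pairs the other way round on big-endian, which is what the regs_to_64 macro accounts for. The merge itself, restated in C with a hypothetical helper:

	#include <linux/types.h>

	/* Hypothetical helper: which argument is "lo" and which is "hi" is
	 * the endianness-dependent part hidden by regs_to_64. */
	static inline u64 compat_pair_to_64(u32 lo, u32 hi)
	{
		return (u64)lo | ((u64)hi << 32);
	}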
......
...@@ -58,7 +58,10 @@ static struct page *vectors_page[1]; ...@@ -58,7 +58,10 @@ static struct page *vectors_page[1];
static int alloc_vectors_page(void) static int alloc_vectors_page(void)
{ {
extern char __kuser_helper_start[], __kuser_helper_end[]; extern char __kuser_helper_start[], __kuser_helper_end[];
extern char __aarch32_sigret_code_start[], __aarch32_sigret_code_end[];
int kuser_sz = __kuser_helper_end - __kuser_helper_start; int kuser_sz = __kuser_helper_end - __kuser_helper_start;
int sigret_sz = __aarch32_sigret_code_end - __aarch32_sigret_code_start;
unsigned long vpage; unsigned long vpage;
vpage = get_zeroed_page(GFP_ATOMIC); vpage = get_zeroed_page(GFP_ATOMIC);
...@@ -72,7 +75,7 @@ static int alloc_vectors_page(void) ...@@ -72,7 +75,7 @@ static int alloc_vectors_page(void)
/* sigreturn code */ /* sigreturn code */
memcpy((void *)vpage + AARCH32_KERN_SIGRET_CODE_OFFSET, memcpy((void *)vpage + AARCH32_KERN_SIGRET_CODE_OFFSET,
aarch32_sigret_code, sizeof(aarch32_sigret_code)); __aarch32_sigret_code_start, sigret_sz);
flush_icache_range(vpage, vpage + PAGE_SIZE); flush_icache_range(vpage, vpage + PAGE_SIZE);
vectors_page[0] = virt_to_page(vpage); vectors_page[0] = virt_to_page(vpage);
......
...@@ -54,7 +54,6 @@ SECTIONS ...@@ -54,7 +54,6 @@ SECTIONS
} }
.text : { /* Real text segment */ .text : { /* Real text segment */
_stext = .; /* Text and read-only data */ _stext = .; /* Text and read-only data */
*(.smp.pen.text)
__exception_text_start = .; __exception_text_start = .;
*(.exception.text) *(.exception.text)
__exception_text_end = .; __exception_text_end = .;
...@@ -97,30 +96,13 @@ SECTIONS ...@@ -97,30 +96,13 @@ SECTIONS
PERCPU_SECTION(64) PERCPU_SECTION(64)
__init_end = .; __init_end = .;
. = ALIGN(THREAD_SIZE);
__data_loc = .;
.data : AT(__data_loc) { . = ALIGN(PAGE_SIZE);
_data = .; /* address in memory */ _data = .;
__data_loc = _data - LOAD_OFFSET;
_sdata = .; _sdata = .;
RW_DATA_SECTION(64, PAGE_SIZE, THREAD_SIZE)
/*
* first, the init task union, aligned
* to an 8192 byte boundary.
*/
INIT_TASK_DATA(THREAD_SIZE)
NOSAVE_DATA
CACHELINE_ALIGNED_DATA(64)
READ_MOSTLY_DATA(64)
/*
* and the usual data section
*/
DATA_DATA
CONSTRUCTORS
_edata = .; _edata = .;
}
_edata_loc = __data_loc + SIZEOF(.data); _edata_loc = __data_loc + SIZEOF(.data);
BSS_SECTION(0, 0, 0) BSS_SECTION(0, 0, 0)
......
...@@ -74,7 +74,10 @@ __do_hyp_init: ...@@ -74,7 +74,10 @@ __do_hyp_init:
msr mair_el2, x4 msr mair_el2, x4
isb isb
mov x4, #SCTLR_EL2_FLAGS mrs x4, sctlr_el2
and x4, x4, #SCTLR_EL2_EE // preserve endianness of EL2
ldr x5, =SCTLR_EL2_FLAGS
orr x4, x4, x5
msr sctlr_el2, x4 msr sctlr_el2, x4
isb isb
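Note: HYP init now reads SCTLR_EL2 back and preserves its EE bit so the hypervisor keeps the endianness the kernel booted with, instead of being forced little-endian. The net effect in C (bit position taken from the architectural definition of SCTLR_EL2.EE; treat the constant as illustrative):

	#define SCTLR_EL2_EE_BIT	(1UL << 25)	/* illustrative constant */

	static inline unsigned long hyp_init_sctlr(unsigned long old_sctlr_el2,
						   unsigned long sctlr_el2_flags)
	{
		/* keep only the endianness bit, then OR in the required flags */
		return (old_sctlr_el2 & SCTLR_EL2_EE_BIT) | sctlr_el2_flags;
	}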
......
...@@ -403,6 +403,14 @@ __kvm_hyp_code_start: ...@@ -403,6 +403,14 @@ __kvm_hyp_code_start:
ldr w9, [x2, #GICH_ELRSR0] ldr w9, [x2, #GICH_ELRSR0]
ldr w10, [x2, #GICH_ELRSR1] ldr w10, [x2, #GICH_ELRSR1]
ldr w11, [x2, #GICH_APR] ldr w11, [x2, #GICH_APR]
CPU_BE( rev w4, w4 )
CPU_BE( rev w5, w5 )
CPU_BE( rev w6, w6 )
CPU_BE( rev w7, w7 )
CPU_BE( rev w8, w8 )
CPU_BE( rev w9, w9 )
CPU_BE( rev w10, w10 )
CPU_BE( rev w11, w11 )
str w4, [x3, #VGIC_CPU_HCR] str w4, [x3, #VGIC_CPU_HCR]
str w5, [x3, #VGIC_CPU_VMCR] str w5, [x3, #VGIC_CPU_VMCR]
...@@ -421,6 +429,7 @@ __kvm_hyp_code_start: ...@@ -421,6 +429,7 @@ __kvm_hyp_code_start:
ldr w4, [x3, #VGIC_CPU_NR_LR] ldr w4, [x3, #VGIC_CPU_NR_LR]
add x3, x3, #VGIC_CPU_LR add x3, x3, #VGIC_CPU_LR
1: ldr w5, [x2], #4 1: ldr w5, [x2], #4
CPU_BE( rev w5, w5 )
str w5, [x3], #4 str w5, [x3], #4
sub w4, w4, #1 sub w4, w4, #1
cbnz w4, 1b cbnz w4, 1b
...@@ -446,6 +455,9 @@ __kvm_hyp_code_start: ...@@ -446,6 +455,9 @@ __kvm_hyp_code_start:
ldr w4, [x3, #VGIC_CPU_HCR] ldr w4, [x3, #VGIC_CPU_HCR]
ldr w5, [x3, #VGIC_CPU_VMCR] ldr w5, [x3, #VGIC_CPU_VMCR]
ldr w6, [x3, #VGIC_CPU_APR] ldr w6, [x3, #VGIC_CPU_APR]
CPU_BE( rev w4, w4 )
CPU_BE( rev w5, w5 )
CPU_BE( rev w6, w6 )
str w4, [x2, #GICH_HCR] str w4, [x2, #GICH_HCR]
str w5, [x2, #GICH_VMCR] str w5, [x2, #GICH_VMCR]
...@@ -456,6 +468,7 @@ __kvm_hyp_code_start: ...@@ -456,6 +468,7 @@ __kvm_hyp_code_start:
ldr w4, [x3, #VGIC_CPU_NR_LR] ldr w4, [x3, #VGIC_CPU_NR_LR]
add x3, x3, #VGIC_CPU_LR add x3, x3, #VGIC_CPU_LR
1: ldr w5, [x3], #4 1: ldr w5, [x3], #4
CPU_BE( rev w5, w5 )
str w5, [x2], #4 str w5, [x2], #4
sub w4, w4, #1 sub w4, w4, #1
cbnz w4, 1b cbnz w4, 1b
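Note: the GICv2 virtual interface registers are little-endian MMIO, so a big-endian kernel must byte-reverse every word it saves to or restores from the in-memory vgic state; CPU_BE() makes the rev instructions conditional on CONFIG_CPU_BIG_ENDIAN. A rough C equivalent (helper name invented):

	#include <linux/swab.h>
	#include <linux/types.h>

	static inline u32 vgic_word(u32 hw_val)
	{
	#ifdef CONFIG_CPU_BIG_ENDIAN
		return swab32(hw_val);	/* mirror of the conditional rev */
	#else
		return hw_val;
	#endif
	}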
......
...@@ -77,8 +77,24 @@ EXPORT_SYMBOL(__ioremap); ...@@ -77,8 +77,24 @@ EXPORT_SYMBOL(__ioremap);
void __iounmap(volatile void __iomem *io_addr) void __iounmap(volatile void __iomem *io_addr)
{ {
void *addr = (void *)(PAGE_MASK & (unsigned long)io_addr); unsigned long addr = (unsigned long)io_addr & PAGE_MASK;
vunmap(addr); /*
* We could get an address outside vmalloc range in case
* of ioremap_cache() reusing a RAM mapping.
*/
if (VMALLOC_START <= addr && addr < VMALLOC_END)
vunmap((void *)addr);
} }
EXPORT_SYMBOL(__iounmap); EXPORT_SYMBOL(__iounmap);
void __iomem *ioremap_cache(phys_addr_t phys_addr, size_t size)
{
/* For normal memory we already have a cacheable mapping. */
if (pfn_valid(__phys_to_pfn(phys_addr)))
return (void __iomem *)__phys_to_virt(phys_addr);
return __ioremap_caller(phys_addr, size, __pgprot(PROT_NORMAL),
__builtin_return_address(0));
}
EXPORT_SYMBOL(ioremap_cache);
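Note: with these two hunks, ioremap_cache() on memory that is already part of the linear RAM mapping returns the existing cacheable alias instead of creating a new vmalloc mapping, and __iounmap() quietly ignores such aliases. A usage sketch (the physical address is a placeholder):

	#include <linux/errno.h>
	#include <linux/io.h>

	static int map_ram_buffer(phys_addr_t buf_phys)	/* placeholder address */
	{
		void __iomem *buf = ioremap_cache(buf_phys, PAGE_SIZE);

		if (!buf)
			return -ENOMEM;

		/* ... access the buffer through the cacheable mapping ... */

		iounmap(buf);	/* no-op for linear-map aliases after this change */
		return 0;
	}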
...@@ -162,9 +162,9 @@ ENDPROC(__cpu_setup) ...@@ -162,9 +162,9 @@ ENDPROC(__cpu_setup)
* CE0 XWHW CZ ME TEEA S * CE0 XWHW CZ ME TEEA S
* .... .IEE .... NEAI TE.I ..AD DEN0 ACAM * .... .IEE .... NEAI TE.I ..AD DEN0 ACAM
* 0011 0... 1101 ..0. ..0. 10.. .... .... < hardware reserved * 0011 0... 1101 ..0. ..0. 10.. .... .... < hardware reserved
* .... .100 .... 01.1 11.1 ..01 0001 1101 < software settings * .... .1.. .... 01.1 11.1 ..01 0001 1101 < software settings
*/ */
.type crval, #object .type crval, #object
crval: crval:
.word 0x030802e2 // clear .word 0x000802e2 // clear
.word 0x0405d11d // set .word 0x0405d11d // set
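Note: __cpu_setup applies crval as a clear mask followed by a set mask on SCTLR_EL1; removing bits 25:24 (EE/E0E) from the clear word keeps a big-endian kernel's endianness selection intact. The application, restated in C as a sketch:

	/* Sketch of how __cpu_setup consumes the crval words. */
	static inline unsigned long apply_crval(unsigned long sctlr,
						unsigned long clear, unsigned long set)
	{
		return (sctlr & ~clear) | set;	/* e.g. apply_crval(sctlr_el1, 0x000802e2, 0x0405d11d) */
	}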