Commit f47671e2 authored by Linus Torvalds

Merge branch 'for-linus' of git://git.linaro.org/people/rmk/linux-arm

Pull ARM updates from Russell King:
 "Included in this series are:

   1. BE8 (modern big endian) changes for ARM from Ben Dooks
   2. big.Little support from Nicolas Pitre and Dave Martin
   3. support for LPAE systems with all system memory above 4GB
   4. Perf updates from Will Deacon
   5. Additional prefetching and other performance improvements from Will.
   6. Neon-optimised AES implementation from Ard.
   7. A number of smaller fixes scattered around the place.

  There is a rather horrid merge conflict in tools/perf - I was never
  notified of the conflict because it originally occurred between Will's
  tree and other stuff.  Consequently I have a resolution which Will
  forwarded me, which I'll forward on immediately after sending this
  mail.

  The other notable thing is I'm expecting some build breakage in the
  crypto stuff on ARM only with Ard's AES patches.  These were merged
  into a stable git branch which others had already pulled, so there's
  little I can do about this.  The problem is caused because these
  patches have a dependency on some code in the crypto git tree - I
  tried requesting a branch I can pull to resolve these, and all I got
  each time from the crypto people was "we'll revert our patches then"
  which would only make things worse since I still don't have the
  dependent patches.  I've no idea what's going on there or how to
  resolve that, and since I can't split these patches from the rest of
  this pull request, I'm rather stuck with pushing this as-is or
  reverting Ard's patches.

  Since it should "come out in the wash" I've left them in - the only
  build problems they seem to cause at the moment are with randconfigs,
  and it's a new feature anyway.  However, if by -rc1 the
  dependencies aren't in, I think it'd be best to revert Ard's patches"

I resolved the perf conflict roughly as per the patch sent by Russell,
but there may be some differences.  Any errors are likely mine.  Let's
see how the crypto issues work out...

* 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: (110 commits)
  ARM: 7868/1: arm/arm64: remove atomic_clear_mask() in "include/asm/atomic.h"
  ARM: 7867/1: include: asm: use 'int' instead of 'unsigned long' for 'oldval' in atomic_cmpxchg().
  ARM: 7866/1: include: asm: use 'long long' instead of 'u64' within atomic.h
  ARM: 7871/1: amba: Extend number of IRQS
  ARM: 7887/1: Don't smp_cross_call() on UP devices in arch_irq_work_raise()
  ARM: 7872/1: Support arch_irq_work_raise() via self IPIs
  ARM: 7880/1: Clear the IT state independent of the Thumb-2 mode
  ARM: 7878/1: nommu: Implement dummy early_paging_init()
  ARM: 7876/1: clear Thumb-2 IT state on exception handling
  ARM: 7874/2: bL_switcher: Remove cpu_hotplug_driver_{lock,unlock}()
  ARM: footbridge: fix build warnings for netwinder
  ARM: 7873/1: vfp: clear vfp_current_hw_state for dying cpu
  ARM: fix misplaced arch_virt_to_idmap()
  ARM: 7848/1: mcpm: Implement cpu_kill() to synchronise on powerdown
  ARM: 7847/1: mcpm: Factor out logical-to-physical CPU translation
  ARM: 7869/1: remove unused XSCALE_PMU Kconfig param
  ARM: 7864/1: Handle 64-bit memory in case of 32-bit phys_addr_t
  ARM: 7863/1: Let arm_add_memory() always use 64-bit arguments
  ARM: 7862/1: pcpu: replace __get_cpu_var_uses
  ARM: 7861/1: cacheflush: consolidate single-CPU ARMv7 cache disabling code
  ...
parents 8ceafbfa 42cbe827
@@ -5,6 +5,7 @@ config ARM
        select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
        select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
        select ARCH_HAVE_CUSTOM_GPIO_H
+       select ARCH_USE_CMPXCHG_LOCKREF
        select ARCH_WANT_IPC_PARSE_VERSION
        select BUILDTIME_EXTABLE_SORT if MMU
        select CLONE_BACKWARDS
@@ -51,6 +52,8 @@ config ARM
        select HAVE_MOD_ARCH_SPECIFIC if ARM_UNWIND
        select HAVE_OPROFILE if (HAVE_PERF_EVENTS)
        select HAVE_PERF_EVENTS
+       select HAVE_PERF_REGS
+       select HAVE_PERF_USER_STACK_DUMP
        select HAVE_REGS_AND_STACK_ACCESS_API
        select HAVE_SYSCALL_TRACEPOINTS
        select HAVE_UID16
@@ -481,6 +484,7 @@ config ARCH_IXP4XX
        bool "IXP4xx-based"
        depends on MMU
        select ARCH_HAS_DMA_SET_COHERENT_MASK
+       select ARCH_SUPPORTS_BIG_ENDIAN
        select ARCH_REQUIRE_GPIOLIB
        select CLKSRC_MMIO
        select CPU_XSCALE
@@ -688,7 +692,6 @@ config ARCH_SA1100
        select GENERIC_CLOCKEVENTS
        select HAVE_IDE
        select ISA
-       select NEED_MACH_GPIO_H
        select NEED_MACH_MEMORY_H
        select SPARSE_IRQ
        help
@@ -1064,11 +1067,6 @@ config IWMMXT
          Enable support for iWMMXt context switching at run time if
          running on a CPU that supports it.

-config XSCALE_PMU
-       bool
-       depends on CPU_XSCALE
-       default y
-
 config MULTI_IRQ_HANDLER
        bool
        help
@@ -1516,6 +1514,32 @@ config MCPM
          for (multi-)cluster based systems, such as big.LITTLE based
          systems.

+config BIG_LITTLE
+       bool "big.LITTLE support (Experimental)"
+       depends on CPU_V7 && SMP
+       select MCPM
+       help
+         This option enables support selections for the big.LITTLE
+         system architecture.
+
+config BL_SWITCHER
+       bool "big.LITTLE switcher support"
+       depends on BIG_LITTLE && MCPM && HOTPLUG_CPU
+       select CPU_PM
+       select ARM_CPU_SUSPEND
+       help
+         The big.LITTLE "switcher" provides the core functionality to
+         transparently handle transition between a cluster of A15's
+         and a cluster of A7's in a big.LITTLE system.
+
+config BL_SWITCHER_DUMMY_IF
+       tristate "Simple big.LITTLE switcher user interface"
+       depends on BL_SWITCHER && DEBUG_KERNEL
+       help
+         This is a simple and dummy char dev interface to control
+         the big.LITTLE switcher core code.  It is meant for
+         debugging purposes only.
+
 choice
        prompt "Memory split"
        default VMSPLIT_3G
...
@@ -318,6 +318,7 @@ choice
 config DEBUG_MSM_UART1
        bool "Kernel low-level debugging messages via MSM UART1"
        depends on ARCH_MSM7X00A || ARCH_MSM7X30 || ARCH_QSD8X50
+       select DEBUG_MSM_UART
        help
          Say Y here if you want the debug print routines to direct
          their output to the first serial port on MSM devices.
@@ -325,6 +326,7 @@ choice
 config DEBUG_MSM_UART2
        bool "Kernel low-level debugging messages via MSM UART2"
        depends on ARCH_MSM7X00A || ARCH_MSM7X30 || ARCH_QSD8X50
+       select DEBUG_MSM_UART
        help
          Say Y here if you want the debug print routines to direct
          their output to the second serial port on MSM devices.
@@ -332,6 +334,7 @@ choice
 config DEBUG_MSM_UART3
        bool "Kernel low-level debugging messages via MSM UART3"
        depends on ARCH_MSM7X00A || ARCH_MSM7X30 || ARCH_QSD8X50
+       select DEBUG_MSM_UART
        help
          Say Y here if you want the debug print routines to direct
          their output to the third serial port on MSM devices.
@@ -340,6 +343,7 @@ choice
        bool "Kernel low-level debugging messages via MSM 8660 UART"
        depends on ARCH_MSM8X60
        select MSM_HAS_DEBUG_UART_HS
+       select DEBUG_MSM_UART
        help
          Say Y here if you want the debug print routines to direct
          their output to the serial port on MSM 8660 devices.
@@ -348,10 +352,20 @@ choice
        bool "Kernel low-level debugging messages via MSM 8960 UART"
        depends on ARCH_MSM8960
        select MSM_HAS_DEBUG_UART_HS
+       select DEBUG_MSM_UART
        help
          Say Y here if you want the debug print routines to direct
          their output to the serial port on MSM 8960 devices.

+config DEBUG_MSM8974_UART
+       bool "Kernel low-level debugging messages via MSM 8974 UART"
+       depends on ARCH_MSM8974
+       select MSM_HAS_DEBUG_UART_HS
+       select DEBUG_MSM_UART
+       help
+         Say Y here if you want the debug print routines to direct
+         their output to the serial port on MSM 8974 devices.
+
 config DEBUG_MVEBU_UART
        bool "Kernel low-level debugging messages via MVEBU UART (old bootloaders)"
        depends on ARCH_MVEBU
@@ -841,6 +855,20 @@ choice
          options; the platform specific options are deprecated
          and will be soon removed.

+config DEBUG_LL_UART_EFM32
+       bool "Kernel low-level debugging via efm32 UART"
+       depends on ARCH_EFM32
+       help
+         Say Y here if you want the debug print routines to direct
+         their output to an UART or USART port on efm32 based
+         machines. Use the following addresses for DEBUG_UART_PHYS:
+
+           0x4000c000 | USART0
+           0x4000c400 | USART1
+           0x4000c800 | USART2
+           0x4000e000 | UART0
+           0x4000e400 | UART1
+
 config DEBUG_LL_UART_PL01X
        bool "Kernel low-level debugging via ARM Ltd PL01x Primecell UART"
        help
@@ -887,11 +915,16 @@ config DEBUG_STI_UART
        bool
        depends on ARCH_STI

+config DEBUG_MSM_UART
+       bool
+       depends on ARCH_MSM
+
 config DEBUG_LL_INCLUDE
        string
        default "debug/8250.S" if DEBUG_LL_UART_8250 || DEBUG_UART_8250
        default "debug/pl01x.S" if DEBUG_LL_UART_PL01X || DEBUG_UART_PL01X
        default "debug/exynos.S" if DEBUG_EXYNOS_UART
+       default "debug/efm32.S" if DEBUG_LL_UART_EFM32
        default "debug/icedcc.S" if DEBUG_ICEDCC
        default "debug/imx.S" if DEBUG_IMX1_UART || \
                                 DEBUG_IMX25_UART || \
@@ -902,11 +935,7 @@ config DEBUG_LL_INCLUDE
                                 DEBUG_IMX53_UART ||\
                                 DEBUG_IMX6Q_UART || \
                                 DEBUG_IMX6SL_UART
-       default "debug/msm.S" if DEBUG_MSM_UART1 || \
-                                DEBUG_MSM_UART2 || \
-                                DEBUG_MSM_UART3 || \
-                                DEBUG_MSM8660_UART || \
-                                DEBUG_MSM8960_UART
+       default "debug/msm.S" if DEBUG_MSM_UART
        default "debug/omap2plus.S" if DEBUG_OMAP2PLUS_UART
        default "debug/sirf.S" if DEBUG_SIRFPRIMA2_UART1 || DEBUG_SIRFMARCO_UART1
        default "debug/sti.S" if DEBUG_STI_UART
@@ -959,6 +988,7 @@ config DEBUG_UART_PHYS
        default 0x20064000 if DEBUG_RK29_UART1 || DEBUG_RK3X_UART2
        default 0x20068000 if DEBUG_RK29_UART2 || DEBUG_RK3X_UART3
        default 0x20201000 if DEBUG_BCM2835
+       default 0x4000e400 if DEBUG_LL_UART_EFM32
        default 0x40090000 if ARCH_LPC32XX
        default 0x40100000 if DEBUG_PXA_UART1
        default 0x42000000 if ARCH_GEMINI
@@ -989,6 +1019,7 @@ config DEBUG_UART_PHYS
        default 0xfff36000 if DEBUG_HIGHBANK_UART
        default 0xfffff700 if ARCH_IOP33X
        depends on DEBUG_LL_UART_8250 || DEBUG_LL_UART_PL01X || \
+               DEBUG_LL_UART_EFM32 || \
                DEBUG_UART_8250 || DEBUG_UART_PL01X

 config DEBUG_UART_VIRT
...
@@ -16,6 +16,7 @@ LDFLAGS        :=
 LDFLAGS_vmlinux :=-p --no-undefined -X
 ifeq ($(CONFIG_CPU_ENDIAN_BE8),y)
 LDFLAGS_vmlinux += --be8
+LDFLAGS_MODULE  += --be8
 endif

 OBJCOPYFLAGS    :=-O binary -R .comment -S
...
@@ -135,6 +135,7 @@ start:
                .word   _edata                  @ zImage end address
 THUMB(         .thumb                  )
 1:
+ ARM_BE8(      setend  be )                    @ go BE8 if compiled for BE8
                mrs     r9, cpsr
 #ifdef CONFIG_ARM_VIRT_EXT
                bl      __hyp_stub_install      @ get into SVC mode, reversibly
@@ -699,9 +700,7 @@ __armv4_mmu_cache_on:
                mrc     p15, 0, r0, c1, c0, 0   @ read control reg
                orr     r0, r0, #0x5000         @ I-cache enable, RR cache replacement
                orr     r0, r0, #0x0030
-#ifdef CONFIG_CPU_ENDIAN_BE8
-               orr     r0, r0, #1 << 25        @ big-endian page tables
-#endif
+ ARM_BE8(      orr     r0, r0, #1 << 25 )      @ big-endian page tables
                bl      __common_mmu_cache_on
                mov     r0, #0
                mcr     p15, 0, r0, c8, c7, 0   @ flush I,D TLBs
@@ -728,9 +727,7 @@ __armv7_mmu_cache_on:
                orr     r0, r0, #1 << 22        @ U (v6 unaligned access model)
                                                @ (needed for ARM1176)
 #ifdef CONFIG_MMU
-#ifdef CONFIG_CPU_ENDIAN_BE8
-               orr     r0, r0, #1 << 25        @ big-endian page tables
-#endif
+ ARM_BE8(      orr     r0, r0, #1 << 25 )      @ big-endian page tables
                mrcne   p15, 0, r6, c2, c0, 2   @ read ttb control reg
                orrne   r0, r0, #1              @ MMU enabled
                movne   r1, #0xfffffffd         @ domain 0 = client
...
@@ -16,3 +16,5 @@ obj-$(CONFIG_MCPM) += mcpm_head.o mcpm_entry.o mcpm_platsmp.o vlock.o
 AFLAGS_mcpm_head.o      := -march=armv7-a
 AFLAGS_vlock.o          := -march=armv7-a
 obj-$(CONFIG_TI_PRIV_EDMA) += edma.o
+obj-$(CONFIG_BL_SWITCHER)  += bL_switcher.o
+obj-$(CONFIG_BL_SWITCHER_DUMMY_IF) += bL_switcher_dummy_if.o
/*
* arch/arm/common/bL_switcher_dummy_if.c -- b.L switcher dummy interface
*
* Created by: Nicolas Pitre, November 2012
* Copyright: (C) 2012-2013 Linaro Limited
*
* Dummy interface to user space for debugging purpose only.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <asm/uaccess.h>
#include <asm/bL_switcher.h>
static ssize_t bL_switcher_write(struct file *file, const char __user *buf,
                        size_t len, loff_t *pos)
{
        unsigned char val[3];
        unsigned int cpu, cluster;
        int ret;

        pr_debug("%s\n", __func__);

        if (len < 3)
                return -EINVAL;

        if (copy_from_user(val, buf, 3))
                return -EFAULT;

        /* format: <cpu#>,<cluster#> */
        if (val[0] < '0' || val[0] > '9' ||
            val[1] != ',' ||
            val[2] < '0' || val[2] > '1')
                return -EINVAL;

        cpu = val[0] - '0';
        cluster = val[2] - '0';
        ret = bL_switch_request(cpu, cluster);

        return ret ? : len;
}

static const struct file_operations bL_switcher_fops = {
        .write          = bL_switcher_write,
        .owner          = THIS_MODULE,
};

static struct miscdevice bL_switcher_device = {
        MISC_DYNAMIC_MINOR,
        "b.L_switcher",
        &bL_switcher_fops
};

static int __init bL_switcher_dummy_if_init(void)
{
        return misc_register(&bL_switcher_device);
}

static void __exit bL_switcher_dummy_if_exit(void)
{
        misc_deregister(&bL_switcher_device);
}

module_init(bL_switcher_dummy_if_init);
module_exit(bL_switcher_dummy_if_exit);
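The 3-byte message format accepted by bL_switcher_write() above ("<cpu#>,<cluster#>", cpu in '0'..'9', cluster '0' or '1') can be restated as a small stand-alone validator. This is an illustrative user-space sketch, not kernel code; `switch_request_valid` is a hypothetical helper name:

```c
#include <stddef.h>

/*
 * Hypothetical user-space mirror of the checks in bL_switcher_write():
 * the message must be at least 3 bytes, "<cpu#>,<cluster#>", with the
 * cpu digit in '0'..'9' and the cluster digit '0' or '1'.
 */
static int switch_request_valid(const char *buf, size_t len)
{
        if (len < 3)
                return 0;               /* kernel side returns -EINVAL */
        if (buf[0] < '0' || buf[0] > '9')
                return 0;
        if (buf[1] != ',')
                return 0;
        if (buf[2] < '0' || buf[2] > '1')
                return 0;
        return 1;
}
```

From a shell, a switch request would then look like `echo -n 0,1 > /dev/b.L_switcher`, assuming the misc device registered by bL_switcher_device above appears under /dev with that name.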
@@ -27,6 +27,18 @@ void mcpm_set_entry_vector(unsigned cpu, unsigned cluster, void *ptr)
        sync_cache_w(&mcpm_entry_vectors[cluster][cpu]);
 }

+extern unsigned long mcpm_entry_early_pokes[MAX_NR_CLUSTERS][MAX_CPUS_PER_CLUSTER][2];
+
+void mcpm_set_early_poke(unsigned cpu, unsigned cluster,
+                        unsigned long poke_phys_addr, unsigned long poke_val)
+{
+       unsigned long *poke = &mcpm_entry_early_pokes[cluster][cpu][0];
+       poke[0] = poke_phys_addr;
+       poke[1] = poke_val;
+       __cpuc_flush_dcache_area((void *)poke, 8);
+       outer_clean_range(__pa(poke), __pa(poke + 2));
+}
+
 static const struct mcpm_platform_ops *platform_ops;

 int __init mcpm_platform_register(const struct mcpm_platform_ops *ops)
@@ -90,6 +102,21 @@ void mcpm_cpu_power_down(void)
        BUG();
 }

+int mcpm_cpu_power_down_finish(unsigned int cpu, unsigned int cluster)
+{
+       int ret;
+
+       if (WARN_ON_ONCE(!platform_ops || !platform_ops->power_down_finish))
+               return -EUNATCH;
+
+       ret = platform_ops->power_down_finish(cpu, cluster);
+       if (ret)
+               pr_warn("%s: cpu %u, cluster %u failed to power down (%d)\n",
+                       __func__, cpu, cluster, ret);
+
+       return ret;
+}
+
 void mcpm_cpu_suspend(u64 expected_residency)
 {
        phys_reset_t phys_reset;
...
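For reference, mcpm_set_early_poke() above stores two words per CPU (the physical address to poke and the value to write), so each slot is 8 bytes wide, which matches the `lsl #3` (multiply by 8) indexing in the assembly entry point. A minimal sketch of the slot arithmetic, assuming an illustrative MAX_CPUS_PER_CLUSTER of 4 (the real constant comes from <asm/mcpm.h>):

```c
/* Illustrative value; the real constant comes from <asm/mcpm.h>. */
#define MAX_CPUS_PER_CLUSTER 4

/*
 * mcpm_entry_early_pokes[cluster][cpu] holds two 32-bit words:
 * [0] = physical address to poke, [1] = value to store there.
 * Each slot is therefore 8 bytes, which is what the assembly
 * side's "lsl #3" (x8) index computation relies on.
 */
static unsigned long poke_byte_offset(unsigned int cpu, unsigned int cluster)
{
        return (cluster * MAX_CPUS_PER_CLUSTER + cpu) * 8ul;
}
```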
@@ -15,6 +15,7 @@

 #include <linux/linkage.h>
 #include <asm/mcpm.h>
+#include <asm/assembler.h>

 #include "vlock.h"

@@ -47,6 +48,7 @@

 ENTRY(mcpm_entry_point)

+ ARM_BE8(setend        be)
 THUMB(        adr     r12, BSYM(1f)   )
 THUMB(        bx      r12             )
 THUMB(        .thumb                  )
@@ -71,12 +73,19 @@ ENTRY(mcpm_entry_point)
         * position independent way.
         */
        adr     r5, 3f
-       ldmia   r5, {r6, r7, r8, r11}
+       ldmia   r5, {r0, r6, r7, r8, r11}
+       add     r0, r5, r0              @ r0 = mcpm_entry_early_pokes
        add     r6, r5, r6              @ r6 = mcpm_entry_vectors
        ldr     r7, [r5, r7]            @ r7 = mcpm_power_up_setup_phys
        add     r8, r5, r8              @ r8 = mcpm_sync
        add     r11, r5, r11            @ r11 = first_man_locks

+       @ Perform an early poke, if any
+       add     r0, r0, r4, lsl #3
+       ldmia   r0, {r0, r1}
+       teq     r0, #0
+       strne   r1, [r0]
+
        mov     r0, #MCPM_SYNC_CLUSTER_SIZE
        mla     r8, r0, r10, r8         @ r8 = sync cluster base
@@ -195,7 +204,8 @@ mcpm_entry_gated:

        .align  2

-3:     .word   mcpm_entry_vectors - .
+3:     .word   mcpm_entry_early_pokes - .
+       .word   mcpm_entry_vectors - 3b
        .word   mcpm_power_up_setup_phys - 3b
        .word   mcpm_sync - 3b
        .word   first_man_locks - 3b
@@ -214,6 +224,10 @@ first_man_locks:
 ENTRY(mcpm_entry_vectors)
        .space  4 * MAX_NR_CLUSTERS * MAX_CPUS_PER_CLUSTER

+       .type   mcpm_entry_early_pokes, #object
+ENTRY(mcpm_entry_early_pokes)
+       .space  8 * MAX_NR_CLUSTERS * MAX_CPUS_PER_CLUSTER
+
        .type   mcpm_power_up_setup_phys, #object
 ENTRY(mcpm_power_up_setup_phys)
        .space  4               @ set by mcpm_sync_init()
@@ -19,14 +19,23 @@
 #include <asm/smp.h>
 #include <asm/smp_plat.h>

+static void cpu_to_pcpu(unsigned int cpu,
+                       unsigned int *pcpu, unsigned int *pcluster)
+{
+       unsigned int mpidr;
+
+       mpidr = cpu_logical_map(cpu);
+       *pcpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
+       *pcluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+}
+
 static int mcpm_boot_secondary(unsigned int cpu, struct task_struct *idle)
 {
-       unsigned int mpidr, pcpu, pcluster, ret;
+       unsigned int pcpu, pcluster, ret;
        extern void secondary_startup(void);

-       mpidr = cpu_logical_map(cpu);
-       pcpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
-       pcluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
+       cpu_to_pcpu(cpu, &pcpu, &pcluster);
+
        pr_debug("%s: logical CPU %d is physical CPU %d cluster %d\n",
                 __func__, cpu, pcpu, pcluster);

@@ -47,6 +56,15 @@ static void mcpm_secondary_init(unsigned int cpu)

 #ifdef CONFIG_HOTPLUG_CPU

+static int mcpm_cpu_kill(unsigned int cpu)
+{
+       unsigned int pcpu, pcluster;
+
+       cpu_to_pcpu(cpu, &pcpu, &pcluster);
+
+       return !mcpm_cpu_power_down_finish(pcpu, pcluster);
+}
+
 static int mcpm_cpu_disable(unsigned int cpu)
 {
        /*
@@ -73,6 +91,7 @@ static struct smp_operations __initdata mcpm_smp_ops = {
        .smp_boot_secondary     = mcpm_boot_secondary,
        .smp_secondary_init     = mcpm_secondary_init,
 #ifdef CONFIG_HOTPLUG_CPU
+       .cpu_kill               = mcpm_cpu_kill,
        .cpu_disable            = mcpm_cpu_disable,
        .cpu_die                = mcpm_cpu_die,
 #endif
...
@@ -175,7 +175,7 @@ static struct clock_event_device sp804_clockevent = {

 static struct irqaction sp804_timer_irq = {
        .name           = "timer",
-       .flags          = IRQF_DISABLED | IRQF_TIMER | IRQF_IRQPOLL,
+       .flags          = IRQF_TIMER | IRQF_IRQPOLL,
        .handler        = sp804_timer_interrupt,
        .dev_id         = &sp804_clockevent,
 };
...
-CONFIG_EXPERIMENTAL=y
 CONFIG_SYSVIPC=y
+CONFIG_NO_HZ_IDLE=y
+CONFIG_HIGH_RES_TIMERS=y
 CONFIG_LOG_BUF_SHIFT=14
 CONFIG_BLK_DEV_INITRD=y
 CONFIG_MODULES=y
@@ -11,11 +12,11 @@ CONFIG_ARCH_SA1100=y
 CONFIG_SA1100_H3600=y
 CONFIG_PCCARD=y
 CONFIG_PCMCIA_SA1100=y
+CONFIG_PREEMPT=y
 CONFIG_ZBOOT_ROM_TEXT=0x0
 CONFIG_ZBOOT_ROM_BSS=0x0
 # CONFIG_CPU_FREQ_STAT is not set
 CONFIG_FPE_NWFPE=y
+CONFIG_PM=y
 CONFIG_NET=y
 CONFIG_UNIX=y
 CONFIG_INET=y
@@ -24,13 +25,10 @@ CONFIG_IRDA=m
 CONFIG_IRLAN=m
 CONFIG_IRNET=m
 CONFIG_IRCOMM=m
-CONFIG_SA1100_FIR=m
 # CONFIG_WIRELESS is not set
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_MTD=y
-CONFIG_MTD_PARTITIONS=y
 CONFIG_MTD_REDBOOT_PARTS=y
-CONFIG_MTD_CHAR=y
 CONFIG_MTD_BLOCK=y
 CONFIG_MTD_CFI=y
 CONFIG_MTD_CFI_ADV_OPTIONS=y
@@ -41,19 +39,15 @@ CONFIG_MTD_SA1100=y
 CONFIG_BLK_DEV_LOOP=m
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_SIZE=8192
-# CONFIG_MISC_DEVICES is not set
 CONFIG_IDE=y
 CONFIG_BLK_DEV_IDECS=y
 CONFIG_NETDEVICES=y
-# CONFIG_NETDEV_1000 is not set
-# CONFIG_NETDEV_10000 is not set
-# CONFIG_WLAN is not set
-CONFIG_NET_PCMCIA=y
 CONFIG_PCMCIA_PCNET=y
 CONFIG_PPP=m
-CONFIG_PPP_ASYNC=m
-CONFIG_PPP_DEFLATE=m
 CONFIG_PPP_BSDCOMP=m
+CONFIG_PPP_DEFLATE=m
+CONFIG_PPP_ASYNC=m
+# CONFIG_WLAN is not set
 # CONFIG_KEYBOARD_ATKBD is not set
 CONFIG_KEYBOARD_GPIO=y
 # CONFIG_INPUT_MOUSE is not set
@@ -64,8 +58,6 @@ CONFIG_SERIAL_SA1100_CONSOLE=y
 # CONFIG_HWMON is not set
 CONFIG_FB=y
 CONFIG_FB_SA1100=y
-# CONFIG_VGA_CONSOLE is not set
-# CONFIG_HID_SUPPORT is not set
 # CONFIG_USB_SUPPORT is not set
 CONFIG_EXT2_FS=y
 CONFIG_MSDOS_FS=m
@@ -74,6 +66,4 @@ CONFIG_JFFS2_FS=y
 CONFIG_CRAMFS=m
 CONFIG_NFS_FS=y
 CONFIG_NFSD=m
-CONFIG_SMB_FS=m
 CONFIG_NLS=y
-# CONFIG_RCU_CPU_STALL_DETECTOR is not set
@@ -3,7 +3,17 @@
 #

 obj-$(CONFIG_CRYPTO_AES_ARM) += aes-arm.o
+obj-$(CONFIG_CRYPTO_AES_ARM_BS) += aes-arm-bs.o
 obj-$(CONFIG_CRYPTO_SHA1_ARM) += sha1-arm.o

 aes-arm-y      := aes-armv4.o aes_glue.o
+aes-arm-bs-y   := aesbs-core.o aesbs-glue.o
 sha1-arm-y     := sha1-armv4-large.o sha1_glue.o
+
+quiet_cmd_perl = PERL    $@
+      cmd_perl = $(PERL) $(<) > $(@)
+
+$(src)/aesbs-core.S_shipped: $(src)/bsaes-armv7.pl
+       $(call cmd,perl)
+
+.PRECIOUS: $(obj)/aesbs-core.S
@@ -6,22 +6,12 @@

 #include <linux/crypto.h>
 #include <crypto/aes.h>

-#define AES_MAXNR 14
+#include "aes_glue.h"

-typedef struct {
-       unsigned int rd_key[4 *(AES_MAXNR + 1)];
-       int rounds;
-} AES_KEY;
-
-struct AES_CTX {
-       AES_KEY enc_key;
-       AES_KEY dec_key;
-};
-
-asmlinkage void AES_encrypt(const u8 *in, u8 *out, AES_KEY *ctx);
-asmlinkage void AES_decrypt(const u8 *in, u8 *out, AES_KEY *ctx);
-asmlinkage int private_AES_set_decrypt_key(const unsigned char *userKey, const int bits, AES_KEY *key);
-asmlinkage int private_AES_set_encrypt_key(const unsigned char *userKey, const int bits, AES_KEY *key);
+EXPORT_SYMBOL(AES_encrypt);
+EXPORT_SYMBOL(AES_decrypt);
+EXPORT_SYMBOL(private_AES_set_encrypt_key);
+EXPORT_SYMBOL(private_AES_set_decrypt_key);

 static void aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 {
@@ -81,7 +71,7 @@ static struct crypto_alg aes_alg = {
                .cipher = {
                        .cia_min_keysize        = AES_MIN_KEY_SIZE,
                        .cia_max_keysize        = AES_MAX_KEY_SIZE,
-                       .cia_setkey             = aes_set_key,
+                       .cia_setkey             = aes_set_key,
                        .cia_encrypt            = aes_encrypt,
                        .cia_decrypt            = aes_decrypt
                }
...
#define AES_MAXNR 14

struct AES_KEY {
        unsigned int rd_key[4 * (AES_MAXNR + 1)];
        int rounds;
};

struct AES_CTX {
        struct AES_KEY enc_key;
        struct AES_KEY dec_key;
};

asmlinkage void AES_encrypt(const u8 *in, u8 *out, struct AES_KEY *ctx);
asmlinkage void AES_decrypt(const u8 *in, u8 *out, struct AES_KEY *ctx);
asmlinkage int private_AES_set_decrypt_key(const unsigned char *userKey,
                                           const int bits, struct AES_KEY *key);
asmlinkage int private_AES_set_encrypt_key(const unsigned char *userKey,
                                           const int bits, struct AES_KEY *key);
@@ -24,6 +24,7 @@ generic-y += sembuf.h
 generic-y += serial.h
 generic-y += shmbuf.h
 generic-y += siginfo.h
+generic-y += simd.h
 generic-y += sizes.h
 generic-y += socket.h
 generic-y += sockios.h
...
@@ -53,6 +53,13 @@
 #define put_byte_3      lsl #0
 #endif

+/* Select code for any configuration running in BE8 mode */
+#ifdef CONFIG_CPU_ENDIAN_BE8
+#define ARM_BE8(code...) code
+#else
+#define ARM_BE8(code...)
+#endif
+
 /*
  * Data preload for architectures that support it
  */
...
@@ -12,6 +12,7 @@
 #define __ASM_ARM_ATOMIC_H

 #include <linux/compiler.h>
+#include <linux/prefetch.h>
 #include <linux/types.h>
 #include <linux/irqflags.h>
 #include <asm/barrier.h>
@@ -41,6 +42,7 @@ static inline void atomic_add(int i, atomic_t *v)
 	unsigned long tmp;
 	int result;

+	prefetchw(&v->counter);
 	__asm__ __volatile__("@ atomic_add\n"
 "1:	ldrex	%0, [%3]\n"
 "	add	%0, %0, %4\n"
@@ -79,6 +81,7 @@ static inline void atomic_sub(int i, atomic_t *v)
 	unsigned long tmp;
 	int result;

+	prefetchw(&v->counter);
 	__asm__ __volatile__("@ atomic_sub\n"
 "1:	ldrex	%0, [%3]\n"
 "	sub	%0, %0, %4\n"
@@ -114,7 +117,8 @@ static inline int atomic_sub_return(int i, atomic_t *v)

 static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
 {
-	unsigned long oldval, res;
+	int oldval;
+	unsigned long res;

 	smp_mb();
@@ -134,21 +138,6 @@ static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new)
 	return oldval;
 }

-static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr)
-{
-	unsigned long tmp, tmp2;
-
-	__asm__ __volatile__("@ atomic_clear_mask\n"
-"1:	ldrex	%0, [%3]\n"
-"	bic	%0, %0, %4\n"
-"	strex	%1, %0, [%3]\n"
-"	teq	%1, #0\n"
-"	bne	1b"
-	: "=&r" (tmp), "=&r" (tmp2), "+Qo" (*addr)
-	: "r" (addr), "Ir" (mask)
-	: "cc");
-}
-
 #else /* ARM_ARCH_6 */

 #ifdef CONFIG_SMP
@@ -197,15 +186,6 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 	return ret;
 }

-static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr)
-{
-	unsigned long flags;
-
-	raw_local_irq_save(flags);
-	*addr &= ~mask;
-	raw_local_irq_restore(flags);
-}
-
 #endif /* __LINUX_ARM_ARCH__ */

 #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
@@ -238,15 +218,15 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)

 #ifndef CONFIG_GENERIC_ATOMIC64
 typedef struct {
-	u64 __aligned(8) counter;
+	long long counter;
 } atomic64_t;

 #define ATOMIC64_INIT(i) { (i) }

 #ifdef CONFIG_ARM_LPAE
-static inline u64 atomic64_read(const atomic64_t *v)
+static inline long long atomic64_read(const atomic64_t *v)
 {
-	u64 result;
+	long long result;

 	__asm__ __volatile__("@ atomic64_read\n"
 "	ldrd	%0, %H0, [%1]"
@@ -257,7 +237,7 @@ static inline u64 atomic64_read(const atomic64_t *v)
 	return result;
 }

-static inline void atomic64_set(atomic64_t *v, u64 i)
+static inline void atomic64_set(atomic64_t *v, long long i)
 {
 	__asm__ __volatile__("@ atomic64_set\n"
 "	strd	%2, %H2, [%1]"
@@ -266,9 +246,9 @@ static inline void atomic64_set(atomic64_t *v, u64 i)
 	);
 }
 #else
-static inline u64 atomic64_read(const atomic64_t *v)
+static inline long long atomic64_read(const atomic64_t *v)
 {
-	u64 result;
+	long long result;

 	__asm__ __volatile__("@ atomic64_read\n"
 "	ldrexd	%0, %H0, [%1]"
@@ -279,10 +259,11 @@ static inline u64 atomic64_read(const atomic64_t *v)
 	return result;
 }

-static inline void atomic64_set(atomic64_t *v, u64 i)
+static inline void atomic64_set(atomic64_t *v, long long i)
 {
-	u64 tmp;
+	long long tmp;

+	prefetchw(&v->counter);
 	__asm__ __volatile__("@ atomic64_set\n"
 "1:	ldrexd	%0, %H0, [%2]\n"
 "	strexd	%0, %3, %H3, [%2]\n"
@@ -294,15 +275,16 @@ static inline void atomic64_set(atomic64_t *v, u64 i)
 }
 #endif

-static inline void atomic64_add(u64 i, atomic64_t *v)
+static inline void atomic64_add(long long i, atomic64_t *v)
 {
-	u64 result;
+	long long result;
 	unsigned long tmp;

+	prefetchw(&v->counter);
 	__asm__ __volatile__("@ atomic64_add\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	adds	%0, %0, %4\n"
-"	adc	%H0, %H0, %H4\n"
+"	adds	%Q0, %Q0, %Q4\n"
+"	adc	%R0, %R0, %R4\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
 "	bne	1b"
@@ -311,17 +293,17 @@ static inline void atomic64_add(u64 i, atomic64_t *v)
 	: "cc");
 }

-static inline u64 atomic64_add_return(u64 i, atomic64_t *v)
+static inline long long atomic64_add_return(long long i, atomic64_t *v)
 {
-	u64 result;
+	long long result;
 	unsigned long tmp;

 	smp_mb();

 	__asm__ __volatile__("@ atomic64_add_return\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	adds	%0, %0, %4\n"
-"	adc	%H0, %H0, %H4\n"
+"	adds	%Q0, %Q0, %Q4\n"
+"	adc	%R0, %R0, %R4\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
 "	bne	1b"
@@ -334,15 +316,16 @@ static inline u64 atomic64_add_return(u64 i, atomic64_t *v)
 	return result;
 }

-static inline void atomic64_sub(u64 i, atomic64_t *v)
+static inline void atomic64_sub(long long i, atomic64_t *v)
 {
-	u64 result;
+	long long result;
 	unsigned long tmp;

+	prefetchw(&v->counter);
 	__asm__ __volatile__("@ atomic64_sub\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	subs	%0, %0, %4\n"
-"	sbc	%H0, %H0, %H4\n"
+"	subs	%Q0, %Q0, %Q4\n"
+"	sbc	%R0, %R0, %R4\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
 "	bne	1b"
@@ -351,17 +334,17 @@ static inline void atomic64_sub(u64 i, atomic64_t *v)
 	: "cc");
 }

-static inline u64 atomic64_sub_return(u64 i, atomic64_t *v)
+static inline long long atomic64_sub_return(long long i, atomic64_t *v)
 {
-	u64 result;
+	long long result;
 	unsigned long tmp;

 	smp_mb();

 	__asm__ __volatile__("@ atomic64_sub_return\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	subs	%0, %0, %4\n"
-"	sbc	%H0, %H0, %H4\n"
+"	subs	%Q0, %Q0, %Q4\n"
+"	sbc	%R0, %R0, %R4\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
 "	bne	1b"
@@ -374,9 +357,10 @@ static inline u64 atomic64_sub_return(u64 i, atomic64_t *v)
 	return result;
 }

-static inline u64 atomic64_cmpxchg(atomic64_t *ptr, u64 old, u64 new)
+static inline long long atomic64_cmpxchg(atomic64_t *ptr, long long old,
+					 long long new)
 {
-	u64 oldval;
+	long long oldval;
 	unsigned long res;

 	smp_mb();
@@ -398,9 +382,9 @@ static inline u64 atomic64_cmpxchg(atomic64_t *ptr, u64 old, u64 new)
 	return oldval;
 }

-static inline u64 atomic64_xchg(atomic64_t *ptr, u64 new)
+static inline long long atomic64_xchg(atomic64_t *ptr, long long new)
 {
-	u64 result;
+	long long result;
 	unsigned long tmp;

 	smp_mb();
@@ -419,18 +403,18 @@ static inline u64 atomic64_xchg(atomic64_t *ptr, u64 new)
 	return result;
 }

-static inline u64 atomic64_dec_if_positive(atomic64_t *v)
+static inline long long atomic64_dec_if_positive(atomic64_t *v)
 {
-	u64 result;
+	long long result;
 	unsigned long tmp;

 	smp_mb();

 	__asm__ __volatile__("@ atomic64_dec_if_positive\n"
 "1:	ldrexd	%0, %H0, [%3]\n"
-"	subs	%0, %0, #1\n"
-"	sbc	%H0, %H0, #0\n"
-"	teq	%H0, #0\n"
+"	subs	%Q0, %Q0, #1\n"
+"	sbc	%R0, %R0, #0\n"
+"	teq	%R0, #0\n"
 "	bmi	2f\n"
 "	strexd	%1, %0, %H0, [%3]\n"
 "	teq	%1, #0\n"
@@ -445,9 +429,9 @@ static inline u64 atomic64_dec_if_positive(atomic64_t *v)
 	return result;
 }

-static inline int atomic64_add_unless(atomic64_t *v, u64 a, u64 u)
+static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
 {
-	u64 val;
+	long long val;
 	unsigned long tmp;
 	int ret = 1;

@@ -459,8 +443,8 @@ static inline int atomic64_add_unless(atomic64_t *v, u64 a, u64 u)
 "	teqeq	%H0, %H5\n"
 "	moveq	%1, #0\n"
 "	beq	2f\n"
-"	adds	%0, %0, %6\n"
-"	adc	%H0, %H0, %H6\n"
+"	adds	%Q0, %Q0, %Q6\n"
+"	adc	%R0, %R0, %R6\n"
 "	strexd	%2, %0, %H0, [%4]\n"
 "	teq	%2, #0\n"
 "	bne	1b\n"
/*
* arch/arm/include/asm/bL_switcher.h
*
* Created by: Nicolas Pitre, April 2012
* Copyright: (C) 2012-2013 Linaro Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef ASM_BL_SWITCHER_H
#define ASM_BL_SWITCHER_H
#include <linux/compiler.h>
#include <linux/types.h>
typedef void (*bL_switch_completion_handler)(void *cookie);
int bL_switch_request_cb(unsigned int cpu, unsigned int new_cluster_id,
bL_switch_completion_handler completer,
void *completer_cookie);
static inline int bL_switch_request(unsigned int cpu, unsigned int new_cluster_id)
{
return bL_switch_request_cb(cpu, new_cluster_id, NULL, NULL);
}
/*
* Register here to be notified about runtime enabling/disabling of
* the switcher.
*
* The notifier chain is called with the switcher activation lock held:
* the switcher will not be enabled or disabled during callbacks.
* Callbacks must not call bL_switcher_{get,put}_enabled().
*/
#define BL_NOTIFY_PRE_ENABLE 0
#define BL_NOTIFY_POST_ENABLE 1
#define BL_NOTIFY_PRE_DISABLE 2
#define BL_NOTIFY_POST_DISABLE 3
#ifdef CONFIG_BL_SWITCHER
int bL_switcher_register_notifier(struct notifier_block *nb);
int bL_switcher_unregister_notifier(struct notifier_block *nb);
/*
* Use these functions to temporarily prevent enabling/disabling of
* the switcher.
* bL_switcher_get_enabled() returns true if the switcher is currently
* enabled. Each call to bL_switcher_get_enabled() must be followed
* by a call to bL_switcher_put_enabled(). These functions are not
* recursive.
*/
bool bL_switcher_get_enabled(void);
void bL_switcher_put_enabled(void);
int bL_switcher_trace_trigger(void);
int bL_switcher_get_logical_index(u32 mpidr);
#else
static inline int bL_switcher_register_notifier(struct notifier_block *nb)
{
return 0;
}
static inline int bL_switcher_unregister_notifier(struct notifier_block *nb)
{
return 0;
}
static inline bool bL_switcher_get_enabled(void) { return false; }
static inline void bL_switcher_put_enabled(void) { }
static inline int bL_switcher_trace_trigger(void) { return 0; }
static inline int bL_switcher_get_logical_index(u32 mpidr) { return -EUNATCH; }
#endif /* CONFIG_BL_SWITCHER */
#endif
@@ -2,6 +2,8 @@
 #define _ASMARM_BUG_H

 #include <linux/linkage.h>
+#include <linux/types.h>
+#include <asm/opcodes.h>

 #ifdef CONFIG_BUG
@@ -12,10 +14,10 @@
  */
 #ifdef CONFIG_THUMB2_KERNEL
 #define BUG_INSTR_VALUE 0xde02
-#define BUG_INSTR_TYPE ".hword "
+#define BUG_INSTR(__value) __inst_thumb16(__value)
 #else
 #define BUG_INSTR_VALUE 0xe7f001f2
-#define BUG_INSTR_TYPE ".word "
+#define BUG_INSTR(__value) __inst_arm(__value)
 #endif
@@ -33,7 +35,7 @@
 #define __BUG(__file, __line, __value)				\
 do {								\
-	asm volatile("1:\t" BUG_INSTR_TYPE #__value "\n"	\
+	asm volatile("1:\t" BUG_INSTR(__value) "\n"		\
 		".pushsection .rodata.str, \"aMS\", %progbits, 1\n" \
 		"2:\t.asciz " #__file "\n"			\
 		".popsection\n"					\
@@ -48,7 +50,7 @@ do {								\
 #define __BUG(__file, __line, __value)				\
 do {								\
-	asm volatile(BUG_INSTR_TYPE #__value);			\
+	asm volatile(BUG_INSTR(__value) "\n");			\
 	unreachable();						\
 } while (0)
 #endif /* CONFIG_DEBUG_BUGVERBOSE */
@@ -435,4 +435,50 @@ static inline void __sync_cache_range_r(volatile void *p, size_t size)
 #define sync_cache_w(ptr) __sync_cache_range_w(ptr, sizeof *(ptr))
 #define sync_cache_r(ptr) __sync_cache_range_r(ptr, sizeof *(ptr))

+/*
+ * Disabling cache access for one CPU in an ARMv7 SMP system is tricky.
+ * To do so we must:
+ *
+ * - Clear the SCTLR.C bit to prevent further cache allocations
+ * - Flush the desired level of cache
+ * - Clear the ACTLR "SMP" bit to disable local coherency
+ *
+ * ... and do so without any intervening memory access in between those
+ * steps, not even to the stack.
+ *
+ * WARNING -- After this has been called:
+ *
+ * - No ldrex/strex (and similar) instructions must be used.
+ * - The CPU is obviously no longer coherent with the other CPUs.
+ * - This is unlikely to work as expected if Linux is running non-secure.
+ *
+ * Note:
+ *
+ * - This is known to apply to several ARMv7 processor implementations,
+ *   however some exceptions may exist.  Caveat emptor.
+ *
+ * - The clobber list is dictated by the call to v7_flush_dcache_*.
+ *   fp is preserved to the stack explicitly prior to disabling the cache
+ *   since adding it to the clobber list is incompatible with having
+ *   CONFIG_FRAME_POINTER=y.  ip is saved as well in case r12-clobbering
+ *   trampolines are inserted by the linker, and to keep sp 64-bit aligned.
+ */
+#define v7_exit_coherency_flush(level) \
+	asm volatile( \
+	"stmfd	sp!, {fp, ip} \n\t" \
+	"mrc	p15, 0, r0, c1, c0, 0	@ get SCTLR \n\t" \
+	"bic	r0, r0, #"__stringify(CR_C)" \n\t" \
+	"mcr	p15, 0, r0, c1, c0, 0	@ set SCTLR \n\t" \
+	"isb	\n\t" \
+	"bl	v7_flush_dcache_"__stringify(level)" \n\t" \
+	"clrex	\n\t" \
+	"mrc	p15, 0, r0, c1, c0, 1	@ get ACTLR \n\t" \
+	"bic	r0, r0, #(1 << 6)	@ disable local coherency \n\t" \
+	"mcr	p15, 0, r0, c1, c0, 1	@ set ACTLR \n\t" \
+	"isb	\n\t" \
+	"dsb	\n\t" \
+	"ldmfd	sp!, {fp, ip}" \
+	: : : "r0","r1","r2","r3","r4","r5","r6","r7", \
+	      "r9","r10","lr","memory" )
+
 #endif
@@ -223,6 +223,42 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
 	return ret;
 }

+static inline unsigned long long __cmpxchg64(unsigned long long *ptr,
+					     unsigned long long old,
+					     unsigned long long new)
+{
+	unsigned long long oldval;
+	unsigned long res;
+
+	__asm__ __volatile__(
+"1:	ldrexd		%1, %H1, [%3]\n"
+"	teq		%1, %4\n"
+"	teqeq		%H1, %H4\n"
+"	bne		2f\n"
+"	strexd		%0, %5, %H5, [%3]\n"
+"	teq		%0, #0\n"
+"	bne		1b\n"
+"2:"
+	: "=&r" (res), "=&r" (oldval), "+Qo" (*ptr)
+	: "r" (ptr), "r" (old), "r" (new)
+	: "cc");
+
+	return oldval;
+}
+
+static inline unsigned long long __cmpxchg64_mb(unsigned long long *ptr,
+						unsigned long long old,
+						unsigned long long new)
+{
+	unsigned long long ret;
+
+	smp_mb();
+	ret = __cmpxchg64(ptr, old, new);
+	smp_mb();
+
+	return ret;
+}
+
 #define cmpxchg_local(ptr,o,n)						\
 	((__typeof__(*(ptr)))__cmpxchg_local((ptr),			\
 				       (unsigned long)(o),		\
@@ -230,18 +266,16 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
 				       sizeof(*(ptr))))

 #define cmpxchg64(ptr, o, n)						\
-	((__typeof__(*(ptr)))atomic64_cmpxchg(container_of((ptr),	\
-						atomic64_t,		\
-						counter),		\
-					      (unsigned long long)(o),	\
-					      (unsigned long long)(n)))
-
-#define cmpxchg64_local(ptr, o, n)					\
-	((__typeof__(*(ptr)))local64_cmpxchg(container_of((ptr),	\
-						local64_t,		\
-						a),			\
-					     (unsigned long long)(o),	\
-					     (unsigned long long)(n)))
+	((__typeof__(*(ptr)))__cmpxchg64_mb((ptr),			\
+					(unsigned long long)(o),	\
+					(unsigned long long)(n)))
+
+#define cmpxchg64_relaxed(ptr, o, n)					\
+	((__typeof__(*(ptr)))__cmpxchg64((ptr),				\
+					(unsigned long long)(o),	\
+					(unsigned long long)(n)))
+
+#define cmpxchg64_local(ptr, o, n)	cmpxchg64_relaxed((ptr), (o), (n))

 #endif	/* __LINUX_ARM_ARCH__ >= 6 */
@@ -10,6 +10,7 @@
 #define CPUID_TLBTYPE	3
 #define CPUID_MPUIR	4
 #define CPUID_MPIDR	5
+#define CPUID_REVIDR	6

 #ifdef CONFIG_CPU_V7M
 #define CPUID_EXT_PFR0	0x40
...@@ -5,7 +5,7 @@ ...@@ -5,7 +5,7 @@
#include <linux/threads.h> #include <linux/threads.h>
#include <asm/irq.h> #include <asm/irq.h>
#define NR_IPI 6 #define NR_IPI 8
typedef struct { typedef struct {
unsigned int __softirq_pending; unsigned int __softirq_pending;
......
@@ -24,8 +24,8 @@
 #define TRACER_TIMEOUT 10000

 #define etm_writel(t, v, x) \
-	(__raw_writel((v), (t)->etm_regs + (x)))
-#define etm_readl(t, x) (__raw_readl((t)->etm_regs + (x)))
+	(writel_relaxed((v), (t)->etm_regs + (x)))
+#define etm_readl(t, x) (readl_relaxed((t)->etm_regs + (x)))

 /* CoreSight Management Registers */
 #define CSMR_LOCKACCESS 0xfb0
@@ -142,8 +142,8 @@
 #define ETBFF_TRIGFL		BIT(10)

 #define etb_writel(t, v, x) \
-	(__raw_writel((v), (t)->etb_regs + (x)))
-#define etb_readl(t, x) (__raw_readl((t)->etb_regs + (x)))
+	(writel_relaxed((v), (t)->etb_regs + (x)))
+#define etb_readl(t, x) (readl_relaxed((t)->etb_regs + (x)))

 #define etm_lock(t) do { etm_writel((t), 0, CSMR_LOCKACCESS); } while (0)
 #define etm_unlock(t) \
@@ -11,6 +11,7 @@
 #define __ARM_KGDB_H__

 #include <linux/ptrace.h>
+#include <asm/opcodes.h>

 /*
  * GDB assumes that we're a user process being debugged, so
@@ -41,7 +42,7 @@
 static inline void arch_kgdb_breakpoint(void)
 {
-	asm(".word 0xe7ffdeff");
+	asm(__inst_arm(0xe7ffdeff));
 }

 extern void kgdb_handle_bus_error(void);
@@ -49,6 +49,7 @@ struct machine_desc {
 	bool			(*smp_init)(void);
 	void			(*fixup)(struct tag *, char **,
 					 struct meminfo *);
+	void			(*init_meminfo)(void);
 	void			(*reserve)(void);/* reserve mem blocks	*/
 	void			(*map_io)(void);/* IO mapping function	*/
 	void			(*init_early)(void);
@@ -16,7 +16,7 @@ typedef struct {
 #ifdef CONFIG_CPU_HAS_ASID
 #define ASID_BITS	8
 #define ASID_MASK	((~0ULL) << ASID_BITS)
-#define ASID(mm)	((mm)->context.id.counter & ~ASID_MASK)
+#define ASID(mm)	((unsigned int)((mm)->context.id.counter & ~ASID_MASK))
 #else
 #define ASID(mm)	(0)
 #endif
@@ -181,6 +181,13 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)

 #define set_pte_ext(ptep,pte,ext) cpu_set_pte_ext(ptep,pte,ext)

+/*
+ * We don't have huge page support for short descriptors, for the moment
+ * define empty stubs for use by pin_page_for_write.
+ */
+#define pmd_hugewillfault(pmd)	(0)
+#define pmd_thp_or_huge(pmd)	(0)
+
 #endif /* __ASSEMBLY__ */

 #endif /* _ASM_PGTABLE_2LEVEL_H */
@@ -206,6 +206,9 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr)
 #define __HAVE_ARCH_PMD_WRITE
 #define pmd_write(pmd)		(!(pmd_val(pmd) & PMD_SECT_RDONLY))

+#define pmd_hugewillfault(pmd)	(!pmd_young(pmd) || !pmd_write(pmd))
+#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define pmd_trans_huge(pmd)	(pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))
 #define pmd_trans_splitting(pmd) (pmd_val(pmd) & PMD_SECT_SPLITTING)
@@ -49,7 +49,7 @@ extern struct meminfo meminfo;
 #define bank_phys_end(bank)	((bank)->start + (bank)->size)
 #define bank_phys_size(bank)	(bank)->size

-extern int arm_add_memory(phys_addr_t start, phys_addr_t size);
+extern int arm_add_memory(u64 start, u64 size);

 extern void early_print(const char *str, ...);
 extern void dump_machine_table(void);