Commit ac0f6f92 authored by Linus Torvalds

Merge branch 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm

* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (100 commits)
  ARM: Eliminate decompressor -Dstatic= PIC hack
  ARM: 5958/1: ARM: U300: fix inverted clk round rate
  ARM: 5956/1: misplaced parentheses
  ARM: 5955/1: ep93xx: move timer defines into core.c and document
  ARM: 5954/1: ep93xx: move gpio interrupt support to gpio.c
  ARM: 5953/1: ep93xx: fix broken build of clock.c
  ARM: 5952/1: ARM: MM: Add ARM_L1_CACHE_SHIFT_6 for handle inside each ARCH Kconfig
  ARM: 5949/1: NUC900 add gpio virtual memory map
  ARM: 5948/1: Enable timer0 to time4 clock support for nuc910
  ARM: 5940/2: ARM: MMCI: remove custom DBG macro and printk
  ARM: make_coherent(): fix problems with highpte, part 2
  MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself
  ARM: 5945/1: ep93xx: include correct irq.h in core.c
  ARM: 5933/1: amba-pl011: support hardware flow control
  ARM: 5930/1: Add PKMAP area description to memory.txt.
  ARM: 5929/1: Add checks to detect overlap of memory regions.
  ARM: 5928/1: Change type of VMALLOC_END to unsigned long.
  ARM: 5927/1: Make delimiters of DMA area globally visibly.
  ARM: 5926/1: Add "Virtual kernel memory..." printout.
  ARM: 5920/1: OMAP4: Enable L2 Cache
  ...

Fix up trivial conflict in arch/arm/mach-mx25/clock.c
parents b1bf9368 9f33be2c
......@@ -59,7 +59,11 @@ PAGE_OFFSET high_memory-1 Kernel direct-mapped RAM region.
This maps the platform's RAM, and typically
maps all platform RAM in a 1:1 relationship.
TASK_SIZE PAGE_OFFSET-1 Kernel module space
PKMAP_BASE PAGE_OFFSET-1 Permanent kernel mappings
One way of mapping HIGHMEM pages into kernel
space.
MODULES_VADDR MODULES_END-1 Kernel module space
Kernel modules inserted via insmod are
placed here using dynamic mappings.
......
......@@ -88,12 +88,12 @@ changes occur:
This is used primarily during fault processing.
5) void update_mmu_cache(struct vm_area_struct *vma,
unsigned long address, pte_t pte)
unsigned long address, pte_t *ptep)
At the end of every page fault, this routine is invoked to
tell the architecture specific code that a translation
described by "pte" now exists at virtual address "address"
for address space "vma->vm_mm", in the software page tables.
now exists at virtual address "address" for address space
"vma->vm_mm", in the software page tables.
A port may use this information in any way it so chooses.
For example, it could use this event to pre-load TLB
......
......@@ -329,7 +329,7 @@ extern pgd_t swapper_pg_dir[1024];
* tables contain all the necessary information.
*/
extern inline void update_mmu_cache(struct vm_area_struct * vma,
unsigned long address, pte_t pte)
unsigned long address, pte_t *ptep)
{
}
......
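/*
 * Illustrative sketch (not part of this diff): with the new prototype the
 * generic MM code hands the hook a pointer to the PTE it has just installed
 * rather than the PTE value.  The helper name below is hypothetical;
 * set_pte_at() and update_mmu_cache() are the real interfaces.
 */
static void example_finish_fault(struct vm_area_struct *vma,
				 unsigned long address, pte_t *ptep,
				 pte_t entry)
{
	set_pte_at(vma->vm_mm, address, ptep, entry);	/* software page table */
	update_mmu_cache(vma, address, ptep);		/* arch hook, new form */
}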
......@@ -12,6 +12,7 @@ config ARM
select HAVE_IDE
select RTC_LIB
select SYS_SUPPORTS_APM_EMULATION
select GENERIC_ATOMIC64 if (!CPU_32v6K)
select HAVE_OPROFILE
select HAVE_ARCH_KGDB
select HAVE_KPROBES if (!XIP_KERNEL)
......@@ -20,6 +21,8 @@ config ARM
select HAVE_GENERIC_DMA_COHERENT
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_LZO
select HAVE_PERF_EVENTS
select PERF_USE_VMALLOC
help
The ARM series is a line of low-power-consumption RISC chip designs
licensed by ARM Ltd and targeted at embedded applications and
......@@ -52,6 +55,9 @@ config HAVE_TCM
bool
select GENERIC_ALLOCATOR
config HAVE_PROC_CPU
bool
config NO_IOPORT
bool
......@@ -161,6 +167,11 @@ config ARCH_MTD_XIP
config GENERIC_HARDIRQS_NO__DO_IRQ
def_bool y
config ARM_L1_CACHE_SHIFT_6
bool
help
Set the ARM L1 cache line size to 64 bytes.
if OPROFILE
config OPROFILE_ARMV6
......@@ -550,10 +561,20 @@ config ARCH_W90X900
<http://www.nuvoton.com/hq/enu/ProductAndSales/ProductLines/
ConsumerElectronicsIC/ARMMicrocontroller/ARMMicrocontroller>
config ARCH_NUC93X
bool "Nuvoton NUC93X CPU"
select CPU_ARM926T
select HAVE_CLK
select COMMON_CLKDEV
help
Support for the Nuvoton (Winbond logic dept.) NUC93X MCU. The NUC93X is a
low-power, high-performance MPEG-4/JPEG multimedia controller chip.
config ARCH_PNX4008
bool "Philips Nexperia PNX4008 Mobile"
select CPU_ARM926T
select HAVE_CLK
select COMMON_CLKDEV
help
This enables support for Philips PNX4008 mobile platform.
......@@ -638,6 +659,7 @@ config ARCH_S5PC1XX
select GENERIC_GPIO
select HAVE_CLK
select CPU_V7
select ARM_L1_CACHE_SHIFT_6
help
Samsung S5PC1XX series based systems
......@@ -785,6 +807,8 @@ source "arch/arm/plat-nomadik/Kconfig"
source "arch/arm/mach-ns9xxx/Kconfig"
source "arch/arm/mach-nuc93x/Kconfig"
source "arch/arm/plat-omap/Kconfig"
source "arch/arm/mach-omap1/Kconfig"
......@@ -867,6 +891,11 @@ config XSCALE_PMU
depends on CPU_XSCALE && !XSCALE_PMU_TIMER
default y
config CPU_HAS_PMU
depends on CPU_V6 || CPU_V7 || XSCALE_PMU
default y
bool
if !MMU
source "arch/arm/Kconfig-nommu"
endif
......@@ -921,6 +950,19 @@ config ARM_ERRATA_460075
ACTLR register. Note that setting specific bits in the ACTLR register
may not be available in non-secure mode.
config PL310_ERRATA_588369
bool "Clean & Invalidate maintenance operations do not invalidate clean lines"
depends on CACHE_L2X0 && ARCH_OMAP4
help
The PL310 L2 cache controller implements three types of Clean &
Invalidate maintenance operations: by Physical Address
(offset 0x7F0), by Index/Way (0x7F8) and by Way (0x7FC).
They are architecturally defined to behave as the execution of a
clean operation followed immediately by an invalidate operation,
both performed on the same memory location. This functionality
is not correctly implemented in the PL310, as clean lines are not
invalidated as a result of these operations. Note that this erratum
workaround uses the Texas Instruments secure monitor API.
endmenu
source "arch/arm/common/Kconfig"
......@@ -1171,6 +1213,14 @@ config HIGHPTE
depends on HIGHMEM
depends on !OUTER_CACHE
config HW_PERF_EVENTS
bool "Enable hardware performance counter support for perf events"
depends on PERF_EVENTS && CPU_HAS_PMU && (CPU_V6 || CPU_V7)
default y
help
Enable hardware performance counter support for perf events. If
disabled, perf events will use software events only.
source "mm/Kconfig"
config LEDS
......@@ -1230,6 +1280,7 @@ config ALIGNMENT_TRAP
bool
depends on CPU_CP15_MMU
default y if !ARCH_EBSA110
select HAVE_PROC_CPU if PROC_FS
help
ARM processors cannot fetch/store information which is not
naturally aligned on the bus, i.e., a 4 byte fetch must start at an
......
......@@ -171,6 +171,7 @@ machine-$(CONFIG_ARCH_U300) := u300
machine-$(CONFIG_ARCH_U8500) := ux500
machine-$(CONFIG_ARCH_VERSATILE) := versatile
machine-$(CONFIG_ARCH_W90X900) := w90x900
machine-$(CONFIG_ARCH_NUC93X) := nuc93x
machine-$(CONFIG_FOOTBRIDGE) := footbridge
# Platform directory name. This list is sorted alphanumerically
......
......@@ -5,7 +5,7 @@
#
HEAD = head.o
OBJS = misc.o
OBJS = misc.o decompress.o
FONTC = $(srctree)/drivers/video/console/font_acorn_8x8.c
#
......@@ -106,10 +106,6 @@ lib1funcs = $(obj)/lib1funcs.o
$(obj)/lib1funcs.S: $(srctree)/arch/$(SRCARCH)/lib/lib1funcs.S FORCE
$(call cmd,shipped)
# Don't allow any static data in misc.o, which
# would otherwise mess up our GOT table
CFLAGS_misc.o := -Dstatic=
$(obj)/vmlinux: $(obj)/vmlinux.lds $(obj)/$(HEAD) $(obj)/piggy.$(suffix_y).o \
$(addprefix $(obj)/, $(OBJS)) $(lib1funcs) FORCE
$(call if_changed,ld)
......
#define _LINUX_STRING_H_
#include <linux/compiler.h> /* for inline */
#include <linux/types.h> /* for size_t */
#include <linux/stddef.h> /* for NULL */
#include <linux/linkage.h>
#include <asm/string.h>
extern unsigned long free_mem_ptr;
extern unsigned long free_mem_end_ptr;
extern void error(char *);
#define STATIC static
#define ARCH_HAS_DECOMP_WDOG
/* Diagnostic functions */
#ifdef DEBUG
# define Assert(cond,msg) {if(!(cond)) error(msg);}
# define Trace(x) fprintf x
# define Tracev(x) {if (verbose) fprintf x ;}
# define Tracevv(x) {if (verbose>1) fprintf x ;}
# define Tracec(c,x) {if (verbose && (c)) fprintf x ;}
# define Tracecv(c,x) {if (verbose>1 && (c)) fprintf x ;}
#else
# define Assert(cond,msg)
# define Trace(x)
# define Tracev(x)
# define Tracevv(x)
# define Tracec(c,x)
# define Tracecv(c,x)
#endif
#ifdef CONFIG_KERNEL_GZIP
#include "../../../../lib/decompress_inflate.c"
#endif
#ifdef CONFIG_KERNEL_LZO
#include "../../../../lib/decompress_unlzo.c"
#endif
void do_decompress(u8 *input, int len, u8 *output, void (*error)(char *x))
{
decompress(input, len, NULL, NULL, output, NULL, error);
}
......@@ -22,13 +22,13 @@
#if defined(CONFIG_DEBUG_ICEDCC)
#ifdef CONFIG_CPU_V6
.macro loadsp, rb
.macro loadsp, rb, tmp
.endm
.macro writeb, ch, rb
mcr p14, 0, \ch, c0, c5, 0
.endm
#elif defined(CONFIG_CPU_V7)
.macro loadsp, rb
.macro loadsp, rb, tmp
.endm
.macro writeb, ch, rb
wait: mrc p14, 0, pc, c0, c1, 0
......@@ -36,13 +36,13 @@ wait: mrc p14, 0, pc, c0, c1, 0
mcr p14, 0, \ch, c0, c5, 0
.endm
#elif defined(CONFIG_CPU_XSCALE)
.macro loadsp, rb
.macro loadsp, rb, tmp
.endm
.macro writeb, ch, rb
mcr p14, 0, \ch, c8, c0, 0
.endm
#else
.macro loadsp, rb
.macro loadsp, rb, tmp
.endm
.macro writeb, ch, rb
mcr p14, 0, \ch, c1, c0, 0
......@@ -58,7 +58,7 @@ wait: mrc p14, 0, pc, c0, c1, 0
.endm
#if defined(CONFIG_ARCH_SA1100)
.macro loadsp, rb
.macro loadsp, rb, tmp
mov \rb, #0x80000000 @ physical base address
#ifdef CONFIG_DEBUG_LL_SER3
add \rb, \rb, #0x00050000 @ Ser3
......@@ -67,13 +67,13 @@ wait: mrc p14, 0, pc, c0, c1, 0
#endif
.endm
#elif defined(CONFIG_ARCH_S3C2410)
.macro loadsp, rb
.macro loadsp, rb, tmp
mov \rb, #0x50000000
add \rb, \rb, #0x4000 * CONFIG_S3C_LOWLEVEL_UART_PORT
.endm
#else
.macro loadsp, rb
addruart \rb
.macro loadsp, rb, tmp
addruart \rb, \tmp
.endm
#endif
#endif
......@@ -1025,7 +1025,7 @@ phex: adr r3, phexbuf
strb r2, [r3, r1]
b 1b
puts: loadsp r3
puts: loadsp r3, r1
1: ldrb r2, [r0], #1
teq r2, #0
moveq pc, lr
......@@ -1042,7 +1042,7 @@ puts: loadsp r3
putc:
mov r2, r0
mov r0, #0
loadsp r3
loadsp r3, r1
b 2b
memdump: mov r12, r0
......
......@@ -23,8 +23,8 @@ unsigned int __machine_arch_type;
#include <linux/compiler.h> /* for inline */
#include <linux/types.h> /* for size_t */
#include <linux/stddef.h> /* for NULL */
#include <asm/string.h>
#include <linux/linkage.h>
#include <asm/string.h>
#include <asm/unaligned.h>
......@@ -117,57 +117,7 @@ static void putstr(const char *ptr)
#endif
#define __ptr_t void *
#define memzero(s,n) __memzero(s,n)
/*
* Optimised C version of memzero for the ARM.
*/
void __memzero (__ptr_t s, size_t n)
{
union { void *vp; unsigned long *ulp; unsigned char *ucp; } u;
int i;
u.vp = s;
for (i = n >> 5; i > 0; i--) {
*u.ulp++ = 0;
*u.ulp++ = 0;
*u.ulp++ = 0;
*u.ulp++ = 0;
*u.ulp++ = 0;
*u.ulp++ = 0;
*u.ulp++ = 0;
*u.ulp++ = 0;
}
if (n & 1 << 4) {
*u.ulp++ = 0;
*u.ulp++ = 0;
*u.ulp++ = 0;
*u.ulp++ = 0;
}
if (n & 1 << 3) {
*u.ulp++ = 0;
*u.ulp++ = 0;
}
if (n & 1 << 2)
*u.ulp++ = 0;
if (n & 1 << 1) {
*u.ucp++ = 0;
*u.ucp++ = 0;
}
if (n & 1)
*u.ucp++ = 0;
}
static inline __ptr_t memcpy(__ptr_t __dest, __const __ptr_t __src,
size_t __n)
void *memcpy(void *__dest, __const void *__src, size_t __n)
{
int i = 0;
unsigned char *d = (unsigned char *)__dest, *s = (unsigned char *)__src;
......@@ -204,59 +154,20 @@ static inline __ptr_t memcpy(__ptr_t __dest, __const __ptr_t __src,
/*
* gzip declarations
*/
#define STATIC static
/* Diagnostic functions */
#ifdef DEBUG
# define Assert(cond,msg) {if(!(cond)) error(msg);}
# define Trace(x) fprintf x
# define Tracev(x) {if (verbose) fprintf x ;}
# define Tracevv(x) {if (verbose>1) fprintf x ;}
# define Tracec(c,x) {if (verbose && (c)) fprintf x ;}
# define Tracecv(c,x) {if (verbose>1 && (c)) fprintf x ;}
#else
# define Assert(cond,msg)
# define Trace(x)
# define Tracev(x)
# define Tracevv(x)
# define Tracec(c,x)
# define Tracecv(c,x)
#endif
static void error(char *m);
extern char input_data[];
extern char input_data_end[];
static unsigned char *output_data;
static unsigned long output_ptr;
static void error(char *m);
unsigned char *output_data;
unsigned long output_ptr;
static void putstr(const char *);
static unsigned long free_mem_ptr;
static unsigned long free_mem_end_ptr;
#ifdef STANDALONE_DEBUG
#define NO_INFLATE_MALLOC
#endif
#define ARCH_HAS_DECOMP_WDOG
#ifdef CONFIG_KERNEL_GZIP
#include "../../../../lib/decompress_inflate.c"
#endif
#ifdef CONFIG_KERNEL_LZO
#include "../../../../lib/decompress_unlzo.c"
#endif
unsigned long free_mem_ptr;
unsigned long free_mem_end_ptr;
#ifndef arch_error
#define arch_error(x)
#endif
static void error(char *x)
void error(char *x)
{
arch_error(x);
......@@ -272,6 +183,8 @@ asmlinkage void __div0(void)
error("Attempting division by 0!");
}
extern void do_decompress(u8 *input, int len, u8 *output, void (*error)(char *x));
#ifndef STANDALONE_DEBUG
unsigned long
......@@ -292,8 +205,8 @@ decompress_kernel(unsigned long output_start, unsigned long free_mem_ptr_p,
output_ptr = get_unaligned_le32(tmp);
putstr("Uncompressing Linux...");
decompress(input_data, input_data_end - input_data,
NULL, NULL, output_data, NULL, error);
do_decompress(input_data, input_data_end - input_data,
output_data, error);
putstr(" done, booting the kernel.\n");
return output_ptr;
}
......
......@@ -14,6 +14,13 @@ SECTIONS
/DISCARD/ : {
*(.ARM.exidx*)
*(.ARM.extab*)
/*
* Discard any r/w data - this produces a link error if we have any,
* which is required for PIC decompression. Local data generates
GOTOFF relocations, which prevents it from being relocated independently
* of the text/got segments.
*/
*(.data)
}
. = TEXT_START;
......@@ -40,7 +47,6 @@ SECTIONS
.got : { *(.got) }
_got_end = .;
.got.plt : { *(.got.plt) }
.data : { *(.data) }
_edata = .;
. = BSS_START;
......
......@@ -99,6 +99,16 @@ void clkdev_add(struct clk_lookup *cl)
}
EXPORT_SYMBOL(clkdev_add);
void __init clkdev_add_table(struct clk_lookup *cl, size_t num)
{
mutex_lock(&clocks_mutex);
while (num--) {
list_add_tail(&cl->node, &clocks);
cl++;
}
mutex_unlock(&clocks_mutex);
}
#define MAX_DEV_ID 20
#define MAX_CON_ID 16
......
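/*
 * Illustrative sketch (not part of this diff): platform code can now
 * register a whole lookup table in one call instead of looping over
 * clkdev_add().  The device ids and clock objects below are hypothetical
 * stand-ins for what a real machine file would provide via <asm/clkdev.h>.
 */
extern struct clk example_uart0_clk, example_uart1_clk;

static struct clk_lookup example_lookups[] = {
	{ .dev_id = "uart0", .clk = &example_uart0_clk },
	{ .dev_id = "uart1", .clk = &example_uart1_clk },
};

static void __init example_clk_init(void)
{
	clkdev_add_table(example_lookups, ARRAY_SIZE(example_lookups));
}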
......@@ -277,7 +277,7 @@ static inline dma_addr_t map_single(struct device *dev, void *ptr, size_t size,
* We don't need to sync the DMA buffer since
* it was allocated via the coherent allocators.
*/
dma_cache_maint(ptr, size, dir);
__dma_single_cpu_to_dev(ptr, size, dir);
}
return dma_addr;
......@@ -315,6 +315,8 @@ static inline void unmap_single(struct device *dev, dma_addr_t dma_addr,
__cpuc_flush_dcache_area(ptr, size);
}
free_safe_buffer(dev->archdata.dmabounce, buf);
} else {
__dma_single_dev_to_cpu(dma_to_virt(dev, dma_addr), size, dir);
}
}
......
......@@ -242,10 +242,13 @@ CONFIG_CPU_CP15_MMU=y
# CONFIG_CPU_DCACHE_DISABLE is not set
# CONFIG_CPU_BPREDICT_DISABLE is not set
CONFIG_HAS_TLS_REG=y
CONFIG_OUTER_CACHE=y
CONFIG_CACHE_L2X0=y
CONFIG_ARM_L1_CACHE_SHIFT=5
# CONFIG_ARM_ERRATA_430973 is not set
# CONFIG_ARM_ERRATA_458693 is not set
# CONFIG_ARM_ERRATA_460075 is not set
CONFIG_PL310_ERRATA_588369=y
CONFIG_ARM_GIC=y
#
......
......@@ -235,6 +235,234 @@ static inline int atomic_add_unless(atomic_t *v, int a, int u)
#define smp_mb__before_atomic_inc() smp_mb()
#define smp_mb__after_atomic_inc() smp_mb()
#ifndef CONFIG_GENERIC_ATOMIC64
typedef struct {
u64 __aligned(8) counter;
} atomic64_t;
#define ATOMIC64_INIT(i) { (i) }
static inline u64 atomic64_read(atomic64_t *v)
{
u64 result;
__asm__ __volatile__("@ atomic64_read\n"
" ldrexd %0, %H0, [%1]"
: "=&r" (result)
: "r" (&v->counter)
);
return result;
}
static inline void atomic64_set(atomic64_t *v, u64 i)
{
u64 tmp;
__asm__ __volatile__("@ atomic64_set\n"
"1: ldrexd %0, %H0, [%1]\n"
" strexd %0, %2, %H2, [%1]\n"
" teq %0, #0\n"
" bne 1b"
: "=&r" (tmp)
: "r" (&v->counter), "r" (i)
: "cc");
}
static inline void atomic64_add(u64 i, atomic64_t *v)
{
u64 result;
unsigned long tmp;
__asm__ __volatile__("@ atomic64_add\n"
"1: ldrexd %0, %H0, [%2]\n"
" adds %0, %0, %3\n"
" adc %H0, %H0, %H3\n"
" strexd %1, %0, %H0, [%2]\n"
" teq %1, #0\n"
" bne 1b"
: "=&r" (result), "=&r" (tmp)
: "r" (&v->counter), "r" (i)
: "cc");
}
static inline u64 atomic64_add_return(u64 i, atomic64_t *v)
{
u64 result;
unsigned long tmp;
smp_mb();
__asm__ __volatile__("@ atomic64_add_return\n"
"1: ldrexd %0, %H0, [%2]\n"
" adds %0, %0, %3\n"
" adc %H0, %H0, %H3\n"
" strexd %1, %0, %H0, [%2]\n"
" teq %1, #0\n"
" bne 1b"
: "=&r" (result), "=&r" (tmp)
: "r" (&v->counter), "r" (i)
: "cc");
smp_mb();
return result;
}
static inline void atomic64_sub(u64 i, atomic64_t *v)
{
u64 result;
unsigned long tmp;
__asm__ __volatile__("@ atomic64_sub\n"
"1: ldrexd %0, %H0, [%2]\n"
" subs %0, %0, %3\n"
" sbc %H0, %H0, %H3\n"
" strexd %1, %0, %H0, [%2]\n"
" teq %1, #0\n"
" bne 1b"
: "=&r" (result), "=&r" (tmp)
: "r" (&v->counter), "r" (i)
: "cc");
}
static inline u64 atomic64_sub_return(u64 i, atomic64_t *v)
{
u64 result;
unsigned long tmp;
smp_mb();
__asm__ __volatile__("@ atomic64_sub_return\n"
"1: ldrexd %0, %H0, [%2]\n"
" subs %0, %0, %3\n"
" sbc %H0, %H0, %H3\n"
" strexd %1, %0, %H0, [%2]\n"
" teq %1, #0\n"
" bne 1b"
: "=&r" (result), "=&r" (tmp)
: "r" (&v->counter), "r" (i)
: "cc");
smp_mb();
return result;
}
static inline u64 atomic64_cmpxchg(atomic64_t *ptr, u64 old, u64 new)
{
u64 oldval;
unsigned long res;
smp_mb();
do {
__asm__ __volatile__("@ atomic64_cmpxchg\n"
"ldrexd %1, %H1, [%2]\n"
"mov %0, #0\n"
"teq %1, %3\n"
"teqeq %H1, %H3\n"
"strexdeq %0, %4, %H4, [%2]"
: "=&r" (res), "=&r" (oldval)
: "r" (&ptr->counter), "r" (old), "r" (new)
: "cc");
} while (res);
smp_mb();
return oldval;
}
static inline u64 atomic64_xchg(atomic64_t *ptr, u64 new)
{
u64 result;
unsigned long tmp;
smp_mb();
__asm__ __volatile__("@ atomic64_xchg\n"
"1: ldrexd %0, %H0, [%2]\n"
" strexd %1, %3, %H3, [%2]\n"
" teq %1, #0\n"
" bne 1b"
: "=&r" (result), "=&r" (tmp)
: "r" (&ptr->counter), "r" (new)
: "cc");
smp_mb();
return result;
}
static inline u64 atomic64_dec_if_positive(atomic64_t *v)
{
u64 result;
unsigned long tmp;
smp_mb();
__asm__ __volatile__("@ atomic64_dec_if_positive\n"
"1: ldrexd %0, %H0, [%2]\n"
" subs %0, %0, #1\n"
" sbc %H0, %H0, #0\n"
" teq %H0, #0\n"
" bmi 2f\n"
" strexd %1, %0, %H0, [%2]\n"
" teq %1, #0\n"
" bne 1b\n"
"2:"
: "=&r" (result), "=&r" (tmp)
: "r" (&v->counter)
: "cc");
smp_mb();
return result;
}
static inline int atomic64_add_unless(atomic64_t *v, u64 a, u64 u)
{
u64 val;
unsigned long tmp;
int ret = 1;
smp_mb();
__asm__ __volatile__("@ atomic64_add_unless\n"
"1: ldrexd %0, %H0, [%3]\n"
" teq %0, %4\n"
" teqeq %H0, %H4\n"
" moveq %1, #0\n"
" beq 2f\n"
" adds %0, %0, %5\n"
" adc %H0, %H0, %H5\n"
" strexd %2, %0, %H0, [%3]\n"
" teq %2, #0\n"
" bne 1b\n"
"2:"
: "=&r" (val), "=&r" (ret), "=&r" (tmp)
: "r" (&v->counter), "r" (u), "r" (a)
: "cc");
if (ret)
smp_mb();
return ret;
}
#define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0)
#define atomic64_inc(v) atomic64_add(1LL, (v))
#define atomic64_inc_return(v) atomic64_add_return(1LL, (v))
#define atomic64_inc_and_test(v) (atomic64_inc_return(v) == 0)
#define atomic64_sub_and_test(a, v) (atomic64_sub_return((a), (v)) == 0)
#define atomic64_dec(v) atomic64_sub(1LL, (v))
#define atomic64_dec_return(v) atomic64_sub_return(1LL, (v))
#define atomic64_dec_and_test(v) (atomic64_dec_return((v)) == 0)
#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1LL, 0LL)
#else /* !CONFIG_GENERIC_ATOMIC64 */
#include <asm-generic/atomic64.h>
#endif
#include <asm-generic/atomic-long.h>
#endif
#endif
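/*
 * Illustrative sketch (not part of this diff): a 64-bit counter built on
 * the ldrexd/strexd based operations added above.  It stays consistent
 * under concurrent updates without any spinlock.  Names are hypothetical.
 */
static atomic64_t example_byte_count = ATOMIC64_INIT(0);

static inline void example_account_bytes(u64 bytes)
{
	atomic64_add(bytes, &example_byte_count);
}

static inline u64 example_total_bytes(void)
{
	return atomic64_read(&example_byte_count);
}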
......@@ -197,21 +197,6 @@
* DMA Cache Coherency
* ===================
*
* dma_inv_range(start, end)
*
* Invalidate (discard) the specified virtual address range.
* May not write back any entries. If 'start' or 'end'
* are not cache line aligned, those lines must be written
* back.
* - start - virtual start address
* - end - virtual end address
*
* dma_clean_range(start, end)
*
* Clean (write back) the specified virtual address range.
* - start - virtual start address
* - end - virtual end address
*
* dma_flush_range(start, end)
*
* Clean and invalidate the specified virtual address range.
......@@ -228,8 +213,9 @@ struct cpu_cache_fns {
void (*coherent_user_range)(unsigned long, unsigned long);
void (*flush_kern_dcache_area)(void *, size_t);
void (*dma_inv_range)(const void *, const void *);
void (*dma_clean_range)(const void *, const void *);
void (*dma_map_area)(const void *, size_t, int);
void (*dma_unmap_area)(const void *, size_t, int);
void (*dma_flush_range)(const void *, const void *);
};
......@@ -259,8 +245,8 @@ extern struct cpu_cache_fns cpu_cache;
* is visible to DMA, or data written by DMA to system memory is
* visible to the CPU.
*/
#define dmac_inv_range cpu_cache.dma_inv_range
#define dmac_clean_range cpu_cache.dma_clean_range
#define dmac_map_area cpu_cache.dma_map_area
#define dmac_unmap_area cpu_cache.dma_unmap_area
#define dmac_flush_range cpu_cache.dma_flush_range
#else
......@@ -285,12 +271,12 @@ extern void __cpuc_flush_dcache_area(void *, size_t);
* is visible to DMA, or data written by DMA to system memory is
* visible to the CPU.
*/
#define dmac_inv_range __glue(_CACHE,_dma_inv_range)
#define dmac_clean_range __glue(_CACHE,_dma_clean_range)
#define dmac_map_area __glue(_CACHE,_dma_map_area)
#define dmac_unmap_area __glue(_CACHE,_dma_unmap_area)
#define dmac_flush_range __glue(_CACHE,_dma_flush_range)
extern void dmac_inv_range(const void *, const void *);
extern void dmac_clean_range(const void *, const void *);
extern void dmac_map_area(const void *, size_t, int);
extern void dmac_unmap_area(const void *, size_t, int);
extern void dmac_flush_range(const void *, const void *);
#endif
......@@ -331,12 +317,8 @@ static inline void outer_flush_range(unsigned long start, unsigned long end)
* processes address space. Really, we want to allow our "user
* space" model to handle this.
*/
#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
do { \
memcpy(dst, src, len); \
flush_ptrace_access(vma, page, vaddr, dst, len, 1);\
} while (0)
extern void copy_to_user_page(struct vm_area_struct *, struct page *,
unsigned long, void *, const void *, unsigned long);
#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
do { \
memcpy(dst, src, len); \
......@@ -370,17 +352,6 @@ vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsig
}
}
static inline void
vivt_flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
unsigned long uaddr, void *kaddr,
unsigned long len, int write)
{
if (cpumask_test_cpu(smp_processor_id(), mm_cpumask(vma->vm_mm))) {
unsigned long addr = (unsigned long)kaddr;
__cpuc_coherent_kern_range(addr, addr + len);
}
}
#ifndef CONFIG_CPU_CACHE_VIPT
#define flush_cache_mm(mm) \
vivt_flush_cache_mm(mm)
......@@ -388,15 +359,10 @@ vivt_flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
vivt_flush_cache_range(vma,start,end)
#define flush_cache_page(vma,addr,pfn) \
vivt_flush_cache_page(vma,addr,pfn)
#define flush_ptrace_access(vma,page,ua,ka,len,write) \
vivt_flush_ptrace_access(vma,page,ua,ka,len,write)
#else
extern void flush_cache_mm(struct mm_struct *mm);
extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn);
extern void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
unsigned long uaddr, void *kaddr,
unsigned long len, int write);
#endif
#define flush_cache_dup_mm(mm) flush_cache_mm(mm)
......
......@@ -27,4 +27,7 @@ struct clk_lookup *clkdev_alloc(struct clk *clk, const char *con_id,
void clkdev_add(struct clk_lookup *cl);
void clkdev_drop(struct clk_lookup *cl);
void clkdev_add_table(struct clk_lookup *, size_t);
int clk_add_alias(const char *, const char *, char *, struct device *);
#endif
......@@ -57,18 +57,58 @@ static inline dma_addr_t virt_to_dma(struct device *dev, void *addr)
#endif
/*
* DMA-consistent mapping functions. These allocate/free a region of
* uncached, unwrite-buffered mapped memory space for use with DMA
* devices. This is the "generic" version. The PCI specific version
* is in pci.h
* The DMA API is built upon the notion of "buffer ownership". A buffer
* is either exclusively owned by the CPU (and therefore may be accessed
* by it) or exclusively owned by the DMA device. These helper functions
* represent the transitions between these two ownership states.
*
* Note: Drivers should NOT use this function directly, as it will break
* platforms with CONFIG_DMABOUNCE.
* Use the driver DMA support - see dma-mapping.h (dma_sync_*)
* Note, however, that on later ARMs, this notion does not work due to
* speculative prefetches. We model our approach on the assumption that
* the CPU does do speculative prefetches, which means we clean caches
* before transfers and delay cache invalidation until transfer completion.
*
* Private support functions: these are not part of the API and are
* liable to change. Drivers must not use these.
*/
extern void dma_cache_maint(const void *kaddr, size_t size, int rw);
extern void dma_cache_maint_page(struct page *page, unsigned long offset,
size_t size, int rw);
static inline void __dma_single_cpu_to_dev(const void *kaddr, size_t size,
enum dma_data_direction dir)
{
extern void ___dma_single_cpu_to_dev(const void *, size_t,
enum dma_data_direction);
if (!arch_is_coherent())
___dma_single_cpu_to_dev(kaddr, size, dir);
}
static inline void __dma_single_dev_to_cpu(const void *kaddr, size_t size,
enum dma_data_direction dir)
{
extern void ___dma_single_dev_to_cpu(const void *, size_t,
enum dma_data_direction);
if (!arch_is_coherent())
___dma_single_dev_to_cpu(kaddr, size, dir);
}
static inline void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
size_t size, enum dma_data_direction dir)
{
extern void ___dma_page_cpu_to_dev(struct page *, unsigned long,
size_t, enum dma_data_direction);
if (!arch_is_coherent())
___dma_page_cpu_to_dev(page, off, size, dir);
}
static inline void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
size_t size, enum dma_data_direction dir)
{
extern void ___dma_page_dev_to_cpu(struct page *, unsigned long,
size_t, enum dma_data_direction);
if (!arch_is_coherent())
___dma_page_dev_to_cpu(page, off, size, dir);
}
/*
* Return whether the given device DMA address mask can be supported
......@@ -304,8 +344,7 @@ static inline dma_addr_t dma_map_single(struct device *dev, void *cpu_addr,
{
BUG_ON(!valid_dma_direction(dir));
if (!arch_is_coherent())
dma_cache_maint(cpu_addr, size, dir);
__dma_single_cpu_to_dev(cpu_addr, size, dir);
return virt_to_dma(dev, cpu_addr);
}
......@@ -329,8 +368,7 @@ static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
{
BUG_ON(!valid_dma_direction(dir));
if (!arch_is_coherent())
dma_cache_maint_page(page, offset, size, dir);
__dma_page_cpu_to_dev(page, offset, size, dir);
return page_to_dma(dev, page) + offset;
}
......@@ -352,7 +390,7 @@ static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
static inline void dma_unmap_single(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
/* nothing to do */
__dma_single_dev_to_cpu(dma_to_virt(dev, handle), size, dir);
}
/**
......@@ -372,7 +410,8 @@ static inline void dma_unmap_single(struct device *dev, dma_addr_t handle,
static inline void dma_unmap_page(struct device *dev, dma_addr_t handle,
size_t size, enum dma_data_direction dir)
{
/* nothing to do */
__dma_page_dev_to_cpu(dma_to_page(dev, handle), handle & ~PAGE_MASK,
size, dir);
}
#endif /* CONFIG_DMABOUNCE */
......@@ -400,7 +439,10 @@ static inline void dma_sync_single_range_for_cpu(struct device *dev,
{
BUG_ON(!valid_dma_direction(dir));
dmabounce_sync_for_cpu(dev, handle, offset, size, dir);
if (!dmabounce_sync_for_cpu(dev, handle, offset, size, dir))
return;
__dma_single_dev_to_cpu(dma_to_virt(dev, handle) + offset, size, dir);
}
static inline void dma_sync_single_range_for_device(struct device *dev,
......@@ -412,8 +454,7 @@ static inline void dma_sync_single_range_for_device(struct device *dev,
if (!dmabounce_sync_for_device(dev, handle, offset, size, dir))
return;
if (!arch_is_coherent())
dma_cache_maint(dma_to_virt(dev, handle) + offset, size, dir);
__dma_single_cpu_to_dev(dma_to_virt(dev, handle) + offset, size, dir);
}
static inline void dma_sync_single_for_cpu(struct device *dev,
......
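/*
 * Illustrative driver-side sketch (not part of this diff): the ownership
 * transitions described above are what the streaming DMA API performs on a
 * driver's behalf.  The device, buffer and length here are hypothetical.
 */
static void example_start_tx(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	/* CPU -> device: caches are cleaned before the hardware reads buf */
	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	/* ... program the DMA engine with 'handle' and wait for completion ... */

	/* device -> CPU: hand the buffer back to the CPU */
	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);
}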
......@@ -69,9 +69,16 @@ extern void __raw_readsl(const void __iomem *addr, void *data, int longlen);
/*
* __arm_ioremap takes a CPU physical address.
* __arm_ioremap_pfn takes a Page Frame Number and an offset into that page.
* The _caller variety takes a __builtin_return_address(0) value for
* /proc/vmallocinfo to use - and should only be used in non-inline functions.
*/
extern void __iomem * __arm_ioremap_pfn(unsigned long, unsigned long, size_t, unsigned int);
extern void __iomem * __arm_ioremap(unsigned long, size_t, unsigned int);
extern void __iomem *__arm_ioremap_pfn_caller(unsigned long, unsigned long,
size_t, unsigned int, void *);
extern void __iomem *__arm_ioremap_caller(unsigned long, size_t, unsigned int,
void *);
extern void __iomem *__arm_ioremap_pfn(unsigned long, unsigned long, size_t, unsigned int);
extern void __iomem *__arm_ioremap(unsigned long, size_t, unsigned int);
extern void __iounmap(volatile void __iomem *addr);
/*
......
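/*
 * Illustrative sketch (not part of this diff): a non-inline wrapper records
 * its own caller so /proc/vmallocinfo can attribute the mapping.  MT_DEVICE
 * (from asm/mach/map.h) is the usual memory type for device registers.
 */
void __iomem *example_ioremap(unsigned long phys_addr, size_t size)
{
	return __arm_ioremap_caller(phys_addr, size, MT_DEVICE,
				    __builtin_return_address(0));
}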
......@@ -46,12 +46,4 @@ struct sys_timer {
extern struct sys_timer *system_timer;
extern void timer_tick(void);
/*
* Kernel time keeping support.
*/
struct timespec;
extern int (*set_rtc)(void);
extern void save_time_delta(struct timespec *delta, struct timespec *rtc);
extern void restore_time_delta(struct timespec *delta, struct timespec *rtc);
#endif
......@@ -76,6 +76,17 @@
*/
#define IOREMAP_MAX_ORDER 24
/*
* Size of the DMA-consistent memory region. Must be a multiple of 2MB,
* between 2MB and 14MB inclusive.
*/
#ifndef CONSISTENT_DMA_SIZE
#define CONSISTENT_DMA_SIZE SZ_2M
#endif
#define CONSISTENT_END (0xffe00000UL)
#define CONSISTENT_BASE (CONSISTENT_END - CONSISTENT_DMA_SIZE)
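/*
 * Worked example (not part of this diff): with the default
 * CONSISTENT_DMA_SIZE of SZ_2M, the DMA-consistent region spans
 * CONSISTENT_BASE = 0xffe00000 - 0x200000 = 0xffc00000 up to
 * CONSISTENT_END = 0xffe00000.
 */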
#else /* CONFIG_MMU */
/*
......@@ -93,11 +104,11 @@
#endif
#ifndef PHYS_OFFSET
#define PHYS_OFFSET (CONFIG_DRAM_BASE)
#define PHYS_OFFSET UL(CONFIG_DRAM_BASE)
#endif
#ifndef END_MEM
#define END_MEM (CONFIG_DRAM_BASE + CONFIG_DRAM_SIZE)
#define END_MEM (UL(CONFIG_DRAM_BASE) + CONFIG_DRAM_SIZE)
#endif
#ifndef PAGE_OFFSET
......@@ -112,14 +123,6 @@
#endif /* !CONFIG_MMU */
/*
* Size of DMA-consistent memory region. Must be multiple of 2M,
* between 2MB and 14MB inclusive.
*/
#ifndef CONSISTENT_DMA_SIZE
#define CONSISTENT_DMA_SIZE SZ_2M
#endif
/*
* Physical vs virtual RAM address space conversion. These are
* private definitions which should NOT be used outside memory.h
......
......@@ -6,6 +6,7 @@
typedef struct {
#ifdef CONFIG_CPU_HAS_ASID
unsigned int id;
spinlock_t id_lock;
#endif
unsigned int kvm_seq;
} mm_context_t;
......
......@@ -43,12 +43,23 @@ void __check_kvm_seq(struct mm_struct *mm);
#define ASID_FIRST_VERSION (1 << ASID_BITS)
extern unsigned int cpu_last_asid;
#ifdef CONFIG_SMP
DECLARE_PER_CPU(struct mm_struct *, current_mm);
#endif
void __init_new_context(struct task_struct *tsk, struct mm_struct *mm);
void __new_context(struct mm_struct *mm);
static inline void check_context(struct mm_struct *mm)
{
/*
* This code is executed with interrupts enabled. Therefore,
* mm->context.id cannot be updated to the latest ASID version
* on a different CPU (and condition below not triggered)
* without first getting an IPI to reset the context. The
* alternative is to take a read_lock on mm->context.id_lock
* (after changing its type to rwlock_t).
*/
if (unlikely((mm->context.id ^ cpu_last_asid) >> ASID_BITS))
__new_context(mm);
......@@ -108,6 +119,10 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
__flush_icache_all();
#endif
if (!cpumask_test_and_set_cpu(cpu, mm_cpumask(next)) || prev != next) {
#ifdef CONFIG_SMP
struct mm_struct **crt_mm = &per_cpu(current_mm, cpu);
*crt_mm = next;
#endif
check_context(next);
cpu_switch_mm(next->pgd, next);
if (cache_is_vivt())
......
......@@ -117,11 +117,12 @@
#endif
struct page;
struct vm_area_struct;
struct cpu_user_fns {
void (*cpu_clear_user_highpage)(struct page *page, unsigned long vaddr);
void (*cpu_copy_user_highpage)(struct page *to, struct page *from,
unsigned long vaddr);
unsigned long vaddr, struct vm_area_struct *vma);
};
#ifdef MULTI_USER
......@@ -137,7 +138,7 @@ extern struct cpu_user_fns cpu_user;
extern void __cpu_clear_user_highpage(struct page *page, unsigned long vaddr);
extern void __cpu_copy_user_highpage(struct page *to, struct page *from,
unsigned long vaddr);
unsigned long vaddr, struct vm_area_struct *vma);
#endif
#define clear_user_highpage(page,vaddr) \
......@@ -145,7 +146,7 @@ extern void __cpu_copy_user_highpage(struct page *to, struct page *from,
#define __HAVE_ARCH_COPY_USER_HIGHPAGE
#define copy_user_highpage(to,from,vaddr,vma) \
__cpu_copy_user_highpage(to, from, vaddr)
__cpu_copy_user_highpage(to, from, vaddr, vma)
#define clear_page(page) memset((void *)(page), 0, PAGE_SIZE)
extern void copy_page(void *to, const void *from);
......
/*
* linux/arch/arm/include/asm/perf_event.h
*
* Copyright (C) 2009 picoChip Designs Ltd, Jamie Iles
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#ifndef __ARM_PERF_EVENT_H__
#define __ARM_PERF_EVENT_H__
/*
* NOP: on *most* (read: all supported) ARM platforms, the performance
* counter interrupts are regular interrupts and not an NMI. This
* means that when we receive the interrupt we can call
* perf_event_do_pending() that handles all of the work with
* interrupts enabled.
*/
static inline void
set_perf_event_pending(void)
{
}
/* ARM performance counters start from 1 (in the cp15 accesses) so use the
* same indexes here for consistency. */
#define PERF_EVENT_INDEX_OFFSET 1
#endif /* __ARM_PERF_EVENT_H__ */
......@@ -86,8 +86,8 @@ extern unsigned int kobjsize(const void *objp);
* All 32bit addresses are effectively valid for vmalloc...
* Sort of meaningless for non-VM targets.
*/
#define VMALLOC_START 0
#define VMALLOC_END 0xffffffff
#define VMALLOC_START 0UL
#define VMALLOC_END 0xffffffffUL
#define FIRST_USER_ADDRESS (0)
......
/*
* linux/arch/arm/include/asm/pmu.h
*
* Copyright (C) 2009 picoChip Designs Ltd, Jamie Iles
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#ifndef __ARM_PMU_H__
#define __ARM_PMU_H__
#ifdef CONFIG_CPU_HAS_PMU
struct pmu_irqs {
const int *irqs;
int num_irqs;
};
/**
* reserve_pmu() - reserve the hardware performance counters
*
* Reserve the hardware performance counters in the system for exclusive use.
* The 'struct pmu_irqs' for the system is returned on success, ERR_PTR()
* encoded error on failure.
*/
extern const struct pmu_irqs *
reserve_pmu(void);
/**
* release_pmu() - Relinquish control of the performance counters
*
* Release the performance counters and allow someone else to use them.
* Callers must have disabled the counters and released IRQs before calling
* this. The 'struct pmu_irqs' returned from reserve_pmu() must be passed as
* a cookie.
*/
extern int
release_pmu(const struct pmu_irqs *irqs);
/**
* init_pmu() - Initialise the PMU.
*
* Initialise the system ready for PMU enabling. This should typically set the
* IRQ affinity and nothing else. The users (oprofile/perf events etc) will do
* the actual hardware initialisation.
*/
extern int
init_pmu(void);
#else /* CONFIG_CPU_HAS_PMU */
static inline const struct pmu_irqs *
reserve_pmu(void)
{
return ERR_PTR(-ENODEV);
}
static inline int
release_pmu(const struct pmu_irqs *irqs)
{
return -ENODEV;
}
static inline int
init_pmu(void)
{
return -ENODEV;
}
#endif /* CONFIG_CPU_HAS_PMU */
#endif /* __ARM_PMU_H__ */
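/*
 * Illustrative sketch (not part of this diff): how a PMU user such as a
 * perf events backend would claim the counters through this interface.
 * IRQ request/free and counter programming are elided; needs <linux/err.h>.
 */
static const struct pmu_irqs *example_pmu_irqs;

static int example_pmu_start(void)
{
	int err;

	example_pmu_irqs = reserve_pmu();
	if (IS_ERR(example_pmu_irqs))
		return PTR_ERR(example_pmu_irqs);

	err = init_pmu();	/* sets per-CPU IRQ affinity */
	if (err)
		release_pmu(example_pmu_irqs);
	return err;
}

static void example_pmu_stop(void)
{
	release_pmu(example_pmu_irqs);
}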
......@@ -223,18 +223,6 @@ extern struct meminfo meminfo;
#define bank_phys_end(bank) ((bank)->start + (bank)->size)
#define bank_phys_size(bank) (bank)->size
/*
* Early command line parameters.
*/
struct early_params {
const char *arg;
void (*fn)(char **p);
};
#define __early_param(name,fn) \
static struct early_params __early_##fn __used \
__attribute__((__section__(".early_param.init"))) = { name, fn }
#endif /* __KERNEL__ */
#endif
......@@ -13,4 +13,9 @@ static inline int tlb_ops_need_broadcast(void)
return ((read_cpuid_ext(CPUID_EXT_MMFR3) >> 12) & 0xf) < 2;
}
static inline int cache_ops_need_broadcast(void)
{
return ((read_cpuid_ext(CPUID_EXT_MMFR3) >> 12) & 0xf) < 1;
}
#endif
......@@ -5,6 +5,22 @@
#error SMP not supported on pre-ARMv6 CPUs
#endif
static inline void dsb_sev(void)
{
#if __LINUX_ARM_ARCH__ >= 7
__asm__ __volatile__ (
"dsb\n"
"sev"
);
#elif defined(CONFIG_CPU_32v6K)
__asm__ __volatile__ (
"mcr p15, 0, %0, c7, c10, 4\n"
"sev"
: : "r" (0)
);
#endif
}
/*
* ARMv6 Spin-locking.
*
......@@ -69,13 +85,11 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
__asm__ __volatile__(
" str %1, [%0]\n"
#ifdef CONFIG_CPU_32v6K
" mcr p15, 0, %1, c7, c10, 4\n" /* DSB */
" sev"
#endif
:
: "r" (&lock->lock), "r" (0)
: "cc");
dsb_sev();
}
/*
......@@ -132,13 +146,11 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
__asm__ __volatile__(
"str %1, [%0]\n"
#ifdef CONFIG_CPU_32v6K
" mcr p15, 0, %1, c7, c10, 4\n" /* DSB */
" sev\n"
#endif
:
: "r" (&rw->lock), "r" (0)
: "cc");
dsb_sev();
}
/* write_can_lock - would write_trylock() succeed? */
......@@ -188,14 +200,12 @@ static inline void arch_read_unlock(arch_rwlock_t *rw)
" strex %1, %0, [%2]\n"
" teq %1, #0\n"
" bne 1b"
#ifdef CONFIG_CPU_32v6K
"\n cmp %0, #0\n"
" mcreq p15, 0, %0, c7, c10, 4\n"
" seveq"
#endif
: "=&r" (tmp), "=&r" (tmp2)
: "r" (&rw->lock)
: "cc");
if (tmp == 0)
dsb_sev();
}
static inline int arch_read_trylock(arch_rwlock_t *rw)
......
......@@ -73,8 +73,7 @@ extern unsigned int mem_fclk_21285;
struct pt_regs;
void die(const char *msg, struct pt_regs *regs, int err)
__attribute__((noreturn));
void die(const char *msg, struct pt_regs *regs, int err);
struct siginfo;
void arm_notify_die(const char *str, struct pt_regs *regs, struct siginfo *info,
......
......@@ -115,7 +115,8 @@ extern void iwmmxt_task_restore(struct thread_info *, void *);
extern void iwmmxt_task_release(struct thread_info *);
extern void iwmmxt_task_switch(struct thread_info *);
extern void vfp_sync_state(struct thread_info *thread);
extern void vfp_sync_hwstate(struct thread_info *);
extern void vfp_flush_hwstate(struct thread_info *);
#endif
......
......@@ -529,7 +529,8 @@ extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
* cache entries for the kernels virtual memory range are written
* back to the page.
*/
extern void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t pte);
extern void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr,
pte_t *ptep);
#endif
......
......@@ -17,6 +17,7 @@ obj-y := compat.o elf.o entry-armv.o entry-common.o irq.o \
process.o ptrace.o return_address.o setup.o signal.o \
sys_arm.o stacktrace.o time.o traps.o
obj-$(CONFIG_LEDS) += leds.o
obj-$(CONFIG_OC_ETM) += etm.o
obj-$(CONFIG_ISA_DMA_API) += dma.o
......@@ -46,6 +47,8 @@ obj-$(CONFIG_CPU_XSCALE) += xscale-cp0.o
obj-$(CONFIG_CPU_XSC3) += xscale-cp0.o
obj-$(CONFIG_CPU_MOHAWK) += xscale-cp0.o
obj-$(CONFIG_IWMMXT) += iwmmxt.o
obj-$(CONFIG_CPU_HAS_PMU) += pmu.o
obj-$(CONFIG_HW_PERF_EVENTS) += perf_event.o
AFLAGS_iwmmxt.o := -Wa,-mcpu=iwmmxt
ifneq ($(CONFIG_ARCH_EBSA110),y)
......
......@@ -12,6 +12,7 @@
*/
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <asm/mach/arch.h>
#include <asm/thread_info.h>
#include <asm/memory.h>
......@@ -112,5 +113,9 @@ int main(void)
#ifdef MULTI_PABORT
DEFINE(PROCESSOR_PABT_FUNC, offsetof(struct processor, _prefetch_abort));
#endif
BLANK();
DEFINE(DMA_BIDIRECTIONAL, DMA_BIDIRECTIONAL);
DEFINE(DMA_TO_DEVICE, DMA_TO_DEVICE);
DEFINE(DMA_FROM_DEVICE, DMA_FROM_DEVICE);
return 0;
}
......@@ -24,7 +24,7 @@
#if defined(CONFIG_CPU_V6)
.macro addruart, rx
.macro addruart, rx, tmp
.endm
.macro senduart, rd, rx
......@@ -51,7 +51,7 @@
#elif defined(CONFIG_CPU_V7)
.macro addruart, rx
.macro addruart, rx, tmp
.endm
.macro senduart, rd, rx
......@@ -71,7 +71,7 @@ wait: mrc p14, 0, pc, c0, c1, 0
#elif defined(CONFIG_CPU_XSCALE)
.macro addruart, rx
.macro addruart, rx, tmp
.endm
.macro senduart, rd, rx
......@@ -98,7 +98,7 @@ wait: mrc p14, 0, pc, c0, c1, 0
#else
.macro addruart, rx
.macro addruart, rx, tmp
.endm
.macro senduart, rd, rx
......@@ -164,7 +164,7 @@ ENDPROC(printhex2)
.ltorg
ENTRY(printascii)
addruart r3
addruart r3, r1
b 2f
1: waituart r2, r3
senduart r1, r3
......@@ -180,7 +180,7 @@ ENTRY(printascii)
ENDPROC(printascii)
ENTRY(printch)
addruart r3
addruart r3, r1
mov r1, r0
mov r0, #0
b 1b
......
/*
* LED support code, ripped out of arch/arm/kernel/time.c
*
* Copyright (C) 1994-2001 Russell King
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/module.h>
#include <linux/init.h>
#include <linux/sysdev.h>
#include <asm/leds.h>
static void dummy_leds_event(led_event_t evt)
{
}
void (*leds_event)(led_event_t) = dummy_leds_event;
struct leds_evt_name {
const char name[8];
int on;
int off;
};
static const struct leds_evt_name evt_names[] = {
{ "amber", led_amber_on, led_amber_off },
{ "blue", led_blue_on, led_blue_off },
{ "green", led_green_on, led_green_off },
{ "red", led_red_on, led_red_off },
};
static ssize_t leds_store(struct sys_device *dev,
struct sysdev_attribute *attr,
const char *buf, size_t size)
{
int ret = -EINVAL, len = strcspn(buf, " ");
if (len > 0 && buf[len] == '\0')
len--;
if (strncmp(buf, "claim", len) == 0) {
leds_event(led_claim);
ret = size;
} else if (strncmp(buf, "release", len) == 0) {
leds_event(led_release);
ret = size;
} else {
int i;
for (i = 0; i < ARRAY_SIZE(evt_names); i++) {
if (strlen(evt_names[i].name) != len ||
strncmp(buf, evt_names[i].name, len) != 0)
continue;
if (strncmp(buf+len, " on", 3) == 0) {
leds_event(evt_names[i].on);
ret = size;
} else if (strncmp(buf+len, " off", 4) == 0) {
leds_event(evt_names[i].off);
ret = size;
}
break;
}
}
return ret;
}
static SYSDEV_ATTR(event, 0200, NULL, leds_store);
static int leds_suspend(struct sys_device *dev, pm_message_t state)
{
leds_event(led_stop);
return 0;
}
static int leds_resume(struct sys_device *dev)
{
leds_event(led_start);
return 0;
}
static int leds_shutdown(struct sys_device *dev)
{
leds_event(led_halted);
return 0;
}
static struct sysdev_class leds_sysclass = {
.name = "leds",
.shutdown = leds_shutdown,
.suspend = leds_suspend,
.resume = leds_resume,
};
static struct sys_device leds_device = {
.id = 0,
.cls = &leds_sysclass,
};
static int __init leds_init(void)
{
int ret;
ret = sysdev_class_register(&leds_sysclass);
if (ret == 0)
ret = sysdev_register(&leds_device);
if (ret == 0)
ret = sysdev_create_file(&leds_device, &attr_event);
return ret;
}
device_initcall(leds_init);
EXPORT_SYMBOL(leds_event);
/*
* linux/arch/arm/kernel/pmu.c
*
* Copyright (C) 2009 picoChip Designs Ltd, Jamie Iles
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#include <linux/cpumask.h>
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <asm/pmu.h>
/*
* Define the IRQs for the system. We could use something like a platform
* device but that seems fairly heavyweight for this. Also, the performance
* counters can't be removed or hotplugged.
*
* Ordering is important: init_pmu() will use the ordering to set the affinity
* to the corresponding core. e.g. the first interrupt will go to cpu 0, the
* second goes to cpu 1 etc.
*/
static const int irqs[] = {
#if defined(CONFIG_ARCH_OMAP2)
3,
#elif defined(CONFIG_ARCH_BCMRING)
IRQ_PMUIRQ,
#elif defined(CONFIG_MACH_REALVIEW_EB)
IRQ_EB11MP_PMU_CPU0,
IRQ_EB11MP_PMU_CPU1,
IRQ_EB11MP_PMU_CPU2,
IRQ_EB11MP_PMU_CPU3,
#elif defined(CONFIG_ARCH_OMAP3)
INT_34XX_BENCH_MPU_EMUL,
#elif defined(CONFIG_ARCH_IOP32X)
IRQ_IOP32X_CORE_PMU,
#elif defined(CONFIG_ARCH_IOP33X)
IRQ_IOP33X_CORE_PMU,
#elif defined(CONFIG_ARCH_PXA)
IRQ_PMU,
#endif
};
static const struct pmu_irqs pmu_irqs = {
.irqs = irqs,
.num_irqs = ARRAY_SIZE(irqs),
};
static volatile long pmu_lock;
const struct pmu_irqs *
reserve_pmu(void)
{
return test_and_set_bit_lock(0, &pmu_lock) ? ERR_PTR(-EBUSY) :
&pmu_irqs;
}
EXPORT_SYMBOL_GPL(reserve_pmu);
int
release_pmu(const struct pmu_irqs *irqs)
{
if (WARN_ON(irqs != &pmu_irqs))
return -EINVAL;
clear_bit_unlock(0, &pmu_lock);
return 0;
}
EXPORT_SYMBOL_GPL(release_pmu);
static int
set_irq_affinity(int irq,
unsigned int cpu)
{
#ifdef CONFIG_SMP
int err = irq_set_affinity(irq, cpumask_of(cpu));
if (err)
pr_warning("unable to set irq affinity (irq=%d, cpu=%u)\n",
irq, cpu);
return err;
#else
return 0;
#endif
}
int
init_pmu(void)
{
int i, err = 0;
for (i = 0; i < pmu_irqs.num_irqs; ++i) {
err = set_irq_affinity(pmu_irqs.irqs[i], i);
if (err)
break;
}
return err;
}
EXPORT_SYMBOL_GPL(init_pmu);
......@@ -499,10 +499,41 @@ static struct undef_hook thumb_break_hook = {
.fn = break_trap,
};
static int thumb2_break_trap(struct pt_regs *regs, unsigned int instr)
{
unsigned int instr2;
void __user *pc;
/* Check the second half of the instruction. */
pc = (void __user *)(instruction_pointer(regs) + 2);
if (processor_mode(regs) == SVC_MODE) {
instr2 = *(u16 *) pc;
} else {
get_user(instr2, (u16 __user *)pc);
}
if (instr2 == 0xa000) {
ptrace_break(current, regs);
return 0;
} else {
return 1;
}
}
static struct undef_hook thumb2_break_hook = {
.instr_mask = 0xffff,
.instr_val = 0xf7f0,
.cpsr_mask = PSR_T_BIT,
.cpsr_val = PSR_T_BIT,
.fn = thumb2_break_trap,
};
static int __init ptrace_break_init(void)
{
register_undef_hook(&arm_break_hook);
register_undef_hook(&thumb_break_hook);
register_undef_hook(&thumb2_break_hook);
return 0;
}
......@@ -669,7 +700,7 @@ static int ptrace_getvfpregs(struct task_struct *tsk, void __user *data)
union vfp_state *vfp = &thread->vfpstate;
struct user_vfp __user *ufp = data;
vfp_sync_state(thread);
vfp_sync_hwstate(thread);
/* copy the floating point registers */
if (copy_to_user(&ufp->fpregs, &vfp->hard.fpregs,
......@@ -692,7 +723,7 @@ static int ptrace_setvfpregs(struct task_struct *tsk, void __user *data)
union vfp_state *vfp = &thread->vfpstate;
struct user_vfp __user *ufp = data;
vfp_sync_state(thread);
vfp_sync_hwstate(thread);
/* copy the floating point registers */
if (copy_from_user(&vfp->hard.fpregs, &ufp->fpregs,
......@@ -703,6 +734,8 @@ static int ptrace_setvfpregs(struct task_struct *tsk, void __user *data)
if (get_user(vfp->hard.fpscr, &ufp->fpscr))
return -EFAULT;
vfp_flush_hwstate(thread);
return 0;
}
#endif
......@@ -712,26 +745,10 @@ long arch_ptrace(struct task_struct *child, long request, long addr, long data)
int ret;
switch (request) {
/*
* read word at location "addr" in the child process.
*/
case PTRACE_PEEKTEXT:
case PTRACE_PEEKDATA:
ret = generic_ptrace_peekdata(child, addr, data);
break;
case PTRACE_PEEKUSR:
ret = ptrace_read_user(child, addr, (unsigned long __user *)data);
break;
/*
* write the word at location addr.
*/
case PTRACE_POKETEXT:
case PTRACE_POKEDATA:
ret = generic_ptrace_pokedata(child, addr, data);
break;
case PTRACE_POKEUSR:
ret = ptrace_write_user(child, addr, data);
break;
......
......@@ -24,6 +24,7 @@
#include <linux/interrupt.h>
#include <linux/smp.h>
#include <linux/fs.h>
#include <linux/proc_fs.h>
#include <asm/unified.h>
#include <asm/cpu.h>
......@@ -118,7 +119,7 @@ EXPORT_SYMBOL(elf_platform);
static const char *cpu_name;
static const char *machine_name;
static char __initdata command_line[COMMAND_LINE_SIZE];
static char __initdata cmd_line[COMMAND_LINE_SIZE];
static char default_command_line[COMMAND_LINE_SIZE] __initdata = CONFIG_CMDLINE;
static union { char c[4]; unsigned long l; } endian_test __initdata = { { 'l', '?', '?', 'b' } };
......@@ -418,10 +419,11 @@ static int __init arm_add_memory(unsigned long start, unsigned long size)
* Pick out the memory size. We look for mem=size@start,
* where start and size are "size[KkMm]"
*/
static void __init early_mem(char **p)
static int __init early_mem(char *p)
{
static int usermem __initdata = 0;
unsigned long size, start;
char *endp;
/*
* If the user specifies memory size, we
......@@ -434,52 +436,15 @@ static void __init early_mem(char **p)
}
start = PHYS_OFFSET;
size = memparse(*p, p);
if (**p == '@')
start = memparse(*p + 1, p);
size = memparse(p, &endp);
if (*endp == '@')
start = memparse(endp + 1, NULL);
arm_add_memory(start, size);
}
__early_param("mem=", early_mem);
/*
* Initial parsing of the command line.
*/
static void __init parse_cmdline(char **cmdline_p, char *from)
{
char c = ' ', *to = command_line;
int len = 0;
for (;;) {
if (c == ' ') {
extern struct early_params __early_begin, __early_end;
struct early_params *p;
for (p = &__early_begin; p < &__early_end; p++) {
int arglen = strlen(p->arg);
if (memcmp(from, p->arg, arglen) == 0) {
if (to != command_line)
to -= 1;
from += arglen;
p->fn(&from);
while (*from != ' ' && *from != '\0')
from++;
break;
}
}
}
c = *from++;
if (!c)
break;
if (COMMAND_LINE_SIZE <= ++len)
break;
*to++ = c;
}
*to = '\0';
*cmdline_p = command_line;
return 0;
}
early_param("mem", early_mem);
static void __init
setup_ramdisk(int doload, int prompt, int image_start, unsigned int rd_sz)
......@@ -740,9 +705,15 @@ void __init setup_arch(char **cmdline_p)
init_mm.end_data = (unsigned long) _edata;
init_mm.brk = (unsigned long) _end;
memcpy(boot_command_line, from, COMMAND_LINE_SIZE);
boot_command_line[COMMAND_LINE_SIZE-1] = '\0';
parse_cmdline(cmdline_p, from);
/* parse_early_param needs a boot_command_line */
strlcpy(boot_command_line, from, COMMAND_LINE_SIZE);
/* populate cmd_line too for later use, preserving boot_command_line */
strlcpy(cmd_line, boot_command_line, COMMAND_LINE_SIZE);
*cmdline_p = cmd_line;
parse_early_param();
paging_init(mdesc);
request_standard_resources(&meminfo, mdesc);
......@@ -783,9 +754,21 @@ static int __init topology_init(void)
return 0;
}
subsys_initcall(topology_init);
#ifdef CONFIG_HAVE_PROC_CPU
static int __init proc_cpu_init(void)
{
struct proc_dir_entry *res;
res = proc_mkdir("cpu", NULL);
if (!res)
return -ENOMEM;
return 0;
}
fs_initcall(proc_cpu_init);
#endif
static const char *hwcap_str[] = {
"swp",
"half",
......
......@@ -10,11 +10,6 @@
*
* This file contains the ARM-specific time handling details:
* reading the RTC at bootup, etc...
*
* 1994-07-02 Alan Modra
* fixed set_rtc_mmss, fixed time.year for >= 2000, new mktime
* 1998-12-20 Updated NTP code according to technical memorandum Jan '96
* "A Kernel Model for Precision Timekeeping" by Dave Mills
*/
#include <linux/module.h>
#include <linux/kernel.h>
......@@ -77,11 +72,6 @@ unsigned long profile_pc(struct pt_regs *regs)
EXPORT_SYMBOL(profile_pc);
#endif
/*
* hook for setting the RTC's idea of the current time.
*/
int (*set_rtc)(void);
#ifndef CONFIG_GENERIC_TIME
static unsigned long dummy_gettimeoffset(void)
{
......@@ -89,140 +79,6 @@ static unsigned long dummy_gettimeoffset(void)
}
#endif
static unsigned long next_rtc_update;
/*
* If we have an externally synchronized linux clock, then update
* CMOS clock accordingly every ~11 minutes. set_rtc() has to be
* called as close as possible to 500 ms before the new second
* starts.
*/
static inline void do_set_rtc(void)
{
if (!ntp_synced() || set_rtc == NULL)
return;
if (next_rtc_update &&
time_before((unsigned long)xtime.tv_sec, next_rtc_update))
return;
if (xtime.tv_nsec < 500000000 - ((unsigned) tick_nsec >> 1) &&
xtime.tv_nsec >= 500000000 + ((unsigned) tick_nsec >> 1))
return;
if (set_rtc())
/*
* rtc update failed. Try again in 60s
*/
next_rtc_update = xtime.tv_sec + 60;
else
next_rtc_update = xtime.tv_sec + 660;
}
#ifdef CONFIG_LEDS
static void dummy_leds_event(led_event_t evt)
{
}
void (*leds_event)(led_event_t) = dummy_leds_event;
struct leds_evt_name {
const char name[8];
int on;
int off;
};
static const struct leds_evt_name evt_names[] = {
{ "amber", led_amber_on, led_amber_off },
{ "blue", led_blue_on, led_blue_off },
{ "green", led_green_on, led_green_off },
{ "red", led_red_on, led_red_off },
};
static ssize_t leds_store(struct sys_device *dev,
struct sysdev_attribute *attr,
const char *buf, size_t size)
{
int ret = -EINVAL, len = strcspn(buf, " ");
if (len > 0 && buf[len] == '\0')
len--;
if (strncmp(buf, "claim", len) == 0) {
leds_event(led_claim);
ret = size;
} else if (strncmp(buf, "release", len) == 0) {
leds_event(led_release);
ret = size;
} else {
int i;
for (i = 0; i < ARRAY_SIZE(evt_names); i++) {
if (strlen(evt_names[i].name) != len ||
strncmp(buf, evt_names[i].name, len) != 0)
continue;
if (strncmp(buf+len, " on", 3) == 0) {
leds_event(evt_names[i].on);
ret = size;
} else if (strncmp(buf+len, " off", 4) == 0) {
leds_event(evt_names[i].off);
ret = size;
}
break;
}
}
return ret;
}
static SYSDEV_ATTR(event, 0200, NULL, leds_store);
static int leds_suspend(struct sys_device *dev, pm_message_t state)
{
leds_event(led_stop);
return 0;
}
static int leds_resume(struct sys_device *dev)
{
leds_event(led_start);
return 0;
}
static int leds_shutdown(struct sys_device *dev)
{
leds_event(led_halted);
return 0;
}
static struct sysdev_class leds_sysclass = {
.name = "leds",
.shutdown = leds_shutdown,
.suspend = leds_suspend,
.resume = leds_resume,
};
static struct sys_device leds_device = {
.id = 0,
.cls = &leds_sysclass,
};
static int __init leds_init(void)
{
int ret;
ret = sysdev_class_register(&leds_sysclass);
if (ret == 0)
ret = sysdev_register(&leds_device);
if (ret == 0)
ret = sysdev_create_file(&leds_device, &attr_event);
return ret;
}
device_initcall(leds_init);
EXPORT_SYMBOL(leds_event);
#endif
#ifdef CONFIG_LEDS_TIMER
static inline void do_leds(void)
{
......@@ -295,39 +151,6 @@ int do_settimeofday(struct timespec *tv)
EXPORT_SYMBOL(do_settimeofday);
#endif /* !CONFIG_GENERIC_TIME */
/**
* save_time_delta - Save the offset between system time and RTC time
* @delta: pointer to timespec to store delta
* @rtc: pointer to timespec for current RTC time
*
* Return a delta between the system time and the RTC time, such
* that system time can be restored later with restore_time_delta()
*/
void save_time_delta(struct timespec *delta, struct timespec *rtc)
{
set_normalized_timespec(delta,
xtime.tv_sec - rtc->tv_sec,
xtime.tv_nsec - rtc->tv_nsec);
}
EXPORT_SYMBOL(save_time_delta);
/**
* restore_time_delta - Restore the current system time
* @delta: delta returned by save_time_delta()
* @rtc: pointer to timespec for current RTC time
*/
void restore_time_delta(struct timespec *delta, struct timespec *rtc)
{
struct timespec ts;
set_normalized_timespec(&ts,
delta->tv_sec + rtc->tv_sec,
delta->tv_nsec + rtc->tv_nsec);
do_settimeofday(&ts);
}
EXPORT_SYMBOL(restore_time_delta);
#ifndef CONFIG_GENERIC_CLOCKEVENTS
/*
* Kernel system timer support.
......@@ -336,7 +159,6 @@ void timer_tick(void)
{
profile_tick(CPU_PROFILING);
do_leds();
do_set_rtc();
write_seqlock(&xtime_lock);
do_timer(1);
write_sequnlock(&xtime_lock);
......
......@@ -12,15 +12,17 @@
* 'linux/arch/arm/lib/traps.S'. Mostly a debugging aid, but will probably
* kill the offending process.
*/
#include <linux/module.h>
#include <linux/signal.h>
#include <linux/spinlock.h>
#include <linux/personality.h>
#include <linux/kallsyms.h>
#include <linux/delay.h>
#include <linux/spinlock.h>
#include <linux/uaccess.h>
#include <linux/hardirq.h>
#include <linux/kdebug.h>
#include <linux/module.h>
#include <linux/kexec.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/uaccess.h>
#include <asm/atomic.h>
#include <asm/cacheflush.h>
......@@ -224,14 +226,21 @@ void show_stack(struct task_struct *tsk, unsigned long *sp)
#define S_SMP ""
#endif
static void __die(const char *str, int err, struct thread_info *thread, struct pt_regs *regs)
static int __die(const char *str, int err, struct thread_info *thread, struct pt_regs *regs)
{
struct task_struct *tsk = thread->task;
static int die_counter;
int ret;
printk(KERN_EMERG "Internal error: %s: %x [#%d]" S_PREEMPT S_SMP "\n",
str, err, ++die_counter);
sysfs_printk_last_file();
/* trap and error numbers are mostly meaningless on ARM */
ret = notify_die(DIE_OOPS, str, regs, err, tsk->thread.trap_no, SIGSEGV);
if (ret == NOTIFY_STOP)
return ret;
print_modules();
__show_regs(regs);
printk(KERN_EMERG "Process %.*s (pid: %d, stack limit = 0x%p)\n",
......@@ -243,6 +252,8 @@ static void __die(const char *str, int err, struct thread_info *thread, struct p
dump_backtrace(regs, tsk);
dump_instr(KERN_EMERG, regs);
}
return ret;
}
DEFINE_SPINLOCK(die_lock);
......@@ -250,16 +261,21 @@ DEFINE_SPINLOCK(die_lock);
/*
* This function is protected against re-entrancy.
*/
NORET_TYPE void die(const char *str, struct pt_regs *regs, int err)
void die(const char *str, struct pt_regs *regs, int err)
{
struct thread_info *thread = current_thread_info();
int ret;
oops_enter();
spin_lock_irq(&die_lock);
console_verbose();
bust_spinlocks(1);
__die(str, err, thread, regs);
ret = __die(str, err, thread, regs);
if (regs && kexec_should_crash(thread->task))
crash_kexec(regs);
bust_spinlocks(0);
add_taint(TAINT_DIE);
spin_unlock_irq(&die_lock);
......@@ -267,10 +283,9 @@ NORET_TYPE void die(const char *str, struct pt_regs *regs, int err)
if (in_interrupt())
panic("Fatal exception in interrupt");
if (panic_on_oops)
panic("Fatal exception");
if (ret != NOTIFY_STOP)
do_exit(SIGSEGV);
}
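/*
 * Not part of this patch: since __die() now propagates the notify_die()
 * result, a die notifier can observe (or, via NOTIFY_STOP, suppress) the
 * oops handling above.  A minimal observer sketch using the generic
 * register_die_notifier()/DIE_OOPS interface; the module name and the
 * messages are illustrative only.
 */
#include <linux/kdebug.h>
#include <linux/module.h>
#include <linux/notifier.h>

static int example_die_handler(struct notifier_block *nb,
			       unsigned long val, void *data)
{
	struct die_args *args = data;

	if (val == DIE_OOPS)
		pr_emerg("example: oops \"%s\" (err %ld)\n",
			 args->str, args->err);

	/*
	 * Returning NOTIFY_STOP here would make __die() skip the register
	 * and backtrace dump, and die() skip do_exit(SIGSEGV), per the
	 * hunks above.  NOTIFY_DONE keeps the previous behaviour.
	 */
	return NOTIFY_DONE;
}

static struct notifier_block example_die_nb = {
	.notifier_call	= example_die_handler,
};

static int __init example_die_init(void)
{
	return register_die_notifier(&example_die_nb);
}
module_init(example_die_init);
MODULE_LICENSE("GPL");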
......
......@@ -43,10 +43,6 @@ SECTIONS
INIT_SETUP(16)
__early_begin = .;
*(.early_param.init)
__early_end = .;
INIT_CALLS
CON_INITCALL
SECURITY_INITCALL
......
......@@ -10,7 +10,7 @@
*/
#include "hardware.h"
.macro addruart,rx
.macro addruart, rx, tmp
mrc p15, 0, \rx, c1, c0
tst \rx, #1 @ MMU enabled?
moveq \rx, #0x80000000 @ physical
......
......@@ -89,6 +89,12 @@ config ARCH_AT91CAP9
select GENERIC_CLOCKEVENTS
select HAVE_FB_ATMEL
config ARCH_AT572D940HF
bool "AT572D940HF"
select CPU_ARM926T
select GENERIC_TIME
select GENERIC_CLOCKEVENTS
config ARCH_AT91X40
bool "AT91x40"
......@@ -390,6 +396,23 @@ endif
# ----------------------------------------------------------
if ARCH_AT572D940HF
comment "AT572D940HF Board Type"
config MACH_AT572D940HFEB
bool "AT572D940HF-EK"
depends on ARCH_AT572D940HF
select HAVE_AT91_DATAFLASH_CARD
select HAVE_NAND_ATMEL_BUSWIDTH_16
help
Select this if you are using Atmel's AT572D940HF-EK evaluation kit.
<http://www.atmel.com/products/diopsis/default.asp>
endif
# ----------------------------------------------------------
if ARCH_AT91X40
comment "AT91X40 Board Type"
......
......@@ -19,6 +19,7 @@ obj-$(CONFIG_ARCH_AT91SAM9RL) += at91sam9rl.o at91sam926x_time.o at91sam9rl_devi
obj-$(CONFIG_ARCH_AT91SAM9G20) += at91sam9260.o at91sam926x_time.o at91sam9260_devices.o sam9_smc.o
obj-$(CONFIG_ARCH_AT91SAM9G45) += at91sam9g45.o at91sam926x_time.o at91sam9g45_devices.o sam9_smc.o
obj-$(CONFIG_ARCH_AT91CAP9) += at91cap9.o at91sam926x_time.o at91cap9_devices.o sam9_smc.o
obj-$(CONFIG_ARCH_AT572D940HF) += at572d940hf.o at91sam926x_time.o at572d940hf_devices.o sam9_smc.o
obj-$(CONFIG_ARCH_AT91X40) += at91x40.o at91x40_time.o
# AT91RM9200 board-specific support
......@@ -69,6 +70,9 @@ obj-$(CONFIG_MACH_AT91SAM9G45EKES) += board-sam9m10g45ek.o
# AT91CAP9 board-specific support
obj-$(CONFIG_MACH_AT91CAP9ADK) += board-cap9adk.o
# AT572D940HF board-specific support
obj-$(CONFIG_MACH_AT572D940HFEB) += board-at572d940hf_ek.o
# AT91X40 board-specific support
obj-$(CONFIG_MACH_AT91EB01) += board-eb01.o
......
/*
* arch/arm/mach-at91/at572d940hf.c
*
* Antonio R. Costa <costa.antonior@gmail.com>
* Copyright (C) 2008 Atmel
*
* Copyright (C) 2005 SAN People
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*/
#include <linux/module.h>
#include <asm/mach/irq.h>
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
#include <mach/at572d940hf.h>
#include <mach/at91_pmc.h>
#include <mach/at91_rstc.h>
#include "generic.h"
#include "clock.h"
static struct map_desc at572d940hf_io_desc[] __initdata = {
{
.virtual = AT91_VA_BASE_SYS,
.pfn = __phys_to_pfn(AT91_BASE_SYS),
.length = SZ_16K,
.type = MT_DEVICE,
}, {
.virtual = AT91_IO_VIRT_BASE - AT572D940HF_SRAM_SIZE,
.pfn = __phys_to_pfn(AT572D940HF_SRAM_BASE),
.length = AT572D940HF_SRAM_SIZE,
.type = MT_DEVICE,
},
};
/* --------------------------------------------------------------------
* Clocks
* -------------------------------------------------------------------- */
/*
* The peripheral clocks.
*/
static struct clk pioA_clk = {
.name = "pioA_clk",
.pmc_mask = 1 << AT572D940HF_ID_PIOA,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk pioB_clk = {
.name = "pioB_clk",
.pmc_mask = 1 << AT572D940HF_ID_PIOB,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk pioC_clk = {
.name = "pioC_clk",
.pmc_mask = 1 << AT572D940HF_ID_PIOC,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk macb_clk = {
.name = "macb_clk",
.pmc_mask = 1 << AT572D940HF_ID_EMAC,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk usart0_clk = {
.name = "usart0_clk",
.pmc_mask = 1 << AT572D940HF_ID_US0,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk usart1_clk = {
.name = "usart1_clk",
.pmc_mask = 1 << AT572D940HF_ID_US1,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk usart2_clk = {
.name = "usart2_clk",
.pmc_mask = 1 << AT572D940HF_ID_US2,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk mmc_clk = {
.name = "mci_clk",
.pmc_mask = 1 << AT572D940HF_ID_MCI,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk udc_clk = {
.name = "udc_clk",
.pmc_mask = 1 << AT572D940HF_ID_UDP,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk twi0_clk = {
.name = "twi0_clk",
.pmc_mask = 1 << AT572D940HF_ID_TWI0,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk spi0_clk = {
.name = "spi0_clk",
.pmc_mask = 1 << AT572D940HF_ID_SPI0,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk spi1_clk = {
.name = "spi1_clk",
.pmc_mask = 1 << AT572D940HF_ID_SPI1,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk ssc0_clk = {
.name = "ssc0_clk",
.pmc_mask = 1 << AT572D940HF_ID_SSC0,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk ssc1_clk = {
.name = "ssc1_clk",
.pmc_mask = 1 << AT572D940HF_ID_SSC1,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk ssc2_clk = {
.name = "ssc2_clk",
.pmc_mask = 1 << AT572D940HF_ID_SSC2,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk tc0_clk = {
.name = "tc0_clk",
.pmc_mask = 1 << AT572D940HF_ID_TC0,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk tc1_clk = {
.name = "tc1_clk",
.pmc_mask = 1 << AT572D940HF_ID_TC1,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk tc2_clk = {
.name = "tc2_clk",
.pmc_mask = 1 << AT572D940HF_ID_TC2,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk ohci_clk = {
.name = "ohci_clk",
.pmc_mask = 1 << AT572D940HF_ID_UHP,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk ssc3_clk = {
.name = "ssc3_clk",
.pmc_mask = 1 << AT572D940HF_ID_SSC3,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk twi1_clk = {
.name = "twi1_clk",
.pmc_mask = 1 << AT572D940HF_ID_TWI1,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk can0_clk = {
.name = "can0_clk",
.pmc_mask = 1 << AT572D940HF_ID_CAN0,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk can1_clk = {
.name = "can1_clk",
.pmc_mask = 1 << AT572D940HF_ID_CAN1,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk mAgicV_clk = {
.name = "mAgicV_clk",
.pmc_mask = 1 << AT572D940HF_ID_MSIRQ0,
.type = CLK_TYPE_PERIPHERAL,
};
static struct clk *periph_clocks[] __initdata = {
&pioA_clk,
&pioB_clk,
&pioC_clk,
&macb_clk,
&usart0_clk,
&usart1_clk,
&usart2_clk,
&mmc_clk,
&udc_clk,
&twi0_clk,
&spi0_clk,
&spi1_clk,
&ssc0_clk,
&ssc1_clk,
&ssc2_clk,
&tc0_clk,
&tc1_clk,
&tc2_clk,
&ohci_clk,
&ssc3_clk,
&twi1_clk,
&can0_clk,
&can1_clk,
&mAgicV_clk,
/* irq0 .. irq2 */
};
/*
* The five programmable clocks.
* You must configure pin multiplexing to bring these signals out.
*/
static struct clk pck0 = {
.name = "pck0",
.pmc_mask = AT91_PMC_PCK0,
.type = CLK_TYPE_PROGRAMMABLE,
.id = 0,
};
static struct clk pck1 = {
.name = "pck1",
.pmc_mask = AT91_PMC_PCK1,
.type = CLK_TYPE_PROGRAMMABLE,
.id = 1,
};
static struct clk pck2 = {
.name = "pck2",
.pmc_mask = AT91_PMC_PCK2,
.type = CLK_TYPE_PROGRAMMABLE,
.id = 2,
};
static struct clk pck3 = {
.name = "pck3",
.pmc_mask = AT91_PMC_PCK3,
.type = CLK_TYPE_PROGRAMMABLE,
.id = 3,
};
static struct clk mAgicV_mem_clk = {
.name = "mAgicV_mem_clk",
.pmc_mask = AT91_PMC_PCK4,
.type = CLK_TYPE_PROGRAMMABLE,
.id = 4,
};
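/*
 * Not part of this patch: as the comment above notes, the programmable
 * clocks only reach a pad once the pin is muxed to its PCK function.
 * A board-side sketch; the pin (PB29), the parent ("plla") and the rate
 * are assumptions for illustration, not taken from the AT572D940HF
 * datasheet.
 */
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/init.h>
#include <mach/gpio.h>

static int __init example_pck0_out(void)
{
	struct clk *pck0, *plla;

	at91_set_B_periph(AT91_PIN_PB29, 0);	/* mux the pad, no pull-up */

	pck0 = clk_get(NULL, "pck0");
	plla = clk_get(NULL, "plla");
	if (IS_ERR(pck0) || IS_ERR(plla))
		return -ENODEV;

	clk_set_parent(pck0, plla);	/* only programmable clocks may re-parent */
	clk_set_rate(pck0, 48000000);	/* nearest power-of-two prescale of PLLA */
	clk_enable(pck0);
	return 0;
}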
/* HClocks */
static struct clk hck0 = {
.name = "hck0",
.pmc_mask = AT91_PMC_HCK0,
.type = CLK_TYPE_SYSTEM,
.id = 0,
};
static struct clk hck1 = {
.name = "hck1",
.pmc_mask = AT91_PMC_HCK1,
.type = CLK_TYPE_SYSTEM,
.id = 1,
};
static void __init at572d940hf_register_clocks(void)
{
int i;
for (i = 0; i < ARRAY_SIZE(periph_clocks); i++)
clk_register(periph_clocks[i]);
clk_register(&pck0);
clk_register(&pck1);
clk_register(&pck2);
clk_register(&pck3);
clk_register(&mAgicV_mem_clk);
clk_register(&hck0);
clk_register(&hck1);
}
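/*
 * Not part of this patch: a driver picks up one of the peripheral clocks
 * registered above through the clk API that mach-at91/clock.c implements,
 * matched by the name string.  SSC0 is used as an arbitrary example.
 */
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/kernel.h>

static int example_ssc0_clk_on(void)
{
	struct clk *clk;

	clk = clk_get(NULL, "ssc0_clk");	/* looked up by clk->name */
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	clk_enable(clk);	/* gates the SSC0 peripheral clock on in the PMC */
	pr_info("ssc0_clk running at %lu Hz\n", clk_get_rate(clk));

	clk_disable(clk);
	clk_put(clk);
	return 0;
}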
/* --------------------------------------------------------------------
* GPIO
* -------------------------------------------------------------------- */
static struct at91_gpio_bank at572d940hf_gpio[] = {
{
.id = AT572D940HF_ID_PIOA,
.offset = AT91_PIOA,
.clock = &pioA_clk,
}, {
.id = AT572D940HF_ID_PIOB,
.offset = AT91_PIOB,
.clock = &pioB_clk,
}, {
.id = AT572D940HF_ID_PIOC,
.offset = AT91_PIOC,
.clock = &pioC_clk,
}
};
static void at572d940hf_reset(void)
{
at91_sys_write(AT91_RSTC_CR, AT91_RSTC_KEY | AT91_RSTC_PROCRST | AT91_RSTC_PERRST);
}
/* --------------------------------------------------------------------
* AT572D940HF processor initialization
* -------------------------------------------------------------------- */
void __init at572d940hf_initialize(unsigned long main_clock)
{
/* Map peripherals */
iotable_init(at572d940hf_io_desc, ARRAY_SIZE(at572d940hf_io_desc));
at91_arch_reset = at572d940hf_reset;
at91_extern_irq = (1 << AT572D940HF_ID_IRQ0) | (1 << AT572D940HF_ID_IRQ1)
| (1 << AT572D940HF_ID_IRQ2);
/* Init clock subsystem */
at91_clock_init(main_clock);
/* Register the processor-specific clocks */
at572d940hf_register_clocks();
/* Register GPIO subsystem */
at91_gpio_init(at572d940hf_gpio, 3);
}
/* --------------------------------------------------------------------
* Interrupt initialization
* -------------------------------------------------------------------- */
/*
* The default interrupt priority levels (0 = lowest, 7 = highest).
*/
static unsigned int at572d940hf_default_irq_priority[NR_AIC_IRQS] __initdata = {
7, /* Advanced Interrupt Controller */
7, /* System Peripherals */
0, /* Parallel IO Controller A */
0, /* Parallel IO Controller B */
0, /* Parallel IO Controller C */
3, /* Ethernet */
6, /* USART 0 */
6, /* USART 1 */
6, /* USART 2 */
0, /* Multimedia Card Interface */
4, /* USB Device Port */
0, /* Two-Wire Interface 0 */
6, /* Serial Peripheral Interface 0 */
6, /* Serial Peripheral Interface 1 */
5, /* Serial Synchronous Controller 0 */
5, /* Serial Synchronous Controller 1 */
5, /* Serial Synchronous Controller 2 */
0, /* Timer Counter 0 */
0, /* Timer Counter 1 */
0, /* Timer Counter 2 */
3, /* USB Host port */
3, /* Serial Synchronous Controller 3 */
0, /* Two-Wire Interface 1 */
0, /* CAN Controller 0 */
0, /* CAN Controller 1 */
0, /* mAgicV HALT line */
0, /* mAgicV SIRQ0 line */
0, /* mAgicV exception line */
0, /* mAgicV end of DMA line */
0, /* Advanced Interrupt Controller */
0, /* Advanced Interrupt Controller */
0, /* Advanced Interrupt Controller */
};
void __init at572d940hf_init_interrupts(unsigned int priority[NR_AIC_IRQS])
{
if (!priority)
priority = at572d940hf_default_irq_priority;
/* Initialize the AIC interrupt controller */
at91_aic_init(priority);
/* Enable GPIO interrupts */
at91_gpio_irq_setup();
}
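/*
 * Not part of this patch: at572d940hf_init_interrupts() only falls back to
 * the default table when it is passed NULL, so a board may supply its own
 * priorities instead.  A minimal sketch; the chosen value is illustrative,
 * and any entry left out of the initializer becomes 0 (lowest), not the
 * default above.
 */
static unsigned int my_board_irq_priority[NR_AIC_IRQS] __initdata = {
	[AT572D940HF_ID_MCI] = 5,	/* raise SD/MMC above the default 0 */
};

static void __init my_board_init_irq(void)
{
	at572d940hf_init_interrupts(my_board_irq_priority);
}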
/*
* linux/arch/arm/mach-at91/board-at572d940hf_ek.c
*
* Copyright (C) 2008 Atmel Antonio R. Costa <costa.antonior@gmail.com>
* Copyright (C) 2005 SAN People
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
#include <linux/types.h>
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/spi/spi.h>
#include <linux/spi/ds1305.h>
#include <linux/irq.h>
#include <linux/mtd/physmap.h>
#include <mach/hardware.h>
#include <asm/setup.h>
#include <asm/mach-types.h>
#include <asm/irq.h>
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
#include <asm/mach/irq.h>
#include <mach/board.h>
#include <mach/gpio.h>
#include <mach/at91sam9_smc.h>
#include "sam9_smc.h"
#include "generic.h"
static void __init eb_map_io(void)
{
/* Initialize processor: 12.000 MHz crystal */
at572d940hf_initialize(12000000);
/* DBGU on ttyS0. (Rx & Tx only) */
at91_register_uart(0, 0, 0);
/* USART0 on ttyS1. (Rx & Tx only) */
at91_register_uart(AT572D940HF_ID_US0, 1, 0);
/* USART1 on ttyS2. (Rx & Tx only) */
at91_register_uart(AT572D940HF_ID_US1, 2, 0);
/* USART2 on ttyS3. (Rx & Tx only) */
at91_register_uart(AT572D940HF_ID_US2, 3, 0);
/* set serial console to ttyS0 (ie, DBGU) */
at91_set_serial_console(0);
}
static void __init eb_init_irq(void)
{
at572d940hf_init_interrupts(NULL);
}
/*
* USB Host Port
*/
static struct at91_usbh_data __initdata eb_usbh_data = {
.ports = 2,
};
/*
* USB Device Port
*/
static struct at91_udc_data __initdata eb_udc_data = {
.vbus_pin = 0, /* no VBUS detection, UDC always on */
.pullup_pin = 0, /* pull-up driven by UDC */
};
/*
* MCI (SD/MMC)
*/
static struct at91_mmc_data __initdata eb_mmc_data = {
.wire4 = 1,
/* .det_pin = ... not connected */
/* .wp_pin = ... not connected */
/* .vcc_pin = ... not connected */
};
/*
* MACB Ethernet device
*/
static struct at91_eth_data __initdata eb_eth_data = {
.phy_irq_pin = AT91_PIN_PB25,
.is_rmii = 1,
};
/*
* NOR flash
*/
static struct mtd_partition eb_nor_partitions[] = {
{
.name = "Raw Environment",
.offset = 0,
.size = SZ_4M,
.mask_flags = 0,
},
{
.name = "OS FS",
.offset = MTDPART_OFS_APPEND,
.size = 3 * SZ_1M,
.mask_flags = 0,
},
{
.name = "APP FS",
.offset = MTDPART_OFS_APPEND,
.size = MTDPART_SIZ_FULL,
.mask_flags = 0,
},
};
static void nor_flash_set_vpp(struct map_info *mi, int i)
{
}
static struct physmap_flash_data nor_flash_data = {
.width = 4,
.parts = eb_nor_partitions,
.nr_parts = ARRAY_SIZE(eb_nor_partitions),
.set_vpp = nor_flash_set_vpp,
};
static struct resource nor_flash_resources[] = {
{
.start = AT91_CHIPSELECT_0,
.end = AT91_CHIPSELECT_0 + SZ_16M - 1,
.flags = IORESOURCE_MEM,
},
};
static struct platform_device nor_flash = {
.name = "physmap-flash",
.id = 0,
.dev = {
.platform_data = &nor_flash_data,
},
.resource = nor_flash_resources,
.num_resources = ARRAY_SIZE(nor_flash_resources),
};
static struct sam9_smc_config __initdata eb_nor_smc_config = {
.ncs_read_setup = 1,
.nrd_setup = 1,
.ncs_write_setup = 1,
.nwe_setup = 1,
.ncs_read_pulse = 7,
.nrd_pulse = 7,
.ncs_write_pulse = 7,
.nwe_pulse = 7,
.read_cycle = 9,
.write_cycle = 9,
.mode = AT91_SMC_READMODE | AT91_SMC_WRITEMODE | AT91_SMC_EXNWMODE_DISABLE | AT91_SMC_BAT_WRITE | AT91_SMC_DBW_32,
.tdf_cycles = 1,
};
static void __init eb_add_device_nor(void)
{
/* configure chip-select 0 (NOR) */
sam9_smc_configure(0, &eb_nor_smc_config);
platform_device_register(&nor_flash);
}
/*
* NAND flash
*/
static struct mtd_partition __initdata eb_nand_partition[] = {
{
.name = "Partition 1",
.offset = 0,
.size = SZ_16M,
},
{
.name = "Partition 2",
.offset = MTDPART_OFS_NXTBLK,
.size = MTDPART_SIZ_FULL,
}
};
static struct mtd_partition * __init nand_partitions(int size, int *num_partitions)
{
*num_partitions = ARRAY_SIZE(eb_nand_partition);
return eb_nand_partition;
}
static struct atmel_nand_data __initdata eb_nand_data = {
.ale = 22,
.cle = 21,
/* .det_pin = ... not connected */
/* .rdy_pin = AT91_PIN_PC16, */
.enable_pin = AT91_PIN_PA15,
.partition_info = nand_partitions,
#if defined(CONFIG_MTD_NAND_AT91_BUSWIDTH_16)
.bus_width_16 = 1,
#else
.bus_width_16 = 0,
#endif
};
static struct sam9_smc_config __initdata eb_nand_smc_config = {
.ncs_read_setup = 0,
.nrd_setup = 0,
.ncs_write_setup = 1,
.nwe_setup = 1,
.ncs_read_pulse = 3,
.nrd_pulse = 3,
.ncs_write_pulse = 3,
.nwe_pulse = 3,
.read_cycle = 5,
.write_cycle = 5,
.mode = AT91_SMC_READMODE | AT91_SMC_WRITEMODE | AT91_SMC_EXNWMODE_DISABLE,
.tdf_cycles = 12,
};
static void __init eb_add_device_nand(void)
{
/* setup bus-width (8 or 16) */
if (eb_nand_data.bus_width_16)
eb_nand_smc_config.mode |= AT91_SMC_DBW_16;
else
eb_nand_smc_config.mode |= AT91_SMC_DBW_8;
/* configure chip-select 3 (NAND) */
sam9_smc_configure(3, &eb_nand_smc_config);
at91_add_device_nand(&eb_nand_data);
}
/*
* SPI devices
*/
static struct resource rtc_resources[] = {
[0] = {
.start = AT572D940HF_ID_IRQ1,
.end = AT572D940HF_ID_IRQ1,
.flags = IORESOURCE_IRQ,
},
};
static struct ds1305_platform_data ds1306_data = {
.is_ds1306 = true,
.en_1hz = false,
};
static struct spi_board_info eb_spi_devices[] = {
{ /* RTC Dallas DS1306 */
.modalias = "rtc-ds1305",
.chip_select = 3,
.mode = SPI_CS_HIGH | SPI_CPOL | SPI_CPHA,
.max_speed_hz = 500000,
.bus_num = 0,
.irq = AT572D940HF_ID_IRQ1,
.platform_data = (void *) &ds1306_data,
},
#if defined(CONFIG_MTD_AT91_DATAFLASH_CARD)
{ /* Dataflash card */
.modalias = "mtd_dataflash",
.chip_select = 0,
.max_speed_hz = 15 * 1000 * 1000,
.bus_num = 0,
},
#endif
};
static void __init eb_board_init(void)
{
/* Serial */
at91_add_device_serial();
/* USB Host */
at91_add_device_usbh(&eb_usbh_data);
/* USB Device */
at91_add_device_udc(&eb_udc_data);
/* I2C */
at91_add_device_i2c(NULL, 0);
/* NOR */
eb_add_device_nor();
/* NAND */
eb_add_device_nand();
/* SPI */
at91_add_device_spi(eb_spi_devices, ARRAY_SIZE(eb_spi_devices));
/* MMC */
at91_add_device_mmc(0, &eb_mmc_data);
/* Ethernet */
at91_add_device_eth(&eb_eth_data);
/* mAgic */
at91_add_device_mAgic();
}
MACHINE_START(AT572D940HFEB, "Atmel AT91D940HF-EB")
/* Maintainer: Atmel <costa.antonior@gmail.com> */
.phys_io = AT91_BASE_SYS,
.io_pg_offst = (AT91_VA_BASE_SYS >> 18) & 0xfffc,
.boot_params = AT91_SDRAM_BASE + 0x100,
.timer = &at91sam926x_timer,
.map_io = eb_map_io,
.init_irq = eb_init_irq,
.init_machine = eb_board_init,
MACHINE_END
......@@ -29,6 +29,7 @@
#include <mach/cpu.h>
#include "clock.h"
#include "generic.h"
/*
......@@ -628,7 +629,7 @@ static void __init at91_pllb_usbfs_clock_init(unsigned long main_clock)
at91_sys_write(AT91_PMC_SCER, AT91RM9200_PMC_MCKUDP);
} else if (cpu_is_at91sam9260() || cpu_is_at91sam9261() ||
cpu_is_at91sam9263() || cpu_is_at91sam9g20() ||
cpu_is_at91sam9g10()) {
cpu_is_at91sam9g10() || cpu_is_at572d940hf()) {
uhpck.pmc_mask = AT91SAM926x_PMC_UHP;
udpck.pmc_mask = AT91SAM926x_PMC_UDP;
} else if (cpu_is_at91cap9()) {
......@@ -711,12 +712,13 @@ int __init at91_clock_init(unsigned long main_clock)
/*
* USB HS clock init
*/
if (cpu_has_utmi())
if (cpu_has_utmi()) {
/*
* multiplier is hard-wired to 40
* (obtain the USB High Speed 480 MHz when input is 12 MHz)
*/
utmi_clk.rate_hz = 40 * utmi_clk.parent->rate_hz;
}
/*
* USB FS clock init
......
......@@ -22,7 +22,7 @@ struct clk {
struct clk *parent;
u32 pmc_mask;
void (*mode)(struct clk *, int);
unsigned id:2; /* PCK0..3, or 32k/main/a/b */
unsigned id:3; /* PCK0..4, or 32k/main/a/b */
unsigned type; /* clock type */
u16 users;
};
......
......@@ -17,6 +17,7 @@ extern void __init at91sam9rl_initialize(unsigned long main_clock);
extern void __init at91sam9g45_initialize(unsigned long main_clock);
extern void __init at91x40_initialize(unsigned long main_clock);
extern void __init at91cap9_initialize(unsigned long main_clock);
extern void __init at572d940hf_initialize(unsigned long main_clock);
/* Interrupts */
extern void __init at91rm9200_init_interrupts(unsigned int priority[]);
......@@ -27,6 +28,7 @@ extern void __init at91sam9rl_init_interrupts(unsigned int priority[]);
extern void __init at91sam9g45_init_interrupts(unsigned int priority[]);
extern void __init at91x40_init_interrupts(unsigned int priority[]);
extern void __init at91cap9_init_interrupts(unsigned int priority[]);
extern void __init at572d940hf_init_interrupts(unsigned int priority[]);
extern void __init at91_aic_init(unsigned int priority[]);
/* Timer */
......
......@@ -32,6 +32,7 @@
#define AT91_PMC_PCK1 (1 << 9) /* Programmable Clock 1 */
#define AT91_PMC_PCK2 (1 << 10) /* Programmable Clock 2 */
#define AT91_PMC_PCK3 (1 << 11) /* Programmable Clock 3 */
#define AT91_PMC_PCK4 (1 << 12) /* Programmable Clock 4 [AT572D940HF only] */
#define AT91_PMC_HCK0 (1 << 16) /* AHB Clock (USB host) [AT91SAM9261 only] */
#define AT91_PMC_HCK1 (1 << 17) /* AHB Clock (LCD) [AT91SAM9261 only] */
......
......@@ -2,4 +2,4 @@
* arch/arm/mach-dove/include/mach/vmalloc.h
*/
#define VMALLOC_END 0xfd800000
#define VMALLOC_END 0xfd800000UL