Commit bd79a781 authored by Linus Torvalds

Import 2.3.23pre2

parent 58cf0ac4
......@@ -43,7 +43,7 @@ Current Minimal Requirements
encountered a bug! If you're unsure what version you're currently
running, the suggested command should tell you.
- Kernel modutils 2.3.5 ; insmod -V
- Kernel modutils 2.3.6 ; insmod -V
- Gnu C 2.7.2.3 ; gcc --version
- Binutils 2.8.1.0.23 ; ld -v
- Linux libc5 C Library 5.4.46 ; ls -l /lib/libc*
......@@ -567,8 +567,8 @@ ftp://metalab.unc.edu/pub/Linux/GCC/ld.so-1.9.9.tar.gz
Modules utilities
=================
The 2.3.5 release:
ftp://ftp.ocs.com.au/pub/modutils/v2.3/modutils-2.3.5.tar.gz
The 2.3.6 release:
ftp://ftp.ocs.com.au/pub/modutils/v2.3/modutils-2.3.6.tar.gz
Procps utilities
================
......
Started Oct 1999 by Kanoj Sarcar <kanoj@sgi.com>
The intent of this file is to have an up-to-date, running commentary
from different people about how locking and synchronization are done
in the Linux vm code.
vmlist_access_lock/vmlist_modify_lock
--------------------------------------
Page stealers pick processes out of the process pool and scan for
the best process to steal pages from. To guarantee the existence
of the victim mm, a mm_count inc and a mmdrop are done in swap_out().
Page stealers hold kernel_lock to protect against a bunch of races.
The vma list of the victim mm is also scanned by the stealer,
and the vmlist_lock is used to preserve list sanity against the
process adding to or deleting from the list. This also guarantees
existence of the vma. The vma existence guarantee while invoking the
driver swapout() method in try_to_swap_out() also relies on the fact
that do_munmap() temporarily gets lock_kernel before decimating
the vma; thus the swapout() method must snapshot all the vma
fields it needs before going to sleep (which will release the
lock_kernel held by the page stealer). Currently, filemap_swapout
is the only method that depends on this shaky interlocking.
Any code that modifies the vmlist, or the vm_start/vm_end/
vm_flags:VM_LOCKED/vm_next of any vma *in the list* must prevent
kswapd from looking at the chain. This does not include driver mmap()
methods, for example, since the vma is still not in the list.
The rules are:
1. To modify the vmlist (add/delete or change fields in an element),
you must hold mmap_sem to guard against clones doing mmap/munmap/faults
(i.e. all vm system calls and faults), and against ptrace, swapin due to
swap deletion, etc.
2. To modify the vmlist (add/delete or change fields in an element),
you must also hold vmlist_modify_lock, to guard against page stealers
scanning the list.
3. To scan the vmlist (find_vma()), you must either
a. grab mmap_sem, which should be done by all cases except
page stealer.
or
b. grab vmlist_access_lock, only done by page stealer.
4. While holding the vmlist_modify_lock, you must be able to guarantee
that no code path will lead to page stealing. A better guarantee is
to claim non-sleepability, which ensures that you are not sleeping
for a lock whose holder might in turn be doing page stealing.
5. You must be able to guarantee that while holding vmlist_modify_lock
or vmlist_access_lock of mm A, you will not try to get either lock
for mm B.
The caveats are:
1. find_vma() makes use of, and updates, the mmap_cache pointer hint.
The update of mmap_cache is racy (a page stealer can race with other code
that invokes find_vma with mmap_sem held), but that is okay, since it
is only a hint. This can be fixed, if desired, by having find_vma grab the
vmlist lock.
Code paths that add/delete elements from the vmlist chain are:
1. callers of insert_vm_struct
2. callers of merge_segments
3. callers of avl_remove
Code paths that change vm_start/vm_end/vm_flags:VM_LOCKED of vmas on
the list:
1. expand_stack
2. mprotect
3. mlock
4. mremap
It is advisable that changes to vm_start/vm_end be protected, although
in some cases it is not really needed. E.g., vm_start is modified by
expand_stack(); it is hard to come up with a destructive scenario without
the vmlist protection in this case.
The vmlist lock nests with the inode i_shared_lock and the kmem cache
c_spinlock spinlocks. This is okay, since code that holds i_shared_lock
never asks for memory, and the kmem code asks for pages after dropping
c_spinlock.
The vmlist lock can be a sleeping or spin lock. In either case, care
must be taken that it is not held on entry to the driver methods, since
those methods might sleep or ask for memory, causing deadlocks.
The current implementation of the vmlist lock uses the page_table_lock,
which is also the spinlock that page stealers use to protect changes to
the victim process' ptes. Thus we have a reduction in the total number
of locks.
......@@ -831,6 +831,16 @@ STARMODE RADIO IP (STRIP) PROTOCOL DRIVER
W: http://mosquitonet.Stanford.EDU/strip.html
S: Unsupported ?
SUPERH
P: Niibe Yutaka
M: gniibe@chroot.org
P: Kazumoto Kojima
M: kkojima@rr.iij4u.or.jp
L: linux-sh@m17n.org
W: http://www.m17n.org/linux-sh/
W: http://www.rr.iij4u.or.jp/~kkojima/linux-sh4.html
S: Maintained
SVGA HANDLING
P: Martin Mares
M: mj@atrey.karlin.mff.cuni.cz
......
......@@ -89,7 +89,7 @@ CONFIG_IDEDMA_AUTO=y
# CONFIG_BLK_DEV_OPTI621 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_VIA82C586 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
# CONFIG_BLK_DEV_CMD646 is not set
CONFIG_BLK_DEV_SL82C105=y
# CONFIG_IDE_CHIPSETS is not set
......
......@@ -89,7 +89,7 @@ CONFIG_IDEDMA_PCI_AUTO=y
# CONFIG_BLK_DEV_OPTI621 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_VIA82C586 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
# CONFIG_BLK_DEV_CMD646 is not set
CONFIG_BLK_DEV_SL82C105=y
# CONFIG_IDE_CHIPSETS is not set
......
......@@ -84,9 +84,6 @@ MAKEBOOT = $(MAKE) -C arch/$(ARCH)/boot
vmlinux: arch/i386/vmlinux.lds
arch/i386/vmlinux.lds: arch/i386/vmlinux.lds.S FORCE
$(CPP) -C -P -imacros $(HPATH)/asm-i386/page_offset.h -Ui386 arch/i386/vmlinux.lds.S >arch/i386/vmlinux.lds
FORCE: ;
.PHONY: zImage bzImage compressed zlilo bzlilo zdisk bzdisk install \
......@@ -119,7 +116,6 @@ archclean:
@$(MAKEBOOT) clean
archmrproper:
rm -f arch/i386/vmlinux.lds
archdep:
@$(MAKEBOOT) dep
......@@ -42,10 +42,6 @@ if [ "$CONFIG_MK7" = "y" ]; then
define_bool CONFIG_X86_USE_3DNOW y
fi
choice 'Maximum Physical Memory' \
"1GB CONFIG_1GB \
2GB CONFIG_2GB" 1GB
bool 'Math emulation' CONFIG_MATH_EMULATION
bool 'MTRR (Memory Type Range Register) support' CONFIG_MTRR
bool 'Symmetric multi-processing support' CONFIG_SMP
......@@ -63,7 +59,7 @@ endmenu
mainmenu_option next_comment
comment 'General setup'
bool 'BIGMEM support' CONFIG_BIGMEM
bool 'Support for over 1Gig of memory' CONFIG_BIGMEM
bool 'Networking support' CONFIG_NET
bool 'SGI Visual Workstation support' CONFIG_VISWS
if [ "$CONFIG_VISWS" = "y" ]; then
......
......@@ -14,6 +14,7 @@
#include <asm/smp.h>
#include <asm/lithium.h>
#include <asm/io.h>
#include "pci-i386.h"
......
......@@ -37,7 +37,7 @@
#include <asm/cobalt.h>
#include "irq.h"
#include <linux/irq.h>
/*
* This is the PIIX4-based 8259 that is wired up indirectly to Cobalt
......
......@@ -6,7 +6,7 @@ OUTPUT_ARCH(i386)
ENTRY(_start)
SECTIONS
{
. = PAGE_OFFSET_RAW + 0x100000;
. = 0xC0000000 + 0x100000;
_text = .; /* Text and read-only data */
.text : {
*(.text)
......
......@@ -16,7 +16,6 @@
#include <asm/hydra.h>
#include <asm/prom.h>
#include <asm/gg2.h>
#include <asm/ide.h>
#include <asm/machdep.h>
#include "pci.h"
......
......@@ -28,6 +28,7 @@
#include <linux/reboot.h>
#include <linux/init.h>
#include <linux/blk.h>
#include <linux/ide.h>
#include <linux/ioport.h>
#include <linux/console.h>
#include <linux/pci.h>
......@@ -40,8 +41,6 @@
#include <asm/processor.h>
#include <asm/io.h>
#include <asm/pgtable.h>
#include <linux/ide.h>
#include <asm/ide.h>
#include <asm/prom.h>
#include <asm/gg2.h>
#include <asm/pci-bridge.h>
......
......@@ -30,6 +30,7 @@
#include <linux/reboot.h>
#include <linux/init.h>
#include <linux/blk.h>
#include <linux/ide.h>
#include <linux/ioport.h>
#include <asm/mmu.h>
......@@ -37,7 +38,6 @@
#include <asm/residual.h>
#include <asm/io.h>
#include <asm/pgtable.h>
#include <asm/ide.h>
#include <asm/mbx.h>
#include <asm/machdep.h>
......
......@@ -39,6 +39,7 @@
#include <linux/ioport.h>
#include <linux/major.h>
#include <linux/blk.h>
#include <linux/ide.h>
#include <linux/vt_kern.h>
#include <linux/console.h>
#include <linux/ide.h>
......@@ -57,7 +58,6 @@
#include <asm/ohare.h>
#include <asm/mediabay.h>
#include <asm/feature.h>
#include <asm/ide.h>
#include <asm/machdep.h>
#include <asm/keyboard.h>
#include <asm/dma.h>
......@@ -421,7 +421,7 @@ note_scsi_host(struct device_node *node, void *host)
#if defined(CONFIG_BLK_DEV_IDE) && defined(CONFIG_BLK_DEV_IDE_PMAC)
extern int pmac_ide_count;
extern struct device_node *pmac_ide_node[];
static int ide_majors[] = { 3, 22, 33, 34, 56, 57, 88, 89 };
static int ide_majors[] = { 3, 22, 33, 34, 56, 57, 88, 89, 90, 91 };
kdev_t __init find_ide_boot(void)
{
......
......@@ -41,7 +41,6 @@
#include <asm/residual.h>
#include <asm/io.h>
#include <asm/pgtable.h>
#include <asm/ide.h>
#include <asm/cache.h>
#include <asm/dma.h>
#include <asm/machdep.h>
......
......@@ -11,12 +11,11 @@
#include <linux/reboot.h>
#include <linux/delay.h>
#include <linux/blk.h>
#include <linux/ide.h>
#include <asm/init.h>
#include <asm/residual.h>
#include <asm/io.h>
#include <linux/ide.h>
#include <asm/ide.h>
#include <asm/prom.h>
#include <asm/processor.h>
#include <asm/pgtable.h>
......
......@@ -74,7 +74,7 @@ CONFIG_IDEDMA_PCI_AUTO=y
# CONFIG_BLK_DEV_OPTI621 is not set
# CONFIG_BLK_DEV_TRM290 is not set
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_VIA82C586 is not set
# CONFIG_BLK_DEV_VIA82CXXX is not set
# CONFIG_BLK_DEV_CMD646 is not set
CONFIG_BLK_DEV_SL82C105=y
# CONFIG_IDE_CHIPSETS is not set
......
# $Id$
# $Id: Makefile,v 1.1 1999/09/18 16:55:51 gniibe Exp gniibe $
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1999 Kaz Kojima
#
# This file is included by the global makefile so that you can add your own
# architecture-specific flags and dependencies. Remember to do have actions
# for "archclean" and "archdep" for cleaning up and making dependencies for
......@@ -13,25 +15,35 @@
#
# Select the object file format to substitute into the linker script.
#
tool-prefix = sh-gniibe-
oformat = elf
tool-prefix = sh-elf
ifdef CONFIG_LITTLE_ENDIAN
CFLAGS += -ml
AFLAGS += -ml
# LINKFLAGS += -EL
LDFLAGS := -EL
LD =$(CROSS_COMPILE)ld $(LDFLAGS)
endif
ifdef CONFIG_CROSSCOMPILE
CROSS_COMPILE = $(tool-prefix)
endif
LINKFLAGS = # -EL # -static #-N
MODFLAGS +=
#
#
CFLAGS += -m3 # -ml
LINKFLAGS +=
LDFLAGS += # -EL
#
#
HOSTCC = cc
ifdef CONFIG_CPU_SH3
CFLAGS += -m3
AFLAGS += -m3
endif
ifdef CONFIG_CPU_SH4
CFLAGS += -m4
AFLAGS += -m4
endif
#
# Choosing incompatible machines durings configuration will result in
......@@ -52,14 +64,16 @@ HEAD := arch/sh/kernel/head.o arch/sh/kernel/init_task.o
SUBDIRS := $(SUBDIRS) $(addprefix arch/sh/, kernel mm lib)
CORE_FILES := arch/sh/kernel/kernel.o arch/sh/mm/mm.o $(CORE_FILES)
LIBS := $(TOPDIR)/arch/sh/lib/lib.a $(LIBS) $(TOPDIR)/arch/sh/lib/lib.a /home/niibe/lib/gcc-lib/sh-gniibe-elf/egcs-2.91.66/libgcc.a
LIBGCC := $(shell $(CC) $(CFLAGS) -print-libgcc-file-name)
LIBS := $(TOPDIR)/arch/sh/lib/lib.a $(LIBS) $(TOPDIR)/arch/sh/lib/lib.a \
$(LIBGCC)
MAKEBOOT = $(MAKE) -C arch/$(ARCH)/boot
vmlinux: arch/sh/vmlinux.lds
arch/sh/vmlinux.lds: arch/sh/vmlinux.lds.S FORCE
gcc -E -C -P -I$(HPATH) -imacros $(HPATH)/linux/config.h -Ush arch/sh/vmlinux.lds.S >arch/sh/vmlinux.lds
gcc -E -C -P -I$(HPATH) -Ush arch/sh/vmlinux.lds.S >arch/sh/vmlinux.lds
FORCE: ;
......@@ -77,6 +91,7 @@ archclean:
# $(MAKE) -C arch/$(ARCH)/tools clean
archmrproper:
rm -f arch/sh/vmlinux.lds
archdep:
@$(MAKEBOOT) dep
......@@ -12,28 +12,24 @@ endmenu
mainmenu_option next_comment
comment 'Processor type and features'
choice 'Processor family' \
"SH3 CONFIG_CPU_SH3 \
SH4 CONFIG_CPU_SH4" SH3
"SH-3 CONFIG_CPU_SH3 \
SH-4 CONFIG_CPU_SH4" SH-3
bool 'Little Endian' CONFIG_LITTLE_ENDIAN
hex 'Physical memory start address' CONFIG_MEMORY_START 0c000000
hex 'Physical memory start address' CONFIG_MEMORY_START 08000000
bool 'Use SH CPU internal real time clock' CONFIG_SH_CPU_RTC
endmenu
mainmenu_option next_comment
comment 'Loadable module support'
bool 'Enable loadable module support' CONFIG_MODULES
if [ "$CONFIG_MODULES" = "y" ]; then
bool ' Set version information on all symbols for modules' CONFIG_MODVERSIONS
bool ' Kernel module loader' CONFIG_KMOD
bool 'Set version information on all symbols for modules' CONFIG_MODVERSIONS
bool 'Kernel module loader' CONFIG_KMOD
fi
endmenu
define_bool CONFIG_SERIAL n
define_bool CONFIG_SH3SCI_SERIAL y
define_bool CONFIG_SERIAL_CONSOLE y
mainmenu_option next_comment
comment 'Floppy, IDE, and other block devices'
comment 'General setup'
bool 'Networking support' CONFIG_NET
bool 'System V IPC' CONFIG_SYSVIPC
bool 'BSD Process Accounting' CONFIG_BSD_PROCESS_ACCT
......@@ -41,6 +37,18 @@ bool 'Sysctl support' CONFIG_SYSCTL
tristate 'Kernel support for ELF binaries' CONFIG_BINFMT_ELF
tristate 'Kernel support for MISC binaries' CONFIG_BINFMT_MISC
endmenu
mainmenu_option next_comment
comment 'Character devices'
define_bool CONFIG_SERIAL n
define_bool CONFIG_SERIAL_CONSOLE y
bool 'SuperH SCI support' CONFIG_SH_SCI_SERIAL
bool 'SuperH SCIF support' CONFIG_SH_SCIF_SERIAL
endmenu
mainmenu_option next_comment
comment 'Floppy, IDE, and other block devices'
tristate 'RAM disk support' CONFIG_BLK_DEV_RAM
if [ "$CONFIG_BLK_DEV_RAM" = "y" ]; then
......@@ -75,4 +83,5 @@ mainmenu_option next_comment
comment 'Kernel hacking'
bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ
bool 'GDB Stub kernel debug' CONFIG_DEBUG_KERNEL_WITH_GDB_STUB
endmenu
......@@ -10,48 +10,41 @@
#
# Processor type and features
#
CONFIG_CPU_SH3=y
# CONFIG_CPU_SH4 is not set
# CONFIG_LITTLE_ENDIAN is not set
CONFIG_MEMORY_START=0c000000
# CONFIG_CPU_SH3 is not set
CONFIG_CPU_SH4=y
CONFIG_LITTLE_ENDIAN=y
CONFIG_MEMORY_START=08000000
#
# Loadable module support
#
# CONFIG_MODULES is not set
# CONFIG_SERIAL is not set
CONFIG_SH3SCI_SERIAL=y
CONFIG_SERIAL_CONSOLE=y
#
# Floppy, IDE, and other block devices
# General setup
#
# CONFIG_NET is not set
CONFIG_SYSVIPC=y
# CONFIG_SYSVIPC is not set
# CONFIG_BSD_PROCESS_ACCT is not set
# CONFIG_SYSCTL is not set
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_BLK_DEV_LOOP is not set
# CONFIG_BLK_DEV_NBD is not set
#
# Networking options
# Character devices
#
# CONFIG_PACKET is not set
# CONFIG_NETLINK is not set
# CONFIG_FIREWALL is not set
# CONFIG_FILTER is not set
# CONFIG_UNIX is not set
# CONFIG_INET is not set
# CONFIG_SERIAL is not set
CONFIG_SERIAL_CONSOLE=y
# CONFIG_SH_SCI_SERIAL is not set
CONFIG_SH_SCIF_SERIAL=y
#
#
# Floppy, IDE, and other block devices
#
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_BLK_DEV_LOOP is not set
# CONFIG_BLK_DEV_NBD is not set
#
# Unix 98 PTY support
......@@ -66,8 +59,12 @@ CONFIG_BLK_DEV_INITRD=y
# CONFIG_AFFS_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_FAT_FS is not set
# CONFIG_MSDOS_FS is not set
# CONFIG_UMSDOS_FS is not set
# CONFIG_VFAT_FS is not set
# CONFIG_ISO9660_FS is not set
# CONFIG_JOLIET is not set
# CONFIG_UDF_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_NTFS_FS is not set
# CONFIG_HPFS_FS is not set
......@@ -77,17 +74,16 @@ CONFIG_EXT2_FS=y
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
#
# Network File Systems
#
#
# Partition Types
#
# CONFIG_PARTITION_ADVANCED is not set
CONFIG_MSDOS_PARTITION=y
# CONFIG_SMD_DISKLABEL is not set
# CONFIG_SGI_DISKLABEL is not set
# CONFIG_BSD_DISKLABEL is not set
# CONFIG_SOLARIS_X86_PARTITION is not set
# CONFIG_UNIXWARE_DISKLABEL is not set
# CONFIG_SGI_PARTITION is not set
# CONFIG_SUN_PARTITION is not set
# CONFIG_NLS is not set
#
......@@ -99,3 +95,4 @@ CONFIG_MSDOS_PARTITION=y
# Kernel hacking
#
# CONFIG_MAGIC_SYSRQ is not set
CONFIG_DEBUG_KERNEL_WITH_GDB_STUB=y
......@@ -11,7 +11,7 @@
O_TARGET := kernel.o
O_OBJS := process.o signal.o entry.o traps.o irq.o irq_onchip.o \
ptrace.o setup.o time.o sys_sh.o test-img.o semaphore.o
ptrace.o setup.o time.o sys_sh.o semaphore.o
OX_OBJS := sh_ksyms.o
MX_OBJS :=
......
/* $Id$
/* $Id: head.S,v 1.6 1999/10/05 12:34:16 gniibe Exp $
*
* arch/sh/kernel/head.S
*
* Copyright (C) 1999 Niibe Yutaka
* Copyright (C) 1999 Niibe Yutaka & Kaz Kojima
*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
......@@ -10,60 +10,74 @@
*
* Head.S contains the SH exception handlers and startup code.
*/
#include <linux/config.h>
#include <linux/threads.h>
#include <linux/linkage.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#ifdef CONFIG_CPU_SH3
/* Following values are assumed to be as small as immediate. */
#define CCR 0xffffffec /* Address of Cache Control Register */
#define CACHE_INIT 0x00000009 /* 8k-byte cache, flush, enable */
#elif CONFIG_CPU_SH4
/* Should fill here. */
#endif
.section .empty_zero_page, "aw"
ENTRY(empty_zero_page)
.long 1 /* MOUNT_ROOT_RDONLY */
.long 0 /* RAMDISK_FLAGS */
.long 0x0200 /* ORIG_ROOT_DEV */
.long 1 /* LOADER_TYPE */
.long 0x88400000 /* INITRD_START */
.long 0x00400000 /* INITRD_SIZE */
.long 0x89000000 /* MEMORY_END */
.long 0
.text
.balign 4096,0,4096
/*
* Condition at the entry of _stext:
*
* BSC has already been initialized.
* INTC may or may not be initialized.
* VBR may or may not be initialized.
* MMU may or may not be initialized.
* Cache may or may not be initialized.
* Hardware (including on-chip modules) may or may not be initialized.
*
* The register R4&R5 holds the address of the parameter block, which has
* command-line data, etc.
*
*/
ENTRY(_stext)
! Switch to register bank 0
stc sr,r1 !
mov.l 1f,r0 ! RB=0, BL=1
and r1,r0
ldc r0,sr
! Enable cache
#ifdef CONFIG_CPU_SH3
mov #CCR,r1
mov.l @r1,r0
cmp/eq #1,r0 ! If it's enabled already, don't flush it
bt/s 8f
mov #CACHE_INIT,r0
mov.l r0,@r1
#elif CONFIG_CPU_SH4
! Should fill here.
#if defined(__SH4__)
! Initialize FPSCR
/* GCC (as of 2.95.1) assumes FPU with double precision mode. */
mov.l 7f,r0
lds r0,fpscr
#endif
8:
! Initialize Status Register
mov.l 1f,r0 ! MD=1, RB=0, BL=1
ldc r0,sr
!
mov.l 2f,r0
mov r0,r15 ! Set initial r15 (stack pointer)
ldc r0,r4_bank ! and stack base
!
! Enable cache
mov.l 6f,r0
jsr @r0
nop
! Clear BSS area
mov.l 3f,r1
add #4,r1
mov.l 4f,r2
mov #0,r0
9: mov.l r0,@r1
cmp/hs r2,r1
bf/s 9b
add #4,r1
9: cmp/hs r2,r1
bf/s 9b ! while (r1 < r2)
mov.l r0,@-r2
! Start kernel
mov.l 5f,r0
jmp @r0
nop
.balign 4
1: .long 0xdfffffff ! RB=0, BL=1
2: .long SYMBOL_NAME(stack)
3: .long SYMBOL_NAME(__bss_start)
4: .long SYMBOL_NAME(_end)
5: .long SYMBOL_NAME(start_kernel)
.data
1: .long 0x50000000 ! MD=1, RB=0, BL=1
2: .long SYMBOL_NAME(stack)
3: .long SYMBOL_NAME(__bss_start)
4: .long SYMBOL_NAME(_end)
5: .long SYMBOL_NAME(start_kernel)
6: .long SYMBOL_NAME(cache_init)
#if defined(__SH4__)
7: .long 0x00080000
#endif
/*
/* $Id: irq.c,v 1.4 1999/10/11 13:12:14 gniibe Exp $
*
* linux/arch/sh/kernel/irq.c
*
* Copyright (C) 1992, 1998 Linus Torvalds, Ingo Molnar
*
*
* SuperH version: Copyright (C) 1999 Niibe Yutaka
* SuperH version: Copyright (C) 1999 Niibe Yutaka
*/
/*
......@@ -48,7 +49,7 @@ spinlock_t irq_controller_lock = SPIN_LOCK_UNLOCKED;
/*
* Controller mappings for all interrupt sources:
*/
irq_desc_t irq_desc[NR_IRQS] = { [0 ... NR_IRQS-1] = { 0, &no_irq_type, }};
irq_desc_t irq_desc[NR_IRQS] __cacheline_aligned = { [0 ... NR_IRQS-1] = { 0, &no_irq_type, }};
/*
* Special irq handlers.
......@@ -56,6 +57,37 @@ irq_desc_t irq_desc[NR_IRQS] = { [0 ... NR_IRQS-1] = { 0, &no_irq_type, }};
void no_action(int cpl, void *dev_id, struct pt_regs *regs) { }
/*
* Generic no controller code
*/
static void enable_none(unsigned int irq) { }
static unsigned int startup_none(unsigned int irq) { return 0; }
static void disable_none(unsigned int irq) { }
static void ack_none(unsigned int irq)
{
/*
* 'what should we do if we get a hw irq event on an illegal vector'.
* each architecture has to answer this themselves, it doesnt deserve
* a generic callback i think.
*/
printk("unexpected IRQ trap at vector %02x\n", irq);
}
/* startup is the same as "enable", shutdown is same as "disable" */
#define shutdown_none disable_none
#define end_none enable_none
struct hw_interrupt_type no_irq_type = {
"none",
startup_none,
shutdown_none,
enable_none,
disable_none,
ack_none,
end_none
};
/*
* Generic, controller-independent functions:
*/
......@@ -203,6 +235,8 @@ asmlinkage int do_IRQ(unsigned long r4, unsigned long r5,
struct irqaction * action;
unsigned int status;
regs.syscall_nr = -1; /* It's not system call */
/* Get IRQ number */
asm volatile("stc r2_bank,%0\n\t"
"shlr2 %0\n\t"
......@@ -257,7 +291,7 @@ asmlinkage int do_IRQ(unsigned long r4, unsigned long r5,
for (;;) {
handle_IRQ_event(irq, &regs, action);
spin_lock(&irq_controller_lock);
if (!(desc->status & IRQ_PENDING))
break;
desc->status &= ~IRQ_PENDING;
......@@ -265,7 +299,7 @@ asmlinkage int do_IRQ(unsigned long r4, unsigned long r5,
}
desc->status &= ~IRQ_INPROGRESS;
if (!(desc->status & IRQ_DISABLED)){
irq_desc[irq].handler->end(irq);
irq_desc[irq].handler->end(irq);
}
spin_unlock(&irq_controller_lock);
......@@ -334,16 +368,22 @@ void free_irq(unsigned int irq, void *dev_id)
/* Found it - now remove it from the list of entries */
*pp = action->next;
if (irq_desc[irq].action)
break;
irq_desc[irq].status |= IRQ_DISABLED;
irq_desc[irq].handler->shutdown(irq);
break;
if (!irq_desc[irq].action) {
irq_desc[irq].status |= IRQ_DISABLED;
irq_desc[irq].handler->shutdown(irq);
}
spin_unlock_irqrestore(&irq_controller_lock,flags);
/* Wait to make sure it's not being used on another CPU */
while (irq_desc[irq].status & IRQ_INPROGRESS)
barrier();
kfree(action);
return;
}
printk("Trying to free free IRQ%d\n",irq);
break;
spin_unlock_irqrestore(&irq_controller_lock,flags);
return;
}
spin_unlock_irqrestore(&irq_controller_lock,flags);
}
/*
......
/*
/* $Id: irq_onchip.c,v 1.3 1999/10/11 13:12:19 gniibe Exp $
*
* linux/arch/sh/kernel/irq_onchip.c
*
* Copyright (C) 1999 Niibe Yutaka
......@@ -31,32 +32,6 @@
#include <linux/irq.h>
/*
* SH (non-)specific no controller code
*/
static void enable_none(unsigned int irq) { }
static unsigned int startup_none(unsigned int irq) { return 0; }
static void disable_none(unsigned int irq) { }
static void ack_none(unsigned int irq)
{
}
/* startup is the same as "enable", shutdown is same as "disable" */
#define shutdown_none disable_none
#define end_none enable_none
struct hw_interrupt_type no_irq_type = {
"none",
startup_none,
shutdown_none,
enable_none,
disable_none,
ack_none,
end_none
};
struct ipr_data {
int offset;
int priority;
......@@ -104,22 +79,25 @@ static struct hw_interrupt_type onChip_irq_type = {
* IPRC 15-12 11-8 7-4 3-0
*
*/
#if defined(__sh3__)
#define INTC_IPR 0xfffffee2UL /* Word access */
#define INTC_SIZE 0x2
#elif defined(__SH4__)
#define INTC_IPR 0xffd00004UL /* Word access */
#define INTC_SIZE 0x4
#endif
void disable_onChip_irq(unsigned int irq)
{
/* Set priority in IPR to 0 */
int offset = ipr_data[irq-TIMER_IRQ].offset;
unsigned long intc_ipr_address = INTC_IPR + offset/16;
unsigned long intc_ipr_address = INTC_IPR + (offset/16*INTC_SIZE);
unsigned short mask = 0xffff ^ (0xf << (offset%16));
unsigned long __dummy;
asm volatile("mov.w @%1,%0\n\t"
"and %2,%0\n\t"
"mov.w %0,@%1"
: "=&z" (__dummy)
: "r" (intc_ipr_address), "r" (mask)
: "memory" );
unsigned long val;
val = ctrl_inw(intc_ipr_address);
val &= mask;
ctrl_outw(val, intc_ipr_address);
}
static void enable_onChip_irq(unsigned int irq)
......@@ -127,16 +105,13 @@ static void enable_onChip_irq(unsigned int irq)
/* Set priority in IPR back to original value */
int offset = ipr_data[irq-TIMER_IRQ].offset;
int priority = ipr_data[irq-TIMER_IRQ].priority;
unsigned long intc_ipr_address = INTC_IPR + offset/16;
unsigned long intc_ipr_address = INTC_IPR + (offset/16*INTC_SIZE);
unsigned short value = (priority << (offset%16));
unsigned long __dummy;
asm volatile("mov.w @%1,%0\n\t"
"or %2,%0\n\t"
"mov.w %0,@%1"
: "=&z" (__dummy)
: "r" (intc_ipr_address), "r" (value)
: "memory" );
unsigned long val;
val = ctrl_inw(intc_ipr_address);
val |= value;
ctrl_outw(val, intc_ipr_address);
}
void make_onChip_irq(unsigned int irq)
......@@ -149,13 +124,11 @@ void make_onChip_irq(unsigned int irq)
static void mask_and_ack_onChip(unsigned int irq)
{
disable_onChip_irq(irq);
sti();
}
static void end_onChip_irq(unsigned int irq)
{
enable_onChip_irq(irq);
cli();
}
void __init init_IRQ(void)
......
/*
/* $Id: process.c,v 1.7 1999/09/23 00:05:41 gniibe Exp $
*
* linux/arch/sh/kernel/process.c
*
* Copyright (C) 1995 Linus Torvalds
*
* SuperH version: Copyright (C) 1999 Niibe Yutaka
* SuperH version: Copyright (C) 1999 Niibe Yutaka & Kaz Kojima
*/
/*
......@@ -41,6 +42,10 @@
#include <linux/irq.h>
#if defined(__SH4__)
struct task_struct *last_task_used_math = NULL;
#endif
static int hlt_counter=0;
#define HARD_IDLE_TIMEOUT (HZ / 3)
......@@ -92,22 +97,21 @@ void machine_power_off(void)
void show_regs(struct pt_regs * regs)
{
printk("\n");
printk("PC: [<%08lx>]", regs->pc);
printk(" SP: %08lx", regs->u_regs[UREG_SP]);
printk(" SR: %08lx\n", regs->sr);
printk("R0 : %08lx R1 : %08lx R2 : %08lx R3 : %08lx\n",
regs->u_regs[0],regs->u_regs[1],
regs->u_regs[2],regs->u_regs[3]);
printk("R4 : %08lx R5 : %08lx R6 : %08lx R7 : %08lx\n",
regs->u_regs[4],regs->u_regs[5],
regs->u_regs[6],regs->u_regs[7]);
printk("R8 : %08lx R9 : %08lx R10: %08lx R11: %08lx\n",
regs->u_regs[8],regs->u_regs[9],
regs->u_regs[10],regs->u_regs[11]);
printk("R12: %08lx R13: %08lx R14: %08lx\n",
regs->u_regs[12],regs->u_regs[13],
regs->u_regs[14]);
printk("MACH: %08lx MACL: %08lx GBR: %08lx PR: %08lx",
printk("PC : %08lx SP : %08lx SR : %08lx TEA : %08lx\n",
regs->pc, regs->sp, regs->sr, ctrl_inl(MMU_TEA));
printk("R0 : %08lx R1 : %08lx R2 : %08lx R3 : %08lx\n",
regs->regs[0],regs->regs[1],
regs->regs[2],regs->regs[3]);
printk("R4 : %08lx R5 : %08lx R6 : %08lx R7 : %08lx\n",
regs->regs[4],regs->regs[5],
regs->regs[6],regs->regs[7]);
printk("R8 : %08lx R9 : %08lx R10 : %08lx R11 : %08lx\n",
regs->regs[8],regs->regs[9],
regs->regs[10],regs->regs[11]);
printk("R12 : %08lx R13 : %08lx R14 : %08lx\n",
regs->regs[12],regs->regs[13],
regs->regs[14]);
printk("MACH: %08lx MACL: %08lx GBR : %08lx PR : %08lx\n",
regs->mach, regs->macl, regs->gbr, regs->pr);
}
......@@ -163,13 +167,35 @@ int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags)
*/
void exit_thread(void)
{
#if defined(__sh3__)
/* nothing to do ... */
#elif defined(__SH4__)
#if 0 /* for the time being... */
/* Forget lazy fpu state */
if (last_task_used_math == current) {
set_status_register (SR_FD, 0);
write_system_register (fpscr, FPSCR_PR);
last_task_used_math = NULL;
}
#endif
#endif
}
void flush_thread(void)
{
#if defined(__sh3__)
/* do nothing */
/* Possibly, set clear debug registers */
#elif defined(__SH4__)
#if 0 /* for the time being... */
/* Forget lazy fpu state */
if (last_task_used_math == current) {
set_status_register (SR_FD, 0);
write_system_register (fpscr, FPSCR_PR);
last_task_used_math = NULL;
}
#endif
#endif
}
void release_thread(struct task_struct *dead_task)
......@@ -180,6 +206,15 @@ void release_thread(struct task_struct *dead_task)
/* Fill in the fpu structure for a core dump.. */
int dump_fpu(struct pt_regs *regs, elf_fpregset_t *r)
{
#if defined(__SH4__)
#if 0 /* for the time being... */
/* We store the FPU info in the task->thread area. */
if (! (regs->sr & SR_FD)) {
memcpy (r, &current->thread.fpu, sizeof (*r));
return 1;
}
#endif
#endif
return 0; /* Task didn't use the fpu at all. */
}
......@@ -191,14 +226,26 @@ int copy_thread(int nr, unsigned long clone_flags, unsigned long usp,
struct pt_regs *childregs;
childregs = ((struct pt_regs *)(THREAD_SIZE + (unsigned long) p)) - 1;
*childregs = *regs;
#if defined(__SH4__)
#if 0 /* for the time being... */
if (last_task_used_math == current) {
set_status_register (SR_FD, 0);
sh4_save_fp (p);
}
/* New tasks loose permission to use the fpu. This accelerates context
switching for most programs since they don't use the fpu. */
p->thread.sr = (read_control_register (sr) &~ SR_MD) | SR_FD;
childregs->sr |= SR_FD;
#endif
#endif
if (user_mode(regs)) {
childregs->u_regs[UREG_SP] = usp;
childregs->sp = usp;
} else {
childregs->u_regs[UREG_SP] = (unsigned long)p+2*PAGE_SIZE;
childregs->sp = (unsigned long)p+2*PAGE_SIZE;
}
childregs->u_regs[0] = 0; /* Set return value for child */
childregs->regs[0] = 0; /* Set return value for child */
p->thread.sp = (unsigned long) childregs;
p->thread.pc = (unsigned long) ret_from_fork;
......@@ -215,18 +262,22 @@ void dump_thread(struct pt_regs * regs, struct user * dump)
{
/* changed the size calculations - should hopefully work better. lbt */
dump->magic = CMAGIC;
dump->start_code = 0;
dump->start_stack = regs->u_regs[UREG_SP] & ~(PAGE_SIZE - 1);
dump->u_tsize = ((unsigned long) current->mm->end_code) >> PAGE_SHIFT;
dump->u_dsize = ((unsigned long) (current->mm->brk + (PAGE_SIZE-1))) >> PAGE_SHIFT;
dump->u_dsize -= dump->u_tsize;
dump->u_ssize = 0;
dump->start_code = current->mm->start_code;
dump->start_data = current->mm->start_data;
dump->start_stack = regs->sp & ~(PAGE_SIZE - 1);
dump->u_tsize = (current->mm->end_code - dump->start_code) >> PAGE_SHIFT;
dump->u_dsize = (current->mm->brk + (PAGE_SIZE-1) - dump->start_data) >> PAGE_SHIFT;
dump->u_ssize = (current->mm->start_stack - dump->start_stack +
PAGE_SIZE - 1) >> PAGE_SHIFT;
/* Debug registers will come here. */
if (dump->start_stack < TASK_SIZE)
dump->u_ssize = ((unsigned long) (TASK_SIZE - dump->start_stack)) >> PAGE_SHIFT;
dump->regs = *regs;
#if 0 /* defined(__SH4__) */
/* FPU */
memcpy (&dump->regs[EF_SIZE/4], &current->thread.fpu,
sizeof (current->thread.fpu));
#endif
}
/*
......@@ -248,7 +299,7 @@ asmlinkage int sys_fork(unsigned long r4, unsigned long r5,
unsigned long r6, unsigned long r7,
struct pt_regs regs)
{
return do_fork(SIGCHLD, regs.u_regs[UREG_SP], &regs);
return do_fork(SIGCHLD, regs.sp, &regs);
}
asmlinkage int sys_clone(unsigned long clone_flags, unsigned long newsp,
......@@ -256,7 +307,7 @@ asmlinkage int sys_clone(unsigned long clone_flags, unsigned long newsp,
struct pt_regs regs)
{
if (!newsp)
newsp = regs.u_regs[UREG_SP];
newsp = regs.sp;
return do_fork(clone_flags, newsp, &regs);
}
......@@ -274,8 +325,7 @@ asmlinkage int sys_vfork(unsigned long r4, unsigned long r5,
unsigned long r6, unsigned long r7,
struct pt_regs regs)
{
return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD,
regs.u_regs[UREG_SP], &regs);
return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, regs.sp, &regs);
}
/*
......
/*
/* $Id: setup.c,v 1.4 1999/10/17 02:49:24 gniibe Exp $
*
* linux/arch/sh/kernel/setup.c
*
* Copyright (C) 1999 Niibe Yutaka
......@@ -29,6 +30,7 @@
#endif
#include <asm/processor.h>
#include <linux/console.h>
#include <asm/pgtable.h>
#include <asm/uaccess.h>
#include <asm/system.h>
#include <asm/io.h>
......@@ -38,8 +40,7 @@
* Machine setup..
*/
struct sh_cpuinfo boot_cpu_data = { 0, 0, 0, 0, };
extern int _text, _etext, _edata, _end, _stext, __bss_start;
struct sh_cpuinfo boot_cpu_data = { CPU_SH_NONE, 0, 0, 0, };
#ifdef CONFIG_BLK_DEV_RAM
extern int rd_doload; /* 1 = load ramdisk, 0 = don't load */
......@@ -48,13 +49,31 @@ extern int rd_image_start; /* starting block # of image */
#endif
extern int root_mountflags;
extern int _text, _etext, _edata, _end;
/*
* This is set up by the setup-routine at boot-time
*/
#define PARAM ((unsigned char *)empty_zero_page)
#define MOUNT_ROOT_RDONLY (*(unsigned long *) (PARAM+0x000))
#define RAMDISK_FLAGS (*(unsigned long *) (PARAM+0x004))
#define ORIG_ROOT_DEV (*(unsigned long *) (PARAM+0x008))
#define LOADER_TYPE (*(unsigned long *) (PARAM+0x00c))
#define INITRD_START (*(unsigned long *) (PARAM+0x010))
#define INITRD_SIZE (*(unsigned long *) (PARAM+0x014))
#define MEMORY_END (*(unsigned long *) (PARAM+0x018))
/* ... */
#define COMMAND_LINE ((char *) (PARAM+0x100))
#define COMMAND_LINE_SIZE 256
#define RAMDISK_IMAGE_START_MASK 0x07FF
#define RAMDISK_PROMPT_FLAG 0x8000
#define RAMDISK_LOAD_FLAG 0x4000
#define COMMAND_LINE_SIZE 1024
static char command_line[COMMAND_LINE_SIZE] = { 0, };
char saved_command_line[COMMAND_LINE_SIZE];
extern unsigned char *root_fs_image;
struct resource standard_io_resources[] = {
{ "dma1", 0x00, 0x1f },
{ "pic1", 0x20, 0x3f },
......@@ -68,7 +87,6 @@ struct resource standard_io_resources[] = {
#define STANDARD_IO_RESOURCES (sizeof(standard_io_resources)/sizeof(struct resource))
/* System RAM - interrupted by the 640kB-1M hole */
#define code_resource (ram_resources[3])
#define data_resource (ram_resources[4])
......@@ -87,17 +105,26 @@ static struct resource rom_resources[MAXROMS] = {
{ "Video ROM", 0xc0000, 0xc7fff }
};
void __init setup_arch(char **cmdline_p,
unsigned long * memory_start_p,
unsigned long * memory_end_p)
{
*cmdline_p = command_line;
*memory_start_p = (unsigned long) &_end;
*memory_end_p = 0x8c400000; /* For my board. */
ram_resources[1].end = *memory_end_p-1;
unsigned long memory_start, memory_end;
ROOT_DEV = to_kdev_t(ORIG_ROOT_DEV);
#ifdef CONFIG_BLK_DEV_RAM
rd_image_start = RAMDISK_FLAGS & RAMDISK_IMAGE_START_MASK;
rd_prompt = ((RAMDISK_FLAGS & RAMDISK_PROMPT_FLAG) != 0);
rd_doload = ((RAMDISK_FLAGS & RAMDISK_LOAD_FLAG) != 0);
#endif
if (!MOUNT_ROOT_RDONLY)
root_mountflags &= ~MS_RDONLY;
memory_start = (unsigned long) &_end;
memory_end = MEMORY_END;
init_mm.start_code = (unsigned long)&_stext;
init_mm.start_code = (unsigned long)&_text;
init_mm.end_code = (unsigned long) &_etext;
init_mm.end_data = (unsigned long) &_edata;
init_mm.brk = (unsigned long) &_end;
......@@ -107,51 +134,25 @@ void __init setup_arch(char **cmdline_p,
data_resource.start = virt_to_bus(&_etext);
data_resource.end = virt_to_bus(&_edata)-1;
ROOT_DEV = MKDEV(FLOPPY_MAJOR, 0);
/* Save unparsed command line copy for /proc/cmdline */
memcpy(saved_command_line, COMMAND_LINE, COMMAND_LINE_SIZE);
saved_command_line[COMMAND_LINE_SIZE-1] = '\0';
initrd_below_start_ok = 1;
initrd_start = (long)&root_fs_image;
initrd_end = (long)&__bss_start;
mount_initrd = 1;
memcpy(command_line, COMMAND_LINE, COMMAND_LINE_SIZE);
command_line[COMMAND_LINE_SIZE-1] = '\0';
	/* The "mem=XXX[kKmM]" command line option is not supported. */
*cmdline_p = command_line;
#if 0
/* Request the standard RAM and ROM resources - they eat up PCI memory space */
request_resource(&iomem_resource, ram_resources+0);
request_resource(&iomem_resource, ram_resources+1);
request_resource(&iomem_resource, ram_resources+2);
request_resource(ram_resources+1, &code_resource);
request_resource(ram_resources+1, &data_resource);
#endif
#if 0
for (i = 0; i < STANDARD_IO_RESOURCES; i++)
request_resource(&ioport_resource, standard_io_resources+i);
#endif
#if 0
rd_image_start = (long)root_fs_image;
rd_prompt = 0;
rd_doload = 1;
#endif
memory_end &= PAGE_MASK;
ram_resources[1].end = memory_end-1;
#if 0
ROOT_DEV = to_kdev_t(ORIG_ROOT_DEV);
#ifdef CONFIG_BLK_DEV_RAM
rd_image_start = RAMDISK_FLAGS & RAMDISK_IMAGE_START_MASK;
rd_prompt = ((RAMDISK_FLAGS & RAMDISK_PROMPT_FLAG) != 0);
rd_doload = ((RAMDISK_FLAGS & RAMDISK_LOAD_FLAG) != 0);
#endif
if (!MOUNT_ROOT_RDONLY)
root_mountflags &= ~MS_RDONLY;
#endif
*memory_start_p = memory_start;
*memory_end_p = memory_end;
#ifdef CONFIG_BLK_DEV_INITRD
#if 0
if (LOADER_TYPE) {
initrd_start = INITRD_START ? INITRD_START + PAGE_OFFSET : 0;
initrd_start = INITRD_START ? INITRD_START : 0;
initrd_end = initrd_start+INITRD_SIZE;
if (initrd_end > memory_end) {
printk("initrd extends beyond end of memory "
......@@ -162,6 +163,29 @@ void __init setup_arch(char **cmdline_p,
}
#endif
#if 0
/*
* Request the standard RAM and ROM resources -
* they eat up PCI memory space
*/
request_resource(&iomem_resource, ram_resources+0);
request_resource(&iomem_resource, ram_resources+1);
request_resource(&iomem_resource, ram_resources+2);
request_resource(ram_resources+1, &code_resource);
request_resource(ram_resources+1, &data_resource);
probe_roms();
/* request I/O space for devices used on all i[345]86 PCs */
for (i = 0; i < STANDARD_IO_RESOURCES; i++)
request_resource(&ioport_resource, standard_io_resources+i);
#endif
#ifdef CONFIG_VT
#if defined(CONFIG_VGA_CONSOLE)
conswitchp = &vga_con;
#elif defined(CONFIG_DUMMY_CONSOLE)
conswitchp = &dummy_con;
#endif
#endif
}
......@@ -173,12 +197,12 @@ int get_cpuinfo(char *buffer)
{
char *p = buffer;
#ifdef CONFIG_CPU_SH3
p += sprintf(p,"cpu family\t: SH3\n"
#if defined(__sh3__)
p += sprintf(p,"cpu family\t: SH-3\n"
"cache size\t: 8K-byte\n");
#elif CONFIG_CPU_SH4
p += sprintf(p,"cpu family\t: SH4\n"
"cache size\t: ??K-byte\n");
#elif defined(__SH4__)
p += sprintf(p,"cpu family\t: SH-4\n"
"cache size\t: 8K-byte/16K-byte\n");
#endif
p += sprintf(p, "bogomips\t: %lu.%02lu\n\n",
(loops_per_sec+2500)/500000,
......
/*
/* $Id: signal.c,v 1.10 1999/09/27 23:25:44 gniibe Exp $
*
* linux/arch/sh/kernel/signal.c
*
* Copyright (C) 1991, 1992 Linus Torvalds
......@@ -9,8 +10,6 @@
*
*/
#include <linux/config.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/smp.h>
......@@ -24,6 +23,7 @@
#include <linux/stddef.h>
#include <asm/ucontext.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
#define DEBUG_SIG 0
......@@ -50,7 +50,7 @@ sys_sigsuspend(old_sigset_t mask,
recalc_sigpending(current);
spin_unlock_irq(&current->sigmask_lock);
regs.u_regs[0] = -EINTR;
regs.regs[0] = -EINTR;
while (1) {
current->state = TASK_INTERRUPTIBLE;
schedule();
......@@ -80,7 +80,7 @@ sys_rt_sigsuspend(sigset_t *unewset, size_t sigsetsize,
recalc_sigpending(current);
spin_unlock_irq(&current->sigmask_lock);
regs.u_regs[0] = -EINTR;
regs.regs[0] = -EINTR;
while (1) {
current->state = TASK_INTERRUPTIBLE;
schedule();
......@@ -126,7 +126,7 @@ sys_sigaltstack(const stack_t *uss, stack_t *uoss,
unsigned long r6, unsigned long r7,
struct pt_regs regs)
{
return do_sigaltstack(uss, uoss, regs.u_regs[UREG_SP]);
return do_sigaltstack(uss, uoss, regs.sp);
}
......@@ -137,7 +137,7 @@ sys_sigaltstack(const stack_t *uss, stack_t *uoss,
struct sigframe
{
struct sigcontext sc;
/* FPU should come here: SH-3 has no FPU */
/* FPU data should come here: SH-3 has no FPU */
unsigned long extramask[_NSIG_WORDS-1];
char retcode[4];
};
......@@ -158,22 +158,22 @@ restore_sigcontext(struct pt_regs *regs, struct sigcontext *sc, int *r0_p)
{
unsigned int err = 0;
#define COPY(x) err |= __get_user(regs->x, &sc->x)
COPY(u_regs[1]);
COPY(u_regs[2]); COPY(u_regs[3]);
COPY(u_regs[4]); COPY(u_regs[5]);
COPY(u_regs[6]); COPY(u_regs[7]);
COPY(u_regs[8]); COPY(u_regs[9]);
COPY(u_regs[10]); COPY(u_regs[11]);
COPY(u_regs[12]); COPY(u_regs[13]);
COPY(u_regs[14]); COPY(u_regs[15]);
COPY(gbr); COPY(mach);
COPY(macl); COPY(pr);
COPY(sr); COPY(pc);
#define COPY(x) err |= __get_user(regs->x, &sc->sc_##x)
COPY(regs[1]);
COPY(regs[2]); COPY(regs[3]);
COPY(regs[4]); COPY(regs[5]);
COPY(regs[6]); COPY(regs[7]);
COPY(regs[8]); COPY(regs[9]);
COPY(regs[10]); COPY(regs[11]);
COPY(regs[12]); COPY(regs[13]);
COPY(regs[14]); COPY(sp);
COPY(gbr); COPY(mach);
COPY(macl); COPY(pr);
COPY(sr); COPY(pc);
#undef COPY
regs->syscall_nr = -1; /* disable syscall checks */
err |= __get_user(*r0_p, &sc->u_regs[0]);
err |= __get_user(*r0_p, &sc->sc_regs[0]);
return err;
}
......@@ -182,7 +182,7 @@ asmlinkage int sys_sigreturn(unsigned long r4, unsigned long r5,
unsigned long r6, unsigned long r7,
struct pt_regs regs)
{
struct sigframe *frame = (struct sigframe *)regs.u_regs[UREG_SP];
struct sigframe *frame = (struct sigframe *)regs.sp;
sigset_t set;
int r0;
......@@ -213,7 +213,7 @@ asmlinkage int sys_rt_sigreturn(unsigned long r4, unsigned long r5,
unsigned long r6, unsigned long r7,
struct pt_regs regs)
{
struct rt_sigframe *frame = (struct rt_sigframe *)regs.u_regs[UREG_SP];
struct rt_sigframe *frame = (struct rt_sigframe *)regs.sp;
sigset_t set;
stack_t st;
int r0;
......@@ -236,7 +236,7 @@ asmlinkage int sys_rt_sigreturn(unsigned long r4, unsigned long r5,
goto badframe;
/* It is more difficult to avoid calling this function than to
call it and ignore errors. */
do_sigaltstack(&st, NULL, regs.u_regs[UREG_SP]);
do_sigaltstack(&st, NULL, regs.sp);
return r0;
......@@ -255,18 +255,18 @@ setup_sigcontext(struct sigcontext *sc, struct pt_regs *regs,
{
int err = 0;
#define COPY(x) err |= __put_user(regs->x, &sc->x)
COPY(u_regs[0]); COPY(u_regs[1]);
COPY(u_regs[2]); COPY(u_regs[3]);
COPY(u_regs[4]); COPY(u_regs[5]);
COPY(u_regs[6]); COPY(u_regs[7]);
COPY(u_regs[8]); COPY(u_regs[9]);
COPY(u_regs[10]); COPY(u_regs[11]);
COPY(u_regs[12]); COPY(u_regs[13]);
COPY(u_regs[14]); COPY(u_regs[15]);
COPY(gbr); COPY(mach);
COPY(macl); COPY(pr);
COPY(sr); COPY(pc);
#define COPY(x) err |= __put_user(regs->x, &sc->sc_##x)
COPY(regs[0]); COPY(regs[1]);
COPY(regs[2]); COPY(regs[3]);
COPY(regs[4]); COPY(regs[5]);
COPY(regs[6]); COPY(regs[7]);
COPY(regs[8]); COPY(regs[9]);
COPY(regs[10]); COPY(regs[11]);
COPY(regs[12]); COPY(regs[13]);
COPY(regs[14]); COPY(sp);
COPY(gbr); COPY(mach);
COPY(macl); COPY(pr);
COPY(sr); COPY(pc);
#undef COPY
/* non-iBCS2 extensions.. */
......@@ -294,7 +294,7 @@ static void setup_frame(int sig, struct k_sigaction *ka,
int err = 0;
int signal;
frame = get_sigframe(ka, regs->u_regs[UREG_SP], sizeof(*frame));
frame = get_sigframe(ka, regs->sp, sizeof(*frame));
if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
goto give_sigsegv;
......@@ -318,8 +318,8 @@ static void setup_frame(int sig, struct k_sigaction *ka,
regs->pr = (unsigned long) ka->sa.sa_restorer;
} else {
/* This is: mov #__NR_sigreturn,r0 ; trapa #0 */
#ifdef CONFIG_LITTLE_ENDIAN
unsigned long code = 0x00c300e0 | (__NR_sigreturn << 8);
#ifdef __LITTLE_ENDIAN__
unsigned long code = 0xc300e000 | (__NR_sigreturn);
#else
unsigned long code = 0xe000c300 | (__NR_sigreturn << 16);
#endif
......@@ -332,8 +332,8 @@ static void setup_frame(int sig, struct k_sigaction *ka,
goto give_sigsegv;
/* Set up registers for signal handler */
regs->u_regs[UREG_SP] = (unsigned long) frame;
regs->u_regs[4] = signal; /* Arg for signal handler */
regs->sp = (unsigned long) frame;
regs->regs[4] = signal; /* Arg for signal handler */
regs->pc = (unsigned long) ka->sa.sa_handler;
set_fs(USER_DS);
......@@ -343,6 +343,7 @@ static void setup_frame(int sig, struct k_sigaction *ka,
current->comm, current->pid, frame, regs->pc, regs->pr);
#endif
flush_icache_range(regs->pr, regs->pr+4);
return;
give_sigsegv:
......@@ -358,7 +359,7 @@ static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
int err = 0;
int signal;
frame = get_sigframe(ka, regs->u_regs[UREG_SP], sizeof(*frame));
frame = get_sigframe(ka, regs->sp, sizeof(*frame));
if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
goto give_sigsegv;
......@@ -376,9 +377,9 @@ static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
/* Create the ucontext. */
err |= __put_user(0, &frame->uc.uc_flags);
err |= __put_user(0, &frame->uc.uc_link);
err |= __put_user(current->sas_ss_sp, &frame->uc.uc_stack.ss_sp);
err |= __put_user(sas_ss_flags(regs->u_regs[UREG_SP]),
&frame->uc.uc_stack.ss_flags);
err |= __put_user((void *)current->sas_ss_sp,
&frame->uc.uc_stack.ss_sp);
err |= __put_user(sas_ss_flags(regs->sp), &frame->uc.uc_stack.ss_flags);
err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
err |= setup_sigcontext(&frame->uc.uc_mcontext,
regs, set->sig[0]);
......@@ -390,8 +391,8 @@ static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
regs->pr = (unsigned long) ka->sa.sa_restorer;
} else {
/* This is: mov #__NR_sigreturn,r0 ; trapa #0 */
#ifdef CONFIG_LITTLE_ENDIAN
unsigned long code = 0x00c300e0 | (__NR_sigreturn << 8);
#ifdef __LITTLE_ENDIAN__
unsigned long code = 0xc300e000 | (__NR_sigreturn);
#else
unsigned long code = 0xe000c300 | (__NR_sigreturn << 16);
#endif
......@@ -404,8 +405,8 @@ static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
goto give_sigsegv;
/* Set up registers for signal handler */
regs->u_regs[UREG_SP] = (unsigned long) frame;
regs->u_regs[4] = signal; /* Arg for signal handler */
regs->sp = (unsigned long) frame;
regs->regs[4] = signal; /* Arg for signal handler */
regs->pc = (unsigned long) ka->sa.sa_handler;
set_fs(USER_DS);
......@@ -415,6 +416,7 @@ static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
current->comm, current->pid, frame, regs->pc, regs->pr);
#endif
flush_icache_range(regs->pr, regs->pr+4);
return;
give_sigsegv:
......@@ -434,19 +436,19 @@ handle_signal(unsigned long sig, struct k_sigaction *ka,
/* Are we from a system call? */
if (regs->syscall_nr >= 0) {
/* If so, check system call restarting.. */
switch (regs->u_regs[0]) {
switch (regs->regs[0]) {
case -ERESTARTNOHAND:
regs->u_regs[0] = -EINTR;
regs->regs[0] = -EINTR;
break;
case -ERESTARTSYS:
if (!(ka->sa.sa_flags & SA_RESTART)) {
regs->u_regs[0] = -EINTR;
regs->regs[0] = -EINTR;
break;
}
/* fallthrough */
case -ERESTARTNOINTR:
regs->u_regs[0] = regs->syscall_nr;
regs->regs[0] = regs->syscall_nr;
regs->pc -= 2;
}
}
......@@ -577,7 +579,6 @@ int do_signal(struct pt_regs *regs, sigset_t *oldset)
/* NOTREACHED */
}
}
/* Whee! Actually deliver the signal. */
handle_signal(signr, ka, &info, oldset, regs);
return 1;
......@@ -586,10 +587,10 @@ int do_signal(struct pt_regs *regs, sigset_t *oldset)
/* Did we come from a system call? */
if (regs->syscall_nr >= 0) {
/* Restart the system call - no handlers present */
if (regs->u_regs[0] == -ERESTARTNOHAND ||
regs->u_regs[0] == -ERESTARTSYS ||
regs->u_regs[0] == -ERESTARTNOINTR) {
regs->u_regs[0] = regs->syscall_nr;
if (regs->regs[0] == -ERESTARTNOHAND ||
regs->regs[0] == -ERESTARTSYS ||
regs->regs[0] == -ERESTARTNOINTR) {
regs->regs[0] = regs->syscall_nr;
regs->pc -= 2;
}
}
......
/*
* linux/arch/i386/kernel/sys_i386.c
* linux/arch/sh/kernel/sys_sh.c
*
* This file contains various random system calls that
* have a non-standard calling sequence on the Linux/i386
* have a non-standard calling sequence on the Linux/SuperH
* platform.
*
* Taken from i386 version.
*/
#include <linux/errno.h>
......@@ -41,66 +43,32 @@ asmlinkage int sys_pipe(unsigned long * fildes)
return error;
}
/*
* Perform the select(nd, in, out, ex, tv) and mmap() system
* calls. Linux/i386 didn't use to be able to handle more than
* 4 system call parameters, so these system calls used a memory
* block for parameter passing..
*/
struct mmap_arg_struct {
unsigned long addr;
unsigned long len;
unsigned long prot;
unsigned long flags;
unsigned long fd;
unsigned long offset;
};
asmlinkage int old_mmap(struct mmap_arg_struct *arg)
asmlinkage unsigned long
sys_mmap(int fd, unsigned long addr,
unsigned long len, unsigned long prot,
unsigned long flags, unsigned long off)
{
int error = -EFAULT;
struct file * file = NULL;
struct mmap_arg_struct a;
if (copy_from_user(&a, arg, sizeof(a)))
return -EFAULT;
struct file *file = NULL;
down(&current->mm->mmap_sem);
lock_kernel();
if (!(a.flags & MAP_ANONYMOUS)) {
if (!(flags & MAP_ANONYMOUS)) {
error = -EBADF;
file = fget(a.fd);
file = fget(fd);
if (!file)
goto out;
}
a.flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
error = do_mmap(file, a.addr, a.len, a.prot, a.flags, a.offset);
error = do_mmap(file, addr, len, prot, flags, off);
if (file)
fput(file);
out:
unlock_kernel();
up(&current->mm->mmap_sem);
return error;
}
extern asmlinkage int sys_select(int, fd_set *, fd_set *, fd_set *, struct timeval *);
struct sel_arg_struct {
unsigned long n;
fd_set *inp, *outp, *exp;
struct timeval *tvp;
};
asmlinkage int old_select(struct sel_arg_struct *arg)
{
struct sel_arg_struct a;
if (copy_from_user(&a, arg, sizeof(a)))
return -EFAULT;
/* sys_select() does the appropriate kernel locking */
return sys_select(a.n, a.inp, a.outp, a.exp, a.tvp);
return error;
}
/*
......@@ -198,9 +166,6 @@ asmlinkage int sys_ipc (uint call, int first, int second,
return -EINVAL;
}
/*
* Old cruft
*/
asmlinkage int sys_uname(struct old_utsname * name)
{
int err;
......@@ -212,35 +177,6 @@ asmlinkage int sys_uname(struct old_utsname * name)
return err?-EFAULT:0;
}
asmlinkage int sys_olduname(struct oldold_utsname * name)
{
int error;
if (!name)
return -EFAULT;
if (!access_ok(VERIFY_WRITE,name,sizeof(struct oldold_utsname)))
return -EFAULT;
down(&uts_sem);
error = __copy_to_user(&name->sysname,&system_utsname.sysname,__OLD_UTS_LEN);
error |= __put_user(0,name->sysname+__OLD_UTS_LEN);
error |= __copy_to_user(&name->nodename,&system_utsname.nodename,__OLD_UTS_LEN);
error |= __put_user(0,name->nodename+__OLD_UTS_LEN);
error |= __copy_to_user(&name->release,&system_utsname.release,__OLD_UTS_LEN);
error |= __put_user(0,name->release+__OLD_UTS_LEN);
error |= __copy_to_user(&name->version,&system_utsname.version,__OLD_UTS_LEN);
error |= __put_user(0,name->version+__OLD_UTS_LEN);
error |= __copy_to_user(&name->machine,&system_utsname.machine,__OLD_UTS_LEN);
error |= __put_user(0,name->machine+__OLD_UTS_LEN);
up(&uts_sem);
error = error ? -EFAULT : 0;
return error;
}
asmlinkage int sys_pause(void)
{
current->state = TASK_INTERRUPTIBLE;
......
unsigned char root_fs_image[]
__attribute__((__section__(".data.disk_image")))
= {
0x1f,0x8b,0x08,0x08,0x5d,0xd5,0xc7,0x37,0x00,0x03,0x72,0x2e,0x62,0x69,0x6e,0x00,
0xed,0xdc,0x3f,0x6c,0x1b,0x55,0x1c,0xc0,0xf1,0xdf,0xf9,0xdc,0x04,0x27,0x69,0xb1,
0x93,0x14,0x10,0x48,0x91,0xd3,0x02,0x4d,0x8a,0xb8,0xd4,0x21,0x8a,0x09,0x02,0x02,
0xb5,0x4a,0xab,0x52,0x65,0x69,0x11,0x03,0x42,0xc2,0xb1,0x8f,0xc4,0x92,0xe3,0x03,
0x9f,0x8d,0xca,0x14,0xd8,0x88,0x2a,0xa6,0x0e,0x88,0xa9,0x20,0xb1,0x87,0x8d,0xa5,
0x5b,0x86,0xcc,0x90,0x78,0x77,0xd4,0x60,0x75,0xa9,0x40,0xe2,0xdf,0xd0,0x42,0x78,
0x77,0xef,0x9c,0x38,0x24,0x72,0x49,0x20,0xc9,0x70,0xdf,0x8f,0xf2,0xf3,0xd9,0x77,
0xbf,0xf3,0xbb,0x67,0xbf,0xdf,0xf9,0x4f,0xf4,0x2c,0x02,0x20,0xac,0xe2,0x2a,0x5e,
0x53,0x61,0xaa,0x18,0x0e,0xd6,0x19,0xad,0x09,0x49,0x1d,0x5e,0x5e,0x7d,0x75,0x39,
0xfd,0x6c,0x6d,0x39,0x6d,0x48,0xbf,0x5c,0xfd,0xc9,0xf0,0xf3,0x56,0xd5,0x3a,0x99,
0xba,0xf7,0xd0,0x76,0x8a,0x53,0x5f,0xc4,0xdf,0xcd,0x24,0x56,0x6e,0x9e,0x59,0xb9,
0x30,0x3e,0x73,0x3b,0xf7,0x3f,0x76,0x01,0xc0,0x3e,0x79,0x75,0x1f,0x55,0x71,0x4c,
0x74,0xfd,0x47,0x8f,0xf6,0x70,0x00,0x1c,0xa2,0x8d,0x8d,0x49,0x6f,0xf1,0xc9,0x06,
0x00,0x00,0x08,0x8d,0xe6,0xfb,0x00,0xef,0x73,0x7c,0x33,0x0e,0xf3,0xfd,0xc7,0xbd,
0xd7,0xc5,0xff,0xd0,0x31,0x5a,0x5b,0x4e,0xf7,0x05,0xa1,0xb7,0x1c,0x93,0x48,0x4b,
0x5e,0xe7,0x61,0x1e,0x14,0x80,0x50,0xf0,0xcf,0x3f,0xe7,0x76,0x3b,0xff,0x45,0xe4,
0x89,0x96,0xbc,0x47,0x54,0xc4,0x54,0x74,0xa9,0xe8,0x56,0xd1,0xa3,0xe2,0xb8,0x8a,
0x13,0x2a,0x1e,0x15,0xfd,0xfd,0x68,0x42,0x45,0xaf,0x8a,0xbe,0xbd,0xb6,0xaf,0xce,
0x7f,0x7f,0xaa,0x76,0xef,0x07,0xd1,0x6c,0xbf,0xf5,0xfc,0xd7,0xbf,0xf7,0xae,0x6d,
0x32,0xda,0x6c,0x6b,0xb6,0x7f,0x56,0x9d,0x77,0x4f,0x05,0xb1,0x5b,0xfb,0x27,0x0f,
0xa8,0xfd,0x6f,0x06,0xf5,0xf2,0xfe,0x8e,0xfe,0xff,0x63,0xaf,0xff,0xf0,0xc5,0x54,
0xdb,0xfe,0x7f,0x7a,0xeb,0xf2,0x15,0x53,0xe4,0xe6,0xaa,0x7e,0xed,0x19,0x0b,0xda,
0xbf,0x75,0xd9,0xd8,0xd6,0xff,0xc7,0xf6,0xdf,0x7c,0xdb,0xf6,0x37,0xbe,0xd6,0x63,
0x6a,0xe7,0xe3,0xbf,0x7d,0x2f,0xcb,0x1a,0x29,0x16,0x4a,0xd5,0xeb,0xe5,0x7d,0x7c,
0x73,0xde,0xae,0x7d,0xaf,0x8f,0x3d,0x2a,0xc3,0xda,0xbc,0x1e,0x51,0x6d,0xe9,0x31,
0xde,0xaf,0x8e,0xac,0xe8,0xb8,0x95,0xe7,0xde,0x77,0xaa,0xa5,0xbc,0x1e,0xf3,0x3d,
0x62,0x4a,0xde,0xfe,0xc8,0x1f,0xfb,0x3d,0xea,0x49,0x71,0xa7,0x0b,0x25,0x6f,0xfc,
0xdf,0x36,0x3b,0x65,0xdf,0x07,0x08,0xe0,0x48,0xe8,0xd7,0xb2,0xad,0xfa,0xff,0xd5,
0xd4,0xf5,0x0f,0x20,0x24,0xf8,0xa7,0x1f,0x10,0x5e,0xd4,0x3f,0x10,0x5e,0xd4,0x3f,
0x10,0x5e,0xd4,0x3f,0x10,0x5e,0xd4,0x3f,0x10,0x5e,0xd4,0x3f,0x10,0x5e,0xd4,0x3f,
0x10,0x5e,0xd4,0x3f,0x10,0x5e,0xd4,0x3f,0x10,0x5e,0xd4,0x3f,0x10,0x5e,0xd4,0x3f,
0x10,0x4a,0x7a,0x4e,0xcf,0xce,0xf9,0x3f,0xde,0xbc,0xb6,0xbb,0x66,0xa7,0xe4,0x9c,
0x92,0xeb,0x14,0xed,0xa3,0x3d,0x48,0x00,0x07,0x42,0xcf,0xe3,0xdb,0x59,0xff,0xde,
0x7c,0xd6,0xbb,0x66,0x54,0x0a,0xa5,0x42,0xe5,0x68,0x8f,0x10,0xc0,0x41,0x99,0xbf,
0x70,0xe5,0x0d,0x23,0xd2,0x32,0x43,0x38,0x22,0x67,0xc5,0x9f,0x32,0x1c,0xff,0x4a,
0x2d,0xc7,0xd4,0xd5,0x75,0x7f,0xfd,0x98,0x24,0xd5,0xb6,0x21,0x89,0xf9,0x53,0xe1,
0x83,0x1d,0xe2,0x41,0x18,0xd3,0x3a,0xfc,0x9f,0x11,0x34,0x74,0x78,0xb7,0x07,0x83,
0xd8,0xd4,0xe1,0x27,0x67,0xd6,0x8d,0x46,0x7c,0xa4,0x51,0x8f,0xd6,0x3a,0x4a,0xbf,
0x2c,0xc9,0x7b,0xa7,0x3f,0x33,0x16,0xcc,0x5a,0xb4,0x61,0xd6,0xa3,0x4b,0xe2,0xdc,
0x91,0xee,0xd2,0xef,0x22,0x89,0xa7,0x55,0xbc,0x38,0xd2,0x98,0xff,0xb9,0x1e,0xf1,
0xb2,0xa6,0xcd,0xf3,0x89,0x85,0xce,0x75,0xa3,0xf6,0x78,0xe3,0xa4,0x97,0x27,0xb1,
0xc5,0xbf,0x24,0x76,0x6a,0x68,0xa1,0x7b,0xa5,0x6f,0x4d,0x3e,0x34,0x52,0xe9,0x1b,
0x0f,0xf2,0xa7,0x7f,0x34,0xea,0xcf,0x2c,0xc9,0xe2,0x1f,0x6b,0x6a,0xfb,0xf7,0x27,
0xd6,0x0d,0xab,0xd7,0xbe,0xb3,0x26,0x03,0x89,0x86,0x0c,0xf4,0xd6,0x33,0x03,0x7d,
0x4b,0xf2,0x43,0xd7,0xba,0x21,0xb1,0x5a,0xac,0x71,0xdc,0xbb,0x17,0x2f,0x4f,0xed,
0x7b,0xe6,0xc6,0x83,0xc5,0xdf,0xbc,0xf5,0xaa,0xcd,0x97,0xe5,0x9d,0xcf,0xe7,0x55,
0xbf,0x2a,0xf2,0xdd,0x93,0x1b,0xea,0xf6,0xb5,0x6b,0xb3,0x05,0x37,0xa9,0xfe,0xae,
0x56,0x3f,0xb0,0xcb,0x97,0x06,0xbd,0xe9,0xda,0x32,0x39,0xd9,0x25,0xae,0x33,0x67,
0x57,0x66,0x0b,0xa5,0x99,0x64,0xb5,0x54,0x75,0xab,0xd9,0xa2,0x65,0x59,0xde,0xc6,
0x4b,0x76,0xb1,0xe8,0x24,0xdf,0x76,0xca,0xc5,0xbc,0x97,0x7c,0x31,0x93,0x79,0x29,
0x39,0x74,0x71,0xea,0xad,0xe1,0xa4,0x3d,0x93,0x73,0x9f,0x1f,0xb5,0x26,0x52,0xd6,
0xf8,0x78,0x32,0x35,0x31,0x31,0x71,0xee,0x85,0xd4,0x58,0x72,0xc8,0x5f,0x9d,0xb2,
0x52,0xd6,0x68,0xb2,0x6c,0x17,0xed,0xac,0x6b,0x0f,0x8b,0x58,0xee,0xc7,0x73,0x95,
0xec,0xb4,0x5a,0x56,0xca,0x7a,0x39,0xdb,0xbc,0x56,0xb1,0xaf,0x57,0xc4,0x2a,0x3b,
0xf9,0x6c,0x25,0x2b,0x96,0xbe,0xcc,0x55,0x9c,0xb2,0xab,0x6e,0xe8,0xc5,0xb4,0xab,
0x2e,0x72,0xce,0xdc,0x9c,0x5d,0xda,0xd3,0xe9,0xfb,0xa9,0xe0,0xf9,0xeb,0xf0,0xfb,
0x2f,0xe2,0xc5,0xb7,0x2d,0xdb,0x9b,0x9f,0x14,0x07,0x83,0xbc,0x88,0x7e,0x9e,0x0c,
0x15,0xf2,0xea,0x2e,0x79,0xc3,0x41,0x9e,0xa9,0xc7,0x81,0xd1,0x3a,0x16,0x64,0x6b,
0x1c,0xc9,0xc8,0xd6,0xb8,0x69,0x9b,0x37,0xfe,0x2f,0xf3,0x5e,0x11,0xfd,0x93,0x0d,
0x0f,0x6b,0xf7,0xbc,0x6c,0x9b,0x1e,0xef,0xe7,0xa5,0x77,0xc9,0x4b,0xe8,0xfb,0xda,
0x5c,0xfd,0xa5,0xba,0x78,0x73,0x97,0x3c,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
0x00,0x00,0x00,0x00,0x00,0x87,0xe8,0x6f,0x20,0x01,0xec,0xc5,0x00,0x00,0x01,0x00,
};
/*
/* $Id: time.c,v 1.2 1999/10/11 13:12:02 gniibe Exp $
*
* linux/arch/sh/kernel/time.c
*
* Copyright (C) 1999 Niibe Yutaka
* Copyright (C) 1999 Tetsuya Okada & Niibe Yutaka
*
* Some code taken from i386 version.
* Copyright (C) 1991, 1992, 1995 Linus Torvalds
*/
#include <linux/config.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/kernel.h>
......@@ -28,6 +31,11 @@
#include <linux/timex.h>
#include <linux/irq.h>
#define TMU_TOCR_INIT 0x00
#define TMU0_TCR_INIT 0x0020
#define TMU_TSTR_INIT 1
#if defined(__sh3__)
#define TMU_TOCR 0xfffffe90 /* Byte access */
#define TMU_TSTR 0xfffffe92 /* Byte access */
......@@ -35,13 +43,63 @@
#define TMU0_TCNT 0xfffffe98 /* Long access */
#define TMU0_TCR 0xfffffe9c /* Word access */
#define TMU_TOCR_INIT 0x00
#define TMU0_TCR_INIT 0x0020
#define TMU_TSTR_INIT 1
#define CLOCK_MHZ (60/4)
#define INTERVAL 37500 /* (1000000*CLOCK_MHZ/HZ/2) ??? */
/* SH-3 RTC */
#define R64CNT 0xfffffec0
#define RSECCNT 0xfffffec2
#define RMINCNT 0xfffffec4
#define RHRCNT 0xfffffec6
#define RWKCNT 0xfffffec8
#define RDAYCNT 0xfffffeca
#define RMONCNT 0xfffffecc
#define RYRCNT 0xfffffece
#define RSECAR 0xfffffed0
#define RMINAR 0xfffffed2
#define RHRAR 0xfffffed4
#define RWKAR 0xfffffed6
#define RDAYAR 0xfffffed8
#define RMONAR 0xfffffeda
#define RCR1 0xfffffedc
#define RCR2 0xfffffede
#elif defined(__SH4__)
#define TMU_TOCR 0xffd80000 /* Byte access */
#define TMU_TSTR 0xffd80004 /* Byte access */
#define TMU0_TCOR 0xffd80008 /* Long access */
#define TMU0_TCNT 0xffd8000c /* Long access */
#define TMU0_TCR 0xffd80010 /* Word access */
#define INTERVAL 83333
/* SH-4 RTC */
#define R64CNT 0xffc80000
#define RSECCNT 0xffc80004
#define RMINCNT 0xffc80008
#define RHRCNT 0xffc8000c
#define RWKCNT 0xffc80010
#define RDAYCNT 0xffc80014
#define RMONCNT 0xffc80018
#define RYRCNT 0xffc8001c /* 16bit */
#define RSECAR 0xffc80020
#define RMINAR 0xffc80024
#define RHRAR 0xffc80028
#define RWKAR 0xffc8002c
#define RDAYAR 0xffc80030
#define RMONAR 0xffc80034
#define RCR1 0xffc80038
#define RCR2 0xffc8003c
#endif
#ifndef BCD_TO_BIN
#define BCD_TO_BIN(val) ((val)=((val)&15) + ((val)>>4)*10)
#endif
#ifndef BIN_TO_BCD
#define BIN_TO_BCD(val) ((val)=(((val)/10)<<4) + (val)%10)
#endif
extern rwlock_t xtime_lock;
#define TICK_SIZE tick
......@@ -82,14 +140,48 @@ void do_settimeofday(struct timeval *tv)
write_unlock_irq(&xtime_lock);
}
/*
*/
static int set_rtc_time(unsigned long nowtime)
{
/* XXX should be implemented XXXXXXXXXX */
int retval = -1;
#ifdef CONFIG_SH_CPU_RTC
int retval = 0;
int real_seconds, real_minutes, cmos_minutes;
ctrl_outb(2, RCR2); /* reset pre-scaler & stop RTC */
cmos_minutes = ctrl_inb(RMINCNT);
BCD_TO_BIN(cmos_minutes);
/*
* since we're only adjusting minutes and seconds,
* don't interfere with hour overflow. This avoids
* messing with unknown time zones but requires your
* RTC not to be off by more than 15 minutes
*/
real_seconds = nowtime % 60;
real_minutes = nowtime / 60;
if (((abs(real_minutes - cmos_minutes) + 15)/30) & 1)
real_minutes += 30; /* correct for half hour time zone */
real_minutes %= 60;
if (abs(real_minutes - cmos_minutes) < 30) {
BIN_TO_BCD(real_seconds);
BIN_TO_BCD(real_minutes);
ctrl_outb(real_seconds, RSECCNT);
ctrl_outb(real_minutes, RMINCNT);
} else {
printk(KERN_WARNING
"set_rtc_time: can't update from %d to %d\n",
cmos_minutes, real_minutes);
retval = -1;
}
ctrl_outb(2, RCR2); /* start RTC */
return retval;
#else
/* XXX should support other clock devices? */
return -1;
#endif
}
/* last time the RTC clock got updated */
......@@ -131,14 +223,12 @@ static inline void do_timer_interrupt(int irq, void *dev_id, struct pt_regs *reg
*/
static void timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
{
unsigned long __dummy;
unsigned long timer_status;
/* Clear UNF bit */
asm volatile("mov.w %1,%0\n\t"
"and %2,%0\n\t"
"mov.w %0,%1"
: "=&z" (__dummy)
: "m" (__m(TMU0_TCR)), "r" (~0x100));
timer_status = ctrl_inw(TMU0_TCR);
timer_status &= ~0x100;
ctrl_outw(timer_status, TMU0_TCR);
/*
* Here we are in the timer irq handler. We just have irqs locally
......@@ -187,16 +277,67 @@ static inline unsigned long mktime(unsigned int year, unsigned int mon,
static unsigned long get_rtc_time(void)
{
/* XXX not implemented yet */
#ifdef CONFIG_SH_CPU_RTC
unsigned int sec, min, hr, wk, day, mon, yr, yr100;
again:
ctrl_outb(1, RCR1); /* clear CF bit */
do {
sec = ctrl_inb(RSECCNT);
min = ctrl_inb(RMINCNT);
hr = ctrl_inb(RHRCNT);
wk = ctrl_inb(RWKCNT);
day = ctrl_inb(RDAYCNT);
mon = ctrl_inb(RMONCNT);
#if defined(__SH4__)
yr = ctrl_inw(RYRCNT);
yr100 = (yr >> 8);
yr &= 0xff;
#else
yr = ctrl_inb(RYRCNT);
yr100 = (yr == 0x99) ? 0x19 : 0x20;
#endif
} while ((ctrl_inb(RCR1) & 0x80) != 0);
BCD_TO_BIN(yr100);
BCD_TO_BIN(yr);
BCD_TO_BIN(mon);
BCD_TO_BIN(day);
BCD_TO_BIN(hr);
BCD_TO_BIN(min);
BCD_TO_BIN(sec);
if (yr > 99 || mon < 1 || mon > 12 || day > 31 || day < 1 ||
hr > 23 || min > 59 || sec > 59) {
printk(KERN_ERR
"SH RTC: invalid value, resetting to 1 Jan 2000\n");
ctrl_outb(2, RCR2); /* reset, stop */
ctrl_outb(0, RSECCNT);
ctrl_outb(0, RMINCNT);
ctrl_outb(0, RHRCNT);
ctrl_outb(6, RWKCNT);
ctrl_outb(1, RDAYCNT);
ctrl_outb(1, RMONCNT);
#if defined(__SH4__)
ctrl_outw(0x2000, RYRCNT);
#else
ctrl_outb(0, RYRCNT);
#endif
ctrl_outb(1, RCR2); /* start */
goto again;
}
return mktime(yr100 * 100 + yr, mon, day, hr, min, sec);
#else
/* XXX should support other clock devices? */
return 0;
#endif
}
static struct irqaction irq0 = { timer_interrupt, SA_INTERRUPT, 0, "timer", NULL, NULL};
void __init time_init(void)
{
unsigned long __dummy;
xtime.tv_sec = get_rtc_time();
xtime.tv_usec = 0;
......@@ -204,19 +345,12 @@ void __init time_init(void)
setup_irq(TIMER_IRQ, &irq0);
/* Start TMU0 */
asm volatile("mov %1,%0\n\t"
"mov.b %0,%2 ! external clock input\n\t"
"mov %3,%0\n\t"
"mov.w %0,%4 ! enable timer0 interrupt\n\t"
"mov.l %5,%6\n\t"
"mov.l %5,%7\n\t"
"mov %8,%0\n\t"
"mov.b %0,%9"
: "=&z" (__dummy)
: "i" (TMU_TOCR_INIT), "m" (__m(TMU_TOCR)),
"i" (TMU0_TCR_INIT), "m" (__m(TMU0_TCR)),
"r" (INTERVAL), "m" (__m(TMU0_TCOR)), "m" (__m(TMU0_TCNT)),
"i" (TMU_TSTR_INIT), "m" (__m(TMU_TSTR)));
ctrl_outb(TMU_TOCR_INIT,TMU_TOCR);
ctrl_outw(TMU0_TCR_INIT,TMU0_TCR);
ctrl_outl(INTERVAL,TMU0_TCOR);
ctrl_outl(INTERVAL,TMU0_TCNT);
ctrl_outb(TMU_TSTR_INIT,TMU_TSTR);
#if 0
/* Start RTC */
asm volatile("");
......
/*
/* $Id: traps.c,v 1.3 1999/09/21 14:37:19 gniibe Exp $
*
* linux/arch/sh/traps.c
*
* SuperH version: Copyright (C) 1999 Niibe Yutaka
......@@ -56,10 +57,6 @@ asmlinkage void do_##name(unsigned long r4, unsigned long r5, \
#define VMALLOC_OFFSET (8*1024*1024)
#define MODULE_RANGE (8*1024*1024)
static void show_registers(struct pt_regs *regs)
{/* Not implemented yet. */
}
spinlock_t die_lock;
void die(const char * str, struct pt_regs * regs, long err)
......@@ -67,7 +64,7 @@ void die(const char * str, struct pt_regs * regs, long err)
console_verbose();
spin_lock_irq(&die_lock);
printk("%s: %04lx\n", str, err & 0xffff);
show_registers(regs);
show_regs(regs);
spin_unlock_irq(&die_lock);
do_exit(SIGSEGV);
}
......
......@@ -6,9 +6,7 @@
$(CC) -D__ASSEMBLY__ $(AFLAGS) -traditional -c $< -o $*.o
L_TARGET = lib.a
# L_OBJS = checksum.o old-checksum.o semaphore.o delay.o \
# usercopy.o getuser.o putuser.o
L_OBJS = delay.o memcpy.o memset.o memmove.o csum_partial_copy.o \
wordcopy.o checksum.o # usercopy.o getuser.o putuser.o
L_OBJS = delay.o memcpy.o memset.o memmove.o old-checksum.o \
checksum.o
include $(TOPDIR)/Rules.make
/* $Id: checksum.S,v 1.1 1999/09/18 16:56:53 gniibe Exp $
*
* INET An implementation of the TCP/IP protocol suite for the LINUX
* operating system. INET is implemented using the BSD Socket
* interface as the means of communication with the user level.
*
* IP/TCP/UDP checksumming routines
*
* Authors: Jorge Cwik, <jorge@laser.satlink.net>
* Arnt Gulbrandsen, <agulbra@nvg.unit.no>
* Tom May, <ftom@netcom.com>
* Pentium Pro/II routines:
* Alexander Kjeldaas <astor@guardian.no>
* Finn Arne Gangstad <finnag@guardian.no>
* Lots of code moved from tcp.c and ip.c; see those files
* for more names.
*
* Changes: Ingo Molnar, converted csum_partial_copy() to 2.1 exception
* handling.
* Andi Kleen, add zeroing on error
* converted to pure assembler
*
* SuperH version: Copyright (C) 1999 Niibe Yutaka
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <asm/errno.h>
#include <linux/linkage.h>
/*
* computes a partial checksum, e.g. for TCP/UDP fragments
*/
/*
* unsigned int csum_partial(const unsigned char *buf, int len,
* unsigned int sum);
*/
.text
ENTRY(csum_partial)
/*
* Experiments with Ethernet and SLIP connections show that buff
* is aligned on either a 2-byte or 4-byte boundary. We get at
* least a twofold speedup on 486 and Pentium if it is 4-byte aligned.
* Fortunately, it is easy to convert 2-byte alignment to 4-byte
* alignment for the unrolled loop.
*/
mov r5,r1
mov r4,r0
tst #2,r0 ! Check alignment.
bt 2f ! Jump if alignment is ok.
!
add #-2,r5 ! Alignment uses up two bytes.
cmp/pz r5 !
bt/s 1f ! Jump if we had at least two bytes.
clrt
bra 6f
add #2,r5 ! r5 was < 2. Deal with it.
1:
mov.w @r4+,r0
extu.w r0,r0
addc r0,r6
bf 2f
add #1,r6
2:
mov #-5,r0
shld r0,r5
tst r5,r5
bt/s 4f ! if it's =0, go to 4f
clrt
3:
mov.l @r4+,r0
addc r0,r6
mov.l @r4+,r0
addc r0,r6
mov.l @r4+,r0
addc r0,r6
mov.l @r4+,r0
addc r0,r6
mov.l @r4+,r0
addc r0,r6
mov.l @r4+,r0
addc r0,r6
mov.l @r4+,r0
addc r0,r6
mov.l @r4+,r0
addc r0,r6
movt r0
dt r5
bf/s 3b
cmp/eq #1,r0
mov #0,r0
addc r0,r6
4:
mov r1,r5
mov #0x1c,r0
and r0,r5
tst r5,r5
bt/s 6f
clrt
shlr2 r5
5:
mov.l @r4+,r0
addc r0,r6
movt r0
dt r5
bf/s 5b
cmp/eq #1,r0
mov #0,r0
addc r0,r6
6:
mov r1,r5
mov #3,r0
and r0,r5
tst r5,r5
bt 9f ! if it's =0 go to 9f
mov #2,r1
cmp/hs r1,r5
bf 7f
mov.w @r4+,r0
extu.w r0,r0
cmp/eq r1,r5
bt/s 8f
clrt
shll16 r0
addc r0,r6
7:
mov.b @r4+,r0
extu.b r0,r0
8:
addc r0,r6
mov #0,r0
addc r0,r6
9:
rts
mov r6,r0
/*
unsigned int csum_partial_copy_generic (const char *src, char *dst, int len,
int sum, int *src_err_ptr, int *dst_err_ptr)
*/
/*
 * Copy from src while checksumming, otherwise like csum_partial.
 *
 * The macros SRC and DST specify the type of access for the instruction,
 * so that a custom exception handler can be called for each access type.
*
* FIXME: could someone double-check whether I haven't mixed up some SRC and
* DST definitions? It's damn hard to trigger all cases. I hope I got
* them all but there's no guarantee.
*/
#define SRC(y...) \
9999: y; \
.section __ex_table, "a"; \
.long 9999b, 6001f ; \
.previous
#define DST(y...) \
9999: y; \
.section __ex_table, "a"; \
.long 9999b, 6002f ; \
.previous
ENTRY(csum_partial_copy_generic)
mov.l r5,@-r15
mov.l r6,@-r15
mov #2,r0
tst r0,r5 ! Check alignment.
bt 2f ! Jump if alignment is ok.
add #-2,r6 ! Alignment uses up two bytes.
cmp/pz r6 ! Jump if we had at least two bytes.
bt/s 1f
clrt
bra 4f
add #2,r6 ! r6 was < 2. Deal with it.
SRC(1: mov.w @r4+,r0 )
DST( mov.w r0,@r5 )
add #2,r5
extu.w r0,r0
addc r0,r7
mov #0,r0
addc r0,r7
2:
mov r6,r2
mov #-5,r0
shld r0,r6
tst r6,r6
bf/s 2f
clrt
SRC(1: mov.l @r4+,r0 )
SRC( mov.l @r4+,r1 )
addc r0,r7
DST( mov.l r0,@r5 )
add #4,r5
addc r1,r7
DST( mov.l r1,@r5 )
add #4,r5
SRC( mov.l @r4+,r0 )
SRC( mov.l @r4+,r1 )
addc r0,r7
DST( mov.l r0,@r5 )
add #4,r5
addc r1,r7
DST( mov.l r1,@r5 )
add #4,r5
SRC( mov.l @r4+,r0 )
SRC( mov.l @r4+,r1 )
addc r0,r7
DST( mov.l r0,@r5 )
add #4,r5
addc r1,r7
DST( mov.l r1,@r5 )
add #4,r5
SRC( mov.l @r4+,r0 )
SRC( mov.l @r4+,r1 )
addc r0,r7
DST( mov.l r0,@r5 )
add #4,r5
addc r1,r7
DST( mov.l r1,@r5 )
add #4,r5
movt r0
dt r6
bf/s 1b
cmp/eq #1,r0
mov #0,r0
addc r0,r7
2: mov r2,r6
mov #0x1c,r0
and r0,r6
cmp/pl r6
bf/s 4f
clrt
shlr2 r6
SRC(3: mov.l @r4+,r0 )
addc r0,r7
DST( mov.l r0,@r5 )
add #4,r5
movt r0
dt r6
bf/s 3b
cmp/eq #1,r0
mov #0,r0
addc r0,r7
4: mov r2,r6
mov #3,r0
and r0,r6
cmp/pl r6
bf 7f
mov #2,r1
cmp/hs r1,r6
bf 5f
SRC( mov.w @r4+,r0 )
DST( mov.w r0,@r5 )
extu.w r0,r0
add #2,r5
cmp/eq r1,r6
bt/s 6f
clrt
shll16 r0
addc r0,r7
SRC(5: mov.b @r4+,r0 )
DST( mov.b r0,@r5 )
extu.b r0,r0
6: addc r0,r7
mov #0,r0
addc r0,r7
7:
5000:
# Exception handler:
.section .fixup, "ax"
6001:
mov.l @(8,r15),r0 ! src_err_ptr
mov #-EFAULT,r1
mov.l r1,@r0
! zero the complete destination - computing the rest
! is too much work
mov.l @(4,r15),r5 ! dst
mov.l @r15,r6 ! len
mov #0,r7
1: mov.b r7,@r5
dt r6
bf/s 1b
add #1,r5
mov.l 8000f,r0
jmp @r0
nop
.balign 4
8000: .long 5000b
6002:
mov.l @(12,r15),r0 ! dst_err_ptr
mov #-EFAULT,r1
mov.l r1,@r0
mov.l 8001f,r0
jmp @r0
nop
.balign 4
8001: .long 5000b
.previous
add #8,r15
rts
mov r7,r0
/*
* Taken from:
* arch/alpha/lib/checksum.c
*
* This file contains network checksum routines that are better done
* in an architecture-specific manner due to speed..
*/
#include <linux/string.h>
#include <asm/byteorder.h>
static inline unsigned short from64to16(unsigned long long x)
{
/* add up 32-bit words for 33 bits */
x = (x & 0xffffffff) + (x >> 32);
/* add up 16-bit and 17-bit words for 17+c bits */
x = (x & 0xffff) + (x >> 16);
/* add up 16-bit and 2-bit for 16+c bit */
x = (x & 0xffff) + (x >> 16);
/* add up carry.. */
x = (x & 0xffff) + (x >> 16);
return x;
}
/*
* computes the checksum of the TCP/UDP pseudo-header
* returns a 16-bit checksum, already complemented.
*/
unsigned short int csum_tcpudp_magic(unsigned long saddr,
unsigned long daddr,
unsigned short len,
unsigned short proto,
unsigned int sum)
{
return ~from64to16(saddr + daddr + sum +
((unsigned long) ntohs(len) << 16) +
((unsigned long) proto << 8));
}
unsigned int csum_tcpudp_nofold(unsigned long saddr,
unsigned long daddr,
unsigned short len,
unsigned short proto,
unsigned int sum)
{
unsigned long long result;
result = (saddr + daddr + sum +
((unsigned long) ntohs(len) << 16) +
((unsigned long) proto << 8));
/* Fold down to 32 bits so we don't lose in the typedef-less
network stack. */
/* 64 to 33 */
result = (result & 0xffffffff) + (result >> 32);
/* 33 to 32 */
result = (result & 0xffffffff) + (result >> 32);
return result;
}
/*
* Do a 64-bit checksum on an arbitrary memory area..
*
* This isn't a great routine, but it's not _horrible_ either. The
* inner loop could be unrolled a bit further, and there are better
* ways to do the carry, but this is reasonable.
*/
static inline unsigned long do_csum(const unsigned char * buff, int len)
{
int odd, count;
unsigned long long result = 0;
if (len <= 0)
goto out;
odd = 1 & (unsigned long) buff;
if (odd) {
result = *buff << 8;
len--;
buff++;
}
count = len >> 1; /* nr of 16-bit words.. */
if (count) {
if (2 & (unsigned long) buff) {
result += *(unsigned short *) buff;
count--;
len -= 2;
buff += 2;
}
count >>= 1; /* nr of 32-bit words.. */
if (count) {
if (4 & (unsigned long) buff) {
result += *(unsigned int *) buff;
count--;
len -= 4;
buff += 4;
}
count >>= 1; /* nr of 64-bit words.. */
if (count) {
unsigned long carry = 0;
do {
unsigned long w = *(unsigned long *) buff;
count--;
buff += 8;
result += carry;
result += w;
carry = (w > result);
} while (count);
result += carry;
result = (result & 0xffffffff) + (result >> 32);
}
if (len & 4) {
result += *(unsigned int *) buff;
buff += 4;
}
}
if (len & 2) {
result += *(unsigned short *) buff;
buff += 2;
}
}
if (len & 1)
result += *buff;
result = from64to16(result);
if (odd)
result = ((result >> 8) & 0xff) | ((result & 0xff) << 8);
out:
return result;
}
/*
* This is a version of ip_compute_csum() optimized for IP headers,
* which always checksum on 4 octet boundaries.
*/
unsigned short ip_fast_csum(unsigned char * iph, unsigned int ihl)
{
return ~do_csum(iph,ihl*4);
}
/*
* computes the checksum of a memory block at buff, length len,
* and adds in "sum" (32-bit)
*
* returns a 32-bit number suitable for feeding into itself
* or csum_tcpudp_magic
*
* this function must be called with even lengths, except
* for the last fragment, which may be odd
*
* it's best to have buff aligned on a 32-bit boundary
*/
unsigned int csum_partial(const unsigned char * buff, int len, unsigned int sum)
{
unsigned long long result = do_csum(buff, len);
/* add in old sum, and carry.. */
result += sum;
/* 32+c bits -> 32 bits */
result = (result & 0xffffffff) + (result >> 32);
return result;
}
/*
* this routine is used for miscellaneous IP-like checksums, mainly
* in icmp.c
*/
unsigned short ip_compute_csum(unsigned char * buff, int len)
{
return ~from64to16(do_csum(buff,len));
}
/*
* INET An implementation of the TCP/IP protocol suite for the LINUX
* operating system. INET is implemented using the BSD Socket
* interface as the means of communication with the user level.
*
* MIPS specific IP/TCP/UDP checksumming routines
*
* Authors: Ralf Baechle, <ralf@waldorf-gmbh.de>
* Lots of code moved from tcp.c and ip.c; see those files
* for more names.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*
* $Id: csum_partial_copy.c,v 1.2 1998/09/16 13:29:32 ralf Exp $
*/
#include <net/checksum.h>
#include <linux/types.h>
#include <asm/byteorder.h>
#include <asm/string.h>
#include <asm/uaccess.h>
/*
* copy while checksumming, otherwise like csum_partial
*/
unsigned int csum_partial_copy(const char *src, char *dst,
int len, unsigned int sum)
{
/*
* It's 2:30 am and I don't feel like doing it real ...
* This is lots slower than the real thing (tm)
*/
sum = csum_partial(src, len, sum);
memcpy(dst, src, len);
return sum;
}
/*
* Copy from userspace and compute checksum. If we catch an exception
* then zero the rest of the buffer.
*/
unsigned int csum_partial_copy_from_user (const char *src, char *dst,
int len, unsigned int sum,
int *err_ptr)
{
int missing;
missing = copy_from_user(dst, src, len);
if (missing) {
memset(dst + len - missing, 0, missing);
*err_ptr = -EFAULT;
}
return csum_partial(dst, len, sum);
}
/*
* Copy to userspace and compute checksum.
*/
unsigned int csum_partial_copy_to_user (const char *src, char *dst,
int len, unsigned int sum,
int *err_ptr)
{
sum = csum_partial(src, len, sum);
if (copy_to_user(dst, src, len)) {
*err_ptr = -EFAULT;
return sum;
}
return sum;
}
! Taken from newlib-1.8.0
/* $Id: memcpy.S,v 1.3 1999/09/28 11:32:48 gniibe Exp $
*
* "memcpy" implementation of SuperH
*
* Copyright (C) 1999 Niibe Yutaka
*
*/
!
! Fast SH memcpy
!
! by Toshiyasu Morita (tm@netcom.com)
! hacked by Jörn Rennecke (amylaar@cygnus.co.uk)
!
! Entry: r4: destination pointer
! r5: source pointer
! r6: byte count
!
! Exit: r0: destination pointer
! r1-r7: trashed
!
! Notes: Usually one wants to do small reads and write a longword, but
! unfortunately it is difficult in some cases to concatenate bytes
! into a longword on the SH, so this does a longword read and small
! writes.
!
! This implementation makes two assumptions about how it is called:
!
! 1.: If the byte count is nonzero, the address of the last byte to be
! copied is unsigned greater than the address of the first byte to
! be copied. This could be easily swapped for a signed comparison,
! but the algorithm used needs some comparison.
!
! 2.: When there are two or three bytes in the last word of an 11-or-more
! byte memory chunk to be copied, the rest of the word can be read
! without side effects.
! This could easily be changed by increasing the minimum size of
! a fast memcpy and making the amount subtracted from r7 before
! L_2l_loop be 2; however, this would cost a few extra cycles on average.
!
/*
* void *memcpy(void *dst, const void *src, size_t n);
 * The memory areas of DST and SRC are assumed not to overlap.
*/
#include <linux/linkage.h>
ENTRY(memcpy)
! Big endian version copies with decreasing addresses.
mov r4,r0
add r6,r0
sub r4,r5
mov #11,r1
cmp/hs r1,r6
bf/s L_small
tst r6,r6
bt/s 9f ! if n=0, do nothing
mov r4,r0
sub r4,r5 ! From here, r5 has the distance to r0
add r6,r0 ! From here, r0 points the end of copying point
mov #12,r1
cmp/gt r6,r1
bt/s 7f ! if it's too small, copy a byte at once
add #-1,r5
mov r5,r3
add r0,r3
shlr r3
bt/s L_even
mov r4,r7
mov.b @(r0,r5),r2
add #-1,r3
mov.b r2,@-r0
L_even:
tst #1,r0
add #-1,r5
bf/s L_odddst
add #8,r7
tst #2,r0
bt L_al4dst
add #-1,r3
mov.w @(r0,r5),r1
mov.w r1,@-r0
L_al4dst:
shlr r3
bt L_al4both
mov.w @(r0,r5),r1
swap.w r1,r1
add #4,r7
add #-4,r5
.align 2
L_2l_loop:
mov.l @(r0,r5),r2
xtrct r2,r1
mov.l r1,@-r0
cmp/hs r7,r0
mov.l @(r0,r5),r1
xtrct r1,r2
mov.l r2,@-r0
bt L_2l_loop
bra L_cleanup
add #5,r5
add #1,r5
! From here, r6 is free
!
! r4 --> [ ... ] DST [ ... ] SRC
! [ ... ] [ ... ]
! : :
! r0 --> [ ... ] r0+r5 --> [ ... ]
!
!
mov r5,r1
mov #3,r2
and r2,r1
shll2 r1
mov r0,r3 ! Save the value on R0 to R3
mova jmptable,r0
add r1,r0
mov.l @r0,r1
jmp @r1
mov r3,r0 ! and back to R0
.balign 4
jmptable:
.long case0
.long case1
.long case2
.long case3
nop ! avoid nop in executed code.
L_al4both:
add #-2,r5
.align 2
L_al4both_loop:
mov.l @(r0,r5),r1
cmp/hs r7,r0
bt/s L_al4both_loop
! copy a byte at once
7: mov r4,r2
add #1,r2
8:
cmp/hi r2,r0
mov.b @(r0,r5),r1
bt/s 8b ! while (r0>r2)
mov.b r1,@-r0
9:
rts
nop
case0:
!
! GHIJ KLMN OPQR --> GHIJ KLMN OPQR
!
! First, align to long word boundary
mov r0,r3
and r2,r3
tst r3,r3
bt/s 2f
add #-4,r5
add #3,r5
1: dt r3
mov.b @(r0,r5),r1
bf/s 1b
mov.b r1,@-r0
!
add #-3,r5
2: ! Second, copy a long word at once
mov r4,r2
add #7,r2
3: mov.l @(r0,r5),r1
cmp/hi r2,r0
bt/s 3b
mov.l r1,@-r0
bra L_cleanup
!
! Third, copy a byte at once, if necessary
cmp/eq r4,r0
bt/s 9b
add #3,r5
bra 8b
add #-6,r2
nop ! avoid nop in executed code.
L_odddst:
shlr r3
bt L_al4src
mov.w @(r0,r5),r1
mov.b r1,@-r0
shlr8 r1
mov.b r1,@-r0
L_al4src:
add #-2,r5
.align 2
L_odd_loop:
mov.l @(r0,r5),r2
cmp/hs r7,r0
mov.b r2,@-r0
shlr8 r2
mov.w r2,@-r0
shlr16 r2
mov.b r2,@-r0
bt L_odd_loop
add #3,r5
L_cleanup:
L_small:
case1:
!
! GHIJ KLMN OPQR --> ...G HIJK LMNO PQR.
!
! First, align to long word boundary
mov r0,r3
and r2,r3
tst r3,r3
bt/s 2f
add #-1,r5
1: dt r3
mov.b @(r0,r5),r1
bf/s 1b
mov.b r1,@-r0
!
2: ! Second, read a long word and write a long word at once
mov.l @(r0,r5),r1
add #-4,r5
mov r4,r2
add #7,r2
!
#ifdef __LITTLE_ENDIAN__
3: mov r1,r3 ! RQPO
shll16 r3
shll8 r3 ! Oxxx
mov.l @(r0,r5),r1 ! NMLK
mov r1,r6
shlr8 r6 ! xNML
or r6,r3 ! ONML
cmp/hi r2,r0
bt/s 3b
mov.l r3,@-r0
#else
3: mov r1,r3 ! OPQR
shlr16 r3
shlr8 r3 ! xxxO
mov.l @(r0,r5),r1 ! KLMN
mov r1,r6
shll8 r6 ! LMNx
or r6,r3 ! LMNO
cmp/hi r2,r0
bt/s 3b
mov.l r3,@-r0
#endif
!
! Third, copy a byte at once, if necessary
cmp/eq r4,r0
bt L_ready
add #1,r4
.align 2
L_cleanup_loop:
mov.b @(r0,r5),r2
bt/s 9b
add #4,r5
bra 8b
add #-6,r2
case2:
!
! GHIJ KLMN OPQR --> ..GH IJKL MNOP QR..
!
! First, align to word boundary
tst #1,r0
bt/s 2f
add #-1,r5
mov.b @(r0,r5),r1
mov.b r1,@-r0
!
2: ! Second, read a word and write a word at once
add #-1,r5
mov r4,r2
add #3,r2
!
3: mov.w @(r0,r5),r1
cmp/hi r2,r0
bt/s 3b
mov.w r1,@-r0
!
! Third, copy a byte at once, if necessary
cmp/eq r4,r0
mov.b r2,@-r0
bf L_cleanup_loop
L_ready:
bt/s 9b
add #1,r5
mov.b @(r0,r5),r1
rts
nop
mov.b r1,@-r0
case3:
!
! GHIJ KLMN OPQR --> .GHI JKLM NOPQ R...
!
! First, align to long word boundary
mov r0,r3
and r2,r3
tst r3,r3
bt/s 2f
add #-1,r5
1: dt r3
mov.b @(r0,r5),r1
bf/s 1b
mov.b r1,@-r0
!
2: ! Second, read a long word and write a long word at once
add #-2,r5
mov.l @(r0,r5),r1
add #-4,r5
mov r4,r2
add #7,r2
!
#ifdef __LITTLE_ENDIAN__
3: mov r1,r3 ! RQPO
shll8 r3 ! QPOx
mov.l @(r0,r5),r1 ! NMLK
mov r1,r6
shlr16 r6
shlr8 r6 ! xxxN
or r6,r3 ! QPON
cmp/hi r2,r0
bt/s 3b
mov.l r3,@-r0
#else
3: mov r1,r3 ! OPQR
shlr8 r3 ! xOPQ
mov.l @(r0,r5),r1 ! KLMN
mov r1,r6
shll16 r6
shll8 r6 ! Nxxx
or r6,r3 ! NOPQ
cmp/hi r2,r0
bt/s 3b
mov.l r3,@-r0
#endif
!
! Third, copy a byte at once, if necessary
cmp/eq r4,r0
bt/s 9b
add #6,r5
bra 8b
add #-6,r2
/* $Id: memmove.S,v 1.2 1999/09/21 12:55:49 gniibe Exp $
*
* "memmove" implementation of SuperH
*
* Copyright (C) 1999 Niibe Yutaka
*
*/
/*
* void *memmove(void *dst, const void *src, size_t n);
* The memory areas may overlap.
*/
#include <linux/linkage.h>
ENTRY(memmove)
mov.l r8,@-r15
mov.l r9,@-r15
mov.l r14,@-r15
sts.l pr,@-r15
add #-28,r15
mov r15,r14
mov.l r4,@r14
mov.l r5,@(4,r14)
mov.l r6,@(8,r14)
mov.l @r14,r1
mov.l r1,@(12,r14)
mov.l @(4,r14),r1
mov.l r1,@(16,r14)
mov.l @(12,r14),r1
mov.l @(16,r14),r2
sub r2,r1
mov.l @(8,r14),r2
cmp/hs r2,r1
bt .L54
bra .L2
nop
.L54:
mov.l @(8,r14),r1
mov #15,r2
cmp/gt r2,r1
bt .LF100
bra .L52
nop
.LF100:
mov.l @(12,r14),r2
neg r2,r1
mov #3,r2
and r1,r2
mov.l @(8,r14),r1
mov r1,r9
sub r2,r9
mov r9,r2
mov.l r2,@(8,r14)
.L4:
mov.l @(12,r14),r2
neg r2,r1
mov #3,r2
and r1,r2
mov.l r2,@(20,r14)
.L7:
mov.l @(20,r14),r1
cmp/pl r1
bt .L9
bra .L6
nop
.align 2
.L9:
mov r14,r2
mov r14,r1
add #24,r1
mov.l @(16,r14),r2
mov.b @r2,r3
mov.b r3,@r1
mov.l @(16,r14),r1
mov r1,r2
add #1,r2
mov.l r2,@(16,r14)
mov.l @(20,r14),r1
mov r1,r2
add #-1,r2
mov.l r2,@(20,r14)
mov.l @(12,r14),r1
mov r14,r2
mov r14,r3
add #24,r3
mov.b @r3,r2
mov.b r2,@r1
mov.l @(12,r14),r1
mov r1,r2
add #1,r2
mov.l r2,@(12,r14)
bra .L7
nop
.align 2
.L8:
.L6:
bra .L5
nop
.align 2
.L10:
bra .L4
nop
.align 2
.L5:
nop
.L11:
mov.l @(16,r14),r1
! if dest > src, call memcpy (it copies in decreasing order)
cmp/hi r5,r4
bf 1f
mov.l 2f,r0
jmp @r0
nop
.balign 4
2: .long SYMBOL_NAME(memcpy)
1:
sub r5,r4 ! From here, r4 has the distance to r0
tst r6,r6
bt/s 9f ! if n=0, do nothing
mov r5,r0
add r6,r5
mov #12,r1
cmp/gt r6,r1
bt/s 8f ! if it's too small, copy a byte at once
add #-1,r4
add #1,r4
!
! [ ... ] DST [ ... ] SRC
! [ ... ] [ ... ]
! : :
! r0+r4--> [ ... ] r0 --> [ ... ]
! : :
! [ ... ] [ ... ]
! r5 -->
!
mov r4,r1
mov #3,r2
and r1,r2
tst r2,r2
bf .L14
mov r15,r2
mov.l @(12,r14),r1
mov.l @(16,r14),r2
mov.l @(8,r14),r7
mov r7,r3
shlr2 r3
mov r1,r4
mov r2,r5
mov r3,r6
mov.l .L46,r8
jsr @r8
nop
bra .L15
nop
.align 2
.L14:
mov r15,r2
mov.l @(12,r14),r1
mov.l @(16,r14),r2
mov.l @(8,r14),r7
mov r7,r3
shlr2 r3
mov r1,r4
mov r2,r5
mov r3,r6
mov.l .L47,r8
jsr @r8
nop
.L15:
mov.l @(8,r14),r1
mov #-4,r2
and r2,r1
mov.l @(16,r14),r2
add r2,r1
mov.l r1,@(16,r14)
mov.l @(8,r14),r1
mov #-4,r2
and r2,r1
mov.l @(12,r14),r2
add r2,r1
mov.l r1,@(12,r14)
mov.l @(8,r14),r1
mov #3,r2
and r1,r2
mov.l r2,@(8,r14)
.L13:
.L52:
bra .L3
nop
.align 2
.L16:
bra .L11
nop
.align 2
.L12:
.L3:
nop
.L17:
mov.l @(8,r14),r1
mov.l r1,@(20,r14)
.L20:
mov.l @(20,r14),r1
cmp/pl r1
bt .L22
bra .L19
nop
.align 2
.L22:
mov r14,r2
mov r14,r1
add #24,r1
mov.l @(16,r14),r2
mov.b @r2,r3
mov.b r3,@r1
mov.l @(16,r14),r1
mov r1,r2
add #1,r2
mov.l r2,@(16,r14)
mov.l @(20,r14),r1
mov r1,r2
add #-1,r2
mov.l r2,@(20,r14)
mov.l @(12,r14),r1
mov r14,r2
mov r14,r3
add #24,r3
mov.b @r3,r2
mov.b r2,@r1
mov.l @(12,r14),r1
shll2 r1
mov r0,r3 ! Save the value on R0 to R3
mova jmptable,r0
add r1,r0
mov.l @r0,r1
jmp @r1
mov r3,r0 ! and back to R0
.balign 4
jmptable:
.long case0
.long case1
.long case2
.long case3
! copy a byte at once
8: mov.b @r0+,r1
cmp/hs r5,r0
bf/s 8b ! while (r0<r5)
mov.b r1,@(r0,r4)
add #1,r4
9:
add r4,r0
rts
sub r6,r0
case_none:
bra 8b
add #-1,r4
case0:
!
! GHIJ KLMN OPQR --> GHIJ KLMN OPQR
!
! First, align to long word boundary
mov r0,r3
and r2,r3
tst r3,r3
bt/s 2f
add #-1,r4
mov #4,r2
sub r3,r2
1: dt r2
mov.b @r0+,r1
bf/s 1b
mov.b r1,@(r0,r4)
!
2: ! Second, copy a long word at once
add #-3,r4
add #-3,r5
3: mov.l @r0+,r1
cmp/hs r5,r0
bf/s 3b
mov.l r1,@(r0,r4)
add #3,r5
!
! Third, copy a byte at once, if necessary
cmp/eq r5,r0
bt/s 9b
add #4,r4
bra 8b
add #-1,r4
case3:
!
! GHIJ KLMN OPQR --> ...G HIJK LMNO PQR.
!
! First, align to long word boundary
mov r0,r3
and r2,r3
tst r3,r3
bt/s 2f
add #-1,r4
mov #4,r2
sub r3,r2
1: dt r2
mov.b @r0+,r1
bf/s 1b
mov.b r1,@(r0,r4)
!
2: ! Second, read a long word and write a long word at once
add #-2,r4
mov.l @(r0,r4),r1
add #-7,r5
add #-4,r4
!
#ifdef __LITTLE_ENDIAN__
shll8 r1
3: mov r1,r3 ! JIHG
shlr8 r3 ! xJIH
mov.l @r0+,r1 ! NMLK
mov r1,r2
add #1,r2
mov.l r2,@(12,r14)
bra .L20
nop
.align 2
.L21:
.L19:
bra .L18
nop
.align 2
.L23:
bra .L17
nop
.align 2
.L18:
bra .L24
nop
.align 2
.L2:
mov.l @(16,r14),r1
mov.l @(8,r14),r2
add r2,r1
mov.l r1,@(16,r14)
mov.l @(12,r14),r1
mov.l @(8,r14),r2
add r2,r1
mov.l r1,@(12,r14)
mov.l @(8,r14),r1
mov #15,r2
cmp/gt r2,r1
bt .LF101
bra .L53
nop
.LF101:
mov.l @(12,r14),r1
mov #3,r2
and r1,r2
mov.l @(8,r14),r1
mov r1,r9
sub r2,r9
mov r9,r2
mov.l r2,@(8,r14)
.L26:
mov.l @(12,r14),r1
mov #3,r2
and r1,r2
mov.l r2,@(20,r14)
.L29:
mov.l @(20,r14),r1
cmp/pl r1
bt .L31
bra .L28
nop
.align 2
.L31:
mov.l @(16,r14),r1
mov r1,r2
add #-1,r2
mov.l r2,@(16,r14)
mov r14,r2
mov r14,r1
add #24,r1
mov.l @(16,r14),r2
mov.b @r2,r3
mov.b r3,@r1
mov.l @(12,r14),r1
mov r1,r2
add #-1,r2
mov.l r2,@(12,r14)
mov.l @(20,r14),r1
mov r1,r2
add #-1,r2
mov.l r2,@(20,r14)
mov.l @(12,r14),r1
mov r14,r2
mov r14,r3
add #24,r3
mov.b @r3,r2
mov.b r2,@r1
bra .L29
nop
.align 2
.L30:
.L28:
bra .L27
nop
.align 2
.L32:
bra .L26
nop
.align 2
.L27:
nop
.L33:
mov.l @(16,r14),r1
mov #3,r2
and r1,r2
tst r2,r2
bf .L36
mov r15,r2
mov.l @(12,r14),r1
mov.l @(16,r14),r2
mov.l @(8,r14),r7
mov r7,r3
shlr2 r3
mov r1,r4
mov r2,r5
mov r3,r6
mov.l .L48,r8
jsr @r8
nop
bra .L37
nop
.align 2
.L36:
mov r15,r2
mov.l @(12,r14),r1
mov.l @(16,r14),r2
mov.l @(8,r14),r7
mov r7,r3
shlr2 r3
mov r1,r4
mov r2,r5
mov r3,r6
mov.l .L49,r8
jsr @r8
nop
.L37:
mov.l @(8,r14),r1
mov #-4,r2
and r2,r1
mov.l @(16,r14),r2
mov r2,r9
sub r1,r9
mov r9,r1
mov.l r1,@(16,r14)
mov.l @(8,r14),r1
mov #-4,r2
and r2,r1
mov.l @(12,r14),r2
mov r2,r9
sub r1,r9
mov r9,r1
mov.l r1,@(12,r14)
mov.l @(8,r14),r1
mov #3,r2
and r1,r2
mov.l r2,@(8,r14)
.L35:
.L53:
bra .L25
nop
.align 2
.L38:
bra .L33
nop
.align 2
.L34:
.L25:
nop
.L39:
mov.l @(8,r14),r1
mov.l r1,@(20,r14)
.L42:
mov.l @(20,r14),r1
cmp/pl r1
bt .L44
bra .L41
nop
.align 2
.L44:
mov.l @(16,r14),r1
shll16 r2
shll8 r2 ! Kxxx
or r2,r3 ! KJIH
cmp/hs r5,r0
bf/s 3b
mov.l r3,@(r0,r4)
#else
shlr8 r1
3: mov r1,r3 ! GHIJ
shll8 r3 ! HIJx
mov.l @r0+,r1 ! KLMN
mov r1,r2
add #-1,r2
mov.l r2,@(16,r14)
mov r14,r2
mov r14,r1
add #24,r1
mov.l @(16,r14),r2
mov.b @r2,r3
mov.b r3,@r1
mov.l @(12,r14),r1
shlr16 r2
shlr8 r2 ! xxxK
or r2,r3 ! HIJK
cmp/hs r5,r0
bf/s 3b
mov.l r3,@(r0,r4)
#endif
add #7,r5
!
! Third, copy a byte at once, if necessary
cmp/eq r5,r0
bt/s 9b
add #7,r4
add #-3,r0
bra 8b
add #-1,r4
case2:
!
! GHIJ KLMN OPQR --> ..GH IJKL MNOP QR..
!
! First, align to word boundary
tst #1,r0
bt/s 2f
add #-1,r4
mov.b @r0+,r1
mov.b r1,@(r0,r4)
!
2: ! Second, read a word and write a word at once
add #-1,r4
add #-1,r5
!
3: mov.w @r0+,r1
cmp/hs r5,r0
bf/s 3b
mov.w r1,@(r0,r4)
add #1,r5
!
! Third, copy a byte at once, if necessary
cmp/eq r5,r0
bt/s 9b
add #2,r4
mov.b @r0,r1
mov.b r1,@(r0,r4)
bra 9b
add #1,r0
case1:
!
! GHIJ KLMN OPQR --> .GHI JKLM NOPQ R...
!
! First, align to long word boundary
mov r0,r3
and r2,r3
tst r3,r3
bt/s 2f
add #-1,r4
mov #4,r2
sub r3,r2
1: dt r2
mov.b @r0+,r1
bf/s 1b
mov.b r1,@(r0,r4)
!
2: ! Second, read a long word and write a long word at once
mov.l @(r0,r4),r1
add #-7,r5
add #-4,r4
!
#ifdef __LITTLE_ENDIAN__
shll16 r1
shll8 r1
3: mov r1,r3 ! JIHG
shlr16 r3
shlr8 r3 ! xxxJ
mov.l @r0+,r1 ! NMLK
mov r1,r2
add #-1,r2
mov.l r2,@(12,r14)
mov.l @(20,r14),r1
shll8 r2 ! MLKx
or r2,r3 ! MLKJ
cmp/hs r5,r0
bf/s 3b
mov.l r3,@(r0,r4)
#else
shlr16 r1
shlr8 r1
3: mov r1,r3 ! GHIJ
shll16 r3
shll8 r3 ! Jxxx
mov.l @r0+,r1 ! KLMN
mov r1,r2
add #-1,r2
mov.l r2,@(20,r14)
mov.l @(12,r14),r1
mov r14,r2
mov r14,r3
add #24,r3
mov.b @r3,r2
mov.b r2,@r1
bra .L42
nop
.align 2
.L43:
.L41:
bra .L24
nop
.align 2
.L45:
bra .L39
nop
.align 2
.L40:
.L24:
mov.l @r14,r1
mov r1,r0
bra .L1
nop
.align 2
.L1:
add #28,r14
mov r14,r15
lds.l @r15+,pr
mov.l @r15+,r14
mov.l @r15+,r9
mov.l @r15+,r8
rts
nop
.L50:
.align 2
.L46:
.long __wordcopy_fwd_aligned
.L47:
.long __wordcopy_fwd_dest_aligned
.L48:
.long __wordcopy_bwd_aligned
.L49:
.long __wordcopy_bwd_dest_aligned
.Lfe1:
shlr8 r2 ! xKLM
or r2,r3 ! JKLM
cmp/hs r5,r0
bf/s 3b ! while(r0<r5)
mov.l r3,@(r0,r4)
#endif
add #7,r5
!
! Third, copy a byte at once, if necessary
cmp/eq r5,r0
bt/s 9b
add #5,r4
add #-3,r0
bra 8b
add #-1,r4
! Taken from newlib-1.8.0
/* $Id: memset.S,v 1.1 1999/09/18 16:57:09 gniibe Exp $
*
* "memset" implementation of SuperH
*
* Copyright (C) 1999 Niibe Yutaka
*
*/
/*
* void *memset(void *s, int c, size_t n);
*/
!
! Fast SH memset
!
! by Toshiyasu Morita (tm@netcom.com)
!
! Entry: r4: destination pointer
! r5: fill value
! r6: byte count
!
! Exit: r0-r3: trashed
!
#include <linux/linkage.h>
ENTRY(memset)
mov r4,r3 ! Save return value
mov r6,r0 ! Check explicitly for zero
cmp/eq #0,r0
bt L_exit
mov #12,r0 ! Check for small number of bytes
tst r6,r6
bt/s 5f ! if n=0, do nothing
add r6,r4
mov #12,r0
cmp/gt r6,r0
bt L_store_byte_loop
neg r4,r0 ! Align destination
add #4,r0
bt/s 4f ! if it's too small, set a byte at once
mov r4,r0
and #3,r0
tst r0,r0
bt L_dup_bytes
.balignw 4,0x0009
L_align_loop:
mov.b r5,@r4
add #-1,r6
add #1,r4
cmp/eq #0,r0
bt/s 2f ! It's aligned
sub r0,r6
1:
dt r0
bf L_align_loop
L_dup_bytes:
extu.b r5,r5 ! Duplicate bytes across longword
swap.b r5,r0
or r0,r5
swap.w r5,r0
or r0,r5
mov r6,r2 ! Calculate number of double longwords
shlr2 r2
shlr r2
.balignw 4,0x0009
L_store_long_loop:
mov.l r5,@r4 ! Store double longs to memory
dt r2
mov.l r5,@(4,r4)
add #8,r4
bf L_store_long_loop
bf/s 1b
mov.b r5,@-r4
2: ! make VVVV
swap.b r5,r0 ! V0
or r0,r5 ! VV
swap.w r5,r0 ! VV00
or r0,r5 ! VVVV
!
mov r6,r0
shlr2 r0
shlr r0 ! r0 = r6 >> 3
3:
dt r0
mov.l r5,@-r4 ! set 8-byte at once
bf/s 3b
mov.l r5,@-r4
!
mov #7,r0
and r0,r6
tst r6,r6
bt L_exit
.balignw 4,0x0009
L_store_byte_loop:
mov.b r5,@r4 ! Store bytes to memory
add #1,r4
bt 5f
! fill bytes
4:
dt r6
bf L_store_byte_loop
L_exit:
bf/s 4b
mov.b r5,@-r4
5:
rts
mov r3,r0
mov r4,r0
@@ -15,5 +15,3 @@ unsigned int csum_partial_copy( const char *src, char *dst, int len, int sum)
return sum;
}
@@ -8,6 +8,6 @@
# Note 2! The CFLAGS definition is now in the main makefile...
O_TARGET := mm.o
O_OBJS := init.o fault.o ioremap.o extable.o
O_OBJS := init.o fault.o ioremap.o extable.o cache.o
include $(TOPDIR)/Rules.make
/* $Id: cache.c,v 1.7 1999/09/23 11:43:07 gniibe Exp $
*
* linux/arch/sh/mm/cache.c
*
* Copyright (C) 1999 Niibe Yutaka
*
*/
#include <linux/init.h>
#include <linux/mman.h>
#include <linux/mm.h>
#include <linux/threads.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
#include <asm/cache.h>
#include <asm/io.h>
#include <asm/uaccess.h>
#if defined(__sh3__)
#define CCR 0xffffffec /* Address of Cache Control Register */
#define CCR_CACHE_VAL 0x00000005 /* 8k-byte cache, P1-wb, enable */
#define CCR_CACHE_INIT 0x0000000d /* 8k-byte cache, CF, P1-wb, enable */
#define CCR_CACHE_ENABLE 1
#define CACHE_IC_ADDRESS_ARRAY 0xf0000000 /* SH-3 has unified cache system */
#define CACHE_OC_ADDRESS_ARRAY 0xf0000000
#define CACHE_VALID 1
#define CACHE_UPDATED 2
/* The 7709A/7729 has a 16K cache (256 entries), while the 7702 has only
   a 2K direct-mapped cache.  The 7702 is not supported (yet). */
struct _cache_system_info {
int way_shift;
int entry_mask;
int num_entries;
};
static struct _cache_system_info cache_system_info;
#define CACHE_OC_WAY_SHIFT (cache_system_info.way_shift)
#define CACHE_IC_WAY_SHIFT (cache_system_info.way_shift)
#define CACHE_OC_ENTRY_SHIFT 4
#define CACHE_OC_ENTRY_MASK (cache_system_info.entry_mask)
#define CACHE_IC_ENTRY_MASK (cache_system_info.entry_mask)
#define CACHE_OC_NUM_ENTRIES (cache_system_info.num_entries)
#define CACHE_OC_NUM_WAYS 4
#define CACHE_IC_NUM_WAYS 4
#elif defined(__SH4__)
#define CCR 0xff00001c /* Address of Cache Control Register */
#define CCR_CACHE_VAL 0x00000105 /* 8k+16k-byte cache,P1-wb,enable */
#define CCR_CACHE_INIT 0x0000090d /* 8k+16k-byte cache,CF,P1-wb,enable */
#define CCR_CACHE_ENABLE 0x00000101
#define CACHE_IC_ADDRESS_ARRAY 0xf0000000
#define CACHE_OC_ADDRESS_ARRAY 0xf4000000
#define CACHE_VALID 1
#define CACHE_UPDATED 2
#define CACHE_OC_WAY_SHIFT 13
#define CACHE_IC_WAY_SHIFT 13
#define CACHE_OC_ENTRY_SHIFT 5
#define CACHE_OC_ENTRY_MASK 0x3fe0
#define CACHE_IC_ENTRY_MASK 0x1fe0
#define CACHE_OC_NUM_ENTRIES 512
#define CACHE_OC_NUM_WAYS 1
#define CACHE_IC_NUM_WAYS 1
#endif
#define jump_to_p2(__dummy) \
asm volatile("mova 1f,%0\n\t" \
"add %1,%0\n\t" \
"jmp @r0 ! Jump to P2 area\n\t" \
" nop\n\t" \
".balign 4\n" \
"1:" \
: "=&z" (__dummy) \
: "r" (0x20000000))
#define back_to_p1(__dummy) \
asm volatile("nop;nop;nop;nop;nop;nop\n\t" \
"mova 9f,%0\n\t" \
"sub %1,%0\n\t" \
"jmp @r0 ! Back to P1 area\n\t" \
" nop\n\t" \
".balign 4\n" \
"9:" \
: "=&z" (__dummy) \
: "r" (0x20000000), "0" (__dummy))
/* Write back caches to memory (if needed) and invalidates the caches */
void cache_flush_area(unsigned long start, unsigned long end)
{
unsigned long flags, __dummy;
unsigned long addr, data, v, p;
start &= ~(L1_CACHE_BYTES-1);
save_and_cli(flags);
jump_to_p2(__dummy);
for (v = start; v < end; v+=L1_CACHE_BYTES) {
p = __pa(v);
addr = CACHE_IC_ADDRESS_ARRAY |
(v&CACHE_IC_ENTRY_MASK) | 0x8 /* A-bit */;
data = (v&0xfffffc00); /* U=0, V=0 */
ctrl_outl(data,addr);
#if CACHE_IC_ADDRESS_ARRAY != CACHE_OC_ADDRESS_ARRAY
asm volatile("ocbp %0"
: /* no output */
: "m" (__m(v)));
#endif
}
back_to_p1(__dummy);
restore_flags(flags);
}
/* Purge (just invalidate, no write back) the caches */
/* This is expected to work well... but on the SH7708S, the write-back
   cache is written back on "purge" (which is not what we expect).
   It seems that we have no way to just purge (with no write-back
   action) a cache line. */
void cache_purge_area(unsigned long start, unsigned long end)
{
unsigned long flags, __dummy;
unsigned long addr, data, v, p, j;
start &= ~(L1_CACHE_BYTES-1);
save_and_cli(flags);
jump_to_p2(__dummy);
for (v = start; v < end; v+=L1_CACHE_BYTES) {
p = __pa(v);
for (j=0; j<CACHE_IC_NUM_WAYS; j++) {
addr = CACHE_IC_ADDRESS_ARRAY|(j<<CACHE_IC_WAY_SHIFT)|
(v&CACHE_IC_ENTRY_MASK);
data = ctrl_inl(addr);
if ((data & 0xfffffc00) == (p&0xfffffc00)
&& (data & CACHE_VALID)) {
data &= ~CACHE_VALID;
ctrl_outl(data,addr);
break;
}
}
#if CACHE_IC_ADDRESS_ARRAY != CACHE_OC_ADDRESS_ARRAY
asm volatile("ocbi %0"
: /* no output */
: "m" (__m(v)));
#endif
}
back_to_p1(__dummy);
restore_flags(flags);
}
/* write back the dirty cache, but not invalidate the cache */
void cache_wback_area(unsigned long start, unsigned long end)
{
unsigned long flags, __dummy;
unsigned long v;
start &= ~(L1_CACHE_BYTES-1);
save_and_cli(flags);
jump_to_p2(__dummy);
for (v = start; v < end; v+=L1_CACHE_BYTES) {
#if CACHE_IC_ADDRESS_ARRAY == CACHE_OC_ADDRESS_ARRAY
unsigned long addr, data, j;
unsigned long p = __pa(v);
for (j=0; j<CACHE_OC_NUM_WAYS; j++) {
addr = CACHE_OC_ADDRESS_ARRAY|(j<<CACHE_OC_WAY_SHIFT)|
(v&CACHE_OC_ENTRY_MASK);
data = ctrl_inl(addr);
if ((data & 0xfffffc00) == (p&0xfffffc00)
&& (data & CACHE_VALID)
&& (data & CACHE_UPDATED)) {
data &= ~CACHE_UPDATED;
ctrl_outl(data,addr);
break;
}
}
#else
asm volatile("ocbwb %0"
: /* no output */
: "m" (__m(v)));
#endif
}
back_to_p1(__dummy);
restore_flags(flags);
}
/*
* Write back the cache.
*
* For SH-4, flush (write back) Operand Cache, as Instruction Cache
* doesn't have "updated" data.
*/
static void cache_wback_all(void)
{
unsigned long flags, __dummy;
unsigned long addr, data, i, j;
save_and_cli(flags);
jump_to_p2(__dummy);
for (i=0; i<CACHE_OC_NUM_ENTRIES; i++) {
for (j=0; j<CACHE_OC_NUM_WAYS; j++) {
addr = CACHE_OC_ADDRESS_ARRAY|(j<<CACHE_OC_WAY_SHIFT)|
(i<<CACHE_OC_ENTRY_SHIFT);
data = ctrl_inl(addr);
if (data & CACHE_VALID) {
data &= ~(CACHE_VALID|CACHE_UPDATED);
ctrl_outl(data,addr);
}
}
}
back_to_p1(__dummy);
restore_flags(flags);
}
static void
detect_cpu_and_cache_system(void)
{
#if defined(__sh3__)
unsigned long __dummy, addr0, addr1, data0, data1, data2, data3;
jump_to_p2(__dummy);
/* Check if the entry shadows or not.
 * When shadowed, it's a 128-entry system.
 * Otherwise, it's a 256-entry system.
*/
addr0 = CACHE_OC_ADDRESS_ARRAY + (3 << 12);
addr1 = CACHE_OC_ADDRESS_ARRAY + (1 << 12);
data0 = ctrl_inl(addr0);
data0 ^= 0x00000001;
ctrl_outl(data0,addr0);
data1 = ctrl_inl(addr1);
data2 = data1 ^ 0x00000001;
ctrl_outl(data2,addr1);
data3 = ctrl_inl(addr0);
/* Invalidate them, in case the cache has been enabled already. */
ctrl_outl(data0&~0x00000001,addr0);
ctrl_outl(data2&~0x00000001,addr1);
back_to_p1(__dummy);
if (data0 == data1 && data2 == data3) { /* Shadow */
cache_system_info.way_shift = 11;
cache_system_info.entry_mask = 0x7f0;
cache_system_info.num_entries = 128;
cpu_data->type = CPU_SH7708;
} else { /* 7709A or 7729 */
cache_system_info.way_shift = 12;
cache_system_info.entry_mask = 0xff0;
cache_system_info.num_entries = 256;
cpu_data->type = CPU_SH7729;
}
#elif defined(__SH4__)
cpu_data->type = CPU_SH7750;
#endif
}
void __init cache_init(void)
{
unsigned long __dummy, ccr;
detect_cpu_and_cache_system();
ccr = ctrl_inl(CCR);
if (ccr == CCR_CACHE_VAL)
return;
if (ccr & CCR_CACHE_ENABLE)
/* Should check RA here.  If RA was 1,
   we would only need to flush half of the caches. */
cache_wback_all();
jump_to_p2(__dummy);
ctrl_outl(CCR_CACHE_INIT, CCR);
back_to_p1(__dummy);
}
#if defined(__SH4__)
/* Write back data caches, and invalidate instruction caches */
void flush_icache_range(unsigned long start, unsigned long end)
{
unsigned long flags, __dummy;
unsigned long addr, data, v;
start &= ~(L1_CACHE_BYTES-1);
save_and_cli(flags);
jump_to_p2(__dummy);
for (v = start; v < end; v+=L1_CACHE_BYTES) {
/* Write back O Cache */
asm volatile("ocbwb %0"
: /* no output */
: "m" (__m(v)));
/* Invalidate I Cache */
addr = CACHE_IC_ADDRESS_ARRAY |
(v&CACHE_IC_ENTRY_MASK) | 0x8 /* A-bit */;
data = (v&0xfffffc00); /* Valid=0 */
ctrl_outl(data,addr);
}
back_to_p1(__dummy);
restore_flags(flags);
}
void flush_cache_all(void)
{
	unsigned long flags, __dummy;

	/* Write back Operand Cache */
	cache_wback_all();

	/* Then, invalidate Instruction Cache and Operand Cache */
	save_and_cli(flags);
	jump_to_p2(__dummy);
	ctrl_outl(CCR_CACHE_INIT, CCR);
	back_to_p1(__dummy);
	restore_flags(flags);
}
void flush_cache_mm(struct mm_struct *mm)
{
	/* Is there any good way? */
	/* XXX: possibly call flush_cache_range for each vm area */
	flush_cache_all();
}
void flush_cache_range(struct mm_struct *mm, unsigned long start,
		       unsigned long end)
{
	unsigned long flags, __dummy;
	unsigned long addr, data, v;

	start &= ~(L1_CACHE_BYTES-1);

	save_and_cli(flags);
	jump_to_p2(__dummy);
	for (v = start; v < end; v += L1_CACHE_BYTES) {
		addr = CACHE_IC_ADDRESS_ARRAY |
			(v & CACHE_IC_ENTRY_MASK) | 0x8 /* A-bit */;
		data = (v & 0xfffffc00);	/* Update=0, Valid=0 */
		ctrl_outl(data, addr);
		addr = CACHE_OC_ADDRESS_ARRAY |
			(v & CACHE_OC_ENTRY_MASK) | 0x8 /* A-bit */;
		ctrl_outl(data, addr);
	}
	back_to_p1(__dummy);
	restore_flags(flags);
}
void flush_cache_page(struct vm_area_struct *vma, unsigned long addr)
{
	flush_cache_range(vma->vm_mm, addr, addr + PAGE_SIZE);
}

void flush_page_to_ram(unsigned long page)
{	/* "page" is given as a physical address */
	/* XXX: for the time being... */
	flush_cache_all();
}
#endif