Commit 9daeaa37 authored by Linus Torvalds

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next

Pull sparc updates from David Miller:

1) Kill off support for sun4c and Cypress sun4m chips.

   And as a result we were able to also kill off that ugly btfixup thing
   that required multi-stage links of the final vmlinux image in the
   Kbuild system.  This should make the kbuild maintainers really happy.

   Thanks a lot to Sam Ravnborg for his tireless efforts to get this
   going.

2) Convert sparc64 to nobootmem.  I suspect now with sparc32 being a lot
   cleaner, it should be able to fall in line and modernize in this area
   too.

3) Make sparc32 use generic clockevents, from Tkhai Kirill.

[ I fixed up the BPF rules, and tried to clean up the build rules too.
  But I don't have - or want - a sparc cross-build environment, so the
  BPF rule bug and the related build cleanup was all done with just a
  bare "make -n" pseudo-test.      - Linus ]

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next: (110 commits)
  sparc32: use flushi when run-time patching in per_cpu_patch
  sparc32: fix cpuid_patch run-time patching
  sparc32: drop unused inline functions in srmmu.c
  sparc32: drop unused functions in pgtsrmmu.h
  sparc32,leon: move leon mmu functions to leon_mm.c
  sparc32,leon: remove duplicate definitions in leon.h
  sparc32,leon: remove duplicate UART register definitions
  sparc32,leon: move leon ASI definitions to asi.h
  sparc32: move trap table to a separate file
  sparc64: renamed ttable.S to ttable_64.S
  sparc32: Remove asm/sysen.h header.
  sparc32: Delete asm/smpprim.h
  sparc32: Remove unused empty_bad_page{,_table} declarations.
  sparc32: Kill boot_cpu_id4
  sparc32: Move GET_PROCESSOR*_ID() out of asm/asmmacro.h
  sparc32: Remove completely unused code from asm/cache.h
  sparc32: Add ucmpdi2.o to obj-y instead of lib-y.
  sparc32: add ucmpdi2
  sparc: introduce arch/sparc/Kbuild
  sparc: remove obsolete documentation
  ...
parents cb62ab71 1edc1783
BTFIXUP
-------
To build a new kernel you have to issue "make image". The finished kernel
in ELF format is placed in arch/sparc/boot/image. An explanation follows below.

BTFIXUP is a feature of Linux/sparc that is unique among the architectures,
developed by Jakub Jelinek (I think... Obviously David S. Miller took
part, too). It allows booting the same kernel on different
sub-architectures, such as sun4c, sun4m and sun4d, where SunOS uses
different kernels. This feature is convenient for people who move
disks between boxes and for distribution builders.

To function, BTFIXUP must link the kernel "in the draft" first,
analyze the result, write special stub code based on that, and
build the final kernel with the stub (btfix.o).

Kai Germaschewski improved the build system of the kernel significantly
in the 2.5 series. Unfortunately, the traditional way of running the
draft linking from the architecture-specific Makefile before the actual
linking by the generic Makefile is nearly impossible to support properly
in the new build system. Therefore, the way we integrate BTFIXUP with the
build system was changed in 2.5.40. Now, the generic Makefile performs the
draft linking and stores the result in the file vmlinux. Architecture-specific
post-processing then invokes the BTFIXUP machinery and the final linking
in the same way as other architectures do bootstraps.
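
Roughly, the sequence looks like this (a simplified sketch assembled from the
Makefile rules shown later in this page, not a literal transcript of them):

    ld -r -o vmlinux ...                          # draft link, done by the generic Makefile
    objdump -x vmlinux | btfixupprep > btfix.S    # analyze the draft, emit the stub source
    as -o btfix.o btfix.S                         # assemble the stub
    ld -T arch/sparc/kernel/vmlinux.lds ... btfix.o -o arch/sparc/boot/image   # "make image"
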
Implications of that change are as follows.

1. Hackers must now type "make image" instead of just "make", in the same
   way as s390 people do. It is analogous to "make bzImage" on i386.
   This does NOT affect sparc64; you continue to use "make" to build
   sparc64 kernels.
2. vmlinux is not the final kernel, so RPM builders have to adjust
   their spec files (if they delivered vmlinux for debugging).
   System.map generated for vmlinux is still valid.
3. Scripts that produce a.out images have to be changed. First, if they
   invoke make, they have to use "make image". Second, they have to pick up
   the new kernel in arch/sparc/boot/image instead of vmlinux (see the
   example after this list).
4. Since we are compliant with Kai's build system now, make -j is permitted.
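
For example, a sparc32 build under the new scheme might look like this
(the cross-compiler prefix and the destination path are illustrative only):

    make ARCH=sparc CROSS_COMPILE=sparc-linux- image
    cp arch/sparc/boot/image /tftpboot/vmlinux.sparc   # pick up the image, not ./vmlinux
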
-- Pete Zaitcev
zaitcev@yahoo.com
#
# core part of the sparc kernel
#
obj-y += kernel/
obj-y += mm/
obj-y += math-emu/
obj-y += net/
@@ -30,7 +30,7 @@ config SPARC
 	select USE_GENERIC_SMP_HELPERS if SMP
 	select GENERIC_PCI_IOMAP
 	select HAVE_NMI_WATCHDOG if SPARC64
-	select HAVE_BPF_JIT
+	select HAVE_BPF_JIT if NET
 config SPARC32
 	def_bool !64BIT
@@ -62,6 +62,7 @@ config SPARC64
 	select IRQ_PREFLOW_FASTEOI
 	select ARCH_HAVE_NMI_SAFE_CMPXCHG
 	select HAVE_C_RECORDMCOUNT
+	select NO_BOOTMEM
 config ARCH_DEFCONFIG
 	string
@@ -74,17 +75,12 @@ config BITS
 	default 32 if SPARC32
 	default 64 if SPARC64
-config ARCH_USES_GETTIMEOFFSET
-	bool
-	default y if SPARC32
 config GENERIC_CMOS_UPDATE
 	bool
 	default y
 config GENERIC_CLOCKEVENTS
-	bool
-	default y if SPARC64
+	def_bool y
 config IOMMU_HELPER
 	bool
@@ -155,7 +151,7 @@ source "kernel/Kconfig.freezer"
 menu "Processor type and features"
 config SMP
-	bool "Symmetric multi-processing support (does not work on sun4/sun4c)"
+	bool "Symmetric multi-processing support"
 	---help---
 	  This enables support for systems with more than one CPU. If you have
 	  a system with only one CPU, say N. If you have a system with more
......
@@ -19,39 +19,27 @@ ifeq ($(CONFIG_SPARC32),y)
 # sparc32
 #
-#
-# Uncomment the first KBUILD_CFLAGS if you are doing kgdb source level
-# debugging of the kernel to get the proper debugging information.
-AS := $(AS) -32
-LDFLAGS := -m elf32_sparc
 CHECKFLAGS += -D__sparc__
+LDFLAGS := -m elf32_sparc
 export BITS := 32
 UTS_MACHINE := sparc
-#KBUILD_CFLAGS += -g -pipe -fcall-used-g5 -fcall-used-g7
-KBUILD_CFLAGS += -m32 -pipe -mno-fpu -fcall-used-g5 -fcall-used-g7
-KBUILD_AFLAGS += -m32 -Wa,-Av8
-#LDFLAGS_vmlinux = -N -Ttext 0xf0004000
-# Since 2.5.40, the first stage is left not btfix-ed.
-# Actual linking is done with "make image".
-LDFLAGS_vmlinux = -r
+KBUILD_CFLAGS += -m32 -mcpu=v8 -pipe -mno-fpu -fcall-used-g5 -fcall-used-g7
+KBUILD_AFLAGS += -m32 -Wa,-Av8
 else
 #####
 # sparc64
 #
 CHECKFLAGS += -D__sparc__ -D__sparc_v9__ -D__arch64__ -m64
 LDFLAGS := -m elf64_sparc
 export BITS := 64
 UTS_MACHINE := sparc64
-KBUILD_CFLAGS += -m64 -pipe -mno-fpu -mcpu=ultrasparc -mcmodel=medlow \
-	-ffixed-g4 -ffixed-g5 -fcall-used-g7 -Wno-sign-compare \
-	-Wa,--undeclared-regs
+KBUILD_CFLAGS += -m64 -pipe -mno-fpu -mcpu=ultrasparc -mcmodel=medlow
+KBUILD_CFLAGS += -ffixed-g4 -ffixed-g5 -fcall-used-g7 -Wno-sign-compare
+KBUILD_CFLAGS += -Wa,--undeclared-regs
 KBUILD_CFLAGS += $(call cc-option,-mtune=ultrasparc3)
 KBUILD_AFLAGS += -m64 -mcpu=ultrasparc -Wa,--undeclared-regs
@@ -64,26 +52,14 @@ endif
 head-y := arch/sparc/kernel/head_$(BITS).o
 head-y += arch/sparc/kernel/init_task.o
-core-y += arch/sparc/kernel/
-core-y += arch/sparc/mm/ arch/sparc/math-emu/
-core-y += arch/sparc/net/
+# See arch/sparc/Kbuild for the core part of the kernel
+core-y += arch/sparc/
 libs-y += arch/sparc/prom/
 libs-y += arch/sparc/lib/
 drivers-$(CONFIG_OPROFILE) += arch/sparc/oprofile/
-# Export what is needed by arch/sparc/boot/Makefile
-export VMLINUX_INIT VMLINUX_MAIN
-VMLINUX_INIT := $(head-y) $(init-y)
-VMLINUX_MAIN := $(core-y) kernel/ mm/ fs/ ipc/ security/ crypto/ block/
-VMLINUX_MAIN += $(patsubst %/, %/lib.a, $(libs-y)) $(libs-y)
-VMLINUX_MAIN += $(drivers-y) $(net-y)
-ifdef CONFIG_KALLSYMS
-export kallsyms.o := .tmp_kallsyms2.o
-endif
 boot := arch/sparc/boot
 # Default target
......
@@ -6,8 +6,8 @@
 ROOT_IMG := /usr/src/root.img
 ELFTOAOUT := elftoaout
-hostprogs-y := piggyback btfixupprep
-targets := tftpboot.img btfix.o btfix.S image zImage vmlinux.aout
+hostprogs-y := piggyback
+targets := tftpboot.img image zImage vmlinux.aout
 clean-files := System.map
 quiet_cmd_elftoaout = ELFTOAOUT $@
@@ -17,58 +17,9 @@ quiet_cmd_piggy = PIGGY $@
 quiet_cmd_strip = STRIP $@
       cmd_strip = $(STRIP) -R .comment -R .note -K sun4u_init -K _end -K _start $< -o $@
-ifeq ($(CONFIG_SPARC32),y)
-quiet_cmd_btfix = BTFIX $@
-      cmd_btfix = $(OBJDUMP) -x vmlinux | $(obj)/btfixupprep > $@
-quiet_cmd_sysmap = SYSMAP $(obj)/System.map
-      cmd_sysmap = $(CONFIG_SHELL) $(srctree)/scripts/mksysmap
-quiet_cmd_image = LD $@
-      cmd_image = $(LD) $(LDFLAGS) $(EXTRA_LDFLAGS) $(LDFLAGS_$(@F)) -o $@
-define rule_image
-	$(if $($(quiet)cmd_image), \
-		echo ' $($(quiet)cmd_image)' &&) \
-		$(cmd_image); \
-	$(if $($(quiet)cmd_sysmap), \
-		echo ' $($(quiet)cmd_sysmap)' &&) \
-	$(cmd_sysmap) $@ $(obj)/System.map; \
-	if [ $$? -ne 0 ]; then \
-		rm -f $@; \
-		/bin/false; \
-	fi; \
-	echo 'cmd_$@ := $(cmd_image)' > $(@D)/.$(@F).cmd
-endef
-BTOBJS := $(patsubst %/, %/built-in.o, $(VMLINUX_INIT))
-BTLIBS := $(patsubst %/, %/built-in.o, $(VMLINUX_MAIN))
-LDFLAGS_image := -T arch/sparc/kernel/vmlinux.lds $(BTOBJS) \
-	--start-group $(BTLIBS) --end-group \
-	$(kallsyms.o) $(obj)/btfix.o
-# Link the final image including btfixup'ed symbols.
-# This is a replacement for the link done in the top-level Makefile.
-# Note: No dependency on the prerequisite files since that would require
-#       make to try check if they are updated - and due to changes
-#       in gcc options (path for example) this would result in
-#       these files being recompiled for each build.
-$(obj)/image: $(obj)/btfix.o FORCE
-	$(call if_changed_rule,image)
-$(obj)/zImage: $(obj)/image
-	$(call if_changed,strip)
-	@echo ' kernel: $@ is ready'
-$(obj)/btfix.S: $(obj)/btfixupprep vmlinux FORCE
-	$(call if_changed,btfix)
-endif
 ifeq ($(CONFIG_SPARC64),y)
 # Actual linking
-$(obj)/image: vmlinux FORCE
-	$(call if_changed,strip)
-	@echo ' kernel: $@ is ready'
 $(obj)/zImage: $(obj)/image
 	$(call if_changed,gzip)
@@ -79,6 +30,10 @@ $(obj)/vmlinux.aout: vmlinux FORCE
 	@echo ' kernel: $@ is ready'
 else
+$(obj)/zImage: $(obj)/image
+	$(call if_changed,strip)
+	@echo ' kernel: $@ is ready'
 # The following lines make a readable image for U-Boot.
 # uImage - Binary file read by U-boot
 # uImage.o - object file of uImage for loading with a
@@ -107,6 +62,10 @@ $(obj)/uImage: $(obj)/image.gz
 endif
+$(obj)/image: vmlinux FORCE
+	$(call if_changed,strip)
+	@echo ' kernel: $@ is ready'
 $(obj)/tftpboot.img: $(obj)/image $(obj)/piggyback System.map $(ROOT_IMG) FORCE
 	$(call if_changed,elftoaout)
 	$(call if_changed,piggy)
@@ -112,6 +112,20 @@
 #define ASI_M_ACTION 0x4c /* Breakpoint Action Register (GNU/Viking) */
+/* LEON ASI */
+#define ASI_LEON_NOCACHE 0x01
+#define ASI_LEON_DCACHE_MISS 0x01
+#define ASI_LEON_CACHEREGS 0x02
+#define ASI_LEON_IFLUSH 0x10
+#define ASI_LEON_DFLUSH 0x11
+#define ASI_LEON_MMUFLUSH 0x18
+#define ASI_LEON_MMUREGS 0x19
+#define ASI_LEON_BYPASS 0x1c
+#define ASI_LEON_FLUSH_PAGE 0x10
 /* V9 Architecture mandary ASIs. */
 #define ASI_N 0x04 /* Nucleus */
 #define ASI_NL 0x0c /* Nucleus, little endian */
......
@@ -6,17 +6,6 @@
 #ifndef _SPARC_ASMMACRO_H
 #define _SPARC_ASMMACRO_H
-#include <asm/btfixup.h>
-#include <asm/asi.h>
-#define GET_PROCESSOR4M_ID(reg) \
-	rd %tbr, %reg; \
-	srl %reg, 12, %reg; \
-	and %reg, 3, %reg;
-#define GET_PROCESSOR4D_ID(reg) \
-	lda [%g0] ASI_M_VIKING_TMP1, %reg;
 /* All trap entry points _must_ begin with this macro or else you
  * lose. It makes sure the kernel has a proper window so that
  * c-code can be called.
@@ -31,10 +20,4 @@
 /* All traps low-level code here must end with this macro. */
 #define RESTORE_ALL b ret_trap_entry; clr %l6;
-/* sun4 probably wants half word accesses to ASI_SEGMAP, while sun4c+
-   likes byte accesses. These are to avoid ifdef mania. */
-#define lduXa lduba
-#define stXa stba
 #endif /* !(_SPARC_ASMMACRO_H) */
/*
* asm/btfixup.h: Macros for boot time linking.
*
* Copyright (C) 1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
*/
#ifndef _SPARC_BTFIXUP_H
#define _SPARC_BTFIXUP_H
#include <linux/init.h>
#ifndef __ASSEMBLY__
#ifdef MODULE
extern unsigned int ___illegal_use_of_BTFIXUP_SIMM13_in_module(void);
extern unsigned int ___illegal_use_of_BTFIXUP_SETHI_in_module(void);
extern unsigned int ___illegal_use_of_BTFIXUP_HALF_in_module(void);
extern unsigned int ___illegal_use_of_BTFIXUP_INT_in_module(void);
#define BTFIXUP_SIMM13(__name) ___illegal_use_of_BTFIXUP_SIMM13_in_module()
#define BTFIXUP_HALF(__name) ___illegal_use_of_BTFIXUP_HALF_in_module()
#define BTFIXUP_SETHI(__name) ___illegal_use_of_BTFIXUP_SETHI_in_module()
#define BTFIXUP_INT(__name) ___illegal_use_of_BTFIXUP_INT_in_module()
#define BTFIXUP_BLACKBOX(__name) ___illegal_use_of_BTFIXUP_BLACKBOX_in_module
#else
#define BTFIXUP_SIMM13(__name) ___sf_##__name()
#define BTFIXUP_HALF(__name) ___af_##__name()
#define BTFIXUP_SETHI(__name) ___hf_##__name()
#define BTFIXUP_INT(__name) ((unsigned int)&___i_##__name)
/* This must be written in assembly and present in a sethi */
#define BTFIXUP_BLACKBOX(__name) ___b_##__name
#endif /* MODULE */
/* Fixup call xx */
#define BTFIXUPDEF_CALL(__type, __name, __args...) \
extern __type ___f_##__name(__args); \
extern unsigned ___fs_##__name[3];
#define BTFIXUPDEF_CALL_CONST(__type, __name, __args...) \
extern __type ___f_##__name(__args) __attribute_const__; \
extern unsigned ___fs_##__name[3];
#define BTFIXUP_CALL(__name) ___f_##__name
#define BTFIXUPDEF_BLACKBOX(__name) \
extern unsigned ___bs_##__name[2];
/* Put bottom 13bits into some register variable */
#define BTFIXUPDEF_SIMM13(__name) \
static inline unsigned int ___sf_##__name(void) __attribute_const__; \
extern unsigned ___ss_##__name[2]; \
static inline unsigned int ___sf_##__name(void) { \
unsigned int ret; \
__asm__ ("or %%g0, ___s_" #__name ", %0" : "=r"(ret)); \
return ret; \
}
#define BTFIXUPDEF_SIMM13_INIT(__name,__val) \
static inline unsigned int ___sf_##__name(void) __attribute_const__; \
extern unsigned ___ss_##__name[2]; \
static inline unsigned int ___sf_##__name(void) { \
unsigned int ret; \
__asm__ ("or %%g0, ___s_" #__name "__btset_" #__val ", %0" : "=r"(ret));\
return ret; \
}
/* Put either bottom 13 bits, or upper 22 bits into some register variable
* (depending on the value, this will lead into sethi FIX, reg; or
* mov FIX, reg; )
*/
#define BTFIXUPDEF_HALF(__name) \
static inline unsigned int ___af_##__name(void) __attribute_const__; \
extern unsigned ___as_##__name[2]; \
static inline unsigned int ___af_##__name(void) { \
unsigned int ret; \
__asm__ ("or %%g0, ___a_" #__name ", %0" : "=r"(ret)); \
return ret; \
}
#define BTFIXUPDEF_HALF_INIT(__name,__val) \
static inline unsigned int ___af_##__name(void) __attribute_const__; \
extern unsigned ___as_##__name[2]; \
static inline unsigned int ___af_##__name(void) { \
unsigned int ret; \
__asm__ ("or %%g0, ___a_" #__name "__btset_" #__val ", %0" : "=r"(ret));\
return ret; \
}
/* Put upper 22 bits into some register variable */
#define BTFIXUPDEF_SETHI(__name) \
static inline unsigned int ___hf_##__name(void) __attribute_const__; \
extern unsigned ___hs_##__name[2]; \
static inline unsigned int ___hf_##__name(void) { \
unsigned int ret; \
__asm__ ("sethi %%hi(___h_" #__name "), %0" : "=r"(ret)); \
return ret; \
}
#define BTFIXUPDEF_SETHI_INIT(__name,__val) \
static inline unsigned int ___hf_##__name(void) __attribute_const__; \
extern unsigned ___hs_##__name[2]; \
static inline unsigned int ___hf_##__name(void) { \
unsigned int ret; \
__asm__ ("sethi %%hi(___h_" #__name "__btset_" #__val "), %0" : \
"=r"(ret)); \
return ret; \
}
/* Put a full 32bit integer into some register variable */
#define BTFIXUPDEF_INT(__name) \
extern unsigned char ___i_##__name; \
extern unsigned ___is_##__name[2];
#define BTFIXUPCALL_NORM 0x00000000 /* Always call */
#define BTFIXUPCALL_NOP 0x01000000 /* Possibly optimize to nop */
#define BTFIXUPCALL_RETINT(i) (0x90102000|((i) & 0x1fff)) /* Possibly optimize to mov i, %o0 */
#define BTFIXUPCALL_ORINT(i) (0x90122000|((i) & 0x1fff)) /* Possibly optimize to or %o0, i, %o0 */
#define BTFIXUPCALL_RETO0 0x01000000 /* Return first parameter, actually a nop */
#define BTFIXUPCALL_ANDNINT(i) (0x902a2000|((i) & 0x1fff)) /* Possibly optimize to andn %o0, i, %o0 */
#define BTFIXUPCALL_SWAPO0O1 0xd27a0000 /* Possibly optimize to swap [%o0],%o1 */
#define BTFIXUPCALL_SWAPO0G0 0xc07a0000 /* Possibly optimize to swap [%o0],%g0 */
#define BTFIXUPCALL_SWAPG1G2 0xc4784000 /* Possibly optimize to swap [%g1],%g2 */
#define BTFIXUPCALL_STG0O0 0xc0220000 /* Possibly optimize to st %g0,[%o0] */
#define BTFIXUPCALL_STO1O0 0xd2220000 /* Possibly optimize to st %o1,[%o0] */
#define BTFIXUPSET_CALL(__name, __addr, __insn) \
do { \
___fs_##__name[0] |= 1; \
___fs_##__name[1] = (unsigned long)__addr; \
___fs_##__name[2] = __insn; \
} while (0)
#define BTFIXUPSET_BLACKBOX(__name, __func) \
do { \
___bs_##__name[0] |= 1; \
___bs_##__name[1] = (unsigned long)__func; \
} while (0)
#define BTFIXUPCOPY_CALL(__name, __from) \
do { \
___fs_##__name[0] |= 1; \
___fs_##__name[1] = ___fs_##__from[1]; \
___fs_##__name[2] = ___fs_##__from[2]; \
} while (0)
#define BTFIXUPSET_SIMM13(__name, __val) \
do { \
___ss_##__name[0] |= 1; \
___ss_##__name[1] = (unsigned)__val; \
} while (0)
#define BTFIXUPCOPY_SIMM13(__name, __from) \
do { \
___ss_##__name[0] |= 1; \
___ss_##__name[1] = ___ss_##__from[1]; \
} while (0)
#define BTFIXUPSET_HALF(__name, __val) \
do { \
___as_##__name[0] |= 1; \
___as_##__name[1] = (unsigned)__val; \
} while (0)
#define BTFIXUPCOPY_HALF(__name, __from) \
do { \
___as_##__name[0] |= 1; \
___as_##__name[1] = ___as_##__from[1]; \
} while (0)
#define BTFIXUPSET_SETHI(__name, __val) \
do { \
___hs_##__name[0] |= 1; \
___hs_##__name[1] = (unsigned)__val; \
} while (0)
#define BTFIXUPCOPY_SETHI(__name, __from) \
do { \
___hs_##__name[0] |= 1; \
___hs_##__name[1] = ___hs_##__from[1]; \
} while (0)
#define BTFIXUPSET_INT(__name, __val) \
do { \
___is_##__name[0] |= 1; \
___is_##__name[1] = (unsigned)__val; \
} while (0)
#define BTFIXUPCOPY_INT(__name, __from) \
do { \
___is_##__name[0] |= 1; \
___is_##__name[1] = ___is_##__from[1]; \
} while (0)
#define BTFIXUPVAL_CALL(__name) \
((unsigned long)___fs_##__name[1])
extern void btfixup(void);
#else /* __ASSEMBLY__ */
#define BTFIXUP_SETHI(__name) %hi(___h_ ## __name)
#define BTFIXUP_SETHI_INIT(__name,__val) %hi(___h_ ## __name ## __btset_ ## __val)
#endif /* __ASSEMBLY__ */
#endif /* !(_SPARC_BTFIXUP_H) */
@@ -22,118 +22,4 @@
#define __read_mostly __attribute__((__section__(".data..read_mostly")))
#ifdef CONFIG_SPARC32
#include <asm/asi.h>
/* Direct access to the instruction cache is provided through and
* alternate address space. The IDC bit must be off in the ICCR on
* HyperSparcs for these accesses to work. The code below does not do
* any checking, the caller must do so. These routines are for
* diagnostics only, but could end up being useful. Use with care.
* Also, you are asking for trouble if you execute these in one of the
* three instructions following a %asr/%psr access or modification.
*/
/* First, cache-tag access. */
static inline unsigned int get_icache_tag(int setnum, int tagnum)
{
unsigned int vaddr, retval;
vaddr = ((setnum&1) << 12) | ((tagnum&0x7f) << 5);
__asm__ __volatile__("lda [%1] %2, %0\n\t" :
"=r" (retval) :
"r" (vaddr), "i" (ASI_M_TXTC_TAG));
return retval;
}
static inline void put_icache_tag(int setnum, int tagnum, unsigned int entry)
{
unsigned int vaddr;
vaddr = ((setnum&1) << 12) | ((tagnum&0x7f) << 5);
__asm__ __volatile__("sta %0, [%1] %2\n\t" : :
"r" (entry), "r" (vaddr), "i" (ASI_M_TXTC_TAG) :
"memory");
}
/* Second cache-data access. The data is returned two-32bit quantities
* at a time.
*/
static inline void get_icache_data(int setnum, int tagnum, int subblock,
unsigned int *data)
{
unsigned int value1, value2, vaddr;
vaddr = ((setnum&0x1) << 12) | ((tagnum&0x7f) << 5) |
((subblock&0x3) << 3);
__asm__ __volatile__("ldda [%2] %3, %%g2\n\t"
"or %%g0, %%g2, %0\n\t"
"or %%g0, %%g3, %1\n\t" :
"=r" (value1), "=r" (value2) :
"r" (vaddr), "i" (ASI_M_TXTC_DATA) :
"g2", "g3");
data[0] = value1; data[1] = value2;
}
static inline void put_icache_data(int setnum, int tagnum, int subblock,
unsigned int *data)
{
unsigned int value1, value2, vaddr;
vaddr = ((setnum&0x1) << 12) | ((tagnum&0x7f) << 5) |
((subblock&0x3) << 3);
value1 = data[0]; value2 = data[1];
__asm__ __volatile__("or %%g0, %0, %%g2\n\t"
"or %%g0, %1, %%g3\n\t"
"stda %%g2, [%2] %3\n\t" : :
"r" (value1), "r" (value2),
"r" (vaddr), "i" (ASI_M_TXTC_DATA) :
"g2", "g3", "memory" /* no joke */);
}
/* Different types of flushes with the ICACHE. Some of the flushes
* affect both the ICACHE and the external cache. Others only clear
* the ICACHE entries on the cpu itself. V8's (most) allow
* granularity of flushes on the packet (element in line), whole line,
* and entire cache (ie. all lines) level. The ICACHE only flushes are
* ROSS HyperSparc specific and are in ross.h
*/
/* Flushes which clear out both the on-chip and external caches */
static inline void flush_ei_page(unsigned int addr)
{
__asm__ __volatile__("sta %%g0, [%0] %1\n\t" : :
"r" (addr), "i" (ASI_M_FLUSH_PAGE) :
"memory");
}
static inline void flush_ei_seg(unsigned int addr)
{
__asm__ __volatile__("sta %%g0, [%0] %1\n\t" : :
"r" (addr), "i" (ASI_M_FLUSH_SEG) :
"memory");
}
static inline void flush_ei_region(unsigned int addr)
{
__asm__ __volatile__("sta %%g0, [%0] %1\n\t" : :
"r" (addr), "i" (ASI_M_FLUSH_REGION) :
"memory");
}
static inline void flush_ei_ctx(unsigned int addr)
{
__asm__ __volatile__("sta %%g0, [%0] %1\n\t" : :
"r" (addr), "i" (ASI_M_FLUSH_CTX) :
"memory");
}
static inline void flush_ei_user(unsigned int addr)
{
__asm__ __volatile__("sta %%g0, [%0] %1\n\t" : :
"r" (addr), "i" (ASI_M_FLUSH_USER) :
"memory");
}
#endif /* CONFIG_SPARC32 */
#endif /* !(_SPARC_CACHE_H) */
 #ifndef ___ASM_SPARC_CACHEFLUSH_H
 #define ___ASM_SPARC_CACHEFLUSH_H
+/* flush addr - to allow use of self-modifying code */
+#define flushi(addr) __asm__ __volatile__ ("flush %0" : : "r" (addr) : "memory")
 #if defined(__sparc__) && defined(__arch64__)
 #include <asm/cacheflush_64.h>
 #else
......
 #ifndef _SPARC_CACHEFLUSH_H
 #define _SPARC_CACHEFLUSH_H
-#include <linux/mm.h> /* Common for other includes */
-// #include <linux/kernel.h> from pgalloc.h
-// #include <linux/sched.h> from pgalloc.h
-// #include <asm/page.h>
-#include <asm/btfixup.h>
-/*
- * Fine grained cache flushing.
- */
-#ifdef CONFIG_SMP
-BTFIXUPDEF_CALL(void, local_flush_cache_all, void)
-BTFIXUPDEF_CALL(void, local_flush_cache_mm, struct mm_struct *)
-BTFIXUPDEF_CALL(void, local_flush_cache_range, struct vm_area_struct *, unsigned long, unsigned long)
-BTFIXUPDEF_CALL(void, local_flush_cache_page, struct vm_area_struct *, unsigned long)
-#define local_flush_cache_all() BTFIXUP_CALL(local_flush_cache_all)()
-#define local_flush_cache_mm(mm) BTFIXUP_CALL(local_flush_cache_mm)(mm)
-#define local_flush_cache_range(vma,start,end) BTFIXUP_CALL(local_flush_cache_range)(vma,start,end)
-#define local_flush_cache_page(vma,addr) BTFIXUP_CALL(local_flush_cache_page)(vma,addr)
-BTFIXUPDEF_CALL(void, local_flush_page_to_ram, unsigned long)
-BTFIXUPDEF_CALL(void, local_flush_sig_insns, struct mm_struct *, unsigned long)
-#define local_flush_page_to_ram(addr) BTFIXUP_CALL(local_flush_page_to_ram)(addr)
-#define local_flush_sig_insns(mm,insn_addr) BTFIXUP_CALL(local_flush_sig_insns)(mm,insn_addr)
-extern void smp_flush_cache_all(void);
-extern void smp_flush_cache_mm(struct mm_struct *mm);
-extern void smp_flush_cache_range(struct vm_area_struct *vma,
-				  unsigned long start,
-				  unsigned long end);
-extern void smp_flush_cache_page(struct vm_area_struct *vma, unsigned long page);
-extern void smp_flush_page_to_ram(unsigned long page);
-extern void smp_flush_sig_insns(struct mm_struct *mm, unsigned long insn_addr);
-#endif /* CONFIG_SMP */
-BTFIXUPDEF_CALL(void, flush_cache_all, void)
-BTFIXUPDEF_CALL(void, flush_cache_mm, struct mm_struct *)
-BTFIXUPDEF_CALL(void, flush_cache_range, struct vm_area_struct *, unsigned long, unsigned long)
-BTFIXUPDEF_CALL(void, flush_cache_page, struct vm_area_struct *, unsigned long)
-#define flush_cache_all() BTFIXUP_CALL(flush_cache_all)()
-#define flush_cache_mm(mm) BTFIXUP_CALL(flush_cache_mm)(mm)
-#define flush_cache_dup_mm(mm) BTFIXUP_CALL(flush_cache_mm)(mm)
-#define flush_cache_range(vma,start,end) BTFIXUP_CALL(flush_cache_range)(vma,start,end)
-#define flush_cache_page(vma,addr,pfn) BTFIXUP_CALL(flush_cache_page)(vma,addr)
+#include <asm/cachetlb_32.h>
+#define flush_cache_all() \
+	sparc32_cachetlb_ops->cache_all()
+#define flush_cache_mm(mm) \
+	sparc32_cachetlb_ops->cache_mm(mm)
+#define flush_cache_dup_mm(mm) \
+	sparc32_cachetlb_ops->cache_mm(mm)
+#define flush_cache_range(vma,start,end) \
+	sparc32_cachetlb_ops->cache_range(vma, start, end)
+#define flush_cache_page(vma,addr,pfn) \
+	sparc32_cachetlb_ops->cache_page(vma, addr)
 #define flush_icache_range(start, end) do { } while (0)
 #define flush_icache_page(vma, pg) do { } while (0)
@@ -67,11 +29,12 @@ BTFIXUPDEF_CALL(void, flush_cache_page, struct vm_area_struct *, unsigned long)
 	memcpy(dst, src, len); \
 	} while (0)
-BTFIXUPDEF_CALL(void, __flush_page_to_ram, unsigned long)
-BTFIXUPDEF_CALL(void, flush_sig_insns, struct mm_struct *, unsigned long)
-#define __flush_page_to_ram(addr) BTFIXUP_CALL(__flush_page_to_ram)(addr)
-#define flush_sig_insns(mm,insn_addr) BTFIXUP_CALL(flush_sig_insns)(mm,insn_addr)
+#define __flush_page_to_ram(addr) \
+	sparc32_cachetlb_ops->page_to_ram(addr)
+#define flush_sig_insns(mm,insn_addr) \
+	sparc32_cachetlb_ops->sig_insns(mm, insn_addr)
+#define flush_page_for_dma(addr) \
+	sparc32_cachetlb_ops->page_for_dma(addr)
 extern void sparc_flush_page_to_ram(struct page *page);
......
@@ -8,9 +8,6 @@
 #include <linux/mm.h>
 /* Cache flush operations. */
-#define flushi(addr) __asm__ __volatile__ ("flush %0" : : "r" (addr) : "memory")
 #define flushw_all() __asm__ __volatile__("flushw")
 extern void __flushw_user(void);
......
#ifndef _SPARC_CACHETLB_H
#define _SPARC_CACHETLB_H
struct mm_struct;
struct vm_area_struct;
struct sparc32_cachetlb_ops {
void (*cache_all)(void);
void (*cache_mm)(struct mm_struct *);
void (*cache_range)(struct vm_area_struct *, unsigned long,
unsigned long);
void (*cache_page)(struct vm_area_struct *, unsigned long);
void (*tlb_all)(void);
void (*tlb_mm)(struct mm_struct *);
void (*tlb_range)(struct vm_area_struct *, unsigned long,
unsigned long);
void (*tlb_page)(struct vm_area_struct *, unsigned long);
void (*page_to_ram)(unsigned long);
void (*sig_insns)(struct mm_struct *, unsigned long);
void (*page_for_dma)(unsigned long);
};
extern const struct sparc32_cachetlb_ops *sparc32_cachetlb_ops;
#ifdef CONFIG_SMP
extern const struct sparc32_cachetlb_ops *local_ops;
#endif
#endif /* SPARC_CACHETLB_H */
@@ -11,40 +11,13 @@
 #ifndef __ARCH_SPARC_CMPXCHG__
 #define __ARCH_SPARC_CMPXCHG__
-#include <asm/btfixup.h>
-/* This has special calling conventions */
-#ifndef CONFIG_SMP
-BTFIXUPDEF_CALL(void, ___xchg32, void)
-#endif
 static inline unsigned long xchg_u32(__volatile__ unsigned long *m, unsigned long val)
 {
-#ifdef CONFIG_SMP
 	__asm__ __volatile__("swap [%2], %0"
			     : "=&r" (val)
			     : "0" (val), "r" (m)
			     : "memory");
 	return val;
-#else
-	register unsigned long *ptr asm("g1");
-	register unsigned long ret asm("g2");
-	ptr = (unsigned long *) m;
-	ret = val;
-	/* Note: this is magic and the nop there is
-	   really needed. */
-	__asm__ __volatile__(
-	"mov %%o7, %%g4\n\t"
-	"call ___f____xchg32\n\t"
-	" nop\n\t"
-	: "=&r" (ret)
-	: "0" (ret), "r" (ptr)
-	: "g3", "g4", "g7", "memory", "cc");
-	return ret;
-#endif
 }
 extern void __xchg_called_with_bad_pointer(void);
......
@@ -7,28 +7,6 @@
 * Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
 */
/* 3=sun3
4=sun4 (as in sun4 sysmaint student book)
c=sun4c (according to davem) */
#define AC_IDPROM 0x00000000 /* 34 ID PROM, R/O, byte, 32 bytes */
#define AC_PAGEMAP 0x10000000 /* 3 Pagemap R/W, long */
#define AC_SEGMAP 0x20000000 /* 3 Segment map, byte */
#define AC_CONTEXT 0x30000000 /* 34c current mmu-context */
#define AC_SENABLE 0x40000000 /* 34c system dvma/cache/reset enable reg*/
#define AC_UDVMA_ENB 0x50000000 /* 34 Not used on Sun boards, byte */
#define AC_BUS_ERROR 0x60000000 /* 34 Not cleared on read, byte. */
#define AC_SYNC_ERR 0x60000000 /* c fault type */
#define AC_SYNC_VA 0x60000004 /* c fault virtual address */
#define AC_ASYNC_ERR 0x60000008 /* c asynchronous fault type */
#define AC_ASYNC_VA 0x6000000c /* c async fault virtual address */
#define AC_LEDS 0x70000000 /* 34 Zero turns on LEDs, byte */
#define AC_CACHETAGS 0x80000000 /* 34c direct access to the VAC tags */
#define AC_CACHEDDATA 0x90000000 /* 3 c direct access to the VAC data */
#define AC_UDVMA_MAP 0xD0000000 /* 4 Not used on Sun boards, byte */
#define AC_VME_VECTOR 0xE0000000 /* 4 For non-Autovector VME, byte */
#define AC_BOOT_SCC 0xF0000000 /* 34 bypass to access Zilog 8530. byte.*/
/* s=Swift, h=Ross_HyperSPARC, v=TI_Viking, t=Tsunami, r=Ross_Cypress */
#define AC_M_PCR 0x0000 /* shv Processor Control Reg */
#define AC_M_CTPR 0x0100 /* shv Context Table Pointer Reg */
......
@@ -5,30 +5,24 @@
 * Sparc (general) CPU types
 */
 enum sparc_cpu {
-	sun4 = 0x00,
-	sun4c = 0x01,
-	sun4m = 0x02,
-	sun4d = 0x03,
-	sun4e = 0x04,
-	sun4u = 0x05, /* V8 ploos ploos */
-	sun_unknown = 0x06,
-	ap1000 = 0x07, /* almost a sun4m */
-	sparc_leon = 0x08, /* Leon SoC */
+	sun4m = 0x00,
+	sun4d = 0x01,
+	sun4e = 0x02,
+	sun4u = 0x03, /* V8 ploos ploos */
+	sun_unknown = 0x04,
+	ap1000 = 0x05, /* almost a sun4m */
+	sparc_leon = 0x06, /* Leon SoC */
 };
 #ifdef CONFIG_SPARC32
 extern enum sparc_cpu sparc_cpu_model;
-#define ARCH_SUN4C (sparc_cpu_model==sun4c)
 #define SUN4M_NCPUS 4 /* Architectural limit of sun4m. */
 #else
 #define sparc_cpu_model sun4u
-/* This cannot ever be a sun4c :) That's just history. */
-#define ARCH_SUN4C 0
 #endif
 #endif /* __ASM_CPU_TYPE_H */
@@ -14,7 +14,6 @@
 typedef struct {
 	unsigned long udelay_val;
 	unsigned long clock_tick;
-	unsigned int multiplier;
 	unsigned int counter;
 #ifdef CONFIG_SMP
 	unsigned int irq_resched_count;
......
/*
* cypress.h: Cypress module specific definitions and defines.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
*/
#ifndef _SPARC_CYPRESS_H
#define _SPARC_CYPRESS_H
/* Cypress chips have %psr 'impl' of '0001' and 'vers' of '0001'. */
/* The MMU control register fields on the Sparc Cypress 604/605 MMU's.
*
* ---------------------------------------------------------------
* |implvers| MCA | MCM |MV| MID |BM| C|RSV|MR|CM|CL|CE|RSV|NF|ME|
* ---------------------------------------------------------------
* 31 24 23-22 21-20 19 18-15 14 13 12 11 10 9 8 7-2 1 0
*
* MCA: MultiChip Access -- Used for configuration of multiple
* CY7C604/605 cache units.
* MCM: MultiChip Mask -- Again, for multiple cache unit config.
* MV: MultiChip Valid -- Indicates MCM and MCA have valid settings.
* MID: ModuleID -- Unique processor ID for MBus transactions. (605 only)
* BM: Boot Mode -- 0 = not in boot mode, 1 = in boot mode
* C: Cacheable -- Indicates whether accesses are cacheable while
* the MMU is off. 0=no 1=yes
* MR: MemoryReflection -- Indicates whether the bus attached to the
* MBus supports memory reflection. 0=no 1=yes (605 only)
* CM: CacheMode -- Indicates whether the cache is operating in write
* through or copy-back mode. 0=write-through 1=copy-back
* CL: CacheLock -- Indicates if the entire cache is locked or not.
* 0=not-locked 1=locked (604 only)
* CE: CacheEnable -- Is the virtual cache on? 0=no 1=yes
* NF: NoFault -- Do faults generate traps? 0=yes 1=no
* ME: MmuEnable -- Is the MMU doing translations? 0=no 1=yes
*/
#define CYPRESS_MCA 0x00c00000
#define CYPRESS_MCM 0x00300000
#define CYPRESS_MVALID 0x00080000
#define CYPRESS_MIDMASK 0x00078000 /* Only on 605 */
#define CYPRESS_BMODE 0x00004000
#define CYPRESS_ACENABLE 0x00002000
#define CYPRESS_MRFLCT 0x00000800 /* Only on 605 */
#define CYPRESS_CMODE 0x00000400
#define CYPRESS_CLOCK 0x00000200 /* Only on 604 */
#define CYPRESS_CENABLE 0x00000100
#define CYPRESS_NFAULT 0x00000002
#define CYPRESS_MENABLE 0x00000001
static inline void cypress_flush_page(unsigned long page)
{
__asm__ __volatile__("sta %%g0, [%0] %1\n\t" : :
"r" (page), "i" (ASI_M_FLUSH_PAGE));
}
static inline void cypress_flush_segment(unsigned long addr)
{
__asm__ __volatile__("sta %%g0, [%0] %1\n\t" : :
"r" (addr), "i" (ASI_M_FLUSH_SEG));
}
static inline void cypress_flush_region(unsigned long addr)
{
__asm__ __volatile__("sta %%g0, [%0] %1\n\t" : :
"r" (addr), "i" (ASI_M_FLUSH_REGION));
}
static inline void cypress_flush_context(void)
{
__asm__ __volatile__("sta %%g0, [%%g0] %0\n\t" : :
"i" (ASI_M_FLUSH_CTX));
}
/* XXX Displacement flushes for buggy chips and initial testing
* XXX go here.
*/
#endif /* !(_SPARC_CYPRESS_H) */
@@ -92,27 +92,31 @@ extern int isa_dma_bridge_buggy;
 #ifdef CONFIG_SPARC32
 /* Routines for data transfer buffers. */
-BTFIXUPDEF_CALL(char *, mmu_lockarea, char *, unsigned long)
-BTFIXUPDEF_CALL(void, mmu_unlockarea, char *, unsigned long)
-#define mmu_lockarea(vaddr,len) BTFIXUP_CALL(mmu_lockarea)(vaddr,len)
-#define mmu_unlockarea(vaddr,len) BTFIXUP_CALL(mmu_unlockarea)(vaddr,len)
+struct page;
 struct device;
 struct scatterlist;
-/* These are implementations for sbus_map_sg/sbus_unmap_sg... collapse later */
-BTFIXUPDEF_CALL(__u32, mmu_get_scsi_one, struct device *, char *, unsigned long)
-BTFIXUPDEF_CALL(void, mmu_get_scsi_sgl, struct device *, struct scatterlist *, int)
-BTFIXUPDEF_CALL(void, mmu_release_scsi_one, struct device *, __u32, unsigned long)
-BTFIXUPDEF_CALL(void, mmu_release_scsi_sgl, struct device *, struct scatterlist *, int)
-#define mmu_get_scsi_one(dev,vaddr,len) BTFIXUP_CALL(mmu_get_scsi_one)(dev,vaddr,len)
-#define mmu_get_scsi_sgl(dev,sg,sz) BTFIXUP_CALL(mmu_get_scsi_sgl)(dev,sg,sz)
-#define mmu_release_scsi_one(dev,vaddr,len) BTFIXUP_CALL(mmu_release_scsi_one)(dev,vaddr,len)
-#define mmu_release_scsi_sgl(dev,sg,sz) BTFIXUP_CALL(mmu_release_scsi_sgl)(dev,sg,sz)
+struct sparc32_dma_ops {
+	__u32 (*get_scsi_one)(struct device *, char *, unsigned long);
+	void (*get_scsi_sgl)(struct device *, struct scatterlist *, int);
+	void (*release_scsi_one)(struct device *, __u32, unsigned long);
+	void (*release_scsi_sgl)(struct device *, struct scatterlist *,int);
+#ifdef CONFIG_SBUS
+	int (*map_dma_area)(struct device *, dma_addr_t *, unsigned long, unsigned long, int);
+	void (*unmap_dma_area)(struct device *, unsigned long, int);
+#endif
+};
+extern const struct sparc32_dma_ops *sparc32_dma_ops;
+#define mmu_get_scsi_one(dev,vaddr,len) \
+	sparc32_dma_ops->get_scsi_one(dev, vaddr, len)
+#define mmu_get_scsi_sgl(dev,sg,sz) \
+	sparc32_dma_ops->get_scsi_sgl(dev, sg, sz)
+#define mmu_release_scsi_one(dev,vaddr,len) \
+	sparc32_dma_ops->release_scsi_one(dev, vaddr,len)
+#define mmu_release_scsi_sgl(dev,sg,sz) \
+	sparc32_dma_ops->release_scsi_sgl(dev, sg, sz)
 #ifdef CONFIG_SBUS
 /*
  * mmu_map/unmap are provided by iommu/iounit; Invalid to call on IIep.
 *
@@ -123,17 +127,17 @@
 * Second mapping is for device visible address, or "bus" address.
 * The bus address is returned at '*pba'.
 *
- * These functions seem distinct, but are hard to split. On sun4c,
- * at least for now, 'a' is equal to bus address, and retured in *pba.
+ * These functions seem distinct, but are hard to split.
 * On sun4m, page attributes depend on the CPU type, so we have to
 * know if we are mapping RAM or I/O, so it has to be an additional argument
 * to a separate mapping function for CPU visible mappings.
 */
-BTFIXUPDEF_CALL(int, mmu_map_dma_area, struct device *, dma_addr_t *, unsigned long, unsigned long, int len)
-BTFIXUPDEF_CALL(void, mmu_unmap_dma_area, struct device *, unsigned long busa, int len)
-#define mmu_map_dma_area(dev,pba,va,a,len) BTFIXUP_CALL(mmu_map_dma_area)(dev,pba,va,a,len)
-#define mmu_unmap_dma_area(dev,ba,len) BTFIXUP_CALL(mmu_unmap_dma_area)(dev,ba,len)
+#define sbus_map_dma_area(dev,pba,va,a,len) \
+	sparc32_dma_ops->map_dma_area(dev, pba, va, a, len)
+#define sbus_unmap_dma_area(dev,ba,len) \
+	sparc32_dma_ops->unmap_dma_area(dev, ba, len)
+#endif /* CONFIG_SBUS */
 #endif
 #endif /* !(_ASM_SPARC_DMA_H) */
@@ -118,16 +118,9 @@ typedef struct {
   instruction set this cpu supports. This can NOT be done in userspace
   on Sparc. */
-/* Sun4c has none of the capabilities, most sun4m's have them all.
- * XXX This is gross, set some global variable at boot time. -DaveM
- */
-#define ELF_HWCAP ((ARCH_SUN4C) ? 0 : \
-	(HWCAP_SPARC_FLUSH | HWCAP_SPARC_STBAR | \
-	 HWCAP_SPARC_SWAP | \
-	 ((srmmu_modtype != Cypress && \
-	   srmmu_modtype != Cypress_vE && \
-	   srmmu_modtype != Cypress_vD) ? \
-	  HWCAP_SPARC_MULDIV : 0)))
+/* Most sun4m's have them all.  */
+#define ELF_HWCAP (HWCAP_SPARC_FLUSH | HWCAP_SPARC_STBAR | \
+		   HWCAP_SPARC_SWAP | HWCAP_SPARC_MULDIV)
 /* This yields a string that ld.so will use to load implementation
   specific libraries for optimization. This is more specific in
......
@@ -12,7 +12,6 @@
 #include <asm/page.h>
 #include <asm/pgtable.h>
 #include <asm/idprom.h>
-#include <asm/machines.h>
 #include <asm/oplib.h>
 #include <asm/auxio.h>
 #include <asm/irq.h>
@@ -103,25 +102,13 @@ static struct sun_floppy_ops sun_fdops;
 /* Routines unique to each controller type on a Sun. */
 static void sun_set_dor(unsigned char value, int fdc_82077)
 {
-	if (sparc_cpu_model == sun4c) {
-		unsigned int bits = 0;
-		if (value & 0x10)
-			bits |= AUXIO_FLPY_DSEL;
-		if ((value & 0x80) == 0)
-			bits |= AUXIO_FLPY_EJCT;
-		set_auxio(bits, (~bits) & (AUXIO_FLPY_DSEL|AUXIO_FLPY_EJCT));
-	}
-	if (fdc_82077) {
+	if (fdc_82077)
 		sun_fdc->dor_82077 = value;
-	}
 }
 static unsigned char sun_read_dir(void)
 {
-	if (sparc_cpu_model == sun4c)
-		return (get_auxio() & AUXIO_FLPY_DCHG) ? 0x80 : 0;
-	else
-		return sun_fdc->dir_82077;
+	return sun_fdc->dir_82077;
 }
 static unsigned char sun_82072_fd_inb(int port)
@@ -242,10 +229,7 @@ static inline void virtual_dma_init(void)
 static inline void sun_fd_disable_dma(void)
 {
 	doing_pdma = 0;
-	if (pdma_base) {
-		mmu_unlockarea(pdma_base, pdma_areasize);
-		pdma_base = NULL;
-	}
+	pdma_base = NULL;
 }
 static inline void sun_fd_set_dma_mode(int mode)
@@ -275,7 +259,6 @@ static inline void sun_fd_set_dma_count(int length)
 static inline void sun_fd_enable_dma(void)
 {
-	pdma_vaddr = mmu_lockarea(pdma_vaddr, pdma_size);
 	pdma_base = pdma_vaddr;
 	pdma_areasize = pdma_size;
 }
@@ -301,38 +284,36 @@ static int sun_floppy_init(void)
 {
 	struct platform_device *op;
 	struct device_node *dp;
-	struct resource r;
 	char state[128];
-	phandle tnode, fd_node;
+	phandle fd_node;
+	phandle tnode;
 	int num_regs;
+	struct resource r;
 	use_virtual_dma = 1;
 	/* Forget it if we aren't on a machine that could possibly
	 * ever have a floppy drive.
	 */
-	if((sparc_cpu_model != sun4c && sparc_cpu_model != sun4m) ||
-	   ((idprom->id_machtype == (SM_SUN4C | SM_4C_SLC)) ||
-	    (idprom->id_machtype == (SM_SUN4C | SM_4C_ELC)))) {
+	if (sparc_cpu_model != sun4m) {
 		/* We certainly don't have a floppy controller. */
 		goto no_sun_fdc;
 	}
 	/* Well, try to find one. */
 	tnode = prom_getchild(prom_root_node);
 	fd_node = prom_searchsiblings(tnode, "obio");
-	if(fd_node != 0) {
+	if (fd_node != 0) {
 		tnode = prom_getchild(fd_node);
 		fd_node = prom_searchsiblings(tnode, "SUNW,fdtwo");
 	} else {
 		fd_node = prom_searchsiblings(tnode, "fd");
 	}
-	if(fd_node == 0) {
+	if (fd_node == 0) {
 		goto no_sun_fdc;
 	}
 	/* The sun4m lets us know if the controller is actually usable. */
-	if(sparc_cpu_model == sun4m &&
-	   prom_getproperty(fd_node, "status", state, sizeof(state)) != -1) {
+	if (prom_getproperty(fd_node, "status", state, sizeof(state)) != -1) {
 		if(!strcmp(state, "disabled")) {
 			goto no_sun_fdc;
 		}
@@ -343,12 +324,12 @@ static int sun_floppy_init(void)
 	memset(&r, 0, sizeof(r));
 	r.flags = fd_regs[0].which_io;
 	r.start = fd_regs[0].phys_addr;
-	sun_fdc = (struct sun_flpy_controller *)
-		of_ioremap(&r, 0, fd_regs[0].reg_size, "floppy");
+	sun_fdc = of_ioremap(&r, 0, fd_regs[0].reg_size, "floppy");
 	/* Look up irq in platform_device.
	 * We try "SUNW,fdtwo" and "fd"
	 */
+	op = NULL;
 	for_each_node_by_name(dp, "SUNW,fdtwo") {
 		op = of_find_device_by_node(dp);
 		if (op)
@@ -367,7 +348,7 @@ static int sun_floppy_init(void)
 	FLOPPY_IRQ = op->archdata.irqs[0];
 	/* Last minute sanity check... */
-	if(sun_fdc->status_82072 == 0xff) {
+	if (sun_fdc->status_82072 == 0xff) {
 		sun_fdc = NULL;
 		goto no_sun_fdc;
 	}
......
@@ -161,10 +161,7 @@ unsigned long pdma_areasize;
 static void sun_fd_disable_dma(void)
 {
 	doing_pdma = 0;
-	if (pdma_base) {
-		mmu_unlockarea(pdma_base, pdma_areasize);
-		pdma_base = NULL;
-	}
+	pdma_base = NULL;
 }
 static void sun_fd_set_dma_mode(int mode)
@@ -194,7 +191,6 @@ static void sun_fd_set_dma_count(int length)
 static void sun_fd_enable_dma(void)
 {
-	pdma_vaddr = mmu_lockarea(pdma_vaddr, pdma_size);
 	pdma_base = pdma_vaddr;
 	pdma_areasize = pdma_size;
 }
......
@@ -2,15 +2,8 @@
 #define __SPARC_HEAD_H
 #define KERNBASE 0xf0000000 /* First address the kernel will eventually be */
-#define LOAD_ADDR 0x4000 /* prom jumps to us here unless this is elf /boot */
-#define SUN4C_SEGSZ (1 << 18)
-#define SRMMU_L1_KBASE_OFFSET ((KERNBASE>>24)<<2) /* Used in boot remapping. */
-#define INTS_ENAB 0x01 /* entry.S uses this. */
-#define SUN4_PROM_VECTOR 0xFFE81000 /* SUN4 PROM needs to be hardwired */
 #define WRITE_PAUSE nop; nop; nop; /* Have to do this after %wim/%psr chg */
-#define NOP_INSN 0x01000000 /* Used to patch sparc_save_state */
 /* Here are some trap goodies */
@@ -18,9 +11,7 @@
 #define TRAP_ENTRY(type, label) \
	rd %psr, %l0; b label; rd %wim, %l3; nop;
-/* Data/text faults. Defaults to sun4c version at boot time. */
-#define SPARC_TFAULT rd %psr, %l0; rd %wim, %l3; b sun4c_fault; mov 1, %l7;
-#define SPARC_DFAULT rd %psr, %l0; rd %wim, %l3; b sun4c_fault; mov 0, %l7;
+/* Data/text faults */
 #define SRMMU_TFAULT rd %psr, %l0; rd %wim, %l3; b srmmu_fault; mov 1, %l7;
 #define SRMMU_DFAULT rd %psr, %l0; rd %wim, %l3; b srmmu_fault; mov 0, %l7;
@@ -80,16 +71,6 @@
 #define TRAP_ENTRY_INTERRUPT(int_level) \
	mov int_level, %l7; rd %psr, %l0; b real_irq_entry; rd %wim, %l3;
-/* NMI's (Non Maskable Interrupts) are special, you can't keep them
- * from coming in, and basically if you get one, the shows over. ;(
- * On the sun4c they are usually asynchronous memory errors, on the
- * the sun4m they could be either due to mem errors or a software
- * initiated interrupt from the prom/kern on an SMP box saying "I
- * command you to do CPU tricks, read your mailbox for more info."
- */
-#define NMI_TRAP \
-	rd %wim, %l3; b linux_trap_nmi_sun4c; mov %psr, %l0; nop;
 /* Window overflows/underflows are special and we need to try to be as
  * efficient as possible here....
  */
......
@@ -10,19 +10,6 @@
 #ifdef CONFIG_SPARC_LEON
-#define ASI_LEON_NOCACHE 0x01
-#define ASI_LEON_DCACHE_MISS 0x1
-#define ASI_LEON_CACHEREGS 0x02
-#define ASI_LEON_IFLUSH 0x10
-#define ASI_LEON_DFLUSH 0x11
-#define ASI_LEON_MMUFLUSH 0x18
-#define ASI_LEON_MMUREGS 0x19
-#define ASI_LEON_BYPASS 0x1c
-#define ASI_LEON_FLUSH_PAGE 0x10
 /* mmu register access, ASI_LEON_MMUREGS */
 #define LEON_CNR_CTRL 0x000
 #define LEON_CNR_CTXP 0x100
@@ -57,29 +44,6 @@
 #define LEON_IRQMASK_R 0x0000fffe /* bit 15- 1 of lregs.irqmask */
 #define LEON_IRQPRIO_R 0xfffe0000 /* bit 31-17 of lregs.irqmask */
-/* leon uart register definitions */
-#define LEON_OFF_UDATA 0x0
-#define LEON_OFF_USTAT 0x4
-#define LEON_OFF_UCTRL 0x8
-#define LEON_OFF_USCAL 0xc
-#define LEON_UCTRL_RE 0x01
-#define LEON_UCTRL_TE 0x02
-#define LEON_UCTRL_RI 0x04
-#define LEON_UCTRL_TI 0x08
-#define LEON_UCTRL_PS 0x10
-#define LEON_UCTRL_PE 0x20
-#define LEON_UCTRL_FL 0x40
-#define LEON_UCTRL_LB 0x80
-#define LEON_USTAT_DR 0x01
-#define LEON_USTAT_TS 0x02
-#define LEON_USTAT_TH 0x04
-#define LEON_USTAT_BR 0x08
-#define LEON_USTAT_OV 0x10
-#define LEON_USTAT_PE 0x20
-#define LEON_USTAT_FE 0x40
 #define LEON_MCFG2_SRAMDIS 0x00002000
 #define LEON_MCFG2_SDRAMEN 0x00004000
 #define LEON_MCFG2_SRAMBANKSZ 0x00001e00 /* [12-9] */
@@ -89,8 +53,6 @@
 #define LEON_TCNT0_MASK 0x7fffff
-#define LEON_USTAT_ERROR (LEON_USTAT_OV | LEON_USTAT_PE | LEON_USTAT_FE)
-/* no break yet */
 #define ASI_LEON3_SYSCTRL 0x02
 #define ASI_LEON3_SYSCTRL_ICFG 0x08
@@ -278,18 +240,11 @@ static inline int sparc_leon3_cpuid(void)
 #define LEON2_CFG_SSIZE_MASK 0x00007000UL
 #ifndef __ASSEMBLY__
-extern unsigned long srmmu_swprobe(unsigned long vaddr, unsigned long *paddr);
-extern void leon_flush_icache_all(void);
-extern void leon_flush_dcache_all(void);
-extern void leon_flush_cache_all(void);
-extern void leon_flush_tlb_all(void);
-extern int leon_flush_during_switch;
-extern int leon_flush_needed(void);
 struct vm_area_struct;
+extern unsigned long srmmu_swprobe(unsigned long vaddr, unsigned long *paddr);
 extern void leon_flush_icache_all(void);
 extern void leon_flush_dcache_all(void);
-extern void leon_flush_pcache_all(struct vm_area_struct *vma, unsigned long page);
 extern void leon_flush_cache_all(void);
 extern void leon_flush_tlb_all(void);
 extern int leon_flush_during_switch;
@@ -321,22 +276,12 @@ extern unsigned int leon_build_device_irq(unsigned int real_irq,
 extern void leon_update_virq_handling(unsigned int virq,
				      irq_flow_handler_t flow_handler,
				      const char *name, int do_ack);
-extern void leon_clear_clock_irq(void);
-extern void leon_load_profile_irq(int cpu, unsigned int limit);
-extern void leon_init_timers(irq_handler_t counter_fn);
+extern void leon_init_timers(void);
+extern void leon_clear_clock_irq(void);
+extern void leon_load_profile_irq(int cpu, unsigned int limit);
 extern void leon_trans_init(struct device_node *dp);
 extern void leon_node_init(struct device_node *dp, struct device_node ***nextp);
-extern void leon_init_IRQ(void);
-extern void leon_init(void);
-extern unsigned long srmmu_swprobe(unsigned long vaddr, unsigned long *paddr);
 extern void init_leon(void);
 extern void poke_leonsparc(void);
 extern void leon3_getCacheRegs(struct leon3_cacheregs *regs);
-extern int leon_flush_needed(void);
-extern void leon_switch_mm(void);
-extern int srmmu_swprobe_trace;
 extern int leon3_ticker_irq;
 #ifdef CONFIG_SMP
......
@@ -12,11 +12,6 @@ struct Sun_Machine_Models {
 	unsigned char id_machtype;
 };
-/* Current number of machines we know about that has an IDPROM
- * machtype entry including one entry for the 0x80 OBP machines.
- */
-#define NUM_SUN_MACHINES 16
 /* The machine type in the idprom area looks like this:
 *
 * ---------------
@@ -24,36 +19,20 @@ struct Sun_Machine_Models {
 * ---------------
 * 7 4 3 0
 *
- * The ARCH field determines the architecture line (sun4, sun4c, etc).
+ * The ARCH field determines the architecture line (sun4m, etc).
 * The MACH field determines the machine make within that architecture.
 */
 #define SM_ARCH_MASK 0xf0
-#define SM_SUN4 0x20
 #define M_LEON 0x30
-#define SM_SUN4C 0x50
 #define SM_SUN4M 0x70
 #define SM_SUN4M_OBP 0x80
 #define SM_TYP_MASK 0x0f
-/* Sun4 machines */
-#define SM_4_260 0x01 /* Sun 4/200 series */
-#define SM_4_110 0x02 /* Sun 4/100 series */
-#define SM_4_330 0x03 /* Sun 4/300 series */
-#define SM_4_470 0x04 /* Sun 4/400 series */
 /* Leon machines */
 #define M_LEON3_SOC 0x02 /* Leon3 SoC */
-/* Sun4c machines Full Name - PROM NAME */
-#define SM_4C_SS1 0x01 /* Sun4c SparcStation 1 - Sun 4/60 */
-#define SM_4C_IPC 0x02 /* Sun4c SparcStation IPC - Sun 4/40 */
-#define SM_4C_SS1PLUS 0x03 /* Sun4c SparcStation 1+ - Sun 4/65 */
-#define SM_4C_SLC 0x04 /* Sun4c SparcStation SLC - Sun 4/20 */
-#define SM_4C_SS2 0x05 /* Sun4c SparcStation 2 - Sun 4/75 */
-#define SM_4C_ELC 0x06 /* Sun4c SparcStation ELC - Sun 4/25 */
-#define SM_4C_IPX 0x07 /* Sun4c SparcStation IPX - Sun 4/50 */
 /* Sun4m machines, these predate the OpenBoot. These values only mean
 * something if the value in the ARCH field is SM_SUN4M, if it is
 * SM_SUN4M_OBP then you have the following situation:
......
...@@ -8,14 +8,10 @@ ...@@ -8,14 +8,10 @@
#define _SPARC_MBUS_H #define _SPARC_MBUS_H
#include <asm/ross.h> /* HyperSparc stuff */ #include <asm/ross.h> /* HyperSparc stuff */
#include <asm/cypress.h> /* Cypress Chips */
#include <asm/viking.h> /* Ugh, bug city... */ #include <asm/viking.h> /* Ugh, bug city... */
enum mbus_module { enum mbus_module {
HyperSparc = 0, HyperSparc = 0,
Cypress = 1,
Cypress_vE = 2,
Cypress_vD = 3,
Swift_ok = 4, Swift_ok = 4,
Swift_bad_c = 5, Swift_bad_c = 5,
Swift_lots_o_bugs = 6, Swift_lots_o_bugs = 6,
......
#ifndef _SPARC_MEMREG_H
#define _SPARC_MEMREG_H
/* memreg.h: Definitions of the values found in the synchronous
* and asynchronous memory error registers when a fault
* occurs on the sun4c.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
*/
/* First the synchronous error codes, these are usually just
* normal page faults.
*/
#define SUN4C_SYNC_WDRESET 0x0001 /* watchdog reset */
#define SUN4C_SYNC_SIZE 0x0002 /* bad access size? whuz this? */
#define SUN4C_SYNC_PARITY 0x0008 /* bad ram chips caused a parity error */
#define SUN4C_SYNC_SBUS 0x0010 /* the SBUS had some problems... */
#define SUN4C_SYNC_NOMEM 0x0020 /* translation to non-existent ram */
#define SUN4C_SYNC_PROT 0x0040 /* access violated pte protections */
#define SUN4C_SYNC_NPRESENT 0x0080 /* pte said that page was not present */
#define SUN4C_SYNC_BADWRITE 0x8000 /* while writing something went bogus */
#define SUN4C_SYNC_BOLIXED \
(SUN4C_SYNC_WDRESET | SUN4C_SYNC_SIZE | SUN4C_SYNC_SBUS | \
SUN4C_SYNC_NOMEM | SUN4C_SYNC_PARITY)
/* Now the asynchronous error codes, these are almost always produced
* by the cache writing things back to memory and getting a bad translation.
* Bad DVMA transactions can cause these faults too.
*/
#define SUN4C_ASYNC_BADDVMA 0x0010 /* error during DVMA access */
#define SUN4C_ASYNC_NOMEM 0x0020 /* write back pointed to bad phys addr */
#define SUN4C_ASYNC_BADWB 0x0080 /* write back points to non-present page */
/* Memory parity error register with associated bit constants. */
#ifndef __ASSEMBLY__
extern __volatile__ unsigned long __iomem *sun4c_memerr_reg;
#endif
#define SUN4C_MPE_ERROR 0x80 /* Parity error detected. (ro) */
#define SUN4C_MPE_MULTI 0x40 /* Multiple parity errors detected. (ro) */
#define SUN4C_MPE_TEST 0x20 /* Write inverse parity. (rw) */
#define SUN4C_MPE_CHECK 0x10 /* Enable parity checking. (rw) */
#define SUN4C_MPE_ERR00 0x08 /* Parity error in bits 0-7. (ro) */
#define SUN4C_MPE_ERR08 0x04 /* Parity error in bits 8-15. (ro) */
#define SUN4C_MPE_ERR16 0x02 /* Parity error in bits 16-23. (ro) */
#define SUN4C_MPE_ERR24 0x01 /* Parity error in bits 24-31. (ro) */
#define SUN4C_MPE_ERRS 0x0F /* Bit mask for the error bits. (ro) */
#endif /* !(_SPARC_MEMREG_H) */
#ifndef __SPARC_MMU_CONTEXT_H #ifndef __SPARC_MMU_CONTEXT_H
#define __SPARC_MMU_CONTEXT_H #define __SPARC_MMU_CONTEXT_H
#include <asm/btfixup.h>
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <asm-generic/mm_hooks.h> #include <asm-generic/mm_hooks.h>
...@@ -23,14 +21,11 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk) ...@@ -23,14 +21,11 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
* all the page tables have been flushed. Our job is to destroy * all the page tables have been flushed. Our job is to destroy
* any remaining processor-specific state. * any remaining processor-specific state.
*/ */
BTFIXUPDEF_CALL(void, destroy_context, struct mm_struct *) void destroy_context(struct mm_struct *mm);
#define destroy_context(mm) BTFIXUP_CALL(destroy_context)(mm)
/* Switch the current MM context. */ /* Switch the current MM context. */
BTFIXUPDEF_CALL(void, switch_mm, struct mm_struct *, struct mm_struct *, struct task_struct *) void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm,
struct task_struct *tsk);
#define switch_mm(old_mm, mm, tsk) BTFIXUP_CALL(switch_mm)(old_mm, mm, tsk)
#define deactivate_mm(tsk,mm) do { } while (0) #define deactivate_mm(tsk,mm) do { } while (0)
......
...@@ -220,19 +220,6 @@ static inline void cc_set_igen(unsigned gen) ...@@ -220,19 +220,6 @@ static inline void cc_set_igen(unsigned gen)
"i" (ASI_M_MXCC)); "i" (ASI_M_MXCC));
} }
/* +-------+-------------+-----------+------------------------------------+
* | bcast | devid | sid | levels mask |
* +-------+-------------+-----------+------------------------------------+
* 31 30 23 22 15 14 0
*/
#define IGEN_MESSAGE(bcast, devid, sid, levels) \
(((bcast) << 31) | ((devid) << 23) | ((sid) << 15) | (levels))
static inline void sun4d_send_ipi(int cpu, int level)
{
cc_set_igen(IGEN_MESSAGE(0, cpu << 3, 6 + ((level >> 1) & 7), 1 << (level - 1)));
}
#endif /* !__ASSEMBLY__ */ #endif /* !__ASSEMBLY__ */
#endif /* !(_SPARC_OBIO_H) */ #endif /* !(_SPARC_OBIO_H) */
...@@ -105,14 +105,6 @@ extern void prom_write(const char *buf, unsigned int len); ...@@ -105,14 +105,6 @@ extern void prom_write(const char *buf, unsigned int len);
extern int prom_startcpu(int cpunode, struct linux_prom_registers *context_table, extern int prom_startcpu(int cpunode, struct linux_prom_registers *context_table,
int context, char *program_counter); int context, char *program_counter);
/* Sun4/sun4c specific memory-management startup hook. */
/* Map the passed segment in the given context at the passed
* virtual address.
*/
extern void prom_putsegment(int context, unsigned long virt_addr,
int physical_segment);
/* Initialize the memory lists based upon the prom version. */ /* Initialize the memory lists based upon the prom version. */
void prom_meminit(void); void prom_meminit(void);
......
...@@ -14,8 +14,6 @@ ...@@ -14,8 +14,6 @@
#define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT) #define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT)
#define PAGE_MASK (~(PAGE_SIZE-1)) #define PAGE_MASK (~(PAGE_SIZE-1))
#include <asm/btfixup.h>
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#define clear_page(page) memset((void *)(page), 0, PAGE_SIZE) #define clear_page(page) memset((void *)(page), 0, PAGE_SIZE)
...@@ -45,12 +43,6 @@ struct sparc_phys_banks { ...@@ -45,12 +43,6 @@ struct sparc_phys_banks {
extern struct sparc_phys_banks sp_banks[SPARC_PHYS_BANKS+1]; extern struct sparc_phys_banks sp_banks[SPARC_PHYS_BANKS+1];
/* Cache alias structure. Entry is valid if context != -1. */
struct cache_palias {
unsigned long vaddr;
int context;
};
/* passing structs on the Sparc slow us down tremendously... */ /* passing structs on the Sparc slow us down tremendously... */
/* #define STRICT_MM_TYPECHECKS */ /* #define STRICT_MM_TYPECHECKS */
...@@ -116,10 +108,7 @@ typedef unsigned long iopgprot_t; ...@@ -116,10 +108,7 @@ typedef unsigned long iopgprot_t;
typedef struct page *pgtable_t; typedef struct page *pgtable_t;
extern unsigned long sparc_unmapped_base; extern unsigned long sparc_unmapped_base;
#define TASK_UNMAPPED_BASE sparc_unmapped_base
BTFIXUPDEF_SETHI(sparc_unmapped_base)
#define TASK_UNMAPPED_BASE BTFIXUP_SETHI(sparc_unmapped_base)
#else /* !(__ASSEMBLY__) */ #else /* !(__ASSEMBLY__) */
......
...@@ -4,8 +4,10 @@ ...@@ -4,8 +4,10 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/sched.h> #include <linux/sched.h>
#include <asm/pgtsrmmu.h>
#include <asm/pgtable.h>
#include <asm/vaddrs.h>
#include <asm/page.h> #include <asm/page.h>
#include <asm/btfixup.h>
struct page; struct page;
...@@ -15,54 +17,74 @@ extern struct pgtable_cache_struct { ...@@ -15,54 +17,74 @@ extern struct pgtable_cache_struct {
unsigned long pgtable_cache_sz; unsigned long pgtable_cache_sz;
unsigned long pgd_cache_sz; unsigned long pgd_cache_sz;
} pgt_quicklists; } pgt_quicklists;
unsigned long srmmu_get_nocache(int size, int align);
void srmmu_free_nocache(unsigned long vaddr, int size);
#define pgd_quicklist (pgt_quicklists.pgd_cache) #define pgd_quicklist (pgt_quicklists.pgd_cache)
#define pmd_quicklist ((unsigned long *)0) #define pmd_quicklist ((unsigned long *)0)
#define pte_quicklist (pgt_quicklists.pte_cache) #define pte_quicklist (pgt_quicklists.pte_cache)
#define pgtable_cache_size (pgt_quicklists.pgtable_cache_sz) #define pgtable_cache_size (pgt_quicklists.pgtable_cache_sz)
#define pgd_cache_size (pgt_quicklists.pgd_cache_sz) #define pgd_cache_size (pgt_quicklists.pgd_cache_sz)
extern void check_pgt_cache(void); #define check_pgt_cache() do { } while (0)
BTFIXUPDEF_CALL(void, do_check_pgt_cache, int, int)
#define do_check_pgt_cache(low,high) BTFIXUP_CALL(do_check_pgt_cache)(low,high)
BTFIXUPDEF_CALL(pgd_t *, get_pgd_fast, void)
#define get_pgd_fast() BTFIXUP_CALL(get_pgd_fast)()
BTFIXUPDEF_CALL(void, free_pgd_fast, pgd_t *) pgd_t *get_pgd_fast(void);
#define free_pgd_fast(pgd) BTFIXUP_CALL(free_pgd_fast)(pgd) static inline void free_pgd_fast(pgd_t *pgd)
{
srmmu_free_nocache((unsigned long)pgd, SRMMU_PGD_TABLE_SIZE);
}
#define pgd_free(mm, pgd) free_pgd_fast(pgd) #define pgd_free(mm, pgd) free_pgd_fast(pgd)
#define pgd_alloc(mm) get_pgd_fast() #define pgd_alloc(mm) get_pgd_fast()
BTFIXUPDEF_CALL(void, pgd_set, pgd_t *, pmd_t *) static inline void pgd_set(pgd_t * pgdp, pmd_t * pmdp)
#define pgd_set(pgdp,pmdp) BTFIXUP_CALL(pgd_set)(pgdp,pmdp) {
unsigned long pa = __nocache_pa((unsigned long)pmdp);
set_pte((pte_t *)pgdp, (SRMMU_ET_PTD | (pa >> 4)));
}
#define pgd_populate(MM, PGD, PMD) pgd_set(PGD, PMD) #define pgd_populate(MM, PGD, PMD) pgd_set(PGD, PMD)
BTFIXUPDEF_CALL(pmd_t *, pmd_alloc_one, struct mm_struct *, unsigned long) static inline pmd_t *pmd_alloc_one(struct mm_struct *mm,
#define pmd_alloc_one(mm, address) BTFIXUP_CALL(pmd_alloc_one)(mm, address) unsigned long address)
{
return (pmd_t *)srmmu_get_nocache(SRMMU_PMD_TABLE_SIZE,
SRMMU_PMD_TABLE_SIZE);
}
BTFIXUPDEF_CALL(void, free_pmd_fast, pmd_t *) static inline void free_pmd_fast(pmd_t * pmd)
#define free_pmd_fast(pmd) BTFIXUP_CALL(free_pmd_fast)(pmd) {
srmmu_free_nocache((unsigned long)pmd, SRMMU_PMD_TABLE_SIZE);
}
#define pmd_free(mm, pmd) free_pmd_fast(pmd) #define pmd_free(mm, pmd) free_pmd_fast(pmd)
#define __pmd_free_tlb(tlb, pmd, addr) pmd_free((tlb)->mm, pmd) #define __pmd_free_tlb(tlb, pmd, addr) pmd_free((tlb)->mm, pmd)
BTFIXUPDEF_CALL(void, pmd_populate, pmd_t *, struct page *) void pmd_populate(struct mm_struct *mm, pmd_t *pmdp, struct page *ptep);
#define pmd_populate(MM, PMD, PTE) BTFIXUP_CALL(pmd_populate)(PMD, PTE)
#define pmd_pgtable(pmd) pmd_page(pmd) #define pmd_pgtable(pmd) pmd_page(pmd)
BTFIXUPDEF_CALL(void, pmd_set, pmd_t *, pte_t *)
#define pmd_populate_kernel(MM, PMD, PTE) BTFIXUP_CALL(pmd_set)(PMD, PTE)
BTFIXUPDEF_CALL(pgtable_t , pte_alloc_one, struct mm_struct *, unsigned long) void pmd_set(pmd_t *pmdp, pte_t *ptep);
#define pte_alloc_one(mm, address) BTFIXUP_CALL(pte_alloc_one)(mm, address) #define pmd_populate_kernel(MM, PMD, PTE) pmd_set(PMD, PTE)
BTFIXUPDEF_CALL(pte_t *, pte_alloc_one_kernel, struct mm_struct *, unsigned long)
#define pte_alloc_one_kernel(mm, addr) BTFIXUP_CALL(pte_alloc_one_kernel)(mm, addr) pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address);
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
{
return (pte_t *)srmmu_get_nocache(PTE_SIZE, PTE_SIZE);
}
static inline void free_pte_fast(pte_t *pte)
{
srmmu_free_nocache((unsigned long)pte, PTE_SIZE);
}
BTFIXUPDEF_CALL(void, free_pte_fast, pte_t *) #define pte_free_kernel(mm, pte) free_pte_fast(pte)
#define pte_free_kernel(mm, pte) BTFIXUP_CALL(free_pte_fast)(pte)
BTFIXUPDEF_CALL(void, pte_free, pgtable_t ) void pte_free(struct mm_struct * mm, pgtable_t pte);
#define pte_free(mm, pte) BTFIXUP_CALL(pte_free)(pte)
#define __pte_free_tlb(tlb, pte, addr) pte_free((tlb)->mm, pte) #define __pte_free_tlb(tlb, pte, addr) pte_free((tlb)->mm, pte)
#endif /* _SPARC_PGALLOC_H */ #endif /* _SPARC_PGALLOC_H */
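The new pgalloc_32.h above replaces the BTFIXUP call sites with plain inline wrappers around a single nocache allocator. A minimal user-space sketch of that wrapper shape, with invented names and a made-up size standing in for srmmu_get_nocache()/srmmu_free_nocache() and SRMMU_PGD_TABLE_SIZE (an illustration only, not the kernel allocator):

#include <stdio.h>
#include <stdlib.h>

#define FAKE_PGD_TABLE_SIZE 1024	/* stand-in for SRMMU_PGD_TABLE_SIZE */

/* stand-in for srmmu_get_nocache(): size and alignment happen to match here */
static unsigned long fake_get_nocache(int size, int align)
{
	return (unsigned long)aligned_alloc(align, size);
}

/* stand-in for srmmu_free_nocache() */
static void fake_free_nocache(unsigned long vaddr, int size)
{
	(void)size;
	free((void *)vaddr);
}

/* same shape as the new free_pgd_fast(): a thin inline over the allocator */
static inline void fake_free_pgd_fast(unsigned long pgd)
{
	fake_free_nocache(pgd, FAKE_PGD_TABLE_SIZE);
}

int main(void)
{
	unsigned long pgd = fake_get_nocache(FAKE_PGD_TABLE_SIZE,
					     FAKE_PGD_TABLE_SIZE);
	printf("fake pgd at %#lx\n", pgd);
	fake_free_pgd_fast(pgd);
	return 0;
}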
...@@ -717,10 +717,6 @@ extern unsigned long find_ecache_flush_span(unsigned long size); ...@@ -717,10 +717,6 @@ extern unsigned long find_ecache_flush_span(unsigned long size);
struct seq_file; struct seq_file;
extern void mmu_info(struct seq_file *); extern void mmu_info(struct seq_file *);
/* These do nothing with the way I have things setup. */
#define mmu_lockarea(vaddr, len) (vaddr)
#define mmu_unlockarea(vaddr, len) do { } while(0)
struct vm_area_struct; struct vm_area_struct;
extern void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *); extern void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *);
......
...@@ -173,17 +173,6 @@ static inline void srmmu_set_ctable_ptr(unsigned long paddr) ...@@ -173,17 +173,6 @@ static inline void srmmu_set_ctable_ptr(unsigned long paddr)
"memory"); "memory");
} }
static inline unsigned long srmmu_get_ctable_ptr(void)
{
unsigned int retval;
__asm__ __volatile__("lda [%1] %2, %0\n\t" :
"=r" (retval) :
"r" (SRMMU_CTXTBL_PTR),
"i" (ASI_M_MMUREGS));
return (retval & SRMMU_CTX_PMASK) << 4;
}
static inline void srmmu_set_context(int context) static inline void srmmu_set_context(int context)
{ {
__asm__ __volatile__("sta %0, [%1] %2\n\t" : : __asm__ __volatile__("sta %0, [%1] %2\n\t" : :
...@@ -231,42 +220,6 @@ static inline void srmmu_flush_whole_tlb(void) ...@@ -231,42 +220,6 @@ static inline void srmmu_flush_whole_tlb(void)
} }
/* These flush types are not available on all chips... */ /* These flush types are not available on all chips... */
static inline void srmmu_flush_tlb_ctx(void)
{
__asm__ __volatile__("sta %%g0, [%0] %1\n\t": :
"r" (0x300), /* Flush TLB ctx.. */
"i" (ASI_M_FLUSH_PROBE) : "memory");
}
static inline void srmmu_flush_tlb_region(unsigned long addr)
{
addr &= SRMMU_PGDIR_MASK;
__asm__ __volatile__("sta %%g0, [%0] %1\n\t": :
"r" (addr | 0x200), /* Flush TLB region.. */
"i" (ASI_M_FLUSH_PROBE) : "memory");
}
static inline void srmmu_flush_tlb_segment(unsigned long addr)
{
addr &= SRMMU_REAL_PMD_MASK;
__asm__ __volatile__("sta %%g0, [%0] %1\n\t": :
"r" (addr | 0x100), /* Flush TLB segment.. */
"i" (ASI_M_FLUSH_PROBE) : "memory");
}
static inline void srmmu_flush_tlb_page(unsigned long page)
{
page &= PAGE_MASK;
__asm__ __volatile__("sta %%g0, [%0] %1\n\t": :
"r" (page), /* Flush TLB page.. */
"i" (ASI_M_FLUSH_PROBE) : "memory");
}
#ifndef CONFIG_SPARC_LEON #ifndef CONFIG_SPARC_LEON
static inline unsigned long srmmu_hwprobe(unsigned long vaddr) static inline unsigned long srmmu_hwprobe(unsigned long vaddr)
{ {
...@@ -294,9 +247,6 @@ srmmu_get_pte (unsigned long addr) ...@@ -294,9 +247,6 @@ srmmu_get_pte (unsigned long addr)
return entry; return entry;
} }
extern unsigned long (*srmmu_read_physical)(unsigned long paddr);
extern void (*srmmu_write_physical)(unsigned long paddr, unsigned long word);
#endif /* !(__ASSEMBLY__) */ #endif /* !(__ASSEMBLY__) */
#endif /* !(_SPARC_PGTSRMMU_H) */ #endif /* !(_SPARC_PGTSRMMU_H) */
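For reference, the srmmu_flush_tlb_*() helpers removed above all store to ASI_M_FLUSH_PROBE with the flush type encoded just above the page offset of the target address: 0x000 for a page, 0x100 for a segment, 0x200 for a region, 0x300 for a whole context, as visible in the removed code. A hedged sketch that only reproduces that address arithmetic (the FAKE_* masks are assumptions standing in for PAGE_MASK, SRMMU_REAL_PMD_MASK and SRMMU_PGDIR_MASK):

#include <stdio.h>

#define FAKE_PAGE_MASK   (~0xfffUL)	/* 4 KB pages assumed */
#define FAKE_PMD_MASK    (~0x3ffffUL)	/* stand-in for SRMMU_REAL_PMD_MASK */
#define FAKE_PGDIR_MASK  (~0xffffffUL)	/* stand-in for SRMMU_PGDIR_MASK */

int main(void)
{
	unsigned long va = 0x12345678UL;

	printf("page    probe addr: %#lx\n", va & FAKE_PAGE_MASK);
	printf("segment probe addr: %#lx\n", (va & FAKE_PMD_MASK) | 0x100);
	printf("region  probe addr: %#lx\n", (va & FAKE_PGDIR_MASK) | 0x200);
	printf("context probe addr: %#lx\n", 0x300UL);
	return 0;
}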
/*
* pgtsun4c.h: Sun4c specific pgtable.h defines and code.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
*/
#ifndef _SPARC_PGTSUN4C_H
#define _SPARC_PGTSUN4C_H
#include <asm/contregs.h>
/* PMD_SHIFT determines the size of the area a second-level page table can map */
#define SUN4C_PMD_SHIFT 22
/* PGDIR_SHIFT determines what a third-level page table entry can map */
#define SUN4C_PGDIR_SHIFT 22
#define SUN4C_PGDIR_SIZE (1UL << SUN4C_PGDIR_SHIFT)
#define SUN4C_PGDIR_MASK (~(SUN4C_PGDIR_SIZE-1))
#define SUN4C_PGDIR_ALIGN(addr) (((addr)+SUN4C_PGDIR_SIZE-1)&SUN4C_PGDIR_MASK)
/* To represent how the sun4c mmu really lays things out. */
#define SUN4C_REAL_PGDIR_SHIFT 18
#define SUN4C_REAL_PGDIR_SIZE (1UL << SUN4C_REAL_PGDIR_SHIFT)
#define SUN4C_REAL_PGDIR_MASK (~(SUN4C_REAL_PGDIR_SIZE-1))
#define SUN4C_REAL_PGDIR_ALIGN(addr) (((addr)+SUN4C_REAL_PGDIR_SIZE-1)&SUN4C_REAL_PGDIR_MASK)
/* 16 bit PFN on sun4c */
#define SUN4C_PFN_MASK 0xffff
/* Don't increase these unless the structures in sun4c.c are fixed */
#define SUN4C_MAX_SEGMAPS 256
#define SUN4C_MAX_CONTEXTS 16
/*
* To be efficient, and not have to worry about allocating such
* a huge pgd, we make the kernel sun4c tables each hold 1024
* entries and the pgd similarly just like the i386 tables.
*/
#define SUN4C_PTRS_PER_PTE 1024
#define SUN4C_PTRS_PER_PMD 1
#define SUN4C_PTRS_PER_PGD 1024
/*
* Sparc SUN4C pte fields.
*/
#define _SUN4C_PAGE_VALID 0x80000000
#define _SUN4C_PAGE_SILENT_READ 0x80000000 /* synonym */
#define _SUN4C_PAGE_DIRTY 0x40000000
#define _SUN4C_PAGE_SILENT_WRITE 0x40000000 /* synonym */
#define _SUN4C_PAGE_PRIV 0x20000000 /* privileged page */
#define _SUN4C_PAGE_NOCACHE 0x10000000 /* non-cacheable page */
#define _SUN4C_PAGE_PRESENT 0x08000000 /* implemented in software */
#define _SUN4C_PAGE_IO 0x04000000 /* I/O page */
#define _SUN4C_PAGE_FILE 0x02000000 /* implemented in software */
#define _SUN4C_PAGE_READ 0x00800000 /* implemented in software */
#define _SUN4C_PAGE_WRITE 0x00400000 /* implemented in software */
#define _SUN4C_PAGE_ACCESSED 0x00200000 /* implemented in software */
#define _SUN4C_PAGE_MODIFIED 0x00100000 /* implemented in software */
#define _SUN4C_READABLE (_SUN4C_PAGE_READ|_SUN4C_PAGE_SILENT_READ|\
_SUN4C_PAGE_ACCESSED)
#define _SUN4C_WRITEABLE (_SUN4C_PAGE_WRITE|_SUN4C_PAGE_SILENT_WRITE|\
_SUN4C_PAGE_MODIFIED)
#define _SUN4C_PAGE_CHG_MASK (0xffff|_SUN4C_PAGE_ACCESSED|_SUN4C_PAGE_MODIFIED)
#define SUN4C_PAGE_NONE __pgprot(_SUN4C_PAGE_PRESENT)
#define SUN4C_PAGE_SHARED __pgprot(_SUN4C_PAGE_PRESENT|_SUN4C_READABLE|\
_SUN4C_PAGE_WRITE)
#define SUN4C_PAGE_COPY __pgprot(_SUN4C_PAGE_PRESENT|_SUN4C_READABLE)
#define SUN4C_PAGE_READONLY __pgprot(_SUN4C_PAGE_PRESENT|_SUN4C_READABLE)
#define SUN4C_PAGE_KERNEL __pgprot(_SUN4C_READABLE|_SUN4C_WRITEABLE|\
_SUN4C_PAGE_DIRTY|_SUN4C_PAGE_PRIV)
/* SUN4C swap entry encoding
*
* We use 5 bits for the type and 19 for the offset. This gives us
* 32 swapfiles of 4GB each. Encoding looks like:
*
* RRRRRRRRooooooooooooooooooottttt
* fedcba9876543210fedcba9876543210
*
* The top 8 bits are reserved for protection and status bits, especially
* FILE and PRESENT.
*/
#define SUN4C_SWP_TYPE_MASK 0x1f
#define SUN4C_SWP_OFF_MASK 0x7ffff
#define SUN4C_SWP_OFF_SHIFT 5
#ifndef __ASSEMBLY__
static inline unsigned long sun4c_get_synchronous_error(void)
{
unsigned long sync_err;
__asm__ __volatile__("lda [%1] %2, %0\n\t" :
"=r" (sync_err) :
"r" (AC_SYNC_ERR), "i" (ASI_CONTROL));
return sync_err;
}
static inline unsigned long sun4c_get_synchronous_address(void)
{
unsigned long sync_addr;
__asm__ __volatile__("lda [%1] %2, %0\n\t" :
"=r" (sync_addr) :
"r" (AC_SYNC_VA), "i" (ASI_CONTROL));
return sync_addr;
}
/* SUN4C pte, segmap, and context manipulation */
static inline unsigned long sun4c_get_segmap(unsigned long addr)
{
register unsigned long entry;
__asm__ __volatile__("\n\tlduba [%1] %2, %0\n\t" :
"=r" (entry) :
"r" (addr), "i" (ASI_SEGMAP));
return entry;
}
static inline void sun4c_put_segmap(unsigned long addr, unsigned long entry)
{
__asm__ __volatile__("\n\tstba %1, [%0] %2; nop; nop; nop;\n\t" : :
"r" (addr), "r" (entry),
"i" (ASI_SEGMAP)
: "memory");
}
static inline unsigned long sun4c_get_pte(unsigned long addr)
{
register unsigned long entry;
__asm__ __volatile__("\n\tlda [%1] %2, %0\n\t" :
"=r" (entry) :
"r" (addr), "i" (ASI_PTE));
return entry;
}
static inline void sun4c_put_pte(unsigned long addr, unsigned long entry)
{
__asm__ __volatile__("\n\tsta %1, [%0] %2; nop; nop; nop;\n\t" : :
"r" (addr),
"r" ((entry & ~(_SUN4C_PAGE_PRESENT))), "i" (ASI_PTE)
: "memory");
}
static inline int sun4c_get_context(void)
{
register int ctx;
__asm__ __volatile__("\n\tlduba [%1] %2, %0\n\t" :
"=r" (ctx) :
"r" (AC_CONTEXT), "i" (ASI_CONTROL));
return ctx;
}
static inline int sun4c_set_context(int ctx)
{
__asm__ __volatile__("\n\tstba %0, [%1] %2; nop; nop; nop;\n\t" : :
"r" (ctx), "r" (AC_CONTEXT), "i" (ASI_CONTROL)
: "memory");
return ctx;
}
#endif /* !(__ASSEMBLY__) */
#endif /* !(_SPARC_PGTSUN4C_H) */
...@@ -16,7 +16,6 @@ ...@@ -16,7 +16,6 @@
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/head.h> #include <asm/head.h>
#include <asm/signal.h> #include <asm/signal.h>
#include <asm/btfixup.h>
#include <asm/page.h> #include <asm/page.h>
/* /*
......
...@@ -20,10 +20,7 @@ extern char reboot_command[]; ...@@ -20,10 +20,7 @@ extern char reboot_command[];
* Only sun4d + leon may have boot_cpu_id != 0 * Only sun4d + leon may have boot_cpu_id != 0
*/ */
extern unsigned char boot_cpu_id; extern unsigned char boot_cpu_id;
extern unsigned char boot_cpu_id4;
extern unsigned long empty_bad_page;
extern unsigned long empty_bad_page_table;
extern unsigned long empty_zero_page; extern unsigned long empty_zero_page;
extern int serial_console; extern int serial_console;
......
...@@ -4,8 +4,6 @@ ...@@ -4,8 +4,6 @@
#define __ARCH_FORCE_SHMLBA 1 #define __ARCH_FORCE_SHMLBA 1
extern int vac_cache_size; extern int vac_cache_size;
#define SHMLBA (vac_cache_size ? vac_cache_size : \ #define SHMLBA (vac_cache_size ? vac_cache_size : PAGE_SIZE)
(sparc_cpu_model == sun4c ? (64 * 1024) : \
(sparc_cpu_model == sun4 ? (128 * 1024) : PAGE_SIZE)))
#endif /* _ASMSPARC_SHMPARAM_H */ #endif /* _ASMSPARC_SHMPARAM_H */
...@@ -8,7 +8,6 @@ ...@@ -8,7 +8,6 @@
#include <linux/threads.h> #include <linux/threads.h>
#include <asm/head.h> #include <asm/head.h>
#include <asm/btfixup.h>
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
...@@ -58,104 +57,53 @@ struct seq_file; ...@@ -58,104 +57,53 @@ struct seq_file;
void smp_bogo(struct seq_file *); void smp_bogo(struct seq_file *);
void smp_info(struct seq_file *); void smp_info(struct seq_file *);
BTFIXUPDEF_CALL(void, smp_cross_call, smpfunc_t, cpumask_t, unsigned long, unsigned long, unsigned long, unsigned long) struct sparc32_ipi_ops {
BTFIXUPDEF_CALL(int, __hard_smp_processor_id, void) void (*cross_call)(smpfunc_t func, cpumask_t mask, unsigned long arg1,
BTFIXUPDEF_CALL(void, smp_ipi_resched, int); unsigned long arg2, unsigned long arg3,
BTFIXUPDEF_CALL(void, smp_ipi_single, int); unsigned long arg4);
BTFIXUPDEF_CALL(void, smp_ipi_mask_one, int); void (*resched)(int cpu);
BTFIXUPDEF_BLACKBOX(hard_smp_processor_id) void (*single)(int cpu);
BTFIXUPDEF_BLACKBOX(load_current) void (*mask_one)(int cpu);
};
#define smp_cross_call(func,mask,arg1,arg2,arg3,arg4) BTFIXUP_CALL(smp_cross_call)(func,mask,arg1,arg2,arg3,arg4) extern const struct sparc32_ipi_ops *sparc32_ipi_ops;
static inline void xc0(smpfunc_t func)
{
sparc32_ipi_ops->cross_call(func, *cpu_online_mask, 0, 0, 0, 0);
}
static inline void xc0(smpfunc_t func) { smp_cross_call(func, *cpu_online_mask, 0, 0, 0, 0); }
static inline void xc1(smpfunc_t func, unsigned long arg1) static inline void xc1(smpfunc_t func, unsigned long arg1)
{ smp_cross_call(func, *cpu_online_mask, arg1, 0, 0, 0); }
static inline void xc2(smpfunc_t func, unsigned long arg1, unsigned long arg2)
{ smp_cross_call(func, *cpu_online_mask, arg1, arg2, 0, 0); }
static inline void xc3(smpfunc_t func, unsigned long arg1, unsigned long arg2,
unsigned long arg3)
{ smp_cross_call(func, *cpu_online_mask, arg1, arg2, arg3, 0); }
static inline void xc4(smpfunc_t func, unsigned long arg1, unsigned long arg2,
unsigned long arg3, unsigned long arg4)
{ smp_cross_call(func, *cpu_online_mask, arg1, arg2, arg3, arg4); }
extern void arch_send_call_function_single_ipi(int cpu);
extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
static inline int cpu_logical_map(int cpu)
{ {
return cpu; sparc32_ipi_ops->cross_call(func, *cpu_online_mask, arg1, 0, 0, 0);
} }
static inline void xc2(smpfunc_t func, unsigned long arg1, unsigned long arg2)
static inline int hard_smp4m_processor_id(void)
{ {
int cpuid; sparc32_ipi_ops->cross_call(func, *cpu_online_mask, arg1, arg2, 0, 0);
__asm__ __volatile__("rd %%tbr, %0\n\t"
"srl %0, 12, %0\n\t"
"and %0, 3, %0\n\t" :
"=&r" (cpuid));
return cpuid;
} }
static inline int hard_smp4d_processor_id(void) static inline void xc3(smpfunc_t func, unsigned long arg1, unsigned long arg2,
unsigned long arg3)
{ {
int cpuid; sparc32_ipi_ops->cross_call(func, *cpu_online_mask,
arg1, arg2, arg3, 0);
__asm__ __volatile__("lda [%%g0] %1, %0\n\t" :
"=&r" (cpuid) : "i" (ASI_M_VIKING_TMP1));
return cpuid;
} }
extern inline int hard_smpleon_processor_id(void) static inline void xc4(smpfunc_t func, unsigned long arg1, unsigned long arg2,
unsigned long arg3, unsigned long arg4)
{ {
int cpuid; sparc32_ipi_ops->cross_call(func, *cpu_online_mask,
__asm__ __volatile__("rd %%asr17,%0\n\t" arg1, arg2, arg3, arg4);
"srl %0,28,%0" :
"=&r" (cpuid) : );
return cpuid;
} }
#ifndef MODULE extern void arch_send_call_function_single_ipi(int cpu);
static inline int hard_smp_processor_id(void) extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
static inline int cpu_logical_map(int cpu)
{ {
int cpuid; return cpu;
/* Black box - sun4m
__asm__ __volatile__("rd %%tbr, %0\n\t"
"srl %0, 12, %0\n\t"
"and %0, 3, %0\n\t" :
"=&r" (cpuid));
- sun4d
__asm__ __volatile__("lda [%g0] ASI_M_VIKING_TMP1, %0\n\t"
"nop; nop" :
"=&r" (cpuid));
- leon
__asm__ __volatile__( "rd %asr17, %0\n\t"
"srl %0, 0x1c, %0\n\t"
"nop\n\t" :
"=&r" (cpuid));
See btfixup.h and btfixupprep.c to understand how a blackbox works.
*/
__asm__ __volatile__("sethi %%hi(___b_hard_smp_processor_id), %0\n\t"
"sethi %%hi(boot_cpu_id), %0\n\t"
"ldub [%0 + %%lo(boot_cpu_id)], %0\n\t" :
"=&r" (cpuid));
return cpuid;
} }
#else
static inline int hard_smp_processor_id(void)
{
int cpuid;
__asm__ __volatile__("mov %%o7, %%g1\n\t" extern int hard_smp_processor_id(void);
"call ___f___hard_smp_processor_id\n\t"
" nop\n\t"
"mov %%g2, %0\n\t" : "=r"(cpuid) : : "g1", "g2");
return cpuid;
}
#endif
#define raw_smp_processor_id() (current_thread_info()->cpu) #define raw_smp_processor_id() (current_thread_info()->cpu)
......
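The smp_32.h hunk above is representative of how this series retires btfixup: the run-time patched calls become a table of function pointers (sparc32_ipi_ops) that platform code installs once, with small static inline wrappers (xc0()..xc4()) on top. A minimal user-space sketch of that shape, using invented names rather than the real kernel types:

#include <stdio.h>

/* invented stand-in for struct sparc32_ipi_ops */
struct ipi_ops {
	void (*cross_call)(void (*func)(void), unsigned long arg1);
	void (*resched)(int cpu);
};

static void demo_cross_call(void (*func)(void), unsigned long arg1)
{
	printf("cross call, arg1=%lu\n", arg1);
	func();
}

static void demo_resched(int cpu)
{
	printf("resched cpu %d\n", cpu);
}

/* one instance per platform; the pointer is set once during early setup */
static const struct ipi_ops demo_ops = {
	.cross_call = demo_cross_call,
	.resched    = demo_resched,
};
static const struct ipi_ops *ipi_ops;

/* wrapper in the style of xc1() */
static inline void xc1_demo(void (*func)(void), unsigned long arg1)
{
	ipi_ops->cross_call(func, arg1);
}

static void hello(void) { printf("ipi handler ran\n"); }

int main(void)
{
	ipi_ops = &demo_ops;	/* platform probe would pick the right table */
	xc1_demo(hello, 42);
	ipi_ops->resched(0);
	return 0;
}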
/*
* smpprim.h: SMP locking primitives on the Sparc
*
* God knows we won't be actually using this code for some time
* but I thought I'd write it since I knew how.
*
* Copyright (C) 1995 David S. Miller (davem@caip.rutgers.edu)
*/
#ifndef __SPARC_SMPPRIM_H
#define __SPARC_SMPPRIM_H
/* Test and set the unsigned byte at ADDR to 1. Returns the previous
* value. On the Sparc we use the ldstub instruction since it is
* atomic.
*/
static inline __volatile__ char test_and_set(void *addr)
{
char state = 0;
__asm__ __volatile__("ldstub [%0], %1 ! test_and_set\n\t"
"=r" (addr), "=r" (state) :
"0" (addr), "1" (state) : "memory");
return state;
}
/* Initialize a spin-lock. */
static inline __volatile__ smp_initlock(void *spinlock)
{
/* Unset the lock. */
*((unsigned char *) spinlock) = 0;
return;
}
/* This routine spins until it acquires the lock at ADDR. */
static inline __volatile__ smp_lock(void *addr)
{
while(test_and_set(addr) == 0xff)
;
/* We now have the lock */
return;
}
/* This routine releases the lock at ADDR. */
static inline __volatile__ smp_unlock(void *addr)
{
*((unsigned char *) addr) = 0;
}
#endif /* !(__SPARC_SMPPRIM_H) */
...@@ -61,68 +61,7 @@ extern int memcmp(const void *,const void *,__kernel_size_t); ...@@ -61,68 +61,7 @@ extern int memcmp(const void *,const void *,__kernel_size_t);
extern __kernel_size_t strlen(const char *); extern __kernel_size_t strlen(const char *);
#define __HAVE_ARCH_STRNCMP #define __HAVE_ARCH_STRNCMP
extern int strncmp(const char *, const char *, __kernel_size_t);
extern int __strncmp(const char *, const char *, __kernel_size_t);
static inline int __constant_strncmp(const char *src, const char *dest, __kernel_size_t count)
{
register int retval;
switch(count) {
case 0: return 0;
case 1: return (src[0] - dest[0]);
case 2: retval = (src[0] - dest[0]);
if(!retval && src[0])
retval = (src[1] - dest[1]);
return retval;
case 3: retval = (src[0] - dest[0]);
if(!retval && src[0]) {
retval = (src[1] - dest[1]);
if(!retval && src[1])
retval = (src[2] - dest[2]);
}
return retval;
case 4: retval = (src[0] - dest[0]);
if(!retval && src[0]) {
retval = (src[1] - dest[1]);
if(!retval && src[1]) {
retval = (src[2] - dest[2]);
if (!retval && src[2])
retval = (src[3] - dest[3]);
}
}
return retval;
case 5: retval = (src[0] - dest[0]);
if(!retval && src[0]) {
retval = (src[1] - dest[1]);
if(!retval && src[1]) {
retval = (src[2] - dest[2]);
if (!retval && src[2]) {
retval = (src[3] - dest[3]);
if (!retval && src[3])
retval = (src[4] - dest[4]);
}
}
}
return retval;
default:
retval = (src[0] - dest[0]);
if(!retval && src[0]) {
retval = (src[1] - dest[1]);
if(!retval && src[1]) {
retval = (src[2] - dest[2]);
if(!retval && src[2])
retval = __strncmp(src+3,dest+3,count-3);
}
}
return retval;
}
}
#undef strncmp
#define strncmp(__arg0, __arg1, __arg2) \
(__builtin_constant_p(__arg2) ? \
__constant_strncmp(__arg0, __arg1, __arg2) : \
__strncmp(__arg0, __arg1, __arg2))
#endif /* !EXPORT_SYMTAB_STROPS */ #endif /* !EXPORT_SYMTAB_STROPS */
......
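The strncmp() block removed above relied on a common trick: a macro dispatches to a hand-unrolled inline when the length is a compile-time constant, and to the out-of-line routine otherwise. A hedged, self-contained illustration of that dispatch pattern only (GCC/Clang builtin; not the sparc assembly routines):

#include <stdio.h>
#include <string.h>

static int slow_ncmp(const char *a, const char *b, size_t n)
{
	return strncmp(a, b, n);	/* stands in for the out-of-line version */
}

static inline int const_ncmp(const char *a, const char *b, size_t n)
{
	/* a real version would unroll for small constant n; kept trivial here */
	return n == 0 ? 0 : strncmp(a, b, n);
}

#define my_ncmp(a, b, n) \
	(__builtin_constant_p(n) ? const_ncmp(a, b, n) : slow_ncmp(a, b, n))

int main(void)
{
	size_t n = 3;
	printf("%d\n", my_ncmp("foo", "foobar", 3));	/* constant-length path */
	printf("%d\n", my_ncmp("foo", "bar", n));	/* variable-length path */
	return 0;
}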
/*
* sysen.h: Bit fields within the "System Enable" register accessed via
* the ASI_CONTROL address space at address AC_SYSENABLE.
*
* Copyright (C) 1994 David S. Miller (davem@caip.rutgers.edu)
*/
#ifndef _SPARC_SYSEN_H
#define _SPARC_SYSEN_H
#define SENABLE_DVMA 0x20 /* enable dvma transfers */
#define SENABLE_CACHE 0x10 /* enable VAC cache */
#define SENABLE_RESET 0x04 /* reset whole machine, danger Will Robinson */
#endif /* _SPARC_SYSEN_H */
...@@ -15,7 +15,6 @@ ...@@ -15,7 +15,6 @@
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
#include <asm/btfixup.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/page.h> #include <asm/page.h>
...@@ -82,11 +81,8 @@ register struct thread_info *current_thread_info_reg asm("g6"); ...@@ -82,11 +81,8 @@ register struct thread_info *current_thread_info_reg asm("g6");
#define __HAVE_ARCH_THREAD_INFO_ALLOCATOR #define __HAVE_ARCH_THREAD_INFO_ALLOCATOR
BTFIXUPDEF_CALL(struct thread_info *, alloc_thread_info_node, int) struct thread_info * alloc_thread_info_node(struct task_struct *tsk, int node);
#define alloc_thread_info_node(tsk, node) BTFIXUP_CALL(alloc_thread_info_node)(node) void free_thread_info(struct thread_info *);
BTFIXUPDEF_CALL(void, free_thread_info, struct thread_info *)
#define free_thread_info(ti) BTFIXUP_CALL(free_thread_info)(ti)
#endif /* __ASSEMBLY__ */ #endif /* __ASSEMBLY__ */
......
...@@ -8,14 +8,37 @@ ...@@ -8,14 +8,37 @@
#ifndef _SPARC_TIMER_H #ifndef _SPARC_TIMER_H
#define _SPARC_TIMER_H #define _SPARC_TIMER_H
#include <linux/clocksource.h>
#include <linux/irqreturn.h>
#include <asm-generic/percpu.h>
#include <asm/cpu_type.h> /* For SUN4M_NCPUS */ #include <asm/cpu_type.h> /* For SUN4M_NCPUS */
#include <asm/btfixup.h>
#define SBUS_CLOCK_RATE 2000000 /* 2MHz */
#define TIMER_VALUE_SHIFT 9
#define TIMER_VALUE_MASK 0x3fffff
#define TIMER_LIMIT_BIT (1 << 31) /* Bit 31 in Counter-Timer register */
/* The counter timer register has the value offset by 9 bits.
* From sun4m manual:
* When a counter reaches the value in the corresponding limit register,
* the Limit bit is set and the counter is set to 500 nS (i.e. 0x00000200).
*
* To compensate for this add one to the value.
*/
static inline unsigned int timer_value(unsigned int value)
{
return (value + 1) << TIMER_VALUE_SHIFT;
}
extern __volatile__ unsigned int *master_l10_counter; extern __volatile__ unsigned int *master_l10_counter;
/* FIXME: Make do_[gs]ettimeofday btfixup calls */ extern irqreturn_t notrace timer_interrupt(int dummy, void *dev_id);
struct timespec;
BTFIXUPDEF_CALL(int, bus_do_settimeofday, struct timespec *tv) #ifdef CONFIG_SMP
#define bus_do_settimeofday(tv) BTFIXUP_CALL(bus_do_settimeofday)(tv) DECLARE_PER_CPU(struct clock_event_device, sparc32_clockevent);
extern void register_percpu_ce(int cpu);
#endif
#endif /* !(_SPARC_TIMER_H) */ #endif /* !(_SPARC_TIMER_H) */
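timer_value() above compensates for the 9-bit offset in the sun4m counter/limit registers: the programmed limit is (ticks + 1) << 9, since a reload leaves the counter at 0x200 (500 ns). For example, with the 2 MHz SBUS clock a 10 ms period is 20000 ticks, and timer_value(20000) = (20000 + 1) << 9 = 0x9c4200. A minimal check of that arithmetic:

#include <stdio.h>

#define TIMER_VALUE_SHIFT 9
#define SBUS_CLOCK_RATE   2000000	/* 2 MHz, as in the header above */

static unsigned int timer_value(unsigned int value)
{
	return (value + 1) << TIMER_VALUE_SHIFT;
}

int main(void)
{
	unsigned int ticks = SBUS_CLOCK_RATE / 100;	/* 10 ms at 2 MHz = 20000 */
	printf("limit register value: %#x\n", timer_value(ticks));	/* 0x9c4200 */
	return 0;
}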
...@@ -12,5 +12,4 @@ ...@@ -12,5 +12,4 @@
typedef unsigned long cycles_t; typedef unsigned long cycles_t;
#define get_cycles() (0) #define get_cycles() (0)
extern u32 (*do_arch_gettimeoffset)(void);
#endif #endif
#ifndef _SPARC_TLBFLUSH_H #ifndef _SPARC_TLBFLUSH_H
#define _SPARC_TLBFLUSH_H #define _SPARC_TLBFLUSH_H
#include <linux/mm.h> #include <asm/cachetlb_32.h>
// #include <asm/processor.h>
#define flush_tlb_all() \
/* sparc32_cachetlb_ops->tlb_all()
* TLB flushing: #define flush_tlb_mm(mm) \
* sparc32_cachetlb_ops->tlb_mm(mm)
* - flush_tlb() flushes the current mm struct TLBs XXX Exists? #define flush_tlb_range(vma, start, end) \
* - flush_tlb_all() flushes all processes TLBs sparc32_cachetlb_ops->tlb_range(vma, start, end)
* - flush_tlb_mm(mm) flushes the specified mm context TLB's #define flush_tlb_page(vma, addr) \
* - flush_tlb_page(vma, vmaddr) flushes one page sparc32_cachetlb_ops->tlb_page(vma, addr)
* - flush_tlb_range(vma, start, end) flushes a range of pages
* - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
*/
#ifdef CONFIG_SMP
BTFIXUPDEF_CALL(void, local_flush_tlb_all, void)
BTFIXUPDEF_CALL(void, local_flush_tlb_mm, struct mm_struct *)
BTFIXUPDEF_CALL(void, local_flush_tlb_range, struct vm_area_struct *, unsigned long, unsigned long)
BTFIXUPDEF_CALL(void, local_flush_tlb_page, struct vm_area_struct *, unsigned long)
#define local_flush_tlb_all() BTFIXUP_CALL(local_flush_tlb_all)()
#define local_flush_tlb_mm(mm) BTFIXUP_CALL(local_flush_tlb_mm)(mm)
#define local_flush_tlb_range(vma,start,end) BTFIXUP_CALL(local_flush_tlb_range)(vma,start,end)
#define local_flush_tlb_page(vma,addr) BTFIXUP_CALL(local_flush_tlb_page)(vma,addr)
extern void smp_flush_tlb_all(void);
extern void smp_flush_tlb_mm(struct mm_struct *mm);
extern void smp_flush_tlb_range(struct vm_area_struct *vma,
unsigned long start,
unsigned long end);
extern void smp_flush_tlb_page(struct vm_area_struct *mm, unsigned long page);
#endif /* CONFIG_SMP */
BTFIXUPDEF_CALL(void, flush_tlb_all, void)
BTFIXUPDEF_CALL(void, flush_tlb_mm, struct mm_struct *)
BTFIXUPDEF_CALL(void, flush_tlb_range, struct vm_area_struct *, unsigned long, unsigned long)
BTFIXUPDEF_CALL(void, flush_tlb_page, struct vm_area_struct *, unsigned long)
#define flush_tlb_all() BTFIXUP_CALL(flush_tlb_all)()
#define flush_tlb_mm(mm) BTFIXUP_CALL(flush_tlb_mm)(mm)
#define flush_tlb_range(vma,start,end) BTFIXUP_CALL(flush_tlb_range)(vma,start,end)
#define flush_tlb_page(vma,addr) BTFIXUP_CALL(flush_tlb_page)(vma,addr)
// #define flush_tlb() flush_tlb_mm(current->active_mm) /* XXX Sure? */
/* /*
* This is a kludge, until I know better. --zaitcev XXX * This is a kludge, until I know better. --zaitcev XXX
......
...@@ -12,7 +12,6 @@ ...@@ -12,7 +12,6 @@
#include <linux/sched.h> #include <linux/sched.h>
#include <linux/string.h> #include <linux/string.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <asm/vac-ops.h>
#endif #endif
#ifndef __ASSEMBLY__ #ifndef __ASSEMBLY__
......
#ifndef _SPARC_VAC_OPS_H
#define _SPARC_VAC_OPS_H
/* vac-ops.h: Inline assembly routines to do operations on the Sparc
* VAC (virtual address cache) for the sun4c.
*
* Copyright (C) 1994, David S. Miller (davem@caip.rutgers.edu)
*/
#include <asm/sysen.h>
#include <asm/contregs.h>
#include <asm/asi.h>
/* The SUN4C models have a virtually addressed write-through
* cache.
*
* The cache tags are directly accessible through an ASI and
* each have the form:
*
* ------------------------------------------------------------
* | MBZ | CONTEXT | WRITE | PRIV | VALID | MBZ | TagID | MBZ |
* ------------------------------------------------------------
* 31 25 24 22 21 20 19 18 16 15 2 1 0
*
* MBZ: These bits are either unused and/or reserved and should
* be written as zeroes.
*
* CONTEXT: Records the context to which this cache line belongs.
*
* WRITE: A copy of the writable bit from the mmu pte access bits.
*
* PRIV: A copy of the privileged bit from the pte access bits.
*
* VALID: If set, this line is valid, else invalid.
*
* TagID: Fourteen bits of tag ID.
*
* Every virtual address is seen by the cache like this:
*
* ----------------------------------------
* | RESV | TagID | LINE | BYTE-in-LINE |
* ----------------------------------------
* 31 30 29 16 15 4 3 0
*
* RESV: Unused/reserved.
*
* TagID: Used to match the Tag-ID in that vac tags.
*
* LINE: Which line within the cache
*
* BYTE-in-LINE: Which byte within the cache line.
*/
/* Sun4c VAC Tags */
#define S4CVACTAG_CID 0x01c00000
#define S4CVACTAG_W 0x00200000
#define S4CVACTAG_P 0x00100000
#define S4CVACTAG_V 0x00080000
#define S4CVACTAG_TID 0x0000fffc
/* Sun4c VAC Virtual Address */
/* These aren't used, why bother? (Anton) */
#if 0
#define S4CVACVA_TID 0x3fff0000
#define S4CVACVA_LINE 0x0000fff0
#define S4CVACVA_BIL 0x0000000f
#endif
/* The indexing of cache lines creates a problem. Because the line
* field of a virtual address extends past the page offset within
* the virtual address it is possible to have what are called
* 'bad aliases' which will create inconsistencies. So we must make
* sure that within a context that if a physical page is mapped
* more than once, that 'extra' line bits are the same. If this is
* not the case, and thus is a 'bad alias' we must turn off the
* cacheable bit in the pte's of all such pages.
*/
#define S4CVAC_BADBITS 0x0000f000
/* The following is true if vaddr1 and vaddr2 would cause
* a 'bad alias'.
*/
#define S4CVAC_BADALIAS(vaddr1, vaddr2) \
((((unsigned long) (vaddr1)) ^ ((unsigned long) (vaddr2))) & \
(S4CVAC_BADBITS))
/* The following structure describes the characteristics of a sun4c
* VAC as probed from the prom during boot time.
*/
struct sun4c_vac_props {
unsigned int num_bytes; /* Size of the cache */
unsigned int do_hwflushes; /* Hardware flushing available? */
unsigned int linesize; /* Size of each line in bytes */
unsigned int log2lsize; /* log2(linesize) */
unsigned int on; /* VAC is enabled */
};
extern struct sun4c_vac_props sun4c_vacinfo;
/* sun4c_enable_vac() enables the sun4c virtual address cache. */
static inline void sun4c_enable_vac(void)
{
__asm__ __volatile__("lduba [%0] %1, %%g1\n\t"
"or %%g1, %2, %%g1\n\t"
"stba %%g1, [%0] %1\n\t"
: /* no outputs */
: "r" ((unsigned int) AC_SENABLE),
"i" (ASI_CONTROL), "i" (SENABLE_CACHE)
: "g1", "memory");
sun4c_vacinfo.on = 1;
}
/* sun4c_disable_vac() disables the virtual address cache. */
static inline void sun4c_disable_vac(void)
{
__asm__ __volatile__("lduba [%0] %1, %%g1\n\t"
"andn %%g1, %2, %%g1\n\t"
"stba %%g1, [%0] %1\n\t"
: /* no outputs */
: "r" ((unsigned int) AC_SENABLE),
"i" (ASI_CONTROL), "i" (SENABLE_CACHE)
: "g1", "memory");
sun4c_vacinfo.on = 0;
}
#endif /* !(_SPARC_VAC_OPS_H) */
...@@ -34,22 +34,6 @@ ...@@ -34,22 +34,6 @@
#define IOBASE_VADDR 0xfe000000 #define IOBASE_VADDR 0xfe000000
#define IOBASE_END 0xfe600000 #define IOBASE_END 0xfe600000
/*
* On the sun4/4c we need a place
* to reliably map locked down kernel data. This includes the
* task_struct and kernel stack pages of each process plus the
* scsi buffers during dvma IO transfers, also the floppy buffers
* during pseudo dma which runs with traps off (no faults allowed).
* Some quick calculations yield:
* NR_TASKS <512> * (3 * PAGE_SIZE) == 0x600000
* Subtract this from 0xc00000 and you get 0x927C0 of vm left
* over to map SCSI dvma + floppy pseudo-dma buffers. So be
* careful if you change NR_TASKS or else there won't be enough
* room for it all.
*/
#define SUN4C_LOCK_VADDR 0xff000000
#define SUN4C_LOCK_END 0xffc00000
#define KADB_DEBUGGER_BEGVM 0xffc00000 /* Where kern debugger is in virt-mem */ #define KADB_DEBUGGER_BEGVM 0xffc00000 /* Where kern debugger is in virt-mem */
#define KADB_DEBUGGER_ENDVM 0xffd00000 #define KADB_DEBUGGER_ENDVM 0xffd00000
#define DEBUG_FIRSTVADDR KADB_DEBUGGER_BEGVM #define DEBUG_FIRSTVADDR KADB_DEBUGGER_BEGVM
......
...@@ -103,37 +103,24 @@ ...@@ -103,37 +103,24 @@
st %scratch, [%cur_reg + TI_W_SAVED]; st %scratch, [%cur_reg + TI_W_SAVED];
#ifdef CONFIG_SMP #ifdef CONFIG_SMP
/* Results of LOAD_CURRENT() after BTFIXUP for SUN4M, SUN4D & LEON (comments) */ #define LOAD_CURRENT(dest_reg, idreg) \
#define LOAD_CURRENT4M(dest_reg, idreg) \ 661: rd %tbr, %idreg; \
rd %tbr, %idreg; \ srl %idreg, 10, %idreg; \
sethi %hi(current_set), %dest_reg; \ and %idreg, 0xc, %idreg; \
srl %idreg, 10, %idreg; \ .section .cpuid_patch, "ax"; \
or %dest_reg, %lo(current_set), %dest_reg; \ /* Instruction location. */ \
and %idreg, 0xc, %idreg; \ .word 661b; \
ld [%idreg + %dest_reg], %dest_reg; /* SUN4D implementation. */ \
lda [%g0] ASI_M_VIKING_TMP1, %idreg; \
#define LOAD_CURRENT4D(dest_reg, idreg) \ sll %idreg, 2, %idreg; \
lda [%g0] ASI_M_VIKING_TMP1, %idreg; \ nop; \
sethi %hi(C_LABEL(current_set)), %dest_reg; \ /* LEON implementation. */ \
sll %idreg, 2, %idreg; \ rd %asr17, %idreg; \
or %dest_reg, %lo(C_LABEL(current_set)), %dest_reg; \ srl %idreg, 0x1c, %idreg; \
ld [%idreg + %dest_reg], %dest_reg; sll %idreg, 0x02, %idreg; \
.previous; \
#define LOAD_CURRENT_LEON(dest_reg, idreg) \ sethi %hi(current_set), %dest_reg; \
rd %asr17, %idreg; \ or %dest_reg, %lo(current_set), %dest_reg;\
sethi %hi(current_set), %dest_reg; \
srl %idreg, 0x1c, %idreg; \
or %dest_reg, %lo(current_set), %dest_reg; \
sll %idreg, 0x2, %idreg; \
ld [%idreg + %dest_reg], %dest_reg;
/* Blackbox - take care with this... - check smp4m and smp4d before changing this. */
#define LOAD_CURRENT(dest_reg, idreg) \
sethi %hi(___b_load_current), %idreg; \
sethi %hi(current_set), %dest_reg; \
sethi %hi(boot_cpu_id4), %idreg; \
or %dest_reg, %lo(current_set), %dest_reg; \
ldub [%idreg + %lo(boot_cpu_id4)], %idreg; \
ld [%idreg + %dest_reg], %dest_reg; ld [%idreg + %dest_reg], %dest_reg;
#else #else
#define LOAD_CURRENT(dest_reg, idreg) \ #define LOAD_CURRENT(dest_reg, idreg) \
......
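The LOAD_CURRENT change above swaps a btfixup "blackbox" for a .cpuid_patch section: each use site emits the default sun4m sequence plus a record of its address and the sun4d and LEON instruction variants, and early boot code (per_cpu_patch() in the sparc32 setup path) copies the right variant over the default and flushes the I-cache. A rough user-space model of such a patch table, with invented types and data words standing in for instructions:

#include <stdio.h>
#include <string.h>

/* invented stand-in for a .cpuid_patch entry: where to patch and with what */
struct cpuid_patch_entry {
	unsigned int *addr;		/* instruction slot to overwrite */
	unsigned int sun4d[3];		/* replacement for sun4d */
	unsigned int leon[3];		/* replacement for LEON */
};

static unsigned int code_slot[3] = { 0x11, 0x22, 0x33 };	/* "sun4m" default */

static struct cpuid_patch_entry table[] = {
	{ code_slot, { 0xd1, 0xd2, 0xd3 }, { 0xe1, 0xe2, 0xe3 } },
};

enum model { SUN4M, SUN4D, LEON };

static void patch_for_model(enum model m)
{
	for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
		if (m == SUN4D)
			memcpy(table[i].addr, table[i].sun4d, sizeof(table[i].sun4d));
		else if (m == LEON)
			memcpy(table[i].addr, table[i].leon, sizeof(table[i].leon));
		/* a real kernel would flush the I-cache here ("flushi") */
	}
}

int main(void)
{
	patch_for_model(LEON);
	printf("%#x %#x %#x\n", code_slot[0], code_slot[1], code_slot[2]);
	return 0;
}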
...@@ -28,7 +28,7 @@ obj-y += traps_$(BITS).o ...@@ -28,7 +28,7 @@ obj-y += traps_$(BITS).o
# IRQ # IRQ
obj-y += irq_$(BITS).o obj-y += irq_$(BITS).o
obj-$(CONFIG_SPARC32) += sun4m_irq.o sun4c_irq.o sun4d_irq.o obj-$(CONFIG_SPARC32) += sun4m_irq.o sun4d_irq.o
obj-y += process_$(BITS).o obj-y += process_$(BITS).o
obj-y += signal_$(BITS).o obj-y += signal_$(BITS).o
...@@ -46,7 +46,6 @@ obj-$(CONFIG_SPARC32) += tadpole.o ...@@ -46,7 +46,6 @@ obj-$(CONFIG_SPARC32) += tadpole.o
obj-y += ptrace_$(BITS).o obj-y += ptrace_$(BITS).o
obj-y += unaligned_$(BITS).o obj-y += unaligned_$(BITS).o
obj-y += una_asm_$(BITS).o obj-y += una_asm_$(BITS).o
obj-$(CONFIG_SPARC32) += muldiv.o
obj-y += prom_common.o obj-y += prom_common.o
obj-y += prom_$(BITS).o obj-y += prom_$(BITS).o
obj-y += of_device_common.o obj-y += of_device_common.o
......
...@@ -32,7 +32,6 @@ void __init auxio_probe(void) ...@@ -32,7 +32,6 @@ void __init auxio_probe(void)
switch (sparc_cpu_model) { switch (sparc_cpu_model) {
case sparc_leon: case sparc_leon:
case sun4d: case sun4d:
case sun4:
return; return;
default: default:
break; break;
...@@ -65,9 +64,8 @@ void __init auxio_probe(void) ...@@ -65,9 +64,8 @@ void __init auxio_probe(void)
r.start = auxregs[0].phys_addr; r.start = auxregs[0].phys_addr;
r.end = auxregs[0].phys_addr + auxregs[0].reg_size - 1; r.end = auxregs[0].phys_addr + auxregs[0].reg_size - 1;
auxio_register = of_ioremap(&r, 0, auxregs[0].reg_size, "auxio"); auxio_register = of_ioremap(&r, 0, auxregs[0].reg_size, "auxio");
/* Fix the address on sun4m and sun4c. */ /* Fix the address on sun4m. */
if((((unsigned long) auxregs[0].phys_addr) & 3) == 3 || if ((((unsigned long) auxregs[0].phys_addr) & 3) == 3)
sparc_cpu_model == sun4c)
auxio_register += (3 - ((unsigned long)auxio_register & 3)); auxio_register += (3 - ((unsigned long)auxio_register & 3));
set_auxio(AUXIO_LED, 0); set_auxio(AUXIO_LED, 0);
...@@ -86,12 +84,7 @@ void set_auxio(unsigned char bits_on, unsigned char bits_off) ...@@ -86,12 +84,7 @@ void set_auxio(unsigned char bits_on, unsigned char bits_off)
unsigned char regval; unsigned char regval;
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&auxio_lock, flags); spin_lock_irqsave(&auxio_lock, flags);
switch(sparc_cpu_model) { switch (sparc_cpu_model) {
case sun4c:
regval = sbus_readb(auxio_register);
sbus_writeb(((regval | bits_on) & ~bits_off) | AUXIO_ORMEIN,
auxio_register);
break;
case sun4m: case sun4m:
if(!auxio_register) if(!auxio_register)
break; /* VME chassis sun4m, no auxio. */ break; /* VME chassis sun4m, no auxio. */
......
...@@ -21,7 +21,6 @@ ...@@ -21,7 +21,6 @@
#include <asm/cpu_type.h> #include <asm/cpu_type.h>
extern void clock_stop_probe(void); /* tadpole.c */ extern void clock_stop_probe(void); /* tadpole.c */
extern void sun4c_probe_memerr_reg(void);
static char *cpu_mid_prop(void) static char *cpu_mid_prop(void)
{ {
...@@ -139,7 +138,4 @@ void __init device_scan(void) ...@@ -139,7 +138,4 @@ void __init device_scan(void)
auxio_power_probe(); auxio_power_probe();
} }
clock_stop_probe(); clock_stop_probe();
if (ARCH_SUN4C)
sun4c_probe_memerr_reg();
} }
...@@ -868,7 +868,7 @@ void ldom_power_off(void) ...@@ -868,7 +868,7 @@ void ldom_power_off(void)
static void ds_conn_reset(struct ds_info *dp) static void ds_conn_reset(struct ds_info *dp)
{ {
printk(KERN_ERR "ds-%llu: ds_conn_reset() from %p\n", printk(KERN_ERR "ds-%llu: ds_conn_reset() from %pf\n",
dp->id, __builtin_return_address(0)); dp->id, __builtin_return_address(0));
} }
......
...@@ -216,9 +216,7 @@ tsetup_patch6: ...@@ -216,9 +216,7 @@ tsetup_patch6:
/* Call MMU-architecture dependent stack checking /* Call MMU-architecture dependent stack checking
* routine. * routine.
*/ */
.globl tsetup_mmu_patchme b tsetup_srmmu_stackchk
tsetup_mmu_patchme:
b tsetup_sun4c_stackchk
andcc %sp, 0x7, %g0 andcc %sp, 0x7, %g0
/* Architecture specific stack checking routines. When either /* Architecture specific stack checking routines. When either
...@@ -228,52 +226,6 @@ tsetup_mmu_patchme: ...@@ -228,52 +226,6 @@ tsetup_mmu_patchme:
*/ */
#define glob_tmp g1 #define glob_tmp g1
tsetup_sun4c_stackchk:
/* Done by caller: andcc %sp, 0x7, %g0 */
bne trap_setup_user_stack_is_bolixed
sra %sp, 29, %glob_tmp
add %glob_tmp, 0x1, %glob_tmp
andncc %glob_tmp, 0x1, %g0
bne trap_setup_user_stack_is_bolixed
and %sp, 0xfff, %glob_tmp ! delay slot
/* See if our dump area will be on more than one
* page.
*/
add %glob_tmp, 0x38, %glob_tmp
andncc %glob_tmp, 0xff8, %g0
be tsetup_sun4c_onepage ! only one page to check
lda [%sp] ASI_PTE, %glob_tmp ! have to check first page anyways
tsetup_sun4c_twopages:
/* Is first page ok permission wise? */
srl %glob_tmp, 29, %glob_tmp
cmp %glob_tmp, 0x6
bne trap_setup_user_stack_is_bolixed
add %sp, 0x38, %glob_tmp /* Is second page in vma hole? */
sra %glob_tmp, 29, %glob_tmp
add %glob_tmp, 0x1, %glob_tmp
andncc %glob_tmp, 0x1, %g0
bne trap_setup_user_stack_is_bolixed
add %sp, 0x38, %glob_tmp
lda [%glob_tmp] ASI_PTE, %glob_tmp
tsetup_sun4c_onepage:
srl %glob_tmp, 29, %glob_tmp
cmp %glob_tmp, 0x6 ! can user write to it?
bne trap_setup_user_stack_is_bolixed ! failure
nop
STORE_WINDOW(sp)
restore %g0, %g0, %g0
jmpl %t_retpc + 0x8, %g0
mov %t_kstack, %sp
.globl tsetup_srmmu_stackchk .globl tsetup_srmmu_stackchk
tsetup_srmmu_stackchk: tsetup_srmmu_stackchk:
/* Check results of callers andcc %sp, 0x7, %g0 */ /* Check results of callers andcc %sp, 0x7, %g0 */
......
...@@ -906,7 +906,7 @@ swapper_4m_tsb: ...@@ -906,7 +906,7 @@ swapper_4m_tsb:
* error and will instead write junk into the relocation and * error and will instead write junk into the relocation and
* you'll have an unbootable kernel. * you'll have an unbootable kernel.
*/ */
#include "ttable.S" #include "ttable_64.S"
! 0x0000000000428000 ! 0x0000000000428000
......
...@@ -25,22 +25,9 @@ static struct idprom idprom_buffer; ...@@ -25,22 +25,9 @@ static struct idprom idprom_buffer;
* of the Sparc CPU and have a meaningful IDPROM machtype value that we * of the Sparc CPU and have a meaningful IDPROM machtype value that we
* know about. See asm-sparc/machines.h for empirical constants. * know about. See asm-sparc/machines.h for empirical constants.
*/ */
static struct Sun_Machine_Models Sun_Machines[NUM_SUN_MACHINES] = { static struct Sun_Machine_Models Sun_Machines[] = {
/* First, Sun4's */ /* First, Leon */
{ .name = "Sun 4/100 Series", .id_machtype = (SM_SUN4 | SM_4_110) },
{ .name = "Sun 4/200 Series", .id_machtype = (SM_SUN4 | SM_4_260) },
{ .name = "Sun 4/300 Series", .id_machtype = (SM_SUN4 | SM_4_330) },
{ .name = "Sun 4/400 Series", .id_machtype = (SM_SUN4 | SM_4_470) },
/* Now Leon */
{ .name = "Leon3 System-on-a-Chip", .id_machtype = (M_LEON | M_LEON3_SOC) }, { .name = "Leon3 System-on-a-Chip", .id_machtype = (M_LEON | M_LEON3_SOC) },
/* Now, Sun4c's */
{ .name = "Sun4c SparcStation 1", .id_machtype = (SM_SUN4C | SM_4C_SS1) },
{ .name = "Sun4c SparcStation IPC", .id_machtype = (SM_SUN4C | SM_4C_IPC) },
{ .name = "Sun4c SparcStation 1+", .id_machtype = (SM_SUN4C | SM_4C_SS1PLUS) },
{ .name = "Sun4c SparcStation SLC", .id_machtype = (SM_SUN4C | SM_4C_SLC) },
{ .name = "Sun4c SparcStation 2", .id_machtype = (SM_SUN4C | SM_4C_SS2) },
{ .name = "Sun4c SparcStation ELC", .id_machtype = (SM_SUN4C | SM_4C_ELC) },
{ .name = "Sun4c SparcStation IPX", .id_machtype = (SM_SUN4C | SM_4C_IPX) },
/* Finally, early Sun4m's */ /* Finally, early Sun4m's */
{ .name = "Sun4m SparcSystem600", .id_machtype = (SM_SUN4M | SM_4M_SS60) }, { .name = "Sun4m SparcSystem600", .id_machtype = (SM_SUN4M | SM_4M_SS60) },
{ .name = "Sun4m SparcStation10/20", .id_machtype = (SM_SUN4M | SM_4M_SS50) }, { .name = "Sun4m SparcStation10/20", .id_machtype = (SM_SUN4M | SM_4M_SS50) },
...@@ -53,7 +40,7 @@ static void __init display_system_type(unsigned char machtype) ...@@ -53,7 +40,7 @@ static void __init display_system_type(unsigned char machtype)
char sysname[128]; char sysname[128];
register int i; register int i;
for (i = 0; i < NUM_SUN_MACHINES; i++) { for (i = 0; i < ARRAY_SIZE(Sun_Machines); i++) {
if (Sun_Machines[i].id_machtype == machtype) { if (Sun_Machines[i].id_machtype == machtype) {
if (machtype != (SM_SUN4M_OBP | 0x00) || if (machtype != (SM_SUN4M_OBP | 0x00) ||
prom_getproperty(prom_root_node, "banner-name", prom_getproperty(prom_root_node, "banner-name",
......
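The idprom.c change above drops the hand-maintained NUM_SUN_MACHINES count in favour of iterating with ARRAY_SIZE(), so the table can shrink without touching a separate constant. The kernel macro boils down to the usual sizeof ratio (plus a not-an-array build check); a minimal equivalent with illustrative machtype values:

#include <stdio.h>

#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

struct machine { const char *name; unsigned char id_machtype; };

static const struct machine machines[] = {
	{ "Leon3 System-on-a-Chip", 0x32 },	/* M_LEON | M_LEON3_SOC */
	{ "Sun4m SparcSystem600",   0x71 },	/* value illustrative only */
};

int main(void)
{
	for (size_t i = 0; i < ARRAY_SIZE(machines); i++)
		printf("%s (0x%02x)\n", machines[i].name, machines[i].id_machtype);
	return 0;
}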
...@@ -23,16 +23,8 @@ ...@@ -23,16 +23,8 @@
#include "kernel.h" #include "kernel.h"
#include "irq.h" #include "irq.h"
#ifdef CONFIG_SMP
#define SMP_NOP2 "nop; nop;\n\t"
#define SMP_NOP3 "nop; nop; nop;\n\t"
#else
#define SMP_NOP2
#define SMP_NOP3
#endif /* SMP */
/* platform specific irq setup */ /* platform specific irq setup */
struct sparc_irq_config sparc_irq_config; struct sparc_config sparc_config;
unsigned long arch_local_irq_save(void) unsigned long arch_local_irq_save(void)
{ {
...@@ -41,7 +33,6 @@ unsigned long arch_local_irq_save(void) ...@@ -41,7 +33,6 @@ unsigned long arch_local_irq_save(void)
__asm__ __volatile__( __asm__ __volatile__(
"rd %%psr, %0\n\t" "rd %%psr, %0\n\t"
SMP_NOP3 /* Sun4m + Cypress + SMP bug */
"or %0, %2, %1\n\t" "or %0, %2, %1\n\t"
"wr %1, 0, %%psr\n\t" "wr %1, 0, %%psr\n\t"
"nop; nop; nop\n" "nop; nop; nop\n"
...@@ -59,7 +50,6 @@ void arch_local_irq_enable(void) ...@@ -59,7 +50,6 @@ void arch_local_irq_enable(void)
__asm__ __volatile__( __asm__ __volatile__(
"rd %%psr, %0\n\t" "rd %%psr, %0\n\t"
SMP_NOP3 /* Sun4m + Cypress + SMP bug */
"andn %0, %1, %0\n\t" "andn %0, %1, %0\n\t"
"wr %0, 0, %%psr\n\t" "wr %0, 0, %%psr\n\t"
"nop; nop; nop\n" "nop; nop; nop\n"
...@@ -76,7 +66,6 @@ void arch_local_irq_restore(unsigned long old_psr) ...@@ -76,7 +66,6 @@ void arch_local_irq_restore(unsigned long old_psr)
__asm__ __volatile__( __asm__ __volatile__(
"rd %%psr, %0\n\t" "rd %%psr, %0\n\t"
"and %2, %1, %2\n\t" "and %2, %1, %2\n\t"
SMP_NOP2 /* Sun4m + Cypress + SMP bug */
"andn %0, %1, %0\n\t" "andn %0, %1, %0\n\t"
"wr %0, %2, %%psr\n\t" "wr %0, %2, %%psr\n\t"
"nop; nop; nop\n" "nop; nop; nop\n"
...@@ -346,11 +335,6 @@ void sparc_floppy_irq(int irq, void *dev_id, struct pt_regs *regs) ...@@ -346,11 +335,6 @@ void sparc_floppy_irq(int irq, void *dev_id, struct pt_regs *regs)
void __init init_IRQ(void) void __init init_IRQ(void)
{ {
switch (sparc_cpu_model) { switch (sparc_cpu_model) {
case sun4c:
case sun4:
sun4c_init_IRQ();
break;
case sun4m: case sun4m:
pcic_probe(); pcic_probe();
if (pcic_present()) if (pcic_present())
...@@ -371,6 +355,5 @@ void __init init_IRQ(void) ...@@ -371,6 +355,5 @@ void __init init_IRQ(void)
prom_printf("Cannot initialize IRQs on this Sun machine..."); prom_printf("Cannot initialize IRQs on this Sun machine...");
break; break;
} }
btfixup();
} }
...@@ -799,7 +799,7 @@ static void kill_prom_timer(void) ...@@ -799,7 +799,7 @@ static void kill_prom_timer(void)
prom_limit0 = prom_timers->limit0; prom_limit0 = prom_timers->limit0;
prom_limit1 = prom_timers->limit1; prom_limit1 = prom_timers->limit1;
/* Just as in sun4c/sun4m PROM uses timer which ticks at IRQ 14. /* Just as in sun4c PROM uses timer which ticks at IRQ 14.
* We turn both off here just to be paranoid. * We turn both off here just to be paranoid.
*/ */
prom_timers->limit0 = 0; prom_timers->limit0 = 0;
......