Commit 3c92ec8a authored by Linus Torvalds

Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc

* 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc: (144 commits)
  powerpc/44x: Support 16K/64K base page sizes on 44x
  powerpc: Force memory size to be a multiple of PAGE_SIZE
  powerpc/32: Wire up the trampoline code for kdump
  powerpc/32: Add the ability for a classic ppc kernel to be loaded at 32M
  powerpc/32: Allow __ioremap on RAM addresses for kdump kernel
  powerpc/32: Setup OF properties for kdump
  powerpc/32/kdump: Implement crash_setup_regs() using ppc_save_regs()
  powerpc: Prepare xmon_save_regs for use with kdump
  powerpc: Remove default kexec/crash_kernel ops assignments
  powerpc: Make default kexec/crash_kernel ops implicit
  powerpc: Setup OF properties for ppc32 kexec
  powerpc/pseries: Fix cpu hotplug
  powerpc: Fix KVM build on ppc440
  powerpc/cell: add QPACE as a separate Cell platform
  powerpc/cell: fix build breakage with CONFIG_SPUFS disabled
  powerpc/mpc5200: fix error paths in PSC UART probe function
  powerpc/mpc5200: add rts/cts handling in PSC UART driver
  powerpc/mpc5200: Make PSC UART driver update serial errors counters
  powerpc/mpc5200: Remove obsolete code from mpc5200 MDIO driver
  powerpc/mpc5200: Add MDMA/UDMA support to MPC5200 ATA driver
  ...

Fix trivial conflict in drivers/char/Makefile as per Paul's directions
parents c4c9f018 ca9153a3
@@ -285,6 +285,10 @@ config IOMMU_VMERGE
config IOMMU_HELPER
def_bool PPC64

+config PPC_NEED_DMA_SYNC_OPS
+def_bool y
+depends on NOT_COHERENT_CACHE
+
config HOTPLUG_CPU
bool "Support for enabling/disabling CPUs"
depends on SMP && HOTPLUG && EXPERIMENTAL && (PPC_PSERIES || PPC_PMAC)

@@ -322,7 +326,7 @@ config KEXEC
config CRASH_DUMP
bool "Build a kdump crash kernel"
-depends on PPC_MULTIPLATFORM && PPC64 && RELOCATABLE
+depends on (PPC64 && RELOCATABLE) || 6xx
help
Build a kernel suitable for use as a kdump capture kernel.
The same kernel binary can be used as production kernel and dump
@@ -401,23 +405,53 @@ config PPC_HAS_HASH_64K
depends on PPC64
default n

-config PPC_64K_PAGES
-bool "64k page size"
-depends on PPC64
-select PPC_HAS_HASH_64K
+choice
+prompt "Page size"
+default PPC_4K_PAGES
help
-This option changes the kernel logical page size to 64k. On machines
-without processor support for 64k pages, the kernel will simulate
-them by loading each individual 4k page on demand transparently,
-while on hardware with such support, it will be used to map
-normal application pages.
+Select the kernel logical page size. Increasing the page size
+will reduce software overhead at each page boundary, allow
+hardware prefetch mechanisms to be more effective, and allow
+larger dma transfers, increasing IO efficiency and reducing
+overhead. However, the utilization of memory will increase.
+For example, each cached file will use a multiple of the
+page size to hold its contents, and the difference between the
+end of file and the end of page is wasted.
+
+Some dedicated systems, such as software raid serving with
+accelerated calculations, have shown significant increases.
+
+If you configure a 64 bit kernel for 64k pages but the
+processor does not support them, then the kernel will simulate
+them with 4k pages, loading them on demand, but with the
+reduced software overhead and larger internal fragmentation.
+
+For the 32 bit kernel, a large page option will not be offered
+unless it is supported by the configured processor.
+
+If unsure, choose 4K_PAGES.
+
+config PPC_4K_PAGES
+bool "4k page size"
+
+config PPC_16K_PAGES
+bool "16k page size" if 44x
+
+config PPC_64K_PAGES
+bool "64k page size" if 44x || PPC_STD_MMU_64
+select PPC_HAS_HASH_64K if PPC_STD_MMU_64
+
+endchoice
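The tradeoff described in the new help text is easy to quantify. A minimal standalone sketch in plain C (illustrative only, not kernel code; the 100001-byte file length is an arbitrary example) of the internal fragmentation a larger page size causes for one cached file:

#include <stdio.h>

/* Internal fragmentation per cached file: the page cache rounds the
 * file up to a whole number of pages, so the tail of the last page is
 * wasted. Page sizes match the new PPC_4K/16K/64K_PAGES options. */
int main(void)
{
	const unsigned long page_sizes[] = { 4096, 16384, 65536 };
	const unsigned long file_len = 100001;	/* arbitrary example file */

	for (int i = 0; i < 3; i++) {
		unsigned long ps = page_sizes[i];
		unsigned long pages = (file_len + ps - 1) / ps;
		printf("%2luK pages: %2lu pages, %6lu bytes wasted\n",
		       ps >> 10, pages, pages * ps - file_len);
	}
	return 0;	/* 4K: 2399, 16K: 14687, 64K: 31071 bytes wasted */
}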
config FORCE_MAX_ZONEORDER
int "Maximum zone order"
-range 9 64 if PPC_64K_PAGES
-default "9" if PPC_64K_PAGES
-range 13 64 if PPC64 && !PPC_64K_PAGES
-default "13" if PPC64 && !PPC_64K_PAGES
+range 9 64 if PPC_STD_MMU_64 && PPC_64K_PAGES
+default "9" if PPC_STD_MMU_64 && PPC_64K_PAGES
+range 13 64 if PPC_STD_MMU_64 && !PPC_64K_PAGES
+default "13" if PPC_STD_MMU_64 && !PPC_64K_PAGES
+range 9 64 if PPC_STD_MMU_32 && PPC_16K_PAGES
+default "9" if PPC_STD_MMU_32 && PPC_16K_PAGES
+range 7 64 if PPC_STD_MMU_32 && PPC_64K_PAGES
+default "7" if PPC_STD_MMU_32 && PPC_64K_PAGES
range 11 64
default "11"
help

@@ -437,7 +471,7 @@ config FORCE_MAX_ZONEORDER
config PPC_SUBPAGE_PROT
bool "Support setting protections for 4k subpages"
-depends on PPC_64K_PAGES
+depends on PPC_STD_MMU_64 && PPC_64K_PAGES
help
This option adds support for a system call to allow user programs
to set access permissions (read/write, readonly, or no access)
......
@@ -2,6 +2,15 @@ menu "Kernel hacking"
source "lib/Kconfig.debug"

+config PRINT_STACK_DEPTH
+int "Stack depth to print" if DEBUG_KERNEL
+default 64
+help
+This option allows you to set the stack depth that the kernel
+prints in stack traces. This can be useful if your display is
+too small and stack traces cause important information to
+scroll off the screen.
+
config DEBUG_STACKOVERFLOW
bool "Check for stack overflows"
depends on DEBUG_KERNEL
......
@@ -107,7 +107,6 @@ KBUILD_CFLAGS += $(call cc-option,-mno-altivec)
# (We use all available options to help semi-broken compilers)
KBUILD_CFLAGS += $(call cc-option,-mno-spe)
KBUILD_CFLAGS += $(call cc-option,-mspe=no)
-KBUILD_CFLAGS += $(call cc-option,-mabi=no-spe)

# Enable unit-at-a-time mode when possible. It shrinks the
# kernel considerably.
......
@@ -194,6 +194,7 @@ image-$(CONFIG_PPC_MAPLE) += zImage.pseries
image-$(CONFIG_PPC_IBM_CELL_BLADE) += zImage.pseries
image-$(CONFIG_PPC_PS3) += dtbImage.ps3
image-$(CONFIG_PPC_CELLEB) += zImage.pseries
+image-$(CONFIG_PPC_CELL_QPACE) += zImage.pseries
image-$(CONFIG_PPC_CHRP) += zImage.chrp
image-$(CONFIG_PPC_EFIKA) += zImage.chrp
image-$(CONFIG_PPC_PMAC) += zImage.pmac
......
@@ -213,7 +213,7 @@ static int find_range(u32 *reg, u32 *ranges, int nregaddr,
u32 range_addr[MAX_ADDR_CELLS];
u32 range_size[MAX_ADDR_CELLS];

-copy_val(range_addr, ranges + i, naddr);
+copy_val(range_addr, ranges + i, nregaddr);
copy_val(range_size, ranges + i + nregaddr + naddr, nsize);

if (compare_reg(reg, range_addr, range_size))
......
@@ -269,7 +269,8 @@ PCI0: pci@ec000000 {
* later cannot be changed. Chip supports a second
* IO range but we don't use it for now
*/
-ranges = <0x02000000 0x00000000 0xa0000000 0x00000000 0xa0000000 0x00000000 0x20000000
+ranges = <0x02000000 0x00000000 0xa0000000 0x00000000 0xa0000000 0x00000000 0x40000000
+0x02000000 0x00000000 0x00000000 0x00000000 0xe0000000 0x00000000 0x00100000
0x01000000 0x00000000 0x00000000 0x00000000 0xe8000000 0x00000000 0x00010000>;

/* Inbound 2GB range starting at 0 */
......
@@ -40,6 +40,7 @@ cpu@0 {
d-cache-size = <32768>;
dcr-controller;
dcr-access-method = "native";
+next-level-cache = <&L2C0>;
};
};
@@ -104,6 +105,16 @@ CPR0: cpr {
dcr-reg = <0x00c 0x002>;
};

+L2C0: l2c {
+compatible = "ibm,l2-cache-460ex", "ibm,l2-cache";
+dcr-reg = <0x020 0x008 /* Internal SRAM DCR's */
+0x030 0x008>; /* L2 cache DCR's */
+cache-line-size = <32>; /* 32 bytes */
+cache-size = <262144>; /* L2, 256K */
+interrupt-parent = <&UIC1>;
+interrupts = <11 1>;
+};
+
plb {
compatible = "ibm,plb-460ex", "ibm,plb4";
#address-cells = <2>;
@@ -343,6 +354,7 @@ PCIX0: pci@c0ec00000 {
* later cannot be changed
*/
ranges = <0x02000000 0x00000000 0x80000000 0x0000000d 0x80000000 0x00000000 0x80000000
+0x02000000 0x00000000 0x00000000 0x0000000c 0x0ee00000 0x00000000 0x00100000
0x01000000 0x00000000 0x00000000 0x0000000c 0x08000000 0x00000000 0x00010000>;

/* Inbound 2GB range starting at 0 */

@@ -373,6 +385,7 @@ PCIE0: pciex@d00000000 {
* later cannot be changed
*/
ranges = <0x02000000 0x00000000 0x80000000 0x0000000e 0x00000000 0x00000000 0x80000000
+0x02000000 0x00000000 0x00000000 0x0000000f 0x00000000 0x00000000 0x00100000
0x01000000 0x00000000 0x00000000 0x0000000f 0x80000000 0x00000000 0x00010000>;

/* Inbound 2GB range starting at 0 */

@@ -414,6 +427,7 @@ PCIE1: pciex@d20000000 {
* later cannot be changed
*/
ranges = <0x02000000 0x00000000 0x80000000 0x0000000e 0x80000000 0x00000000 0x80000000
+0x02000000 0x00000000 0x00000000 0x0000000f 0x00100000 0x00000000 0x00100000
0x01000000 0x00000000 0x00000000 0x0000000f 0x80010000 0x00000000 0x00010000>;

/* Inbound 2GB range starting at 0 */
......
@@ -98,6 +98,12 @@ gef_pic: pic@4,4000 {
interrupt-parent = <&mpic>;
};

+gef_gpio: gpio@7,14000 {
+#gpio-cells = <2>;
+compatible = "gef,sbc610-gpio";
+reg = <0x7 0x14000 0x24>;
+gpio-controller;
+};
};

soc@fef00000 {

@@ -119,6 +125,11 @@ i2c1: i2c@3000 {
interrupt-parent = <&mpic>;
dfsrr;

+rtc@51 {
+compatible = "epson,rx8581";
+reg = <0x00000051>;
+};
+
eti@6b {
compatible = "dallas,ds1682";
reg = <0x6b>;
......
@@ -76,7 +76,6 @@ i2c@80003000 {
interrupt-parent = <&mpic>;

rtc@32 {
-device_type = "rtc";
compatible = "ricoh,rs5c372a";
reg = <0x32>;
};
......
@@ -76,7 +76,6 @@ i2c@80003000 {
interrupt-parent = <&mpic>;

rtc@32 {
-device_type = "rtc";
compatible = "ricoh,rs5c372a";
reg = <0x32>;
};
......
@@ -130,7 +130,6 @@ timer@670 { // General Purpose Timer
rtc@800 { // Real time clock
compatible = "fsl,mpc5200-rtc";
-device_type = "rtc";
reg = <0x800 0x100>;
interrupts = <1 5 0 1 6 0>;
interrupt-parent = <&mpc5200_pic>;
......
@@ -130,7 +130,6 @@ timer@670 { // General Purpose Timer
rtc@800 { // Real time clock
compatible = "fsl,mpc5200b-rtc","fsl,mpc5200-rtc";
-device_type = "rtc";
reg = <0x800 0x100>;
interrupts = <1 5 0 1 6 0>;
interrupt-parent = <&mpc5200_pic>;
......
@@ -248,7 +248,6 @@ i2c@3d40 {
fsl5200-clocking;

rtc@68 {
-device_type = "rtc";
compatible = "dallas,ds1339";
reg = <0x68>;
};
......
@@ -117,7 +117,6 @@ i2c@3000 {
interrupt-parent = <&ipic>;
dfsrr;

rtc@68 {
-device_type = "rtc";
compatible = "dallas,ds1339";
reg = <0x68>;
};
......
@@ -85,7 +85,6 @@ i2c@3100 {
dfsrr;

rtc@68 {
-device_type = "rtc";
compatible = "dallas,ds1339";
reg = <0x68>;
interrupts = <18 0x8>;
......
@@ -83,7 +83,6 @@ i2c@3100 {
dfsrr;

rtc@68 {
-device_type = "rtc";
compatible = "dallas,ds1339";
reg = <0x68>;
interrupts = <18 0x8>;
......
@@ -117,7 +117,6 @@ i2c@3000 {
interrupt-parent = <&ipic>;
dfsrr;

rtc@68 {
-device_type = "rtc";
compatible = "dallas,ds1339";
reg = <0x68>;
};
......
@@ -117,7 +117,6 @@ i2c@3000 {
interrupt-parent = <&ipic>;
dfsrr;

rtc@68 {
-device_type = "rtc";
compatible = "dallas,ds1339";
reg = <0x68>;
};
......
@@ -117,7 +117,6 @@ i2c@3000 {
interrupt-parent = <&ipic>;
dfsrr;

rtc@68 {
-device_type = "rtc";
compatible = "dallas,ds1339";
reg = <0x68>;
};
......
@@ -63,6 +63,119 @@ memory {
device_type = "memory";
};
localbus@ffe05000 {
#address-cells = <2>;
#size-cells = <1>;
compatible = "fsl,mpc8572-elbc", "fsl,elbc", "simple-bus";
reg = <0 0xffe05000 0 0x1000>;
interrupts = <19 2>;
interrupt-parent = <&mpic>;
ranges = <0x0 0x0 0x0 0xe8000000 0x08000000
0x1 0x0 0x0 0xe0000000 0x08000000
0x2 0x0 0x0 0xffa00000 0x00040000
0x3 0x0 0x0 0xffdf0000 0x00008000
0x4 0x0 0x0 0xffa40000 0x00040000
0x5 0x0 0x0 0xffa80000 0x00040000
0x6 0x0 0x0 0xffac0000 0x00040000>;
nor@0,0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "cfi-flash";
reg = <0x0 0x0 0x8000000>;
bank-width = <2>;
device-width = <1>;
ramdisk@0 {
reg = <0x0 0x03000000>;
read-only;
};
diagnostic@3000000 {
reg = <0x03000000 0x00e00000>;
read-only;
};
dink@3e00000 {
reg = <0x03e00000 0x00200000>;
read-only;
};
kernel@4000000 {
reg = <0x04000000 0x00400000>;
read-only;
};
jffs2@4400000 {
reg = <0x04400000 0x03b00000>;
};
dtb@7f00000 {
reg = <0x07f00000 0x00080000>;
read-only;
};
u-boot@7f80000 {
reg = <0x07f80000 0x00080000>;
read-only;
};
};
nand@2,0 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "fsl,mpc8572-fcm-nand",
"fsl,elbc-fcm-nand";
reg = <0x2 0x0 0x40000>;
u-boot@0 {
reg = <0x0 0x02000000>;
read-only;
};
jffs2@2000000 {
reg = <0x02000000 0x10000000>;
};
ramdisk@12000000 {
reg = <0x12000000 0x08000000>;
read-only;
};
kernel@1a000000 {
reg = <0x1a000000 0x04000000>;
};
dtb@1e000000 {
reg = <0x1e000000 0x01000000>;
read-only;
};
empty@1f000000 {
reg = <0x1f000000 0x21000000>;
};
};
nand@4,0 {
compatible = "fsl,mpc8572-fcm-nand",
"fsl,elbc-fcm-nand";
reg = <0x4 0x0 0x40000>;
};
nand@5,0 {
compatible = "fsl,mpc8572-fcm-nand",
"fsl,elbc-fcm-nand";
reg = <0x5 0x0 0x40000>;
};
nand@6,0 {
compatible = "fsl,mpc8572-fcm-nand",
"fsl,elbc-fcm-nand";
reg = <0x6 0x0 0x40000>;
};
};
soc8572@ffe00000 {
#address-cells = <1>;
#size-cells = <1>;
......
/*
* MPC8572 DS Core1 Device Tree Source in CAMP mode.
*
* In CAMP mode, each core needs to have its own dts. Only mpic and L2 cache
* can be shared, all the other devices must be assigned to one core only.
* This dts allows core1 to have l2, dma2, eth2, eth3, pci2, msi.
*
* Please note: add "-b 1" when compiling core1's dts.
*
* Copyright 2007, 2008 Freescale Semiconductor Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*/
/dts-v1/;
/ {
model = "fsl,MPC8572DS";
compatible = "fsl,MPC8572DS", "fsl,MPC8572DS-CAMP";
#address-cells = <1>;
#size-cells = <1>;
aliases {
ethernet2 = &enet2;
ethernet3 = &enet3;
serial0 = &serial0;
pci2 = &pci2;
};
cpus {
#address-cells = <1>;
#size-cells = <0>;
PowerPC,8572@1 {
device_type = "cpu";
reg = <0x1>;
d-cache-line-size = <32>; // 32 bytes
i-cache-line-size = <32>; // 32 bytes
d-cache-size = <0x8000>; // L1, 32K
i-cache-size = <0x8000>; // L1, 32K
timebase-frequency = <0>;
bus-frequency = <0>;
clock-frequency = <0>;
next-level-cache = <&L2>;
};
};
memory {
device_type = "memory";
reg = <0x0 0x0>; // Filled by U-Boot
};
soc8572@ffe00000 {
#address-cells = <1>;
#size-cells = <1>;
device_type = "soc";
compatible = "simple-bus";
ranges = <0x0 0xffe00000 0x100000>;
reg = <0xffe00000 0x1000>; // CCSRBAR & soc regs, remove once parse code for immrbase fixed
bus-frequency = <0>; // Filled out by uboot.
L2: l2-cache-controller@20000 {
compatible = "fsl,mpc8572-l2-cache-controller";
reg = <0x20000 0x1000>;
cache-line-size = <32>; // 32 bytes
cache-size = <0x80000>; // L2, 512K
interrupt-parent = <&mpic>;
};
dma@c300 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "fsl,mpc8572-dma", "fsl,eloplus-dma";
reg = <0xc300 0x4>;
ranges = <0x0 0xc100 0x200>;
cell-index = <0>;
dma-channel@0 {
compatible = "fsl,mpc8572-dma-channel",
"fsl,eloplus-dma-channel";
reg = <0x0 0x80>;
cell-index = <0>;
interrupt-parent = <&mpic>;
interrupts = <76 2>;
};
dma-channel@80 {
compatible = "fsl,mpc8572-dma-channel",
"fsl,eloplus-dma-channel";
reg = <0x80 0x80>;
cell-index = <1>;
interrupt-parent = <&mpic>;
interrupts = <77 2>;
};
dma-channel@100 {
compatible = "fsl,mpc8572-dma-channel",
"fsl,eloplus-dma-channel";
reg = <0x100 0x80>;
cell-index = <2>;
interrupt-parent = <&mpic>;
interrupts = <78 2>;
};
dma-channel@180 {
compatible = "fsl,mpc8572-dma-channel",
"fsl,eloplus-dma-channel";
reg = <0x180 0x80>;
cell-index = <3>;
interrupt-parent = <&mpic>;
interrupts = <79 2>;
};
};
mdio@24520 {
#address-cells = <1>;
#size-cells = <0>;
compatible = "fsl,gianfar-mdio";
reg = <0x24520 0x20>;
phy2: ethernet-phy@2 {
interrupt-parent = <&mpic>;
reg = <0x2>;
};
phy3: ethernet-phy@3 {
interrupt-parent = <&mpic>;
reg = <0x3>;
};
};
enet2: ethernet@26000 {
cell-index = <2>;
device_type = "network";
model = "eTSEC";
compatible = "gianfar";
reg = <0x26000 0x1000>;
local-mac-address = [ 00 00 00 00 00 00 ];
interrupts = <31 2 32 2 33 2>;
interrupt-parent = <&mpic>;
phy-handle = <&phy2>;
phy-connection-type = "rgmii-id";
};
enet3: ethernet@27000 {
cell-index = <3>;
device_type = "network";
model = "eTSEC";
compatible = "gianfar";
reg = <0x27000 0x1000>;
local-mac-address = [ 00 00 00 00 00 00 ];
interrupts = <37 2 38 2 39 2>;
interrupt-parent = <&mpic>;
phy-handle = <&phy3>;
phy-connection-type = "rgmii-id";
};
msi@41600 {
compatible = "fsl,mpc8572-msi", "fsl,mpic-msi";
reg = <0x41600 0x80>;
msi-available-ranges = <0 0x100>;
interrupts = <
0xe0 0
0xe1 0
0xe2 0
0xe3 0
0xe4 0
0xe5 0
0xe6 0
0xe7 0>;
interrupt-parent = <&mpic>;
};
serial0: serial@4600 {
cell-index = <1>;
device_type = "serial";
compatible = "ns16550";
reg = <0x4600 0x100>;
clock-frequency = <0>;
};
mpic: pic@40000 {
interrupt-controller;
#address-cells = <0>;
#interrupt-cells = <2>;
reg = <0x40000 0x40000>;
compatible = "chrp,open-pic";
device_type = "open-pic";
protected-sources = <
18 16 10 42 45 58 /* MEM L2 mdio serial crypto */
29 30 34 35 36 40 /* enet0 enet1 */
24 26 20 21 22 23 /* pcie0 pcie1 dma1 */
43 /* i2c */
0x1 0x2 0x3 0x4 /* pci slot */
0x9 0xa 0xb 0xc /* usb */
0x6 0x7 0xe 0x5 /* Audio legacy SATA */
>;
};
};
pci2: pcie@ffe0a000 {
cell-index = <2>;
compatible = "fsl,mpc8548-pcie";
device_type = "pci";
#interrupt-cells = <1>;
#size-cells = <2>;
#address-cells = <3>;
reg = <0xffe0a000 0x1000>;
bus-range = <0 255>;
ranges = <0x2000000 0x0 0xc0000000 0xc0000000 0x0 0x20000000
0x1000000 0x0 0x0 0xffc20000 0x0 0x10000>;
clock-frequency = <33333333>;
interrupt-parent = <&mpic>;
interrupts = <27 2>;
interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
interrupt-map = <
/* IDSEL 0x0 */
0000 0x0 0x0 0x1 &mpic 0x0 0x1
0000 0x0 0x0 0x2 &mpic 0x1 0x1
0000 0x0 0x0 0x3 &mpic 0x2 0x1
0000 0x0 0x0 0x4 &mpic 0x3 0x1
>;
pcie@0 {
reg = <0x0 0x0 0x0 0x0 0x0>;
#size-cells = <2>;
#address-cells = <3>;
device_type = "pci";
ranges = <0x2000000 0x0 0xc0000000
0x2000000 0x0 0xc0000000
0x0 0x20000000
0x1000000 0x0 0x0
0x1000000 0x0 0x0
0x0 0x100000>;
};
};
};
@@ -143,7 +143,6 @@ gpt7: timer@670 { /* General Purpose Timer in GPIO mode */
rtc@800 { // Real time clock
compatible = "fsl,mpc5200b-rtc","fsl,mpc5200-rtc";
-device_type = "rtc";
reg = <0x800 0x100>;
interrupts = <0x1 0x5 0x0 0x1 0x6 0x0>;
interrupt-parent = <&mpc5200_pic>;

@@ -301,7 +300,6 @@ i2c@3d40 {
interrupt-parent = <&mpc5200_pic>;
fsl5200-clocking;

rtc@51 {
-device_type = "rtc";
compatible = "nxp,pcf8563";
reg = <0x51>;
};
......
@@ -181,7 +181,6 @@ i2c@3d40 {
fsl5200-clocking;

rtc@68 {
-device_type = "rtc";
compatible = "dallas,ds1307";
reg = <0x68>;
};
......
@@ -185,7 +185,7 @@ void fdt_init(void *blob)
/* Make sure the dt blob is the right version and so forth */
fdt = blob;
-bufsize = fdt_totalsize(fdt) + 4;
+bufsize = fdt_totalsize(fdt) + EXPAND_GRANULARITY;
buf = malloc(bufsize);
if(!buf)
fatal("malloc failed. can't relocate the device tree\n\r");
......
@@ -1397,8 +1397,11 @@ CONFIG_USB_STORAGE=y
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
# CONFIG_EDAC is not set
-CONFIG_RTC_LIB=m
-CONFIG_RTC_CLASS=m
+CONFIG_RTC_LIB=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_HCTOSYS=y
+CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
+# CONFIG_RTC_DEBUG is not set

#
# RTC interfaces

@@ -1424,6 +1427,7 @@ CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_DRV_M41T80 is not set
# CONFIG_RTC_DRV_S35390A is not set
# CONFIG_RTC_DRV_FM3130 is not set
+CONFIG_RTC_DRV_RX8581=y

#
# SPI RTC drivers
......
@@ -267,7 +267,7 @@ CONFIG_PCI_SYSCALL=y
# CONFIG_PCIEPORTBUS is not set
CONFIG_ARCH_SUPPORTS_MSI=y
# CONFIG_PCI_MSI is not set
-CONFIG_PCI_LEGACY=y
+# CONFIG_PCI_LEGACY is not set
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCCARD is not set
# CONFIG_HOTPLUG_PCI is not set

@@ -354,7 +354,7 @@ CONFIG_IPV6_NDISC_NODETYPE=y
# CONFIG_IP_SCTP is not set
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
-# CONFIG_BRIDGE is not set
+CONFIG_BRIDGE=m
# CONFIG_NET_DSA is not set
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set

@@ -579,7 +579,7 @@ CONFIG_NETDEVICES=y
# CONFIG_BONDING is not set
# CONFIG_MACVLAN is not set
# CONFIG_EQUALIZER is not set
-# CONFIG_TUN is not set
+CONFIG_TUN=m
# CONFIG_VETH is not set
# CONFIG_ARCNET is not set
# CONFIG_PHYLIB is not set

@@ -1001,11 +1001,11 @@ CONFIG_USB_OHCI_LITTLE_ENDIAN=y
# CONFIG_USB_TMC is not set

#
-# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support'
-# may also be needed; see USB_STORAGE Help for more information
+# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may also be needed;
+# see USB_STORAGE Help for more information
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set

@@ -1418,6 +1418,6 @@ CONFIG_CRYPTO_LZO=m
# CONFIG_PPC_CLOCK is not set
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
-CONFIG_KVM_BOOKE_HOST=y
+CONFIG_KVM_440=y
# CONFIG_VIRTIO_PCI is not set
# CONFIG_VIRTIO_BALLOON is not set
@@ -111,7 +111,7 @@ static __inline__ void atomic_inc(atomic_t *v)
bne- 1b"
: "=&r" (t), "+m" (v->counter)
: "r" (&v->counter)
-: "cc");
+: "cc", "xer");
}

static __inline__ int atomic_inc_return(atomic_t *v)

@@ -128,7 +128,7 @@ static __inline__ int atomic_inc_return(atomic_t *v)
ISYNC_ON_SMP
: "=&r" (t)
: "r" (&v->counter)
-: "cc", "memory");
+: "cc", "xer", "memory");

return t;
}

@@ -155,7 +155,7 @@ static __inline__ void atomic_dec(atomic_t *v)
bne- 1b"
: "=&r" (t), "+m" (v->counter)
: "r" (&v->counter)
-: "cc");
+: "cc", "xer");
}

static __inline__ int atomic_dec_return(atomic_t *v)

@@ -172,7 +172,7 @@ static __inline__ int atomic_dec_return(atomic_t *v)
ISYNC_ON_SMP
: "=&r" (t)
: "r" (&v->counter)
-: "cc", "memory");
+: "cc", "xer", "memory");

return t;
}

@@ -346,7 +346,7 @@ static __inline__ void atomic64_inc(atomic64_t *v)
bne- 1b"
: "=&r" (t), "+m" (v->counter)
: "r" (&v->counter)
-: "cc");
+: "cc", "xer");
}

static __inline__ long atomic64_inc_return(atomic64_t *v)

@@ -362,7 +362,7 @@ static __inline__ long atomic64_inc_return(atomic64_t *v)
ISYNC_ON_SMP
: "=&r" (t)
: "r" (&v->counter)
-: "cc", "memory");
+: "cc", "xer", "memory");

return t;
}

@@ -388,7 +388,7 @@ static __inline__ void atomic64_dec(atomic64_t *v)
bne- 1b"
: "=&r" (t), "+m" (v->counter)
: "r" (&v->counter)
-: "cc");
+: "cc", "xer");
}

static __inline__ long atomic64_dec_return(atomic64_t *v)

@@ -404,7 +404,7 @@ static __inline__ long atomic64_dec_return(atomic64_t *v)
ISYNC_ON_SMP
: "=&r" (t)
: "r" (&v->counter)
-: "cc", "memory");
+: "cc", "xer", "memory");

return t;
}

@@ -431,7 +431,7 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
"\n\
2:" : "=&r" (t)
: "r" (&v->counter)
-: "cc", "memory");
+: "cc", "xer", "memory");

return t;
}
......
@@ -3,6 +3,7 @@
#ifdef __KERNEL__

#include <asm/asm-compat.h>
+
/*
* Define an illegal instr to trap on the bug.
* We don't use 0 because that marks the end of a function

@@ -14,6 +15,7 @@
#ifdef CONFIG_BUG

#ifdef __ASSEMBLY__
+#include <asm/asm-offsets.h>
#ifdef CONFIG_DEBUG_BUGVERBOSE
.macro EMIT_BUG_ENTRY addr,file,line,flags
.section __bug_table,"a"

@@ -26,7 +28,7 @@
.previous
.endm
#else
.macro EMIT_BUG_ENTRY addr,file,line,flags
.section __bug_table,"a"
5001: PPC_LONG \addr
.short \flags

@@ -113,6 +115,13 @@
#define HAVE_ARCH_BUG_ON
#define HAVE_ARCH_WARN_ON
#endif /* __ASSEMBLY__ */

+#else
+#ifdef __ASSEMBLY__
+.macro EMIT_BUG_ENTRY addr,file,line,flags
+.endm
+#else /* !__ASSEMBLY__ */
+#define _EMIT_BUG_ENTRY
+#endif
#endif /* CONFIG_BUG */

#include <asm-generic/bug.h>
......
@@ -11,6 +11,8 @@
#include <asm/types.h>
#include <linux/compiler.h>

+#define __BIG_ENDIAN
+
#ifdef __GNUC__
#ifdef __KERNEL__

@@ -21,12 +23,19 @@ static __inline__ __u16 ld_le16(const volatile __u16 *addr)
__asm__ __volatile__ ("lhbrx %0,0,%1" : "=r" (val) : "r" (addr), "m" (*addr));
return val;
}
+#define __arch_swab16p ld_le16

static __inline__ void st_le16(volatile __u16 *addr, const __u16 val)
{
__asm__ __volatile__ ("sthbrx %1,0,%2" : "=m" (*addr) : "r" (val), "r" (addr));
}

+static inline void __arch_swab16s(__u16 *addr)
+{
+st_le16(addr, *addr);
+}
+#define __arch_swab16s __arch_swab16s
+
static __inline__ __u32 ld_le32(const volatile __u32 *addr)
{
__u32 val;

@@ -34,13 +43,20 @@ static __inline__ __u32 ld_le32(const volatile __u32 *addr)
__asm__ __volatile__ ("lwbrx %0,0,%1" : "=r" (val) : "r" (addr), "m" (*addr));
return val;
}
+#define __arch_swab32p ld_le32

static __inline__ void st_le32(volatile __u32 *addr, const __u32 val)
{
__asm__ __volatile__ ("stwbrx %1,0,%2" : "=m" (*addr) : "r" (val), "r" (addr));
}

-static __inline__ __attribute_const__ __u16 ___arch__swab16(__u16 value)
+static inline void __arch_swab32s(__u32 *addr)
+{
+st_le32(addr, *addr);
+}
+#define __arch_swab32s __arch_swab32s
+
+static inline __attribute_const__ __u16 __arch_swab16(__u16 value)
{
__u16 result;

@@ -49,8 +65,9 @@ static __inline__ __attribute_const__ __u16 ___arch__swab16(__u16 value)
: "r" (value), "0" (value >> 8));
return result;
}
+#define __arch_swab16 __arch_swab16

-static __inline__ __attribute_const__ __u32 ___arch__swab32(__u32 value)
+static inline __attribute_const__ __u32 __arch_swab32(__u32 value)
{
__u32 result;

@@ -61,29 +78,16 @@ static __inline__ __attribute_const__ __u32 ___arch__swab32(__u32 value)
: "r" (value), "0" (value >> 24));
return result;
}
+#define __arch_swab32 __arch_swab32

-#define __arch__swab16(x) ___arch__swab16(x)
-#define __arch__swab32(x) ___arch__swab32(x)
-
-/* The same, but returns converted value from the location pointer by addr. */
-#define __arch__swab16p(addr) ld_le16(addr)
-#define __arch__swab32p(addr) ld_le32(addr)
-
-/* The same, but do the conversion in situ, ie. put the value back to addr. */
-#define __arch__swab16s(addr) st_le16(addr,*addr)
-#define __arch__swab32s(addr) st_le32(addr,*addr)
-
#endif /* __KERNEL__ */

-#ifndef __STRICT_ANSI__
-#define __BYTEORDER_HAS_U64__
#ifndef __powerpc64__
#define __SWAB_64_THRU_32__
#endif /* __powerpc64__ */
-#endif /* __STRICT_ANSI__ */

#endif /* __GNUC__ */

-#include <linux/byteorder/big_endian.h>
+#include <linux/byteorder.h>

#endif /* _ASM_POWERPC_BYTEORDER_H */
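For reference, lhbrx/lwbrx and sthbrx/stwbrx are byte-reversed loads and stores, so these helpers give little-endian accesses on a big-endian CPU. A portable sketch of the 32-bit swap semantics they implement (plain C for illustration, not the kernel's asm implementation):

#include <stdint.h>
#include <stdio.h>

/* Same result as __arch_swab32 / ld_le32 on a big-endian machine:
 * reverse the four bytes of the word. */
static uint32_t swab32(uint32_t v)
{
	return (v >> 24) | ((v >> 8) & 0xff00) |
	       ((v << 8) & 0xff0000) | (v << 24);
}

int main(void)
{
	printf("0x%08x\n", swab32(0x12345678));	/* prints 0x78563412 */
	return 0;
}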
@@ -23,6 +23,7 @@
#ifndef __ASSEMBLY__

#include <linux/spinlock.h>
+#include <asm/cputable.h>

typedef struct {
unsigned int base;

@@ -39,23 +40,45 @@ static inline bool dcr_map_ok_native(dcr_host_native_t host)
#define dcr_read_native(host, dcr_n) mfdcr(dcr_n + host.base)
#define dcr_write_native(host, dcr_n, value) mtdcr(dcr_n + host.base, value)

-/* Device Control Registers */
-void __mtdcr(int reg, unsigned int val);
-unsigned int __mfdcr(int reg);
+/* Table based DCR accessors */
+extern void __mtdcr(unsigned int reg, unsigned int val);
+extern unsigned int __mfdcr(unsigned int reg);
/* mfdcrx/mtdcrx instruction based accessors. We hand code
* the opcodes in order not to depend on newer binutils
*/
static inline unsigned int mfdcrx(unsigned int reg)
{
unsigned int ret;
asm volatile(".long 0x7c000206 | (%0 << 21) | (%1 << 16)"
: "=r" (ret) : "r" (reg));
return ret;
}
static inline void mtdcrx(unsigned int reg, unsigned int val)
{
asm volatile(".long 0x7c000306 | (%0 << 21) | (%1 << 16)"
: : "r" (val), "r" (reg));
}
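The .long trick works because, on powerpc, gcc substitutes bare register numbers for %0/%1, so the assembler itself evaluates the shift/OR expression and emits a proper mfdcrx/mtdcrx encoding (primary opcode 31, RT in the field shifted by 21, RA in the field shifted by 16). A host-side sketch of the same arithmetic (hypothetical helper for illustration, not kernel code):

#include <stdint.h>
#include <stdio.h>

/* Encode mfdcrx rt,ra exactly as the ".long 0x7c000206 | (%0 << 21) |
 * (%1 << 16)" template above does once register numbers are filled in. */
static uint32_t encode_mfdcrx(unsigned int rt, unsigned int ra)
{
	return 0x7c000206u | (rt << 21) | (ra << 16);
}

int main(void)
{
	/* mfdcrx r3,r4 assembles to 0x7c640206 */
	printf("0x%08x\n", encode_mfdcrx(3, 4));
	return 0;
}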
#define mfdcr(rn) \
({unsigned int rval; \
-if (__builtin_constant_p(rn)) \
+if (__builtin_constant_p(rn) && rn < 1024) \
asm volatile("mfdcr %0," __stringify(rn) \
: "=r" (rval)); \
+else if (likely(cpu_has_feature(CPU_FTR_INDEXED_DCR))) \
+rval = mfdcrx(rn); \
else \
rval = __mfdcr(rn); \
rval;})

#define mtdcr(rn, v) \
do { \
-if (__builtin_constant_p(rn)) \
+if (__builtin_constant_p(rn) && rn < 1024) \
asm volatile("mtdcr " __stringify(rn) ",%0" \
: : "r" (v)); \
+else if (likely(cpu_has_feature(CPU_FTR_INDEXED_DCR))) \
+mtdcrx(rn, v); \
else \
__mtdcr(rn, v); \
} while (0)
@@ -69,8 +92,13 @@ static inline unsigned __mfdcri(int base_addr, int base_data, int reg)
unsigned int val;

spin_lock_irqsave(&dcr_ind_lock, flags);
-__mtdcr(base_addr, reg);
-val = __mfdcr(base_data);
+if (cpu_has_feature(CPU_FTR_INDEXED_DCR)) {
+mtdcrx(base_addr, reg);
+val = mfdcrx(base_data);
+} else {
+__mtdcr(base_addr, reg);
+val = __mfdcr(base_data);
+}
spin_unlock_irqrestore(&dcr_ind_lock, flags);
return val;
}

@@ -81,8 +109,13 @@ static inline void __mtdcri(int base_addr, int base_data, int reg,
unsigned long flags;

spin_lock_irqsave(&dcr_ind_lock, flags);
-__mtdcr(base_addr, reg);
-__mtdcr(base_data, val);
+if (cpu_has_feature(CPU_FTR_INDEXED_DCR)) {
+mtdcrx(base_addr, reg);
+mtdcrx(base_data, val);
+} else {
+__mtdcr(base_addr, reg);
+__mtdcr(base_data, val);
+}
spin_unlock_irqrestore(&dcr_ind_lock, flags);
}

@@ -93,9 +126,15 @@ static inline void __dcri_clrset(int base_addr, int base_data, int reg,
unsigned int val;

spin_lock_irqsave(&dcr_ind_lock, flags);
-__mtdcr(base_addr, reg);
-val = (__mfdcr(base_data) & ~clr) | set;
-__mtdcr(base_data, val);
+if (cpu_has_feature(CPU_FTR_INDEXED_DCR)) {
+mtdcrx(base_addr, reg);
+val = (mfdcrx(base_data) & ~clr) | set;
+mtdcrx(base_data, val);
+} else {
+__mtdcr(base_addr, reg);
+val = (__mfdcr(base_data) & ~clr) | set;
+__mtdcr(base_data, val);
+}
spin_unlock_irqrestore(&dcr_ind_lock, flags);
}
......
@@ -68,9 +68,9 @@ typedef dcr_host_mmio_t dcr_host_t;
* additional helpers to read the DCR * base from the device-tree
*/
struct device_node;
-extern unsigned int dcr_resource_start(struct device_node *np,
-unsigned int index);
-extern unsigned int dcr_resource_len(struct device_node *np,
-unsigned int index);
+extern unsigned int dcr_resource_start(const struct device_node *np,
+unsigned int index);
+extern unsigned int dcr_resource_len(const struct device_node *np,
+unsigned int index);
#endif /* CONFIG_PPC_DCR */
#endif /* __ASSEMBLY__ */
......
@@ -18,4 +18,16 @@ struct dev_archdata {
void *dma_data;
};
static inline void dev_archdata_set_node(struct dev_archdata *ad,
struct device_node *np)
{
ad->of_node = np;
}
static inline struct device_node *
dev_archdata_get_node(const struct dev_archdata *ad)
{
return ad->of_node;
}
#endif /* _ASM_POWERPC_DEVICE_H */
@@ -60,12 +60,6 @@ struct dma_mapping_ops {
dma_addr_t *dma_handle, gfp_t flag);
void (*free_coherent)(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_handle);
-dma_addr_t (*map_single)(struct device *dev, void *ptr,
-size_t size, enum dma_data_direction direction,
-struct dma_attrs *attrs);
-void (*unmap_single)(struct device *dev, dma_addr_t dma_addr,
-size_t size, enum dma_data_direction direction,
-struct dma_attrs *attrs);
int (*map_sg)(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction direction,
struct dma_attrs *attrs);

@@ -82,6 +76,22 @@ struct dma_mapping_ops {
dma_addr_t dma_address, size_t size,
enum dma_data_direction direction,
struct dma_attrs *attrs);
#ifdef CONFIG_PPC_NEED_DMA_SYNC_OPS
void (*sync_single_range_for_cpu)(struct device *hwdev,
dma_addr_t dma_handle, unsigned long offset,
size_t size,
enum dma_data_direction direction);
void (*sync_single_range_for_device)(struct device *hwdev,
dma_addr_t dma_handle, unsigned long offset,
size_t size,
enum dma_data_direction direction);
void (*sync_sg_for_cpu)(struct device *hwdev,
struct scatterlist *sg, int nelems,
enum dma_data_direction direction);
void (*sync_sg_for_device)(struct device *hwdev,
struct scatterlist *sg, int nelems,
enum dma_data_direction direction);
#endif
};

/*

@@ -149,10 +159,9 @@ static inline int dma_set_mask(struct device *dev, u64 dma_mask)
}

/*
-* TODO: map_/unmap_single will ideally go away, to be completely
-* replaced by map/unmap_page. Until then, we allow dma_ops to have
-* one or the other, or both by checking to see if the specific
-* function requested exists; and if not, falling back on the other set.
+* map_/unmap_single actually call through to map/unmap_page now that all the
+* dma_mapping_ops have been converted over. We just have to get the page and
+* offset to pass through to map_page
*/
static inline dma_addr_t dma_map_single_attrs(struct device *dev,
void *cpu_addr,

@@ -164,10 +173,6 @@ static inline dma_addr_t dma_map_single_attrs(struct device *dev,
BUG_ON(!dma_ops);

-if (dma_ops->map_single)
-return dma_ops->map_single(dev, cpu_addr, size, direction,
-attrs);
-
return dma_ops->map_page(dev, virt_to_page(cpu_addr),
(unsigned long)cpu_addr % PAGE_SIZE, size,
direction, attrs);

@@ -183,11 +188,6 @@ static inline void dma_unmap_single_attrs(struct device *dev,
BUG_ON(!dma_ops);

-if (dma_ops->unmap_single) {
-dma_ops->unmap_single(dev, dma_addr, size, direction, attrs);
-return;
-}
-
dma_ops->unmap_page(dev, dma_addr, size, direction, attrs);
}
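With the fallbacks gone, dma_map_single() is nothing but page/offset arithmetic in front of ops->map_page(). A standalone sketch of that decomposition (hypothetical values; virt_to_page() is modelled here by page-aligning the address):

#include <stdio.h>

#define PAGE_SIZE 4096UL	/* assumed 4K pages for this example */

/* The page/offset split dma_map_single_attrs() feeds to ->map_page(). */
int main(void)
{
	unsigned long cpu_addr = 0x12345678UL;	/* arbitrary example vaddr */
	unsigned long page_base = cpu_addr & ~(PAGE_SIZE - 1);
	unsigned long offset = cpu_addr % PAGE_SIZE;

	printf("page base 0x%lx + offset 0x%lx\n", page_base, offset);
	return 0;	/* prints: page base 0x12345000 + offset 0x678 */
}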
@@ -201,12 +201,7 @@ static inline dma_addr_t dma_map_page_attrs(struct device *dev,
BUG_ON(!dma_ops);

-if (dma_ops->map_page)
-return dma_ops->map_page(dev, page, offset, size, direction,
-attrs);
-
-return dma_ops->map_single(dev, page_address(page) + offset, size,
-direction, attrs);
+return dma_ops->map_page(dev, page, offset, size, direction, attrs);
}

static inline void dma_unmap_page_attrs(struct device *dev,

@@ -219,12 +214,7 @@ static inline void dma_unmap_page_attrs(struct device *dev,
BUG_ON(!dma_ops);

-if (dma_ops->unmap_page) {
-dma_ops->unmap_page(dev, dma_address, size, direction, attrs);
-return;
-}
-
-dma_ops->unmap_single(dev, dma_address, size, direction, attrs);
+dma_ops->unmap_page(dev, dma_address, size, direction, attrs);
}

static inline int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,

@@ -308,47 +298,107 @@ static inline void dma_unmap_sg(struct device *dev, struct scatterlist *sg,
dma_unmap_sg_attrs(dev, sg, nhwentries, direction, NULL);
}
+#ifdef CONFIG_PPC_NEED_DMA_SYNC_OPS
static inline void dma_sync_single_for_cpu(struct device *dev,
dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
-BUG_ON(direction == DMA_NONE);
-__dma_sync(bus_to_virt(dma_handle), size, direction);
+struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
+
+BUG_ON(!dma_ops);
+dma_ops->sync_single_range_for_cpu(dev, dma_handle, 0,
+size, direction);
}

static inline void dma_sync_single_for_device(struct device *dev,
dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction)
{
-BUG_ON(direction == DMA_NONE);
-__dma_sync(bus_to_virt(dma_handle), size, direction);
+struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
+
+BUG_ON(!dma_ops);
+dma_ops->sync_single_range_for_device(dev, dma_handle,
+0, size, direction);
}

static inline void dma_sync_sg_for_cpu(struct device *dev,
struct scatterlist *sgl, int nents,
enum dma_data_direction direction)
{
-struct scatterlist *sg;
-int i;
-
-BUG_ON(direction == DMA_NONE);
-
-for_each_sg(sgl, sg, nents, i)
-__dma_sync_page(sg_page(sg), sg->offset, sg->length, direction);
+struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
+
+BUG_ON(!dma_ops);
+dma_ops->sync_sg_for_cpu(dev, sgl, nents, direction);
}

static inline void dma_sync_sg_for_device(struct device *dev,
struct scatterlist *sgl, int nents,
enum dma_data_direction direction)
{
-struct scatterlist *sg;
-int i;
-
-BUG_ON(direction == DMA_NONE);
-
-for_each_sg(sgl, sg, nents, i)
-__dma_sync_page(sg_page(sg), sg->offset, sg->length, direction);
+struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
+
+BUG_ON(!dma_ops);
+dma_ops->sync_sg_for_device(dev, sgl, nents, direction);
}

+static inline void dma_sync_single_range_for_cpu(struct device *dev,
+dma_addr_t dma_handle, unsigned long offset, size_t size,
+enum dma_data_direction direction)
+{
+struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
+
+BUG_ON(!dma_ops);
+dma_ops->sync_single_range_for_cpu(dev, dma_handle,
+offset, size, direction);
+}
+
+static inline void dma_sync_single_range_for_device(struct device *dev,
+dma_addr_t dma_handle, unsigned long offset, size_t size,
+enum dma_data_direction direction)
+{
+struct dma_mapping_ops *dma_ops = get_dma_ops(dev);
+
+BUG_ON(!dma_ops);
+dma_ops->sync_single_range_for_device(dev, dma_handle, offset,
+size, direction);
+}
+#else /* CONFIG_PPC_NEED_DMA_SYNC_OPS */
+static inline void dma_sync_single_for_cpu(struct device *dev,
+dma_addr_t dma_handle, size_t size,
+enum dma_data_direction direction)
+{
+}
+
+static inline void dma_sync_single_for_device(struct device *dev,
+dma_addr_t dma_handle, size_t size,
+enum dma_data_direction direction)
+{
+}
+
+static inline void dma_sync_sg_for_cpu(struct device *dev,
+struct scatterlist *sgl, int nents,
+enum dma_data_direction direction)
+{
+}
+
+static inline void dma_sync_sg_for_device(struct device *dev,
+struct scatterlist *sgl, int nents,
+enum dma_data_direction direction)
+{
+}
+
+static inline void dma_sync_single_range_for_cpu(struct device *dev,
+dma_addr_t dma_handle, unsigned long offset, size_t size,
+enum dma_data_direction direction)
+{
+}
+
+static inline void dma_sync_single_range_for_device(struct device *dev,
+dma_addr_t dma_handle, unsigned long offset, size_t size,
+enum dma_data_direction direction)
+{
+}
+#endif
static inline int dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
{

@@ -382,22 +432,6 @@ static inline int dma_get_cache_alignment(void)
#endif
}

-static inline void dma_sync_single_range_for_cpu(struct device *dev,
-dma_addr_t dma_handle, unsigned long offset, size_t size,
-enum dma_data_direction direction)
-{
-/* just sync everything for now */
-dma_sync_single_for_cpu(dev, dma_handle, offset + size, direction);
-}
-
-static inline void dma_sync_single_range_for_device(struct device *dev,
-dma_addr_t dma_handle, unsigned long offset, size_t size,
-enum dma_data_direction direction)
-{
-/* just sync everything for now */
-dma_sync_single_for_device(dev, dma_handle, offset + size, direction);
-}
-
static inline void dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction direction)
{
......
@@ -17,8 +17,8 @@
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/

-#ifndef _PPC64_EEH_H
-#define _PPC64_EEH_H
+#ifndef _POWERPC_EEH_H
+#define _POWERPC_EEH_H
#ifdef __KERNEL__

#include <linux/init.h>

@@ -110,6 +110,7 @@ static inline void eeh_remove_bus_device(struct pci_dev *dev) { }
#define EEH_IO_ERROR_VALUE(size) (-1UL)
#endif /* CONFIG_EEH */

+#ifdef CONFIG_PPC64
/*
* MMIO read/write operations with EEH support.
*/

@@ -207,5 +208,6 @@ static inline void eeh_readsl(const volatile void __iomem *addr, void * buf,
eeh_check_failure(addr, *(u32*)buf);
}

+#endif /* CONFIG_PPC64 */
#endif /* __KERNEL__ */
-#endif /* _PPC64_EEH_H */
+#endif /* _POWERPC_EEH_H */
@@ -81,6 +81,36 @@ label##5: \
#define ALT_FTR_SECTION_END_IFCLR(msk) \
ALT_FTR_SECTION_END_NESTED_IFCLR(msk, 97)
/* MMU feature dependent sections */
#define BEGIN_MMU_FTR_SECTION_NESTED(label) START_FTR_SECTION(label)
#define BEGIN_MMU_FTR_SECTION START_FTR_SECTION(97)
#define END_MMU_FTR_SECTION_NESTED(msk, val, label) \
FTR_SECTION_ELSE_NESTED(label) \
MAKE_FTR_SECTION_ENTRY(msk, val, label, __mmu_ftr_fixup)
#define END_MMU_FTR_SECTION(msk, val) \
END_MMU_FTR_SECTION_NESTED(msk, val, 97)
#define END_MMU_FTR_SECTION_IFSET(msk) END_MMU_FTR_SECTION((msk), (msk))
#define END_MMU_FTR_SECTION_IFCLR(msk) END_MMU_FTR_SECTION((msk), 0)
/* MMU feature sections with alternatives, use BEGIN_FTR_SECTION to start */
#define MMU_FTR_SECTION_ELSE_NESTED(label) FTR_SECTION_ELSE_NESTED(label)
#define MMU_FTR_SECTION_ELSE MMU_FTR_SECTION_ELSE_NESTED(97)
#define ALT_MMU_FTR_SECTION_END_NESTED(msk, val, label) \
MAKE_FTR_SECTION_ENTRY(msk, val, label, __mmu_ftr_fixup)
#define ALT_MMU_FTR_SECTION_END_NESTED_IFSET(msk, label) \
ALT_MMU_FTR_SECTION_END_NESTED(msk, msk, label)
#define ALT_MMU_FTR_SECTION_END_NESTED_IFCLR(msk, label) \
ALT_MMU_FTR_SECTION_END_NESTED(msk, 0, label)
#define ALT_MMU_FTR_SECTION_END(msk, val) \
ALT_MMU_FTR_SECTION_END_NESTED(msk, val, 97)
#define ALT_MMU_FTR_SECTION_END_IFSET(msk) \
ALT_MMU_FTR_SECTION_END_NESTED_IFSET(msk, 97)
#define ALT_MMU_FTR_SECTION_END_IFCLR(msk) \
ALT_MMU_FTR_SECTION_END_NESTED_IFCLR(msk, 97)
/* Firmware feature dependent sections */
#define BEGIN_FW_FTR_SECTION_NESTED(label) START_FTR_SECTION(label)
#define BEGIN_FW_FTR_SECTION START_FTR_SECTION(97)
......
@@ -38,9 +38,24 @@ extern pte_t *pkmap_page_table;
* easily, subsequent pte tables have to be allocated in one physical
* chunk of RAM.
*/
-#define LAST_PKMAP (1 << PTE_SHIFT)
-#define LAST_PKMAP_MASK (LAST_PKMAP-1)
+/*
+* We use one full pte table with 4K pages. And with 16K/64K pages pte
+* table covers enough memory (32MB and 512MB resp.) that both FIXMAP
+* and PKMAP can be placed in single pte table. We use 1024 pages for
+* PKMAP in case of 16K/64K pages.
+*/
+#ifdef CONFIG_PPC_4K_PAGES
+#define PKMAP_ORDER PTE_SHIFT
+#else
+#define PKMAP_ORDER 10
+#endif
+#define LAST_PKMAP (1 << PKMAP_ORDER)
+#ifndef CONFIG_PPC_4K_PAGES
+#define PKMAP_BASE (FIXADDR_START - PAGE_SIZE*(LAST_PKMAP + 1))
+#else
#define PKMAP_BASE ((FIXADDR_START - PAGE_SIZE*(LAST_PKMAP + 1)) & PMD_MASK)
+#endif
+#define LAST_PKMAP_MASK (LAST_PKMAP-1)
#define PKMAP_NR(virt) ((virt-PKMAP_BASE) >> PAGE_SHIFT)
#define PKMAP_ADDR(nr) (PKMAP_BASE + ((nr) << PAGE_SHIFT))
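The coverage numbers in the comment follow from entries-per-table times page size. A quick standalone check (the 8-byte PTE size is an assumption for 44x with 36-bit physical addressing, not taken from this header):

#include <stdio.h>

/* One page-sized pte table holds PAGE_SIZE / sizeof(pte) entries,
 * each mapping one page, giving the 32MB / 512MB figures above. */
int main(void)
{
	const unsigned long page_sizes[] = { 16384, 65536 };

	for (int i = 0; i < 2; i++) {
		unsigned long ps = page_sizes[i];
		unsigned long entries = ps / 8;	/* assumed 8-byte PTEs */
		printf("%2luK pages: %lu PTEs cover %lu MB\n",
		       ps >> 10, entries, entries * ps >> 20);
	}
	return 0;	/* 16K: 32 MB, 64K: 512 MB */
}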
@@ -85,7 +100,7 @@ static inline void *kmap_atomic_prot(struct page *page, enum km_type type, pgprot_t prot)
BUG_ON(!pte_none(*(kmap_pte-idx)));
#endif
__set_pte_at(&init_mm, vaddr, kmap_pte-idx, mk_pte(page, prot));
-flush_tlb_page(NULL, vaddr);
+local_flush_tlb_page(NULL, vaddr);

return (void*) vaddr;
}

@@ -113,7 +128,7 @@ static inline void kunmap_atomic(void *kvaddr, enum km_type type)
* this pte without first remapping it
*/
pte_clear(&init_mm, vaddr, kmap_pte-idx);
-flush_tlb_page(NULL, vaddr);
+local_flush_tlb_page(NULL, vaddr);
#endif

pagefault_enable();
}
......
...@@ -713,13 +713,6 @@ static inline void * phys_to_virt(unsigned long address) ...@@ -713,13 +713,6 @@ static inline void * phys_to_virt(unsigned long address)
*/ */
#define page_to_phys(page) ((phys_addr_t)page_to_pfn(page) << PAGE_SHIFT) #define page_to_phys(page) ((phys_addr_t)page_to_pfn(page) << PAGE_SHIFT)
/* We do NOT want virtual merging, it would put too much pressure on
* our iommu allocator. Instead, we want drivers to be smart enough
* to coalesce sglists that happen to have been mapped in a contiguous
* way by the iommu
*/
#define BIO_VMERGE_BOUNDARY 0
/*
* 32 bits still uses virt_to_bus() for its implementation of DMA
* mappings so we have to keep it defined here. We also have some old
...
#ifndef _PPC64_KDUMP_H
#define _PPC64_KDUMP_H
#include <asm/page.h>
/* Kdump kernel runs at 32 MB, change at your peril. */
#define KDUMP_KERNELBASE 0x2000000
@@ -11,8 +13,19 @@
#ifdef CONFIG_CRASH_DUMP
/*
* On PPC64 translation is disabled during trampoline setup, so we use
* physical addresses. On PPC32, however, translation is already enabled,
* so we can't do the same. Luckily create_trampoline() creates relative
* branches, so we can just add the PAGE_OFFSET and don't worry about it.
*/
#ifdef __powerpc64__
#define KDUMP_TRAMPOLINE_START 0x0100
#define KDUMP_TRAMPOLINE_END 0x3000
#else
#define KDUMP_TRAMPOLINE_START (0x0100 + PAGE_OFFSET)
#define KDUMP_TRAMPOLINE_END (0x3000 + PAGE_OFFSET)
#endif /* __powerpc64__ */
#define KDUMP_MIN_TCE_ENTRIES 2048
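Editor's illustration, not part of the patch: with the usual classic-PPC32 PAGE_OFFSET of 0xc0000000 (an assumed value; it is configurable), the 32-bit trampoline bounds become virtual addresses:

	#include <stdio.h>

	#define PAGE_OFFSET 0xc0000000UL	/* assumed kernel base */

	int main(void)
	{
		printf("start = 0x%08lx\n", 0x0100 + PAGE_OFFSET);	/* 0xc0000100 */
		printf("end   = 0x%08lx\n", 0x3000 + PAGE_OFFSET);	/* 0xc0003000 */
		return 0;
	}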
...
@@ -33,12 +33,12 @@
#ifndef __ASSEMBLY__
#include <linux/cpumask.h>
#include <asm/reg.h>
typedef void (*crash_shutdown_t)(void);
#ifdef CONFIG_KEXEC
#ifdef __powerpc64__
/*
* This function is responsible for capturing register states if coming
* via panic or invoking dump using sysrq-trigger.
@@ -48,6 +48,7 @@ static inline void crash_setup_regs(struct pt_regs *newregs,
{
if (oldregs)
memcpy(newregs, oldregs, sizeof(*newregs));
#ifdef __powerpc64__
else {
/* FIXME Merge this with xmon_save_regs ?? */
unsigned long tmp1, tmp2;
@@ -100,15 +101,11 @@ static inline void crash_setup_regs(struct pt_regs *newregs,
: "b" (newregs)
: "memory");
}
}
#else
/*
* Provide a dummy definition to avoid build failures. Will remain
* empty till crash dump support is enabled.
*/
static inline void crash_setup_regs(struct pt_regs *newregs,
struct pt_regs *oldregs) { }
#endif /* !__powerpc64 __ */
#else
else
ppc_save_regs(newregs);
#endif /* __powerpc64__ */
}
extern void kexec_smp_wait(void); /* get and clear naca physid, wait for
master to copy new code to 0 */
...
@@ -67,7 +67,7 @@ static __inline__ long local_inc_return(local_t *l)
bne- 1b"
: "=&r" (t)
: "r" (&(l->a.counter))
: "cc", "memory");
: "cc", "xer", "memory");
return t;
}
@@ -94,7 +94,7 @@ static __inline__ long local_dec_return(local_t *l)
bne- 1b"
: "=&r" (t)
: "r" (&(l->a.counter))
: "cc", "memory");
: "cc", "xer", "memory");
return t;
}
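The new "xer" clobber is needed because addic records a carry in XER[CA], which the compiler would otherwise assume is preserved across the asm. A hedged standalone sketch of the pattern (GNU C on a PowerPC target assumed; not the patch's exact code):

	static inline long local_inc_return_sketch(long *counter)
	{
		long t;

		__asm__ __volatile__(
	"1:	lwarx	%0,0,%1\n"	/* load-reserve the counter */
	"	addic	%0,%0,1\n"	/* increment; sets XER[CA] */
	"	stwcx.	%0,0,%1\n"	/* store-conditional */
	"	bne-	1b"		/* retry if the reservation was lost */
		: "=&r" (t)
		: "r" (counter)
		: "cc", "xer", "memory");

		return t;
	}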
...
@@ -133,7 +133,8 @@ struct lppaca {
//=============================================================================
// CACHE_LINE_4-5 0x0180 - 0x027F Contains PMC interrupt data
//=============================================================================
u8 pmc_save_area[256]; // PMC interrupt Area x00-xFF
u32 page_ins; // CMO Hint - # page ins by OS x00-x04
u8 pmc_save_area[252]; // PMC interrupt Area x04-xFF
} __attribute__((__aligned__(0x400)));
extern struct lppaca lppaca[];
...
@@ -54,8 +54,9 @@
#ifndef __ASSEMBLY__
typedef struct {
unsigned long id;
unsigned long vdso_base;
unsigned int id;
unsigned int active;
unsigned long vdso_base;
} mm_context_t;
#endif /* !__ASSEMBLY__ */
...
@@ -4,6 +4,8 @@
* PPC440 support
*/
#include <asm/page.h>
#define PPC44x_MMUCR_TID 0x000000ff
#define PPC44x_MMUCR_STS 0x00010000
@@ -56,8 +58,9 @@
extern unsigned int tlb_44x_hwater;
typedef struct {
unsigned long id;
unsigned long vdso_base;
unsigned int id;
unsigned int active;
unsigned long vdso_base;
} mm_context_t;
#endif /* !__ASSEMBLY__ */
@@ -73,4 +76,19 @@ typedef struct {
/* Size of the TLBs used for pinning in lowmem */
#define PPC_PIN_SIZE (1 << 28) /* 256M */
#if (PAGE_SHIFT == 12)
#define PPC44x_TLBE_SIZE PPC44x_TLB_4K
#elif (PAGE_SHIFT == 14)
#define PPC44x_TLBE_SIZE PPC44x_TLB_16K
#elif (PAGE_SHIFT == 16)
#define PPC44x_TLBE_SIZE PPC44x_TLB_64K
#else
#error "Unsupported PAGE_SIZE"
#endif
#define PPC44x_PGD_OFF_SHIFT (32 - PGDIR_SHIFT + PGD_T_LOG2)
#define PPC44x_PGD_OFF_MASK_BIT (PGDIR_SHIFT - PGD_T_LOG2)
#define PPC44x_PTE_ADD_SHIFT (32 - PGDIR_SHIFT + PTE_SHIFT + PTE_T_LOG2)
#define PPC44x_PTE_ADD_MASK_BIT (32 - PTE_T_LOG2 - PTE_SHIFT)
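To make the shift macros above concrete, an editor's sketch (not part of the patch): assuming 4K pages, 4-byte page-table entries and PGDIR_SHIFT of 22 (a classic two-level 32-bit layout; all three are assumptions), a 32-bit address decomposes as follows:

	#include <stdio.h>
	#include <stdint.h>

	#define PGDIR_SHIFT 22	/* assumed */
	#define PGD_T_LOG2  2	/* 4-byte pgd_t, assumed */
	#define PTE_T_LOG2  2	/* 4-byte pte_t, assumed */
	#define PTE_SHIFT   10	/* 1024 PTEs per 4K page */

	int main(void)
	{
		uint32_t va = 0xc0103456;
		uint32_t pgd_idx = va >> PGDIR_SHIFT;		/* top 10 bits */
		uint32_t pte_idx = (va >> 12) & ((1 << PTE_SHIFT) - 1);

		/* the rlwinm-style rotate amounts the macros compute */
		printf("PGD_OFF_SHIFT=%d PTE_ADD_SHIFT=%d\n",
		       32 - PGDIR_SHIFT + PGD_T_LOG2,
		       32 - PGDIR_SHIFT + PTE_SHIFT + PTE_T_LOG2);
		printf("pgd index %u, pte index %u\n",
		       (unsigned)pgd_idx, (unsigned)pte_idx);
		return 0;
	}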
#endif /* _ASM_POWERPC_MMU_44X_H_ */
@@ -137,7 +137,8 @@
#ifndef __ASSEMBLY__
typedef struct {
unsigned long id;
unsigned int id;
unsigned int active;
unsigned long vdso_base;
} mm_context_t;
#endif /* !__ASSEMBLY__ */
...
@@ -40,6 +40,8 @@
#define MAS2_M 0x00000004
#define MAS2_G 0x00000002
#define MAS2_E 0x00000001
#define MAS2_EPN_MASK(size) (~0 << (2*(size) + 10))
#define MAS2_VAL(addr, size, flags) ((addr) & MAS2_EPN_MASK(size) | (flags))
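A worked example (editor's illustration, not part of the patch): with the Book-E TSIZE encoding, where size n means a 4^n KB page, size 1 selects 4K, so MAS2_EPN_MASK(1) is ~0 << 12. Note the macro body relies on & binding tighter than | in C; adding parentheses would silence compiler warnings without changing the value.

	#include <stdio.h>

	#define MAS2_EPN_MASK(size) (~0 << (2*(size) + 10))
	#define MAS2_VAL(addr, size, flags) ((addr) & MAS2_EPN_MASK(size) | (flags))
	#define MAS2_M 0x00000004

	int main(void)
	{
		/* 4K page (size encoding 1), memory-coherence (M) flag */
		printf("0x%08x\n", MAS2_VAL(0xc0001234u, 1, MAS2_M));
		/* prints 0xc0001004: EPN 0xc0001000 plus the M bit */
		return 0;
	}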
#define MAS3_RPN 0xFFFFF000
#define MAS3_U0 0x00000200
@@ -74,8 +76,9 @@
#ifndef __ASSEMBLY__
typedef struct {
unsigned long id;
unsigned long vdso_base;
unsigned int id;
unsigned int active;
unsigned long vdso_base;
} mm_context_t;
#endif /* !__ASSEMBLY__ */
...
@@ -2,6 +2,63 @@
#define _ASM_POWERPC_MMU_H_
#ifdef __KERNEL__
#include <asm/asm-compat.h>
#include <asm/feature-fixups.h>
/*
* MMU features bit definitions
*/
/*
* First half is MMU families
*/
#define MMU_FTR_HPTE_TABLE ASM_CONST(0x00000001)
#define MMU_FTR_TYPE_8xx ASM_CONST(0x00000002)
#define MMU_FTR_TYPE_40x ASM_CONST(0x00000004)
#define MMU_FTR_TYPE_44x ASM_CONST(0x00000008)
#define MMU_FTR_TYPE_FSL_E ASM_CONST(0x00000010)
/*
* These are individual features
*/
/* Enable use of high BAT registers */
#define MMU_FTR_USE_HIGH_BATS ASM_CONST(0x00010000)
/* Enable >32-bit physical addresses on 32-bit processor, only used
* by CONFIG_6xx currently as BookE supports that from day 1
*/
#define MMU_FTR_BIG_PHYS ASM_CONST(0x00020000)
/* Enable use of broadcast TLB invalidations. We don't always set it
* on processors that support it due to other constraints with the
* use of such invalidations
*/
#define MMU_FTR_USE_TLBIVAX_BCAST ASM_CONST(0x00040000)
/* Enable use of tlbilx invalidate-by-PID variant.
*/
#define MMU_FTR_USE_TLBILX_PID ASM_CONST(0x00080000)
/* This indicates that the processor cannot handle multiple outstanding
* broadcast tlbivax or tlbsync. This makes the code use a spinlock
* around such invalidate forms.
*/
#define MMU_FTR_LOCK_BCAST_INVAL ASM_CONST(0x00100000)
#ifndef __ASSEMBLY__
#include <asm/cputable.h>
static inline int mmu_has_feature(unsigned long feature)
{
return (cur_cpu_spec->mmu_features & feature);
}
extern unsigned int __start___mmu_ftr_fixup, __stop___mmu_ftr_fixup;
#endif /* !__ASSEMBLY__ */
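A hedged usage sketch (editor's illustration, not from this patch; example_setup is hypothetical): code that must behave differently per MMU family can key off these bits at runtime:

	static void example_setup(void)
	{
		if (mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
			/* classic hash-table MMU path */
		} else if (mmu_has_feature(MMU_FTR_TYPE_44x)) {
			/* 44x software-loaded TLB path */
		}
	}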
#ifdef CONFIG_PPC64
/* 64-bit classic hash table MMU */
# include <asm/mmu-hash64.h>
...
@@ -2,237 +2,26 @@
#define __ASM_POWERPC_MMU_CONTEXT_H
#ifdef __KERNEL__
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <asm/mmu.h>
#include <asm/cputable.h>
#include <asm-generic/mm_hooks.h>
#include <asm/cputhreads.h>
#ifndef CONFIG_PPC64
#include <asm/atomic.h>
#include <linux/bitops.h>
/*
* On 32-bit PowerPC 6xx/7xx/7xxx CPUs, we use a set of 16 VSIDs
* (virtual segment identifiers) for each context. Although the
* hardware supports 24-bit VSIDs, and thus >1 million contexts,
* we only use 32,768 of them. That is ample, since there can be
* at most around 30,000 tasks in the system anyway, and it means
* that we can use a bitmap to indicate which contexts are in use.
* Using a bitmap means that we entirely avoid all of the problems
* that we used to have when the context number overflowed,
* particularly on SMP systems.
* -- paulus.
*/
/*
* This function defines the mapping from contexts to VSIDs (virtual
* segment IDs). We use a skew on both the context and the high 4 bits
* of the 32-bit virtual address (the "effective segment ID") in order
* to spread out the entries in the MMU hash table. Note, if this
* function is changed then arch/ppc/mm/hashtable.S will have to be
* changed to correspond.
*/
#define CTX_TO_VSID(ctx, va) (((ctx) * (897 * 16) + ((va) >> 28) * 0x111) \
& 0xffffff)
/*
The MPC8xx has only 16 contexts. We rotate through them on each
task switch. A better way would be to keep track of tasks that
own contexts, and implement an LRU usage. That way very active
tasks don't always have to pay the TLB reload overhead. The
kernel pages are mapped shared, so the kernel can run on behalf
of any task that makes a kernel entry. Shared does not mean they
are not protected, just that the ASID comparison is not performed.
-- Dan
The IBM4xx has 256 contexts, so we can just rotate through these
as a way of "switching" contexts. If the TID of the TLB is zero,
the PID/TID comparison is disabled, so we can use a TID of zero
to represent all kernel pages as shared among all contexts.
-- Dan
*/
static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
{
}
#ifdef CONFIG_8xx
#define NO_CONTEXT 16
#define LAST_CONTEXT 15
#define FIRST_CONTEXT 0
#elif defined(CONFIG_4xx)
#define NO_CONTEXT 256
#define LAST_CONTEXT 255
#define FIRST_CONTEXT 1
#elif defined(CONFIG_E200) || defined(CONFIG_E500)
#define NO_CONTEXT 256
#define LAST_CONTEXT 255
#define FIRST_CONTEXT 1
#else
/* PPC 6xx, 7xx CPUs */
#define NO_CONTEXT ((unsigned long) -1)
#define LAST_CONTEXT 32767
#define FIRST_CONTEXT 1
#endif
/*
* Set the current MMU context.
* On 32-bit PowerPCs (other than the 8xx embedded chips), this is done by
* loading up the segment registers for the user part of the address space.
*
* Since the PGD is immediately available, it is much faster to simply
* pass this along as a second parameter, which is required for 8xx and
* can be used for debugging on all processors (if you happen to have
* an Abatron).
*/
extern void set_context(unsigned long contextid, pgd_t *pgd);
/*
* Bitmap of contexts in use.
* The size of this bitmap is LAST_CONTEXT + 1 bits.
*/
extern unsigned long context_map[];
/*
* This caches the next context number that we expect to be free.
* Its use is an optimization only, we can't rely on this context
* number to be free, but it usually will be.
*/
extern unsigned long next_mmu_context;
/*
* If we don't have sufficient contexts to give one to every task
* that could be in the system, we need to be able to steal contexts.
* These variables support that.
*/
#if LAST_CONTEXT < 30000
#define FEW_CONTEXTS 1
extern atomic_t nr_free_contexts;
extern struct mm_struct *context_mm[LAST_CONTEXT+1];
extern void steal_context(void);
#endif
/*
* Get a new mmu context for the address space described by `mm'.
*/
static inline void get_mmu_context(struct mm_struct *mm)
{
unsigned long ctx;
if (mm->context.id != NO_CONTEXT)
return;
#ifdef FEW_CONTEXTS
while (atomic_dec_if_positive(&nr_free_contexts) < 0)
steal_context();
#endif
ctx = next_mmu_context;
while (test_and_set_bit(ctx, context_map)) {
ctx = find_next_zero_bit(context_map, LAST_CONTEXT+1, ctx);
if (ctx > LAST_CONTEXT)
ctx = 0;
}
next_mmu_context = (ctx + 1) & LAST_CONTEXT;
mm->context.id = ctx;
#ifdef FEW_CONTEXTS
context_mm[ctx] = mm;
#endif
}
/*
* Set up the context for a new address space.
*/
static inline int init_new_context(struct task_struct *t, struct mm_struct *mm)
{
mm->context.id = NO_CONTEXT;
return 0;
}
/*
* We're finished using the context for an address space.
*/
static inline void destroy_context(struct mm_struct *mm)
{
preempt_disable();
if (mm->context.id != NO_CONTEXT) {
clear_bit(mm->context.id, context_map);
mm->context.id = NO_CONTEXT;
#ifdef FEW_CONTEXTS
atomic_inc(&nr_free_contexts);
#endif
}
preempt_enable();
}
static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
{
#ifdef CONFIG_ALTIVEC
if (cpu_has_feature(CPU_FTR_ALTIVEC))
asm volatile ("dssall;\n"
#ifndef CONFIG_POWER4
"sync;\n" /* G4 needs a sync here, G5 apparently not */
#endif
: : );
#endif /* CONFIG_ALTIVEC */
tsk->thread.pgdir = next->pgd;
/* No need to flush userspace segments if the mm doesn't change */
if (prev == next)
return;
/* Setup new userspace context */
get_mmu_context(next);
set_context(next->context.id, next->pgd);
}
#define deactivate_mm(tsk,mm) do { } while (0)
/*
* After we have set current->mm to a new value, this activates
* the context for the new mm so we see the new mappings.
*/
#define activate_mm(active_mm, mm) switch_mm(active_mm, mm, current)
/*
* Most of the context management is out of line
*/
extern void mmu_context_init(void);
#else
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/sched.h>
/*
* Copyright (C) 2001 PPC 64 Team, IBM Corp
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
static inline void enter_lazy_tlb(struct mm_struct *mm,
struct task_struct *tsk)
{
}
/*
* The proto-VSID space has 2^35 - 1 segments available for user mappings.
* Each segment contains 2^28 bytes. Each context maps 2^44 bytes,
* so we can support 2^19-1 contexts (19 == 35 + 28 - 44).
*/
#define NO_CONTEXT 0
#define MAX_CONTEXT ((1UL << 19) - 1)
extern int init_new_context(struct task_struct *tsk, struct mm_struct *mm);
extern void destroy_context(struct mm_struct *mm);
extern void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next);
extern void switch_stab(struct task_struct *tsk, struct mm_struct *mm);
extern void switch_slb(struct task_struct *tsk, struct mm_struct *mm);
extern void set_context(unsigned long id, pgd_t *pgd);
/*
* switch_mm is the entry point called from the architecture independent
@@ -241,22 +30,39 @@ extern void switch_slb(struct task_struct *tsk, struct mm_struct *mm);
static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk)
{
if (!cpu_isset(smp_processor_id(), next->cpu_vm_mask))
cpu_set(smp_processor_id(), next->cpu_vm_mask);
/* Mark that this context has been used on the new CPU */
cpu_set(smp_processor_id(), next->cpu_vm_mask);
/* 32-bit keeps track of the current PGDIR in the thread struct */
#ifdef CONFIG_PPC32
tsk->thread.pgdir = next->pgd;
#endif /* CONFIG_PPC32 */
/* No need to flush userspace segments if the mm doesn't change */
/* Nothing else to do if we aren't actually switching */
if (prev == next)
return;
/* We must stop all altivec streams before changing the HW
* context
*/
#ifdef CONFIG_ALTIVEC
if (cpu_has_feature(CPU_FTR_ALTIVEC))
asm volatile ("dssall");
#endif /* CONFIG_ALTIVEC */
/* The actual HW switching method differs between the various
* sub architectures.
*/
#ifdef CONFIG_PPC_STD_MMU_64
if (cpu_has_feature(CPU_FTR_SLB))
switch_slb(tsk, next);
else
switch_stab(tsk, next);
#else
/* Out of line for now */
switch_mmu_context(prev, next);
#endif
}
#define deactivate_mm(tsk,mm) do { } while (0)
@@ -274,6 +80,11 @@ static inline void activate_mm(struct mm_struct *prev, struct mm_struct *next)
local_irq_restore(flags);
}
#endif /* CONFIG_PPC64 */
/* We don't currently use enter_lazy_tlb() for anything */
static inline void enter_lazy_tlb(struct mm_struct *mm,
struct task_struct *tsk)
{
}
#endif /* __KERNEL__ */
#endif /* __ASM_POWERPC_MMU_CONTEXT_H */
@@ -239,6 +239,25 @@ struct mpc52xx_cdm {
u16 mclken_div_psc6; /* CDM + 0x36 reg13 byte2,3 */
};
/* Interrupt controller Register set */
struct mpc52xx_intr {
u32 per_mask; /* INTR + 0x00 */
u32 per_pri1; /* INTR + 0x04 */
u32 per_pri2; /* INTR + 0x08 */
u32 per_pri3; /* INTR + 0x0c */
u32 ctrl; /* INTR + 0x10 */
u32 main_mask; /* INTR + 0x14 */
u32 main_pri1; /* INTR + 0x18 */
u32 main_pri2; /* INTR + 0x1c */
u32 reserved1; /* INTR + 0x20 */
u32 enc_status; /* INTR + 0x24 */
u32 crit_status; /* INTR + 0x28 */
u32 main_status; /* INTR + 0x2c */
u32 per_status; /* INTR + 0x30 */
u32 reserved2; /* INTR + 0x34 */
u32 per_error; /* INTR + 0x38 */
};
#endif /* __ASSEMBLY__ */
...
@@ -68,12 +68,20 @@
#define MPC52xx_PSC_IMR_ORERR 0x1000
#define MPC52xx_PSC_IMR_IPC 0x8000
/* PSC input port change bit */
/* PSC input port change bits */
#define MPC52xx_PSC_CTS 0x01
#define MPC52xx_PSC_DCD 0x02
#define MPC52xx_PSC_D_CTS 0x10
#define MPC52xx_PSC_D_DCD 0x20
/* PSC acr bits */
#define MPC52xx_PSC_IEC_CTS 0x01
#define MPC52xx_PSC_IEC_DCD 0x02
/* PSC output port bits */
#define MPC52xx_PSC_OP_RTS 0x01
#define MPC52xx_PSC_OP_RES 0x02
/* PSC mode fields */
#define MPC52xx_PSC_MODE_5_BITS 0x00
#define MPC52xx_PSC_MODE_6_BITS 0x01
@@ -91,6 +99,7 @@
#define MPC52xx_PSC_MODE_ONE_STOP_5_BITS 0x00
#define MPC52xx_PSC_MODE_ONE_STOP 0x07
#define MPC52xx_PSC_MODE_TWO_STOP 0x0f
#define MPC52xx_PSC_MODE_TXCTS 0x10
#define MPC52xx_PSC_RFNUM_MASK 0x01ff
...
/*
* Pull in the generic implementation for the mutex fastpath.
* Optimised mutex implementation of include/asm-generic/mutex-dec.h algorithm
*/
#ifndef _ASM_POWERPC_MUTEX_H
#define _ASM_POWERPC_MUTEX_H
static inline int __mutex_cmpxchg_lock(atomic_t *v, int old, int new)
{
int t;
__asm__ __volatile__ (
"1: lwarx %0,0,%1 # mutex trylock\n\
cmpw 0,%0,%2\n\
bne- 2f\n"
PPC405_ERR77(0,%1)
" stwcx. %3,0,%1\n\
bne- 1b"
ISYNC_ON_SMP
"\n\
2:"
: "=&r" (t)
: "r" (&v->counter), "r" (old), "r" (new)
: "cc", "memory");
return t;
}
static inline int __mutex_dec_return_lock(atomic_t *v)
{
int t;
__asm__ __volatile__(
"1: lwarx %0,0,%1 # mutex lock\n\
addic %0,%0,-1\n"
PPC405_ERR77(0,%1)
" stwcx. %0,0,%1\n\
bne- 1b"
ISYNC_ON_SMP
: "=&r" (t)
: "r" (&v->counter)
: "cc", "memory");
return t;
}
static inline int __mutex_inc_return_unlock(atomic_t *v)
{
int t;
__asm__ __volatile__(
LWSYNC_ON_SMP
"1: lwarx %0,0,%1 # mutex unlock\n\
addic %0,%0,1\n"
PPC405_ERR77(0,%1)
" stwcx. %0,0,%1 \n\
bne- 1b"
: "=&r" (t)
: "r" (&v->counter)
: "cc", "memory");
return t;
}
/**
* __mutex_fastpath_lock - try to take the lock by moving the count
* from 1 to a 0 value
* @count: pointer of type atomic_t
* @fail_fn: function to call if the original value was not 1
*
* Change the count from 1 to a value lower than 1, and call <fail_fn> if
* it wasn't 1 originally. This function MUST leave the value lower than
* 1 even when the "1" assertion wasn't true.
*/
static inline void
__mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
{
if (unlikely(__mutex_dec_return_lock(count) < 0))
fail_fn(count);
}
/**
* __mutex_fastpath_lock_retval - try to take the lock by moving the count
* from 1 to a 0 value
* @count: pointer of type atomic_t
* @fail_fn: function to call if the original value was not 1
*
* Change the count from 1 to a value lower than 1, and call <fail_fn> if
* it wasn't 1 originally. This function returns 0 if the fastpath succeeds,
* or anything the slow path function returns.
*/
static inline int
__mutex_fastpath_lock_retval(atomic_t *count, int (*fail_fn)(atomic_t *))
{
if (unlikely(__mutex_dec_return_lock(count) < 0))
return fail_fn(count);
return 0;
}
/**
* __mutex_fastpath_unlock - try to promote the count from 0 to 1
* @count: pointer of type atomic_t
* @fail_fn: function to call if the original value was not 0
*
* Try to promote the count from 0 to 1. If it wasn't 0, call <fail_fn>.
* In the failure case, this function is allowed to either set the value to
* 1, or to set it to a value lower than 1.
*/
static inline void
__mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
{
if (unlikely(__mutex_inc_return_unlock(count) <= 0))
fail_fn(count);
}
#define __mutex_slowpath_needs_to_unlock() 1
/**
* __mutex_fastpath_trylock - try to acquire the mutex, without waiting
*
* @count: pointer of type atomic_t
* @fail_fn: fallback function
*
* TODO: implement optimized primitives instead, or leave the generic
* implementation in place, or pick the atomic_xchg() based generic
* implementation. (see asm-generic/mutex-xchg.h for details)
*
* Change the count from 1 to 0, and return 1 (success), or if the count
* was not 1, then return 0 (failure).
*/
static inline int
__mutex_fastpath_trylock(atomic_t *count, int (*fail_fn)(atomic_t *))
{
if (likely(__mutex_cmpxchg_lock(count, 1, 0) == 1))
return 1;
return 0;
}
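Editor's illustration, not from the patch: the protocol these fastpaths implement can be sketched in plain C11, where count 1 means unlocked, 0 locked, and negative locked with possible waiters; the fail function stands in for the out-of-line slowpath:

	#include <stdatomic.h>
	#include <stdio.h>

	static atomic_int count = 1;

	static void slowpath(atomic_int *c) { printf("contended\n"); }

	static void lock(void)
	{
		/* dec-and-test, like __mutex_dec_return_lock() */
		if (atomic_fetch_sub(&count, 1) - 1 < 0)
			slowpath(&count);
	}

	static void unlock(void)
	{
		/* inc-and-test, like __mutex_inc_return_unlock() */
		if (atomic_fetch_add(&count, 1) + 1 <= 0)
			slowpath(&count);	/* wake a waiter */
	}

	int main(void)
	{
		lock();
		unlock();
		return 0;
	}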
#include <asm-generic/mutex-dec.h>
#endif
@@ -19,12 +19,15 @@
#include <asm/kdump.h>
/*
* On PPC32 page size is 4K. For PPC64 we support either 4K or 64K software
* On regular PPC32 page size is 4K (but we support 4K/16K/64K pages
* on PPC44x). For PPC64 we support either 4K or 64K software
* page size. When using 64K pages however, whether we are really supporting
* 64K pages in HW or not is irrelevant to those definitions.
*/
#ifdef CONFIG_PPC_64K_PAGES
#if defined(CONFIG_PPC_64K_PAGES)
#define PAGE_SHIFT 16
#elif defined(CONFIG_PPC_16K_PAGES)
#define PAGE_SHIFT 14
#else
#define PAGE_SHIFT 12
#endif
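Editor's note, not part of the patch: the page size and mask follow mechanically from PAGE_SHIFT, e.g. for the new 16K case:

	#include <stdio.h>

	int main(void)
	{
		unsigned long shift = 14;			/* CONFIG_PPC_16K_PAGES */
		unsigned long page_size = 1UL << shift;		/* 16384 */
		unsigned long page_mask = ~(page_size - 1);	/* low 14 bits clear */

		printf("PAGE_SIZE=%lu PAGE_MASK=0x%lx\n", page_size, page_mask);
		return 0;
	}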
@@ -151,7 +154,7 @@ typedef struct { pte_basic_t pte; } pte_t;
/* 64k pages additionally define a bigger "real PTE" type that gathers
* the "second half" part of the PTE for pseudo 64k pages
*/
#ifdef CONFIG_PPC_64K_PAGES
#if defined(CONFIG_PPC_64K_PAGES) && defined(CONFIG_PPC_STD_MMU_64)
typedef struct { pte_t pte; unsigned long hidx; } real_pte_t;
#else
typedef struct { pte_t pte; } real_pte_t;
@@ -191,10 +194,10 @@ typedef pte_basic_t pte_t;
#define pte_val(x) (x)
#define __pte(x) (x)
#ifdef CONFIG_PPC_64K_PAGES
#if defined(CONFIG_PPC_64K_PAGES) && defined(CONFIG_PPC_STD_MMU_64)
typedef struct { pte_t pte; unsigned long hidx; } real_pte_t;
#else
typedef unsigned long real_pte_t;
typedef pte_t real_pte_t;
#endif
...
@@ -19,6 +19,8 @@
#define PTE_FLAGS_OFFSET 0
#endif
#define PTE_SHIFT (PAGE_SHIFT - PTE_T_LOG2) /* full page */
#ifndef __ASSEMBLY__
/*
* The basic type of a PTE - 64 bits for those CPUs with > 32 bit
@@ -26,10 +28,8 @@
*/
#ifdef CONFIG_PTE_64BIT
typedef unsigned long long pte_basic_t;
#define PTE_SHIFT (PAGE_SHIFT - 3) /* 512 ptes per page */
#else
typedef unsigned long pte_basic_t;
#define PTE_SHIFT (PAGE_SHIFT - 2) /* 1024 ptes per page */
#endif
struct page;
@@ -39,6 +39,9 @@ extern void copy_page(void *to, void *from);
#include <asm-generic/page.h>
#define PGD_T_LOG2 (__builtin_ffs(sizeof(pgd_t)) - 1)
#define PTE_T_LOG2 (__builtin_ffs(sizeof(pte_t)) - 1)
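Editor's note, not part of the patch: __builtin_ffs returns the 1-based index of the lowest set bit, so for a power-of-two size, ffs(size) - 1 equals log2(size):

	#include <stdio.h>

	int main(void)
	{
		printf("%d\n", __builtin_ffs(4) - 1);	/* 2: 4-byte pte_t */
		printf("%d\n", __builtin_ffs(8) - 1);	/* 3: 8-byte pte_t (CONFIG_PTE_64BIT) */
		return 0;
	}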
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_PAGE_32_H */
@@ -13,7 +13,6 @@
struct device_node;
extern unsigned int ppc_pci_flags;
enum {
/* Force re-assigning all resources (ignore firmware
* setup completely)
@@ -36,6 +35,31 @@ enum {
/* ... except for domain 0 */
PPC_PCI_COMPAT_DOMAIN_0 = 0x00000020,
};
#ifdef CONFIG_PCI
extern unsigned int ppc_pci_flags;
static inline void ppc_pci_set_flags(int flags)
{
ppc_pci_flags = flags;
}
static inline void ppc_pci_add_flags(int flags)
{
ppc_pci_flags |= flags;
}
static inline int ppc_pci_has_flag(int flag)
{
return (ppc_pci_flags & flag);
}
#else
static inline void ppc_pci_set_flags(int flags) { }
static inline void ppc_pci_add_flags(int flags) { }
static inline int ppc_pci_has_flag(int flag)
{
return 0;
}
#endif
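A hedged usage sketch (editor's illustration, not from this patch; myboard_pci_init is a hypothetical platform hook): board code turns flags on early in boot, and the accessors compile away cleanly when CONFIG_PCI is off:

	static void __init myboard_pci_init(void)
	{
		ppc_pci_add_flags(PPC_PCI_REASSIGN_ALL_BUS);

		if (ppc_pci_has_flag(PPC_PCI_REASSIGN_ALL_BUS))
			pr_debug("PCI bus numbers will be reassigned\n");
	}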
/*
@@ -241,9 +265,6 @@ extern void pcibios_remove_pci_devices(struct pci_bus *bus);
/** Discover new pci devices under this bus, and add them */
extern void pcibios_add_pci_devices(struct pci_bus *bus);
extern void pcibios_fixup_new_pci_devices(struct pci_bus *bus);
extern int pcibios_remove_root_bus(struct pci_controller *phb);
static inline struct pci_controller *pci_bus_to_host(const struct pci_bus *bus)
{
@@ -290,6 +311,7 @@ extern void pci_process_bridge_OF_ranges(struct pci_controller *hose,
/* Allocate & free a PCI host bridge structure */
extern struct pci_controller *pcibios_alloc_controller(struct device_node *dev);
extern void pcibios_free_controller(struct pci_controller *phb);
extern void pcibios_setup_phb_resources(struct pci_controller *hose);
#ifdef CONFIG_PCI
extern unsigned long pci_address_to_pio(phys_addr_t address);
...
@@ -38,8 +38,8 @@ struct pci_dev;
* Set this to 1 if you want the kernel to re-assign all PCI
* bus numbers (don't do that on ppc64 yet !)
*/
#define pcibios_assign_all_busses() (ppc_pci_flags & \
PPC_PCI_REASSIGN_ALL_BUS)
#define pcibios_assign_all_busses() \
(ppc_pci_has_flag(PPC_PCI_REASSIGN_ALL_BUS))
#define pcibios_scan_all_fns(a, b) 0
static inline void pcibios_set_master(struct pci_dev *dev)
@@ -204,15 +204,14 @@ static inline struct resource *pcibios_select_root(struct pci_dev *pdev,
return root;
}
extern void pcibios_setup_new_device(struct pci_dev *dev);
extern void pcibios_claim_one_bus(struct pci_bus *b);
extern void pcibios_allocate_bus_resources(struct pci_bus *bus);
extern void pcibios_finish_adding_to_bus(struct pci_bus *bus);
extern void pcibios_resource_survey(void);
extern struct pci_controller *init_phb_dynamic(struct device_node *dn);
extern int remove_phb_dynamic(struct pci_controller *phb);
extern struct pci_dev *of_create_pci_dev(struct device_node *node,
struct pci_bus *bus, int devfn);
@@ -221,6 +220,7 @@ extern void of_scan_pci_bridge(struct device_node *node,
struct pci_dev *dev);
extern void of_scan_bus(struct device_node *node, struct pci_bus *bus);
extern void of_rescan_bus(struct device_node *node, struct pci_bus *bus);
extern int pci_read_irq_line(struct pci_dev *dev);
@@ -235,9 +235,8 @@ extern void pci_resource_to_user(const struct pci_dev *dev, int bar,
const struct resource *rsrc,
resource_size_t *start, resource_size_t *end);
extern void pcibios_do_bus_setup(struct pci_bus *bus);
extern void pcibios_fixup_of_probed_bus(struct pci_bus *bus);
extern void pcibios_setup_bus_devices(struct pci_bus *bus);
extern void pcibios_setup_bus_self(struct pci_bus *bus);
#endif /* __KERNEL__ */
#endif /* __ASM_POWERPC_PCI_H */
@@ -3,6 +3,8 @@
#include <linux/threads.h>
#define PTE_NONCACHE_NUM 0 /* dummy for now to share code w/ppc64 */
extern void __bad_pte(pmd_t *pmd);
extern pgd_t *pgd_alloc(struct mm_struct *mm);
@@ -33,10 +35,13 @@ extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr);
extern pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addr);
extern void pte_free_kernel(struct mm_struct *mm, pte_t *pte);
extern void pte_free(struct mm_struct *mm, pgtable_t pte);
#define __pte_free_tlb(tlb, pte) pte_free((tlb)->mm, (pte))
static inline void pgtable_free(pgtable_free_t pgf)
{
void *p = (void *)(pgf.val & ~PGF_CACHENUM_MASK);
free_page((unsigned long)p);
}
#define check_pgt_cache() do { } while (0)
...
@@ -7,7 +7,6 @@
* 2 of the License, or (at your option) any later version.
*/
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/cpumask.h>
#include <linux/percpu.h>
@@ -108,31 +107,6 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
return page;
}
static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
free_page((unsigned long)pte);
}
static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
{
pgtable_page_dtor(ptepage);
__free_page(ptepage);
}
#define PGF_CACHENUM_MASK 0x7
typedef struct pgtable_free {
unsigned long val;
} pgtable_free_t;
static inline pgtable_free_t pgtable_free_cache(void *p, int cachenum,
unsigned long mask)
{
BUG_ON(cachenum > PGF_CACHENUM_MASK);
return (pgtable_free_t){.val = ((unsigned long) p & ~mask) | cachenum};
}
static inline void pgtable_free(pgtable_free_t pgf)
{
void *p = (void *)(pgf.val & ~PGF_CACHENUM_MASK);
@@ -144,14 +118,6 @@ static inline void pgtable_free(pgtable_free_t pgf)
kmem_cache_free(pgtable_cache[cachenum], p);
}
extern void pgtable_free_tlb(struct mmu_gather *tlb, pgtable_free_t pgf);
#define __pte_free_tlb(tlb,ptepage) \
do { \
pgtable_page_dtor(ptepage); \
pgtable_free_tlb(tlb, pgtable_free_cache(page_address(ptepage), \
PTE_NONCACHE_NUM, PTE_TABLE_SIZE-1)); \
} while (0)
#define __pmd_free_tlb(tlb, pmd) \
pgtable_free_tlb(tlb, pgtable_free_cache(pmd, \
PMD_CACHE_NUM, PMD_TABLE_SIZE-1))
...
@@ -2,11 +2,52 @@
#define _ASM_POWERPC_PGALLOC_H
#ifdef __KERNEL__
#include <linux/mm.h>
static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
free_page((unsigned long)pte);
}
static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
{
pgtable_page_dtor(ptepage);
__free_page(ptepage);
}
typedef struct pgtable_free {
unsigned long val;
} pgtable_free_t;
#define PGF_CACHENUM_MASK 0x7
static inline pgtable_free_t pgtable_free_cache(void *p, int cachenum,
unsigned long mask)
{
BUG_ON(cachenum > PGF_CACHENUM_MASK);
return (pgtable_free_t){.val = ((unsigned long) p & ~mask) | cachenum};
}
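Editor's illustration of the tagging trick, not part of the patch: because a page table's address is aligned far beyond 8 bytes, a 3-bit cache index can ride in the low bits of the pointer and be masked back out:

	#include <assert.h>
	#include <stdint.h>

	#define PGF_CACHENUM_MASK 0x7UL

	int main(void)
	{
		uintptr_t table = 0x100000;		/* example aligned address */
		unsigned long mask = 0x1000 - 1;	/* e.g. PTE_TABLE_SIZE - 1 */
		unsigned long cachenum = 3;		/* must fit the low 3 bits */

		unsigned long val = (table & ~mask) | cachenum;
		assert((uintptr_t)(val & ~PGF_CACHENUM_MASK) == table);
		assert((val & PGF_CACHENUM_MASK) == cachenum);
		return 0;
	}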
#ifdef CONFIG_PPC64
#include <asm/pgalloc-64.h>
#else
#include <asm/pgalloc-32.h>
#endif
extern void pgtable_free_tlb(struct mmu_gather *tlb, pgtable_free_t pgf);
#ifdef CONFIG_SMP
#define __pte_free_tlb(tlb,ptepage) \
do { \
pgtable_page_dtor(ptepage); \
pgtable_free_tlb(tlb, pgtable_free_cache(page_address(ptepage), \
PTE_NONCACHE_NUM, PTE_TABLE_SIZE-1)); \
} while (0)
#else
#define __pte_free_tlb(tlb, pte) pte_free((tlb)->mm, (pte))
#endif
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_PGALLOC_H */
...
@@ -228,9 +228,10 @@ extern int icache_44x_need_flush;
* - FILE *must* be in the bottom three bits because swap cache
* entries use the top 29 bits for TLB2.
*
* - CACHE COHERENT bit (M) has no effect on PPC440 core, because it
* doesn't support SMP. So we can use this as software bit, like
* DIRTY.
* - CACHE COHERENT bit (M) has no effect on original PPC440 cores,
* because it doesn't support SMP. However, some later 460 variants
* have -some- form of SMP support and so I keep the bit there for
* future use
*
* With the PPC 44x Linux implementation, the 0-11th LSBs of the PTE are used
* for memory protection related functions (see PTE structure in
@@ -436,20 +437,23 @@ extern int icache_44x_need_flush;
_PAGE_USER | _PAGE_ACCESSED | \
_PAGE_RW | _PAGE_HWWRITE | _PAGE_DIRTY | \
_PAGE_EXEC | _PAGE_HWEXEC)
/*
* Note: the _PAGE_COHERENT bit automatically gets set in the hardware
* PTE if CONFIG_SMP is defined (hash_page does this); there is no need
* to have it in the Linux PTE, and in fact the bit could be reused for
* another purpose. -- paulus.
*/
#ifdef CONFIG_44x
#define _PAGE_BASE (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_GUARDED)
#else
#define _PAGE_BASE (_PAGE_PRESENT | _PAGE_ACCESSED)
#endif
/*
* We define 2 sets of base prot bits, one for basic pages (ie,
* cacheable kernel and user pages) and one for non cacheable
* pages. We always set _PAGE_COHERENT when SMP is enabled or
* the processor might need it for DMA coherency.
*/
#if defined(CONFIG_SMP) || defined(CONFIG_PPC_STD_MMU)
#define _PAGE_BASE (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_COHERENT)
#else
#define _PAGE_BASE (_PAGE_PRESENT | _PAGE_ACCESSED)
#endif
#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_NO_CACHE)
#define _PAGE_WRENABLE (_PAGE_RW | _PAGE_DIRTY | _PAGE_HWWRITE)
#define _PAGE_KERNEL (_PAGE_BASE | _PAGE_SHARED | _PAGE_WRENABLE)
#define _PAGE_KERNEL_NC (_PAGE_BASE_NC | _PAGE_SHARED | _PAGE_WRENABLE)
#ifdef CONFIG_PPC_STD_MMU
/* On standard PPC MMU, no user access implies kernel read/write access,
@@ -459,7 +463,7 @@ extern int icache_44x_need_flush;
#define _PAGE_KERNEL_RO (_PAGE_BASE | _PAGE_SHARED)
#endif
#define _PAGE_IO (_PAGE_KERNEL | _PAGE_NO_CACHE | _PAGE_GUARDED)
#define _PAGE_IO (_PAGE_KERNEL_NC | _PAGE_GUARDED)
#define _PAGE_RAM (_PAGE_KERNEL | _PAGE_HWEXEC)
#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
@@ -552,9 +556,6 @@ static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
static inline int pte_special(pte_t pte) { return pte_val(pte) & _PAGE_SPECIAL; }
static inline void pte_uncache(pte_t pte) { pte_val(pte) |= _PAGE_NO_CACHE; }
static inline void pte_cache(pte_t pte) { pte_val(pte) &= ~_PAGE_NO_CACHE; }
static inline pte_t pte_wrprotect(pte_t pte) {
pte_val(pte) &= ~(_PAGE_RW | _PAGE_HWWRITE); return pte; }
static inline pte_t pte_mkclean(pte_t pte) {
@@ -693,10 +694,11 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
#endif
}
static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte)
{
#if defined(CONFIG_PTE_64BIT) && defined(CONFIG_SMP)
#if defined(CONFIG_PTE_64BIT) && defined(CONFIG_SMP) && defined(CONFIG_DEBUG_VM)
WARN_ON(pte_present(*ptep));
#endif
__set_pte_at(mm, addr, ptep, pte);
@@ -760,16 +762,6 @@ static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry, int dirty)
__changed; \
})
/*
* Macro to mark a page protection value as "uncacheable".
*/
#define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) | _PAGE_NO_CACHE | _PAGE_GUARDED))
struct file;
extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
unsigned long size, pgprot_t vma_prot);
#define __HAVE_PHYS_MEM_ACCESS_PROT
#define __HAVE_ARCH_PTE_SAME
#define pte_same(A,B) (((pte_val(A) ^ pte_val(B)) & ~_PAGE_HASHPTE) == 0)
...
@@ -100,7 +100,7 @@
#define _PAGE_WRENABLE (_PAGE_RW | _PAGE_DIRTY)
/* __pgprot defined in arch/powerpc/incliude/asm/page.h */
/* __pgprot defined in arch/powerpc/include/asm/page.h */
#define PAGE_NONE __pgprot(_PAGE_PRESENT | _PAGE_ACCESSED)
#define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_RW | _PAGE_USER)
@@ -245,9 +245,6 @@ static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED;}
static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE;}
static inline int pte_special(pte_t pte) { return pte_val(pte) & _PAGE_SPECIAL; }
static inline void pte_uncache(pte_t pte) { pte_val(pte) |= _PAGE_NO_CACHE; }
static inline void pte_cache(pte_t pte) { pte_val(pte) &= ~_PAGE_NO_CACHE; }
static inline pte_t pte_wrprotect(pte_t pte) {
pte_val(pte) &= ~(_PAGE_RW); return pte; }
static inline pte_t pte_mkclean(pte_t pte) {
@@ -405,16 +402,6 @@ static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry, int dirty)
__changed; \
})
/*
* Macro to mark a page protection value as "uncacheable".
*/
#define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) | _PAGE_NO_CACHE | _PAGE_GUARDED))
struct file;
extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
unsigned long size, pgprot_t vma_prot);
#define __HAVE_PHYS_MEM_ACCESS_PROT
#define __HAVE_ARCH_PTE_SAME
#define pte_same(A,B) (((pte_val(A) ^ pte_val(B)) & ~_PAGE_HPTEFLAGS) == 0)
...
@@ -16,6 +16,32 @@ struct mm_struct;
#endif
#ifndef __ASSEMBLY__
/*
* Macro to mark a page protection value as "uncacheable".
*/
#define _PAGE_CACHE_CTL (_PAGE_COHERENT | _PAGE_GUARDED | _PAGE_NO_CACHE | \
_PAGE_WRITETHRU)
#define pgprot_noncached(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_NO_CACHE | _PAGE_GUARDED))
#define pgprot_noncached_wc(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_NO_CACHE))
#define pgprot_cached(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_COHERENT))
#define pgprot_cached_wthru(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_COHERENT | _PAGE_WRITETHRU))
struct file;
extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
unsigned long size, pgprot_t vma_prot);
#define __HAVE_PHYS_MEM_ACCESS_PROT
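A hedged usage sketch (editor's illustration, not from this patch; mydrv_mmap is a hypothetical driver mmap handler, and the pfn handling is simplified): device-register mappings typically combine pgprot_noncached() with io_remap_pfn_range():

	static int mydrv_mmap(struct file *file, struct vm_area_struct *vma)
	{
		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
		return io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
					  vma->vm_end - vma->vm_start,
					  vma->vm_page_prot);
	}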
/*
* ZERO_PAGE is a global shared page that is always zero: used
* for zero-mapped memory areas etc..
...
@@ -425,14 +425,14 @@ END_FTR_SECTION_IFCLR(CPU_FTR_601)
#define fromreal(rd) tovirt(rd,rd)
#define tophys(rd,rs) \
0: addis rd,rs,-KERNELBASE@h; \
0: addis rd,rs,-PAGE_OFFSET@h; \
.section ".vtop_fixup","aw"; \
.align 1; \
.long 0b; \
.previous
#define tovirt(rd,rs) \
0: addis rd,rs,KERNELBASE@h; \
0: addis rd,rs,PAGE_OFFSET@h; \
.section ".ptov_fixup","aw"; \
.align 1; \
.long 0b; \
...
@@ -69,8 +69,6 @@ extern int _prep_type;
#ifdef __KERNEL__
extern int have_of;
struct task_struct;
void start_thread(struct pt_regs *regs, unsigned long fdptr, unsigned long sp);
void release_thread(struct task_struct *);
@@ -207,6 +205,11 @@ struct thread_struct {
#define INIT_SP_LIMIT \
(_ALIGN_UP(sizeof(init_thread_info), 16) + (unsigned long) &init_stack)
#ifdef CONFIG_SPE
#define SPEFSCR_INIT .spefscr = SPEFSCR_FINVE | SPEFSCR_FDBZE | SPEFSCR_FUNFE | SPEFSCR_FOVFE,
#else
#define SPEFSCR_INIT
#endif
#ifdef CONFIG_PPC32
#define INIT_THREAD { \
@@ -215,6 +218,7 @@ struct thread_struct {
.fs = KERNEL_DS, \
.pgdir = swapper_pg_dir, \
.fpexc_mode = MSR_FE0 | MSR_FE1, \
SPEFSCR_INIT \
}
#else
#define INIT_THREAD { \
...
@@ -253,6 +253,9 @@ extern void kdump_move_device_tree(void);
/* CPU OF node matching */
struct device_node *of_get_cpu_node(int cpu, unsigned int *thread);
/* cache lookup */
struct device_node *of_find_next_cache_node(struct device_node *np);
/* Get the MAC address */
extern const void *of_get_mac_address(struct device_node *np);
...
@@ -305,30 +305,34 @@ static inline const char* ps3_result(int result)
/* system bus routines */
enum ps3_match_id {
PS3_MATCH_ID_EHCI = 1,
PS3_MATCH_ID_OHCI = 2,
PS3_MATCH_ID_GELIC = 3,
PS3_MATCH_ID_AV_SETTINGS = 4,
PS3_MATCH_ID_SYSTEM_MANAGER = 5,
PS3_MATCH_ID_STOR_DISK = 6,
PS3_MATCH_ID_STOR_ROM = 7,
PS3_MATCH_ID_STOR_FLASH = 8,
PS3_MATCH_ID_SOUND = 9,
PS3_MATCH_ID_GRAPHICS = 10,
PS3_MATCH_ID_GPU = 10,
PS3_MATCH_ID_LPM = 11,
};
#define PS3_MODULE_ALIAS_EHCI "ps3:1"
#define PS3_MODULE_ALIAS_OHCI "ps3:2"
#define PS3_MODULE_ALIAS_GELIC "ps3:3"
#define PS3_MODULE_ALIAS_AV_SETTINGS "ps3:4"
#define PS3_MODULE_ALIAS_SYSTEM_MANAGER "ps3:5"
#define PS3_MODULE_ALIAS_STOR_DISK "ps3:6"
#define PS3_MODULE_ALIAS_STOR_ROM "ps3:7"
#define PS3_MODULE_ALIAS_STOR_FLASH "ps3:8"
#define PS3_MODULE_ALIAS_SOUND "ps3:9"
#define PS3_MODULE_ALIAS_GRAPHICS "ps3:10"
#define PS3_MODULE_ALIAS_LPM "ps3:11"
enum ps3_match_sub_id {
PS3_MATCH_SUB_ID_GPU_FB = 1,
};
#define PS3_MODULE_ALIAS_EHCI "ps3:1:0"
#define PS3_MODULE_ALIAS_OHCI "ps3:2:0"
#define PS3_MODULE_ALIAS_GELIC "ps3:3:0"
#define PS3_MODULE_ALIAS_AV_SETTINGS "ps3:4:0"
#define PS3_MODULE_ALIAS_SYSTEM_MANAGER "ps3:5:0"
#define PS3_MODULE_ALIAS_STOR_DISK "ps3:6:0"
#define PS3_MODULE_ALIAS_STOR_ROM "ps3:7:0"
#define PS3_MODULE_ALIAS_STOR_FLASH "ps3:8:0"
#define PS3_MODULE_ALIAS_SOUND "ps3:9:0"
#define PS3_MODULE_ALIAS_GPU_FB "ps3:10:1"
#define PS3_MODULE_ALIAS_LPM "ps3:11:0"
enum ps3_system_bus_device_type {
PS3_DEVICE_TYPE_IOC0 = 1,
@@ -337,11 +341,6 @@ enum ps3_system_bus_device_type {
PS3_DEVICE_TYPE_LPM,
};
enum ps3_match_sub_id {
/* for PS3_MATCH_ID_GRAPHICS */
PS3_MATCH_SUB_ID_FB = 1,
};
/**
* struct ps3_system_bus_device - a device on the system bus
*/
@@ -516,4 +515,7 @@ void ps3_sync_irq(int node);
u32 ps3_get_hw_thread_id(int cpu);
u64 ps3_get_spe_id(void *arg);
/* mutex synchronizing GPU accesses and video mode changes */
extern struct mutex ps3_gpu_mutex;
#endif
@@ -740,8 +740,4 @@ extern int ps3av_audio_mute(int);
extern int ps3av_audio_mute_analog(int);
extern int ps3av_dev_open(void);
extern int ps3av_dev_close(void);
extern void ps3av_register_flip_ctl(void (*flip_ctl)(int on, void *data),
void *flip_data);
extern void ps3av_flip_ctl(int on);
#endif /* _ASM_POWERPC_PS3AV_H_ */
@@ -783,6 +783,10 @@ extern void scom970_write(unsigned int address, unsigned long value);
#define __get_SP() ({unsigned long sp; \
asm volatile("mr %0,1": "=r" (sp)); sp;})
struct pt_regs;
extern void ppc_save_regs(struct pt_regs *regs);
#endif /* __ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_REG_H */
@@ -168,6 +168,7 @@ extern void rtas_os_term(char *str);
extern int rtas_get_sensor(int sensor, int index, int *state);
extern int rtas_get_power_level(int powerdomain, int *level);
extern int rtas_set_power_level(int powerdomain, int level, int *setlevel);
extern bool rtas_indicator_present(int token, int *maxindex);
extern int rtas_set_indicator(int indicator, int index, int new_value);
extern int rtas_set_indicator_fast(int indicator, int index, int new_value);
extern void rtas_progress(char *s, unsigned short hex);
...
@@ -82,7 +82,7 @@
#define _FP_MUL_MEAT_S(R,X,Y) _FP_MUL_MEAT_1_wide(_FP_WFRACBITS_S,R,X,Y,umul_ppmm)
#define _FP_MUL_MEAT_D(R,X,Y) _FP_MUL_MEAT_2_wide(_FP_WFRACBITS_D,R,X,Y,umul_ppmm)
#define _FP_DIV_MEAT_S(R,X,Y) _FP_DIV_MEAT_1_udiv(S,R,X,Y)
#define _FP_DIV_MEAT_S(R,X,Y) _FP_DIV_MEAT_1_udiv_norm(S,R,X,Y)
#define _FP_DIV_MEAT_D(R,X,Y) _FP_DIV_MEAT_2_udiv(D,R,X,Y)
/* These macros define what NaN looks like. They're supposed to expand to
@@ -97,6 +97,20 @@
#define _FP_KEEPNANFRACP 1
#ifdef FP_EX_BOOKE_E500_SPE
#define FP_EX_INEXACT (1 << 21)
#define FP_EX_INVALID (1 << 20)
#define FP_EX_DIVZERO (1 << 19)
#define FP_EX_UNDERFLOW (1 << 18)
#define FP_EX_OVERFLOW (1 << 17)
#define FP_INHIBIT_RESULTS 0
#define __FPU_FPSCR (current->thread.spefscr)
#define __FPU_ENABLED_EXC \
({ \
(__FPU_FPSCR >> 2) & 0x1f; \
})
#else
/* Exception flags. We use the bit positions of the appropriate bits
in the FPSCR, which also correspond to the FE_* bits. This makes
everything easier ;-). */
@@ -111,22 +125,6 @@
#define FP_EX_DIVZERO (1 << (31 - 5))
#define FP_EX_INEXACT (1 << (31 - 6))
/* This macro appears to be called when both X and Y are NaNs, and
* has to choose one and copy it to R. i386 goes for the larger of the
* two, sparc64 just picks Y. I don't understand this at all so I'll
* go with sparc64 because it's shorter :-> -- PMM
*/
#define _FP_CHOOSENAN(fs, wc, R, X, Y, OP) \
do { \
R##_s = Y##_s; \
_FP_FRAC_COPY_##wc(R,Y); \
R##_c = FP_CLS_NAN; \
} while (0)
#include <linux/kernel.h>
#include <linux/sched.h>
#define __FPU_FPSCR (current->thread.fpscr.val)
/* We only actually write to the destination register
@@ -137,6 +135,32 @@
(__FPU_FPSCR >> 3) & 0x1f; \
})
#endif
/*
* If one NaN is signaling and the other is not,
* we choose that one, otherwise we choose X.
*/
#define _FP_CHOOSENAN(fs, wc, R, X, Y, OP) \
do { \
if ((_FP_FRAC_HIGH_RAW_##fs(Y) & _FP_QNANBIT_##fs) \
&& !(_FP_FRAC_HIGH_RAW_##fs(X) & _FP_QNANBIT_##fs)) \
{ \
R##_s = X##_s; \
_FP_FRAC_COPY_##wc(R,X); \
} \
else \
{ \
R##_s = Y##_s; \
_FP_FRAC_COPY_##wc(R,Y); \
} \
R##_c = FP_CLS_NAN; \
} while (0)
#include <linux/kernel.h>
#include <linux/sched.h>
#define __FPU_TRAP_P(bits) \ #define __FPU_TRAP_P(bits) \
((__FPU_ENABLED_EXC & (bits)) != 0) ((__FPU_ENABLED_EXC & (bits)) != 0)
......
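The new _FP_CHOOSENAN keys off the quiet-NaN bit (_FP_QNANBIT) in the raw fraction to prefer a signaling operand. A hedged userspace sketch of the same bit test for IEEE-754 single precision, where the quiet bit is the top fraction bit (note that moving a signaling NaN through FP registers can quiet it on some hardware, so the value is handled as raw bits only):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int is_quiet_nan(uint32_t bits)
{
	/* NaN: all-ones exponent and nonzero fraction; quiet: bit 22 set */
	return ((bits & 0x7f800000) == 0x7f800000) &&
	       (bits & 0x007fffff) &&
	       (bits & 0x00400000);
}

int main(void)
{
	uint32_t snan = 0x7f800001;	/* signaling NaN pattern */
	uint32_t qnan = 0x7fc00000;	/* quiet NaN pattern */

	printf("snan quiet? %d, qnan quiet? %d\n",
	       is_quiet_nan(snan), is_quiet_nan(qnan));	/* 0, 1 */
	return 0;
}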
@@ -81,6 +81,13 @@ extern int cpu_to_core_id(int cpu);
#define PPC_MSG_CALL_FUNC_SINGLE	2
#define PPC_MSG_DEBUGGER_BREAK	3

+/*
+ * irq controllers that have dedicated ipis per message and don't
+ * need additional code in the action handler may use this
+ */
+extern int smp_request_message_ipi(int virq, int message);
+extern const char *smp_ipi_name[];
+
void smp_init_iSeries(void);
void smp_init_pSeries(void);
void smp_init_cell(void);
...
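A hedged sketch of how a platform with one virq per IPI message might use the new helper. The irq_host pointer and IPI_HWIRQ_BASE are assumptions for illustration, not from the source:

/*
 * Hypothetical platform hook: map one hw IRQ per IPI message and let
 * the generic action handler do the rest.
 */
static void __init my_request_ipis(struct irq_host *host)
{
	int msg;

	for (msg = 0; msg <= PPC_MSG_DEBUGGER_BREAK; msg++) {
		unsigned int virq = irq_create_mapping(host,
						       IPI_HWIRQ_BASE + msg);

		if (virq == NO_IRQ || smp_request_message_ipi(virq, msg))
			printk(KERN_WARNING "IPI %d setup failed\n", msg);
	}
}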
@@ -277,7 +277,7 @@ static inline void __raw_read_unlock(raw_rwlock_t *rw)
	bne-		1b"
	: "=&r"(tmp)
	: "r"(&rw->lock)
-	: "cr0", "memory");
+	: "cr0", "xer", "memory");
}

static inline void __raw_write_unlock(raw_rwlock_t *rw)
...
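The read-unlock loop decrements the counter with addic., which updates the carry bit in XER, so XER belongs in the clobber list. A hedged minimal illustration of the same pattern (PowerPC-only, simplified from the lock code):

/*
 * Sketch: an lwarx/stwcx. decrement loop. addic sets CA in XER and
 * stwcx. sets cr0, so both must be declared clobbered.
 */
static inline void atomic_dec_sketch(int *p)
{
	int tmp;

	__asm__ __volatile__(
"1:	lwarx	%0,0,%1\n\
	addic	%0,%0,-1\n\
	stwcx.	%0,0,%1\n\
	bne-	1b"
	: "=&r" (tmp)
	: "r" (p)
	: "cr0", "xer", "memory");
}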
@@ -5,6 +5,10 @@
#include <linux/stringify.h>
#include <asm/feature-fixups.h>

+#if defined(__powerpc64__) || defined(CONFIG_PPC_E500MC)
+#define __SUBARCH_HAS_LWSYNC
+#endif
+
#ifndef __ASSEMBLY__
extern unsigned int __start___lwsync_fixup, __stop___lwsync_fixup;
extern void do_lwsync_fixups(unsigned long value, void *fixup_start,
...
@@ -23,15 +23,17 @@
 * read_barrier_depends() prevents data-dependent loads being reordered
 * across this point (nop on PPC).
 *
- * We have to use the sync instructions for mb(), since lwsync doesn't
- * order loads with respect to previous stores.  Lwsync is fine for
- * rmb(), though. Note that rmb() actually uses a sync on 32-bit
- * architectures.
+ * *mb() variants without smp_ prefix must order all types of memory
+ * operations with one another. sync is the only instruction sufficient
+ * to do this.
 *
- * For wmb(), we use sync since wmb is used in drivers to order
- * stores to system memory with respect to writes to the device.
- * However, smp_wmb() can be a lighter-weight lwsync or eieio barrier
- * on SMP since it is only used to order updates to system memory.
+ * For the smp_ barriers, ordering is for cacheable memory operations
+ * only. We have to use the sync instruction for smp_mb(), since lwsync
+ * doesn't order loads with respect to previous stores. Lwsync can be
+ * used for smp_rmb() and smp_wmb().
+ *
+ * However, on CPUs that don't support lwsync, lwsync actually maps to a
+ * heavy-weight sync, so smp_wmb() can be a lighter-weight eieio.
 */
#define mb()   __asm__ __volatile__ ("sync" : : : "memory")
#define rmb()  __asm__ __volatile__ ("sync" : : : "memory")
@@ -45,14 +47,14 @@
#ifdef CONFIG_SMP
#ifdef __SUBARCH_HAS_LWSYNC
-#    define SMPWMB      lwsync
+#    define SMPWMB      LWSYNC
#else
#    define SMPWMB      eieio
#endif
#define smp_mb()	mb()
-#define smp_rmb()	rmb()
-#define smp_wmb()	__asm__ __volatile__ (__stringify(SMPWMB) : : :"memory")
+#define smp_rmb()	__asm__ __volatile__ (stringify_in_c(LWSYNC) : : :"memory")
+#define smp_wmb()	__asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
#define smp_read_barrier_depends()	read_barrier_depends()
#else
#define smp_mb()	barrier()
...
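The cacheable-only guarantee of the smp_ barriers is exactly what the classic publish/consume pairing needs. A hedged sketch; the flag and data variables are illustrative, assumed shared between two CPUs:

int data;
int flag;

void producer(void)
{
	data = 42;
	smp_wmb();	/* order the data store before the flag store */
	flag = 1;
}

void consumer(void)
{
	if (flag) {
		smp_rmb();	/* order the flag load before the data load */
		BUG_ON(data != 42);
	}
}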
@@ -48,26 +48,6 @@ extern unsigned long ppc_proc_freq;
extern unsigned long ppc_tb_freq;
#define DEFAULT_TB_FREQ		125000000UL

-/*
- * By putting all of this stuff into a single struct we
- * reduce the number of cache lines touched by do_gettimeofday.
- * Both by collecting all of the data in one cache line and
- * by touching only one TOC entry on ppc64.
- */
-struct gettimeofday_vars {
-	u64 tb_to_xs;
-	u64 stamp_xsec;
-	u64 tb_orig_stamp;
-};
-
-struct gettimeofday_struct {
-	unsigned long tb_ticks_per_sec;
-	struct gettimeofday_vars vars[2];
-	struct gettimeofday_vars * volatile varp;
-	unsigned var_idx;
-	unsigned tb_to_us;
-};
-
struct div_result {
	u64 result_high;
	u64 result_low;
...
@@ -6,6 +6,9 @@
 *
 *  - flush_tlb_mm(mm) flushes the specified mm context TLB's
 *  - flush_tlb_page(vma, vmaddr) flushes one page
+ *  - local_flush_tlb_mm(mm) flushes the specified mm context on
+ *                           the local processor
+ *  - local_flush_tlb_page(vma, vmaddr) flushes one page on the local processor
 *  - flush_tlb_page_nohash(vma, vmaddr) flushes one page if SW loaded TLB
 *  - flush_tlb_range(vma, start, end) flushes a range of pages
 *  - flush_tlb_kernel_range(start, end) flushes a range of kernel pages
@@ -17,7 +20,7 @@
 */
#ifdef __KERNEL__

-#if defined(CONFIG_4xx) || defined(CONFIG_8xx) || defined(CONFIG_FSL_BOOKE)
+#ifdef CONFIG_PPC_MMU_NOHASH
/*
 * TLB flushing for software loaded TLB chips
 *
@@ -28,63 +31,49 @@

#include <linux/mm.h>

-extern void _tlbie(unsigned long address, unsigned int pid);
-extern void _tlbil_all(void);
-extern void _tlbil_pid(unsigned int pid);
-extern void _tlbil_va(unsigned long address, unsigned int pid);
+#define MMU_NO_CONTEXT		((unsigned int)-1)

-#if defined(CONFIG_40x) || defined(CONFIG_8xx)
-#define _tlbia()	asm volatile ("tlbia; sync" : : : "memory")
-#else /* CONFIG_44x || CONFIG_FSL_BOOKE */
-extern void _tlbia(void);
-#endif
-
-static inline void flush_tlb_mm(struct mm_struct *mm)
-{
-	_tlbil_pid(mm->context.id);
-}
-
-static inline void flush_tlb_page(struct vm_area_struct *vma,
-				  unsigned long vmaddr)
-{
-	_tlbil_va(vmaddr, vma ? vma->vm_mm->context.id : 0);
-}
+extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+			    unsigned long end);
+extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);

-static inline void flush_tlb_page_nohash(struct vm_area_struct *vma,
-					 unsigned long vmaddr)
-{
-	flush_tlb_page(vma, vmaddr);
-}
+extern void local_flush_tlb_mm(struct mm_struct *mm);
+extern void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);

-static inline void flush_tlb_range(struct vm_area_struct *vma,
-				   unsigned long start, unsigned long end)
-{
-	_tlbil_pid(vma->vm_mm->context.id);
-}
-
-static inline void flush_tlb_kernel_range(unsigned long start,
-					  unsigned long end)
-{
-	_tlbil_pid(0);
-}
+#ifdef CONFIG_SMP
+extern void flush_tlb_mm(struct mm_struct *mm);
+extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+#else
+#define flush_tlb_mm(mm)		local_flush_tlb_mm(mm)
+#define flush_tlb_page(vma,addr)	local_flush_tlb_page(vma,addr)
+#endif
+#define flush_tlb_page_nohash(vma,addr)	flush_tlb_page(vma,addr)

-#elif defined(CONFIG_PPC32)
+#elif defined(CONFIG_PPC_STD_MMU_32)

/*
- * TLB flushing for "classic" hash-MMMU 32-bit CPUs, 6xx, 7xx, 7xxx
+ * TLB flushing for "classic" hash-MMU 32-bit CPUs, 6xx, 7xx, 7xxx
 */
-extern void _tlbie(unsigned long address);
-extern void _tlbia(void);
extern void flush_tlb_mm(struct mm_struct *mm);
extern void flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
extern void flush_tlb_page_nohash(struct vm_area_struct *vma, unsigned long addr);
extern void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
			    unsigned long end);
extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
+static inline void local_flush_tlb_page(struct vm_area_struct *vma,
+					unsigned long vmaddr)
+{
+	flush_tlb_page(vma, vmaddr);
+}
+static inline void local_flush_tlb_mm(struct mm_struct *mm)
+{
+	flush_tlb_mm(mm);
+}

-#else
+#elif defined(CONFIG_PPC_STD_MMU_64)

/*
- * TLB flushing for 64-bit has-MMU CPUs
+ * TLB flushing for 64-bit hash-MMU CPUs
 */

#include <linux/percpu.h>
@@ -134,10 +123,19 @@ extern void flush_hash_page(unsigned long va, real_pte_t pte, int psize,
extern void flush_hash_range(unsigned long number, int local);

+static inline void local_flush_tlb_mm(struct mm_struct *mm)
+{
+}
+
static inline void flush_tlb_mm(struct mm_struct *mm)
{
}

+static inline void local_flush_tlb_page(struct vm_area_struct *vma,
+					unsigned long vmaddr)
+{
+}
+
static inline void flush_tlb_page(struct vm_area_struct *vma,
				  unsigned long vmaddr)
{
@@ -162,7 +160,8 @@ static inline void flush_tlb_kernel_range(unsigned long start,
extern void __flush_hash_table_range(struct mm_struct *mm, unsigned long start,
				     unsigned long end);

-
+#else
+#error Unsupported MMU type
#endif

#endif /*__KERNEL__ */
...
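The split into local_ and global flavours matters on SMP: the global calls must shoot down stale entries on every CPU that ran the context, while the local_ variants touch only the current CPU (and UP builds simply alias the global names to them). A hedged sketch; the mm pointer is illustrative:

/*
 * Sketch: choosing between the two flavours.
 */
static void flush_examples(struct mm_struct *mm)
{
	flush_tlb_mm(mm);	/* SMP: broadcasts to all CPUs (IPIs) */
	local_flush_tlb_mm(mm);	/* current CPU only; no cross-CPU work */
}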
@@ -39,6 +39,7 @@
#ifndef __ASSEMBLY__

#include <linux/unistd.h>
+#include <linux/time.h>

#define SYSCALL_MAP_SIZE      ((__NR_syscalls + 31) / 32)
@@ -83,6 +84,7 @@ struct vdso_data {
	__u32 icache_log_block_size;		/* L1 i-cache log block size */
	__s32 wtom_clock_sec;			/* Wall to monotonic clock */
	__s32 wtom_clock_nsec;
+	struct timespec stamp_xtime;		/* xtime as at tb_orig_stamp */
	__u32 syscall_map_64[SYSCALL_MAP_SIZE]; /* map of syscalls  */
	__u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
};
@@ -102,6 +104,7 @@ struct vdso_data {
	__u32 tz_dsttime;		/* Type of dst correction	0x5C */
	__s32 wtom_clock_sec;		/* Wall to monotonic clock */
	__s32 wtom_clock_nsec;
+	struct timespec stamp_xtime;	/* xtime as at tb_orig_stamp */
	__u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */
	__u32 dcache_block_size;	/* L1 d-cache block size     */
	__u32 icache_block_size;	/* L1 i-cache block size     */
...
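stamp_xtime gives the vDSO a wall-clock snapshot taken at tb_orig_stamp, so user code can derive the current time from the timebase without a syscall. A heavily hedged kernel-style sketch; the real vDSO does this in assembly with tb_to_xs fixed-point scaling, and the plain nanosecond arithmetic below is a simplification that can overflow for large deltas:

static struct timespec vdso_gettime_sketch(const struct vdso_data *vd)
{
	struct timespec ts = vd->stamp_xtime;
	u64 ticks = get_tb() - vd->tb_orig_stamp;	/* ticks since stamp */

	ts.tv_nsec += div_u64(ticks * NSEC_PER_SEC, ppc_tb_freq);
	while (ts.tv_nsec >= NSEC_PER_SEC) {
		ts.tv_nsec -= NSEC_PER_SEC;
		ts.tv_sec++;
	}
	return ts;
}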
@@ -103,6 +103,10 @@ endif

obj-$(CONFIG_PPC64)		+= $(obj64-y)

+ifneq ($(CONFIG_XMON)$(CONFIG_KEXEC),)
+obj-y				+= ppc_save_regs.o
+endif
+
extra-$(CONFIG_PPC_FPU)		+= fpu.o
extra-$(CONFIG_PPC64)		+= entry_64.o
...
@@ -60,6 +60,7 @@ int main(void)
{
	DEFINE(THREAD, offsetof(struct task_struct, thread));
	DEFINE(MM, offsetof(struct task_struct, mm));
+	DEFINE(MMCONTEXTID, offsetof(struct mm_struct, context.id));
#ifdef CONFIG_PPC64
	DEFINE(AUDITCONTEXT, offsetof(struct task_struct, audit_context));
#else
@@ -306,6 +307,7 @@ int main(void)
	DEFINE(CFG_SYSCALL_MAP32, offsetof(struct vdso_data, syscall_map_32));
	DEFINE(WTOM_CLOCK_SEC, offsetof(struct vdso_data, wtom_clock_sec));
	DEFINE(WTOM_CLOCK_NSEC, offsetof(struct vdso_data, wtom_clock_nsec));
+	DEFINE(STAMP_XTIME, offsetof(struct vdso_data, stamp_xtime));
	DEFINE(CFG_ICACHE_BLOCKSZ, offsetof(struct vdso_data, icache_block_size));
	DEFINE(CFG_DCACHE_BLOCKSZ, offsetof(struct vdso_data, dcache_block_size));
	DEFINE(CFG_ICACHE_LOGBLOCKSZ, offsetof(struct vdso_data, icache_log_block_size));
@@ -378,6 +380,10 @@ int main(void)
	DEFINE(VCPU_FAULT_DEAR, offsetof(struct kvm_vcpu, arch.fault_dear));
	DEFINE(VCPU_FAULT_ESR, offsetof(struct kvm_vcpu, arch.fault_esr));
#endif
+#ifdef CONFIG_44x
+	DEFINE(PGD_T_LOG2, PGD_T_LOG2);
+	DEFINE(PTE_T_LOG2, PTE_T_LOG2);
+#endif
	return 0;
}
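asm-offsets.c is compiled once at build time and its DEFINE() entries are scraped into asm-offsets.h, so hand-written assembly can address C struct fields symbolically. A hedged illustration; the numeric value is build-specific and shown only for flavour:

/*
 * Generated into include/asm-offsets.h, roughly:
 *
 *	#define MMCONTEXTID 16	// offsetof(struct mm_struct, context.id)
 *
 * which is what lets the new switch_mmu_context assembly write:
 *
 *	lwz	r3,MMCONTEXTID(r4)	// r4 = next mm_struct pointer
 */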
@@ -120,6 +120,26 @@ static inline void dma_direct_unmap_page(struct device *dev,
{
}

+#ifdef CONFIG_NOT_COHERENT_CACHE
+static inline void dma_direct_sync_sg(struct device *dev,
+		struct scatterlist *sgl, int nents,
+		enum dma_data_direction direction)
+{
+	struct scatterlist *sg;
+	int i;
+
+	for_each_sg(sgl, sg, nents, i)
+		__dma_sync_page(sg_page(sg), sg->offset, sg->length, direction);
+}
+
+static inline void dma_direct_sync_single_range(struct device *dev,
+		dma_addr_t dma_handle, unsigned long offset, size_t size,
+		enum dma_data_direction direction)
+{
+	__dma_sync(bus_to_virt(dma_handle+offset), size, direction);
+}
+#endif
+
struct dma_mapping_ops dma_direct_ops = {
	.alloc_coherent	= dma_direct_alloc_coherent,
	.free_coherent	= dma_direct_free_coherent,
@@ -128,5 +148,11 @@ struct dma_mapping_ops dma_direct_ops = {
	.dma_supported	= dma_direct_dma_supported,
	.map_page	= dma_direct_map_page,
	.unmap_page	= dma_direct_unmap_page,
+#ifdef CONFIG_NOT_COHERENT_CACHE
+	.sync_single_range_for_cpu	= dma_direct_sync_single_range,
+	.sync_single_range_for_device	= dma_direct_sync_single_range,
+	.sync_sg_for_cpu		= dma_direct_sync_sg,
+	.sync_sg_for_device		= dma_direct_sync_sg,
+#endif
};
EXPORT_SYMBOL(dma_direct_ops);
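These ops back the generic streaming-DMA sync calls on cache-incoherent platforms. A hedged sketch of the driver-side pattern they enable; dev, buf and len are illustrative:

/* Map a streaming buffer the device will fill. */
dma_addr_t handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

/* ... device DMAs into the buffer ... */

dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);
/* CPU may now read buf: stale cache lines were handled via __dma_sync() */
dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);

/* ... device DMAs again; finally: */
dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);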
@@ -31,6 +31,7 @@
#include <asm/ppc_asm.h>
#include <asm/asm-offsets.h>
#include <asm/ptrace.h>
+#include <asm/bug.h>

/* 601 only have IBAT; cr0.eq is set on 601 when using this macro */
#define LOAD_BAT(n, reg, RA, RB)	\
@@ -182,7 +183,8 @@ __after_mmu_off:
	bl	reloc_offset
	mr	r26,r3
	addis	r4,r3,KERNELBASE@h	/* current address of _start */
-	cmpwi	0,r4,0			/* are we already running at 0? */
+	lis	r5,PHYSICAL_START@h
+	cmplw	0,r4,r5			/* already running at PHYSICAL_START? */
	bne	relocate_kernel
/*
 * we now have the 1st 16M of ram mapped with the bats.
@@ -810,13 +812,13 @@ giveup_altivec:

/*
 * This code is jumped to from the startup code to copy
- * the kernel image to physical address 0.
+ * the kernel image to physical address PHYSICAL_START.
 */
relocate_kernel:
	addis	r9,r26,klimit@ha	/* fetch klimit */
	lwz	r25,klimit@l(r9)
	addis	r25,r25,-KERNELBASE@h
-	li	r3,0			/* Destination base address */
+	lis	r3,PHYSICAL_START@h	/* Destination base address */
	li	r6,0			/* Destination offset */
	li	r5,0x4000		/* # bytes of memory to copy */
	bl	copy_and_flush		/* copy the first 0x4000 bytes */
@@ -989,12 +991,12 @@ load_up_mmu:
	LOAD_BAT(1,r3,r4,r5)
	LOAD_BAT(2,r3,r4,r5)
	LOAD_BAT(3,r3,r4,r5)
-BEGIN_FTR_SECTION
+BEGIN_MMU_FTR_SECTION
	LOAD_BAT(4,r3,r4,r5)
	LOAD_BAT(5,r3,r4,r5)
	LOAD_BAT(6,r3,r4,r5)
	LOAD_BAT(7,r3,r4,r5)
-END_FTR_SECTION_IFSET(CPU_FTR_HAS_HIGH_BATS)
+END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
	blr

/*
@@ -1070,9 +1072,14 @@ start_here:
	RFI

/*
+ * void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next);
+ *
 * Set up the segment registers for a new context.
 */
-_ENTRY(set_context)
+_ENTRY(switch_mmu_context)
+	lwz	r3,MMCONTEXTID(r4)
+	cmpwi	cr0,r3,0
+	blt-	4f
	mulli	r3,r3,897	/* multiply context by skew factor */
	rlwinm	r3,r3,4,8,27	/* VSID = (context & 0xfffff) << 4 */
	addis	r3,r3,0x6000	/* Set Ks, Ku bits */
@@ -1083,6 +1090,7 @@ _ENTRY(set_context)
	/* Context switch the PTE pointer for the Abatron BDI2000.
	 * The PGDIR is passed as second argument.
	 */
+	lwz	r4,MM_PGD(r4)
	lis	r5, KERNELBASE@h
	lwz	r5, 0xf0(r5)
	stw	r4, 0x4(r5)
@@ -1098,6 +1106,9 @@ _ENTRY(set_context)
	sync
	isync
	blr
+4:	trap
+	EMIT_BUG_ENTRY 4b,__FILE__,__LINE__,0
+	blr

/*
 * An undocumented "feature" of 604e requires that the v bit
@@ -1131,7 +1142,7 @@ clear_bats:
	mtspr	SPRN_IBAT2L,r10
	mtspr	SPRN_IBAT3U,r10
	mtspr	SPRN_IBAT3L,r10
-BEGIN_FTR_SECTION
+BEGIN_MMU_FTR_SECTION
	/* Here's a tweak: at this point, CPU setup have
	 * not been called yet, so HIGH_BAT_EN may not be
	 * set in HID0 for the 745x processors. However, it
@@ -1154,7 +1165,7 @@ BEGIN_FTR_SECTION
	mtspr	SPRN_IBAT6L,r10
	mtspr	SPRN_IBAT7U,r10
	mtspr	SPRN_IBAT7L,r10
-END_FTR_SECTION_IFSET(CPU_FTR_HAS_HIGH_BATS)
+END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS)
	blr

flush_tlbs:
@@ -1178,11 +1189,11 @@ mmu_off:

/*
 * Use the first pair of BAT registers to map the 1st 16MB
- * of RAM to KERNELBASE.  From this point on we can't safely
+ * of RAM to PAGE_OFFSET.  From this point on we can't safely
 * call OF any more.
 */
initial_bats:
-	lis	r11,KERNELBASE@h
+	lis	r11,PAGE_OFFSET@h
	mfspr	r9,SPRN_PVR
	rlwinm	r9,r9,16,16,31		/* r9 = 1 for 601, 4 for 604 */
	cmpwi	0,r9,1
...
@@ -68,6 +68,17 @@ _ENTRY(_start);
	mr	r27,r7
	li	r24,0		/* CPU number */

+/*
+ * In case the firmware didn't do it, we apply some workarounds
+ * that are good for all 440 core variants here
+ */
+	mfspr	r3,SPRN_CCR0
+	rlwinm	r3,r3,0,0,27	/* disable icache prefetch */
+	isync
+	mtspr	SPRN_CCR0,r3
+	isync
+	sync
+
/*
 * Set up the initial MMU state
 *
@@ -391,12 +402,14 @@ interrupt_base:
	rlwimi	r13,r12,10,30,30

	/* Load the PTE */
-	rlwinm	r12, r10, 13, 19, 29	/* Compute pgdir/pmd offset */
+	/* Compute pgdir/pmd offset */
+	rlwinm	r12, r10, PPC44x_PGD_OFF_SHIFT, PPC44x_PGD_OFF_MASK_BIT, 29
	lwzx	r11, r12, r11		/* Get pgd/pmd entry */
	rlwinm.	r12, r11, 0, 0, 20	/* Extract pt base address */
	beq	2f			/* Bail if no table */

-	rlwimi	r12, r10, 23, 20, 28	/* Compute pte address */
+	/* Compute pte address */
+	rlwimi	r12, r10, PPC44x_PTE_ADD_SHIFT, PPC44x_PTE_ADD_MASK_BIT, 28
	lwz	r11, 0(r12)		/* Get high word of pte entry */
	lwz	r12, 4(r12)		/* Get low word of pte entry */
@@ -485,12 +498,14 @@ tlb_44x_patch_hwater_D:
	/* Make up the required permissions */
	li	r13,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_HWEXEC

-	rlwinm	r12, r10, 13, 19, 29	/* Compute pgdir/pmd offset */
+	/* Compute pgdir/pmd offset */
+	rlwinm	r12, r10, PPC44x_PGD_OFF_SHIFT, PPC44x_PGD_OFF_MASK_BIT, 29
	lwzx	r11, r12, r11		/* Get pgd/pmd entry */
	rlwinm.	r12, r11, 0, 0, 20	/* Extract pt base address */
	beq	2f			/* Bail if no table */

-	rlwimi	r12, r10, 23, 20, 28	/* Compute pte address */
+	/* Compute pte address */
+	rlwimi	r12, r10, PPC44x_PTE_ADD_SHIFT, PPC44x_PTE_ADD_MASK_BIT, 28
	lwz	r11, 0(r12)		/* Get high word of pte entry */
	lwz	r12, 4(r12)		/* Get low word of pte entry */
@@ -554,15 +569,16 @@ tlb_44x_patch_hwater_I:
 */
finish_tlb_load:
	/* Combine RPN & ERPN an write WS 0 */
-	rlwimi	r11,r12,0,0,19
+	rlwimi	r11,r12,0,0,31-PAGE_SHIFT
	tlbwe	r11,r13,PPC44x_TLB_XLAT

	/*
	 * Create WS1. This is the faulting address (EPN),
	 * page size, and valid flag.
	 */
-	li	r11,PPC44x_TLB_VALID | PPC44x_TLB_4K
-	rlwimi	r10,r11,0,20,31			/* Insert valid and page size*/
+	li	r11,PPC44x_TLB_VALID | PPC44x_TLBE_SIZE
+	/* Insert valid and page size */
+	rlwimi	r10,r11,0,PPC44x_PTE_ADD_MASK_BIT,31
	tlbwe	r10,r13,PPC44x_TLB_PAGEID	/* Write PAGEID */

	/* And WS 2 */
@@ -634,12 +650,12 @@ _GLOBAL(set_context)
 * goes at the beginning of the data segment, which is page-aligned.
 */
	.data
-	.align	12
+	.align	PAGE_SHIFT
	.globl	sdata
sdata:
	.globl	empty_zero_page
empty_zero_page:
-	.space	4096
+	.space	PAGE_SIZE

/*
 * To support >32-bit physical addresses, we use an 8KB pgdir.
...
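The old rotate/mask literals in the 44x TLB-miss handlers assumed 4K pages; the PPC44x_* constants are derived from PAGE_SHIFT so the same code serves the new 16K/64K base page sizes. A hedged sanity check of the 4K case, using only the values visible in the diff:

#include <assert.h>

int main(void)
{
	int page_shift = 12;	/* 4K pages */

	/* finish_tlb_load: rlwimi r11,r12,0,0,31-PAGE_SHIFT must
	 * reduce to the old literal rlwimi r11,r12,0,0,19 */
	assert(31 - page_shift == 19);
	return 0;
}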
@@ -47,7 +47,7 @@
#include <asm/abs_addr.h>

static struct device ibmebus_bus_device = { /* fake "parent" device */
-	.bus_id = "ibmebus",
+	.init_name = "ibmebus",
};

struct bus_type ibmebus_bus_type;
@@ -231,6 +231,7 @@ void ibmebus_free_irq(u32 ist, void *dev_id)
	unsigned int irq = irq_find_mapping(NULL, ist);

	free_irq(irq, dev_id);
+	irq_dispose_mapping(irq);
}
EXPORT_SYMBOL(ibmebus_free_irq);
...
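Adding irq_dispose_mapping() balances the irq_create_mapping() done at request time, so the virq is no longer leaked. A hedged sketch of the now-balanced lifecycle; handler, name and dev are illustrative:

unsigned int irq = irq_create_mapping(NULL, ist);	/* virq for hw ist */
request_irq(irq, handler, 0, "mydev", dev);

/* ... interrupt in use ... */

free_irq(irq, dev);
irq_dispose_mapping(irq);	/* undo irq_create_mapping(), fixing the leak */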
@@ -289,7 +289,7 @@ void default_machine_kexec(struct kimage *image)
}

/* Values we need to export to the second kernel via the device tree. */
-static unsigned long htab_base, kernel_end;
+static unsigned long htab_base;

static struct property htab_base_prop = {
	.name = "linux,htab-base",
@@ -303,25 +303,20 @@ static struct property htab_size_prop = {
	.value = &htab_size_bytes,
};

-static struct property kernel_end_prop = {
-	.name = "linux,kernel-end",
-	.length = sizeof(unsigned long),
-	.value = &kernel_end,
-};
-
-static void __init export_htab_values(void)
+static int __init export_htab_values(void)
{
	struct device_node *node;
	struct property *prop;

+	/* On machines with no htab htab_address is NULL */
+	if (!htab_address)
+		return -ENODEV;
+
	node = of_find_node_by_path("/chosen");
	if (!node)
-		return;
+		return -ENODEV;

	/* remove any stale propertys so ours can be found */
-	prop = of_find_property(node, kernel_end_prop.name, NULL);
-	if (prop)
-		prom_remove_property(node, prop);
	prop = of_find_property(node, htab_base_prop.name, NULL);
	if (prop)
		prom_remove_property(node, prop);
@@ -329,68 +324,11 @@ static void __init export_htab_values(void)
	if (prop)
		prom_remove_property(node, prop);

-	/* information needed by userspace when using default_machine_kexec */
-	kernel_end = __pa(_end);
-	prom_add_property(node, &kernel_end_prop);
-
-	/* On machines with no htab htab_address is NULL */
-	if (NULL == htab_address)
-		goto out;
-
	htab_base = __pa(htab_address);
	prom_add_property(node, &htab_base_prop);
	prom_add_property(node, &htab_size_prop);

-out:
-	of_node_put(node);
-}
-
-static struct property crashk_base_prop = {
-	.name = "linux,crashkernel-base",
-	.length = sizeof(unsigned long),
-	.value = &crashk_res.start,
-};
-
-static unsigned long crashk_size;
-
-static struct property crashk_size_prop = {
-	.name = "linux,crashkernel-size",
-	.length = sizeof(unsigned long),
-	.value = &crashk_size,
-};
-
-static void __init export_crashk_values(void)
-{
-	struct device_node *node;
-	struct property *prop;
-
-	node = of_find_node_by_path("/chosen");
-	if (!node)
-		return;
-
-	/* There might be existing crash kernel properties, but we can't
-	 * be sure what's in them, so remove them. */
-	prop = of_find_property(node, "linux,crashkernel-base", NULL);
-	if (prop)
-		prom_remove_property(node, prop);
-	prop = of_find_property(node, "linux,crashkernel-size", NULL);
-	if (prop)
-		prom_remove_property(node, prop);
-
-	if (crashk_res.start != 0) {
-		prom_add_property(node, &crashk_base_prop);
-		crashk_size = crashk_res.end - crashk_res.start + 1;
-		prom_add_property(node, &crashk_size_prop);
-	}
-
	of_node_put(node);
-}
-
-static int __init kexec_setup(void)
-{
-	export_htab_values();
-	export_crashk_values();
	return 0;
}
-__initcall(kexec_setup);
+late_initcall(export_htab_values);
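The surviving properties land under /chosen, where a capture kernel or kexec tooling can read them back. A hedged sketch of the consumer side, with error handling trimmed:

struct device_node *node = of_find_node_by_path("/chosen");
const unsigned long *base = of_get_property(node, "linux,htab-base", NULL);

if (base)
	printk(KERN_INFO "first kernel htab at 0x%lx\n", *base);
of_node_put(node);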