Commit c04a5880 authored by Linus Torvalds

Merge tag 'powerpc-4.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:
 "Highlights:
   - Support for Power ISA 3.0 (Power9) Radix Tree MMU from Aneesh Kumar K.V
   - Live patching support for ppc64le (also merged via livepatching.git)

  Various cleanups & minor fixes from:
   - Aaro Koskinen, Alexey Kardashevskiy, Andrew Donnellan, Aneesh Kumar K.V,
     Chris Smart, Daniel Axtens, Frederic Barrat, Gavin Shan, Ian Munsie,
     Lennart Sorensen, Madhavan Srinivasan, Mahesh Salgaonkar, Markus Elfring,
     Michael Ellerman, Oliver O'Halloran, Paul Gortmaker, Paul Mackerras,
     Rashmica Gupta, Russell Currey, Suraj Jitindar Singh, Thiago Jung
     Bauermann, Valentin Rothberg, Vipin K Parashar.

  General:
   - Update LMB associativity index during DLPAR add/remove from Nathan
     Fontenot
   - Fix branching to OOL handlers in relocatable kernel from Hari Bathini
   - Add support for userspace Power9 copy/paste from Chris Smart
   - Always use STRICT_MM_TYPECHECKS from Michael Ellerman
   - Add mask of possible MMU features from Michael Ellerman

  PCI:
   - Enable pass through of NVLink to guests from Alexey Kardashevskiy
   - Cleanups in preparation for powernv PCI hotplug from Gavin Shan
   - Don't report error in eeh_pe_reset_and_recover() from Gavin Shan
   - Restore initial state in eeh_pe_reset_and_recover() from Gavin Shan
   - Revert "powerpc/eeh: Fix crash in eeh_add_device_early() on Cell"
     from Guilherme G Piccoli
   - Remove the dependency on EEH struct in DDW mechanism from Guilherme
     G Piccoli

  selftests:
   - Test cp_abort during context switch from Chris Smart
   - Add several tests for transactional memory support from Rashmica
     Gupta

  perf:
   - Add support for sampling interrupt register state from Anju T
   - Add support for unwinding perf-stackdump from Chandan Kumar

  cxl:
   - Configure the PSL for two CAPI ports on POWER8NVL from Philippe
     Bergheaud
   - Allow initialization on timebase sync failures from Frederic Barrat
   - Increase timeout for detection of AFU mmio hang from Frederic
     Barrat
   - Handle num_of_processes larger than can fit in the SPA from Ian
     Munsie
   - Ensure PSL interrupt is configured for contexts with no AFU IRQs
     from Ian Munsie
   - Add kernel API to allow a context to operate with relocate disabled
     from Ian Munsie
   - Check periodically the coherent platform function's state from
     Christophe Lombard

  Freescale:
   - Updates from Scott: "Contains 86xx fixes, minor device tree fixes,
     an erratum workaround, and a kconfig dependency fix."

* tag 'powerpc-4.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (192 commits)
  powerpc/86xx: Fix PCI interrupt map definition
  powerpc/86xx: Move pci1 definition to the include file
  powerpc/fsl: Fix build of the dtb embedded kernel images
  powerpc/fsl: Fix rcpm compatible string
  powerpc/fsl: Remove FSL_SOC dependency from FSL_LBC
  powerpc/fsl-pci: Add a workaround for PCI 5 errata
  powerpc/fsl: Fix SPI compatible on t208xrdb and t1040rdb
  powerpc/powernv/npu: Add PE to PHB's list
  powerpc/powernv: Fix insufficient memory allocation
  powerpc/iommu: Remove the dependency on EEH struct in DDW mechanism
  Revert "powerpc/eeh: Fix crash in eeh_add_device_early() on Cell"
  powerpc/eeh: Drop unnecessary label in eeh_pe_change_owner()
  powerpc/eeh: Ignore handlers in eeh_pe_reset_and_recover()
  powerpc/eeh: Restore initial state in eeh_pe_reset_and_recover()
  powerpc/eeh: Don't report error in eeh_pe_reset_and_recover()
  Revert "powerpc/powernv: Exclude root bus in pnv_pci_reset_secondary_bus()"
  powerpc/powernv/npu: Enable NVLink pass through
  powerpc/powernv/npu: Rework TCE Kill handling
  powerpc/powernv/npu: Add set/unset window helpers
  powerpc/powernv/ioda2: Export debug helper pe_level_printk()
  ...
parents a1c28b75 138a0764
@@ -233,3 +233,11 @@ Description: read/write
 		0 = don't trust, the image may be different (default)
 		1 = trust that the image will not change.
 Users:		https://github.com/ibm-capi/libcxl
+
+What:		/sys/class/cxl/<card>/psl_timebase_synced
+Date:		March 2016
+Contact:	linuxppc-dev@lists.ozlabs.org
+Description:	read only
+		Returns 1 if the psl timebase register is synchronized
+		with the core timebase register, 0 otherwise.
+Users:		https://github.com/ibm-capi/libcxl
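The new attribute is a plain text file, so it can be checked without going through libcxl. A minimal sketch in C (illustrative only; the card name "card0" is an assumed example, and the path and 0/1 format follow the ABI entry above):

	#include <stdio.h>

	/* Illustrative only: report whether the PSL timebase is synchronized.
	 * Reads the sysfs attribute documented above for an assumed "card0". */
	int main(void)
	{
		FILE *f = fopen("/sys/class/cxl/card0/psl_timebase_synced", "r");
		int synced = 0;

		if (!f)
			return 1;	/* no such card, or kernel lacks the attribute */
		if (fscanf(f, "%d", &synced) != 1)
			synced = 0;
		fclose(f);
		printf("psl timebase %ssynced with core timebase\n",
		       synced ? "" : "NOT ");
		return 0;
	}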
@@ -27,7 +27,7 @@
     |       nios2: | TODO |
     |    openrisc: | TODO |
     |      parisc: | TODO |
-    |     powerpc: | TODO |
+    |     powerpc: |  ok  |
     |        s390: | TODO |
     |       score: | TODO |
     |          sh: | TODO |
......
@@ -27,7 +27,7 @@
     |       nios2: | TODO |
     |    openrisc: | TODO |
     |      parisc: | TODO |
-    |     powerpc: | TODO |
+    |     powerpc: |  ok  |
     |        s390: | TODO |
     |       score: | TODO |
     |          sh: | TODO |
......
@@ -12,7 +12,7 @@ Overview:
 The IBM POWER-based pSeries and iSeries computers include PCI bus
 controller chips that have extended capabilities for detecting and
 reporting a large variety of PCI bus error conditions. These features
-go under the name of "EEH", for "Extended Error Handling". The EEH
+go under the name of "EEH", for "Enhanced Error Handling". The EEH
 hardware features allow PCI bus errors to be cleared and a PCI
 card to be "rebooted", without also having to reboot the operating
 system.
......
@@ -6675,6 +6675,19 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git
 S:	Supported
 F:	Documentation/powerpc/
 F:	arch/powerpc/
+F:	drivers/char/tpm/tpm_ibmvtpm*
+F:	drivers/crypto/nx/
+F:	drivers/crypto/vmx/
+F:	drivers/net/ethernet/ibm/ibmveth.*
+F:	drivers/net/ethernet/ibm/ibmvnic.*
+F:	drivers/pci/hotplug/rpa*
+F:	drivers/scsi/ibmvscsi/
+N:	opal
+N:	/pmac
+N:	powermac
+N:	powernv
+N:	[^a-z0-9]ps3
+N:	pseries
 
 LINUX FOR POWER MACINTOSH
 M:	Benjamin Herrenschmidt <benh@kernel.crashing.org>
......
@@ -116,6 +116,8 @@ config PPC
 	select GENERIC_ATOMIC64 if PPC32
 	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
 	select HAVE_PERF_EVENTS
+	select HAVE_PERF_REGS
+	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS && PPC_BOOK3S_64
 	select ARCH_WANT_IPC_PARSE_VERSION
@@ -606,9 +608,9 @@ endchoice
 config FORCE_MAX_ZONEORDER
 	int "Maximum zone order"
-	range 9 64 if PPC64 && PPC_64K_PAGES
+	range 8 9 if PPC64 && PPC_64K_PAGES
 	default "9" if PPC64 && PPC_64K_PAGES
-	range 13 64 if PPC64 && !PPC_64K_PAGES
+	range 9 13 if PPC64 && !PPC_64K_PAGES
 	default "13" if PPC64 && !PPC_64K_PAGES
 	range 9 64 if PPC32 && PPC_16K_PAGES
 	default "9" if PPC32 && PPC_16K_PAGES
@@ -795,7 +797,6 @@ config 4xx_SOC
 config FSL_LBC
 	bool "Freescale Local Bus support"
-	depends on FSL_SOC
 	help
 	  Enables reporting of errors from the Freescale local bus
 	  controller.  Also contains some common code used by
......
@@ -19,14 +19,6 @@ config PPC_WERROR
 	depends on !PPC_DISABLE_WERROR
 	default y
 
-config STRICT_MM_TYPECHECKS
-	bool "Do extra type checking on mm types"
-	default n
-	help
-	  This option turns on extra type checking for some mm related types.
-
-	  If you don't know what this means, say N.
-
 config PRINT_STACK_DEPTH
 	int "Stack depth to print" if DEBUG_KERNEL
 	default 64
......
@@ -362,9 +362,6 @@ $(obj)/cuImage.initrd.%: vmlinux $(obj)/%.dtb $(wrapperbits)
 $(obj)/cuImage.%: vmlinux $(obj)/%.dtb $(wrapperbits)
 	$(call if_changed,wrap,cuboot-$*,,$(obj)/$*.dtb)
 
-$(obj)/cuImage.%: vmlinux $(obj)/fsl/%.dtb $(wrapperbits)
-	$(call if_changed,wrap,cuboot-$*,,$(obj)/fsl/$*.dtb)
-
 $(obj)/simpleImage.initrd.%: vmlinux $(obj)/%.dtb $(wrapperbits)
 	$(call if_changed,wrap,simpleboot-$*,,$(obj)/$*.dtb,$(obj)/ramdisk.image.gz)
 
@@ -381,6 +378,9 @@ $(obj)/treeImage.%: vmlinux $(obj)/%.dtb $(wrapperbits)
 $(obj)/%.dtb: $(src)/dts/%.dts FORCE
 	$(call if_changed_dep,dtc)
 
+$(obj)/%.dtb: $(src)/dts/fsl/%.dts FORCE
+	$(call if_changed_dep,dtc)
+
 # If there isn't a platform selected then just strip the vmlinux.
 ifeq (,$(image-y))
 image-y := vmlinux.strip
......
@@ -211,6 +211,10 @@ pcie@0 {
 			  0x0 0x00400000>;
 		};
 	};
+
+	pci1: pcie@fef09000 {
+		status = "disabled";
+	};
 };
 
 /include/ "mpc8641si-post.dtsi"
@@ -24,10 +24,6 @@ / {
 	model = "GEF_SBC310";
 	compatible = "gef,sbc310";
 
-	aliases {
-		pci1 = &pci1;
-	};
-
 	memory {
 		device_type = "memory";
 		reg = <0x0 0x40000000>;	// set by uboot
@@ -223,29 +219,11 @@ pcie@0 {
 	};
 
 	pci1: pcie@fef09000 {
-		compatible = "fsl,mpc8641-pcie";
-		device_type = "pci";
-		#size-cells = <2>;
-		#address-cells = <3>;
 		reg = <0xfef09000 0x1000>;
-		bus-range = <0x0 0xff>;
 		ranges = <0x02000000 0x0 0xc0000000 0xc0000000 0x0 0x20000000
 			  0x01000000 0x0 0x00000000 0xfe400000 0x0 0x00400000>;
-		clock-frequency = <100000000>;
-		interrupts = <0x19 0x2 0 0>;
-		interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
-		interrupt-map = <
-			0x0000 0x0 0x0 0x1 &mpic 0x4 0x2
-			0x0000 0x0 0x0 0x2 &mpic 0x5 0x2
-			0x0000 0x0 0x0 0x3 &mpic 0x6 0x2
-			0x0000 0x0 0x0 0x4 &mpic 0x7 0x2
-			>;
 
 		pcie@0 {
-			reg = <0 0 0 0 0>;
-			#size-cells = <2>;
-			#address-cells = <3>;
-			device_type = "pci";
 			ranges = <0x02000000 0x0 0xc0000000
 				  0x02000000 0x0 0xc0000000
 				  0x0 0x20000000
......
@@ -209,6 +209,10 @@ pcie@0 {
 			  0x0 0x00400000>;
 		};
 	};
+
+	pci1: pcie@fef09000 {
+		status = "disabled";
+	};
 };
 
 /include/ "mpc8641si-post.dtsi"
@@ -15,10 +15,6 @@ / {
 	model = "MPC8641HPCN";
 	compatible = "fsl,mpc8641hpcn";
 
-	aliases {
-		pci1 = &pci1;
-	};
-
 	memory {
 		device_type = "memory";
 		reg = <0x00000000 0x40000000>;	// 1G at 0x0
@@ -359,29 +355,11 @@ gpio@400 {
 	};
 
 	pci1: pcie@ffe09000 {
-		compatible = "fsl,mpc8641-pcie";
-		device_type = "pci";
-		#size-cells = <2>;
-		#address-cells = <3>;
 		reg = <0xffe09000 0x1000>;
-		bus-range = <0 0xff>;
 		ranges = <0x02000000 0x0 0xa0000000 0xa0000000 0x0 0x20000000
 			  0x01000000 0x0 0x00000000 0xffc10000 0x0 0x00010000>;
-		clock-frequency = <100000000>;
-		interrupts = <25 2 0 0>;
-		interrupt-map-mask = <0xf800 0 0 7>;
-		interrupt-map = <
-			/* IDSEL 0x0 */
-			0x0000 0 0 1 &mpic 4 1
-			0x0000 0 0 2 &mpic 5 1
-			0x0000 0 0 3 &mpic 6 1
-			0x0000 0 0 4 &mpic 7 1
-			>;
 
 		pcie@0 {
-			reg = <0 0 0 0 0>;
-			#size-cells = <2>;
-			#address-cells = <3>;
-			device_type = "pci";
 			ranges = <0x02000000 0x0 0xa0000000
 				  0x02000000 0x0 0xa0000000
 				  0x0 0x20000000
......
@@ -17,10 +17,6 @@ / {
 	#address-cells = <2>;
 	#size-cells = <2>;
 
-	aliases {
-		pci1 = &pci1;
-	};
-
 	memory {
 		device_type = "memory";
 		reg = <0x0 0x00000000 0x0 0x40000000>;	// 1G at 0x0
@@ -326,29 +322,11 @@ gpio@400 {
 	};
 
 	pci1: pcie@fffe09000 {
-		compatible = "fsl,mpc8641-pcie";
-		device_type = "pci";
-		#size-cells = <2>;
-		#address-cells = <3>;
 		reg = <0x0f 0xffe09000 0x0 0x1000>;
-		bus-range = <0x0 0xff>;
 		ranges = <0x02000000 0x0 0xe0000000 0x0c 0x20000000 0x0 0x20000000
 			  0x01000000 0x0 0x00000000 0x0f 0xffc10000 0x0 0x00010000>;
-		clock-frequency = <100000000>;
-		interrupts = <25 2 0 0>;
-		interrupt-map-mask = <0xf800 0 0 7>;
-		interrupt-map = <
-			/* IDSEL 0x0 */
-			0x0000 0 0 1 &mpic 4 1
-			0x0000 0 0 2 &mpic 5 1
-			0x0000 0 0 3 &mpic 6 1
-			0x0000 0 0 4 &mpic 7 1
-			>;
 
 		pcie@0 {
-			reg = <0 0 0 0 0>;
-			#size-cells = <2>;
-			#address-cells = <3>;
-			device_type = "pci";
 			ranges = <0x02000000 0x0 0xe0000000
 				  0x02000000 0x0 0xe0000000
 				  0x0 0x20000000
......
@@ -102,19 +102,46 @@ &pci0 {
 	bus-range = <0x0 0xff>;
 	clock-frequency = <100000000>;
 	interrupts = <24 2 0 0>;
-	interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
-	interrupt-map = <
-		0x0000 0x0 0x0 0x1 &mpic 0x0 0x1
-		0x0000 0x0 0x0 0x2 &mpic 0x1 0x1
-		0x0000 0x0 0x0 0x3 &mpic 0x2 0x1
-		0x0000 0x0 0x0 0x4 &mpic 0x3 0x1
-		>;
+
+	pcie@0 {
+		reg = <0 0 0 0 0>;
+		#interrupt-cells = <1>;
+		#size-cells = <2>;
+		#address-cells = <3>;
+		device_type = "pci";
+		interrupts = <24 2 0 0>;
+		interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
+		interrupt-map = <
+			0x0000 0x0 0x0 0x1 &mpic 0x0 0x1 0x0 0x0
+			0x0000 0x0 0x0 0x2 &mpic 0x1 0x1 0x0 0x0
+			0x0000 0x0 0x0 0x3 &mpic 0x2 0x1 0x0 0x0
+			0x0000 0x0 0x0 0x4 &mpic 0x3 0x1 0x0 0x0
+			>;
+	};
+};
+
+&pci1 {
+	compatible = "fsl,mpc8641-pcie";
+	device_type = "pci";
+	#size-cells = <2>;
+	#address-cells = <3>;
+	bus-range = <0x0 0xff>;
+	clock-frequency = <100000000>;
+	interrupts = <25 2 0 0>;
 
 	pcie@0 {
 		reg = <0 0 0 0 0>;
+		#interrupt-cells = <1>;
 		#size-cells = <2>;
 		#address-cells = <3>;
 		device_type = "pci";
+		interrupts = <25 2 0 0>;
+		interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
+		interrupt-map = <
+			0x0000 0x0 0x0 0x1 &mpic 0x4 0x1 0x0 0x0
+			0x0000 0x0 0x0 0x2 &mpic 0x5 0x1 0x0 0x0
+			0x0000 0x0 0x0 0x3 &mpic 0x6 0x1 0x0 0x0
+			0x0000 0x0 0x0 0x4 &mpic 0x7 0x1 0x0 0x0
+			>;
 	};
 };
@@ -25,6 +25,7 @@ aliases {
 		serial0 = &serial0;
 		serial1 = &serial1;
 		pci0 = &pci0;
+		pci1 = &pci1;
 	};
 
 	cpus {
......
@@ -19,10 +19,6 @@ / {
 	model = "SBC8641D";
 	compatible = "wind,sbc8641";
 
-	aliases {
-		pci1 = &pci1;
-	};
-
 	memory {
 		device_type = "memory";
 		reg = <0x00000000 0x20000000>;	// 512M at 0x0
@@ -165,30 +161,11 @@ pcie@0 {
 	};
 
 	pci1: pcie@f8009000 {
-		compatible = "fsl,mpc8641-pcie";
-		device_type = "pci";
-		#size-cells = <2>;
-		#address-cells = <3>;
 		reg = <0xf8009000 0x1000>;
-		bus-range = <0 0xff>;
 		ranges = <0x02000000 0x0 0xa0000000 0xa0000000 0x0 0x20000000
 			  0x01000000 0x0 0x00000000 0xe3000000 0x0 0x00100000>;
-		clock-frequency = <100000000>;
-		interrupts = <25 2 0 0>;
-		interrupt-map-mask = <0xf800 0 0 7>;
-		interrupt-map = <
-			/* IDSEL 0x0 */
-			0x0000 0 0 1 &mpic 4 1
-			0x0000 0 0 2 &mpic 5 1
-			0x0000 0 0 3 &mpic 6 1
-			0x0000 0 0 4 &mpic 7 1
-			>;
 
 		pcie@0 {
-			reg = <0 0 0 0 0>;
-			#size-cells = <2>;
-			#address-cells = <3>;
-			device_type = "pci";
 			ranges = <0x02000000 0x0 0xa0000000
 				  0x02000000 0x0 0xa0000000
 				  0x0 0x20000000
......
@@ -263,7 +263,7 @@ mux1: mux1@20 {
 	};
 
 	rcpm: global-utilities@e2000 {
-		compatible = "fsl,t1023-rcpm", "fsl,qoriq-rcpm-2.0";
+		compatible = "fsl,t1023-rcpm", "fsl,qoriq-rcpm-2.1";
 		reg = <0xe2000 0x1000>;
 	};
......
@@ -472,7 +472,7 @@ mux3: mux3@60 {
 	};
 
 	rcpm: global-utilities@e2000 {
-		compatible = "fsl,t1040-rcpm", "fsl,qoriq-rcpm-2.0";
+		compatible = "fsl,t1040-rcpm", "fsl,qoriq-rcpm-2.1";
 		reg = <0xe2000 0x1000>;
 	};
......
@@ -109,7 +109,7 @@ spi@110000 {
 	flash@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "micron,n25q512a", "jedec,spi-nor";
+		compatible = "micron,n25q512ax3", "jedec,spi-nor";
 		reg = <0>;
 		spi-max-frequency = <10000000>; /* input clock */
 	};
......
@@ -113,7 +113,7 @@ spi@110000 {
 	flash@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "micron,n25q512a", "jedec,spi-nor";
+		compatible = "micron,n25q512ax3", "jedec,spi-nor";
 		reg = <0>;
 		spi-max-frequency = <10000000>; /* input clock */
 	};
......
@@ -39,8 +39,5 @@
 #define _PMD_PRESENT_MASK (PAGE_MASK)
 #define _PMD_BAD	(~PAGE_MASK)
 
-/* Hash table based platforms need atomic updates of the linux PTE */
-#define PTE_ATOMIC_UPDATES	1
-
 #endif /* __KERNEL__ */
 #endif /* _ASM_POWERPC_BOOK3S_32_HASH_H */
-#ifndef _ASM_POWERPC_MMU_HASH32_H_
-#define _ASM_POWERPC_MMU_HASH32_H_
+#ifndef _ASM_POWERPC_BOOK3S_32_MMU_HASH_H_
+#define _ASM_POWERPC_BOOK3S_32_MMU_HASH_H_
 
 /*
  * 32-bit hash table MMU support
  */
@@ -90,4 +90,4 @@ typedef struct {
 #define mmu_virtual_psize	MMU_PAGE_4K
 #define mmu_linear_psize	MMU_PAGE_256M
 
-#endif /* _ASM_POWERPC_MMU_HASH32_H_ */
+#endif /* _ASM_POWERPC_BOOK3S_32_MMU_HASH_H_ */
#ifndef _ASM_POWERPC_BOOK3S_32_PGALLOC_H
#define _ASM_POWERPC_BOOK3S_32_PGALLOC_H
#include <linux/threads.h>
/* For 32-bit, all levels of page tables are just drawn from get_free_page() */
#define MAX_PGTABLE_INDEX_SIZE 0
extern void __bad_pte(pmd_t *pmd);
extern pgd_t *pgd_alloc(struct mm_struct *mm);
extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
/*
* We don't have any real pmd's, and this code never triggers because
* the pgd will always be present..
*/
/* #define pmd_alloc_one(mm,address) ({ BUG(); ((pmd_t *)2); }) */
#define pmd_free(mm, x) do { } while (0)
#define __pmd_free_tlb(tlb,x,a) do { } while (0)
/* #define pgd_populate(mm, pmd, pte) BUG() */
#ifndef CONFIG_BOOKE
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
pte_t *pte)
{
*pmdp = __pmd(__pa(pte) | _PMD_PRESENT);
}
static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp,
pgtable_t pte_page)
{
*pmdp = __pmd((page_to_pfn(pte_page) << PAGE_SHIFT) | _PMD_PRESENT);
}
#define pmd_pgtable(pmd) pmd_page(pmd)
#else
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
pte_t *pte)
{
*pmdp = __pmd((unsigned long)pte | _PMD_PRESENT);
}
static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp,
pgtable_t pte_page)
{
*pmdp = __pmd((unsigned long)lowmem_page_address(pte_page) | _PMD_PRESENT);
}
#define pmd_pgtable(pmd) pmd_page(pmd)
#endif
extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr);
extern pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addr);
static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
free_page((unsigned long)pte);
}
static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
{
pgtable_page_dtor(ptepage);
__free_page(ptepage);
}
static inline void pgtable_free(void *table, unsigned index_size)
{
BUG_ON(index_size); /* 32-bit doesn't use this */
free_page((unsigned long)table);
}
#define check_pgt_cache() do { } while (0)
#ifdef CONFIG_SMP
static inline void pgtable_free_tlb(struct mmu_gather *tlb,
void *table, int shift)
{
unsigned long pgf = (unsigned long)table;
BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
pgf |= shift;
tlb_remove_table(tlb, (void *)pgf);
}
static inline void __tlb_remove_table(void *_table)
{
void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
pgtable_free(table, shift);
}
#else
static inline void pgtable_free_tlb(struct mmu_gather *tlb,
void *table, int shift)
{
pgtable_free(table, shift);
}
#endif
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
unsigned long address)
{
tlb_flush_pgtable(tlb, address);
pgtable_page_dtor(table);
pgtable_free_tlb(tlb, page_address(table), 0);
}
#endif /* _ASM_POWERPC_BOOK3S_32_PGALLOC_H */
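pgtable_free_tlb() above defers the free through tlb_remove_table() but has only a single pointer to hand over, so it packs the table's index size into the low bits of the page-aligned table address; __tlb_remove_table() unpacks it. On this 32-bit header the mask is 0 (PTE pages are the only case), while the 64-bit variant uses 0xf. A standalone sketch of the same pointer-tagging trick (illustrative only, not kernel code):

	#include <assert.h>
	#include <stdint.h>
	#include <stdio.h>

	#define INDEX_MASK 0xfUL	/* like MAX_PGTABLE_INDEX_SIZE on 64-bit */

	/* Pack a small tag into the low bits of a sufficiently aligned pointer. */
	static void *pack(void *table, unsigned shift)
	{
		uintptr_t p = (uintptr_t)table;

		assert(shift <= INDEX_MASK);	/* tag must fit under the mask */
		assert((p & INDEX_MASK) == 0);	/* pointer must be aligned */
		return (void *)(p | shift);
	}

	static void unpack(void *packed, void **table, unsigned *shift)
	{
		uintptr_t p = (uintptr_t)packed;

		*shift = p & INDEX_MASK;
		*table = (void *)(p & ~INDEX_MASK);
	}

	int main(void)
	{
		static long dummy[64] __attribute__((aligned(16)));
		void *t;
		unsigned s;

		unpack(pack(dummy, 3), &t, &s);	/* round-trips losslessly */
		printf("table=%p shift=%u\n", t, s);
		return 0;
	}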
@@ -5,58 +5,31 @@
  * for each page table entry.  The PMD and PGD level use a 32b record for
  * each entry by assuming that each entry is page aligned.
  */
-#define PTE_INDEX_SIZE  9
-#define PMD_INDEX_SIZE  7
-#define PUD_INDEX_SIZE  9
-#define PGD_INDEX_SIZE  9
+#define H_PTE_INDEX_SIZE  9
+#define H_PMD_INDEX_SIZE  7
+#define H_PUD_INDEX_SIZE  9
+#define H_PGD_INDEX_SIZE  9
 
 #ifndef __ASSEMBLY__
-#define PTE_TABLE_SIZE	(sizeof(pte_t) << PTE_INDEX_SIZE)
-#define PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
-#define PUD_TABLE_SIZE	(sizeof(pud_t) << PUD_INDEX_SIZE)
-#define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)
+#define H_PTE_TABLE_SIZE	(sizeof(pte_t) << H_PTE_INDEX_SIZE)
+#define H_PMD_TABLE_SIZE	(sizeof(pmd_t) << H_PMD_INDEX_SIZE)
+#define H_PUD_TABLE_SIZE	(sizeof(pud_t) << H_PUD_INDEX_SIZE)
+#define H_PGD_TABLE_SIZE	(sizeof(pgd_t) << H_PGD_INDEX_SIZE)
 #endif	/* __ASSEMBLY__ */
-
-#define PTRS_PER_PTE	(1 << PTE_INDEX_SIZE)
-#define PTRS_PER_PMD	(1 << PMD_INDEX_SIZE)
-#define PTRS_PER_PUD	(1 << PUD_INDEX_SIZE)
-#define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)
-
-/* PMD_SHIFT determines what a second-level page table entry can map */
-#define PMD_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
-#define PMD_SIZE	(1UL << PMD_SHIFT)
-#define PMD_MASK	(~(PMD_SIZE-1))
 
 /* With 4k base page size, hugepage PTEs go at the PMD level */
 #define MIN_HUGEPTE_SHIFT	PMD_SHIFT
 
-/* PUD_SHIFT determines what a third-level page table entry can map */
-#define PUD_SHIFT	(PMD_SHIFT + PMD_INDEX_SIZE)
-#define PUD_SIZE	(1UL << PUD_SHIFT)
-#define PUD_MASK	(~(PUD_SIZE-1))
-
-/* PGDIR_SHIFT determines what a fourth-level page table entry can map */
-#define PGDIR_SHIFT	(PUD_SHIFT + PUD_INDEX_SIZE)
-#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
-#define PGDIR_MASK	(~(PGDIR_SIZE-1))
-
-/* Bits to mask out from a PMD to get to the PTE page */
-#define PMD_MASKED_BITS		0
-/* Bits to mask out from a PUD to get to the PMD page */
-#define PUD_MASKED_BITS		0
-/* Bits to mask out from a PGD to get to the PUD page */
-#define PGD_MASKED_BITS		0
-
 /* PTE flags to conserve for HPTE identification */
-#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | \
-			 _PAGE_F_SECOND | _PAGE_F_GIX)
-
-/* shift to put page number into pte */
-#define PTE_RPN_SHIFT	(12)
-#define PTE_RPN_SIZE	(45)	/* gives 57-bit real addresses */
-
-#define _PAGE_4K_PFN		0
-#ifndef __ASSEMBLY__
+#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_HASHPTE | \
+			 H_PAGE_F_SECOND | H_PAGE_F_GIX)
+/*
+ * Not supported by 4k linux page size
+ */
+#define H_PAGE_4K_PFN	0x0
+#define H_PAGE_THP_HUGE 0x0
+#define H_PAGE_COMBO	0x0
+#define H_PTE_FRAG_NR	0
+#define H_PTE_FRAG_SIZE_SHIFT  0
 /*
  * On all 4K setups, remap_4k_pfn() equates to remap_pfn_range()
  */
@@ -64,37 +37,76 @@
 	remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE, (prot))
 
 #ifdef CONFIG_HUGETLB_PAGE
-/*
- * For 4k page size, we support explicit hugepage via hugepd
- */
-static inline int pmd_huge(pmd_t pmd)
+static inline int hash__hugepd_ok(hugepd_t hpd)
+{
+	/*
+	 * if it is not a pte and have hugepd shift mask
+	 * set, then it is a hugepd directory pointer
+	 */
+	if (!(hpd.pd & _PAGE_PTE) &&
+	    ((hpd.pd & HUGEPD_SHIFT_MASK) != 0))
+		return true;
+	return false;
+}
+#endif
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+
+static inline char *get_hpte_slot_array(pmd_t *pmdp)
+{
+	BUG();
+	return NULL;
+}
+
+static inline unsigned int hpte_valid(unsigned char *hpte_slot_array, int index)
 {
+	BUG();
 	return 0;
 }
 
-static inline int pud_huge(pud_t pud)
+static inline unsigned int hpte_hash_index(unsigned char *hpte_slot_array,
+					   int index)
 {
+	BUG();
 	return 0;
 }
 
-static inline int pgd_huge(pgd_t pgd)
+static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
+					unsigned int index, unsigned int hidx)
+{
+	BUG();
+}
+
+static inline int hash__pmd_trans_huge(pmd_t pmd)
 {
 	return 0;
 }
-#define pgd_huge pgd_huge
 
-static inline int hugepd_ok(hugepd_t hpd)
+static inline int hash__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
 {
-	/*
-	 * if it is not a pte and have hugepd shift mask
-	 * set, then it is a hugepd directory pointer
-	 */
-	if (!(hpd.pd & _PAGE_PTE) &&
-	    ((hpd.pd & HUGEPD_SHIFT_MASK) != 0))
-		return true;
-	return false;
+	BUG();
+	return 0;
 }
-#define is_hugepd(hpd)		(hugepd_ok(hpd))
+
+static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
+{
+	BUG();
+	return pmd;
+}
+
+extern unsigned long hash__pmd_hugepage_update(struct mm_struct *mm,
+					       unsigned long addr, pmd_t *pmdp,
+					       unsigned long clr, unsigned long set);
+extern pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma,
+				       unsigned long address, pmd_t *pmdp);
+extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+					     pgtable_t pgtable);
+extern pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
+extern void hash__pmdp_huge_split_prepare(struct vm_area_struct *vma,
+					  unsigned long address, pmd_t *pmdp);
+extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
+					   unsigned long addr, pmd_t *pmdp);
+extern int hash__has_transparent_hugepage(void);
 #endif
 
 #endif /* !__ASSEMBLY__ */
......
 #ifndef _ASM_POWERPC_BOOK3S_64_HASH_64K_H
 #define _ASM_POWERPC_BOOK3S_64_HASH_64K_H
 
-#define PTE_INDEX_SIZE  8
-#define PMD_INDEX_SIZE  5
-#define PUD_INDEX_SIZE  5
-#define PGD_INDEX_SIZE  12
-
-#define PTRS_PER_PTE	(1 << PTE_INDEX_SIZE)
-#define PTRS_PER_PMD	(1 << PMD_INDEX_SIZE)
-#define PTRS_PER_PUD	(1 << PUD_INDEX_SIZE)
-#define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)
+#define H_PTE_INDEX_SIZE  8
+#define H_PMD_INDEX_SIZE  5
+#define H_PUD_INDEX_SIZE  5
+#define H_PGD_INDEX_SIZE  12
 
 /* With 4k base page size, hugepage PTEs go at the PMD level */
 #define MIN_HUGEPTE_SHIFT	PAGE_SHIFT
 
-/* PMD_SHIFT determines what a second-level page table entry can map */
-#define PMD_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
-#define PMD_SIZE	(1UL << PMD_SHIFT)
-#define PMD_MASK	(~(PMD_SIZE-1))
-
-/* PUD_SHIFT determines what a third-level page table entry can map */
-#define PUD_SHIFT	(PMD_SHIFT + PMD_INDEX_SIZE)
-#define PUD_SIZE	(1UL << PUD_SHIFT)
-#define PUD_MASK	(~(PUD_SIZE-1))
-
-/* PGDIR_SHIFT determines what a fourth-level page table entry can map */
-#define PGDIR_SHIFT	(PUD_SHIFT + PUD_INDEX_SIZE)
-#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
-#define PGDIR_MASK	(~(PGDIR_SIZE-1))
-
-#define _PAGE_COMBO	0x00001000 /* this is a combo 4k page */
-#define _PAGE_4K_PFN	0x00002000 /* PFN is for a single 4k page */
+#define H_PAGE_COMBO	0x00001000 /* this is a combo 4k page */
+#define H_PAGE_4K_PFN	0x00002000 /* PFN is for a single 4k page */
+/*
+ * We need to differentiate between explicit huge page and THP huge
+ * page, since THP huge page also need to track real subpage details
+ */
+#define H_PAGE_THP_HUGE  H_PAGE_4K_PFN
 
 /*
- * Used to track subpage group valid if _PAGE_COMBO is set
- * This overloads _PAGE_F_GIX and _PAGE_F_SECOND
+ * Used to track subpage group valid if H_PAGE_COMBO is set
+ * This overloads H_PAGE_F_GIX and H_PAGE_F_SECOND
  */
-#define _PAGE_COMBO_VALID	(_PAGE_F_GIX | _PAGE_F_SECOND)
+#define H_PAGE_COMBO_VALID	(H_PAGE_F_GIX | H_PAGE_F_SECOND)
 
 /* PTE flags to conserve for HPTE identification */
-#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_F_SECOND | \
-			 _PAGE_F_GIX | _PAGE_HASHPTE | _PAGE_COMBO)
-
-/* Shift to put page number into pte.
- *
- * That gives us a max RPN of 41 bits, which means a max of 57 bits
- * of addressable physical space, or 53 bits for the special 4k PFNs.
- */
-#define PTE_RPN_SHIFT	(16)
-#define PTE_RPN_SIZE	(41)
+#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_F_SECOND | \
+			 H_PAGE_F_GIX | H_PAGE_HASHPTE | H_PAGE_COMBO)
 /*
  * we support 16 fragments per PTE page of 64K size.
  */
-#define PTE_FRAG_NR	16
+#define H_PTE_FRAG_NR	16
 /*
  * We use a 2K PTE page fragment and another 2K for storing
  * real_pte_t hash index
 */
-#define PTE_FRAG_SIZE_SHIFT  12
+#define H_PTE_FRAG_SIZE_SHIFT  12
 #define PTE_FRAG_SIZE (1UL << PTE_FRAG_SIZE_SHIFT)
 
-/* Bits to mask out from a PMD to get to the PTE page */
-#define PMD_MASKED_BITS		0xc0000000000000ffUL
-/* Bits to mask out from a PUD to get to the PMD page */
-#define PUD_MASKED_BITS		0xc0000000000000ffUL
-/* Bits to mask out from a PGD to get to the PUD page */
-#define PGD_MASKED_BITS		0xc0000000000000ffUL
-
 #ifndef __ASSEMBLY__
+#include <asm/errno.h>
 
 /*
  * With 64K pages on hash table, we have a special PTE format that
@@ -83,9 +54,9 @@ static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)
 	rpte.pte = pte;
 	rpte.hidx = 0;
-	if (pte_val(pte) & _PAGE_COMBO) {
+	if (pte_val(pte) & H_PAGE_COMBO) {
 		/*
-		 * Make sure we order the hidx load against the _PAGE_COMBO
+		 * Make sure we order the hidx load against the H_PAGE_COMBO
 		 * check. The store side ordering is done in __hash_page_4K
 		 */
 		smp_rmb();
@@ -97,9 +68,9 @@ static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)
 static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
 {
-	if ((pte_val(rpte.pte) & _PAGE_COMBO))
+	if ((pte_val(rpte.pte) & H_PAGE_COMBO))
 		return (rpte.hidx >> (index<<2)) & 0xf;
-	return (pte_val(rpte.pte) >> _PAGE_F_GIX_SHIFT) & 0xf;
+	return (pte_val(rpte.pte) >> H_PAGE_F_GIX_SHIFT) & 0xf;
 }
 
 #define __rpte_to_pte(r)	((r).pte)
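The fragment constants above are easy to verify, and they explain the hidx indexing in __rpte_to_hidx() (a worked check, not part of the diff):

	fragment size      : 1 << 12             = 4K   (H_PTE_FRAG_SIZE_SHIFT)
	fragments per page : 64K / 4K            = 16   (H_PTE_FRAG_NR)
	PTE area           : 256 PTEs * 8 bytes  = 2K   (2^H_PTE_INDEX_SIZE entries)
	hidx area          : 256 * 8 bytes       = 2K   (16 nibbles per PTE)

Each 64K page has 16 4K subpages, and each subpage gets a 4-bit hash-slot number, 16 * 4 = 64 bits of hidx per PTE. So (rpte.hidx >> (index << 2)) & 0xf picks subpage index's slot; for example, index 5 reads bits 20-23.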
@@ -122,79 +93,32 @@ extern bool __rpte_sub_valid(real_pte_t rpte, unsigned long index);
 #define pte_iterate_hashed_end() } while(0); } } while(0)
 
 #define pte_pagesize_index(mm, addr, pte)	\
-	(((pte) & _PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)
-
-#define remap_4k_pfn(vma, addr, pfn, prot)				\
-	(WARN_ON(((pfn) >= (1UL << PTE_RPN_SIZE))) ? -EINVAL :	\
-		remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,	\
-			__pgprot(pgprot_val((prot)) | _PAGE_4K_PFN)))
-
-#define PTE_TABLE_SIZE	PTE_FRAG_SIZE
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define PMD_TABLE_SIZE	((sizeof(pmd_t) << PMD_INDEX_SIZE) + (sizeof(unsigned long) << PMD_INDEX_SIZE))
-#else
-#define PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
-#endif
-#define PUD_TABLE_SIZE	(sizeof(pud_t) << PUD_INDEX_SIZE)
-#define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)
-
-#ifdef CONFIG_HUGETLB_PAGE
-/*
- * We have PGD_INDEX_SIZ = 12 and PTE_INDEX_SIZE = 8, so that we can have
- * 16GB hugepage pte in PGD and 16MB hugepage pte at PMD;
- *
- * Defined in such a way that we can optimize away code block at build time
- * if CONFIG_HUGETLB_PAGE=n.
- */
-static inline int pmd_huge(pmd_t pmd)
-{
-	/*
-	 * leaf pte for huge page
-	 */
-	return !!(pmd_val(pmd) & _PAGE_PTE);
-}
-
-static inline int pud_huge(pud_t pud)
-{
-	/*
-	 * leaf pte for huge page
-	 */
-	return !!(pud_val(pud) & _PAGE_PTE);
-}
+	(((pte) & H_PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)
 
-static inline int pgd_huge(pgd_t pgd)
-{
-	/*
-	 * leaf pte for huge page
-	 */
-	return !!(pgd_val(pgd) & _PAGE_PTE);
-}
-#define pgd_huge pgd_huge
+extern int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
+			   unsigned long pfn, unsigned long size, pgprot_t);
+static inline int hash__remap_4k_pfn(struct vm_area_struct *vma, unsigned long addr,
+				     unsigned long pfn, pgprot_t prot)
+{
+	if (pfn > (PTE_RPN_MASK >> PAGE_SHIFT)) {
+		WARN(1, "remap_4k_pfn called with wrong pfn value\n");
+		return -EINVAL;
+	}
+	return remap_pfn_range(vma, addr, pfn, PAGE_SIZE,
+			       __pgprot(pgprot_val(prot) | H_PAGE_4K_PFN));
+}
 
-#ifdef CONFIG_DEBUG_VM
-extern int hugepd_ok(hugepd_t hpd);
-#define is_hugepd(hpd)               (hugepd_ok(hpd))
-#else
-/*
- * With 64k page size, we have hugepage ptes in the pgd and pmd entries. We don't
- * need to setup hugepage directory for them. Our pte and page directory format
- * enable us to have this enabled.
- */
-static inline int hugepd_ok(hugepd_t hpd)
-{
-	return 0;
-}
-#define is_hugepd(pdep)			0
-#endif /* CONFIG_DEBUG_VM */
-
-#endif /* CONFIG_HUGETLB_PAGE */
+#define H_PTE_TABLE_SIZE	PTE_FRAG_SIZE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define H_PMD_TABLE_SIZE	((sizeof(pmd_t) << PMD_INDEX_SIZE) + \
+				 (sizeof(unsigned long) << PMD_INDEX_SIZE))
+#else
+#define H_PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
+#endif
+#define H_PUD_TABLE_SIZE	(sizeof(pud_t) << PUD_INDEX_SIZE)
+#define H_PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern unsigned long pmd_hugepage_update(struct mm_struct *mm,
-					 unsigned long addr,
-					 pmd_t *pmdp,
-					 unsigned long clr,
-					 unsigned long set);
 static inline char *get_hpte_slot_array(pmd_t *pmdp)
 {
 	/*
@@ -253,50 +177,35 @@ static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
  * that for explicit huge pages.
  *
  */
-static inline int pmd_trans_huge(pmd_t pmd)
+static inline int hash__pmd_trans_huge(pmd_t pmd)
 {
-	return !!((pmd_val(pmd) & (_PAGE_PTE | _PAGE_THP_HUGE)) ==
-		  (_PAGE_PTE | _PAGE_THP_HUGE));
+	return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE)) ==
+		  (_PAGE_PTE | H_PAGE_THP_HUGE));
 }
 
-static inline int pmd_large(pmd_t pmd)
+static inline int hash__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
 {
-	return !!(pmd_val(pmd) & _PAGE_PTE);
+	return (((pmd_raw(pmd_a) ^ pmd_raw(pmd_b)) & ~cpu_to_be64(_PAGE_HPTEFLAGS)) == 0);
 }
 
-static inline pmd_t pmd_mknotpresent(pmd_t pmd)
+static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
 {
-	return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);
-}
-
-#define __HAVE_ARCH_PMD_SAME
-static inline int pmd_same(pmd_t pmd_a, pmd_t pmd_b)
-{
-	return (((pmd_val(pmd_a) ^ pmd_val(pmd_b)) & ~_PAGE_HPTEFLAGS) == 0);
-}
-
-static inline int __pmdp_test_and_clear_young(struct mm_struct *mm,
-					      unsigned long addr, pmd_t *pmdp)
-{
-	unsigned long old;
-
-	if ((pmd_val(*pmdp) & (_PAGE_ACCESSED | _PAGE_HASHPTE)) == 0)
-		return 0;
-	old = pmd_hugepage_update(mm, addr, pmdp, _PAGE_ACCESSED, 0);
-	return ((old & _PAGE_ACCESSED) != 0);
-}
-
-#define __HAVE_ARCH_PMDP_SET_WRPROTECT
-static inline void pmdp_set_wrprotect(struct mm_struct *mm, unsigned long addr,
-				      pmd_t *pmdp)
-{
-	if ((pmd_val(*pmdp) & _PAGE_RW) == 0)
-		return;
-
-	pmd_hugepage_update(mm, addr, pmdp, _PAGE_RW, 0);
+	return __pmd(pmd_val(pmd) | (_PAGE_PTE | H_PAGE_THP_HUGE));
 }
 
+extern unsigned long hash__pmd_hugepage_update(struct mm_struct *mm,
+					       unsigned long addr, pmd_t *pmdp,
+					       unsigned long clr, unsigned long set);
+extern pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma,
+				       unsigned long address, pmd_t *pmdp);
+extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+					     pgtable_t pgtable);
+extern pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
+extern void hash__pmdp_huge_split_prepare(struct vm_area_struct *vma,
+					  unsigned long address, pmd_t *pmdp);
+extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
+					   unsigned long addr, pmd_t *pmdp);
+extern int hash__has_transparent_hugepage(void);
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 #endif	/* __ASSEMBLY__ */
......
This diff is collapsed.
#ifndef _ASM_POWERPC_BOOK3S_64_HUGETLB_RADIX_H
#define _ASM_POWERPC_BOOK3S_64_HUGETLB_RADIX_H
/*
* For radix we want generic code to handle hugetlb. But then if we want
* both hash and radix to be enabled together we need to workaround the
* limitations.
*/
void radix__flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
void radix__local_flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
extern unsigned long
radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
unsigned long len, unsigned long pgoff,
unsigned long flags);
#endif
-#ifndef _ASM_POWERPC_MMU_HASH64_H_
-#define _ASM_POWERPC_MMU_HASH64_H_
+#ifndef _ASM_POWERPC_BOOK3S_64_MMU_HASH_H_
+#define _ASM_POWERPC_BOOK3S_64_MMU_HASH_H_
 /*
  * PowerPC64 memory management structures
  *
@@ -78,6 +78,10 @@
 #define HPTE_V_SECONDARY	ASM_CONST(0x0000000000000002)
 #define HPTE_V_VALID		ASM_CONST(0x0000000000000001)
 
+/*
+ * ISA 3.0 have a different HPTE format.
+ */
+#define HPTE_R_3_0_SSIZE_SHIFT	58
 #define HPTE_R_PP0		ASM_CONST(0x8000000000000000)
 #define HPTE_R_TS		ASM_CONST(0x4000000000000000)
 #define HPTE_R_KEY_HI		ASM_CONST(0x3000000000000000)
@@ -115,6 +119,7 @@
 #define POWER7_TLB_SETS		128	/* # sets in POWER7 TLB */
 #define POWER8_TLB_SETS		512	/* # sets in POWER8 TLB */
 #define POWER9_TLB_SETS_HASH	256	/* # sets in POWER9 TLB Hash mode */
+#define POWER9_TLB_SETS_RADIX	128	/* # sets in POWER9 TLB Radix mode */
 
 #ifndef __ASSEMBLY__
@@ -127,24 +132,6 @@ extern struct hash_pte *htab_address;
 extern unsigned long htab_size_bytes;
 extern unsigned long htab_hash_mask;
 
-/*
- * Page size definition
- *
- *    shift : is the "PAGE_SHIFT" value for that page size
- *    sllp  : is a bit mask with the value of SLB L || LP to be or'ed
- *            directly to a slbmte "vsid" value
- *    penc  : is the HPTE encoding mask for the "LP" field:
- *
- */
-struct mmu_psize_def
-{
-	unsigned int	shift;	/* number of bits */
-	int		penc[MMU_PAGE_COUNT];	/* HPTE encoding */
-	unsigned int	tlbiel;	/* tlbiel supported for that page size */
-	unsigned long	avpnm;	/* bits to mask out in AVPN in the HPTE */
-	unsigned long	sllp;	/* SLB L||LP (exact mask to use in slbmte) */
-};
-extern struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT];
-
 static inline int shift_to_mmu_psize(unsigned int shift)
 {
@@ -210,11 +197,6 @@ static inline int segment_shift(int ssize)
 /*
  * The current system page and segment sizes
  */
-extern int mmu_linear_psize;
-extern int mmu_virtual_psize;
-extern int mmu_vmalloc_psize;
-extern int mmu_vmemmap_psize;
-extern int mmu_io_psize;
 extern int mmu_kernel_ssize;
 extern int mmu_highuser_ssize;
 extern u16 mmu_slb_size;
@@ -247,7 +229,8 @@ static inline unsigned long hpte_encode_avpn(unsigned long vpn, int psize,
 	 */
 	v = (vpn >> (23 - VPN_SHIFT)) & ~(mmu_psize_defs[psize].avpnm);
 	v <<= HPTE_V_AVPN_SHIFT;
-	v |= ((unsigned long) ssize) << HPTE_V_SSIZE_SHIFT;
+	if (!cpu_has_feature(CPU_FTR_ARCH_300))
+		v |= ((unsigned long) ssize) << HPTE_V_SSIZE_SHIFT;
 	return v;
 }
 
@@ -271,8 +254,12 @@ static inline unsigned long hpte_encode_v(unsigned long vpn, int base_psize,
  * aligned for the requested page size
  */
 static inline unsigned long hpte_encode_r(unsigned long pa, int base_psize,
-					  int actual_psize)
+					  int actual_psize, int ssize)
 {
+	if (cpu_has_feature(CPU_FTR_ARCH_300))
+		pa |= ((unsigned long) ssize) << HPTE_R_3_0_SSIZE_SHIFT;
+
 	/* A 4K page needs no special encoding */
 	if (actual_psize == MMU_PAGE_4K)
 		return pa & HPTE_R_RPN;
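Net effect of these two hunks: before ISA 3.0, the segment size lives in the first HPTE doubleword via HPTE_V_SSIZE_SHIFT; on ISA 3.0 parts (CPU_FTR_ARCH_300) it moves to bit 58 of the second doubleword. A worked example for a 1TB segment, assuming MMU_SEGSIZE_1T encodes as 1 (a check added here, not part of the diff):

	pre-ISA 3.0:  v |= 1UL << HPTE_V_SSIZE_SHIFT;      /* first doubleword */
	ISA 3.0:      r |= 1UL << HPTE_R_3_0_SSIZE_SHIFT;  /* second doubleword, bit 58 */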
@@ -476,7 +463,7 @@ extern void slb_set_size(u16 size);
 	add	rt,rt,rx
 
 /* 4 bits per slice and we have one slice per 1TB */
-#define SLICE_ARRAY_SIZE  (PGTABLE_RANGE >> 41)
+#define SLICE_ARRAY_SIZE  (H_PGTABLE_RANGE >> 41)
 
 #ifndef __ASSEMBLY__
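The constant follows from the comment above it: one slice per 1TB means H_PGTABLE_RANGE >> 40 slices, and at 4 bits (half a byte) per slice the array needs half that many bytes, hence the >> 41 (a worked derivation, not part of the diff):

	slices = H_PGTABLE_RANGE >> 40
	bytes  = slices / 2 = H_PGTABLE_RANGE >> 41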
@@ -512,38 +499,6 @@ static inline void subpage_prot_free(struct mm_struct *mm) {}
 static inline void subpage_prot_init_new_context(struct mm_struct *mm) { }
 #endif /* CONFIG_PPC_SUBPAGE_PROT */
 
-typedef unsigned long mm_context_id_t;
-struct spinlock;
-
-typedef struct {
-	mm_context_id_t id;
-	u16 user_psize;		/* page size index */
-
-#ifdef CONFIG_PPC_MM_SLICES
-	u64 low_slices_psize;	/* SLB page size encodings */
-	unsigned char high_slices_psize[SLICE_ARRAY_SIZE];
-#else
-	u16 sllp;		/* SLB page size encoding */
-#endif
-	unsigned long vdso_base;
-#ifdef CONFIG_PPC_SUBPAGE_PROT
-	struct subpage_prot_table spt;
-#endif /* CONFIG_PPC_SUBPAGE_PROT */
-#ifdef CONFIG_PPC_ICSWX
-	struct spinlock *cop_lockp;	/* guard acop and cop_pid */
-	unsigned long acop;		/* mask of enabled coprocessor types */
-	unsigned int cop_pid;		/* pid value used with coprocessors */
-#endif /* CONFIG_PPC_ICSWX */
-#ifdef CONFIG_PPC_64K_PAGES
-	/* for 4K PTE fragment support */
-	void *pte_frag;
-#endif
-#ifdef CONFIG_SPAPR_TCE_IOMMU
-	struct list_head iommu_group_mem_list;
-#endif
-} mm_context_t;
-
 #if 0
 /*
  * The code below is equivalent to this function for arguments
@@ -579,7 +534,7 @@ static inline unsigned long get_vsid(unsigned long context, unsigned long ea,
 	/*
 	 * Bad address. We return VSID 0 for that
 	 */
-	if ((ea & ~REGION_MASK) >= PGTABLE_RANGE)
+	if ((ea & ~REGION_MASK) >= H_PGTABLE_RANGE)
 		return 0;
 
 	if (ssize == MMU_SEGSIZE_256M)
@@ -613,4 +568,4 @@ unsigned htab_shift_for_mem_size(unsigned long mem_size);
 #endif /* __ASSEMBLY__ */
 
-#endif /* _ASM_POWERPC_MMU_HASH64_H_ */
+#endif /* _ASM_POWERPC_BOOK3S_64_MMU_HASH_H_ */
#ifndef _ASM_POWERPC_BOOK3S_64_MMU_H_
#define _ASM_POWERPC_BOOK3S_64_MMU_H_
#ifndef __ASSEMBLY__
/*
* Page size definition
*
* shift : is the "PAGE_SHIFT" value for that page size
* sllp : is a bit mask with the value of SLB L || LP to be or'ed
* directly to a slbmte "vsid" value
* penc : is the HPTE encoding mask for the "LP" field:
*
*/
struct mmu_psize_def {
unsigned int shift; /* number of bits */
int penc[MMU_PAGE_COUNT]; /* HPTE encoding */
unsigned int tlbiel; /* tlbiel supported for that page size */
unsigned long avpnm; /* bits to mask out in AVPN in the HPTE */
union {
unsigned long sllp; /* SLB L||LP (exact mask to use in slbmte) */
unsigned long ap; /* Ap encoding used by PowerISA 3.0 */
};
};
extern struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT];
#define radix_enabled() mmu_has_feature(MMU_FTR_RADIX)
#endif /* __ASSEMBLY__ */
/* 64-bit classic hash table MMU */
#include <asm/book3s/64/mmu-hash.h>
#ifndef __ASSEMBLY__
/*
* ISA 3.0 partiton and process table entry format
*/
struct prtb_entry {
__be64 prtb0;
__be64 prtb1;
};
extern struct prtb_entry *process_tb;
struct patb_entry {
__be64 patb0;
__be64 patb1;
};
extern struct patb_entry *partition_tb;
#define PATB_HR (1UL << 63)
#define PATB_GR (1UL << 63)
#define RPDB_MASK 0x0ffffffffffff00fUL
#define RPDB_SHIFT (1UL << 8)
/*
* Limit process table to PAGE_SIZE table. This
* also limit the max pid we can support.
* MAX_USER_CONTEXT * 16 bytes of space.
*/
#define PRTB_SIZE_SHIFT (CONTEXT_BITS + 4)
/*
* Power9 currently only support 64K partition table size.
*/
#define PATB_SIZE_SHIFT 16
typedef unsigned long mm_context_id_t;
struct spinlock;
typedef struct {
mm_context_id_t id;
u16 user_psize; /* page size index */
#ifdef CONFIG_PPC_MM_SLICES
u64 low_slices_psize; /* SLB page size encodings */
unsigned char high_slices_psize[SLICE_ARRAY_SIZE];
#else
u16 sllp; /* SLB page size encoding */
#endif
unsigned long vdso_base;
#ifdef CONFIG_PPC_SUBPAGE_PROT
struct subpage_prot_table spt;
#endif /* CONFIG_PPC_SUBPAGE_PROT */
#ifdef CONFIG_PPC_ICSWX
struct spinlock *cop_lockp; /* guard acop and cop_pid */
unsigned long acop; /* mask of enabled coprocessor types */
unsigned int cop_pid; /* pid value used with coprocessors */
#endif /* CONFIG_PPC_ICSWX */
#ifdef CONFIG_PPC_64K_PAGES
/* for 4K PTE fragment support */
void *pte_frag;
#endif
#ifdef CONFIG_SPAPR_TCE_IOMMU
struct list_head iommu_group_mem_list;
#endif
} mm_context_t;
/*
* The current system page and segment sizes
*/
extern int mmu_linear_psize;
extern int mmu_virtual_psize;
extern int mmu_vmalloc_psize;
extern int mmu_vmemmap_psize;
extern int mmu_io_psize;
/* MMU initialization */
extern void radix_init_native(void);
extern void hash__early_init_mmu(void);
extern void radix__early_init_mmu(void);
static inline void early_init_mmu(void)
{
if (radix_enabled())
return radix__early_init_mmu();
return hash__early_init_mmu();
}
extern void hash__early_init_mmu_secondary(void);
extern void radix__early_init_mmu_secondary(void);
static inline void early_init_mmu_secondary(void)
{
if (radix_enabled())
return radix__early_init_mmu_secondary();
return hash__early_init_mmu_secondary();
}
extern void hash__setup_initial_memory_limit(phys_addr_t first_memblock_base,
phys_addr_t first_memblock_size);
extern void radix__setup_initial_memory_limit(phys_addr_t first_memblock_base,
phys_addr_t first_memblock_size);
static inline void setup_initial_memory_limit(phys_addr_t first_memblock_base,
phys_addr_t first_memblock_size)
{
if (radix_enabled())
return radix__setup_initial_memory_limit(first_memblock_base,
first_memblock_size);
return hash__setup_initial_memory_limit(first_memblock_base,
first_memblock_size);
}
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_MMU_H_ */
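The process-table sizing above is straightforward to check (a worked note, not part of the diff): each prtb_entry holds two __be64 fields, i.e. 16 = 2^4 bytes, so a table covering 2^CONTEXT_BITS contexts occupies 2^CONTEXT_BITS * 2^4 bytes, which is exactly why PRTB_SIZE_SHIFT is CONTEXT_BITS + 4:

	sizeof(struct prtb_entry) = 2 * 8 bytes = 2^4
	table size = 1 << (CONTEXT_BITS + 4) bytes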
#ifndef _ASM_POWERPC_BOOK3S_64_PGALLOC_H
#define _ASM_POWERPC_BOOK3S_64_PGALLOC_H
/*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/slab.h>
#include <linux/cpumask.h>
#include <linux/percpu.h>
struct vmemmap_backing {
struct vmemmap_backing *list;
unsigned long phys;
unsigned long virt_addr;
};
extern struct vmemmap_backing *vmemmap_list;
/*
* Functions that deal with pagetables that could be at any level of
* the table need to be passed an "index_size" so they know how to
* handle allocation. For PTE pages (which are linked to a struct
* page for now, and drawn from the main get_free_pages() pool), the
* allocation size will be (2^index_size * sizeof(pointer)) and
* allocations are drawn from the kmem_cache in PGT_CACHE(index_size).
*
* The maximum index size needs to be big enough to allow any
* pagetable sizes we need, but small enough to fit in the low bits of
* any page table pointer. In other words all pagetables, even tiny
* ones, must be aligned to allow at least enough low 0 bits to
* contain this value. This value is also used as a mask, so it must
* be one less than a power of two.
*/
#define MAX_PGTABLE_INDEX_SIZE 0xf
extern struct kmem_cache *pgtable_cache[];
#define PGT_CACHE(shift) ({ \
BUG_ON(!(shift)); \
pgtable_cache[(shift) - 1]; \
})
#define PGALLOC_GFP GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO
extern pte_t *pte_fragment_alloc(struct mm_struct *, unsigned long, int);
extern void pte_fragment_free(unsigned long *, int);
extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
#ifdef CONFIG_SMP
extern void __tlb_remove_table(void *_table);
#endif
static inline pgd_t *radix__pgd_alloc(struct mm_struct *mm)
{
#ifdef CONFIG_PPC_64K_PAGES
return (pgd_t *)__get_free_page(PGALLOC_GFP);
#else
struct page *page;
page = alloc_pages(PGALLOC_GFP, 4);
if (!page)
return NULL;
return (pgd_t *) page_address(page);
#endif
}
static inline void radix__pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
#ifdef CONFIG_PPC_64K_PAGES
free_page((unsigned long)pgd);
#else
free_pages((unsigned long)pgd, 4);
#endif
}
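The asymmetric allocation in radix__pgd_alloc()/radix__pgd_free() checks out against the radix geometry (a worked note, not part of the diff): RADIX_PGD_INDEX_SIZE is 13, so the radix PGD needs 2^13 entries * 8 bytes = 64K. With 64K pages that is a single page; with 4K pages it takes an order-4 allocation, 2^4 * 4K = 64K, which is why alloc_pages() and free_pages() are called with order 4 in the !PPC_64K_PAGES branch.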
static inline pgd_t *pgd_alloc(struct mm_struct *mm)
{
if (radix_enabled())
return radix__pgd_alloc(mm);
return kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE), GFP_KERNEL);
}
static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
{
if (radix_enabled())
return radix__pgd_free(mm, pgd);
kmem_cache_free(PGT_CACHE(PGD_INDEX_SIZE), pgd);
}
static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, pud_t *pud)
{
pgd_set(pgd, __pgtable_ptr_val(pud) | PGD_VAL_BITS);
}
static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
{
return kmem_cache_alloc(PGT_CACHE(PUD_INDEX_SIZE),
GFP_KERNEL|__GFP_REPEAT);
}
static inline void pud_free(struct mm_struct *mm, pud_t *pud)
{
kmem_cache_free(PGT_CACHE(PUD_INDEX_SIZE), pud);
}
static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
{
pud_set(pud, __pgtable_ptr_val(pmd) | PUD_VAL_BITS);
}
static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
unsigned long address)
{
pgtable_free_tlb(tlb, pud, PUD_INDEX_SIZE);
}
static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{
return kmem_cache_alloc(PGT_CACHE(PMD_CACHE_INDEX),
GFP_KERNEL|__GFP_REPEAT);
}
static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
{
kmem_cache_free(PGT_CACHE(PMD_CACHE_INDEX), pmd);
}
static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
unsigned long address)
{
return pgtable_free_tlb(tlb, pmd, PMD_CACHE_INDEX);
}
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
pte_t *pte)
{
pmd_set(pmd, __pgtable_ptr_val(pte) | PMD_VAL_BITS);
}
static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
pgtable_t pte_page)
{
pmd_set(pmd, __pgtable_ptr_val(pte_page) | PMD_VAL_BITS);
}
static inline pgtable_t pmd_pgtable(pmd_t pmd)
{
return (pgtable_t)pmd_page_vaddr(pmd);
}
#ifdef CONFIG_PPC_4K_PAGES
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
{
return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
}
static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
unsigned long address)
{
struct page *page;
pte_t *pte;
pte = pte_alloc_one_kernel(mm, address);
if (!pte)
return NULL;
page = virt_to_page(pte);
if (!pgtable_page_ctor(page)) {
__free_page(page);
return NULL;
}
return pte;
}
#else /* if CONFIG_PPC_64K_PAGES */
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
{
return (pte_t *)pte_fragment_alloc(mm, address, 1);
}
static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
unsigned long address)
{
return (pgtable_t)pte_fragment_alloc(mm, address, 0);
}
#endif
static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
pte_fragment_free((unsigned long *)pte, 1);
}
static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
{
pte_fragment_free((unsigned long *)ptepage, 0);
}
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
unsigned long address)
{
tlb_flush_pgtable(tlb, address);
pgtable_free_tlb(tlb, table, 0);
}
#define check_pgt_cache() do { } while (0)
#endif /* _ASM_POWERPC_BOOK3S_64_PGALLOC_H */
#ifndef _ASM_POWERPC_BOOK3S_64_PGTABLE_4K_H
#define _ASM_POWERPC_BOOK3S_64_PGTABLE_4K_H
/*
* hash 4k can't share hugetlb and also doesn't support THP
*/
#ifndef __ASSEMBLY__
#ifdef CONFIG_HUGETLB_PAGE
static inline int pmd_huge(pmd_t pmd)
{
/*
* leaf pte for huge page
*/
if (radix_enabled())
return !!(pmd_val(pmd) & _PAGE_PTE);
return 0;
}
static inline int pud_huge(pud_t pud)
{
/*
* leaf pte for huge page
*/
if (radix_enabled())
return !!(pud_val(pud) & _PAGE_PTE);
return 0;
}
static inline int pgd_huge(pgd_t pgd)
{
/*
* leaf pte for huge page
*/
if (radix_enabled())
return !!(pgd_val(pgd) & _PAGE_PTE);
return 0;
}
#define pgd_huge pgd_huge
/*
* With radix , we have hugepage ptes in the pud and pmd entries. We don't
* need to setup hugepage directory for them. Our pte and page directory format
* enable us to have this enabled.
*/
static inline int hugepd_ok(hugepd_t hpd)
{
if (radix_enabled())
return 0;
return hash__hugepd_ok(hpd);
}
#define is_hugepd(hpd) (hugepd_ok(hpd))
#endif /* CONFIG_HUGETLB_PAGE */
#endif /* __ASSEMBLY__ */
#endif /*_ASM_POWERPC_BOOK3S_64_PGTABLE_4K_H */
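On radix, pmd_huge()/pud_huge()/pgd_huge() above reduce to a single bit test: a leaf (huge page) entry carries _PAGE_PTE, while a pointer to a lower-level table does not, which is why no separate hugepage directory is required. A toy sketch of that convention, with an invented bit value rather than the real _PAGE_PTE:

#include <stdbool.h>
#include <stdio.h>

#define MY_PAGE_PTE (1UL << 62)	/* invented leaf marker, not the real bit */

static bool entry_is_leaf(unsigned long entry)
{
	return (entry & MY_PAGE_PTE) != 0;
}

int main(void)
{
	unsigned long table_ptr = 0x1000;		   /* next-level table */
	unsigned long huge_leaf = 0x200000 | MY_PAGE_PTE; /* maps a huge page */

	printf("%d %d\n", entry_is_leaf(table_ptr), entry_is_leaf(huge_leaf));
	return 0;
}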
#ifndef _ASM_POWERPC_BOOK3S_64_PGTABLE_64K_H
#define _ASM_POWERPC_BOOK3S_64_PGTABLE_64K_H
#ifndef __ASSEMBLY__
#ifdef CONFIG_HUGETLB_PAGE
/*
 * We have PGD_INDEX_SIZE = 12 and PTE_INDEX_SIZE = 8, so that we can have
 * 16GB hugepage PTEs at the PGD level and 16MB hugepage PTEs at the PMD level.
 *
 * Defined in such a way that the code blocks can be optimised away at build
 * time if CONFIG_HUGETLB_PAGE=n.
*/
static inline int pmd_huge(pmd_t pmd)
{
/*
* leaf pte for huge page
*/
return !!(pmd_val(pmd) & _PAGE_PTE);
}
static inline int pud_huge(pud_t pud)
{
/*
* leaf pte for huge page
*/
return !!(pud_val(pud) & _PAGE_PTE);
}
static inline int pgd_huge(pgd_t pgd)
{
/*
* leaf pte for huge page
*/
return !!(pgd_val(pgd) & _PAGE_PTE);
}
#define pgd_huge pgd_huge
#ifdef CONFIG_DEBUG_VM
extern int hugepd_ok(hugepd_t hpd);
#define is_hugepd(hpd) (hugepd_ok(hpd))
#else
/*
 * With a 64k page size, we have hugepage PTEs in the PGD and PMD entries. We
 * don't need to set up a hugepage directory for them; our PTE and page
 * directory formats already allow this.
*/
static inline int hugepd_ok(hugepd_t hpd)
{
return 0;
}
#define is_hugepd(pdep) 0
#endif /* CONFIG_DEBUG_VM */
#endif /* CONFIG_HUGETLB_PAGE */
static inline int remap_4k_pfn(struct vm_area_struct *vma, unsigned long addr,
unsigned long pfn, pgprot_t prot)
{
if (radix_enabled())
BUG();
return hash__remap_4k_pfn(vma, addr, pfn, prot);
}
#endif /* __ASSEMBLY__ */
#endif /*_ASM_POWERPC_BOOK3S_64_PGTABLE_64K_H */
#ifndef _ASM_POWERPC_PGTABLE_RADIX_4K_H
#define _ASM_POWERPC_PGTABLE_RADIX_4K_H
/*
 * For a 4K page size the supported index sizes are 13/9/9/9 (PGD/PUD/PMD/PTE)
*/
#define RADIX_PTE_INDEX_SIZE 9 /* 2MB huge page */
#define RADIX_PMD_INDEX_SIZE 9 /* 1G huge page */
#define RADIX_PUD_INDEX_SIZE 9
#define RADIX_PGD_INDEX_SIZE 13
#endif /* _ASM_POWERPC_PGTABLE_RADIX_4K_H */
#ifndef _ASM_POWERPC_PGTABLE_RADIX_64K_H
#define _ASM_POWERPC_PGTABLE_RADIX_64K_H
/*
 * For a 64K page size the supported index sizes are 13/9/9/5 (PGD/PUD/PMD/PTE)
*/
#define RADIX_PTE_INDEX_SIZE 5 /* 2MB huge page */
#define RADIX_PMD_INDEX_SIZE 9 /* 1G huge page */
#define RADIX_PUD_INDEX_SIZE 9
#define RADIX_PGD_INDEX_SIZE 13
#endif /* _ASM_POWERPC_PGTABLE_RADIX_64K_H */
#ifndef _ASM_POWERPC_PGTABLE_RADIX_H
#define _ASM_POWERPC_PGTABLE_RADIX_H
#ifndef __ASSEMBLY__
#include <asm/cmpxchg.h>
#endif
#ifdef CONFIG_PPC_64K_PAGES
#include <asm/book3s/64/radix-64k.h>
#else
#include <asm/book3s/64/radix-4k.h>
#endif
/* An empty PTE can still have an R or C writeback pending */
#define RADIX_PTE_NONE_MASK (_PAGE_DIRTY | _PAGE_ACCESSED)
/* Bits to set in a radix PMD/PUD/PGD entry */
#define RADIX_PMD_VAL_BITS (0x8000000000000000UL | RADIX_PTE_INDEX_SIZE)
#define RADIX_PUD_VAL_BITS (0x8000000000000000UL | RADIX_PMD_INDEX_SIZE)
#define RADIX_PGD_VAL_BITS (0x8000000000000000UL | RADIX_PUD_INDEX_SIZE)
/* Entries must not have anything set in the reserved bits or the leaf bits */
#define RADIX_PMD_BAD_BITS 0x60000000000000e0UL
#define RADIX_PUD_BAD_BITS 0x60000000000000e0UL
#define RADIX_PGD_BAD_BITS 0x60000000000000e0UL
/*
* Size of EA range mapped by our pagetables.
*/
#define RADIX_PGTABLE_EADDR_SIZE (RADIX_PTE_INDEX_SIZE + RADIX_PMD_INDEX_SIZE + \
RADIX_PUD_INDEX_SIZE + RADIX_PGD_INDEX_SIZE + PAGE_SHIFT)
#define RADIX_PGTABLE_RANGE (ASM_CONST(1) << RADIX_PGTABLE_EADDR_SIZE)
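Plugging the per-geometry values into RADIX_PGTABLE_EADDR_SIZE above: with 4K pages the mapped range is 9 + 9 + 9 + 13 + 12 = 52 bits, and with 64K pages it is 5 + 9 + 9 + 13 + 16 = 52 bits, so both geometries cover the same 2^52 bytes of effective address space. A small compile-time check of that arithmetic (index values copied from the two radix headers above):

/* Sanity-check sketch; compile with any C11 compiler. */
#define PTE_4K 9
#define PMD_4K 9
#define PUD_4K 9
#define PGD_4K 13
#define SHIFT_4K 12

#define PTE_64K 5
#define PMD_64K 9
#define PUD_64K 9
#define PGD_64K 13
#define SHIFT_64K 16

_Static_assert(PTE_4K + PMD_4K + PUD_4K + PGD_4K + SHIFT_4K == 52,
	       "4K radix geometry must map 52 bits");
_Static_assert(PTE_64K + PMD_64K + PUD_64K + PGD_64K + SHIFT_64K == 52,
	       "64K radix geometry must map 52 bits");

int main(void) { return 0; }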
/*
 * We support a 52-bit address space. Use the top bit for the kernel
 * virtual mapping, and make sure the kernel fits in the top
 * quadrant.
*
* +------------------+
* +------------------+ Kernel virtual map (0xc008000000000000)
* | |
* | |
* | |
* 0b11......+------------------+ Kernel linear map (0xc....)
* | |
* | 2 quadrant |
* | |
* 0b10......+------------------+
* | |
* | 1 quadrant |
* | |
* 0b01......+------------------+
* | |
* | 0 quadrant |
* | |
* 0b00......+------------------+
*
*
* 3rd quadrant expanded:
* +------------------------------+
* | |
* | |
* | |
* +------------------------------+ Kernel IO map end (0xc010000000000000)
* | |
* | |
* | 1/2 of virtual map |
* | |
* | |
* +------------------------------+ Kernel IO map start
* | |
* | 1/4 of virtual map |
* | |
 * +------------------------------+ Kernel vmemmap start
* | |
* | 1/4 of virtual map |
* | |
* +------------------------------+ Kernel virt start (0xc008000000000000)
* | |
* | |
* | |
* +------------------------------+ Kernel linear (0xc.....)
*/
#define RADIX_KERN_VIRT_START ASM_CONST(0xc008000000000000)
#define RADIX_KERN_VIRT_SIZE ASM_CONST(0x0008000000000000)
/*
 * The vmalloc space starts at the beginning of that region and
 * occupies a quarter of it with the radix config
 * (we keep another quarter for the virtual memmap).
*/
#define RADIX_VMALLOC_START RADIX_KERN_VIRT_START
#define RADIX_VMALLOC_SIZE (RADIX_KERN_VIRT_SIZE >> 2)
#define RADIX_VMALLOC_END (RADIX_VMALLOC_START + RADIX_VMALLOC_SIZE)
/*
 * Defines the address of the vmemmap area, placed directly after
 * the vmalloc space.
*/
#define RADIX_VMEMMAP_BASE (RADIX_VMALLOC_END)
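The split works out as follows: RADIX_KERN_VIRT_SIZE is 2^51 bytes, so the vmalloc quarter is 2^49 bytes (512 TiB) starting at 0xc008000000000000, and per the comment above the vmemmap region takes the next quarter starting at 0xc00a000000000000. A quick sketch that derives those numbers from the constants:

#include <stdio.h>

int main(void)
{
	unsigned long start = 0xc008000000000000UL;
	unsigned long size  = 0x0008000000000000UL;	/* 2^51 */
	unsigned long vmalloc_size = size >> 2;		/* one quarter */

	printf("vmalloc: %#lx - %#lx (%lu TiB)\n", start,
	       start + vmalloc_size, vmalloc_size >> 40);
	printf("vmemmap starts at %#lx\n", start + vmalloc_size);
	return 0;
}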
#ifndef __ASSEMBLY__
#define RADIX_PTE_TABLE_SIZE (sizeof(pte_t) << RADIX_PTE_INDEX_SIZE)
#define RADIX_PMD_TABLE_SIZE (sizeof(pmd_t) << RADIX_PMD_INDEX_SIZE)
#define RADIX_PUD_TABLE_SIZE (sizeof(pud_t) << RADIX_PUD_INDEX_SIZE)
#define RADIX_PGD_TABLE_SIZE (sizeof(pgd_t) << RADIX_PGD_INDEX_SIZE)
static inline unsigned long radix__pte_update(struct mm_struct *mm,
unsigned long addr,
pte_t *ptep, unsigned long clr,
unsigned long set,
int huge)
{
pte_t pte;
unsigned long old_pte, new_pte;
do {
pte = READ_ONCE(*ptep);
old_pte = pte_val(pte);
new_pte = (old_pte | set) & ~clr;
} while (!pte_xchg(ptep, __pte(old_pte), __pte(new_pte)));
	/* We already do a sync in cmpxchg, is a ptesync still needed? */
asm volatile("ptesync" : : : "memory");
/* huge pages use the old page table lock */
if (!huge)
assert_pte_locked(mm, addr);
return old_pte;
}
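radix__pte_update() is the classic lock-free read-modify-write loop: snapshot the PTE, compute the new value, and retry if another CPU changed it in between (pte_xchg(), a cmpxchg wrapper, is shown later in pgtable-be-types.h). A user-space analogue of the same pattern using C11 atomics:

#include <stdatomic.h>
#include <stdio.h>

static unsigned long update_bits(_Atomic unsigned long *word,
				 unsigned long clr, unsigned long set)
{
	unsigned long old, new;

	do {
		old = atomic_load(word);
		new = (old | set) & ~clr;
		/* retry if *word changed under us, like pte_xchg() failing */
	} while (!atomic_compare_exchange_weak(word, &old, new));
	return old;
}

int main(void)
{
	_Atomic unsigned long pte = 0x100;

	printf("old=%#lx new=%#lx\n", update_bits(&pte, 0x100, 0x3),
	       atomic_load(&pte));
	return 0;
}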
/*
 * Set the dirty and/or accessed bits atomically in a Linux PTE.
 * This function doesn't need to invalidate the TLB.
*/
static inline void radix__ptep_set_access_flags(pte_t *ptep, pte_t entry)
{
pte_t pte;
unsigned long old_pte, new_pte;
unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED |
_PAGE_RW | _PAGE_EXEC);
do {
pte = READ_ONCE(*ptep);
old_pte = pte_val(pte);
new_pte = old_pte | set;
} while (!pte_xchg(ptep, __pte(old_pte), __pte(new_pte)));
	/* We already do a sync in cmpxchg, is a ptesync still needed? */
asm volatile("ptesync" : : : "memory");
}
static inline int radix__pte_same(pte_t pte_a, pte_t pte_b)
{
return ((pte_raw(pte_a) ^ pte_raw(pte_b)) == 0);
}
static inline int radix__pte_none(pte_t pte)
{
return (pte_val(pte) & ~RADIX_PTE_NONE_MASK) == 0;
}
static inline void radix__set_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte, int percpu)
{
*ptep = pte;
asm volatile("ptesync" : : : "memory");
}
static inline int radix__pmd_bad(pmd_t pmd)
{
return !!(pmd_val(pmd) & RADIX_PMD_BAD_BITS);
}
static inline int radix__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
{
return ((pmd_raw(pmd_a) ^ pmd_raw(pmd_b)) == 0);
}
static inline int radix__pud_bad(pud_t pud)
{
return !!(pud_val(pud) & RADIX_PUD_BAD_BITS);
}
static inline int radix__pgd_bad(pgd_t pgd)
{
return !!(pgd_val(pgd) & RADIX_PGD_BAD_BITS);
}
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static inline int radix__pmd_trans_huge(pmd_t pmd)
{
return !!(pmd_val(pmd) & _PAGE_PTE);
}
static inline pmd_t radix__pmd_mkhuge(pmd_t pmd)
{
return __pmd(pmd_val(pmd) | _PAGE_PTE);
}
static inline void radix__pmdp_huge_split_prepare(struct vm_area_struct *vma,
unsigned long address, pmd_t *pmdp)
{
/* Nothing to do for radix. */
return;
}
extern unsigned long radix__pmd_hugepage_update(struct mm_struct *mm, unsigned long addr,
pmd_t *pmdp, unsigned long clr,
unsigned long set);
extern pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma,
unsigned long address, pmd_t *pmdp);
extern void radix__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
pgtable_t pgtable);
extern pgtable_t radix__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
extern pmd_t radix__pmdp_huge_get_and_clear(struct mm_struct *mm,
unsigned long addr, pmd_t *pmdp);
extern int radix__has_transparent_hugepage(void);
#endif
extern int __meminit radix__vmemmap_create_mapping(unsigned long start,
unsigned long page_size,
unsigned long phys);
extern void radix__vmemmap_remove_mapping(unsigned long start,
unsigned long page_size);
extern int radix__map_kernel_page(unsigned long ea, unsigned long pa,
pgprot_t flags, unsigned int psz);
#endif /* __ASSEMBLY__ */
#endif
 #ifndef _ASM_POWERPC_BOOK3S_64_TLBFLUSH_HASH_H
 #define _ASM_POWERPC_BOOK3S_64_TLBFLUSH_HASH_H
-#define MMU_NO_CONTEXT	0
 /*
  * TLB flushing for 64-bit hash-MMU CPUs
  */
@@ -29,14 +27,21 @@ extern void __flush_tlb_pending(struct ppc64_tlb_batch *batch);
 static inline void arch_enter_lazy_mmu_mode(void)
 {
-	struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch);
+	struct ppc64_tlb_batch *batch;
+
+	if (radix_enabled())
+		return;
+	batch = this_cpu_ptr(&ppc64_tlb_batch);
 	batch->active = 1;
 }
 static inline void arch_leave_lazy_mmu_mode(void)
 {
-	struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch);
+	struct ppc64_tlb_batch *batch;
+
+	if (radix_enabled())
+		return;
+	batch = this_cpu_ptr(&ppc64_tlb_batch);
 	if (batch->index)
 		__flush_tlb_pending(batch);
@@ -52,40 +57,42 @@ extern void flush_hash_range(unsigned long number, int local);
 extern void flush_hash_hugepage(unsigned long vsid, unsigned long addr,
 				pmd_t *pmdp, unsigned int psize, int ssize,
 				unsigned long flags);
-static inline void local_flush_tlb_mm(struct mm_struct *mm)
+static inline void hash__local_flush_tlb_mm(struct mm_struct *mm)
 {
 }
-static inline void flush_tlb_mm(struct mm_struct *mm)
+static inline void hash__flush_tlb_mm(struct mm_struct *mm)
 {
 }
-static inline void local_flush_tlb_page(struct vm_area_struct *vma,
-					unsigned long vmaddr)
+static inline void hash__local_flush_tlb_page(struct vm_area_struct *vma,
+					      unsigned long vmaddr)
 {
 }
-static inline void flush_tlb_page(struct vm_area_struct *vma,
-				  unsigned long vmaddr)
+static inline void hash__flush_tlb_page(struct vm_area_struct *vma,
+					unsigned long vmaddr)
 {
 }
-static inline void flush_tlb_page_nohash(struct vm_area_struct *vma,
-					 unsigned long vmaddr)
+static inline void hash__flush_tlb_page_nohash(struct vm_area_struct *vma,
+					       unsigned long vmaddr)
 {
 }
-static inline void flush_tlb_range(struct vm_area_struct *vma,
-				   unsigned long start, unsigned long end)
+static inline void hash__flush_tlb_range(struct vm_area_struct *vma,
					 unsigned long start, unsigned long end)
 {
 }
-static inline void flush_tlb_kernel_range(unsigned long start,
-					  unsigned long end)
+static inline void hash__flush_tlb_kernel_range(unsigned long start,
+						unsigned long end)
 {
 }
+struct mmu_gather;
+extern void hash__tlb_flush(struct mmu_gather *tlb);
 /* Private function for use by PCI IO mapping code */
 extern void __flush_hash_table_range(struct mm_struct *mm, unsigned long start,
 				     unsigned long end);
#ifndef _ASM_POWERPC_TLBFLUSH_RADIX_H
#define _ASM_POWERPC_TLBFLUSH_RADIX_H
struct vm_area_struct;
struct mm_struct;
struct mmu_gather;
static inline int mmu_get_ap(int psize)
{
return mmu_psize_defs[psize].ap;
}
extern void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end);
extern void radix__flush_tlb_kernel_range(unsigned long start, unsigned long end);
extern void radix__local_flush_tlb_mm(struct mm_struct *mm);
extern void radix__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
extern void radix___local_flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
unsigned long ap, int nid);
extern void radix__tlb_flush(struct mmu_gather *tlb);
#ifdef CONFIG_SMP
extern void radix__flush_tlb_mm(struct mm_struct *mm);
extern void radix__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
extern void radix___flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
unsigned long ap, int nid);
#else
#define radix__flush_tlb_mm(mm) radix__local_flush_tlb_mm(mm)
#define radix__flush_tlb_page(vma,addr) radix__local_flush_tlb_page(vma,addr)
#define radix___flush_tlb_page(mm,addr,p,i) radix___local_flush_tlb_page(mm,addr,p,i)
#endif
#endif
#ifndef _ASM_POWERPC_BOOK3S_64_TLBFLUSH_H
#define _ASM_POWERPC_BOOK3S_64_TLBFLUSH_H
#define MMU_NO_CONTEXT ~0UL
#include <asm/book3s/64/tlbflush-hash.h>
#include <asm/book3s/64/tlbflush-radix.h>
static inline void flush_tlb_range(struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
if (radix_enabled())
return radix__flush_tlb_range(vma, start, end);
return hash__flush_tlb_range(vma, start, end);
}
static inline void flush_tlb_kernel_range(unsigned long start,
unsigned long end)
{
if (radix_enabled())
return radix__flush_tlb_kernel_range(start, end);
return hash__flush_tlb_kernel_range(start, end);
}
static inline void local_flush_tlb_mm(struct mm_struct *mm)
{
if (radix_enabled())
return radix__local_flush_tlb_mm(mm);
return hash__local_flush_tlb_mm(mm);
}
static inline void local_flush_tlb_page(struct vm_area_struct *vma,
unsigned long vmaddr)
{
if (radix_enabled())
return radix__local_flush_tlb_page(vma, vmaddr);
return hash__local_flush_tlb_page(vma, vmaddr);
}
static inline void flush_tlb_page_nohash(struct vm_area_struct *vma,
unsigned long vmaddr)
{
if (radix_enabled())
return radix__flush_tlb_page(vma, vmaddr);
return hash__flush_tlb_page_nohash(vma, vmaddr);
}
static inline void tlb_flush(struct mmu_gather *tlb)
{
if (radix_enabled())
return radix__tlb_flush(tlb);
return hash__tlb_flush(tlb);
}
#ifdef CONFIG_SMP
static inline void flush_tlb_mm(struct mm_struct *mm)
{
if (radix_enabled())
return radix__flush_tlb_mm(mm);
return hash__flush_tlb_mm(mm);
}
static inline void flush_tlb_page(struct vm_area_struct *vma,
unsigned long vmaddr)
{
if (radix_enabled())
return radix__flush_tlb_page(vma, vmaddr);
return hash__flush_tlb_page(vma, vmaddr);
}
#else
#define flush_tlb_mm(mm) local_flush_tlb_mm(mm)
#define flush_tlb_page(vma, addr) local_flush_tlb_page(vma, addr)
#endif /* CONFIG_SMP */
#endif /* _ASM_POWERPC_BOOK3S_64_TLBFLUSH_H */
#ifndef _ASM_POWERPC_BOOK3S_PGALLOC_H
#define _ASM_POWERPC_BOOK3S_PGALLOC_H
#include <linux/mm.h>
extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
static inline void tlb_flush_pgtable(struct mmu_gather *tlb,
unsigned long address)
{
}
#ifdef CONFIG_PPC64
#include <asm/book3s/64/pgalloc.h>
#else
#include <asm/book3s/32/pgalloc.h>
#endif
#endif /* _ASM_POWERPC_BOOK3S_PGALLOC_H */
@@ -8,6 +8,8 @@
 extern struct kmem_cache *hugepte_cache;
 #ifdef CONFIG_PPC_BOOK3S_64
+
+#include <asm/book3s/64/hugetlb-radix.h>
 /*
  * This should work for other subarchs too. But right now we use the
  * new format only for 64bit book3s
  */
@@ -31,7 +33,19 @@ static inline unsigned int hugepd_shift(hugepd_t hpd)
 {
 	return mmu_psize_to_shift(hugepd_mmu_psize(hpd));
 }
+
+static inline void flush_hugetlb_page(struct vm_area_struct *vma,
+				      unsigned long vmaddr)
+{
+	if (radix_enabled())
+		return radix__flush_hugetlb_page(vma, vmaddr);
+}
+
+static inline void __local_flush_hugetlb_page(struct vm_area_struct *vma,
+					      unsigned long vmaddr)
+{
+	if (radix_enabled())
+		return radix__local_flush_hugetlb_page(vma, vmaddr);
+}
 #else
 static inline pte_t *hugepd_page(hugepd_t hpd)
@@ -276,19 +276,24 @@ static inline unsigned long hpte_make_readonly(unsigned long ptel)
 	return ptel;
 }
-static inline int hpte_cache_flags_ok(unsigned long ptel, unsigned long io_type)
+static inline bool hpte_cache_flags_ok(unsigned long hptel, bool is_ci)
 {
-	unsigned int wimg = ptel & HPTE_R_WIMG;
+	unsigned int wimg = hptel & HPTE_R_WIMG;
 	/* Handle SAO */
 	if (wimg == (HPTE_R_W | HPTE_R_I | HPTE_R_M) &&
 	    cpu_has_feature(CPU_FTR_ARCH_206))
 		wimg = HPTE_R_M;
-	if (!io_type)
+	if (!is_ci)
 		return wimg == HPTE_R_M;
-	return (wimg & (HPTE_R_W | HPTE_R_I)) == io_type;
+	/*
+	 * If the host mapping is cache inhibited, make sure the hptel is
+	 * cache inhibited too.
+	 */
+	if (wimg & HPTE_R_W) /* FIXME!! is this OK for all guests? */
+		return false;
+	return !!(wimg & HPTE_R_I);
 }
@@ -305,9 +310,9 @@ static inline pte_t kvmppc_read_update_linux_pte(pte_t *ptep, int writing)
 		old_pte = READ_ONCE(*ptep);
 		/*
-		 * wait until _PAGE_BUSY is clear then set it atomically
+		 * wait until H_PAGE_BUSY is clear then set it atomically
 		 */
-		if (unlikely(pte_val(old_pte) & _PAGE_BUSY)) {
+		if (unlikely(pte_val(old_pte) & H_PAGE_BUSY)) {
 			cpu_relax();
 			continue;
 		}
@@ -319,27 +324,12 @@ static inline pte_t kvmppc_read_update_linux_pte(pte_t *ptep, int writing)
 		if (writing && pte_write(old_pte))
 			new_pte = pte_mkdirty(new_pte);
-		if (pte_val(old_pte) == __cmpxchg_u64((unsigned long *)ptep,
-						      pte_val(old_pte),
-						      pte_val(new_pte))) {
+		if (pte_xchg(ptep, old_pte, new_pte))
 			break;
-		}
 	}
 	return new_pte;
 }
-/* Return HPTE cache control bits corresponding to Linux pte bits */
-static inline unsigned long hpte_cache_bits(unsigned long pte_val)
-{
-#if _PAGE_NO_CACHE == HPTE_R_I && _PAGE_WRITETHRU == HPTE_R_W
-	return pte_val & (HPTE_R_W | HPTE_R_I);
-#else
-	return ((pte_val & _PAGE_NO_CACHE) ? HPTE_R_I : 0) +
-		((pte_val & _PAGE_WRITETHRU) ? HPTE_R_W : 0);
-#endif
-}
 static inline bool hpte_read_permission(unsigned long pp, unsigned long key)
 {
 	if (key)
@@ -256,6 +256,7 @@ struct machdep_calls {
 #ifdef CONFIG_ARCH_RANDOM
 	int (*get_random_seed)(unsigned long *v);
 #endif
+	int (*update_partition_table)(u64);
 };
 extern void e500_idle(void);
@@ -88,6 +88,11 @@
  */
 #define MMU_FTR_1T_SEGMENT		ASM_CONST(0x40000000)
+/*
+ * Radix page table available
+ */
+#define MMU_FTR_RADIX			ASM_CONST(0x80000000)
+
 /* MMU feature bit sets for various CPUs */
 #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2	\
 	MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
@@ -110,9 +115,25 @@
 DECLARE_PER_CPU(int, next_tlbcam_idx);
 #endif
+enum {
+	MMU_FTRS_POSSIBLE = MMU_FTR_HPTE_TABLE | MMU_FTR_TYPE_8xx |
+		MMU_FTR_TYPE_40x | MMU_FTR_TYPE_44x | MMU_FTR_TYPE_FSL_E |
+		MMU_FTR_TYPE_47x | MMU_FTR_USE_HIGH_BATS | MMU_FTR_BIG_PHYS |
+		MMU_FTR_USE_TLBIVAX_BCAST | MMU_FTR_USE_TLBILX |
+		MMU_FTR_LOCK_BCAST_INVAL | MMU_FTR_NEED_DTLB_SW_LRU |
+		MMU_FTR_USE_TLBRSRV | MMU_FTR_USE_PAIRED_MAS |
+		MMU_FTR_NO_SLBIE_B | MMU_FTR_16M_PAGE | MMU_FTR_TLBIEL |
+		MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_CI_LARGE_PAGE |
+		MMU_FTR_1T_SEGMENT |
+#ifdef CONFIG_PPC_RADIX_MMU
+		MMU_FTR_RADIX |
+#endif
+		0,
+};
+
 static inline int mmu_has_feature(unsigned long feature)
 {
-	return (cur_cpu_spec->mmu_features & feature);
+	return (MMU_FTRS_POSSIBLE & cur_cpu_spec->mmu_features & feature);
 }
 static inline void mmu_clear_feature(unsigned long feature)
@@ -122,13 +143,6 @@ static inline void mmu_clear_feature(unsigned long feature)
 extern unsigned int __start___mmu_ftr_fixup, __stop___mmu_ftr_fixup;
-/* MMU initialization */
-extern void early_init_mmu(void);
-extern void early_init_mmu_secondary(void);
-extern void setup_initial_memory_limit(phys_addr_t first_memblock_base,
-				       phys_addr_t first_memblock_size);
-
 #ifdef CONFIG_PPC64
 /* This is our real memory area size on ppc64 server, on embedded, we
  * make it match the size of our bolted TLB area
@@ -181,10 +195,20 @@ static inline void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
 #define MMU_PAGE_COUNT	15
-#if defined(CONFIG_PPC_STD_MMU_64)
-/* 64-bit classic hash table MMU */
-#include <asm/book3s/64/mmu-hash.h>
-#elif defined(CONFIG_PPC_STD_MMU_32)
+#ifdef CONFIG_PPC_BOOK3S_64
+#include <asm/book3s/64/mmu.h>
+#else /* CONFIG_PPC_BOOK3S_64 */
+
+#ifndef __ASSEMBLY__
+/* MMU initialization */
+extern void early_init_mmu(void);
+extern void early_init_mmu_secondary(void);
+extern void setup_initial_memory_limit(phys_addr_t first_memblock_base,
+				       phys_addr_t first_memblock_size);
+#endif /* __ASSEMBLY__ */
+#endif
+
+#if defined(CONFIG_PPC_STD_MMU_32)
 /* 32-bit classic hash table MMU */
 #include <asm/book3s/32/mmu-hash.h>
 #elif defined(CONFIG_40x)
@@ -201,6 +225,9 @@ static inline void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
 #  include <asm/mmu-8xx.h>
 #endif
+#ifndef radix_enabled
+#define radix_enabled() (0)
+#endif
+
 #endif /* __KERNEL__ */
 #endif /* _ASM_POWERPC_MMU_H_ */
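Masking cur_cpu_spec->mmu_features with the compile-time constant MMU_FTRS_POSSIBLE is what lets feature tests vanish entirely on kernels built without the feature: the compiler sees constant & runtime & constant, folds the impossible case to zero, and eliminates the dead branch (radix_enabled() is implemented in terms of mmu_has_feature(MMU_FTR_RADIX) in the book3s/64 mmu header, so this is what makes it free on non-radix configs). A minimal sketch of the idea, with all names invented:

#include <stdio.h>

#define FTR_A 0x1
#define FTR_B 0x2
/* pretend this build was configured without feature B */
#define FTRS_POSSIBLE (FTR_A /* | FTR_B */)

static unsigned long runtime_features;	/* discovered at boot */

static inline int has_feature(unsigned long feature)
{
	/* (FTRS_POSSIBLE & feature) is 0 at compile time for FTR_B, so
	 * every "if (has_feature(FTR_B))" body can be deleted outright. */
	return (FTRS_POSSIBLE & runtime_features & feature) != 0;
}

int main(void)
{
	runtime_features = FTR_A | FTR_B;	/* hardware has both */
	printf("A:%d B:%d\n", has_feature(FTR_A), has_feature(FTR_B));
	return 0;
}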
@@ -33,16 +33,27 @@ extern long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
 extern long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem);
 extern void mm_iommu_mapped_dec(struct mm_iommu_table_group_mem_t *mem);
 #endif
-extern void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next);
 extern void switch_slb(struct task_struct *tsk, struct mm_struct *mm);
 extern void set_context(unsigned long id, pgd_t *pgd);
 #ifdef CONFIG_PPC_BOOK3S_64
+extern void radix__switch_mmu_context(struct mm_struct *prev,
+				      struct mm_struct *next);
+static inline void switch_mmu_context(struct mm_struct *prev,
+				      struct mm_struct *next,
+				      struct task_struct *tsk)
+{
+	if (radix_enabled())
+		return radix__switch_mmu_context(prev, next);
+	return switch_slb(tsk, next);
+}
+
 extern int __init_new_context(void);
 extern void __destroy_context(int context_id);
 static inline void mmu_context_init(void) { }
 #else
+extern void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next,
+			       struct task_struct *tsk);
 extern unsigned long __init_new_context(void);
 extern void __destroy_context(unsigned long context_id);
 extern void mmu_context_init(void);
@@ -88,17 +99,11 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
 	if (cpu_has_feature(CPU_FTR_ALTIVEC))
 		asm volatile ("dssall");
 #endif /* CONFIG_ALTIVEC */
-	/* The actual HW switching method differs between the various
-	 * sub architectures.
+	/*
+	 * The actual HW switching method differs between the various
+	 * sub architectures. Out of line for now
 	 */
-#ifdef CONFIG_PPC_STD_MMU_64
-	switch_slb(tsk, next);
-#else
-	/* Out of line for now */
-	switch_mmu_context(prev, next);
-#endif
+	switch_mmu_context(prev, next, tsk);
 }
 #define deactivate_mm(tsk,mm)	do { } while (0)
@@ -53,7 +53,7 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 #ifndef CONFIG_PPC_64K_PAGES
-#define pgd_populate(MM, PGD, PUD)	pgd_set(PGD, __pgtable_ptr_val(PUD))
+#define pgd_populate(MM, PGD, PUD)	pgd_set(PGD, (unsigned long)PUD)
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
@@ -68,19 +68,19 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
 static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
 {
-	pud_set(pud, __pgtable_ptr_val(pmd));
+	pud_set(pud, (unsigned long)pmd);
 }
 static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
 				       pte_t *pte)
 {
-	pmd_set(pmd, __pgtable_ptr_val(pte));
+	pmd_set(pmd, (unsigned long)pte);
 }
 static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
 				pgtable_t pte_page)
 {
-	pmd_set(pmd, __pgtable_ptr_val(page_address(pte_page)));
+	pmd_set(pmd, (unsigned long)page_address(pte_page));
 }
 #define pmd_pgtable(pmd) pmd_page(pmd)
@@ -119,119 +119,65 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
 	__free_page(ptepage);
 }
-static inline void pgtable_free(void *table, unsigned index_size)
-{
-	if (!index_size)
-		free_page((unsigned long)table);
-	else {
-		BUG_ON(index_size > MAX_PGTABLE_INDEX_SIZE);
-		kmem_cache_free(PGT_CACHE(index_size), table);
-	}
-}
+extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
 #ifdef CONFIG_SMP
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
-				    void *table, int shift)
-{
-	unsigned long pgf = (unsigned long)table;
-	BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
-	pgf |= shift;
-	tlb_remove_table(tlb, (void *)pgf);
-}
-static inline void __tlb_remove_table(void *_table)
-{
-	void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
-	unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
-	pgtable_free(table, shift);
-}
-#else /* !CONFIG_SMP */
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
-				    void *table, int shift)
-{
-	pgtable_free(table, shift);
-}
-#endif /* CONFIG_SMP */
+extern void __tlb_remove_table(void *_table);
+#endif
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 				  unsigned long address)
 {
 	tlb_flush_pgtable(tlb, address);
-	pgtable_page_dtor(table);
 	pgtable_free_tlb(tlb, page_address(table), 0);
 }
 #else /* if CONFIG_PPC_64K_PAGES */
-extern pte_t *page_table_alloc(struct mm_struct *, unsigned long, int);
-extern void page_table_free(struct mm_struct *, unsigned long *, int);
+extern pte_t *pte_fragment_alloc(struct mm_struct *, unsigned long, int);
+extern void pte_fragment_free(unsigned long *, int);
 extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
 #ifdef CONFIG_SMP
 extern void __tlb_remove_table(void *_table);
 #endif
-#ifndef __PAGETABLE_PUD_FOLDED
-/* book3s 64 is 4 level page table */
-static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, pud_t *pud)
-{
-	pgd_set(pgd, __pgtable_ptr_val(pud));
-}
-static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
-{
-	return kmem_cache_alloc(PGT_CACHE(PUD_INDEX_SIZE),
-				GFP_KERNEL|__GFP_REPEAT);
-}
-static inline void pud_free(struct mm_struct *mm, pud_t *pud)
-{
-	kmem_cache_free(PGT_CACHE(PUD_INDEX_SIZE), pud);
-}
-#endif
-static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
-{
-	pud_set(pud, __pgtable_ptr_val(pmd));
-}
+#define pud_populate(mm, pud, pmd)	pud_set(pud, (unsigned long)pmd)
 static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
 				       pte_t *pte)
 {
-	pmd_set(pmd, __pgtable_ptr_val(pte));
+	pmd_set(pmd, (unsigned long)pte);
 }
 static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
 				pgtable_t pte_page)
 {
-	pmd_set(pmd, __pgtable_ptr_val(pte_page));
+	pmd_set(pmd, (unsigned long)pte_page);
 }
 static inline pgtable_t pmd_pgtable(pmd_t pmd)
 {
-	return (pgtable_t)pmd_page_vaddr(pmd);
+	return (pgtable_t)(pmd_val(pmd) & ~PMD_MASKED_BITS);
 }
 static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
 					  unsigned long address)
 {
-	return (pte_t *)page_table_alloc(mm, address, 1);
+	return (pte_t *)pte_fragment_alloc(mm, address, 1);
 }
 static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
 				      unsigned long address)
 {
-	return (pgtable_t)page_table_alloc(mm, address, 0);
+	return (pgtable_t)pte_fragment_alloc(mm, address, 0);
 }
 static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 {
-	page_table_free(mm, (unsigned long *)pte, 1);
+	pte_fragment_free((unsigned long *)pte, 1);
 }
 static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
 {
-	page_table_free(mm, (unsigned long *)ptepage, 0);
+	pte_fragment_free((unsigned long *)ptepage, 0);
 }
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
@@ -255,11 +201,11 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 #define __pmd_free_tlb(tlb, pmd, addr)		      \
 	pgtable_free_tlb(tlb, pmd, PMD_CACHE_INDEX)
-#ifndef __PAGETABLE_PUD_FOLDED
+#ifndef CONFIG_PPC_64K_PAGES
 #define __pud_free_tlb(tlb, pud, addr)		      \
 	pgtable_free_tlb(tlb, pud, PUD_INDEX_SIZE)
-#endif /* __PAGETABLE_PUD_FOLDED */
+#endif /* CONFIG_PPC_64K_PAGES */
 #define check_pgt_cache()	do { } while (0)
@@ -108,9 +108,6 @@
 #ifndef __ASSEMBLY__
 /* pte_clear moved to later in this file */
-/* Pointers in the page table tree are virtual addresses */
-#define __pgtable_ptr_val(ptr)	((unsigned long)(ptr))
-
 #define PMD_BAD_BITS		(PTE_TABLE_SIZE-1)
 #define PUD_BAD_BITS		(PMD_TABLE_SIZE-1)
@@ -362,6 +359,13 @@ static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
 void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
 void pgtable_cache_init(void);
+extern int map_kernel_page(unsigned long ea, unsigned long pa,
+			   unsigned long flags);
+extern int __meminit vmemmap_create_mapping(unsigned long start,
+					    unsigned long page_size,
+					    unsigned long phys);
+extern void vmemmap_remove_mapping(unsigned long start,
+				   unsigned long page_size);
+
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_NOHASH_64_PGTABLE_H */
#ifndef _ASM_POWERPC_NOHASH_PGALLOC_H
#define _ASM_POWERPC_NOHASH_PGALLOC_H
#include <linux/mm.h>
extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
#ifdef CONFIG_PPC64
extern void tlb_flush_pgtable(struct mmu_gather *tlb, unsigned long address);
#else
/* 44x etc., which are BookE, not Book3E */
static inline void tlb_flush_pgtable(struct mmu_gather *tlb,
unsigned long address)
{
}
#endif /* CONFIG_PPC64 */
#ifdef CONFIG_PPC64
#include <asm/nohash/64/pgalloc.h>
#else
#include <asm/nohash/32/pgalloc.h>
#endif
#endif /* _ASM_POWERPC_NOHASH_PGALLOC_H */
@@ -368,16 +368,16 @@ enum OpalLPCAddressType {
 };
 enum opal_msg_type {
 	OPAL_MSG_ASYNC_COMP = 0,	/* params[0] = token, params[1] = rc,
 					 * additional params function-specific
 					 */
-	OPAL_MSG_MEM_ERR,
-	OPAL_MSG_EPOW,
-	OPAL_MSG_SHUTDOWN,		/* params[0] = 1 reboot, 0 shutdown */
-	OPAL_MSG_HMI_EVT,
-	OPAL_MSG_DPO,
-	OPAL_MSG_PRD,
-	OPAL_MSG_OCC,
+	OPAL_MSG_MEM_ERR	= 1,
+	OPAL_MSG_EPOW		= 2,
+	OPAL_MSG_SHUTDOWN	= 3,	/* params[0] = 1 reboot, 0 shutdown */
+	OPAL_MSG_HMI_EVT	= 4,
+	OPAL_MSG_DPO		= 5,
+	OPAL_MSG_PRD		= 6,
+	OPAL_MSG_OCC		= 7,
 	OPAL_MSG_TYPE_MAX,
 };
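Spelling out the message numbers makes the firmware ABI explicit: inserting a new value mid-enum now produces a visible conflict instead of silently renumbering every later entry. A hedged sketch of a guard a consumer could add (the EX_ prefix is invented to keep this self-contained):

enum opal_msg_type_example {	/* mirror of the values above */
	EX_OPAL_MSG_ASYNC_COMP = 0,
	EX_OPAL_MSG_MEM_ERR = 1,
	EX_OPAL_MSG_EPOW = 2,
	EX_OPAL_MSG_SHUTDOWN = 3,
	EX_OPAL_MSG_HMI_EVT = 4,
	EX_OPAL_MSG_DPO = 5,
	EX_OPAL_MSG_PRD = 6,
	EX_OPAL_MSG_OCC = 7,
	EX_OPAL_MSG_TYPE_MAX,
};

/* the ABI defines 8 message types; any renumbering trips this check */
_Static_assert(EX_OPAL_MSG_TYPE_MAX == 8, "OPAL message numbering changed");

int main(void) { return 0; }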
@@ -288,7 +288,11 @@ extern long long virt_phys_offset;
 #ifndef __ASSEMBLY__
+#ifdef CONFIG_PPC_BOOK3S_64
+#include <asm/pgtable-be-types.h>
+#else
 #include <asm/pgtable-types.h>
+#endif
 typedef struct { signed long pd; } hugepd_t;
@@ -312,12 +316,20 @@ void arch_free_page(struct page *page, int order);
 #endif
 struct vm_area_struct;
+#ifdef CONFIG_PPC_BOOK3S_64
+/*
+ * For Book3S 64 with 4K and 64K Linux page sizes
+ * we want to use pointers, because the page table
+ * actually stores PFNs.
+ */
+typedef pte_t *pgtable_t;
+#else
 #if defined(CONFIG_PPC_64K_PAGES) && defined(CONFIG_PPC64)
 typedef pte_t *pgtable_t;
 #else
 typedef struct page *pgtable_t;
 #endif
+#endif
 #include <asm-generic/memory_model.h>
 #endif /* __ASSEMBLY__ */
@@ -93,7 +93,7 @@ extern u64 ppc64_pft_size;
 #define SLICE_LOW_TOP		(0x100000000ul)
 #define SLICE_NUM_LOW		(SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
-#define SLICE_NUM_HIGH		(PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
+#define SLICE_NUM_HIGH		(H_PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
 #define GET_LOW_SLICE_INDEX(addr)	((addr) >> SLICE_LOW_SHIFT)
 #define GET_HIGH_SLICE_INDEX(addr)	((addr) >> SLICE_HIGH_SHIFT)
@@ -128,8 +128,6 @@ extern void slice_set_user_psize(struct mm_struct *mm, unsigned int psize);
 extern void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
 				  unsigned long len, unsigned int psize);
-#define slice_mm_new_context(mm)	((mm)->context.id == MMU_NO_CONTEXT)
-
 #endif /* __ASSEMBLY__ */
 #else
 #define slice_init()
@@ -151,7 +149,6 @@ do {						\
 #define slice_set_range_psize(mm, start, len, psize)	\
 	slice_set_user_psize((mm), (psize))
-#define slice_mm_new_context(mm)	1
 #endif /* CONFIG_PPC_MM_SLICES */
 #ifdef CONFIG_HUGETLB_PAGE
@@ -17,33 +17,34 @@ struct device_node;
  * PCI controller operations
  */
 struct pci_controller_ops {
-	void		(*dma_dev_setup)(struct pci_dev *dev);
+	void		(*dma_dev_setup)(struct pci_dev *pdev);
 	void		(*dma_bus_setup)(struct pci_bus *bus);
-	int		(*probe_mode)(struct pci_bus *);
+	int		(*probe_mode)(struct pci_bus *bus);
 	/* Called when pci_enable_device() is called. Returns true to
 	 * allow assignment/enabling of the device. */
-	bool		(*enable_device_hook)(struct pci_dev *);
+	bool		(*enable_device_hook)(struct pci_dev *pdev);
-	void		(*disable_device)(struct pci_dev *);
+	void		(*disable_device)(struct pci_dev *pdev);
-	void		(*release_device)(struct pci_dev *);
+	void		(*release_device)(struct pci_dev *pdev);
 	/* Called during PCI resource reassignment */
-	resource_size_t (*window_alignment)(struct pci_bus *, unsigned long type);
-	void		(*reset_secondary_bus)(struct pci_dev *dev);
+	resource_size_t (*window_alignment)(struct pci_bus *bus,
+					    unsigned long type);
+	void		(*reset_secondary_bus)(struct pci_dev *pdev);
 #ifdef CONFIG_PCI_MSI
-	int		(*setup_msi_irqs)(struct pci_dev *dev,
+	int		(*setup_msi_irqs)(struct pci_dev *pdev,
 					  int nvec, int type);
-	void		(*teardown_msi_irqs)(struct pci_dev *dev);
+	void		(*teardown_msi_irqs)(struct pci_dev *pdev);
 #endif
-	int             (*dma_set_mask)(struct pci_dev *dev, u64 dma_mask);
-	u64		(*dma_get_required_mask)(struct pci_dev *dev);
+	int             (*dma_set_mask)(struct pci_dev *pdev, u64 dma_mask);
+	u64		(*dma_get_required_mask)(struct pci_dev *pdev);
-	void		(*shutdown)(struct pci_controller *);
+	void		(*shutdown)(struct pci_controller *hose);
 };
 /*
@@ -208,14 +209,14 @@ struct pci_dn {
 #ifdef CONFIG_EEH
 	struct eeh_dev *edev;		/* eeh device */
 #endif
-#define IODA_INVALID_PE		(-1)
+#define IODA_INVALID_PE		0xFFFFFFFF
 #ifdef CONFIG_PPC_POWERNV
-	int	pe_number;
+	unsigned int pe_number;
 	int     vf_index;		/* VF index in the PF */
 #ifdef CONFIG_PCI_IOV
 	u16     vfs_expanded;		/* number of VFs IOV BAR expanded */
 	u16     num_vfs;		/* number of VFs enabled*/
-	int     *pe_num_map;		/* PE# for the first VF PE or array */
+	unsigned int *pe_num_map;	/* PE# for the first VF PE or array */
 	bool    m64_single_mode;	/* Use M64 BAR in Single Mode */
 #define IODA_INVALID_M64        (-1)
 	int     (*m64_map)[PCI_SRIOV_NUM_BARS];
@@ -234,7 +235,9 @@ extern struct pci_dn *pci_get_pdn_by_devfn(struct pci_bus *bus,
 extern struct pci_dn *pci_get_pdn(struct pci_dev *pdev);
 extern struct pci_dn *add_dev_pci_data(struct pci_dev *pdev);
 extern void remove_dev_pci_data(struct pci_dev *pdev);
-extern void *update_dn_pci_info(struct device_node *dn, void *data);
+extern struct pci_dn *pci_add_device_node_info(struct pci_controller *hose,
+					       struct device_node *dn);
+extern void pci_remove_device_node_info(struct device_node *dn);
 static inline int pci_device_from_OF_node(struct device_node *np,
 					  u8 *bus, u8 *devfn)
@@ -256,13 +259,13 @@ static inline struct eeh_dev *pdn_to_eeh_dev(struct pci_dn *pdn)
 #endif
 /** Find the bus corresponding to the indicated device node */
-extern struct pci_bus *pcibios_find_pci_bus(struct device_node *dn);
+extern struct pci_bus *pci_find_bus_by_node(struct device_node *dn);
 /** Remove all of the PCI devices under this bus */
-extern void pcibios_remove_pci_devices(struct pci_bus *bus);
+extern void pci_hp_remove_devices(struct pci_bus *bus);
 /** Discover new pci devices under this bus, and add them */
-extern void pcibios_add_pci_devices(struct pci_bus *bus);
+extern void pci_hp_add_devices(struct pci_bus *bus);
 extern void isa_bridge_find_early(struct pci_controller *hose);
 #ifndef _ASM_POWERPC_PGALLOC_H
 #define _ASM_POWERPC_PGALLOC_H
-#ifdef __KERNEL__
 #include <linux/mm.h>
-#ifdef CONFIG_PPC_BOOK3E
-extern void tlb_flush_pgtable(struct mmu_gather *tlb, unsigned long address);
-#else /* CONFIG_PPC_BOOK3E */
-static inline void tlb_flush_pgtable(struct mmu_gather *tlb,
-				     unsigned long address)
-{
-}
-#endif /* !CONFIG_PPC_BOOK3E */
-extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
-#ifdef CONFIG_PPC64
-#include <asm/pgalloc-64.h>
+#ifdef CONFIG_PPC_BOOK3S
+#include <asm/book3s/pgalloc.h>
 #else
-#include <asm/pgalloc-32.h>
+#include <asm/nohash/pgalloc.h>
 #endif
-#endif /* __KERNEL__ */
 #endif /* _ASM_POWERPC_PGALLOC_H */
#ifndef _ASM_POWERPC_PGTABLE_BE_TYPES_H
#define _ASM_POWERPC_PGTABLE_BE_TYPES_H
#include <asm/cmpxchg.h>
/* PTE level */
typedef struct { __be64 pte; } pte_t;
#define __pte(x) ((pte_t) { cpu_to_be64(x) })
static inline unsigned long pte_val(pte_t x)
{
return be64_to_cpu(x.pte);
}
static inline __be64 pte_raw(pte_t x)
{
return x.pte;
}
/* PMD level */
#ifdef CONFIG_PPC64
typedef struct { __be64 pmd; } pmd_t;
#define __pmd(x) ((pmd_t) { cpu_to_be64(x) })
static inline unsigned long pmd_val(pmd_t x)
{
return be64_to_cpu(x.pmd);
}
static inline __be64 pmd_raw(pmd_t x)
{
return x.pmd;
}
/*
 * 64-bit hash always uses a 4-level table. Everybody else uses 4 levels
 * only for a 4K page size.
*/
#if defined(CONFIG_PPC_BOOK3S_64) || !defined(CONFIG_PPC_64K_PAGES)
typedef struct { __be64 pud; } pud_t;
#define __pud(x) ((pud_t) { cpu_to_be64(x) })
static inline unsigned long pud_val(pud_t x)
{
return be64_to_cpu(x.pud);
}
#endif /* CONFIG_PPC_BOOK3S_64 || !CONFIG_PPC_64K_PAGES */
#endif /* CONFIG_PPC64 */
/* PGD level */
typedef struct { __be64 pgd; } pgd_t;
#define __pgd(x) ((pgd_t) { cpu_to_be64(x) })
static inline unsigned long pgd_val(pgd_t x)
{
return be64_to_cpu(x.pgd);
}
/* Page protection bits */
typedef struct { unsigned long pgprot; } pgprot_t;
#define pgprot_val(x) ((x).pgprot)
#define __pgprot(x) ((pgprot_t) { (x) })
/*
 * With the hash config, 64k pages additionally define a bigger "real PTE"
 * type that gathers the "second half" of the PTE for pseudo-64k pages.
*/
#if defined(CONFIG_PPC_64K_PAGES) && defined(CONFIG_PPC_STD_MMU_64)
typedef struct { pte_t pte; unsigned long hidx; } real_pte_t;
#else
typedef struct { pte_t pte; } real_pte_t;
#endif
static inline bool pte_xchg(pte_t *ptep, pte_t old, pte_t new)
{
unsigned long *p = (unsigned long *)ptep;
__be64 prev;
prev = (__force __be64)__cmpxchg_u64(p, (__force unsigned long)pte_raw(old),
(__force unsigned long)pte_raw(new));
return pte_raw(old) == prev;
}
static inline bool pmd_xchg(pmd_t *pmdp, pmd_t old, pmd_t new)
{
unsigned long *p = (unsigned long *)pmdp;
__be64 prev;
prev = (__force __be64)__cmpxchg_u64(p, (__force unsigned long)pmd_raw(old),
(__force unsigned long)pmd_raw(new));
return pmd_raw(old) == prev;
}
#endif /* _ASM_POWERPC_PGTABLE_BE_TYPES_H */
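Because book3s 64 now keeps PTEs big-endian in memory, pte_xchg() above compares and swaps the raw __be64 representation; equality of the raw bytes is endian-independent, so the cmpxchg itself never needs a byte swap and only pte_val() converts. A user-space sketch of the same pattern, assuming the GCC/clang __atomic builtins:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t pte; } be_pte_t;	/* raw big-endian storage */

static bool be_pte_xchg(be_pte_t *ptep, be_pte_t old, be_pte_t new)
{
	/* compare/swap the raw representation; no byte swapping needed */
	return __atomic_compare_exchange_n(&ptep->pte, &old.pte, new.pte,
					   false, __ATOMIC_SEQ_CST,
					   __ATOMIC_SEQ_CST);
}

int main(void)
{
	be_pte_t pte = { 0x0123456789abcdefULL };
	be_pte_t new = { 0xfedcba9876543210ULL };

	printf("swapped: %d, now %#llx\n", be_pte_xchg(&pte, pte, new),
	       (unsigned long long)pte.pte);
	return 0;
}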
 #ifndef _ASM_POWERPC_PGTABLE_TYPES_H
 #define _ASM_POWERPC_PGTABLE_TYPES_H
-#ifdef CONFIG_STRICT_MM_TYPECHECKS
-/* These are used to make use of C type-checking. */
-
 /* PTE level */
 typedef struct { pte_basic_t pte; } pte_t;
 #define __pte(x)	((pte_t) { (x) })
@@ -48,49 +45,6 @@ typedef struct { unsigned long pgprot; } pgprot_t;
 #define pgprot_val(x)	((x).pgprot)
 #define __pgprot(x)	((pgprot_t) { (x) })
-#else
-
-/*
- * .. while these make it easier on the compiler
- */
-typedef pte_basic_t pte_t;
-#define __pte(x)	(x)
-static inline pte_basic_t pte_val(pte_t pte)
-{
-	return pte;
-}
-
-#ifdef CONFIG_PPC64
-typedef unsigned long pmd_t;
-#define __pmd(x)	(x)
-static inline unsigned long pmd_val(pmd_t pmd)
-{
-	return pmd;
-}
-
-#if defined(CONFIG_PPC_BOOK3S_64) || !defined(CONFIG_PPC_64K_PAGES)
-typedef unsigned long pud_t;
-#define __pud(x)	(x)
-static inline unsigned long pud_val(pud_t pud)
-{
-	return pud;
-}
-#endif /* CONFIG_PPC_BOOK3S_64 || !CONFIG_PPC_64K_PAGES */
-#endif /* CONFIG_PPC64 */
-
-typedef unsigned long pgd_t;
-#define __pgd(x)	(x)
-static inline unsigned long pgd_val(pgd_t pgd)
-{
-	return pgd;
-}
-
-typedef unsigned long pgprot_t;
-#define pgprot_val(x)	(x)
-#define __pgprot(x)	(x)
-
-#endif /* CONFIG_STRICT_MM_TYPECHECKS */
-
 /*
  * With the hash config, 64k pages additionally define a bigger "real PTE"
  * type that gathers the "second half" of the PTE for pseudo-64k pages.
@@ -100,4 +54,16 @@ typedef struct { pte_t pte; unsigned long hidx; } real_pte_t;
 #else
 typedef struct { pte_t pte; } real_pte_t;
 #endif
+
+#ifdef CONFIG_PPC_STD_MMU_64
+#include <asm/cmpxchg.h>
+
+static inline bool pte_xchg(pte_t *ptep, pte_t old, pte_t new)
+{
+	unsigned long *p = (unsigned long *)ptep;
+
+	return pte_val(old) == __cmpxchg_u64(p, pte_val(old), pte_val(new));
+}
+#endif
+
 #endif /* _ASM_POWERPC_PGTABLE_TYPES_H */
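With the non-strict variant removed, pte_t is always a one-member struct, so mixing up a raw word and a PTE becomes a compile error rather than a silent bug; a single-member struct is still passed in a register, so the wrapper costs nothing. A toy illustration of what the struct wrapper catches, with all names invented:

typedef struct { unsigned long pte; } pte_t;
#define __pte(x) ((pte_t) { (x) })

static inline unsigned long pte_val(pte_t p) { return p.pte; }

static void set_pte(pte_t *ptep, pte_t val) { *ptep = val; }

int main(void)
{
	pte_t pte = __pte(0x8000000000000105UL);

	set_pte(&pte, __pte(pte_val(pte) | 0x2));
	/* set_pte(&pte, 0x2); -- would now fail to compile */
	return (int)(pte_val(pte) & 0x2) ? 0 : 1;
}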
@@ -131,6 +131,7 @@
 /* sorted alphabetically */
 #define PPC_INST_BHRBE			0x7c00025c
 #define PPC_INST_CLRBHRB		0x7c00035c
+#define PPC_INST_CP_ABORT		0x7c00068c
 #define PPC_INST_DCBA			0x7c0005ec
 #define PPC_INST_DCBA_MASK		0xfc0007fe
 #define PPC_INST_DCBAL			0x7c2005ec
@@ -285,6 +286,7 @@
 #endif
 /* Deal with instructions that older assemblers aren't aware of */
+#define	PPC_CP_ABORT		stringify_in_c(.long PPC_INST_CP_ABORT)
 #define	PPC_DCBAL(a, b)		stringify_in_c(.long PPC_INST_DCBAL | \
 					__PPC_RA(a) | __PPC_RB(b))
 #define	PPC_DCBZL(a, b)		stringify_in_c(.long PPC_INST_DCBZL | \
@@ -33,9 +33,9 @@ extern struct pci_dev *isa_bridge_pcidev;	/* may be NULL if no ISA bus */
 struct device_node;
 struct pci_dn;
-typedef void *(*traverse_func)(struct device_node *me, void *data);
-void *traverse_pci_devices(struct device_node *start, traverse_func pre,
-		void *data);
+void *pci_traverse_device_nodes(struct device_node *start,
+				void *(*fn)(struct device_node *, void *),
+				void *data);
 void *traverse_pci_dn(struct pci_dn *root,
 		      void *(*fn)(struct pci_dn *, void *),
 		      void *data);
@@ -427,7 +427,10 @@ END_FTR_SECTION_IFCLR(CPU_FTR_601)
 	li	r4,1024;			\
 	mtctr	r4;				\
 	lis	r4,KERNELBASE@h;		\
+	.machine push;				\
+	.machine "power4";			\
 0:	tlbie	r4;				\
+	.machine pop;				\
 	addi	r4,r4,0x1000;			\
 	bdnz	0b
 #endif
@@ -76,6 +76,16 @@
  */
 #ifndef __ASSEMBLY__
 extern unsigned long bad_call_to_PMD_PAGE_SIZE(void);
+
+/*
+ * Don't just check for any non zero bits in __PAGE_USER, since for book3e
+ * and PTE_64BIT, PAGE_KERNEL_X contains _PAGE_BAP_SR which is also in
+ * _PAGE_USER. Need to explicitly match _PAGE_BAP_UR bit in that case too.
+ */
+static inline bool pte_user(pte_t pte)
+{
+	return (pte_val(pte) & _PAGE_USER) == _PAGE_USER;
+}
 #endif /* __ASSEMBLY__ */
 /* Location of the PFN in the PTE. Most 32-bit platforms use the same
@@ -184,13 +194,6 @@ extern unsigned long bad_call_to_PMD_PAGE_SIZE(void);
 /* Make modules code happy. We don't set RO yet */
 #define PAGE_KERNEL_EXEC	PAGE_KERNEL_X
-/*
- * Don't just check for any non zero bits in __PAGE_USER, since for book3e
- * and PTE_64BIT, PAGE_KERNEL_X contains _PAGE_BAP_SR which is also in
- * _PAGE_USER. Need to explicitly match _PAGE_BAP_UR bit in that case too.
- */
-#define pte_user(val)		((val & _PAGE_USER) == _PAGE_USER)
-
 /* Advertise special mapping type for AGP */
 #define PAGE_AGP		(PAGE_KERNEL_NC)
 #define HAVE_PAGE_AGP
@@ -198,3 +201,12 @@ extern unsigned long bad_call_to_PMD_PAGE_SIZE(void);
 /* Advertise support for _PAGE_SPECIAL */
 #define __HAVE_ARCH_PTE_SPECIAL
+#ifndef _PAGE_READ
+/* if _PAGE_READ is not defined, _PAGE_WRITE should not be defined either */
+#define _PAGE_READ 0
+#define _PAGE_WRITE _PAGE_RW
+#endif
+
+#ifndef H_PAGE_4K_PFN
+#define H_PAGE_4K_PFN 0
+#endif
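The mask-and-compare in pte_user() matters because, per the comment above, _PAGE_USER can span several bits on Book3E with 64-bit PTEs and kernel mappings may share one of them; testing for "any bit set" would misclassify such kernel pages. A sketch of the failure mode with invented bit values shaped like that case:

#include <stdbool.h>
#include <stdio.h>

#define BAP_SR	0x1			/* also set on kernel exec pages */
#define BAP_UR	0x2			/* user read */
#define PAGE_USER (BAP_SR | BAP_UR)

static bool pte_user_correct(unsigned long val)
{
	return (val & PAGE_USER) == PAGE_USER;	/* all bits must be set */
}

static bool pte_user_buggy(unsigned long val)
{
	return (val & PAGE_USER) != 0;		/* any bit passes */
}

int main(void)
{
	unsigned long kernel_x = BAP_SR;	/* kernel page sharing a bit */

	printf("correct:%d buggy:%d\n",
	       pte_user_correct(kernel_x), pte_user_buggy(kernel_x));
	return 0;
}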
@@ -347,6 +347,7 @@
 #define   LPCR_LPES_SH	2
 #define   LPCR_RMI     0x00000002      /* real mode is cache inhibit */
 #define   LPCR_HDICE   0x00000001     /* Hyp Decr enable (HV,PR,EE) */
+#define   LPCR_UPRT    0x00400000     /* Use Process Table (ISA 3) */
 #ifndef SPRN_LPID
 #define SPRN_LPID	0x13F	/* Logical Partition Identifier */
 #endif
@@ -587,6 +588,7 @@
 #define SPRN_PIR	0x3FF	/* Processor Identification Register */
 #endif
 #define SPRN_TIR	0x1BE	/* Thread Identification Register */
+#define SPRN_PTCR	0x1D0	/* Partition table control Register */
 #define SPRN_PSPB	0x09F	/* Problem State Priority Boost reg */
 #define SPRN_PTEHI	0x3D5	/* 981 7450 PTE HI word (S/W TLB load) */
 #define SPRN_PTELO	0x3D6	/* 982 7450 PTE LO word (S/W TLB load) */
@@ -1182,6 +1184,7 @@
 #define PVR_970GX	0x0045
 #define PVR_POWER7p	0x004A
 #define PVR_POWER8E	0x004B
+#define PVR_POWER8NVL	0x004C
 #define PVR_POWER8	0x004D
 #define PVR_BE		0x0070
 #define PVR_PA6T	0x0090
@@ -58,6 +58,7 @@ extern void __flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
 #elif defined(CONFIG_PPC_STD_MMU_32)
+#define MMU_NO_CONTEXT      (0)
 /*
  * TLB flushing for "classic" hash-MMU 32-bit CPUs, 6xx, 7xx, 7xxx
  */
@@ -78,7 +79,7 @@ static inline void local_flush_tlb_mm(struct mm_struct *mm)
 }
 #elif defined(CONFIG_PPC_STD_MMU_64)
-#include <asm/book3s/64/tlbflush-hash.h>
+#include <asm/book3s/64/tlbflush.h>
 #else
 #error Unsupported MMU type
 #endif
#ifndef _UAPI_ASM_POWERPC_PERF_REGS_H
#define _UAPI_ASM_POWERPC_PERF_REGS_H
enum perf_event_powerpc_regs {
PERF_REG_POWERPC_R0,
PERF_REG_POWERPC_R1,
PERF_REG_POWERPC_R2,
PERF_REG_POWERPC_R3,
PERF_REG_POWERPC_R4,
PERF_REG_POWERPC_R5,
PERF_REG_POWERPC_R6,
PERF_REG_POWERPC_R7,
PERF_REG_POWERPC_R8,
PERF_REG_POWERPC_R9,
PERF_REG_POWERPC_R10,
PERF_REG_POWERPC_R11,
PERF_REG_POWERPC_R12,
PERF_REG_POWERPC_R13,
PERF_REG_POWERPC_R14,
PERF_REG_POWERPC_R15,
PERF_REG_POWERPC_R16,
PERF_REG_POWERPC_R17,
PERF_REG_POWERPC_R18,
PERF_REG_POWERPC_R19,
PERF_REG_POWERPC_R20,
PERF_REG_POWERPC_R21,
PERF_REG_POWERPC_R22,
PERF_REG_POWERPC_R23,
PERF_REG_POWERPC_R24,
PERF_REG_POWERPC_R25,
PERF_REG_POWERPC_R26,
PERF_REG_POWERPC_R27,
PERF_REG_POWERPC_R28,
PERF_REG_POWERPC_R29,
PERF_REG_POWERPC_R30,
PERF_REG_POWERPC_R31,
PERF_REG_POWERPC_NIP,
PERF_REG_POWERPC_MSR,
PERF_REG_POWERPC_ORIG_R3,
PERF_REG_POWERPC_CTR,
PERF_REG_POWERPC_LINK,
PERF_REG_POWERPC_XER,
PERF_REG_POWERPC_CCR,
PERF_REG_POWERPC_SOFTE,
PERF_REG_POWERPC_TRAP,
PERF_REG_POWERPC_DAR,
PERF_REG_POWERPC_DSISR,
PERF_REG_POWERPC_MAX,
};
#endif /* _UAPI_ASM_POWERPC_PERF_REGS_H */
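A perf consumer requests these registers by setting one bit per enum value in the sample_regs_intr mask of perf_event_attr, with PERF_REG_POWERPC_MAX bounding the mask. A hedged sketch of building the "all registers" mask; the literal 43 is just the value the enum above works out to (32 GPRs plus 11 special registers):

#include <stdint.h>
#include <stdio.h>

#define PERF_REG_POWERPC_MAX 43	/* value of the last enumerator above */

int main(void)
{
	/* one request bit per PERF_REG_POWERPC_* value */
	uint64_t mask = (1ULL << PERF_REG_POWERPC_MAX) - 1;

	printf("sample_regs_intr = %#llx (%d registers)\n",
	       (unsigned long long)mask, PERF_REG_POWERPC_MAX);
	return 0;
}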
@@ -438,7 +438,11 @@ int main(void)
 	DEFINE(BUG_ENTRY_SIZE, sizeof(struct bug_entry));
 #endif
+#ifdef MAX_PGD_TABLE_SIZE
+	DEFINE(PGD_TABLE_SIZE, MAX_PGD_TABLE_SIZE);
+#else
 	DEFINE(PGD_TABLE_SIZE, PGD_TABLE_SIZE);
+#endif
 	DEFINE(PTE_SIZE, sizeof(pte_t));
 #ifdef CONFIG_KVM
@@ -162,7 +162,7 @@ void btext_map(void)
 	offset = ((unsigned long) dispDeviceBase) - base;
 	size = dispDeviceRowBytes * dispDeviceRect[3] + offset
 		+ dispDeviceRect[0];
-	vbase = __ioremap(base, size, _PAGE_NO_CACHE);
+	vbase = __ioremap(base, size, pgprot_val(pgprot_noncached_wc(__pgprot(0))));
 	if (vbase == 0)
 		return;
 	logicalDisplayBase = vbase + offset;
@@ -63,7 +63,6 @@ extern void __setup_cpu_745x(unsigned long offset, struct cpu_spec* spec);
 extern void __setup_cpu_ppc970(unsigned long offset, struct cpu_spec* spec);
 extern void __setup_cpu_ppc970MP(unsigned long offset, struct cpu_spec* spec);
 extern void __setup_cpu_pa6t(unsigned long offset, struct cpu_spec* spec);
-extern void __setup_cpu_a2(unsigned long offset, struct cpu_spec* spec);
 extern void __restore_cpu_pa6t(void);
 extern void __restore_cpu_ppc970(void);
 extern void __setup_cpu_power7(unsigned long offset, struct cpu_spec* spec);
@@ -72,7 +71,6 @@ extern void __setup_cpu_power8(unsigned long offset, struct cpu_spec* spec);
 extern void __restore_cpu_power8(void);
 extern void __setup_cpu_power9(unsigned long offset, struct cpu_spec* spec);
 extern void __restore_cpu_power9(void);
-extern void __restore_cpu_a2(void);
 extern void __flush_tlb_power7(unsigned int action);
 extern void __flush_tlb_power8(unsigned int action);
 extern void __flush_tlb_power9(unsigned int action);
...
@@ -48,7 +48,7 @@
 /** Overview:
- *  EEH, or "Extended Error Handling" is a PCI bridge technology for
+ *  EEH, or "Enhanced Error Handling" is a PCI bridge technology for
  *  dealing with PCI bus errors that can't be dealt with within the
  *  usual PCI framework, except by check-stopping the CPU.  Systems
  *  that are designed for high-availability/reliability cannot afford
@@ -1068,7 +1068,7 @@ void eeh_add_device_early(struct pci_dn *pdn)
 	struct pci_controller *phb;
 	struct eeh_dev *edev = pdn_to_eeh_dev(pdn);

-	if (!edev || !eeh_enabled())
+	if (!edev)
 		return;

 	if (!eeh_has_flag(EEH_PROBE_MODE_DEVTREE))
@@ -1336,14 +1336,11 @@ static int eeh_pe_change_owner(struct eeh_pe *pe)
 			    id->subdevice != pdev->subsystem_device)
 				continue;

-			goto reset;
+			return eeh_pe_reset_and_recover(pe);
 		}
 	}

 	return eeh_unfreeze_pe(pe, true);
-
-reset:
-	return eeh_pe_reset_and_recover(pe);
 }

 /**
...
@@ -171,6 +171,16 @@ static void *eeh_dev_save_state(void *data, void *userdata)
 	if (!edev)
 		return NULL;

+	/*
+	 * We cannot access the config space on some adapters.
+	 * Otherwise, it will cause fenced PHB. We don't save
+	 * the content in their config space and will restore
+	 * from the initial config space saved when the EEH
+	 * device is created.
+	 */
+	if (edev->pe && (edev->pe->state & EEH_PE_CFG_RESTRICTED))
+		return NULL;
+
 	pdev = eeh_dev_to_pci_dev(edev);
 	if (!pdev)
 		return NULL;
@@ -312,6 +322,19 @@ static void *eeh_dev_restore_state(void *data, void *userdata)
 	if (!edev)
 		return NULL;

+	/*
+	 * The content in the config space isn't saved because
+	 * the blocked config space on some adapters. We have
+	 * to restore the initial saved config space when the
+	 * EEH device is created.
+	 */
+	if (edev->pe && (edev->pe->state & EEH_PE_CFG_RESTRICTED)) {
+		if (list_is_last(&edev->list, &edev->pe->edevs))
+			eeh_pe_restore_bars(edev->pe);
+
+		return NULL;
+	}
+
 	pdev = eeh_dev_to_pci_dev(edev);
 	if (!pdev)
 		return NULL;
@@ -552,7 +575,7 @@ static int eeh_clear_pe_frozen_state(struct eeh_pe *pe,
 int eeh_pe_reset_and_recover(struct eeh_pe *pe)
 {
-	int result, ret;
+	int ret;

 	/* Bail if the PE is being recovered */
 	if (pe->state & EEH_PE_RECOVERING)
@@ -564,9 +587,6 @@ int eeh_pe_reset_and_recover(struct eeh_pe *pe)
 	/* Save states */
 	eeh_pe_dev_traverse(pe, eeh_dev_save_state, NULL);

-	/* Report error */
-	eeh_pe_dev_traverse(pe, eeh_report_error, &result);
-
 	/* Issue reset */
 	ret = eeh_reset_pe(pe);
 	if (ret) {
@@ -581,15 +601,9 @@ int eeh_pe_reset_and_recover(struct eeh_pe *pe)
 		return ret;
 	}

-	/* Notify completion of reset */
-	eeh_pe_dev_traverse(pe, eeh_report_reset, &result);
-
 	/* Restore device state */
 	eeh_pe_dev_traverse(pe, eeh_dev_restore_state, NULL);

-	/* Resume */
-	eeh_pe_dev_traverse(pe, eeh_report_resume, NULL);
-
 	/* Clear recovery mode */
 	eeh_pe_state_clear(pe, EEH_PE_RECOVERING);
@@ -621,7 +635,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
 	 * We don't remove the corresponding PE instances because
 	 * we need the information afterwords. The attached EEH
 	 * devices are expected to be attached soon when calling
-	 * into pcibios_add_pci_devices().
+	 * into pci_hp_add_devices().
 	 */
 	eeh_pe_state_mark(pe, EEH_PE_KEEP);
 	if (bus) {
@@ -630,7 +644,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
 		} else {
 			eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
 			pci_lock_rescan_remove();
-			pcibios_remove_pci_devices(bus);
+			pci_hp_remove_devices(bus);
 			pci_unlock_rescan_remove();
 		}
 	} else if (frozen_bus) {
@@ -681,7 +695,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
 		if (pe->type & EEH_PE_VF)
 			eeh_add_virt_device(edev, NULL);
 		else
-			pcibios_add_pci_devices(bus);
+			pci_hp_add_devices(bus);
 	} else if (frozen_bus && rmv_data->removed) {
 		pr_info("EEH: Sleep 5s ahead of partial hotplug\n");
 		ssleep(5);
@@ -691,7 +705,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
 		if (pe->type & EEH_PE_VF)
 			eeh_add_virt_device(edev, NULL);
 		else
-			pcibios_add_pci_devices(frozen_bus);
+			pci_hp_add_devices(frozen_bus);
 	}

 	eeh_pe_state_clear(pe, EEH_PE_KEEP);
@@ -896,7 +910,7 @@ static void eeh_handle_normal_event(struct eeh_pe *pe)
 		eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);

 		pci_lock_rescan_remove();
-		pcibios_remove_pci_devices(frozen_bus);
+		pci_hp_remove_devices(frozen_bus);
 		pci_unlock_rescan_remove();
 	}
 }
@@ -981,7 +995,7 @@ static void eeh_handle_special_event(void)
 				bus = eeh_pe_bus_get(phb_pe);
 				eeh_pe_dev_traverse(pe,
 					eeh_report_failure, NULL);
-				pcibios_remove_pci_devices(bus);
+				pci_hp_remove_devices(bus);
 			}
 			pci_unlock_rescan_remove();
 		}
...
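Note on the list_is_last() test above: eeh_pe_dev_traverse() invokes the callback once per EEH device, while eeh_pe_restore_bars() is a per-PE operation, so the restore is keyed to the PE's final device to run exactly once, after every per-device pass has completed. Reduced to its skeleton (illustrative kernel-style sketch, not additional code from the diff):

	static void *dev_callback(void *data, void *userdata)
	{
		struct eeh_dev *edev = data;

		/* ... per-device work runs for every entry ... */

		/* pe->edevs is a list of this PE's devices; trigger the
		 * single PE-wide action only on the last of them */
		if (list_is_last(&edev->list, &edev->pe->edevs))
			/* per-PE action, e.g. eeh_pe_restore_bars(edev->pe) */;

		return NULL;
	}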
@@ -36,7 +36,7 @@
 static DEFINE_SPINLOCK(eeh_eventlist_lock);
 static struct semaphore eeh_eventlist_sem;
-LIST_HEAD(eeh_eventlist);
+static LIST_HEAD(eeh_eventlist);

 /**
  * eeh_event_handler - Dispatch EEH events.
...
@@ -249,7 +249,7 @@ static void *__eeh_pe_get(void *data, void *flag)
 	} else {
 		if (edev->pe_config_addr &&
 		    (edev->pe_config_addr == pe->addr))
-		return pe;
+			return pe;
 	}

 	/* Try BDF address */
...
@@ -37,6 +37,7 @@
 #include <asm/hw_irq.h>
 #include <asm/context_tracking.h>
 #include <asm/tm.h>
+#include <asm/ppc-opcode.h>

 /*
  * System calls.
@@ -509,6 +510,14 @@ BEGIN_FTR_SECTION
 	ldarx	r6,0,r1
 END_FTR_SECTION_IFSET(CPU_FTR_STCX_CHECKS_ADDRESS)

+BEGIN_FTR_SECTION
+/*
+ * A cp_abort (copy paste abort) here ensures that when context switching, a
+ * copy from one process can't leak into the paste of another.
+ */
+	PPC_CP_ABORT
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+
 #ifdef CONFIG_PPC_BOOK3S
 /* Cancel all explict user streams as they will have no use after context
  * switch and will stop the HW from creating streams itself
@@ -520,7 +529,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_STCX_CHECKS_ADDRESS)
 	std	r6,PACACURRENT(r13)	/* Set new 'current' */

 	ld	r8,KSP(r4)	/* new stack pointer */
-#ifdef CONFIG_PPC_BOOK3S
+#ifdef CONFIG_PPC_STD_MMU_64
+BEGIN_MMU_FTR_SECTION
+	b	2f
+END_MMU_FTR_SECTION_IFSET(MMU_FTR_RADIX)
 BEGIN_FTR_SECTION
 	clrrdi	r6,r8,28	/* get its ESID */
 	clrrdi	r9,r1,28	/* get current sp ESID */
@@ -566,7 +578,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
 	slbmte	r7,r0
 	isync
 2:
-#endif /* !CONFIG_PPC_BOOK3S */
+#endif /* CONFIG_PPC_STD_MMU_64 */

 	CURRENT_THREAD_INFO(r7, r8)  /* base of new stack */
 	/* Note: this uses SWITCH_FRAME_SIZE rather than INT_FRAME_SIZE
...
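Note: PPC_CP_ABORT comes from the newly included asm/ppc-opcode.h. Older assemblers do not know the Power9 mnemonic, so the kernel hand-encodes the instruction word; a sketch of the definitions relied on here (from my reading of the 4.7-era header — treat the exact constant as an assumption):

	/* asm/ppc-opcode.h (sketch) */
	#define PPC_INST_CP_ABORT	0x7c00068c
	#define PPC_CP_ABORT		stringify_in_c(.long PPC_INST_CP_ABORT)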
@@ -607,3 +607,13 @@ unsigned long __init arch_syscall_addr(int nr)
 	return sys_call_table[nr*2];
 }
 #endif /* CONFIG_FTRACE_SYSCALLS && CONFIG_PPC64 */
+
+#if defined(CONFIG_PPC64) && (!defined(_CALL_ELF) || _CALL_ELF != 2)
+char *arch_ftrace_match_adjust(char *str, const char *search)
+{
+	if (str[0] == '.' && search[0] != '.')
+		return str + 1;
+	else
+		return str;
+}
+#endif /* defined(CONFIG_PPC64) && (!defined(_CALL_ELF) || _CALL_ELF != 2) */
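Background for the hunk above: on big-endian ppc64 (ELFv1), a function's entry symbol carries a leading dot (".schedule"), so a user-supplied ftrace filter such as "schedule" would otherwise never match. A standalone illustration of the adjustment (plain userspace C, mirroring the function above):

	#include <assert.h>
	#include <string.h>

	/* same logic as arch_ftrace_match_adjust(), outside the kernel */
	static char *match_adjust(char *str, const char *search)
	{
		if (str[0] == '.' && search[0] != '.')
			return str + 1;
		return str;
	}

	int main(void)
	{
		char dotsym[] = ".schedule";

		/* a dot-less filter now matches the dot-prefixed symbol */
		assert(strcmp(match_adjust(dotsym, "schedule"), "schedule") == 0);
		/* a dot-prefixed filter is left untouched */
		assert(strcmp(match_adjust(dotsym, ".schedule"), ".schedule") == 0);
		return 0;
	}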
@@ -973,13 +973,16 @@ start_here_common:
  * This stuff goes at the beginning of the bss, which is page-aligned.
  */
 	.section ".bss"

-	.align	PAGE_SHIFT
+/*
+ * pgd dir should be aligned to PGD_TABLE_SIZE which is 64K.
+ * We will need to find a better way to fix this
+ */
+	.align	16
+
+	.globl	swapper_pg_dir
+swapper_pg_dir:
+	.space	PGD_TABLE_SIZE

 	.globl	empty_zero_page
 empty_zero_page:
 	.space	PAGE_SIZE
-
-	.globl	swapper_pg_dir
-swapper_pg_dir:
-	.space	PGD_TABLE_SIZE
@@ -408,7 +408,7 @@ static ssize_t modalias_show(struct device *dev,
 	return len+1;
 }

-struct device_attribute ibmebus_bus_device_attrs[] = {
+static struct device_attribute ibmebus_bus_device_attrs[] = {
 	__ATTR_RO(devspec),
 	__ATTR_RO(name),
 	__ATTR_RO(modalias),
...
@@ -109,14 +109,14 @@ static void pci_process_ISA_OF_ranges(struct device_node *isa_node,
 		size = 0x10000;

 	__ioremap_at(phb_io_base_phys, (void *)ISA_IO_BASE,
-		     size, _PAGE_NO_CACHE|_PAGE_GUARDED);
+		     size, pgprot_val(pgprot_noncached(__pgprot(0))));
 	return;

 inval_range:
 	printk(KERN_ERR "no ISA IO ranges or unexpected isa range, "
 	       "mapping 64k\n");
 	__ioremap_at(phb_io_base_phys, (void *)ISA_IO_BASE,
-		     0x10000, _PAGE_NO_CACHE|_PAGE_GUARDED);
+		     0x10000, pgprot_val(pgprot_noncached(__pgprot(0))));
 }
...
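Note: the _PAGE_NO_CACHE|_PAGE_GUARDED literals go away because radix uses a different PTE encoding; pgprot_noncached() picks the right bits for whichever MMU the kernel runs on. Roughly, on a hash-MMU build (illustrative sketch, kernel context, assuming the classic bit definitions):

	/* with a zero input pgprot, the hash definitions reduce to the old literal */
	unsigned long flags = pgprot_val(pgprot_noncached(__pgprot(0)));
	/* flags == (_PAGE_NO_CACHE | _PAGE_GUARDED) on hash;
	 * a radix build computes radix's own uncached encoding instead */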
@@ -228,17 +228,12 @@ static struct property memory_limit_prop = {

 static void __init export_crashk_values(struct device_node *node)
 {
-	struct property *prop;
-
 	/* There might be existing crash kernel properties, but we can't
 	 * be sure what's in them, so remove them. */
-	prop = of_find_property(node, "linux,crashkernel-base", NULL);
-	if (prop)
-		of_remove_property(node, prop);
-
-	prop = of_find_property(node, "linux,crashkernel-size", NULL);
-	if (prop)
-		of_remove_property(node, prop);
+	of_remove_property(node, of_find_property(node,
+				"linux,crashkernel-base", NULL));
+	of_remove_property(node, of_find_property(node,
+				"linux,crashkernel-size", NULL));

 	if (crashk_res.start != 0) {
 		crashk_base = cpu_to_be_ulong(crashk_res.start),
@@ -258,16 +253,13 @@ static void __init export_crashk_values(struct device_node *node)
 static int __init kexec_setup(void)
 {
 	struct device_node *node;
-	struct property *prop;

 	node = of_find_node_by_path("/chosen");
 	if (!node)
 		return -ENOENT;

 	/* remove any stale properties so ours can be found */
-	prop = of_find_property(node, kernel_end_prop.name, NULL);
-	if (prop)
-		of_remove_property(node, prop);
+	of_remove_property(node, of_find_property(node, kernel_end_prop.name, NULL));

 	/* information needed by userspace when using default_machine_kexec */
 	kernel_end = cpu_to_be_ulong(__pa(_end));
...
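Note: collapsing the find/check/remove triplets like this is only safe if of_remove_property() tolerates a NULL property (of_find_property() returns NULL on a miss). That guard comes from the device-tree change this series depends on; a sketch of it, from my reading rather than this diff:

	int of_remove_property(struct device_node *np, struct property *prop)
	{
		if (!prop)
			return -ENODEV;	/* nothing found by of_find_property() */
		/* ... otherwise unlink the property under the devtree lock ... */
	}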
@@ -76,6 +76,7 @@ int default_machine_kexec_prepare(struct kimage *image)
 	 * end of the blocked region (begin >= high).  Use the
 	 * boolean identity !(a || b)  === (!a && !b).
 	 */
+#ifdef CONFIG_PPC_STD_MMU_64
 	if (htab_address) {
 		low = __pa(htab_address);
 		high = low + htab_size_bytes;
@@ -88,6 +89,7 @@ int default_machine_kexec_prepare(struct kimage *image)
 			return -ETXTBSY;
 		}
 	}
+#endif /* CONFIG_PPC_STD_MMU_64 */

 	/* We also should not overwrite the tce tables */
 	for_each_node_by_type(node, "pci") {
@@ -381,7 +383,7 @@ void default_machine_kexec(struct kimage *image)
 	/* NOTREACHED */
 }

-#ifndef CONFIG_PPC_BOOK3E
+#ifdef CONFIG_PPC_STD_MMU_64
 /* Values we need to export to the second kernel via the device tree. */
 static unsigned long htab_base;
 static unsigned long htab_size;
@@ -401,7 +403,6 @@ static struct property htab_size_prop = {
 static int __init export_htab_values(void)
 {
 	struct device_node *node;
-	struct property *prop;

 	/* On machines with no htab htab_address is NULL */
 	if (!htab_address)
@@ -412,12 +413,8 @@ static int __init export_htab_values(void)
 		return -ENODEV;

 	/* remove any stale propertys so ours can be found */
-	prop = of_find_property(node, htab_base_prop.name, NULL);
-	if (prop)
-		of_remove_property(node, prop);
-
-	prop = of_find_property(node, htab_size_prop.name, NULL);
-	if (prop)
-		of_remove_property(node, prop);
+	of_remove_property(node, of_find_property(node, htab_base_prop.name, NULL));
+	of_remove_property(node, of_find_property(node, htab_size_prop.name, NULL));

 	htab_base = cpu_to_be64(__pa(htab_address));
 	of_add_property(node, &htab_base_prop);
@@ -428,4 +425,4 @@ static int __init export_htab_values(void)
 	return 0;
 }
 late_initcall(export_htab_values);
-#endif /* !CONFIG_PPC_BOOK3E */
+#endif /* CONFIG_PPC_STD_MMU_64 */
@@ -37,7 +37,7 @@ static DEFINE_PER_CPU(int, mce_queue_count);
 static DEFINE_PER_CPU(struct machine_check_event[MAX_MC_EVT], mce_event_queue);

 static void machine_check_process_queued_event(struct irq_work *work);
-struct irq_work mce_event_process_work = {
+static struct irq_work mce_event_process_work = {
         .func = machine_check_process_queued_event,
 };
...
@@ -599,12 +599,6 @@ _GLOBAL(__bswapdi2)
 	mr	r4,r10
 	blr

-_GLOBAL(abs)
-	srawi	r4,r3,31
-	xor	r3,r3,r4
-	sub	r3,r3,r4
-	blr
-
 #ifdef CONFIG_SMP
 _GLOBAL(start_secondary_resume)
 	/* Reset stack */
...
@@ -15,8 +15,6 @@
  * parsing code.
  */

-#include <linux/module.h>
-
 #include <linux/types.h>
 #include <linux/errno.h>
 #include <linux/fs.h>
@@ -1231,12 +1229,4 @@ static int __init nvram_init(void)

 	return rc;
 }
-
-static void __exit nvram_cleanup(void)
-{
-	misc_deregister( &nvram_dev );
-}
-
-module_init(nvram_init);
-module_exit(nvram_cleanup);
-
-MODULE_LICENSE("GPL");
+device_initcall(nvram_init);
@@ -38,7 +38,7 @@
  * ISA drivers use hard coded offsets.  If no ISA bus exists nothing
  * is mapped on the first 64K of IO space
  */
-unsigned long pci_io_base = ISA_IO_BASE;
+unsigned long pci_io_base;
 EXPORT_SYMBOL(pci_io_base);

 static int __init pcibios_init(void)
@@ -47,6 +47,7 @@ static int __init pcibios_init(void)

 	printk(KERN_INFO "PCI: Probing PCI hardware\n");

+	pci_io_base = ISA_IO_BASE;
 	/* For now, override phys_mem_access_prot. If we need it,g
 	 * later, we may move that initialization to each ppc_md
 	 */
@@ -159,7 +160,7 @@ static int pcibios_map_phb_io_space(struct pci_controller *hose)

 	/* Establish the mapping */
 	if (__ioremap_at(phys_page, area->addr, size_page,
-			 _PAGE_NO_CACHE | _PAGE_GUARDED) == NULL)
+			 pgprot_val(pgprot_noncached(__pgprot(0)))) == NULL)
 		return -ENOMEM;

 	/* Fixup hose IO resource */
...
@@ -31,6 +31,6 @@ void save_processor_state(void)
 void restore_processor_state(void)
 {
 #ifdef CONFIG_PPC32
-	switch_mmu_context(current->active_mm, current->active_mm);
+	switch_mmu_context(current->active_mm, current->active_mm, NULL);
 #endif
 }
(The remaining file diffs in this merge are collapsed in this view.)