Commit f689b742 authored by Linus Torvalds

Merge tag 'powerpc-4.5-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux

Pull powerpc updates from Michael Ellerman:
 "Core:
   - Ground work for the new Power9 MMU from Aneesh Kumar K.V
   - Optimise FP/VMX/VSX context switching from Anton Blanchard

  Misc:
   - Various cleanups from Krzysztof Kozlowski, John Ogness, Rashmica
     Gupta, Russell Currey, Gavin Shan, Daniel Axtens, Michael Neuling,
     Andrew Donnellan
   - Allow wrapper to work on non-English systems from Laurent Vivier
   - Add rN aliases to the pt_regs_offset table from Rashmica Gupta
   - Fix module autoload for rackmeter & axonram drivers from Luis de
     Bethencourt
   - Include KVM guest test in all interrupt vectors from Paul Mackerras
   - Fix DSCR inheritance over fork() from Anton Blanchard
   - Make value-returning atomics, {cmp}xchg* and their atomic_
     versions fully ordered from Boqun Feng
   - Print MSR TM bits in oops messages from Michael Neuling
   - Add TM signal return & invalid stack selftests from Michael Neuling
   - Limit EPOW reset event warnings from Vipin K Parashar
   - Remove the Cell QPACE code from Rashmica Gupta
   - Append linux_banner to exception information in xmon from Rashmica
     Gupta
   - Add selftest to check if VSRs are corrupted from Rashmica Gupta
   - Remove broken GregorianDay() from Daniel Axtens
   - Import Anton's context_switch2 benchmark into selftests from
     Michael Ellerman
   - Add selftest script to test HMI functionality from Daniel Axtens
   - Remove obsolete OPAL v2 support from Stewart Smith
   - Make enter_rtas() private from Michael Ellerman
   - PPR exception cleanups from Michael Ellerman
   - Add page soft dirty tracking from Laurent Dufour
   - Add support for Nvlink NPUs from Alistair Popple
   - Add support for kexec on 476fpe from Alistair Popple
   - Enable kernel CPU dlpar from sysfs from Nathan Fontenot
   - Copy only required pieces of the mm_context_t to the paca from
     Michael Neuling
   - Add a kmsg_dumper that flushes OPAL console output on panic from
     Russell Currey
   - Implement save_stack_trace_regs() to enable kprobe stack tracing
     from Steven Rostedt
   - Add HWCAP bits for Power9 from Michael Ellerman
   - Fix _PAGE_PTE breaking swapoff from Aneesh Kumar K.V
   - Fix _PAGE_SWP_SOFT_DIRTY breaking swapoff from Hugh Dickins
   - scripts/recordmcount.pl: support data in text section on powerpc
     from Ulrich Weigand
   - Handle R_PPC64_ENTRY relocations in modules from Ulrich Weigand

  cxl:
   - cxl: Fix possible idr warning when contexts are released from
     Vaibhav Jain
   - cxl: use correct operator when writing pcie config space values
     from Andrew Donnellan
   - cxl: Fix DSI misses when the context owning task exits from Vaibhav
     Jain
   - cxl: fix build for GCC 4.6.x from Brian Norris
   - cxl: use -Werror only with CONFIG_PPC_WERROR from Brian Norris
   - cxl: Enable PCI device ID for future IBM CXL adapter from Uma
     Krishnan

  Freescale:
   - Freescale updates from Scott: Highlights include moving QE code out
     of arch/powerpc (to be shared with arm), device tree updates, and
     minor fixes"

* tag 'powerpc-4.5-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (149 commits)
  powerpc/module: Handle R_PPC64_ENTRY relocations
  scripts/recordmcount.pl: support data in text section on powerpc
  powerpc/powernv: Fix OPAL_CONSOLE_FLUSH prototype and usages
  powerpc/mm: fix _PAGE_SWP_SOFT_DIRTY breaking swapoff
  powerpc/mm: Fix _PAGE_PTE breaking swapoff
  cxl: Enable PCI device ID for future IBM CXL adapter
  cxl: use -Werror only with CONFIG_PPC_WERROR
  cxl: fix build for GCC 4.6.x
  powerpc: Add HWCAP bits for Power9
  powerpc/powernv: Reserve PE#0 on NPU
  powerpc/powernv: Change NPU PE# assignment
  powerpc/powernv: Fix update of NVLink DMA mask
  powerpc/powernv: Remove misleading comment in pci.c
  powerpc: Implement save_stack_trace_regs() to enable kprobe stack tracing
  powerpc: Fix build break due to paca mm_context_t changes
  cxl: Fix DSI misses when the context owning task exits
  MAINTAINERS: Update Scott Wood's e-mail address
  powerpc/powernv: Fix minor off-by-one error in opal_mce_check_early_recovery()
  powerpc: Fix style of self-test config prompts
  powerpc/powernv: Only delay opal_rtc_read() retry when necessary
  ...
parents 37cea93b be6bfc29
@@ -14,7 +14,6 @@ Required properties:
     tegra132, or tegra210.
 - "nxp,lpc3220-uart"
 - "ralink,rt2880-uart"
-- "ibm,qpace-nwp-serial"
 - "altr,16550-FIFO32"
 - "altr,16550-FIFO64"
 - "altr,16550-FIFO128"
...
* Thermal Monitoring Unit (TMU) on Freescale QorIQ SoCs

Required properties:
- compatible : Must include "fsl,qoriq-tmu". The version of the device is
	determined by the TMU IP Block Revision Register (IPBRR0) at
	offset 0x0BF8.
	Table of correspondences between IPBRR0 values and example chips:
		Value		Device
		----------	-----
		0x01900102	T1040
- reg : Address range of TMU registers.
- interrupts : Contains the interrupt for TMU.
- fsl,tmu-range : The values to be programmed into TTRnCR, as specified by
	the SoC reference manual. The first cell is TTR0CR, the second is
	TTR1CR, etc.
- fsl,tmu-calibration : A list of cell pairs containing temperature
	calibration data, as specified by the SoC reference manual.
	The first cell of each pair is the value to be written to TTCFGR,
	and the second is the value to be written to TSCFGR.

Example:

tmu@f0000 {
	compatible = "fsl,qoriq-tmu";
	reg = <0xf0000 0x1000>;
	interrupts = <18 2 0 0>;
	fsl,tmu-range = <0x000a0000 0x00090026 0x0008004a 0x0001006a>;
	fsl,tmu-calibration = <0x00000000 0x00000025
			       0x00000001 0x00000028
			       0x00000002 0x0000002d
			       0x00000003 0x00000031
			       0x00000004 0x00000036
			       0x00000005 0x0000003a
			       0x00000006 0x00000040
			       0x00000007 0x00000044
			       0x00000008 0x0000004a
			       0x00000009 0x0000004f
			       0x0000000a 0x00000054
			       0x00010000 0x0000000d
			       0x00010001 0x00000013
			       0x00010002 0x00000019
			       0x00010003 0x0000001f
			       0x00010004 0x00000025
			       0x00010005 0x0000002d
			       0x00010006 0x00000033
			       0x00010007 0x00000043
			       0x00010008 0x0000004b
			       0x00010009 0x00000053
			       0x00020000 0x00000010
			       0x00020001 0x00000017
			       0x00020002 0x0000001f
			       0x00020003 0x00000029
			       0x00020004 0x00000031
			       0x00020005 0x0000003c
			       0x00020006 0x00000042
			       0x00020007 0x0000004d
			       0x00020008 0x00000056
			       0x00030000 0x00000012
			       0x00030001 0x0000001d>;
};
@@ -2993,6 +2993,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			may be specified.
 			Format: <port>,<port>....
 
+	ppc_strict_facility_enable
+			[PPC] This option catches any kernel floating point,
+			Altivec, VSX and SPE outside of regions specifically
+			allowed (eg kernel_enable_fpu()/kernel_disable_fpu()).
+			There is some performance impact when enabling this.
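
[Editorial aside: a minimal sketch of the bracketed-use pattern this option polices, assuming the enable/disable helpers reworked by this series (the SPE crypto glue hunks further down add the matching disable_kernel_spe() call); the fp_begin()/fp_end() wrapper names are hypothetical, for illustration only.]

	#include <linux/preempt.h>
	#include <asm/switch_to.h>	/* enable_kernel_fp(), disable_kernel_fp() */

	/* Kernel FP use must be bracketed like this, or the strict
	 * facility checks will fire. */
	static void fp_begin(void)
	{
		preempt_disable();
		enable_kernel_fp();	/* FP is legal from here... */
	}

	static void fp_end(void)
	{
		disable_kernel_fp();	/* ...to here */
		preempt_enable();
	}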
 	print-fatal-signals=
 			[KNL] debug: print fatal signals
...
@@ -4490,8 +4490,9 @@ F: include/linux/fs_enet_pd.h
 FREESCALE QUICC ENGINE LIBRARY
 L:	linuxppc-dev@lists.ozlabs.org
 S:	Orphan
-F:	arch/powerpc/sysdev/qe_lib/
-F:	arch/powerpc/include/asm/*qe.h
+F:	drivers/soc/fsl/qe/
+F:	include/soc/fsl/*qe*.h
+F:	include/soc/fsl/*ucc*.h
 
 FREESCALE USB PERIPHERAL DRIVERS
 M:	Li Yang <leoli@freescale.com>
@@ -6444,7 +6445,7 @@ S: Maintained
 F:	arch/powerpc/platforms/8xx/
 
 LINUX FOR POWERPC EMBEDDED PPC83XX AND PPC85XX
-M:	Scott Wood <scottwood@freescale.com>
+M:	Scott Wood <oss@buserror.net>
 M:	Kumar Gala <galak@kernel.crashing.org>
 W:	http://www.penguinppc.org/
 L:	linuxppc-dev@lists.ozlabs.org
...
@@ -560,6 +560,7 @@ choice
 
 config PPC_4K_PAGES
 	bool "4k page size"
+	select HAVE_ARCH_SOFT_DIRTY if CHECKPOINT_RESTORE && PPC_BOOK3S
 
 config PPC_16K_PAGES
 	bool "16k page size"
@@ -568,6 +569,7 @@ config PPC_16K_PAGES
 config PPC_64K_PAGES
 	bool "64k page size"
 	depends on !PPC_FSL_BOOK3E && (44x || PPC_STD_MMU_64 || PPC_BOOK3E_64)
+	select HAVE_ARCH_SOFT_DIRTY if CHECKPOINT_RESTORE && PPC_BOOK3S
 
 config PPC_256K_PAGES
 	bool "256k page size"
@@ -1075,8 +1077,6 @@ source "drivers/Kconfig"
 
 source "fs/Kconfig"
 
-source "arch/powerpc/sysdev/qe_lib/Kconfig"
-
 source "lib/Kconfig"
 
 source "arch/powerpc/Kconfig.debug"
...
@@ -64,17 +64,17 @@ config PPC_EMULATED_STATS
 	  emulated.
 
 config CODE_PATCHING_SELFTEST
-	bool "Run self-tests of the code-patching code."
+	bool "Run self-tests of the code-patching code"
 	depends on DEBUG_KERNEL
 	default n
 
 config FTR_FIXUP_SELFTEST
-	bool "Run self-tests of the feature-fixup code."
+	bool "Run self-tests of the feature-fixup code"
 	depends on DEBUG_KERNEL
 	default n
 
 config MSI_BITMAP_SELFTEST
-	bool "Run self-tests of the MSI bitmap code."
+	bool "Run self-tests of the MSI bitmap code"
 	depends on DEBUG_KERNEL
 	default n
...
@@ -113,7 +113,6 @@ src-plat-$(CONFIG_EPAPR_BOOT) += epapr.c epapr-wrapper.c
 src-plat-$(CONFIG_PPC_PSERIES) += pseries-head.S
 src-plat-$(CONFIG_PPC_POWERNV) += pseries-head.S
 src-plat-$(CONFIG_PPC_IBM_CELL_BLADE) += pseries-head.S
-src-plat-$(CONFIG_PPC_CELL_QPACE) += pseries-head.S
 
 src-wlib := $(sort $(src-wlib-y))
 src-plat := $(sort $(src-plat-y))
@@ -217,7 +216,6 @@ image-$(CONFIG_PPC_POWERNV) += zImage.pseries
 image-$(CONFIG_PPC_MAPLE) += zImage.maple
 image-$(CONFIG_PPC_IBM_CELL_BLADE) += zImage.pseries
 image-$(CONFIG_PPC_PS3) += dtbImage.ps3
-image-$(CONFIG_PPC_CELL_QPACE) += zImage.pseries
 image-$(CONFIG_PPC_CHRP) += zImage.chrp
 image-$(CONFIG_PPC_EFIKA) += zImage.chrp
 image-$(CONFIG_PPC_PMAC) += zImage.pmac
...
@@ -474,6 +474,11 @@ bman: bman@31a000 {
 fman@400000 {
 	interrupts = <96 2 0 0>, <16 2 1 30>;
 
+	muram@0 {
+		compatible = "fsl,fman-muram";
+		reg = <0x0 0x80000>;
+	};
+
 	enet0: ethernet@e0000 {
 	};
...
@@ -29,6 +29,21 @@ ifc: ifc@ff71e000 {
 	soc: soc@ff700000 {
 		ranges = <0x0 0x0 0xff700000 0x100000>;
 	};
 
+	pci0: pcie@ff70a000 {
+		reg = <0 0xff70a000 0 0x1000>;
+		ranges = <0x2000000 0x0 0x90000000 0 0x90000000 0x0 0x20000000
+			  0x1000000 0x0 0x00000000 0 0xc0010000 0x0 0x10000>;
+		pcie@0 {
+			ranges = <0x2000000 0x0 0x90000000
+				  0x2000000 0x0 0x90000000
+				  0x0 0x20000000
+				  0x1000000 0x0 0x0
+				  0x1000000 0x0 0x0
+				  0x0 0x100000>;
+		};
+	};
 
 /include/ "bsc9132qds.dtsi"
...
@@ -40,6 +40,34 @@ &ifc {
 	interrupts = <16 2 0 0 20 2 0 0>;
 };
 
+/* controller at 0xa000 */
+&pci0 {
+	compatible = "fsl,bsc9132-pcie", "fsl,qoriq-pcie-v2.2";
+	device_type = "pci";
+	#size-cells = <2>;
+	#address-cells = <3>;
+	bus-range = <0 255>;
+	interrupts = <16 2 0 0>;
+
+	pcie@0 {
+		reg = <0 0 0 0 0>;
+		#interrupt-cells = <1>;
+		#size-cells = <2>;
+		#address-cells = <3>;
+		device_type = "pci";
+		interrupts = <16 2 0 0>;
+		interrupt-map-mask = <0xf800 0 0 7>;
+		interrupt-map = <
+			/* IDSEL 0x0 */
+			0000 0x0 0x0 0x1 &mpic 0x0 0x2 0x0 0x0
+			0000 0x0 0x0 0x2 &mpic 0x1 0x2 0x0 0x0
+			0000 0x0 0x0 0x3 &mpic 0x2 0x2 0x0 0x0
+			0000 0x0 0x0 0x4 &mpic 0x3 0x2 0x0 0x0
+			>;
+	};
+};
+
 &soc {
 	#address-cells = <1>;
 	#size-cells = <1>;
...
@@ -45,6 +45,7 @@ aliases {
 	serial0 = &serial0;
 	ethernet0 = &enet0;
 	ethernet1 = &enet1;
+	pci0 = &pci0;
 };
 
 cpus {
...
@@ -215,3 +215,19 @@ enet2: ethernet@b2000 {
 		phy-connection-type = "sgmii";
 	};
 };
+
+&pci0 {
+	pcie@0 {
+		interrupt-map = <
+			/* IDSEL 0x0 */
+			/*
+			 *irq[4:5] are active-high
+			 *irq[6:7] are active-low
+			 */
+			0000 0x0 0x0 0x1 &mpic 0x4 0x2 0x0 0x0
+			0000 0x0 0x0 0x2 &mpic 0x5 0x2 0x0 0x0
+			0000 0x0 0x0 0x3 &mpic 0x6 0x1 0x0 0x0
+			0000 0x0 0x0 0x4 &mpic 0x7 0x1 0x0 0x0
+			>;
+	};
+};
@@ -159,4 +159,4 @@ pcie@0 {
 	};
 };
 
-/include/ "t1023si-post.dtsi"
+#include "t1023si-post.dtsi"
@@ -32,6 +32,8 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+#include <dt-bindings/thermal/thermal.h>
+
 &ifc {
 	#address-cells = <2>;
 	#size-cells = <1>;
@@ -275,6 +277,90 @@ serdes: serdes@ea000 {
 	reg = <0xea000 0x4000>;
 };
 
+tmu: tmu@f0000 {
+	compatible = "fsl,qoriq-tmu";
+	reg = <0xf0000 0x1000>;
+	interrupts = <18 2 0 0>;
+	fsl,tmu-range = <0xb0000 0xa0026 0x80048 0x30061>;
+	fsl,tmu-calibration = <0x00000000 0x0000000f
+			       0x00000001 0x00000017
+			       0x00000002 0x0000001e
+			       0x00000003 0x00000026
+			       0x00000004 0x0000002e
+			       0x00000005 0x00000035
+			       0x00000006 0x0000003d
+			       0x00000007 0x00000044
+			       0x00000008 0x0000004c
+			       0x00000009 0x00000053
+			       0x0000000a 0x0000005b
+			       0x0000000b 0x00000064
+			       0x00010000 0x00000011
+			       0x00010001 0x0000001c
+			       0x00010002 0x00000024
+			       0x00010003 0x0000002b
+			       0x00010004 0x00000034
+			       0x00010005 0x00000039
+			       0x00010006 0x00000042
+			       0x00010007 0x0000004c
+			       0x00010008 0x00000051
+			       0x00010009 0x0000005a
+			       0x0001000a 0x00000063
+			       0x00020000 0x00000013
+			       0x00020001 0x00000019
+			       0x00020002 0x00000024
+			       0x00020003 0x0000002c
+			       0x00020004 0x00000035
+			       0x00020005 0x0000003d
+			       0x00020006 0x00000046
+			       0x00020007 0x00000050
+			       0x00020008 0x00000059
+			       0x00030000 0x00000002
+			       0x00030001 0x0000000d
+			       0x00030002 0x00000019
+			       0x00030003 0x00000024>;
+	#thermal-sensor-cells = <0>;
+};
+
+thermal-zones {
+	cpu_thermal: cpu-thermal {
+		polling-delay-passive = <1000>;
+		polling-delay = <5000>;
+
+		thermal-sensors = <&tmu>;
+
+		trips {
+			cpu_alert: cpu-alert {
+				temperature = <85000>;
+				hysteresis = <2000>;
+				type = "passive";
+			};
+			cpu_crit: cpu-crit {
+				temperature = <95000>;
+				hysteresis = <2000>;
+				type = "critical";
+			};
+		};
+
+		cooling-maps {
+			map0 {
+				trip = <&cpu_alert>;
+				cooling-device =
+					<&cpu0 THERMAL_NO_LIMIT
+					THERMAL_NO_LIMIT>;
+			};
+			map1 {
+				trip = <&cpu_alert>;
+				cooling-device =
+					<&cpu1 THERMAL_NO_LIMIT
+					THERMAL_NO_LIMIT>;
+			};
+		};
+	};
+};
+
 scfg: global-utilities@fc000 {
 	compatible = "fsl,t1023-scfg";
 	reg = <0xfc000 0x1000>;
...
@@ -248,4 +248,4 @@ pcie@0 {
 	};
 };
 
-/include/ "t1024si-post.dtsi"
+#include "t1024si-post.dtsi"
@@ -188,4 +188,4 @@ pcie@0 {
 	};
 };
 
-/include/ "t1024si-post.dtsi"
+#include "t1024si-post.dtsi"
@@ -32,7 +32,7 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
-/include/ "t1023si-post.dtsi"
+#include "t1023si-post.dtsi"
 
 / {
 	aliases {
...
@@ -76,6 +76,7 @@ cpu0: PowerPC,e5500@0 {
 	reg = <0>;
 	clocks = <&mux0>;
 	next-level-cache = <&L2_1>;
+	#cooling-cells = <2>;
 	L2_1: l2-cache {
 		next-level-cache = <&cpc>;
 	};
@@ -85,6 +86,7 @@ cpu1: PowerPC,e5500@1 {
 	reg = <1>;
 	clocks = <&mux1>;
 	next-level-cache = <&L2_2>;
+	#cooling-cells = <2>;
 	L2_2: l2-cache {
 		next-level-cache = <&cpc>;
 	};
...
@@ -43,4 +43,4 @@ / {
 	interrupt-parent = <&mpic>;
 };
 
-/include/ "t1040si-post.dtsi"
+#include "t1040si-post.dtsi"
@@ -43,4 +43,4 @@ / {
 	interrupt-parent = <&mpic>;
 };
 
-/include/ "t1040si-post.dtsi"
+#include "t1040si-post.dtsi"
@@ -45,4 +45,4 @@ cpld@3,0 {
 	};
 };
 
-/include/ "t1040si-post.dtsi"
+#include "t1040si-post.dtsi"
@@ -32,6 +32,8 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
+#include <dt-bindings/thermal/thermal.h>
+
 &bman_fbpr {
 	compatible = "fsl,bman-fbpr";
 	alloc-ranges = <0 0 0x10000 0>;
@@ -484,6 +486,98 @@ serdes: serdes@ea000 {
 	reg = <0xea000 0x4000>;
 };
 
+tmu: tmu@f0000 {
+	compatible = "fsl,qoriq-tmu";
+	reg = <0xf0000 0x1000>;
+	interrupts = <18 2 0 0>;
+	fsl,tmu-range = <0xa0000 0x90026 0x8004a 0x1006a>;
+	fsl,tmu-calibration = <0x00000000 0x00000025
+			       0x00000001 0x00000028
+			       0x00000002 0x0000002d
+			       0x00000003 0x00000031
+			       0x00000004 0x00000036
+			       0x00000005 0x0000003a
+			       0x00000006 0x00000040
+			       0x00000007 0x00000044
+			       0x00000008 0x0000004a
+			       0x00000009 0x0000004f
+			       0x0000000a 0x00000054
+			       0x00010000 0x0000000d
+			       0x00010001 0x00000013
+			       0x00010002 0x00000019
+			       0x00010003 0x0000001f
+			       0x00010004 0x00000025
+			       0x00010005 0x0000002d
+			       0x00010006 0x00000033
+			       0x00010007 0x00000043
+			       0x00010008 0x0000004b
+			       0x00010009 0x00000053
+			       0x00020000 0x00000010
+			       0x00020001 0x00000017
+			       0x00020002 0x0000001f
+			       0x00020003 0x00000029
+			       0x00020004 0x00000031
+			       0x00020005 0x0000003c
+			       0x00020006 0x00000042
+			       0x00020007 0x0000004d
+			       0x00020008 0x00000056
+			       0x00030000 0x00000012
+			       0x00030001 0x0000001d>;
+	#thermal-sensor-cells = <0>;
+};
+
+thermal-zones {
+	cpu_thermal: cpu-thermal {
+		polling-delay-passive = <1000>;
+		polling-delay = <5000>;
+
+		thermal-sensors = <&tmu>;
+
+		trips {
+			cpu_alert: cpu-alert {
+				temperature = <85000>;
+				hysteresis = <2000>;
+				type = "passive";
+			};
+			cpu_crit: cpu-crit {
+				temperature = <95000>;
+				hysteresis = <2000>;
+				type = "critical";
+			};
+		};
+
+		cooling-maps {
+			map0 {
+				trip = <&cpu_alert>;
+				cooling-device =
+					<&cpu0 THERMAL_NO_LIMIT
+					THERMAL_NO_LIMIT>;
+			};
+			map1 {
+				trip = <&cpu_alert>;
+				cooling-device =
+					<&cpu1 THERMAL_NO_LIMIT
+					THERMAL_NO_LIMIT>;
+			};
+			map2 {
+				trip = <&cpu_alert>;
+				cooling-device =
+					<&cpu2 THERMAL_NO_LIMIT
+					THERMAL_NO_LIMIT>;
+			};
+			map3 {
+				trip = <&cpu_alert>;
+				cooling-device =
+					<&cpu3 THERMAL_NO_LIMIT
+					THERMAL_NO_LIMIT>;
+			};
+		};
+	};
+};
+
 scfg: global-utilities@fc000 {
 	compatible = "fsl,t1040-scfg";
 	reg = <0xfc000 0x1000>;
...
@@ -50,4 +50,4 @@ cpld@3,0 {
 	};
 };
 
-/include/ "t1040si-post.dtsi"
+#include "t1042si-post.dtsi"
@@ -43,4 +43,4 @@ / {
 	interrupt-parent = <&mpic>;
 };
 
-/include/ "t1042si-post.dtsi"
+#include "t1042si-post.dtsi"
@@ -45,4 +45,4 @@ cpld@3,0 {
 	};
 };
 
-/include/ "t1042si-post.dtsi"
+#include "t1042si-post.dtsi"
@@ -54,4 +54,4 @@ rtc@68 {
 	};
 };
 
-/include/ "t1042si-post.dtsi"
+#include "t1042si-post.dtsi"
@@ -32,6 +32,6 @@
  * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  */
 
-/include/ "t1040si-post.dtsi"
+#include "t1040si-post.dtsi"
 
 /* Place holder for ethernet related device tree nodes */
@@ -76,6 +76,7 @@ cpu0: PowerPC,e5500@0 {
 	reg = <0>;
 	clocks = <&mux0>;
 	next-level-cache = <&L2_1>;
+	#cooling-cells = <2>;
 	L2_1: l2-cache {
 		next-level-cache = <&cpc>;
 	};
@@ -85,6 +86,7 @@ cpu1: PowerPC,e5500@1 {
 	reg = <1>;
 	clocks = <&mux1>;
 	next-level-cache = <&L2_2>;
+	#cooling-cells = <2>;
 	L2_2: l2-cache {
 		next-level-cache = <&cpc>;
 	};
@@ -94,6 +96,7 @@ cpu2: PowerPC,e5500@2 {
 	reg = <2>;
 	clocks = <&mux2>;
 	next-level-cache = <&L2_3>;
+	#cooling-cells = <2>;
 	L2_3: l2-cache {
 		next-level-cache = <&cpc>;
 	};
@@ -103,6 +106,7 @@ cpu3: PowerPC,e5500@3 {
 	reg = <3>;
 	clocks = <&mux3>;
 	next-level-cache = <&L2_4>;
+	#cooling-cells = <2>;
 	L2_4: l2-cache {
 		next-level-cache = <&cpc>;
 	};
...
@@ -154,7 +154,7 @@ if [ -z "$kernel" ]; then
 	kernel=vmlinux
 fi
 
-elfformat="`${CROSS}objdump -p "$kernel" | grep 'file format' | awk '{print $4}'`"
+LANG=C elfformat="`${CROSS}objdump -p "$kernel" | grep 'file format' | awk '{print $4}'`"
 case "$elfformat" in
 	elf64-powerpcle)	format=elf64lppc	;;
 	elf64-powerpc)		format=elf32ppc		;;
...
@@ -12,6 +12,7 @@ CONFIG_P1010_RDB=y
 CONFIG_P1022_DS=y
 CONFIG_P1022_RDK=y
 CONFIG_P1023_RDB=y
+CONFIG_TWR_P102x=y
 CONFIG_SBC8548=y
 CONFIG_SOCRATES=y
 CONFIG_STX_GP3=y
...
@@ -36,7 +36,6 @@ CONFIG_PS3_ROM=m
 CONFIG_PS3_FLASH=m
 CONFIG_PS3_LPM=m
 CONFIG_PPC_IBM_CELL_BLADE=y
-CONFIG_PPC_CELL_QPACE=y
 CONFIG_RTAS_FLASH=m
 CONFIG_IBMEBUS=y
 CONFIG_CPU_FREQ_PMAC64=y
...
@@ -85,6 +85,7 @@ static void spe_begin(void)
 
 static void spe_end(void)
 {
+	disable_kernel_spe();
 	/* reenable preemption */
 	preempt_enable();
 }
...
@@ -46,6 +46,7 @@ static void spe_begin(void)
 
 static void spe_end(void)
 {
+	disable_kernel_spe();
 	/* reenable preemption */
 	preempt_enable();
 }
...
@@ -47,6 +47,7 @@ static void spe_begin(void)
 
 static void spe_end(void)
 {
+	disable_kernel_spe();
 	/* reenable preemption */
 	preempt_enable();
 }
...
-#ifndef _ASM_POWERPC_PTE_HASH32_H
-#define _ASM_POWERPC_PTE_HASH32_H
+#ifndef _ASM_POWERPC_BOOK3S_32_HASH_H
+#define _ASM_POWERPC_BOOK3S_32_HASH_H
 #ifdef __KERNEL__
 
 /*
@@ -43,4 +43,4 @@
 #define PTE_ATOMIC_UPDATES	1
 
 #endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_PTE_HASH32_H */
+#endif /* _ASM_POWERPC_BOOK3S_32_HASH_H */
...
#ifndef _ASM_POWERPC_BOOK3S_64_HASH_4K_H
#define _ASM_POWERPC_BOOK3S_64_HASH_4K_H
/*
 * Entries per page directory level. The PTE level must use a 64b record
 * for each page table entry. The PMD and PGD level use a 32b record for
 * each entry by assuming that each entry is page aligned.
 */
#define PTE_INDEX_SIZE  9
#define PMD_INDEX_SIZE  7
#define PUD_INDEX_SIZE  9
#define PGD_INDEX_SIZE  9

#ifndef __ASSEMBLY__
#define PTE_TABLE_SIZE	(sizeof(pte_t) << PTE_INDEX_SIZE)
#define PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
#define PUD_TABLE_SIZE	(sizeof(pud_t) << PUD_INDEX_SIZE)
#define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)
#endif	/* __ASSEMBLY__ */

#define PTRS_PER_PTE	(1 << PTE_INDEX_SIZE)
#define PTRS_PER_PMD	(1 << PMD_INDEX_SIZE)
#define PTRS_PER_PUD	(1 << PUD_INDEX_SIZE)
#define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)

/* PMD_SHIFT determines what a second-level page table entry can map */
#define PMD_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define PMD_MASK	(~(PMD_SIZE-1))

/* With 4k base page size, hugepage PTEs go at the PMD level */
#define MIN_HUGEPTE_SHIFT	PMD_SHIFT

/* PUD_SHIFT determines what a third-level page table entry can map */
#define PUD_SHIFT	(PMD_SHIFT + PMD_INDEX_SIZE)
#define PUD_SIZE	(1UL << PUD_SHIFT)
#define PUD_MASK	(~(PUD_SIZE-1))

/* PGDIR_SHIFT determines what a fourth-level page table entry can map */
#define PGDIR_SHIFT	(PUD_SHIFT + PUD_INDEX_SIZE)
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
#define PGDIR_MASK	(~(PGDIR_SIZE-1))

/* Bits to mask out from a PMD to get to the PTE page */
#define PMD_MASKED_BITS		0
/* Bits to mask out from a PUD to get to the PMD page */
#define PUD_MASKED_BITS		0
/* Bits to mask out from a PGD to get to the PUD page */
#define PGD_MASKED_BITS		0

/* PTE flags to conserve for HPTE identification */
#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | \
			 _PAGE_F_SECOND | _PAGE_F_GIX)

/* shift to put page number into pte */
#define PTE_RPN_SHIFT	(18)

#define _PAGE_4K_PFN	0

#ifndef __ASSEMBLY__
/*
 * 4-level page tables related bits
 */
#define pgd_none(pgd)		(!pgd_val(pgd))
#define pgd_bad(pgd)		(pgd_val(pgd) == 0)
#define pgd_present(pgd)	(pgd_val(pgd) != 0)
#define pgd_page_vaddr(pgd)	(pgd_val(pgd) & ~PGD_MASKED_BITS)

static inline void pgd_clear(pgd_t *pgdp)
{
	*pgdp = __pgd(0);
}

static inline pte_t pgd_pte(pgd_t pgd)
{
	return __pte(pgd_val(pgd));
}

static inline pgd_t pte_pgd(pte_t pte)
{
	return __pgd(pte_val(pte));
}
extern struct page *pgd_page(pgd_t pgd);

#define pud_offset(pgdp, addr)	\
	(((pud_t *) pgd_page_vaddr(*(pgdp))) + \
	 (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1)))

#define pud_ERROR(e) \
	pr_err("%s:%d: bad pud %08lx.\n", __FILE__, __LINE__, pud_val(e))

/*
 * On all 4K setups, remap_4k_pfn() equates to remap_pfn_range()
 */
#define remap_4k_pfn(vma, addr, pfn, prot)	\
	remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE, (prot))

#ifdef CONFIG_HUGETLB_PAGE
/*
 * For 4k page size, we support explicit hugepage via hugepd
 */
static inline int pmd_huge(pmd_t pmd)
{
	return 0;
}

static inline int pud_huge(pud_t pud)
{
	return 0;
}

static inline int pgd_huge(pgd_t pgd)
{
	return 0;
}
#define pgd_huge pgd_huge

static inline int hugepd_ok(hugepd_t hpd)
{
	/*
	 * if it is not a pte and it has the hugepd shift mask
	 * set, then it is a hugepd directory pointer
	 */
	if (!(hpd.pd & _PAGE_PTE) &&
	    ((hpd.pd & HUGEPD_SHIFT_MASK) != 0))
		return true;
	return false;
}
#define is_hugepd(hpd)	(hugepd_ok(hpd))
#endif /* CONFIG_HUGETLB_PAGE */

#endif /* !__ASSEMBLY__ */

#endif /* _ASM_POWERPC_BOOK3S_64_HASH_4K_H */
#ifndef _ASM_POWERPC_BOOK3S_64_HASH_64K_H
#define _ASM_POWERPC_BOOK3S_64_HASH_64K_H

#include <asm-generic/pgtable-nopud.h>

#define PTE_INDEX_SIZE  8
#define PMD_INDEX_SIZE  10
#define PUD_INDEX_SIZE	0
#define PGD_INDEX_SIZE  12

#define PTRS_PER_PTE	(1 << PTE_INDEX_SIZE)
#define PTRS_PER_PMD	(1 << PMD_INDEX_SIZE)
#define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)

/* With 4k base page size, hugepage PTEs go at the PMD level */
#define MIN_HUGEPTE_SHIFT	PAGE_SHIFT

/* PMD_SHIFT determines what a second-level page table entry can map */
#define PMD_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
#define PMD_SIZE	(1UL << PMD_SHIFT)
#define PMD_MASK	(~(PMD_SIZE-1))

/* PGDIR_SHIFT determines what a third-level page table entry can map */
#define PGDIR_SHIFT	(PMD_SHIFT + PMD_INDEX_SIZE)
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
#define PGDIR_MASK	(~(PGDIR_SIZE-1))

#define _PAGE_COMBO	0x00040000 /* this is a combo 4k page */
#define _PAGE_4K_PFN	0x00080000 /* PFN is for a single 4k page */

/*
 * Used to track subpage group valid if _PAGE_COMBO is set
 * This overloads _PAGE_F_GIX and _PAGE_F_SECOND
 */
#define _PAGE_COMBO_VALID	(_PAGE_F_GIX | _PAGE_F_SECOND)

/* PTE flags to conserve for HPTE identification */
#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_F_SECOND | \
			 _PAGE_F_GIX | _PAGE_HASHPTE | _PAGE_COMBO)
/*
 * Shift to put page number into pte.
 *
 * That gives us a max RPN of 34 bits, which means a max of 50 bits
 * of addressable physical space, or 46 bits for the special 4k PFNs.
 */
#define PTE_RPN_SHIFT	(30)
/*
 * we support 16 fragments per PTE page of 64K size.
 */
#define PTE_FRAG_NR	16
/*
 * We use a 2K PTE page fragment and another 2K for storing
 * real_pte_t hash index
 */
#define PTE_FRAG_SIZE_SHIFT  12
#define PTE_FRAG_SIZE (1UL << PTE_FRAG_SIZE_SHIFT)
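/*
 * (Editorial worked numbers, not in the original source: PTE_FRAG_SIZE
 * is 1 << 12 = 4K -- 2K of PTEs plus 2K of hash-index data -- and
 * PTE_FRAG_NR * PTE_FRAG_SIZE = 16 * 4K = 64K, exactly one page.)
 */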
/*
 * Bits to mask out from a PMD to get to the PTE page
 * PMDs point to PTE table fragments which are PTE_FRAG_SIZE aligned.
 */
#define PMD_MASKED_BITS		(PTE_FRAG_SIZE - 1)
/* Bits to mask out from a PGD/PUD to get to the PMD page */
#define PUD_MASKED_BITS		0x1ff

#ifndef __ASSEMBLY__

/*
 * With 64K pages on hash table, we have a special PTE format that
 * uses a second "half" of the page table to encode sub-page information
 * in order to deal with 64K made of 4K HW pages. Thus we override the
 * generic accessors and iterators here
 */
#define __real_pte __real_pte
static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)
{
	real_pte_t rpte;
	unsigned long *hidxp;

	rpte.pte = pte;
	rpte.hidx = 0;
	if (pte_val(pte) & _PAGE_COMBO) {
		/*
		 * Make sure we order the hidx load against the _PAGE_COMBO
		 * check. The store side ordering is done in __hash_page_4K
		 */
		smp_rmb();
		hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
		rpte.hidx = *hidxp;
	}
	return rpte;
}

static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
{
	if ((pte_val(rpte.pte) & _PAGE_COMBO))
		return (rpte.hidx >> (index<<2)) & 0xf;
	return (pte_val(rpte.pte) >> _PAGE_F_GIX_SHIFT) & 0xf;
}

#define __rpte_to_pte(r)	((r).pte)
extern bool __rpte_sub_valid(real_pte_t rpte, unsigned long index);
/*
 * Trick: we set __end to va + 64k, which happens to work for
 * a 16M page as well, as we want only one iteration
 */
#define pte_iterate_hashed_subpages(rpte, psize, vpn, index, shift)	\
	do {								\
		unsigned long __end = vpn + (1UL << (PAGE_SHIFT - VPN_SHIFT));	\
		unsigned __split = (psize == MMU_PAGE_4K ||		\
				    psize == MMU_PAGE_64K_AP);		\
		shift = mmu_psize_defs[psize].shift;			\
		for (index = 0; vpn < __end; index++,			\
			     vpn += (1L << (shift - VPN_SHIFT))) {	\
			if (!__split || __rpte_sub_valid(rpte, index))	\
				do {

#define pte_iterate_hashed_end() } while(0); } } while(0)

#define pte_pagesize_index(mm, addr, pte)	\
	(((pte) & _PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)

#define remap_4k_pfn(vma, addr, pfn, prot)				\
	(WARN_ON(((pfn) >= (1UL << (64 - PTE_RPN_SHIFT)))) ? -EINVAL :	\
		remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,	\
			__pgprot(pgprot_val((prot)) | _PAGE_4K_PFN)))

#define PTE_TABLE_SIZE	PTE_FRAG_SIZE
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define PMD_TABLE_SIZE	((sizeof(pmd_t) << PMD_INDEX_SIZE) + (sizeof(unsigned long) << PMD_INDEX_SIZE))
#else
#define PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
#endif
#define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)

#define pgd_pte(pgd)	(pud_pte(((pud_t){ pgd })))
#define pte_pgd(pte)	((pgd_t)pte_pud(pte))

#ifdef CONFIG_HUGETLB_PAGE
/*
 * We have PGD_INDEX_SIZE = 12 and PTE_INDEX_SIZE = 8, so that we can have
 * 16GB hugepage pte in PGD and 16MB hugepage pte at PMD;
 *
 * Defined in such a way that we can optimize away code block at build time
 * if CONFIG_HUGETLB_PAGE=n.
 */
static inline int pmd_huge(pmd_t pmd)
{
	/*
	 * leaf pte for huge page
	 */
	return !!(pmd_val(pmd) & _PAGE_PTE);
}

static inline int pud_huge(pud_t pud)
{
	/*
	 * leaf pte for huge page
	 */
	return !!(pud_val(pud) & _PAGE_PTE);
}

static inline int pgd_huge(pgd_t pgd)
{
	/*
	 * leaf pte for huge page
	 */
	return !!(pgd_val(pgd) & _PAGE_PTE);
}
#define pgd_huge pgd_huge

#ifdef CONFIG_DEBUG_VM
extern int hugepd_ok(hugepd_t hpd);
#define is_hugepd(hpd)	(hugepd_ok(hpd))
#else
/*
 * With 64k page size, we have hugepage ptes in the pgd and pmd entries. We
 * don't need to set up a hugepage directory for them. Our pte and page
 * directory format enables us to have this enabled.
 */
static inline int hugepd_ok(hugepd_t hpd)
{
	return 0;
}
#define is_hugepd(pdep)	0
#endif /* CONFIG_DEBUG_VM */

#endif /* CONFIG_HUGETLB_PAGE */

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern unsigned long pmd_hugepage_update(struct mm_struct *mm,
					 unsigned long addr,
					 pmd_t *pmdp,
					 unsigned long clr,
					 unsigned long set);
static inline char *get_hpte_slot_array(pmd_t *pmdp)
{
	/*
	 * The hpte hindex is stored in the pgtable whose address is in the
	 * second half of the PMD
	 *
	 * Order this load with the test for pmd_trans_huge in the caller
	 */
	smp_rmb();
	return *(char **)(pmdp + PTRS_PER_PMD);
}
/*
 * The linux hugepage PMD now includes the pmd entries followed by the address
 * of the stashed pgtable_t. The stashed pgtable_t contains the hpte bits.
 * [ 1 bit secondary | 3 bit hidx | 1 bit valid | 000]. We use one byte for
 * each HPTE entry. With a 16MB hugepage and 64K HPTEs we need 256 entries,
 * and with 4K HPTEs we need 4096 entries. Both will fit in a 4K pgtable_t.
 *
 * The last three bits are intentionally left as zero. This memory location
 * is also used as a normal page PTE pointer. So if we have any pointers
 * left around while we collapse a hugepage, we need to make sure
 * the _PAGE_PRESENT bit of it is zero when we look at them
 */
static inline unsigned int hpte_valid(unsigned char *hpte_slot_array, int index)
{
	return (hpte_slot_array[index] >> 3) & 0x1;
}

static inline unsigned int hpte_hash_index(unsigned char *hpte_slot_array,
					   int index)
{
	return hpte_slot_array[index] >> 4;
}

static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
					unsigned int index, unsigned int hidx)
{
	hpte_slot_array[index] = hidx << 4 | 0x1 << 3;
}

/*
 * For core kernel code, by design, pmd_trans_huge is never run on any
 * hugetlbfs page. The hugetlbfs page table walking and mangling paths are
 * totally separated from the core VM paths and they're differentiated by
 * VM_HUGETLB being set on vm_flags well before any pmd_trans_huge could run.
 *
 * pmd_trans_huge() is defined as false at build time if
 * CONFIG_TRANSPARENT_HUGEPAGE=n to optimize away code blocks at build
 * time in such case.
 *
 * For ppc64 we need to differentiate explicit hugepages from THP, because
 * for THP we also track the subpage details at the pmd level. We don't do
 * that for explicit huge pages.
 */
static inline int pmd_trans_huge(pmd_t pmd)
{
	return !!((pmd_val(pmd) & (_PAGE_PTE | _PAGE_THP_HUGE)) ==
		  (_PAGE_PTE | _PAGE_THP_HUGE));
}

static inline int pmd_trans_splitting(pmd_t pmd)
{
	if (pmd_trans_huge(pmd))
		return pmd_val(pmd) & _PAGE_SPLITTING;
	return 0;
}

static inline int pmd_large(pmd_t pmd)
{
	return !!(pmd_val(pmd) & _PAGE_PTE);
}

static inline pmd_t pmd_mknotpresent(pmd_t pmd)
{
	return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);
}

static inline pmd_t pmd_mksplitting(pmd_t pmd)
{
	return __pmd(pmd_val(pmd) | _PAGE_SPLITTING);
}

#define __HAVE_ARCH_PMD_SAME
static inline int pmd_same(pmd_t pmd_a, pmd_t pmd_b)
{
	return (((pmd_val(pmd_a) ^ pmd_val(pmd_b)) & ~_PAGE_HPTEFLAGS) == 0);
}

static inline int __pmdp_test_and_clear_young(struct mm_struct *mm,
					      unsigned long addr, pmd_t *pmdp)
{
	unsigned long old;

	if ((pmd_val(*pmdp) & (_PAGE_ACCESSED | _PAGE_HASHPTE)) == 0)
		return 0;
	old = pmd_hugepage_update(mm, addr, pmdp, _PAGE_ACCESSED, 0);
	return ((old & _PAGE_ACCESSED) != 0);
}

#define __HAVE_ARCH_PMDP_SET_WRPROTECT
static inline void pmdp_set_wrprotect(struct mm_struct *mm, unsigned long addr,
				      pmd_t *pmdp)
{
	if ((pmd_val(*pmdp) & _PAGE_RW) == 0)
		return;
	pmd_hugepage_update(mm, addr, pmdp, _PAGE_RW, 0);
}

#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_HASH_64K_H */
...
#ifndef _ASM_POWERPC_BOOK3S_64_PGTABLE_H_
#define _ASM_POWERPC_BOOK3S_64_PGTABLE_H_
/*
 * This file contains the functions and defines necessary to modify and use
 * the ppc64 hashed page table.
 */

#include <asm/book3s/64/hash.h>
#include <asm/barrier.h>

/*
 * The second half of the kernel virtual space is used for IO mappings,
 * it's itself carved into the PIO region (ISA and PHB IO space) and
 * the ioremap space
 *
 *  ISA_IO_BASE = KERN_IO_START, 64K reserved area
 *  PHB_IO_BASE = ISA_IO_BASE + 64K to ISA_IO_BASE + 2G, PHB IO spaces
 * IOREMAP_BASE = ISA_IO_BASE + 2G to VMALLOC_START + PGTABLE_RANGE
 */
#define KERN_IO_START	(KERN_VIRT_START + (KERN_VIRT_SIZE >> 1))
#define FULL_IO_SIZE	0x80000000ul
#define  ISA_IO_BASE	(KERN_IO_START)
#define  ISA_IO_END	(KERN_IO_START + 0x10000ul)
#define  PHB_IO_BASE	(ISA_IO_END)
#define  PHB_IO_END	(KERN_IO_START + FULL_IO_SIZE)
#define IOREMAP_BASE	(PHB_IO_END)
#define IOREMAP_END	(KERN_VIRT_START + KERN_VIRT_SIZE)

#define vmemmap		((struct page *)VMEMMAP_BASE)

/* Advertise special mapping type for AGP */
#define HAVE_PAGE_AGP

/* Advertise support for _PAGE_SPECIAL */
#define __HAVE_ARCH_PTE_SPECIAL

#ifndef __ASSEMBLY__

/*
 * This is the default implementation of various PTE accessors, it's
 * used in all cases except Book3S with 64K pages where we have a
 * concept of sub-pages
 */
#ifndef __real_pte

#ifdef CONFIG_STRICT_MM_TYPECHECKS
#define __real_pte(e,p)		((real_pte_t){(e)})
#define __rpte_to_pte(r)	((r).pte)
#else
#define __real_pte(e,p)		(e)
#define __rpte_to_pte(r)	(__pte(r))
#endif
#define __rpte_to_hidx(r,index)	(pte_val(__rpte_to_pte(r)) >> _PAGE_F_GIX_SHIFT)

#define pte_iterate_hashed_subpages(rpte, psize, va, index, shift)	\
	do {								\
		index = 0;						\
		shift = mmu_psize_defs[psize].shift;			\

#define pte_iterate_hashed_end() } while(0)

/*
 * We expect this to be called only for user addresses or kernel virtual
 * addresses other than the linear mapping.
 */
#define pte_pagesize_index(mm, addr, pte)	MMU_PAGE_4K

#endif /* __real_pte */

static inline void pmd_set(pmd_t *pmdp, unsigned long val)
{
	*pmdp = __pmd(val);
}

static inline void pmd_clear(pmd_t *pmdp)
{
	*pmdp = __pmd(0);
}

#define pmd_none(pmd)		(!pmd_val(pmd))
#define pmd_present(pmd)	(!pmd_none(pmd))

static inline void pud_set(pud_t *pudp, unsigned long val)
{
	*pudp = __pud(val);
}

static inline void pud_clear(pud_t *pudp)
{
	*pudp = __pud(0);
}

#define pud_none(pud)		(!pud_val(pud))
#define pud_present(pud)	(pud_val(pud) != 0)

extern struct page *pud_page(pud_t pud);
extern struct page *pmd_page(pmd_t pmd);

static inline pte_t pud_pte(pud_t pud)
{
	return __pte(pud_val(pud));
}

static inline pud_t pte_pud(pte_t pte)
{
	return __pud(pte_val(pte));
}
#define pud_write(pud)		pte_write(pud_pte(pud))
#define pgd_write(pgd)		pte_write(pgd_pte(pgd))
static inline void pgd_set(pgd_t *pgdp, unsigned long val)
{
	*pgdp = __pgd(val);
}

/*
 * Find an entry in a page-table-directory. We combine the address region
 * (the high order N bits) and the pgd portion of the address.
 */
#define pgd_offset(mm, address)	((mm)->pgd + pgd_index(address))

#define pmd_offset(pudp,addr) \
	(((pmd_t *) pud_page_vaddr(*(pudp))) + pmd_index(addr))

#define pte_offset_kernel(dir,addr) \
	(((pte_t *) pmd_page_vaddr(*(dir))) + pte_index(addr))

#define pte_offset_map(dir,addr)	pte_offset_kernel((dir), (addr))
#define pte_unmap(pte)			do { } while(0)

/* to find an entry in a kernel page-table-directory */
/* This now only contains the vmalloc pages */
#define pgd_offset_k(address)	pgd_offset(&init_mm, address)

#define pte_ERROR(e) \
	pr_err("%s:%d: bad pte %08lx.\n", __FILE__, __LINE__, pte_val(e))
#define pmd_ERROR(e) \
	pr_err("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
#define pgd_ERROR(e) \
	pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))

/* Encode and de-code a swap entry */
#define MAX_SWAPFILES_CHECK() do { \
	BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS); \
	/*							\
	 * Don't have overlapping bits with _PAGE_HPTEFLAGS	\
	 * We filter HPTEFLAGS on set_pte.			\
	 */							\
	BUILD_BUG_ON(_PAGE_HPTEFLAGS & (0x1f << _PAGE_BIT_SWAP_TYPE)); \
	BUILD_BUG_ON(_PAGE_HPTEFLAGS & _PAGE_SWP_SOFT_DIRTY);	\
	} while (0)
/*
 * on pte we don't need to handle RADIX_TREE_EXCEPTIONAL_SHIFT;
 */
#define SWP_TYPE_BITS 5
#define __swp_type(x)		(((x).val >> _PAGE_BIT_SWAP_TYPE) \
				& ((1UL << SWP_TYPE_BITS) - 1))
#define __swp_offset(x)		((x).val >> PTE_RPN_SHIFT)
#define __swp_entry(type, offset)	((swp_entry_t) { \
					((type) << _PAGE_BIT_SWAP_TYPE) \
					| ((offset) << PTE_RPN_SHIFT) })
/*
 * swp_entry_t must be independent of pte bits. We build a swp_entry_t from
 * the swap type and offset we get from swap, and convert that to a pte to
 * find a matching pte in the linux page table.
 * Clear bits not found in swap entries here.
 */
#define __pte_to_swp_entry(pte)	((swp_entry_t) { pte_val((pte)) & ~_PAGE_PTE })
#define __swp_entry_to_pte(x)	__pte((x).val | _PAGE_PTE)

#ifdef CONFIG_MEM_SOFT_DIRTY
#define _PAGE_SWP_SOFT_DIRTY	(1UL << (SWP_TYPE_BITS + _PAGE_BIT_SWAP_TYPE))
#else
#define _PAGE_SWP_SOFT_DIRTY	0UL
#endif /* CONFIG_MEM_SOFT_DIRTY */

#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
static inline pte_t pte_swp_mksoft_dirty(pte_t pte)
{
	return __pte(pte_val(pte) | _PAGE_SWP_SOFT_DIRTY);
}
static inline bool pte_swp_soft_dirty(pte_t pte)
{
	return !!(pte_val(pte) & _PAGE_SWP_SOFT_DIRTY);
}
static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
{
	return __pte(pte_val(pte) & ~_PAGE_SWP_SOFT_DIRTY);
}
#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */

void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
void pgtable_cache_init(void);

struct page *realmode_pfn_to_page(unsigned long pfn);

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
extern pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot);
extern void set_pmd_at(struct mm_struct *mm, unsigned long addr,
		       pmd_t *pmdp, pmd_t pmd);
extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
				 pmd_t *pmd);
extern int has_transparent_hugepage(void);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */

static inline pte_t pmd_pte(pmd_t pmd)
{
	return __pte(pmd_val(pmd));
}

static inline pmd_t pte_pmd(pte_t pte)
{
	return __pmd(pte_val(pte));
}

static inline pte_t *pmdp_ptep(pmd_t *pmd)
{
	return (pte_t *)pmd;
}

#define pmd_pfn(pmd)		pte_pfn(pmd_pte(pmd))
#define pmd_dirty(pmd)		pte_dirty(pmd_pte(pmd))
#define pmd_young(pmd)		pte_young(pmd_pte(pmd))
#define pmd_mkold(pmd)		pte_pmd(pte_mkold(pmd_pte(pmd)))
#define pmd_wrprotect(pmd)	pte_pmd(pte_wrprotect(pmd_pte(pmd)))
#define pmd_mkdirty(pmd)	pte_pmd(pte_mkdirty(pmd_pte(pmd)))
#define pmd_mkyoung(pmd)	pte_pmd(pte_mkyoung(pmd_pte(pmd)))
#define pmd_mkwrite(pmd)	pte_pmd(pte_mkwrite(pmd_pte(pmd)))

#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
#define pmd_soft_dirty(pmd)	pte_soft_dirty(pmd_pte(pmd))
#define pmd_mksoft_dirty(pmd)	pte_pmd(pte_mksoft_dirty(pmd_pte(pmd)))
#define pmd_clear_soft_dirty(pmd) pte_pmd(pte_clear_soft_dirty(pmd_pte(pmd)))
#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */

#ifdef CONFIG_NUMA_BALANCING
static inline int pmd_protnone(pmd_t pmd)
{
	return pte_protnone(pmd_pte(pmd));
}
#endif /* CONFIG_NUMA_BALANCING */

#define __HAVE_ARCH_PMD_WRITE
#define pmd_write(pmd)		pte_write(pmd_pte(pmd))

static inline pmd_t pmd_mkhuge(pmd_t pmd)
{
	return __pmd(pmd_val(pmd) | (_PAGE_PTE | _PAGE_THP_HUGE));
}

#define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
extern int pmdp_set_access_flags(struct vm_area_struct *vma,
				 unsigned long address, pmd_t *pmdp,
				 pmd_t entry, int dirty);

#define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
extern int pmdp_test_and_clear_young(struct vm_area_struct *vma,
				     unsigned long address, pmd_t *pmdp);
#define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
				  unsigned long address, pmd_t *pmdp);

#define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
extern pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
				     unsigned long addr, pmd_t *pmdp);

#define __HAVE_ARCH_PMDP_SPLITTING_FLUSH
extern void pmdp_splitting_flush(struct vm_area_struct *vma,
				 unsigned long address, pmd_t *pmdp);

extern pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
				 unsigned long address, pmd_t *pmdp);
#define pmdp_collapse_flush pmdp_collapse_flush

#define __HAVE_ARCH_PGTABLE_DEPOSIT
extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
				       pgtable_t pgtable);
#define __HAVE_ARCH_PGTABLE_WITHDRAW
extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);

#define __HAVE_ARCH_PMDP_INVALIDATE
extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
			    pmd_t *pmdp);

#define pmd_move_must_withdraw pmd_move_must_withdraw
struct spinlock;
static inline int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
					 struct spinlock *old_pmd_ptl)
{
	/*
	 * Archs like ppc64 use pgtable to store per pmd
	 * specific information. So when we switch the pmd,
	 * we should also withdraw and deposit the pgtable
	 */
	return true;
}
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_PGTABLE_H_ */
#ifndef _ASM_POWERPC_BOOK3S_PGTABLE_H
#define _ASM_POWERPC_BOOK3S_PGTABLE_H

#ifdef CONFIG_PPC64
#include <asm/book3s/64/pgtable.h>
#else
#include <asm/book3s/32/pgtable.h>
#endif

#define FIRST_USER_ADDRESS	0UL
#ifndef __ASSEMBLY__
/* Insert a PTE, top-level function is out of line. It uses an inline
 * low level function in the respective pgtable-* files
 */
extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
		       pte_t pte);

#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
				 pte_t *ptep, pte_t entry, int dirty);

struct file;
extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
				     unsigned long size, pgprot_t vma_prot);
#define __HAVE_PHYS_MEM_ACCESS_PROT

#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_PGTABLE_H */
@@ -18,12 +18,12 @@ __xchg_u32(volatile void *p, unsigned long val)
 	unsigned long prev;
 
 	__asm__ __volatile__(
-	PPC_RELEASE_BARRIER
+	PPC_ATOMIC_ENTRY_BARRIER
 "1:	lwarx	%0,0,%2 \n"
 	PPC405_ERR77(0,%2)
 "	stwcx.	%3,0,%2 \n\
 	bne-	1b"
-	PPC_ACQUIRE_BARRIER
+	PPC_ATOMIC_EXIT_BARRIER
 	: "=&r" (prev), "+m" (*(volatile unsigned int *)p)
 	: "r" (p), "r" (val)
 	: "cc", "memory");
@@ -61,12 +61,12 @@ __xchg_u64(volatile void *p, unsigned long val)
 	unsigned long prev;
 
 	__asm__ __volatile__(
-	PPC_RELEASE_BARRIER
+	PPC_ATOMIC_ENTRY_BARRIER
 "1:	ldarx	%0,0,%2 \n"
 	PPC405_ERR77(0,%2)
 "	stdcx.	%3,0,%2 \n\
 	bne-	1b"
-	PPC_ACQUIRE_BARRIER
+	PPC_ATOMIC_EXIT_BARRIER
 	: "=&r" (prev), "+m" (*(volatile unsigned long *)p)
 	: "r" (p), "r" (val)
 	: "cc", "memory");
@@ -151,14 +151,14 @@ __cmpxchg_u32(volatile unsigned int *p, unsigned long old, unsigned long new)
 	unsigned int prev;
 
 	__asm__ __volatile__ (
-	PPC_RELEASE_BARRIER
+	PPC_ATOMIC_ENTRY_BARRIER
 "1:	lwarx	%0,0,%2		# __cmpxchg_u32\n\
 	cmpw	0,%0,%3\n\
 	bne-	2f\n"
 	PPC405_ERR77(0,%2)
 "	stwcx.	%4,0,%2\n\
 	bne-	1b"
-	PPC_ACQUIRE_BARRIER
+	PPC_ATOMIC_EXIT_BARRIER
 	"\n\
 2:"
 	: "=&r" (prev), "+m" (*p)
@@ -197,13 +197,13 @@ __cmpxchg_u64(volatile unsigned long *p, unsigned long old, unsigned long new)
 	unsigned long prev;
 
 	__asm__ __volatile__ (
-	PPC_RELEASE_BARRIER
+	PPC_ATOMIC_ENTRY_BARRIER
 "1:	ldarx	%0,0,%2		# __cmpxchg_u64\n\
 	cmpd	0,%0,%3\n\
 	bne-	2f\n\
 	stdcx.	%4,0,%2\n\
 	bne-	1b"
-	PPC_ACQUIRE_BARRIER
+	PPC_ATOMIC_EXIT_BARRIER
 	"\n\
 2:"
 	: "=&r" (prev), "+m" (*p)
...
@@ -5,6 +5,7 @@
 #include <linux/types.h>
 #include <linux/errno.h>
 #include <linux/of.h>
+#include <soc/fsl/qe/qe.h>
 
 /*
  * SPI Parameter RAM common to QE and CPM.
@@ -155,49 +156,6 @@ typedef struct cpm_buf_desc {
  */
 #define BD_I2C_START	(0x0400)
 
-int cpm_muram_init(void);
-
-#if defined(CONFIG_CPM) || defined(CONFIG_QUICC_ENGINE)
-unsigned long cpm_muram_alloc(unsigned long size, unsigned long align);
-int cpm_muram_free(unsigned long offset);
-unsigned long cpm_muram_alloc_fixed(unsigned long offset, unsigned long size);
-void __iomem *cpm_muram_addr(unsigned long offset);
-unsigned long cpm_muram_offset(void __iomem *addr);
-dma_addr_t cpm_muram_dma(void __iomem *addr);
-#else
-static inline unsigned long cpm_muram_alloc(unsigned long size,
-					    unsigned long align)
-{
-	return -ENOSYS;
-}
-
-static inline int cpm_muram_free(unsigned long offset)
-{
-	return -ENOSYS;
-}
-
-static inline unsigned long cpm_muram_alloc_fixed(unsigned long offset,
-						  unsigned long size)
-{
-	return -ENOSYS;
-}
-
-static inline void __iomem *cpm_muram_addr(unsigned long offset)
-{
-	return NULL;
-}
-
-static inline unsigned long cpm_muram_offset(void __iomem *addr)
-{
-	return -ENOSYS;
-}
-
-static inline dma_addr_t cpm_muram_dma(void __iomem *addr)
-{
-	return 0;
-}
-#endif /* defined(CONFIG_CPM) || defined(CONFIG_QUICC_ENGINE) */
-
 #ifdef CONFIG_CPM
 int cpm_command(u32 command, u8 opcode);
 #else
...
@@ -129,15 +129,6 @@ BEGIN_FTR_SECTION_NESTED(941)					\
 	mtspr	SPRN_PPR,ra;						\
 END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,941)
 
-/*
- * Increase the priority on systems where PPR save/restore is not
- * implemented/ supported.
- */
-#define HMT_MEDIUM_PPR_DISCARD						\
-BEGIN_FTR_SECTION_NESTED(942)						\
-	HMT_MEDIUM;							\
-END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,0,942)	/*non P7*/
-
 /*
  * Get an SPR into a register if the CPU has the given feature
  */
@@ -263,17 +254,6 @@ do_kvm_##n:								\
 #define KVM_HANDLER_SKIP(area, h, n)
 #endif
 
-#ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
-#define KVMTEST_PR(n)			__KVMTEST(n)
-#define KVM_HANDLER_PR(area, h, n)	__KVM_HANDLER(area, h, n)
-#define KVM_HANDLER_PR_SKIP(area, h, n)	__KVM_HANDLER_SKIP(area, h, n)
-#else
-#define KVMTEST_PR(n)
-#define KVM_HANDLER_PR(area, h, n)
-#define KVM_HANDLER_PR_SKIP(area, h, n)
-#endif
-
 #define NOTEST(n)
 
 /*
@@ -353,27 +333,25 @@ do_kvm_##n:								\
 /*
  * Exception vectors.
  */
-#define STD_EXCEPTION_PSERIES(loc, vec, label)			\
-	. = loc;						\
+#define STD_EXCEPTION_PSERIES(vec, label)			\
+	. = vec;						\
 	.globl label##_pSeries;					\
 label##_pSeries:						\
-	HMT_MEDIUM_PPR_DISCARD;					\
 	SET_SCRATCH0(r13);		/* save r13 */		\
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, label##_common,	\
-				 EXC_STD, KVMTEST_PR, vec)
+				 EXC_STD, KVMTEST, vec)
 
 /* Version of above for when we have to branch out-of-line */
 #define STD_EXCEPTION_PSERIES_OOL(vec, label)			\
 	.globl label##_pSeries;					\
 label##_pSeries:						\
-	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_PR, vec);	\
+	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST, vec);		\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD)
 
 #define STD_EXCEPTION_HV(loc, vec, label)			\
 	. = loc;						\
 	.globl label##_hv;					\
 label##_hv:							\
-	HMT_MEDIUM_PPR_DISCARD;					\
 	SET_SCRATCH0(r13);		/* save r13 */		\
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, label##_common,	\
 				 EXC_HV, KVMTEST, vec)
@@ -389,7 +367,6 @@ label##_hv:							\
 	. = loc;						\
 	.globl label##_relon_pSeries;				\
 label##_relon_pSeries:						\
-	HMT_MEDIUM_PPR_DISCARD;					\
 	/* No guest interrupts come through here */		\
 	SET_SCRATCH0(r13);		/* save r13 */		\
 	EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label##_common, \
@@ -405,7 +382,6 @@ label##_relon_pSeries:						\
 	. = loc;						\
 	.globl label##_relon_hv;				\
 label##_relon_hv:						\
-	HMT_MEDIUM_PPR_DISCARD;					\
 	/* No guest interrupts come through here */		\
 	SET_SCRATCH0(r13);		/* save r13 */		\
 	EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label##_common, \
@@ -436,17 +412,13 @@ label##_relon_hv:						\
 #define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
 
 #define SOFTEN_TEST_PR(vec)					\
-	KVMTEST_PR(vec);					\
+	KVMTEST(vec);						\
 	_SOFTEN_TEST(EXC_STD, vec)
 
 #define SOFTEN_TEST_HV(vec)					\
 	KVMTEST(vec);						\
 	_SOFTEN_TEST(EXC_HV, vec)
 
-#define SOFTEN_TEST_HV_201(vec)					\
-	KVMTEST(vec);						\
-	_SOFTEN_TEST(EXC_STD, vec)
-
 #define SOFTEN_NOTEST_PR(vec)	_SOFTEN_TEST(EXC_STD, vec)
 #define SOFTEN_NOTEST_HV(vec)	_SOFTEN_TEST(EXC_HV, vec)
 
@@ -463,7 +435,6 @@ label##_relon_hv:						\
 	. = loc;						\
 	.globl label##_pSeries;					\
 label##_pSeries:						\
-	HMT_MEDIUM_PPR_DISCARD;					\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,			\
 				    EXC_STD, SOFTEN_TEST_PR)
 
@@ -481,7 +452,6 @@ label##_hv:							\
 	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
 
 #define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
-	HMT_MEDIUM_PPR_DISCARD;					\
 	SET_SCRATCH0(r13);    /* save r13 */			\
 	EXCEPTION_PROLOG_0(PACA_EXGEN);				\
 	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);		\
...
@@ -47,12 +47,10 @@
 #define FW_FEATURE_VPHN		ASM_CONST(0x0000000004000000)
 #define FW_FEATURE_XCMO		ASM_CONST(0x0000000008000000)
 #define FW_FEATURE_OPAL		ASM_CONST(0x0000000010000000)
-#define FW_FEATURE_OPALv2	ASM_CONST(0x0000000020000000)
 #define FW_FEATURE_SET_MODE	ASM_CONST(0x0000000040000000)
 #define FW_FEATURE_BEST_ENERGY	ASM_CONST(0x0000000080000000)
 #define FW_FEATURE_TYPE1_AFFINITY ASM_CONST(0x0000000100000000)
 #define FW_FEATURE_PRRN		ASM_CONST(0x0000000200000000)
-#define FW_FEATURE_OPALv3	ASM_CONST(0x0000000400000000)
 
 #ifndef __ASSEMBLY__
 
@@ -70,8 +68,7 @@ enum {
 		FW_FEATURE_SET_MODE | FW_FEATURE_BEST_ENERGY |
 		FW_FEATURE_TYPE1_AFFINITY | FW_FEATURE_PRRN,
 	FW_FEATURE_PSERIES_ALWAYS = 0,
-	FW_FEATURE_POWERNV_POSSIBLE = FW_FEATURE_OPAL | FW_FEATURE_OPALv2 |
-		FW_FEATURE_OPALv3,
+	FW_FEATURE_POWERNV_POSSIBLE = FW_FEATURE_OPAL,
 	FW_FEATURE_POWERNV_ALWAYS = 0,
 	FW_FEATURE_PS3_POSSIBLE = FW_FEATURE_LPAR | FW_FEATURE_PS3_LV1,
 	FW_FEATURE_PS3_ALWAYS = FW_FEATURE_LPAR | FW_FEATURE_PS3_LV1,
...
@@ -385,6 +385,17 @@ static inline void __raw_writeq(unsigned long v, volatile void __iomem *addr)
{
	*(volatile unsigned long __force *)PCI_FIX_ADDR(addr) = v;
}

+/*
+ * Real mode version of the above. stdcix is only supposed to be used
+ * in hypervisor real mode as per the architecture spec.
+ */
+static inline void __raw_rm_writeq(u64 val, volatile void __iomem *paddr)
+{
+	__asm__ __volatile__("stdcix %0,0,%1"
+		: : "r" (val), "r" (paddr) : "memory");
+}
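The real-mode variant matters because OPAL and error-recovery paths can run with the MMU off, where a normal cached store is not architecturally safe; stdcix is the cache-inhibited store for exactly that situation. A minimal sketch of a call site, assuming a caller that already has a device register block in hand (the 0x210 offset is a placeholder, not from this patch):

/* Sketch only: flush a doubleword to a device register in real mode.
 * "regs" and the 0x210 offset are illustrative placeholders. */
static void example_rm_flush(void __iomem *regs, u64 val)
{
	__raw_rm_writeq(cpu_to_be64(val), regs + 0x210);
}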
#endif /* __powerpc64__ */

/*
......
@@ -21,7 +21,7 @@
 * need for various slices related matters. Note that this isn't the
 * complete pgtable.h but only a portion of it.
 */
-#include <asm/pgtable-ppc64.h>
+#include <asm/book3s/64/pgtable.h>
#include <asm/bug.h>
#include <asm/processor.h>
......
-#ifndef _ASM_POWERPC_PGTABLE_PPC32_H
-#define _ASM_POWERPC_PGTABLE_PPC32_H
+#ifndef _ASM_POWERPC_NOHASH_32_PGTABLE_H
+#define _ASM_POWERPC_NOHASH_32_PGTABLE_H

#include <asm-generic/pgtable-nopmd.h>

@@ -106,17 +106,15 @@ extern int icache_44x_need_flush;
 */
#if defined(CONFIG_40x)
-#include <asm/pte-40x.h>
+#include <asm/nohash/32/pte-40x.h>
#elif defined(CONFIG_44x)
-#include <asm/pte-44x.h>
+#include <asm/nohash/32/pte-44x.h>
#elif defined(CONFIG_FSL_BOOKE) && defined(CONFIG_PTE_64BIT)
-#include <asm/pte-book3e.h>
+#include <asm/nohash/pte-book3e.h>
#elif defined(CONFIG_FSL_BOOKE)
-#include <asm/pte-fsl-booke.h>
+#include <asm/nohash/32/pte-fsl-booke.h>
#elif defined(CONFIG_8xx)
-#include <asm/pte-8xx.h>
+#include <asm/nohash/32/pte-8xx.h>
-#else /* CONFIG_6xx */
-#include <asm/pte-hash32.h>
#endif

/* And here we include common definitions */

@@ -130,7 +128,12 @@ extern int icache_44x_need_flush;
#define pmd_none(pmd)		(!pmd_val(pmd))
#define pmd_bad(pmd)		(pmd_val(pmd) & _PMD_BAD)
#define pmd_present(pmd)	(pmd_val(pmd) & _PMD_PRESENT_MASK)
-#define pmd_clear(pmdp)		do { pmd_val(*(pmdp)) = 0; } while (0)
+static inline void pmd_clear(pmd_t *pmdp)
+{
+	*pmdp = __pmd(0);
+}

/*
 * When flushing the tlb entry for a page, we also need to flush the hash

@@ -337,4 +340,4 @@ extern int get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep,
#endif /* !__ASSEMBLY__ */

-#endif /* _ASM_POWERPC_PGTABLE_PPC32_H */
+#endif /* _ASM_POWERPC_NOHASH_32_PGTABLE_H */
-#ifndef _ASM_POWERPC_PTE_40x_H
-#define _ASM_POWERPC_PTE_40x_H
+#ifndef _ASM_POWERPC_NOHASH_32_PTE_40x_H
+#define _ASM_POWERPC_NOHASH_32_PTE_40x_H
#ifdef __KERNEL__

/*

@@ -61,4 +61,4 @@
#define PTE_ATOMIC_UPDATES	1

#endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_PTE_40x_H */
+#endif /* _ASM_POWERPC_NOHASH_32_PTE_40x_H */

-#ifndef _ASM_POWERPC_PTE_44x_H
-#define _ASM_POWERPC_PTE_44x_H
+#ifndef _ASM_POWERPC_NOHASH_32_PTE_44x_H
+#define _ASM_POWERPC_NOHASH_32_PTE_44x_H
#ifdef __KERNEL__

/*

@@ -94,4 +94,4 @@
#endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_PTE_44x_H */
+#endif /* _ASM_POWERPC_NOHASH_32_PTE_44x_H */

-#ifndef _ASM_POWERPC_PTE_8xx_H
-#define _ASM_POWERPC_PTE_8xx_H
+#ifndef _ASM_POWERPC_NOHASH_32_PTE_8xx_H
+#define _ASM_POWERPC_NOHASH_32_PTE_8xx_H
#ifdef __KERNEL__

/*

@@ -62,4 +62,4 @@
				 _PAGE_HWWRITE | _PAGE_EXEC)
#endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_PTE_8xx_H */
+#endif /* _ASM_POWERPC_NOHASH_32_PTE_8xx_H */

-#ifndef _ASM_POWERPC_PTE_FSL_BOOKE_H
-#define _ASM_POWERPC_PTE_FSL_BOOKE_H
+#ifndef _ASM_POWERPC_NOHASH_32_PTE_FSL_BOOKE_H
+#define _ASM_POWERPC_NOHASH_32_PTE_FSL_BOOKE_H
#ifdef __KERNEL__

/* PTE bit definitions for Freescale BookE SW loaded TLB MMU based

@@ -37,4 +37,4 @@
#define PTE_WIMGE_SHIFT (6)

#endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_PTE_FSL_BOOKE_H */
+#endif /* _ASM_POWERPC_NOHASH_32_PTE_FSL_BOOKE_H */
-#ifndef _ASM_POWERPC_PGTABLE_PPC64_4K_H
-#define _ASM_POWERPC_PGTABLE_PPC64_4K_H
+#ifndef _ASM_POWERPC_NOHASH_64_PGTABLE_4K_H
+#define _ASM_POWERPC_NOHASH_64_PGTABLE_4K_H
/*
 * Entries per page directory level. The PTE level must use a 64b record
 * for each page table entry. The PMD and PGD level use a 32b record for

@@ -55,11 +55,15 @@
#define pgd_none(pgd)		(!pgd_val(pgd))
#define pgd_bad(pgd)		(pgd_val(pgd) == 0)
#define pgd_present(pgd)	(pgd_val(pgd) != 0)
-#define pgd_clear(pgdp)		(pgd_val(*(pgdp)) = 0)
#define pgd_page_vaddr(pgd)	(pgd_val(pgd) & ~PGD_MASKED_BITS)

#ifndef __ASSEMBLY__

+static inline void pgd_clear(pgd_t *pgdp)
+{
+	*pgdp = __pgd(0);
+}
+
static inline pte_t pgd_pte(pgd_t pgd)
{
	return __pte(pgd_val(pgd));

@@ -85,4 +89,4 @@ extern struct page *pgd_page(pgd_t pgd);
#define remap_4k_pfn(vma, addr, pfn, prot)	\
	remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE, (prot))

-#endif /* _ASM_POWERPC_PGTABLE_PPC64_4K_H */
+#endif /* _ASM_POWERPC_NOHASH_64_PGTABLE_4K_H */
-#ifndef _ASM_POWERPC_PGTABLE_PPC64_64K_H
-#define _ASM_POWERPC_PGTABLE_PPC64_64K_H
+#ifndef _ASM_POWERPC_NOHASH_64_PGTABLE_64K_H
+#define _ASM_POWERPC_NOHASH_64_PGTABLE_64K_H

#include <asm-generic/pgtable-nopud.h>

@@ -9,8 +9,19 @@
#define PUD_INDEX_SIZE	0
#define PGD_INDEX_SIZE  12

+/*
+ * we support 32 fragments per PTE page of 64K size
+ */
+#define PTE_FRAG_NR	32
+/*
+ * We use a 2K PTE page fragment and another 2K for storing
+ * real_pte_t hash index
+ */
+#define PTE_FRAG_SIZE_SHIFT  11
+#define PTE_FRAG_SIZE (1UL << PTE_FRAG_SIZE_SHIFT)
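The arithmetic is worth spelling out: a shift of 11 gives 2K fragments, and a 64K page therefore yields 64K / 2K = 32 fragments, which is where PTE_FRAG_NR comes from. A stand-alone check in user-space C, with the values copied from the hunk above:

#include <assert.h>

#define PAGE_SHIFT		16	/* 64K pages */
#define PTE_FRAG_SIZE_SHIFT	11	/* 2K fragments */

int main(void)
{
	/* 32 fragments of 2048 bytes fill one 65536-byte page */
	assert((1 << (PAGE_SHIFT - PTE_FRAG_SIZE_SHIFT)) == 32);
	return 0;
}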
#ifndef __ASSEMBLY__
-#define PTE_TABLE_SIZE	(sizeof(real_pte_t) << PTE_INDEX_SIZE)
+#define PTE_TABLE_SIZE	PTE_FRAG_SIZE
#define PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
#define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)
#endif	/* __ASSEMBLY__ */

@@ -32,13 +43,15 @@
#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
#define PGDIR_MASK	(~(PGDIR_SIZE-1))

-/* Bits to mask out from a PMD to get to the PTE page */
-/* PMDs point to PTE table fragments which are 4K aligned. */
-#define PMD_MASKED_BITS		0xfff
+/*
+ * Bits to mask out from a PMD to get to the PTE page
+ * PMDs point to PTE table fragments which are PTE_FRAG_SIZE aligned.
+ */
+#define PMD_MASKED_BITS		(PTE_FRAG_SIZE - 1)
/* Bits to mask out from a PGD/PUD to get to the PMD page */
#define PUD_MASKED_BITS		0x1ff

#define pgd_pte(pgd)	(pud_pte(((pud_t){ pgd })))
#define pte_pgd(pte)	((pgd_t)pte_pud(pte))

-#endif /* _ASM_POWERPC_PGTABLE_PPC64_64K_H */
+#endif /* _ASM_POWERPC_NOHASH_64_PGTABLE_64K_H */
#ifndef _ASM_POWERPC_NOHASH_PGTABLE_H
#define _ASM_POWERPC_NOHASH_PGTABLE_H
#if defined(CONFIG_PPC64)
#include <asm/nohash/64/pgtable.h>
#else
#include <asm/nohash/32/pgtable.h>
#endif
#ifndef __ASSEMBLY__
/* Generic accessors to PTE bits */
static inline int pte_write(pte_t pte)
{
return (pte_val(pte) & (_PAGE_RW | _PAGE_RO)) != _PAGE_RO;
}
static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_special(pte_t pte) { return pte_val(pte) & _PAGE_SPECIAL; }
static inline int pte_none(pte_t pte) { return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
static inline pgprot_t pte_pgprot(pte_t pte) { return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }
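pte_write() deserves a note: it treats a PTE as writable unless _PAGE_RO is the only bit set of the pair, because a given CPU family defines just one of _PAGE_RW/_PAGE_RO and pte-common.h defaults the missing one to 0 (see the fallback defines later in this diff). A toy model with made-up bit values:

#include <assert.h>

#define _PAGE_RO 0x1	/* placeholder values, not the kernel's */
#define _PAGE_RW 0x2

static int toy_pte_write(unsigned long pteval)
{
	return (pteval & (_PAGE_RW | _PAGE_RO)) != _PAGE_RO;
}

int main(void)
{
	assert(toy_pte_write(_PAGE_RW));	/* RW-style CPU: writable */
	assert(!toy_pte_write(_PAGE_RO));	/* RO-style CPU: read-only */
	assert(toy_pte_write(0));		/* neither bit defined: writable */
	return 0;
}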
#ifdef CONFIG_NUMA_BALANCING
/*
* These work without NUMA balancing but the kernel does not care. See the
* comment in include/asm-generic/pgtable.h . On powerpc, this will only
* work for user pages and always return true for kernel pages.
*/
static inline int pte_protnone(pte_t pte)
{
return (pte_val(pte) &
(_PAGE_PRESENT | _PAGE_USER)) == _PAGE_PRESENT;
}
static inline int pmd_protnone(pmd_t pmd)
{
return pte_protnone(pmd_pte(pmd));
}
#endif /* CONFIG_NUMA_BALANCING */
static inline int pte_present(pte_t pte)
{
return pte_val(pte) & _PAGE_PRESENT;
}
/* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
*
* Even if PTEs can be unsigned long long, a PFN is always an unsigned
* long for now.
*/
static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot) {
return __pte(((pte_basic_t)(pfn) << PTE_RPN_SHIFT) |
pgprot_val(pgprot)); }
static inline unsigned long pte_pfn(pte_t pte) {
return pte_val(pte) >> PTE_RPN_SHIFT; }
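pfn_pte() and pte_pfn() are exact inverses as long as all protection bits live below PTE_RPN_SHIFT; a quick round-trip check with an assumed shift value:

#include <assert.h>

typedef unsigned long pte_basic_t;
#define PTE_RPN_SHIFT 17	/* example value; the real one is per-MMU */

int main(void)
{
	unsigned long pfn = 0x1234;
	pte_basic_t pte = ((pte_basic_t)pfn << PTE_RPN_SHIFT) | 0x3 /* prot */;

	assert((pte >> PTE_RPN_SHIFT) == pfn);	/* pte_pfn() recovers the pfn */
	return 0;
}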
/* Generic modifiers for PTE bits */
static inline pte_t pte_wrprotect(pte_t pte)
{
pte_basic_t ptev;
ptev = pte_val(pte) & ~(_PAGE_RW | _PAGE_HWWRITE);
ptev |= _PAGE_RO;
return __pte(ptev);
}
static inline pte_t pte_mkclean(pte_t pte)
{
return __pte(pte_val(pte) & ~(_PAGE_DIRTY | _PAGE_HWWRITE));
}
static inline pte_t pte_mkold(pte_t pte)
{
return __pte(pte_val(pte) & ~_PAGE_ACCESSED);
}
static inline pte_t pte_mkwrite(pte_t pte)
{
pte_basic_t ptev;
ptev = pte_val(pte) & ~_PAGE_RO;
ptev |= _PAGE_RW;
return __pte(ptev);
}
static inline pte_t pte_mkdirty(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_DIRTY);
}
static inline pte_t pte_mkyoung(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_ACCESSED);
}
static inline pte_t pte_mkspecial(pte_t pte)
{
return __pte(pte_val(pte) | _PAGE_SPECIAL);
}
static inline pte_t pte_mkhuge(pte_t pte)
{
return pte;
}
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
}
/* Insert a PTE, top-level function is out of line. It uses an inline
* low level function in the respective pgtable-* files
*/
extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
pte_t pte);
/* This low level function performs the actual PTE insertion
* Setting the PTE depends on the MMU type and other factors. It's
 * a horrible mess that I'm not going to try to clean up now but
* I'm keeping it in one place rather than spread around
*/
static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte, int percpu)
{
#if defined(CONFIG_PPC_STD_MMU_32) && defined(CONFIG_SMP) && !defined(CONFIG_PTE_64BIT)
/* First case is 32-bit Hash MMU in SMP mode with 32-bit PTEs. We use the
* helper pte_update() which does an atomic update. We need to do that
* because a concurrent invalidation can clear _PAGE_HASHPTE. If it's a
* per-CPU PTE such as a kmap_atomic, we do a simple update preserving
* the hash bits instead (ie, same as the non-SMP case)
*/
if (percpu)
*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
| (pte_val(pte) & ~_PAGE_HASHPTE));
else
pte_update(ptep, ~_PAGE_HASHPTE, pte_val(pte));
#elif defined(CONFIG_PPC32) && defined(CONFIG_PTE_64BIT)
/* Second case is 32-bit with 64-bit PTE. In this case, we
* can just store as long as we do the two halves in the right order
* with a barrier in between. This is possible because we take care,
* in the hash code, to pre-invalidate if the PTE was already hashed,
* which synchronizes us with any concurrent invalidation.
 * In the percpu case, we also fall back to the simple update, preserving
* the hash bits
*/
if (percpu) {
*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
| (pte_val(pte) & ~_PAGE_HASHPTE));
return;
}
#if _PAGE_HASHPTE != 0
if (pte_val(*ptep) & _PAGE_HASHPTE)
flush_hash_entry(mm, ptep, addr);
#endif
__asm__ __volatile__("\
stw%U0%X0 %2,%0\n\
eieio\n\
stw%U0%X0 %L2,%1"
: "=m" (*ptep), "=m" (*((unsigned char *)ptep+4))
: "r" (pte) : "memory");
#elif defined(CONFIG_PPC_STD_MMU_32)
/* Third case is 32-bit hash table in UP mode, we need to preserve
* the _PAGE_HASHPTE bit since we may not have invalidated the previous
* translation in the hash yet (done in a subsequent flush_tlb_xxx())
 * and so we need to keep track that this PTE needs invalidating.
*/
*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
| (pte_val(pte) & ~_PAGE_HASHPTE));
#else
/* Anything else just stores the PTE normally. That covers all 64-bit
* cases, and 32-bit non-hash with 32-bit PTEs.
*/
*ptep = pte;
#ifdef CONFIG_PPC_BOOK3E_64
/*
* With hardware tablewalk, a sync is needed to ensure that
* subsequent accesses see the PTE we just wrote. Unlike userspace
* mappings, we can't tolerate spurious faults, so make sure
* the new PTE will be seen the first time.
*/
if (is_kernel_addr(addr))
mb();
#endif
#endif
}
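Callers never pick one of these branches themselves: they go through set_pte_at(), and only genuinely per-CPU mappings pass percpu=1. A sketch of the two call shapes (the surrounding fault-handler context is assumed):

/* Normal user mapping: the SMP-safe path, percpu == 0 */
set_pte_at(mm, addr, ptep, pte);

/* kmap_atomic-style per-CPU slot: hash bits preserved, no atomics */
__set_pte_at(mm, addr, ptep, pte, 1);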
#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
pte_t *ptep, pte_t entry, int dirty);
/*
* Macro to mark a page protection value as "uncacheable".
*/
#define _PAGE_CACHE_CTL (_PAGE_COHERENT | _PAGE_GUARDED | _PAGE_NO_CACHE | \
_PAGE_WRITETHRU)
#define pgprot_noncached(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_NO_CACHE | _PAGE_GUARDED))
#define pgprot_noncached_wc(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_NO_CACHE))
#define pgprot_cached(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_COHERENT))
#define pgprot_cached_wthru(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_COHERENT | _PAGE_WRITETHRU))
#define pgprot_cached_noncoherent(prot) \
(__pgprot(pgprot_val(prot) & ~_PAGE_CACHE_CTL))
#define pgprot_writecombine pgprot_noncached_wc
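These pgprot helpers are consumed by mapping primitives such as ioremap_prot(); a sketch of mapping a device BAR uncacheable and guarded (the base address and 4K size are placeholders):

void __iomem *example_map_bar(phys_addr_t bar_base)
{
	return ioremap_prot(bar_base, 0x1000,
			    pgprot_val(pgprot_noncached(PAGE_KERNEL)));
}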
struct file;
extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
unsigned long size, pgprot_t vma_prot);
#define __HAVE_PHYS_MEM_ACCESS_PROT
#ifdef CONFIG_HUGETLB_PAGE
static inline int hugepd_ok(hugepd_t hpd)
{
return (hpd.pd > 0);
}
static inline int pmd_huge(pmd_t pmd)
{
return 0;
}
static inline int pud_huge(pud_t pud)
{
return 0;
}
static inline int pgd_huge(pgd_t pgd)
{
return 0;
}
#define pgd_huge pgd_huge
#define is_hugepd(hpd) (hugepd_ok(hpd))
#endif
#endif /* __ASSEMBLY__ */
#endif
-#ifndef _ASM_POWERPC_PTE_BOOK3E_H
-#define _ASM_POWERPC_PTE_BOOK3E_H
+#ifndef _ASM_POWERPC_NOHASH_PTE_BOOK3E_H
+#define _ASM_POWERPC_NOHASH_PTE_BOOK3E_H
#ifdef __KERNEL__

/* PTE bit definitions for processors compliant to the Book3E

@@ -84,4 +84,4 @@
#endif

#endif /* __KERNEL__ */
-#endif /* _ASM_POWERPC_PTE_FSL_BOOKE_H */
+#endif /* _ASM_POWERPC_NOHASH_PTE_BOOK3E_H */
@@ -157,7 +157,8 @@
#define OPAL_LEDS_GET_INDICATOR			114
#define OPAL_LEDS_SET_INDICATOR			115
#define OPAL_CEC_REBOOT2			116
-#define OPAL_LAST				116
+#define OPAL_CONSOLE_FLUSH			117
+#define OPAL_LAST				117

/* Device tree flags */
......
@@ -35,6 +35,7 @@ int64_t opal_console_read(int64_t term_number, __be64 *length,
			  uint8_t *buffer);
int64_t opal_console_write_buffer_space(int64_t term_number,
					__be64 *length);
+int64_t opal_console_flush(int64_t term_number);
int64_t opal_rtc_read(__be32 *year_month_day,
		      __be64 *hour_minute_second_millisecond);
int64_t opal_rtc_write(uint32_t year_month_day,

@@ -262,6 +263,8 @@ extern int opal_resync_timebase(void);

extern void opal_lpc_init(void);

+extern void opal_kmsg_init(void);
+
extern int opal_event_request(unsigned int opal_event_nr);

struct opal_sg_list *opal_vmalloc_to_sg_list(void *vmalloc_addr,
......
@@ -16,6 +16,7 @@

#ifdef CONFIG_PPC64

+#include <linux/string.h>
#include <asm/types.h>
#include <asm/lppaca.h>
#include <asm/mmu.h>

@@ -131,7 +132,16 @@ struct paca_struct {
	struct tlb_core_data tcd;
#endif /* CONFIG_PPC_BOOK3E */

-	mm_context_t context;
+#ifdef CONFIG_PPC_BOOK3S
+	mm_context_id_t mm_ctx_id;
+#ifdef CONFIG_PPC_MM_SLICES
+	u64 mm_ctx_low_slices_psize;
+	unsigned char mm_ctx_high_slices_psize[SLICE_ARRAY_SIZE];
+#else
+	u16 mm_ctx_user_psize;
+	u16 mm_ctx_sllp;
+#endif
+#endif

	/*
	 * then miscellaneous read-write fields

@@ -194,6 +204,23 @@
#endif
};

+#ifdef CONFIG_PPC_BOOK3S
+static inline void copy_mm_to_paca(mm_context_t *context)
+{
+	get_paca()->mm_ctx_id = context->id;
+#ifdef CONFIG_PPC_MM_SLICES
+	get_paca()->mm_ctx_low_slices_psize = context->low_slices_psize;
+	memcpy(&get_paca()->mm_ctx_high_slices_psize,
+	       &context->high_slices_psize, SLICE_ARRAY_SIZE);
+#else
+	get_paca()->mm_ctx_user_psize = context->user_psize;
+	get_paca()->mm_ctx_sllp = context->sllp;
+#endif
+}
+#else
+static inline void copy_mm_to_paca(mm_context_t *context) { }
+#endif
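A sketch of the intended call site, an assumption based on the series description ("copy only required pieces of the mm_context_t to the paca"): the context-switch path refreshes the PACA copy from the incoming mm before any SLB/segment code reads it:

/* Illustrative only; the real caller lives in the MMU context code */
static void example_switch_in(struct mm_struct *next)
{
	copy_mm_to_paca(&next->context);
}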
extern struct paca_struct *paca;
extern void initialise_paca(struct paca_struct *new_paca, int cpu);
extern void setup_paca(struct paca_struct *new_paca);
......
@@ -286,8 +286,11 @@ extern long long virt_phys_offset;

/* PTE level */
typedef struct { pte_basic_t pte; } pte_t;
-#define pte_val(x)	((x).pte)
#define __pte(x)	((pte_t) { (x) })
+static inline pte_basic_t pte_val(pte_t x)
+{
+	return x.pte;
+}

/* 64k pages additionally define a bigger "real PTE" type that gathers
 * the "second half" part of the PTE for pseudo 64k pages

@@ -301,21 +304,30 @@ typedef struct { pte_t pte; } real_pte_t;
/* PMD level */
#ifdef CONFIG_PPC64
typedef struct { unsigned long pmd; } pmd_t;
-#define pmd_val(x)	((x).pmd)
#define __pmd(x)	((pmd_t) { (x) })
+static inline unsigned long pmd_val(pmd_t x)
+{
+	return x.pmd;
+}

/* PUD level exists only on 4k pages */
#ifndef CONFIG_PPC_64K_PAGES
typedef struct { unsigned long pud; } pud_t;
-#define pud_val(x)	((x).pud)
#define __pud(x)	((pud_t) { (x) })
+static inline unsigned long pud_val(pud_t x)
+{
+	return x.pud;
+}
#endif /* !CONFIG_PPC_64K_PAGES */
#endif /* CONFIG_PPC64 */

/* PGD level */
typedef struct { unsigned long pgd; } pgd_t;
-#define pgd_val(x)	((x).pgd)
#define __pgd(x)	((pgd_t) { (x) })
+static inline unsigned long pgd_val(pgd_t x)
+{
+	return x.pgd;
+}

/* Page protection bits */
typedef struct { unsigned long pgprot; } pgprot_t;

@@ -329,8 +341,11 @@ typedef struct { unsigned long pgprot; } pgprot_t;
 */
typedef pte_basic_t pte_t;
-#define pte_val(x)	(x)
#define __pte(x)	(x)
+static inline pte_basic_t pte_val(pte_t pte)
+{
+	return pte;
+}

#if defined(CONFIG_PPC_64K_PAGES) && defined(CONFIG_PPC_STD_MMU_64)
typedef struct { pte_t pte; unsigned long hidx; } real_pte_t;
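The reason for trading these one-line macros for static inlines is type safety: the macro version made pte_val() an lvalue, so callers could mutate a PTE in place, which is exactly what the old generic modifiers further down in this diff did. As a sketch of the difference:

pte_val(pte) |= _PAGE_DIRTY;		/* legal with the old macro */
pte = __pte(pte_val(pte) | _PAGE_DIRTY);	/* the only form that
						 * compiles with the inline */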
@@ -341,67 +356,42 @@ typedef pte_t real_pte_t;
#ifdef CONFIG_PPC64
typedef unsigned long pmd_t;
-#define pmd_val(x)	(x)
#define __pmd(x)	(x)
+static inline unsigned long pmd_val(pmd_t pmd)
+{
+	return pmd;
+}

#ifndef CONFIG_PPC_64K_PAGES
typedef unsigned long pud_t;
-#define pud_val(x)	(x)
#define __pud(x)	(x)
+static inline unsigned long pud_val(pud_t pud)
+{
+	return pud;
+}
#endif /* !CONFIG_PPC_64K_PAGES */
#endif /* CONFIG_PPC64 */

typedef unsigned long pgd_t;
-#define pgd_val(x)	(x)
-#define pgprot_val(x)	(x)
+#define __pgd(x)	(x)
+static inline unsigned long pgd_val(pgd_t pgd)
+{
+	return pgd;
+}

typedef unsigned long pgprot_t;
-#define __pgd(x)	(x)
+#define pgprot_val(x)	(x)
#define __pgprot(x)	(x)

#endif

typedef struct { signed long pd; } hugepd_t;

-#ifdef CONFIG_HUGETLB_PAGE
-#ifdef CONFIG_PPC_BOOK3S_64
-#ifdef CONFIG_PPC_64K_PAGES
-/*
- * With 64k page size, we have hugepage ptes in the pgd and pmd entries. We don't
- * need to setup hugepage directory for them. Our pte and page directory format
- * enable us to have this enabled. But to avoid errors when implementing new
- * features disable hugepd for 64K. We enable a debug version here, So we catch
- * wrong usage.
- */
-#ifdef CONFIG_DEBUG_VM
-extern int hugepd_ok(hugepd_t hpd);
-#else
-#define hugepd_ok(x)	(0)
-#endif
-#else
-static inline int hugepd_ok(hugepd_t hpd)
-{
-	/*
-	 * hugepd pointer, bottom two bits == 00 and next 4 bits
-	 * indicate size of table
-	 */
-	return (((hpd.pd & 0x3) == 0x0) && ((hpd.pd & HUGEPD_SHIFT_MASK) != 0));
-}
-#endif
-#else
-static inline int hugepd_ok(hugepd_t hpd)
-{
-	return (hpd.pd > 0);
-}
-#endif
-#define is_hugepd(hpd)	(hugepd_ok(hpd))
-#define pgd_huge pgd_huge
-int pgd_huge(pgd_t pgd);
-#else /* CONFIG_HUGETLB_PAGE */
-#define is_hugepd(pdep)		0
-#define pgd_huge(pgd)		0
+#ifndef CONFIG_HUGETLB_PAGE
+#define is_hugepd(pdep)		(0)
+#define pgd_huge(pgd)		(0)
#endif /* CONFIG_HUGETLB_PAGE */
#define __hugepd(x)	((hugepd_t) { (x) })

struct page;
......
@@ -205,6 +205,7 @@ struct pci_dn {
	int	pci_ext_config_space;	/* for pci devices */
+	struct	pci_dev *pcidev;	/* back-pointer to the pci device */
#ifdef CONFIG_EEH
	struct eeh_dev *edev;		/* eeh device */
#endif
......
@@ -149,4 +149,8 @@ extern void pcibios_setup_phb_io_space(struct pci_controller *hose);
extern void pcibios_scan_phb(struct pci_controller *hose);

#endif	/* __KERNEL__ */
+
+extern struct pci_dev *pnv_pci_get_gpu_dev(struct pci_dev *npdev);
+extern struct pci_dev *pnv_pci_get_npu_dev(struct pci_dev *gpdev, int index);
+
#endif /* __ASM_POWERPC_PCI_H */
@@ -21,16 +21,34 @@ extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
/* #define pgd_populate(mm, pmd, pte)      BUG() */

#ifndef CONFIG_BOOKE
-#define pmd_populate_kernel(mm, pmd, pte)	\
-		(pmd_val(*(pmd)) = __pa(pte) | _PMD_PRESENT)
-#define pmd_populate(mm, pmd, pte)	\
-		(pmd_val(*(pmd)) = (page_to_pfn(pte) << PAGE_SHIFT) | _PMD_PRESENT)
+
+static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
+				       pte_t *pte)
+{
+	*pmdp = __pmd(__pa(pte) | _PMD_PRESENT);
+}
+
+static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp,
+				pgtable_t pte_page)
+{
+	*pmdp = __pmd((page_to_pfn(pte_page) << PAGE_SHIFT) | _PMD_PRESENT);
+}
+
#define pmd_pgtable(pmd) pmd_page(pmd)
#else
-#define pmd_populate_kernel(mm, pmd, pte)	\
-		(pmd_val(*(pmd)) = (unsigned long)pte | _PMD_PRESENT)
-#define pmd_populate(mm, pmd, pte)	\
-		(pmd_val(*(pmd)) = (unsigned long)lowmem_page_address(pte) | _PMD_PRESENT)
+
+static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
+				       pte_t *pte)
+{
+	*pmdp = __pmd((unsigned long)pte | _PMD_PRESENT);
+}
+
+static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp,
+				pgtable_t pte_page)
+{
+	*pmdp = __pmd((unsigned long)lowmem_page_address(pte_page) | _PMD_PRESENT);
+}
+
#define pmd_pgtable(pmd) pmd_page(pmd)
#endif
......
@@ -53,7 +53,7 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)

#ifndef CONFIG_PPC_64K_PAGES

-#define pgd_populate(MM, PGD, PUD)	pgd_set(PGD, PUD)
+#define pgd_populate(MM, PGD, PUD)	pgd_set(PGD, (unsigned long)PUD)

static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
{

@@ -71,9 +71,18 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
	pud_set(pud, (unsigned long)pmd);
}

-#define pmd_populate(mm, pmd, pte_page) \
-	pmd_populate_kernel(mm, pmd, page_address(pte_page))
-#define pmd_populate_kernel(mm, pmd, pte) pmd_set(pmd, (unsigned long)(pte))
+static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
+				       pte_t *pte)
+{
+	pmd_set(pmd, (unsigned long)pte);
+}
+
+static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
+				pgtable_t pte_page)
+{
+	pmd_set(pmd, (unsigned long)page_address(pte_page));
+}
+
#define pmd_pgtable(pmd) pmd_page(pmd)

static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,

@@ -154,16 +163,6 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
}

#else /* if CONFIG_PPC_64K_PAGES */
-/*
- * we support 16 fragments per PTE page.
- */
-#define PTE_FRAG_NR	16
-/*
- * We use a 2K PTE page fragment and another 2K for storing
- * real_pte_t hash index
- */
-#define PTE_FRAG_SIZE_SHIFT  12
-#define PTE_FRAG_SIZE (2 * PTRS_PER_PTE * sizeof(pte_t))

extern pte_t *page_table_alloc(struct mm_struct *, unsigned long, int);
extern void page_table_free(struct mm_struct *, unsigned long *, int);
......
#ifndef _ASM_POWERPC_PGTABLE_H
#define _ASM_POWERPC_PGTABLE_H
-#ifdef __KERNEL__

#ifndef __ASSEMBLY__
#include <linux/mmdebug.h>

@@ -13,210 +12,20 @@ struct mm_struct;

#endif /* !__ASSEMBLY__ */

-#if defined(CONFIG_PPC64)
-#  include <asm/pgtable-ppc64.h>
+#ifdef CONFIG_PPC_BOOK3S
+#include <asm/book3s/pgtable.h>
#else
-#  include <asm/pgtable-ppc32.h>
-#endif
+#include <asm/nohash/pgtable.h>
+#endif /* !CONFIG_PPC_BOOK3S */

-/*
- * We save the slot number & secondary bit in the second half of the
- * PTE page. We use the 8 bytes per each pte entry.
- */
-#define PTE_PAGE_HIDX_OFFSET (PTRS_PER_PTE * 8)

#ifndef __ASSEMBLY__

#include <asm/tlbflush.h>
/* Generic accessors to PTE bits */
static inline int pte_write(pte_t pte)
{ return (pte_val(pte) & (_PAGE_RW | _PAGE_RO)) != _PAGE_RO; }
static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY; }
static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED; }
static inline int pte_special(pte_t pte) { return pte_val(pte) & _PAGE_SPECIAL; }
static inline int pte_none(pte_t pte) { return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
static inline pgprot_t pte_pgprot(pte_t pte) { return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }
#ifdef CONFIG_NUMA_BALANCING
/*
* These work without NUMA balancing but the kernel does not care. See the
* comment in include/asm-generic/pgtable.h . On powerpc, this will only
* work for user pages and always return true for kernel pages.
*/
static inline int pte_protnone(pte_t pte)
{
return (pte_val(pte) &
(_PAGE_PRESENT | _PAGE_USER)) == _PAGE_PRESENT;
}
static inline int pmd_protnone(pmd_t pmd)
{
return pte_protnone(pmd_pte(pmd));
}
#endif /* CONFIG_NUMA_BALANCING */
static inline int pte_present(pte_t pte)
{
return pte_val(pte) & _PAGE_PRESENT;
}
/* Conversion functions: convert a page and protection to a page entry,
* and a page entry and page directory to the page they refer to.
*
* Even if PTEs can be unsigned long long, a PFN is always an unsigned
* long for now.
*/
static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot) {
return __pte(((pte_basic_t)(pfn) << PTE_RPN_SHIFT) |
pgprot_val(pgprot)); }
static inline unsigned long pte_pfn(pte_t pte) {
return pte_val(pte) >> PTE_RPN_SHIFT; }
/* Keep these as macros to avoid include dependency mess */
#define pte_page(x)		pfn_to_page(pte_pfn(x))
#define mk_pte(page, pgprot)	pfn_pte(page_to_pfn(page), (pgprot))
/* Generic modifiers for PTE bits */
static inline pte_t pte_wrprotect(pte_t pte) {
pte_val(pte) &= ~(_PAGE_RW | _PAGE_HWWRITE);
pte_val(pte) |= _PAGE_RO; return pte; }
static inline pte_t pte_mkclean(pte_t pte) {
pte_val(pte) &= ~(_PAGE_DIRTY | _PAGE_HWWRITE); return pte; }
static inline pte_t pte_mkold(pte_t pte) {
pte_val(pte) &= ~_PAGE_ACCESSED; return pte; }
static inline pte_t pte_mkwrite(pte_t pte) {
pte_val(pte) &= ~_PAGE_RO;
pte_val(pte) |= _PAGE_RW; return pte; }
static inline pte_t pte_mkdirty(pte_t pte) {
pte_val(pte) |= _PAGE_DIRTY; return pte; }
static inline pte_t pte_mkyoung(pte_t pte) {
pte_val(pte) |= _PAGE_ACCESSED; return pte; }
static inline pte_t pte_mkspecial(pte_t pte) {
pte_val(pte) |= _PAGE_SPECIAL; return pte; }
static inline pte_t pte_mkhuge(pte_t pte) {
return pte; }
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
pte_val(pte) = (pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot);
return pte;
}
/* Insert a PTE, top-level function is out of line. It uses an inline
* low level function in the respective pgtable-* files
*/
extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
pte_t pte);
/* This low level function performs the actual PTE insertion
* Setting the PTE depends on the MMU type and other factors. It's
* an horrible mess that I'm not going to try to clean up now but
* I'm keeping it in one place rather than spread around
*/
static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pte, int percpu)
{
#if defined(CONFIG_PPC_STD_MMU_32) && defined(CONFIG_SMP) && !defined(CONFIG_PTE_64BIT)
/* First case is 32-bit Hash MMU in SMP mode with 32-bit PTEs. We use the
* helper pte_update() which does an atomic update. We need to do that
* because a concurrent invalidation can clear _PAGE_HASHPTE. If it's a
* per-CPU PTE such as a kmap_atomic, we do a simple update preserving
* the hash bits instead (ie, same as the non-SMP case)
*/
if (percpu)
*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
| (pte_val(pte) & ~_PAGE_HASHPTE));
else
pte_update(ptep, ~_PAGE_HASHPTE, pte_val(pte));
#elif defined(CONFIG_PPC32) && defined(CONFIG_PTE_64BIT)
/* Second case is 32-bit with 64-bit PTE. In this case, we
* can just store as long as we do the two halves in the right order
* with a barrier in between. This is possible because we take care,
* in the hash code, to pre-invalidate if the PTE was already hashed,
* which synchronizes us with any concurrent invalidation.
* In the percpu case, we also fallback to the simple update preserving
* the hash bits
*/
if (percpu) {
*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
| (pte_val(pte) & ~_PAGE_HASHPTE));
return;
}
#if _PAGE_HASHPTE != 0
if (pte_val(*ptep) & _PAGE_HASHPTE)
flush_hash_entry(mm, ptep, addr);
#endif
__asm__ __volatile__("\
stw%U0%X0 %2,%0\n\
eieio\n\
stw%U0%X0 %L2,%1"
: "=m" (*ptep), "=m" (*((unsigned char *)ptep+4))
: "r" (pte) : "memory");
#elif defined(CONFIG_PPC_STD_MMU_32)
/* Third case is 32-bit hash table in UP mode, we need to preserve
* the _PAGE_HASHPTE bit since we may not have invalidated the previous
* translation in the hash yet (done in a subsequent flush_tlb_xxx())
* and see we need to keep track that this PTE needs invalidating
*/
*ptep = __pte((pte_val(*ptep) & _PAGE_HASHPTE)
| (pte_val(pte) & ~_PAGE_HASHPTE));
#else
/* Anything else just stores the PTE normally. That covers all 64-bit
* cases, and 32-bit non-hash with 32-bit PTEs.
*/
*ptep = pte;
#ifdef CONFIG_PPC_BOOK3E_64
/*
* With hardware tablewalk, a sync is needed to ensure that
* subsequent accesses see the PTE we just wrote. Unlike userspace
* mappings, we can't tolerate spurious faults, so make sure
* the new PTE will be seen the first time.
*/
if (is_kernel_addr(addr))
mb();
#endif
#endif
}
#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
extern int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
pte_t *ptep, pte_t entry, int dirty);
/*
* Macro to mark a page protection value as "uncacheable".
*/
#define _PAGE_CACHE_CTL (_PAGE_COHERENT | _PAGE_GUARDED | _PAGE_NO_CACHE | \
_PAGE_WRITETHRU)
#define pgprot_noncached(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_NO_CACHE | _PAGE_GUARDED))
#define pgprot_noncached_wc(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_NO_CACHE))
#define pgprot_cached(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_COHERENT))
#define pgprot_cached_wthru(prot) (__pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) | \
_PAGE_COHERENT | _PAGE_WRITETHRU))
#define pgprot_cached_noncoherent(prot) \
(__pgprot(pgprot_val(prot) & ~_PAGE_CACHE_CTL))
#define pgprot_writecombine pgprot_noncached_wc
struct file;
extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
unsigned long size, pgprot_t vma_prot);
#define __HAVE_PHYS_MEM_ACCESS_PROT
/*
 * ZERO_PAGE is a global shared page that is always zero: used
 * for zero-mapped memory areas etc..

@@ -271,5 +80,4 @@ static inline pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
}
#endif /* __ASSEMBLY__ */

-#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_PGTABLE_H */
@@ -201,6 +201,23 @@ static inline long plpar_pte_read_raw(unsigned long flags, unsigned long ptex,
	return rc;
}

+/*
+ * ptes must be 8*sizeof(unsigned long)
+ */
+static inline long plpar_pte_read_4(unsigned long flags, unsigned long ptex,
+				    unsigned long *ptes)
+{
+	long rc;
+	unsigned long retbuf[PLPAR_HCALL9_BUFSIZE];
+
+	rc = plpar_hcall9(H_READ, retbuf, flags | H_READ_4, ptex);
+
+	memcpy(ptes, retbuf, 8*sizeof(unsigned long));
+
+	return rc;
+}
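The buffer contract in the comment means four 16-byte HPTEs, returned as eight unsigned longs. A sketch of a conforming caller (the group-of-four rounding of ptex is an assumption about how callers pick the index, not taken from this hunk):

unsigned long ptes[8];
long rc = plpar_pte_read_4(0, ptex & ~3UL, ptes);
if (rc == H_SUCCESS) {
	/* ptes[2 * i] / ptes[2 * i + 1] are the two doublewords of HPTE i */
}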
/*
 * plpar_pte_read_4_raw can be called in real mode.
 * ptes must be 8*sizeof(unsigned long)
......
@@ -413,24 +413,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_601)
	FTR_SECTION_ELSE_NESTED(848);	\
	mtocrf (FXM), RS;		\
	ALT_FTR_SECTION_END_NESTED_IFCLR(CPU_FTR_NOEXECUTE, 848)

-/*
- * PPR restore macros used in entry_64.S
- * Used for P7 or later processors
- */
-#define HMT_MEDIUM_LOW_HAS_PPR						\
-BEGIN_FTR_SECTION_NESTED(944)						\
-	HMT_MEDIUM_LOW;							\
-END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,944)
-
-#define SET_DEFAULT_THREAD_PPR(ra, rb)					\
-BEGIN_FTR_SECTION_NESTED(945)						\
-	lis	ra,INIT_PPR@highest;	/* default ppr=3 */		\
-	ld	rb,PACACURRENT(r13);					\
-	sldi	ra,ra,32;	/* 11- 13 bits are used for ppr */	\
-	std	ra,TASKTHREADPPR(rb);					\
-END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,945)

#endif

/*
......
@@ -88,12 +88,6 @@ struct task_struct;
void start_thread(struct pt_regs *regs, unsigned long fdptr, unsigned long sp);
void release_thread(struct task_struct *);

-/* Lazy FPU handling on uni-processor */
-extern struct task_struct *last_task_used_math;
-extern struct task_struct *last_task_used_altivec;
-extern struct task_struct *last_task_used_vsx;
-extern struct task_struct *last_task_used_spe;
-
#ifdef CONFIG_PPC32

#if CONFIG_TASK_SIZE > CONFIG_KERNEL_START

@@ -294,6 +288,7 @@ struct thread_struct {
#endif
#ifdef CONFIG_PPC64
	unsigned long	dscr;
+	unsigned long	fscr;
	/*
	 * This member element dscr_inherit indicates that the process
	 * has explicitly attempted and changed the DSCR register value

@@ -385,8 +380,6 @@ extern int set_endian(struct task_struct *tsk, unsigned int val);
extern int get_unalign_ctl(struct task_struct *tsk, unsigned long adr);
extern int set_unalign_ctl(struct task_struct *tsk, unsigned int val);

-extern void fp_enable(void);
-extern void vec_enable(void);
extern void load_fp_state(struct thread_fp_state *fp);
extern void store_fp_state(struct thread_fp_state *fp);
extern void load_vr_state(struct thread_vr_state *vr);
......
@@ -40,6 +40,11 @@
#else
#define _PAGE_RW 0
#endif
+
+#ifndef _PAGE_PTE
+#define _PAGE_PTE 0
+#endif
+
#ifndef _PMD_PRESENT_MASK
#define _PMD_PRESENT_MASK	_PMD_PRESENT
#endif
......
/* To be included by pgtable-hash64.h only */
/* PTE bits */
#define _PAGE_HASHPTE 0x0400 /* software: pte has an associated HPTE */
#define _PAGE_SECONDARY 0x8000 /* software: HPTE is in secondary group */
#define _PAGE_GROUP_IX 0x7000 /* software: HPTE index within group */
#define _PAGE_F_SECOND _PAGE_SECONDARY
#define _PAGE_F_GIX _PAGE_GROUP_IX
#define _PAGE_SPECIAL 0x10000 /* software: special page */
/* PTE flags to conserve for HPTE identification */
#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | \
_PAGE_SECONDARY | _PAGE_GROUP_IX)
/* shift to put page number into pte */
#define PTE_RPN_SHIFT (17)
/* To be included by pgtable-hash64.h only */
/* Additional PTE bits (don't change without checking asm in hash_low.S) */
#define _PAGE_SPECIAL 0x00000400 /* software: special page */
#define _PAGE_HPTE_SUB 0x0ffff000 /* combo only: sub pages HPTE bits */
#define _PAGE_HPTE_SUB0 0x08000000 /* combo only: first sub page */
#define _PAGE_COMBO 0x10000000 /* this is a combo 4k page */
#define _PAGE_4K_PFN 0x20000000 /* PFN is for a single 4k page */
/* For 64K page, we don't have a separate _PAGE_HASHPTE bit. Instead,
* we set that to be the whole sub-bits mask. The C code will only
 * test this, so a multi-bit mask will work. For combo pages, this
 * is equivalent, as effectively the old _PAGE_HASHPTE was an OR of
 * all the sub bits. For real 64k pages, we now have the assembly set
* _PAGE_HPTE_SUB0 in addition to setting the HIDX bits which overlap
* that mask. This is fine as long as the HIDX bits are never set on
* a PTE that isn't hashed, which is the case today.
*
* A little nit is for the huge page C code, which does the hashing
* in C, we need to provide which bit to use.
*/
#define _PAGE_HASHPTE _PAGE_HPTE_SUB
/* Note the full page bits must be in the same location as for normal
* 4k pages as the same assembly will be used to insert 64K pages
* whether the kernel has CONFIG_PPC_64K_PAGES or not
*/
#define _PAGE_F_SECOND 0x00008000 /* full page: hidx bits */
#define _PAGE_F_GIX 0x00007000 /* full page: hidx bits */
/* PTE flags to conserve for HPTE identification */
#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | _PAGE_COMBO)
/* Shift to put page number into pte.
*
* That gives us a max RPN of 34 bits, which means a max of 50 bits
* of addressable physical space, or 46 bits for the special 4k PFNs.
*/
#define PTE_RPN_SHIFT (30)
#ifndef __ASSEMBLY__
/*
* With 64K pages on hash table, we have a special PTE format that
* uses a second "half" of the page table to encode sub-page information
* in order to deal with 64K made of 4K HW pages. Thus we override the
* generic accessors and iterators here
*/
#define __real_pte __real_pte
static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)
{
real_pte_t rpte;
rpte.pte = pte;
rpte.hidx = 0;
if (pte_val(pte) & _PAGE_COMBO) {
/*
* Make sure we order the hidx load against the _PAGE_COMBO
* check. The store side ordering is done in __hash_page_4K
*/
smp_rmb();
rpte.hidx = pte_val(*((ptep) + PTRS_PER_PTE));
}
return rpte;
}
static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
{
if ((pte_val(rpte.pte) & _PAGE_COMBO))
return (rpte.hidx >> (index<<2)) & 0xf;
return (pte_val(rpte.pte) >> 12) & 0xf;
}
#define __rpte_to_pte(r) ((r).pte)
#define __rpte_sub_valid(rpte, index) \
(pte_val(rpte.pte) & (_PAGE_HPTE_SUB0 >> (index)))
/* Trick: we set __end to va + 64k, which happens to work for
 * a 16M page as well, as we want only one iteration
 */
#define pte_iterate_hashed_subpages(rpte, psize, vpn, index, shift) \
do { \
unsigned long __end = vpn + (1UL << (PAGE_SHIFT - VPN_SHIFT)); \
unsigned __split = (psize == MMU_PAGE_4K || \
psize == MMU_PAGE_64K_AP); \
shift = mmu_psize_defs[psize].shift; \
for (index = 0; vpn < __end; index++, \
vpn += (1L << (shift - VPN_SHIFT))) { \
if (!__split || __rpte_sub_valid(rpte, index)) \
do {
#define pte_iterate_hashed_end() } while(0); } } while(0)
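The iterator pair is used bracket-style around a per-subpage body; a sketch of a flush loop (hpte_flush_one() is a hypothetical helper standing in for the real hash-flush code, and rpte/psize/vpn come from the surrounding context):

pte_iterate_hashed_subpages(rpte, psize, vpn, index, shift) {
	hpte_flush_one(vpn, index);	/* hypothetical per-subpage flush */
} pte_iterate_hashed_end();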
#define pte_pagesize_index(mm, addr, pte) \
(((pte) & _PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)
#define remap_4k_pfn(vma, addr, pfn, prot) \
(WARN_ON(((pfn) >= (1UL << (64 - PTE_RPN_SHIFT)))) ? -EINVAL : \
remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE, \
__pgprot(pgprot_val((prot)) | _PAGE_4K_PFN)))
#endif /* __ASSEMBLY__ */
#ifndef _ASM_POWERPC_PTE_HASH64_H
#define _ASM_POWERPC_PTE_HASH64_H
#ifdef __KERNEL__
/*
* Common bits between 4K and 64K pages in a linux-style PTE.
* These match the bits in the (hardware-defined) PowerPC PTE as closely
* as possible. Additional bits may be defined in pgtable-hash64-*.h
*
 * Note: We only support user read/write permissions. Supervisor always
 * has full read/write to pages above PAGE_OFFSET (pages below that
* always use the user access permissions).
*
* We could create separate kernel read-only if we used the 3 PP bits
* combinations that newer processors provide but we currently don't.
*/
#define _PAGE_PRESENT 0x0001 /* software: pte contains a translation */
#define _PAGE_USER 0x0002 /* matches one of the PP bits */
#define _PAGE_BIT_SWAP_TYPE 2
#define _PAGE_EXEC 0x0004 /* No execute on POWER4 and newer (we invert) */
#define _PAGE_GUARDED 0x0008
/* We can derive Memory coherence from _PAGE_NO_CACHE */
#define _PAGE_NO_CACHE 0x0020 /* I: cache inhibit */
#define _PAGE_WRITETHRU 0x0040 /* W: cache write-through */
#define _PAGE_DIRTY 0x0080 /* C: page changed */
#define _PAGE_ACCESSED 0x0100 /* R: page referenced */
#define _PAGE_RW 0x0200 /* software: user write access allowed */
#define _PAGE_BUSY 0x0800 /* software: PTE & hash are busy */
/* No separate kernel read-only */
#define _PAGE_KERNEL_RW (_PAGE_RW | _PAGE_DIRTY) /* user access blocked by key */
#define _PAGE_KERNEL_RO _PAGE_KERNEL_RW
/* Strong Access Ordering */
#define _PAGE_SAO (_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)
/* No page size encoding in the linux PTE */
#define _PAGE_PSIZE 0
/* PTEIDX nibble */
#define _PTEIDX_SECONDARY 0x8
#define _PTEIDX_GROUP_IX 0x7
/* Hash table based platforms need atomic updates of the linux PTE */
#define PTE_ATOMIC_UPDATES 1
#ifdef CONFIG_PPC_64K_PAGES
#include <asm/pte-hash64-64k.h>
#else
#include <asm/pte-hash64-4k.h>
#endif
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_PTE_HASH64_H */
@@ -1194,12 +1194,20 @@
#define __mtmsrd(v, l)	asm volatile("mtmsrd %0," __stringify(l) \
				     : : "r" (v) : "memory")
#define mtmsr(v)	__mtmsrd((v), 0)
+#define __MTMSR		"mtmsrd"
#else
#define mtmsr(v)	asm volatile("mtmsr %0" : \
			     : "r" ((unsigned long)(v)) \
			     : "memory")
+#define __MTMSR		"mtmsr"
#endif
+
+static inline void mtmsr_isync(unsigned long val)
+{
+	asm volatile(__MTMSR " %0; " ASM_FTR_IFCLR("isync", "nop", %1) : :
+			"r" (val), "i" (CPU_FTR_ARCH_206) : "memory");
+}
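ASM_FTR_IFCLR patches the isync to a nop on ARCH 2.06 and later CPUs, where an mtmsr of the facility bits is already synchronising enough for this use, so older CPUs keep the isync and newer ones skip the cost. A sketch of the call pattern:

unsigned long msr = mfmsr();
mtmsr_isync(msr | MSR_FP);	/* isync only where the CPU needs it */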
#define mfspr(rn)	({unsigned long rval; \
			asm volatile("mfspr %0," __stringify(rn) \
				: "=r" (rval)); rval;})

@@ -1207,6 +1215,15 @@
				     : "r" ((unsigned long)(v)) \
				     : "memory")

+extern void msr_check_and_set(unsigned long bits);
+extern bool strict_msr_control;
+extern void __msr_check_and_clear(unsigned long bits);
+static inline void msr_check_and_clear(unsigned long bits)
+{
+	if (strict_msr_control)
+		__msr_check_and_clear(bits);
+}
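This is the pattern the FP/VMX/VSX context-switch rework builds on: enable_kernel_fp() ends in msr_check_and_set(MSR_FP), and the matching disable_kernel_*() helpers added to switch_to.h below end in msr_check_and_clear(), which by default leaves the facility enabled until the next context switch and only drops it eagerly under strict MSR control. A sketch:

preempt_disable();
msr_check_and_set(MSR_FP);	/* what enable_kernel_fp() boils down to */
/* ... use the FPU ... */
msr_check_and_clear(MSR_FP);	/* no-op unless strict_msr_control is set */
preempt_enable();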
static inline unsigned long mfvtb(void)
{
#ifdef CONFIG_PPC_BOOK3S_64
......
@@ -334,10 +334,11 @@ extern void (*rtas_flash_term_hook)(int);

extern struct rtas_t rtas;

-extern void enter_rtas(unsigned long);
extern int rtas_token(const char *service);
extern int rtas_service_present(const char *service);
extern int rtas_call(int token, int, int, int *, ...);
+void rtas_call_unlocked(struct rtas_args *args, int token, int nargs,
+			int nret, ...);
extern void rtas_restart(char *cmd);
extern void rtas_power_off(void);
extern void rtas_halt(void);
......
@@ -4,6 +4,8 @@
#ifndef _ASM_POWERPC_SWITCH_TO_H
#define _ASM_POWERPC_SWITCH_TO_H

+#include <asm/reg.h>
+
struct thread_struct;
struct task_struct;
struct pt_regs;

@@ -12,74 +14,59 @@ extern struct task_struct *__switch_to(struct task_struct *,
		struct task_struct *);
#define switch_to(prev, next, last)	((last) = __switch_to((prev), (next)))

-struct thread_struct;
extern struct task_struct *_switch(struct thread_struct *prev,
				   struct thread_struct *next);

-#ifdef CONFIG_PPC_BOOK3S_64
-static inline void save_early_sprs(struct thread_struct *prev)
-{
-	if (cpu_has_feature(CPU_FTR_ARCH_207S))
-		prev->tar = mfspr(SPRN_TAR);
-	if (cpu_has_feature(CPU_FTR_DSCR))
-		prev->dscr = mfspr(SPRN_DSCR);
-}
-#else
-static inline void save_early_sprs(struct thread_struct *prev) {}
-#endif
-
-extern void enable_kernel_fp(void);
-extern void enable_kernel_altivec(void);
-extern void enable_kernel_vsx(void);
-extern int emulate_altivec(struct pt_regs *);
-extern void __giveup_vsx(struct task_struct *);
-extern void giveup_vsx(struct task_struct *);
-extern void enable_kernel_spe(void);
-extern void giveup_spe(struct task_struct *);
-extern void load_up_spe(struct task_struct *);
extern void switch_booke_debug_regs(struct debug_reg *new_debug);

-#ifndef CONFIG_SMP
-extern void discard_lazy_cpu_state(void);
-#else
-static inline void discard_lazy_cpu_state(void)
-{
-}
-#endif
+extern int emulate_altivec(struct pt_regs *);
+
+extern void flush_all_to_thread(struct task_struct *);
+extern void giveup_all(struct task_struct *);

#ifdef CONFIG_PPC_FPU
+extern void enable_kernel_fp(void);
extern void flush_fp_to_thread(struct task_struct *);
extern void giveup_fpu(struct task_struct *);
+extern void __giveup_fpu(struct task_struct *);
+static inline void disable_kernel_fp(void)
+{
+	msr_check_and_clear(MSR_FP);
+}
#else
static inline void flush_fp_to_thread(struct task_struct *t) { }
-static inline void giveup_fpu(struct task_struct *t) { }
#endif

#ifdef CONFIG_ALTIVEC
+extern void enable_kernel_altivec(void);
extern void flush_altivec_to_thread(struct task_struct *);
extern void giveup_altivec(struct task_struct *);
-extern void giveup_altivec_notask(void);
-#else
-static inline void flush_altivec_to_thread(struct task_struct *t)
-{
-}
-static inline void giveup_altivec(struct task_struct *t)
-{
-}
+extern void __giveup_altivec(struct task_struct *);
+static inline void disable_kernel_altivec(void)
+{
+	msr_check_and_clear(MSR_VEC);
+}
#endif

#ifdef CONFIG_VSX
+extern void enable_kernel_vsx(void);
extern void flush_vsx_to_thread(struct task_struct *);
-#else
-static inline void flush_vsx_to_thread(struct task_struct *t)
-{
-}
+extern void giveup_vsx(struct task_struct *);
+extern void __giveup_vsx(struct task_struct *);
+static inline void disable_kernel_vsx(void)
+{
+	msr_check_and_clear(MSR_FP|MSR_VEC|MSR_VSX);
+}
#endif

#ifdef CONFIG_SPE
+extern void enable_kernel_spe(void);
extern void flush_spe_to_thread(struct task_struct *);
-#else
-static inline void flush_spe_to_thread(struct task_struct *t)
-{
-}
+extern void giveup_spe(struct task_struct *);
+extern void __giveup_spe(struct task_struct *);
+static inline void disable_kernel_spe(void)
+{
+	msr_check_and_clear(MSR_SPE);
+}
#endif
......
@@ -44,7 +44,7 @@ static inline void isync(void)
	MAKE_LWSYNC_SECTION_ENTRY(97, __lwsync_fixup);
#define PPC_ACQUIRE_BARRIER	"\n" stringify_in_c(__PPC_ACQUIRE_BARRIER)
#define PPC_RELEASE_BARRIER	stringify_in_c(LWSYNC) "\n"
-#define PPC_ATOMIC_ENTRY_BARRIER "\n" stringify_in_c(LWSYNC) "\n"
+#define PPC_ATOMIC_ENTRY_BARRIER "\n" stringify_in_c(sync) "\n"
#define PPC_ATOMIC_EXIT_BARRIER	 "\n" stringify_in_c(sync) "\n"
#else
#define PPC_ACQUIRE_BARRIER
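The effect of this one-liner is that value-returning atomics and {cmp}xchg now open with a full sync rather than an lwsync, making them fully ordered as the merge description says. Paraphrasing the shape of atomic_add_return() to show where the two barriers land:

static inline int example_atomic_add_return(int a, atomic_t *v)
{
	int t;

	__asm__ __volatile__(
	PPC_ATOMIC_ENTRY_BARRIER		/* now "sync", was "lwsync" */
"1:	lwarx	%0,0,%2\n"
"	add	%0,%1,%0\n"
"	stwcx.	%0,0,%2\n"
"	bne-	1b\n"
	PPC_ATOMIC_EXIT_BARRIER			/* "sync" */
	: "=&r" (t)
	: "r" (a), "r" (&v->counter)
	: "cc", "memory");

	return t;
}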
......
@@ -27,7 +27,6 @@ extern struct clock_event_device decrementer_clockevent;

struct rtc_time;
extern void to_tm(int tim, struct rtc_time * tm);
-extern void GregorianDay(struct rtc_time *tm);
extern void tick_broadcast_ipi_handler(void);

extern void generic_calibrate_decr(void);
......
@@ -12,10 +12,9 @@
#include <uapi/asm/unistd.h>

-#define __NR_syscalls		379
+#define NR_syscalls		379

#define __NR__exit __NR_exit
-#define NR_syscalls	__NR_syscalls

#ifndef __ASSEMBLY__
...
@@ -41,7 +41,7 @@
 #include <linux/unistd.h>
 #include <linux/time.h>
-#define SYSCALL_MAP_SIZE ((__NR_syscalls + 31) / 32)
+#define SYSCALL_MAP_SIZE ((NR_syscalls + 31) / 32)
 /*
  * So here is the ppc64 backward compatible version
...
@@ -43,5 +43,7 @@
 #define PPC_FEATURE2_TAR		0x04000000
 #define PPC_FEATURE2_VEC_CRYPTO		0x02000000
 #define PPC_FEATURE2_HTM_NOSC		0x01000000
+#define PPC_FEATURE2_ARCH_3_00		0x00800000 /* ISA 3.00 */
+#define PPC_FEATURE2_HAS_IEEE128	0x00400000 /* VSX IEEE Binary Float 128-bit */
 #endif /* _UAPI__ASM_POWERPC_CPUTABLE_H */
@@ -295,6 +295,8 @@ do { \
 #define R_PPC64_TLSLD		108
 #define R_PPC64_TOCSAVE		109
+#define R_PPC64_ENTRY		118
 #define R_PPC64_REL16		249
 #define R_PPC64_REL16_LO	250
 #define R_PPC64_REL16_HI	251
...
@@ -960,6 +960,7 @@ int fix_alignment(struct pt_regs *regs)
 		preempt_disable();
 		enable_kernel_fp();
 		cvt_df(&data.dd, (float *)&data.x32.low32);
+		disable_kernel_fp();
 		preempt_enable();
 #else
 		return 0;
@@ -1000,6 +1001,7 @@ int fix_alignment(struct pt_regs *regs)
 		preempt_disable();
 		enable_kernel_fp();
 		cvt_fd((float *)&data.x32.low32, &data.dd);
+		disable_kernel_fp();
 		preempt_enable();
 #else
 		return 0;
...
@@ -185,14 +185,16 @@ int main(void)
 	DEFINE(PACAKMSR, offsetof(struct paca_struct, kernel_msr));
 	DEFINE(PACASOFTIRQEN, offsetof(struct paca_struct, soft_enabled));
 	DEFINE(PACAIRQHAPPENED, offsetof(struct paca_struct, irq_happened));
-	DEFINE(PACACONTEXTID, offsetof(struct paca_struct, context.id));
+#ifdef CONFIG_PPC_BOOK3S
+	DEFINE(PACACONTEXTID, offsetof(struct paca_struct, mm_ctx_id));
 #ifdef CONFIG_PPC_MM_SLICES
 	DEFINE(PACALOWSLICESPSIZE, offsetof(struct paca_struct,
-	    context.low_slices_psize));
+	    mm_ctx_low_slices_psize));
 	DEFINE(PACAHIGHSLICEPSIZE, offsetof(struct paca_struct,
-	    context.high_slices_psize));
+	    mm_ctx_high_slices_psize));
 	DEFINE(MMUPSIZEDEFSIZE, sizeof(struct mmu_psize_def));
 #endif /* CONFIG_PPC_MM_SLICES */
+#endif
 #ifdef CONFIG_PPC_BOOK3E
 	DEFINE(PACAPGD, offsetof(struct paca_struct, pgd));
@@ -222,7 +224,7 @@ int main(void)
 #ifdef CONFIG_PPC_MM_SLICES
 	DEFINE(MMUPSIZESLLP, offsetof(struct mmu_psize_def, sllp));
 #else
-	DEFINE(PACACONTEXTSLLP, offsetof(struct paca_struct, context.sllp));
+	DEFINE(PACACONTEXTSLLP, offsetof(struct paca_struct, mm_ctx_sllp));
 #endif /* CONFIG_PPC_MM_SLICES */
 	DEFINE(PACA_EXGEN, offsetof(struct paca_struct, exgen));
 	DEFINE(PACA_EXMC, offsetof(struct paca_struct, exmc));
...
@@ -223,7 +223,11 @@ END_FTR_SECTION_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
 	beq-	1f
 	ACCOUNT_CPU_USER_EXIT(r11, r12)
-	HMT_MEDIUM_LOW_HAS_PPR
+BEGIN_FTR_SECTION
+	HMT_MEDIUM_LOW
+END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	ld	r13,GPR13(r1)	/* only restore r13 if returning to usermode */
 1:	ld	r2,GPR2(r1)
 	ld	r1,GPR1(r1)
@@ -312,7 +316,13 @@ syscall_exit_work:
 	subi	r12,r12,TI_FLAGS
 4:	/* Anything else left to do? */
-	SET_DEFAULT_THREAD_PPR(r3, r10)	/* Set thread.ppr = 3 */
+BEGIN_FTR_SECTION
+	lis	r3,INIT_PPR@highest	/* Set thread.ppr = 3 */
+	ld	r10,PACACURRENT(r13)
+	sldi	r3,r3,32	/* bits 11-13 are used for ppr */
+	std	r3,TASKTHREADPPR(r10)
+END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	andi.	r0,r9,(_TIF_SYSCALL_DOTRACE|_TIF_SINGLESTEP)
 	beq	ret_from_except_lite
@@ -452,43 +462,11 @@ _GLOBAL(_switch)
 	/* r3-r13 are caller saved -- Cort */
 	SAVE_8GPRS(14, r1)
 	SAVE_10GPRS(22, r1)
-	mflr	r20		/* Return to switch caller */
-	mfmsr	r22
-	li	r0, MSR_FP
-#ifdef CONFIG_VSX
-BEGIN_FTR_SECTION
-	oris	r0,r0,MSR_VSX@h	/* Disable VSX */
-END_FTR_SECTION_IFSET(CPU_FTR_VSX)
-#endif /* CONFIG_VSX */
-#ifdef CONFIG_ALTIVEC
-BEGIN_FTR_SECTION
-	oris	r0,r0,MSR_VEC@h	/* Disable altivec */
-	mfspr	r24,SPRN_VRSAVE	/* save vrsave register value */
-	std	r24,THREAD_VRSAVE(r3)
-END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
-#endif /* CONFIG_ALTIVEC */
-	and.	r0,r0,r22
-	beq+	1f
-	andc	r22,r22,r0
-	MTMSRD(r22)
-	isync
-1:	std	r20,_NIP(r1)
+	std	r0,_NIP(r1)	/* Return to switch caller */
 	mfcr	r23
 	std	r23,_CCR(r1)
 	std	r1,KSP(r3)	/* Set old stack pointer */
-#ifdef CONFIG_PPC_BOOK3S_64
-BEGIN_FTR_SECTION
-	/* Event based branch registers */
-	mfspr	r0, SPRN_BESCR
-	std	r0, THREAD_BESCR(r3)
-	mfspr	r0, SPRN_EBBHR
-	std	r0, THREAD_EBBHR(r3)
-	mfspr	r0, SPRN_EBBRR
-	std	r0, THREAD_EBBRR(r3)
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
-#endif
 #ifdef CONFIG_SMP
 	/* We need a sync somewhere here to make sure that if the
 	 * previous task gets rescheduled on another CPU, it sees all
@@ -576,47 +554,6 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
 	mr	r1,r8		/* start using new stack pointer */
 	std	r7,PACAKSAVE(r13)
-#ifdef CONFIG_PPC_BOOK3S_64
-BEGIN_FTR_SECTION
-	/* Event based branch registers */
-	ld	r0, THREAD_BESCR(r4)
-	mtspr	SPRN_BESCR, r0
-	ld	r0, THREAD_EBBHR(r4)
-	mtspr	SPRN_EBBHR, r0
-	ld	r0, THREAD_EBBRR(r4)
-	mtspr	SPRN_EBBRR, r0
-	ld	r0,THREAD_TAR(r4)
-	mtspr	SPRN_TAR,r0
-END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S)
-#endif
-#ifdef CONFIG_ALTIVEC
-BEGIN_FTR_SECTION
-	ld	r0,THREAD_VRSAVE(r4)
-	mtspr	SPRN_VRSAVE,r0		/* if G4, restore VRSAVE reg */
-END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
-#endif /* CONFIG_ALTIVEC */
-#ifdef CONFIG_PPC64
-BEGIN_FTR_SECTION
-	lwz	r6,THREAD_DSCR_INHERIT(r4)
-	ld	r0,THREAD_DSCR(r4)
-	cmpwi	r6,0
-	bne	1f
-	ld	r0,PACA_DSCR_DEFAULT(r13)
-1:
-BEGIN_FTR_SECTION_NESTED(70)
-	mfspr	r8, SPRN_FSCR
-	rldimi	r8, r6, FSCR_DSCR_LG, (63 - FSCR_DSCR_LG)
-	mtspr	SPRN_FSCR, r8
-END_FTR_SECTION_NESTED(CPU_FTR_ARCH_207S, CPU_FTR_ARCH_207S, 70)
-	cmpd	r0,r25
-	beq	2f
-	mtspr	SPRN_DSCR,r0
-2:
-END_FTR_SECTION_IFSET(CPU_FTR_DSCR)
-#endif
 	ld	r6,_CCR(r1)
 	mtcrf	0xFF,r6
...
@@ -96,7 +96,6 @@ __start_interrupts:
 	.globl system_reset_pSeries;
 system_reset_pSeries:
-	HMT_MEDIUM_PPR_DISCARD
 	SET_SCRATCH0(r13)
 #ifdef CONFIG_PPC_P7_NAP
 BEGIN_FTR_SECTION
@@ -164,7 +163,6 @@ machine_check_pSeries_1:
 	 * some code path might still want to branch into the original
 	 * vector
 	 */
-	HMT_MEDIUM_PPR_DISCARD
 	SET_SCRATCH0(r13)		/* save r13 */
 #ifdef CONFIG_PPC_P7_NAP
 BEGIN_FTR_SECTION
@@ -199,7 +197,6 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	. = 0x300
 	.globl data_access_pSeries
 data_access_pSeries:
-	HMT_MEDIUM_PPR_DISCARD
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, data_access_common, EXC_STD,
 				 KVMTEST, 0x300)
@@ -207,7 +204,6 @@ data_access_pSeries:
 	. = 0x380
 	.globl data_access_slb_pSeries
 data_access_slb_pSeries:
-	HMT_MEDIUM_PPR_DISCARD
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXSLB)
 	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x380)
@@ -234,15 +230,14 @@ data_access_slb_pSeries:
 	bctr
 #endif
-	STD_EXCEPTION_PSERIES(0x400, 0x400, instruction_access)
+	STD_EXCEPTION_PSERIES(0x400, instruction_access)
 	. = 0x480
 	.globl instruction_access_slb_pSeries
 instruction_access_slb_pSeries:
-	HMT_MEDIUM_PPR_DISCARD
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480)
+	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x480)
 	std	r3,PACA_EXSLB+EX_R3(r13)
 	mfspr	r3,SPRN_SRR0		/* SRR0 is faulting address */
 #ifdef __DISABLED__
@@ -269,25 +264,24 @@ instruction_access_slb_pSeries:
 	.globl hardware_interrupt_hv;
 hardware_interrupt_pSeries:
 hardware_interrupt_hv:
-	HMT_MEDIUM_PPR_DISCARD
 	BEGIN_FTR_SECTION
 		_MASKABLE_EXCEPTION_PSERIES(0x502, hardware_interrupt,
 					    EXC_HV, SOFTEN_TEST_HV)
 		KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x502)
 	FTR_SECTION_ELSE
 		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt,
-					    EXC_STD, SOFTEN_TEST_HV_201)
+					    EXC_STD, SOFTEN_TEST_PR)
 		KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x500)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
-	STD_EXCEPTION_PSERIES(0x600, 0x600, alignment)
-	KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x600)
-	STD_EXCEPTION_PSERIES(0x700, 0x700, program_check)
-	KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x700)
-	STD_EXCEPTION_PSERIES(0x800, 0x800, fp_unavailable)
-	KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x800)
+	STD_EXCEPTION_PSERIES(0x600, alignment)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x600)
+	STD_EXCEPTION_PSERIES(0x700, program_check)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x700)
+	STD_EXCEPTION_PSERIES(0x800, fp_unavailable)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x800)
 	. = 0x900
 	.globl decrementer_pSeries
@@ -297,10 +291,10 @@ decrementer_pSeries:
 	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
 	MASKABLE_EXCEPTION_PSERIES(0xa00, 0xa00, doorbell_super)
-	KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xa00)
-	STD_EXCEPTION_PSERIES(0xb00, 0xb00, trap_0b)
-	KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xb00)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xa00)
+	STD_EXCEPTION_PSERIES(0xb00, trap_0b)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xb00)
 	. = 0xc00
 	.globl system_call_pSeries
@@ -331,8 +325,8 @@ system_call_pSeries:
 	SYSCALL_PSERIES_3
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xc00)
-	STD_EXCEPTION_PSERIES(0xd00, 0xd00, single_step)
-	KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xd00)
+	STD_EXCEPTION_PSERIES(0xd00, single_step)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xd00)
 	/* At 0xe??? we have a bunch of hypervisor exceptions, we branch
 	 * out of line to handle them
@@ -407,13 +401,12 @@ hv_facility_unavailable_trampoline:
 	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0x1202)
 #endif /* CONFIG_CBE_RAS */
-	STD_EXCEPTION_PSERIES(0x1300, 0x1300, instruction_breakpoint)
-	KVM_HANDLER_PR_SKIP(PACA_EXGEN, EXC_STD, 0x1300)
+	STD_EXCEPTION_PSERIES(0x1300, instruction_breakpoint)
+	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_STD, 0x1300)
 	. = 0x1500
 	.global denorm_exception_hv
 denorm_exception_hv:
-	HMT_MEDIUM_PPR_DISCARD
 	mtspr	SPRN_SPRG_HSCRATCH0,r13
 	EXCEPTION_PROLOG_0(PACA_EXGEN)
 	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0x1500)
@@ -435,8 +428,8 @@ denorm_exception_hv:
 	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0x1602)
 #endif /* CONFIG_CBE_RAS */
-	STD_EXCEPTION_PSERIES(0x1700, 0x1700, altivec_assist)
-	KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x1700)
+	STD_EXCEPTION_PSERIES(0x1700, altivec_assist)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x1700)
 #ifdef CONFIG_CBE_RAS
 	STD_EXCEPTION_HV(0x1800, 0x1802, cbe_thermal)
@@ -527,7 +520,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
 machine_check_pSeries:
 	.globl machine_check_fwnmi
 machine_check_fwnmi:
-	HMT_MEDIUM_PPR_DISCARD
 	SET_SCRATCH0(r13)		/* save r13 */
 	EXCEPTION_PROLOG_0(PACA_EXMC)
 machine_check_pSeries_0:
@@ -536,9 +528,9 @@ machine_check_pSeries_0:
 	KVM_HANDLER_SKIP(PACA_EXMC, EXC_STD, 0x200)
 	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_STD, 0x300)
 	KVM_HANDLER_SKIP(PACA_EXSLB, EXC_STD, 0x380)
-	KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x400)
-	KVM_HANDLER_PR(PACA_EXSLB, EXC_STD, 0x480)
-	KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0x900)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x400)
+	KVM_HANDLER(PACA_EXSLB, EXC_STD, 0x480)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x900)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x982)
 #ifdef CONFIG_PPC_DENORMALISATION
@@ -621,13 +613,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 /* moved from 0xf00 */
 	STD_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor)
-	KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xf00)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf00)
 	STD_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
-	KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xf20)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf20)
 	STD_EXCEPTION_PSERIES_OOL(0xf40, vsx_unavailable)
-	KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xf40)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf40)
 	STD_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
-	KVM_HANDLER_PR(PACA_EXGEN, EXC_STD, 0xf60)
+	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf60)
 	STD_EXCEPTION_HV_OOL(0xf82, facility_unavailable)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xf82)
@@ -711,7 +703,6 @@ ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	.globl system_reset_fwnmi
 	.align 7
system_reset_fwnmi:
-	HMT_MEDIUM_PPR_DISCARD
 	SET_SCRATCH0(r13)		/* save r13 */
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
 				 NOTEST, 0x100)
@@ -1556,29 +1547,19 @@ do_hash_page:
 	lwz	r0,TI_PREEMPT(r11)	/* If we're in an "NMI" */
 	andis.	r0,r0,NMI_MASK@h	/* (i.e. an irq when soft-disabled) */
 	bne	77f			/* then don't call hash_page now */
-	/*
-	 * We need to set the _PAGE_USER bit if MSR_PR is set or if we are
-	 * accessing a userspace segment (even from the kernel). We assume
-	 * kernel addresses always have the high bit set.
-	 */
-	rlwinm	r4,r4,32-25+9,31-9,31-9	/* DSISR_STORE -> _PAGE_RW */
-	rotldi	r0,r3,15		/* Move high bit into MSR_PR posn */
-	orc	r0,r12,r0		/* MSR_PR | ~high_bit */
-	rlwimi	r4,r0,32-13,30,30	/* becomes _PAGE_USER access bit */
-	ori	r4,r4,1			/* add _PAGE_PRESENT */
-	rlwimi	r4,r5,22+2,31-2,31-2	/* Set _PAGE_EXEC if trap is 0x400 */
 	/*
 	 * r3 contains the faulting address
-	 * r4 contains the required access permissions
+	 * r4 msr
 	 * r5 contains the trap number
 	 * r6 contains dsisr
 	 *
 	 * at return r3 = 0 for success, 1 for page fault, negative for error
 	 */
+	mr	r4,r12
 	ld	r6,_DSISR(r1)
-	bl	hash_page		/* build HPTE if possible */
-	cmpdi	r3,0			/* see if hash_page succeeded */
+	bl	__hash_page		/* build HPTE if possible */
+	cmpdi	r3,0			/* see if __hash_page succeeded */
 	/* Success */
 	beq	fast_exc_return_irq	/* Return from exception on success */
...
@@ -73,29 +73,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	MTFSF_L(fr0)
 	REST_32FPVSRS(0, R4, R7)
-	/* FP/VSX off again */
-	MTMSRD(r6)
-	SYNC
 	blr
 #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
-/*
- * Enable use of the FPU, and VSX if possible, for the caller.
- */
-_GLOBAL(fp_enable)
-	mfmsr	r3
-	ori	r3,r3,MSR_FP
-#ifdef CONFIG_VSX
-BEGIN_FTR_SECTION
-	oris	r3,r3,MSR_VSX@h
-END_FTR_SECTION_IFSET(CPU_FTR_VSX)
-#endif
-	SYNC
-	MTMSRD(r3)
-	isync			/* (not necessary for arch 2.02 and later) */
-	blr
 /*
  * Load state from memory into FP registers including FPSCR.
  * Assumes the caller has enabled FP in the MSR.
@@ -136,31 +116,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	SYNC
 	MTMSRD(r5)			/* enable use of fpu now */
 	isync
-/*
- * For SMP, we don't do lazy FPU switching because it just gets too
- * horrendously complex, especially when a task switches from one CPU
- * to another.  Instead we call giveup_fpu in switch_to.
- */
-#ifndef CONFIG_SMP
-	LOAD_REG_ADDRBASE(r3, last_task_used_math)
-	toreal(r3)
-	PPC_LL	r4,ADDROFF(last_task_used_math)(r3)
-	PPC_LCMPI	0,r4,0
-	beq	1f
-	toreal(r4)
-	addi	r4,r4,THREAD		/* want last_task_used_math->thread */
-	addi	r10,r4,THREAD_FPSTATE
-	SAVE_32FPVSRS(0, R5, R10)
-	mffs	fr0
-	stfd	fr0,FPSTATE_FPSCR(r10)
-	PPC_LL	r5,PT_REGS(r4)
-	toreal(r5)
-	PPC_LL	r4,_MSR-STACK_FRAME_OVERHEAD(r5)
-	li	r10,MSR_FP|MSR_FE0|MSR_FE1
-	andc	r4,r4,r10		/* disable FP for previous task */
-	PPC_STL	r4,_MSR-STACK_FRAME_OVERHEAD(r5)
-1:
-#endif /* CONFIG_SMP */
 	/* enable use of FP after return */
 #ifdef CONFIG_PPC32
 	mfspr	r5,SPRN_SPRG_THREAD	/* current task's THREAD (phys) */
@@ -179,36 +134,17 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	lfd	fr0,FPSTATE_FPSCR(r10)
 	MTFSF_L(fr0)
 	REST_32FPVSRS(0, R4, R10)
-#ifndef CONFIG_SMP
-	subi	r4,r5,THREAD
-	fromreal(r4)
-	PPC_STL	r4,ADDROFF(last_task_used_math)(r3)
-#endif /* CONFIG_SMP */
 	/* restore registers and return */
 	/* we haven't used ctr or xer or lr */
 	blr
 /*
- * giveup_fpu(tsk)
+ * __giveup_fpu(tsk)
  * Disable FP for the task given as the argument,
  * and save the floating-point registers in its thread_struct.
  * Enables the FPU for use in the kernel on return.
  */
-_GLOBAL(giveup_fpu)
-	mfmsr	r5
-	ori	r5,r5,MSR_FP
-#ifdef CONFIG_VSX
-BEGIN_FTR_SECTION
-	oris	r5,r5,MSR_VSX@h
-END_FTR_SECTION_IFSET(CPU_FTR_VSX)
-#endif
-	SYNC_601
-	ISYNC_601
-	MTMSRD(r5)			/* enable use of fpu now */
-	SYNC_601
-	isync
-	PPC_LCMPI	0,r3,0
-	beqlr-				/* if no previous owner, done */
+_GLOBAL(__giveup_fpu)
 	addi	r3,r3,THREAD		/* want THREAD of task */
 	PPC_LL	r6,THREAD_FPSAVEAREA(r3)
 	PPC_LL	r5,PT_REGS(r3)
@@ -230,11 +166,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	andc	r4,r4,r3		/* disable FP for previous task */
 	PPC_STL	r4,_MSR-STACK_FRAME_OVERHEAD(r5)
 1:
-#ifndef CONFIG_SMP
-	li	r5,0
-	LOAD_REG_ADDRBASE(r4,last_task_used_math)
-	PPC_STL	r5,ADDROFF(last_task_used_math)(r4)
-#endif /* CONFIG_SMP */
 	blr
 /*
...
This diff is collapsed.
@@ -89,13 +89,6 @@ _GLOBAL(power7_powersave_common)
 	std	r0,_LINK(r1)
 	std	r0,_NIP(r1)
-#ifndef CONFIG_SMP
-	/* Make sure FPU, VSX etc... are flushed as we may lose
-	 * state when going to nap mode
-	 */
-	bl	discard_lazy_cpu_state
-#endif /* CONFIG_SMP */
 	/* Hard disable interrupts */
 	mfmsr	r9
 	rldicl	r9,r9,48,1
...
@@ -743,6 +743,8 @@ relocate_new_kernel:
 	/* Check for 47x cores */
 	mfspr	r3,SPRN_PVR
 	srwi	r3,r3,16
+	cmplwi	cr0,r3,PVR_476FPE@h
+	beq	setup_map_47x
 	cmplwi	cr0,r3,PVR_476@h
 	beq	setup_map_47x
 	cmplwi	cr0,r3,PVR_476_ISS@h
...
This diff is collapsed.