- 12 Sep, 2012 5 commits
-
-
Timur Tabi authored
Add support for the Freescale P5040DS Reference Board ("Superhydra"), which is similar to the P5020DS. Features of the P5040 are listed below, but not all of these features (e.g. DPAA networking) are currently supported.

Four P5040 single-threaded e5500 cores
    Up to 2.4 GHz with 64-bit ISA support
    Three levels of instruction: user, supervisor, hypervisor
CoreNet platform cache (CPC)
    2.0 MB configured as dual 1 MB blocks
    Hierarchical interconnect fabric
Two 64-bit DDR3/3L SDRAM memory controllers with ECC and interleaving support
    Up to 1600 MT/s
    Memory pre-fetch engine
DPAA incorporating acceleration for the following functions
    Packet parsing, classification, and distribution (FMAN)
    Queue management for scheduling, packet sequencing and congestion management (QMAN)
    Hardware buffer management for buffer allocation and de-allocation (BMAN)
    Cryptography acceleration (SEC 5.0) at up to 40 Gbps
SerDes
    20 lanes at up to 5 Gbps
    Supports SGMII, XAUI, PCIe rev 1.1/2.0, SATA
Ethernet interfaces
    Two 10 Gbps Ethernet MACs
    Ten 1 Gbps Ethernet MACs
High-speed peripheral interfaces
    Two PCI Express 2.0/3.0 controllers
Additional peripheral interfaces
    Two serial ATA (SATA 2.0) controllers
    Two high-speed USB 2.0 controllers with integrated PHY
    Enhanced secure digital host controller (SD/MMC/eMMC)
    Enhanced serial peripheral interface (eSPI)
    Two I2C controllers
    Four UARTs
    Integrated flash controller supporting NAND and NOR flash
DMA
    Dual four-channel
Support for hardware virtualization and partitioning enforcement
    Extra privileged level for hypervisor support
QorIQ Trust Architecture 1.1
    Secure boot, secure debug, tamper detection, volatile key storage

Signed-off-by: Timur Tabi <timur@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Kim Phillips authored
Add device tree (dtsi) files for the Freescale P5040 SOC. Since this SOC introduces SEC v5.2, add the dtsi file for that also. Signed-off-by: Kim Phillips <kim.phillips@freescale.com> Signed-off-by: Timur Tabi <timur@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Timur Tabi authored
The PCI controller on the Freescale P5040 is v2.4. Signed-off-by: Timur Tabi <timur@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Timur Tabi authored
We only need two examples of CAMP device trees in the upstream kernel. Co-operative Asymmetric Multi-Processing (CAMP) is a technique where two or more operating systems (typically multiple copies of the same Linux kernel) are loaded into memory, and each kernel is given a subset of the available cores to execute on. For example, on a four-core system, one kernel runs on cores 0 and 1, and the other runs on cores 2 and 3. The devices are also partitioned among the operating systems, and this is done with customized device trees. Each kernel gets its own device tree that has only the devices that it should know about. Unfortunately, this approach is very hackish. The kernels are trusted to only access devices in their respective device trees, and the partitioning only works for devices that can be handled this way. Crafting the device trees is a tricky process, and getting U-Boot to load and start all the kernels is cumbersome. But most importantly, each CAMP setup is very application-specific, since the actual partitioning of resources is done in the DTS by the system designer. Therefore, it doesn't make much sense to carry a lot of CAMP device trees, since we only expect them to be used as examples. Signed-off-by: Timur Tabi <timur@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Tang Yuantian authored
The following platforms are supported: mpc8544, mpc8572, mpc8536, p1021, p1025, p1024, p1010. Signed-off-by: Tang Yuantian <Yuantian.Tang@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 10 Sep, 2012 2 commits
-
-
Joe MacDonald authored
sys_subpage_prot() takes an unsigned long for 'addr' then does some stuff with it and the result is stored in a signed int, i, which is eventually used as the size parameter in a copy_from_user call. Update 'i' to be an unsigned long as well and since 'nw' is used in a size_t context which, depending on whether this is 32- or 64-bit may be unsigned int or unsigned long, switch that to a size_t and always be right. Finally, since we're in the neighbourhood, make the same changes to subpage_prot_clear(). Cc: Paul Mackerras <paulus@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Joe MacDonald <joe.macdonald@windriver.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
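As a toy illustration of the hazard being fixed (not the kernel code itself): a size that is computed into a signed int and then passed as a size_t parameter turns into an enormous unsigned value if it ever goes negative.

```c
#include <stdio.h>
#include <stddef.h>

int main(void)
{
	int i = -4;             /* a size that ended up in a signed int */
	size_t n = (size_t)i;   /* what a size_t parameter (e.g. a copy length) receives */

	/* On a 64-bit system this prints 18446744073709551612. */
	printf("signed %d becomes size_t %zu\n", i, n);
	return 0;
}
```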
-
Alexey Kardashevskiy authored
The upcoming VFIO support requires a way to know which entry in the TCE map is not empty in order to do cleanup at QEMU exit/crash. This patch adds such functionality to POWERNV platform code. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 09 Sep, 2012 26 commits
-
-
Michael Neuling authored
These are no longer used, so get rid of them. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
Currently we mark the DABRX to interrupt on all matches (hypervisor/kernel/user) and then filter in software. We can be a lot smarter now that we can set the DABRX dynamically. This sets the DABRX based on the flags passed by the user. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
Rework set_dabr to take a DABRX value as well. Both the pseries and PS3 hypervisors do some checks on the DABRX values that are passed in the hcall. This patch stops bogus values from being passed to the hypervisor. Also, in the case where we are clearing the breakpoint, where DABR and DABRX are both zero, we modify the DABRX value to make it valid so that the hcall won't fail. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
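A rough, self-contained sketch of the clear-breakpoint case described above; the constants and the hv_set_dabr() stub are illustrative stand-ins, not the kernel's actual plumbing.

```c
#include <stdio.h>

/* Sketch only: illustrative constants, not the kernel's definitions. */
#define DABRX_USER   0x1UL   /* match data accesses made in user mode   */
#define DABRX_KERNEL 0x2UL   /* match data accesses made in kernel mode */

/* Stand-in for the hypervisor call that actually programs DABR/DABRX. */
static int hv_set_dabr(unsigned long dabr, unsigned long dabrx)
{
	printf("hcall: dabr=%#lx dabrx=%#lx\n", dabr, dabrx);
	return 0;
}

static int example_set_dabr(unsigned long dabr, unsigned long dabrx)
{
	/* Per the commit text, the hypervisor rejects an all-zero DABRX,
	 * so when clearing the breakpoint substitute a valid DABRX. */
	if (dabr == 0 && dabrx == 0)
		dabrx = DABRX_USER | DABRX_KERNEL;

	return hv_set_dabr(dabr, dabrx);
}

int main(void)
{
	example_set_dabr(0, 0);	/* clearing the breakpoint */
	return 0;
}
```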
-
Gavin Shan authored
The patch cleans up the EEH PCI address cache, based on the fact that the EEH core is the only user of the component.

* Function names are cleaned up so that they all have the "eeh" prefix and are shorter.
* printk() calls have been replaced with pr_debug() or pr_warning() as appropriate.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The idea comes from Benjamin Herrenschmidt. The EEH cache helps fetch the PCI device corresponding to a given I/O address. Since the EEH cache serves EEH, it's reasonable for it to track EEH devices instead of PCI devices, so the patch makes the EEH cache track EEH devices. Also, the major EEH entry function eeh_dn_check_failure has been renamed to eeh_dev_check_failure, since it now takes an EEH device as its input parameter. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
When the EEH module is installed, PCI devices are checked one by one to see whether they support EEH. On different platforms, the PCI devices are referred to in different ways when the EEH module is loaded. For example, on the pSeries platform this is done through OF nodes, while on the PowerNV platform we will do it through real PCI devices (struct pci_dev) in the future. So we need some mechanism to differentiate those cases by classifying them into probe modes: either OF nodes or real PCI devices. The patch implements support for the EEH probe mode. Also, EEH on pSeries is set to EEH_PROBE_MODE_DEVTREE, which means the probe will be done based on OF nodes on the pSeries platform. The patch also moves the probe function from the EEH core to the platform-dependent backend, with some cleanup applied. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch removes the EEH-related statistics for the EEH device, since they are now maintained by the corresponding EEH PE. Also, the flags used to trace the state of EEH devices and PEs have been reworked a little. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch reworks the current implementation so that EEH errors are handled based on the PE instead of the EEH device. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
Once an EEH error is found, an EEH event is created and put into the global linked list. Meanwhile, a kernel thread is started to process it. The handler for the kernel thread was originally EEH-device sensitive. The patch reworks the handler of the kernel thread so that it's PE sensitive. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch implements reset based on the PE instead of the EEH device. Also, the functions used to retrieve the reset type, either hot or fundamental reset, have been reworked a little. More specifically, it's implemented based on the EEH device traverse function. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch refactors the original implementation in order to enable I/O and retrieve EEH log based on PE. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch introduces a function to traverse the devices of the specified PE and its child PEs. Also, the restore of device BARs is implemented on top of the traverse function. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
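A simplified, self-contained sketch of the kind of traversal helper described; the data layout and the names (toy_pe, pe_traverse_devs) are illustrative, not the kernel's.

```c
#include <stdio.h>

/* Illustrative PE/device layout: singly linked children and devices. */
struct toy_dev { const char *name; struct toy_dev *next; };
struct toy_pe {
	const char *name;
	struct toy_dev *devs;      /* devices directly under this PE   */
	struct toy_pe *child;      /* first child PE                   */
	struct toy_pe *sibling;    /* next PE at the same level        */
};

typedef void (*dev_fn)(struct toy_dev *dev);

/* Visit every device of a PE and of all its child PEs, depth first. */
static void pe_traverse_devs(struct toy_pe *pe, dev_fn fn)
{
	struct toy_dev *dev;
	struct toy_pe *child;

	for (dev = pe->devs; dev; dev = dev->next)
		fn(dev);
	for (child = pe->child; child; child = child->sibling)
		pe_traverse_devs(child, fn);
}

static void restore_bars(struct toy_dev *dev)
{
	printf("restoring BARs of %s\n", dev->name);  /* placeholder action */
}

int main(void)
{
	struct toy_dev d1 = { "dev-a", NULL }, d2 = { "dev-b", NULL };
	struct toy_pe child = { "pe-child", &d2, NULL, NULL };
	struct toy_pe root  = { "pe-root",  &d1, &child, NULL };

	pe_traverse_devs(&root, restore_bars);
	return 0;
}
```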
-
Gavin Shan authored
Originally, all the EEH operations were implemented based on OF nodes. That explicitly breaks the rule that the operation target should be the PE rather than the device. Therefore, the patch makes all the operations based on the PE instead of the device. Unfortunately, the backend for config space has to be kept as it was, since it doesn't depend on the PE. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
There are two conditions that trigger EEH error detection: an invalid value returned from reading I/O space or from reading config space. In either case, the function eeh_dn_check_failure is called to initialize an EEH event and put it onto the event queue for further processing. The patch changes the function a little so that the EEH error is traced based on the PE instead of the EEH device. Also, the function eeh_find_device_pe() has been removed, since the EEH device now tracks its PE through struct eeh_dev::pe. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
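A toy sketch of the detection pattern the text describes: reads from a frozen PE typically come back as all ones, and that is what triggers the EEH check. The names here are made up and stand in for the real check-failure entry point.

```c
#include <stdio.h>
#include <stdint.h>

/* Stand-in for eeh_dev_check_failure(); just report here. */
static void toy_check_failure(const char *what)
{
	printf("possible EEH failure detected on %s read\n", what);
}

/* An all-ones value from I/O or config space is the cue to run the
 * (much more involved) EEH checks -- sketch only. */
static uint32_t toy_readl_checked(const volatile uint32_t *addr)
{
	uint32_t val = *addr;

	if (val == 0xffffffffu)
		toy_check_failure("MMIO");
	return val;
}

int main(void)
{
	uint32_t frozen = 0xffffffffu;

	toy_readl_checked(&frozen);
	return 0;
}
```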
-
Gavin Shan authored
Since we've introduced a dedicated struct to trace individual PEs, it's reasonable to trace their state through that dedicated struct instead of through "eeh_dev". The patch implements the state tracing based on the PE. Notably, a PE state is applied to the specified PE as well as its child PEs, which complies with the rule that a problematic parent PE prevents its child PEs from working properly. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The original implementation builds EEH events based on the EEH device. We already have a dedicated struct to depict the PE, so it's reasonable to build EEH events based on the PE. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
During PCI hotplug and EEH recovery, the PE hierarchy tree might change due to PCI topology changes. At a later point, when the PCI device is added, the PE will be created dynamically again. The patch introduces a new function to remove EEH devices from their associated PE. That can also cause the parent PE to be removed from the PE tree if it no longer contains valid EEH devices or child PEs. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch creates PEs and associates the newly created PEs with their parent/siblings as well as EEH devices. This makes it more straightforward to trace EEH errors and recover from them accordingly. Once the EEH functionality on a PCI IOA has been enabled, we try to create a PE for it. If there's an existing PE to which the current PCI IOA should be attached, the existing PE will be converted from "device" type to "bus" type accordingly. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch implements searching for a PE based on the following requirements:

* Search for a PE according to its PE address, which is either the traditional PE address composed of the PCI bus/device/function number, or the unified PE address assigned by firmware or the platform.
* Search for the parent PE of a given EEH device. This is useful when creating a new PE and putting it into the right position.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
A particular PE is only meaningful within its ancestor PHB's domain. Therefore, each PHB should have its own PE hierarchy tree to trace the PEs created under that PHB. The patch creates PEs for the PHBs and puts them into the global linked list tracked by "eeh_phb_pe". That list of PHB PEs forms the first level of the overall PE hierarchy tree across the system. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch introduces a global mutex for EEH so that the core data structures can be protected by it. Two inline functions are exported for that purpose: eeh_lock() and eeh_unlock(). Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
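A minimal sketch of the kind of wrappers this describes, assuming the standard kernel mutex API; the lock name is illustrative, not the one the patch defines.

```c
#include <linux/mutex.h>

/* One global mutex serializing access to the EEH core data structures
 * (illustrative name; see the patch for the real definition). */
static DEFINE_MUTEX(toy_eeh_mutex);

static inline void toy_eeh_lock(void)
{
	mutex_lock(&toy_eeh_mutex);
}

static inline void toy_eeh_unlock(void)
{
	mutex_unlock(&toy_eeh_mutex);
}
```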
-
Gavin Shan authored
As defined in PAPR 2.4, a Partitionable Endpoint (PE) is an I/O subtree that can be treated as a unit for the purposes of partitioning and error recovery. Therefore, the EEH core should be aware of PEs. With the eeh_pe struct, we can support PEs explicitly; furthermore, it makes everything much more data-centralized. Another important reason is to let the EEH core support multiple platforms: some of them, like pSeries, figure out PEs through OF nodes, while others, like powernv, have to do so through the PCI bus/device tree. With explicit PE support, the EEH core is implemented based on the centralized data, and the platform-dependent implementations figure out the PEs in whatever way is feasible for them. The struct is designed with the following factors taken into account:

* Reflect the relationships of PEs: a PE might have a parent as well as children.
* Reflect the association between a PE and its (EEH) devices.
* PEs have a PHB boundary.
* Each PE should have a unique address within the corresponding PHB domain.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
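A rough sketch of a structure shaped by the factors listed above; the field names and layout are illustrative, not the kernel's actual struct eeh_pe.

```c
#include <linux/list.h>

struct pci_controller;			/* the owning PHB */

/* Illustrative layout only -- not the kernel's actual struct eeh_pe. */
struct toy_eeh_pe {
	int type;			/* PHB-, bus- or device-level PE       */
	int addr;			/* PE address, unique within the PHB   */
	struct pci_controller *phb;	/* PEs never cross a PHB boundary      */
	struct toy_eeh_pe *parent;	/* parent PE, NULL for a PHB-level PE  */
	struct list_head child_list;	/* head of the list of child PEs       */
	struct list_head child;		/* linkage on the parent's child_list  */
	struct list_head edevs;		/* EEH devices attached to this PE     */
};
```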
-
Gavin Shan authored
The patch adds more logs to the EEH initialization functions for debugging purposes. Also, the machine type (pSeries) is checked in the platform initialization to ensure it's the correct platform to invoke it on. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The EEH initialization functions have been postponed until slab/slub is ready, so we use slab/slub to allocate the memory chunks for newly created EEH devices. That saves a lot of memory. The patch also does cleanup to replace "kmalloc" with "kzalloc" so that we needn't clear the allocated memory chunk explicitly. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
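For illustration, the difference the cleanup relies on, as a sketch with a stand-in structure rather than the real struct eeh_dev.

```c
#include <linux/slab.h>
#include <linux/string.h>

struct toy_edev { int mode; void *pdev; };   /* stand-in for struct eeh_dev */

/* Old pattern: kmalloc() plus an explicit memset(). */
static struct toy_edev *toy_alloc_old(void)
{
	struct toy_edev *edev = kmalloc(sizeof(*edev), GFP_KERNEL);

	if (edev)
		memset(edev, 0, sizeof(*edev));
	return edev;
}

/* New pattern: kzalloc() hands back zeroed memory directly. */
static struct toy_edev *toy_alloc_new(void)
{
	return kzalloc(sizeof(struct toy_edev), GFP_KERNEL);
}
```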
-
Gavin Shan authored
Currently, we have 3 phases for EEH initialization on the pSeries platform: platform initialization, EEH device creation, and EEH subsystem enablement. All of them are done through builtin functions, no later than ppc_md.setup_arch. That means the slab/slub allocator isn't ready yet, so we have to allocate memory chunks on a PAGE_SIZE basis for the dynamically created EEH devices, which is pretty expensive. In order to use slab/slub for memory allocation, we have to move the EEH initialization functions around so that all of them are called after slab is ready. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Ellerman authored
It's possible for the cpu_possible_mask to change between the time we initialise the pacas and the time we setup per_cpu areas. Obviously impossible cpus shouldn't ever be running, but stranger things have happened. So be paranoid and initialise data_offset with a poison value in case we don't set it up later. Based on a patch from Anton Blanchard. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
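A tiny illustration of the defensive pattern; the poison value and the names here are made up, not the ones the patch uses.

```c
#include <stdio.h>

#define DATA_OFFSET_POISON 0xdeadbeefUL   /* made-up poison value */

struct toy_paca { unsigned long data_offset; };

int main(void)
{
	struct toy_paca paca = { .data_offset = DATA_OFFSET_POISON };

	/* If a supposedly impossible CPU ever runs, the poison makes the
	 * resulting bad per-cpu accesses obvious instead of silently wrong. */
	if (paca.data_offset == DATA_OFFSET_POISON)
		printf("per-cpu area was never set up for this CPU\n");
	return 0;
}
```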
-
- 07 Sep, 2012 7 commits
-
-
Michael Neuling authored
We never use the XDABR hcall since we check for the DABR hcall first. The XDABR hcall is better since it also allows us to set the DABRX. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
Change bp_info to info to be consistent with the rest of this file. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
No functional change. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Suzuki Poulose authored
The powerpc kernel doesn't export the memory limit enforced by the 'mem=' kernel parameter. This is required for building the ELF header in kexec-tools to limit the vmcore to capture only the used memory. On powerpc, kexec-tools depends on the device tree for memory-related information, unlike /proc/iomem on x86. Without this information, kexec-tools assumes the entire System RAM, and vmcore creates an unnecessarily large dump. This patch exports the memory limit, if present, via the chosen/linux,memory-limit property, so that the vmcore can be limited to the memory limit. prom_init seems to export this value in the same node, but it doesn't actually appear there. Also, memory_limit gets adjusted during the processing of the crashkernel= parameter; this patch makes sure we get the actual limit. kexec-tools will use the value to limit the 'end' of the memory regions. Tested this patch on ppc64 and ppc32 (ppc440) with a kexec-tools patch by Mahesh. Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com> Tested-by: Mahesh J. Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
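For illustration only, a sketch of how an in-kernel consumer might read such a property back, assuming it is stored as a single 64-bit value; kexec-tools itself consumes the exported device tree rather than these in-kernel helpers.

```c
#include <linux/of.h>

/* Sketch: read back /chosen/linux,memory-limit, assuming the property
 * holds one 64-bit value (an assumption made for this example). */
static unsigned long long toy_read_memory_limit(void)
{
	struct device_node *chosen;
	u64 limit = 0;

	chosen = of_find_node_by_path("/chosen");
	if (!chosen)
		return 0;

	of_property_read_u64(chosen, "linux,memory-limit", &limit);
	of_node_put(chosen);
	return limit;
}
```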
-
Suzuki Poulose authored
There are some device-tree nodes whose values are of type phys_addr_t. phys_addr_t is variably sized based on CONFIG_PHYS_ADDR_T_64BIT. Change these to a fixed unsigned long long for consistency. This patch does the change only for memory_limit. The following is a list of other such variables that still need the change:

1) kernel_end, crashk_size - in arch/powerpc/kernel/machine_kexec.c
2) (struct resource *)crashk_res.start - We could export a local static variable from machine_kexec.c.

Changing the above values might break kexec-tools, so I will fix kexec-tools first to handle the different sized values and then change the above. Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
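A small standalone illustration of the size difference; TOY_PHYS_ADDR_T_64BIT here merely mimics the role of CONFIG_PHYS_ADDR_T_64BIT.

```c
#include <stdio.h>
#include <stdint.h>

/* Mimic the kernel's choice: the physical address type is 32 or 64
 * bits depending on configuration. */
#ifdef TOY_PHYS_ADDR_T_64BIT
typedef uint64_t toy_phys_addr_t;
#else
typedef uint32_t toy_phys_addr_t;
#endif

int main(void)
{
	/* unsigned long long, by contrast, is always at least 8 bytes. */
	printf("phys_addr_t-like: %zu bytes, unsigned long long: %zu bytes\n",
	       sizeof(toy_phys_addr_t), sizeof(unsigned long long));
	return 0;
}
```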
-
Matthew McClintock authored
Several files in obj-plat depend on the libfdt header file. Sometimes when building, one can see the following issue. This patch adds libfdt as a dependency for those object files.

| In file included from arch/powerpc/boot/treeboot-iss4xx.c:33:0:
| arch/powerpc/boot/libfdt.h:854:1: error: unterminated comment
| In file included from arch/powerpc/boot/treeboot-iss4xx.c:33:0:
| arch/powerpc/boot/libfdt.h:1:0: error: unterminated #ifndef
|   BOOTCC  arch/powerpc/boot/inffast.o
| make[1]: *** [arch/powerpc/boot/treeboot-iss4xx.o] Error 1
| make[1]: *** Waiting for unfinished jobs....
|   BOOTCC  arch/powerpc/boot/inflate.o
| make: *** [uImage] Error 2
| ERROR: oe_runmake failed
| ERROR: Function failed: do_compile (see /srv/home/pokybuild/yocto-autobuilder/yocto-slave/p1022ds/build/build/tmp/work/p1022ds-poky-linux-gnuspe/linux-qoriq-sdk-3.0.34-r5/temp/log.do_compile.2167 for further information)
NOTE: recipe linux-qoriq-sdk-3.0.34-r5: task do_compile: Failed

Signed-off-by: Matthew McClintock <msm@freescale.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Carl E. Love authored
Starting with Power 7+, for marked events we need to check whether the SIAR register is valid, i.e. whether it contains the correct address of the instruction at the time the performance counter overflowed. The MMCRA register on Power 7+ contains a new bit to indicate that the contents of the SIAR are valid. If the event is not marked, then the sample is recorded regardless of the SIAR valid bit setting. For older processors, there is no SIAR valid bit to check, so the samples are always recorded. This is done by forcing the cntr_marked_events bit mask to zero; the code then always records the sample, since the bit mask says the event is not a marked event even if it really is. Signed-off-by: Carl Love <cel@us.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
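A sketch of the decision the text describes; the bit position and names are illustrative, not the real MMCRA layout or the actual perf code.

```c
#include <stdio.h>
#include <stdbool.h>

#define TOY_MMCRA_SIAR_VALID  0x1u   /* made-up bit position */

/* Record the sample unless this is a marked event whose SIAR contents
 * are flagged as invalid.  For older chips the caller passes a zero
 * marked-event mask, so every sample is recorded. */
static bool toy_record_sample(unsigned int cntr_bit,
			      unsigned int cntr_marked_events,
			      unsigned int mmcra)
{
	bool marked = cntr_marked_events & cntr_bit;

	if (!marked)
		return true;
	return mmcra & TOY_MMCRA_SIAR_VALID;
}

int main(void)
{
	/* Marked event, SIAR not valid: the sample is dropped. */
	printf("record? %d\n", toy_record_sample(0x4, 0x4, 0));
	return 0;
}
```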
-