- 10 Sep, 2012 1 commit
-
-
Alexey Kardashevskiy authored
The upcoming VFIO support requires a way to know which entries in the TCE map are not empty in order to do cleanup at QEMU exit/crash. This patch adds such functionality to the POWERNV platform code. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
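A minimal standalone sketch of the idea, not the actual powernv code: a "get" style helper reads a TCE entry back so a caller can tell which entries are populated and need to be cleared at exit. The table size and the TCE_VALID bit are assumptions made purely for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define TCE_ENTRIES 8
#define TCE_VALID   0x1ULL               /* assumed read/write permission bit */

static uint64_t tce_table[TCE_ENTRIES];  /* stand-in for a real TCE table */

/* Read back one TCE entry; a non-zero value means it is in use. */
static uint64_t tce_get(unsigned long index)
{
    return tce_table[index];
}

int main(void)
{
    tce_table[3] = 0x10000000ULL | TCE_VALID;  /* pretend entry 3 is mapped */

    for (unsigned long i = 0; i < TCE_ENTRIES; i++)
        if (tce_get(i))
            printf("entry %lu is populated and would be cleaned up\n", i);
    return 0;
}
```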
-
- 09 Sep, 2012 26 commits
-
-
Michael Neuling authored
These are no longer used, so get rid of them. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
Currently we mark the DABRX to interrupt on all matches (hypervisor/kernel/user) and then filter in software. We can be a lot smarter now that we can set the DABRX dynamically. This sets the DABRX based on the flags passed by the user. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
Rework set_dabr to take a DABRX value as well. Both the pseries and PS3 hypervisors do some checks on the DABRX values that are passed in the hcall. This patch stops bogus values from being passed to the hypervisor. Also, in the case where we are clearing the breakpoint, where DABR and DABRX are zero, we modify the DABRX value to make it valid so that the hcall won't fail. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
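A hedged sketch of the behaviour described above, not the real arch/powerpc code: the DABRX bit values and the fallback chosen when clearing the breakpoint are assumptions, and the hypervisor call is stubbed out.

```c
#include <stdio.h>

#define DABRX_USER   0x1UL   /* assumed: match on user accesses */
#define DABRX_KERNEL 0x2UL   /* assumed: match on kernel accesses */

/* Stand-in for the hypervisor call; it rejects dabrx == 0. */
static int hcall_set_xdabr(unsigned long dabr, unsigned long dabrx)
{
    if (dabrx == 0)
        return -1;
    printf("H_SET_XDABR dabr=%#lx dabrx=%#lx\n", dabr, dabrx);
    return 0;
}

static int set_dabr(unsigned long dabr, unsigned long dabrx)
{
    /*
     * When clearing the breakpoint (dabr == 0, dabrx == 0), pass a
     * valid DABRX anyway so the hypervisor does not fail the call.
     */
    if (dabr == 0 && dabrx == 0)
        dabrx = DABRX_USER | DABRX_KERNEL;

    return hcall_set_xdabr(dabr, dabrx);
}

int main(void)
{
    set_dabr(0x1000, DABRX_USER);  /* set a user-space data breakpoint */
    set_dabr(0, 0);                /* clear it; dabrx is fixed up first */
    return 0;
}
```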
-
Gavin Shan authored
The patch cleans up the EEH PCI address cache based on the fact that the EEH core is the only user of the component. * Function names are cleaned up so that they all have the prefix "eeh" and are shorter. * Calls to printk() have been replaced with pr_debug() or pr_warning() as appropriate. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The idea comes from Benjamin Herrenschmidt. The EEH cache helps fetch the PCI device corresponding to a given I/O address. Since the EEH cache serves EEH, it's reasonable for it to track EEH devices instead of PCI devices. The patch makes the EEH cache track EEH devices. Also, the major EEH entry function eeh_dn_check_failure has been renamed to eeh_dev_check_failure since it now takes an EEH device as its input parameter. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
When the EEH module is installed, PCI devices are checked one by one to see whether they support EEH. On different platforms, the PCI devices are referred to in different ways when the EEH module is loaded. For example, on the pSeries platform that is done through OF nodes, while on the PowerNV platform we will in future do it through real PCI devices (struct pci_dev). So we need some mechanism to differentiate those cases by classifying them into probe modes: either from OF nodes or from real PCI devices. The patch implements support for EEH probe modes and sets the mode on pSeries to EEH_PROBE_MODE_DEVTREE, meaning the probe is done based on OF nodes on that platform. The patch also moves the probe function from the EEH core to the platform-dependent backend, with some cleanup applied. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
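A small sketch of what such a probe-mode switch could look like. EEH_PROBE_MODE_DEVTREE is named in the log above; the second mode's name and the helper functions are assumptions for illustration only.

```c
/* Probe EEH capability either from OF nodes or from real PCI devices. */
enum eeh_probe_mode {
    EEH_PROBE_MODE_DEVTREE,  /* from OF nodes (pSeries) */
    EEH_PROBE_MODE_DEV,      /* from struct pci_dev (e.g. PowerNV, assumed name) */
};

static enum eeh_probe_mode eeh_probe_mode;

/* Platform backends pick the mode during their EEH initialization. */
static inline void eeh_probe_mode_set(enum eeh_probe_mode mode)
{
    eeh_probe_mode = mode;
}

/* The core asks which mode is active before walking OF nodes or PCI devices. */
static inline int eeh_probe_mode_devtree(void)
{
    return eeh_probe_mode == EEH_PROBE_MODE_DEVTREE;
}
```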
-
Gavin Shan authored
The patch removes the EEH-related statistics from the EEH device since they are now maintained by the corresponding EEH PE. Also, the flags used to trace the state of the EEH device and PE have been reworked a little bit. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch reworks the current implementation so that EEH errors are handled based on the PE instead of the EEH device. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
Once an EEH error is found, an EEH event is created and put into the global linked list. Meanwhile, a kernel thread is started to process it. The handler for the kernel thread was originally EEH-device sensitive. The patch reworks the handler of the kernel thread so that it's PE sensitive. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch implements reset based on the PE instead of the EEH device. Also, the functions used to retrieve the reset type, either hot or fundamental reset, have been reworked a little bit. More specifically, they are implemented based on the EEH device traverse function. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch refactors the original implementation in order to enable I/O and retrieve the EEH log based on the PE. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch introduces a function to traverse the devices of the specified PE and its child PEs. Also, restoring device BARs is implemented based on the traverse function. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
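A minimal sketch of such a traverse helper: apply a callback to every EEH device on the given PE and recurse into its child PEs. The structure layout and names are assumptions, not the kernel's exact definitions.

```c
struct eeh_dev {
    struct eeh_dev *next;       /* next device on the same PE */
};

struct eeh_pe {
    struct eeh_pe  *child;      /* first child PE */
    struct eeh_pe  *sibling;    /* next PE on the same level */
    struct eeh_dev *edevs;      /* devices attached to this PE */
};

typedef void (*eeh_traverse_fn)(struct eeh_dev *edev, void *flag);

/* Walk the PE and all of its children, applying fn to every EEH device. */
static void eeh_pe_dev_traverse(struct eeh_pe *pe, eeh_traverse_fn fn, void *flag)
{
    struct eeh_dev *edev;
    struct eeh_pe *child;

    for (edev = pe->edevs; edev; edev = edev->next)
        fn(edev, flag);                       /* e.g. restore this device's BARs */

    for (child = pe->child; child; child = child->sibling)
        eeh_pe_dev_traverse(child, fn, flag); /* recurse into child PEs */
}
```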
-
Gavin Shan authored
Originally, all the EEH operations were implemented based on OF nodes. That explicitly breaks the rule that the operation target should be the PE instead of the device. Therefore, the patch makes all the operations PE-based instead of device-based. Unfortunately, the backend for config space has to be kept as it was because it doesn't depend on the PE. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
There are two conditions that trigger EEH error detection: an invalid value returned from reading I/O space or config space. In either case, the function eeh_dn_check_failure is called to initialize an EEH event and queue it for further processing. The patch changes the function a little bit so that the EEH error is traced based on the PE instead of the EEH device. Also, the function eeh_find_device_pe() has been removed since the EEH device now tracks its PE through struct eeh_dev::pe. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
Since we've introduced a dedicated struct to trace individual PEs, it's reasonable to trace their state through that dedicated struct instead of through "eeh_dev". The patch implements state tracing based on the PE. Note that a PE's state is applied to the specified PE as well as its child PEs, which complies with the rule that a problematic parent PE prevents its child PEs from working properly. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The original implementation builds the EEH event based on the EEH device. We already have a dedicated struct to describe the PE, so it's reasonable to build the EEH event based on the PE. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
During PCI hotplug and EEH recovery, the PE hierarchy tree might change due to PCI topology changes. At a later point, when the PCI device is added again, the PE will be created dynamically again. The patch introduces a new function to remove EEH devices from the associated PE. That can also cause the parent PE to be removed from the PE tree if it no longer contains valid EEH devices or child PEs. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch creates PEs and associates the newly created PEs with their parent and sibling PEs as well as with EEH devices, which makes it more straightforward to trace EEH errors and recover from them accordingly. Once the EEH functionality on a PCI IOA has been enabled, we try to create a PE for it. If there's an existing PE to which the current PCI IOA should be attached, the existing PE is converted from "device" type to "bus" type accordingly. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch implements searching for a PE based on the following requirements: * Search for a PE by PE address, which is either the traditional PE address composed of the PCI bus/device/function number or the unified PE address assigned by firmware or the platform. * Search for the parent PE of a given EEH device, which is useful when creating a new PE and putting it into the right position. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
A particular PE is only meaningful within its ancestor PHB domain. Therefore, each PHB should have its own PE hierarchy tree tracking the PEs created under that PHB. The patch creates PEs for the PHBs and puts them into the global linked list tracked by "eeh_phb_pe". That list of PEs forms the first level of the overall PE hierarchy tree across the system. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The patch introduces a global mutex for EEH so that the core data structures can be protected by it. Also, two inline functions are exported for that: eeh_lock() and eeh_unlock(). Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
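A minimal sketch of the locking helpers, using a POSIX mutex as a stand-in for a kernel mutex; only the function names eeh_lock()/eeh_unlock() are taken from the log above, everything else is an assumption.

```c
#include <pthread.h>

/* Stand-in for a kernel mutex protecting the EEH core data structures. */
static pthread_mutex_t eeh_mutex = PTHREAD_MUTEX_INITIALIZER;

static inline void eeh_lock(void)
{
    pthread_mutex_lock(&eeh_mutex);
}

static inline void eeh_unlock(void)
{
    pthread_mutex_unlock(&eeh_mutex);
}
```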
-
Gavin Shan authored
As defined in PAPR 2.4, a Partitionable Endpoint (PE) is an I/O subtree that can be treated as a unit for the purposes of partitioning and error recovery. Therefore, the EEH core should be aware of PEs. With the eeh_pe struct, we can support PEs explicitly. Furthermore, it makes everything much more data-centralized. Another important reason is to let the EEH core support multiple platforms: some of them, like pSeries, figure out PEs through OF nodes, while others, like powernv, have to do that through the PCI bus/device tree. With explicit PE support, the EEH core is implemented based on the centralized data, and platform-dependent implementations fill it in in whatever way is feasible for them. When the struct was designed, the following factors were taken into account: * Reflecting the relationships of PEs: a PE might have a parent as well as children. * Reflecting the association of PEs and (EEH) devices. * PEs have a PHB boundary. * A PE should have a unique address assigned within the corresponding PHB domain. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
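A sketch of a PE descriptor shaped by the four factors listed above. Field names and types are illustrative assumptions, not the kernel's actual struct eeh_pe.

```c
struct list_head { struct list_head *next, *prev; };  /* as in the kernel */
struct pci_controller;                                 /* the owning PHB */

struct eeh_pe {
    int type;                     /* PHB / bus / device level PE */
    int addr;                     /* unique PE address within the owning PHB domain */
    struct pci_controller *phb;   /* owning PHB: a PE never crosses a PHB boundary */

    struct eeh_pe *parent;        /* parent PE, NULL for a PHB-level PE */
    struct list_head child_list;  /* child PEs, reflecting the PE hierarchy */
    struct list_head child;       /* linkage into the parent's child_list */
    struct list_head edevs;       /* EEH devices associated with this PE */

    int state;                    /* frozen/recovering state, shared with children */
};
```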
-
Gavin Shan authored
The patch adds more logs to the EEH initialization functions for debugging purposes. Also, the machine type (pSeries) is checked in the platform initialization to ensure it's the correct platform to invoke it on. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
The EEH initialization functions have been postponed until slab/slub is ready, so we use slab/slub to allocate the memory chunks for newly created EEH devices. That saves a lot of memory. The patch also does cleanup, replacing "kmalloc" with "kzalloc" so that we needn't clear the allocated memory chunk explicitly. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Gavin Shan authored
Currently, we have three phases of EEH initialization on the pSeries platform, all done through builtin functions: platform initialization, EEH device creation, and EEH subsystem enablement. All of them are done no later than ppc_md.setup_arch. That means the slab/slub allocator isn't ready yet, so we have to allocate memory chunks on a PAGE_SIZE basis for those dynamically created EEH devices, which is pretty expensive. In order to use slab/slub for memory allocation, we have to move the EEH initialization functions around so that all of them are called after the slab allocator is ready. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Ellerman authored
It's possible for the cpu_possible_mask to change between the time we initialise the pacas and the time we set up the per_cpu areas. Obviously impossible cpus shouldn't ever be running, but stranger things have happened. So be paranoid and initialise data_offset with a poison value in case we don't set it up later. Based on a patch from Anton Blanchard. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
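A standalone illustration of the "poison first, fill in later" idea; the poison pattern, field name, and CPU counts are assumptions chosen only to show the control flow.

```c
#include <stdio.h>

#define DATA_OFFSET_POISON 0xeeeeeeeeeeeeeeeeULL  /* assumed poison pattern */
#define NR_CPUS 4

struct paca {
    unsigned long long data_offset;   /* per-cpu area offset */
};

static struct paca pacas[NR_CPUS];

int main(void)
{
    /* Phase 1: every possible CPU's paca gets a poison value. */
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        pacas[cpu].data_offset = DATA_OFFSET_POISON;

    /* Phase 2: only the CPUs still possible later get a real per-cpu offset. */
    for (int cpu = 0; cpu < 2; cpu++)
        pacas[cpu].data_offset = 0x1000ULL * cpu;

    /*
     * Any CPU left with the poison value was never set up and would fault
     * loudly if it ever ran, instead of silently using offset zero.
     */
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        printf("cpu %d data_offset=%#llx\n", cpu, pacas[cpu].data_offset);
    return 0;
}
```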
-
- 07 Sep, 2012 10 commits
-
-
Michael Neuling authored
We never use the XDABR hcall since we check for the DABR hcall first. The XDABR hcall is better since it also allows us to set the DABRX. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
Change bp_info to info to be consistent with the rest of this file. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Michael Neuling authored
No functional change Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Suzuki Poulose authored
The powerpc kernel doesn't export the memory limit enforced by the 'mem=' kernel parameter. This is required for building the ELF header in kexec-tools to limit the vmcore to capture only the used memory. On powerpc, kexec-tools depends on the device tree for memory-related information, unlike /proc/iomem on x86. Without this information, kexec-tools assumes the entire System RAM and vmcore creates an unnecessarily large dump. This patch exports the memory limit, if present, via the chosen/linux,memory-limit property, so that the vmcore can be limited accordingly. prom_init seems to export this value in the same node, but it doesn't actually appear there. Also, memory_limit gets adjusted during processing of the crashkernel= parameter; this patch makes sure we get the actual limit. kexec-tools will use the value to limit the 'end' of the memory regions. Tested this patch on ppc64 and ppc32 (ppc440) with a kexec-tools patch by Mahesh. Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com> Tested-by: Mahesh J. Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
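A hedged userspace sketch of how a consumer such as kexec-tools could read the exported limit. The property path follows the log above; everything else (error handling, output) is illustrative.

```c
#include <stdio.h>
#include <stdint.h>
#include <endian.h>

int main(void)
{
    const char *path = "/proc/device-tree/chosen/linux,memory-limit";
    FILE *f = fopen(path, "rb");
    uint64_t limit_be;

    if (!f) {
        printf("no memory limit exported\n");
        return 0;
    }
    /* Device-tree cells are big-endian, so convert before printing. */
    if (fread(&limit_be, sizeof(limit_be), 1, f) == 1)
        printf("memory limit: %#llx\n", (unsigned long long)be64toh(limit_be));
    fclose(f);
    return 0;
}
```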
-
Suzuki Poulose authored
There are some device-tree nodes whose values are of type phys_addr_t. phys_addr_t is variably sized based on CONFIG_PHYS_ADDR_T_64BIT. Change these to a fixed unsigned long long for consistency. This patch makes the change only for memory_limit. The following is a list of other such variables that need the change: 1) kernel_end, crashk_size - in arch/powerpc/kernel/machine_kexec.c 2) (struct resource *)crashk_res.start - we could export a local static variable from machine_kexec.c. Changing the above values might break kexec-tools, so I will fix kexec-tools first to handle the differently sized values and then change the above. Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Matthew McClintock authored
Several files in obj-plat depend on the libfdt header file, and sometimes during a build the following failure can be seen. This patch adds libfdt as a dependency of those object files. | In file included from arch/powerpc/boot/treeboot-iss4xx.c:33:0: | arch/powerpc/boot/libfdt.h:854:1: error: unterminated comment | In file included from arch/powerpc/boot/treeboot-iss4xx.c:33:0: | arch/powerpc/boot/libfdt.h:1:0: error: unterminated #ifndef | BOOTCC arch/powerpc/boot/inffast.o | make[1]: *** [arch/powerpc/boot/treeboot-iss4xx.o] Error 1 | make[1]: *** Waiting for unfinished jobs.... | BOOTCC arch/powerpc/boot/inflate.o | make: *** [uImage] Error 2 | ERROR: oe_runmake failed | ERROR: Function failed: do_compile (see /srv/home/pokybuild/yocto-autobuilder/yocto-slave/p1022ds/build/build/tmp/work/p1022ds-poky-linux-gnuspe/linux-qoriq-sdk-3.0.34-r5/temp/log.do_compile.2167 for further information) NOTE: recipe linux-qoriq-sdk-3.0.34-r5: task do_compile: Failed Signed-off-by: Matthew McClintock <msm@freescale.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Carl E. Love authored
Starting with POWER7+, for marked events we need to check whether the SIAR register is valid, i.e. whether it contains the correct address of the instruction at the time the performance counter overflowed. The MMCRA register on POWER7+ contains a new bit to indicate that the contents of the SIAR are valid. If the event is not marked, then the sample is recorded regardless of the SIAR valid bit setting. For older processors there is no SIAR valid bit to check, so the samples are always recorded. This is done by forcing the cntr_marked_events bit mask to zero; the code then always records the sample, since the bit mask says the event is not a marked event even if it really is. Signed-off-by: Carl Love <cel@us.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
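A sketch of the decision described above. The MMCRA_SIAR_VALID bit position and the shape of the cntr_marked_events mask are assumptions used only to show the control flow.

```c
#include <stdbool.h>

#define MMCRA_SIAR_VALID (1UL << 22)   /* assumed bit position */

static bool record_sample(unsigned long mmcra,
                          unsigned long cntr_marked_events,
                          int counter)
{
    bool marked = cntr_marked_events & (1UL << counter);

    /*
     * Unmarked events are always recorded.  On older processors the mask
     * is forced to zero, so every event looks unmarked and this check
     * never rejects a sample.
     */
    if (!marked)
        return true;

    /* Marked events are only recorded when the SIAR contents are valid. */
    return (mmcra & MMCRA_SIAR_VALID) != 0;
}
```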
-
sukadev@linux.vnet.ibm.com authored
This definition will be used by subsequent perf and oprofile patches Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Anton Blanchard authored
The pseries firmware currently refuses any non-power-of-two MSI-X request. Unfortunately most network drivers end up making such requests because they want a power of two for RX queues plus one or two extra vectors for everything else. This patch rounds the firmware request up to the next power of two if the quota allows it. If that fails we fall back to using the original request size. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
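A standalone sketch of the rounding strategy, with the firmware quota modelled as a plain integer; treat it purely as an illustration of the fallback logic, not the pseries MSI code.

```c
#include <stdio.h>

static unsigned int roundup_pow_of_two(unsigned int n)
{
    unsigned int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

static unsigned int request_msix(unsigned int nvec, unsigned int quota)
{
    unsigned int rounded = roundup_pow_of_two(nvec);

    /* Try the power-of-two size first if the quota allows it... */
    if (rounded <= quota)
        return rounded;

    /* ...otherwise fall back to the original request. */
    return nvec;
}

int main(void)
{
    printf("%u\n", request_msix(9, 32));  /* 16: rounded up and within quota */
    printf("%u\n", request_msix(9, 12));  /*  9: rounding would exceed the quota */
    return 0;
}
```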
-
Gavin Shan authored
When the PCI probe flag PCI_REASSIGN_ALL_RSRC is passed to the PCI core, the expectation is that all resources will be reassigned by the PCI core. For a particular P2P (PCI-to-PCI) bridge, the size of the corresponding BAR (I/O, MMIO, prefetchable MMIO) is calculated from the resources required by the PCI devices behind the bridge. That means information like the start/end addresses retrieved from the bridge's hardware registers is meaningless in this case. However, we still count it in, and the BARs might have been configured by firmware with a non-zero size, which wastes space. The patch explicitly sets the size of P2P bridge BARs to zero when resource reassignment is expected with the PCI probe flag PCI_REASSIGN_ALL_RSRC. As a result, it saves overall resources required by the system and avoids waste. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
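A sketch of clearing the bridge windows when full reassignment is requested, so their sizes get recomputed from the devices behind the bridge. The resource layout, the number of windows, and the flag handling are assumptions for illustration.

```c
#include <string.h>

#define PCI_REASSIGN_ALL_RSRC 0x1
#define PCI_BRIDGE_WINDOWS    3   /* assumed: I/O, MMIO, prefetchable MMIO */

struct resource {
    unsigned long start, end, flags;
};

struct pci_bridge {
    struct resource window[PCI_BRIDGE_WINDOWS];
};

static void fixup_bridge_resources(struct pci_bridge *bridge, unsigned int probe_flags)
{
    if (!(probe_flags & PCI_REASSIGN_ALL_RSRC))
        return;

    /* Forget whatever firmware programmed; sizes will be recalculated later. */
    memset(bridge->window, 0, sizeof(bridge->window));
}
```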
-
- 06 Sep, 2012 3 commits
-
-
Benjamin Herrenschmidt authored
Brings in various bug fixes from 3.6-rcX
-
Ananth N Mavinakayanahalli authored
Commit 8b7b80b9 [24/29] ("powerpc: Uprobes port to powerpc") caused a clash with the fore200e driver: In file included from drivers/atm/fore200e.c:70:0: drivers/atm/fore200e.h:263:3: error: redefinition of typedef 'opcode_t' with different type arch/powerpc/include/asm/probes.h:25:13: note: previous declaration of 'opcode_t' was here Fix the namespace clash by renaming opcode_t in probes.h to ppc_opcode_t. Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Mihai Caraman authored
A critical exception on 64-bit BookE uses the user-visible SPRG3 as scratch. Restore the VDSO information in SPRG3 in the exception prologue. Use a common sprg3 field in the PACA for all powerpc64 architectures. Signed-off-by: Mihai Caraman <mihai.caraman@freescale.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-