- 15 Feb, 2017 6 commits
-
-
Aneesh Kumar K.V authored
We already do a ptesync at the start of a TLB flush, and we are sure a pte update will be followed by a TLB flush. Hence we can skip the ptesync in the pte update helpers. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Tested-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Aneesh Kumar K.V authored
This helps us optimize the application exit case, where we can skip the DD1-style pte update sequence. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Tested-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Aneesh Kumar K.V authored
In the kernel we follow the below sequence in various code paths:

  pte = ptep_get_and_clear(ptep)
  ....
  set_pte_at(ptep, pte)

We do that for mremap, autonuma protection updates and softdirty clearing. This means our optimization of skipping a TLB flush when clearing a pte is not valid, because on a DD1 system that follow-up set_pte_at() would be done without the required TLB flush. Fix that by always doing the DD1-style pte update irrespective of the new_pte value. In a later patch we will optimize the application exit case. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Tested-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
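For reference, a minimal sketch of that clear-then-reinstall pattern in kernel API terms (an illustrative fragment, not the actual mremap/autonuma/softdirty code):

  /*
   * The old pte is cleared, modified and written back later; on DD1 the
   * re-install must not skip the intervening TLB flush.
   */
  pte_t old_pte, new_pte;

  old_pte = ptep_get_and_clear(mm, addr, ptep);   /* clear the pte */
  new_pte = pte_modify(old_pte, newprot);         /* e.g. a protection update */
  set_pte_at(mm, addr, ptep, new_pte);            /* re-install the pte */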
-
Aneesh Kumar K.V authored
With radix, we can get a page fault with the DSISR_PROTFAULT value set in the case of a PROT_NONE or autonuma mapping. The PROT_NONE case is handled by the vma check, where we consider the access bad. For autonuma we should fall through and fix up the access mask correctly. Without this patch we trigger the WARN_ON() on radix. This patch moves that WARN_ON() within a radix_enabled() check. I also moved the WARN_ON() outside the if condition, making it apply to all types of faults (exec/write/read). It is also made conditional on book3s, because BOOK3E can also get a PROTFAULT to handle the D/I cache sync. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Ravi Bangoria authored
Currently the xmon data-breakpoint feature is broken. Whenever a watchpoint match occurs, hw_breakpoint_handler() is called by do_break() via the notifier chain mechanism. If the watchpoint was registered by xmon, hw_breakpoint_handler() won't find any associated perf_event and returns immediately with NOTIFY_STOP; consequently do_break() also returns without notifying xmon. Solve this by having hw_breakpoint_handler() return NOTIFY_DONE, rather than NOTIFY_STOP, when it does not find any perf_event associated with the matched watchpoint. This tells the core code to continue calling the other breakpoint handlers, including the xmon one. Cc: stable@vger.kernel.org Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
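A hedged sketch of the shape of that fix (the per-CPU variable name is assumed from the powerpc hw_breakpoint code, not verified):

  struct perf_event *bp = __this_cpu_read(bp_per_reg);

  if (!bp) {
          /*
           * Watchpoint not owned by perf (e.g. set by xmon): return
           * NOTIFY_DONE instead of NOTIFY_STOP so the remaining handlers
           * on the notifier chain, including xmon's, still see the hit.
           */
          return NOTIFY_DONE;
  }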
-
Michael Ellerman authored
The recently merged HPT (Hash Page Table) resize support broke the build when BOOK3S_64=n (ie. 32-bit or 64-bit Book3E) and MEMORY_HOTPLUG=y:

  arch/powerpc/mm/mem.o: In function `.arch_add_memory':
  (.text+0x4e4): undefined reference to `.resize_hpt_for_hotplug'

Fix it by adding a dummy version. Fixes: 438cc81a ("powerpc/pseries: Automatically resize HPT for memory hot add/remove") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
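A minimal sketch of the kind of dummy that fixes such a link error, assuming the declaration lives in a header only implemented for Book3S 64:

  #ifdef CONFIG_PPC_BOOK3S_64
  extern int resize_hpt_for_hotplug(unsigned long new_mem_size);
  #else
  /* No hash page table on these configs, so there is nothing to resize. */
  static inline int resize_hpt_for_hotplug(unsigned long new_mem_size)
  {
          return 0;
  }
  #endif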
-
- 14 Feb, 2017 3 commits
-
-
Michael Ellerman authored
Merge the topic branch we're sharing with the kvm-ppc tree.
-
Michael Ellerman authored
Currently the build breaks if CMA=n and SPAPR_TCE_IOMMU=y:

  arch/powerpc/mm/mmu_context_iommu.c: In function ‘mm_iommu_get’:
  arch/powerpc/mm/mmu_context_iommu.c:193:42: error: ‘MIGRATE_CMA’ undeclared (first use in this function)
     if (get_pageblock_migratetype(page) == MIGRATE_CMA) {
                                            ^~~~~~~~~~~

Fix it by using the existing is_migrate_cma_page(), which evaluates to false when CMA=n. Fixes: 2e5bbb54 ("KVM: PPC: Book3S HV: Migrate pinned pages out of CMA") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
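The fix amounts to swapping the open-coded comparison for the existing helper (sketch; surrounding code elided):

  /*
   * is_migrate_cma_page() is defined to false when CONFIG_CMA=n, so the
   * MIGRATE_CMA enumerator is never referenced in that configuration.
   */
  if (is_migrate_cma_page(page)) {
          /* ... migrate the pinned page out of the CMA region ... */
  }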
-
Michael Ellerman authored
If we enable RADIX but disable HUGETLBFS, the build breaks with:

  arch/powerpc/mm/pgtable-radix.c:557:7: error: implicit declaration of function 'pmd_huge'
  arch/powerpc/mm/pgtable-radix.c:588:7: error: implicit declaration of function 'pud_huge'

Fix it by stubbing those functions when HUGETLBFS=n. Fixes: 4b5d62ca ("powerpc/mm: add radix__remove_section_mapping()") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
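A sketch of the kind of stubs described, assuming they are only needed when hugetlbfs support is compiled out:

  #ifndef CONFIG_HUGETLB_PAGE
  /* Without hugetlbfs there are no huge page table entries to find. */
  static inline int pmd_huge(pmd_t pmd) { return 0; }
  static inline int pud_huge(pud_t pud) { return 0; }
  #endif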
-
- 10 Feb, 2017 15 commits
-
-
Wei Yongjun authored
Fix typo in "hotplug_delay" parameter description. This allows modinfo to match the help text to the parameter. Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Naveen N. Rao authored
... as the generic weak variant will do. Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Naveen N. Rao authored
kprobe_exceptions_notify() is no longer used on some architectures, such as arm[64] and powerpc. Introduce a weak variant for such architectures. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
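A sketch of what such a weak default looks like (the signature is the standard notifier one; the no-op body is an assumption):

  /*
   * Architectures that still need it override this; everyone else falls
   * back to a no-op and lets the notifier chain continue.
   */
  int __weak kprobe_exceptions_notify(struct notifier_block *self,
                                      unsigned long val, void *data)
  {
          return NOTIFY_DONE;
  }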
-
Anton Blanchard authored
The final paragraph of the help text is reversed. We want to enable this option by default, and disable it if the toolchain has a working -mprofile-kernel. Fixes: 8c50b72a ("powerpc/ftrace: Add Kconfig & Make glue for mprofile-kernel") Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
Currently the opal_exit tracepoint usually shows the opcode as 0:

  <idle>-0     [047] d.h.   635.654292: opal_entry: opcode=63
  <idle>-0     [047] d.h.   635.654296: opal_exit: opcode=0 retval=0
  kopald-1209  [019] d...   636.420943: opal_entry: opcode=10
  kopald-1209  [019] d...   636.420959: opal_exit: opcode=0 retval=0

This is because we incorrectly load the opcode into r0 before calling __trace_opal_exit(), whereas it expects the opcode in r3 (first function parameter). In fact we are leaving the retval in r3, so opcode and retval will always show the same value. Instead load the opcode into r3, resulting in:

  <idle>-0     [040] d.h.   636.618625: opal_entry: opcode=63
  <idle>-0     [040] d.h.   636.618627: opal_exit: opcode=63 retval=0

Fixes: c49f6353 ("powernv: Add OPAL tracepoints") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
Currently we get a warning that _mcount() can't be versioned:

  WARNING: EXPORT symbol "_mcount" [vmlinux] version generation failed, symbol will not be versioned.

Add a prototype to asm-prototypes.h to fix it. The prototype is not really correct: _mcount() is not a normal function and has a special ABI. But for the purpose of versioning it doesn't matter. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
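The kind of prototype described is a one-liner; a sketch:

  /*
   * Not callable as a normal C function (special ABI), but a plain
   * declaration is all the symbol-versioning machinery needs.
   */
  extern void _mcount(void);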
-
Shailendra Singh authored
The generic implementation of of_node_to_nid() is EXPORT_SYMBOL, added in commit 298535c0 ("of, numa: Add NUMA of binding implementation."). The powerpc implementation added in commit 953039c8 ("[PATCH] powerpc: Allow devices to register with numa topology") is EXPORT_SYMBOL_GPL. This creates an inconsistency for of_node_to_nid() callers across architectures. Update the powerpc implementation to be exported consistently with the generic implementation. Signed-off-by: Shailendra Singh <shailendras@nvidia.com> Reviewed-by: Andy Ritger <aritger@nvidia.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Anju T authored
A kprobe placed on kretprobe_trampoline() during boot can be optimized, since the instruction at the probe point is a 'nop'. Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Anju T authored
The current kprobe infrastructure uses an unconditional trap instruction to probe a running kernel. Optprobes allow a kprobe to replace the trap with a branch instruction to a detour buffer. The detour buffer contains instructions to create an in-memory pt_regs, and a call to optimized_callback(), which in turn calls the pre_handler(). After the pre-handler has run, a call is made to emulate the probed instruction. The NIP is determined in advance through dummy instruction emulation, and a branch to that NIP is placed at the end of the trampoline. To address the range limitation of branch instructions on the POWER architecture, the detour buffer slot is allocated from a reserved area; for the time being, 64KB of memory is reserved for this purpose. Instructions which can be emulated using analyse_instr() are the candidates for optimization. Before optimizing, we ensure that the address range between the allocated detour buffer and the instruction being probed is within +/- 32MB. Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
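A rough conceptual layout of that detour buffer, pieced together from the description above (the real template is hand-written assembly; this is comments only):

  /*
   * Probed instruction is replaced with:  b  <detour buffer>
   *
   * Detour buffer (allocated from the 64KB reserved area, within
   * +/- 32MB of the probe point):
   *   1. save GPRs and assemble an in-memory pt_regs on the stack
   *   2. load the address of the optimized kprobe into the argument register
   *   3. call optimized_callback(), which invokes the pre_handler()
   *   4. restore register state
   *   5. emulate the original (probed) instruction
   *   6. branch to the next instruction (the NIP determined in advance
   *      via dummy emulation with analyse_instr())
   */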
-
Naveen N. Rao authored
Fix two issues with kprobes.h on BE which were exposed by the optprobes work:

- a missing include of linux/module.h for MODULE_NAME_LEN -- this didn't show up previously since the only users of kprobe_lookup_name were in kprobes.c, which includes linux/module.h through other headers, and
- a missing const qualifier for a local variable which ends up referring to a string literal. Again, this is unique to how kprobe_lookup_name is being invoked in optprobes.c.

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Anju T authored
To permit the use of a relative branch instruction on powerpc, the target address has to be relatively nearby, since the address is specified in an immediate field (a 24-bit field) in the instruction opcode itself. Here nearby refers to 32MB on either side of the current instruction. This patch adds a check that the target address is within the +/- 32MB range. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
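A hedged sketch of such a range check (the helper name is assumed; the constants follow from a signed 26-bit byte offset, i.e. the 24-bit LI field plus two implied zero bits):

  static bool is_offset_in_branch_range(long offset)
  {
          /* +/- 32MB, and the target must be word aligned */
          return (offset >= -0x2000000 && offset <= 0x1fffffc &&
                  !(offset & 0x3));
  }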
-
Naveen N. Rao authored
Introduce __PPC_SH64() as a 64-bit variant to encode the shift field in some of the shift and rotate instructions operating on doublewords. Convert some of the BPF instruction macros to use it. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
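A sketch of what such a macro pair looks like (definitions assumed for illustration): for the doubleword forms the low five bits of the shift go in the usual SH field, while the sixth bit is encoded separately.

  #define __PPC_SH(s)     (((s) & 0x1f) << 11)                  /* sh[0:4] */
  #define __PPC_SH64(s)   (__PPC_SH(s) | (((s) & 0x20) >> 4))   /* plus sh[5] */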
-
David Gibson authored
We've now implemented code in the pseries platform to use the new PAPR interface to allow resizing the hash page table (HPT) at runtime. This patch uses that interface to automatically attempt to resize the HPT when memory is hot added or removed. This tries to always keep the HPT at a reasonable size for our current memory size. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
David Gibson authored
The hypervisor needs to know a guest is capable of using the HPT resizing PAPR extension in order to take full advantage of it for memory hotplug. If the hypervisor knows the guest is HPT-resize aware, it can size the initial HPT based on the initial guest RAM size, relying on the guest to resize the HPT when more memory is hot-added. Without this, the hypervisor must size the HPT for the maximum possible guest RAM, which can lead to a huge waste of space if the guest never actually expands to that maximum size. This patch advertises the guest's support for HPT resizing via the ibm,client-architecture-support OF interface. We use bit 5 of byte 6 of option vector 5 for this purpose, as defined in the PAPR ACR "HPT resizing option". Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Reviewed-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
David Gibson authored
This adds support for using two hypercalls to change the size of the main hash page table while running as a PAPR guest. For now these hypercalls are only in experimental qemu versions. The interface has two parts: first, H_RESIZE_HPT_PREPARE is used to allocate and prepare the new hash table. This may be slow, but can be done asynchronously. Then, H_RESIZE_HPT_COMMIT is used to switch to the new hash table. This requires that no CPUs be concurrently updating the HPT, and so must be run under stop_machine(). This also adds a debugfs file which can be used to manually control HPT resizing, for testing purposes. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Paul Mackerras <paulus@samba.org> [mpe: Rename the debugfs file to "hpt_order"] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
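A hedged sketch of the two-phase flow (wrapper and callback names are assumptions; error handling and retries elided):

  /*
   * Phase 1: ask the hypervisor to allocate and prepare the new HPT.
   * May be slow, but the guest keeps running normally while it happens.
   */
  rc = plpar_resize_hpt_prepare(0, target_shift);

  /*
   * Phase 2: switch to the new HPT. No CPU may update the old HPT while
   * this runs, hence stop_machine().
   */
  if (rc == H_SUCCESS)
          rc = stop_machine(pseries_lpar_resize_hpt_commit, &state, NULL);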
-
- 09 Feb, 2017 1 commit
-
-
David Gibson authored
This adds the hypercall numbers and wrapper functions for the hash page table resizing hypercalls. These hypercall numbers are defined in the PAPR ACR "HPT resizing option". It also adds a new firmware feature flag to track the presence of the HPT resizing calls. Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
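A sketch of what such wrappers typically look like (names taken from the description; treat the details as illustrative):

  static inline long plpar_resize_hpt_prepare(unsigned long flags,
                                              unsigned long shift)
  {
          return plpar_hcall_norets(H_RESIZE_HPT_PREPARE, flags, shift);
  }

  static inline long plpar_resize_hpt_commit(unsigned long flags,
                                             unsigned long shift)
  {
          return plpar_hcall_norets(H_RESIZE_HPT_COMMIT, flags, shift);
  }

  /* ... with callers gating on the new firmware feature flag, e.g.: */
  if (!firmware_has_feature(FW_FEATURE_HPT_RESIZE))
          return -ENODEV;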
-
- 08 Feb, 2017 6 commits
-
-
Chris Packham authored
List all the current valid compatible strings for the l2cache binding. This should stop checkpatch.pl from complaining and will hopefully save someone from having to debug a typo in their dts. Signed-off-by: Chris Packham <chris.packham@alliedtelesis.co.nz> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Benjamin Herrenschmidt authored
We don't need asm/xics.h Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Benjamin Herrenschmidt authored
Recent versions of OPAL can provide names for the various OPAL interrupts, so let's use them. This also modernises the code that fetches the interrupt array to use the helpers provided by the generic code instead of hand-parsing the property. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> [mpe: Free irqs on error, check allocation of names, consolidate error handling, whitespace.] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
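A hedged sketch of what fetching the array with the generic OF helpers looks like (property names assumed from the OPAL device tree; the real driver code differs in detail):

  int i, nr_irqs = of_property_count_u32_elems(opal_node, "opal-interrupts");

  for (i = 0; i < nr_irqs; i++) {
          const char *name = NULL;
          u32 hwirq;

          if (of_property_read_u32_index(opal_node, "opal-interrupts",
                                         i, &hwirq))
                  break;
          /* Recent OPAL also provides a name for each interrupt. */
          of_property_read_string_index(opal_node, "opal-interrupts-names",
                                        i, &name);
          /* ... request_irq(..., name ? name : "opal", ...) ... */
  }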
-
Mahesh Salgaonkar authored
On some CAPP errors we see console messages that print an unknown HMI even though CAPI recovery is in progress. Fix this by printing the correct error information for HMIs generated due to CAPP recovery. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Tested-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Neuling authored
These are common on bare metal machines, so put them in the defconfig. This adds 216KB to the vmlinux size. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Naveen N. Rao authored
Specifically:

- CONFIG_BPF_SYSCALL
- CONFIG_NET_SCHED
- CONFIG_NET_CLS_BPF
- CONFIG_NET_CLS_ACT
- CONFIG_NET_ACT_BPF
- CONFIG_CGROUP_BPF
- CONFIG_UPROBE_EVENT

... in pseries, ppc64 and powernv defconfigs. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 07 Feb, 2017 9 commits
-
-
Finn Thain authored
Change the device probe test in the via-cuda.c driver so it will load on Egret-based machines too. Remove the now redundant via-maciisi.c driver. Tested-by: Stan Johnson <userm57@yahoo.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Finn Thain authored
The Egret system controller was the predecessor to the Cuda and the differences are minor. On Cuda, byte acknowledgement requires one transition of the TACK signal; on Egret two are needed. On Cuda, TIP is active low; on Egret it is active high. And Cuda raises certain interrupts that Egret omits. Accommodating these differences complicates the Cuda driver slightly but avoids a lot of duplication (see next patch). Tested-by: Stan Johnson <userm57@yahoo.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Finn Thain authored
Initialize data_index where appropriate to improve readability and assist debugging. This change doesn't affect driver behaviour. I prefer to see current_req->data[data_index++] in place of current_req->data[0] or current_req->data[1] inasmuch as it becomes obvious what the data_index variable does. Moreover, the actual value of data_index when examined at any given moment tells me something about prior events, which did prove helpful. Tested-by: Stan Johnson <userm57@yahoo.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Finn Thain authored
The cuda_start() function uses spin_lock_irqsave/restore for mutual exclusion. Let's have cuda_poll() do the same when polling the VIA interrupt. The benefit of disabling local irqs while the interrupt is being polled is that the interrupt handler now has the same timing properties regardless of whether it is invoked normally or from cuda_poll(). This driver was written back when local irqs remained enabled during execution of interrupt handlers, and cuda_poll() was probably trying to achieve the same effect by use of enable/disable_irq. Tested-by: Stan Johnson <userm57@yahoo.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Finn Thain authored
When a read transaction completes, one of several things will happen: a new transfer is started by the driver, a new transfer request is raised by the Cuda (i.e. TREQ asserted), or both happen at once. When both happen at once, there is a race condition between the TREQ test in the read_done state and the same test in cuda_start(). Moreover, the former test uses a stale TREQ value. Theoretically, this can result in the undesirable outcome that the interrupt handler completes with the state machine 'idle' when it should instead start the next transaction. Avoid this race by calling cuda_start() first and then confirming that it succeeded. If not, test the current TREQ value before entering the 'reading' state. Tested-by: Stan Johnson <userm57@yahoo.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Finn Thain authored
When reading_reply is set, reply_ptr points into an adb_request struct. Conversely, when reply_ptr instead points into the global cuda_rbuf, reading_reply must be false. Unfortunately, this rule can be violated because re-initialization of reply_ptr and reading_reply presently depends on the TREQ input. Fix this by re-initializing reply_ptr and reading_reply as soon as they are known to be invalid. Tested-by: Stan Johnson <userm57@yahoo.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Finn Thain authored
If the Cuda driver does not enter the 'read_done' state for some reason, it may continue in the 'reading' state until the buffer overflows. Add a bounds check to prevent this. Tested-by: Stan Johnson <userm57@yahoo.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
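A sketch of the kind of guard described (the macro and buffer names are assumptions based on the driver, not the exact patch):

  #define ARRAY_FULL(a, p)        ((p) - (a) == ARRAY_SIZE(a))

  /* in the 'reading' state of the interrupt handler: */
  if (ARRAY_FULL(cuda_rbuf, reply_ptr))
          (void)in_8(&via[SR]);           /* buffer full: drain and discard */
  else
          *reply_ptr++ = in_8(&via[SR]);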
-
Finn Thain authored
Introduce some helpers for handling the signalling between VIA and Cuda. This abstraction will be used to add support for Egret devices, which utilize slightly different signalling. Don't invert the sense of the Cuda's active-low signals when storing them in the 'status' variable. Just assert, negate and test those signals using the helpers. The state machine does not need to test its own output signals to figure out what to do next: the next state depends on the Cuda's TREQ output. Just call the TREQ_asserted() helper function to test for that. Similarly, there is no need to store pin directions in the 'status' variable. That was only useful for debugging messages. Tested-by: Stan Johnson <userm57@yahoo.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
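A hedged sketch of the style of helpers described (bit and register names follow the existing driver; the exact set of helpers is an assumption):

  /* TREQ is an input from the Cuda, active low. */
  static inline bool TREQ_asserted(int status)
  {
          return !(status & TREQ);
  }

  /* TIP and TACK are outputs to the Cuda; on Cuda both are active low. */
  static inline void assert_TIP(void)
  {
          out_8(&via[B], in_8(&via[B]) & ~TIP);
  }

  static inline void negate_TIP_and_TACK(void)
  {
          out_8(&via[B], in_8(&via[B]) | TIP | TACK);
  }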
-
Finn Thain authored
There is no possibility that current_req can change during execution of cuda_start(). This can be confirmed by inspection: cuda_lock is always held whenever cuda_start() is called or current_req is modified. Tested-by: Stan Johnson <userm57@yahoo.com> Signed-off-by: Finn Thain <fthain@telegraphics.com.au> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-