- 07 Feb, 2017 7 commits
-
-
Finn Thain authored
When a read transaction completes, one of several things will happen: a new transfer is started by the driver, a new transfer request is raised by the Cuda (i.e. TREQ asserted), or both happen at once. When both happen at once, there is a race condition between the TREQ test in the read_done state and the same test in cuda_start(). Moreover, the former test uses a stale TREQ value. Theoretically, this can result in the undesirable outcome that the interrupt handler completes with the state machine 'idle' when it should instead start the next transaction.

Avoid this race by calling cuda_start() first and then confirming that it succeeded. If not, test the current TREQ value before entering the 'reading' state.

Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
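A minimal, compilable sketch of the fixed ordering; cuda_start(), treq_asserted(), assert_tip() and the state variable below are hypothetical stand-ins, not the literal driver code:

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the driver's state and helpers. */
enum { IDLE, READING } state;
static bool cuda_start(void)    { return false; } /* nothing was queued */
static bool treq_asserted(void) { return true;  } /* Cuda is requesting */
static void assert_tip(void)    { }

/*
 * Fixed read_done handling: try to start a queued request first, and
 * only if nothing was started sample TREQ now (not a stale value)
 * before dropping into the reading state.
 */
static void on_read_done(void)
{
	state = IDLE;
	if (cuda_start())
		return;
	if (treq_asserted()) {
		assert_tip();
		state = READING;
	}
}

int main(void)
{
	on_read_done();
	return state == READING ? 0 : 1;
}
```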
-
Finn Thain authored
When reading_reply is set, reply_ptr points into an adb_request struct. Conversely, when reply_ptr instead points into the global cuda_rbuf, reading_reply must be false. Unfortunately, this rule can be violated because re-initialization of reply_ptr and reading_reply presently depends on the TREQ input.

Fix this by re-initializing reply_ptr and reading_reply as soon as they are known to be invalid.

Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Finn Thain authored
If the Cuda driver does not enter the 'read_done' state for some reason, it may continue in the 'reading' state until the buffer overflows. Add a bounds check to prevent this.

Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
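A small, self-contained sketch of the kind of bounds check described here; the buffer name, size and logging are illustrative, not the driver's actual code:

```c
#include <stdio.h>

#define CUDA_RBUF_SIZE 16	/* hypothetical buffer size */

static unsigned char rbuf[CUDA_RBUF_SIZE];
static int reply_len;

/*
 * One byte arrives while in the 'reading' state: store it only if it
 * still fits, otherwise drop it and note the overrun instead of
 * writing past the end of the buffer.
 */
static void store_reply_byte(unsigned char b)
{
	if (reply_len >= CUDA_RBUF_SIZE) {
		fprintf(stderr, "reply buffer overrun, byte dropped\n");
		return;
	}
	rbuf[reply_len++] = b;
}

int main(void)
{
	for (int i = 0; i < 20; i++)
		store_reply_byte((unsigned char)i);	/* last 4 are dropped */
	printf("stored %d bytes\n", reply_len);
	return 0;
}
```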
-
Finn Thain authored
Introduce some helpers for handling the signalling between VIA and Cuda. This abstraction will be used to add support for Egret devices, which utilize slightly different signalling.

Don't invert the sense of the Cuda's active-low signals when storing them in the 'status' variable. Just assert, negate and test those signals using the helpers. The state machine does not need to test its own output signals to figure out what to do next: the next state depends on the Cuda's TREQ output. Just call the TREQ_asserted() helper function to test for that. Similarly, there is no need to store pin directions in the 'status' variable. That was only useful for debugging messages.

Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
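A compilable sketch of what such helpers can look like for active-low lines; the bit positions and names below are illustrative stand-ins, not the driver's actual definitions:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative bit positions for two active-low lines in VIA port B. */
#define TREQ 0x08	/* transfer request, driven by the Cuda (input) */
#define TIP  0x20	/* transfer in progress, driven by us (output) */

static uint8_t via_b = TREQ | TIP;	/* both lines negated (high) at rest */

static bool treq_asserted(uint8_t status) { return (status & TREQ) == 0; }
static void assert_tip(void)  { via_b &= (uint8_t)~TIP; }
static void negate_tip(void)  { via_b |= TIP; }

int main(void)
{
	assert_tip();				/* claim the bus */
	printf("TREQ asserted: %d\n", treq_asserted(via_b));
	negate_tip();				/* release the bus */
	return 0;
}
```

With helpers like these the state machine never has to remember or re-derive the inverted sense of the lines; it simply tests the Cuda's TREQ output.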
-
Finn Thain authored
There is no possibility that current_req can change during execution of cuda_start(). This can be confirmed by inspection: cuda_lock is always held whenever cuda_start() is called or current_req is modified.

Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Finn Thain authored
Add missing log message severity, remove old debug messages and replace the printk() loop with a print_hex_dump() call.

Tested-by: Stan Johnson <userm57@yahoo.com>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
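For reference, a sketch of the print_hex_dump() pattern this refers to; print_hex_dump() is the real kernel API, while the prefix string and the wrapper function are illustrative:

```c
#include <linux/printk.h>

/* Dump a reply buffer in one call instead of a per-byte printk() loop. */
static void dump_reply(const unsigned char *buf, int len)
{
	print_hex_dump(KERN_DEBUG, "cuda reply: ", DUMP_PREFIX_NONE,
		       16, 1, buf, len, false);
}
```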
-
Aneesh Kumar K.V authored
Without this we will always find the feature disabled.

Fixes: 984d7a1e ("powerpc/mm: Fixup kernel read only mapping")
Cc: stable@vger.kernel.org # v4.7+
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 06 Feb, 2017 11 commits
-
-
Nicholas Piggin authored
start,size has the benefit of being easier to search for (start,end usually gives you the preceding vector from the one you want, as first result).

Suggested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Nicholas Piggin authored
Somewhere along the line, search/replace left some naming garbled, and untidy alignment (aka. mpe stuffed it up). Might as well fix them all up now while git blame history doesn't extend too far.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Benjamin Herrenschmidt authored
This adds AUX vectors for the L1I, L1D, L2 and L3 cache levels, providing for each cache level the size of the cache in bytes and the geometry (line size and number of ways).

We chose not to use the existing alpha/sh definition, which packs all the information in a single entry per cache level, as it is too restricted to represent some of the geometries used on POWER.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
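A hedged userspace sketch of consuming these vectors with getauxval(); the AT_L1D_* names follow the new auxvec constants, and the "line size in the low 16 bits, ways above it" split is an assumption about the geometry encoding:

```c
#include <stdio.h>
#include <sys/auxv.h>
#include <linux/auxvec.h>

int main(void)
{
#if defined(AT_L1D_CACHESIZE) && defined(AT_L1D_CACHEGEOMETRY)
	unsigned long size = getauxval(AT_L1D_CACHESIZE);
	unsigned long geo  = getauxval(AT_L1D_CACHEGEOMETRY);
	unsigned long line  = geo & 0xffff;	/* assumed: line size, bytes */
	unsigned long assoc = geo >> 16;	/* assumed: number of ways */

	printf("L1D: %lu bytes, %lu-byte lines, %lu ways\n", size, line, assoc);
#else
	printf("AT_L1D_* aux vectors not available in these headers\n");
#endif
	return 0;
}
```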
-
Benjamin Herrenschmidt authored
All shipping firmware versions have it wrong in the device-tree.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Benjamin Herrenschmidt authored
Retrieved from the device-tree when available.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Benjamin Herrenschmidt authored
We have two sets of identical struct members for the I and D sides, and mostly identical bunches of code to parse the device-tree to populate them. Instead, make a ppc_cache_info structure with one copy for I and one for D.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
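A rough sketch of the consolidation described, assuming illustrative field names rather than the exact kernel layout:

```c
#include <linux/types.h>

/* Illustrative field names, not the exact kernel structure. */
struct ppc_cache_info {
	u32 size;	/* total cache size in bytes */
	u32 line_size;	/* true line size */
	u32 block_size;	/* size used by cache management instructions */
	u32 assoc;	/* number of ways */
};

/* One copy for the data side, one for the instruction side. */
struct ppc64_caches_sketch {
	struct ppc_cache_info l1d;
	struct ppc_cache_info l1i;
};
```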
-
Benjamin Herrenschmidt authored
It will be used to calculate the associativity.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Benjamin Herrenschmidt authored
In a number of places we called "cache line size" what is actually the cache block size, which in the powerpc architecture means the effective size to use with cache management instructions (it can be different from the actual cache line size). We fix the naming across the board and properly retrieve both pieces of information when available in the device-tree.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Benjamin Herrenschmidt authored
We don't patch instructions based on the cache lines or block sizes these days.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Benjamin Herrenschmidt authored
The variables are defined twice, in setup_32.c and setup_64.c; define them once in setup-common.c instead.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Benjamin Herrenschmidt authored
It's a kernel-private macro; it doesn't belong there.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 03 Feb, 2017 4 commits
-
-
Andrew Donnellan authored
Enable support for GCC plugins on powerpc. Add an additional version check in gcc-plugins-check to advise users to upgrade to gcc 5.2+ on powerpc to avoid issues with header files (gcc <= 4.6) or missing copies of rs6000-cpus.def (4.8 to 5.1 on 64-bit targets).

Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Andrew Donnellan authored
Commit 38addce8 ("gcc-plugins: Add latent_entropy plugin") excludes certain powerpc early boot code from the latent entropy plugin by adding appropriate CFLAGS. It looks like this was supposed to cover prom_init.o, but ended up saying init.o (which doesn't exist) instead. Fix the typo.

Fixes: 38addce8 ("gcc-plugins: Add latent_entropy plugin")
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Andrew Donnellan authored
The variable DISABLE_LATENT_ENTROPY_PLUGIN is only defined when CONFIG_PAX_LATENT_ENTROPY is set, a leftover from the original PaX version of the plugin code; that config symbol doesn't actually exist in mainline. Change the condition to depend on CONFIG_GCC_PLUGIN_LATENT_ENTROPY instead.

Fixes: 38addce8 ("gcc-plugins: Add latent_entropy plugin")
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Andrew Donnellan authored
Stub out the debugfs functions so that the build doesn't break when CONFIG_DEBUG_FS=n.

Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
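The usual shape of such stubs, sketched with hypothetical function names (the driver's real hooks will differ):

```c
#include <linux/debugfs.h>

/* Hypothetical hook names; real drivers follow the same shape. */
#ifdef CONFIG_DEBUG_FS
int  foo_debugfs_init(void);
void foo_debugfs_exit(void);
#else
/* With CONFIG_DEBUG_FS=n the callers still compile, the hooks do nothing. */
static inline int  foo_debugfs_init(void) { return 0; }
static inline void foo_debugfs_exit(void) { }
#endif
```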
-
- 02 Feb, 2017 13 commits
-
-
Nathan Fontenot authored
As we add the ability to do DLPAR of additional devices through the sysfs interface, we need to know which devices are supported. This adds reporting of the supported devices as a comma-separated list in the existing /sys/kernel/dlpar file.

Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
John Allen authored
Extend the existing PRRN infrastructure to perform the actual affinity updating for cpus and memory in addition to the device tree updating. For cpus, dynamic affinity updating already appears to exist in the kernel in the form of arch_update_cpu_topology(). For memory, we must place a READD operation on the hotplug queue for any phandle included in the PRRN event that is determined to be an LMB.

Signed-off-by: John Allen <jallen@linux.vnet.ibm.com>
Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
John Allen authored
Currently, memory must be hot removed and subsequently re-added in order to dynamically update the affinity of LMBs specified by a PRRN event. Earlier implementations of the PRRN event handler ran into issues in which the hot remove would occur successfully, but a hotplug event would be initiated from another source and grab the hotplug lock, preventing the hot add from occurring.

To prevent this situation, this patch introduces the notion of a hot "readd" action for memory which atomizes a hot remove and a hot add into a single, serialized operation on the hotplug queue.

Signed-off-by: John Allen <jallen@linux.vnet.ibm.com>
Reviewed-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
John Allen authored
When adding and removing LMBs we should make the acquire/release of the DRC a separate step, to allow for a few improvements. First, this will ensure that LMBs removed during a remove-by-count operation are all available if an error occurs and we need to add them back. By first removing all the LMBs from the kernel before releasing their DRCs, the LMBs are available to add back should an error occur. Also, this will allow for faster re-add operations of memory for PRRN event handling, since we can skip the unneeded step of having to release the DRC and then acquire it back.

Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
Signed-off-by: John Allen <jallen@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Madhavan Srinivasan authored
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
Add a few things that have been missed from .gitignore over the years.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
CONFIG_PPC_PTDUMP currently selects CONFIG_DEBUG_FS. But CONFIG_DEBUG_FS is user-selectable, so we shouldn't select it. Instead depend on it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Anton Blanchard authored
Commit db911217 ("powerpc: Turn on BPF_JIT in ppc64_defconfig") only added BPF_JIT to the ppc64 defconfig. Add it to our powernv and pseries defconfigs too.

Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Anton Blanchard authored
We added support for HAVE_CONTEXT_TRACKING, but placed the option inside PPC_PSERIES. This has the undesirable effect that NO_HZ_FULL can be enabled on a kernel with both powernv and pseries support, but cannot on a kernel with powernv-only support.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Daniel Axtens authored
In __get_user_nosleep, we create an intermediate pointer for the user address we're about to fetch. We currently don't tag this pointer as const. Make it const, as we are simply dereferencing it, and its scope is limited to the __get_user_nosleep macro.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
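A compilable stand-in for the pattern this commit (and the two below) adjusts; the real macros also carry the __user annotation and call uaccess helpers, both omitted here:

```c
#include <stdio.h>

/*
 * Illustrative __get_user_*-style macro: the intermediate pointer is
 * only ever dereferenced, so it can be declared const.
 */
#define get_user_sketch(x, ptr)				\
({							\
	const __typeof__(*(ptr)) *__gu_addr = (ptr);	\
	(x) = *__gu_addr;				\
	0;						\
})

int main(void)
{
	int src = 42, dst = 0;

	(void)get_user_sketch(dst, &src);
	printf("%d\n", dst);
	return 0;
}
```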
-
Daniel Axtens authored
In __get_user_nocheck, we create an intermediate pointer for the user address we're about to fetch. We currently don't tag this pointer as const. Make it const, as we are simply dereferencing it, and its scope is limited to the __get_user_nocheck macro.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Daniel Axtens authored
In __get_user_check, we create an intermediate pointer for the user address we're about to fetch. We currently don't tag this pointer as const. Make it const, as we are simply dereferencing it, and its scope is limited to the __get_user_check macro.

Signed-off-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Michael Ellerman authored
opal_lpc_init() is called from an __init routine, and calls other __init routines, so it should also be __init.

Fixes: 023b13a5 ("powerpc/powernv: Add support for direct mapped LPC on POWER9")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
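A minimal sketch of the annotation itself; the function name and body are illustrative:

```c
#include <linux/init.h>

/*
 * __init places the function in the init section so its memory is
 * discarded after boot; it must only be called from other __init code.
 */
void __init example_lpc_init(void)
{
	/* probe the LPC bus, register the ISA ranges, etc. */
}
```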
-
- 31 Jan, 2017 5 commits
-
-
Reza Arbab authored
Use remove_pagetable() and friends for radix vmemmap removal. We do not require the special-case handling of vmemmap done in the x86 versions of these functions. This is because vmemmap_free() has already freed the mapped pages, and calls us with an aligned address range. So, add a few failsafe WARNs, but otherwise the code to remove physical mappings is already sufficient for vmemmap.

Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Reza Arbab authored
Tear down and free the four-level page tables of physical mappings during memory hotremove. Borrow the basic structure of remove_pagetable() and friends from the identically-named x86 functions. Reduce the frequency of tlb flushes and page_table_lock spinlocks by only doing them in the outermost function.

There was some question as to whether the locking is needed at all. Leave it for now, but we could consider dropping it. Memory must be offline to be removed, thus not in use. So there shouldn't be the sort of concurrent page walking activity here that might prompt us to use RCU.

Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
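A very rough sketch of the "lock and flush only in the outermost function" structure; remove_pgd_range() is a hypothetical helper standing in for the pud/pmd/pte walk, and the final call stands for a single range flush rather than one flush per entry:

```c
#include <linux/mm.h>
#include <linux/spinlock.h>
#include <asm/tlbflush.h>

/* Hypothetical: walks pud/pmd/pte entries under one pgd and frees them. */
void remove_pgd_range(unsigned long addr, unsigned long end);

static void remove_pagetable_sketch(unsigned long start, unsigned long end)
{
	unsigned long addr, next;

	/* Take the lock once, in the outermost function only. */
	spin_lock(&init_mm.page_table_lock);
	for (addr = start; addr < end; addr = next) {
		next = pgd_addr_end(addr, end);
		remove_pgd_range(addr, next);
	}
	spin_unlock(&init_mm.page_table_lock);

	/* One TLB flush for the whole range, not one per entry. */
	radix__flush_tlb_kernel_range(start, end);
}
```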
-
Reza Arbab authored
Wire up memory hotplug page mapping for radix. Share the mapping function already used by radix_init_pgtable().

Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Reza Arbab authored
Move the page mapping code in radix_init_pgtable() into a separate function that will also be used for memory hotplug. The current goto loop progressively decreases its mapping size as it covers the tail of a range whose end is unaligned. Change this to a for loop which can do the same for both ends of the range.

Signed-off-by: Reza Arbab <arbab@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
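A hedged sketch of the loop idea: at every step pick the largest mapping size that both the current alignment and the remaining length allow, so unaligned heads and tails fall out of the same loop; the kernel's actual function and page sizes differ:

```c
#include <linux/mm.h>

/* Illustrative only; not the real radix mapping routine. */
static int map_range_sketch(unsigned long start, unsigned long end)
{
	unsigned long addr, mapping_size;

	for (addr = start; addr < end; addr += mapping_size) {
		if (IS_ALIGNED(addr, PUD_SIZE) && end - addr >= PUD_SIZE)
			mapping_size = PUD_SIZE;	/* largest mapping */
		else if (IS_ALIGNED(addr, PMD_SIZE) && end - addr >= PMD_SIZE)
			mapping_size = PMD_SIZE;
		else
			mapping_size = PAGE_SIZE;

		/* map [addr, addr + mapping_size) with one entry here */
	}
	return 0;
}
```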
-
Benjamin Herrenschmidt authored
Use the new non-PCI ISA bridge support to expose the POWER9 LPC bus as direct mapped via the ISA IO port range. This enables direct access via drivers such as 8250.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-