- 19 Aug, 2015 3 commits
-
Heiko Carstens authored
Remove two more statements which always evaluate to 'false'. These are more leftovers from the 31-bit era. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
Remove the two facility bits 50 (constrained transactional-execution facility) and 74 (transactional-execution facility) from the required facilities if the kernel is built with -march=zEC12. E.g. z/VM 6.3 doesn't virtualize the TX facility yet. Therefore a kernel built with -march=zEC12 and ipl'ed on a zEC12 machine as a z/VM 6.3 guest will emit a message about the missing facilities and stop working. The kernel, however, doesn't make use of the TX facility, so remove the two TX-related facility bits and fix this unpleasant behavior. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
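For reference, individual facility bits can also be tested at run time with the kernel's test_facility() helper instead of being required at IPL time. A minimal sketch, assuming a purely illustrative wrapper function (only test_facility() and the bit numbers 50 and 74 come from the commit text):

    #include <linux/types.h>
    #include <asm/facility.h>

    /* Illustrative run-time check; bits 50 and 74 are the TX-related
     * facility bits named above. */
    static bool tx_facilities_available(void)
    {
        return test_facility(50) && test_facility(74);
    }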
-
Michael Holzheu authored
By accident this level has been removed by the NUMA infrastructure patch. For non-NUMA systems with CPUs that span more than one book, this makes the scheduler use only one of the books while the other books remain idle. Fix this and re-add the missing level. For NUMA and non-NUMA we have the following scheduling domains and groups:
- SMT (Groups: CPU threads)
- MC (Groups: Cores)
- BOOK (Groups: Books)
For the non-NUMA case we have one last level scheduling domain:
- DIE (Groups: Whole system, has all CPUs -> cpu_cpu_mask)
For the NUMA case we have the following two last level scheduling domains:
- DIE (Groups: NUMA nodes -> cpu_cpu_mask -> returns node siblings)
- NUMA (Groups: Whole system, has all CPUs -> created in sched_init_numa())
Fixes: e8054b65 ("s390/numa: add topology tree infrastructure") Reported-and-tested-by: Evgeny Cherkashin <Eugene.Crosser@ru.ibm.com> Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
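For orientation, a sketch of what the s390 sched_domain_topology_level table looks like with the last level re-added; the mask and flag function names are abbreviated and may not match arch/s390/kernel/topology.c exactly:

    static struct sched_domain_topology_level s390_topology[] = {
        { cpu_thread_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
        { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
        { cpu_book_mask, SD_INIT_NAME(BOOK) },
        { cpu_cpu_mask, SD_INIT_NAME(DIE) },    /* the re-added last level */
        { NULL, },
    };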
-
- 09 Aug, 2015 2 commits
-
Stefan Haberland authored
This patch adds an enhanced detection for control unit initiated reconfiguration request scope. The first approach assumed the scope of the reconfiguration request to be restricted to the path on which the message was received. The enhanced approach determines the full scope of the reconfiguration request by evaluating additional path and device selection information contained in the reconfiguration message. Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Stefan Haberland authored
DASD path verification requires the usage of sleep_on_immediatly to ensure that no other I/O request is blocking the recovery of disconnected devices. But two concurrent path verification workers for the same device may kill each other's requests due to the usage of the immediate sleep_on function. This may lead to unsuccessful path verifications. Prevent two parallel path verification workers from conflicting with each other by implementing a device flag that signals an already running worker. Signed-off-by: Stefan Haberland <stefan.haberland@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
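A minimal sketch of such a guard, assuming a hypothetical flag bit and function name rather than the actual DASD driver code:

    /* Hypothetical flag bit; the real name and number in the driver differ. */
    #define DASD_FLAG_PATH_VERIFY   10

    static void do_path_verification(struct dasd_device *device)
    {
        /* allow only one path verification worker per device at a time */
        if (test_and_set_bit(DASD_FLAG_PATH_VERIFY, &device->flags))
            return;
        /* ... verify paths using the immediate sleep_on variant ... */
        clear_bit(DASD_FLAG_PATH_VERIFY, &device->flags);
    }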
-
- 07 Aug, 2015 6 commits
-
Martin Schwidefsky authored
As proposed by Andy Lutomirski create the SysV and the GNU hash for the vdso objects. This may make some dynamic loaders a bit faster. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Michael Holzheu authored
The core to node mapping data consumes about 2 KB of bss data. To save memory for the non-NUMA case, make the data dynamic. In addition, change the "core_to_node" array from "int" to "s32", which saves 1 KB for the NUMA case as well. Suggested-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Guenter Roeck authored
__delay is exported by most architectures, and may be used in modules. Since it is not exported for s390, s390:allmodconfig currently fails to build with ERROR: "__delay" [drivers/net/phy/mdio-octeon.ko] undefined! Fixes: a6d67864 ("net: mdio-octeon: Modify driver to work on both ThunderX and Octeon") Cc: Radha Mohan Chintakuntla <rchintakuntla@cavium.com> Cc: David Daney <david.daney@cavium.com> Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
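The fix amounts to exporting the existing symbol. A sketch, with the actual s390 __delay body replaced by a simplified placeholder loop:

    #include <linux/compiler.h>
    #include <linux/export.h>

    void __delay(unsigned long loops)
    {
        /* placeholder busy loop; the real implementation in
         * arch/s390/lib/delay.c differs */
        while (loops--)
            barrier();
    }
    EXPORT_SYMBOL(__delay);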
-
Peter Senna Tschudin authored
This patch removes unneeded variables used to store return values. These issues were detected with the Coccinelle script scripts/coccinelle/misc/returnvar.cocci. [heiko.carstens@de.ibm.com]: make qeth_l[2/3]_stop() return void Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
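An illustrative example (not taken from the qeth driver) of the pattern that returnvar.cocci flags, combined with the follow-on change of letting the stop function return void:

    /* before: 'rc' only ever holds its initial value */
    static int example_stop(struct net_device *dev)
    {
        int rc = 0;

        netif_stop_queue(dev);
        return rc;
    }

    /* after: the variable is gone; since only 0 was ever returned,
     * the function can return void */
    static void example_stop(struct net_device *dev)
    {
        netif_stop_queue(dev);
    }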
-
Peter Senna Tschudin authored
Remove unneeded semicolon. The semantic patch that detects this change is available at scripts/coccinelle/misc/semicolon.cocci. Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Michael Holzheu authored
Since we are already protected by the "sched_domains_mutex" lock, we can safely remove the topology lock. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 04 Aug, 2015 4 commits
-
Martin Schwidefsky authored
The MT scaling values are updated on each call to do_account_vtime. This function is called for each HZ interrupt and for each context switch. Context switches can happen often, and the STCCTM instruction on this path is noticeable. Limit the updates to once per jiffy. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
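A sketch of the once-per-jiffy limit, assuming an illustrative per-CPU timestamp and an update_mt_scaling() helper that issues STCCTM (the names are not necessarily those in arch/s390/kernel/vtime.c):

    #include <linux/jiffies.h>
    #include <linux/percpu.h>

    static DEFINE_PER_CPU(u64, mt_scaling_jiffies);

    static void update_mt_scaling(void);    /* assumed helper that runs STCCTM */

    static void maybe_update_mt_scaling(void)
    {
        u64 now = get_jiffies_64();

        if (time_before64(now, __this_cpu_read(mt_scaling_jiffies)))
            return;                          /* already updated within this jiffy */
        update_mt_scaling();
        __this_cpu_write(mt_scaling_jiffies, now + 1);
    }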
-
Heiko Carstens authored
x86 will wire up all syscalls reachable via sys_socketcall. Therefore this will yield a lot of warnings from the checksyscalls.sh script on s390, where we currently don't wire them up directly. This might change in the future, but it needs to be done carefully in order to not break anything. For the time being just tell the checksyscalls script to ignore the missing syscalls on s390. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
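checksyscalls.sh suppresses its warning when it sees a matching "#define __IGNORE_<name>". A sketch of the kind of block this adds for s390 (only two of the socketcall-multiplexed names are shown; the real list is longer):

    #ifdef __s390__
    /* reachable via sys_socketcall on s390, so not wired up individually */
    #define __IGNORE_recvmmsg
    #define __IGNORE_sendmmsg
    #endif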
-
Philipp Hachtmann authored
Signed-off-by: Philipp Hachtmann <phacht@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Michael Holzheu authored
NUMA emulation (aka fake NUMA) distributes the available memory to nodes without using real topology information about the physical memory of the machine. Splitting the system memory into nodes replicates the memory management structures for each node. In particular, each node has its own "mm locks" and its own "kswapd" task. For large systems, under certain conditions, this results in improved system performance and/or latency based on reduced pressure on the mm locks and the kswapd tasks. NUMA emulation distributes CPUs to nodes while respecting the original machine topology information. This is done by trying to avoid separating CPUs which reside on the same book or even on the same MC. Because the current Linux scheduler code requires a stable CPU to node mapping, cores are pinned to nodes when the first CPU thread is set online. This patch is based on the initial implementation from Philipp Hachtmann. Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
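A conceptual sketch of the core pinning rule described above; all names and the data structure are illustrative, the real code lives under arch/s390/numa/:

    #define NODE_FREE    -1

    /* set to NODE_FREE for every core during early setup (not shown) */
    static s32 core_to_node_map[CONFIG_NR_CPUS];

    /* pin a core to a node the first time one of its threads comes online,
     * so the CPU-to-node mapping stays stable for the scheduler */
    static int pin_core_to_node(int core_id, int target_node)
    {
        if (core_to_node_map[core_id] == NODE_FREE)
            core_to_node_map[core_id] = target_node;
        return core_to_node_map[core_id];
    }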
-
- 03 Aug, 2015 7 commits
-
Philipp Hachtmann authored
NUMA emulation needs proper means to mangle the book/mc/core topology of the machine. The topology tree (toptree) consistently maintains cpu masks for the root, each node, and all leaves of the tree while the user may use the toptree functions to rearrange the tree in various ways. This patch contains several changes from Michael Holzheu. Signed-off-by: Philipp Hachtmann <phacht@linux.vnet.ibm.com> Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
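A sketch of what a topology-tree node carries; the member names are illustrative and may differ from arch/s390/numa/toptree.h:

    #include <linux/cpumask.h>
    #include <linux/list.h>

    struct toptree {
        int level;                  /* core, mc, book, node or root level */
        int id;
        cpumask_t mask;             /* union of all CPUs below this node */
        struct toptree *parent;
        struct list_head sibling;   /* linkage among the parent's children */
        struct list_head children;
    };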
-
Philipp Hachtmann authored
Enable core NUMA support for s390 and add one simple default mode "plain" that creates one single NUMA node. This patch contains several changes from Michael Holzheu. Signed-off-by: Philipp Hachtmann <phacht@linux.vnet.ibm.com> Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Gerald Schaefer authored
With NUMA support for s390, arch_add_memory() needs to respect the nid parameter. Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Gerald Schaefer authored
Force get_user_page() to take the slow path for NUMA migration pages. Signed-off-by: Gerald Schaefer <geraldsc@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
Define pte_protnone and pmd_protnone for NUMA memory migration. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Christian Borntraeger authored
Right now we use the address of the sie control block as the tag for the sampling data. This is hard for users to obtain. Let's just use the PID of the cpu thread to mark the hardware samples. Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Hendrik Brueckner authored
All calls to save_fpu_regs() specify the fpu structure of the current task pointer as parameter. The task pointer of the current task can also be retrieved from the CPU lowcore directly. Remove the parameter definition, load the __LC_CURRENT task pointer from the CPU lowcore, and rebase the FPU structure onto the task structure. Apply the same approach for the load_fpu_regs() function. Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 29 Jul, 2015 6 commits
-
Sebastian Ott authored
Make sure that we use the pci_rescan_remove_lock when we remove or add functions from/to the bus. Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
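The lock in question is taken through the generic PCI helpers; a sketch of how a removal path would use them (the surrounding function is illustrative):

    #include <linux/pci.h>

    static void zpci_example_remove(struct pci_dev *pdev)
    {
        pci_lock_rescan_remove();
        pci_stop_and_remove_bus_device(pdev);
        pci_unlock_rescan_remove();
    }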
-
Sebastian Ott authored
Receiving error events for a pci function that's currently not in use will crash the kernel. For example the procedure for FW upgrades might include:
* remove the function from Linux
* apply FW upgrade
* rescan for new functions
Receiving an event during the FW upgrade will result in a use after free when printing the function's name. Just print "n/a" in such cases. Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
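A sketch of the defensive message; only the "n/a" fallback comes from the commit text, the handler shape and names are illustrative:

    #include <linux/pci.h>

    static void report_pci_error_event(struct pci_dev *pdev, u32 fid)
    {
        /* pdev may already be gone, e.g. removed before a FW upgrade */
        pr_err("%s: Error event reported for PCI function 0x%x\n",
               pdev ? pci_name(pdev) : "n/a", fid);
    }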
-
Sebastian Ott authored
Free bus resources when the allocation/registration of the bus failed. Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
The public mailing lists and personal email addresses are sufficient. No need for an extra generic email address. Cc: Christian Borntraeger <borntraeger@de.ibm.com> Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Ursula Braun <ursula.braun@de.ibm.com> Cc: Ingo Tuchscherer <ingo.tuchscherer@de.ibm.com> Cc: Steffen Maier <maier@linux.vnet.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Heiko Carstens authored
Fix the following section mismatch warning: "Section mismatch in reference from the function __smp_store_cpu_state() to the function .init.text:memblock_alloc()" - the function __smp_store_cpu_state() references the __init function memblock_alloc(). Reviewed-by: Michael Holzheu <holzheu@linux.vnet.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
The 31-bit assembler code for the early sclp console is error-prone, as git commit fde24b54d976cc123506695c17db01438a11b673 "s390/sclp: clear upper register halves in _sclp_print_early" has shown. Convert the assembler code to C. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 22 Jul, 2015 12 commits
-
Peter Oberparleiter authored
A hardware problem on a FICON link is reported by the Channel Subsystem to the operating system via a Link Incident Record (LIR). In response, the operating system should issue a message that enables hardware service personnel to identify and repair the failing component. Current Linux LIR handling is broken because LIR data is incorrectly interpreted and no log message is generated. This patch fixes Linux LIR handling by implementing a new log message for LIRs indicating a degraded or non-operational link. Also, LIRs are no longer used to deactivate channel paths because the available data does not reliably allow determining the affected channel path. Signed-off-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Reviewed-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Peter Oberparleiter authored
After dutifully acting as statement separator for 6 long years, this comma has finally grown into a full semicolon. Signed-off-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Peter Oberparleiter authored
Dropping kernel messages during a console-buffer-full condition is preferable to halting the system until console messages are delivered, especially for production systems. Update default for sclp_console_drop kernel parameter accordingly. Signed-off-by: Peter Oberparleiter <peter.oberparleiter@linux.vnet.ibm.com> Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Sebastian Ott authored
Inline get_zdev to save ~200 bytes of kernel text for CONFIG_PCI=y. Also rename the function to to_zpci to make clear that we don't do reference counting here. Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
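The inlined helper is essentially a cast of the sysdata pointer; a sketch of its likely shape (hedged, the actual header may look slightly different):

    static inline struct zpci_dev *to_zpci(struct pci_dev *pdev)
    {
        return pdev->sysdata;
    }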
-
Martin Schwidefsky authored
If a machine check is received while the CPU is in the kernel, only the s390_do_machine_check function will be called. The call to s390_handle_mcck is postponed until the CPU returns to user space. Because of this it is safe to use the asynchronous stack for machine checks even if the CPU is already handling an interrupt. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
Reorder the instructions of UPDATE_VTIME to improve superscalar execution, remove duplicate checks for problem-state from the asynchronous interrupt handlers, and move the check for problem-state from the synchronous exit path to the program check path as it is only needed for program checks inside the kernel. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
Currently there are two mechanisms to deal with cleanup work due to interrupts. The HANDLE_SIE_INTERCEPT macro is used to undo the changes required to enter SIE in sie64a. If the SIE instruction causes a program check, or an asynchronous interrupt is received, the HANDLE_SIE_INTERCEPT code forwards the program execution to sie_exit. All the other critical sections in entry.S are handled by the code in cleanup_critical that is called by the SWITCH_ASYNC macro. Move the sie64a function to the beginning of the critical section and add the code from HANDLE_SIE_INTERCEPT to cleanup_critical. Add a special case for the sie64a cleanup to the program check handler. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Martin Schwidefsky authored
The HANDLE_SIE_INTERCEPT macro is used in the interrupt handlers and the program check handler to undo a few changes done by sie64a. Among them are guest vs host LPP, the gmap ASCE vs kernel ASCE and the bit that indicates that SIE is currently running on the CPU. There is a race between a voluntary SIE exit and asynchronous interrupts. If the CPU completed the SIE instruction and the TM instruction of the LPP macro at the time it receives an interrupt, the interrupt handler will run while the LPP, the ASCE and the SIE bit are still set up for guest execution. This might result in wrong sampling data, but it will not cause data corruption or lockups. The critical section in sie64a needs to be enlarged to include all instructions that undo the changes required for guest execution. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Hendrik Brueckner authored
Use the module_cpu_feature_match() module init function to add a module alias based on required CPU features. The modules are automatically loaded on hardware that supports the required CPU features. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
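A usage sketch of module_cpu_feature_match() for a hypothetical s390 crypto module; the module and function names are invented, MSA is the message-security-assist CPU feature:

    #include <linux/cpufeature.h>
    #include <linux/module.h>

    static int __init example_cipher_init(void)
    {
        /* register algorithms with the crypto API here */
        return 0;
    }

    static void __exit example_cipher_exit(void)
    {
    }

    /* creates a cpu-feature device alias so udev autoloads the module
     * only on machines that provide the MSA feature */
    module_cpu_feature_match(MSA, example_cipher_init);
    module_exit(example_cipher_exit);
    MODULE_LICENSE("GPL");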
-
Hendrik Brueckner authored
Add support for the generic CPU feature modalias implementation that wires up optional CPU features to udev-based module autoprobing. The <asm/cpufeature.h> file provides definitions to map CPU features to s390 ELF hardware capabilities. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Hendrik Brueckner authored
A section mismatch warning is reported if an __init annotated function is specified for module_cpu_feature_match(). Change the module_cpu_feature_match() function and annotate the generated cpu_feature_match_* function as __init. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
Hendrik Brueckner authored
Improve the save and restore behavior of FPU register contents to use the vector extension within the kernel. The kernel does not use floating-point or vector registers and, therefore, saving and restoring the FPU register contents are performed for handling signals or switching processes only. To prepare for using vector instructions and vector registers within the kernel, enhance the save behavior and implement a lazy restore at return to user space from a system call or interrupt. To implement the lazy restore, save_fpu_regs() sets a CPU information flag, CIF_FPU, to indicate that the FPU registers must be restored. Saving and setting CIF_FPU is performed in an atomic fashion to be interrupt-safe. When the kernel wants to use the vector extension or wants to change the FPU register state for a task during signal handling, save_fpu_regs() must be called first. The CIF_FPU flag is also set at process switch. At return to user space, the FPU state is restored. In particular, the FPU state includes the floating-point or vector register contents, as well as the vector-enablement and floating-point control. The FPU state restore and the clearing of CIF_FPU are also performed in an atomic fashion. For KVM, the restore of the FPU register state is performed when restoring the general-purpose guest registers before the SIE instruction is started. Because the path towards the SIE instruction is interruptible, the CIF_FPU flag must be checked again right before going into SIE. If set, the guest registers must be reloaded again by re-entering the outer SIE loop. This is the same behavior as if the SIE critical section is interrupted. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
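A conceptual C sketch of the lazy-restore protocol; CIF_FPU and save_fpu_regs() come from the text above, while the helper names, the task-struct member and the simplified return path are illustrative (the real paths are split between C, entry.S and KVM code):

    /* kernel code that wants to clobber FP/vector registers */
    static void kernel_use_of_vector_regs(void)
    {
        save_fpu_regs();    /* saves the user state, sets CIF_FPU atomically */
        /* ... FP/vector registers may now be used by the kernel ... */
    }

    /* simplified return-to-user path */
    static void restore_user_fpu_if_needed(struct task_struct *tsk)
    {
        if (!test_cpu_flag(CIF_FPU))
            return;                          /* user state is still live */
        load_fpu_state(&tsk->thread.fpu);    /* illustrative restore helper */
        clear_cpu_flag(CIF_FPU);
    }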
-