Commit becdce1c authored by Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Martin Schwidefsky:

 - Improvements for the spectre defense:
    * The spectre related code is consolidated to a single file
      nospec-branch.c
    * Automatic enable/disable for the spectre v2 defenses (expoline vs.
      nobp)
    * Syslog messages for spectre v2 are added
    * Enable CONFIG_GENERIC_CPU_VULNERABILITIES and define the attribute
      functions for spectre v1 and v2

 - Add helper macros for assembler alternatives and use them to shorten
   the code in entry.S.

 - Add support for persistent configuration data via the SCLP Store Data
   interface. The H/W interface requires a page table that uses 4K pages
   only; the code to set up such an address space is added as well.

 - Enable virtio GPU emulation in QEMU. To do this, the depends
   statements for a few common Kconfig options are modified.

 - Add support for format-3 channel path descriptors and add a binary
   sysfs interface to export the associated utility strings.

 - Add a sysfs attribute to control the IFCC handling in case of
   constant channel errors.

 - The vfio-ccw changes from Cornelia.

 - Bug fixes and cleanups.

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (40 commits)
  s390/kvm: improve stack frame constants in entry.S
  s390/lpp: use assembler alternatives for the LPP instruction
  s390/entry.S: use assembler alternatives
  s390: add assembler macros for CPU alternatives
  s390: add sysfs attributes for spectre
  s390: report spectre mitigation via syslog
  s390: add automatic detection of the spectre defense
  s390: move nobp parameter functions to nospec-branch.c
  s390/cio: add util_string sysfs attribute
  s390/chsc: query utility strings via fmt3 channel path descriptor
  s390/cio: rename struct channel_path_desc
  s390/cio: fix unbind of io_subchannel_driver
  s390/qdio: split up CCQ handling for EQBS / SQBS
  s390/qdio: don't retry EQBS after CCQ 96
  s390/qdio: restrict buffer merging to eligible devices
  s390/qdio: don't merge ERROR output buffers
  s390/qdio: simplify math in get_*_buffer_frontier()
  s390/decompressor: trim uncompressed image head during the build
  s390/crypto: Fix kernel crash on aes_s390 module remove.
  s390/defkeymap: fix global init to zero
  ...
parents f8cf2f16 92fa7a13
@@ -28,7 +28,7 @@ every detail. More information/reference could be found here:
 https://en.wikipedia.org/wiki/Channel_I/O
 - s390 architecture:
 s390 Principles of Operation manual (IBM Form. No. SA22-7832)
-- The existing Qemu code which implements a simple emulated channel
+- The existing QEMU code which implements a simple emulated channel
 subsystem could also be a good reference. It makes it easier to follow
 the flow.
 qemu/hw/s390x/css.c
@@ -39,22 +39,22 @@ For vfio mediated device framework:
 Motivation of vfio-ccw
 ----------------------
-Currently, a guest virtualized via qemu/kvm on s390 only sees
+Typically, a guest virtualized via QEMU/KVM on s390 only sees
 paravirtualized virtio devices via the "Virtio Over Channel I/O
 (virtio-ccw)" transport. This makes virtio devices discoverable via
 standard operating system algorithms for handling channel devices.
 However this is not enough. On s390 for the majority of devices, which
 use the standard Channel I/O based mechanism, we also need to provide
-the functionality of passing through them to a Qemu virtual machine.
+the functionality of passing through them to a QEMU virtual machine.
 This includes devices that don't have a virtio counterpart (e.g. tape
 drives) or that have specific characteristics which guests want to
 exploit.
 For passing a device to a guest, we want to use the same interface as
-everybody else, namely vfio. Thus, we would like to introduce vfio
-support for channel devices. And we would like to name this new vfio
-device "vfio-ccw".
+everybody else, namely vfio. We implement this vfio support for channel
+devices via the vfio mediated device framework and the subchannel device
+driver "vfio_ccw".
 Access patterns of CCW devices
 ------------------------------
@@ -99,7 +99,7 @@ As mentioned above, we realize vfio-ccw with a mdev implementation.
 Channel I/O does not have IOMMU hardware support, so the physical
 vfio-ccw device does not have an IOMMU level translation or isolation.
-Sub-channel I/O instructions are all privileged instructions, When
+Subchannel I/O instructions are all privileged instructions. When
 handling the I/O instruction interception, vfio-ccw has the software
 policing and translation how the channel program is programmed before
 it gets sent to hardware.
@@ -121,7 +121,7 @@ devices:
 - The vfio_mdev driver for the mediated vfio ccw device.
 This is provided by the mdev framework. It is a vfio device driver for
 the mdev that created by vfio_ccw.
-It realize a group of vfio device driver callbacks, adds itself to a
+It realizes a group of vfio device driver callbacks, adds itself to a
 vfio group, and registers itself to the mdev framework as a mdev
 driver.
 It uses a vfio iommu backend that uses the existing map and unmap
@@ -178,7 +178,7 @@ vfio-ccw I/O region
 An I/O region is used to accept channel program request from user
 space and store I/O interrupt result for user space to retrieve. The
-defination of the region is:
+definition of the region is:
 struct ccw_io_region {
 #define ORB_AREA_SIZE 12
@@ -198,30 +198,23 @@ irb_area stores the I/O result.
 ret_code stores a return code for each access of the region.
-vfio-ccw patches overview
--------------------------
+vfio-ccw operation details
+--------------------------
-For now, our patches are rebased on the latest mdev implementation.
-vfio-ccw follows what vfio-pci did on the s390 paltform and uses
-vfio-iommu-type1 as the vfio iommu backend. It's a good start to launch
-the code review for vfio-ccw. Note that the implementation is far from
-complete yet; but we'd like to get feedback for the general
-architecture.
+vfio-ccw follows what vfio-pci did on the s390 platform and uses
+vfio-iommu-type1 as the vfio iommu backend.
 * CCW translation APIs
-- Description:
-These introduce a group of APIs (start with 'cp_') to do CCW
-translation. The CCWs passed in by a user space program are
-organized with their guest physical memory addresses. These APIs
-will copy the CCWs into the kernel space, and assemble a runnable
-kernel channel program by updating the guest physical addresses with
-their corresponding host physical addresses.
-- Patches:
-vfio: ccw: introduce channel program interfaces
+A group of APIs (start with 'cp_') to do CCW translation. The CCWs
+passed in by a user space program are organized with their guest
+physical memory addresses. These APIs will copy the CCWs into kernel
+space, and assemble a runnable kernel channel program by updating the
+guest physical addresses with their corresponding host physical addresses.
+Note that we have to use IDALs even for direct-access CCWs, as the
+referenced memory can be located anywhere, including above 2G.
 * vfio_ccw device driver
-- Description:
-The following patches utilizes the CCW translation APIs and introduce
+This driver utilizes the CCW translation APIs and introduces
 vfio_ccw, which is the driver for the I/O subchannel devices you want
 to pass through.
 vfio_ccw implements the following vfio ioctls:
@@ -236,20 +229,14 @@ architecture.
 This also provides the SET_IRQ ioctl to setup an event notifier to
 notify the user space program the I/O completion in an asynchronous
 way.
-- Patches:
-vfio: ccw: basic implementation for vfio_ccw driver
-vfio: ccw: introduce ccw_io_region
-vfio: ccw: realize VFIO_DEVICE_GET_REGION_INFO ioctl
-vfio: ccw: realize VFIO_DEVICE_RESET ioctl
-vfio: ccw: realize VFIO_DEVICE_G(S)ET_IRQ_INFO ioctls
-The user of vfio-ccw is not limited to Qemu, while Qemu is definitely a
+The use of vfio-ccw is not limited to QEMU, while QEMU is definitely a
 good example to get understand how these patches work. Here is a little
-bit more detail how an I/O request triggered by the Qemu guest will be
+bit more detail how an I/O request triggered by the QEMU guest will be
 handled (without error handling).
 Explanation:
-Q1-Q7: Qemu side process.
+Q1-Q7: QEMU side process.
 K1-K5: Kernel side process.
 Q1. Get I/O region info during initialization.
@@ -263,7 +250,7 @@ Q4. Write the guest channel program and ORB to the I/O region.
 K2. Translate the guest channel program to a host kernel space
 channel program, which becomes runnable for a real device.
 K3. With the necessary information contained in the orb passed in
-by Qemu, issue the ccwchain to the device.
+by QEMU, issue the ccwchain to the device.
 K4. Return the ssch CC code.
 Q5. Return the CC code to the guest.
@@ -271,7 +258,7 @@ Q5. Return the CC code to the guest.
 K5. Interrupt handler gets the I/O result and write the result to
 the I/O region.
-K6. Signal Qemu to retrieve the result.
+K6. Signal QEMU to retrieve the result.
 Q6. Get the signal and event handler reads out the result from the I/O
 region.
 Q7. Update the irb for the guest.
@@ -289,10 +276,20 @@ More information for DASD and ECKD could be found here:
 https://en.wikipedia.org/wiki/Direct-access_storage_device
 https://en.wikipedia.org/wiki/Count_key_data
-Together with the corresponding work in Qemu, we can bring the passed
+Together with the corresponding work in QEMU, we can bring the passed
 through DASD/ECKD device online in a guest now and use it as a block
 device.
+While the current code allows the guest to start channel programs via
+START SUBCHANNEL, support for HALT SUBCHANNEL or CLEAR SUBCHANNEL is
+not yet implemented.
+vfio-ccw supports classic (command mode) channel I/O only. Transport
+mode (HPF) is not supported.
+QDIO subchannels are currently not supported. Classic devices other than
+DASD/ECKD might work, but have not been tested.
 Reference
 ---------
 1. ESA/s390 Principles of Operation manual (IBM Form. No. SA22-7832)
......
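The CCW translation step described in the documentation above (the 'cp_' APIs rewriting guest physical data addresses to host physical addresses, page by page) can be sketched as a toy model. This is not the kernel's actual cp_ code; the `page_map` table and the struct/function names below are purely illustrative, and the real implementation pins pages through the vfio iommu rather than consulting a static table.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (PAGE_SIZE - 1)

/* Hypothetical guest-to-host page mapping (illustrative only). */
struct page_map {
	uint64_t guest_pfn;
	uint64_t host_pfn;
};

/* Translate one guest-physical address to a host-physical address,
 * preserving the offset within the 4K page. Returns 0 if unmapped. */
static uint64_t g2h(const struct page_map *map, int n, uint64_t gpa)
{
	for (int i = 0; i < n; i++)
		if (map[i].guest_pfn == (gpa >> PAGE_SHIFT))
			return (map[i].host_pfn << PAGE_SHIFT) | (gpa & PAGE_MASK);
	return 0;
}

/* Simplified CCW: only the data address (cda) matters for this sketch. */
struct ccw {
	uint8_t cmd;
	uint64_t cda;
};

/* Rewrite each CCW's data address from guest to host, conceptually what
 * the cp_ APIs do when assembling a runnable kernel channel program. */
static void translate_chain(struct ccw *chain, int len,
			    const struct page_map *map, int n)
{
	for (int i = 0; i < len; i++)
		chain[i].cda = g2h(map, n, chain[i].cda);
}
```

The page-granular mapping is also why the documentation notes that IDALs are needed even for direct-access CCWs: a guest buffer that is contiguous in guest physical memory may map to scattered host pages.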
@@ -120,6 +120,7 @@ config S390
 select GENERIC_CLOCKEVENTS
 select GENERIC_CPU_AUTOPROBE
 select GENERIC_CPU_DEVICES if !SMP
+select GENERIC_CPU_VULNERABILITIES
 select GENERIC_FIND_FIRST_BIT
 select GENERIC_SMP_IDLE_THREAD
 select GENERIC_TIME_VSYSCALL
@@ -576,7 +577,7 @@ choice
 config EXPOLINE_OFF
 bool "spectre_v2=off"
-config EXPOLINE_MEDIUM
+config EXPOLINE_AUTO
 bool "spectre_v2=auto"
 config EXPOLINE_FULL
......
@@ -47,9 +47,6 @@ cflags-$(CONFIG_MARCH_Z14_TUNE) += -mtune=z14
 cflags-y += -Wa,-I$(srctree)/arch/$(ARCH)/include
-#KBUILD_IMAGE is necessary for make rpm
-KBUILD_IMAGE :=arch/s390/boot/image
 #
 # Prevent tail-call optimizations, to get clearer backtraces:
 #
@@ -84,7 +81,7 @@ ifdef CONFIG_EXPOLINE
 CC_FLAGS_EXPOLINE += -mfunction-return=thunk
 CC_FLAGS_EXPOLINE += -mindirect-branch-table
 export CC_FLAGS_EXPOLINE
-cflags-y += $(CC_FLAGS_EXPOLINE)
+cflags-y += $(CC_FLAGS_EXPOLINE) -DCC_USING_EXPOLINE
 endif
 endif
@@ -126,6 +123,9 @@ tools := arch/s390/tools
 all: image bzImage
+#KBUILD_IMAGE is necessary for packaging targets like rpm-pkg, deb-pkg...
+KBUILD_IMAGE := $(boot)/bzImage
 install: vmlinux
 $(Q)$(MAKE) $(build)=$(boot) $@
......
@@ -29,11 +29,16 @@ LDFLAGS_vmlinux := --oformat $(LD_BFD) -e startup -T
 $(obj)/vmlinux: $(obj)/vmlinux.lds $(OBJECTS)
 $(call if_changed,ld)
-sed-sizes := -e 's/^\([0-9a-fA-F]*\) . \(__bss_start\|_end\)$$/\#define SZ\2 0x\1/p'
+TRIM_HEAD_SIZE := 0x11000
+sed-sizes := -e 's/^\([0-9a-fA-F]*\) . \(__bss_start\|_end\)$$/\#define SZ\2 (0x\1 - $(TRIM_HEAD_SIZE))/p'
 quiet_cmd_sizes = GEN $@
 cmd_sizes = $(NM) $< | sed -n $(sed-sizes) > $@
+quiet_cmd_trim_head = TRIM $@
+cmd_trim_head = tail -c +$$(($(TRIM_HEAD_SIZE) + 1)) $< > $@
 $(obj)/sizes.h: vmlinux
 $(call if_changed,sizes)
@@ -43,10 +48,13 @@ $(obj)/head.o: $(obj)/sizes.h
 CFLAGS_misc.o += -I$(objtree)/$(obj)
 $(obj)/misc.o: $(obj)/sizes.h
-OBJCOPYFLAGS_vmlinux.bin := -R .comment -S
-$(obj)/vmlinux.bin: vmlinux
+OBJCOPYFLAGS_vmlinux.bin.full := -R .comment -S
+$(obj)/vmlinux.bin.full: vmlinux
 $(call if_changed,objcopy)
+$(obj)/vmlinux.bin: $(obj)/vmlinux.bin.full
+$(call if_changed,trim_head)
 vmlinux.bin.all-y := $(obj)/vmlinux.bin
 suffix-$(CONFIG_KERNEL_GZIP) := gz
......
@@ -23,12 +23,10 @@ ENTRY(startup_continue)
 aghi %r15,-160
 brasl %r14,decompress_kernel
 # Set up registers for memory mover. We move the decompressed image to
-# 0x11000, starting at offset 0x11000 in the decompressed image so
-# that code living at 0x11000 in the image will end up at 0x11000 in
-# memory.
+# 0x11000, where startup_continue of the decompressed image is supposed
+# to be.
 lgr %r4,%r2
 lg %r2,.Loffset-.LPG1(%r13)
-la %r4,0(%r2,%r4)
 lg %r3,.Lmvsize-.LPG1(%r13)
 lgr %r5,%r3
 # Move the memory mover someplace safe so it doesn't overwrite itself.
......
@@ -27,8 +27,8 @@
 /* Symbols defined by linker scripts */
 extern char input_data[];
 extern int input_len;
-extern char _text, _end;
-extern char _bss, _ebss;
+extern char _end[];
+extern char _bss[], _ebss[];
 static void error(char *m);
@@ -144,7 +144,7 @@ unsigned long decompress_kernel(void)
 {
 void *output, *kernel_end;
-output = (void *) ALIGN((unsigned long) &_end + HEAP_SIZE, PAGE_SIZE);
+output = (void *) ALIGN((unsigned long) _end + HEAP_SIZE, PAGE_SIZE);
 kernel_end = output + SZ__bss_start;
 check_ipl_parmblock((void *) 0, (unsigned long) kernel_end);
@@ -166,8 +166,8 @@ unsigned long decompress_kernel(void)
 * Clear bss section. free_mem_ptr and free_mem_end_ptr need to be
 * initialized afterwards since they reside in bss.
 */
-memset(&_bss, 0, &_ebss - &_bss);
-free_mem_ptr = (unsigned long) &_end;
+memset(_bss, 0, _ebss - _bss);
+free_mem_ptr = (unsigned long) _end;
 free_mem_end_ptr = free_mem_ptr + HEAP_SIZE;
 __decompress(input_data, input_len, NULL, NULL, output, 0, NULL, error);
......
@@ -52,6 +52,7 @@ SECTIONS
 /* Sections to be discarded */
 /DISCARD/ : {
 *(.eh_frame)
+*(__ex_table)
 *(*__ksymtab*)
 }
 }
@@ -1047,6 +1047,7 @@ static struct aead_alg gcm_aes_aead = {
 static struct crypto_alg *aes_s390_algs_ptr[5];
 static int aes_s390_algs_num;
+static struct aead_alg *aes_s390_aead_alg;
 static int aes_s390_register_alg(struct crypto_alg *alg)
 {
@@ -1065,7 +1066,8 @@ static void aes_s390_fini(void)
 if (ctrblk)
 free_page((unsigned long) ctrblk);
-crypto_unregister_aead(&gcm_aes_aead);
+if (aes_s390_aead_alg)
+crypto_unregister_aead(aes_s390_aead_alg);
 }
 static int __init aes_s390_init(void)
@@ -1123,6 +1125,7 @@ static int __init aes_s390_init(void)
 ret = crypto_register_aead(&gcm_aes_aead);
 if (ret)
 goto out_err;
+aes_s390_aead_alg = &gcm_aes_aead;
 }
 return 0;
......
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_S390_ALTERNATIVE_ASM_H
#define _ASM_S390_ALTERNATIVE_ASM_H
#ifdef __ASSEMBLY__
/*
* Check the length of an instruction sequence. The length may not be larger
* than 254 bytes and it has to be divisible by 2.
*/
.macro alt_len_check start,end
.if ( \end - \start ) > 254
.error "cpu alternatives does not support instructions blocks > 254 bytes\n"
.endif
.if ( \end - \start ) % 2
.error "cpu alternatives instructions length is odd\n"
.endif
.endm
/*
* Issue one struct alt_instr descriptor entry (need to put it into
* the section .altinstructions, see below). This entry contains
* enough information for the alternatives patching code to patch an
* instruction. See apply_alternatives().
*/
.macro alt_entry orig_start, orig_end, alt_start, alt_end, feature
.long \orig_start - .
.long \alt_start - .
.word \feature
.byte \orig_end - \orig_start
.byte \alt_end - \alt_start
.endm
/*
* Fill up @bytes with nops. The macro emits 6-byte nop instructions
* for the bulk of the area, possibly followed by a 4-byte and/or
* a 2-byte nop if the size of the area is not divisible by 6.
*/
.macro alt_pad_fill bytes
.fill ( \bytes ) / 6, 6, 0xc0040000
.fill ( \bytes ) % 6 / 4, 4, 0x47000000
.fill ( \bytes ) % 6 % 4 / 2, 2, 0x0700
.endm
/*
* Fill up @bytes with nops. If the number of bytes is larger
* than 6, emit a jg instruction to branch over all nops, then
* fill an area of size (@bytes - 6) with nop instructions.
*/
.macro alt_pad bytes
.if ( \bytes > 0 )
.if ( \bytes > 6 )
jg . + \bytes
alt_pad_fill \bytes - 6
.else
alt_pad_fill \bytes
.endif
.endif
.endm
/*
* Define an alternative between two instructions. If @feature is
* present, early code in apply_alternatives() replaces @oldinstr with
* @newinstr. ".skip" directive takes care of proper instruction padding
* in case @newinstr is longer than @oldinstr.
*/
.macro ALTERNATIVE oldinstr, newinstr, feature
.pushsection .altinstr_replacement,"ax"
770: \newinstr
771: .popsection
772: \oldinstr
773: alt_len_check 770b, 771b
alt_len_check 772b, 773b
alt_pad ( ( 771b - 770b ) - ( 773b - 772b ) )
774: .pushsection .altinstructions,"a"
alt_entry 772b, 774b, 770b, 771b, \feature
.popsection
.endm
/*
* Define an alternative between two instructions. If @feature is
* present, early code in apply_alternatives() replaces @oldinstr with
* @newinstr. ".skip" directive takes care of proper instruction padding
* in case @newinstr is longer than @oldinstr.
*/
.macro ALTERNATIVE_2 oldinstr, newinstr1, feature1, newinstr2, feature2
.pushsection .altinstr_replacement,"ax"
770: \newinstr1
771: \newinstr2
772: .popsection
773: \oldinstr
774: alt_len_check 770b, 771b
alt_len_check 771b, 772b
alt_len_check 773b, 774b
.if ( 771b - 770b > 772b - 771b )
alt_pad ( ( 771b - 770b ) - ( 774b - 773b ) )
.else
alt_pad ( ( 772b - 771b ) - ( 774b - 773b ) )
.endif
775: .pushsection .altinstructions,"a"
alt_entry 773b, 775b, 770b, 771b,\feature1
alt_entry 773b, 775b, 771b, 772b,\feature2
.popsection
.endm
#endif /* __ASSEMBLY__ */
#endif /* _ASM_S390_ALTERNATIVE_ASM_H */
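The nop-fill arithmetic in `alt_pad_fill` and the branch-over logic in `alt_pad` above can be checked with a small C model. This is a verification sketch, not kernel code; `nop_fill` mirrors the three `.fill` expressions, and `alt_pad_bytes` mirrors the `jg`-plus-fill decision for padding sizes larger than 6 bytes.

```c
#include <assert.h>

/* Mirror the three .fill expressions of alt_pad_fill: bulk 6-byte nops,
 * then at most one 4-byte nop, then at most one 2-byte nop. */
static void nop_fill(int bytes, int *n6, int *n4, int *n2)
{
	*n6 = bytes / 6;
	*n4 = bytes % 6 / 4;
	*n2 = bytes % 6 % 4 / 2;
}

/* Total bytes emitted by alt_pad for a given (even) padding size:
 * a 6-byte jg over the filler when bytes > 6, plus the remaining fill. */
static int alt_pad_bytes(int bytes)
{
	int n6, n4, n2;

	if (bytes <= 0)
		return 0;
	if (bytes <= 6) {
		nop_fill(bytes, &n6, &n4, &n2);
		return 6 * n6 + 4 * n4 + 2 * n2;
	}
	nop_fill(bytes - 6, &n6, &n4, &n2);
	return 6 + 6 * n6 + 4 * n4 + 2 * n2;
}
```

For every even size the emitted bytes exactly equal the requested padding, which is what lets the patched and original instruction sequences occupy the same space.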
@@ -230,5 +230,5 @@ int ccw_device_siosl(struct ccw_device *);
 extern void ccw_device_get_schid(struct ccw_device *, struct subchannel_id *);
-struct channel_path_desc *ccw_device_get_chp_desc(struct ccw_device *, int);
+struct channel_path_desc_fmt0 *ccw_device_get_chp_desc(struct ccw_device *, int);
 #endif /* _S390_CCWDEV_H_ */
@@ -9,7 +9,7 @@
 #include <uapi/asm/chpid.h>
 #include <asm/cio.h>
-struct channel_path_desc {
+struct channel_path_desc_fmt0 {
 u8 flags;
 u8 lsn;
 u8 desc;
......
@@ -227,7 +227,7 @@ struct esw_eadm {
 * a field is valid; a field not being valid is always passed as %0.
 * If a unit check occurred, @ecw may contain sense data; this is retrieved
 * by the common I/O layer itself if the device doesn't support concurrent
-* sense (so that the device driver never needs to perform basic sene itself).
+* sense (so that the device driver never needs to perform basic sense itself).
 * For unsolicited interrupts, the irb is passed as-is (expect for sense data,
 * if applicable).
 */
......
@@ -29,12 +29,12 @@
 /* CPU measurement facility support */
 static inline int cpum_cf_avail(void)
 {
-return MACHINE_HAS_LPP && test_facility(67);
+return test_facility(40) && test_facility(67);
 }
 static inline int cpum_sf_avail(void)
 {
-return MACHINE_HAS_LPP && test_facility(68);
+return test_facility(40) && test_facility(68);
 }
......
@@ -32,8 +32,10 @@ struct css_general_char {
 u32 fcx : 1; /* bit 88 */
 u32 : 19;
 u32 alt_ssi : 1; /* bit 108 */
-u32:1;
-u32 narf:1; /* bit 110 */
+u32 : 1;
+u32 narf : 1; /* bit 110 */
+u32 : 12;
+u32 util_str : 1;/* bit 123 */
 } __packed;
 extern struct css_general_char css_general_characteristics;
......
@@ -6,12 +6,10 @@
 #include <linux/types.h>
-extern int nospec_call_disable;
-extern int nospec_return_disable;
+extern int nospec_disable;
 void nospec_init_branches(void);
-void nospec_call_revert(s32 *start, s32 *end);
-void nospec_return_revert(s32 *start, s32 *end);
+void nospec_revert(s32 *start, s32 *end);
 #endif /* __ASSEMBLY__ */
......
@@ -151,4 +151,7 @@ void vmem_map_init(void);
 void *vmem_crst_alloc(unsigned long val);
 pte_t *vmem_pte_alloc(void);
+unsigned long base_asce_alloc(unsigned long addr, unsigned long num_pages);
+void base_asce_free(unsigned long asce);
 #endif /* _S390_PGALLOC_H */
@@ -390,10 +390,10 @@ static inline int scsw_cmd_is_valid_key(union scsw *scsw)
 }
 /**
-* scsw_cmd_is_valid_sctl - check fctl field validity
+* scsw_cmd_is_valid_sctl - check sctl field validity
 * @scsw: pointer to scsw
 *
-* Return non-zero if the fctl field of the specified command mode scsw is
+* Return non-zero if the sctl field of the specified command mode scsw is
 * valid, zero otherwise.
 */
 static inline int scsw_cmd_is_valid_sctl(union scsw *scsw)
......
@@ -25,7 +25,6 @@
 #define MACHINE_FLAG_DIAG44 _BITUL(6)
 #define MACHINE_FLAG_EDAT1 _BITUL(7)
 #define MACHINE_FLAG_EDAT2 _BITUL(8)
-#define MACHINE_FLAG_LPP _BITUL(9)
 #define MACHINE_FLAG_TOPOLOGY _BITUL(10)
 #define MACHINE_FLAG_TE _BITUL(11)
 #define MACHINE_FLAG_TLB_LC _BITUL(12)
@@ -66,7 +65,6 @@ extern void detect_memory_memblock(void);
 #define MACHINE_HAS_DIAG44 (S390_lowcore.machine_flags & MACHINE_FLAG_DIAG44)
 #define MACHINE_HAS_EDAT1 (S390_lowcore.machine_flags & MACHINE_FLAG_EDAT1)
 #define MACHINE_HAS_EDAT2 (S390_lowcore.machine_flags & MACHINE_FLAG_EDAT2)
-#define MACHINE_HAS_LPP (S390_lowcore.machine_flags & MACHINE_FLAG_LPP)
 #define MACHINE_HAS_TOPOLOGY (S390_lowcore.machine_flags & MACHINE_FLAG_TOPOLOGY)
 #define MACHINE_HAS_TE (S390_lowcore.machine_flags & MACHINE_FLAG_TE)
 #define MACHINE_HAS_TLB_LC (S390_lowcore.machine_flags & MACHINE_FLAG_TLB_LC)
......
@@ -68,25 +68,27 @@ typedef struct dasd_information2_t {
 #define DASD_FORMAT_CDL 2
 /*
 * values to be used for dasd_information_t.features
-* 0x00: default features
-* 0x01: readonly (ro)
-* 0x02: use diag discipline (diag)
-* 0x04: set the device initially online (internal use only)
-* 0x08: enable ERP related logging
-* 0x10: allow I/O to fail on lost paths
-* 0x20: allow I/O to fail when a lock was stolen
-* 0x40: give access to raw eckd data
-* 0x80: enable discard support
+* 0x100: default features
+* 0x001: readonly (ro)
+* 0x002: use diag discipline (diag)
+* 0x004: set the device initially online (internal use only)
+* 0x008: enable ERP related logging
+* 0x010: allow I/O to fail on lost paths
+* 0x020: allow I/O to fail when a lock was stolen
+* 0x040: give access to raw eckd data
+* 0x080: enable discard support
+* 0x100: enable autodisable for IFCC errors (default)
 */
-#define DASD_FEATURE_DEFAULT 0x00
-#define DASD_FEATURE_READONLY 0x01
-#define DASD_FEATURE_USEDIAG 0x02
-#define DASD_FEATURE_INITIAL_ONLINE 0x04
-#define DASD_FEATURE_ERPLOG 0x08
-#define DASD_FEATURE_FAILFAST 0x10
-#define DASD_FEATURE_FAILONSLCK 0x20
-#define DASD_FEATURE_USERAW 0x40
-#define DASD_FEATURE_DISCARD 0x80
+#define DASD_FEATURE_READONLY 0x001
+#define DASD_FEATURE_USEDIAG 0x002
+#define DASD_FEATURE_INITIAL_ONLINE 0x004
+#define DASD_FEATURE_ERPLOG 0x008
+#define DASD_FEATURE_FAILFAST 0x010
+#define DASD_FEATURE_FAILONSLCK 0x020
+#define DASD_FEATURE_USERAW 0x040
+#define DASD_FEATURE_DISCARD 0x080
+#define DASD_FEATURE_PATH_AUTODISABLE 0x100
+#define DASD_FEATURE_DEFAULT DASD_FEATURE_PATH_AUTODISABLE
 #define DASD_PARTN_BITS 2
......
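One consequence of the renumbered feature bits in the hunk above is that `DASD_FEATURE_DEFAULT` is no longer zero, so feature checks must use bitwise masks rather than equality against the default. A minimal sketch (constants copied from the hunk; the helper function is illustrative, not from the kernel):

```c
#include <assert.h>

/* Feature bits as renumbered in the hunk above (three hex digits). */
#define DASD_FEATURE_READONLY         0x001
#define DASD_FEATURE_FAILFAST         0x010
#define DASD_FEATURE_PATH_AUTODISABLE 0x100
#define DASD_FEATURE_DEFAULT          DASD_FEATURE_PATH_AUTODISABLE

/* Check a feature word the way a driver would: with a bitwise AND. */
static int dasd_has_feature(unsigned int features, unsigned int mask)
{
	return (features & mask) != 0;
}
```

With IFCC autodisable folded into the default, a freshly configured device has exactly the `PATH_AUTODISABLE` bit set and nothing else.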
@@ -61,11 +61,11 @@ obj-y += debug.o irq.o ipl.o dis.o diag.o vdso.o als.o
 obj-y += sysinfo.o jump_label.o lgr.o os_info.o machine_kexec.o pgm_check.o
 obj-y += runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o
 obj-y += entry.o reipl.o relocate_kernel.o kdebugfs.o alternative.o
+obj-y += nospec-branch.o
 extra-y += head.o head64.o vmlinux.lds
-obj-$(CONFIG_EXPOLINE) += nospec-branch.o
-CFLAGS_REMOVE_expoline.o += $(CC_FLAGS_EXPOLINE)
+CFLAGS_REMOVE_nospec-branch.o += $(CC_FLAGS_EXPOLINE)
 obj-$(CONFIG_MODULES) += module.o
 obj-$(CONFIG_SMP) += smp.o
......
@@ -2,6 +2,7 @@
 #include <linux/module.h>
 #include <asm/alternative.h>
 #include <asm/facility.h>
+#include <asm/nospec-branch.h>

 #define MAX_PATCH_LEN (255 - 1)

@@ -15,29 +16,6 @@ static int __init disable_alternative_instructions(char *str)

 early_param("noaltinstr", disable_alternative_instructions);

-static int __init nobp_setup_early(char *str)
-{
-	bool enabled;
-	int rc;
-
-	rc = kstrtobool(str, &enabled);
-	if (rc)
-		return rc;
-	if (enabled && test_facility(82))
-		__set_facility(82, S390_lowcore.alt_stfle_fac_list);
-	else
-		__clear_facility(82, S390_lowcore.alt_stfle_fac_list);
-	return 0;
-}
-early_param("nobp", nobp_setup_early);
-
-static int __init nospec_setup_early(char *str)
-{
-	__clear_facility(82, S390_lowcore.alt_stfle_fac_list);
-	return 0;
-}
-early_param("nospec", nospec_setup_early);
-
 struct brcl_insn {
 	u16 opc;
 	s32 disp;
......
@@ -63,6 +63,7 @@ int main(void)
 	OFFSET(__SF_SIE_CONTROL, stack_frame, empty1[0]);
 	OFFSET(__SF_SIE_SAVEAREA, stack_frame, empty1[1]);
 	OFFSET(__SF_SIE_REASON, stack_frame, empty1[2]);
+	OFFSET(__SF_SIE_FLAGS, stack_frame, empty1[3]);
 	BLANK();
 	/* timeval/timezone offsets for use by vdso */
 	OFFSET(__VDSO_UPD_COUNT, vdso_data, tb_update_count);
......
@@ -67,7 +67,7 @@ static noinline __init void init_kernel_storage_key(void)
 #if PAGE_DEFAULT_KEY
 	unsigned long end_pfn, init_pfn;

-	end_pfn = PFN_UP(__pa(&_end));
+	end_pfn = PFN_UP(__pa(_end));

 	for (init_pfn = 0 ; init_pfn < end_pfn; init_pfn++)
 		page_set_storage_key(init_pfn << PAGE_SHIFT,
@@ -242,8 +242,6 @@ static __init void detect_machine_facilities(void)
 		S390_lowcore.machine_flags |= MACHINE_FLAG_EDAT2;
 	if (test_facility(3))
 		S390_lowcore.machine_flags |= MACHINE_FLAG_IDTE;
-	if (test_facility(40))
-		S390_lowcore.machine_flags |= MACHINE_FLAG_LPP;
 	if (test_facility(50) && test_facility(73)) {
 		S390_lowcore.machine_flags |= MACHINE_FLAG_TE;
 		__ctl_set_bit(0, 55);
......
@@ -11,6 +11,7 @@
 #include <linux/init.h>
 #include <linux/linkage.h>
+#include <asm/alternative-asm.h>
 #include <asm/processor.h>
 #include <asm/cache.h>
 #include <asm/ctl_reg.h>
@@ -57,6 +58,8 @@ _CIF_WORK	= (_CIF_MCCK_PENDING | _CIF_ASCE_PRIMARY | \
 		   _CIF_ASCE_SECONDARY | _CIF_FPU)
 _PIF_WORK	= (_PIF_PER_TRAP | _PIF_SYSCALL_RESTART)

+_LPP_OFFSET	= __LC_LPP
+
 #define BASED(name) name-cleanup_critical(%r13)

 	.macro	TRACE_IRQS_ON
@@ -162,65 +165,22 @@ _PIF_WORK	= (_PIF_PER_TRAP | _PIF_SYSCALL_RESTART)
 	.endm

 	.macro	BPOFF
-	.pushsection .altinstr_replacement, "ax"
-660:	.long	0xb2e8c000
-	.popsection
-661:	.long	0x47000000
-	.pushsection .altinstructions, "a"
-	.long 661b - .
-	.long 660b - .
-	.word 82
-	.byte 4
-	.byte 4
-	.popsection
+	ALTERNATIVE "", ".long 0xb2e8c000", 82
 	.endm

 	.macro	BPON
-	.pushsection .altinstr_replacement, "ax"
-662:	.long	0xb2e8d000
-	.popsection
-663:	.long	0x47000000
-	.pushsection .altinstructions, "a"
-	.long 663b - .
-	.long 662b - .
-	.word 82
-	.byte 4
-	.byte 4
-	.popsection
+	ALTERNATIVE "", ".long 0xb2e8d000", 82
 	.endm

 	.macro BPENTER tif_ptr,tif_mask
-	.pushsection .altinstr_replacement, "ax"
-662:	.word	0xc004, 0x0000, 0x0000	# 6 byte nop
-	.word	0xc004, 0x0000, 0x0000	# 6 byte nop
-	.popsection
-664:	TSTMSK	\tif_ptr,\tif_mask
-	jz	. + 8
-	.long	0xb2e8d000
-	.pushsection .altinstructions, "a"
-	.long 664b - .
-	.long 662b - .
-	.word 82
-	.byte 12
-	.byte 12
-	.popsection
+	ALTERNATIVE "TSTMSK \tif_ptr,\tif_mask; jz .+8; .long 0xb2e8d000", \
+		    "", 82
 	.endm

 	.macro BPEXIT tif_ptr,tif_mask
 	TSTMSK	\tif_ptr,\tif_mask
-	.pushsection .altinstr_replacement, "ax"
-662:	jnz	. + 8
-	.long	0xb2e8d000
-	.popsection
-664:	jz	. + 8
-	.long	0xb2e8c000
-	.pushsection .altinstructions, "a"
-	.long 664b - .
-	.long 662b - .
-	.word 82
-	.byte 8
-	.byte 8
-	.popsection
+	ALTERNATIVE "jz .+8; .long 0xb2e8c000", \
+		    "jnz .+8; .long 0xb2e8d000", 82
 	.endm
 #ifdef CONFIG_EXPOLINE
@@ -323,10 +283,8 @@ ENTRY(__switch_to)
 	aghi	%r3,__TASK_pid
 	mvc	__LC_CURRENT_PID(4,%r0),0(%r3)	# store pid of next
 	lmg	%r6,%r15,__SF_GPRS(%r15)	# load gprs of next task
-	TSTMSK	__LC_MACHINE_FLAGS,MACHINE_FLAG_LPP
-	jz	0f
-	.insn	s,0xb2800000,__LC_LPP		# set program parameter
-0:	BR_R1USE_R14
+	ALTERNATIVE "", ".insn s,0xb2800000,_LPP_OFFSET", 40
+	BR_R1USE_R14

 .L__critical_start:
@@ -339,10 +297,10 @@ ENTRY(__switch_to)
 ENTRY(sie64a)
 	stmg	%r6,%r14,__SF_GPRS(%r15)	# save kernel registers
 	lg	%r12,__LC_CURRENT
-	stg	%r2,__SF_EMPTY(%r15)		# save control block pointer
-	stg	%r3,__SF_EMPTY+8(%r15)		# save guest register save area
-	xc	__SF_EMPTY+16(8,%r15),__SF_EMPTY+16(%r15) # reason code = 0
-	mvc	__SF_EMPTY+24(8,%r15),__TI_flags(%r12)	# copy thread flags
+	stg	%r2,__SF_SIE_CONTROL(%r15)	# save control block pointer
+	stg	%r3,__SF_SIE_SAVEAREA(%r15)	# save guest register save area
+	xc	__SF_SIE_REASON(8,%r15),__SF_SIE_REASON(%r15) # reason code = 0
+	mvc	__SF_SIE_FLAGS(8,%r15),__TI_flags(%r12)	# copy thread flags
 	TSTMSK	__LC_CPU_FLAGS,_CIF_FPU		# load guest fp/vx registers ?
 	jno	.Lsie_load_guest_gprs
 	brasl	%r14,load_fpu_regs		# load guest fp/vx regs
@@ -353,18 +311,18 @@ ENTRY(sie64a)
 	jz	.Lsie_gmap
 	lctlg	%c1,%c1,__GMAP_ASCE(%r14)	# load primary asce
 .Lsie_gmap:
-	lg	%r14,__SF_EMPTY(%r15)		# get control block pointer
+	lg	%r14,__SF_SIE_CONTROL(%r15)	# get control block pointer
 	oi	__SIE_PROG0C+3(%r14),1		# we are going into SIE now
 	tm	__SIE_PROG20+3(%r14),3		# last exit...
 	jnz	.Lsie_skip
 	TSTMSK	__LC_CPU_FLAGS,_CIF_FPU
 	jo	.Lsie_skip			# exit if fp/vx regs changed
-	BPEXIT	__SF_EMPTY+24(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
+	BPEXIT	__SF_SIE_FLAGS(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
 .Lsie_entry:
 	sie	0(%r14)
 .Lsie_exit:
 	BPOFF
-	BPENTER	__SF_EMPTY+24(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
+	BPENTER	__SF_SIE_FLAGS(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
 .Lsie_skip:
 	ni	__SIE_PROG0C+3(%r14),0xfe	# no longer in SIE
 	lctlg	%c1,%c1,__LC_USER_ASCE		# load primary asce
@@ -383,7 +341,7 @@ ENTRY(sie64a)
 	nopr	7
 	.globl sie_exit
 sie_exit:
-	lg	%r14,__SF_EMPTY+8(%r15)		# load guest register save area
+	lg	%r14,__SF_SIE_SAVEAREA(%r15)	# load guest register save area
 	stmg	%r0,%r13,0(%r14)		# save guest gprs 0-13
 	xgr	%r0,%r0				# clear guest registers to
 	xgr	%r1,%r1				# prevent speculative use
@@ -392,11 +350,11 @@ sie_exit:
 	xgr	%r4,%r4
 	xgr	%r5,%r5
 	lmg	%r6,%r14,__SF_GPRS(%r15)	# restore kernel registers
-	lg	%r2,__SF_EMPTY+16(%r15)		# return exit reason code
+	lg	%r2,__SF_SIE_REASON(%r15)	# return exit reason code
 	BR_R1USE_R14
 .Lsie_fault:
 	lghi	%r14,-EFAULT
-	stg	%r14,__SF_EMPTY+16(%r15)	# set exit reason code
+	stg	%r14,__SF_SIE_REASON(%r15)	# set exit reason code
 	j	sie_exit

 	EX_TABLE(.Lrewind_pad6,.Lsie_fault)
@@ -685,7 +643,7 @@ ENTRY(pgm_check_handler)
 	slg	%r14,BASED(.Lsie_critical_start)
 	clg	%r14,BASED(.Lsie_critical_length)
 	jhe	0f
-	lg	%r14,__SF_EMPTY(%r15)		# get control block pointer
+	lg	%r14,__SF_SIE_CONTROL(%r15)	# get control block pointer
 	ni	__SIE_PROG0C+3(%r14),0xfe	# no longer in SIE
 	lctlg	%c1,%c1,__LC_USER_ASCE		# load primary asce
 	larl	%r9,sie_exit			# skip forward to sie_exit
@@ -1285,10 +1243,8 @@ ENTRY(mcck_int_handler)
 # PSW restart interrupt handler
 #
 ENTRY(restart_int_handler)
-	TSTMSK	__LC_MACHINE_FLAGS,MACHINE_FLAG_LPP
-	jz	0f
-	.insn	s,0xb2800000,__LC_LPP
-0:	stg	%r15,__LC_SAVE_AREA_RESTART
+	ALTERNATIVE "", ".insn s,0xb2800000,_LPP_OFFSET", 40
+	stg	%r15,__LC_SAVE_AREA_RESTART
 	lg	%r15,__LC_RESTART_STACK
 	aghi	%r15,-__PT_SIZE			# create pt_regs on stack
 	xc	0(__PT_SIZE,%r15),0(%r15)
@@ -1397,8 +1353,8 @@ cleanup_critical:
 	clg	%r9,BASED(.Lsie_crit_mcck_length)
 	jh	1f
 	oi	__LC_CPU_FLAGS+7, _CIF_MCCK_GUEST
-1:	BPENTER __SF_EMPTY+24(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
-	lg	%r9,__SF_EMPTY(%r15)		# get control block pointer
+1:	BPENTER __SF_SIE_FLAGS(%r15),(_TIF_ISOLATE_BP|_TIF_ISOLATE_BP_GUEST)
+	lg	%r9,__SF_SIE_CONTROL(%r15)	# get control block pointer
 	ni	__SIE_PROG0C+3(%r9),0xfe	# no longer in SIE
 	lctlg	%c1,%c1,__LC_USER_ASCE		# load primary asce
 	larl	%r9,sie_exit			# skip forward to sie_exit
......
@@ -159,7 +159,7 @@ int module_frob_arch_sections(Elf_Ehdr *hdr, Elf_Shdr *sechdrs,
 		me->core_layout.size += me->arch.got_size;
 		me->arch.plt_offset = me->core_layout.size;
 		if (me->arch.plt_size) {
-			if (IS_ENABLED(CONFIG_EXPOLINE) && !nospec_call_disable)
+			if (IS_ENABLED(CONFIG_EXPOLINE) && !nospec_disable)
 				me->arch.plt_size += PLT_ENTRY_SIZE;
 			me->core_layout.size += me->arch.plt_size;
 		}
@@ -318,8 +318,7 @@ static int apply_rela(Elf_Rela *rela, Elf_Addr base, Elf_Sym *symtab,
 				info->plt_offset;
 			ip[0] = 0x0d10e310;	/* basr 1,0  */
 			ip[1] = 0x100a0004;	/* lg	1,10(1) */
-			if (IS_ENABLED(CONFIG_EXPOLINE) &&
-			    !nospec_call_disable) {
+			if (IS_ENABLED(CONFIG_EXPOLINE) && !nospec_disable) {
 				unsigned int *ij;
 				ij = me->core_layout.base +
 					me->arch.plt_offset +
@@ -440,7 +439,7 @@ int module_finalize(const Elf_Ehdr *hdr,
 	void *aseg;

 	if (IS_ENABLED(CONFIG_EXPOLINE) &&
-	    !nospec_call_disable && me->arch.plt_size) {
+	    !nospec_disable && me->arch.plt_size) {
 		unsigned int *ij;

 		ij = me->core_layout.base + me->arch.plt_offset +
@@ -467,11 +466,11 @@ int module_finalize(const Elf_Ehdr *hdr,
 		if (IS_ENABLED(CONFIG_EXPOLINE) &&
 		    (!strcmp(".nospec_call_table", secname)))
-			nospec_call_revert(aseg, aseg + s->sh_size);
+			nospec_revert(aseg, aseg + s->sh_size);

 		if (IS_ENABLED(CONFIG_EXPOLINE) &&
 		    (!strcmp(".nospec_return_table", secname)))
-			nospec_return_revert(aseg, aseg + s->sh_size);
+			nospec_revert(aseg, aseg + s->sh_size);
 	}

 	jump_label_apply_nops(me);
......
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/module.h>
+#include <linux/device.h>
 #include <asm/nospec-branch.h>

-int nospec_call_disable = IS_ENABLED(CONFIG_EXPOLINE_OFF);
-int nospec_return_disable = !IS_ENABLED(CONFIG_EXPOLINE_FULL);
+static int __init nobp_setup_early(char *str)
+{
+	bool enabled;
+	int rc;
+
+	rc = kstrtobool(str, &enabled);
+	if (rc)
+		return rc;
+	if (enabled && test_facility(82)) {
+		/*
+		 * The user explicitely requested nobp=1, enable it and
+		 * disable the expoline support.
+		 */
+		__set_facility(82, S390_lowcore.alt_stfle_fac_list);
+		if (IS_ENABLED(CONFIG_EXPOLINE))
+			nospec_disable = 1;
+	} else {
+		__clear_facility(82, S390_lowcore.alt_stfle_fac_list);
+	}
+	return 0;
+}
+early_param("nobp", nobp_setup_early);
+
+static int __init nospec_setup_early(char *str)
+{
+	__clear_facility(82, S390_lowcore.alt_stfle_fac_list);
+	return 0;
+}
+early_param("nospec", nospec_setup_early);
+
+static int __init nospec_report(void)
+{
+	if (IS_ENABLED(CC_USING_EXPOLINE) && !nospec_disable)
+		pr_info("Spectre V2 mitigation: execute trampolines.\n");
+	if (__test_facility(82, S390_lowcore.alt_stfle_fac_list))
+		pr_info("Spectre V2 mitigation: limited branch prediction.\n");
+	return 0;
+}
+arch_initcall(nospec_report);
+
+#ifdef CONFIG_SYSFS
+ssize_t cpu_show_spectre_v1(struct device *dev,
+			    struct device_attribute *attr, char *buf)
+{
+	return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+}
+
+ssize_t cpu_show_spectre_v2(struct device *dev,
+			    struct device_attribute *attr, char *buf)
+{
+	if (IS_ENABLED(CC_USING_EXPOLINE) && !nospec_disable)
+		return sprintf(buf, "Mitigation: execute trampolines\n");
+	if (__test_facility(82, S390_lowcore.alt_stfle_fac_list))
+		return sprintf(buf, "Mitigation: limited branch prediction.\n");
+	return sprintf(buf, "Vulnerable\n");
+}
+#endif
+
+#ifdef CONFIG_EXPOLINE
+
+int nospec_disable = IS_ENABLED(CONFIG_EXPOLINE_OFF);

 static int __init nospectre_v2_setup_early(char *str)
 {
-	nospec_call_disable = 1;
-	nospec_return_disable = 1;
+	nospec_disable = 1;
 	return 0;
 }
 early_param("nospectre_v2", nospectre_v2_setup_early);

+static int __init spectre_v2_auto_early(void)
+{
+	if (IS_ENABLED(CC_USING_EXPOLINE)) {
+		/*
+		 * The kernel has been compiled with expolines.
+		 * Keep expolines enabled and disable nobp.
+		 */
+		nospec_disable = 0;
+		__clear_facility(82, S390_lowcore.alt_stfle_fac_list);
+	}
+	/*
+	 * If the kernel has not been compiled with expolines the
+	 * nobp setting decides what is done, this depends on the
+	 * CONFIG_KERNEL_NP option and the nobp/nospec parameters.
+	 */
+	return 0;
+}
+#ifdef CONFIG_EXPOLINE_AUTO
+early_initcall(spectre_v2_auto_early);
+#endif
+
 static int __init spectre_v2_setup_early(char *str)
 {
 	if (str && !strncmp(str, "on", 2)) {
-		nospec_call_disable = 0;
-		nospec_return_disable = 0;
-	}
-	if (str && !strncmp(str, "off", 3)) {
-		nospec_call_disable = 1;
-		nospec_return_disable = 1;
-	}
-	if (str && !strncmp(str, "auto", 4)) {
-		nospec_call_disable = 0;
-		nospec_return_disable = 1;
+		nospec_disable = 0;
+		__clear_facility(82, S390_lowcore.alt_stfle_fac_list);
 	}
+	if (str && !strncmp(str, "off", 3))
+		nospec_disable = 1;
+	if (str && !strncmp(str, "auto", 4))
+		spectre_v2_auto_early();
 	return 0;
 }
 early_param("spectre_v2", spectre_v2_setup_early);
@@ -79,15 +155,9 @@ static void __init_or_module __nospec_revert(s32 *start, s32 *end)
 	}
 }

-void __init_or_module nospec_call_revert(s32 *start, s32 *end)
-{
-	if (nospec_call_disable)
-		__nospec_revert(start, end);
-}
-
-void __init_or_module nospec_return_revert(s32 *start, s32 *end)
+void __init_or_module nospec_revert(s32 *start, s32 *end)
 {
-	if (nospec_return_disable)
+	if (nospec_disable)
 		__nospec_revert(start, end);
 }

@@ -95,6 +165,8 @@ extern s32 __nospec_call_start[], __nospec_call_end[];
 extern s32 __nospec_return_start[], __nospec_return_end[];

 void __init nospec_init_branches(void)
 {
-	nospec_call_revert(__nospec_call_start, __nospec_call_end);
-	nospec_return_revert(__nospec_return_start, __nospec_return_end);
+	nospec_revert(__nospec_call_start, __nospec_call_end);
+	nospec_revert(__nospec_return_start, __nospec_return_end);
 }
+
+#endif /* CONFIG_EXPOLINE */
@@ -221,6 +221,8 @@ static void __init conmode_default(void)
 		SET_CONSOLE_SCLP;
 #endif
 	}
+	if (IS_ENABLED(CONFIG_VT) && IS_ENABLED(CONFIG_DUMMY_CONSOLE))
+		conswitchp = &dummy_con;
 }
 #ifdef CONFIG_CRASH_DUMP
@@ -413,12 +415,12 @@ static void __init setup_resources(void)
 	struct memblock_region *reg;
 	int j;

-	code_resource.start = (unsigned long) &_text;
-	code_resource.end = (unsigned long) &_etext - 1;
-	data_resource.start = (unsigned long) &_etext;
-	data_resource.end = (unsigned long) &_edata - 1;
-	bss_resource.start = (unsigned long) &__bss_start;
-	bss_resource.end = (unsigned long) &__bss_stop - 1;
+	code_resource.start = (unsigned long) _text;
+	code_resource.end = (unsigned long) _etext - 1;
+	data_resource.start = (unsigned long) _etext;
+	data_resource.end = (unsigned long) _edata - 1;
+	bss_resource.start = (unsigned long) __bss_start;
+	bss_resource.end = (unsigned long) __bss_stop - 1;

 	for_each_memblock(memory, reg) {
 		res = memblock_virt_alloc(sizeof(*res), 8);
@@ -667,7 +669,7 @@ static void __init check_initrd(void)
  */
 static void __init reserve_kernel(void)
 {
-	unsigned long start_pfn = PFN_UP(__pa(&_end));
+	unsigned long start_pfn = PFN_UP(__pa(_end));

 #ifdef CONFIG_DMA_API_DEBUG
 	/*
@@ -888,9 +890,9 @@ void __init setup_arch(char **cmdline_p)

 	/* Is init_mm really needed? */
 	init_mm.start_code = PAGE_OFFSET;
-	init_mm.end_code = (unsigned long) &_etext;
-	init_mm.end_data = (unsigned long) &_edata;
-	init_mm.brk = (unsigned long) &_end;
+	init_mm.end_code = (unsigned long) _etext;
+	init_mm.end_data = (unsigned long) _edata;
+	init_mm.brk = (unsigned long) _end;

 	parse_early_param();
 #ifdef CONFIG_CRASH_DUMP
......
@@ -153,8 +153,8 @@ int pfn_is_nosave(unsigned long pfn)
 {
 	unsigned long nosave_begin_pfn = PFN_DOWN(__pa(&__nosave_begin));
 	unsigned long nosave_end_pfn = PFN_DOWN(__pa(&__nosave_end));
-	unsigned long end_rodata_pfn = PFN_DOWN(__pa(&__end_rodata)) - 1;
-	unsigned long stext_pfn = PFN_DOWN(__pa(&_stext));
+	unsigned long end_rodata_pfn = PFN_DOWN(__pa(__end_rodata)) - 1;
+	unsigned long stext_pfn = PFN_DOWN(__pa(_stext));

 	/* Always save lowcore pages (LC protection might be enabled). */
 	if (pfn <= LC_PAGES)
......
@@ -24,8 +24,8 @@ enum address_markers_idx {

 static struct addr_marker address_markers[] = {
 	[IDENTITY_NR]	  = {0, "Identity Mapping"},
-	[KERNEL_START_NR] = {(unsigned long)&_stext, "Kernel Image Start"},
-	[KERNEL_END_NR]	  = {(unsigned long)&_end, "Kernel Image End"},
+	[KERNEL_START_NR] = {(unsigned long)_stext, "Kernel Image Start"},
+	[KERNEL_END_NR]	  = {(unsigned long)_end, "Kernel Image End"},
 	[VMEMMAP_NR]	  = {0, "vmemmap Area"},
 	[VMALLOC_NR]	  = {0, "vmalloc Area"},
 	[MODULES_NR]	  = {0, "Modules Area"},
......
@@ -6,8 +6,9 @@
  *    Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
  */

-#include <linux/mm.h>
 #include <linux/sysctl.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
 #include <asm/mmu_context.h>
 #include <asm/pgalloc.h>
 #include <asm/gmap.h>
@@ -366,3 +367,293 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
 	if ((*batch)->nr == MAX_TABLE_BATCH)
 		tlb_flush_mmu(tlb);
 }
/*
* Base infrastructure required to generate basic asces, region, segment,
* and page tables that do not make use of enhanced features like EDAT1.
*/
static struct kmem_cache *base_pgt_cache;
static unsigned long base_pgt_alloc(void)
{
u64 *table;
table = kmem_cache_alloc(base_pgt_cache, GFP_KERNEL);
if (table)
memset64(table, _PAGE_INVALID, PTRS_PER_PTE);
return (unsigned long) table;
}
static void base_pgt_free(unsigned long table)
{
kmem_cache_free(base_pgt_cache, (void *) table);
}
static unsigned long base_crst_alloc(unsigned long val)
{
unsigned long table;
table = __get_free_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
if (table)
crst_table_init((unsigned long *)table, val);
return table;
}
static void base_crst_free(unsigned long table)
{
free_pages(table, CRST_ALLOC_ORDER);
}
#define BASE_ADDR_END_FUNC(NAME, SIZE) \
static inline unsigned long base_##NAME##_addr_end(unsigned long addr, \
unsigned long end) \
{ \
unsigned long next = (addr + (SIZE)) & ~((SIZE) - 1); \
\
return (next - 1) < (end - 1) ? next : end; \
}
BASE_ADDR_END_FUNC(page, _PAGE_SIZE)
BASE_ADDR_END_FUNC(segment, _SEGMENT_SIZE)
BASE_ADDR_END_FUNC(region3, _REGION3_SIZE)
BASE_ADDR_END_FUNC(region2, _REGION2_SIZE)
BASE_ADDR_END_FUNC(region1, _REGION1_SIZE)
static inline unsigned long base_lra(unsigned long address)
{
unsigned long real;
asm volatile(
" lra %0,0(%1)\n"
: "=d" (real) : "a" (address) : "cc");
return real;
}
static int base_page_walk(unsigned long origin, unsigned long addr,
unsigned long end, int alloc)
{
unsigned long *pte, next;
if (!alloc)
return 0;
pte = (unsigned long *) origin;
pte += (addr & _PAGE_INDEX) >> _PAGE_SHIFT;
do {
next = base_page_addr_end(addr, end);
*pte = base_lra(addr);
} while (pte++, addr = next, addr < end);
return 0;
}
static int base_segment_walk(unsigned long origin, unsigned long addr,
unsigned long end, int alloc)
{
unsigned long *ste, next, table;
int rc;
ste = (unsigned long *) origin;
ste += (addr & _SEGMENT_INDEX) >> _SEGMENT_SHIFT;
do {
next = base_segment_addr_end(addr, end);
if (*ste & _SEGMENT_ENTRY_INVALID) {
if (!alloc)
continue;
table = base_pgt_alloc();
if (!table)
return -ENOMEM;
*ste = table | _SEGMENT_ENTRY;
}
table = *ste & _SEGMENT_ENTRY_ORIGIN;
rc = base_page_walk(table, addr, next, alloc);
if (rc)
return rc;
if (!alloc)
base_pgt_free(table);
cond_resched();
} while (ste++, addr = next, addr < end);
return 0;
}
static int base_region3_walk(unsigned long origin, unsigned long addr,
unsigned long end, int alloc)
{
unsigned long *rtte, next, table;
int rc;
rtte = (unsigned long *) origin;
rtte += (addr & _REGION3_INDEX) >> _REGION3_SHIFT;
do {
next = base_region3_addr_end(addr, end);
if (*rtte & _REGION_ENTRY_INVALID) {
if (!alloc)
continue;
table = base_crst_alloc(_SEGMENT_ENTRY_EMPTY);
if (!table)
return -ENOMEM;
*rtte = table | _REGION3_ENTRY;
}
table = *rtte & _REGION_ENTRY_ORIGIN;
rc = base_segment_walk(table, addr, next, alloc);
if (rc)
return rc;
if (!alloc)
base_crst_free(table);
} while (rtte++, addr = next, addr < end);
return 0;
}
static int base_region2_walk(unsigned long origin, unsigned long addr,
unsigned long end, int alloc)
{
unsigned long *rste, next, table;
int rc;
rste = (unsigned long *) origin;
rste += (addr & _REGION2_INDEX) >> _REGION2_SHIFT;
do {
next = base_region2_addr_end(addr, end);
if (*rste & _REGION_ENTRY_INVALID) {
if (!alloc)
continue;
table = base_crst_alloc(_REGION3_ENTRY_EMPTY);
if (!table)
return -ENOMEM;
*rste = table | _REGION2_ENTRY;
}
table = *rste & _REGION_ENTRY_ORIGIN;
rc = base_region3_walk(table, addr, next, alloc);
if (rc)
return rc;
if (!alloc)
base_crst_free(table);
} while (rste++, addr = next, addr < end);
return 0;
}
static int base_region1_walk(unsigned long origin, unsigned long addr,
unsigned long end, int alloc)
{
unsigned long *rfte, next, table;
int rc;
rfte = (unsigned long *) origin;
rfte += (addr & _REGION1_INDEX) >> _REGION1_SHIFT;
do {
next = base_region1_addr_end(addr, end);
if (*rfte & _REGION_ENTRY_INVALID) {
if (!alloc)
continue;
table = base_crst_alloc(_REGION2_ENTRY_EMPTY);
if (!table)
return -ENOMEM;
*rfte = table | _REGION1_ENTRY;
}
table = *rfte & _REGION_ENTRY_ORIGIN;
rc = base_region2_walk(table, addr, next, alloc);
if (rc)
return rc;
if (!alloc)
base_crst_free(table);
} while (rfte++, addr = next, addr < end);
return 0;
}
/**
* base_asce_free - free asce and tables returned from base_asce_alloc()
* @asce: asce to be freed
*
* Frees all region, segment, and page tables that were allocated with a
* corresponding base_asce_alloc() call.
*/
void base_asce_free(unsigned long asce)
{
unsigned long table = asce & _ASCE_ORIGIN;
if (!asce)
return;
switch (asce & _ASCE_TYPE_MASK) {
case _ASCE_TYPE_SEGMENT:
base_segment_walk(table, 0, _REGION3_SIZE, 0);
break;
case _ASCE_TYPE_REGION3:
base_region3_walk(table, 0, _REGION2_SIZE, 0);
break;
case _ASCE_TYPE_REGION2:
base_region2_walk(table, 0, _REGION1_SIZE, 0);
break;
case _ASCE_TYPE_REGION1:
base_region1_walk(table, 0, -_PAGE_SIZE, 0);
break;
}
base_crst_free(table);
}
static int base_pgt_cache_init(void)
{
static DEFINE_MUTEX(base_pgt_cache_mutex);
unsigned long sz = _PAGE_TABLE_SIZE;
if (base_pgt_cache)
return 0;
mutex_lock(&base_pgt_cache_mutex);
if (!base_pgt_cache)
base_pgt_cache = kmem_cache_create("base_pgt", sz, sz, 0, NULL);
mutex_unlock(&base_pgt_cache_mutex);
return base_pgt_cache ? 0 : -ENOMEM;
}
/**
* base_asce_alloc - create kernel mapping without enhanced DAT features
* @addr: virtual start address of kernel mapping
* @num_pages: number of consecutive pages
*
* Generate an asce, including all required region, segment and page tables,
* that can be used to access the virtual kernel mapping. The difference is
* that the returned asce does not make use of any enhanced DAT features like
* e.g. large pages. This is required for some I/O functions that pass an
* asce, like e.g. some service call requests.
*
* Note: the returned asce may NEVER be attached to any cpu. It may only be
* used for I/O requests. tlb entries that might result because the
* asce was attached to a cpu won't be cleared.
*/
unsigned long base_asce_alloc(unsigned long addr, unsigned long num_pages)
{
unsigned long asce, table, end;
int rc;
if (base_pgt_cache_init())
return 0;
end = addr + num_pages * PAGE_SIZE;
if (end <= _REGION3_SIZE) {
table = base_crst_alloc(_SEGMENT_ENTRY_EMPTY);
if (!table)
return 0;
rc = base_segment_walk(table, addr, end, 1);
asce = table | _ASCE_TYPE_SEGMENT | _ASCE_TABLE_LENGTH;
} else if (end <= _REGION2_SIZE) {
table = base_crst_alloc(_REGION3_ENTRY_EMPTY);
if (!table)
return 0;
rc = base_region3_walk(table, addr, end, 1);
asce = table | _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
} else if (end <= _REGION1_SIZE) {
table = base_crst_alloc(_REGION2_ENTRY_EMPTY);
if (!table)
return 0;
rc = base_region2_walk(table, addr, end, 1);
asce = table | _ASCE_TYPE_REGION2 | _ASCE_TABLE_LENGTH;
} else {
table = base_crst_alloc(_REGION1_ENTRY_EMPTY);
if (!table)
return 0;
rc = base_region1_walk(table, addr, end, 1);
asce = table | _ASCE_TYPE_REGION1 | _ASCE_TABLE_LENGTH;
}
if (rc) {
base_asce_free(asce);
asce = 0;
}
return asce;
}
@@ -3918,8 +3918,13 @@ static int dasd_generic_requeue_all_requests(struct dasd_device *device)
 			cqr = refers;
 		}

-		if (cqr->block)
-			list_del_init(&cqr->blocklist);
+		/*
+		 * _dasd_requeue_request already checked for a valid
+		 * blockdevice, no need to check again
+		 * all erp requests (cqr->refers) have a cqr->block
+		 * pointer copy from the original cqr
+		 */
+		list_del_init(&cqr->blocklist);
 		cqr->block->base->discipline->free_cp(
 			cqr, (struct request *) cqr->callback_data);
 	}
......
@@ -2214,15 +2214,28 @@ static void dasd_3990_erp_disable_path(struct dasd_device *device, __u8 lpum)
 {
 	int pos = pathmask_to_pos(lpum);

+	if (!(device->features & DASD_FEATURE_PATH_AUTODISABLE)) {
+		dev_err(&device->cdev->dev,
+			"Path %x.%02x (pathmask %02x) is operational despite excessive IFCCs\n",
+			device->path[pos].cssid, device->path[pos].chpid, lpum);
+		goto out;
+	}
+
 	/* no remaining path, cannot disable */
-	if (!(dasd_path_get_opm(device) & ~lpum))
-		return;
+	if (!(dasd_path_get_opm(device) & ~lpum)) {
+		dev_err(&device->cdev->dev,
+			"Last path %x.%02x (pathmask %02x) is operational despite excessive IFCCs\n",
+			device->path[pos].cssid, device->path[pos].chpid, lpum);
+		goto out;
+	}

 	dev_err(&device->cdev->dev,
 		"Path %x.%02x (pathmask %02x) is disabled - IFCC threshold exceeded\n",
 		device->path[pos].cssid, device->path[pos].chpid, lpum);
 	dasd_path_remove_opm(device, lpum);
 	dasd_path_add_ifccpm(device, lpum);
+out:
 	device->path[pos].errorclk = 0;
 	atomic_set(&device->path[pos].error_count, 0);
 }
@@ -1550,9 +1550,49 @@ dasd_path_threshold_store(struct device *dev, struct device_attribute *attr,
 	dasd_put_device(device);
 	return count;
 }
 
 static DEVICE_ATTR(path_threshold, 0644, dasd_path_threshold_show,
 		   dasd_path_threshold_store);
+
+/*
+ * configure if path is disabled after IFCC/CCC error threshold is
+ * exceeded
+ */
+static ssize_t
+dasd_path_autodisable_show(struct device *dev,
+			   struct device_attribute *attr, char *buf)
+{
+	struct dasd_devmap *devmap;
+	int flag;
+
+	devmap = dasd_find_busid(dev_name(dev));
+	if (!IS_ERR(devmap))
+		flag = (devmap->features & DASD_FEATURE_PATH_AUTODISABLE) != 0;
+	else
+		flag = (DASD_FEATURE_DEFAULT &
+			DASD_FEATURE_PATH_AUTODISABLE) != 0;
+	return snprintf(buf, PAGE_SIZE, flag ? "1\n" : "0\n");
+}
+
+static ssize_t
+dasd_path_autodisable_store(struct device *dev,
+			    struct device_attribute *attr,
+			    const char *buf, size_t count)
+{
+	unsigned int val;
+	int rc;
+
+	if (kstrtouint(buf, 0, &val) || val > 1)
+		return -EINVAL;
+
+	rc = dasd_set_feature(to_ccwdev(dev),
+			      DASD_FEATURE_PATH_AUTODISABLE, val);
+
+	return rc ? : count;
+}
+
+static DEVICE_ATTR(path_autodisable, 0644,
+		   dasd_path_autodisable_show,
+		   dasd_path_autodisable_store);
 /*
  * interval for IFCC/CCC checks
  * meaning time with no IFCC/CCC error before the error counter
@@ -1623,6 +1663,7 @@ static struct attribute * dasd_attrs[] = {
 	&dev_attr_host_access_count.attr,
 	&dev_attr_path_masks.attr,
 	&dev_attr_path_threshold.attr,
+	&dev_attr_path_autodisable.attr,
 	&dev_attr_path_interval.attr,
 	&dev_attr_path_reset.attr,
 	&dev_attr_hpf.attr,
@@ -214,24 +214,25 @@ static void set_ch_t(struct ch_t *geo, __u32 cyl, __u8 head)
 	geo->head |= head;
 }
 
-static int check_XRC(struct ccw1 *ccw, struct DE_eckd_data *data,
-		     struct dasd_device *device)
+static int set_timestamp(struct ccw1 *ccw, struct DE_eckd_data *data,
+			 struct dasd_device *device)
 {
 	struct dasd_eckd_private *private = device->private;
 	int rc;
 
-	if (!private->rdc_data.facilities.XRC_supported)
+	rc = get_phys_clock(&data->ep_sys_time);
+	/*
+	 * Ignore return code if XRC is not supported or
+	 * sync clock is switched off
+	 */
+	if ((rc && !private->rdc_data.facilities.XRC_supported) ||
+	    rc == -EOPNOTSUPP || rc == -EACCES)
 		return 0;
 
 	/* switch on System Time Stamp - needed for XRC Support */
 	data->ga_extended |= 0x08; /* switch on 'Time Stamp Valid' */
 	data->ga_extended |= 0x02; /* switch on 'Extended Parameter' */
 
-	rc = get_phys_clock(&data->ep_sys_time);
-	/* Ignore return code if sync clock is switched off. */
-	if (rc == -EOPNOTSUPP || rc == -EACCES)
-		rc = 0;
 	if (ccw) {
 		ccw->count = sizeof(struct DE_eckd_data);
 		ccw->flags |= CCW_FLAG_SLI;
@@ -286,12 +287,12 @@ define_extent(struct ccw1 *ccw, struct DE_eckd_data *data, unsigned int trk,
 	case DASD_ECKD_CCW_WRITE_KD_MT:
 		data->mask.perm = 0x02;
 		data->attributes.operation = private->attrib.operation;
-		rc = check_XRC(ccw, data, device);
+		rc = set_timestamp(ccw, data, device);
 		break;
 	case DASD_ECKD_CCW_WRITE_CKD:
 	case DASD_ECKD_CCW_WRITE_CKD_MT:
 		data->attributes.operation = DASD_BYPASS_CACHE;
-		rc = check_XRC(ccw, data, device);
+		rc = set_timestamp(ccw, data, device);
 		break;
 	case DASD_ECKD_CCW_ERASE:
 	case DASD_ECKD_CCW_WRITE_HOME_ADDRESS:
@@ -299,7 +300,7 @@ define_extent(struct ccw1 *ccw, struct DE_eckd_data *data, unsigned int trk,
 		data->mask.perm = 0x3;
 		data->mask.auth = 0x1;
 		data->attributes.operation = DASD_BYPASS_CACHE;
-		rc = check_XRC(ccw, data, device);
+		rc = set_timestamp(ccw, data, device);
 		break;
 	case DASD_ECKD_CCW_WRITE_FULL_TRACK:
 		data->mask.perm = 0x03;
@@ -310,7 +311,7 @@ define_extent(struct ccw1 *ccw, struct DE_eckd_data *data, unsigned int trk,
 		data->mask.perm = 0x02;
 		data->attributes.operation = private->attrib.operation;
 		data->blk_size = blksize;
-		rc = check_XRC(ccw, data, device);
+		rc = set_timestamp(ccw, data, device);
 		break;
 	default:
 		dev_err(&device->cdev->dev,
@@ -993,7 +994,7 @@ static int dasd_eckd_read_conf(struct dasd_device *device)
 	struct dasd_eckd_private *private, path_private;
 	struct dasd_uid *uid;
 	char print_path_uid[60], print_device_uid[60];
-	struct channel_path_desc *chp_desc;
+	struct channel_path_desc_fmt0 *chp_desc;
 	struct subchannel_id sch_id;
 
 	private = device->private;
@@ -3440,7 +3441,7 @@ static int prepare_itcw(struct itcw *itcw,
 	dedata->mask.perm = 0x02;
 	dedata->attributes.operation = basepriv->attrib.operation;
 	dedata->blk_size = blksize;
-	rc = check_XRC(NULL, dedata, basedev);
+	rc = set_timestamp(NULL, dedata, basedev);
 	dedata->ga_extended |= 0x42;
 	lredata->operation.orientation = 0x0;
 	lredata->operation.operation = 0x3F;
@@ -23,7 +23,7 @@ CFLAGS_REMOVE_sclp_early_core.o += $(CC_FLAGS_EXPOLINE)
 obj-y += ctrlchar.o keyboard.o defkeymap.o sclp.o sclp_rw.o sclp_quiesce.o \
 	 sclp_cmd.o sclp_config.o sclp_cpi_sys.o sclp_ocf.o sclp_ctl.o \
-	 sclp_early.o sclp_early_core.o
+	 sclp_early.o sclp_early_core.o sclp_sd.o
 
 obj-$(CONFIG_TN3270) += raw3270.o
 obj-$(CONFIG_TN3270_CONSOLE) += con3270.o
@@ -9,7 +9,9 @@
 #include <linux/kbd_kern.h>
 #include <linux/kbd_diacr.h>
+#include "keyboard.h"
 
-u_short plain_map[NR_KEYS] = {
+u_short ebc_plain_map[NR_KEYS] = {
 	0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000,
 	0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000,
 	0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000, 0xf000,
@@ -85,12 +87,12 @@ static u_short shift_ctrl_map[NR_KEYS] = {
 	0xf20a, 0xf108, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200, 0xf200,
 };
 
-ushort *key_maps[MAX_NR_KEYMAPS] = {
-	plain_map, shift_map, NULL, NULL,
+ushort *ebc_key_maps[MAX_NR_KEYMAPS] = {
+	ebc_plain_map, shift_map, NULL, NULL,
 	ctrl_map, shift_ctrl_map, NULL,
 };
 
-unsigned int keymap_count = 4;
+unsigned int ebc_keymap_count = 4;
 
 /*
@@ -99,7 +101,7 @@ unsigned int keymap_count = 4;
  * the default and allocate dynamically in chunks of 512 bytes.
  */
 
-char func_buf[] = {
+char ebc_func_buf[] = {
 	'\033', '[', '[', 'A', 0,
 	'\033', '[', '[', 'B', 0,
 	'\033', '[', '[', 'C', 0,
@@ -123,37 +125,37 @@ char func_buf[] = {
 };
 
-char *funcbufptr = func_buf;
-int funcbufsize = sizeof(func_buf);
-int funcbufleft = 0; /* space left */
+char *ebc_funcbufptr = ebc_func_buf;
+int ebc_funcbufsize = sizeof(ebc_func_buf);
+int ebc_funcbufleft; /* space left */
 
-char *func_table[MAX_NR_FUNC] = {
-	func_buf + 0,
-	func_buf + 5,
-	func_buf + 10,
-	func_buf + 15,
-	func_buf + 20,
-	func_buf + 25,
-	func_buf + 31,
-	func_buf + 37,
-	func_buf + 43,
-	func_buf + 49,
-	func_buf + 55,
-	func_buf + 61,
-	func_buf + 67,
-	func_buf + 73,
-	func_buf + 79,
-	func_buf + 85,
-	func_buf + 91,
-	func_buf + 97,
-	func_buf + 103,
-	func_buf + 109,
+char *ebc_func_table[MAX_NR_FUNC] = {
+	ebc_func_buf + 0,
+	ebc_func_buf + 5,
+	ebc_func_buf + 10,
+	ebc_func_buf + 15,
+	ebc_func_buf + 20,
+	ebc_func_buf + 25,
+	ebc_func_buf + 31,
+	ebc_func_buf + 37,
+	ebc_func_buf + 43,
+	ebc_func_buf + 49,
+	ebc_func_buf + 55,
+	ebc_func_buf + 61,
+	ebc_func_buf + 67,
+	ebc_func_buf + 73,
+	ebc_func_buf + 79,
+	ebc_func_buf + 85,
+	ebc_func_buf + 91,
+	ebc_func_buf + 97,
+	ebc_func_buf + 103,
+	ebc_func_buf + 109,
 	NULL,
 };
 
-struct kbdiacruc accent_table[MAX_DIACR] = {
+struct kbdiacruc ebc_accent_table[MAX_DIACR] = {
 	{'^', 'c', 0003}, {'^', 'd', 0004},
 	{'^', 'z', 0032}, {'^', 0012, 0000},
 };
 
-unsigned int accent_table_size = 4;
+unsigned int ebc_accent_table_size = 4;
@@ -54,24 +54,24 @@ kbd_alloc(void) {
 	kbd = kzalloc(sizeof(struct kbd_data), GFP_KERNEL);
 	if (!kbd)
 		goto out;
-	kbd->key_maps = kzalloc(sizeof(key_maps), GFP_KERNEL);
+	kbd->key_maps = kzalloc(sizeof(ebc_key_maps), GFP_KERNEL);
 	if (!kbd->key_maps)
 		goto out_kbd;
-	for (i = 0; i < ARRAY_SIZE(key_maps); i++) {
-		if (key_maps[i]) {
-			kbd->key_maps[i] = kmemdup(key_maps[i],
+	for (i = 0; i < ARRAY_SIZE(ebc_key_maps); i++) {
+		if (ebc_key_maps[i]) {
+			kbd->key_maps[i] = kmemdup(ebc_key_maps[i],
 						   sizeof(u_short) * NR_KEYS,
 						   GFP_KERNEL);
 			if (!kbd->key_maps[i])
 				goto out_maps;
 		}
 	}
-	kbd->func_table = kzalloc(sizeof(func_table), GFP_KERNEL);
+	kbd->func_table = kzalloc(sizeof(ebc_func_table), GFP_KERNEL);
 	if (!kbd->func_table)
 		goto out_maps;
-	for (i = 0; i < ARRAY_SIZE(func_table); i++) {
-		if (func_table[i]) {
-			kbd->func_table[i] = kstrdup(func_table[i],
+	for (i = 0; i < ARRAY_SIZE(ebc_func_table); i++) {
+		if (ebc_func_table[i]) {
+			kbd->func_table[i] = kstrdup(ebc_func_table[i],
 						     GFP_KERNEL);
 			if (!kbd->func_table[i])
 				goto out_func;
@@ -81,22 +81,22 @@ kbd_alloc(void) {
 		kzalloc(sizeof(fn_handler_fn *) * NR_FN_HANDLER, GFP_KERNEL);
 	if (!kbd->fn_handler)
 		goto out_func;
-	kbd->accent_table = kmemdup(accent_table,
+	kbd->accent_table = kmemdup(ebc_accent_table,
 				    sizeof(struct kbdiacruc) * MAX_DIACR,
 				    GFP_KERNEL);
 	if (!kbd->accent_table)
 		goto out_fn_handler;
-	kbd->accent_table_size = accent_table_size;
+	kbd->accent_table_size = ebc_accent_table_size;
 	return kbd;
 
 out_fn_handler:
 	kfree(kbd->fn_handler);
 out_func:
-	for (i = 0; i < ARRAY_SIZE(func_table); i++)
+	for (i = 0; i < ARRAY_SIZE(ebc_func_table); i++)
 		kfree(kbd->func_table[i]);
 	kfree(kbd->func_table);
 out_maps:
-	for (i = 0; i < ARRAY_SIZE(key_maps); i++)
+	for (i = 0; i < ARRAY_SIZE(ebc_key_maps); i++)
 		kfree(kbd->key_maps[i]);
 	kfree(kbd->key_maps);
 out_kbd:
@@ -112,10 +112,10 @@ kbd_free(struct kbd_data *kbd)
 	kfree(kbd->accent_table);
 	kfree(kbd->fn_handler);
-	for (i = 0; i < ARRAY_SIZE(func_table); i++)
+	for (i = 0; i < ARRAY_SIZE(ebc_func_table); i++)
 		kfree(kbd->func_table[i]);
 	kfree(kbd->func_table);
-	for (i = 0; i < ARRAY_SIZE(key_maps); i++)
+	for (i = 0; i < ARRAY_SIZE(ebc_key_maps); i++)
 		kfree(kbd->key_maps[i]);
 	kfree(kbd->key_maps);
 	kfree(kbd);
@@ -131,7 +131,7 @@ kbd_ascebc(struct kbd_data *kbd, unsigned char *ascebc)
 	int i, j, k;
 
 	memset(ascebc, 0x40, 256);
-	for (i = 0; i < ARRAY_SIZE(key_maps); i++) {
+	for (i = 0; i < ARRAY_SIZE(ebc_key_maps); i++) {
 		keymap = kbd->key_maps[i];
 		if (!keymap)
 			continue;
@@ -158,7 +158,7 @@ kbd_ebcasc(struct kbd_data *kbd, unsigned char *ebcasc)
 	int i, j, k;
 
 	memset(ebcasc, ' ', 256);
-	for (i = 0; i < ARRAY_SIZE(key_maps); i++) {
+	for (i = 0; i < ARRAY_SIZE(ebc_key_maps); i++) {
 		keymap = kbd->key_maps[i];
 		if (!keymap)
 			continue;
@@ -14,6 +14,17 @@
 struct kbd_data;
 
+extern int ebc_funcbufsize, ebc_funcbufleft;
+extern char *ebc_func_table[MAX_NR_FUNC];
+extern char ebc_func_buf[];
+extern char *ebc_funcbufptr;
+extern unsigned int ebc_keymap_count;
+extern struct kbdiacruc ebc_accent_table[];
+extern unsigned int ebc_accent_table_size;
+extern unsigned short *ebc_key_maps[MAX_NR_KEYMAPS];
+extern unsigned short ebc_plain_map[NR_KEYS];
+
 typedef void (fn_handler_fn)(struct kbd_data *);
 
 /*
@@ -417,7 +417,7 @@ sclp_dispatch_evbufs(struct sccb_header *sccb)
 		reg = NULL;
 		list_for_each(l, &sclp_reg_list) {
 			reg = list_entry(l, struct sclp_register, list);
-			if (reg->receive_mask & (1 << (32 - evbuf->type)))
+			if (reg->receive_mask & SCLP_EVTYP_MASK(evbuf->type))
 				break;
 			else
 				reg = NULL;
@@ -618,9 +618,12 @@ struct sclp_statechangebuf {
 	u16		_zeros : 12;
 	u16		mask_length;
 	u64		sclp_active_facility_mask;
-	sccb_mask_t	sclp_receive_mask;
-	sccb_mask_t	sclp_send_mask;
-	u32		read_data_function_mask;
+	u8		masks[2 * 1021 + 4];	/* variable length */
+	/*
+	 * u8		sclp_receive_mask[mask_length];
+	 * u8		sclp_send_mask[mask_length];
+	 * u32		read_data_function_mask;
+	 */
 } __attribute__((packed));
@@ -631,14 +634,14 @@ sclp_state_change_cb(struct evbuf_header *evbuf)
 	unsigned long flags;
 	struct sclp_statechangebuf *scbuf;
 
+	BUILD_BUG_ON(sizeof(struct sclp_statechangebuf) > PAGE_SIZE);
+
 	scbuf = (struct sclp_statechangebuf *) evbuf;
-	if (scbuf->mask_length != sizeof(sccb_mask_t))
-		return;
 	spin_lock_irqsave(&sclp_lock, flags);
 	if (scbuf->validity_sclp_receive_mask)
-		sclp_receive_mask = scbuf->sclp_receive_mask;
+		sclp_receive_mask = sccb_get_recv_mask(scbuf);
 	if (scbuf->validity_sclp_send_mask)
-		sclp_send_mask = scbuf->sclp_send_mask;
+		sclp_send_mask = sccb_get_send_mask(scbuf);
 	spin_unlock_irqrestore(&sclp_lock, flags);
 	if (scbuf->validity_sclp_active_facility_mask)
 		sclp.facilities = scbuf->sclp_active_facility_mask;
@@ -748,7 +751,7 @@ EXPORT_SYMBOL(sclp_remove_processed);
 
 /* Prepare init mask request. Called while sclp_lock is locked. */
 static inline void
-__sclp_make_init_req(u32 receive_mask, u32 send_mask)
+__sclp_make_init_req(sccb_mask_t receive_mask, sccb_mask_t send_mask)
 {
 	struct init_sccb *sccb;
@@ -761,12 +764,15 @@ __sclp_make_init_req(u32 receive_mask, u32 send_mask)
 	sclp_init_req.callback = NULL;
 	sclp_init_req.callback_data = NULL;
 	sclp_init_req.sccb = sccb;
-	sccb->header.length = sizeof(struct init_sccb);
-	sccb->mask_length = sizeof(sccb_mask_t);
-	sccb->receive_mask = receive_mask;
-	sccb->send_mask = send_mask;
-	sccb->sclp_receive_mask = 0;
-	sccb->sclp_send_mask = 0;
+	sccb->header.length = sizeof(*sccb);
+	if (sclp_mask_compat_mode)
+		sccb->mask_length = SCLP_MASK_SIZE_COMPAT;
+	else
+		sccb->mask_length = sizeof(sccb_mask_t);
+	sccb_set_recv_mask(sccb, receive_mask);
+	sccb_set_send_mask(sccb, send_mask);
+	sccb_set_sclp_recv_mask(sccb, 0);
+	sccb_set_sclp_send_mask(sccb, 0);
 }
 
 /* Start init mask request. If calculate is non-zero, calculate the mask as
@@ -822,8 +828,8 @@ sclp_init_mask(int calculate)
 		    sccb->header.response_code == 0x20) {
 			/* Successful request */
 			if (calculate) {
-				sclp_receive_mask = sccb->sclp_receive_mask;
-				sclp_send_mask = sccb->sclp_send_mask;
+				sclp_receive_mask = sccb_get_sclp_recv_mask(sccb);
+				sclp_send_mask = sccb_get_sclp_send_mask(sccb);
 			} else {
 				sclp_receive_mask = 0;
 				sclp_send_mask = 0;
@@ -974,12 +980,18 @@ sclp_check_interface(void)
 		irq_subclass_unregister(IRQ_SUBCLASS_SERVICE_SIGNAL);
 		spin_lock_irqsave(&sclp_lock, flags);
 		del_timer(&sclp_request_timer);
-		if (sclp_init_req.status == SCLP_REQ_DONE &&
-		    sccb->header.response_code == 0x20) {
-			rc = 0;
-			break;
-		} else
-			rc = -EBUSY;
+		rc = -EBUSY;
+		if (sclp_init_req.status == SCLP_REQ_DONE) {
+			if (sccb->header.response_code == 0x20) {
+				rc = 0;
+				break;
+			} else if (sccb->header.response_code == 0x74f0) {
+				if (!sclp_mask_compat_mode) {
+					sclp_mask_compat_mode = true;
+					retry = 0;
+				}
+			}
+		}
 	}
 	unregister_external_irq(EXT_IRQ_SERVICE_SIG, sclp_check_handler);
 	spin_unlock_irqrestore(&sclp_lock, flags);
@@ -18,7 +18,7 @@
 #define MAX_KMEM_PAGES (sizeof(unsigned long) << 3)
 #define SCLP_CONSOLE_PAGES	6
 
-#define SCLP_EVTYP_MASK(T) (1U << (32 - (T)))
+#define SCLP_EVTYP_MASK(T) (1UL << (sizeof(sccb_mask_t) * BITS_PER_BYTE - (T)))
 
 #define EVTYP_OPCMD		0x01
 #define EVTYP_MSG		0x02
@@ -28,6 +28,7 @@
 #define EVTYP_PMSGCMD		0x09
 #define EVTYP_ASYNC		0x0A
 #define EVTYP_CTLPROGIDENT	0x0B
+#define EVTYP_STORE_DATA	0x0C
 #define EVTYP_ERRNOTIFY		0x18
 #define EVTYP_VT220MSG		0x1A
 #define EVTYP_SDIAS		0x1C
@@ -42,6 +43,7 @@
 #define EVTYP_PMSGCMD_MASK	SCLP_EVTYP_MASK(EVTYP_PMSGCMD)
 #define EVTYP_ASYNC_MASK	SCLP_EVTYP_MASK(EVTYP_ASYNC)
 #define EVTYP_CTLPROGIDENT_MASK	SCLP_EVTYP_MASK(EVTYP_CTLPROGIDENT)
+#define EVTYP_STORE_DATA_MASK	SCLP_EVTYP_MASK(EVTYP_STORE_DATA)
 #define EVTYP_ERRNOTIFY_MASK	SCLP_EVTYP_MASK(EVTYP_ERRNOTIFY)
 #define EVTYP_VT220MSG_MASK	SCLP_EVTYP_MASK(EVTYP_VT220MSG)
 #define EVTYP_SDIAS_MASK	SCLP_EVTYP_MASK(EVTYP_SDIAS)
@@ -85,7 +87,7 @@ enum sclp_pm_event {
 #define SCLP_PANIC_PRIO		1
 #define SCLP_PANIC_PRIO_CLIENT	0
 
-typedef u32 sccb_mask_t;	/* ATTENTION: assumes 32bit mask !!! */
+typedef u64 sccb_mask_t;
 
 struct sccb_header {
 	u16	length;
@@ -98,12 +100,53 @@ struct init_sccb {
 	struct sccb_header header;
 	u16 _reserved;
 	u16 mask_length;
-	sccb_mask_t receive_mask;
-	sccb_mask_t send_mask;
-	sccb_mask_t sclp_receive_mask;
-	sccb_mask_t sclp_send_mask;
+	u8 masks[4 * 1021];	/* variable length */
+	/*
+	 * u8 receive_mask[mask_length];
+	 * u8 send_mask[mask_length];
+	 * u8 sclp_receive_mask[mask_length];
+	 * u8 sclp_send_mask[mask_length];
+	 */
 } __attribute__((packed));
+
+#define SCLP_MASK_SIZE_COMPAT	4
+
+static inline sccb_mask_t sccb_get_mask(u8 *masks, size_t len, int i)
+{
+	sccb_mask_t res = 0;
+
+	memcpy(&res, masks + i * len, min(sizeof(res), len));
+	return res;
+}
+
+static inline void sccb_set_mask(u8 *masks, size_t len, int i, sccb_mask_t val)
+{
+	memset(masks + i * len, 0, len);
+	memcpy(masks + i * len, &val, min(sizeof(val), len));
+}
+
+#define sccb_get_generic_mask(sccb, i)					\
+({									\
+	__typeof__(sccb) __sccb = sccb;					\
+									\
+	sccb_get_mask(__sccb->masks, __sccb->mask_length, i);		\
+})
+#define sccb_get_recv_mask(sccb)	sccb_get_generic_mask(sccb, 0)
+#define sccb_get_send_mask(sccb)	sccb_get_generic_mask(sccb, 1)
+#define sccb_get_sclp_recv_mask(sccb)	sccb_get_generic_mask(sccb, 2)
+#define sccb_get_sclp_send_mask(sccb)	sccb_get_generic_mask(sccb, 3)
+
+#define sccb_set_generic_mask(sccb, i, val)				\
+({									\
+	__typeof__(sccb) __sccb = sccb;					\
+									\
+	sccb_set_mask(__sccb->masks, __sccb->mask_length, i, val);	\
+})
+#define sccb_set_recv_mask(sccb, val)	sccb_set_generic_mask(sccb, 0, val)
+#define sccb_set_send_mask(sccb, val)	sccb_set_generic_mask(sccb, 1, val)
+#define sccb_set_sclp_recv_mask(sccb, val) sccb_set_generic_mask(sccb, 2, val)
+#define sccb_set_sclp_send_mask(sccb, val) sccb_set_generic_mask(sccb, 3, val)
+
 struct read_cpu_info_sccb {
 	struct sccb_header	header;
 	u16	nr_configured;
@@ -221,15 +264,17 @@ extern int sclp_init_state;
 extern int sclp_console_pages;
 extern int sclp_console_drop;
 extern unsigned long sclp_console_full;
+extern bool sclp_mask_compat_mode;
 extern char sclp_early_sccb[PAGE_SIZE];
 
 void sclp_early_wait_irq(void);
 int sclp_early_cmd(sclp_cmdw_t cmd, void *sccb);
 unsigned int sclp_early_con_check_linemode(struct init_sccb *sccb);
+unsigned int sclp_early_con_check_vt220(struct init_sccb *sccb);
 int sclp_early_set_event_mask(struct init_sccb *sccb,
-			      unsigned long receive_mask,
-			      unsigned long send_mask);
+			      sccb_mask_t receive_mask,
+			      sccb_mask_t send_mask);
 
 /* useful inlines */
@@ -249,7 +249,7 @@ static void __init sclp_early_console_detect(struct init_sccb *sccb)
 	if (sccb->header.response_code != 0x20)
 		return;
 
-	if (sccb->sclp_send_mask & EVTYP_VT220MSG_MASK)
+	if (sclp_early_con_check_vt220(sccb))
 		sclp.has_vt220 = 1;
 
 	if (sclp_early_con_check_linemode(sccb))
@@ -14,6 +14,11 @@
 char sclp_early_sccb[PAGE_SIZE] __aligned(PAGE_SIZE) __section(.data);
 int sclp_init_state __section(.data) = sclp_init_state_uninitialized;
+/*
+ * Used to keep track of the size of the event masks. Qemu until version 2.11
+ * only supports 4 and needs a workaround.
+ */
+bool sclp_mask_compat_mode;
 
 void sclp_early_wait_irq(void)
 {
@@ -142,16 +147,24 @@ static void sclp_early_print_vt220(const char *str, unsigned int len)
 }
 
 int sclp_early_set_event_mask(struct init_sccb *sccb,
-			      unsigned long receive_mask,
-			      unsigned long send_mask)
+			      sccb_mask_t receive_mask,
+			      sccb_mask_t send_mask)
 {
+retry:
 	memset(sccb, 0, sizeof(*sccb));
 	sccb->header.length = sizeof(*sccb);
-	sccb->mask_length = sizeof(sccb_mask_t);
-	sccb->receive_mask = receive_mask;
-	sccb->send_mask = send_mask;
+	if (sclp_mask_compat_mode)
+		sccb->mask_length = SCLP_MASK_SIZE_COMPAT;
+	else
+		sccb->mask_length = sizeof(sccb_mask_t);
+	sccb_set_recv_mask(sccb, receive_mask);
+	sccb_set_send_mask(sccb, send_mask);
 	if (sclp_early_cmd(SCLP_CMDW_WRITE_EVENT_MASK, sccb))
 		return -EIO;
+	if ((sccb->header.response_code == 0x74f0) && !sclp_mask_compat_mode) {
+		sclp_mask_compat_mode = true;
+		goto retry;
+	}
 	if (sccb->header.response_code != 0x20)
 		return -EIO;
 	return 0;
@@ -159,19 +172,28 @@ int sclp_early_set_event_mask(struct init_sccb *sccb,
 
 unsigned int sclp_early_con_check_linemode(struct init_sccb *sccb)
 {
-	if (!(sccb->sclp_send_mask & EVTYP_OPCMD_MASK))
+	if (!(sccb_get_sclp_send_mask(sccb) & EVTYP_OPCMD_MASK))
 		return 0;
-	if (!(sccb->sclp_receive_mask & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK)))
+	if (!(sccb_get_sclp_recv_mask(sccb) & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK)))
 		return 0;
 	return 1;
 }
 
+unsigned int sclp_early_con_check_vt220(struct init_sccb *sccb)
+{
+	if (sccb_get_sclp_send_mask(sccb) & EVTYP_VT220MSG_MASK)
+		return 1;
+	return 0;
+}
+
 static int sclp_early_setup(int disable, int *have_linemode, int *have_vt220)
 {
 	unsigned long receive_mask, send_mask;
 	struct init_sccb *sccb;
 	int rc;
 
+	BUILD_BUG_ON(sizeof(struct init_sccb) > PAGE_SIZE);
+
 	*have_linemode = *have_vt220 = 0;
 	sccb = (struct init_sccb *) &sclp_early_sccb;
 	receive_mask = disable ? 0 : EVTYP_OPCMD_MASK;
@@ -180,7 +202,7 @@ static int sclp_early_setup(int disable, int *have_linemode, int *have_vt220)
 	if (rc)
 		return rc;
 	*have_linemode = sclp_early_con_check_linemode(sccb);
-	*have_vt220 = sccb->send_mask & EVTYP_VT220MSG_MASK;
+	*have_vt220 = !!(sccb_get_send_mask(sccb) & EVTYP_VT220MSG_MASK);
 	return rc;
 }
@@ -502,7 +502,10 @@ sclp_tty_init(void)
 	int i;
 	int rc;
 
-	if (!CONSOLE_IS_SCLP)
+	/* z/VM multiplexes the line mode output on the 32xx screen */
+	if (MACHINE_IS_VM && !CONSOLE_IS_SCLP)
+		return 0;
+	if (!sclp.has_linemode)
 		return 0;
 	driver = alloc_tty_driver(1);
 	if (!driver)
@@ -384,6 +384,28 @@ static ssize_t chp_chid_external_show(struct device *dev,
 }
 static DEVICE_ATTR(chid_external, 0444, chp_chid_external_show, NULL);
 
+static ssize_t util_string_read(struct file *filp, struct kobject *kobj,
+				struct bin_attribute *attr, char *buf,
+				loff_t off, size_t count)
+{
+	struct channel_path *chp = to_channelpath(kobj_to_dev(kobj));
+	ssize_t rc;
+
+	mutex_lock(&chp->lock);
+	rc = memory_read_from_buffer(buf, count, &off, chp->desc_fmt3.util_str,
+				     sizeof(chp->desc_fmt3.util_str));
+	mutex_unlock(&chp->lock);
+
+	return rc;
+}
+static BIN_ATTR_RO(util_string,
+		   sizeof(((struct channel_path_desc_fmt3 *)0)->util_str));
+
+static struct bin_attribute *chp_bin_attrs[] = {
+	&bin_attr_util_string,
+	NULL,
+};
+
 static struct attribute *chp_attrs[] = {
 	&dev_attr_status.attr,
 	&dev_attr_configure.attr,
@@ -396,6 +418,7 @@ static struct attribute *chp_attrs[] = {
 };
 static struct attribute_group chp_attr_group = {
 	.attrs = chp_attrs,
+	.bin_attrs = chp_bin_attrs,
 };
 static const struct attribute_group *chp_attr_groups[] = {
 	&chp_attr_group,
@@ -422,7 +445,7 @@ int chp_update_desc(struct channel_path *chp)
 {
 	int rc;
-	rc = chsc_determine_base_channel_path_desc(chp->chpid, &chp->desc);
+	rc = chsc_determine_fmt0_channel_path_desc(chp->chpid, &chp->desc);
 	if (rc)
 		return rc;
@@ -431,6 +454,7 @@ int chp_update_desc(struct channel_path *chp)
 	 * hypervisors implement the required chsc commands.
 	 */
 	chsc_determine_fmt1_channel_path_desc(chp->chpid, &chp->desc_fmt1);
+	chsc_determine_fmt3_channel_path_desc(chp->chpid, &chp->desc_fmt3);
 	chsc_get_channel_measurement_chars(chp);
 	return 0;
@@ -506,20 +530,20 @@ int chp_new(struct chp_id chpid)
  * On success return a newly allocated copy of the channel-path description
  * data associated with the given channel-path ID. Return %NULL on error.
  */
-struct channel_path_desc *chp_get_chp_desc(struct chp_id chpid)
+struct channel_path_desc_fmt0 *chp_get_chp_desc(struct chp_id chpid)
 {
 	struct channel_path *chp;
-	struct channel_path_desc *desc;
+	struct channel_path_desc_fmt0 *desc;
 	chp = chpid_to_chp(chpid);
 	if (!chp)
 		return NULL;
-	desc = kmalloc(sizeof(struct channel_path_desc), GFP_KERNEL);
+	desc = kmalloc(sizeof(*desc), GFP_KERNEL);
 	if (!desc)
 		return NULL;
 	mutex_lock(&chp->lock);
-	memcpy(desc, &chp->desc, sizeof(struct channel_path_desc));
+	memcpy(desc, &chp->desc, sizeof(*desc));
 	mutex_unlock(&chp->lock);
 	return desc;
 }
......
@@ -44,8 +44,9 @@ struct channel_path {
 	struct chp_id chpid;
 	struct mutex lock; /* Serialize access to below members. */
 	int state;
-	struct channel_path_desc desc;
+	struct channel_path_desc_fmt0 desc;
 	struct channel_path_desc_fmt1 desc_fmt1;
+	struct channel_path_desc_fmt3 desc_fmt3;
 	/* Channel-measurement related stuff: */
 	int cmg;
 	int shared;
@@ -61,7 +62,7 @@ static inline struct channel_path *chpid_to_chp(struct chp_id chpid)
 int chp_get_status(struct chp_id chpid);
 u8 chp_get_sch_opm(struct subchannel *sch);
 int chp_is_registered(struct chp_id chpid);
-struct channel_path_desc *chp_get_chp_desc(struct chp_id chpid);
+struct channel_path_desc_fmt0 *chp_get_chp_desc(struct chp_id chpid);
 void chp_remove_cmg_attr(struct channel_path *chp);
 int chp_add_cmg_attr(struct channel_path *chp);
 int chp_update_desc(struct channel_path *chp);
......
@@ -915,6 +915,8 @@ int chsc_determine_channel_path_desc(struct chp_id chpid, int fmt, int rfmt,
 		return -EINVAL;
 	if ((rfmt == 2) && !css_general_characteristics.cib)
 		return -EINVAL;
+	if ((rfmt == 3) && !css_general_characteristics.util_str)
+		return -EINVAL;
 	memset(page, 0, PAGE_SIZE);
 	scpd_area = page;
@@ -940,43 +942,30 @@ int chsc_determine_channel_path_desc(struct chp_id chpid, int fmt, int rfmt,
 }
 EXPORT_SYMBOL_GPL(chsc_determine_channel_path_desc);
-int chsc_determine_base_channel_path_desc(struct chp_id chpid,
-					  struct channel_path_desc *desc)
-{
-	struct chsc_scpd *scpd_area;
-	unsigned long flags;
-	int ret;
-
-	spin_lock_irqsave(&chsc_page_lock, flags);
-	scpd_area = chsc_page;
-	ret = chsc_determine_channel_path_desc(chpid, 0, 0, 0, 0, scpd_area);
-	if (ret)
-		goto out;
-
-	memcpy(desc, scpd_area->data, sizeof(*desc));
-out:
-	spin_unlock_irqrestore(&chsc_page_lock, flags);
-	return ret;
-}
-
-int chsc_determine_fmt1_channel_path_desc(struct chp_id chpid,
-					  struct channel_path_desc_fmt1 *desc)
-{
-	struct chsc_scpd *scpd_area;
-	unsigned long flags;
-	int ret;
-
-	spin_lock_irqsave(&chsc_page_lock, flags);
-	scpd_area = chsc_page;
-	ret = chsc_determine_channel_path_desc(chpid, 0, 1, 1, 0, scpd_area);
-	if (ret)
-		goto out;
-
-	memcpy(desc, scpd_area->data, sizeof(*desc));
-out:
-	spin_unlock_irqrestore(&chsc_page_lock, flags);
-	return ret;
-}
+#define chsc_det_chp_desc(FMT, c)					\
+int chsc_determine_fmt##FMT##_channel_path_desc(			\
+	struct chp_id chpid, struct channel_path_desc_fmt##FMT *desc)	\
+{									\
+	struct chsc_scpd *scpd_area;					\
+	unsigned long flags;						\
+	int ret;							\
+									\
+	spin_lock_irqsave(&chsc_page_lock, flags);			\
+	scpd_area = chsc_page;						\
+	ret = chsc_determine_channel_path_desc(chpid, 0, FMT, c, 0,	\
+					       scpd_area);		\
+	if (ret)							\
+		goto out;						\
+									\
+	memcpy(desc, scpd_area->data, sizeof(*desc));			\
+out:									\
+	spin_unlock_irqrestore(&chsc_page_lock, flags);			\
+	return ret;							\
+}
+
+chsc_det_chp_desc(0, 0)
+chsc_det_chp_desc(1, 1)
+chsc_det_chp_desc(3, 0)
 static void
 chsc_initialize_cmg_chars(struct channel_path *chp, u8 cmcv,
......
@@ -40,6 +40,11 @@ struct channel_path_desc_fmt1 {
 	u32 zeros[2];
 } __attribute__ ((packed));
+struct channel_path_desc_fmt3 {
+	struct channel_path_desc_fmt1 fmt1_desc;
+	u8 util_str[64];
+};
+
 struct channel_path;
 struct css_chsc_char {
@@ -147,10 +152,12 @@ int __chsc_do_secm(struct channel_subsystem *css, int enable);
 int chsc_chp_vary(struct chp_id chpid, int on);
 int chsc_determine_channel_path_desc(struct chp_id chpid, int fmt, int rfmt,
 				     int c, int m, void *page);
-int chsc_determine_base_channel_path_desc(struct chp_id chpid,
-					  struct channel_path_desc *desc);
+int chsc_determine_fmt0_channel_path_desc(struct chp_id chpid,
+					  struct channel_path_desc_fmt0 *desc);
 int chsc_determine_fmt1_channel_path_desc(struct chp_id chpid,
 					  struct channel_path_desc_fmt1 *desc);
+int chsc_determine_fmt3_channel_path_desc(struct chp_id chpid,
+					  struct channel_path_desc_fmt3 *desc);
 void chsc_chp_online(struct chp_id chpid);
 void chsc_chp_offline(struct chp_id chpid);
 int chsc_get_channel_measurement_chars(struct channel_path *chp);
......
@@ -1073,8 +1073,7 @@ static int io_subchannel_probe(struct subchannel *sch)
 	return 0;
 }
-static int
-io_subchannel_remove (struct subchannel *sch)
+static int io_subchannel_remove(struct subchannel *sch)
 {
 	struct io_subchannel_private *io_priv = to_io_private(sch);
 	struct ccw_device *cdev;
@@ -1082,14 +1081,12 @@ io_subchannel_remove (struct subchannel *sch)
 	cdev = sch_get_cdev(sch);
 	if (!cdev)
 		goto out_free;
-	io_subchannel_quiesce(sch);
-	/* Set ccw device to not operational and drop reference. */
-	spin_lock_irq(cdev->ccwlock);
+
+	ccw_device_unregister(cdev);
+	spin_lock_irq(sch->lock);
 	sch_set_cdev(sch, NULL);
 	set_io_private(sch, NULL);
-	cdev->private->state = DEV_STATE_NOT_OPER;
-	spin_unlock_irq(cdev->ccwlock);
-	ccw_device_unregister(cdev);
+	spin_unlock_irq(sch->lock);
 out_free:
 	kfree(io_priv);
 	sysfs_remove_group(&sch->dev.kobj, &io_subchannel_attr_group);
@@ -1721,6 +1718,7 @@ static int ccw_device_remove(struct device *dev)
 {
 	struct ccw_device *cdev = to_ccwdev(dev);
 	struct ccw_driver *cdrv = cdev->drv;
+	struct subchannel *sch;
 	int ret;
 	if (cdrv->remove)
@@ -1746,7 +1744,9 @@ static int ccw_device_remove(struct device *dev)
 	ccw_device_set_timeout(cdev, 0);
 	cdev->drv = NULL;
 	cdev->private->int_class = IRQIO_CIO;
+	sch = to_subchannel(cdev->dev.parent);
 	spin_unlock_irq(cdev->ccwlock);
+	io_subchannel_quiesce(sch);
 	__disable_cmf(cdev);
 	return 0;
......
@@ -460,8 +460,8 @@ __u8 ccw_device_get_path_mask(struct ccw_device *cdev)
  * On success return a newly allocated copy of the channel-path description
  * data associated with the given channel path. Return %NULL on error.
  */
-struct channel_path_desc *ccw_device_get_chp_desc(struct ccw_device *cdev,
-						  int chp_idx)
+struct channel_path_desc_fmt0 *ccw_device_get_chp_desc(struct ccw_device *cdev,
+						       int chp_idx)
 {
 	struct subchannel *sch;
 	struct chp_id chpid;
......
@@ -98,22 +98,6 @@ static inline int do_siga_output(unsigned long schid, unsigned long mask,
 	return cc;
 }
-static inline int qdio_check_ccq(struct qdio_q *q, unsigned int ccq)
-{
-	/* all done or next buffer state different */
-	if (ccq == 0 || ccq == 32)
-		return 0;
-	/* no buffer processed */
-	if (ccq == 97)
-		return 1;
-	/* not all buffers processed */
-	if (ccq == 96)
-		return 2;
-	/* notify devices immediately */
-	DBF_ERROR("%4x ccq:%3d", SCH_NO(q), ccq);
-	return -EIO;
-}
 /**
  * qdio_do_eqbs - extract buffer states for QEBSM
  * @q: queue to manipulate
@@ -128,7 +112,7 @@ static inline int qdio_check_ccq(struct qdio_q *q, unsigned int ccq)
 static int qdio_do_eqbs(struct qdio_q *q, unsigned char *state,
 			int start, int count, int auto_ack)
 {
-	int rc, tmp_count = count, tmp_start = start, nr = q->nr, retried = 0;
+	int tmp_count = count, tmp_start = start, nr = q->nr;
 	unsigned int ccq = 0;
 	qperf_inc(q, eqbs);
@@ -138,34 +122,30 @@ static int qdio_do_eqbs(struct qdio_q *q, unsigned char *state,
 again:
 	ccq = do_eqbs(q->irq_ptr->sch_token, state, nr, &tmp_start, &tmp_count,
 		      auto_ack);
-	rc = qdio_check_ccq(q, ccq);
-	if (!rc)
-		return count - tmp_count;
-	if (rc == 1) {
-		DBF_DEV_EVENT(DBF_WARN, q->irq_ptr, "EQBS again:%2d", ccq);
-		goto again;
-	}
-	if (rc == 2) {
+	switch (ccq) {
+	case 0:
+	case 32:
+		/* all done, or next buffer state different */
+		return count - tmp_count;
+	case 96:
+		/* not all buffers processed */
 		qperf_inc(q, eqbs_partial);
 		DBF_DEV_EVENT(DBF_WARN, q->irq_ptr, "EQBS part:%02x",
 			      tmp_count);
-		/*
-		 * Retry once, if that fails bail out and process the
-		 * extracted buffers before trying again.
-		 */
-		if (!retried++)
-			goto again;
-		else
-			return count - tmp_count;
+		return count - tmp_count;
+	case 97:
+		/* no buffer processed */
+		DBF_DEV_EVENT(DBF_WARN, q->irq_ptr, "EQBS again:%2d", ccq);
+		goto again;
+	default:
+		DBF_ERROR("%4x ccq:%3d", SCH_NO(q), ccq);
+		DBF_ERROR("%4x EQBS ERROR", SCH_NO(q));
+		DBF_ERROR("%3d%3d%2d", count, tmp_count, nr);
+		q->handler(q->irq_ptr->cdev, QDIO_ERROR_GET_BUF_STATE, q->nr,
+			   q->first_to_kick, count, q->irq_ptr->int_parm);
+		return 0;
 	}
-	DBF_ERROR("%4x EQBS ERROR", SCH_NO(q));
-	DBF_ERROR("%3d%3d%2d", count, tmp_count, nr);
-	q->handler(q->irq_ptr->cdev, QDIO_ERROR_GET_BUF_STATE,
-		   q->nr, q->first_to_kick, count, q->irq_ptr->int_parm);
-	return 0;
 }
 /**
@@ -185,7 +165,6 @@ static int qdio_do_sqbs(struct qdio_q *q, unsigned char state, int start,
 	unsigned int ccq = 0;
 	int tmp_count = count, tmp_start = start;
 	int nr = q->nr;
-	int rc;
 	if (!count)
 		return 0;
@@ -195,26 +174,32 @@ static int qdio_do_sqbs(struct qdio_q *q, unsigned char state, int start,
 		nr += q->irq_ptr->nr_input_qs;
 again:
 	ccq = do_sqbs(q->irq_ptr->sch_token, state, nr, &tmp_start, &tmp_count);
-	rc = qdio_check_ccq(q, ccq);
-	if (!rc) {
+
+	switch (ccq) {
+	case 0:
+	case 32:
+		/* all done, or active buffer adapter-owned */
 		WARN_ON_ONCE(tmp_count);
 		return count - tmp_count;
-	}
-
-	if (rc == 1 || rc == 2) {
+	case 96:
+		/* not all buffers processed */
 		DBF_DEV_EVENT(DBF_INFO, q->irq_ptr, "SQBS again:%2d", ccq);
 		qperf_inc(q, sqbs_partial);
 		goto again;
+	default:
+		DBF_ERROR("%4x ccq:%3d", SCH_NO(q), ccq);
+		DBF_ERROR("%4x SQBS ERROR", SCH_NO(q));
+		DBF_ERROR("%3d%3d%2d", count, tmp_count, nr);
+		q->handler(q->irq_ptr->cdev, QDIO_ERROR_SET_BUF_STATE, q->nr,
+			   q->first_to_kick, count, q->irq_ptr->int_parm);
+		return 0;
 	}
-	DBF_ERROR("%4x SQBS ERROR", SCH_NO(q));
-	DBF_ERROR("%3d%3d%2d", count, tmp_count, nr);
-	q->handler(q->irq_ptr->cdev, QDIO_ERROR_SET_BUF_STATE,
-		   q->nr, q->first_to_kick, count, q->irq_ptr->int_parm);
-	return 0;
 }
-/* returns number of examined buffers and their common state in *state */
+/*
+ * Returns number of examined buffers and their common state in *state.
+ * Requested number of buffers-to-examine must be > 0.
+ */
 static inline int get_buf_states(struct qdio_q *q, unsigned int bufnr,
 				 unsigned char *state, unsigned int count,
 				 int auto_ack, int merge_pending)
@@ -225,17 +210,23 @@ static inline int get_buf_states(struct qdio_q *q, unsigned int bufnr,
 	if (is_qebsm(q))
 		return qdio_do_eqbs(q, state, bufnr, count, auto_ack);
-	for (i = 0; i < count; i++) {
-		if (!__state) {
-			__state = q->slsb.val[bufnr];
-			if (merge_pending && __state == SLSB_P_OUTPUT_PENDING)
-				__state = SLSB_P_OUTPUT_EMPTY;
-		} else if (merge_pending) {
-			if ((q->slsb.val[bufnr] & __state) != __state)
-				break;
-		} else if (q->slsb.val[bufnr] != __state)
-			break;
+	/* get initial state: */
+	__state = q->slsb.val[bufnr];
+	if (merge_pending && __state == SLSB_P_OUTPUT_PENDING)
+		__state = SLSB_P_OUTPUT_EMPTY;
+
+	for (i = 1; i < count; i++) {
 		bufnr = next_buf(bufnr);
+
+		/* merge PENDING into EMPTY: */
+		if (merge_pending &&
+		    q->slsb.val[bufnr] == SLSB_P_OUTPUT_PENDING &&
+		    __state == SLSB_P_OUTPUT_EMPTY)
+			continue;
+
+		/* stop if next state differs from initial state: */
+		if (q->slsb.val[bufnr] != __state)
+			break;
 	}
 	*state = __state;
 	return i;
@@ -502,8 +493,8 @@ static inline void inbound_primed(struct qdio_q *q, int count)
 static int get_inbound_buffer_frontier(struct qdio_q *q)
 {
-	int count, stop;
 	unsigned char state = 0;
+	int count;
 	q->timestamp = get_tod_clock_fast();
@@ -512,9 +503,7 @@ static int get_inbound_buffer_frontier(struct qdio_q *q)
 	 * would return 0.
 	 */
 	count = min(atomic_read(&q->nr_buf_used), QDIO_MAX_BUFFERS_MASK);
-	stop = add_buf(q->first_to_check, count);
-	if (q->first_to_check == stop)
+	if (!count)
 		goto out;
@@ -734,8 +723,8 @@ void qdio_inbound_processing(unsigned long data)
 static int get_outbound_buffer_frontier(struct qdio_q *q)
 {
-	int count, stop;
 	unsigned char state = 0;
+	int count;
 	q->timestamp = get_tod_clock_fast();
@@ -751,11 +740,11 @@ static int get_outbound_buffer_frontier(struct qdio_q *q)
 	 * would return 0.
 	 */
 	count = min(atomic_read(&q->nr_buf_used), QDIO_MAX_BUFFERS_MASK);
-	stop = add_buf(q->first_to_check, count);
-	if (q->first_to_check == stop)
+	if (!count)
 		goto out;
-	count = get_buf_states(q, q->first_to_check, &state, count, 0, 1);
+	count = get_buf_states(q, q->first_to_check, &state, count, 0,
+			       q->u.out.use_cq);
 	if (!count)
 		goto out;
......
@@ -124,6 +124,11 @@ static void fsm_io_request(struct vfio_ccw_private *private,
 	if (scsw->cmd.fctl & SCSW_FCTL_START_FUNC) {
 		orb = (union orb *)io_region->orb_area;
+		/* Don't try to build a cp if transport mode is specified. */
+		if (orb->tm.b) {
+			io_region->ret_code = -EOPNOTSUPP;
+			goto err_out;
+		}
 		io_region->ret_code = cp_init(&private->cp, mdev_dev(mdev),
 					      orb);
 		if (io_region->ret_code)
......
@@ -1369,7 +1369,7 @@ static void qeth_set_multiple_write_queues(struct qeth_card *card)
 static void qeth_update_from_chp_desc(struct qeth_card *card)
 {
 	struct ccw_device *ccwdev;
-	struct channel_path_desc *chp_dsc;
+	struct channel_path_desc_fmt0 *chp_dsc;
 	QETH_DBF_TEXT(SETUP, 2, "chp_desc");
......
@@ -11,7 +11,7 @@ if TTY
 config VT
 	bool "Virtual terminal" if EXPERT
-	depends on !S390 && !UML
+	depends on !UML
 	select INPUT
 	default y
 	---help---
......
@@ -3,7 +3,8 @@
 #
 menu "Graphics support"
-	depends on HAS_IOMEM
+
+if HAS_IOMEM
 config HAVE_FB_ATMEL
 	bool
@@ -36,6 +37,8 @@ config VIDEOMODE_HELPERS
 config HDMI
 	bool
+endif # HAS_IOMEM
+
 if VT
 source "drivers/video/console/Kconfig"
 endif
......
@@ -8,7 +8,7 @@ config VGA_CONSOLE
 	bool "VGA text console" if EXPERT || !X86
 	depends on !4xx && !PPC_8xx && !SPARC && !M68K && !PARISC && !SUPERH && \
 		(!ARM || ARCH_FOOTBRIDGE || ARCH_INTEGRATOR || ARCH_NETWINDER) && \
-		!ARM64 && !ARC && !MICROBLAZE && !OPENRISC && !NDS32
+		!ARM64 && !ARC && !MICROBLAZE && !OPENRISC && !NDS32 && !S390
 	default y
 	help
 	  Saying Y here will allow you to use Linux in text mode through a
@@ -84,7 +84,7 @@ config MDA_CONSOLE
 config SGI_NEWPORT_CONSOLE
 	tristate "SGI Newport Console support"
-	depends on SGI_IP22
+	depends on SGI_IP22 && HAS_IOMEM
 	select FONT_SUPPORT
 	help
 	  Say Y here if you want the console on the Newport aka XL graphics
@@ -152,7 +152,7 @@ config FRAMEBUFFER_CONSOLE_ROTATION
 config STI_CONSOLE
 	bool "STI text console"
-	depends on PARISC
+	depends on PARISC && HAS_IOMEM
 	select FONT_SUPPORT
 	default y
 	help
......