Commit d60a540a authored by Linus Torvalds's avatar Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 updates from Heiko Carstens:
 "Since Martin is on vacation you get the s390 pull request for the
  v4.15 merge window this time from me.

  Besides a lot of cleanups and bug fixes these are the most important
  changes:

   - a new regset for runtime instrumentation registers

   - hardware accelerated AES-GCM support for the aes_s390 module

   - support for the new CEX6S crypto cards

   - support for FORTIFY_SOURCE

   - addition of missing z13 and new z14 instructions to the in-kernel
     disassembler

   - generate opcode tables for the in-kernel disassembler out of a
     simple text file instead of having to manually maintain those
     tables

   - fast memset16, memset32 and memset64 implementations (see the
     sketch after the quoted message)

   - removal of named saved segment support

   - hardware counter support for z14

   - queued spinlocks and queued rwlocks implementations for s390

   - use the stack_depth tracking feature for s390 BPF JIT

   - a new s390_sthyi system call which emulates the sthyi (store
     hypervisor information) instruction

   - removal of the old KVM virtio transport

   - an s390 specific CPU alternatives implementation which is used in
     the new spinlock code"
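The memset16/memset32/memset64 helpers mentioned in the list above fill memory with a repeating 2-, 4- or 8-byte pattern; unlike memset(), their count argument is in elements, not bytes. A minimal reference sketch of the semantics (a plain loop, not the optimized s390 implementation this pull adds):

	#include <stdint.h>
	#include <stddef.h>

	/* Fill count 64-bit words at s with v; returns s, mirroring the
	   kernel's memset64() prototype. */
	void *memset64_ref(uint64_t *s, uint64_t v, size_t count)
	{
		for (size_t i = 0; i < count; i++)
			s[i] = v; /* a byte-wise memset cannot express this pattern */
		return s;
	}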

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (88 commits)
  MAINTAINERS: add virtio-ccw.h to virtio/s390 section
  s390/noexec: execute kexec datamover without DAT
  s390: fix transactional execution control register handling
  s390/bpf: take advantage of stack_depth tracking
  s390: simplify transactional execution elf hwcap handling
  s390/zcrypt: Rework struct ap_qact_ap_info.
  s390/virtio: remove unused header file kvm_virtio.h
  s390: avoid undefined behaviour
  s390/disassembler: generate opcode tables from text file
  s390/disassembler: remove insn_to_mnemonic()
  s390/dasd: avoid calling do_gettimeofday()
  s390: vfio-ccw: Do not attempt to free no-op, test and tic cda.
  s390: remove named saved segment support
  s390/archrandom: Reconsider s390 arch random implementation
  s390/pci: do not require AIS facility
  s390/qdio: sanitize put_indicator
  s390/qdio: use atomic_cmpxchg
  s390/nmi: avoid using long-displacement facility
  s390: pass endianness info to sparse
  s390/decompressor: remove informational messages
  ...
parents 2101dd64 364a5607
...@@ -2548,6 +2548,9 @@
noalign [KNL,ARM]
noaltinstr [S390] Disables alternative instructions patching
(CPU alternatives feature).
noapic [SMP,APIC] Tells the kernel to not make use of any
IOAPICs that may be present in the system.
...
...@@ -14335,6 +14335,7 @@ L: virtualization@lists.linux-foundation.org
L: kvm@vger.kernel.org
S: Supported
F: drivers/s390/virtio/
F: arch/s390/include/uapi/asm/virtio-ccw.h
VIRTIO GPU DRIVER
M: David Airlie <airlied@linux.ie>
...
...@@ -68,6 +68,7 @@ config S390
select ARCH_BINFMT_ELF_STATE
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_FORTIFY_SOURCE
select ARCH_HAS_GCOV_PROFILE_ALL
select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
select ARCH_HAS_KCOV
...@@ -143,7 +144,6 @@ config S390
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_REGS
select HAVE_EFFICIENT_UNALIGNED_ACCESS
select HAVE_EXIT_THREAD
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
...@@ -538,6 +538,22 @@ config ARCH_RANDOM
If unsure, say Y.
config ALTERNATIVES
def_bool y
prompt "Patch optimized instructions for running CPU type"
help
When enabled, the kernel code is compiled with additional
alternative instruction blocks optimized for newer CPU types.
These blocks are patched at boot time when the running CPU
supports them. This mechanism is used to optimize some critical
code paths (i.e. spinlocks) for newer CPUs even if the kernel is
built to support older machine generations. It can be disabled
by appending the "noaltinstr" option to the kernel command line.
If unsure, say Y.
endmenu
menu "Memory setup"
...@@ -809,18 +825,6 @@ config PFAULT
Everybody who wants to run Linux under VM != VM4.2 should select
this option.
config SHARED_KERNEL
bool "VM shared kernel support"
depends on !JUMP_LABEL
help
Select this option, if you want to share the text segment of the
Linux kernel between different VM guests. This reduces memory
usage with lots of guests but greatly increases kernel size.
Also if a kernel was IPL'ed from a shared segment the kexec system
call will not work.
You should only select this option if you know what you are
doing and want to exploit this feature.
config CMM
def_tristate n
prompt "Cooperative memory management"
...@@ -930,17 +934,4 @@ config S390_GUEST
Select this option if you want to run the kernel as a guest under
the KVM hypervisor.
config S390_GUEST_OLD_TRANSPORT
def_bool y
prompt "Guest support for old s390 virtio transport (DEPRECATED)"
depends on S390_GUEST
help
Enable this option to add support for the old s390-virtio
transport (i.e. virtio devices NOT based on virtio-ccw). This
type of virtio device is only available on the experimental
kuli userspace or with old (< 2.6) qemu. If you are running
with a modern version of qemu (which supports virtio-ccw since
1.4 and uses it by default since version 2.4), you probably won't
need this.
endmenu
...@@ -21,7 +21,7 @@ KBUILD_CFLAGS += -m64
KBUILD_AFLAGS += -m64
UTS_MACHINE := s390x
STACK_SIZE := 16384
CHECKFLAGS += -D__s390__ -D__s390x__
CHECKFLAGS += -D__s390__ -D__s390x__ -mbig-endian
export LD_BFD
...@@ -133,6 +133,7 @@ archclean:
archprepare:
$(Q)$(MAKE) $(build)=$(tools) include/generated/facilities.h
$(Q)$(MAKE) $(build)=$(tools) include/generated/dis.h
# Don't use tabs in echo arguments
define archhelp
...
...@@ -12,7 +12,7 @@ targets += vmlinux.bin.xz vmlinux.bin.lzma vmlinux.bin.lzo vmlinux.bin.lz4
targets += misc.o piggy.o sizes.h head.o
KBUILD_CFLAGS := -m64 -D__KERNEL__ -O2
KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING -D__NO_FORTIFY
KBUILD_CFLAGS += $(cflags-y) -fno-delete-null-pointer-checks -msoft-float
KBUILD_CFLAGS += $(call cc-option,-mpacked-stack)
KBUILD_CFLAGS += $(call cc-option,-ffreestanding)
...
...@@ -170,9 +170,7 @@ unsigned long decompress_kernel(void)
free_mem_ptr = (unsigned long) &_end;
free_mem_end_ptr = free_mem_ptr + HEAP_SIZE;
puts("Uncompressing Linux... ");
__decompress(input_data, input_len, NULL, NULL, output, 0, NULL, error);
puts("Ok, booting the kernel.\n");
return (unsigned long) output;
}
...@@ -69,7 +69,6 @@ CONFIG_KSM=y
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA=y
CONFIG_CMA_DEBUG=y
CONFIG_CMA_DEBUGFS=y
CONFIG_MEM_SOFT_DIRTY=y
...@@ -379,7 +378,6 @@ CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_OSD=m
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=32768
CONFIG_BLK_DEV_RAM_DAX=y
...@@ -416,7 +414,6 @@ CONFIG_SCSI_OSD_ULD=m
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
CONFIG_BLK_DEV_DM=m
...@@ -483,6 +480,8 @@ CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_USER_ACCESS=m
CONFIG_MLX4_INFINIBAND=m
CONFIG_MLX5_INFINIBAND=m
CONFIG_VFIO=m
CONFIG_VFIO_PCI=m
CONFIG_VIRTIO_BALLOON=m
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
...@@ -599,7 +598,6 @@ CONFIG_DETECT_HUNG_TASK=y
CONFIG_WQ_WATCHDOG=y
CONFIG_PANIC_ON_OOPS=y
CONFIG_DEBUG_TIMEKEEPING=y
CONFIG_DEBUG_RT_MUTEXES=y
CONFIG_DEBUG_WW_MUTEX_SLOWPATH=y
CONFIG_PROVE_LOCKING=y
CONFIG_LOCK_STAT=y
...@@ -629,10 +627,8 @@ CONFIG_SCHED_TRACER=y
CONFIG_FTRACE_SYSCALLS=y
CONFIG_STACK_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_UPROBE_EVENTS=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_HIST_TRIGGERS=y
CONFIG_TRACE_ENUM_MAP_FILE=y
CONFIG_LKDTM=m
CONFIG_TEST_LIST_SORT=y
CONFIG_TEST_SORT=y
...@@ -649,6 +645,7 @@ CONFIG_ENCRYPTED_KEYS=m
CONFIG_SECURITY=y
CONFIG_SECURITY_NETWORK=y
CONFIG_HARDENED_USERCOPY=y
CONFIG_FORTIFY_SOURCE=y
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=0
...@@ -705,12 +702,12 @@ CONFIG_CRYPTO_USER_API_RNG=m
CONFIG_CRYPTO_USER_API_AEAD=m
CONFIG_ZCRYPT=m
CONFIG_PKEY=m
CONFIG_CRYPTO_PAES_S390=m
CONFIG_CRYPTO_SHA1_S390=m
CONFIG_CRYPTO_SHA256_S390=m
CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_PAES_S390=m
CONFIG_CRYPTO_GHASH_S390=m
CONFIG_CRYPTO_CRC32_S390=y
CONFIG_ASYMMETRIC_KEY_TYPE=y
...
...@@ -70,7 +70,6 @@ CONFIG_KSM=y
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA=y
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_ZSWAP=y
CONFIG_ZBUD=m
...@@ -376,7 +375,6 @@ CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_OSD=m
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=32768
CONFIG_BLK_DEV_RAM_DAX=y
...@@ -412,7 +410,6 @@ CONFIG_SCSI_OSD_ULD=m
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
CONFIG_BLK_DEV_DM=m
...@@ -479,6 +476,8 @@ CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_USER_ACCESS=m
CONFIG_MLX4_INFINIBAND=m
CONFIG_MLX5_INFINIBAND=m
CONFIG_VFIO=m
CONFIG_VFIO_PCI=m
CONFIG_VIRTIO_BALLOON=m
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
...@@ -575,10 +574,8 @@ CONFIG_SCHED_TRACER=y
CONFIG_FTRACE_SYSCALLS=y
CONFIG_STACK_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_UPROBE_EVENTS=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_HIST_TRIGGERS=y
CONFIG_TRACE_ENUM_MAP_FILE=y
CONFIG_LKDTM=m
CONFIG_PERCPU_TEST=m
CONFIG_ATOMIC64_SELFTEST=y
...@@ -650,12 +647,12 @@ CONFIG_CRYPTO_USER_API_RNG=m
CONFIG_CRYPTO_USER_API_AEAD=m
CONFIG_ZCRYPT=m
CONFIG_PKEY=m
CONFIG_CRYPTO_PAES_S390=m
CONFIG_CRYPTO_SHA1_S390=m
CONFIG_CRYPTO_SHA256_S390=m
CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_PAES_S390=m
CONFIG_CRYPTO_GHASH_S390=m
CONFIG_CRYPTO_CRC32_S390=y
CONFIG_CRC7=m
...
...@@ -68,7 +68,6 @@ CONFIG_KSM=y
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA=y
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_ZSWAP=y
CONFIG_ZBUD=m
...@@ -374,7 +373,6 @@ CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_OSD=m
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=32768
CONFIG_BLK_DEV_RAM_DAX=y
...@@ -410,7 +408,6 @@ CONFIG_SCSI_OSD_ULD=m
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
CONFIG_BLK_DEV_DM=m
...@@ -477,6 +474,8 @@ CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_USER_ACCESS=m
CONFIG_MLX4_INFINIBAND=m
CONFIG_MLX5_INFINIBAND=m
CONFIG_VFIO=m
CONFIG_VFIO_PCI=m
CONFIG_VIRTIO_BALLOON=m
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
...@@ -573,10 +572,8 @@ CONFIG_SCHED_TRACER=y
CONFIG_FTRACE_SYSCALLS=y
CONFIG_STACK_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_UPROBE_EVENTS=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_HIST_TRIGGERS=y
CONFIG_TRACE_ENUM_MAP_FILE=y
CONFIG_LKDTM=m
CONFIG_PERCPU_TEST=m
CONFIG_ATOMIC64_SELFTEST=y
...@@ -648,12 +645,12 @@ CONFIG_CRYPTO_USER_API_RNG=m
CONFIG_CRYPTO_USER_API_AEAD=m
CONFIG_ZCRYPT=m
CONFIG_PKEY=m
CONFIG_CRYPTO_PAES_S390=m
CONFIG_CRYPTO_SHA1_S390=m
CONFIG_CRYPTO_SHA256_S390=m
CONFIG_CRYPTO_SHA512_S390=m
CONFIG_CRYPTO_DES_S390=m
CONFIG_CRYPTO_AES_S390=m
CONFIG_CRYPTO_PAES_S390=m
CONFIG_CRYPTO_GHASH_S390=m
CONFIG_CRYPTO_CRC32_S390=y
CONFIG_CRC7=m
...
...@@ -4,9 +4,11 @@
* s390 implementation of the AES Cipher Algorithm.
*
* s390 Version:
* Copyright IBM Corp. 2005, 2007
* Copyright IBM Corp. 2005, 2017
* Author(s): Jan Glauber (jang@de.ibm.com)
* Sebastian Siewior <sebastian@breakpoint.cc> SW-Fallback
* Patrick Steuer <patrick.steuer@de.ibm.com>
* Harald Freudenberger <freude@de.ibm.com>
*
* Derived from "crypto/aes_generic.c"
*
...@@ -22,20 +24,25 @@
#include <crypto/aes.h>
#include <crypto/algapi.h>
#include <crypto/ghash.h>
#include <crypto/internal/aead.h>
#include <crypto/internal/skcipher.h>
#include <crypto/scatterwalk.h>
#include <linux/err.h>
#include <linux/module.h>
#include <linux/cpufeature.h>
#include <linux/init.h>
#include <linux/spinlock.h>
#include <linux/fips.h>
#include <linux/string.h>
#include <crypto/xts.h>
#include <asm/cpacf.h>
static u8 *ctrblk;
static DEFINE_SPINLOCK(ctrblk_lock);
static cpacf_mask_t km_functions, kmc_functions, kmctr_functions;
static cpacf_mask_t km_functions, kmc_functions, kmctr_functions,
kma_functions;
struct s390_aes_ctx {
u8 key[AES_MAX_KEY_SIZE];
...@@ -55,6 +62,17 @@ struct s390_xts_ctx {
struct crypto_skcipher *fallback;
};
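/*
 * gcm_sg_walk (below) exposes one contiguous chunk of a scatterlist at
 * a time through ptr/nbytes; when fewer than the requested minimum of
 * contiguous bytes is available, data is assembled in the bounce
 * buffer buf (at most one AES block) before being handed out.
 */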
struct gcm_sg_walk {
struct scatter_walk walk;
unsigned int walk_bytes;
u8 *walk_ptr;
unsigned int walk_bytes_remain;
u8 buf[AES_BLOCK_SIZE];
unsigned int buf_bytes;
u8 *ptr;
unsigned int nbytes;
};
static int setkey_fallback_cip(struct crypto_tfm *tfm, const u8 *in_key,
unsigned int key_len)
{
...@@ -771,6 +789,267 @@ static struct crypto_alg ctr_aes_alg = {
}
};
static int gcm_aes_setkey(struct crypto_aead *tfm, const u8 *key,
unsigned int keylen)
{
struct s390_aes_ctx *ctx = crypto_aead_ctx(tfm);
switch (keylen) {
case AES_KEYSIZE_128:
ctx->fc = CPACF_KMA_GCM_AES_128;
break;
case AES_KEYSIZE_192:
ctx->fc = CPACF_KMA_GCM_AES_192;
break;
case AES_KEYSIZE_256:
ctx->fc = CPACF_KMA_GCM_AES_256;
break;
default:
return -EINVAL;
}
memcpy(ctx->key, key, keylen);
ctx->key_len = keylen;
return 0;
}
static int gcm_aes_setauthsize(struct crypto_aead *tfm, unsigned int authsize)
{
switch (authsize) {
case 4:
case 8:
case 12:
case 13:
case 14:
case 15:
case 16:
break;
default:
return -EINVAL;
}
return 0;
}
static void gcm_sg_walk_start(struct gcm_sg_walk *gw, struct scatterlist *sg,
unsigned int len)
{
memset(gw, 0, sizeof(*gw));
gw->walk_bytes_remain = len;
scatterwalk_start(&gw->walk, sg);
}
static int gcm_sg_walk_go(struct gcm_sg_walk *gw, unsigned int minbytesneeded)
{
int n;
/* minbytesneeded <= AES_BLOCK_SIZE */
if (gw->buf_bytes && gw->buf_bytes >= minbytesneeded) {
gw->ptr = gw->buf;
gw->nbytes = gw->buf_bytes;
goto out;
}
if (gw->walk_bytes_remain == 0) {
gw->ptr = NULL;
gw->nbytes = 0;
goto out;
}
gw->walk_bytes = scatterwalk_clamp(&gw->walk, gw->walk_bytes_remain);
if (!gw->walk_bytes) {
scatterwalk_start(&gw->walk, sg_next(gw->walk.sg));
gw->walk_bytes = scatterwalk_clamp(&gw->walk,
gw->walk_bytes_remain);
}
gw->walk_ptr = scatterwalk_map(&gw->walk);
if (!gw->buf_bytes && gw->walk_bytes >= minbytesneeded) {
gw->ptr = gw->walk_ptr;
gw->nbytes = gw->walk_bytes;
goto out;
}
while (1) {
n = min(gw->walk_bytes, AES_BLOCK_SIZE - gw->buf_bytes);
memcpy(gw->buf + gw->buf_bytes, gw->walk_ptr, n);
gw->buf_bytes += n;
gw->walk_bytes_remain -= n;
scatterwalk_unmap(&gw->walk);
scatterwalk_advance(&gw->walk, n);
scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain);
if (gw->buf_bytes >= minbytesneeded) {
gw->ptr = gw->buf;
gw->nbytes = gw->buf_bytes;
goto out;
}
gw->walk_bytes = scatterwalk_clamp(&gw->walk,
gw->walk_bytes_remain);
if (!gw->walk_bytes) {
scatterwalk_start(&gw->walk, sg_next(gw->walk.sg));
gw->walk_bytes = scatterwalk_clamp(&gw->walk,
gw->walk_bytes_remain);
}
gw->walk_ptr = scatterwalk_map(&gw->walk);
}
out:
return gw->nbytes;
}
static void gcm_sg_walk_done(struct gcm_sg_walk *gw, unsigned int bytesdone)
{
int n;
if (gw->ptr == NULL)
return;
if (gw->ptr == gw->buf) {
n = gw->buf_bytes - bytesdone;
if (n > 0) {
memmove(gw->buf, gw->buf + bytesdone, n);
gw->buf_bytes -= n;
} else
gw->buf_bytes = 0;
} else {
gw->walk_bytes_remain -= bytesdone;
scatterwalk_unmap(&gw->walk);
scatterwalk_advance(&gw->walk, bytesdone);
scatterwalk_done(&gw->walk, 0, gw->walk_bytes_remain);
}
}
static int gcm_aes_crypt(struct aead_request *req, unsigned int flags)
{
struct crypto_aead *tfm = crypto_aead_reqtfm(req);
struct s390_aes_ctx *ctx = crypto_aead_ctx(tfm);
unsigned int ivsize = crypto_aead_ivsize(tfm);
unsigned int taglen = crypto_aead_authsize(tfm);
unsigned int aadlen = req->assoclen;
unsigned int pclen = req->cryptlen;
int ret = 0;
unsigned int len, in_bytes, out_bytes,
min_bytes, bytes, aad_bytes, pc_bytes;
struct gcm_sg_walk gw_in, gw_out;
u8 tag[GHASH_DIGEST_SIZE];
struct {
u32 _[3]; /* reserved */
u32 cv; /* Counter Value */
u8 t[GHASH_DIGEST_SIZE];/* Tag */
u8 h[AES_BLOCK_SIZE]; /* Hash-subkey */
u64 taadl; /* Total AAD Length */
u64 tpcl; /* Total Plain-/Cipher-text Length */
u8 j0[GHASH_BLOCK_SIZE];/* initial counter value */
u8 k[AES_MAX_KEY_SIZE]; /* Key */
} param;
/*
* encrypt
* req->src: aad||plaintext
* req->dst: aad||ciphertext||tag
* decrypt
* req->src: aad||ciphertext||tag
* req->dst: aad||plaintext, return 0 or -EBADMSG
* aad, plaintext and ciphertext may be empty.
*/
if (flags & CPACF_DECRYPT)
pclen -= taglen;
len = aadlen + pclen;
memset(&param, 0, sizeof(param));
param.cv = 1;
param.taadl = aadlen * 8;
param.tpcl = pclen * 8;
memcpy(param.j0, req->iv, ivsize);
*(u32 *)(param.j0 + ivsize) = 1;
memcpy(param.k, ctx->key, ctx->key_len);
gcm_sg_walk_start(&gw_in, req->src, len);
gcm_sg_walk_start(&gw_out, req->dst, len);
do {
min_bytes = min_t(unsigned int,
aadlen > 0 ? aadlen : pclen, AES_BLOCK_SIZE);
in_bytes = gcm_sg_walk_go(&gw_in, min_bytes);
out_bytes = gcm_sg_walk_go(&gw_out, min_bytes);
bytes = min(in_bytes, out_bytes);
if (aadlen + pclen <= bytes) {
aad_bytes = aadlen;
pc_bytes = pclen;
flags |= CPACF_KMA_LAAD | CPACF_KMA_LPC;
} else {
if (aadlen <= bytes) {
aad_bytes = aadlen;
pc_bytes = (bytes - aadlen) &
~(AES_BLOCK_SIZE - 1);
flags |= CPACF_KMA_LAAD;
} else {
aad_bytes = bytes & ~(AES_BLOCK_SIZE - 1);
pc_bytes = 0;
}
}
if (aad_bytes > 0)
memcpy(gw_out.ptr, gw_in.ptr, aad_bytes);
cpacf_kma(ctx->fc | flags, &param,
gw_out.ptr + aad_bytes,
gw_in.ptr + aad_bytes, pc_bytes,
gw_in.ptr, aad_bytes);
gcm_sg_walk_done(&gw_in, aad_bytes + pc_bytes);
gcm_sg_walk_done(&gw_out, aad_bytes + pc_bytes);
aadlen -= aad_bytes;
pclen -= pc_bytes;
} while (aadlen + pclen > 0);
if (flags & CPACF_DECRYPT) {
scatterwalk_map_and_copy(tag, req->src, len, taglen, 0);
if (crypto_memneq(tag, param.t, taglen))
ret = -EBADMSG;
} else
scatterwalk_map_and_copy(param.t, req->dst, len, taglen, 1);
memzero_explicit(&param, sizeof(param));
return ret;
}
static int gcm_aes_encrypt(struct aead_request *req)
{
return gcm_aes_crypt(req, CPACF_ENCRYPT);
}
static int gcm_aes_decrypt(struct aead_request *req)
{
return gcm_aes_crypt(req, CPACF_DECRYPT);
}
static struct aead_alg gcm_aes_aead = {
.setkey = gcm_aes_setkey,
.setauthsize = gcm_aes_setauthsize,
.encrypt = gcm_aes_encrypt,
.decrypt = gcm_aes_decrypt,
.ivsize = GHASH_BLOCK_SIZE - sizeof(u32),
.maxauthsize = GHASH_DIGEST_SIZE,
.chunksize = AES_BLOCK_SIZE,
.base = {
.cra_flags = CRYPTO_ALG_TYPE_AEAD,
.cra_blocksize = 1,
.cra_ctxsize = sizeof(struct s390_aes_ctx),
.cra_priority = 900,
.cra_name = "gcm(aes)",
.cra_driver_name = "gcm-aes-s390",
.cra_module = THIS_MODULE,
},
};
static struct crypto_alg *aes_s390_algs_ptr[5];
static int aes_s390_algs_num;
...@@ -790,16 +1069,19 @@ static void aes_s390_fini(void)
crypto_unregister_alg(aes_s390_algs_ptr[aes_s390_algs_num]);
if (ctrblk)
free_page((unsigned long) ctrblk);
crypto_unregister_aead(&gcm_aes_aead);
}
static int __init aes_s390_init(void)
{
int ret;
/* Query available functions for KM, KMC and KMCTR */
/* Query available functions for KM, KMC, KMCTR and KMA */
cpacf_query(CPACF_KM, &km_functions);
cpacf_query(CPACF_KMC, &kmc_functions);
cpacf_query(CPACF_KMCTR, &kmctr_functions);
cpacf_query(CPACF_KMA, &kma_functions);
if (cpacf_test_func(&km_functions, CPACF_KM_AES_128) ||
cpacf_test_func(&km_functions, CPACF_KM_AES_192) ||
...@@ -840,6 +1122,14 @@ static int __init aes_s390_init(void)
goto out_err;
}
if (cpacf_test_func(&kma_functions, CPACF_KMA_GCM_AES_128) ||
cpacf_test_func(&kma_functions, CPACF_KMA_GCM_AES_192) ||
cpacf_test_func(&kma_functions, CPACF_KMA_GCM_AES_256)) {
ret = crypto_register_aead(&gcm_aes_aead);
if (ret)
goto out_err;
}
return 0;
out_err:
aes_s390_fini();
...
...@@ -53,7 +53,6 @@ CONFIG_KSM=y
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA=y
CONFIG_ZSWAP=y
CONFIG_ZBUD=m
CONFIG_ZSMALLOC=m
...@@ -163,7 +162,6 @@ CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_PAGEALLOC=y
CONFIG_DETECT_HUNG_TASK=y
CONFIG_PANIC_ON_OOPS=y
CONFIG_DEBUG_RT_MUTEXES=y
CONFIG_PROVE_LOCKING=y
CONFIG_LOCK_STAT=y
CONFIG_DEBUG_LOCKDEP=y
...@@ -179,7 +177,6 @@ CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y
CONFIG_STACK_TRACER=y
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_TRACE_ENUM_MAP_FILE=y
CONFIG_KPROBES_SANITY_TEST=y
CONFIG_S390_PTDUMP=y
CONFIG_CRYPTO_CRYPTD=m
...
...@@ -15,6 +15,7 @@ generic-y += local64.h
generic-y += mcs_spinlock.h
generic-y += mm-arch-hooks.h
generic-y += preempt.h
generic-y += rwsem.h
generic-y += trace_clock.h
generic-y += unaligned.h
generic-y += word-at-a-time.h
#ifndef _ASM_S390_ALTERNATIVE_H
#define _ASM_S390_ALTERNATIVE_H
#ifndef __ASSEMBLY__
#include <linux/types.h>
#include <linux/stddef.h>
#include <linux/stringify.h>
struct alt_instr {
s32 instr_offset; /* original instruction */
s32 repl_offset; /* offset to replacement instruction */
u16 facility; /* facility bit set for replacement */
u8 instrlen; /* length of original instruction */
u8 replacementlen; /* length of new instruction */
} __packed;
#ifdef CONFIG_ALTERNATIVES
extern void apply_alternative_instructions(void);
extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end);
#else
static inline void apply_alternative_instructions(void) {};
static inline void apply_alternatives(struct alt_instr *start,
struct alt_instr *end) {};
#endif
/*
* |661: |662: |6620 |663:
* +-----------+---------------------+
* | oldinstr | oldinstr_padding |
* | +----------+----------+
* | | | |
* | | >6 bytes |6/4/2 nops|
* | |6 bytes jg----------->
* +-----------+---------------------+
* ^^ static padding ^^
*
* .altinstr_replacement section
* +---------------------+-----------+
* |6641: |6651:
* | alternative instr 1 |
* +-----------+---------+- - - - - -+
* |6642: |6652: |
* | alternative instr 2 | padding
* +---------------------+- - - - - -+
* ^ runtime ^
*
* .altinstructions section
* +---------------------------------+
* | alt_instr entries for each |
* | alternative instr |
* +---------------------------------+
*/
#define b_altinstr(num) "664"#num
#define e_altinstr(num) "665"#num
#define e_oldinstr_pad_end "663"
#define oldinstr_len "662b-661b"
#define oldinstr_total_len e_oldinstr_pad_end"b-661b"
#define altinstr_len(num) e_altinstr(num)"b-"b_altinstr(num)"b"
#define oldinstr_pad_len(num) \
"-(((" altinstr_len(num) ")-(" oldinstr_len ")) > 0) * " \
"((" altinstr_len(num) ")-(" oldinstr_len "))"
#define INSTR_LEN_SANITY_CHECK(len) \
".if " len " > 254\n" \
"\t.error \"cpu alternatives does not support instructions " \
"blocks > 254 bytes\"\n" \
".endif\n" \
".if (" len ") %% 2\n" \
"\t.error \"cpu alternatives instructions length is odd\"\n" \
".endif\n"
#define OLDINSTR_PADDING(oldinstr, num) \
".if " oldinstr_pad_len(num) " > 6\n" \
"\tjg " e_oldinstr_pad_end "f\n" \
"6620:\n" \
"\t.fill (" oldinstr_pad_len(num) " - (6620b-662b)) / 2, 2, 0x0700\n" \
".else\n" \
"\t.fill " oldinstr_pad_len(num) " / 6, 6, 0xc0040000\n" \
"\t.fill " oldinstr_pad_len(num) " %% 6 / 4, 4, 0x47000000\n" \
"\t.fill " oldinstr_pad_len(num) " %% 6 %% 4 / 2, 2, 0x0700\n" \
".endif\n"
#define OLDINSTR(oldinstr, num) \
"661:\n\t" oldinstr "\n662:\n" \
OLDINSTR_PADDING(oldinstr, num) \
e_oldinstr_pad_end ":\n" \
INSTR_LEN_SANITY_CHECK(oldinstr_len)
#define OLDINSTR_2(oldinstr, num1, num2) \
"661:\n\t" oldinstr "\n662:\n" \
".if " altinstr_len(num1) " < " altinstr_len(num2) "\n" \
OLDINSTR_PADDING(oldinstr, num2) \
".else\n" \
OLDINSTR_PADDING(oldinstr, num1) \
".endif\n" \
e_oldinstr_pad_end ":\n" \
INSTR_LEN_SANITY_CHECK(oldinstr_len)
#define ALTINSTR_ENTRY(facility, num) \
"\t.long 661b - .\n" /* old instruction */ \
"\t.long " b_altinstr(num)"b - .\n" /* alt instruction */ \
"\t.word " __stringify(facility) "\n" /* facility bit */ \
"\t.byte " oldinstr_total_len "\n" /* source len */ \
"\t.byte " altinstr_len(num) "\n" /* alt instruction len */
#define ALTINSTR_REPLACEMENT(altinstr, num) /* replacement */ \
b_altinstr(num)":\n\t" altinstr "\n" e_altinstr(num) ":\n" \
INSTR_LEN_SANITY_CHECK(altinstr_len(num))
#ifdef CONFIG_ALTERNATIVES
/* alternative assembly primitive: */
#define ALTERNATIVE(oldinstr, altinstr, facility) \
".pushsection .altinstr_replacement, \"ax\"\n" \
ALTINSTR_REPLACEMENT(altinstr, 1) \
".popsection\n" \
OLDINSTR(oldinstr, 1) \
".pushsection .altinstructions,\"a\"\n" \
ALTINSTR_ENTRY(facility, 1) \
".popsection\n"
#define ALTERNATIVE_2(oldinstr, altinstr1, facility1, altinstr2, facility2)\
".pushsection .altinstr_replacement, \"ax\"\n" \
ALTINSTR_REPLACEMENT(altinstr1, 1) \
ALTINSTR_REPLACEMENT(altinstr2, 2) \
".popsection\n" \
OLDINSTR_2(oldinstr, 1, 2) \
".pushsection .altinstructions,\"a\"\n" \
ALTINSTR_ENTRY(facility1, 1) \
ALTINSTR_ENTRY(facility2, 2) \
".popsection\n"
#else
/* Alternative instructions are disabled, let's put just oldinstr in */
#define ALTERNATIVE(oldinstr, altinstr, facility) \
oldinstr "\n"
#define ALTERNATIVE_2(oldinstr, altinstr1, facility1, altinstr2, facility2) \
oldinstr "\n"
#endif
/*
* Alternative instructions for different CPU types or capabilities.
*
* This allows the kernel to use optimized instructions even on generic binary
* kernels.
*
* oldinstr is padded with jump and nops at compile time if altinstr is
* longer. altinstr is padded with jump and nops at run-time during patching.
*
* For non-barrier-like inlines please define new variants
* without volatile and memory clobber.
*/
#define alternative(oldinstr, altinstr, facility) \
asm volatile(ALTERNATIVE(oldinstr, altinstr, facility) : : : "memory")
#define alternative_2(oldinstr, altinstr1, facility1, altinstr2, facility2) \
asm volatile(ALTERNATIVE_2(oldinstr, altinstr1, facility1, \
altinstr2, facility2) ::: "memory")
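/*
 * Usage sketch (illustrative, in the spirit of the new spinlock code;
 * not a definition from this file): emit no instruction on older
 * machines, and patch in a NIAI hint when facility 49 is installed:
 *
 *	alternative("", ".long 0xb2fa0070", 49);
 */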
#endif /* __ASSEMBLY__ */
#endif /* _ASM_S390_ALTERNATIVE_H */
...@@ -28,42 +28,42 @@ static void s390_arch_random_generate(u8 *buf, unsigned int nbytes)
static inline bool arch_has_random(void)
{
if (static_branch_likely(&s390_arch_random_available))
return true;
return false;
}
static inline bool arch_has_random_seed(void)
{
return arch_has_random();
if (static_branch_likely(&s390_arch_random_available))
return true;
return false;
}
static inline bool arch_get_random_long(unsigned long *v)
{
if (static_branch_likely(&s390_arch_random_available)) {
s390_arch_random_generate((u8 *)v, sizeof(*v));
return true;
}
return false;
}
static inline bool arch_get_random_int(unsigned int *v)
{
if (static_branch_likely(&s390_arch_random_available)) {
s390_arch_random_generate((u8 *)v, sizeof(*v));
return true;
}
return false;
}
static inline bool arch_get_random_seed_long(unsigned long *v)
{
return arch_get_random_long(v);
if (static_branch_likely(&s390_arch_random_available)) {
s390_arch_random_generate((u8 *)v, sizeof(*v));
return true;
}
return false;
}
static inline bool arch_get_random_seed_int(unsigned int *v)
{
return arch_get_random_int(v);
if (static_branch_likely(&s390_arch_random_available)) {
s390_arch_random_generate((u8 *)v, sizeof(*v));
return true;
}
return false;
}
#endif /* CONFIG_ARCH_RANDOM */
...
...@@ -40,19 +40,24 @@ __ATOMIC_OPS(__atomic64_xor, long, "laxg")
#undef __ATOMIC_OPS
#undef __ATOMIC_OP
static inline void __atomic_add_const(int val, int *ptr)
{
asm volatile(
" asi %[ptr],%[val]\n"
: [ptr] "+Q" (*ptr) : [val] "i" (val) : "cc");
}
static inline void __atomic64_add_const(long val, long *ptr)
{
asm volatile(
" agsi %[ptr],%[val]\n"
: [ptr] "+Q" (*ptr) : [val] "i" (val) : "cc");
}
#define __ATOMIC_CONST_OP(op_name, op_type, op_string, op_barrier) \
static inline void op_name(op_type val, op_type *ptr) \
{ \
asm volatile( \
op_string " %[ptr],%[val]\n" \
op_barrier \
: [ptr] "+Q" (*ptr) : [val] "i" (val) : "cc", "memory");\
}
#define __ATOMIC_CONST_OPS(op_name, op_type, op_string) \
__ATOMIC_CONST_OP(op_name, op_type, op_string, "\n") \
__ATOMIC_CONST_OP(op_name##_barrier, op_type, op_string, "bcr 14,0\n")
__ATOMIC_CONST_OPS(__atomic_add_const, int, "asi")
__ATOMIC_CONST_OPS(__atomic64_add_const, long, "agsi")
#undef __ATOMIC_CONST_OPS
#undef __ATOMIC_CONST_OP
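/*
 * Example (sketch): __atomic_add_const(1, &cnt) expands to a single
 * "asi" add-immediate on z196 and newer, while the _barrier variant
 * additionally emits "bcr 14,0" as a fast serialization after the add.
 */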
#else /* CONFIG_HAVE_MARCH_Z196_FEATURES */
...@@ -108,6 +113,11 @@ __ATOMIC64_OPS(__atomic64_xor, "xgr")
#undef __ATOMIC64_OPS
#define __atomic_add_const(val, ptr) __atomic_add(val, ptr)
#define __atomic_add_const_barrier(val, ptr) __atomic_add(val, ptr)
#define __atomic64_add_const(val, ptr) __atomic64_add(val, ptr)
#define __atomic64_add_const_barrier(val, ptr) __atomic64_add(val, ptr)
#endif /* CONFIG_HAVE_MARCH_Z196_FEATURES */
static inline int __atomic_cmpxchg(int *ptr, int old, int new)
...
...@@ -42,6 +42,7 @@ struct ccwgroup_device {
* @thaw: undo work done in @freeze
* @restore: callback for restoring after hibernation
* @driver: embedded driver structure
* @ccw_driver: supported ccw_driver (optional)
*/
struct ccwgroup_driver {
int (*setup) (struct ccwgroup_device *);
...@@ -56,6 +57,7 @@ struct ccwgroup_driver {
int (*restore)(struct ccwgroup_device *);
struct device_driver driver;
struct ccw_driver *ccw_driver;
};
extern int ccwgroup_driver_register (struct ccwgroup_driver *cdriver);
...
...@@ -2,7 +2,7 @@
/*
* CP Assist for Cryptographic Functions (CPACF)
*
* Copyright IBM Corp. 2003, 2016
* Copyright IBM Corp. 2003, 2017
* Author(s): Thomas Spatzier
* Jan Glauber
* Harald Freudenberger (freude@de.ibm.com)
...@@ -134,6 +134,22 @@
#define CPACF_PRNO_TRNG_Q_R2C_RATIO 0x70
#define CPACF_PRNO_TRNG 0x72
/*
* Function codes for the KMA (CIPHER MESSAGE WITH AUTHENTICATION)
* instruction
*/
#define CPACF_KMA_QUERY 0x00
#define CPACF_KMA_GCM_AES_128 0x12
#define CPACF_KMA_GCM_AES_192 0x13
#define CPACF_KMA_GCM_AES_256 0x14
/*
* Flags for the KMA (CIPHER MESSAGE WITH AUTHENTICATION) instruction
*/
#define CPACF_KMA_LPC 0x100 /* Last-Plaintext/Ciphertext */
#define CPACF_KMA_LAAD 0x200 /* Last-AAD */
#define CPACF_KMA_HS 0x400 /* Hash-subkey Supplied */
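/*
 * As used by the GCM code in aes_s390.c: CPACF_KMA_LAAD is set once the
 * last chunk of additional authenticated data is passed to KMA, and
 * CPACF_KMA_LPC once the last plain-/ciphertext chunk is passed.
 */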
typedef struct { unsigned char bytes[16]; } cpacf_mask_t;
/**
...@@ -179,6 +195,8 @@ static inline int __cpacf_check_opcode(unsigned int opcode)
return test_facility(77); /* check for MSA4 */
case CPACF_PRNO:
return test_facility(57); /* check for MSA5 */
case CPACF_KMA:
return test_facility(146); /* check for MSA8 */
default:
BUG();
}
...@@ -470,4 +488,36 @@ static inline void cpacf_pckmo(long func, void *param)
: "cc", "memory");
}
/**
* cpacf_kma() - executes the KMA (CIPHER MESSAGE WITH AUTHENTICATION)
* instruction
* @func: the function code passed to KMA; see CPACF_KMA_xxx defines
* @param: address of parameter block; see POP for details on each func
* @dest: address of destination memory area
* @src: address of source memory area
* @src_len: length of src operand in bytes
* @aad: address of additional authenticated data memory area
* @aad_len: length of aad operand in bytes
*/
static inline void cpacf_kma(unsigned long func, void *param, u8 *dest,
const u8 *src, unsigned long src_len,
const u8 *aad, unsigned long aad_len)
{
register unsigned long r0 asm("0") = (unsigned long) func;
register unsigned long r1 asm("1") = (unsigned long) param;
register unsigned long r2 asm("2") = (unsigned long) src;
register unsigned long r3 asm("3") = (unsigned long) src_len;
register unsigned long r4 asm("4") = (unsigned long) aad;
register unsigned long r5 asm("5") = (unsigned long) aad_len;
register unsigned long r6 asm("6") = (unsigned long) dest;
asm volatile(
"0: .insn rrf,%[opc] << 16,%[dst],%[src],%[aad],0\n"
" brc 1,0b\n" /* handle partial completion */
: [dst] "+a" (r6), [src] "+a" (r2), [slen] "+d" (r3),
[aad] "+a" (r4), [alen] "+d" (r5)
: [fc] "d" (r0), [pba] "a" (r1), [opc] "i" (CPACF_KMA)
: "cc", "memory");
}
#endif /* _ASM_S390_CPACF_H */
...@@ -8,6 +8,18 @@
#ifndef __ASM_CTL_REG_H
#define __ASM_CTL_REG_H
#include <linux/const.h>
#define CR2_GUARDED_STORAGE _BITUL(63 - 59)
#define CR14_CHANNEL_REPORT_SUBMASK _BITUL(63 - 35)
#define CR14_RECOVERY_SUBMASK _BITUL(63 - 36)
#define CR14_DEGRADATION_SUBMASK _BITUL(63 - 37)
#define CR14_EXTERNAL_DAMAGE_SUBMASK _BITUL(63 - 38)
#define CR14_WARNING_SUBMASK _BITUL(63 - 39)
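/*
 * Bit numbers follow the z/Architecture convention: bit 0 is the
 * leftmost bit of the 64-bit register, hence the _BITUL(63 - n) form.
 */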
#ifndef __ASSEMBLY__
#include <linux/bug.h>
#define __ctl_load(array, low, high) do { \
...@@ -55,7 +67,11 @@ void smp_ctl_clear_bit(int cr, int bit);
union ctlreg0 {
unsigned long val;
struct {
unsigned long : 32;
unsigned long : 8;
unsigned long tcx : 1; /* Transactional-Execution control */
unsigned long pifo : 1; /* Transactional-Execution Program-
Interruption-Filtering Override */
unsigned long : 22;
unsigned long : 3;
unsigned long lap : 1; /* Low-address-protection control */
unsigned long : 4;
...@@ -71,6 +87,19 @@ union ctlreg0 {
};
};
union ctlreg2 {
unsigned long val;
struct {
unsigned long : 33;
unsigned long ducto : 25;
unsigned long : 1;
unsigned long gse : 1;
unsigned long : 1;
unsigned long tds : 1;
unsigned long tdc : 2;
};
};
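/*
 * Note: the gse bit above sits at bit 59 and thus corresponds to the
 * CR2_GUARDED_STORAGE define at the top of this file.
 */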
#ifdef CONFIG_SMP
# define ctl_set_bit(cr, bit) smp_ctl_set_bit(cr, bit)
# define ctl_clear_bit(cr, bit) smp_ctl_clear_bit(cr, bit)
...@@ -79,4 +108,5 @@ union ctlreg0 {
# define ctl_clear_bit(cr, bit) __ctl_clear_bit(cr, bit)
#endif
#endif /* __ASSEMBLY__ */
#endif /* __ASM_CTL_REG_H */
...@@ -9,32 +9,7 @@
#ifndef __ASM_S390_DIS_H__
#define __ASM_S390_DIS_H__
#include <generated/dis.h>
/* Type of operand */
#define OPERAND_GPR 0x1 /* Operand printed as %rx */
#define OPERAND_FPR 0x2 /* Operand printed as %fx */
#define OPERAND_AR 0x4 /* Operand printed as %ax */
#define OPERAND_CR 0x8 /* Operand printed as %cx */
#define OPERAND_VR 0x10 /* Operand printed as %vx */
#define OPERAND_DISP 0x20 /* Operand printed as displacement */
#define OPERAND_BASE 0x40 /* Operand printed as base register */
#define OPERAND_INDEX 0x80 /* Operand printed as index register */
#define OPERAND_PCREL 0x100 /* Operand printed as pc-relative symbol */
#define OPERAND_SIGNED 0x200 /* Operand printed as signed value */
#define OPERAND_LENGTH 0x400 /* Operand printed as length (+1) */
struct s390_operand {
int bits; /* The number of bits in the operand. */
int shift; /* The number of bits to shift. */
int flags; /* One bit syntax flags. */
};
struct s390_insn {
const char name[5];
unsigned char opfrag;
unsigned char format;
};
static inline int insn_length(unsigned char code)
{
...@@ -45,7 +20,6 @@ struct pt_regs;
void show_code(struct pt_regs *regs);
void print_fn_code(unsigned char *code, unsigned long len);
int insn_to_mnemonic(unsigned char *instruction, char *buf, unsigned int len);
struct s390_insn *find_insn(unsigned char *code);
static inline int is_known_insn(unsigned char *code)
...
...@@ -13,6 +13,8 @@
#include <asm/cio.h>
#include <asm/setup.h>
#define NSS_NAME_SIZE 8
#define IPL_PARMBLOCK_ORIGIN 0x2000
#define IPL_PARM_BLK_FCP_LEN (sizeof(struct ipl_list_hdr) + \
...@@ -106,7 +108,6 @@ extern size_t append_ipl_scpdata(char *, size_t);
enum {
IPL_DEVNO_VALID = 1,
IPL_PARMBLOCK_VALID = 2,
IPL_NSS_VALID = 4,
};
enum ipl_type {
...
...@@ -63,8 +63,6 @@ typedef u16 kprobe_opcode_t;
#define kretprobe_blacklist_size 0
#define KPROBE_SWAP_INST 0x10
/* Architecture specific copy of original instruction */
struct arch_specific_insn {
/* copy of original instruction */
...
...@@ -736,7 +736,6 @@ struct kvm_arch{
wait_queue_head_t ipte_wq;
int ipte_lock_count;
struct mutex ipte_mutex;
struct ratelimit_state sthyi_limit;
spinlock_t start_stop_lock;
struct sie_page2 *sie_page2;
struct kvm_s390_cpu_model model;
...
...@@ -134,8 +134,9 @@ struct lowcore {
__u8 pad_0x03b4[0x03b8-0x03b4]; /* 0x03b4 */
__u64 gmap; /* 0x03b8 */
__u32 spinlock_lockval; /* 0x03c0 */
__u32 fpu_flags; /* 0x03c4 */
__u8 pad_0x03c8[0x0400-0x03c8]; /* 0x03c8 */
__u32 spinlock_index; /* 0x03c4 */
__u32 fpu_flags; /* 0x03c8 */
__u8 pad_0x03cc[0x0400-0x03cc]; /* 0x03cc */
/* Per cpu primary space access list */
__u32 paste[16]; /* 0x0400 */
...
...@@ -26,12 +26,9 @@
#define MCCK_CODE_CPU_TIMER_VALID _BITUL(63 - 46)
#define MCCK_CODE_PSW_MWP_VALID _BITUL(63 - 20)
#define MCCK_CODE_PSW_IA_VALID _BITUL(63 - 23)
#define MCCK_CODE_CR_VALID _BITUL(63 - 29)
#define MCCK_CR14_CR_PENDING_SUB_MASK (1 << 28)
#define MCCK_CR14_RECOVERY_SUB_MASK (1 << 27)
#define MCCK_CR14_DEGRAD_SUB_MASK (1 << 26)
#define MCCK_CR14_EXT_DAMAGE_SUB_MASK (1 << 25)
#define MCCK_CR14_WARN_SUB_MASK (1 << 24)
#define MCCK_CODE_GS_VALID _BITUL(63 - 36)
#define MCCK_CODE_FC_VALID _BITUL(63 - 43)
#ifndef __ASSEMBLY__
...@@ -87,6 +84,8 @@ union mci {
#define MCESA_ORIGIN_MASK (~0x3ffUL)
#define MCESA_LC_MASK (0xfUL)
#define MCESA_MIN_SIZE (1024)
#define MCESA_MAX_SIZE (2048)
struct mcesa {
u8 vector_save_area[1024];
...@@ -95,8 +94,12 @@ struct mcesa {
struct pt_regs;
extern void s390_handle_mcck(void);
extern void s390_do_machine_check(struct pt_regs *regs);
void nmi_alloc_boot_cpu(struct lowcore *lc);
int nmi_alloc_per_cpu(struct lowcore *lc);
void nmi_free_per_cpu(struct lowcore *lc);
void s390_handle_mcck(void);
void s390_do_machine_check(struct pt_regs *regs);
#endif /* __ASSEMBLY__ */
#endif /* _ASM_S390_NMI_H */
...@@ -19,11 +19,7 @@ extern debug_info_t *pci_debug_err_id;
static inline void zpci_err_hex(void *addr, int len)
{
while (len > 0) {
debug_event(pci_debug_err_id, 0, (void *) addr, len);
len -= pci_debug_err_id->buf_size;
addr += pci_debug_err_id->buf_size;
}
debug_event(pci_debug_err_id, 0, addr, len);
}
#endif
...@@ -82,6 +82,6 @@ int zpci_refresh_trans(u64 fn, u64 addr, u64 range);
int zpci_load(u64 *data, u64 req, u64 offset);
int zpci_store(u64 data, u64 req, u64 offset);
int zpci_store_block(const u64 *data, u64 req, u64 offset);
void zpci_set_irq_ctrl(u16 ctl, char *unused, u8 isc);
int zpci_set_irq_ctrl(u16 ctl, char *unused, u8 isc);
#endif
...@@ -13,6 +13,7 @@
#define _S390_PGALLOC_H
#include <linux/threads.h>
#include <linux/string.h>
#include <linux/gfp.h>
#include <linux/mm.h>
...@@ -28,24 +29,9 @@ void page_table_free_rcu(struct mmu_gather *, unsigned long *, unsigned long);
void page_table_free_pgste(struct page *page);
extern int page_table_allocate_pgste;
static inline void clear_table(unsigned long *s, unsigned long val, size_t n)
{
struct addrtype { char _[256]; };
int i;
for (i = 0; i < n; i += 256) {
*s = val;
asm volatile(
"mvc 8(248,%[s]),0(%[s])\n"
: "+m" (*(struct addrtype *) s)
: [s] "a" (s));
s += 256 / sizeof(long);
}
}
static inline void crst_table_init(unsigned long *crst, unsigned long entry)
{
clear_table(crst, entry, _CRST_TABLE_SIZE);
memset64((u64 *)crst, entry, _CRST_ENTRIES);
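/*
 * Both variants clear the same range: _CRST_ENTRIES eight-byte entries
 * add up to _CRST_TABLE_SIZE bytes.
 */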
}
static inline unsigned long pgd_entry_type(struct mm_struct *mm)
...
...@@ -22,6 +22,7 @@
#define CIF_IGNORE_IRQ 5 /* ignore interrupt (for udelay) */
#define CIF_ENABLED_WAIT 6 /* in enabled wait state */
#define CIF_MCCK_GUEST 7 /* machine check happening in guest */
#define CIF_DEDICATED_CPU 8 /* this CPU is dedicated */
#define _CIF_MCCK_PENDING _BITUL(CIF_MCCK_PENDING)
#define _CIF_ASCE_PRIMARY _BITUL(CIF_ASCE_PRIMARY)
...@@ -31,6 +32,7 @@
#define _CIF_IGNORE_IRQ _BITUL(CIF_IGNORE_IRQ)
#define _CIF_ENABLED_WAIT _BITUL(CIF_ENABLED_WAIT)
#define _CIF_MCCK_GUEST _BITUL(CIF_MCCK_GUEST)
#define _CIF_DEDICATED_CPU _BITUL(CIF_DEDICATED_CPU)
#ifndef __ASSEMBLY__
...@@ -219,10 +221,10 @@ void show_registers(struct pt_regs *regs);
void show_cacheinfo(struct seq_file *m);
/* Free all resources held by a thread. */
extern void release_thread(struct task_struct *);
/* Free guarded storage control block for current */
void exit_thread_gs(void);
static inline void release_thread(struct task_struct *tsk) { }
/* Free guarded storage control block */
void guarded_storage_release(struct task_struct *tsk);
unsigned long get_wchan(struct task_struct *p);
#define task_pt_regs(tsk) ((struct pt_regs *) \
...
...@@ -6,55 +6,55 @@
#define S390_RUNTIME_INSTR_STOP 0x2
/* old layout */
struct runtime_instr_cb {
__u64 buf_current;
__u64 buf_origin;
__u64 buf_limit;
__u32 valid : 1;
__u32 pstate : 1;
__u32 pstate_set_buf : 1;
__u32 home_space : 1;
__u32 altered : 1;
__u32 : 3;
__u32 pstate_sample : 1;
__u32 sstate_sample : 1;
__u32 pstate_collect : 1;
__u32 sstate_collect : 1;
__u32 : 1;
__u32 halted_int : 1;
__u32 int_requested : 1;
__u32 buffer_full_int : 1;
__u32 key : 4;
__u32 : 9;
__u32 rgs : 3;
__u32 mode : 4;
__u32 next : 1;
__u32 mae : 1;
__u32 : 2;
__u32 call_type_br : 1;
__u32 return_type_br : 1;
__u32 other_type_br : 1;
__u32 bc_other_type : 1;
__u32 emit : 1;
__u32 tx_abort : 1;
__u32 : 2;
__u32 bp_xn : 1;
__u32 bp_xt : 1;
__u32 bp_ti : 1;
__u32 bp_ni : 1;
__u32 suppr_y : 1;
__u32 suppr_z : 1;
__u32 dc_miss_extra : 1;
__u32 lat_lev_ignore : 1;
__u32 ic_lat_lev : 4;
__u32 dc_lat_lev : 4;
__u64 reserved1;
__u64 scaling_factor;
__u64 rsic;
__u64 reserved2;
} __packed __aligned(8);
/* new layout */
struct runtime_instr_cb {
__u64 rca;
__u64 roa;
__u64 rla;
__u32 v : 1;
__u32 s : 1;
__u32 k : 1;
__u32 h : 1;
__u32 a : 1;
__u32 reserved1 : 3;
__u32 ps : 1;
__u32 qs : 1;
__u32 pc : 1;
__u32 qc : 1;
__u32 reserved2 : 1;
__u32 g : 1;
__u32 u : 1;
__u32 l : 1;
__u32 key : 4;
__u32 reserved3 : 8;
__u32 t : 1;
__u32 rgs : 3;
__u32 m : 4;
__u32 n : 1;
__u32 mae : 1;
__u32 reserved4 : 2;
__u32 c : 1;
__u32 r : 1;
__u32 b : 1;
__u32 j : 1;
__u32 e : 1;
__u32 x : 1;
__u32 reserved5 : 2;
__u32 bpxn : 1;
__u32 bpxt : 1;
__u32 bpti : 1;
__u32 bpni : 1;
__u32 reserved6 : 2;
__u32 d : 1;
__u32 f : 1;
__u32 ic : 4;
__u32 dc : 4;
__u64 reserved7;
__u64 sf;
__u64 rsic;
__u64 reserved8;
} __packed __aligned(8);
extern struct runtime_instr_cb runtime_instr_empty_cb;
...@@ -86,6 +86,8 @@ static inline void restore_ri_cb(struct runtime_instr_cb *cb_next,
load_runtime_instr_cb(&runtime_instr_empty_cb);
}
void exit_thread_runtime_instr(void);
struct task_struct;
void runtime_instr_release(struct task_struct *tsk);
#endif /* _RUNTIME_INSTR_H */
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _S390_RWSEM_H
#define _S390_RWSEM_H
/*
* S390 version
* Copyright IBM Corp. 2002
* Author(s): Martin Schwidefsky (schwidefsky@de.ibm.com)
*
* Based on asm-alpha/semaphore.h and asm-i386/rwsem.h
*/
/*
*
* The MSW of the count is the negated number of active writers and waiting
* lockers, and the LSW is the total number of active locks
*
* The lock count is initialized to 0 (no active and no waiting lockers).
*
* When a writer subtracts ACTIVE_WRITE_BIAS, it'll get 0xffffffff00000001 for
* the case of an uncontended lock. This can be determined because the atomic
* update returns the old value.
* Readers increment by 1 and see a positive value when uncontended, negative
* if there are writers (and maybe readers) waiting (in which case it goes to
* sleep).
*
* The value of WAITING_BIAS supports up to 2147483646 waiting processes. This
* can be extended to 4294967294 by manually checking the whole MSW rather than
* relying on the S flag.
*
* The value of ACTIVE_BIAS supports up to 4294967295 active processes.
*
* This should be totally fair - if anything is waiting, a process that wants a
* lock will go to the back of the queue. When the currently active lock is
* released, if there's a writer at the front of the queue, then that and only
* that will be woken up; if there's a bunch of consecutive readers at the
* front, then they'll all be woken up, but no other readers will be.
*/
#ifndef _LINUX_RWSEM_H
#error "please don't include asm/rwsem.h directly, use linux/rwsem.h instead"
#endif
#define RWSEM_UNLOCKED_VALUE 0x0000000000000000L
#define RWSEM_ACTIVE_BIAS 0x0000000000000001L
#define RWSEM_ACTIVE_MASK 0x00000000ffffffffL
#define RWSEM_WAITING_BIAS (-0x0000000100000000L)
#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
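An illustrative set of count values under these definitions (an editorial
sketch, not part of the original file):

/*
 *	0x0000000000000000	unlocked
 *	0x0000000000000001	one active reader
 *	0x0000000000000003	three active readers
 *	0xffffffff00000001	one active writer, i.e. the value an
 *				uncontended __down_write produces by adding
 *				RWSEM_ACTIVE_WRITE_BIAS
 */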
/*
* lock for reading
*/
static inline void __down_read(struct rw_semaphore *sem)
{
signed long old, new;
asm volatile(
" lg %0,%2\n"
"0: lgr %1,%0\n"
" aghi %1,%4\n"
" csg %0,%1,%2\n"
" jl 0b"
: "=&d" (old), "=&d" (new), "=Q" (sem->count)
: "Q" (sem->count), "i" (RWSEM_ACTIVE_READ_BIAS)
: "cc", "memory");
if (old < 0)
rwsem_down_read_failed(sem);
}
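The compare-and-swap loop above corresponds roughly to the following C (a
sketch only; it assumes cmpxchg() with the usual kernel semantics and is
not part of the original file):

static inline void __down_read_sketch(struct rw_semaphore *sem)
{
	signed long old, new;

	do {
		old = sem->count;			/* lg   */
		new = old + RWSEM_ACTIVE_READ_BIAS;	/* aghi */
	} while (cmpxchg(&sem->count, old, new) != old); /* csg, jl */
	if (old < 0)	/* MSW non-zero: writers or waiters present */
		rwsem_down_read_failed(sem);
}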
/*
* trylock for reading -- returns 1 if successful, 0 if contention
*/
static inline int __down_read_trylock(struct rw_semaphore *sem)
{
signed long old, new;
asm volatile(
" lg %0,%2\n"
"0: ltgr %1,%0\n"
" jm 1f\n"
" aghi %1,%4\n"
" csg %0,%1,%2\n"
" jl 0b\n"
"1:"
: "=&d" (old), "=&d" (new), "=Q" (sem->count)
: "Q" (sem->count), "i" (RWSEM_ACTIVE_READ_BIAS)
: "cc", "memory");
return old >= 0 ? 1 : 0;
}
/*
* lock for writing
*/
static inline long ___down_write(struct rw_semaphore *sem)
{
signed long old, new, tmp;
tmp = RWSEM_ACTIVE_WRITE_BIAS;
asm volatile(
" lg %0,%2\n"
"0: lgr %1,%0\n"
" ag %1,%4\n"
" csg %0,%1,%2\n"
" jl 0b"
: "=&d" (old), "=&d" (new), "=Q" (sem->count)
: "Q" (sem->count), "m" (tmp)
: "cc", "memory");
return old;
}
static inline void __down_write(struct rw_semaphore *sem)
{
if (___down_write(sem))
rwsem_down_write_failed(sem);
}
static inline int __down_write_killable(struct rw_semaphore *sem)
{
if (___down_write(sem))
if (IS_ERR(rwsem_down_write_failed_killable(sem)))
return -EINTR;
return 0;
}
/*
* trylock for writing -- returns 1 if successful, 0 if contention
*/
static inline int __down_write_trylock(struct rw_semaphore *sem)
{
signed long old;
asm volatile(
" lg %0,%1\n"
"0: ltgr %0,%0\n"
" jnz 1f\n"
" csg %0,%3,%1\n"
" jl 0b\n"
"1:"
: "=&d" (old), "=Q" (sem->count)
: "Q" (sem->count), "d" (RWSEM_ACTIVE_WRITE_BIAS)
: "cc", "memory");
return (old == RWSEM_UNLOCKED_VALUE) ? 1 : 0;
}
/*
* unlock after reading
*/
static inline void __up_read(struct rw_semaphore *sem)
{
signed long old, new;
asm volatile(
" lg %0,%2\n"
"0: lgr %1,%0\n"
" aghi %1,%4\n"
" csg %0,%1,%2\n"
" jl 0b"
: "=&d" (old), "=&d" (new), "=Q" (sem->count)
: "Q" (sem->count), "i" (-RWSEM_ACTIVE_READ_BIAS)
: "cc", "memory");
if (new < 0)
if ((new & RWSEM_ACTIVE_MASK) == 0)
rwsem_wake(sem);
}
/*
* unlock after writing
*/
static inline void __up_write(struct rw_semaphore *sem)
{
signed long old, new, tmp;
tmp = -RWSEM_ACTIVE_WRITE_BIAS;
asm volatile(
" lg %0,%2\n"
"0: lgr %1,%0\n"
" ag %1,%4\n"
" csg %0,%1,%2\n"
" jl 0b"
: "=&d" (old), "=&d" (new), "=Q" (sem->count)
: "Q" (sem->count), "m" (tmp)
: "cc", "memory");
if (new < 0)
if ((new & RWSEM_ACTIVE_MASK) == 0)
rwsem_wake(sem);
}
/*
* downgrade write lock to read lock
*/
static inline void __downgrade_write(struct rw_semaphore *sem)
{
signed long old, new, tmp;
tmp = -RWSEM_WAITING_BIAS;
asm volatile(
" lg %0,%2\n"
"0: lgr %1,%0\n"
" ag %1,%4\n"
" csg %0,%1,%2\n"
" jl 0b"
: "=&d" (old), "=&d" (new), "=Q" (sem->count)
: "Q" (sem->count), "m" (tmp)
: "cc", "memory");
if (new > 1)
rwsem_downgrade_wake(sem);
}
#endif /* _S390_RWSEM_H */
...@@ -4,6 +4,6 @@ ...@@ -4,6 +4,6 @@
#include <asm-generic/sections.h> #include <asm-generic/sections.h>
extern char _eshared[], _ehead[]; extern char _ehead[];
#endif #endif
...@@ -98,9 +98,6 @@ extern char vmpoff_cmd[]; ...@@ -98,9 +98,6 @@ extern char vmpoff_cmd[];
#define SET_CONSOLE_VT220 do { console_mode = 4; } while (0) #define SET_CONSOLE_VT220 do { console_mode = 4; } while (0)
#define SET_CONSOLE_HVC do { console_mode = 5; } while (0) #define SET_CONSOLE_HVC do { console_mode = 5; } while (0)
#define NSS_NAME_SIZE 8
extern char kernel_nss_name[];
#ifdef CONFIG_PFAULT #ifdef CONFIG_PFAULT
extern int pfault_init(void); extern int pfault_init(void);
extern void pfault_fini(void); extern void pfault_fini(void);
......
...@@ -28,6 +28,7 @@ extern void arch_send_call_function_ipi_mask(const struct cpumask *mask); ...@@ -28,6 +28,7 @@ extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
extern void smp_call_online_cpu(void (*func)(void *), void *); extern void smp_call_online_cpu(void (*func)(void *), void *);
extern void smp_call_ipl_cpu(void (*func)(void *), void *); extern void smp_call_ipl_cpu(void (*func)(void *), void *);
extern void smp_emergency_stop(void);
extern int smp_find_processor_id(u16 address); extern int smp_find_processor_id(u16 address);
extern int smp_store_status(int cpu); extern int smp_store_status(int cpu);
...@@ -53,6 +54,10 @@ static inline void smp_call_online_cpu(void (*func)(void *), void *data) ...@@ -53,6 +54,10 @@ static inline void smp_call_online_cpu(void (*func)(void *), void *data)
func(data); func(data);
} }
static inline void smp_emergency_stop(void)
{
}
static inline int smp_find_processor_id(u16 address) { return 0; } static inline int smp_find_processor_id(u16 address) { return 0; }
static inline int smp_store_status(int cpu) { return 0; } static inline int smp_store_status(int cpu) { return 0; }
static inline int smp_vcpu_scheduled(int cpu) { return 1; } static inline int smp_vcpu_scheduled(int cpu) { return 1; }
......
...@@ -14,6 +14,7 @@ ...@@ -14,6 +14,7 @@
#include <asm/atomic_ops.h> #include <asm/atomic_ops.h>
#include <asm/barrier.h> #include <asm/barrier.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/alternative.h>
#define SPINLOCK_LOCKVAL (S390_lowcore.spinlock_lockval) #define SPINLOCK_LOCKVAL (S390_lowcore.spinlock_lockval)
...@@ -36,20 +37,15 @@ bool arch_vcpu_is_preempted(int cpu); ...@@ -36,20 +37,15 @@ bool arch_vcpu_is_preempted(int cpu);
* (the type definitions are in asm/spinlock_types.h) * (the type definitions are in asm/spinlock_types.h)
*/ */
void arch_lock_relax(int cpu); void arch_spin_relax(arch_spinlock_t *lock);
void arch_spin_lock_wait(arch_spinlock_t *); void arch_spin_lock_wait(arch_spinlock_t *);
int arch_spin_trylock_retry(arch_spinlock_t *); int arch_spin_trylock_retry(arch_spinlock_t *);
void arch_spin_lock_wait_flags(arch_spinlock_t *, unsigned long flags); void arch_spin_lock_setup(int cpu);
static inline void arch_spin_relax(arch_spinlock_t *lock)
{
arch_lock_relax(lock->lock);
}
static inline u32 arch_spin_lockval(int cpu) static inline u32 arch_spin_lockval(int cpu)
{ {
return ~cpu; return cpu + 1;
} }
static inline int arch_spin_value_unlocked(arch_spinlock_t lock) static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
...@@ -65,8 +61,7 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lp) ...@@ -65,8 +61,7 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lp)
static inline int arch_spin_trylock_once(arch_spinlock_t *lp) static inline int arch_spin_trylock_once(arch_spinlock_t *lp)
{ {
barrier(); barrier();
return likely(arch_spin_value_unlocked(*lp) && return likely(__atomic_cmpxchg_bool(&lp->lock, 0, SPINLOCK_LOCKVAL));
__atomic_cmpxchg_bool(&lp->lock, 0, SPINLOCK_LOCKVAL));
} }
static inline void arch_spin_lock(arch_spinlock_t *lp) static inline void arch_spin_lock(arch_spinlock_t *lp)
...@@ -79,7 +74,7 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *lp, ...@@ -79,7 +74,7 @@ static inline void arch_spin_lock_flags(arch_spinlock_t *lp,
unsigned long flags) unsigned long flags)
{ {
if (!arch_spin_trylock_once(lp)) if (!arch_spin_trylock_once(lp))
arch_spin_lock_wait_flags(lp, flags); arch_spin_lock_wait(lp);
} }
static inline int arch_spin_trylock(arch_spinlock_t *lp) static inline int arch_spin_trylock(arch_spinlock_t *lp)
...@@ -93,11 +88,10 @@ static inline void arch_spin_unlock(arch_spinlock_t *lp) ...@@ -93,11 +88,10 @@ static inline void arch_spin_unlock(arch_spinlock_t *lp)
{ {
typecheck(int, lp->lock); typecheck(int, lp->lock);
asm volatile( asm volatile(
#ifdef CONFIG_HAVE_MARCH_ZEC12_FEATURES ALTERNATIVE("", ".long 0xb2fa0070", 49) /* NIAI 7 */
" .long 0xb2fa0070\n" /* NIAI 7 */ " sth %1,%0\n"
#endif : "=Q" (((unsigned short *) &lp->lock)[1])
" st %1,%0\n" : "d" (0) : "cc", "memory");
: "=Q" (lp->lock) : "d" (0) : "cc", "memory");
} }
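For orientation, the two forms the patched unlock can take (an editorial
sketch with made-up register names; the NIAI semantics are an assumption,
namely a next-instruction-access-intent hint that lets the CPU hand the
cache line off right after the store):

	sth	%rN,2(%rlock)		# facility 49 not installed

	.long	0xb2fa0070		# NIAI 7, facility 49 installed
	sth	%rN,2(%rlock)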
/* /*
...@@ -115,164 +109,63 @@ static inline void arch_spin_unlock(arch_spinlock_t *lp) ...@@ -115,164 +109,63 @@ static inline void arch_spin_unlock(arch_spinlock_t *lp)
* read_can_lock - would read_trylock() succeed? * read_can_lock - would read_trylock() succeed?
* @lock: the rwlock in question. * @lock: the rwlock in question.
*/ */
#define arch_read_can_lock(x) ((int)(x)->lock >= 0) #define arch_read_can_lock(x) (((x)->cnts & 0xffff0000) == 0)
/** /**
* write_can_lock - would write_trylock() succeed? * write_can_lock - would write_trylock() succeed?
* @lock: the rwlock in question. * @lock: the rwlock in question.
*/ */
#define arch_write_can_lock(x) ((x)->lock == 0) #define arch_write_can_lock(x) ((x)->cnts == 0)
extern int _raw_read_trylock_retry(arch_rwlock_t *lp);
extern int _raw_write_trylock_retry(arch_rwlock_t *lp);
#define arch_read_lock_flags(lock, flags) arch_read_lock(lock) #define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
#define arch_write_lock_flags(lock, flags) arch_write_lock(lock) #define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
#define arch_read_relax(rw) barrier()
#define arch_write_relax(rw) barrier()
static inline int arch_read_trylock_once(arch_rwlock_t *rw) void arch_read_lock_wait(arch_rwlock_t *lp);
{ void arch_write_lock_wait(arch_rwlock_t *lp);
int old = ACCESS_ONCE(rw->lock);
return likely(old >= 0 &&
__atomic_cmpxchg_bool(&rw->lock, old, old + 1));
}
static inline int arch_write_trylock_once(arch_rwlock_t *rw)
{
int old = ACCESS_ONCE(rw->lock);
return likely(old == 0 &&
__atomic_cmpxchg_bool(&rw->lock, 0, 0x80000000));
}
#ifdef CONFIG_HAVE_MARCH_Z196_FEATURES
#define __RAW_OP_OR "lao"
#define __RAW_OP_AND "lan"
#define __RAW_OP_ADD "laa"
#define __RAW_LOCK(ptr, op_val, op_string) \
({ \
int old_val; \
\
typecheck(int *, ptr); \
asm volatile( \
op_string " %0,%2,%1\n" \
"bcr 14,0\n" \
: "=d" (old_val), "+Q" (*ptr) \
: "d" (op_val) \
: "cc", "memory"); \
old_val; \
})
#define __RAW_UNLOCK(ptr, op_val, op_string) \
({ \
int old_val; \
\
typecheck(int *, ptr); \
asm volatile( \
op_string " %0,%2,%1\n" \
: "=d" (old_val), "+Q" (*ptr) \
: "d" (op_val) \
: "cc", "memory"); \
old_val; \
})
extern void _raw_read_lock_wait(arch_rwlock_t *lp);
extern void _raw_write_lock_wait(arch_rwlock_t *lp, int prev);
static inline void arch_read_lock(arch_rwlock_t *rw) static inline void arch_read_lock(arch_rwlock_t *rw)
{ {
int old; int old;
old = __RAW_LOCK(&rw->lock, 1, __RAW_OP_ADD); old = __atomic_add(1, &rw->cnts);
if (old < 0) if (old & 0xffff0000)
_raw_read_lock_wait(rw); arch_read_lock_wait(rw);
} }
static inline void arch_read_unlock(arch_rwlock_t *rw) static inline void arch_read_unlock(arch_rwlock_t *rw)
{ {
__RAW_UNLOCK(&rw->lock, -1, __RAW_OP_ADD); __atomic_add_const_barrier(-1, &rw->cnts);
} }
static inline void arch_write_lock(arch_rwlock_t *rw) static inline void arch_write_lock(arch_rwlock_t *rw)
{ {
int old; if (!__atomic_cmpxchg_bool(&rw->cnts, 0, 0x30000))
arch_write_lock_wait(rw);
old = __RAW_LOCK(&rw->lock, 0x80000000, __RAW_OP_OR);
if (old != 0)
_raw_write_lock_wait(rw, old);
rw->owner = SPINLOCK_LOCKVAL;
} }
static inline void arch_write_unlock(arch_rwlock_t *rw) static inline void arch_write_unlock(arch_rwlock_t *rw)
{ {
rw->owner = 0; __atomic_add_barrier(-0x30000, &rw->cnts);
__RAW_UNLOCK(&rw->lock, 0x7fffffff, __RAW_OP_AND);
} }
#else /* CONFIG_HAVE_MARCH_Z196_FEATURES */
extern void _raw_read_lock_wait(arch_rwlock_t *lp); static inline int arch_read_trylock(arch_rwlock_t *rw)
extern void _raw_write_lock_wait(arch_rwlock_t *lp);
static inline void arch_read_lock(arch_rwlock_t *rw)
{
if (!arch_read_trylock_once(rw))
_raw_read_lock_wait(rw);
}
static inline void arch_read_unlock(arch_rwlock_t *rw)
{ {
int old; int old;
do { old = READ_ONCE(rw->cnts);
old = ACCESS_ONCE(rw->lock); return (!(old & 0xffff0000) &&
} while (!__atomic_cmpxchg_bool(&rw->lock, old, old - 1)); __atomic_cmpxchg_bool(&rw->cnts, old, old + 1));
}
static inline void arch_write_lock(arch_rwlock_t *rw)
{
if (!arch_write_trylock_once(rw))
_raw_write_lock_wait(rw);
rw->owner = SPINLOCK_LOCKVAL;
}
static inline void arch_write_unlock(arch_rwlock_t *rw)
{
typecheck(int, rw->lock);
rw->owner = 0;
asm volatile(
"st %1,%0\n"
: "+Q" (rw->lock)
: "d" (0)
: "cc", "memory");
}
#endif /* CONFIG_HAVE_MARCH_Z196_FEATURES */
static inline int arch_read_trylock(arch_rwlock_t *rw)
{
if (!arch_read_trylock_once(rw))
return _raw_read_trylock_retry(rw);
return 1;
} }
static inline int arch_write_trylock(arch_rwlock_t *rw) static inline int arch_write_trylock(arch_rwlock_t *rw)
{ {
if (!arch_write_trylock_once(rw) && !_raw_write_trylock_retry(rw)) int old;
return 0;
rw->owner = SPINLOCK_LOCKVAL;
return 1;
}
static inline void arch_read_relax(arch_rwlock_t *rw)
{
arch_lock_relax(rw->owner);
}
static inline void arch_write_relax(arch_rwlock_t *rw) old = READ_ONCE(rw->cnts);
{ return !old && __atomic_cmpxchg_bool(&rw->cnts, 0, 0x30000);
arch_lock_relax(rw->owner);
} }
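Reading the constants back, the cnts word packs both sides of the lock; a
sketch derived from the code above, not from the source:

static inline int sketch_readers(arch_rwlock_t *rw)
{
	return READ_ONCE(rw->cnts) & 0xffff;	/* active readers */
}

static inline int sketch_writer_held(arch_rwlock_t *rw)
{
	/* arch_write_lock() installs 0x30000, arch_write_unlock() removes it */
	return (READ_ONCE(rw->cnts) & 0x30000) == 0x30000;
}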
#endif /* __ASM_SPINLOCK_H */ #endif /* __ASM_SPINLOCK_H */
...@@ -13,8 +13,8 @@ typedef struct { ...@@ -13,8 +13,8 @@ typedef struct {
#define __ARCH_SPIN_LOCK_UNLOCKED { .lock = 0, } #define __ARCH_SPIN_LOCK_UNLOCKED { .lock = 0, }
typedef struct { typedef struct {
int lock; int cnts;
int owner; arch_spinlock_t wait;
} arch_rwlock_t; } arch_rwlock_t;
#define __ARCH_RW_LOCK_UNLOCKED { 0 } #define __ARCH_RW_LOCK_UNLOCKED { 0 }
......
...@@ -18,6 +18,9 @@ ...@@ -18,6 +18,9 @@
#define __HAVE_ARCH_MEMMOVE /* gcc builtin & arch function */ #define __HAVE_ARCH_MEMMOVE /* gcc builtin & arch function */
#define __HAVE_ARCH_MEMSCAN /* inline & arch function */ #define __HAVE_ARCH_MEMSCAN /* inline & arch function */
#define __HAVE_ARCH_MEMSET /* gcc builtin & arch function */ #define __HAVE_ARCH_MEMSET /* gcc builtin & arch function */
#define __HAVE_ARCH_MEMSET16 /* arch function */
#define __HAVE_ARCH_MEMSET32 /* arch function */
#define __HAVE_ARCH_MEMSET64 /* arch function */
#define __HAVE_ARCH_STRCAT /* inline & arch function */ #define __HAVE_ARCH_STRCAT /* inline & arch function */
#define __HAVE_ARCH_STRCMP /* arch function */ #define __HAVE_ARCH_STRCMP /* arch function */
#define __HAVE_ARCH_STRCPY /* inline & arch function */ #define __HAVE_ARCH_STRCPY /* inline & arch function */
...@@ -31,17 +34,17 @@ ...@@ -31,17 +34,17 @@
#define __HAVE_ARCH_STRSTR /* arch function */ #define __HAVE_ARCH_STRSTR /* arch function */
/* Prototypes for non-inlined arch strings functions. */ /* Prototypes for non-inlined arch strings functions. */
extern int memcmp(const void *, const void *, size_t); int memcmp(const void *s1, const void *s2, size_t n);
extern void *memcpy(void *, const void *, size_t); void *memcpy(void *dest, const void *src, size_t n);
extern void *memset(void *, int, size_t); void *memset(void *s, int c, size_t n);
extern void *memmove(void *, const void *, size_t); void *memmove(void *dest, const void *src, size_t n);
extern int strcmp(const char *,const char *); int strcmp(const char *s1, const char *s2);
extern size_t strlcat(char *, const char *, size_t); size_t strlcat(char *dest, const char *src, size_t n);
extern size_t strlcpy(char *, const char *, size_t); size_t strlcpy(char *dest, const char *src, size_t size);
extern char *strncat(char *, const char *, size_t); char *strncat(char *dest, const char *src, size_t n);
extern char *strncpy(char *, const char *, size_t); char *strncpy(char *dest, const char *src, size_t n);
extern char *strrchr(const char *, int); char *strrchr(const char *s, int c);
extern char *strstr(const char *, const char *); char *strstr(const char *s1, const char *s2);
#undef __HAVE_ARCH_STRCHR #undef __HAVE_ARCH_STRCHR
#undef __HAVE_ARCH_STRNCHR #undef __HAVE_ARCH_STRNCHR
...@@ -50,7 +53,26 @@ extern char *strstr(const char *, const char *); ...@@ -50,7 +53,26 @@ extern char *strstr(const char *, const char *);
#undef __HAVE_ARCH_STRSEP #undef __HAVE_ARCH_STRSEP
#undef __HAVE_ARCH_STRSPN #undef __HAVE_ARCH_STRSPN
#if !defined(IN_ARCH_STRING_C) void *__memset16(uint16_t *s, uint16_t v, size_t count);
void *__memset32(uint32_t *s, uint32_t v, size_t count);
void *__memset64(uint64_t *s, uint64_t v, size_t count);
static inline void *memset16(uint16_t *s, uint16_t v, size_t count)
{
return __memset16(s, v, count * sizeof(v));
}
static inline void *memset32(uint32_t *s, uint32_t v, size_t count)
{
return __memset32(s, v, count * sizeof(v));
}
static inline void *memset64(uint64_t *s, uint64_t v, size_t count)
{
return __memset64(s, v, count * sizeof(v));
}
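A usage sketch (buffer and fill value are made up for illustration): the
count argument is in elements, while the __memset* primitives take bytes,
hence the multiplication by sizeof(v) above.

	u16 tab[256];

	memset16(tab, 0xbeef, ARRAY_SIZE(tab));
	/* expands to __memset16(tab, 0xbeef, 512) */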
#if !defined(IN_ARCH_STRING_C) && (!defined(CONFIG_FORTIFY_SOURCE) || defined(__NO_FORTIFY))
static inline void *memchr(const void * s, int c, size_t n) static inline void *memchr(const void * s, int c, size_t n)
{ {
......
...@@ -37,8 +37,8 @@ static inline void restore_access_regs(unsigned int *acrs) ...@@ -37,8 +37,8 @@ static inline void restore_access_regs(unsigned int *acrs)
save_ri_cb(prev->thread.ri_cb); \ save_ri_cb(prev->thread.ri_cb); \
save_gs_cb(prev->thread.gs_cb); \ save_gs_cb(prev->thread.gs_cb); \
} \ } \
update_cr_regs(next); \
if (next->mm) { \ if (next->mm) { \
update_cr_regs(next); \
set_cpu_flag(CIF_FPU); \ set_cpu_flag(CIF_FPU); \
restore_access_regs(&next->thread.acrs[0]); \ restore_access_regs(&next->thread.acrs[0]); \
restore_ri_cb(next->thread.ri_cb, prev->thread.ri_cb); \ restore_ri_cb(next->thread.ri_cb, prev->thread.ri_cb); \
......
...@@ -156,7 +156,8 @@ static inline unsigned char topology_mnest_limit(void) ...@@ -156,7 +156,8 @@ static inline unsigned char topology_mnest_limit(void)
struct topology_core { struct topology_core {
unsigned char nl; unsigned char nl;
unsigned char reserved0[3]; unsigned char reserved0[3];
unsigned char :6; unsigned char :5;
unsigned char d:1;
unsigned char pp:2; unsigned char pp:2;
unsigned char reserved1; unsigned char reserved1;
unsigned short origin; unsigned short origin;
...@@ -198,4 +199,5 @@ struct service_level { ...@@ -198,4 +199,5 @@ struct service_level {
int register_service_level(struct service_level *); int register_service_level(struct service_level *);
int unregister_service_level(struct service_level *); int unregister_service_level(struct service_level *);
int sthyi_fill(void *dst, u64 *rc);
#endif /* __ASM_S390_SYSINFO_H */ #endif /* __ASM_S390_SYSINFO_H */
...@@ -17,6 +17,7 @@ struct cpu_topology_s390 { ...@@ -17,6 +17,7 @@ struct cpu_topology_s390 {
unsigned short book_id; unsigned short book_id;
unsigned short drawer_id; unsigned short drawer_id;
unsigned short node_id; unsigned short node_id;
unsigned short dedicated : 1;
cpumask_t thread_mask; cpumask_t thread_mask;
cpumask_t core_mask; cpumask_t core_mask;
cpumask_t book_mask; cpumask_t book_mask;
...@@ -35,6 +36,7 @@ extern cpumask_t cpus_with_topology; ...@@ -35,6 +36,7 @@ extern cpumask_t cpus_with_topology;
#define topology_book_cpumask(cpu) (&cpu_topology[cpu].book_mask) #define topology_book_cpumask(cpu) (&cpu_topology[cpu].book_mask)
#define topology_drawer_id(cpu) (cpu_topology[cpu].drawer_id) #define topology_drawer_id(cpu) (cpu_topology[cpu].drawer_id)
#define topology_drawer_cpumask(cpu) (&cpu_topology[cpu].drawer_mask) #define topology_drawer_cpumask(cpu) (&cpu_topology[cpu].drawer_mask)
#define topology_cpu_dedicated(cpu) (cpu_topology[cpu].dedicated)
#define mc_capable() 1 #define mc_capable() 1
......
...@@ -47,6 +47,7 @@ struct vdso_per_cpu_data { ...@@ -47,6 +47,7 @@ struct vdso_per_cpu_data {
extern struct vdso_data *vdso_data; extern struct vdso_data *vdso_data;
void vdso_alloc_boot_cpu(struct lowcore *lowcore);
int vdso_alloc_per_cpu(struct lowcore *lowcore); int vdso_alloc_per_cpu(struct lowcore *lowcore);
void vdso_free_per_cpu(struct lowcore *lowcore); void vdso_free_per_cpu(struct lowcore *lowcore);
......
/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
/*
* definition for virtio for kvm on s390
*
* Copyright IBM Corp. 2008
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License (version 2 only)
* as published by the Free Software Foundation.
*
* Author(s): Christian Borntraeger <borntraeger@de.ibm.com>
*/
#ifndef __KVM_S390_VIRTIO_H
#define __KVM_S390_VIRTIO_H
#include <linux/types.h>
struct kvm_device_desc {
/* The device type: console, network, disk etc. Type 0 terminates. */
__u8 type;
/* The number of virtqueues (first in config array) */
__u8 num_vq;
/*
* The number of bytes of feature bits. Multiply by 2: one for host
* features and one for guest acknowledgements.
*/
__u8 feature_len;
/* The number of bytes of the config array after virtqueues. */
__u8 config_len;
/* A status byte, written by the Guest. */
__u8 status;
__u8 config[0];
};
/*
* This is how we expect the device configuration field for a virtqueue
* to be laid out in config space.
*/
struct kvm_vqconfig {
/* The token returned with an interrupt. Set by the guest */
__u64 token;
/* The address of the virtio ring */
__u64 address;
/* The number of entries in the virtio_ring */
__u16 num;
};
#define KVM_S390_VIRTIO_NOTIFY 0
#define KVM_S390_VIRTIO_RESET 1
#define KVM_S390_VIRTIO_SET_STATUS 2
/* The alignment to use between consumer and producer parts of vring.
* This is pagesize for historical reasons. */
#define KVM_S390_VIRTIO_RING_ALIGN 4096
/* These values are supposed to be in ext_params on an interrupt */
#define VIRTIO_PARAM_MASK 0xff
#define VIRTIO_PARAM_VRING_INTERRUPT 0x0
#define VIRTIO_PARAM_CONFIG_CHANGED 0x1
#define VIRTIO_PARAM_DEV_ADD 0x2
#endif
#ifndef _UAPI_ASM_STHYI_H
#define _UAPI_ASM_STHYI_H
#define STHYI_FC_CP_IFL_CAP 0
#endif /* _UAPI_ASM_STHYI_H */
...@@ -316,7 +316,8 @@ ...@@ -316,7 +316,8 @@
#define __NR_pwritev2 377 #define __NR_pwritev2 377
#define __NR_s390_guarded_storage 378 #define __NR_s390_guarded_storage 378
#define __NR_statx 379 #define __NR_statx 379
#define NR_syscalls 380 #define __NR_s390_sthyi 380
#define NR_syscalls 381
/* /*
* There are some system calls that are not present on 64 bit, some * There are some system calls that are not present on 64 bit, some
......
...@@ -34,6 +34,8 @@ AFLAGS_REMOVE_head.o += $(CC_FLAGS_MARCH) ...@@ -34,6 +34,8 @@ AFLAGS_REMOVE_head.o += $(CC_FLAGS_MARCH)
AFLAGS_head.o += -march=z900 AFLAGS_head.o += -march=z900
endif endif
CFLAGS_als.o += -D__NO_FORTIFY
# #
# Passing null pointers is ok for smp code, since we access the lowcore here. # Passing null pointers is ok for smp code, since we access the lowcore here.
# #
...@@ -56,7 +58,7 @@ obj-y := traps.o time.o process.o base.o early.o setup.o idle.o vtime.o ...@@ -56,7 +58,7 @@ obj-y := traps.o time.o process.o base.o early.o setup.o idle.o vtime.o
obj-y += processor.o sys_s390.o ptrace.o signal.o cpcmd.o ebcdic.o nmi.o obj-y += processor.o sys_s390.o ptrace.o signal.o cpcmd.o ebcdic.o nmi.o
obj-y += debug.o irq.o ipl.o dis.o diag.o vdso.o als.o obj-y += debug.o irq.o ipl.o dis.o diag.o vdso.o als.o
obj-y += sysinfo.o jump_label.o lgr.o os_info.o machine_kexec.o pgm_check.o obj-y += sysinfo.o jump_label.o lgr.o os_info.o machine_kexec.o pgm_check.o
obj-y += runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o obj-y += runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o
obj-y += entry.o reipl.o relocate_kernel.o kdebugfs.o obj-y += entry.o reipl.o relocate_kernel.o kdebugfs.o
extra-y += head.o head64.o vmlinux.lds extra-y += head.o head64.o vmlinux.lds
...@@ -75,6 +77,7 @@ obj-$(CONFIG_KPROBES) += kprobes.o ...@@ -75,6 +77,7 @@ obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o obj-$(CONFIG_FUNCTION_TRACER) += mcount.o ftrace.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump.o obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
obj-$(CONFIG_UPROBES) += uprobes.o obj-$(CONFIG_UPROBES) += uprobes.o
obj-$(CONFIG_ALTERNATIVES) += alternative.o
obj-$(CONFIG_PERF_EVENTS) += perf_event.o perf_cpum_cf.o perf_cpum_sf.o obj-$(CONFIG_PERF_EVENTS) += perf_event.o perf_cpum_cf.o perf_cpum_sf.o
obj-$(CONFIG_PERF_EVENTS) += perf_cpum_cf_events.o obj-$(CONFIG_PERF_EVENTS) += perf_cpum_cf_events.o
......
#include <linux/module.h>
#include <asm/alternative.h>
#include <asm/facility.h>
#define MAX_PATCH_LEN (255 - 1)
static int __initdata_or_module alt_instr_disabled;
static int __init disable_alternative_instructions(char *str)
{
alt_instr_disabled = 1;
return 0;
}
early_param("noaltinstr", disable_alternative_instructions);
struct brcl_insn {
u16 opc;
s32 disp;
} __packed;
static u16 __initdata_or_module nop16 = 0x0700;
static u32 __initdata_or_module nop32 = 0x47000000;
static struct brcl_insn __initdata_or_module nop48 = {
0xc004, 0
};
static const void *nops[] __initdata_or_module = {
&nop16,
&nop32,
&nop48
};
static void __init_or_module add_jump_padding(void *insns, unsigned int len)
{
struct brcl_insn brcl = {
0xc0f4,
len / 2
};
memcpy(insns, &brcl, sizeof(brcl));
insns += sizeof(brcl);
len -= sizeof(brcl);
while (len > 0) {
memcpy(insns, &nop16, 2);
insns += 2;
len -= 2;
}
}
static void __init_or_module add_padding(void *insns, unsigned int len)
{
if (len > 6)
add_jump_padding(insns, len);
else if (len >= 2)
memcpy(insns, nops[len / 2 - 1], len);
}
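/*
 * Worked example (illustrative): patching a 10-byte original with a
 * 2-byte replacement leaves an 8-byte gap, which is > 6, so
 * add_jump_padding() writes a 6-byte brcl that branches over the gap
 * followed by one 2-byte nop; a 4-byte gap is instead filled with the
 * single 4-byte nop from nops[1].
 */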
static void __init_or_module __apply_alternatives(struct alt_instr *start,
struct alt_instr *end)
{
struct alt_instr *a;
u8 *instr, *replacement;
u8 insnbuf[MAX_PATCH_LEN];
/*
* The scan order must be from start to end: an alternative scanned
* later may overwrite code patched by an earlier alternative.
*/
for (a = start; a < end; a++) {
int insnbuf_sz = 0;
instr = (u8 *)&a->instr_offset + a->instr_offset;
replacement = (u8 *)&a->repl_offset + a->repl_offset;
if (!test_facility(a->facility))
continue;
if (unlikely(a->instrlen % 2 || a->replacementlen % 2)) {
WARN_ONCE(1, "cpu alternatives instructions length is "
"odd, skipping patching\n");
continue;
}
memcpy(insnbuf, replacement, a->replacementlen);
insnbuf_sz = a->replacementlen;
if (a->instrlen > a->replacementlen) {
add_padding(insnbuf + a->replacementlen,
a->instrlen - a->replacementlen);
insnbuf_sz += a->instrlen - a->replacementlen;
}
s390_kernel_write(instr, insnbuf, insnbuf_sz);
}
}
void __init_or_module apply_alternatives(struct alt_instr *start,
struct alt_instr *end)
{
if (!alt_instr_disabled)
__apply_alternatives(start, end);
}
extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
void __init apply_alternative_instructions(void)
{
apply_alternatives(__alt_instructions, __alt_instructions_end);
}
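A hypothetical in-kernel user, shown for orientation only (the pattern
matches the arch_spin_unlock() hunk earlier in this diff; the arguments
are old instructions, new instructions, facility number):

static inline void niai_hint(void)
{
	/* patched to NIAI 7 at boot if facility 49 is installed */
	asm volatile(ALTERNATIVE("", ".long 0xb2fa0070", 49));
}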
...@@ -14,6 +14,7 @@ ...@@ -14,6 +14,7 @@
#include <asm/vdso.h> #include <asm/vdso.h>
#include <asm/pgtable.h> #include <asm/pgtable.h>
#include <asm/gmap.h> #include <asm/gmap.h>
#include <asm/nmi.h>
/* /*
* Make sure that the compiler is new enough. We want a compiler that * Make sure that the compiler is new enough. We want a compiler that
...@@ -159,6 +160,7 @@ int main(void) ...@@ -159,6 +160,7 @@ int main(void)
OFFSET(__LC_LAST_UPDATE_CLOCK, lowcore, last_update_clock); OFFSET(__LC_LAST_UPDATE_CLOCK, lowcore, last_update_clock);
OFFSET(__LC_INT_CLOCK, lowcore, int_clock); OFFSET(__LC_INT_CLOCK, lowcore, int_clock);
OFFSET(__LC_MCCK_CLOCK, lowcore, mcck_clock); OFFSET(__LC_MCCK_CLOCK, lowcore, mcck_clock);
OFFSET(__LC_CLOCK_COMPARATOR, lowcore, clock_comparator);
OFFSET(__LC_BOOT_CLOCK, lowcore, boot_clock); OFFSET(__LC_BOOT_CLOCK, lowcore, boot_clock);
OFFSET(__LC_CURRENT, lowcore, current_task); OFFSET(__LC_CURRENT, lowcore, current_task);
OFFSET(__LC_KERNEL_STACK, lowcore, kernel_stack); OFFSET(__LC_KERNEL_STACK, lowcore, kernel_stack);
...@@ -194,6 +196,9 @@ int main(void) ...@@ -194,6 +196,9 @@ int main(void)
OFFSET(__LC_CREGS_SAVE_AREA, lowcore, cregs_save_area); OFFSET(__LC_CREGS_SAVE_AREA, lowcore, cregs_save_area);
OFFSET(__LC_PGM_TDB, lowcore, pgm_tdb); OFFSET(__LC_PGM_TDB, lowcore, pgm_tdb);
BLANK(); BLANK();
/* extended machine check save area */
OFFSET(__MCESA_GS_SAVE_AREA, mcesa, guarded_storage_save_area);
BLANK();
/* gmap/sie offsets */ /* gmap/sie offsets */
OFFSET(__GMAP_ASCE, gmap, asce); OFFSET(__GMAP_ASCE, gmap, asce);
OFFSET(__SIE_PROG0C, kvm_s390_sie_block, prog0c); OFFSET(__SIE_PROG0C, kvm_s390_sie_block, prog0c);
......
...@@ -181,3 +181,4 @@ COMPAT_SYSCALL_WRAP3(mlock2, unsigned long, start, size_t, len, int, flags); ...@@ -181,3 +181,4 @@ COMPAT_SYSCALL_WRAP3(mlock2, unsigned long, start, size_t, len, int, flags);
COMPAT_SYSCALL_WRAP6(copy_file_range, int, fd_in, loff_t __user *, off_in, int, fd_out, loff_t __user *, off_out, size_t, len, unsigned int, flags); COMPAT_SYSCALL_WRAP6(copy_file_range, int, fd_in, loff_t __user *, off_in, int, fd_out, loff_t __user *, off_out, size_t, len, unsigned int, flags);
COMPAT_SYSCALL_WRAP2(s390_guarded_storage, int, command, struct gs_cb *, gs_cb); COMPAT_SYSCALL_WRAP2(s390_guarded_storage, int, command, struct gs_cb *, gs_cb);
COMPAT_SYSCALL_WRAP5(statx, int, dfd, const char __user *, path, unsigned, flags, unsigned, mask, struct statx __user *, buffer); COMPAT_SYSCALL_WRAP5(statx, int, dfd, const char __user *, path, unsigned, flags, unsigned, mask, struct statx __user *, buffer);
COMPAT_SYSCALL_WRAP4(s390_sthyi, unsigned long, code, void __user *, info, u64 __user *, rc, unsigned long, flags);
...@@ -31,14 +31,6 @@ ...@@ -31,14 +31,6 @@
#include <asm/facility.h> #include <asm/facility.h>
#include "entry.h" #include "entry.h"
/*
* Create a Kernel NSS if the SAVESYS= parameter is defined
*/
#define DEFSYS_CMD_SIZE 128
#define SAVESYS_CMD_SIZE 32
char kernel_nss_name[NSS_NAME_SIZE + 1];
static void __init setup_boot_command_line(void); static void __init setup_boot_command_line(void);
/* /*
...@@ -59,134 +51,6 @@ static void __init reset_tod_clock(void) ...@@ -59,134 +51,6 @@ static void __init reset_tod_clock(void)
S390_lowcore.last_update_clock = TOD_UNIX_EPOCH; S390_lowcore.last_update_clock = TOD_UNIX_EPOCH;
} }
#ifdef CONFIG_SHARED_KERNEL
int __init savesys_ipl_nss(char *cmd, const int cmdlen);
asm(
" .section .init.text,\"ax\",@progbits\n"
" .align 4\n"
" .type savesys_ipl_nss, @function\n"
"savesys_ipl_nss:\n"
" stmg 6,15,48(15)\n"
" lgr 14,3\n"
" sam31\n"
" diag 2,14,0x8\n"
" sam64\n"
" lgr 2,14\n"
" lmg 6,15,48(15)\n"
" br 14\n"
" .size savesys_ipl_nss, .-savesys_ipl_nss\n"
" .previous\n");
static __initdata char upper_command_line[COMMAND_LINE_SIZE];
static noinline __init void create_kernel_nss(void)
{
unsigned int i, stext_pfn, eshared_pfn, end_pfn, min_size;
#ifdef CONFIG_BLK_DEV_INITRD
unsigned int sinitrd_pfn, einitrd_pfn;
#endif
int response;
int hlen;
size_t len;
char *savesys_ptr;
char defsys_cmd[DEFSYS_CMD_SIZE];
char savesys_cmd[SAVESYS_CMD_SIZE];
/* Do nothing if we are not running under VM */
if (!MACHINE_IS_VM)
return;
/* Convert COMMAND_LINE to upper case */
for (i = 0; i < strlen(boot_command_line); i++)
upper_command_line[i] = toupper(boot_command_line[i]);
savesys_ptr = strstr(upper_command_line, "SAVESYS=");
if (!savesys_ptr)
return;
savesys_ptr += 8; /* Point to the beginning of the NSS name */
for (i = 0; i < NSS_NAME_SIZE; i++) {
if (savesys_ptr[i] == ' ' || savesys_ptr[i] == '\0')
break;
kernel_nss_name[i] = savesys_ptr[i];
}
stext_pfn = PFN_DOWN(__pa(&_stext));
eshared_pfn = PFN_DOWN(__pa(&_eshared));
end_pfn = PFN_UP(__pa(&_end));
min_size = end_pfn << 2;
hlen = snprintf(defsys_cmd, DEFSYS_CMD_SIZE,
"DEFSYS %s 00000-%.5X EW %.5X-%.5X SR %.5X-%.5X",
kernel_nss_name, stext_pfn - 1, stext_pfn,
eshared_pfn - 1, eshared_pfn, end_pfn);
#ifdef CONFIG_BLK_DEV_INITRD
if (INITRD_START && INITRD_SIZE) {
sinitrd_pfn = PFN_DOWN(__pa(INITRD_START));
einitrd_pfn = PFN_UP(__pa(INITRD_START + INITRD_SIZE));
min_size = einitrd_pfn << 2;
hlen += snprintf(defsys_cmd + hlen, DEFSYS_CMD_SIZE - hlen,
" EW %.5X-%.5X", sinitrd_pfn, einitrd_pfn);
}
#endif
snprintf(defsys_cmd + hlen, DEFSYS_CMD_SIZE - hlen,
" EW MINSIZE=%.7iK PARMREGS=0-13", min_size);
defsys_cmd[DEFSYS_CMD_SIZE - 1] = '\0';
snprintf(savesys_cmd, SAVESYS_CMD_SIZE, "SAVESYS %s \n IPL %s",
kernel_nss_name, kernel_nss_name);
savesys_cmd[SAVESYS_CMD_SIZE - 1] = '\0';
__cpcmd(defsys_cmd, NULL, 0, &response);
if (response != 0) {
pr_err("Defining the Linux kernel NSS failed with rc=%d\n",
response);
kernel_nss_name[0] = '\0';
return;
}
len = strlen(savesys_cmd);
ASCEBC(savesys_cmd, len);
response = savesys_ipl_nss(savesys_cmd, len);
/* On success: response is equal to the command size,
* max SAVESYS_CMD_SIZE.
* On error: response contains the numeric portion of the CP error message.
* For SAVESYS it will be >= 263;
* for a missing privilege class it will be 1.
*/
if (response > SAVESYS_CMD_SIZE || response == 1) {
pr_err("Saving the Linux kernel NSS failed with rc=%d\n",
response);
kernel_nss_name[0] = '\0';
return;
}
/* re-initialize cputime accounting. */
get_tod_clock_ext(tod_clock_base);
S390_lowcore.last_update_clock = *(__u64 *) &tod_clock_base[1];
S390_lowcore.last_update_timer = 0x7fffffffffffffffULL;
S390_lowcore.user_timer = 0;
S390_lowcore.system_timer = 0;
asm volatile("SPT 0(%0)" : : "a" (&S390_lowcore.last_update_timer));
/* re-setup boot command line with new ipl vm parms */
ipl_update_parameters();
setup_boot_command_line();
ipl_flags = IPL_NSS_VALID;
}
#else /* CONFIG_SHARED_KERNEL */
static inline void create_kernel_nss(void) { }
#endif /* CONFIG_SHARED_KERNEL */
/* /*
* Clear bss memory * Clear bss memory
*/ */
...@@ -375,8 +239,10 @@ static __init void detect_machine_facilities(void) ...@@ -375,8 +239,10 @@ static __init void detect_machine_facilities(void)
S390_lowcore.machine_flags |= MACHINE_FLAG_IDTE; S390_lowcore.machine_flags |= MACHINE_FLAG_IDTE;
if (test_facility(40)) if (test_facility(40))
S390_lowcore.machine_flags |= MACHINE_FLAG_LPP; S390_lowcore.machine_flags |= MACHINE_FLAG_LPP;
if (test_facility(50) && test_facility(73)) if (test_facility(50) && test_facility(73)) {
S390_lowcore.machine_flags |= MACHINE_FLAG_TE; S390_lowcore.machine_flags |= MACHINE_FLAG_TE;
__ctl_set_bit(0, 55);
}
if (test_facility(51)) if (test_facility(51))
S390_lowcore.machine_flags |= MACHINE_FLAG_TLB_LC; S390_lowcore.machine_flags |= MACHINE_FLAG_TLB_LC;
if (test_facility(129)) { if (test_facility(129)) {
...@@ -549,10 +415,6 @@ static void __init setup_boot_command_line(void) ...@@ -549,10 +415,6 @@ static void __init setup_boot_command_line(void)
append_to_cmdline(append_ipl_scpdata); append_to_cmdline(append_ipl_scpdata);
} }
/*
* Save ipl parameters, clear bss memory, initialize storage keys
* and create a kernel NSS at startup if the SAVESYS= parm is defined
*/
void __init startup_init(void) void __init startup_init(void)
{ {
reset_tod_clock(); reset_tod_clock();
...@@ -569,7 +431,6 @@ void __init startup_init(void) ...@@ -569,7 +431,6 @@ void __init startup_init(void)
setup_arch_string(); setup_arch_string();
ipl_update_parameters(); ipl_update_parameters();
setup_boot_command_line(); setup_boot_command_line();
create_kernel_nss();
detect_diag9c(); detect_diag9c();
detect_diag44(); detect_diag44();
detect_machine_facilities(); detect_machine_facilities();
......
...@@ -13,6 +13,7 @@ ...@@ -13,6 +13,7 @@
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/cache.h> #include <asm/cache.h>
#include <asm/ctl_reg.h>
#include <asm/errno.h> #include <asm/errno.h>
#include <asm/ptrace.h> #include <asm/ptrace.h>
#include <asm/thread_info.h> #include <asm/thread_info.h>
...@@ -952,15 +953,56 @@ load_fpu_regs: ...@@ -952,15 +953,56 @@ load_fpu_regs:
*/ */
ENTRY(mcck_int_handler) ENTRY(mcck_int_handler)
STCK __LC_MCCK_CLOCK STCK __LC_MCCK_CLOCK
la %r1,4095 # revalidate r1 la %r1,4095 # validate r1
spt __LC_CPU_TIMER_SAVE_AREA-4095(%r1) # revalidate cpu timer spt __LC_CPU_TIMER_SAVE_AREA-4095(%r1) # validate cpu timer
lmg %r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r1)# revalidate gprs sckc __LC_CLOCK_COMPARATOR # validate comparator
lam %a0,%a15,__LC_AREGS_SAVE_AREA-4095(%r1) # validate acrs
lmg %r0,%r15,__LC_GPREGS_SAVE_AREA-4095(%r1)# validate gprs
lg %r12,__LC_CURRENT lg %r12,__LC_CURRENT
larl %r13,cleanup_critical larl %r13,cleanup_critical
lmg %r8,%r9,__LC_MCK_OLD_PSW lmg %r8,%r9,__LC_MCK_OLD_PSW
TSTMSK __LC_MCCK_CODE,MCCK_CODE_SYSTEM_DAMAGE TSTMSK __LC_MCCK_CODE,MCCK_CODE_SYSTEM_DAMAGE
jo .Lmcck_panic # yes -> rest of mcck code invalid jo .Lmcck_panic # yes -> rest of mcck code invalid
lghi %r14,__LC_CPU_TIMER_SAVE_AREA TSTMSK __LC_MCCK_CODE,MCCK_CODE_CR_VALID
jno .Lmcck_panic # control registers invalid -> panic
la %r14,4095
lctlg %c0,%c15,__LC_CREGS_SAVE_AREA-4095(%r14) # validate ctl regs
ptlb
lg %r11,__LC_MCESAD-4095(%r14) # extended machine check save area
nill %r11,0xfc00 # MCESA_ORIGIN_MASK
TSTMSK __LC_CREGS_SAVE_AREA+16-4095(%r14),CR2_GUARDED_STORAGE
jno 0f
TSTMSK __LC_MCCK_CODE,MCCK_CODE_GS_VALID
jno 0f
.insn rxy,0xe3000000004d,0,__MCESA_GS_SAVE_AREA(%r11) # LGSC
0: l %r14,__LC_FP_CREG_SAVE_AREA-4095(%r14)
TSTMSK __LC_MCCK_CODE,MCCK_CODE_FC_VALID
jo 0f
sr %r14,%r14
0: sfpc %r14
TSTMSK __LC_MACHINE_FLAGS,MACHINE_FLAG_VX
jo 0f
lghi %r14,__LC_FPREGS_SAVE_AREA
ld %f0,0(%r14)
ld %f1,8(%r14)
ld %f2,16(%r14)
ld %f3,24(%r14)
ld %f4,32(%r14)
ld %f5,40(%r14)
ld %f6,48(%r14)
ld %f7,56(%r14)
ld %f8,64(%r14)
ld %f9,72(%r14)
ld %f10,80(%r14)
ld %f11,88(%r14)
ld %f12,96(%r14)
ld %f13,104(%r14)
ld %f14,112(%r14)
ld %f15,120(%r14)
j 1f
0: VLM %v0,%v15,0,%r11
VLM %v16,%v31,256,%r11
1: lghi %r14,__LC_CPU_TIMER_SAVE_AREA
mvc __LC_MCCK_ENTER_TIMER(8),0(%r14) mvc __LC_MCCK_ENTER_TIMER(8),0(%r14)
TSTMSK __LC_MCCK_CODE,MCCK_CODE_CPU_TIMER_VALID TSTMSK __LC_MCCK_CODE,MCCK_CODE_CPU_TIMER_VALID
jo 3f jo 3f
...@@ -976,9 +1018,13 @@ ENTRY(mcck_int_handler) ...@@ -976,9 +1018,13 @@ ENTRY(mcck_int_handler)
la %r14,__LC_LAST_UPDATE_TIMER la %r14,__LC_LAST_UPDATE_TIMER
2: spt 0(%r14) 2: spt 0(%r14)
mvc __LC_MCCK_ENTER_TIMER(8),0(%r14) mvc __LC_MCCK_ENTER_TIMER(8),0(%r14)
3: TSTMSK __LC_MCCK_CODE,(MCCK_CODE_PSW_MWP_VALID|MCCK_CODE_PSW_IA_VALID) 3: TSTMSK __LC_MCCK_CODE,MCCK_CODE_PSW_MWP_VALID
jno .Lmcck_panic # no -> skip cleanup critical jno .Lmcck_panic
SWITCH_ASYNC __LC_GPREGS_SAVE_AREA+64,__LC_MCCK_ENTER_TIMER tmhh %r8,0x0001 # interrupting from user ?
jnz 4f
TSTMSK __LC_MCCK_CODE,MCCK_CODE_PSW_IA_VALID
jno .Lmcck_panic
4: SWITCH_ASYNC __LC_GPREGS_SAVE_AREA+64,__LC_MCCK_ENTER_TIMER
.Lmcck_skip: .Lmcck_skip:
lghi %r14,__LC_GPREGS_SAVE_AREA+64 lghi %r14,__LC_GPREGS_SAVE_AREA+64
stmg %r0,%r7,__PT_R0(%r11) stmg %r0,%r7,__PT_R0(%r11)
......
...@@ -78,6 +78,7 @@ long sys_s390_runtime_instr(int command, int signum); ...@@ -78,6 +78,7 @@ long sys_s390_runtime_instr(int command, int signum);
long sys_s390_guarded_storage(int command, struct gs_cb __user *); long sys_s390_guarded_storage(int command, struct gs_cb __user *);
long sys_s390_pci_mmio_write(unsigned long, const void __user *, size_t); long sys_s390_pci_mmio_write(unsigned long, const void __user *, size_t);
long sys_s390_pci_mmio_read(unsigned long, void __user *, size_t); long sys_s390_pci_mmio_read(unsigned long, void __user *, size_t);
long sys_s390_sthyi(unsigned long function_code, void __user *buffer, u64 __user *return_code, unsigned long flags);
DECLARE_PER_CPU(u64, mt_cycles[8]); DECLARE_PER_CPU(u64, mt_cycles[8]);
......
...@@ -12,11 +12,10 @@ ...@@ -12,11 +12,10 @@
#include <asm/guarded_storage.h> #include <asm/guarded_storage.h>
#include "entry.h" #include "entry.h"
void exit_thread_gs(void) void guarded_storage_release(struct task_struct *tsk)
{ {
kfree(current->thread.gs_cb); kfree(tsk->thread.gs_cb);
kfree(current->thread.gs_bc_cb); kfree(tsk->thread.gs_bc_cb);
current->thread.gs_cb = current->thread.gs_bc_cb = NULL;
} }
static int gs_enable(void) static int gs_enable(void)
......
...@@ -279,8 +279,6 @@ static __init enum ipl_type get_ipl_type(void) ...@@ -279,8 +279,6 @@ static __init enum ipl_type get_ipl_type(void)
{ {
struct ipl_parameter_block *ipl = IPL_PARMBLOCK_START; struct ipl_parameter_block *ipl = IPL_PARMBLOCK_START;
if (ipl_flags & IPL_NSS_VALID)
return IPL_TYPE_NSS;
if (!(ipl_flags & IPL_DEVNO_VALID)) if (!(ipl_flags & IPL_DEVNO_VALID))
return IPL_TYPE_UNKNOWN; return IPL_TYPE_UNKNOWN;
if (!(ipl_flags & IPL_PARMBLOCK_VALID)) if (!(ipl_flags & IPL_PARMBLOCK_VALID))
...@@ -533,22 +531,6 @@ static struct attribute_group ipl_ccw_attr_group_lpar = { ...@@ -533,22 +531,6 @@ static struct attribute_group ipl_ccw_attr_group_lpar = {
.attrs = ipl_ccw_attrs_lpar .attrs = ipl_ccw_attrs_lpar
}; };
/* NSS ipl device attributes */
DEFINE_IPL_ATTR_RO(ipl_nss, name, "%s\n", kernel_nss_name);
static struct attribute *ipl_nss_attrs[] = {
&sys_ipl_type_attr.attr,
&sys_ipl_nss_name_attr.attr,
&sys_ipl_ccw_loadparm_attr.attr,
&sys_ipl_vm_parm_attr.attr,
NULL,
};
static struct attribute_group ipl_nss_attr_group = {
.attrs = ipl_nss_attrs,
};
/* UNKNOWN ipl device attributes */ /* UNKNOWN ipl device attributes */
static struct attribute *ipl_unknown_attrs[] = { static struct attribute *ipl_unknown_attrs[] = {
...@@ -598,9 +580,6 @@ static int __init ipl_init(void) ...@@ -598,9 +580,6 @@ static int __init ipl_init(void)
case IPL_TYPE_FCP_DUMP: case IPL_TYPE_FCP_DUMP:
rc = sysfs_create_group(&ipl_kset->kobj, &ipl_fcp_attr_group); rc = sysfs_create_group(&ipl_kset->kobj, &ipl_fcp_attr_group);
break; break;
case IPL_TYPE_NSS:
rc = sysfs_create_group(&ipl_kset->kobj, &ipl_nss_attr_group);
break;
default: default:
rc = sysfs_create_group(&ipl_kset->kobj, rc = sysfs_create_group(&ipl_kset->kobj,
&ipl_unknown_attr_group); &ipl_unknown_attr_group);
...@@ -1172,18 +1151,6 @@ static int __init reipl_nss_init(void) ...@@ -1172,18 +1151,6 @@ static int __init reipl_nss_init(void)
return rc; return rc;
reipl_block_ccw_init(reipl_block_nss); reipl_block_ccw_init(reipl_block_nss);
if (ipl_info.type == IPL_TYPE_NSS) {
memset(reipl_block_nss->ipl_info.ccw.nss_name,
' ', NSS_NAME_SIZE);
memcpy(reipl_block_nss->ipl_info.ccw.nss_name,
kernel_nss_name, strlen(kernel_nss_name));
ASCEBC(reipl_block_nss->ipl_info.ccw.nss_name, NSS_NAME_SIZE);
reipl_block_nss->ipl_info.ccw.vm_flags |=
DIAG308_VM_FLAGS_NSS_VALID;
reipl_block_ccw_fill_parms(reipl_block_nss);
}
reipl_capabilities |= IPL_TYPE_NSS; reipl_capabilities |= IPL_TYPE_NSS;
return 0; return 0;
} }
...@@ -1971,9 +1938,6 @@ void __init setup_ipl(void) ...@@ -1971,9 +1938,6 @@ void __init setup_ipl(void)
ipl_info.data.fcp.lun = IPL_PARMBLOCK_START->ipl_info.fcp.lun; ipl_info.data.fcp.lun = IPL_PARMBLOCK_START->ipl_info.fcp.lun;
break; break;
case IPL_TYPE_NSS: case IPL_TYPE_NSS:
strncpy(ipl_info.data.nss.name, kernel_nss_name,
sizeof(ipl_info.data.nss.name));
break;
case IPL_TYPE_UNKNOWN: case IPL_TYPE_UNKNOWN:
/* We have no info to copy */ /* We have no info to copy */
break; break;
......
...@@ -161,8 +161,6 @@ struct swap_insn_args { ...@@ -161,8 +161,6 @@ struct swap_insn_args {
static int swap_instruction(void *data) static int swap_instruction(void *data)
{ {
struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
unsigned long status = kcb->kprobe_status;
struct swap_insn_args *args = data; struct swap_insn_args *args = data;
struct ftrace_insn new_insn, *insn; struct ftrace_insn new_insn, *insn;
struct kprobe *p = args->p; struct kprobe *p = args->p;
...@@ -185,9 +183,7 @@ static int swap_instruction(void *data) ...@@ -185,9 +183,7 @@ static int swap_instruction(void *data)
ftrace_generate_nop_insn(&new_insn); ftrace_generate_nop_insn(&new_insn);
} }
skip_ftrace: skip_ftrace:
kcb->kprobe_status = KPROBE_SWAP_INST;
s390_kernel_write(p->addr, &new_insn, len); s390_kernel_write(p->addr, &new_insn, len);
kcb->kprobe_status = status;
return 0; return 0;
} }
NOKPROBE_SYMBOL(swap_instruction); NOKPROBE_SYMBOL(swap_instruction);
...@@ -574,9 +570,6 @@ static int kprobe_trap_handler(struct pt_regs *regs, int trapnr) ...@@ -574,9 +570,6 @@ static int kprobe_trap_handler(struct pt_regs *regs, int trapnr)
const struct exception_table_entry *entry; const struct exception_table_entry *entry;
switch(kcb->kprobe_status) { switch(kcb->kprobe_status) {
case KPROBE_SWAP_INST:
/* We are here because the instruction replacement failed */
return 0;
case KPROBE_HIT_SS: case KPROBE_HIT_SS:
case KPROBE_REENTER: case KPROBE_REENTER:
/* /*
......
...@@ -106,7 +106,7 @@ static void __do_machine_kdump(void *image) ...@@ -106,7 +106,7 @@ static void __do_machine_kdump(void *image)
static noinline void __machine_kdump(void *image) static noinline void __machine_kdump(void *image)
{ {
struct mcesa *mcesa; struct mcesa *mcesa;
unsigned long cr2_old, cr2_new; union ctlreg2 cr2_old, cr2_new;
int this_cpu, cpu; int this_cpu, cpu;
lgr_info_log(); lgr_info_log();
...@@ -123,11 +123,12 @@ static noinline void __machine_kdump(void *image) ...@@ -123,11 +123,12 @@ static noinline void __machine_kdump(void *image)
if (MACHINE_HAS_VX) if (MACHINE_HAS_VX)
save_vx_regs((__vector128 *) mcesa->vector_save_area); save_vx_regs((__vector128 *) mcesa->vector_save_area);
if (MACHINE_HAS_GS) { if (MACHINE_HAS_GS) {
__ctl_store(cr2_old, 2, 2); __ctl_store(cr2_old.val, 2, 2);
cr2_new = cr2_old | (1UL << 4); cr2_new = cr2_old;
__ctl_load(cr2_new, 2, 2); cr2_new.gse = 1;
__ctl_load(cr2_new.val, 2, 2);
save_gs_cb((struct gs_cb *) mcesa->guarded_storage_save_area); save_gs_cb((struct gs_cb *) mcesa->guarded_storage_save_area);
__ctl_load(cr2_old, 2, 2); __ctl_load(cr2_old.val, 2, 2);
} }
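/*
 * Sketch of the typed access (an assumption about asm/ctl_reg.h): union
 * ctlreg2 overlays the raw register value with named bits, with .gse at
 * 1UL << 4, so "cr2_new.gse = 1" expresses what the removed code did
 * with "cr2_new = cr2_old | (1UL << 4)".
 */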
/* /*
* To create a good backchain for this CPU in the dump store_status * To create a good backchain for this CPU in the dump store_status
...@@ -145,7 +146,7 @@ static noinline void __machine_kdump(void *image) ...@@ -145,7 +146,7 @@ static noinline void __machine_kdump(void *image)
/* /*
* Check if kdump checksums are valid: We call purgatory with parameter "0" * Check if kdump checksums are valid: We call purgatory with parameter "0"
*/ */
static int kdump_csum_valid(struct kimage *image) static bool kdump_csum_valid(struct kimage *image)
{ {
#ifdef CONFIG_CRASH_DUMP #ifdef CONFIG_CRASH_DUMP
int (*start_kdump)(int) = (void *)image->start; int (*start_kdump)(int) = (void *)image->start;
...@@ -154,9 +155,9 @@ static int kdump_csum_valid(struct kimage *image) ...@@ -154,9 +155,9 @@ static int kdump_csum_valid(struct kimage *image)
__arch_local_irq_stnsm(0xfb); /* disable DAT */ __arch_local_irq_stnsm(0xfb); /* disable DAT */
rc = start_kdump(0); rc = start_kdump(0);
__arch_local_irq_stosm(0x04); /* enable DAT */ __arch_local_irq_stosm(0x04); /* enable DAT */
return rc ? 0 : -EINVAL; return rc == 0;
#else #else
return -EINVAL; return false;
#endif #endif
} }
...@@ -219,10 +220,6 @@ int machine_kexec_prepare(struct kimage *image) ...@@ -219,10 +220,6 @@ int machine_kexec_prepare(struct kimage *image)
{ {
void *reboot_code_buffer; void *reboot_code_buffer;
/* Can't replace kernel image since it is read-only. */
if (ipl_flags & IPL_NSS_VALID)
return -EOPNOTSUPP;
if (image->type == KEXEC_TYPE_CRASH) if (image->type == KEXEC_TYPE_CRASH)
return machine_kexec_prepare_kdump(); return machine_kexec_prepare_kdump();
...@@ -269,6 +266,7 @@ static void __do_machine_kexec(void *data) ...@@ -269,6 +266,7 @@ static void __do_machine_kexec(void *data)
s390_reset_system(); s390_reset_system();
data_mover = (relocate_kernel_t) page_to_phys(image->control_code_page); data_mover = (relocate_kernel_t) page_to_phys(image->control_code_page);
__arch_local_irq_stnsm(0xfb); /* disable DAT - avoid no-execute */
/* Call the moving routine */ /* Call the moving routine */
(*data_mover)(&image->head, image->start); (*data_mover)(&image->head, image->start);
......
...@@ -31,6 +31,7 @@ ...@@ -31,6 +31,7 @@
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/moduleloader.h> #include <linux/moduleloader.h>
#include <linux/bug.h> #include <linux/bug.h>
#include <asm/alternative.h>
#if 0 #if 0
#define DEBUGP printk #define DEBUGP printk
...@@ -429,6 +430,22 @@ int module_finalize(const Elf_Ehdr *hdr, ...@@ -429,6 +430,22 @@ int module_finalize(const Elf_Ehdr *hdr,
const Elf_Shdr *sechdrs, const Elf_Shdr *sechdrs,
struct module *me) struct module *me)
{ {
const Elf_Shdr *s;
char *secstrings;
if (IS_ENABLED(CONFIG_ALTERNATIVES)) {
secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) {
if (!strcmp(".altinstructions",
secstrings + s->sh_name)) {
/* patch .altinstructions */
void *aseg = (void *)s->sh_addr;
apply_alternatives(aseg, aseg + s->sh_size);
}
}
}
jump_label_apply_nops(me); jump_label_apply_nops(me);
return 0; return 0;
} }
...@@ -12,6 +12,9 @@ ...@@ -12,6 +12,9 @@
#include <linux/init.h> #include <linux/init.h>
#include <linux/errno.h> #include <linux/errno.h>
#include <linux/hardirq.h> #include <linux/hardirq.h>
#include <linux/log2.h>
#include <linux/kprobes.h>
#include <linux/slab.h>
#include <linux/time.h> #include <linux/time.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/sched/signal.h> #include <linux/sched/signal.h>
...@@ -37,13 +40,94 @@ struct mcck_struct { ...@@ -37,13 +40,94 @@ struct mcck_struct {
}; };
static DEFINE_PER_CPU(struct mcck_struct, cpu_mcck); static DEFINE_PER_CPU(struct mcck_struct, cpu_mcck);
static struct kmem_cache *mcesa_cache;
static unsigned long mcesa_origin_lc;
static void s390_handle_damage(void) static inline int nmi_needs_mcesa(void)
{ {
smp_send_stop(); return MACHINE_HAS_VX || MACHINE_HAS_GS;
}
static inline unsigned long nmi_get_mcesa_size(void)
{
if (MACHINE_HAS_GS)
return MCESA_MAX_SIZE;
return MCESA_MIN_SIZE;
}
/*
* The initial machine check extended save area for the boot CPU.
* It will be replaced by nmi_init() with an allocated structure.
* The structure is required for machine checks happening early in
* the boot process.
*/
static struct mcesa boot_mcesa __initdata __aligned(MCESA_MAX_SIZE);
void __init nmi_alloc_boot_cpu(struct lowcore *lc)
{
if (!nmi_needs_mcesa())
return;
lc->mcesad = (unsigned long) &boot_mcesa;
if (MACHINE_HAS_GS)
lc->mcesad |= ilog2(MCESA_MAX_SIZE);
}
static int __init nmi_init(void)
{
unsigned long origin, cr0, size;
if (!nmi_needs_mcesa())
return 0;
size = nmi_get_mcesa_size();
if (size > MCESA_MIN_SIZE)
mcesa_origin_lc = ilog2(size);
/* create slab cache for the machine-check-extended-save-areas */
mcesa_cache = kmem_cache_create("nmi_save_areas", size, size, 0, NULL);
if (!mcesa_cache)
panic("Couldn't create nmi save area cache");
origin = (unsigned long) kmem_cache_alloc(mcesa_cache, GFP_KERNEL);
if (!origin)
panic("Couldn't allocate nmi save area");
/* The pointer is stored with the mcesa_origin_lc bits ORed in */
kmemleak_not_leak((void *) origin);
__ctl_store(cr0, 0, 0);
__ctl_clear_bit(0, 28); /* disable lowcore protection */
/* Replace boot_mcesa on the boot CPU */
S390_lowcore.mcesad = origin | mcesa_origin_lc;
__ctl_load(cr0, 0, 0);
return 0;
}
early_initcall(nmi_init);
int nmi_alloc_per_cpu(struct lowcore *lc)
{
unsigned long origin;
if (!nmi_needs_mcesa())
return 0;
origin = (unsigned long) kmem_cache_alloc(mcesa_cache, GFP_KERNEL);
if (!origin)
return -ENOMEM;
/* The pointer is stored with the mcesa_origin_lc bits ORed in */
kmemleak_not_leak((void *) origin);
lc->mcesad = origin | mcesa_origin_lc;
return 0;
}
void nmi_free_per_cpu(struct lowcore *lc)
{
if (!nmi_needs_mcesa())
return;
kmem_cache_free(mcesa_cache, (void *)(lc->mcesad & MCESA_ORIGIN_MASK));
}
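A consumer decodes the combined lowcore value by masking; a sketch
(MCESA_ORIGIN_MASK appears in the code above, while MCESA_LC_MASK and the
lowcore pointer name are assumptions):

	struct mcesa *mcesa;
	unsigned long lc_bits;

	mcesa = (struct mcesa *)(lc->mcesad & MCESA_ORIGIN_MASK);
	lc_bits = lc->mcesad & MCESA_LC_MASK;	/* ilog2() of the area size */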
static notrace void s390_handle_damage(void)
{
smp_emergency_stop();
disabled_wait((unsigned long) __builtin_return_address(0)); disabled_wait((unsigned long) __builtin_return_address(0));
while (1); while (1);
} }
NOKPROBE_SYMBOL(s390_handle_damage);
/* /*
* Main machine check handler function. Will be called with interrupts enabled * Main machine check handler function. Will be called with interrupts enabled
...@@ -100,18 +184,16 @@ void s390_handle_mcck(void) ...@@ -100,18 +184,16 @@ void s390_handle_mcck(void)
EXPORT_SYMBOL_GPL(s390_handle_mcck); EXPORT_SYMBOL_GPL(s390_handle_mcck);
/* /*
* returns 0 if all registers could be validated * returns 0 if all required registers are available
* returns 1 otherwise * returns 1 otherwise
*/ */
static int notrace s390_validate_registers(union mci mci, int umode) static int notrace s390_check_registers(union mci mci, int umode)
{ {
union ctlreg2 cr2;
int kill_task; int kill_task;
u64 zero;
void *fpt_save_area; void *fpt_save_area;
struct mcesa *mcesa;
kill_task = 0; kill_task = 0;
zero = 0;
if (!mci.gr) { if (!mci.gr) {
/* /*
...@@ -122,18 +204,13 @@ static int notrace s390_validate_registers(union mci mci, int umode) ...@@ -122,18 +204,13 @@ static int notrace s390_validate_registers(union mci mci, int umode)
s390_handle_damage(); s390_handle_damage();
kill_task = 1; kill_task = 1;
} }
/* Validate control registers */ /* Check control registers */
if (!mci.cr) { if (!mci.cr) {
/* /*
* Control registers have unknown contents. * Control registers have unknown contents.
* Can't recover and therefore stopping machine. * Can't recover and therefore stopping machine.
*/ */
s390_handle_damage(); s390_handle_damage();
} else {
asm volatile(
" lctlg 0,15,0(%0)\n"
" ptlb\n"
: : "a" (&S390_lowcore.cregs_save_area) : "memory");
} }
if (!mci.fp) { if (!mci.fp) {
/* /*
...@@ -141,7 +218,6 @@ static int notrace s390_validate_registers(union mci mci, int umode) ...@@ -141,7 +218,6 @@ static int notrace s390_validate_registers(union mci mci, int umode)
* kernel currently uses floating point registers the * kernel currently uses floating point registers the
* system is stopped. If the process has its floating * system is stopped. If the process has its floating
* point registers loaded it is terminated. * point registers loaded it is terminated.
* Otherwise just revalidate the registers.
*/ */
if (S390_lowcore.fpu_flags & KERNEL_VXR_V0V7) if (S390_lowcore.fpu_flags & KERNEL_VXR_V0V7)
s390_handle_damage(); s390_handle_damage();
...@@ -155,72 +231,29 @@ static int notrace s390_validate_registers(union mci mci, int umode) ...@@ -155,72 +231,29 @@ static int notrace s390_validate_registers(union mci mci, int umode)
* If the kernel currently uses the floating point * If the kernel currently uses the floating point
* registers and needs the FPC register the system is * registers and needs the FPC register the system is
* stopped. If the process has its floating point * stopped. If the process has its floating point
* registers loaded it is terminated. Otherwise the * registers loaded it is terminated.
* FPC is just revalidated.
*/ */
if (S390_lowcore.fpu_flags & KERNEL_FPC) if (S390_lowcore.fpu_flags & KERNEL_FPC)
s390_handle_damage(); s390_handle_damage();
asm volatile("lfpc %0" : : "Q" (zero));
if (!test_cpu_flag(CIF_FPU)) if (!test_cpu_flag(CIF_FPU))
kill_task = 1; kill_task = 1;
} else {
asm volatile("lfpc %0"
: : "Q" (S390_lowcore.fpt_creg_save_area));
} }
mcesa = (struct mcesa *)(S390_lowcore.mcesad & MCESA_ORIGIN_MASK); if (MACHINE_HAS_VX) {
if (!MACHINE_HAS_VX) {
/* Validate floating point registers */
asm volatile(
" ld 0,0(%0)\n"
" ld 1,8(%0)\n"
" ld 2,16(%0)\n"
" ld 3,24(%0)\n"
" ld 4,32(%0)\n"
" ld 5,40(%0)\n"
" ld 6,48(%0)\n"
" ld 7,56(%0)\n"
" ld 8,64(%0)\n"
" ld 9,72(%0)\n"
" ld 10,80(%0)\n"
" ld 11,88(%0)\n"
" ld 12,96(%0)\n"
" ld 13,104(%0)\n"
" ld 14,112(%0)\n"
" ld 15,120(%0)\n"
: : "a" (fpt_save_area) : "memory");
} else {
/* Validate vector registers */
union ctlreg0 cr0;
if (!mci.vr) { if (!mci.vr) {
/* /*
* Vector registers can't be restored. If the kernel * Vector registers can't be restored. If the kernel
* currently uses vector registers the system is * currently uses vector registers the system is
* stopped. If the process has its vector registers * stopped. If the process has its vector registers
* loaded it is terminated. Otherwise just revalidate * loaded it is terminated.
* the registers.
*/ */
if (S390_lowcore.fpu_flags & KERNEL_VXR) if (S390_lowcore.fpu_flags & KERNEL_VXR)
s390_handle_damage(); s390_handle_damage();
if (!test_cpu_flag(CIF_FPU)) if (!test_cpu_flag(CIF_FPU))
kill_task = 1; kill_task = 1;
} }
cr0.val = S390_lowcore.cregs_save_area[0];
cr0.afp = cr0.vx = 1;
__ctl_load(cr0.val, 0, 0);
asm volatile(
" la 1,%0\n"
" .word 0xe70f,0x1000,0x0036\n" /* vlm 0,15,0(1) */
" .word 0xe70f,0x1100,0x0c36\n" /* vlm 16,31,256(1) */
: : "Q" (*(struct vx_array *) mcesa->vector_save_area)
: "1");
__ctl_load(S390_lowcore.cregs_save_area[0], 0, 0);
} }
/* Validate access registers */ /* Check if access registers are valid */
asm volatile(
" lam 0,15,0(%0)"
: : "a" (&S390_lowcore.access_regs_save_area));
if (!mci.ar) { if (!mci.ar) {
/* /*
* Access registers have unknown contents. * Access registers have unknown contents.
...@@ -228,53 +261,41 @@ static int notrace s390_validate_registers(union mci mci, int umode) ...@@ -228,53 +261,41 @@ static int notrace s390_validate_registers(union mci mci, int umode)
*/ */
kill_task = 1; kill_task = 1;
} }
/* Validate guarded storage registers */ /* Check guarded storage registers */
if (MACHINE_HAS_GS && (S390_lowcore.cregs_save_area[2] & (1UL << 4))) { cr2.val = S390_lowcore.cregs_save_area[2];
if (!mci.gs) if (cr2.gse) {
if (!mci.gs) {
/* /*
* Guarded storage register can't be restored and * Guarded storage register can't be restored and
* the current processes uses guarded storage. * the current processes uses guarded storage.
* It has to be terminated. * It has to be terminated.
*/ */
kill_task = 1; kill_task = 1;
else }
load_gs_cb((struct gs_cb *)
mcesa->guarded_storage_save_area);
} }
/*
* We don't even try to validate the TOD register, since we simply
* can't write something sensible into that register.
*/
/*
* See if we can validate the TOD programmable register with its
* old contents (should be zero) otherwise set it to zero.
*/
if (!mci.pr)
asm volatile(
" sr 0,0\n"
" sckpf"
: : : "0", "cc");
else
asm volatile(
" l 0,%0\n"
" sckpf"
: : "Q" (S390_lowcore.tod_progreg_save_area)
: "0", "cc");
/* Validate clock comparator register */
set_clock_comparator(S390_lowcore.clock_comparator);
/* Check if old PSW is valid */ /* Check if old PSW is valid */
if (!mci.wp) if (!mci.wp) {
/* /*
* Can't tell if we come from user or kernel mode * Can't tell if we come from user or kernel mode
* -> stopping machine. * -> stopping machine.
*/ */
s390_handle_damage(); s390_handle_damage();
}
/* Check for invalid kernel instruction address */
if (!mci.ia && !umode) {
/*
* The instruction address got lost while running
* in the kernel -> stopping machine.
*/
s390_handle_damage();
}
if (!mci.ms || !mci.pm || !mci.ia) if (!mci.ms || !mci.pm || !mci.ia)
kill_task = 1; kill_task = 1;
return kill_task; return kill_task;
} }
NOKPROBE_SYMBOL(s390_check_registers);
/* /*
* Backup the guest's machine check info to its description block * Backup the guest's machine check info to its description block
...@@ -300,6 +321,7 @@ static void notrace s390_backup_mcck_info(struct pt_regs *regs) ...@@ -300,6 +321,7 @@ static void notrace s390_backup_mcck_info(struct pt_regs *regs)
mcck_backup->failing_storage_address mcck_backup->failing_storage_address
= S390_lowcore.failing_storage_address; = S390_lowcore.failing_storage_address;
} }
NOKPROBE_SYMBOL(s390_backup_mcck_info);
#define MAX_IPD_COUNT 29 #define MAX_IPD_COUNT 29
#define MAX_IPD_TIME (5 * 60 * USEC_PER_SEC) /* 5 minutes */ #define MAX_IPD_TIME (5 * 60 * USEC_PER_SEC) /* 5 minutes */
...@@ -372,7 +394,7 @@ void notrace s390_do_machine_check(struct pt_regs *regs) ...@@ -372,7 +394,7 @@ void notrace s390_do_machine_check(struct pt_regs *regs)
s390_handle_damage(); s390_handle_damage();
} }
} }
if (s390_validate_registers(mci, user_mode(regs))) { if (s390_check_registers(mci, user_mode(regs))) {
/* /*
* Couldn't restore all register contents for the * Couldn't restore all register contents for the
* user space process -> mark task for termination. * user space process -> mark task for termination.
...@@ -443,6 +465,7 @@ void notrace s390_do_machine_check(struct pt_regs *regs) ...@@ -443,6 +465,7 @@ void notrace s390_do_machine_check(struct pt_regs *regs)
clear_cpu_flag(CIF_MCCK_GUEST); clear_cpu_flag(CIF_MCCK_GUEST);
nmi_exit(); nmi_exit();
} }
NOKPROBE_SYMBOL(s390_do_machine_check);
static int __init machine_check_init(void) static int __init machine_check_init(void)
{ {
......
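The rewritten check function above no longer reloads any registers from the save areas; it only inspects the validity bits of the machine check interruption code (union mci) and picks one of three outcomes: carry on, mark the interrupted task for termination, or stop the machine via s390_handle_damage(). As a rough illustration of that decision structure, here is a self-contained C sketch; the MCI_* masks and the handle_damage()/check_registers() names are invented for this example (the real union mci in arch/s390/include/asm/nmi.h uses bitfields), so read it as a model of the pattern rather than kernel code.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Made-up stand-ins for a few validity bits of the machine check
 * interruption code; the real union mci encodes these as bitfields. */
#define MCI_WP (1UL << 0)	/* PSW mask and key valid */
#define MCI_IA (1UL << 1)	/* PSW instruction address valid */
#define MCI_CR (1UL << 2)	/* control registers valid */
#define MCI_AR (1UL << 3)	/* access registers valid */

/* Stand-in for s390_handle_damage(): unrecoverable, stop everything. */
static void handle_damage(void)
{
	fprintf(stderr, "unrecoverable machine check\n");
	exit(1);
}

/* Mirrors the decision structure above: return whether the
 * interrupted task has to be terminated. */
static bool check_registers(unsigned long mci, bool umode)
{
	bool kill_task = false;

	if (!(mci & MCI_CR))
		handle_damage();	/* CRs unknown: cannot recover */
	if (!(mci & MCI_AR))
		kill_task = true;	/* ARs unknown: kill the task */
	if (!(mci & MCI_WP))
		handle_damage();	/* cannot tell user/kernel mode */
	if (!(mci & MCI_IA) && !umode)
		handle_damage();	/* instruction address lost in kernel */
	if (!(mci & MCI_IA))
		kill_task = true;	/* lossy, but user task only */
	return kill_task;
}

int main(void)
{
	/* Example: everything valid except the instruction address,
	 * taken while running in user mode -> the task must be killed. */
	unsigned long mci = MCI_WP | MCI_CR | MCI_AR;

	printf("kill_task=%d\n", check_registers(mci, true));
	return 0;
}

Every condition that leaves the kernel in an unknowable state (control registers or the PSW mask lost, or the instruction address lost while in kernel mode) funnels into the fatal path, while losses that only affect the interrupted user task merely set kill_task.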
@@ -823,12 +823,8 @@ static int cpumsf_pmu_event_init(struct perf_event *event)
 	}
 
 	/* Check online status of the CPU to which the event is pinned */
-	if (event->cpu >= 0) {
-		if ((unsigned int)event->cpu >= nr_cpumask_bits)
-			return -ENODEV;
-		if (!cpu_online(event->cpu))
-			return -ENODEV;
-	}
+	if (event->cpu >= 0 && !cpu_online(event->cpu))
+		return -ENODEV;
 
 	/* Force reset of idle/hv excludes regardless of what the
 	 * user requested.
...
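The cpumsf_pmu_event_init() hunk above folds a nested range check plus online check into one condition, dropping the explicit comparison against nr_cpumask_bits; presumably the perf core already rejects out-of-range CPU numbers before a PMU's event_init is invoked. The short-circuit behaviour the combined test relies on can be shown with a toy stand-in (cpu_online() below is a made-up model, not the kernel implementation):

#include <stdbool.h>
#include <stdio.h>

#define ENODEV 19

/* Made-up stand-in: pretend only CPU 0 is online. */
static bool cpu_online(int cpu)
{
	return cpu == 0;
}

/* The collapsed check: -1 means "not pinned to a CPU" and always
 * passes; short-circuit evaluation guarantees cpu_online() is only
 * ever called with cpu >= 0. */
static int check_event_cpu(int cpu)
{
	if (cpu >= 0 && !cpu_online(cpu))
		return -ENODEV;
	return 0;
}

int main(void)
{
	printf("%d %d %d\n",
	       check_event_cpu(-1),	/* 0: any CPU is fine */
	       check_event_cpu(0),	/* 0: pinned CPU online */
	       check_event_cpu(1));	/* -19: pinned CPU offline */
	return 0;
}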
@@ -29,7 +29,6 @@
 ENTRY(relocate_kernel)
 		basr	%r13,0		# base address
 	.base:
-		stnsm	sys_msk-.base(%r13),0xfb	# disable DAT
 		stctg	%c0,%c15,ctlregs-.base(%r13)
 		stmg	%r0,%r15,gprregs-.base(%r13)
 		lghi	%r0,3
@@ -103,8 +102,6 @@ ENTRY(relocate_kernel)
 		.align	8
 	load_psw:
 		.long	0x00080000,0x80000000
-	sys_msk:
-		.quad	0
 	ctlregs:
 		.rept	16
 		.quad	0
...
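The relocate_kernel hunks drop the stnsm that turned off DAT on entry, together with the sys_msk scratch quadword it saved the old mask into, presumably because the datamover is now entered with DAT already disabled. For reference, stnsm ("store then AND system mask") stores the current PSW system-mask byte at the operand address and then ANDs the mask with the immediate operand; 0xfb clears the 0x04 bit, which is PSW bit 5, the DAT bit. A plain C model of that semantic follows (the global psw_sysmask and its initial value are illustrative assumptions):

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: model the PSW system-mask byte as a global.
 * 0x07 stands for an example state with DAT, I/O and external
 * interrupts enabled (PSW bits 5, 6 and 7). */
static uint8_t psw_sysmask = 0x07;

/* stnsm semantics: store the current system mask at the operand
 * address, then AND the system mask with the immediate. */
static void stnsm(uint8_t *save, uint8_t mask)
{
	*save = psw_sysmask;
	psw_sysmask &= mask;
}

int main(void)
{
	uint8_t old;

	stnsm(&old, 0xfb);	/* 0xfb clears 0x04 = PSW bit 5 (DAT) */
	printf("old=%#x new=%#x\n", old, psw_sysmask);
	return 0;
}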