- 30 Sep, 2020 1 commit
-
-
Sven Schnelle authored
Remove the cad command line option, as the instruction was never published and never used by userspace. Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 29 Sep, 2020 5 commits
-
-
Vasily Gorbik authored
Currently we overflow save_area_sync and write over save_area_async. Although this is not a real problem, make startup_pgm_check_handler consistent with the late pgm check handler and store [%r0,%r7] directly into gpregs_save_area. Reviewed-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Since commit 39421627 ("s390: remove broken hibernate / power management support") _swsusp_reset_dma is unused and can be safely removed. Reviewed-by: Sven Schnelle <svens@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Currently there are several minor problems with the randomization base generation code:

1. It might misbehave in low memory conditions. In particular, there might be enough space for the kernel on [0, block_sum], but after
       if (base < safe_addr)
               base = safe_addr;
   it might not be enough anymore.

2. It does not correctly handle the minimal address constraint. In the condition
       if (base < safe_addr)
               base = safe_addr;
   a synthetic value is compared with an address. If we have a memory setup with memory holes due to offline memory regions, and safe_addr is close to the end of the first online memory block, we might position the kernel in invalid memory.

3. The block_sum calculation logic contains an off-by-one error. Let's say we have a memory block in which the kernel fits perfectly (end - start == kernel_size). In this case:
       if (end - start < kernel_size)
               continue;
       block_sum += end - start - kernel_size;
   block_sum is not increased, although this is a valid kernel position.

So, address the problems listed and explain the algorithm used. Besides that, restructuring the code makes it possible to extend the kernel positioning algorithm further. Currently we pick a position within a single [min, max] range (min = safe_addr, max = memory_limit). In the future we can do that for multiple ranges as well (by calling count_valid_kernel_positions for each range). Reviewed-by: Philipp Rudo <prudo@linux.ibm.com> Reviewed-by: Alexander Egorenkov <egorenar@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
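A minimal standalone sketch (not the actual kernel code) of counting valid kernel positions over a set of memory blocks, with the off-by-one from point 3 fixed: a block where end - start == kernel_size contributes exactly one position. The names and the byte-granular counting are assumptions for illustration only.

    struct mem_block { unsigned long start, end; };

    static unsigned long count_positions(const struct mem_block *blocks, int n,
                                         unsigned long min, unsigned long max,
                                         unsigned long kernel_size)
    {
            unsigned long start, end, pos = 0;
            int i;

            for (i = 0; i < n; i++) {
                    /* clamp the block to the allowed [min, max] range */
                    start = blocks[i].start < min ? min : blocks[i].start;
                    end = blocks[i].end > max ? max : blocks[i].end;
                    if (start >= end || end - start < kernel_size)
                            continue;
                    pos += end - start - kernel_size + 1;   /* "+ 1": an exact fit counts as one position */
            }
            return pos;
    }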
-
Vasily Gorbik authored
0 is a valid random value. To avoid mixing it up with the error code 0 as a return value, make get_random() take an extra argument to output the random value and return an error code. Reviewed-by: Philipp Rudo <prudo@linux.ibm.com> Reviewed-by: Alexander Egorenkov <egorenar@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
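A hedged sketch of the reworked interface described above: the random value goes to an output parameter so that 0 stays a valid result, while the return value only carries an error code. read_hw_random() is a hypothetical backend helper used here purely for illustration.

    static int get_random(unsigned long limit, unsigned long *value)
    {
            unsigned long raw;

            if (read_hw_random(&raw))       /* hypothetical entropy source */
                    return -1;
            *value = raw % limit;
            return 0;
    }

A caller checks the return code first and only consumes *value on success.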
-
Qinglang Miao authored
Simplify the return expression. Link: https://lkml.kernel.org/r/20200921131101.93037-1-miaoqinglang@huawei.com Signed-off-by: Qinglang Miao <miaoqinglang@huawei.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 26 Sep, 2020 6 commits
-
-
Sven Schnelle authored
No need to have two mutexes; while at it, rename the remaining one to stp_mutex. Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Alexander Egorenkov <egorenar@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Sven Schnelle authored
This patch introduces /sys/devices/system/stp/scheduled_leap_seconds, which will contain either 0,0 if no leap second is scheduled, or the UTC timestamp + leap second offset. Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Alexander Egorenkov <egorenar@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
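A hedged sketch of what the sysfs show routine for this attribute could look like; the flag and the two value variables are placeholders for the data the STP code keeps, not the actual upstream names.

    static ssize_t scheduled_leap_seconds_show(struct device *dev,
                                               struct device_attribute *attr,
                                               char *buf)
    {
            if (!leap_second_scheduled)     /* placeholder flag */
                    return sprintf(buf, "0,0\n");
            /* placeholder variables: UTC timestamp of the event and the new offset */
            return sprintf(buf, "%llu,%d\n", leap_utc_timestamp, leap_offset);
    }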
-
Sven Schnelle authored
In the current implementation, leap seconds are only synchronized during the bootup process when the STP clock is synced. If the Leap second offset (LSO) changes, the machine must be rebooted, which is not desired. This patch adds the required code to handle Leap second changes during runtime. If the Leap second changes, a Configuration change machine check is triggered. The STP code then schedules a Leap second insertion/deletion with do_adjtimex(). Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Alexander Egorenkov <egorenar@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
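A hedged sketch of scheduling a leap second insertion from kernel code via do_adjtimex(); whether the real STP machine-check handling sets additional timex fields or defers the call is not shown here, and STA_DEL would be used for a deletion instead.

    #include <linux/timex.h>

    static void stp_schedule_leap_insertion(void)
    {
            struct __kernel_timex txc = {
                    .modes  = ADJ_STATUS,
                    .status = STA_INS,      /* insert a leap second at the next UTC midnight */
            };

            do_adjtimex(&txc);
    }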
-
Sven Schnelle authored
In hardware-dependent headers, using u32 is easier to read and less error-prone. Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Alexander Egorenkov <egorenar@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Sven Schnelle authored
Use __packed instead of __attribute__((packed)). Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Alexander Egorenkov <egorenar@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
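A tiny illustrative struct (hypothetical layout, assuming <linux/types.h> for u32) showing the shorter kernel spelling together with fixed-width types as in the two commits above:

    struct hw_response {
            u32 code;       /* exactly 32 bits, as the hardware defines it */
            u32 flags;
    } __packed;             /* kernel shorthand for __attribute__((packed)) */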
-
Sven Schnelle authored
The sysfs functions might race with stp_work_fn. To prevent that, add the required locking. Another issue is that the sysfs functions check the stp_online flag, but this flag just holds the user setting of whether STP is enabled. Add a flag to clock_sync_flag indicating whether stp_info holds valid data and use that instead. Cc: stable@vger.kernel.org Signed-off-by: Sven Schnelle <svens@linux.ibm.com> Reviewed-by: Alexander Egorenkov <egorenar@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
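A hedged sketch of the pattern described: take the STP mutex in a sysfs handler and only report data while stp_info is known to be valid. The attribute name, the validity helper and the printed field are placeholders, not the exact upstream names.

    static ssize_t ctn_id_show(struct device *dev,
                               struct device_attribute *attr, char *buf)
    {
            ssize_t ret = -ENODATA;

            mutex_lock(&stp_mutex);         /* serialize against stp_work_fn */
            if (stpinfo_valid())            /* placeholder: "stp_info holds valid data" check */
                    ret = sprintf(buf, "%016llx\n", stp_info_ctn_id);
            mutex_unlock(&stp_mutex);
            return ret;
    }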
-
- 24 Sep, 2020 2 commits
-
-
Harald Freudenberger authored
This patch extends the pkey kernel module to support CCA and EP11 secure ECC (private) keys as source for deriving ECC protected (private) keys. There is yet another new ioctl to support this: PKEY_KBLOB2PROTK3 can handle all the old keys plus CCA and EP11 secure ECC keys. For details see the ioctl description in pkey.h. The CPACF unit currently only supports a subset of 5 different ECC curves (P-256, P-384, P-521, ED25519, ED448), so only keys of these curve types can be transformed into protected keys. However, the pkey and the cca/ep11 low level functions do not check this but simply pass the key blob through to the firmware on the crypto cards. So most likely the failure will be a response carrying an error code, resulting in a user space errno value of EIO instead of EINVAL. Deriving a protected key from an EP11 ECC secure key requires a CEX7 in EP11 mode. Deriving a protected key from a CCA ECC secure key requires a CEX7 in CCA mode. Together with this new ioctl, the ioctls for querying lists of apqns (PKEY_APQNS4K and PKEY_APQNS4KT) have been extended to support EP11 and CCA ECC secure key types and key blobs. Together with this ioctl there comes a new struct ep11kblob_header which is to be prepended onto the EP11 key blob. See pkey.h for details on the fields in there. The older EP11 AES key blob with some info stored in the (unused) session field is also supported with this new ioctl. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Harald Freudenberger authored
Support for CCA APKA (used for CCA ECC keys) master keys. The existing mkvps sysfs attribute, available for each queue of a card in CCA mode, is extended to show the APKA master key register states and verification patterns for the old, current and new master key registers. The APKA master key is used to encrypt CCA ECC secure keys. The syntax is analogous to the existing AES mk verification patterns:

  APKA NEW: <new_apka_mk_state> <new_apka_mk_mkvp>
  APKA CUR: <cur_apka_mk_state> <cur_apka_mk_mkvp>
  APKA OLD: <old_apka_mk_state> <old_apka_mk_mkvp>

with
  <new_apka_mk_state>: 'empty' or 'partial' or 'full'
  <cur_apka_mk_state>: 'valid' or 'invalid'
  <old_apka_mk_state>: 'valid' or 'invalid'
  <new_apka_mk_mkvp>, <cur_apka_mk_mkvp>, <old_apka_mk_mkvp>: 8 byte hex string with leading 0x

MKVP means Master Key Verification Pattern and is a folded hash over the key value. Only the states 'full' and 'valid' result in displaying a useful mkvp; otherwise a mkvp of all bytes zero is shown. If for any reason the FQ fails and the (cached) information is not available, the state '-' is shown together with a mkvp value of '-'. The values shown here are the very same as the cca panel tools display. The internal function cca_findcard2() also supports matching against the APKA master key verification patterns, and the pkey kernel module which uses this function needed a compatible rewrite of these invocations. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 21 Sep, 2020 3 commits
-
-
Vasily Gorbik authored
This reverts commit 55a5542a ("s390/hibernate: fix error handling when suspend cpu != resume cpu"). It added sclp_early_printk_force() which is no longer used since commit 39421627 ("s390: remove broken hibernate / power management support"). No hibernate - no problem. Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Since commit 980d5f9a ("s390/boot: enable .bss section for compressed kernel") .bss section usage is no longer restricted. The .bss section is a part of the decompressor's image and is zeroed by the linker. For that reason, clean up the now unneeded .data section usage. Reviewed-by: Alexander Egorenkov <egorenar@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
The .bss section is a part of the decompressor's image now, and the linker already fills it with zeros. No need to do it additionally with memset. Reviewed-by: Alexander Egorenkov <egorenar@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 17 Sep, 2020 5 commits
-
-
Qinglang Miao authored
The spinlock ap_poll_timer_lock is initialized statically. It is unnecessary to initialize it with spin_lock_init(). Signed-off-by: Qinglang Miao <miaoqinglang@huawei.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
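For reference, a statically initialized spinlock of the kind the patch relies on, assuming the usual kernel idiom:

    /* fully initialized at compile time, so a later spin_lock_init() is redundant */
    static DEFINE_SPINLOCK(ap_poll_timer_lock);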
-
Liu Shixin authored
Use DEFINE_SEQ_ATTRIBUTE macro to simplify the code. Signed-off-by: Liu Shixin <liushixin2@huawei.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Harald Freudenberger authored
This patch reworks the zcrypt device driver so that the set_fs() invocation is not needed any more. Instead there is a new bool userspace flag passed through all the functions which tells whether the pointer arguments are userspace or kernelspace. Together with the two new inline functions z_copy_from_user() and z_copy_to_user(), which either invoke copy_from_user (userspace is true) or memcpy (userspace is false), the zcrypt dd and the AP bus now have no requirement for the set_fs() functionality any more. Signed-off-by: Harald Freudenberger <freude@linux.ibm.com> Reviewed-by: Ingo Franzki <ifranzki@linux.ibm.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
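A sketch following the description above; the actual zcrypt helper may differ in detail:

    static inline unsigned long z_copy_from_user(bool userspace,
                                                 void *to,
                                                 const void __user *from,
                                                 unsigned long n)
    {
            if (userspace)
                    return copy_from_user(to, from, n);    /* returns bytes not copied */
            memcpy(to, (void __force *)from, n);           /* kernelspace source */
            return 0;
    }

z_copy_to_user() mirrors this with copy_to_user()/memcpy() and swapped argument roles.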
-
- 16 Sep, 2020 8 commits
-
-
Vasily Gorbik authored
Currently the kernel crashes in Kasan instrumentation code if CONFIG_KASAN_S390_4_LEVEL_PAGING is used on a protected virtualization capable machine where the ultravisor imposes addressing limitations on the host and those limitations are lower than KASAN_SHADOW_OFFSET. The problem is that Kasan has to know in advance where the vmalloc/modules areas will be. With protected virtualization enabled, the vmalloc/modules areas are moved down to the ultravisor secure storage limit, while Kasan still expects them at the very end of the 4-level paging address space. To fix that, make Kasan recognize when protected virtualization is enabled and predefine the vmalloc/modules areas position so that it is compliant with the ultravisor secure storage limit. The Kasan shadow itself stays in place and might reside above that ultravisor secure storage limit. One slight difference compared to a kernel without Kasan enabled is that the vmalloc/modules areas position is not reverted to the default if ultravisor initialization fails. It would still be below the ultravisor secure storage limit.

Kernel layout with kasan, 4-level paging and protected virtualization enabled (ultravisor secure storage limit is at 0x0000800000000000):

  ---[ vmemmap Area Start ]---
  0x0000400000000000-0x0000400080000000
  ---[ vmemmap Area End ]---
  ---[ vmalloc Area Start ]---
  0x00007fe000000000-0x00007fff80000000
  ---[ vmalloc Area End ]---
  ---[ Modules Area Start ]---
  0x00007fff80000000-0x0000800000000000
  ---[ Modules Area End ]---
  ---[ Kasan Shadow Start ]---
  0x0018000000000000-0x001c000000000000
  ---[ Kasan Shadow End ]---
  0x001c000000000000-0x0020000000000000 1P PGD I

Kernel layout with kasan, 4-level paging and protected virtualization disabled/unsupported:

  ---[ vmemmap Area Start ]---
  0x0000400000000000-0x0000400060000000
  ---[ vmemmap Area End ]---
  ---[ Kasan Shadow Start ]---
  0x0018000000000000-0x001c000000000000
  ---[ Kasan Shadow End ]---
  ---[ vmalloc Area Start ]---
  0x001fffe000000000-0x001fffff80000000
  ---[ vmalloc Area End ]---
  ---[ Modules Area Start ]---
  0x001fffff80000000-0x0020000000000000
  ---[ Modules Area End ]---

Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Avoid a potential crash due to a missing secure storage limit. Check that max_sec_stor_addr is not 0 before adjusting the vmalloc position. Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
To make an early kernel address space layout definition possible, parse the prot_virt option in the decompressor and pass it to the uncompressed kernel. This enables Kasan to take the ultravisor secure storage limit into consideration and pre-define the vmalloc position correctly. Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Currently the vmemmap area is unconditionally moved beyond the Kasan shadow memory. When Kasan is not enabled, the vmemmap area position is calculated in setup_memory_end() and depends on limiting factors like the ultravisor secure storage limit. Try to follow the same logic with Kasan enabled as well and avoid unnecessary vmemmap area position changes unless it really intersects with the Kasan shadow. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Kasan configuration options and the amount of physical memory present could affect the kernel memory layout. In particular, vmemmap, vmalloc and modules might come before the kasan shadow or after it. To make ptdump output markers in the right order, the markers have to be sorted. To preserve the original order of markers with the same start address, avoid using sort() from lib/sort.c (which is not a stable sorting algorithm) and sort the markers in place. Reviewed-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
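A hedged sketch of a stable in-place sort by start address (insertion sort), as opposed to the unstable heapsort in lib/sort.c, so that markers with equal start addresses keep their original order. The marker layout below is illustrative.

    struct addr_marker {
            unsigned long start_address;
            const char *name;
    };

    static void sort_address_markers(struct addr_marker *markers, int n)
    {
            struct addr_marker tmp;
            int i, j;

            for (i = 1; i < n; i++) {
                    tmp = markers[i];
                    /* shift larger entries right; equal ones are never moved past */
                    for (j = i - 1; j >= 0 && markers[j].start_address > tmp.start_address; j--)
                            markers[j + 1] = markers[j];
                    markers[j + 1] = tmp;
            }
    }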
-
Niklas Schnelle authored
This fixes a missing prototype compiler warning spotted by the kernel test robot. Fixes: abb95b75 ("s390/pci: consolidate SR-IOV specific code") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Use ifdefs instead of IS_ENABLED() to avoid a compile error for !PTDUMP_DEBUGFS:

  arch/s390/mm/dump_pagetables.c: In function ‘pt_dump_init’:
  arch/s390/mm/dump_pagetables.c:248:64: error: ‘ptdump_fops’ undeclared (first use in this function); did you mean ‘pidfd_fops’?
    debugfs_create_file("kernel_page_tables", 0400, NULL, NULL, &ptdump_fops);

Reported-by: Julian Wiedmann <jwi@linux.ibm.com> Fixes: 08c8e685 ("s390: add ARCH_HAS_DEBUG_WX support") Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
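A minimal sketch of the fix described above, guarding the registration so ptdump_fops is only referenced when the config option (assumed to be CONFIG_PTDUMP_DEBUGFS) actually provides it:

    #ifdef CONFIG_PTDUMP_DEBUGFS
            debugfs_create_file("kernel_page_tables", 0400, NULL, NULL, &ptdump_fops);
    #endif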
-
Alexander Egorenkov authored
- Support static uninitialized variables in the compressed kernel.
- Remove the chkbss script.
- Get rid of workarounds for not having a .bss section.

Signed-off-by: Alexander Egorenkov <egorenar@linux.ibm.com> Reviewed-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
- 14 Sep, 2020 10 commits
-
-
Janosch Frank authored
We don't need to export pages if we destroy the VM configuration afterwards anyway. Instead we can destroy the page, which will zero it, and then make it accessible to the host. Destroying is about twice as fast as the export. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Link: https://lore.kernel.org/kvm/20200907124700.10374-2-frankja@linux.ibm.com/ Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> [hca@linux.ibm.com: add more markers, rename some markers] Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Vasily Gorbik authored
ARCH_HAS_DEBUG_WX feature support brought attention to the fact that currently the initial kasan shadow memory is mapped without the noexec flag. So fix that. The temporary initial identity mapping is still created without noexec, but it is replaced by properly set up paging later. Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Checks the whole kernel address space for W+X mappings. Note that currently the first lowcore page unfortunately has to be mapped W+X. Therefore this is not reported as an insecure mapping. For the very same reason the wording is also different to other architectures if the test passes: on s390 it is "no unexpected W+X pages found" instead of "no W+X pages found". Tested-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
s390 version of ae5d1cf3 ("arm64: dump: Make the page table dumping seq_file optional"). Tested-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Heiko Carstens authored
Make sure that kprobe insn pages are not writable anymore. Tested-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Niklas Schnelle authored
clp_rescan_pci_devices_simple() is neither simpler than clp_scan_pci_devices() nor does it really scan PCI devices; in particular, it will neither add newly discovered devices nor remove those which disappeared. Instead it only refreshes PCI function handles, and it has just a single callsite left in the same translation unit, which in fact only refreshes one specific function handle identified by a FID. Clarify this by renaming the function and its helper to clp_refresh_fh() and __clp_refresh_fh() respectively, and make it take a fid directly, which saves us from dealing with the NULL case that updated all function handles but is not used anymore. Furthermore, since the only callsite is in the same translation unit, make it static. Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Niklas Schnelle authored
There is only one call site of clp_rescan_pci_devices(), and all the function does is call zpci_remove_reserved_devices() followed by a duplicate clp_scan_pci_devices(). So inline the single call as a call to zpci_remove_reserved_devices() and clp_scan_pci_devices() and remove the function. Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Niklas Schnelle authored
The only caller of this was removed as part of the suspend/resume removal, so there is no need to keep this function around. Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
-
Niklas Schnelle authored
Currently we have multiple #ifdef CONFIG_PCI_IOV blocks spread over different compilation units and headers, all dealing with SR-IOV specific behavior. This violates the style guide, which discourages conditionally compiled code blocks, and hinders maintainability by spreading SR-IOV functionality over many files. Let's move all of this into a conditionally compiled pci_iov.c file and a local header, and prefix SR-IOV specific functions with zpci_iov_*. Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com> Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
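A hedged sketch of the usual header pattern for such a split; the function name below is an assumption based on the zpci_iov_* prefix mentioned above, and the real declarations live in the new local header:

    /* pci_iov.h (local header) */
    #ifdef CONFIG_PCI_IOV
    int zpci_iov_setup_virtfn(struct zpci_bus *zbus, struct pci_dev *virtfn, int vfn);
    #else
    static inline int zpci_iov_setup_virtfn(struct zpci_bus *zbus,
                                            struct pci_dev *virtfn, int vfn)
    {
            return 0;       /* no-op stub when SR-IOV support is compiled out */
    }
    #endif

Callers outside pci_iov.c then compile unchanged whether CONFIG_PCI_IOV is set or not.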
-