Commit 0aaba41b authored by Martin Schwidefsky, committed by Heiko Carstens

s390: remove all code using the access register mode

The vdso code for the getcpu() and clock_gettime() calls currently uses the
access register mode to access the per-CPU vdso data page.

An alternative to the complicated AR mode is to use the secondary space
mode. This makes the vdso faster and quite a bit simpler. The downside is
that the uaccess code has to be reworked considerably.

Which instructions are used depends on the machine and on the kind of uaccess
operation requested. The chosen instruction dictates which ASCE values need
to be loaded into %cr1 and %cr7.

The different cases (a short dispatch sketch follows this list):

* User copy with MVCOS for z10 and newer machines
  The MVCOS instruction can copy between the primary space (aka user) and
  the home space (aka kernel) directly. For set_fs(KERNEL_DS) the kernel
  ASCE is loaded into %cr1. For set_fs(USER_DS) the user ASCE is already
  loaded in %cr1.

* User copy with MVCP/MVCS for older machines
  To be able to execute the MVCP/MVCS instructions the kernel needs to
  switch to primary mode. The control register %cr1 has to be set to the
  kernel ASCE and %cr7 to either the kernel ASCE or the user ASCE, depending
  on set_fs(KERNEL_DS) vs set_fs(USER_DS).

* Data access in the user address space for strnlen / futex
  To use "normal" instructions with data from the user address space the
  secondary space mode is used. The kernel needs to switch to primary mode,
  %cr1 has to contain the kernel ASCE and %cr7 either the user ASCE or the
  kernel ASCE, depending on set_fs.
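
For the first two cases the choice between the MVCOS based and the
MVCP/MVCS based routines is made at run time. A minimal sketch of that
dispatch (the helper names copy_with_mvcos, copy_from_user_mvcos and
copy_from_user_mvcp are taken from the arch/s390/lib/uaccess.c hunk below;
the body shown here is an illustration, not part of this patch):

```c
/*
 * Sketch of the uaccess dispatch: machines with MVCOS (z10 and newer)
 * copy directly between the primary and the home space, older machines
 * fall back to the MVCP/MVCS variant that needs the sacf handling.
 */
unsigned long raw_copy_from_user(void *to, const void __user *from,
				 unsigned long n)
{
	if (copy_with_mvcos())
		return copy_from_user_mvcos(to, from, n);
	return copy_from_user_mvcp(to, from, n);
}
```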

Loading a new value into %cr1 or %cr7 is an expensive operation, so the
kernel tries to be lazy about it. E.g. for multiple user copies in a row with
MVCP/MVCS the replacement of the vdso ASCE in %cr7 with the user ASCE is
done only once. On return to user space a CPU flag is checked and, if it is
set, the vdso ASCE is loaded again.
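
A minimal sketch of the lazy reload, condensed from the enable_sacf_uaccess()
hunk below (the wrapper name lazy_load_cr7 is made up for illustration):

```c
/* Reload %cr7 only if it does not already contain the wanted ASCE and
 * remember via CIF_ASCE_SECONDARY that the return-to-user path has to
 * restore the vdso ASCE. */
static void lazy_load_cr7(unsigned long asce)
{
	unsigned long cr;

	__ctl_store(cr, 7, 7);
	if (cr != asce) {
		__ctl_load(asce, 7, 7);
		set_cpu_flag(CIF_ASCE_SECONDARY);
	}
}
```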

To enable and disable data access via the secondary space, two new
functions are added: enable_sacf_uaccess and disable_sacf_uaccess. The fact
that a context is in secondary space uaccess mode is stored in the task's
mm_segment_t value. Interrupt code may use set_fs as long as it restores
the previous state, obtained with get_fs, with another call to set_fs before
returning. The code in finish_arch_post_lock_switch simply has to do a
set_fs with the task's current mm_segment_t value.
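
The usage pattern, condensed from the futex hunk below (the function name
and the omitted access are placeholders for illustration):

```c
static int sacf_uaccess_example(void)
{
	mm_segment_t old_fs;
	int ret;

	old_fs = enable_sacf_uaccess();	/* switch to sacf uaccess mode */
	pagefault_disable();
	/*
	 * The actual user access goes here, e.g. the "sacf 256" based
	 * inline assembly of the futex ops in the hunk below.
	 */
	ret = 0;
	pagefault_enable();
	disable_sacf_uaccess(old_fs);	/* restore previous mm_segment_t */
	return ret;
}
```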

For CPUs with MVCOS:

CPU running in                        | %cr1 ASCE | %cr7 ASCE |
--------------------------------------|-----------|-----------|
user space                            |  user     |  vdso     |
kernel, USER_DS, normal-mode          |  user     |  vdso     |
kernel, USER_DS, normal-mode, lazy    |  user     |  user     |
kernel, USER_DS, sacf-mode            |  kernel   |  user     |
kernel, KERNEL_DS, normal-mode        |  kernel   |  vdso     |
kernel, KERNEL_DS, normal-mode, lazy  |  kernel   |  kernel   |
kernel, KERNEL_DS, sacf-mode          |  kernel   |  kernel   |

For CPUs without MVCOS:

CPU running in                        | %cr1 ASCE | %cr7 ASCE |
--------------------------------------|-----------|-----------|
user space                            |  user     |  vdso     |
kernel, USER_DS, normal-mode          |  user     |  vdso     |
kernel, USER_DS, normal-mode, lazy    |  kernel   |  user     |
kernel, USER_DS, sacf-mode            |  kernel   |  user     |
kernel, KERNEL_DS, normal-mode        |  kernel   |  vdso     |
kernel, KERNEL_DS, normal-mode, lazy  |  kernel   |  kernel   |
kernel, KERNEL_DS, sacf-mode          |  kernel   |  kernel   |

The lines with "lazy" refer to the state after a copy via the secondary
space with a delayed reload of %cr1 and %cr7.
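
The %cr1/%cr7 settings in these tables are established by the new
out-of-line set_fs() (and, for the lazy and sacf rows, by
enable_sacf_uaccess()); the set_fs() implementation is reproduced from the
arch/s390/lib/uaccess.c hunk below, with comments added:

```c
void set_fs(mm_segment_t fs)
{
	current->thread.mm_segment = fs;
	if (fs == USER_DS) {
		/* normal mode, USER_DS: %cr1 <- user ASCE */
		__ctl_load(S390_lowcore.user_asce, 1, 1);
		clear_cpu_flag(CIF_ASCE_PRIMARY);
	} else {
		/* KERNEL_DS: %cr1 <- kernel ASCE */
		__ctl_load(S390_lowcore.kernel_asce, 1, 1);
		set_cpu_flag(CIF_ASCE_PRIMARY);
	}
	if (fs & 1) {
		/* sacf mode: %cr7 <- user or kernel ASCE */
		if (fs == USER_DS_SACF)
			__ctl_load(S390_lowcore.user_asce, 7, 7);
		else
			__ctl_load(S390_lowcore.kernel_asce, 7, 7);
		set_cpu_flag(CIF_ASCE_SECONDARY);
	}
}
```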

There are three hardware address spaces that can cause a DAT exception:
primary, secondary and home space. The exception can be related to
four different fault types: user space fault, vdso fault, kernel fault,
and gmap fault.

Depending on the set_fs state and on normal vs. sacf mode there are a number
of fault combinations:

1) user address space fault via the primary ASCE
2) gmap address space fault via the primary ASCE
3) kernel address space fault via the primary ASCE for machines with
   MVCOS and set_fs(KERNEL_DS)
4) vdso address space faults via the secondary ASCE with an invalid
   address while running in secondary space in problem state
5) user address space fault via the secondary ASCE for user-copy
   based on the secondary space mode, e.g. futex_ops or strnlen_user
6) kernel address space fault via the secondary ASCE for user-copy
   with secondary space mode with set_fs(KERNEL_DS)
7) kernel address space fault via the primary ASCE for user-copy
   with secondary space mode with set_fs(USER_DS) on machines without
   MVCOS.
8) kernel address space fault via the home space ASCE

Replace user_space_fault() with a new function get_fault_type() that
can distinguish all four different fault types.
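
Condensed from the arch/s390/mm/fault.c hunk below, the classification looks
roughly like this (the real function is called get_fault_type(); this
abridged sketch drops the IS_ENABLED(CONFIG_PGSTE) guard, and AR mode cannot
occur, so everything that is neither primary nor secondary space falls
through to the home space / kernel case):

```c
static enum fault_type classify_fault(struct pt_regs *regs)
{
	/* The two low bits of the translation exception identification
	 * tell which ASCE was used for the failing access. */
	switch (regs->int_parm_long & 3) {
	case 0:	/* primary space: user, gmap or set_fs(KERNEL_DS) access */
		if (test_pt_regs_flag(regs, PIF_GUEST_FAULT))
			return GMAP_FAULT;
		return current->thread.mm_segment == USER_DS ?
			USER_FAULT : KERNEL_FAULT;
	case 2:	/* secondary space: sacf uaccess or vdso access */
		if (current->thread.mm_segment & 1)
			return current->thread.mm_segment == USER_DS_SACF ?
				USER_FAULT : KERNEL_FAULT;
		return VDSO_FAULT;
	default: /* home space: access via the kernel ASCE */
		return KERNEL_FAULT;
	}
}
```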

With these changes the futex atomic ops from the kernel and
strnlen_user will get a little bit slower, as will the old style
uaccess with MVCP/MVCS. All user accesses based on MVCOS will be as
fast as before. On the positive side, the user space vdso code is a
lot faster and Linux ceases to use the complicated AR mode.
Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
parent c771320e
...@@ -26,9 +26,9 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval, ...@@ -26,9 +26,9 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
u32 __user *uaddr) u32 __user *uaddr)
{ {
int oldval = 0, newval, ret; int oldval = 0, newval, ret;
mm_segment_t old_fs;
load_kernel_asce(); old_fs = enable_sacf_uaccess();
pagefault_disable(); pagefault_disable();
switch (op) { switch (op) {
case FUTEX_OP_SET: case FUTEX_OP_SET:
...@@ -55,6 +55,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval, ...@@ -55,6 +55,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
ret = -ENOSYS; ret = -ENOSYS;
} }
pagefault_enable(); pagefault_enable();
disable_sacf_uaccess(old_fs);
if (!ret) if (!ret)
*oval = oldval; *oval = oldval;
...@@ -65,9 +66,10 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval, ...@@ -65,9 +66,10 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
static inline int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr, static inline int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
u32 oldval, u32 newval) u32 oldval, u32 newval)
{ {
mm_segment_t old_fs;
int ret; int ret;
load_kernel_asce(); old_fs = enable_sacf_uaccess();
asm volatile( asm volatile(
" sacf 256\n" " sacf 256\n"
"0: cs %1,%4,0(%5)\n" "0: cs %1,%4,0(%5)\n"
...@@ -77,6 +79,7 @@ static inline int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr, ...@@ -77,6 +79,7 @@ static inline int futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
: "=d" (ret), "+d" (oldval), "=m" (*uaddr) : "=d" (ret), "+d" (oldval), "=m" (*uaddr)
: "0" (-EFAULT), "d" (newval), "a" (uaddr), "m" (*uaddr) : "0" (-EFAULT), "d" (newval), "a" (uaddr), "m" (*uaddr)
: "cc", "memory"); : "cc", "memory");
disable_sacf_uaccess(old_fs);
*uval = oldval; *uval = oldval;
return ret; return ret;
} }
......
...@@ -115,33 +115,28 @@ struct lowcore { ...@@ -115,33 +115,28 @@ struct lowcore {
/* Address space pointer. */ /* Address space pointer. */
__u64 kernel_asce; /* 0x0378 */ __u64 kernel_asce; /* 0x0378 */
__u64 user_asce; /* 0x0380 */ __u64 user_asce; /* 0x0380 */
__u64 vdso_asce; /* 0x0388 */
/* /*
* The lpp and current_pid fields form a * The lpp and current_pid fields form a
* 64-bit value that is set as program * 64-bit value that is set as program
* parameter with the LPP instruction. * parameter with the LPP instruction.
*/ */
__u32 lpp; /* 0x0388 */ __u32 lpp; /* 0x0390 */
__u32 current_pid; /* 0x038c */ __u32 current_pid; /* 0x0394 */
/* SMP info area */ /* SMP info area */
__u32 cpu_nr; /* 0x0390 */ __u32 cpu_nr; /* 0x0398 */
__u32 softirq_pending; /* 0x0394 */ __u32 softirq_pending; /* 0x039c */
__u64 percpu_offset; /* 0x0398 */ __u32 preempt_count; /* 0x03a0 */
__u64 vdso_per_cpu_data; /* 0x03a0 */ __u32 spinlock_lockval; /* 0x03a4 */
__u64 machine_flags; /* 0x03a8 */ __u32 spinlock_index; /* 0x03a8 */
__u32 preempt_count; /* 0x03b0 */ __u32 fpu_flags; /* 0x03ac */
__u8 pad_0x03b4[0x03b8-0x03b4]; /* 0x03b4 */ __u64 percpu_offset; /* 0x03b0 */
__u64 gmap; /* 0x03b8 */ __u64 vdso_per_cpu_data; /* 0x03b8 */
__u32 spinlock_lockval; /* 0x03c0 */ __u64 machine_flags; /* 0x03c0 */
__u32 spinlock_index; /* 0x03c4 */ __u64 gmap; /* 0x03c8 */
__u32 fpu_flags; /* 0x03c8 */ __u8 pad_0x03d0[0x0e00-0x03d0]; /* 0x03d0 */
__u8 pad_0x03cc[0x0400-0x03cc]; /* 0x03cc */
/* Per cpu primary space access list */
__u32 paste[16]; /* 0x0400 */
__u8 pad_0x04c0[0x0e00-0x0440]; /* 0x0440 */
/* /*
* 0xe00 contains the address of the IPL Parameter Information * 0xe00 contains the address of the IPL Parameter Information
......
...@@ -71,41 +71,38 @@ static inline int init_new_context(struct task_struct *tsk, ...@@ -71,41 +71,38 @@ static inline int init_new_context(struct task_struct *tsk,
static inline void set_user_asce(struct mm_struct *mm) static inline void set_user_asce(struct mm_struct *mm)
{ {
S390_lowcore.user_asce = mm->context.asce; S390_lowcore.user_asce = mm->context.asce;
if (current->thread.mm_segment.ar4) __ctl_load(S390_lowcore.user_asce, 1, 1);
__ctl_load(S390_lowcore.user_asce, 7, 7); clear_cpu_flag(CIF_ASCE_PRIMARY);
set_cpu_flag(CIF_ASCE_PRIMARY);
} }
static inline void clear_user_asce(void) static inline void clear_user_asce(void)
{ {
S390_lowcore.user_asce = S390_lowcore.kernel_asce; S390_lowcore.user_asce = S390_lowcore.kernel_asce;
__ctl_load(S390_lowcore.kernel_asce, 1, 1);
__ctl_load(S390_lowcore.user_asce, 1, 1);
__ctl_load(S390_lowcore.user_asce, 7, 7);
}
static inline void load_kernel_asce(void)
{
unsigned long asce;
__ctl_store(asce, 1, 1);
if (asce != S390_lowcore.kernel_asce)
__ctl_load(S390_lowcore.kernel_asce, 1, 1);
set_cpu_flag(CIF_ASCE_PRIMARY); set_cpu_flag(CIF_ASCE_PRIMARY);
} }
mm_segment_t enable_sacf_uaccess(void);
void disable_sacf_uaccess(mm_segment_t old_fs);
static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next, static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
struct task_struct *tsk) struct task_struct *tsk)
{ {
int cpu = smp_processor_id(); int cpu = smp_processor_id();
S390_lowcore.user_asce = next->context.asce;
if (prev == next) if (prev == next)
return; return;
S390_lowcore.user_asce = next->context.asce;
cpumask_set_cpu(cpu, &next->context.cpu_attach_mask); cpumask_set_cpu(cpu, &next->context.cpu_attach_mask);
/* Clear old ASCE by loading the kernel ASCE. */ /* Clear previous user-ASCE from CR1 and CR7 */
__ctl_load(S390_lowcore.kernel_asce, 1, 1); if (!test_cpu_flag(CIF_ASCE_PRIMARY)) {
__ctl_load(S390_lowcore.kernel_asce, 7, 7); __ctl_load(S390_lowcore.kernel_asce, 1, 1);
set_cpu_flag(CIF_ASCE_PRIMARY);
}
if (test_cpu_flag(CIF_ASCE_SECONDARY)) {
__ctl_load(S390_lowcore.vdso_asce, 7, 7);
clear_cpu_flag(CIF_ASCE_SECONDARY);
}
cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask); cpumask_clear_cpu(cpu, &prev->context.cpu_attach_mask);
} }
...@@ -115,7 +112,6 @@ static inline void finish_arch_post_lock_switch(void) ...@@ -115,7 +112,6 @@ static inline void finish_arch_post_lock_switch(void)
struct task_struct *tsk = current; struct task_struct *tsk = current;
struct mm_struct *mm = tsk->mm; struct mm_struct *mm = tsk->mm;
load_kernel_asce();
if (mm) { if (mm) {
preempt_disable(); preempt_disable();
while (atomic_read(&mm->context.flush_count)) while (atomic_read(&mm->context.flush_count))
......
...@@ -109,9 +109,7 @@ extern void execve_tail(void); ...@@ -109,9 +109,7 @@ extern void execve_tail(void);
#define HAVE_ARCH_PICK_MMAP_LAYOUT #define HAVE_ARCH_PICK_MMAP_LAYOUT
typedef struct { typedef unsigned int mm_segment_t;
__u32 ar4;
} mm_segment_t;
/* /*
* Thread structure * Thread structure
......
...@@ -16,7 +16,7 @@ ...@@ -16,7 +16,7 @@
#include <asm/processor.h> #include <asm/processor.h>
#include <asm/ctl_reg.h> #include <asm/ctl_reg.h>
#include <asm/extable.h> #include <asm/extable.h>
#include <asm/facility.h>
/* /*
* The fs value determines whether argument validity checking should be * The fs value determines whether argument validity checking should be
...@@ -26,27 +26,16 @@ ...@@ -26,27 +26,16 @@
* For historical reasons, these macros are grossly misnamed. * For historical reasons, these macros are grossly misnamed.
*/ */
#define MAKE_MM_SEG(a) ((mm_segment_t) { (a) }) #define KERNEL_DS (0)
#define KERNEL_DS_SACF (1)
#define USER_DS (2)
#define KERNEL_DS MAKE_MM_SEG(0) #define USER_DS_SACF (3)
#define USER_DS MAKE_MM_SEG(1)
#define get_ds() (KERNEL_DS) #define get_ds() (KERNEL_DS)
#define get_fs() (current->thread.mm_segment) #define get_fs() (current->thread.mm_segment)
#define segment_eq(a,b) ((a).ar4 == (b).ar4) #define segment_eq(a,b) (((a) & 2) == ((b) & 2))
static inline void set_fs(mm_segment_t fs) void set_fs(mm_segment_t fs);
{
current->thread.mm_segment = fs;
if (uaccess_kernel()) {
set_cpu_flag(CIF_ASCE_SECONDARY);
__ctl_load(S390_lowcore.kernel_asce, 7, 7);
} else {
clear_cpu_flag(CIF_ASCE_SECONDARY);
__ctl_load(S390_lowcore.user_asce, 7, 7);
}
}
static inline int __range_ok(unsigned long addr, unsigned long size) static inline int __range_ok(unsigned long addr, unsigned long size)
{ {
...@@ -95,7 +84,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n); ...@@ -95,7 +84,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n);
static inline int __put_user_fn(void *x, void __user *ptr, unsigned long size) static inline int __put_user_fn(void *x, void __user *ptr, unsigned long size)
{ {
unsigned long spec = 0x810000UL; unsigned long spec = 0x010000UL;
int rc; int rc;
switch (size) { switch (size) {
...@@ -125,7 +114,7 @@ static inline int __put_user_fn(void *x, void __user *ptr, unsigned long size) ...@@ -125,7 +114,7 @@ static inline int __put_user_fn(void *x, void __user *ptr, unsigned long size)
static inline int __get_user_fn(void *x, const void __user *ptr, unsigned long size) static inline int __get_user_fn(void *x, const void __user *ptr, unsigned long size)
{ {
unsigned long spec = 0x81UL; unsigned long spec = 0x01UL;
int rc; int rc;
switch (size) { switch (size) {
......
...@@ -171,6 +171,7 @@ int main(void) ...@@ -171,6 +171,7 @@ int main(void)
OFFSET(__LC_RESTART_DATA, lowcore, restart_data); OFFSET(__LC_RESTART_DATA, lowcore, restart_data);
OFFSET(__LC_RESTART_SOURCE, lowcore, restart_source); OFFSET(__LC_RESTART_SOURCE, lowcore, restart_source);
OFFSET(__LC_USER_ASCE, lowcore, user_asce); OFFSET(__LC_USER_ASCE, lowcore, user_asce);
OFFSET(__LC_VDSO_ASCE, lowcore, vdso_asce);
OFFSET(__LC_LPP, lowcore, lpp); OFFSET(__LC_LPP, lowcore, lpp);
OFFSET(__LC_CURRENT_PID, lowcore, current_pid); OFFSET(__LC_CURRENT_PID, lowcore, current_pid);
OFFSET(__LC_PERCPU_OFFSET, lowcore, percpu_offset); OFFSET(__LC_PERCPU_OFFSET, lowcore, percpu_offset);
...@@ -178,7 +179,6 @@ int main(void) ...@@ -178,7 +179,6 @@ int main(void)
OFFSET(__LC_MACHINE_FLAGS, lowcore, machine_flags); OFFSET(__LC_MACHINE_FLAGS, lowcore, machine_flags);
OFFSET(__LC_PREEMPT_COUNT, lowcore, preempt_count); OFFSET(__LC_PREEMPT_COUNT, lowcore, preempt_count);
OFFSET(__LC_GMAP, lowcore, gmap); OFFSET(__LC_GMAP, lowcore, gmap);
OFFSET(__LC_PASTE, lowcore, paste);
/* software defined ABI-relevant lowcore locations 0xe00 - 0xe20 */ /* software defined ABI-relevant lowcore locations 0xe00 - 0xe20 */
OFFSET(__LC_DUMP_REIPL, lowcore, ipib); OFFSET(__LC_DUMP_REIPL, lowcore, ipib);
/* hardware defined lowcore locations 0x1000 - 0x18ff */ /* hardware defined lowcore locations 0x1000 - 0x18ff */
......
...@@ -379,13 +379,21 @@ ENTRY(system_call) ...@@ -379,13 +379,21 @@ ENTRY(system_call)
jg s390_handle_mcck # TIF bit will be cleared by handler jg s390_handle_mcck # TIF bit will be cleared by handler
# #
# _CIF_ASCE_PRIMARY and/or CIF_ASCE_SECONDARY set, load user space asce # _CIF_ASCE_PRIMARY and/or _CIF_ASCE_SECONDARY set, load user space asce
# #
.Lsysc_asce: .Lsysc_asce:
ni __LC_CPU_FLAGS+7,255-_CIF_ASCE_SECONDARY
lctlg %c7,%c7,__LC_VDSO_ASCE # load secondary asce
TSTMSK __LC_CPU_FLAGS,_CIF_ASCE_PRIMARY
jz .Lsysc_return
#ifndef CONFIG_HAVE_MARCH_Z10_FEATURES
tm __LC_STFLE_FAC_LIST+3,0x10 # has MVCOS ?
jnz .Lsysc_set_fs_fixup
ni __LC_CPU_FLAGS+7,255-_CIF_ASCE_PRIMARY ni __LC_CPU_FLAGS+7,255-_CIF_ASCE_PRIMARY
lctlg %c1,%c1,__LC_USER_ASCE # load primary asce lctlg %c1,%c1,__LC_USER_ASCE # load primary asce
TSTMSK __LC_CPU_FLAGS,_CIF_ASCE_SECONDARY j .Lsysc_return
jz .Lsysc_return .Lsysc_set_fs_fixup:
#endif
larl %r14,.Lsysc_return larl %r14,.Lsysc_return
jg set_fs_fixup jg set_fs_fixup
...@@ -741,10 +749,18 @@ ENTRY(io_int_handler) ...@@ -741,10 +749,18 @@ ENTRY(io_int_handler)
# _CIF_ASCE_PRIMARY and/or CIF_ASCE_SECONDARY set, load user space asce # _CIF_ASCE_PRIMARY and/or CIF_ASCE_SECONDARY set, load user space asce
# #
.Lio_asce: .Lio_asce:
ni __LC_CPU_FLAGS+7,255-_CIF_ASCE_SECONDARY
lctlg %c7,%c7,__LC_VDSO_ASCE # load secondary asce
TSTMSK __LC_CPU_FLAGS,_CIF_ASCE_PRIMARY
jz .Lio_return
#ifndef CONFIG_HAVE_MARCH_Z10_FEATURES
tm __LC_STFLE_FAC_LIST+3,0x10 # has MVCOS ?
jnz .Lio_set_fs_fixup
ni __LC_CPU_FLAGS+7,255-_CIF_ASCE_PRIMARY ni __LC_CPU_FLAGS+7,255-_CIF_ASCE_PRIMARY
lctlg %c1,%c1,__LC_USER_ASCE # load primary asce lctlg %c1,%c1,__LC_USER_ASCE # load primary asce
TSTMSK __LC_CPU_FLAGS,_CIF_ASCE_SECONDARY j .Lio_return
jz .Lio_return .Lio_set_fs_fixup:
#endif
larl %r14,.Lio_return larl %r14,.Lio_return
jg set_fs_fixup jg set_fs_fixup
......
...@@ -28,7 +28,7 @@ ENTRY(startup_continue) ...@@ -28,7 +28,7 @@ ENTRY(startup_continue)
lctlg %c0,%c15,.Lctl-.LPG1(%r13) # load control registers lctlg %c0,%c15,.Lctl-.LPG1(%r13) # load control registers
lg %r12,.Lparmaddr-.LPG1(%r13) # pointer to parameter area lg %r12,.Lparmaddr-.LPG1(%r13) # pointer to parameter area
# move IPL device to lowcore # move IPL device to lowcore
lghi %r0,__LC_PASTE larl %r0,boot_vdso_data
stg %r0,__LC_VDSO_PER_CPU stg %r0,__LC_VDSO_PER_CPU
# #
# Setup stack # Setup stack
......
...@@ -158,16 +158,9 @@ int vdso_alloc_per_cpu(struct lowcore *lowcore) ...@@ -158,16 +158,9 @@ int vdso_alloc_per_cpu(struct lowcore *lowcore)
{ {
unsigned long segment_table, page_table, page_frame; unsigned long segment_table, page_table, page_frame;
struct vdso_per_cpu_data *vd; struct vdso_per_cpu_data *vd;
u32 *psal, *aste;
int i;
lowcore->vdso_per_cpu_data = __LC_PASTE;
if (!vdso_enabled)
return 0;
segment_table = __get_free_pages(GFP_KERNEL, SEGMENT_ORDER); segment_table = __get_free_pages(GFP_KERNEL, SEGMENT_ORDER);
page_table = get_zeroed_page(GFP_KERNEL | GFP_DMA); page_table = get_zeroed_page(GFP_KERNEL);
page_frame = get_zeroed_page(GFP_KERNEL); page_frame = get_zeroed_page(GFP_KERNEL);
if (!segment_table || !page_table || !page_frame) if (!segment_table || !page_table || !page_frame)
goto out; goto out;
...@@ -179,25 +172,15 @@ int vdso_alloc_per_cpu(struct lowcore *lowcore) ...@@ -179,25 +172,15 @@ int vdso_alloc_per_cpu(struct lowcore *lowcore)
vd->cpu_nr = lowcore->cpu_nr; vd->cpu_nr = lowcore->cpu_nr;
vd->node_id = cpu_to_node(vd->cpu_nr); vd->node_id = cpu_to_node(vd->cpu_nr);
/* Set up access register mode page table */ /* Set up page table for the vdso address space */
memset64((u64 *)segment_table, _SEGMENT_ENTRY_EMPTY, _CRST_ENTRIES); memset64((u64 *)segment_table, _SEGMENT_ENTRY_EMPTY, _CRST_ENTRIES);
memset64((u64 *)page_table, _PAGE_INVALID, PTRS_PER_PTE); memset64((u64 *)page_table, _PAGE_INVALID, PTRS_PER_PTE);
*(unsigned long *) segment_table = _SEGMENT_ENTRY + page_table; *(unsigned long *) segment_table = _SEGMENT_ENTRY + page_table;
*(unsigned long *) page_table = _PAGE_PROTECT + page_frame; *(unsigned long *) page_table = _PAGE_PROTECT + page_frame;
psal = (u32 *) (page_table + 256*sizeof(unsigned long)); lowcore->vdso_asce = segment_table +
aste = psal + 32;
for (i = 4; i < 32; i += 4)
psal[i] = 0x80000000;
lowcore->paste[4] = (u32)(addr_t) psal;
psal[0] = 0x02000000;
psal[2] = (u32)(addr_t) aste;
*(unsigned long *) (aste + 2) = segment_table +
_ASCE_TABLE_LENGTH + _ASCE_USER_BITS + _ASCE_TYPE_SEGMENT; _ASCE_TABLE_LENGTH + _ASCE_USER_BITS + _ASCE_TYPE_SEGMENT;
aste[4] = (u32)(addr_t) psal;
lowcore->vdso_per_cpu_data = page_frame; lowcore->vdso_per_cpu_data = page_frame;
return 0; return 0;
...@@ -212,14 +195,8 @@ int vdso_alloc_per_cpu(struct lowcore *lowcore) ...@@ -212,14 +195,8 @@ int vdso_alloc_per_cpu(struct lowcore *lowcore)
void vdso_free_per_cpu(struct lowcore *lowcore) void vdso_free_per_cpu(struct lowcore *lowcore)
{ {
unsigned long segment_table, page_table, page_frame; unsigned long segment_table, page_table, page_frame;
u32 *psal, *aste;
if (!vdso_enabled)
return;
psal = (u32 *)(addr_t) lowcore->paste[4]; segment_table = lowcore->vdso_asce & PAGE_MASK;
aste = (u32 *)(addr_t) psal[2];
segment_table = *(unsigned long *)(aste + 2) & PAGE_MASK;
page_table = *(unsigned long *) segment_table; page_table = *(unsigned long *) segment_table;
page_frame = *(unsigned long *) page_table; page_frame = *(unsigned long *) page_table;
...@@ -228,16 +205,6 @@ void vdso_free_per_cpu(struct lowcore *lowcore) ...@@ -228,16 +205,6 @@ void vdso_free_per_cpu(struct lowcore *lowcore)
free_pages(segment_table, SEGMENT_ORDER); free_pages(segment_table, SEGMENT_ORDER);
} }
static void vdso_init_cr5(void)
{
unsigned long cr5;
if (!vdso_enabled)
return;
cr5 = offsetof(struct lowcore, paste);
__ctl_load(cr5, 5, 5);
}
/* /*
* This is called from binfmt_elf, we create the special vma for the * This is called from binfmt_elf, we create the special vma for the
* vDSO and insert it into the mm struct tree * vDSO and insert it into the mm struct tree
...@@ -314,8 +281,6 @@ static int __init vdso_init(void) ...@@ -314,8 +281,6 @@ static int __init vdso_init(void)
{ {
int i; int i;
if (!vdso_enabled)
return 0;
vdso_init_data(vdso_data); vdso_init_data(vdso_data);
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
/* Calculate the size of the 32 bit vDSO */ /* Calculate the size of the 32 bit vDSO */
...@@ -354,7 +319,6 @@ static int __init vdso_init(void) ...@@ -354,7 +319,6 @@ static int __init vdso_init(void)
vdso64_pagelist[vdso64_pages] = NULL; vdso64_pagelist[vdso64_pages] = NULL;
if (vdso_alloc_per_cpu(&S390_lowcore)) if (vdso_alloc_per_cpu(&S390_lowcore))
BUG(); BUG();
vdso_init_cr5();
get_page(virt_to_page(vdso_data)); get_page(virt_to_page(vdso_data));
......
...@@ -15,23 +15,11 @@ ...@@ -15,23 +15,11 @@
.type __kernel_getcpu,@function .type __kernel_getcpu,@function
__kernel_getcpu: __kernel_getcpu:
.cfi_startproc .cfi_startproc
ear %r1,%a4
lhi %r4,1
sll %r4,24
sar %a4,%r4
la %r4,0 la %r4,0
epsw %r0,0 sacf 256
sacf 512
l %r5,__VDSO_CPU_NR(%r4) l %r5,__VDSO_CPU_NR(%r4)
l %r4,__VDSO_NODE_ID(%r4) l %r4,__VDSO_NODE_ID(%r4)
tml %r0,0x4000 sacf 0
jo 1f
tml %r0,0x8000
jno 0f
sacf 256
j 1f
0: sacf 0
1: sar %a4,%r1
ltr %r2,%r2 ltr %r2,%r2
jz 2f jz 2f
st %r5,0(%r2) st %r5,0(%r2)
......
...@@ -114,23 +114,12 @@ __kernel_clock_gettime: ...@@ -114,23 +114,12 @@ __kernel_clock_gettime:
br %r14 br %r14
/* CPUCLOCK_VIRT for this thread */ /* CPUCLOCK_VIRT for this thread */
9: icm %r0,15,__VDSO_ECTG_OK(%r5) 9: lghi %r4,0
icm %r0,15,__VDSO_ECTG_OK(%r5)
jz 12f jz 12f
ear %r2,%a4 sacf 256 /* Magic ectg instruction */
llilh %r4,0x0100
sar %a4,%r4
lghi %r4,0
epsw %r5,0
sacf 512 /* Magic ectg instruction */
.insn ssf,0xc80100000000,__VDSO_ECTG_BASE(4),__VDSO_ECTG_USER(4),4 .insn ssf,0xc80100000000,__VDSO_ECTG_BASE(4),__VDSO_ECTG_USER(4),4
tml %r5,0x4000 sacf 0
jo 11f
tml %r5,0x8000
jno 10f
sacf 256
j 11f
10: sacf 0
11: sar %a4,%r2
algr %r1,%r0 /* r1 = cputime as TOD value */ algr %r1,%r0 /* r1 = cputime as TOD value */
mghi %r1,1000 /* convert to nanoseconds */ mghi %r1,1000 /* convert to nanoseconds */
srlg %r1,%r1,12 /* r1 = cputime in nanosec */ srlg %r1,%r1,12 /* r1 = cputime in nanosec */
......
...@@ -15,22 +15,11 @@ ...@@ -15,22 +15,11 @@
.type __kernel_getcpu,@function .type __kernel_getcpu,@function
__kernel_getcpu: __kernel_getcpu:
.cfi_startproc .cfi_startproc
ear %r1,%a4
llilh %r4,0x0100
sar %a4,%r4
la %r4,0 la %r4,0
epsw %r0,0 sacf 256
sacf 512
l %r5,__VDSO_CPU_NR(%r4) l %r5,__VDSO_CPU_NR(%r4)
l %r4,__VDSO_NODE_ID(%r4) l %r4,__VDSO_NODE_ID(%r4)
tml %r0,0x4000 sacf 0
jo 1f
tml %r0,0x8000
jno 0f
sacf 256
j 1f
0: sacf 0
1: sar %a4,%r1
ltgr %r2,%r2 ltgr %r2,%r2
jz 2f jz 2f
st %r5,0(%r2) st %r5,0(%r2)
......
...@@ -40,10 +40,67 @@ static inline int copy_with_mvcos(void) ...@@ -40,10 +40,67 @@ static inline int copy_with_mvcos(void)
} }
#endif #endif
void set_fs(mm_segment_t fs)
{
current->thread.mm_segment = fs;
if (fs == USER_DS) {
__ctl_load(S390_lowcore.user_asce, 1, 1);
clear_cpu_flag(CIF_ASCE_PRIMARY);
} else {
__ctl_load(S390_lowcore.kernel_asce, 1, 1);
set_cpu_flag(CIF_ASCE_PRIMARY);
}
if (fs & 1) {
if (fs == USER_DS_SACF)
__ctl_load(S390_lowcore.user_asce, 7, 7);
else
__ctl_load(S390_lowcore.kernel_asce, 7, 7);
set_cpu_flag(CIF_ASCE_SECONDARY);
}
}
EXPORT_SYMBOL(set_fs);
mm_segment_t enable_sacf_uaccess(void)
{
mm_segment_t old_fs;
unsigned long asce, cr;
old_fs = current->thread.mm_segment;
if (old_fs & 1)
return old_fs;
current->thread.mm_segment |= 1;
asce = S390_lowcore.kernel_asce;
if (likely(old_fs == USER_DS)) {
__ctl_store(cr, 1, 1);
if (cr != S390_lowcore.kernel_asce) {
__ctl_load(S390_lowcore.kernel_asce, 1, 1);
set_cpu_flag(CIF_ASCE_PRIMARY);
}
asce = S390_lowcore.user_asce;
}
__ctl_store(cr, 7, 7);
if (cr != asce) {
__ctl_load(asce, 7, 7);
set_cpu_flag(CIF_ASCE_SECONDARY);
}
return old_fs;
}
EXPORT_SYMBOL(enable_sacf_uaccess);
void disable_sacf_uaccess(mm_segment_t old_fs)
{
if (old_fs == USER_DS && test_facility(27)) {
__ctl_load(S390_lowcore.user_asce, 1, 1);
clear_cpu_flag(CIF_ASCE_PRIMARY);
}
current->thread.mm_segment = old_fs;
}
EXPORT_SYMBOL(disable_sacf_uaccess);
static inline unsigned long copy_from_user_mvcos(void *x, const void __user *ptr, static inline unsigned long copy_from_user_mvcos(void *x, const void __user *ptr,
unsigned long size) unsigned long size)
{ {
register unsigned long reg0 asm("0") = 0x81UL; register unsigned long reg0 asm("0") = 0x01UL;
unsigned long tmp1, tmp2; unsigned long tmp1, tmp2;
tmp1 = -4096UL; tmp1 = -4096UL;
...@@ -74,8 +131,9 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr, ...@@ -74,8 +131,9 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
unsigned long size) unsigned long size)
{ {
unsigned long tmp1, tmp2; unsigned long tmp1, tmp2;
mm_segment_t old_fs;
load_kernel_asce(); old_fs = enable_sacf_uaccess();
tmp1 = -256UL; tmp1 = -256UL;
asm volatile( asm volatile(
" sacf 0\n" " sacf 0\n"
...@@ -102,6 +160,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr, ...@@ -102,6 +160,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b) EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b)
: "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2) : "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2)
: : "cc", "memory"); : : "cc", "memory");
disable_sacf_uaccess(old_fs);
return size; return size;
} }
...@@ -116,7 +175,7 @@ EXPORT_SYMBOL(raw_copy_from_user); ...@@ -116,7 +175,7 @@ EXPORT_SYMBOL(raw_copy_from_user);
static inline unsigned long copy_to_user_mvcos(void __user *ptr, const void *x, static inline unsigned long copy_to_user_mvcos(void __user *ptr, const void *x,
unsigned long size) unsigned long size)
{ {
register unsigned long reg0 asm("0") = 0x810000UL; register unsigned long reg0 asm("0") = 0x010000UL;
unsigned long tmp1, tmp2; unsigned long tmp1, tmp2;
tmp1 = -4096UL; tmp1 = -4096UL;
...@@ -147,8 +206,9 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x, ...@@ -147,8 +206,9 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
unsigned long size) unsigned long size)
{ {
unsigned long tmp1, tmp2; unsigned long tmp1, tmp2;
mm_segment_t old_fs;
load_kernel_asce(); old_fs = enable_sacf_uaccess();
tmp1 = -256UL; tmp1 = -256UL;
asm volatile( asm volatile(
" sacf 0\n" " sacf 0\n"
...@@ -175,6 +235,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x, ...@@ -175,6 +235,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b) EX_TABLE(7b,3b) EX_TABLE(8b,3b) EX_TABLE(9b,6b)
: "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2) : "+a" (size), "+a" (ptr), "+a" (x), "+a" (tmp1), "=a" (tmp2)
: : "cc", "memory"); : : "cc", "memory");
disable_sacf_uaccess(old_fs);
return size; return size;
} }
...@@ -189,7 +250,7 @@ EXPORT_SYMBOL(raw_copy_to_user); ...@@ -189,7 +250,7 @@ EXPORT_SYMBOL(raw_copy_to_user);
static inline unsigned long copy_in_user_mvcos(void __user *to, const void __user *from, static inline unsigned long copy_in_user_mvcos(void __user *to, const void __user *from,
unsigned long size) unsigned long size)
{ {
register unsigned long reg0 asm("0") = 0x810081UL; register unsigned long reg0 asm("0") = 0x010001UL;
unsigned long tmp1, tmp2; unsigned long tmp1, tmp2;
tmp1 = -4096UL; tmp1 = -4096UL;
...@@ -212,9 +273,10 @@ static inline unsigned long copy_in_user_mvcos(void __user *to, const void __use ...@@ -212,9 +273,10 @@ static inline unsigned long copy_in_user_mvcos(void __user *to, const void __use
static inline unsigned long copy_in_user_mvc(void __user *to, const void __user *from, static inline unsigned long copy_in_user_mvc(void __user *to, const void __user *from,
unsigned long size) unsigned long size)
{ {
mm_segment_t old_fs;
unsigned long tmp1; unsigned long tmp1;
load_kernel_asce(); old_fs = enable_sacf_uaccess();
asm volatile( asm volatile(
" sacf 256\n" " sacf 256\n"
" aghi %0,-1\n" " aghi %0,-1\n"
...@@ -238,6 +300,7 @@ static inline unsigned long copy_in_user_mvc(void __user *to, const void __user ...@@ -238,6 +300,7 @@ static inline unsigned long copy_in_user_mvc(void __user *to, const void __user
EX_TABLE(1b,6b) EX_TABLE(2b,0b) EX_TABLE(4b,0b) EX_TABLE(1b,6b) EX_TABLE(2b,0b) EX_TABLE(4b,0b)
: "+a" (size), "+a" (to), "+a" (from), "=a" (tmp1) : "+a" (size), "+a" (to), "+a" (from), "=a" (tmp1)
: : "cc", "memory"); : : "cc", "memory");
disable_sacf_uaccess(old_fs);
return size; return size;
} }
...@@ -251,7 +314,7 @@ EXPORT_SYMBOL(raw_copy_in_user); ...@@ -251,7 +314,7 @@ EXPORT_SYMBOL(raw_copy_in_user);
static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size) static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size)
{ {
register unsigned long reg0 asm("0") = 0x810000UL; register unsigned long reg0 asm("0") = 0x010000UL;
unsigned long tmp1, tmp2; unsigned long tmp1, tmp2;
tmp1 = -4096UL; tmp1 = -4096UL;
...@@ -279,9 +342,10 @@ static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size ...@@ -279,9 +342,10 @@ static inline unsigned long clear_user_mvcos(void __user *to, unsigned long size
static inline unsigned long clear_user_xc(void __user *to, unsigned long size) static inline unsigned long clear_user_xc(void __user *to, unsigned long size)
{ {
mm_segment_t old_fs;
unsigned long tmp1, tmp2; unsigned long tmp1, tmp2;
load_kernel_asce(); old_fs = enable_sacf_uaccess();
asm volatile( asm volatile(
" sacf 256\n" " sacf 256\n"
" aghi %0,-1\n" " aghi %0,-1\n"
...@@ -310,6 +374,7 @@ static inline unsigned long clear_user_xc(void __user *to, unsigned long size) ...@@ -310,6 +374,7 @@ static inline unsigned long clear_user_xc(void __user *to, unsigned long size)
EX_TABLE(1b,6b) EX_TABLE(2b,0b) EX_TABLE(4b,0b) EX_TABLE(1b,6b) EX_TABLE(2b,0b) EX_TABLE(4b,0b)
: "+a" (size), "+a" (to), "=a" (tmp1), "=a" (tmp2) : "+a" (size), "+a" (to), "=a" (tmp1), "=a" (tmp2)
: : "cc", "memory"); : : "cc", "memory");
disable_sacf_uaccess(old_fs);
return size; return size;
} }
...@@ -345,10 +410,15 @@ static inline unsigned long strnlen_user_srst(const char __user *src, ...@@ -345,10 +410,15 @@ static inline unsigned long strnlen_user_srst(const char __user *src,
unsigned long __strnlen_user(const char __user *src, unsigned long size) unsigned long __strnlen_user(const char __user *src, unsigned long size)
{ {
mm_segment_t old_fs;
unsigned long len;
if (unlikely(!size)) if (unlikely(!size))
return 0; return 0;
load_kernel_asce(); old_fs = enable_sacf_uaccess();
return strnlen_user_srst(src, size); len = strnlen_user_srst(src, size);
disable_sacf_uaccess(old_fs);
return len;
} }
EXPORT_SYMBOL(__strnlen_user); EXPORT_SYMBOL(__strnlen_user);
......
...@@ -50,6 +50,13 @@ ...@@ -50,6 +50,13 @@
#define VM_FAULT_SIGNAL 0x080000 #define VM_FAULT_SIGNAL 0x080000
#define VM_FAULT_PFAULT 0x100000 #define VM_FAULT_PFAULT 0x100000
enum fault_type {
KERNEL_FAULT,
USER_FAULT,
VDSO_FAULT,
GMAP_FAULT,
};
static unsigned long store_indication __read_mostly; static unsigned long store_indication __read_mostly;
static int __init fault_init(void) static int __init fault_init(void)
...@@ -99,27 +106,34 @@ void bust_spinlocks(int yes) ...@@ -99,27 +106,34 @@ void bust_spinlocks(int yes)
} }
/* /*
* Returns the address space associated with the fault. * Find out which address space caused the exception.
* Returns 0 for kernel space and 1 for user space. * Access register mode is impossible, ignore space == 3.
*/ */
static inline int user_space_fault(struct pt_regs *regs) static inline enum fault_type get_fault_type(struct pt_regs *regs)
{ {
unsigned long trans_exc_code; unsigned long trans_exc_code;
/*
* The lowest two bits of the translation exception
* identification indicate which paging table was used.
*/
trans_exc_code = regs->int_parm_long & 3; trans_exc_code = regs->int_parm_long & 3;
if (trans_exc_code == 3) /* home space -> kernel */ if (likely(trans_exc_code == 0)) {
return 0; /* primary space exception */
if (user_mode(regs)) if (IS_ENABLED(CONFIG_PGSTE) &&
return 1; test_pt_regs_flag(regs, PIF_GUEST_FAULT))
if (trans_exc_code == 2) /* secondary space -> set_fs */ return GMAP_FAULT;
return current->thread.mm_segment.ar4; if (current->thread.mm_segment == USER_DS)
if (test_pt_regs_flag(regs, PIF_GUEST_FAULT)) return USER_FAULT;
return 1; return KERNEL_FAULT;
return 0; }
if (trans_exc_code == 2) {
/* secondary space exception */
if (current->thread.mm_segment & 1) {
if (current->thread.mm_segment == USER_DS_SACF)
return USER_FAULT;
return KERNEL_FAULT;
}
return VDSO_FAULT;
}
/* home space exception -> access via kernel ASCE */
return KERNEL_FAULT;
} }
static int bad_address(void *p) static int bad_address(void *p)
...@@ -204,20 +218,23 @@ static void dump_fault_info(struct pt_regs *regs) ...@@ -204,20 +218,23 @@ static void dump_fault_info(struct pt_regs *regs)
break; break;
} }
pr_cont("mode while using "); pr_cont("mode while using ");
if (!user_space_fault(regs)) { switch (get_fault_type(regs)) {
asce = S390_lowcore.kernel_asce; case USER_FAULT:
pr_cont("kernel ");
}
#ifdef CONFIG_PGSTE
else if (test_pt_regs_flag(regs, PIF_GUEST_FAULT)) {
struct gmap *gmap = (struct gmap *)S390_lowcore.gmap;
asce = gmap->asce;
pr_cont("gmap ");
}
#endif
else {
asce = S390_lowcore.user_asce; asce = S390_lowcore.user_asce;
pr_cont("user "); pr_cont("user ");
break;
case VDSO_FAULT:
asce = S390_lowcore.vdso_asce;
pr_cont("vdso ");
break;
case GMAP_FAULT:
asce = ((struct gmap *) S390_lowcore.gmap)->asce;
pr_cont("gmap ");
break;
case KERNEL_FAULT:
asce = S390_lowcore.kernel_asce;
pr_cont("kernel ");
break;
} }
pr_cont("ASCE.\n"); pr_cont("ASCE.\n");
dump_pagetable(asce, regs->int_parm_long & __FAIL_ADDR_MASK); dump_pagetable(asce, regs->int_parm_long & __FAIL_ADDR_MASK);
...@@ -273,7 +290,7 @@ static noinline void do_no_context(struct pt_regs *regs) ...@@ -273,7 +290,7 @@ static noinline void do_no_context(struct pt_regs *regs)
* Oops. The kernel tried to access some bad page. We'll have to * Oops. The kernel tried to access some bad page. We'll have to
* terminate things with extreme prejudice. * terminate things with extreme prejudice.
*/ */
if (!user_space_fault(regs)) if (get_fault_type(regs) == KERNEL_FAULT)
printk(KERN_ALERT "Unable to handle kernel pointer dereference" printk(KERN_ALERT "Unable to handle kernel pointer dereference"
" in virtual kernel address space\n"); " in virtual kernel address space\n");
else else
...@@ -395,12 +412,11 @@ static noinline void do_fault_error(struct pt_regs *regs, int access, int fault) ...@@ -395,12 +412,11 @@ static noinline void do_fault_error(struct pt_regs *regs, int access, int fault)
*/ */
static inline int do_exception(struct pt_regs *regs, int access) static inline int do_exception(struct pt_regs *regs, int access)
{ {
#ifdef CONFIG_PGSTE
struct gmap *gmap; struct gmap *gmap;
#endif
struct task_struct *tsk; struct task_struct *tsk;
struct mm_struct *mm; struct mm_struct *mm;
struct vm_area_struct *vma; struct vm_area_struct *vma;
enum fault_type type;
unsigned long trans_exc_code; unsigned long trans_exc_code;
unsigned long address; unsigned long address;
unsigned int flags; unsigned int flags;
...@@ -425,8 +441,19 @@ static inline int do_exception(struct pt_regs *regs, int access) ...@@ -425,8 +441,19 @@ static inline int do_exception(struct pt_regs *regs, int access)
* user context. * user context.
*/ */
fault = VM_FAULT_BADCONTEXT; fault = VM_FAULT_BADCONTEXT;
if (unlikely(!user_space_fault(regs) || faulthandler_disabled() || !mm)) type = get_fault_type(regs);
switch (type) {
case KERNEL_FAULT:
goto out;
case VDSO_FAULT:
fault = VM_FAULT_BADMAP;
goto out; goto out;
case USER_FAULT:
case GMAP_FAULT:
if (faulthandler_disabled() || !mm)
goto out;
break;
}
address = trans_exc_code & __FAIL_ADDR_MASK; address = trans_exc_code & __FAIL_ADDR_MASK;
perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address); perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
...@@ -437,10 +464,9 @@ static inline int do_exception(struct pt_regs *regs, int access) ...@@ -437,10 +464,9 @@ static inline int do_exception(struct pt_regs *regs, int access)
flags |= FAULT_FLAG_WRITE; flags |= FAULT_FLAG_WRITE;
down_read(&mm->mmap_sem); down_read(&mm->mmap_sem);
#ifdef CONFIG_PGSTE gmap = NULL;
gmap = test_pt_regs_flag(regs, PIF_GUEST_FAULT) ? if (IS_ENABLED(CONFIG_PGSTE) && type == GMAP_FAULT) {
(struct gmap *) S390_lowcore.gmap : NULL; gmap = (struct gmap *) S390_lowcore.gmap;
if (gmap) {
current->thread.gmap_addr = address; current->thread.gmap_addr = address;
current->thread.gmap_write_flag = !!(flags & FAULT_FLAG_WRITE); current->thread.gmap_write_flag = !!(flags & FAULT_FLAG_WRITE);
current->thread.gmap_int_code = regs->int_code & 0xffff; current->thread.gmap_int_code = regs->int_code & 0xffff;
...@@ -452,7 +478,6 @@ static inline int do_exception(struct pt_regs *regs, int access) ...@@ -452,7 +478,6 @@ static inline int do_exception(struct pt_regs *regs, int access)
if (gmap->pfault_enabled) if (gmap->pfault_enabled)
flags |= FAULT_FLAG_RETRY_NOWAIT; flags |= FAULT_FLAG_RETRY_NOWAIT;
} }
#endif
retry: retry:
fault = VM_FAULT_BADMAP; fault = VM_FAULT_BADMAP;
...@@ -507,15 +532,14 @@ static inline int do_exception(struct pt_regs *regs, int access) ...@@ -507,15 +532,14 @@ static inline int do_exception(struct pt_regs *regs, int access)
regs, address); regs, address);
} }
if (fault & VM_FAULT_RETRY) { if (fault & VM_FAULT_RETRY) {
#ifdef CONFIG_PGSTE if (IS_ENABLED(CONFIG_PGSTE) && gmap &&
if (gmap && (flags & FAULT_FLAG_RETRY_NOWAIT)) { (flags & FAULT_FLAG_RETRY_NOWAIT)) {
/* FAULT_FLAG_RETRY_NOWAIT has been set, /* FAULT_FLAG_RETRY_NOWAIT has been set,
* mmap_sem has not been released */ * mmap_sem has not been released */
current->thread.gmap_pfault = 1; current->thread.gmap_pfault = 1;
fault = VM_FAULT_PFAULT; fault = VM_FAULT_PFAULT;
goto out_up; goto out_up;
} }
#endif
/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk /* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
* of starvation. */ * of starvation. */
flags &= ~(FAULT_FLAG_ALLOW_RETRY | flags &= ~(FAULT_FLAG_ALLOW_RETRY |
...@@ -525,8 +549,7 @@ static inline int do_exception(struct pt_regs *regs, int access) ...@@ -525,8 +549,7 @@ static inline int do_exception(struct pt_regs *regs, int access)
goto retry; goto retry;
} }
} }
#ifdef CONFIG_PGSTE if (IS_ENABLED(CONFIG_PGSTE) && gmap) {
if (gmap) {
address = __gmap_link(gmap, current->thread.gmap_addr, address = __gmap_link(gmap, current->thread.gmap_addr,
address); address);
if (address == -EFAULT) { if (address == -EFAULT) {
...@@ -538,7 +561,6 @@ static inline int do_exception(struct pt_regs *regs, int access) ...@@ -538,7 +561,6 @@ static inline int do_exception(struct pt_regs *regs, int access)
goto out_up; goto out_up;
} }
} }
#endif
fault = 0; fault = 0;
out_up: out_up:
up_read(&mm->mmap_sem); up_read(&mm->mmap_sem);
......
...@@ -95,6 +95,7 @@ void __init paging_init(void) ...@@ -95,6 +95,7 @@ void __init paging_init(void)
} }
init_mm.context.asce = (__pa(init_mm.pgd) & PAGE_MASK) | asce_bits; init_mm.context.asce = (__pa(init_mm.pgd) & PAGE_MASK) | asce_bits;
S390_lowcore.kernel_asce = init_mm.context.asce; S390_lowcore.kernel_asce = init_mm.context.asce;
S390_lowcore.user_asce = S390_lowcore.kernel_asce;
crst_table_init((unsigned long *) init_mm.pgd, pgd_type); crst_table_init((unsigned long *) init_mm.pgd, pgd_type);
vmem_map_init(); vmem_map_init();
......
...@@ -71,10 +71,8 @@ static void __crst_table_upgrade(void *arg) ...@@ -71,10 +71,8 @@ static void __crst_table_upgrade(void *arg)
{ {
struct mm_struct *mm = arg; struct mm_struct *mm = arg;
if (current->active_mm == mm) { if (current->active_mm == mm)
clear_user_asce();
set_user_asce(mm); set_user_asce(mm);
}
__tlb_flush_local(); __tlb_flush_local();
} }
......