Commit 72f31889 authored by Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: (31 commits)
  [S390] disassembler: mark exception causing instructions
  [S390] Enable exception traces by default
  [S390] return address of compat signals
  [S390] sysctl: get rid of dead declaration
  [S390] dasd: fix fixpoint divide exception in define_extent
  [S390] dasd: add sanity check to detect path connection error
  [S390] qdio: fix kernel panic for zfcp 31-bit
  [S390] Add s390x description to Documentation/kdump/kdump.txt
  [S390] Add VMCOREINFO_SYMBOL(high_memory) to vmcoreinfo
  [S390] dasd: fix expiration handling for recovery requests
  [S390] outstanding interrupts vs. smp_send_stop
  [S390] ipc: call generic sys_ipc demultiplexer
  [S390] zcrypt: Fix error return codes.
  [S390] zcrypt: Rework length parameter checking.
  [S390] cleanup trap handling
  [S390] Remove Kerntypes leftovers
  [S390] topology: increase poll frequency if change is anticipated
  [S390] entry[64].S improvements
  [S390] make arch/s390 subdirectories depend on config option
  [S390] kvm: move cmf host id constant out of lowcore
  ...

Fix up conflicts in arch/s390/kernel/{smp.c,topology.c} due to the
sysdev removal clashing with "topology: get rid of ifdefs" which moved
some of that code around.
Parents: a0e86bd4 2fa1d4fc
@@ -66,7 +66,6 @@ GRTAGS
 GSYMS
 GTAGS
 Image
-Kerntypes
 Module.markers
 Module.symvers
 PENDING
@@ -17,8 +17,8 @@ You can use common commands, such as cp and scp, to copy the
 memory image to a dump file on the local disk, or across the network to
 a remote system.
 
-Kdump and kexec are currently supported on the x86, x86_64, ppc64 and ia64
-architectures.
+Kdump and kexec are currently supported on the x86, x86_64, ppc64, ia64,
+and s390x architectures.
 
 When the system kernel boots, it reserves a small section of memory for
 the dump-capture kernel. This ensures that ongoing Direct Memory Access
@@ -34,11 +34,18 @@ Similarly on PPC64 machines first 32KB of physical memory is needed for
 booting regardless of where the kernel is loaded and to support 64K page
 size kexec backs up the first 64KB memory.
 
+For s390x, when kdump is triggered, the crashkernel region is exchanged
+with the region [0, crashkernel region size] and then the kdump kernel
+runs in [0, crashkernel region size]. Therefore no relocatable kernel is
+needed for s390x.
+
 All of the necessary information about the system kernel's core image is
 encoded in the ELF format, and stored in a reserved area of memory
 before a crash. The physical address of the start of the ELF header is
 passed to the dump-capture kernel through the elfcorehdr= boot
-parameter.
+parameter. Optionally the size of the ELF header can also be passed
+when using the elfcorehdr=[size[KMG]@]offset[KMG] syntax.
 
 With the dump-capture kernel, you can access the memory image, or "old
 memory," in two ways:
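As a sketch of the extended syntax added above (both numbers are hypothetical and are normally computed by the kexec tool rather than typed by hand), a dump-capture kernel could be booted with:

	elfcorehdr=2M@254M

i.e. a 2 MiB ELF header stored at physical offset 254 MiB; the plain elfcorehdr=offset[KMG] form remains valid.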
@@ -291,6 +298,10 @@ Boot into System Kernel
    The region may be automatically placed on ia64, see the
    dump-capture kernel config option notes above.
 
+   On s390x, typically use "crashkernel=xxM". The value of xx is dependent
+   on the memory consumption of the kdump system. In general this is not
+   dependent on the memory size of the production system.
+
 Load the Dump-capture Kernel
 ============================
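As a hypothetical illustration of the guidance above, reserving a fixed 128 MiB for the kdump system on s390x would look like:

	crashkernel=128M

and, per the note above, the same value would normally keep working even after the production system's memory size changes.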
@@ -308,6 +319,8 @@ For ppc64:
 	- Use vmlinux
 For ia64:
 	- Use vmlinux or vmlinuz.gz
+For s390x:
+	- Use image or bzImage
 
 If you are using a uncompressed vmlinux image then use following command
@@ -337,6 +350,8 @@ For i386, x86_64 and ia64:
 For ppc64:
 	"1 maxcpus=1 noirqdistrib reset_devices"
 
+For s390x:
+	"1 maxcpus=1 cgroup_disable=memory"
 
 Notes on loading the dump-capture kernel:
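Combining the s390x image type and command line from the two hunks above, a dump-capture kernel load might look like the following sketch (the paths are hypothetical):

	kexec -p /boot/image-kdump --initrd=/boot/initrd-kdump \
	      --command-line="1 maxcpus=1 cgroup_disable=memory"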
@@ -362,6 +377,20 @@ Notes on loading the dump-capture kernel:
   dump. Hence generally it is useful either to build a UP dump-capture
   kernel or specify maxcpus=1 option while loading dump-capture kernel.
 
+* For s390x there are two kdump modes: If a ELF header is specified with
+  the elfcorehdr= kernel parameter, it is used by the kdump kernel as it
+  is done on all other architectures. If no elfcorehdr= kernel parameter is
+  specified, the s390x kdump kernel dynamically creates the header. The
+  second mode has the advantage that for CPU and memory hotplug, kdump has
+  not to be reloaded with kexec_load().
+
+* For s390x systems with many attached devices the "cio_ignore" kernel
+  parameter should be used for the kdump kernel in order to prevent allocation
+  of kernel memory for devices that are not relevant for kdump. The same
+  applies to systems that use SCSI/FCP devices. In that case the
+  "allow_lun_scan" zfcp module parameter should be set to zero before
+  setting FCP devices online.
+
 Kernel Panic
 ============
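As an illustration of the two s390x notes above (the device number is hypothetical), the dump-capture kernel's command line could blacklist all subchannels except the dump target and disable the zfcp LUN scan:

	cio_ignore=all,!0.0.1234 zfcp.allow_lun_scan=0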
@@ -41,7 +41,6 @@ ldd
 Debugging modules
 The proc file system
 Starting points for debugging scripting languages etc.
-Dumptool & Lcrash
 SysRq
 References
 Special Thanks
@@ -2455,39 +2454,6 @@ jdb <filename> another fully interactive gdb style debugger.
 
-Dumptool & Lcrash ( lkcd )
-==========================
-Michael Holzheu & others here at IBM have a fairly mature port of
-SGI's lcrash tool which allows one to look at kernel structures in a
-running kernel.
-
-It also complements a tool called dumptool which dumps all the kernel's
-memory pages & registers to either a tape or a disk.
-This can be used by tech support or an ambitious end user do
-post mortem debugging of a machine like gdb core dumps.
-
-Going into how to use this tool in detail will be explained
-in other documentation supplied by IBM with the patches & the
-lcrash homepage http://oss.sgi.com/projects/lkcd/ & the lcrash manpage.
-
-How they work
--------------
-Lcrash is a perfectly normal program,however, it requires 2
-additional files, Kerntypes which is built using a patch to the
-linux kernel sources in the linux root directory & the System.map.
-
-Kerntypes is an objectfile whose sole purpose in life
-is to provide stabs debug info to lcrash, to do this
-Kerntypes is built from kerntypes.c which just includes the most commonly
-referenced header files used when debugging, lcrash can then read the
-.stabs section of this file.
-
-Debugging a live system it uses /dev/mem
-alternatively for post mortem debugging it uses the data
-collected by dumptool.
-
 SysRq
 =====
 This is now supported by linux for s/390 & z/Architecture.
 obj-y += kernel/
 obj-y += mm/
-obj-y += crypto/
-obj-y += appldata/
-obj-y += hypfs/
-obj-y += kvm/
+obj-$(CONFIG_KVM) += kvm/
+obj-$(CONFIG_CRYPTO_HW) += crypto/
+obj-$(CONFIG_S390_HYPFS_FS) += hypfs/
+obj-$(CONFIG_APPLDATA_BASE) += appldata/
+obj-$(CONFIG_MATHEMU) += math-emu/
@@ -193,18 +193,13 @@ config HOTPLUG_CPU
 	  Say N if you want to disable CPU hotplug.
 
 config SCHED_MC
-	def_bool y
-	prompt "Multi-core scheduler support"
-	depends on SMP
-	help
-	  Multi-core scheduler support improves the CPU scheduler's decision
-	  making when dealing with multi-core CPU chips at a cost of slightly
-	  increased overhead in some places.
+	def_bool n
 
 config SCHED_BOOK
 	def_bool y
 	prompt "Book scheduler support"
-	depends on SMP && SCHED_MC
+	depends on SMP
+	select SCHED_MC
 	help
 	  Book scheduler support improves the CPU scheduler's decision making
 	  when dealing with machines that have several books.
@@ -99,7 +99,6 @@ core-y		+= arch/s390/
 libs-y		+= arch/s390/lib/
 drivers-y	+= drivers/s390/
-drivers-$(CONFIG_MATHEMU) += arch/s390/math-emu/
 
 # must be linked after kernel
 drivers-$(CONFIG_OPROFILE) += arch/s390/oprofile/
@@ -23,4 +23,4 @@ $(obj)/compressed/vmlinux: FORCE
 install: $(CONFIGURE) $(obj)/image
 	sh -x $(srctree)/$(obj)/install.sh $(KERNELRELEASE) $(obj)/image \
-	      System.map Kerntypes "$(INSTALL_PATH)"
+	      System.map "$(INSTALL_PATH)"
@@ -22,6 +22,6 @@ enum die_val {
 	DIE_NMI_IPI,
 };
 
-extern void die(const char *, struct pt_regs *, long);
+extern void die(struct pt_regs *, const char *);
 
 #endif
@@ -97,47 +97,52 @@ struct _lowcore {
 	__u32	gpregs_save_area[16];		/* 0x0180 */
 	__u32	cregs_save_area[16];		/* 0x01c0 */
 
+	/* Save areas. */
+	__u32	save_area_sync[8];		/* 0x0200 */
+	__u32	save_area_async[8];		/* 0x0220 */
+	__u32	save_area_restart[1];		/* 0x0240 */
+	__u8	pad_0x0244[0x0248-0x0244];	/* 0x0244 */
+
 	/* Return psws. */
-	__u32	save_area[16];			/* 0x0200 */
-	psw_t	return_psw;			/* 0x0240 */
-	psw_t	return_mcck_psw;		/* 0x0248 */
+	psw_t	return_psw;			/* 0x0248 */
+	psw_t	return_mcck_psw;		/* 0x0250 */
 
 	/* CPU time accounting values */
-	__u64	sync_enter_timer;		/* 0x0250 */
-	__u64	async_enter_timer;		/* 0x0258 */
-	__u64	mcck_enter_timer;		/* 0x0260 */
-	__u64	exit_timer;			/* 0x0268 */
-	__u64	user_timer;			/* 0x0270 */
-	__u64	system_timer;			/* 0x0278 */
-	__u64	steal_timer;			/* 0x0280 */
-	__u64	last_update_timer;		/* 0x0288 */
-	__u64	last_update_clock;		/* 0x0290 */
+	__u64	sync_enter_timer;		/* 0x0258 */
+	__u64	async_enter_timer;		/* 0x0260 */
+	__u64	mcck_enter_timer;		/* 0x0268 */
+	__u64	exit_timer;			/* 0x0270 */
+	__u64	user_timer;			/* 0x0278 */
+	__u64	system_timer;			/* 0x0280 */
+	__u64	steal_timer;			/* 0x0288 */
+	__u64	last_update_timer;		/* 0x0290 */
+	__u64	last_update_clock;		/* 0x0298 */
 
 	/* Current process. */
-	__u32	current_task;			/* 0x0298 */
-	__u32	thread_info;			/* 0x029c */
-	__u32	kernel_stack;			/* 0x02a0 */
+	__u32	current_task;			/* 0x02a0 */
+	__u32	thread_info;			/* 0x02a4 */
+	__u32	kernel_stack;			/* 0x02a8 */
 
 	/* Interrupt and panic stack. */
-	__u32	async_stack;			/* 0x02a4 */
-	__u32	panic_stack;			/* 0x02a8 */
+	__u32	async_stack;			/* 0x02ac */
+	__u32	panic_stack;			/* 0x02b0 */
 
 	/* Address space pointer. */
-	__u32	kernel_asce;			/* 0x02ac */
-	__u32	user_asce;			/* 0x02b0 */
-	__u32	current_pid;			/* 0x02b4 */
+	__u32	kernel_asce;			/* 0x02b4 */
+	__u32	user_asce;			/* 0x02b8 */
+	__u32	current_pid;			/* 0x02bc */
 
 	/* SMP info area */
-	__u32	cpu_nr;				/* 0x02b8 */
-	__u32	softirq_pending;		/* 0x02bc */
-	__u32	percpu_offset;			/* 0x02c0 */
-	__u32	ext_call_fast;			/* 0x02c4 */
-	__u64	int_clock;			/* 0x02c8 */
-	__u64	mcck_clock;			/* 0x02d0 */
-	__u64	clock_comparator;		/* 0x02d8 */
-	__u32	machine_flags;			/* 0x02e0 */
-	__u32	ftrace_func;			/* 0x02e4 */
-	__u8	pad_0x02e8[0x0300-0x02e8];	/* 0x02e8 */
+	__u32	cpu_nr;				/* 0x02c0 */
+	__u32	softirq_pending;		/* 0x02c4 */
+	__u32	percpu_offset;			/* 0x02c8 */
+	__u32	ext_call_fast;			/* 0x02cc */
+	__u64	int_clock;			/* 0x02d0 */
+	__u64	mcck_clock;			/* 0x02d8 */
+	__u64	clock_comparator;		/* 0x02e0 */
+	__u32	machine_flags;			/* 0x02e8 */
+	__u32	ftrace_func;			/* 0x02ec */
+	__u8	pad_0x02f0[0x0300-0x02f0];	/* 0x02f0 */
 
 	/* Interrupt response block */
 	__u8	irb[64];			/* 0x0300 */
@@ -229,57 +234,62 @@ struct _lowcore {
 	psw_t	mcck_new_psw;			/* 0x01e0 */
 	psw_t	io_new_psw;			/* 0x01f0 */
 
-	/* Entry/exit save area & return psws. */
-	__u64	save_area[16];			/* 0x0200 */
-	psw_t	return_psw;			/* 0x0280 */
-	psw_t	return_mcck_psw;		/* 0x0290 */
+	/* Save areas. */
+	__u64	save_area_sync[8];		/* 0x0200 */
+	__u64	save_area_async[8];		/* 0x0240 */
+	__u64	save_area_restart[1];		/* 0x0280 */
+	__u8	pad_0x0288[0x0290-0x0288];	/* 0x0288 */
+
+	/* Return psws. */
+	psw_t	return_psw;			/* 0x0290 */
+	psw_t	return_mcck_psw;		/* 0x02a0 */
 
 	/* CPU accounting and timing values. */
-	__u64	sync_enter_timer;		/* 0x02a0 */
-	__u64	async_enter_timer;		/* 0x02a8 */
-	__u64	mcck_enter_timer;		/* 0x02b0 */
-	__u64	exit_timer;			/* 0x02b8 */
-	__u64	user_timer;			/* 0x02c0 */
-	__u64	system_timer;			/* 0x02c8 */
-	__u64	steal_timer;			/* 0x02d0 */
-	__u64	last_update_timer;		/* 0x02d8 */
-	__u64	last_update_clock;		/* 0x02e0 */
+	__u64	sync_enter_timer;		/* 0x02b0 */
+	__u64	async_enter_timer;		/* 0x02b8 */
+	__u64	mcck_enter_timer;		/* 0x02c0 */
+	__u64	exit_timer;			/* 0x02c8 */
+	__u64	user_timer;			/* 0x02d0 */
+	__u64	system_timer;			/* 0x02d8 */
+	__u64	steal_timer;			/* 0x02e0 */
+	__u64	last_update_timer;		/* 0x02e8 */
+	__u64	last_update_clock;		/* 0x02f0 */
 
 	/* Current process. */
-	__u64	current_task;			/* 0x02e8 */
-	__u64	thread_info;			/* 0x02f0 */
-	__u64	kernel_stack;			/* 0x02f8 */
+	__u64	current_task;			/* 0x02f8 */
+	__u64	thread_info;			/* 0x0300 */
+	__u64	kernel_stack;			/* 0x0308 */
 
 	/* Interrupt and panic stack. */
-	__u64	async_stack;			/* 0x0300 */
-	__u64	panic_stack;			/* 0x0308 */
+	__u64	async_stack;			/* 0x0310 */
+	__u64	panic_stack;			/* 0x0318 */
 
 	/* Address space pointer. */
-	__u64	kernel_asce;			/* 0x0310 */
-	__u64	user_asce;			/* 0x0318 */
-	__u64	current_pid;			/* 0x0320 */
+	__u64	kernel_asce;			/* 0x0320 */
+	__u64	user_asce;			/* 0x0328 */
+	__u64	current_pid;			/* 0x0330 */
 
 	/* SMP info area */
-	__u32	cpu_nr;				/* 0x0328 */
-	__u32	softirq_pending;		/* 0x032c */
-	__u64	percpu_offset;			/* 0x0330 */
-	__u64	ext_call_fast;			/* 0x0338 */
-	__u64	int_clock;			/* 0x0340 */
-	__u64	mcck_clock;			/* 0x0348 */
-	__u64	clock_comparator;		/* 0x0350 */
-	__u64	vdso_per_cpu_data;		/* 0x0358 */
-	__u64	machine_flags;			/* 0x0360 */
-	__u64	ftrace_func;			/* 0x0368 */
-	__u64	gmap;				/* 0x0370 */
-	__u64	cmf_hpp;			/* 0x0378 */
+	__u32	cpu_nr;				/* 0x0338 */
+	__u32	softirq_pending;		/* 0x033c */
+	__u64	percpu_offset;			/* 0x0340 */
+	__u64	ext_call_fast;			/* 0x0348 */
+	__u64	int_clock;			/* 0x0350 */
+	__u64	mcck_clock;			/* 0x0358 */
+	__u64	clock_comparator;		/* 0x0360 */
+	__u64	vdso_per_cpu_data;		/* 0x0368 */
+	__u64	machine_flags;			/* 0x0370 */
+	__u64	ftrace_func;			/* 0x0378 */
+	__u64	gmap;				/* 0x0380 */
+	__u8	pad_0x0388[0x0400-0x0388];	/* 0x0388 */
 
 	/* Interrupt response block. */
-	__u8	irb[64];			/* 0x0380 */
+	__u8	irb[64];			/* 0x0400 */
 
 	/* Per cpu primary space access list */
-	__u32	paste[16];			/* 0x03c0 */
+	__u32	paste[16];			/* 0x0440 */
 
-	__u8	pad_0x0400[0x0e00-0x0400];	/* 0x0400 */
+	__u8	pad_0x0480[0x0e00-0x0480];	/* 0x0480 */
 
 	/*
 	 * 0xe00 contains the address of the IPL Parameter Information
@@ -128,28 +128,11 @@ static inline int is_zero_pfn(unsigned long pfn)
  * effect, this also makes sure that 64 bit module code cannot be used
  * as system call address.
  */
 extern unsigned long VMALLOC_START;
+extern unsigned long VMALLOC_END;
+extern struct page *vmemmap;
 
-#ifndef __s390x__
-#define VMALLOC_SIZE	(96UL << 20)
-#define VMALLOC_END	0x7e000000UL
-#define VMEM_MAP_END	0x80000000UL
-#else /* __s390x__ */
-#define VMALLOC_SIZE	(128UL << 30)
-#define VMALLOC_END	0x3e000000000UL
-#define VMEM_MAP_END	0x40000000000UL
-#endif /* __s390x__ */
-
-/*
- * VMEM_MAX_PHYS is the highest physical address that can be added to the 1:1
- * mapping. This needs to be calculated at compile time since the size of the
- * VMEM_MAP is static but the size of struct page can change.
- */
-#define VMEM_MAX_PAGES	((VMEM_MAP_END - VMALLOC_END) / sizeof(struct page))
-#define VMEM_MAX_PFN	min(VMALLOC_START >> PAGE_SHIFT, VMEM_MAX_PAGES)
-#define VMEM_MAX_PHYS	((VMEM_MAX_PFN << PAGE_SHIFT) & ~((16 << 20) - 1))
-#define vmemmap		((struct page *) VMALLOC_END)
+#define VMEM_MAX_PHYS	((unsigned long) vmemmap)
 
 /*
  * A 31 bit pagetable entry of S390 has following format:
@@ -80,8 +80,6 @@ struct thread_struct {
 	unsigned int  acrs[NUM_ACRS];
 	unsigned long ksp;		/* kernel stack pointer */
 	mm_segment_t  mm_segment;
-	unsigned long prot_addr;	/* address of protection-excep. */
-	unsigned int  trap_no;
 	unsigned long gmap_addr;	/* address of last gmap fault. */
 	struct per_regs per_user;	/* User specified PER registers */
 	struct per_event per_event;	/* Cause of the last PER trap */
@@ -324,7 +324,8 @@ struct pt_regs
 	psw_t psw;
 	unsigned long gprs[NUM_GPRS];
 	unsigned long orig_gpr2;
-	unsigned int svc_code;
+	unsigned int int_code;
+	unsigned long int_parm_long;
 };
 
 /*
@@ -352,7 +352,7 @@ typedef void qdio_handler_t(struct ccw_device *, unsigned int, int,
  * @no_output_qs: number of output queues
  * @input_handler: handler to be called for input queues
  * @output_handler: handler to be called for output queues
- * @queue_start_poll: polling handlers (one per input queue or NULL)
+ * @queue_start_poll_array: polling handlers (one per input queue or NULL)
  * @int_parm: interruption parameter
  * @input_sbal_addr_array:  address of no_input_qs * 128 pointers
  * @output_sbal_addr_array: address of no_output_qs * 128 pointers
@@ -372,7 +372,8 @@ struct qdio_initialize {
 	unsigned int no_output_qs;
 	qdio_handler_t *input_handler;
 	qdio_handler_t *output_handler;
-	void (**queue_start_poll) (struct ccw_device *, int, unsigned long);
+	void (**queue_start_poll_array) (struct ccw_device *, int,
+					 unsigned long);
 	int scan_threshold;
 	unsigned long int_parm;
 	void **input_sbal_addr_array;
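For callers the adjustment is mechanical; only the member name changes. A minimal sketch, with hypothetical handler and variable names, of how a qdio driver would populate the renamed field:

	/* Hypothetical example; not part of this commit. */
	static void my_queue_start_poll(struct ccw_device *cdev, int queue,
					unsigned long int_parm)
	{
		/* kick the driver's polling path for input queue 'queue' */
	}

	static void (*my_poll_handlers[1])(struct ccw_device *, int,
					   unsigned long) = {
		my_queue_start_poll,	/* one entry per input queue, or NULL */
	};

	static struct qdio_initialize my_init_data = {
		/* ... other members filled in as before ... */
		.queue_start_poll_array = my_poll_handlers, /* was .queue_start_poll */
	};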
@@ -56,6 +56,7 @@ enum {
 	ec_schedule = 0,
 	ec_call_function,
 	ec_call_function_single,
+	ec_stop_cpu,
 };
 
 /*
@@ -23,7 +23,6 @@ extern void __cpu_die (unsigned int cpu);
 extern int __cpu_up (unsigned int cpu);
 
 extern struct mutex smp_cpu_state_mutex;
-extern int smp_cpu_polarization[];
 
 extern void arch_send_call_function_single_ipi(int cpu);
 extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
@@ -4,8 +4,8 @@
 #ifdef CONFIG_64BIT
 
 #define SECTION_SIZE_BITS	28
-#define MAX_PHYSADDR_BITS	42
-#define MAX_PHYSMEM_BITS	42
+#define MAX_PHYSADDR_BITS	46
+#define MAX_PHYSMEM_BITS	46
 
 #else
@@ -27,7 +27,7 @@ static inline long syscall_get_nr(struct task_struct *task,
 				  struct pt_regs *regs)
 {
 	return test_tsk_thread_flag(task, TIF_SYSCALL) ?
-		(regs->svc_code & 0xffff) : -1;
+		(regs->int_code & 0xffff) : -1;
 }
 
 static inline void syscall_rollback(struct task_struct *task,
@@ -20,8 +20,6 @@
 
 struct task_struct;
 
-extern int sysctl_userprocess_debug;
-
 extern struct task_struct *__switch_to(void *, void *);
 extern void update_per_regs(struct task_struct *task);
@@ -4,6 +4,10 @@
 #include <linux/cpumask.h>
 #include <asm/sysinfo.h>
 
+struct cpu;
+
+#ifdef CONFIG_SCHED_BOOK
+
 extern unsigned char cpu_core_id[NR_CPUS];
 extern cpumask_t cpu_core_map[NR_CPUS];
@@ -16,8 +20,6 @@ static inline const struct cpumask *cpu_coregroup_mask(int cpu)
 #define topology_core_cpumask(cpu)	(&cpu_core_map[cpu])
 #define mc_capable()			(1)
 
-#ifdef CONFIG_SCHED_BOOK
-
 extern unsigned char cpu_book_id[NR_CPUS];
 extern cpumask_t cpu_book_map[NR_CPUS];
@@ -29,19 +31,45 @@ static inline const struct cpumask *cpu_book_mask(int cpu)
 #define topology_book_id(cpu)		(cpu_book_id[cpu])
 #define topology_book_cpumask(cpu)	(&cpu_book_map[cpu])
 
-#endif /* CONFIG_SCHED_BOOK */
-
+int topology_cpu_init(struct cpu *);
 int topology_set_cpu_management(int fc);
 void topology_schedule_update(void);
 void store_topology(struct sysinfo_15_1_x *info);
+void topology_expect_change(void);
+
+#else /* CONFIG_SCHED_BOOK */
+
+static inline void topology_schedule_update(void) { }
+static inline int topology_cpu_init(struct cpu *cpu) { return 0; }
+static inline void topology_expect_change(void) { }
 
-#define POLARIZATION_UNKNWN	(-1)
+#endif /* CONFIG_SCHED_BOOK */
+
+#define POLARIZATION_UNKNOWN	(-1)
 #define POLARIZATION_HRZ	(0)
 #define POLARIZATION_VL		(1)
 #define POLARIZATION_VM		(2)
 #define POLARIZATION_VH		(3)
 
-#ifdef CONFIG_SMP
+extern int cpu_polarization[];
+
+static inline void cpu_set_polarization(int cpu, int val)
+{
+#ifdef CONFIG_SCHED_BOOK
+	cpu_polarization[cpu] = val;
+#endif
+}
+
+static inline int cpu_read_polarization(int cpu)
+{
+#ifdef CONFIG_SCHED_BOOK
+	return cpu_polarization[cpu];
+#else
+	return POLARIZATION_HRZ;
+#endif
+}
+
+#ifdef CONFIG_SCHED_BOOK
 void s390_init_cpu_topology(void);
 #else
 static inline void s390_init_cpu_topology(void)
@@ -398,6 +398,7 @@
 #define __ARCH_WANT_SYS_SIGNAL
 #define __ARCH_WANT_SYS_UTIME
 #define __ARCH_WANT_SYS_SOCKETCALL
+#define __ARCH_WANT_SYS_IPC
 #define __ARCH_WANT_SYS_FADVISE64
 #define __ARCH_WANT_SYS_GETPGRP
 #define __ARCH_WANT_SYS_LLSEEK
@@ -32,7 +32,8 @@ extra-y				+= head.o init_task.o vmlinux.lds
 extra-y				+= $(if $(CONFIG_64BIT),head64.o,head31.o)
 
 obj-$(CONFIG_MODULES)		+= s390_ksyms.o module.o
-obj-$(CONFIG_SMP)		+= smp.o topology.o
+obj-$(CONFIG_SMP)		+= smp.o
+obj-$(CONFIG_SCHED_BOOK)	+= topology.o
 obj-$(CONFIG_SMP)		+= $(if $(CONFIG_64BIT),switch_cpu64.o, \
 						switch_cpu.o)
 obj-$(CONFIG_HIBERNATION)	+= suspend.o swsusp_asm64.o
@@ -45,7 +45,8 @@ int main(void)
 	DEFINE(__PT_PSW, offsetof(struct pt_regs, psw));
 	DEFINE(__PT_GPRS, offsetof(struct pt_regs, gprs));
 	DEFINE(__PT_ORIG_GPR2, offsetof(struct pt_regs, orig_gpr2));
-	DEFINE(__PT_SVC_CODE, offsetof(struct pt_regs, svc_code));
+	DEFINE(__PT_INT_CODE, offsetof(struct pt_regs, int_code));
+	DEFINE(__PT_INT_PARM_LONG, offsetof(struct pt_regs, int_parm_long));
 	DEFINE(__PT_SIZE, sizeof(struct pt_regs));
 	BLANK();
 	DEFINE(__SF_BACKCHAIN, offsetof(struct stack_frame, back_chain));
@@ -108,7 +109,9 @@ int main(void)
 	DEFINE(__LC_PGM_NEW_PSW, offsetof(struct _lowcore, program_new_psw));
 	DEFINE(__LC_MCK_NEW_PSW, offsetof(struct _lowcore, mcck_new_psw));
 	DEFINE(__LC_IO_NEW_PSW, offsetof(struct _lowcore, io_new_psw));
-	DEFINE(__LC_SAVE_AREA, offsetof(struct _lowcore, save_area));
+	DEFINE(__LC_SAVE_AREA_SYNC, offsetof(struct _lowcore, save_area_sync));
+	DEFINE(__LC_SAVE_AREA_ASYNC, offsetof(struct _lowcore, save_area_async));
+	DEFINE(__LC_SAVE_AREA_RESTART, offsetof(struct _lowcore, save_area_restart));
 	DEFINE(__LC_RETURN_PSW, offsetof(struct _lowcore, return_psw));
 	DEFINE(__LC_RETURN_MCCK_PSW, offsetof(struct _lowcore, return_mcck_psw));
 	DEFINE(__LC_SYNC_ENTER_TIMER, offsetof(struct _lowcore, sync_enter_timer));
@@ -150,7 +153,6 @@ int main(void)
 	DEFINE(__LC_LAST_BREAK, offsetof(struct _lowcore, breaking_event_addr));
 	DEFINE(__LC_VDSO_PER_CPU, offsetof(struct _lowcore, vdso_per_cpu_data));
 	DEFINE(__LC_GMAP, offsetof(struct _lowcore, gmap));
-	DEFINE(__LC_CMF_HPP, offsetof(struct _lowcore, cmf_hpp));
 	DEFINE(__GMAP_ASCE, offsetof(struct gmap, asce));
 #endif /* CONFIG_32BIT */
 	return 0;
@@ -33,7 +33,7 @@ s390_base_mcck_handler_fn:
 	.previous
 
 ENTRY(s390_base_ext_handler)
-	stmg	%r0,%r15,__LC_SAVE_AREA
+	stmg	%r0,%r15,__LC_SAVE_AREA_ASYNC
 	basr	%r13,0
 0:	aghi	%r15,-STACK_FRAME_OVERHEAD
 	larl	%r1,s390_base_ext_handler_fn
@@ -41,7 +41,7 @@ ENTRY(s390_base_ext_handler)
 	ltgr	%r1,%r1
 	jz	1f
 	basr	%r14,%r1
-1:	lmg	%r0,%r15,__LC_SAVE_AREA
+1:	lmg	%r0,%r15,__LC_SAVE_AREA_ASYNC
 	ni	__LC_EXT_OLD_PSW+1,0xfd	# clear wait state bit
 	lpswe	__LC_EXT_OLD_PSW
@@ -53,7 +53,7 @@ s390_base_ext_handler_fn:
 	.previous
 
 ENTRY(s390_base_pgm_handler)
-	stmg	%r0,%r15,__LC_SAVE_AREA
+	stmg	%r0,%r15,__LC_SAVE_AREA_SYNC
 	basr	%r13,0
 0:	aghi	%r15,-STACK_FRAME_OVERHEAD
 	larl	%r1,s390_base_pgm_handler_fn
@@ -61,7 +61,7 @@ ENTRY(s390_base_pgm_handler)
 	ltgr	%r1,%r1
 	jz	1f
 	basr	%r14,%r1
-	lmg	%r0,%r15,__LC_SAVE_AREA
+	lmg	%r0,%r15,__LC_SAVE_AREA_SYNC
 	lpswe	__LC_PGM_OLD_PSW
 1:	lpswe	disabled_wait_psw-0b(%r13)
@@ -142,7 +142,7 @@ s390_base_mcck_handler_fn:
 	.previous
 
 ENTRY(s390_base_ext_handler)
-	stm	%r0,%r15,__LC_SAVE_AREA
+	stm	%r0,%r15,__LC_SAVE_AREA_ASYNC
 	basr	%r13,0
 0:	ahi	%r15,-STACK_FRAME_OVERHEAD
 	l	%r1,2f-0b(%r13)
@@ -150,7 +150,7 @@ ENTRY(s390_base_ext_handler)
 	ltr	%r1,%r1
 	jz	1f
 	basr	%r14,%r1
-1:	lm	%r0,%r15,__LC_SAVE_AREA
+1:	lm	%r0,%r15,__LC_SAVE_AREA_ASYNC
 	ni	__LC_EXT_OLD_PSW+1,0xfd	# clear wait state bit
 	lpsw	__LC_EXT_OLD_PSW
@@ -164,7 +164,7 @@ s390_base_ext_handler_fn:
 	.previous
 
 ENTRY(s390_base_pgm_handler)
-	stm	%r0,%r15,__LC_SAVE_AREA
+	stm	%r0,%r15,__LC_SAVE_AREA_SYNC
 	basr	%r13,0
 0:	ahi	%r15,-STACK_FRAME_OVERHEAD
 	l	%r1,2f-0b(%r13)
@@ -172,7 +172,7 @@ ENTRY(s390_base_pgm_handler)
 	ltr	%r1,%r1
 	jz	1f
 	basr	%r14,%r1
-	lm	%r0,%r15,__LC_SAVE_AREA
+	lm	%r0,%r15,__LC_SAVE_AREA_SYNC
 	lpsw	__LC_PGM_OLD_PSW
 1:	lpsw	disabled_wait_psw-0b(%r13)
@@ -278,9 +278,6 @@ asmlinkage long sys32_ipc(u32 call, int first, int second, int third, u32 ptr)
 {
 	if (call >> 16)		/* hack for backward compatibility */
 		return -EINVAL;
-
-	call &= 0xffff;
-
 	switch (call) {
 	case SEMTIMEDOP:
 		return compat_sys_semtimedop(first, compat_ptr(ptr),
@@ -501,8 +501,12 @@ static int setup_frame32(int sig, struct k_sigaction *ka,
 
 	/* We forgot to include these in the sigcontext.
 	   To avoid breaking binary compatibility, they are passed as args. */
-	regs->gprs[4] = current->thread.trap_no;
-	regs->gprs[5] = current->thread.prot_addr;
+	if (sig == SIGSEGV || sig == SIGBUS || sig == SIGILL ||
+	    sig == SIGTRAP || sig == SIGFPE) {
+		/* set extra registers only for synchronous signals */
+		regs->gprs[4] = regs->int_code & 127;
+		regs->gprs[5] = regs->int_parm_long;
+	}
 
 	/* Place signal number on stack to allow backtrace from handler.  */
 	if (__put_user(regs->gprs[2], (int __force __user *) &frame->signo))
@@ -544,9 +548,9 @@ static int setup_rt_frame32(int sig, struct k_sigaction *ka, siginfo_t *info,
 	/* Set up to return from userspace.  If provided, use a stub
 	   already in userspace.  */
 	if (ka->sa.sa_flags & SA_RESTORER) {
-		regs->gprs[14] = (__u64) ka->sa.sa_restorer;
+		regs->gprs[14] = (__u64) ka->sa.sa_restorer | PSW32_ADDR_AMODE;
 	} else {
-		regs->gprs[14] = (__u64) frame->retcode;
+		regs->gprs[14] = (__u64) frame->retcode | PSW32_ADDR_AMODE;
 		err |= __put_user(S390_SYSCALL_OPCODE | __NR_rt_sigreturn,
 				  (u16 __force __user *)(frame->retcode));
 	}
@@ -1578,10 +1578,15 @@ void show_code(struct pt_regs *regs)
 	ptr += sprintf(ptr, "%s Code:", mode);
 	hops = 0;
 	while (start < end && hops < 8) {
-		*ptr++ = (start == 32) ? '>' : ' ';
+		opsize = insn_length(code[start]);
+		if (start + opsize == 32)
+			*ptr++ = '#';
+		else if (start == 32)
+			*ptr++ = '>';
+		else
+			*ptr++ = ' ';
 		addr = regs->psw.addr + start - 32;
 		ptr += sprintf(ptr, ONELONG, addr);
-		opsize = insn_length(code[start]);
 		if (start + opsize >= end)
 			break;
 		for (i = 0; i < opsize; i++)
@@ -434,18 +434,22 @@ static void __init append_to_cmdline(size_t (*ipl_data)(char *, size_t))
 	}
 }
 
-static void __init setup_boot_command_line(void)
+static inline int has_ebcdic_char(const char *str)
 {
 	int i;
 
-	/* convert arch command line to ascii */
-	for (i = 0; i < ARCH_COMMAND_LINE_SIZE; i++)
-		if (COMMAND_LINE[i] & 0x80)
-			break;
-	if (i < ARCH_COMMAND_LINE_SIZE)
-		EBCASC(COMMAND_LINE, ARCH_COMMAND_LINE_SIZE);
-	COMMAND_LINE[ARCH_COMMAND_LINE_SIZE-1] = 0;
+	for (i = 0; str[i]; i++)
+		if (str[i] & 0x80)
+			return 1;
+	return 0;
+}
 
+static void __init setup_boot_command_line(void)
+{
+	COMMAND_LINE[ARCH_COMMAND_LINE_SIZE - 1] = 0;
+	/* convert arch command line to ascii if necessary */
+	if (has_ebcdic_char(COMMAND_LINE))
+		EBCASC(COMMAND_LINE, ARCH_COMMAND_LINE_SIZE);
 	/* copy arch command line */
 	strlcpy(boot_command_line, strstrip(COMMAND_LINE),
 		ARCH_COMMAND_LINE_SIZE);
@@ -6,15 +6,15 @@
 #include <asm/ptrace.h>
 
-extern void (*pgm_check_table[128])(struct pt_regs *, long, unsigned long);
+extern void (*pgm_check_table[128])(struct pt_regs *);
 extern void *restart_stack;
 
 asmlinkage long do_syscall_trace_enter(struct pt_regs *regs);
 asmlinkage void do_syscall_trace_exit(struct pt_regs *regs);
 
-void do_protection_exception(struct pt_regs *, long, unsigned long);
-void do_dat_exception(struct pt_regs *, long, unsigned long);
-void do_asce_exception(struct pt_regs *, long, unsigned long);
+void do_protection_exception(struct pt_regs *regs);
+void do_dat_exception(struct pt_regs *regs);
+void do_asce_exception(struct pt_regs *regs);
 
 void do_per_trap(struct pt_regs *regs);
 void syscall_trace(struct pt_regs *regs, int entryexit);
@@ -28,7 +28,7 @@ void do_extint(struct pt_regs *regs, unsigned int, unsigned int, unsigned long);
 void do_restart(void);
 int __cpuinit start_secondary(void *cpuvoid);
 void __init startup_init(void);
-void die(const char * str, struct pt_regs * regs, long err);
+void die(struct pt_regs *regs, const char *str);
 
 void __init time_init(void);
@@ -329,8 +329,8 @@ iplstart:
 #
 # reset files in VM reader
 #
-	stidp	__LC_SAVE_AREA		# store cpuid
-	tm	__LC_SAVE_AREA,0xff	# running VM ?
+	stidp	__LC_SAVE_AREA_SYNC	# store cpuid
+	tm	__LC_SAVE_AREA_SYNC,0xff# running VM ?
 	bno	.Lnoreset
 	la	%r2,.Lreset
 	lhi	%r3,26
@@ -208,6 +208,7 @@ void machine_kexec_cleanup(struct kimage *image)
 void arch_crash_save_vmcoreinfo(void)
 {
 	VMCOREINFO_SYMBOL(lowcore_ptr);
+	VMCOREINFO_SYMBOL(high_memory);
 	VMCOREINFO_LENGTH(lowcore_ptr, NR_CPUS);
 }
@@ -63,71 +63,83 @@ void detect_memory_layout(struct mem_chunk chunk[])
 }
 EXPORT_SYMBOL(detect_memory_layout);
 
+/*
+ * Move memory chunks array from index "from" to index "to"
+ */
+static void mem_chunk_move(struct mem_chunk chunk[], int to, int from)
+{
+	int cnt = MEMORY_CHUNKS - to;
+
+	memmove(&chunk[to], &chunk[from], cnt * sizeof(struct mem_chunk));
+}
+
+/*
+ * Initialize memory chunk
+ */
+static void mem_chunk_init(struct mem_chunk *chunk, unsigned long addr,
+			   unsigned long size, int type)
+{
+	chunk->type = type;
+	chunk->addr = addr;
+	chunk->size = size;
+}
+
 /*
  * Create memory hole with given address, size, and type
  */
-void create_mem_hole(struct mem_chunk chunks[], unsigned long addr,
+void create_mem_hole(struct mem_chunk chunk[], unsigned long addr,
 		     unsigned long size, int type)
 {
-	unsigned long start, end, new_size;
-	int i;
+	unsigned long lh_start, lh_end, lh_size, ch_start, ch_end, ch_size;
+	int i, ch_type;
 
 	for (i = 0; i < MEMORY_CHUNKS; i++) {
-		if (chunks[i].size == 0)
-			continue;
-		if (addr + size < chunks[i].addr)
-			continue;
-		if (addr >= chunks[i].addr + chunks[i].size)
+		if (chunk[i].size == 0)
 			continue;
-		start = max(addr, chunks[i].addr);
-		end = min(addr + size, chunks[i].addr + chunks[i].size);
-		new_size = end - start;
-		if (new_size == 0)
-			continue;
-		if (start == chunks[i].addr &&
-		    end == chunks[i].addr + chunks[i].size) {
-			/* Remove chunk */
-			chunks[i].type = type;
-		} else if (start == chunks[i].addr) {
-			/* Make chunk smaller at start */
-			if (i >= MEMORY_CHUNKS - 1)
-				panic("Unable to create memory hole");
-			memmove(&chunks[i + 1], &chunks[i],
-				sizeof(struct mem_chunk) *
-				(MEMORY_CHUNKS - (i + 1)));
-			chunks[i + 1].addr = chunks[i].addr + new_size;
-			chunks[i + 1].size = chunks[i].size - new_size;
-			chunks[i].size = new_size;
-			chunks[i].type = type;
-			i += 1;
-		} else if (end == chunks[i].addr + chunks[i].size) {
-			/* Make chunk smaller at end */
-			if (i >= MEMORY_CHUNKS - 1)
-				panic("Unable to create memory hole");
-			memmove(&chunks[i + 1], &chunks[i],
-				sizeof(struct mem_chunk) *
-				(MEMORY_CHUNKS - (i + 1)));
-			chunks[i + 1].addr = start;
-			chunks[i + 1].size = new_size;
-			chunks[i + 1].type = type;
-			chunks[i].size -= new_size;
+
+		/* Define chunk properties */
+		ch_start = chunk[i].addr;
+		ch_size = chunk[i].size;
+		ch_end = ch_start + ch_size - 1;
+		ch_type = chunk[i].type;
+
+		/* Is memory chunk hit by memory hole? */
+		if (addr + size <= ch_start)
+			continue; /* No: memory hole in front of chunk */
+		if (addr > ch_end)
+			continue; /* No: memory hole after chunk */
+
+		/* Yes: Define local hole properties */
+		lh_start = max(addr, chunk[i].addr);
+		lh_end = min(addr + size - 1, ch_end);
+		lh_size = lh_end - lh_start + 1;
+
+		if (lh_start == ch_start && lh_end == ch_end) {
+			/* Hole covers complete memory chunk */
+			mem_chunk_init(&chunk[i], lh_start, lh_size, type);
+		} else if (lh_end == ch_end) {
+			/* Hole starts in memory chunk and convers chunk end */
+			mem_chunk_move(chunk, i + 1, i);
+			mem_chunk_init(&chunk[i], ch_start, ch_size - lh_size,
+				       ch_type);
+			mem_chunk_init(&chunk[i + 1], lh_start, lh_size, type);
 			i += 1;
+		} else if (lh_start == ch_start) {
+			/* Hole ends in memory chunk */
+			mem_chunk_move(chunk, i + 1, i);
+			mem_chunk_init(&chunk[i], lh_start, lh_size, type);
+			mem_chunk_init(&chunk[i + 1], lh_end + 1,
+				       ch_size - lh_size, ch_type);
+			break;
 		} else {
-			/* Create memory hole */
-			if (i >= MEMORY_CHUNKS - 2)
-				panic("Unable to create memory hole");
-			memmove(&chunks[i + 2], &chunks[i],
-				sizeof(struct mem_chunk) *
-				(MEMORY_CHUNKS - (i + 2)));
-			chunks[i + 1].addr = addr;
-			chunks[i + 1].size = size;
-			chunks[i + 1].type = type;
-			chunks[i + 2].addr = addr + size;
-			chunks[i + 2].size =
-				chunks[i].addr + chunks[i].size - (addr + size);
-			chunks[i + 2].type = chunks[i].type;
-			chunks[i].size = addr - chunks[i].addr;
-			i += 2;
+			/* Hole splits memory chunk */
+			mem_chunk_move(chunk, i + 2, i);
+			mem_chunk_init(&chunk[i], ch_start,
+				       lh_start - ch_start, ch_type);
+			mem_chunk_init(&chunk[i + 1], lh_start, lh_size, type);
+			mem_chunk_init(&chunk[i + 2], lh_end + 1,
+				       ch_end - lh_end, ch_type);
+			break;
 		}
 	}
 }
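A quick worked example of the rewritten logic (the values and chunk types are hypothetical): punching a 64 MB hole of type CHUNK_CRASHK at address 128 MB into a single 512 MB CHUNK_READ_WRITE chunk takes the "hole splits memory chunk" branch and leaves:

	chunk[0] = { .addr = 0,      .size = 128 MB, .type = CHUNK_READ_WRITE }
	chunk[1] = { .addr = 128 MB, .size =  64 MB, .type = CHUNK_CRASHK     }
	chunk[2] = { .addr = 192 MB, .size = 320 MB, .type = CHUNK_READ_WRITE }

which matches the lh_start/lh_end arithmetic: chunk[0] gets lh_start - ch_start bytes, chunk[2] starts at lh_end + 1 and gets ch_end - lh_end bytes.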
@@ -17,11 +17,11 @@
 #
 ENTRY(store_status)
 	/* Save register one and load save area base */
-	stg	%r1,__LC_SAVE_AREA+120(%r0)
+	stg	%r1,__LC_SAVE_AREA_RESTART
 	lghi	%r1,SAVE_AREA_BASE
 	/* General purpose registers */
 	stmg	%r0,%r15,__LC_GPREGS_SAVE_AREA-SAVE_AREA_BASE(%r1)
-	lg	%r2,__LC_SAVE_AREA+120(%r0)
+	lg	%r2,__LC_SAVE_AREA_RESTART
 	stg	%r2,__LC_GPREGS_SAVE_AREA-SAVE_AREA_BASE+8(%r1)
 	/* Control registers */
 	stctg	%c0,%c15,__LC_CREGS_SAVE_AREA-SAVE_AREA_BASE(%r1)
@@ -95,6 +95,15 @@ struct mem_chunk __initdata memory_chunk[MEMORY_CHUNKS];
 int __initdata memory_end_set;
 unsigned long __initdata memory_end;
 
+unsigned long VMALLOC_START;
+EXPORT_SYMBOL(VMALLOC_START);
+
+unsigned long VMALLOC_END;
+EXPORT_SYMBOL(VMALLOC_END);
+
+struct page *vmemmap;
+EXPORT_SYMBOL(vmemmap);
+
 /* An array with a pointer to the lowcore of every CPU. */
 struct _lowcore *lowcore_ptr[NR_CPUS];
 EXPORT_SYMBOL(lowcore_ptr);
@@ -278,6 +287,15 @@ static int __init early_parse_mem(char *p)
 }
 early_param("mem", early_parse_mem);
 
+static int __init parse_vmalloc(char *arg)
+{
+	if (!arg)
+		return -EINVAL;
+	VMALLOC_END = (memparse(arg, &arg) + PAGE_SIZE - 1) & PAGE_MASK;
+	return 0;
+}
+early_param("vmalloc", parse_vmalloc);
+
 unsigned int user_mode = HOME_SPACE_MODE;
 EXPORT_SYMBOL_GPL(user_mode);
@@ -383,7 +401,6 @@ setup_lowcore(void)
 		__ctl_set_bit(14, 29);
 	}
 #else
-	lc->cmf_hpp = -1ULL;
 	lc->vdso_per_cpu_data = (unsigned long) &lc->paste[0];
 #endif
 	lc->sync_enter_timer = S390_lowcore.sync_enter_timer;
@@ -479,8 +496,7 @@ EXPORT_SYMBOL_GPL(real_memory_size);
 
 static void __init setup_memory_end(void)
 {
-	unsigned long memory_size;
-	unsigned long max_mem;
+	unsigned long vmax, vmalloc_size, tmp;
 	int i;
@@ -490,12 +506,9 @@ static void __init setup_memory_end(void)
 		memory_end_set = 1;
 	}
 #endif
-	memory_size = 0;
+	real_memory_size = 0;
 	memory_end &= PAGE_MASK;
 
-	max_mem = memory_end ? min(VMEM_MAX_PHYS, memory_end) : VMEM_MAX_PHYS;
-	memory_end = min(max_mem, memory_end);
-
 	/*
 	 * Make sure all chunks are MAX_ORDER aligned so we don't need the
 	 * extra checks that HOLES_IN_ZONE would require.
@@ -515,23 +528,48 @@ static void __init setup_memory_end(void)
 			chunk->addr = start;
 			chunk->size = end - start;
 		}
+		real_memory_size = max(real_memory_size,
+				       chunk->addr + chunk->size);
 	}
 
+	/* Choose kernel address space layout: 2, 3, or 4 levels. */
+#ifdef CONFIG_64BIT
+	vmalloc_size = VMALLOC_END ?: 128UL << 30;
+	tmp = (memory_end ?: real_memory_size) / PAGE_SIZE;
+	tmp = tmp * (sizeof(struct page) + PAGE_SIZE) + vmalloc_size;
+	if (tmp <= (1UL << 42))
+		vmax = 1UL << 42;	/* 3-level kernel page table */
+	else
+		vmax = 1UL << 53;	/* 4-level kernel page table */
+#else
+	vmalloc_size = VMALLOC_END ?: 96UL << 20;
+	vmax = 1UL << 31;		/* 2-level kernel page table */
+#endif
+	/* vmalloc area is at the end of the kernel address space. */
+	VMALLOC_END = vmax;
+	VMALLOC_START = vmax - vmalloc_size;
+
+	/* Split remaining virtual space between 1:1 mapping & vmemmap array */
+	tmp = VMALLOC_START / (PAGE_SIZE + sizeof(struct page));
+	tmp = VMALLOC_START - tmp * sizeof(struct page);
+	tmp &= ~((vmax >> 11) - 1);	/* align to page table level */
+	tmp = min(tmp, 1UL << MAX_PHYSMEM_BITS);
+	vmemmap = (struct page *) tmp;
+
+	/* Take care that memory_end is set and <= vmemmap */
+	memory_end = min(memory_end ?: real_memory_size, tmp);
+
+	/* Fixup memory chunk array to fit into 0..memory_end */
 	for (i = 0; i < MEMORY_CHUNKS; i++) {
 		struct mem_chunk *chunk = &memory_chunk[i];
 
-		real_memory_size = max(real_memory_size,
-				       chunk->addr + chunk->size);
-		if (chunk->addr >= max_mem) {
+		if (chunk->addr >= memory_end) {
 			memset(chunk, 0, sizeof(*chunk));
 			continue;
 		}
-		if (chunk->addr + chunk->size > max_mem)
-			chunk->size = max_mem - chunk->addr;
-		memory_size = max(memory_size, chunk->addr + chunk->size);
+		if (chunk->addr + chunk->size > memory_end)
+			chunk->size = memory_end - chunk->addr;
 	}
-	if (!memory_end)
-		memory_end = memory_size;
 }
 
 void *restart_stack __attribute__((__section__(".data")));
@@ -655,7 +693,6 @@ static int __init verify_crash_base(unsigned long crash_base,
 static void __init reserve_kdump_bootmem(unsigned long addr, unsigned long size,
 					 int type)
 {
-
 	create_mem_hole(memory_chunk, addr, size, type);
 }
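To see why the new layout code usually lands on a 3-level table, a rough worked example (numbers hypothetical, assuming 4 KB pages and a 64-byte struct page): with 8 GB of memory and the default 128 GB vmalloc area, tmp = (8 GB / 4 KB) pages * (64 + 4096) bytes + 128 GB, roughly 137 GB, far below 1UL << 42 (4 TB), so vmax = 2^42. Only when the 1:1 mapping, the vmemmap array, and the vmalloc area together would exceed 4 TB does setup_memory_end() fall back to the 4-level layout with vmax = 2^53; a "vmalloc=" boot parameter feeds into the same calculation through VMALLOC_END.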
@@ -302,9 +302,13 @@ static int setup_frame(int sig, struct k_sigaction *ka,
 
 	/* We forgot to include these in the sigcontext.
 	   To avoid breaking binary compatibility, they are passed as args. */
-	regs->gprs[4] = current->thread.trap_no;
-	regs->gprs[5] = current->thread.prot_addr;
-	regs->gprs[6] = task_thread_info(current)->last_break;
+	if (sig == SIGSEGV || sig == SIGBUS || sig == SIGILL ||
+	    sig == SIGTRAP || sig == SIGFPE) {
+		/* set extra registers only for synchronous signals */
+		regs->gprs[4] = regs->int_code & 127;
+		regs->gprs[5] = regs->int_parm_long;
+		regs->gprs[6] = task_thread_info(current)->last_break;
+	}
 
 	/* Place signal number on stack to allow backtrace from handler.  */
 	if (__put_user(regs->gprs[2], (int __user *) &frame->signo))
@@ -434,13 +438,13 @@ void do_signal(struct pt_regs *regs)
 	 * call information.
 	 */
 	current_thread_info()->system_call =
-		test_thread_flag(TIF_SYSCALL) ? regs->svc_code : 0;
+		test_thread_flag(TIF_SYSCALL) ? regs->int_code : 0;
 	signr = get_signal_to_deliver(&info, &ka, regs, NULL);
 
 	if (signr > 0) {
 		/* Whee!  Actually deliver the signal.  */
 		if (current_thread_info()->system_call) {
-			regs->svc_code = current_thread_info()->system_call;
+			regs->int_code = current_thread_info()->system_call;
 			/* Check for system call restarting. */
 			switch (regs->gprs[2]) {
 			case -ERESTART_RESTARTBLOCK:
@@ -457,7 +461,7 @@ void do_signal(struct pt_regs *regs)
 				regs->gprs[2] = regs->orig_gpr2;
 				regs->psw.addr =
 					__rewind_psw(regs->psw,
-						     regs->svc_code >> 16);
+						     regs->int_code >> 16);
 				break;
 			}
 		}
@@ -488,11 +492,11 @@ void do_signal(struct pt_regs *regs)
 	/* No handlers present - check for system call restart */
 	clear_thread_flag(TIF_SYSCALL);
 	if (current_thread_info()->system_call) {
-		regs->svc_code = current_thread_info()->system_call;
+		regs->int_code = current_thread_info()->system_call;
 		switch (regs->gprs[2]) {
 		case -ERESTART_RESTARTBLOCK:
 			/* Restart with sys_restart_syscall */
-			regs->svc_code = __NR_restart_syscall;
+			regs->int_code = __NR_restart_syscall;
 		/* fallthrough */
 		case -ERESTARTNOHAND:
 		case -ERESTARTSYS:
...@@ -69,9 +69,7 @@ enum s390_cpu_state { ...@@ -69,9 +69,7 @@ enum s390_cpu_state {
}; };
DEFINE_MUTEX(smp_cpu_state_mutex); DEFINE_MUTEX(smp_cpu_state_mutex);
int smp_cpu_polarization[NR_CPUS];
static int smp_cpu_state[NR_CPUS]; static int smp_cpu_state[NR_CPUS];
static int cpu_management;
static DEFINE_PER_CPU(struct cpu, cpu_devices); static DEFINE_PER_CPU(struct cpu, cpu_devices);
...@@ -149,29 +147,59 @@ void smp_switch_to_ipl_cpu(void (*func)(void *), void *data) ...@@ -149,29 +147,59 @@ void smp_switch_to_ipl_cpu(void (*func)(void *), void *data)
sp -= sizeof(struct pt_regs); sp -= sizeof(struct pt_regs);
regs = (struct pt_regs *) sp; regs = (struct pt_regs *) sp;
memcpy(&regs->gprs, &current_lc->gpregs_save_area, sizeof(regs->gprs)); memcpy(&regs->gprs, &current_lc->gpregs_save_area, sizeof(regs->gprs));
regs->psw = lc->psw_save_area; regs->psw = current_lc->psw_save_area;
sp -= STACK_FRAME_OVERHEAD; sp -= STACK_FRAME_OVERHEAD;
sf = (struct stack_frame *) sp; sf = (struct stack_frame *) sp;
sf->back_chain = regs->gprs[15]; sf->back_chain = 0;
smp_switch_to_cpu(func, data, sp, stap(), __cpu_logical_map[0]); smp_switch_to_cpu(func, data, sp, stap(), __cpu_logical_map[0]);
} }
static void smp_stop_cpu(void)
{
while (sigp(smp_processor_id(), sigp_stop) == sigp_busy)
cpu_relax();
}
void smp_send_stop(void) void smp_send_stop(void)
{ {
int cpu, rc; cpumask_t cpumask;
int cpu;
u64 end;
/* Disable all interrupts/machine checks */ /* Disable all interrupts/machine checks */
__load_psw_mask(psw_kernel_bits | PSW_MASK_DAT); __load_psw_mask(psw_kernel_bits | PSW_MASK_DAT);
trace_hardirqs_off(); trace_hardirqs_off();
/* stop all processors */ cpumask_copy(&cpumask, cpu_online_mask);
for_each_online_cpu(cpu) { cpumask_clear_cpu(smp_processor_id(), &cpumask);
if (cpu == smp_processor_id())
continue;
do {
rc = sigp(cpu, sigp_stop);
} while (rc == sigp_busy);
if (oops_in_progress) {
/*
* Give the other cpus the opportunity to complete
* outstanding interrupts before stopping them.
*/
end = get_clock() + (1000000UL << 12);
for_each_cpu(cpu, &cpumask) {
set_bit(ec_stop_cpu, (unsigned long *)
&lowcore_ptr[cpu]->ext_call_fast);
while (sigp(cpu, sigp_emergency_signal) == sigp_busy &&
get_clock() < end)
cpu_relax();
}
while (get_clock() < end) {
for_each_cpu(cpu, &cpumask)
if (cpu_stopped(cpu))
cpumask_clear_cpu(cpu, &cpumask);
if (cpumask_empty(&cpumask))
break;
cpu_relax();
}
}
/* stop all processors */
for_each_cpu(cpu, &cpumask) {
while (sigp(cpu, sigp_stop) == sigp_busy)
cpu_relax();
while (!cpu_stopped(cpu)) while (!cpu_stopped(cpu))
cpu_relax(); cpu_relax();
} }
...@@ -187,7 +215,7 @@ static void do_ext_call_interrupt(unsigned int ext_int_code, ...@@ -187,7 +215,7 @@ static void do_ext_call_interrupt(unsigned int ext_int_code,
{ {
unsigned long bits; unsigned long bits;
if (ext_int_code == 0x1202) if ((ext_int_code & 0xffff) == 0x1202)
kstat_cpu(smp_processor_id()).irqs[EXTINT_EXC]++; kstat_cpu(smp_processor_id()).irqs[EXTINT_EXC]++;
else else
kstat_cpu(smp_processor_id()).irqs[EXTINT_EMS]++; kstat_cpu(smp_processor_id()).irqs[EXTINT_EMS]++;
...@@ -196,6 +224,9 @@ static void do_ext_call_interrupt(unsigned int ext_int_code, ...@@ -196,6 +224,9 @@ static void do_ext_call_interrupt(unsigned int ext_int_code,
*/ */
bits = xchg(&S390_lowcore.ext_call_fast, 0); bits = xchg(&S390_lowcore.ext_call_fast, 0);
if (test_bit(ec_stop_cpu, &bits))
smp_stop_cpu();
if (test_bit(ec_schedule, &bits)) if (test_bit(ec_schedule, &bits))
scheduler_ipi(); scheduler_ipi();
...@@ -204,6 +235,7 @@ static void do_ext_call_interrupt(unsigned int ext_int_code, ...@@ -204,6 +235,7 @@ static void do_ext_call_interrupt(unsigned int ext_int_code,
if (test_bit(ec_call_function_single, &bits)) if (test_bit(ec_call_function_single, &bits))
generic_smp_call_function_single_interrupt(); generic_smp_call_function_single_interrupt();
} }
/* /*
...@@ -369,7 +401,7 @@ static int smp_rescan_cpus_sigp(cpumask_t avail) ...@@ -369,7 +401,7 @@ static int smp_rescan_cpus_sigp(cpumask_t avail)
if (cpu_known(cpu_id)) if (cpu_known(cpu_id))
continue; continue;
__cpu_logical_map[logical_cpu] = cpu_id; __cpu_logical_map[logical_cpu] = cpu_id;
smp_cpu_polarization[logical_cpu] = POLARIZATION_UNKNWN; cpu_set_polarization(logical_cpu, POLARIZATION_UNKNOWN);
if (!cpu_stopped(logical_cpu)) if (!cpu_stopped(logical_cpu))
continue; continue;
set_cpu_present(logical_cpu, true); set_cpu_present(logical_cpu, true);
@@ -403,7 +435,7 @@ static int smp_rescan_cpus_sclp(cpumask_t avail)
 		if (cpu_known(cpu_id))
 			continue;
 		__cpu_logical_map[logical_cpu] = cpu_id;
-		smp_cpu_polarization[logical_cpu] = POLARIZATION_UNKNWN;
+		cpu_set_polarization(logical_cpu, POLARIZATION_UNKNOWN);
 		set_cpu_present(logical_cpu, true);
 		if (cpu >= info->configured)
 			smp_cpu_state[logical_cpu] = CPU_STATE_STANDBY;
@@ -656,7 +688,7 @@ int __cpuinit __cpu_up(unsigned int cpu)
 				     - sizeof(struct stack_frame));
 	memset(sf, 0, sizeof(struct stack_frame));
 	sf->gprs[9] = (unsigned long) sf;
-	cpu_lowcore->save_area[15] = (unsigned long) sf;
+	cpu_lowcore->gpregs_save_area[15] = (unsigned long) sf;
 	__ctl_store(cpu_lowcore->cregs_save_area, 0, 15);
 	atomic_inc(&init_mm.context.attach_count);
 	asm volatile(
@@ -806,7 +838,7 @@ void __init smp_prepare_boot_cpu(void)
 	S390_lowcore.percpu_offset = __per_cpu_offset[0];
 	current_set[0] = current;
 	smp_cpu_state[0] = CPU_STATE_CONFIGURED;
-	smp_cpu_polarization[0] = POLARIZATION_UNKNWN;
+	cpu_set_polarization(0, POLARIZATION_UNKNOWN);
 }
 
 void __init smp_cpus_done(unsigned int max_cpus)
@@ -868,7 +900,8 @@ static ssize_t cpu_configure_store(struct device *dev,
 		rc = sclp_cpu_deconfigure(__cpu_logical_map[cpu]);
 		if (!rc) {
 			smp_cpu_state[cpu] = CPU_STATE_STANDBY;
-			smp_cpu_polarization[cpu] = POLARIZATION_UNKNWN;
+			cpu_set_polarization(cpu, POLARIZATION_UNKNOWN);
+			topology_expect_change();
 		}
 	}
 	break;
@@ -877,7 +910,8 @@ static ssize_t cpu_configure_store(struct device *dev,
 		rc = sclp_cpu_configure(__cpu_logical_map[cpu]);
 		if (!rc) {
 			smp_cpu_state[cpu] = CPU_STATE_CONFIGURED;
-			smp_cpu_polarization[cpu] = POLARIZATION_UNKNWN;
+			cpu_set_polarization(cpu, POLARIZATION_UNKNOWN);
+			topology_expect_change();
 		}
 	}
 	break;
@@ -892,35 +926,6 @@ static ssize_t cpu_configure_store(struct device *dev,
 static DEVICE_ATTR(configure, 0644, cpu_configure_show, cpu_configure_store);
 #endif /* CONFIG_HOTPLUG_CPU */
 
-static ssize_t cpu_polarization_show(struct device *dev,
-				     struct device_attribute *attr, char *buf)
-{
-	int cpu = dev->id;
-	ssize_t count;
-
-	mutex_lock(&smp_cpu_state_mutex);
-	switch (smp_cpu_polarization[cpu]) {
-	case POLARIZATION_HRZ:
-		count = sprintf(buf, "horizontal\n");
-		break;
-	case POLARIZATION_VL:
-		count = sprintf(buf, "vertical:low\n");
-		break;
-	case POLARIZATION_VM:
-		count = sprintf(buf, "vertical:medium\n");
-		break;
-	case POLARIZATION_VH:
-		count = sprintf(buf, "vertical:high\n");
-		break;
-	default:
-		count = sprintf(buf, "unknown\n");
-		break;
-	}
-	mutex_unlock(&smp_cpu_state_mutex);
-	return count;
-}
-static DEVICE_ATTR(polarization, 0444, cpu_polarization_show, NULL);
-
 static ssize_t show_cpu_address(struct device *dev,
 				struct device_attribute *attr, char *buf)
 {
@@ -928,13 +933,11 @@ static ssize_t show_cpu_address(struct device *dev,
 }
 static DEVICE_ATTR(address, 0444, show_cpu_address, NULL);
 
 static struct attribute *cpu_common_attrs[] = {
 #ifdef CONFIG_HOTPLUG_CPU
 	&dev_attr_configure.attr,
 #endif
 	&dev_attr_address.attr,
-	&dev_attr_polarization.attr,
 	NULL,
 };
@@ -1055,11 +1058,20 @@ static int __devinit smp_add_present_cpu(int cpu)
 	rc = sysfs_create_group(&s->kobj, &cpu_common_attr_group);
 	if (rc)
 		goto out_cpu;
-	if (!cpu_online(cpu))
-		goto out;
-	rc = sysfs_create_group(&s->kobj, &cpu_online_attr_group);
-	if (!rc)
-		return 0;
+	if (cpu_online(cpu)) {
+		rc = sysfs_create_group(&s->kobj, &cpu_online_attr_group);
+		if (rc)
+			goto out_online;
+	}
+	rc = topology_cpu_init(c);
+	if (rc)
+		goto out_topology;
+	return 0;
+
+out_topology:
+	if (cpu_online(cpu))
+		sysfs_remove_group(&s->kobj, &cpu_online_attr_group);
+out_online:
 	sysfs_remove_group(&s->kobj, &cpu_common_attr_group);
 out_cpu:
 #ifdef CONFIG_HOTPLUG_CPU
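
The hunk above converts smp_add_present_cpu() to the usual kernel goto-unwind idiom: each failure label undoes, in reverse order, exactly the steps that succeeded before it. A stand-alone sketch of the idiom with hypothetical setup_*/undo_* helpers:

#include <stdio.h>

static int setup_a(void) { return 0; }
static int setup_b(void) { return 0; }
static int setup_c(void) { return -1; }	/* simulate a failure */
static void undo_a(void) { puts("undo a"); }
static void undo_b(void) { puts("undo b"); }

static int setup_all(void)
{
	int rc;

	rc = setup_a();
	if (rc)
		goto out;
	rc = setup_b();
	if (rc)
		goto out_a;
	rc = setup_c();
	if (rc)
		goto out_b;
	return 0;
out_b:
	undo_b();	/* unwind only what already succeeded */
out_a:
	undo_a();
out:
	return rc;
}

int main(void)
{
	return setup_all() ? 1 : 0;
}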
@@ -1111,61 +1123,16 @@ static ssize_t __ref rescan_store(struct device *dev,
 static DEVICE_ATTR(rescan, 0200, NULL, rescan_store);
 #endif /* CONFIG_HOTPLUG_CPU */
 
-static ssize_t dispatching_show(struct device *dev,
-				struct device_attribute *attr,
-				char *buf)
-{
-	ssize_t count;
-
-	mutex_lock(&smp_cpu_state_mutex);
-	count = sprintf(buf, "%d\n", cpu_management);
-	mutex_unlock(&smp_cpu_state_mutex);
-	return count;
-}
-
-static ssize_t dispatching_store(struct device *dev,
-				 struct device_attribute *attr,
-				 const char *buf,
-				 size_t count)
-{
-	int val, rc;
-	char delim;
-
-	if (sscanf(buf, "%d %c", &val, &delim) != 1)
-		return -EINVAL;
-	if (val != 0 && val != 1)
-		return -EINVAL;
-	rc = 0;
-	get_online_cpus();
-	mutex_lock(&smp_cpu_state_mutex);
-	if (cpu_management == val)
-		goto out;
-	rc = topology_set_cpu_management(val);
-	if (!rc)
-		cpu_management = val;
-out:
-	mutex_unlock(&smp_cpu_state_mutex);
-	put_online_cpus();
-	return rc ? rc : count;
-}
-static DEVICE_ATTR(dispatching, 0644, dispatching_show,
-		   dispatching_store);
-
-static int __init topology_init(void)
+static int __init s390_smp_init(void)
 {
-	int cpu;
-	int rc;
+	int cpu, rc;
 
 	register_cpu_notifier(&smp_cpu_nb);
 #ifdef CONFIG_HOTPLUG_CPU
 	rc = device_create_file(cpu_subsys.dev_root, &dev_attr_rescan);
 	if (rc)
 		return rc;
 #endif
-	rc = device_create_file(cpu_subsys.dev_root, &dev_attr_dispatching);
-	if (rc)
-		return rc;
 	for_each_present_cpu(cpu) {
 		rc = smp_add_present_cpu(cpu);
 		if (rc)
@@ -1173,4 +1140,4 @@ static int __init topology_init(void)
 	}
 	return 0;
 }
-subsys_initcall(topology_init);
+subsys_initcall(s390_smp_init);
@@ -60,74 +60,22 @@ SYSCALL_DEFINE1(mmap2, struct s390_mmap_arg_struct __user *, arg)
 }
 
 /*
- * sys_ipc() is the de-multiplexer for the SysV IPC calls..
- *
- * This is really horribly ugly.
+ * sys_ipc() is the de-multiplexer for the SysV IPC calls.
  */
 SYSCALL_DEFINE5(s390_ipc, uint, call, int, first, unsigned long, second,
 		unsigned long, third, void __user *, ptr)
 {
-	struct ipc_kludge tmp;
-	int ret;
-
-	switch (call) {
-	case SEMOP:
-		return sys_semtimedop(first, (struct sembuf __user *)ptr,
-				      (unsigned)second, NULL);
-	case SEMTIMEDOP:
-		return sys_semtimedop(first, (struct sembuf __user *)ptr,
-				      (unsigned)second,
-				      (const struct timespec __user *) third);
-	case SEMGET:
-		return sys_semget(first, (int)second, third);
-	case SEMCTL: {
-		union semun fourth;
-		if (!ptr)
-			return -EINVAL;
-		if (get_user(fourth.__pad, (void __user * __user *) ptr))
-			return -EFAULT;
-		return sys_semctl(first, (int)second, third, fourth);
-	}
-	case MSGSND:
-		return sys_msgsnd (first, (struct msgbuf __user *) ptr,
-				   (size_t)second, third);
-		break;
-	case MSGRCV:
-		if (!ptr)
-			return -EINVAL;
-		if (copy_from_user (&tmp, (struct ipc_kludge __user *) ptr,
-				    sizeof (struct ipc_kludge)))
-			return -EFAULT;
-		return sys_msgrcv (first, tmp.msgp,
-				   (size_t)second, tmp.msgtyp, third);
-	case MSGGET:
-		return sys_msgget((key_t)first, (int)second);
-	case MSGCTL:
-		return sys_msgctl(first, (int)second,
-				  (struct msqid_ds __user *)ptr);
-	case SHMAT: {
-		ulong raddr;
-		ret = do_shmat(first, (char __user *)ptr,
-			       (int)second, &raddr);
-		if (ret)
-			return ret;
-		return put_user (raddr, (ulong __user *) third);
-		break;
-	}
-	case SHMDT:
-		return sys_shmdt ((char __user *)ptr);
-	case SHMGET:
-		return sys_shmget(first, (size_t)second, third);
-	case SHMCTL:
-		return sys_shmctl(first, (int)second,
-				  (struct shmid_ds __user *) ptr);
-	default:
-		return -ENOSYS;
-	}
-
-	return -EINVAL;
+	if (call >> 16)
+		return -EINVAL;
+	/* The s390 sys_ipc variant has only five parameters instead of six
+	 * like the generic variant. The only difference is the handling of
+	 * the SEMTIMEDOP subcall where on s390 the third parameter is used
+	 * as a pointer to a struct timespec where the generic variant uses
+	 * the fifth parameter.
+	 * Therefore we can call the generic variant by simply passing the
+	 * third parameter also as fifth parameter.
+	 */
+	return sys_ipc(call, first, second, third, ptr, third);
 }
 
 #ifdef CONFIG_64BIT
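
The new comment is the whole trick: of all the IPC subcalls only SEMTIMEDOP consumes the generic demultiplexer's fifth parameter, and on s390 that value (the timespec pointer) arrives in the third. A toy model of the argument routing (hypothetical helpers; only the SEMTIMEDOP constant is taken from the shared IPC call numbering):

#include <stdio.h>

#define SEMTIMEDOP 4

/* Toy model of the generic demultiplexer: only SEMTIMEDOP looks at
 * "fifth"; every other subcall ignores it. */
static long generic_ipc(unsigned int call, long third, long fifth)
{
	if (call == SEMTIMEDOP)
		return fifth;	/* generic code takes the timeout from here */
	return third;
}

/* Toy model of the s390 entry point: five arguments only, so the
 * timeout travels in "third" and is simply forwarded as "fifth" too. */
static long s390_ipc(unsigned int call, long third)
{
	return generic_ipc(call, third, third);
}

int main(void)
{
	printf("%ld\n", s390_ipc(SEMTIMEDOP, 42));	/* prints 42 */
	return 0;
}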
@@ -93,18 +93,22 @@ static unsigned long setup_zero_pages(void)
 void __init paging_init(void)
 {
 	unsigned long max_zone_pfns[MAX_NR_ZONES];
-	unsigned long pgd_type;
+	unsigned long pgd_type, asce_bits;
 
 	init_mm.pgd = swapper_pg_dir;
-	S390_lowcore.kernel_asce = __pa(init_mm.pgd) & PAGE_MASK;
 #ifdef CONFIG_64BIT
-	/* A three level page table (4TB) is enough for the kernel space. */
-	S390_lowcore.kernel_asce |= _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
-	pgd_type = _REGION3_ENTRY_EMPTY;
+	if (VMALLOC_END > (1UL << 42)) {
+		asce_bits = _ASCE_TYPE_REGION2 | _ASCE_TABLE_LENGTH;
+		pgd_type = _REGION2_ENTRY_EMPTY;
+	} else {
+		asce_bits = _ASCE_TYPE_REGION3 | _ASCE_TABLE_LENGTH;
+		pgd_type = _REGION3_ENTRY_EMPTY;
+	}
 #else
-	S390_lowcore.kernel_asce |= _ASCE_TABLE_LENGTH;
+	asce_bits = _ASCE_TABLE_LENGTH;
 	pgd_type = _SEGMENT_ENTRY_EMPTY;
 #endif
+	S390_lowcore.kernel_asce = (__pa(init_mm.pgd) & PAGE_MASK) | asce_bits;
 	clear_table((unsigned long *) init_mm.pgd, pgd_type,
 		    sizeof(unsigned long)*2048);
 	vmem_map_init();
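
The 1UL << 42 cutoff falls out of the table geometry: every s390x translation table has 2048 entries and a segment entry maps 1 MB, so a segment table covers 2 GB, a region-third table 4 TB (2^42), and a region-second table 8 PB. If the vmalloc area ends above 4 TB, the extra translation level is needed. A quick stand-alone check of that arithmetic (assumes a 64-bit unsigned long):

#include <stdio.h>

int main(void)
{
	unsigned long entries = 2048;			/* entries per table */
	unsigned long seg_entry = 1UL << 20;		/* segment entry: 1 MB */
	unsigned long seg_table = entries * seg_entry;	/* 2 GB */
	unsigned long reg3_table = entries * seg_table;	/* 4 TB */
	unsigned long reg2_table = entries * reg3_table;/* 8 PB */

	printf("region-third covers 2^%d bytes\n",
	       reg3_table == (1UL << 42) ? 42 : -1);
	printf("region-second covers %lu TB\n", reg2_table >> 40);
	return 0;
}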
@@ -1718,7 +1718,7 @@ dasd_3990_erp_action_1B_32(struct dasd_ccw_req * default_erp, char *sense)
 	erp->startdev = device;
 	erp->memdev = device;
 	erp->magic = default_erp->magic;
-	erp->expires = 0;
+	erp->expires = default_erp->expires;
 	erp->retries = 256;
 	erp->buildclk = get_clock();
 	erp->status = DASD_CQR_FILLED;
@@ -2363,7 +2363,7 @@ static struct dasd_ccw_req *dasd_3990_erp_add_erp(struct dasd_ccw_req *cqr)
 	erp->memdev = device;
 	erp->block = cqr->block;
 	erp->magic = cqr->magic;
-	erp->expires = 0;
+	erp->expires = cqr->expires;
 	erp->retries = 256;
 	erp->buildclk = get_clock();
 	erp->status = DASD_CQR_FILLED;
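
Both hunks fix the same problem: a freshly built ERP (error recovery) request used to get expires = 0, i.e. no timeout at all, so a recovery request stuck on a broken path could wait forever; it now inherits the originating request's expiration. A minimal sketch of the pattern with a hypothetical request type:

#include <stdio.h>

/* Hypothetical, trimmed-down request type. */
struct req {
	unsigned long expires;	/* timeout; 0 means "never expires" */
	int retries;
};

/* Clone a request for error recovery, inheriting the timeout instead
 * of clearing it (the bug the hunks above fix). */
static void init_erp(struct req *erp, const struct req *orig)
{
	erp->expires = orig->expires;	/* was: erp->expires = 0 */
	erp->retries = 256;
}

int main(void)
{
	struct req orig = { .expires = 30 * 100, .retries = 5 };
	struct req erp;

	init_erp(&erp, &orig);
	printf("erp expires: %lu\n", erp.expires);
	return 0;
}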