1. 18 Oct, 2017 5 commits
  2. 16 Oct, 2017 6 commits
  3. 12 Oct, 2017 1 commit
  4. 09 Oct, 2017 8 commits
  5. 04 Oct, 2017 1 commit
    • s390: use generic rwsem implementation · 91a1fad7
      Heiko Carstens authored
      We never optimized our rwsem inline assemblies to make use of the new
      atomic instructions. The generic rwsem implementation implicitly makes
      use of the new instructions, since it implements the required rwsem
      primitives with atomic operations, which we did optimize.
      
      However, even when compiling for old architectures the generic variant
      still generates better code. So it's time to simply remove our old
      code and switch to the generic implementation.
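      To see why, here is a minimal sketch of the shape of the generic
      fast path (simplified from the asm-generic rwsem code of that era,
      not the exact kernel source): the read-side acquisition is a single
      atomic operation, which maps to one of the new interlocked-access
      instructions on s390.
      
      	/* Sketch: generic rwsem read fast path, one atomic op. */
      	static inline void __down_read(struct rw_semaphore *sem)
      	{
      		/* count > 0 after the increment means no writer. */
      		if (unlikely(atomic_long_inc_return_acquire(&sem->count) <= 0))
      			rwsem_down_read_failed(sem);	/* slow path */
      	}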
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  6. 29 Sep, 2017 11 commits
  7. 28 Sep, 2017 8 commits
    • s390/rwlock: introduce rwlock wait queueing · eb3b7b84
      Martin Schwidefsky authored
      Like the common queued rwlock code the s390 implementation uses the
      queued spinlock code on a spinlock_t embedded in the rwlock_t to achieve
      the queueing. The encoding of the rwlock_t differs, though: the counter
      field in the rwlock_t is split into two parts. The upper two bytes hold
      the write bit and the write wait counter, the lower two bytes hold the
      read counter.
      
      The arch_read_lock operation works exactly like the common qrwlock,
      but the enqueue operation for a writer follows a different logic. After
      a failed inline attempt to take the rwlock for writing, the writer first
      increases the write wait counter, acquires the wait spin_lock for the
      queueing, and then loops until there are no readers and the write bit
      is zero. Without the write wait counter a CPU that just released the
      rwlock could immediately reacquire it in the inline code, bypassing all
      outstanding read and write waiters. For s390 this would cause massive
      imbalances in favour of writers on a contended rwlock.
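      A hedged sketch of that writer slow path, reusing the hypothetical
      masks from above (the real kernel code differs in detail):
      
      	/* Sketch: writer enqueue as described in the text. */
      	static void arch_write_lock_wait(arch_rwlock_t *rw)
      	{
      		int old;
      
      		/* Announce the waiter; inline fast paths now fail. */
      		__atomic_add(RW_WAIT_ONE, &rw->cnts);
      		/* Queue behind the embedded wait spinlock. */
      		arch_spin_lock(&rw->wait);
      		/* Loop until no readers and the write bit is zero, then
      		 * take the write bit and drop our wait count in one go. */
      		do {
      			old = READ_ONCE(rw->cnts);
      		} while ((old & (RW_WRITE_BIT | RW_READ_MASK)) != 0 ||
      			 !__atomic_cmpxchg_bool(&rw->cnts, old,
      				(old - RW_WAIT_ONE) | RW_WRITE_BIT));
      		arch_spin_unlock(&rw->wait);
      	}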
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • s390/spinlock: introduce spinlock wait queueing · b96f7d88
      Martin Schwidefsky authored
      The queued spinlock code for s390 follows the principles of the common
      code qspinlock implementation but with a few notable differences.
      
      The format of the spinlock_t locking word differs: s390 needs to store
      the logical CPU number of the lock holder in the spinlock_t to be able
      to use the diagnose 9c directed yield hypervisor call.
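      A brief sketch of what this enables (the helper name is illustrative;
      smp_yield_cpu() is the s390 wrapper around diagnose 0x9c):
      
      	/* Sketch: a spinning waiter can yield to the lock holder
      	 * because the lock word encodes the holder's CPU + 1. */
      	static inline void yield_to_holder(arch_spinlock_t *lp)
      	{
      		int owner = READ_ONCE(lp->lock);	/* cpu + 1, 0 == free */
      
      		if (owner)
      			smp_yield_cpu(owner - 1);	/* directed yield */
      	}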
      
      The inline code sequences for spin_lock and spin_unlock are nice and
      short. The inline portion of a spin_lock now typically looks like this:
      
      	lhi	%r0,0			# 0 indicates an empty lock
      	l	%r1,0x3a0		# CPU number + 1 from lowcore
      	cs	%r0,%r1,<some_lock>	# lock operation
      	jnz	call_wait		# on failure call wait function
      locked:
      	...
      call_wait:
      	la	%r2,<some_lock>
      	brasl	%r14,arch_spin_lock_wait
      	j	locked
      
      A spin_unlock is as simple as before:
      
      	lhi	%r0,0
      	sth	%r0,2(%r2)		# unlock operation
      
      Once a CPU has queued itself it may not re-enable interrupts, which
      rules out the interrupt-enabled waiting of the arch_spin_lock_flags()
      variant. The separate arch_spin_lock_wait_flags wait function is
      removed.
      
      To improve performance the code implements opportunistic lock stealing.
      If the wait function finds a spinlock_t that indicates that the lock is
      free but there are queued waiters, the CPU may steal the lock up to three
      times without queueing itself. Each steal updates a steal counter in the
      lock word, which limits the number of steals to three. The counter is
      reset when the CPU next in the queue successfully takes the lock.
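      A hedged sketch of the stealing rule (bit layout and names are
      illustrative, not the exact lock word format):
      
      	#define SPIN_OWNER_MASK	0x0000ffff	/* cpu + 1 of the holder */
      	#define SPIN_STEAL_ONE	0x00010000	/* one steal */
      	#define SPIN_STEAL_MASK	0x00030000	/* saturates at 3 steals */
      
      	/* Try to steal a free-but-queued lock; the queued CPU that
      	 * eventually wins stores a fresh lock word, which resets the
      	 * steal counter. */
      	static inline int try_steal_lock(arch_spinlock_t *lp, int lockval)
      	{
      		int old = READ_ONCE(lp->lock);
      
      		if (old & SPIN_OWNER_MASK)
      			return 0;	/* lock is held */
      		if ((old & SPIN_STEAL_MASK) == SPIN_STEAL_MASK)
      			return 0;	/* already stolen three times */
      		return __atomic_cmpxchg_bool(&lp->lock, old,
      					     (old + SPIN_STEAL_ONE) | lockval);
      	}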
      
      While the queued spinlocks improve performance in a system with dedicated
      CPUs, in a virtualized environment with continuously overcommitted CPUs
      the queued spinlocks can have a negative effect on performance. This
      is because a queued CPU that is preempted by the hypervisor will, at
      some point, block the queue even without holding the lock. With the
      classic spinlock it does not matter if a CPU that is merely waiting
      for the lock is preempted. Therefore use the queued spinlock code only
      if the system runs with dedicated CPUs and fall back to classic
      spinlocks when running with shared CPUs.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • s390/spinlock: use the cpu number +1 as spinlock value · 81533803
      Martin Schwidefsky authored
      The queued spinlock code will come out simpler if the encoding of
      the CPU that holds the spinlock is (cpu+1) instead of (~cpu).
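      For illustration (the helper name is hypothetical):
      
      	/* Old encoding: lock word = ~cpu; new encoding: lock word =
      	 * cpu + 1. With cpu + 1 a lock word of 0 means "free" and the
      	 * owner falls out of a simple subtraction. */
      	static inline int arch_spin_owner(int lock_word)
      	{
      		return lock_word - 1;	/* -1 when free, cpu otherwise */
      	}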
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • s390/topology: add detection of dedicated vs shared CPUs · 1887aa07
      Martin Schwidefsky authored
      The topology information returned by STSI 15.x.x contains a flag that
      indicates whether the CPUs of a topology list are dedicated or shared.
      Make this information available if the machine provides topology
      information.
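      A hedged sketch of the consumer side (the entry field and cpumask
      names are hypothetical; STSI 15.x.x defines the actual format):
      
      	/* Sketch: fold the per-entry dedication flag from the
      	 * topology list into a cpumask for later queries. */
      	if (tl_entry->dedicated)
      		cpumask_set_cpu(cpu, &cpus_dedicated);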
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • s390/sclp: Use setup_timer and mod_timer · 8179c7ba
      Himanshu Jha authored
      Use the setup_timer and mod_timer APIs instead of open-coded structure
      assignments.
      
      This is done using Coccinelle; the semantic patch used is as follows:
      
      @@
      expression x,y,z,a,b;
      @@
      
      -init_timer (&x);
      +setup_timer (&x, y, z);
      +mod_timer (&a, b);
      -x.function = y;
      -x.data = z;
      -x.expires = b;
      -add_timer(&a);
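      Applied to a typical timer user, the transformation looks like this
      (an illustrative example, not the actual driver code):
      
      	/* before */
      	init_timer(&req_timer);
      	req_timer.function = req_timeout;
      	req_timer.data = (unsigned long) data;
      	req_timer.expires = jiffies + HZ;
      	add_timer(&req_timer);
      
      	/* after */
      	setup_timer(&req_timer, req_timeout, (unsigned long) data);
      	mod_timer(&req_timer, jiffies + HZ);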
      Signed-off-by: Himanshu Jha <himanshujha199640@gmail.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • s390/cpumf: remove superfluous nr_cpumask_bits check · 19220999
      Heiko Carstens authored
      Paul Burton reported that the nr_cpumask_bits check within
      cpumsf_pmu_event_init() is not necessary: perf_event_alloc() already
      performs an equivalent check earlier. Therefore remove the check.
      Reported-by: Paul Burton <paul.burton@imgtec.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • s390/zcrypt: Explicitly check input data length. · 76b31381
      Harald Freudenberger authored
      The function that prepares MEX type 50 AP messages did not explicitly
      check the input data length for data larger than 512 bytes. Instead it
      relied on the boundary check done in the ioctl function to reject
      requests with invalid data length values. However, screening just the
      function code may give the illusion that there is a gap which could be
      exploited by userspace for buffer overwrite attacks, so add an explicit
      check.
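      A sketch of the added guard (abbreviated; inputdatalength is the
      field from the ica_rsa_modexpo ioctl structure):
      
      	/* MEX type 50 messages hold at most 512 bytes of data. */
      	if (mex->inputdatalength > 512)
      		return -EINVAL;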
      Signed-off-by: Harald Freudenberger <freude@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
    • s390/cmf: use tod_to_ns() · 08c6df97
      Sebastian Ott authored
      Instead of open coding TOD clock to nanosecond conversions, use the
      tod_to_ns() timex helper.
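      For reference, the TOD clock ticks in units of 2**-12 microseconds,
      so ns = tod * 1000 / 4096 = tod * 125 / 512. A sketch of how the
      helper can compute this without overflowing 64 bits:
      
      	static inline unsigned long long tod_to_ns(unsigned long long todval)
      	{
      		/* (todval * 125) >> 9, split to keep the multiply small */
      		return ((todval >> 9) * 125) + (((todval & 0x1ff) * 125) >> 9);
      	}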
      Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
      Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>