1. 03 Oct, 2018 2 commits
  2. 02 Oct, 2018 7 commits
    • Documentation/lockstat: Fix trivial typo · bccb484b
      Andrew Murray authored
      Fix incorrect line number in example output
      Signed-off-by: Andrew Murray <andrew.murray@arm.com>
      Cc: Jiri Kosina <trivial@kernel.org>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-doc@vger.kernel.org
      Link: http://lkml.kernel.org/r/1538391663-54524-1-git-send-email-andrew.murray@arm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      bccb484b
    • locking/memory-barriers: Replace smp_cond_acquire() with smp_cond_load_acquire() · 2f359c7e
      Andrea Parri authored
      Amend the changes in commit:
      
        1f03e8d2 ("locking/barriers: Replace smp_cond_acquire() with smp_cond_load_acquire()")
      
      ... by updating the documentation accordingly.
      
      Also remove some obsolete information related to the implementation.
      Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Alan Stern <stern@rowland.harvard.edu>
      Cc: Akira Yokosawa <akiyks@gmail.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Boqun Feng <boqun.feng@gmail.com>
      Cc: Daniel Lustig <dlustig@nvidia.com>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Jade Alglave <j.alglave@ucl.ac.uk>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Luc Maranget <luc.maranget@inria.fr>
      Cc: Nicholas Piggin <npiggin@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-arch@vger.kernel.org
      Cc: parri.andrea@gmail.com
      Link: http://lkml.kernel.org/r/20180926182920.27644-5-paulmck@linux.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2f359c7e
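      A minimal usage sketch of the renamed primitive (the flag/data/r1
      variables are hypothetical, not taken from the patch):
      smp_cond_load_acquire(ptr, cond) spins until cond, written in terms
      of the magic identifier VAL (the current value of *ptr), holds, and
      then provides ACQUIRE ordering.

      	/* Writer: publish the payload, then set the flag with release
      	 * semantics so the payload is visible before the flag. */
      	WRITE_ONCE(data, 42);
      	smp_store_release(&flag, 1);

      	/* Reader: spin until the flag is set; the acquire semantics
      	 * guarantee the payload read below sees the writer's store. */
      	smp_cond_load_acquire(&flag, VAL != 0);
      	r1 = READ_ONCE(data);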
    • tools/memory-model: Add more LKMM limitations · d8fa25c4
      Paul E. McKenney authored
      This commit adds more detail about compiler optimizations and
      not-yet-modeled Linux-kernel APIs.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: akiyks@gmail.com
      Cc: boqun.feng@gmail.com
      Cc: dhowells@redhat.com
      Cc: j.alglave@ucl.ac.uk
      Cc: linux-arch@vger.kernel.org
      Cc: luc.maranget@inria.fr
      Cc: npiggin@gmail.com
      Cc: parri.andrea@gmail.com
      Cc: stern@rowland.harvard.edu
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/20180926182920.27644-4-paulmck@linux.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d8fa25c4
    • tools/memory-model: Fix a README typo · 3d2046a6
      SeongJae Park authored
      This commit fixes a duplicate-"the" typo in README.
      Signed-off-by: SeongJae Park <sj38.park@gmail.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Alan Stern <stern@rowland.harvard.edu>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: akiyks@gmail.com
      Cc: boqun.feng@gmail.com
      Cc: dhowells@redhat.com
      Cc: j.alglave@ucl.ac.uk
      Cc: linux-arch@vger.kernel.org
      Cc: luc.maranget@inria.fr
      Cc: npiggin@gmail.com
      Cc: parri.andrea@gmail.com
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/20180926182920.27644-3-paulmck@linux.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3d2046a6
    • tools/memory-model: Add extra ordering for locks and remove it for ordinary release/acquire · 6e89e831
      Alan Stern authored
      More than one kernel developer has expressed the opinion that the LKMM
      should enforce ordering of writes by locking.  In other words, given
      the following code:
      
      	WRITE_ONCE(x, 1);
      	spin_unlock(&s);
      	spin_lock(&s);
      	WRITE_ONCE(y, 1);
      
      the stores to x and y should be propagated in order to all other CPUs,
      even though those other CPUs might not access the lock s.  In terms of
      the memory model, this means expanding the cumul-fence relation.
      
      Locks should also provide read-read (and read-write) ordering in a
      similar way.  Given:
      
      	READ_ONCE(x);
      	spin_unlock(&s);
      	spin_lock(&s);
      	READ_ONCE(y);		// or WRITE_ONCE(y, 1);
      
      the load of x should be executed before the load of (or store to) y.
      The LKMM already provides this ordering, but it provides it even in
      the case where the two accesses are separated by a release/acquire
      pair of fences rather than unlock/lock.  This would prevent
      architectures from using weakly ordered implementations of release and
      acquire, which seems like an unnecessary restriction.  The patch
      therefore removes the ordering requirement from the LKMM for that
      case.
      
      There are several arguments both for and against this change.  Let us
      refer to these enhanced ordering properties by saying that the LKMM
      would require locks to be RCtso (a bit of a misnomer, but analogous to
      RCpc and RCsc) and it would require ordinary acquire/release only to
      be RCpc.  (Note: In the following, the phrase "all supported
      architectures" is meant not to include RISC-V.  Although RISC-V is
      indeed supported by the kernel, the implementation is still somewhat
      in a state of flux and therefore statements about it would be
      premature.)
      
      Pros:
      
      	The kernel already provides RCtso ordering for locks on all
      	supported architectures, even though this is not stated
      	explicitly anywhere.  Therefore the LKMM should formalize it.
      
      	In theory, guaranteeing RCtso ordering would reduce the need
      	for additional barrier-like constructs meant to increase the
      	ordering strength of locks.
      
      	Will Deacon and Peter Zijlstra are strongly in favor of
      	formalizing the RCtso requirement.  Linus Torvalds and Will
      	would like to go even further, requiring locks to have RCsc
      	behavior (ordering preceding writes against later reads), but
      	they recognize that this would incur a noticeable performance
      	degradation on the POWER architecture.  Linus also points out
      	that people have made the mistake, in the past, of assuming
      	that locking has stronger ordering properties than is
      	currently guaranteed, and this change would reduce the
      	likelihood of such mistakes.
      
      	Not requiring ordinary acquire/release to be any stronger than
      	RCpc may prove advantageous for future architectures, allowing
      	them to implement smp_load_acquire() and smp_store_release()
      	with more efficient machine instructions than would be
      	possible if the operations had to be RCtso.  Will and Linus
      	approve this rationale, hypothetical though it is at the
      	moment (it may end up affecting the RISC-V implementation).
      	The same argument may or may not apply to RMW-acquire/release;
      	see also the second Con entry below.
      
      	Linus feels that locks should be easy for people to use
      	without worrying about memory consistency issues, since they
      	are so pervasive in the kernel, whereas acquire/release is
      	much more of an "experts only" tool.  Requiring locks to be
      	RCtso is a step in this direction.
      
      Cons:
      
      	Andrea Parri and Luc Maranget think that locks should have the
      	same ordering properties as ordinary acquire/release (indeed,
      	Luc points out that the names "acquire" and "release" derive
      	from the usage of locks).  Andrea points out that having
      	different ordering properties for different forms of acquires
      	and releases is not only unnecessary, it would also be
      	confusing and unmaintainable.
      
      	Locks are constructed from lower-level primitives, typically
      	RMW-acquire (for locking) and ordinary release (for unlock).
      	It is illogical to require stronger ordering properties from
      	the high-level operations than from the low-level operations
      	they comprise.  Thus, this change would make
      
      		while (cmpxchg_acquire(&s, 0, 1) != 0)
      			cpu_relax();
      
      	an incorrect implementation of spin_lock(&s) as far as the
      	LKMM is concerned.  In theory this weakness can be ameliorated
      	by changing the LKMM even further, requiring
      	RMW-acquire/release also to be RCtso (which it already is on
      	all supported architectures).
      
      	As far as I know, nobody has singled out any examples of code
      	in the kernel that actually relies on locks being RCtso.
      	(People mumble about RCU and the scheduler, but nobody has
      	pointed to any actual code.  If there are any real cases,
      	their number is likely quite small.)  If RCtso ordering is not
      	needed, why require it?
      
      	A handful of locking constructs (qspinlocks, qrwlocks, and
      	mcs_spinlocks) are built on top of smp_cond_load_acquire()
      	instead of an RMW-acquire instruction.  It currently provides
      	only the ordinary acquire semantics, not the stronger ordering
      	this patch would require of locks.  In theory this could be
      	ameliorated by requiring smp_cond_load_acquire() in
      	combination with ordinary release also to be RCtso (which is
      	currently true on all supported architectures).
      
      	On future weakly ordered architectures, people may be able to
      	implement locks in a non-RCtso fashion with significant
      	performance improvement.  Meeting the RCtso requirement would
      	necessarily add run-time overhead.
      
      Overall, the technical aspects of these arguments seem relatively
      minor, and it appears mostly to boil down to a matter of opinion.
      Since the opinions of senior kernel maintainers such as Linus,
      Peter, and Will carry more weight than those of Luc and Andrea, this
      patch changes the model in accordance with the maintainers' wishes.
      Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: akiyks@gmail.com
      Cc: boqun.feng@gmail.com
      Cc: dhowells@redhat.com
      Cc: j.alglave@ucl.ac.uk
      Cc: linux-arch@vger.kernel.org
      Cc: luc.maranget@inria.fr
      Cc: npiggin@gmail.com
      Cc: parri.andrea@gmail.com
      Link: http://lkml.kernel.org/r/20180926182920.27644-2-paulmck@linux.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6e89e831
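      A litmus-style sketch of the write-write case discussed in the commit
      above (illustrative only, not a verbatim test from tools/memory-model).
      With locks made RCtso, the "exists" outcome below becomes forbidden
      even though P1 never touches the lock s:

      	P0(int *x, int *y, spinlock_t *s)
      	{
      		spin_lock(s);
      		WRITE_ONCE(*x, 1);
      		spin_unlock(s);
      		spin_lock(s);
      		WRITE_ONCE(*y, 1);
      		spin_unlock(s);
      	}

      	P1(int *x, int *y)
      	{
      		int r0;
      		int r1;

      		r0 = READ_ONCE(*y);
      		smp_rmb();
      		r1 = READ_ONCE(*x);
      	}

      	exists (1:r0=1 /\ 1:r1=0)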
    • tools/memory-model: Add litmus-test naming scheme · c4f790f2
      Paul E. McKenney authored
      This commit documents the scheme used to generate the names for the
      litmus tests.
      
      [ paulmck: Apply feedback from Andrea Parri and Will Deacon. ]
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: akiyks@gmail.com
      Cc: boqun.feng@gmail.com
      Cc: dhowells@redhat.com
      Cc: j.alglave@ucl.ac.uk
      Cc: linux-arch@vger.kernel.org
      Cc: luc.maranget@inria.fr
      Cc: npiggin@gmail.com
      Cc: parri.andrea@gmail.com
      Cc: stern@rowland.harvard.edu
      Link: http://lkml.kernel.org/r/20180926182920.27644-1-paulmck@linux.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c4f790f2
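      As an illustration of the scheme (the authoritative wording lives in
      tools/memory-model/litmus-tests/README), a name such as
      MP+pooncerelease+poacquireonce.litmus decodes as: an "MP"
      (message-passing) pattern, a first process whose WRITE_ONCE() is
      followed in program order ("po") by an smp_store_release(), and a
      second process whose smp_load_acquire() is followed in program order
      by a READ_ONCE(). A sketch of such a test's body (it may differ
      slightly from the actual file):

      	P0(int *x, int *y)
      	{
      		WRITE_ONCE(*x, 1);
      		smp_store_release(y, 1);
      	}

      	P1(int *x, int *y)
      	{
      		int r0;
      		int r1;

      		r0 = smp_load_acquire(y);
      		r1 = READ_ONCE(*x);
      	}

      	exists (1:r0=1 /\ 1:r1=0)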
    • locking/spinlocks: Remove an instruction from spin and write locks · 27df8968
      Matthew Wilcox authored
      Both spin locks and write locks currently do:
      
       f0 0f b1 17             lock cmpxchg %edx,(%rdi)
       85 c0                   test   %eax,%eax
       75 05                   jne    [slowpath]
      
      This 'test' insn is superfluous; the cmpxchg insn sets the Z flag
      appropriately.  Peter pointed out that using atomic_try_cmpxchg_acquire()
      will let the compiler know this is true.  Comparing before/after
      disassemblies shows that the only effect is to remove this insn.
      
      Take this opportunity to make the spin & write lock code resemble each
      other more closely and have similar likely() hints.
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Matthew Wilcox <willy@infradead.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Waiman Long <longman@redhat.com>
      Link: http://lkml.kernel.org/r/20180820162639.GC25153@bombadil.infradead.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      27df8968
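      A sketch of the resulting spin-lock fastpath (close to the post-patch
      queued_spin_lock(), lightly abbreviated): atomic_try_cmpxchg_acquire()
      returns a bool derived from the cmpxchg itself, so the compiler can
      branch on the Z flag directly instead of emitting a separate 'test':

      	static __always_inline void queued_spin_lock(struct qspinlock *lock)
      	{
      		int val = 0;

      		/* Uncontended case: 0 -> locked, with acquire ordering. */
      		if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))
      			return;

      		/* 'val' now holds the lock value that was observed. */
      		queued_spin_lock_slowpath(lock, val);
      	}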
  3. 11 Sep, 2018 2 commits
  4. 10 Sep, 2018 8 commits
    • locking/rwsem: Make owner store task pointer of last owning reader · 925b9cd1
      Waiman Long authored
      Currently, when a reader acquires a lock, it only sets the
      RWSEM_READER_OWNED bit in the owner field. The other bits are simply
      not used. When debugging hanging cases involving rwsems and readers,
      the owner value does not provide much useful information at all.
      
      This patch modifies the current behavior to always store the task_struct
      pointer of the last rwsem-acquiring reader in a reader-owned rwsem. This
      may be useful in debugging rwsem hanging cases especially if only one
      reader is involved. However, the task in the owner field may not be the
      real owner or one of the real owners at all when the owner value is
      examined, for example, in a crash dump. So it is just an additional
      hint about the past history.
      
      If CONFIG_DEBUG_RWSEMS=y is enabled, the owner field will be checked at
      unlock time too to make sure the task pointer value is valid. That does
      have a slight performance cost and so is only enabled as part of that
      debug option.
      
      From the performance point of view, it is expected that the changes
      shouldn't have any noticeable performance impact. A rwsem microbenchmark
      (with 48 worker threads and 1:1 reader/writer ratio) was run on a
      2-socket 24-core 48-thread Haswell system.  The locking rates on a
      4.19-rc1 based kernel were as follows:
      
        1) Unpatched kernel:				543.3 kops/s
        2) Patched kernel:				549.2 kops/s
        3) Patched kernel (CONFIG_DEBUG_RWSEMS on):	546.6 kops/s
      
      There was actually a slight increase in performance (1.1%) in this
      particular case. Maybe it was caused by the elimination of a branch or
      just testing noise. Turning on the CONFIG_DEBUG_RWSEMS option also
      had less than the expected impact on performance.
      
      The least significant 2 bits of the owner value are now used to designate
      that the rwsem is reader-owned and that the stored owner is anonymous.
      Signed-off-by: Waiman Long <longman@redhat.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Davidlohr Bueso <dave@stgolabs.net>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/1536265114-10842-1-git-send-email-longman@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      925b9cd1
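      A hedged sketch of the idea (the bit values and the helper below are
      illustrative; the real definitions live in kernel/locking/rwsem.h and
      may be arranged differently):

      	#define RWSEM_READER_OWNED	(1UL << 0)	/* owner field is reader-owned */
      	#define RWSEM_ANONYMOUSLY_OWNED	(1UL << 1)	/* stored task is only a hint  */

      	static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
      	{
      		/* Record the last task to acquire the rwsem for read; the
      		 * low bits mark the value as a possibly stale reader hint. */
      		unsigned long val = (unsigned long)current | RWSEM_READER_OWNED
      							   | RWSEM_ANONYMOUSLY_OWNED;

      		WRITE_ONCE(sem->owner, (struct task_struct *)val);
      	}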
    • locking/rwsem: Exit read lock slowpath if queue empty & no writer · 4b486b53
      Waiman Long authored
      It was discovered that a constant stream of readers with occasional
      writers pounding on a rwsem may cause many of the readers to enter the
      slowpath unnecessarily thus increasing latency and lowering performance.
      
      In the current code, a reader entering the slowpath critical section
      will unconditionally set the WAITING_BIAS, if not set yet, and clear
      its active count even if no one is in the wait queue and no writer
      is present. This causes some incoming readers to observe the presence
      of waiters in the wait queue and hence have to go into the slowpath
      themselves.
      
      With sufficient numbers of readers and a relatively short lock hold time,
      the WAITING_BIAS may be repeatedly turned on and off, and a substantial
      portion of the readers will go into the slowpath, sustaining a long
      queue on the wait queue spinlock and a repeated WAITING_BIAS on/off
      cycle until the logjam is opportunistically broken.
      
      To keep this situation from happening, an additional check is added to
      detect the special case that the reader in the critical section is the
      only one in the wait queue and no writer is present. When that happens,
      it can just exit the slowpath and return immediately as its active count
      has already been set in the lock.  Other incoming readers won't observe
      the presence of waiters and so will not be forced into the slowpath.
      
      The issue was found in a customer site where they had an application
      that pounded on the pread64 syscalls heavily on an XFS filesystem. The
      application was run on recent 4-socket boxes with a lot of CPUs. They
      saw significant spinlock contention in the rwsem_down_read_failed() call.
      With this patch applied, the system CPU usage went down from 85% to 57%,
      and the spinlock contention in the pread64 syscalls was gone.
      Signed-off-by: Waiman Long <longman@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Davidlohr Bueso <dbueso@suse.de>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Joe Mario <jmario@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1532459425-19204-1-git-send-email-longman@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      4b486b53
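      Roughly, the shape of the added early exit (a sketch of the slowpath
      check, not the exact hunk from rwsem_down_read_failed()):

      	raw_spin_lock_irq(&sem->wait_lock);
      	if (list_empty(&sem->wait_list) &&
      	    atomic_long_read(&sem->count) >= 0) {
      		/*
      		 * Nobody is queued and no writer holds or waits for the
      		 * lock; this reader's active count is already in place,
      		 * so it can leave the slowpath without setting
      		 * WAITING_BIAS and dragging later readers in here too.
      		 */
      		raw_spin_unlock_irq(&sem->wait_lock);
      		return sem;
      	}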
    • jump_label/lockdep: Assert we hold the hotplug lock for _cpuslocked() operations · cb538267
      Peter Zijlstra authored
      Weirdly we seem to have forgotten this...
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      cb538267
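      A sketch of what the assertion looks like in one of the affected
      helpers (function name as in kernel/jump_label.c; body abbreviated):

      	void static_key_enable_cpuslocked(struct static_key *key)
      	{
      		/* The _cpuslocked() variants require the caller to hold the
      		 * CPU hotplug lock; let lockdep verify that instead of
      		 * trusting callers. */
      		lockdep_assert_cpus_held();

      		/* ... existing enable logic unchanged ... */
      	}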
    • jump_label: Fix typo in warning message · da260fe1
      Borislav Petkov authored
      There's no 'allocatote' - use the next best thing: 'allocate' :-)
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20180907103521.31344-1-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      da260fe1
    • locking/mutex: Fix mutex debug call and ww_mutex documentation · e13e2366
      Thomas Hellstrom authored
      The following commit:
      
        08295b3b ("Implement an algorithm choice for Wound-Wait mutexes")
      
      introduced a reference in the documentation to a function that was
      removed in an earlier commit.
      
      It also forgot to remove a call to debug_mutex_add_waiter() which is now
      unconditionally called by __mutex_add_waiter().
      
      Fix those bugs.
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: dri-devel@lists.freedesktop.org
      Fixes: 08295b3b ("Implement an algorithm choice for Wound-Wait mutexes")
      Link: http://lkml.kernel.org/r/20180903140708.2401-1-thellstrom@vmware.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e13e2366
    • jump_label: Use static_key_linked() accessor · 34e12b86
      Borislav Petkov authored
      ... instead of open-coding it, in static_key_mod().
      
      No functional changes.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Jason Baron <jbaron@akamai.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20180909114252.17575-1-bp@alien8.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      34e12b86
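      A sketch of the cleanup (helper and type bits as used by the jump_label
      code; surrounding code abbreviated): the open-coded test of the LINKED
      type bit inside static_key_mod() becomes a call to the existing accessor.

      	static inline bool static_key_linked(struct static_key *key)
      	{
      		return key->type & JUMP_TYPE_LINKED;
      	}

      	static inline struct static_key_mod *static_key_mod(struct static_key *key)
      	{
      		/* was: WARN_ON_ONCE(!(key->type & JUMP_TYPE_LINKED)); */
      		WARN_ON_ONCE(!static_key_linked(key));
      		return (struct static_key_mod *)(key->type & ~JUMP_TYPE_MASK);
      	}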
    • Linux 4.19-rc3 · 11da3a7f
      Linus Torvalds authored
      11da3a7f
  5. 09 Sep, 2018 7 commits
  6. 08 Sep, 2018 6 commits
    • Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm · f8f65382
      Linus Torvalds authored
      Pull KVM fixes from Radim Krčmář:
       "ARM:
         - Fix a VFP corruption in 32-bit guest
         - Add missing cache invalidation for CoW pages
         - Two small cleanups
      
        s390:
         - Fallout from the hugetlbfs support: pfmf interpretation and locking
         - VSIE: fix keywrapping for nested guests
      
        PPC:
         - Fix a bug where pages might not get marked dirty, causing guest
           memory corruption on migration
         - Fix a bug causing reads from guest memory to use the wrong guest
           real address for very large HPT guests (>256G of memory), leading
           to failures in instruction emulation.
      
        x86:
         - Fix out of bound access from malicious pv ipi hypercalls
           (introduced in rc1)
         - Fix delivery of pending interrupts when entering a nested guest,
           preventing arbitrarily late injection
         - Sanitize kvm_stat output after destroying a guest
         - Fix infinite loop when emulating a nested guest page fault and
           improve the surrounding emulation code
         - Two minor cleanups"
      
      * tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (28 commits)
        KVM: LAPIC: Fix pv ipis out-of-bounds access
        KVM: nVMX: Fix loss of pending IRQ/NMI before entering L2
        arm64: KVM: Remove pgd_lock
        KVM: Remove obsolete kvm_unmap_hva notifier backend
        arm64: KVM: Only force FPEXC32_EL2.EN if trapping FPSIMD
        KVM: arm/arm64: Clean dcache to PoC when changing PTE due to CoW
        KVM: s390: Properly lock mm context allow_gmap_hpage_1m setting
        KVM: s390: vsie: copy wrapping keys to right place
        KVM: s390: Fix pfmf and conditional skey emulation
        tools/kvm_stat: re-animate display of dead guests
        tools/kvm_stat: indicate dead guests as such
        tools/kvm_stat: handle guest removals more gracefully
        tools/kvm_stat: don't reset stats when setting PID filter for debugfs
        tools/kvm_stat: fix updates for dead guests
        tools/kvm_stat: fix handling of invalid paths in debugfs provider
        tools/kvm_stat: fix python3 issues
        KVM: x86: Unexport x86_emulate_instruction()
        KVM: x86: Rename emulate_instruction() to kvm_emulate_instruction()
        KVM: x86: Do not re-{try,execute} after failed emulation in L2
        KVM: x86: Default to not allowing emulation retry in kvm_mmu_page_fault
        ...
      f8f65382
    • Merge tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc · 0f3aa48a
      Linus Torvalds authored
      Pull ARM SoC fixes from Olof Johansson:
       "A few more fixes who have trickled in:
      
         - MMC bus width fixup for some Allwinner platforms
      
         - Fix for NULL deref in ti-aemif when no platform data is passed in
      
         - Fix div by 0 in SCMI code
      
         - Add a missing module alias in a new RPi driver"
      
      * tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
        memory: ti-aemif: fix a potential NULL-pointer dereference
        firmware: arm_scmi: fix divide by zero when sustained_perf_level is zero
        hwmon: rpi: add module alias to raspberrypi-hwmon
        arm64: allwinner: dts: h6: fix Pine H64 MMC bus width
      0f3aa48a
    • Merge tag 'sunxi-fixes-for-4.19' of... · a132bb90
      Olof Johansson authored
      Merge tag 'sunxi-fixes-for-4.19' of https://git.kernel.org/pub/scm/linux/kernel/git/sunxi/linux into fixes
      
      Allwinner fixes for 4.19
      
      Just one fix for H6 mmc on the Pine H64: the mmc bus width was missing
      from the device tree. This was added in 4.19-rc1.
      
      * tag 'sunxi-fixes-for-4.19' of https://git.kernel.org/pub/scm/linux/kernel/git/sunxi/linux:
        arm64: allwinner: dts: h6: fix Pine H64 MMC bus width
      Signed-off-by: Olof Johansson <olof@lixom.net>
      a132bb90
    • x86/mm: Use WRITE_ONCE() when setting PTEs · 9bc4f28a
      Nadav Amit authored
      When page-table entries are set, the compiler might optimize their
      assignment by using multiple instructions to set the PTE. This might
      turn into a security hazard if the user somehow manages to use the
      interim PTE. L1TF does not make our lives easier, making even an interim
      non-present PTE a security hazard.
      
      Using WRITE_ONCE() to set PTEs and friends should prevent this potential
      security hazard.
      
      I skimmed the differences in the binary with and without this patch. The
      differences are (obviously) greater when CONFIG_PARAVIRT=n as more
      code optimizations are possible. For better and worse, the impact on the
      binary with this patch is pretty small. Skimming the code did not cause
      anything to jump out as a security hazard, but it seems that at least
      move_soft_dirty_pte() caused set_pte_at() to use multiple writes.
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Sean Christopherson <sean.j.christopherson@intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180902181451.80520-1-namit@vmware.com
      9bc4f28a
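      A sketch of the pattern the patch applies (one representative helper as
      found in the x86 pgtable headers, abbreviated):

      	static inline void native_set_pte(pte_t *ptep, pte_t pte)
      	{
      		/* A plain '*ptep = pte' may be split by the compiler into
      		 * several stores, briefly exposing an interim PTE; a single
      		 * WRITE_ONCE() keeps the update whole. */
      		WRITE_ONCE(*ptep, pte);
      	}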
    • x86/apic/vector: Make error return value negative · 47b7360c
      Thomas Gleixner authored
      activate_managed() returns EINVAL instead of -EINVAL in case of
      error. While this is unlikely to happen, the positive return value would
      cause further malfunction at the call site.
      
      Fixes: 2db1f959 ("x86/vector: Handle managed interrupts proper")
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      47b7360c
    • Merge branch 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux · d7b686eb
      Linus Torvalds authored
      Pull i2c fixes from Wolfram Sang:
      
       - bugfixes for uniphier, i801, and xiic drivers
      
       - ID removal (never produced) for imx
      
       - one MAINTAINER addition
      
      * 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
        i2c: xiic: Record xilinx i2c with Zynq fragment
        i2c: xiic: Make the start and the byte count write atomic
        i2c: i801: fix DNV's SMBCTRL register offset
        i2c: imx-lpi2c: Remove mx8dv compatible entry
        dt-bindings: imx-lpi2c: Remove mx8dv compatible entry
        i2c: uniphier-f: issue STOP only for last message or I2C_M_STOP
        i2c: uniphier: issue STOP only for last message or I2C_M_STOP
      d7b686eb
  7. 07 Sep, 2018 8 commits