1. 18 Nov, 2016 2 commits
  2. 09 Nov, 2016 4 commits
    • KVM: MIPS: Precalculate MMIO load resume PC · ff686d55
      James Hogan authored
      commit e1e575f6 upstream.
      
      The advancing of the PC when completing an MMIO load is done before
      re-entering the guest, i.e. before restoring the guest ASID. However if
      the load is in a branch delay slot it may need to access guest code to
      read the prior branch instruction. This isn't safe in TLB mapped code at
      the moment, nor in the future when we'll access unmapped guest segments
      using direct user accessors too, as it could read the branch from host
      user memory instead.
      
      Therefore calculate the resume PC in advance while we're still in the
      right context and save it in the new vcpu->arch.io_pc (replacing the no
      longer needed vcpu->arch.pending_load_cause), and restore it on MMIO
      completion.
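
      A minimal sketch of the approach described above (standalone, with
      illustrative names and an update_pc() stub; the real helper may have
      to read the prior branch instruction from guest memory to resolve a
      branch delay slot, which is exactly why it must run while the guest
      context is still live):

      	struct vcpu_arch_sketch {
      		unsigned long pc;	/* guest program counter */
      		unsigned long io_pc;	/* precalculated MMIO resume PC */
      	};

      	/* Illustrative stub: the real update_pc() decodes branch delay
      	 * slots and so may access guest code. */
      	static int update_pc(struct vcpu_arch_sketch *arch)
      	{
      		arch->pc += 4;
      		return 0;
      	}

      	/* At emulation time, while the guest ASID is still in place: */
      	static int emulate_mmio_load(struct vcpu_arch_sketch *arch)
      	{
      		unsigned long curr_pc = arch->pc;

      		if (update_pc(arch))
      			return -1;
      		arch->io_pc = arch->pc;	/* where to resume afterwards */
      		arch->pc = curr_pc;	/* keep PC at the faulting load */
      		return 0;
      	}

      	/* On MMIO completion, no guest memory access is needed: */
      	static void complete_mmio_load(struct vcpu_arch_sketch *arch)
      	{
      		arch->pc = arch->io_pc;
      	}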
      
      Fixes: e685c689 ("KVM/MIPS32: Privileged instruction/target branch emulation.")
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      [james.hogan@imgtec.com: Backport to 3.10..3.16]
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • MIPS: KVM: Fix unused variable build warning · fb357699
      Nicholas Mc Guire authored
      commit 5f508c43 upstream.
      
      As James Hogan <james.hogan@imgtec.com> explained,
      kvm_mips_complete_mmio_load() does not modify the PC at this point, so
      the curr_pc variable and the comments that go along with it can be
      dropped.
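
      A hypothetical reduction of the warning being fixed (names
      illustrative, not the literal kernel code):

      	static void complete_mmio_load(void)
      	{
      		unsigned long curr_pc;	/* declared for a PC rollback that
      					 * never happens here; the compiler
      					 * warns: unused variable 'curr_pc' */

      		/* ... completion work that never touches curr_pc ... */
      	}

      Dropping the declaration (and its explanatory comments) silences the
      warning without changing behaviour.
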
      Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org>
      Link: http://lkml.org/lkml/2015/5/8/422
      Cc: Gleb Natapov <gleb@kernel.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: kvm@vger.kernel.org
      Cc: linux-mips@linux-mips.org
      Cc: linux-kernel@vger.kernel.org
      Patchwork: https://patchwork.linux-mips.org/patch/9993/
      Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
      [james.hogan@imgtec.com: Backport to 3.10..3.16]
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • KVM: MIPS: Drop other CPU ASIDs on guest MMU changes · e01f1c70
      James Hogan authored
      commit 91e4f1b6 upstream.
      
      When a guest TLB entry is replaced by TLBWI or TLBWR, we only invalidate
      TLB entries on the local CPU. This doesn't work correctly on an SMP host
      when the guest is migrated to a different physical CPU, as it could pick
      up stale TLB mappings from the last time the vCPU ran on that physical
      CPU.
      
      Therefore invalidate both user and kernel host ASIDs on other CPUs,
      which will cause new ASIDs to be generated when it next runs on those
      CPUs.
      
      We're careful only to do this if the TLB entry was already valid, and
      only for the kernel ASID where the virtual address it mapped is outside
      of the guest user address range.
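
      A hedged sketch of the scheme (structure and names illustrative,
      loosely following the description above; the real code iterates the
      host's possible CPUs):

      	#define NR_CPUS_SKETCH 4	/* stand-in for the host CPU count */

      	struct vcpu_sketch {
      		/* cached host ASIDs, one per physical CPU; 0 = none */
      		unsigned long guest_user_asid[NR_CPUS_SKETCH];
      		unsigned long guest_kernel_asid[NR_CPUS_SKETCH];
      	};

      	/* Called when TLBWI/TLBWR replaces a valid guest TLB entry. */
      	static void drop_other_cpu_asids(struct vcpu_sketch *vcpu,
      					 int this_cpu, int user, int kernel)
      	{
      		int i;

      		for (i = 0; i < NR_CPUS_SKETCH; i++) {
      			if (i == this_cpu)
      				continue;	/* local TLB is flushed directly */
      			if (user)
      				vcpu->guest_user_asid[i] = 0;	/* force new ASID */
      			if (kernel)
      				vcpu->guest_kernel_asid[i] = 0;
      		}
      	}
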
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: linux-mips@linux-mips.org
      Cc: kvm@vger.kernel.org
      [james.hogan@imgtec.com: Backport to 3.10..3.16]
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • Revert "KVM: MIPS: Drop other CPU ASIDs on guest MMU changes" · f668f2ee
      Jiri Slaby authored
      This reverts commit 168e5ebb, which is
      upstream commit 91e4f1b6. It caused
      build failures because it was improperly backported. A corrected
      version is on its way, so revert this broken one.
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: James Hogan <james.hogan@imgtec.com>
  3. 08 Nov, 2016 32 commits
  4. 28 Oct, 2016 2 commits
    • compiler: Allow 1- and 2-byte smp_load_acquire() and smp_store_release() · afe98b3b
      Paul E. McKenney authored
      commit 536fa402 upstream.
      
      CPUs without single-byte and double-byte loads and stores place some
      "interesting" requirements on concurrent code.  For example (adapted
      from Peter Hurley's test code), suppose we have the following structure:
      
      	struct foo {
      		spinlock_t lock1;
      		spinlock_t lock2;
      		char a; /* Protected by lock1. */
      		char b; /* Protected by lock2. */
      	};
      	struct foo *foop;
      
      Of course, it is common (and good) practice to place data protected
      by different locks in separate cache lines.  However, if the locks are
      rarely acquired (for example, only in rare error cases), and there are
      a great many instances of the data structure, then memory footprint can
      trump false-sharing concerns, so that it can be better to place them in
      the same cache line as above.
      
      But if the CPU does not support single-byte loads and stores, a store
      to foop->a will do a non-atomic read-modify-write operation on foop->b,
      which will come as a nasty surprise to someone holding foop->lock2.  So we
      now require CPUs to support single-byte and double-byte loads and stores.
      Therefore, this commit adjusts the definition of __native_word() to allow
      these sizes to be used by smp_load_acquire() and smp_store_release().
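
      The gist of the change, sketched from the description above (the exact
      macro text in the tree may differ slightly):

      	/* before: only int- and long-sized types count as native words */
      	#define __native_word(t) \
      		(sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long))

      	/* after: 1- and 2-byte types qualify too */
      	#define __native_word(t) \
      		(sizeof(t) == sizeof(char) || sizeof(t) == sizeof(short) || \
      		 sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long))

      smp_load_acquire() and smp_store_release() gate on this check at
      compile time, so acquire/release accesses to char and short fields now
      build.
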
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
    • metag: Only define atomic_dec_if_positive conditionally · e4dfd365
      Guenter Roeck authored
      commit 35d04077 upstream.
      
      The definition of atomic_dec_if_positive() assumes that
      atomic_sub_if_positive() exists, which is only the case if
      metag specific atomics are used. This results in the following
      build error when trying to build metag1_defconfig.
      
      kernel/ucount.c: In function 'dec_ucount':
      kernel/ucount.c:211: error:
      	implicit declaration of function 'atomic_sub_if_positive'
      
      Moving the definition of atomic_dec_if_positive() into the metag
      conditional code fixes the problem.
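
      A sketch of the fix's shape (the config symbol and surrounding
      structure are illustrative, based on the description above):

      	#ifdef CONFIG_METAG_ATOMICITY_IRQSOFF
      	/* generic atomics: no atomic_sub_if_positive() here */
      	#include <asm-generic/atomic.h>
      	#else
      	/* metag-specific atomics provide atomic_sub_if_positive(), so
      	 * the dependent macro now lives inside this branch */
      	#define atomic_dec_if_positive(v)	atomic_sub_if_positive(1, v)
      	#endif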
      
      Fixes: 6006c0d8 ("metag: Atomics, locks and bitops")
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>