1. 01 Aug, 2010 2 commits
  2. 29 Jul, 2010 1 commit
    • x86, asm: Merge cmpxchg_486_u64() and cmpxchg8b_emu() · a378d933
      H. Peter Anvin authored
      We have two functions -- the C cmpxchg_486_u64() and the assembly
      cmpxchg8b_emu() -- that do exactly the same thing, emulating
      cmpxchg8b on 486 and older hardware, just with different calling
      conventions.  Drop the C version and use the assembly version, via
      alternatives, for both the local and non-local versions of
      cmpxchg8b.
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      LKML-Reference: <AANLkTikAmaDPji-TVDarmG1yD=fwbffcsmEU=YEuP+8r@mail.gmail.com>
      a378d933
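The operation being emulated can be sketched in plain C (hypothetical function name; the real cmpxchg8b_emu is assembly, and on a uniprocessor 486 it achieves atomicity by running with interrupts disabled rather than by this naive sequence):

```c
#include <stdint.h>

/* Hedged, non-atomic sketch of what a cmpxchg8b emulator computes:
 * compare the 64-bit value at ptr with 'old'; if they match, store
 * 'new_'; either way, return the value that was in memory. */
static uint64_t cmpxchg8b_sketch(uint64_t *ptr, uint64_t old, uint64_t new_)
{
    uint64_t prev = *ptr;   /* value currently in memory */
    if (prev == old)
        *ptr = new_;        /* success: install the new value */
    return prev;            /* caller compares prev == old for success */
}
```

The returned previous value is what lets callers distinguish a successful exchange (return equals `old`) from a failed one.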
  3. 28 Jul, 2010 5 commits
    • x86, asm: Move cmpxchg emulation code to arch/x86/lib · 90c8f92f
      H. Peter Anvin authored
      Move cmpxchg emulation code from arch/x86/kernel/cpu (which is
      otherwise CPU identification) to arch/x86/lib, where other emulation
      code lives already.
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      LKML-Reference: <AANLkTikAmaDPji-TVDarmG1yD=fwbffcsmEU=YEuP+8r@mail.gmail.com>
      90c8f92f
    • x86, asm: Clean up and simplify <asm/cmpxchg.h> · 4532b305
      H. Peter Anvin authored
      Remove the __xg() hack to create a memory barrier near xchg and
      cmpxchg; it has been there since 1.3.11 but should not be necessary
      with "asm volatile" and a "memory" clobber, neither of which were
      there in the original implementation.
      
      However, we *should* make this a volatile reference.
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      LKML-Reference: <AANLkTikAmaDPji-TVDarmG1yD=fwbffcsmEU=YEuP+8r@mail.gmail.com>
      4532b305
    • x86, asm: Clean up and simplify set_64bit() · 69309a05
      H. Peter Anvin authored
      Clean up and simplify set_64bit().  This code is quite old (1.3.11)
      and contains a fair bit of auxiliary machinery that current versions
      of gcc handle just fine automatically.  Worse, the auxiliary
      machinery can actually cause an unnecessary spill to memory.
      
      Furthermore, the loading of the old value inside the loop in the
      32-bit case is unnecessary: if the value doesn't match, the CMPXCHG8B
      instruction will already have loaded the "new previous" value for us.
      
      Clean up the comment, too, and remove page references to obsolete
      versions of the Intel SDM.
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      LKML-Reference: <tip-*@vger.kernel.org>
      69309a05
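The loop shape the commit describes can be sketched with GCC's `__atomic` builtin in place of raw CMPXCHG8B (hypothetical function name): on failure the builtin writes the current memory contents back into the expected-value variable, just as CMPXCHG8B leaves the "new previous" value in EDX:EAX, so no separate reload is needed inside the loop.

```c
#include <stdint.h>

/* Hedged sketch of an unconditional 64-bit store built on
 * compare-exchange, mirroring the structure of set_64bit(). */
static inline void set_64bit_sketch(uint64_t *ptr, uint64_t val)
{
    uint64_t old = *ptr;  /* single initial load, outside the loop */
    while (!__atomic_compare_exchange_n(ptr, &old, val, 1 /* weak */,
                                        __ATOMIC_SEQ_CST,
                                        __ATOMIC_SEQ_CST))
        ;  /* on failure, 'old' already holds the new previous value */
}
```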
    • d3608b56
    • x86: Add memory modify constraints to xchg() and cmpxchg() · 113fc5a6
      H. Peter Anvin authored
      xchg() and cmpxchg() modify their memory operands, not merely read
      them.  For some versions of gcc the "memory" clobber has apparently
      dealt with the situation, but not for all.
      Originally-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
      Cc: Glauber Costa <glommer@redhat.com>
      Cc: Avi Kivity <avi@redhat.com>
      Cc: Peter Palfrader <peter@palfrader.org>
      Cc: Greg KH <gregkh@suse.de>
      Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
      Cc: Zachary Amsden <zamsden@redhat.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: <stable@kernel.org>
      LKML-Reference: <4C4F7277.8050306@zytor.com>
      113fc5a6
  4. 27 Jul, 2010 10 commits
  5. 26 Jul, 2010 22 commits