[PATCH] ppc64: improve SLB reload
From: Paul Mackerras <paulus@samba.org>

Rewrite/cleanup of the SLB management code.  This removes nearly all the
SLB related code from arch/ppc64/kernel/stab.c and puts a rewritten version
in arch/ppc64/mm, where it better belongs.  The main SLB miss path is in
assembler and the other routines have been cleaned up and streamlined.

Notable changes:

- Ugly bitfields are no longer used for generating SLB entries.

- slb_allocate() (the main SLB miss routine) is now in assembler, and all
  the data it uses is stored in the PACA.

- The mm context is now copied into the PACA at context switch time, to
  avoid looking up the thread struct on SLB miss.

- An SLB miss will now never (directly) result in a call to
  do_page_fault.  If we get a miss on a totally bogus address the handler
  will now put in an SLB referencing VSID 0.  This will never have any
  pages, so we'll get the (fatal) page fault shortly afterwards.  This
  simplifies the SLB entry and exit paths.

- The round-robin pointer in the PACA now references the last-used
  instead of next-to-use SLB slot, which simplifies the asm for updating it
  slightly.

- Unify do_slb_bolted with the general SLB miss path.  There is now one
  SLB miss handler, in assembler, and called with only the low-level
  exception prolog (EXCEPTION_PROLOG_[PI]SERIES rather than
  EXCEPTION_PROLOG_COMMON) and minimal extra save/restore logic.

- Streamline the exception entry/exit path of the SLB miss handler to
  shave a few cycles off.  The most significant change is that the RI bit
  is left off throughout the whole handler, which avoids an extra mtmsrd to
  turn it back off on the exit path.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>