Commit 760ec0e0 authored by Benjamin Herrenschmidt, committed by Paul Mackerras

powerpc/44x: No need to mask MSR:CE, ME or DE in _tlbil_va on 440

The handlers for Critical, Machine Check and Debug interrupts now save
and restore MMUCR, so we only need to disable normal interrupts when
invalidating TLB entries.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
parent 2a4aca11
@@ -75,18 +75,19 @@ _GLOBAL(_tlbil_va)
 	mfspr	r5,SPRN_MMUCR
 	rlwimi	r5,r4,0,24,31		/* Set TID */
 
-	/* We have to run the search with interrupts disabled, even critical
-	 * and debug interrupts (in fact the only critical exceptions we have
-	 * are debug and machine check). Otherwise an interrupt which causes
-	 * a TLB miss can clobber the MMUCR between the mtspr and the tlbsx. */
+	/* We have to run the search with interrupts disabled, otherwise
+	 * an interrupt which causes a TLB miss can clobber the MMUCR
+	 * between the mtspr and the tlbsx.
+	 *
+	 * Critical and Machine Check interrupts take care of saving
+	 * and restoring MMUCR, so only normal interrupts have to be
+	 * taken care of.
+	 */
 	mfmsr	r4
-	lis	r6,(MSR_EE|MSR_CE|MSR_ME|MSR_DE)@ha
-	addi	r6,r6,(MSR_EE|MSR_CE|MSR_ME|MSR_DE)@l
-	andc	r6,r4,r6
-	mtmsr	r6
+	wrteei	0
 	mtspr	SPRN_MMUCR,r5
 	tlbsx.	r3, 0, r3
-	mtmsr	r4
+	wrtee	r4
 	bne	1f
 	sync
 	/* There are only 64 TLB entries, so r3 < 64,
...