- 20 Mar, 2006 40 commits
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
sun4v uses ASI_MMU instead of ASI_DMMU. Signed-off-by: David S. Miller <davem@davemloft.net>
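A hedged illustration of what changes: privileged MMU registers are read with ldxa through an ASI, and only the ASI number differs between the two machine classes. The accessor macro below is hypothetical; the ldxa pattern and the ASI names are the kernel's.

    /* Hypothetical accessor; only the ASI names come from asm/asi.h.
     * The ASI must be a compile-time constant because it is encoded
     * in the ldxa instruction itself. */
    #define mmu_read(off, asi)                                  \
    ({                                                          \
            unsigned long __val;                                \
            __asm__ __volatile__("ldxa [%1] %2, %0"             \
                                 : "=r" (__val)                 \
                                 : "r" (off), "i" (asi));       \
            __val;                                              \
    })

    /* sun4u: mmu_read(reg_off, ASI_DMMU);  sun4v: mmu_read(reg_off, ASI_MMU) */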
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
For the expansion of __GET_CPUID() on SMP. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Happily we have no D-cache aliasing issues on these chips, so the implementation is very straightforward. Add a stub in bootup where the patching calls for niagara/sun4v/hypervisor will be made. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Things are a little tricky because, unlike sun4u, we have to:
1) do a hypervisor trap to do the TLB load.
2) do the TSB lookup calculations by hand.
Signed-off-by: David S. Miller <davem@davemloft.net>
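As a hedged sketch of the second point, the by-hand TSB lookup is just a direct-mapped hash on the virtual page number; the structure and function names below are hypothetical, while the indexing is the standard power-of-two TSB scheme.

    #define PAGE_SHIFT 13                   /* 8K base pages on sparc64 */

    struct tsb_entry {
            unsigned long tag;              /* virtual tag to match */
            unsigned long pte;              /* translation data */
    };

    /* hash the virtual page number into a direct-mapped TSB slot */
    static struct tsb_entry *tsb_slot(struct tsb_entry *tsb,
                                      unsigned long nentries,
                                      unsigned long vaddr)
    {
            unsigned long idx = (vaddr >> PAGE_SHIFT) & (nentries - 1);

            return &tsb[idx];
    }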
-
David S. Miller authored
V9 requires a write memory barrier before the instruction flush. Signed-off-by: David S. Miller <davem@davemloft.net>
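A minimal sketch of the required sequence, assuming we are patching a single instruction word (the helper name is hypothetical; membar #StoreStore and flush are the V9 instructions involved):

    static inline void patch_one_insn(unsigned int *addr, unsigned int insn)
    {
            *addr = insn;
            /* order the store above before the I-cache flush, per V9 */
            __asm__ __volatile__("membar #StoreStore\n\t"
                                 "flush %0"
                                 : /* no outputs */
                                 : "r" (addr)
                                 : "memory");
    }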
-
David S. Miller authored
If we're just switching between different alternate global sets, nop it out on sun4v. Also, get rid of all of the alternate global save/restore in the OBP CIF trampoline code. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
They are totally unnecessary because:
1) Interrupts are already disabled when switch_to() runs.
2) We don't use hard-coded alternate globals any longer.
This found a case in rtrap, which still assumed alternate global %g6 was current_thread_info(), and that is fixed by this changeset as well. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Instead of setting/clearing PSTATE_AG we have to change the %gl register value on sun4v. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
As we save trap state onto the stack, the store buffer fills up mid-way through and we stall for several cycles as the store buffer trickles out to the L2 cache. Meanwhile we can do some privileged register reads and other calculations, essentially for free. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
And more consistently check cheetah{,_plus} instead of assuming anything not spitfire is cheetah{,_plus}. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
When saving and restoring trap state, do the window spill/fill handling inline so that we never trap deeper than 2 trap levels. This is important for chips like Niagara. The window fixup code is massively simplified, and many more improvements are now possible. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
This, as well as making the code cleaner, allows a simplification in the TSB miss handling path. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
On uniprocessor, it's always zero, so optimize for that. On SMP, the jmpl to the stub kills the return address stack in the cpu branch prediction logic, so expand the code sequence inline and use a code patching section to fix things up. This also allows better and explicit register selection, which will be taken advantage of in a future changeset. The hard_smp_processor_id() function is big, so do not inline it. Fix up tests for Jalapeno to also test for Serrano chips. These tests want "jbus Ultra-IIIi" cases to match, so that is what we should test for. Signed-off-by: David S. Miller <davem@davemloft.net>
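A rough sketch of the patch-section mechanism this relies on, with a hypothetical entry layout and section names; the real kernel records address/instruction pairs in dedicated linker sections and rewrites them once during early boot:

    struct insn_patch_entry {
            unsigned int *addr;             /* instruction to rewrite */
            unsigned int insn;              /* replacement instruction word */
    };

    /* section bounds provided by the linker script (names hypothetical) */
    extern struct insn_patch_entry __patch_start[], __patch_end[];

    static void apply_insn_patches(void)    /* called once during early boot */
    {
            struct insn_patch_entry *p;

            for (p = __patch_start; p < __patch_end; p++) {
                    *p->addr = p->insn;
                    /* make the new instruction visible to I-fetch */
                    __asm__ __volatile__("membar #StoreStore\n\t"
                                         "flush %0"
                                         : : "r" (p->addr) : "memory");
            }
    }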
-
David S. Miller authored
There are several tricky races involved with growing the TSB. So just use base-size TSBs for user contexts and we can revisit enabling this later. One part of the SMP problem is that tsb_context_switch() can see partially updated TSB configuration state if tsb_grow() is running in parallel. That's easily solved with a seqlock taken as a writer by tsb_grow() and taken as a reader to capture all of the TSB config state in tsb_context_switch(). Then there is flush_tsb_user() running in parallel with tsb_grow(). In theory we could take the seqlock as a reader there too, and just resample the TSB pointer and reflush, but that looks really ugly. Lastly, I believe there is a case with threads that results in a TSB entry lock bit being set spuriously, which will cause the next access to that TSB entry to wedge the cpu (since the TSB entry lock bit will never clear). It's either copy_tsb() or some bug elsewhere in the TSB assembly. Signed-off-by: David S. Miller <davem@davemloft.net>
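For concreteness, a hedged sketch of the seqlock scheme described above; this is the idea being discussed, not the code merged here (which simply falls back to base-size TSBs), and the config structure is hypothetical while the seqlock API is the kernel's:

    #include <linux/seqlock.h>

    struct tsb_config {                     /* hypothetical TSB state snapshot */
            unsigned long tsb_base;
            unsigned long tsb_nentries;
    };

    static struct tsb_config cur_tsb_cfg;
    static DEFINE_SEQLOCK(tsb_cfg_lock);

    static void tsb_grow_sketch(unsigned long base, unsigned long nentries)
    {
            write_seqlock(&tsb_cfg_lock);   /* writers exclude readers */
            /* ... allocate the larger TSB, copy_tsb(), then publish: */
            cur_tsb_cfg.tsb_base = base;
            cur_tsb_cfg.tsb_nentries = nentries;
            write_sequnlock(&tsb_cfg_lock);
    }

    static struct tsb_config tsb_cfg_snapshot(void)
    {
            struct tsb_config snap;
            unsigned int seq;

            do {                            /* retry until consistent */
                    seq = read_seqbegin(&tsb_cfg_lock);
                    snap = cur_tsb_cfg;
            } while (read_seqretry(&tsb_cfg_lock, seq));

            return snap;                    /* program TSB registers from this */
    }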
-
David S. Miller authored
These are disrupting traps, which by the sparc v9 definition means they can only occur when interrupts are enabled in the %pstate register. This never occurs in any of the trap handling code running at trap levels > 0. So just mark it as an unexpected trap. This allows us to kill off the cee_stuff member of struct thread_info. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
This way we don't need to lock the TSB into the TLB. The trick is that every TSB load/store is registered into a special instruction patch section. The default uses virtual addresses, and the patch instructions use physical address load/stores. We can't do this on all chips because only cheetah+ and later have the physical variant of the atomic quad load. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
If we are returning back to kernel mode, %g4 could be live (for example, in the case where we window spill in the etrap code). So do not change it's value if going back to kernel. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Since we use %g5 itself as a temporary, it can get clobbered if we take an interrupt mid-stream, and we thus end up with the final %g5 value too early as a result of rtrap processing. Set %g5 at the very end, atomically, to avoid this problem. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
%g6 is not necessarily set to current_thread_info() at sparc64_realfault_common. So store the fault code and address after we invoke etrap, when %g6 is properly set up. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Just flip the bit off of whatever it's currently set to. PSTATE_IE is guaranteed to be enabled when we get here. Signed-off-by: David S. Miller <davem@davemloft.net>
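This works because the V9 wrpr instruction XORs its two source operands before writing the target register, so with PSTATE_IE known to be set, XOR-ing it in clears it. A hedged sketch of the resulting interrupt-disable sequence (PSTATE_IE's value is from asm/pstate.h; the function name is hypothetical):

    #define PSTATE_IE 0x0002UL              /* interrupt-enable bit, asm/pstate.h */

    static inline void irq_disable_sketch(void)
    {
            unsigned long tmp;

            __asm__ __volatile__(
                    "rdpr %%pstate, %0\n\t"
                    "wrpr %0, %1, %%pstate" /* %pstate = tmp XOR PSTATE_IE */
                    : "=r" (tmp)
                    : "i" (PSTATE_IE)
                    : "memory");
    }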
-
David S. Miller authored
Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
No longer used, and move extern declaration to a header file. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
It is totally unnecessary complexity. After we take over the trap table, we handle all PROM tlb misses fully. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Instead of ugly hard-coded value. Signed-off-by: David S. Miller <davem@davemloft.net>
-
David S. Miller authored
Some of the trap code was still assuming that alternate global %g6 was hard coded with current_thread_info(). Let's just consistently flush at KERNBASE when we need a pipeline synchronization. That's locked into the TLB and will always work. Signed-off-by: David S. Miller <davem@davemloft.net>
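A minimal sketch of that flush, assuming KERNBASE's value from the sparc64 page.h; because the kernel's base mapping is TLB-locked, the flush itself can never take a TLB miss:

    #define KERNBASE 0x400000UL             /* from asm/page.h, sparc64 */

    static inline void pipeline_sync(void)
    {
            /* flush a locked-TLB address purely for its pipeline-
             * synchronizing side effect */
            __asm__ __volatile__("flush %0" : : "r" (KERNBASE) : "memory");
    }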
-