- 01 Aug, 2014 40 commits
-
Paul Burton authored
Detect the presence of MAAR using the MRP bit in Config5, and record that presence using a CPU option bit. A cpu_has_maar macro will then allow code to conditionalise upon the presence of MAARs. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7330/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
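A minimal sketch of what such a probe and feature macro can look like, assuming the usual MIPS cpu-features conventions (the probe site and any names beyond cpu_has_maar and Config5.MRP are assumptions, not taken from the patch):

    /* cpu-probe (sketch): record MAAR support when Config5.MRP is set;
     * c points at the current CPU's struct cpuinfo_mips. */
    if (read_c0_config5() & MIPS_CONF5_MRP)
            c->options |= MIPS_CPU_MAAR;

    /* cpu-features.h (sketch): let code conditionalise on the option. */
    #ifndef cpu_has_maar
    #define cpu_has_maar    (cpu_data[0].options & MIPS_CPU_MAAR)
    #endif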
-
Paul Burton authored
Add accessor macros for the Memory Accessibility Attribute Registers (MAARs), the bits contained within the MAARs & the Config5.MRP bit indicating their presence. The only current use of the MAARs is to enable speculative accesses to regions of memory. Besides the potential performance benefits of speculative accesses, they are a requirement for the P5600 core to handle non-128b-aligned MSA vector loads & stores rather than generating an address error. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7329/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Paul Burton authored
In light of the commit 16f77de8 (Revert "MIPS: Save/restore MSA context around signals") the MSA support in the kernel is incomplete. Until the replacement for the former sigcontext changes is agreed upon and in tree, mark MSA experimental & disable it by default. MSA is only implemented by one CPU supported by the kernel, the P5600. The P5600 is a 32 bit core, and thus MSA can only be used when the experimental CONFIG_MIPS_O32_FP64_SUPPORT option is enabled. Therefore MSA is only being used in experimental settings anyway and this change doesn't actually make any difference beyond clarifying the state of MSA support. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7311/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Paul Burton authored
MSA requires that Status.FR == 1, so for MIPS32 tasks MSA can only be used if CONFIG_MIPS_O32_FP64_SUPPORT is enabled. If it is not & the kernel is 32bit, there's no point including support for MSA. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7310/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Paul Burton authored
The TIF_MSA_CTX_LIVE flag (indicating that a task has MSA context which needs to be preserved) was being cleared in start_thread, but the TIF_USEDMSA flag (indicating that a task has used MSA in this timeslice) was not. In copy_thread neither flag was cleared, but both need to be. Without clearing these flags the kernel will proceed to attempt to save MSA context when the task is context switched out, and if the task had not used MSA in the meantime then it will fail because MSA or the FPU is disabled. The end result is typically:

do_cpu invoked from kernel context![#1]:
CPU: 0 PID: 99 Comm: sh Not tainted 3.16.0-rc4-00025-g6dc9476-dirty #88
task: 8f23dc60 ti: 8f1d8000 task.ti: 8f1d8000
...
Call Trace:
[<8010edbc>] resume+0x5c/0x280
[<80481e0c>] __schedule+0x370/0x800
[<80104838>] work_resched+0x8/0x2c

Fix by consistently clearing both flags in both functions. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7309/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
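Illustrative shape of the fix using the standard thread-flag helpers (p stands for the child task_struct in copy_thread(); the exact placement is assumed):

    /* start_thread(): the exec'd program begins with no MSA state. */
    clear_thread_flag(TIF_USEDMSA);
    clear_thread_flag(TIF_MSA_CTX_LIVE);

    /* copy_thread(): nor does a freshly forked child. */
    clear_tsk_thread_flag(p, TIF_USEDMSA);
    clear_tsk_thread_flag(p, TIF_MSA_CTX_LIVE);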
-
Paul Burton authored
The MSA specification upon first read appears to suggest that it is safe to perform vector loads & stores with arbitrary alignment. However it leaves provision for "address-dependent exceptions"... Align the vector context to a 16 byte boundary to ensure that the kernel cannot cause any such exceptions. Note that the fpu field of struct thread_struct was already at a 16 byte boundary within the struct, the introduction of FPU_ALIGN simply makes the requirement explicit. The only part of this impacting the generated kernel binary is ARCH_MIN_TASKALIGN. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7308/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
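A sketch of making the requirement explicit (FPU_ALIGN from the commit text; the surrounding struct layout is elided):

    /* MSA vector loads/stores of the saved context want 16-byte
     * alignment, so state it rather than rely on field ordering. */
    #define FPU_ALIGN       __aligned(16)

    struct thread_struct {
            /* ... */
            struct mips_fpu_struct fpu FPU_ALIGN;
            /* ... */
    };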
-
Paul Burton authored
Preemption must be disabled throughout the process of enabling the FPU, enabling MSA & initialising the vector registers. Without doing so it is possible to lose the FPU or MSA whilst initialising them causing that initialisation to fail. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7307/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
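The pattern being described, as a hedged sketch (the enable/init helper names are illustrative; only the preemption bracketing is the point):

    preempt_disable();              /* keep the FPU/MSA ours throughout */
    enable_fpu_for_current();       /* hypothetical helper */
    enable_msa_for_current();       /* hypothetical helper */
    init_msa_vector_regs();         /* hypothetical helper */
    preempt_enable();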
-
Paul Burton authored
The kernel relies upon MSA being disabled when a task begins running, so that it can initialise or restore context in response to the resulting MSA disabled exception. Previously the state of MSA following boot was left as it was before the kernel ran, where MSA could potentially have been enabled. Explicitly disable it during boot to prevent any problems. As a nice side effect the code reads a little better too. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7306/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Paul Burton authored
Commit d96cc3d1 "MIPS: Add microMIPS MSA support." attempted to use the value of a macro within an inline asm statement but instead emitted a comment leading to the cfcmsa & ctcmsa instructions being omitted. Fix that by passing CFC_MSA_INSN & CTC_MSA_INSN as arguments to the asm statements. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7305/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
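A generic illustration of the underlying inline-asm issue (placeholder name and value, not the kernel's encodings): a macro named inside an asm string literal is never expanded by the preprocessor, so its value has to be spliced in as an immediate operand.

    #define EXAMPLE_INSN_WORD   0x01234567u  /* placeholder, not a real encoding */

    static inline void emit_example_insn(void)
    {
            __asm__ __volatile__(
            "       .word   %0\n"   /* %0 expands to the constant's value */
            : /* no outputs */
            : "i"(EXAMPLE_INSN_WORD));
    }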
-
Paul Burton authored
If a task does not execute scalar FP instructions prior to using MSA then the flags indicating that the task has live MSA context were not being set. The upper 64b of each vector register would then be lost upon the task's first context switch after using MSA. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7500/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Paul Burton authored
When a task first makes use of MSA we need to ensure that the upper 64b of the vector registers are set to some value such that no information can be leaked to it from the previous task to use MSA context on the CPU. The architecture formerly specified that these bits would be cleared to 0 when a scalar FP instruction wrote to the aliased FP registers, which would have implicitly handled this as the kernel restored scalar FP context. However more recent versions of the specification now state that the value of the bits in such cases is unpredictable. Initialise them explicitly to be sure, and set all the bits to 1 rather than 0 for consistency with the least significant 64b. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7497/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
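A C-level sketch of the initialisation, assuming the set_fpr64() accessor and the fpr[] context layout (the real patch may well do this from the MSA-disabled exception path instead):

    /* Fill bits 127:64 of every vector register with ones so nothing
     * can leak from the previous MSA user of this CPU. */
    int i;

    for (i = 0; i < NUM_FPU_REGS; i++)
            set_fpr64(&current->thread.fpu.fpr[i], 1, ~0ULL);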
-
Paul Burton authored
The kernel depends upon MSA never being enabled when the FPU is not, a condition which is currently violated in a few places (whilst saving sigcontext, following mips_cpu_save). Catch all the problem cases by disabling MSA in lose_fpu, after saving context if necessary. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7302/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Paul Burton authored
Switching the vector context implicitly saves & restores the state of the aliased scalar FP data registers, however the scalar FP control & status register is distinct from the MSA control & status register. In order to allow scalar FP to function correctly in programs using MSA, the scalar CSR needs to be saved & restored along with the MSA vector context. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7301/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Paul Burton authored
I added a field for the MSACSR register in struct mips_fpu_struct, but never actually made use of it... This is a clear bug. Save and restore the MSACSR register along with the vector registers. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7300/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
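Sketch of the idea, assuming read_msa_csr()/write_msa_csr() style accessors and the msacsr field mentioned above (the vector-register helpers are hypothetical):

    /* saving the task's MSA context */
    current->thread.fpu.msacsr = read_msa_csr();
    save_msa_vector_regs(current);              /* hypothetical */

    /* restoring it */
    restore_msa_vector_regs(current);           /* hypothetical */
    write_msa_csr(current->thread.fpu.msacsr);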
-
Paul Burton authored
Just #ifdef away the C functions when included from an assembly file, as will be done in a following commit. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7299/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
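The usual header idiom for this, in outline:

    /* asm/msa.h (sketch): constants remain visible to .S files, while
     * the C helpers are hidden from the assembler. */
    #ifndef __ASSEMBLY__

    static inline void save_msa(struct task_struct *t)
    {
            /* C-only implementation ... */
    }

    #endif /* !__ASSEMBLY__ */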
-
Aaro Koskinen authored
Add interface & port definitions for D-Link DSR-1000N. Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: David Daney <ddaney.cavm@gmail.com> Patchwork: https://patchwork.linux-mips.org/patch/7219/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Aaro Koskinen authored
Add USB clock type for D-Link DSR-1000N. Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: David Daney <ddaney.cavm@gmail.com> Patchwork: https://patchwork.linux-mips.org/patch/7218/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Aaro Koskinen authored
Add a definition for D-Link DSR-1000N router. The bootloader on this board supplies 20006 in the bootinfo; the enum CVMX_BOARD_TYPE_CUST_DSR1000N comes from the GPL sources of the board. Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Cc: David Daney <ddaney.cavm@gmail.com> Patchwork: https://patchwork.linux-mips.org/patch/7217/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Aaro Koskinen authored
Disable HOTPLUG_CPU functionality if the bootloader version is incorrect. Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Cc: linux-mips@linux-mips.org Cc: David Daney <ddaney.cavm@gmail.com> Patchwork: https://patchwork.linux-mips.org/patch/7200/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Aaro Koskinen authored
If the nosmp kernel option is given, we can assume HOTPLUG_CPU is disabled. Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Acked-by: David Daney <david.daney@cavium.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7202/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Aaro Koskinen authored
If CONFIG_HOTPLUG_CPU is set, the driver assumes the bootloader entry address is configured and that we should jump there. However, this is not necessarily true if the kernel is booted on a system with an older/incompatible bootloader. Add dynamic checks for the bootloader entry address. Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Cc: linux-watchdog@vger.kernel.org Cc: linux-mips@linux-mips.org Cc: David Daney <ddaney.cavm@gmail.com> Patchwork: https://patchwork.linux-mips.org/patch/7201/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Aaro Koskinen authored
The same check is already done earlier in octeon_smp_hotplug_setup(). Signed-off-by: Aaro Koskinen <aaro.koskinen@iki.fi> Acked-by: David Daney <david.daney@cavium.com> Cc: linux-mips@linux-mips.org Cc: Aaro Koskinen <aaro.koskinen@iki.fi> Patchwork: https://patchwork.linux-mips.org/patch/7199/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Florian Fainelli authored
Commit 35133692 ("[MIPS] Allow setting of the cache attribute at run time") introduced the 'cca=' kernel command-line parameter which allows overriding the kernel pages cacheable attributes; document that parameter. [ralf@linux-mips.org: replace @mips.com email addresses with its imgtec.com equivalent in this commit message. Rephrase slightly for a bit more pedantic correctness.] Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Cc: linux-mips@linux-mips.org Cc: blogic@openwrt.org Cc: anemo@mba.ocn.ne.jp Cc: chris.dearman@imgtec.com Patchwork: https://patchwork.linux-mips.org/patch/7182/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Jeffrey Deans authored
The GICBIS macro could update the GIC registers incorrectly, depending on the data value passed in:
* Bits were only OR'd into the register data, so register fields could not be cleared.
* Bits were OR'd into the register data without masking the data to the correct field width, corrupting adjacent bits.
Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7378/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
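A minimal sketch of a correct field update of that kind (names here are illustrative, not the kernel's GIC accessors):

    /* Read-modify-write: clear the whole field first, then OR in only
     * the data bits that fall inside the field's mask. */
    static inline void gic_update_bits(void __iomem *reg, u32 mask, u32 data)
    {
            u32 val = readl(reg);

            val &= ~mask;           /* fields can now be cleared */
            val |= data & mask;     /* and adjacent bits stay intact */
            writel(val, reg);
    }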
-
Jeffrey Deans authored
The Malta malta_ipi_irqdispatch() routine now checks only IPI interrupts when handling IPIs. It could previously call do_IRQ() for non-IPIs, and also call do_IRQ() with an invalid IRQ number if there were no pending GIC interrupts when gic_get_int() was called. Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7377/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Jeffrey Deans authored
Move most of the functionality of gic_get_int() into a new function gic_get_int_mask() which takes a bitmask of interrupts in which the caller is interested, and returns the subset which are pending for the current CPU. This allows CP0 IRQ dispatch routines to check only the GIC interrupts which are routed to a particular CPU interrupt input. gic_get_int() is reimplemented using gic_get_int_mask() and is retained for use by any platforms for which gic_get_int() is sufficient. Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7376/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
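Conceptually the split looks like this (a bitmap-based sketch; "pending" stands for the per-CPU pending set the real code computes from the GIC registers, and the prototypes are assumptions):

    /* Return in dst only the pending interrupts the caller asked about. */
    static void gic_get_int_mask_sketch(unsigned long *dst,
                                        const unsigned long *src)
    {
            bitmap_and(dst, src, pending, GIC_NUM_INTRS);
    }

    /* gic_get_int() is then just "ask about everything". */
    static unsigned int gic_get_int_sketch(void)
    {
            DECLARE_BITMAP(interrupts, GIC_NUM_INTRS);

            bitmap_fill(interrupts, GIC_NUM_INTRS);
            gic_get_int_mask_sketch(interrupts, interrupts);
            return find_first_bit(interrupts, GIC_NUM_INTRS);
    }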
-
Jeffrey Deans authored
A GIC interrupt which is declared as having a GIC_MAP_TO_NMI_MSK mapping causes the cpu parameter to gic_setup_intr() to be increased to 32, causing memory corruption when pcpu_masks[] is written to again later in the function. Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: stable@vger.kernel.org Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7375/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Jeffrey Deans authored
irq-gic.c:gic_get_int() masks out interrupts from the pending set which aren't in the pcpu_mask. Only interrupts marked with GIC_FLAG_IPI were set in pcpu_mask, meaning that peripheral interrupts also had to be marked as IPIs. Remove the use of GIC_FLAG_IPI and allow the flags member of struct gic_intr_map to be zero. Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7374/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Jeffrey Deans authored
The value of GIC_NUM_INTRS is platform-specific. Using a default value from gic.h will result in incorrect behaviour on some systems, so require a suitable definition to be present in the platform's irq.h. Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7373/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Jeffrey Deans authored
Several bitmaps are declared in arch/mips/include/asm/gic.h, but the scope of their use is limited to arch/mips/kernel/irq-gic.c. Move the declarations from the header file to the C file. Signed-off-by: Jeffrey Deans <jeffrey.deans@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7372/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Leonid Yegoshin authored
Detect if the core supports unique exception codes for the Read-Inhibit and Execute-Inhibit exceptions and set the option accordingly. The RI/XI exception support is detected by setting the 27th bit (IEC) of the PageGrain C0 register and reading back the value of that register to verify the bit is enabled. Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7340/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
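Sketch of the write-and-read-back probe (bit and option names follow the commit text; the exact probe location and macro spellings are assumptions):

    /* Try to set PageGrain.IEC (bit 27); if it sticks, the core
     * provides distinct RI/XI exception codes. */
    set_c0_pagegrain(PG_IEC);
    if (read_c0_pagegrain() & PG_IEC)
            c->options |= MIPS_CPU_RIXIEX;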
-
Leonid Yegoshin authored
Use the regular tlb_do_page_fault_0 (no write) handler to handle the RI and XI exceptions. Also skip the RI/XI validation check in the TLB load handler since it's redundant when the CPU has unique RI/XI exceptions. Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7339/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Leonid Yegoshin authored
MIPSr5 added support for unique exception codes for the Read-Inhibit and Execute-Inhibit exceptions. Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7338/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Markos Chandras authored
The Hardware Page Table Walker aims to speed up TLB refill exceptions by handling them at the hardware level instead of having a software TLB refill handler. However, a TLB refill exception can still be thrown in certain cases such as synchronous exceptions, or address translation or memory errors during the HTW operation. As a result, the HTW must not be considered a complete replacement for the software TLB refill handler, but rather a fast path for it. For the HTW to work, the PWBase register must contain the task's page global directory address so the HTW will kick in on TLB refill exceptions. Because the HTW is a separate engine embedded deep in the CPU pipeline, we need to restart it every time a PTE changes, to avoid the HTW fetching an old entry from the page tables. It's also necessary to restart the HTW on context switches to prevent it from fetching a page from the previous process. Finally, since the HTW uses the EntryHi register to write the translations to the TLB, it's necessary to stop the HTW whenever EntryHi changes (e.g. for TLB probe operations) and enable it again afterwards.

== Performance ==

The following trivial test was used to measure the performance of the HTW. Using the same root filesystem, the following command was used to measure the number of TLB refill handler executions with and without (using the 'nohtw' kernel parameter) HTW support. The kernel was modified to use a scratch register as a counter for the TLB refill exceptions.

find /usr -type f -exec ls -lh {} \;

HTW enabled:  TLB refill exceptions: 12306
HTW disabled: TLB refill exceptions: 17805

Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Cc: Markos Chandras <markos.chandras@imgtec.com> Patchwork: https://patchwork.linux-mips.org/patch/7336/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
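For example, around a TLB probe the walker has to be paused while EntryHi is borrowed; a hedged sketch of that bracketing (htw_stop()/htw_start() as described above, the rest being the usual TLB probe sequence):

    htw_stop();                     /* walker must not use the borrowed EntryHi */
    old_entryhi = read_c0_entryhi();
    write_c0_entryhi(address);
    mtc0_tlbw_hazard();
    tlb_probe();
    tlb_probe_hazard();
    /* ... read or rewrite the matching TLB entry ... */
    write_c0_entryhi(old_entryhi);
    htw_start();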
-
Markos Chandras authored
Detect if the core implements the HTW and set the option accordingly. Also, add a new kernel parameter called 'nohtw' allowing the user to disable HTW support and fall back to the software refill handler. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7335/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
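One plausible shape for the command-line hook, given as an assumption rather than the patch's actual mechanism:

    static bool htw_disabled;       /* hypothetical flag consulted by cpu-probe */

    static int __init nohtw_setup(char *str)
    {
            htw_disabled = true;
            return 1;
    }
    __setup("nohtw", nohtw_setup);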
-
Markos Chandras authored
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7326/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Markos Chandras authored
Moreover, report hardware page table walker support as 'htw' in the ASE list of /proc/cpuinfo, if the core implements this feature. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7334/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Markos Chandras authored
Long integers, which are 4 bytes on MIPS32, can't hold new CPU options anymore, so the type of the 'options' variable is changed to unsigned long long, which allows 32 more CPU options to be defined for MIPS32. Also, re-arrange the 'options' struct member to avoid a potential 4-byte alignment gap in the middle of the struct. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7324/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
James Hogan authored
Add cases in perf_event_mipsxx.c for CPU_P5600. All the event numbers listed for proAptiv also apply to P5600, so we use mipsxxcore_event_map2 and mipsxxcore_cache_map2 too, but the P5600 has 8-bit event numbers so bit 8 (256) of the user ABI config is used for the parity bit (to specify odd/even counter events). Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7242/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
James Hogan authored
In mipsxx_pmu_map_raw_event(), set event_id to base_id after the cpu type conditional code to allow that code to override the base_id to use more bits from the config and a higher bit for parity. This will allow cores with up to 512 events between all even/odd counters (an 8-bit event id) such as P5600 to use bit 8 for parity. Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/7243/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
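In effect the raw config then decodes like this on such cores (a sketch following the 8-bit event numbers mentioned above, not the function's exact code):

    /* Bits 7:0 carry the event number; bit 8 (256) is the parity bit
     * steering the event onto odd or even counters. */
    unsigned int event_id = config & 0xff;
    unsigned int parity   = config & 0x100;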
-