- 07 Nov, 2013 2 commits
-
-
Thierry Reding authored
No-MMU configurations currently fail to build because they are missing the early_paging_init() symbol. Acked-by: Olof Johansson <olof@lixom.net> Signed-off-by: Thierry Reding <treding@nvidia.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
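A minimal sketch of the kind of fix this implies, assuming an empty stub in arch/arm/mm/nommu.c mirroring the MMU variant's signature (the exact location and signature are assumptions):

    #include <linux/init.h>
    #include <asm/mach/arch.h>
    #include <asm/procinfo.h>

    /* No-MMU kernels have no kernel page tables to re-create, so the
     * early paging hook can simply be a no-op stub. */
    void __init early_paging_init(const struct machine_desc *mdesc,
                                  struct proc_info_list *procinfo)
    {
    }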
-
Tushar Behera authored
Commit 6dedcca6 ("hotplug, powerpc, x86: Remove cpu_hotplug_driver_lock()") removes the definition of the cpu_hotplug_driver_{lock,unlock} APIs, thereby causing a build error. Replace these calls with {lock,unlock}_device_hotplug(). Signed-off-by: Tushar Behera <tushar.behera@linaro.org> Signed-off-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
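The replacement is mechanical; a sketch of the pattern (the surrounding function here is hypothetical):

    #include <linux/device.h>	/* lock_device_hotplug()/unlock_device_hotplug() */

    static void update_cpu_devices(void)
    {
    	lock_device_hotplug();		/* was: cpu_hotplug_driver_lock() */
    	/* ... register/unregister CPU devices ... */
    	unlock_device_hotplug();	/* was: cpu_hotplug_driver_unlock() */
    }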
-
- 30 Oct, 2013 1 commit
-
-
Russell King authored
Merge branch 'baserock/bjdooks/312-rc4/be/core-v3' of git://git.baserock.org/delta/linux into devel-stable

Conflicts:
  arch/arm/kernel/head.S

This series has been well tested and it would be great to get this merged now. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 Oct, 2013 2 commits
-
-
Russell King authored
Olof Johansson reported:

  In file included from arch/arm/include/asm/page.h:163:0,
                   from include/linux/mm_types.h:16,
                   from include/linux/sched.h:24,
                   from arch/arm/kernel/asm-offsets.c:13:
  arch/arm/include/asm/memory.h: In function '__virt_to_idmap':
  arch/arm/include/asm/memory.h:300:6: error: 'arch_virt_to_idmap' undeclared (first use in this function)

caused by arch_virt_to_idmap being placed inside a different preprocessor conditional from its user. Move it alongside its user. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Sricharan R authored
Commit f52bb722 ("ARM: mm: Correct virt_to_phys patching for 64 bit physical addresses") by Sricharan R <r.sricharan@ti.com> introduced a __ARMEB__ macro usage in a new place but missed the second underscore, so correct it here. Also, an explicit .align directive is needed for the label with a .long data type to be aligned on a 4-byte boundary; otherwise this can cause problems for Thumb-2 builds, so add it here. Signed-off-by: Sricharan R <r.sricharan@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 23 Oct, 2013 2 commits
-
-
Russell King authored
Merge branch 'for-rmk/prefetch' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable
-
Russell King authored
Merge branch 'for-rmk/perf' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stable
-
- 19 Oct, 2013 23 commits
-
-
Victor Kamensky authored
cci_enable_port_for_self is written in assembly and works with h/w registers that are in little-endian format. When run in big-endian mode it needs byte-swapped constants before it writes to, and after it reads from, such registers. Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
-
Victor Kamensky authored
In order for the ASID macro to be used as an expression passed to inline asm as an 'r' operand, it needs to give a 32-bit unsigned result, not an unsigned 64-bit expression. Otherwise, when the 64-bit ASID is passed to an inline assembler statement as a (32-bit) 'r' operand, compiler behaviour is not well specified. For example, when the __flush_tlb_mm function is compiled in the big-endian case and the ASID is passed to the tlb_op macro directly, 0 will be passed in r4 as the 'mcr 15, 0, r4, cr8, cr3, {2}' argument, unless the ASID macro is changed to produce a 32-bit result. Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
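A sketch of the shape of the fix, assuming the 64-bit context ID layout of that era's <asm/mmu.h> (the field names are taken from that code and may differ):

    #define ASID_BITS	8
    #define ASID_MASK	((~0ULL) << ASID_BITS)
    /* Truncate explicitly so the expression has type unsigned int and is
     * therefore well defined when used as a 32-bit 'r' operand. */
    #define ASID(mm)	((unsigned int)((mm)->context.id.counter & ~ASID_MASK))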
-
Victor Kamensky authored
In big-endian mode, mcpm_entry_point is the first function called on secondary CPUs, so it must first switch the CPU into big-endian code. [ben.dooks@codethink.co.uk: merge fix patch from Victor into this] Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org> Acked-by: Nicolas Pitre <nico@linaro.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
-
Victor Kamensky authored
In the BE8 case, kernel data is in BE order whereas code stays in LE order. Move sigreturn_codes to a separate .S file and use proper assembler mnemonics for these code snippets; the compiler will then take care of the proper instruction byteswaps for the BE8 case. The change assumes that sufficiently Thumb-capable tools are used to build the kernel. The problem was discovered during LTP testing of a BE system: all rt_sig* tests failed. Tested against the same tests in both BE and LE modes. Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
-
Victor Kamensky authored
Fix the inline asm for the atomic64_xxx functions in the ARM atomic.h. Instead of the %H operand specifier, the code should use %Q for the least significant part of the value and %R for the most significant part. %H always returns the higher of the two register numbers and is therefore not endian neutral; %H should be used with the ldrexd and strexd instructions. Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
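A sketch of the corrected usage, condensed from that era's atomic64_add() (treat the exact constraints as an approximation):

    static inline void atomic64_add(long long i, atomic64_t *v)
    {
    	long long result;
    	unsigned long tmp;

    	__asm__ __volatile__("@ atomic64_add\n"
    "1:	ldrexd	%0, %H0, [%3]\n"	/* %H is correct for ldrexd/strexd */
    "	adds	%Q0, %Q0, %Q4\n"	/* low word + low word, sets carry */
    "	adc	%R0, %R0, %R4\n"	/* high word + high word + carry */
    "	strexd	%1, %0, %H0, [%3]\n"
    "	teq	%1, #0\n"
    "	bne	1b"
    	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
    	: "r" (&v->counter), "r" (i)
    	: "cc");
    }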
-
Ben Dooks authored
The arch_kgdb_breakpoint() function uses an inline assembly directive to assemble a specific instruction using .word. This means the linker will not treat it as an instruction and will therefore incorrectly swap the endianness if running BE8. As noted, this code means that kgdb is really only usable on ARM32 kernels, and should be made dependent on not being a Thumb-2 kernel until fixed; however, that is not something to be added in this patch. Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
-
Ben Dooks authored
Currently BUG() uses .word or .hword to create the necessary illegal instructions. However, if we are building BE8 then these get swapped by the linker into different illegal instructions in the text, which means that the BUG() macro does not get trapped properly. Change to using <asm/opcodes.h> to provide the necessary ARM instruction building, as we cannot rely on gcc/gas having the `.inst` instructions which were added to try and resolve this issue (reported by Dave Martin <Dave.Martin@arm.com>). Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
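A sketch of the technique via the <asm/opcodes.h> emission helpers; the opcode value matches the ARM undefined instruction the kernel uses, while the wrapper macro here is illustrative:

    #include <linux/compiler.h>
    #include <asm/opcodes.h>

    #ifndef CONFIG_THUMB2_KERNEL
    #define BUG_INSTR_VALUE	0xe7f001f2	/* permanently undefined ARM opcode */
    #define BUG_INSTR(v)	__inst_arm(v)	/* emitted in memory order, BE8-safe */
    #endif

    #define BUG()					\
    do {						\
    	asm volatile(BUG_INSTR(BUG_INSTR_VALUE));	\
    	unreachable();					\
    } while (0)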
-
Ben Dooks authored
Use <asm/opcodes.h> to correctly transform instruction byte ordering into in-memory ordering. Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
-
Ben Dooks authored
The <hardware/coresight.h> code needs to take into account the endianness of the processor when reading and writing data, so change to using the readl/writel relaxed variants instead of the raw ones. Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
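An illustration of the pattern (the helper names here are hypothetical): the _relaxed variants include the little-endian register conversion that the __raw_ accessors skip, which matters once the CPU runs big-endian.

    static inline u32 cs_read(void __iomem *base, unsigned int offset)
    {
    	return readl_relaxed(base + offset);	/* was: __raw_readl(base + offset) */
    }

    static inline void cs_write(u32 val, void __iomem *base, unsigned int offset)
    {
    	writel_relaxed(val, base + offset);	/* was: __raw_writel(val, base + offset) */
    }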
-
Ben Dooks authored
To avoid having to make every text section swap the instruction order of all instructions, make sure modules are also built with --be8 (as the current kernel final link is). If we did not do this, we would end up having to swap all instructions when loading a module, instead of just the instructions that we are applying ELF relocations to. Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
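A sketch of the corresponding arch/arm/Makefile fragment (its exact placement in the Makefile is an assumption):

    ifeq ($(CONFIG_CPU_ENDIAN_BE8),y)
    LDFLAGS_vmlinux	+= --be8
    LDFLAGS_MODULE	+= --be8
    endif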
-
Ben Dooks authored
When in BE8 mode, our instructions are not in the same ordering as the data, so use <asm/opcodes.h> to take this into account. Note, this also requires modules to be built with --be8. Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Reviewed-by: Dave Martin <Dave.Martin@arm.com>
-
Ben Dooks authored
The trap handler needs to take into account the endian configuration of the system when loading instructions. Use <asm/opcodes.h> to provide the necessary conversion functions. Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
-
Ben Dooks authored
If we are in BE8 mode, we must deal with the instruction stream being in LE order when data is being loaded in BE order. Change to using <asm/opcodes.h> to provide the necessary conversion functions to change the byte ordering, ensuring the data is swapped before processing. This stops the following warning messages from the kernel on a fault:

  Unhandled fault: alignment exception (0x001) at 0xbfa09567
  Alignment trap: not handling instruction 030091e8 at [<80333e8c>]

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
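A sketch of the read side of the conversion as it might appear in the alignment handler (the helper function is a condensed stand-in for the real fault path):

    #include <linux/uaccess.h>
    #include <asm/opcodes.h>

    static unsigned long load_faulting_instr(unsigned long instrptr)
    {
    	unsigned long instr;

    	/* The instruction stream stays LE on a BE8 kernel, so a plain
    	 * (big-endian) load of the opcode must be swabbed before decode. */
    	if (probe_kernel_address((void *)instrptr, instr))
    		return 0;

    	return __mem_to_opcode_arm(instr);
    }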
-
Ben Dooks authored
Add support for the versatile express systems to boot big-endian. Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
-
Ben Dooks authored
Add indication we can run these cores in BE mode, and ensure that the secondary CPU is set to big-endian mode in the initialisation code as the initial code runs little-endian. Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Acked-by: Jason Cooper <jason@lakedaemon.net>
-
Ben Dooks authored
Apart from an xgmac driver issue, the Highbank seems to work correctly in big-endian mode, so allow the selection of big-endian on this system. Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Acked-by: Rob Herring <rob.herring@calxeda.com>
-
Ben Dooks authored
The smp_scu driver needs to use the relaxed readl/writel accessors to avoid any issues with the endian mode the processor core is in. Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
-
Ben Dooks authored
Ensure the twd driver uses the correct calls to access the hardware to ensure that we do not end up with data in the wrong endian format. Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
-
Ben Dooks authored
The PL01X debug code needs to take into account which endian mode the processor is running in; if it is big-endian, ensure the data is swapped appropriately. Note, we could do this slightly more efficiently if we had a macro to do the necessary swap only for the bits used by the test. Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
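A sketch of how one of the busy-wait macros might look with the swap applied (register and flag names follow the PL01X convention; treat the exact macro as an approximation):

    		.macro	waituart, rd, rx
    1001:	ldr	\rd, [\rx, #UART01x_FR]
     ARM_BE8(	rev	\rd, \rd )		@ flag register reads back byte-swapped on BE
    		tst	\rd, #UART01x_FR_TXFF	@ wait while the TX FIFO is full
    		bne	1001b
    		.endm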
-
Ben Dooks authored
If we are booting in LE and compiled for BE8, then add code to set the state to BE8. Since the instruction stream is always LE, we do not need to do anything special to the instruction. Also ensure that the secondary processors are started in the same mode. Note, we do add about 20 bytes to the kernel image, but it seems easier to do this than adding another configuration to change. Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
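The essence of the change is a single conditional instruction at each entry point; a sketch (its exact placement in head.S is an assumption):

    	@ Executed at kernel entry and at secondary-CPU entry. The CPU
    	@ comes up in LE state; setend flips only the data endianness,
    	@ and the instruction itself assembles and executes fine either way.
     ARM_BE8(setend	be )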
-
Ben Dooks authored
The fixup_pv_table code assumes that the instructions are in the same endian configuration as the data, but when the CPU is running in BE8 the instructions stay in little-endian format. Make sure, if CONFIG_CPU_ENDIAN_BE8 is set, that we do all the alterations to the instructions taking into account that the LDR/STR will be swapping the data endianness. Since the code is only modifying a byte, we avoid double-swapping the data and just change the bits we clear and ORR in (in the case where the code is not Thumb-2). For Thumb-2, we add the necessary rev16 instructions to ensure that the instructions are processed in the correct format, as it was easier than rewriting the code to contain a mask and shift. Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Reviewed-by: Dave Martin <Dave.Martin@arm.com> Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
-
Ben Dooks authored
Add an ARM_BE8() helper to wrap any code that is conditional on being compiled when CONFIG_CPU_ENDIAN_BE8 is selected, and convert the existing places where this is done to use it. Acked-by: Nicolas Pitre <nico@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
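The helper itself is tiny; a sketch matching the merged definition (its home in <asm/assembler.h> per this series):

    #ifdef CONFIG_CPU_ENDIAN_BE8
    #define ARM_BE8(code...)	code	/* BE8 kernel: emit the code */
    #else
    #define ARM_BE8(code...)		/* LE kernel: compile to nothing */
    #endif

It works in both C and assembly sources, for example ARM_BE8(rev r0, r0) after a device register load.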
-
Ben Dooks authored
The Kconfig for arch/arm/mach-ixp4xx has a local definition of ARCH_SUPPORTS_BIG_ENDIAN which could be used elsewhere. This means that if IXP4xx is selected and this symbol is selected elsewhere, then a warning is produced. Clean the following warning up by making the symbol be selected by the main ARCH_IXP4XX definition and having a common definition in arch/arm/mm/Kconfig:

  warning: (ARCH_xxx) selects ARCH_SUPPORTS_BIG_ENDIAN which has unmet direct dependencies (ARCH_IXP4XX)

Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
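A sketch of the common definition this implies in arch/arm/mm/Kconfig (help text approximate):

    config ARCH_SUPPORTS_BIG_ENDIAN
    	bool
    	help
    	  This option specifies the architecture can support big endian
    	  operation.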
-
- 18 Oct, 2013 1 commit
-
-
Russell King authored
Merge branch 'for-rmk/arm-mm-lpae' of git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone into devel-stable

This series extends the existing ARM v2p runtime patching to 64 bit, which is needed for LPAE machines that have physical memory beyond 4GB.
-
- 11 Oct, 2013 5 commits
-
-
Santosh Shilimkar authored
This patch adds a step in the init sequence in order to recreate the kernel code/data page table mappings prior to full paging initialization. This is necessary on LPAE systems whose physical address space lies outside the 4G limit. On these systems, this implementation provides a machine descriptor hook that allows the PHYS_OFFSET to be overridden in a machine-specific fashion. Cc: Russell King <linux@arm.linux.org.uk> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: R Sricharan <r.sricharan@ti.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
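A sketch of the hook shape this describes (the init_meminfo name follows the machine-descriptor field added by this series; the body is a condensed outline):

    void __init early_paging_init(const struct machine_desc *mdesc,
    			      struct proc_info_list *procinfo)
    {
    	if (!mdesc->init_meminfo)
    		return;

    	mdesc->init_meminfo();		/* machine-specific PHYS_OFFSET override */

    	/* ... then re-create the kernel code/data mappings so the
    	 * kernel keeps running at the new physical location ... */
    }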
-
Sricharan R authored
The current phys_to_virt patching mechanism works only for 32-bit physical addresses, and this patch extends the idea to 64-bit physical addresses.

The 64-bit v2p patching mechanism patches the higher 8 bits of the physical address with a constant using a 'mov' instruction, and the lower 32 bits are patched using 'add'. While this is correct, on those platforms where the lowmem addressable physical memory spans across the 4GB boundary, a carry bit can be produced as a result of the addition of the lower 32 bits. This carry has to be taken into account and added into the upper bits. The patched __pv_offset and va are added in the lower 32 bits, where __pv_offset can be in two's-complement form when PA_START < VA_START, and that can result in a false carry bit.

e.g.
  1) PA = 0x80000000;   VA = 0xC0000000
     __pv_offset = PA - VA = 0xC0000000 (2's complement)
  2) PA = 0x2 80000000; VA = 0xC0000000
     __pv_offset = PA - VA = 0x1 C0000000

So adding __pv_offset + VA should never result in a true overflow for (1). So, in order to differentiate a true carry, __pv_offset is extended to 64 bits, and its upper 32 bits will hold 0xffffffff when __pv_offset is in two's complement; 'mvn #0' is inserted instead of 'mov' while patching for the same reason. Since the mov, add and sub instructions are to be patched with different constants inside the same stub, the rotation field of the opcode is used to differentiate between them.

So the above examples for the v2p translation become, for VA = 0xC0000000:

  1) PA[63:32] = 0xffffffff
     PA[31:0]  = VA + 0xC0000000 --> results in a carry
     PA[63:32] = PA[63:32] + carry
     PA[63:0]  = 0x0 80000000

  2) PA[63:32] = 0x1
     PA[31:0]  = VA + 0xC0000000 --> results in a carry
     PA[63:32] = PA[63:32] + carry
     PA[63:0]  = 0x2 80000000

The above ideas were suggested by Nicolas Pitre <nico@linaro.org> as part of the review of the first and second versions of the subject patch. There is no corresponding change on the phys_to_virt() side, because computations on the upper 32 bits would be discarded anyway. Cc: Russell King <linux@arm.linux.org.uk> Reviewed-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Sricharan R <r.sricharan@ti.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
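A sketch of the patched stub pair this describes, condensed from the series' memory.h (the immediate constants and constraints are approximate):

    static inline phys_addr_t __virt_to_phys(unsigned long x)
    {
    	phys_addr_t t;

    	__asm__ volatile("@ __pv_stub_mov_hi\n"
    	"1:	mov	%R0, %1\n"	/* patched to the hi bits of __pv_offset,
    					 * or rewritten to 'mvn #0' when the
    					 * offset is in two's complement */
    	"	.pushsection .pv_table,\"a\"\n"
    	"	.long	1b\n"
    	"	.popsection\n"
    	: "=r" (t) : "I" (0x81));

    	__asm__ volatile("@ __pv_add_carry_stub\n"
    	"1:	adds	%Q0, %1, %2\n"	/* patched: VA + lo bits of __pv_offset */
    	"	adc	%R0, %R0, #0\n"	/* fold any real carry into the hi word */
    	"	.pushsection .pv_table,\"a\"\n"
    	"	.long	1b\n"
    	"	.popsection\n"
    	: "+r" (t) : "r" (x), "I" (0x81000000) : "cc");

    	return t;
    }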
-
Santosh Shilimkar authored
Commit 9e9a367c ("ARM: Section based HYP idmap") moved the address conversion inside identity_mapping_add() without the corresponding print which carries useful idmap information. Move the print inside identity_mapping_add() as well to fix this. Cc: Will Deacon <will.deacon@arm.com> Cc: Nicolas Pitre <nico@linaro.org> Cc: Russell King <linux@arm.linux.org.uk> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
Santosh Shilimkar authored
On some PAE systems (e.g. TI Keystone), memory is above the 32-bit addressable limit, and the interconnect provides an aliased view of parts of physical memory in the 32-bit addressable space. This alias is strictly for boot-time usage and is not otherwise usable because of coherency limitations. On such systems, the idmap mechanism needs to take this aliased mapping into account. This patch introduces virt_to_idmap() and an arch function pointer which can be populated by a platform which needs it, and populates the necessary idmap spots with the now-available virt_to_idmap(). An #ifdef approach was avoided to stay compatible with multi-platform builds: most architectures won't touch the pointer, in which case virt_to_idmap() falls back to the existing virt_to_phys() macro. Cc: Russell King <linux@arm.linux.org.uk> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
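A sketch of the indirection, following the shape of the <asm/memory.h> addition (exact types per that era's code):

    extern unsigned long (*arch_virt_to_idmap)(unsigned long x);

    static inline unsigned long __virt_to_idmap(unsigned long x)
    {
    	if (arch_virt_to_idmap)
    		return arch_virt_to_idmap(x);	/* platform's aliased boot view */

    	return __virt_to_phys(x);		/* default: plain v2p */
    }

    #define virt_to_idmap(x)	((resource_size_t)__virt_to_idmap((unsigned long)(x)))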
-
Santosh Shilimkar authored
Fix remainder types used when converting back and forth between physical and virtual addresses. Cc: Russell King <linux@arm.linux.org.uk> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
-
- 09 Oct, 2013 1 commit
-
-
Will Deacon authored
Since software events can always be scheduled, perf allows software and hardware events to be mixed together in the same event group. There are two ways in which this can come about:

  (1) A SW event is added to a HW group. This validates using the HW PMU of the group leader.

  (2) A HW event is added to a SW group. This inserts the SW events and the new HW event into a HW context, but the SW event remains the group leader.

When validating the latter case, we would ideally compare the PMU of each event in the group with the relevant HW PMU. The problem is, in the face of potentially multiple HW PMUs, we don't have a handle on the relevant structure. Commit 7b9f72c6 ("ARM: perf: clean up event group validation") attempted to resolve this issue, but actually made things *worse* by comparing with the leader PMU: if the leader is a SW event, then we automatically `pass' all the HW events during validation! This patch removes the check against the leader PMU. Whilst this will allow events from multiple HW PMUs to be grouped together, that should probably be dealt with in perf core as the result of a later patch. Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
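A condensed sketch of the resulting validation logic (helper names as in arch/arm/kernel/perf_event.c of that time; the state checks are abbreviated):

    static int validate_event(struct pmu_hw_events *hw_events,
    			  struct perf_event *event)
    {
    	struct arm_pmu *armpmu = to_arm_pmu(event->pmu);

    	/* The removed check compared event->pmu against the leader's PMU,
    	 * wrongly accepting every HW event once the leader was SW. */
    	if (is_software_event(event))
    		return 1;		/* SW events can always be scheduled */

    	if (event->state < PERF_EVENT_STATE_OFF)
    		return 1;		/* ignore events in error state */

    	return armpmu->get_event_idx(hw_events, event) >= 0;
    }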
-
- 07 Oct, 2013 2 commits
-
-
Russell King authored
This avoids this file being incorrectly added to git. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
-
- 06 Oct, 2013 1 commit
-
-
Linus Torvalds authored
-