- 19 Oct, 2015 14 commits
-
Jungseok Lee authored
Unlike perf callchain, which relies on walk_stackframe(), dump_backtrace() has its own backtrace logic. A major difference between them is the moment a symbol is recorded. Perf writes down a symbol *before* calling unwind_frame(), but dump_backtrace() prints it out *after* unwind_frame(). As a result, the last valid symbol cannot be hooked in the dump_backtrace() case. This patch addresses the issue by synchronising dump_backtrace() with the perf callchain.

A simple test and its results are as follows:

- crash trigger

  $ sudo echo c > /proc/sysrq-trigger

- current status

  Call trace:
  [<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
  [<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
  [<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
  [<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
  [<fffffe00001f2638>] __vfs_write+0x44/0x104
  [<fffffe00001f2e60>] vfs_write+0x98/0x1a8
  [<fffffe00001f3730>] SyS_write+0x50/0xb0

- with this change

  Call trace:
  [<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
  [<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
  [<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
  [<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
  [<fffffe00001f2638>] __vfs_write+0x44/0x104
  [<fffffe00001f2e60>] vfs_write+0x98/0x1a8
  [<fffffe00001f3730>] SyS_write+0x50/0xb0
  [<fffffe00000939ec>] el0_svc_naked+0x20/0x28

Note that this patch does not cover the case where the MMU is disabled. The last stack frame of swapper, for example, has its PC in the form of a physical address. Unfortunately, a simple conversion using phys_to_virt() cannot cover all scenarios, since the PC is retrieved from LR - 4, not LR. Changing both head.S and unwind_frame() for only a few symbols in *.S is a big tradeoff, so this hunk does not take care of that case.

Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
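A sketch of the reordered loop (illustrative, not the exact diff; frame, unwind_frame() and dump_backtrace_entry() are the existing arm64 unwinding helpers): the symbol for the current frame is recorded before unwinding, as the perf code does, so a failed unwind no longer drops the last frame.

  /* Inside dump_backtrace(), once the first frame has been set up: */
  while (1) {
          unsigned long where = frame.pc;
          int ret;

          /* Print the symbol for this frame first, as perf does... */
          dump_backtrace_entry(where, frame.sp);
          /* ...then try to unwind to the caller. */
          ret = unwind_frame(&frame);
          if (ret < 0)
                  break;
  }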
-
Jisheng Zhang authored
Currently, if cpuidle is disabled or not supported, powertop reports zero wakeups and zero events. This is because the cpu_idle tracepoints are missing. This patch makes the cpu_idle tracepoints always available, even if cpuidle is disabled or not supported. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
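A minimal sketch of the idea, assuming the tracepoints are fired directly from the arch idle hook (placement and surrounding code are illustrative):

  #include <linux/irqflags.h>
  #include <linux/smp.h>
  #include <trace/events/power.h>

  void arch_cpu_idle(void)
  {
          /* Emit cpu_idle unconditionally, not only from cpuidle drivers. */
          trace_cpu_idle_rcuidle(1, smp_processor_id());
          cpu_do_idle();
          local_irq_enable();
          trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
  }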
-
Suzuki K. Poulose authored
A 36-bit VA lets us use 2-level page tables while limiting the available address space to 64GB. Cc: Will Deacon <will.deacon@arm.com> Cc: Steve Capper <steve.capper@linaro.org> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
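The arithmetic, as a quick worked check (not part of the patch):

  /*
   * 16K granule: 14-bit page offset, 11 bits resolved per level.
   * 14 + 2 * 11 = 36 bits  ->  2 levels cover 2^36 = 64GB exactly.
   */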
-
Suzuki K. Poulose authored
This patch turns on 16K page support in the kernel. We support 48-bit VA (4-level page tables) and 47-bit VA (3-level page tables). With 16K pages we can map 128 entries using the contiguous bit hint at level 3, mapping 2M with a single TLB entry. TODO: 16K supports 32 contiguous entries at level 2 to get us 1G, which is not yet supported by the infrastructure; that should be a separate patch altogether. Cc: Will Deacon <will.deacon@arm.com> Cc: Jeremy Linton <jeremy.linton@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Steve Capper <steve.capper@linaro.org> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
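The numbers behind the message, as a worked sketch (illustrative, not from the patch):

  /*
   * 16K granule:
   *   page offset    : 14 bits
   *   bits per level : 14 - 3 = 11   (2048 eight-byte entries per table)
   *
   *   48-bit VA: 48 > 14 + 3*11 = 47  ->  4 levels needed
   *   47-bit VA: 47 = 14 + 3*11       ->  3 levels suffice
   *
   *   contiguous hint at level 3: 128 * 16K       = 2M per TLB entry
   *   contiguous hint at level 2: 32 * 32M blocks = 1G (the TODO above)
   */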
-
Ard Biesheuvel authored
This patch adds the page size to the arm64 kernel image header so that one can infer the PAGE_SIZE used by the kernel. This will be helpful in diagnosing failures to boot the kernel with a page size not supported by the CPU. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Suzuki K. Poulose authored
Ensure that the selected page size is supported by the CPU(s); if it is not, park the CPU. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Suzuki K. Poulose authored
Update the help text for ARM64_64K_PAGES to reflect the reality about AArch32 support. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
We choose NR_FIX_BTMAPS such that each slot (NR_FIX_BTMAPS * PAGE_SIZE) can address 256K. Use division to derive NR_FIX_BTMAPS rather than defining it for each page size. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
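The derivation collapses to a single definition; roughly (a sketch of the idea):

  #include <linux/sizes.h>

  /* Each slot spans 256K regardless of the page size in use. */
  #define NR_FIX_BTMAPS	(SZ_256K / PAGE_SIZE)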
-
Suzuki K. Poulose authored
We use !CONFIG_ARM64_64K_PAGES for CONFIG_ARM64_4K_PAGES (and vice versa) in code. This has worked well so far, since we only had two options. Now, with the introduction of 16K, these cases will break. This patch cleans up the code to use the required CONFIG symbol expression, without the assumption that !64K => 4K (and vice versa). Cc: Will Deacon <will.deacon@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Suzuki K. Poulose authored
At the moment, we only support a maximum of 3 page table levels for the swapper. With 48-bit VA, 64K needs only 3 levels and 4K uses section mapping. Add support for a 4-level page table for the swapper, needed by 16K pages. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Suzuki K. Poulose authored
Now that we can calculate the number of levels required for mapping a VA width, reserve the exact number of pages that would be required to cover the idmap. The idmap should be able to handle the maximum physical address size supported. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Suzuki K. Poulose authored
Introduce helpers for finding the number of page table levels required for a given VA width, and the shift for a particular page table level. Convert the existing users to the new helpers. More users to follow. Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Acked-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
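The underlying formula: with 8-byte descriptors, each translation level resolves PAGE_SHIFT - 3 bits of VA, so the level count is a ceiling division. A sketch (the integer form below is equivalent to DIV_ROUND_UP(va_bits - PAGE_SHIFT, PAGE_SHIFT - 3); treat the macro name as an assumption):

  /* levels = ceil((va_bits - PAGE_SHIFT) / (PAGE_SHIFT - 3)) */
  #define ARM64_HW_PGTABLE_LEVELS(va_bits)  (((va_bits) - 4) / (PAGE_SHIFT - 3))

  /* e.g. 4K/39-bit: (39-4)/9 = 3 levels; 16K/36-bit: (36-4)/11 = 2 levels */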
-
Suzuki K. Poulose authored
We use section maps with 4K page size to create the swapper/idmaps. So far we have used !64K or 4K checks to handle the case where we use the section maps. This patch adds a new symbol, ARM64_SWAPPER_USES_SECTION_MAPS, to handle cases where we use section maps, instead of using the page size symbols. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
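A sketch of the new symbol (placement in asm/kernel-pgtable.h assumed):

  /* Only the 4K granule uses section maps for the swapper/idmap. */
  #ifdef CONFIG_ARM64_4K_PAGES
  #define ARM64_SWAPPER_USES_SECTION_MAPS	1
  #else
  #define ARM64_SWAPPER_USES_SECTION_MAPS	0
  #endif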
-
Suzuki K. Poulose authored
Move the kernel pagetable (both swapper and idmap) definitions from the generic asm/page.h to a new file, asm/kernel-pgtable.h. This is mostly a cosmetic change, cleaning up asm/page.h to get rid of the arch-specific details which are not needed by the generic code. It also renames the symbols to prevent conflicts, e.g. BLOCK_SHIFT => SWAPPER_BLOCK_SHIFT. Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 16 Oct, 2015 3 commits
-
Yang Shi authored
Fix handers to handlers. Signed-off-by: Yang Shi <yang.shi@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Salyzyn authored
ARMv7 does not have a PC alignment exception. ARMv8 AArch32 user space, however, can produce a PC alignment exception. Add a handler so that we do not dump an unexpected stack trace in the logs. Signed-off-by: Mark Salyzyn <salyzyn@android.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
These were introduced by commit 03875ad5 (arm64: add kc_offset_to_vaddr and kc_vaddr_to_offset macro). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 13 Oct, 2015 4 commits
-
Catalin Marinas authored
This reverts commit 1b6d7f87. The reverted patch would conflict with Dan Williams' "tree-wide convert to memremap()" series (ioremap_cache replaced by arch_memremap). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
yalin wang authored
This patch adds kc_offset_to_vaddr() and kc_vaddr_to_offset(). The default versions don't work on arm64 because some arm64 kernel addresses lie below PAGE_OFFSET: module and vmemmap addresses, for example, are both below PAGE_OFFSET. Signed-off-by: yalin wang <yalin.wang2010@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
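The overrides reduce to masking against VA_START; a sketch of the added macros (exact form is an assumption):

  /* Fold the VA_START bits away for the /proc/kcore offset and
   * restore them when converting back to a virtual address. */
  #define kc_vaddr_to_offset(v)	((v) & ~VA_START)
  #define kc_offset_to_vaddr(o)	((o) | VA_START)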
-
yalin wang authored
Add an ioremap_cache macro, because some code tests whether this macro is defined and generates a generic version if it is not; memremap.c, for example, does this. Signed-off-by: yalin wang <yalin.wang2010@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
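The pattern being relied on, sketched (the shape of the generic fallback is reproduced approximately):

  /* arch side: make ioremap_cache visible to the preprocessor... */
  #define ioremap_cache ioremap_cache

  /* ...so that generic fallbacks of this shape are compiled out: */
  #ifndef ioremap_cache
  __weak void __iomem *ioremap_cache(resource_size_t offset, unsigned long size)
  {
          return ioremap(offset, size);
  }
  #endif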
-
Will Deacon authored
Sparse reports some new issues introduced by the kasan patches:

  arch/arm64/mm/kasan_init.c:91:13: warning: no previous prototype for 'kasan_early_init' [-Wmissing-prototypes]
   void __init kasan_early_init(void)
               ^
  arch/arm64/mm/kasan_init.c:91:13: warning: symbol 'kasan_early_init' was not declared. Should it be static? [sparse]

This patch resolves the problem by adding a prototype for kasan_early_init and marking the function as asmlinkage, since it's only called from head.S.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
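The fix amounts to a declaration in a header; a sketch, assuming asm/kasan.h:

  #ifdef CONFIG_KASAN
  /* asmlinkage: kasan_early_init() is only ever called from head.S. */
  asmlinkage void kasan_early_init(void);
  #endif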
-
- 12 Oct, 2015 9 commits
-
Andrey Ryabinin authored
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Linus Walleij authored
This prints out the virtual memory assigned to KASan in the boot crawl along with other memory assignments, if and only if KASan is activated. Example dmesg from the Juno Development board:

  Memory: 1691156K/2080768K available (5465K kernel code, 444K rwdata, 2160K rodata, 340K init, 217K bss, 373228K reserved, 16384K cma-reserved)
  Virtual kernel memory layout:
      kasan   : 0xffffff8000000000 - 0xffffff9000000000   (    64 GB)
      vmalloc : 0xffffff9000000000 - 0xffffffbdbfff0000   (   182 GB)
      vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
                0xffffffbdc2000000 - 0xffffffbdc3fc0000   (    31 MB actual)
      fixed   : 0xffffffbffabfd000 - 0xffffffbffac00000   (    12 KB)
      PCI I/O : 0xffffffbffae00000 - 0xffffffbffbe00000   (    16 MB)
      modules : 0xffffffbffc000000 - 0xffffffc000000000   (    64 MB)
      memory  : 0xffffffc000000000 - 0xffffffc07f000000   (  2032 MB)
        .init : 0xffffffc0007f5000 - 0xffffffc00084a000   (   340 KB)
        .text : 0xffffffc000080000 - 0xffffffc0007f45b4   (  7634 KB)
        .data : 0xffffffc000850000 - 0xffffffc0008bf200   (   445 KB)

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Andrey Ryabinin authored
This patch adds arch-specific code for the kernel address sanitizer (see Documentation/kasan.txt).

1/8 of the kernel address space is reserved for shadow memory. There was no hole big enough for this, so the virtual addresses for the shadow were stolen from the vmalloc area. At the early boot stage the whole shadow region is populated with just one physical page (kasan_zero_page). Later, this page is reused as a read-only zero shadow for some memory that KASan doesn't currently track (vmalloc). After the physical memory is mapped, pages for the shadow memory are allocated and mapped.

Functions like memset/memmove/memcpy do a lot of memory accesses. If a bad pointer is passed to one of these functions, it is important to catch it. Compiler instrumentation cannot do this, since these functions are written in assembly. KASan replaces the memory functions with manually instrumented variants. The original functions are declared as weak symbols so that the strong definitions in mm/kasan/kasan.c can replace them. The original functions have aliases with a '__' prefix in the name, so we can call the non-instrumented variants if needed.

Some files are built without KASan instrumentation (e.g. mm/slub.c). For these, the original mem* functions are replaced (via #define) with the prefixed variants to disable memory access checks.

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
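For reference, the shadow translation itself is the generic KASan scheme: one shadow byte tracks an 8-byte granule of kernel memory. A sketch (KASAN_SHADOW_OFFSET is the arch-chosen constant):

  #define KASAN_SHADOW_SCALE_SHIFT	3

  static inline void *kasan_mem_to_shadow(const void *addr)
  {
          return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                  + KASAN_SHADOW_OFFSET;
  }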
-
Andrey Ryabinin authored
This will be used by KASAN later. Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
Commit 654672d4 ("locking/atomics: Add _{acquire|release|relaxed}() variants of some atomic operations") introduced a relaxed atomic API to Linux that maps nicely onto the arm64 memory model, including the new ARMv8.1 atomic instructions. This patch hooks up the API to our relaxed atomic instructions, rather than have them all expand to the full-barrier variants as they do currently. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
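Callers are unchanged; the benefit is that ordering-agnostic code can pick the cheaper variant. A hedged usage example (function and variable names here are illustrative):

  #include <linux/atomic.h>

  static atomic_t count = ATOMIC_INIT(0);

  void bump(void)
  {
          /* No ordering required by the caller: on arm64 this can now use
           * plain ldxr/stxr (or an LSE ldadd) instead of the full-barrier
           * form it previously expanded to. */
          atomic_add_return_relaxed(1, &count);
  }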
-
Ard Biesheuvel authored
Since arm64 does not use a builtin decompressor, the EFI stub is built into the kernel proper. So far, this has been working fine, but since the stub is in fact a PE/COFF relocatable binary that is executed at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we should not be seamlessly sharing code with the kernel proper, which is a position-dependent executable linked at a high virtual offset. So instead, separate the contents of libstub and its dependencies into their own namespace, by prefixing all of its symbols with __efistub. This way, we have tight control over what parts of the kernel proper are referenced by the stub. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
For more control over which functions are called with the MMU off or with the UEFI 1:1 mapping active, annotate some assembler routines as position independent. This is done by introducing ENDPIPROC(), which replaces the ENDPROC() declaration of those routines. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
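ENDPIPROC() ends the routine as ENDPROC() does, but additionally emits a __pi_-prefixed alias that position-independent callers can reference; roughly (a sketch, the exact directives are an approximation):

  /* End an asm routine and also expose a __pi_<name> alias for
   * position-independent callers (the EFI stub, MMU-off paths). */
  #define ENDPIPROC(x)			\
  	.globl	__pi_##x;		\
  	.type	__pi_##x, %function;	\
  	.set	__pi_##x, x;		\
  	.size	__pi_##x, . - x;	\
  	ENDPROC(x)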
-
Ard Biesheuvel authored
With the stub-to-kernel interface promoted to a proper interface, so that agents other than the stub can boot the kernel proper in EFI mode, we can remove the linux,uefi-stub-kern-ver field, considering that its original purpose was to prevent exactly that from happening in the first place. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
A prior commit, which detects the hw breakpoint ABI behaviour based on the target state, missed the asm/compat.h include, and the build fails with !CONFIG_COMPAT. Fixes: 8f48c062 ("arm64: hw_breakpoint: use target state to determine ABI behaviour") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 09 Oct, 2015 2 commits
-
Yang Yingliang authored
When a cpu is disabled, all of its irqs are migrated to another cpu. In some cases the new affinity differs from the old one; the old affinity then needs to be updated, but if irq_set_affinity's return value is IRQ_SET_MASK_OK_DONE, the old affinity cannot be updated. Fix this by using irq_do_set_affinity. Also, migrating interrupts is a core code matter, so use the generic function irq_migrate_all_off_this_cpu() from kernel/irq/migration.c to migrate interrupts. Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Russell King - ARM Linux <linux@arm.linux.org.uk> Cc: Hanjun Guo <hanjun.guo@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
* 'irq/for-arm' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  genirq: Introduce generic irq migration for cpu hotunplug
-
- 08 Oct, 2015 6 commits
-
Jeremy Linton authored
With 64k pages, the next larger segment size is 512M. The Linux kernel also uses different protection flags to cover its code and data. Because of this requirement, the vast majority of the kernel code and data structures end up being mapped with 64k pages instead of the larger pages common with a 4k page kernel. Recent ARM processors support a contiguous bit in the page tables which allows a TLB entry to cover a range larger than a single PTE if that range is mapped into physically contiguous RAM. So, for the kernel, it's a good idea to set this flag. Some basic micro benchmarks show it can significantly reduce the number of L1 dTLB refills. Add a boot option to enable/disable CONT marking, and fix a bug found by Steve Capper. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> [catalin.marinas@arm.com: remove CONFIG_ARM64_CONT_PTE altogether] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
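The sizes at stake, as a worked sketch (illustrative arithmetic only):

  /*
   * 64K granule:
   *   next block size      : 2^(16+13) = 512M  (too coarse for separate
   *                                             kernel text/data protections)
   *   contiguous PTE range : 32 * 64K  = 2M    (covered by one TLB entry)
   */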
-
Jeremy Linton authored
The kernel page dump utility needs to be aware of the CONT bit before it will break up page ranges for display. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Jeremy Linton authored
The default page attributes for a PMD being broken should have the CONT bit set. Create a new definition for an early-boot range of PTEs that are contiguous. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Jeremy Linton authored
Add the supporting macros to check if the contiguous bit is set, set the bit, or clear it in a PTE. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
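A sketch of the helpers (assuming arm64's existing set_pte_bit()/clear_pte_bit() accessors; exact form is an approximation):

  #define pte_cont(pte)		(!!(pte_val(pte) & PTE_CONT))

  static inline pte_t pte_mkcont(pte_t pte)
  {
          return set_pte_bit(pte, __pgprot(PTE_CONT));
  }

  static inline pte_t pte_mknoncont(pte_t pte)
  {
          return clear_pte_bit(pte, __pgprot(PTE_CONT));
  }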
-
Jeremy Linton authored
Define the bit positions in the PTE and PMD for the contiguous bit. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
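In the ARMv8 descriptor format the contiguous hint lives at bit 52 for both PTEs and PMD sections; a sketch of the definitions (macro names assumed):

  #define PTE_CONT	(_AT(pteval_t, 1) << 52)	/* Contiguous range */
  #define PMD_SECT_CONT	(_AT(pmdval_t, 1) << 52)	/* Contiguous range */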
-
Jeremy Linton authored
Add the number of pages required to form a contiguous range, as well as some supporting constants. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
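The counts follow from the architecture's per-granule contiguous group sizes; a sketch (layout and names are an assumption):

  #ifdef CONFIG_ARM64_64K_PAGES
  #define CONT_SHIFT	5		/*  32 * 64K = 2M  */
  #elif defined(CONFIG_ARM64_16K_PAGES)
  #define CONT_SHIFT	7		/* 128 * 16K = 2M  */
  #else
  #define CONT_SHIFT	4		/*  16 *  4K = 64K */
  #endif

  #define CONT_PTES	(1 << CONT_SHIFT)
  #define CONT_PTE_SIZE	(CONT_PTES * PAGE_SIZE)
  #define CONT_PTE_MASK	(~(CONT_PTE_SIZE - 1))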
-
- 07 Oct, 2015 2 commits
-
Mark Rutland authored
As suggested by Will Deacon, add myself as a reviewer of the ARM PMU profiling and debugging code. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Will Deacon maintains the profiling and debugging code under both arch/arm and arch/arm64. Update MAINTAINERS to reflect this, in preparation for adding myself as a reviewer of said code. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-