- 12 Oct, 2015 4 commits
-
-
Ard Biesheuvel authored
Since arm64 does not use a builtin decompressor, the EFI stub is built into the kernel proper. So far, this has been working fine, but actually, since the stub is in fact a PE/COFF relocatable binary that is executed at an unknown offset in the 1:1 mapping provided by the UEFI firmware, we should not be seamlessly sharing code with the kernel proper, which is a position dependent executable linked at a high virtual offset. So instead, separate the contents of libstub and its dependencies, by putting them into their own namespace by prefixing all of its symbols with __efistub. This way, we have tight control over what parts of the kernel proper are referenced by the stub. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Ard Biesheuvel authored
For more control over which functions are called with the MMU off or with the UEFI 1:1 mapping active, annotate some assembler routines as position independent. This is done by introducing ENDPIPROC(), which replaces the ENDPROC() declaration of those routines. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
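For reference, a minimal sketch of what such an annotation could look like (assuming it sits alongside ENDPROC() in asm/assembler.h; the exact definition may differ) is a macro that publishes a __pi_-prefixed alias for the routine:

    /* Hedged sketch: emit a position-independent alias for a routine. */
    #define ENDPIPROC(x)                    \
            .globl  __pi_##x;               \
            .type   __pi_##x, %function;    \
            .set    __pi_##x, x;            \
            .size   __pi_##x, . - x;        \
            ENDPROC(x)

Code that must run with the MMU off or under the UEFI 1:1 mapping could then reference __pi_<name> explicitly.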
-
Ard Biesheuvel authored
With the stub to kernel interface being promoted to a proper interface so that other agents than the stub can boot the kernel proper in EFI mode, we can remove the linux,uefi-stub-kern-ver field, considering that its original purpose was to prevent this from happening in the first place. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Catalin Marinas authored
An earlier commit, which uses the target state to detect the hw breakpoint ABI behaviour, missed the asm/compat.h include, and the build fails with !CONFIG_COMPAT. Fixes: 8f48c062 ("arm64: hw_breakpoint: use target state to determine ABI behaviour") Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 09 Oct, 2015 2 commits
-
-
Yang Yingliang authored
When a cpu is disabled, all of its irqs are migrated to another cpu. In some cases the new affinity differs from the old one, so the old affinity needs to be updated; but if irq_set_affinity's return value is IRQ_SET_MASK_OK_DONE, the old affinity is never updated. Fix it by using irq_do_set_affinity. And since migrating interrupts is a core code matter, use the generic function irq_migrate_all_off_this_cpu() in kernel/irq/migration.c to migrate interrupts. Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Russell King - ARM Linux <linux@arm.linux.org.uk> Cc: Hanjun Guo <hanjun.guo@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
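A hedged sketch of the resulting CPU-offline path (details abridged; not the literal arm64 code):

    #include <linux/irq.h>

    int __cpu_disable(void)
    {
            /* ... remove the CPU from the online masks, etc. ... */

            /*
             * Let the generic helper evacuate all IRQs targeting this
             * CPU; it calls irq_do_set_affinity() internally, so the
             * recorded affinity is updated even when the irqchip
             * returns IRQ_SET_MASK_OK_DONE.
             */
            irq_migrate_all_off_this_cpu();
            return 0;
    }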
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Catalin Marinas authored
* 'irq/for-arm' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: genirq: Introduce generic irq migration for cpu hotunplug
-
- 08 Oct, 2015 6 commits
-
-
Jeremy Linton authored
With 64k pages, the next larger segment size is 512M. The linux kernel also uses different protection flags to cover its code and data. Because of this requirement, the vast majority of the kernel code and data structures end up being mapped with 64k pages instead of the larger pages common with a 4k page kernel. Recent ARM processors support a contiguous bit in the page tables which allows a TLB entry to cover a range larger than a single PTE if that range is mapped into physically contiguous RAM. So, for the kernel, it's a good idea to set this flag. Some basic micro benchmarks show it can significantly reduce the number of L1 dTLB refills. Add a boot option to enable/disable CONT marking, as well as fix a bug found by Steve Capper. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> [catalin.marinas@arm.com: remove CONFIG_ARM64_CONT_PTE altogether] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Jeremy Linton authored
The kernel page dump utility needs to be aware of the CONT bit before it will break up pages ranges for display. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Jeremy Linton authored
The default page attributes for the PTEs created when a PMD is split should have the CONT bit set. Create a new definition for an early boot range of PTEs that are contiguous. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Jeremy Linton authored
Add the supporting macros to check if the contiguous bit is set, set the bit, or clear it in a PTE entry. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
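A sketch of the accessors (assuming the set_pte_bit/clear_pte_bit helpers already used by arm64's pgtable.h):

    #define pte_cont(pte)       (!!(pte_val(pte) & PTE_CONT))
    #define pte_mkcont(pte)     set_pte_bit(pte, __pgprot(PTE_CONT))
    #define pte_mknoncont(pte)  clear_pte_bit(pte, __pgprot(PTE_CONT))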
-
Jeremy Linton authored
Define the bit positions in the PTE and PMD for the contiguous bit. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
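In the ARMv8 descriptor format the contiguous hint is bit 52, so the definitions plausibly look like:

    #define PTE_CONT        (_AT(pteval_t, 1) << 52)  /* contiguous hint */
    #define PMD_SECT_CONT   (_AT(pmdval_t, 1) << 52)  /* same bit at PMD level */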
-
Jeremy Linton authored
Add the number of pages required to form a contiguous range, as well as some supporting constants. Signed-off-by: Jeremy Linton <jeremy.linton@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
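A hedged sketch of the kind of constants involved (names illustrative): with a 4K granule, 16 adjacent PTEs can share one TLB entry; with 64K, 32 can:

    #ifdef CONFIG_ARM64_64K_PAGES
    #define CONT_SHIFT      5                       /* 32 x 64K = 2M ranges  */
    #else
    #define CONT_SHIFT      4                       /* 16 x 4K  = 64K ranges */
    #endif
    #define CONT_PTES       (1 << CONT_SHIFT)       /* entries per range */
    #define CONT_SIZE       (CONT_PTES * PAGE_SIZE) /* bytes per range   */
    #define CONT_MASK       (~(CONT_SIZE - 1))      /* range alignment   */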
-
- 07 Oct, 2015 21 commits
-
-
Mark Rutland authored
As suggested by Will Deacon, add myself as a reviewer of the ARM PMU profiling and debugging code. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Will Deacon maintains the profiling and debugging code under both arch/arm and arch/arm64. Update MAINTAINERS to reflect this, in preparation for adding myself as a reviewer of said code. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
The A57 and A53 PMUs in Juno support different events, so describe them separately in both the Juno and Juno R1 DTs. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Liviu Dudau <liviu.dudau@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
The Cortex-A57 PMU supports a few events outside of the required PMUv3 set that are rather useful. This patch adds the event map data for said events. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
The Cortex-A53 PMU supports a few events outside of the required PMUv3 set that are rather useful. This patch adds the event map data for said events. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Mark Rutland authored
Now that the arm_pmu framework has been factored out to drivers/perf we can make use of it for arm64, gaining support for heterogeneous PMUs and unifying the two codebases before they diverge further. The as yet unused PMU name for PMUv3 is changed to armv8_pmuv3, matching the style previously applied to the 32-bit PMUs. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
The arm64 hw_breakpoint interface is slightly less flexible than its 32-bit counterpart, thanks to some changes in the architecture rendering unaligned watchpoint addresses obsolete for AArch64. However, in a multi-arch environment (i.e. debugging a 32-bit target with a 64-bit GDB under a 64-bit kernel), we need to provide a feature-compatible interface to GDB in order for debugging to function correctly. This patch adds a new helper, is_compat_bp, to our hw_breakpoint implementation which changes the interface behaviour based on the architecture of the debug target as opposed to the debugger itself. This allows debugging to function as expected for multi-arch configurations without relying on deprecated architectural behaviours when debugging native applications. Cc: Yao Qi <yao.qi@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
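The helper is presumably a thin check on the debug target's thread state, along these lines (sketch):

    static int is_compat_bp(struct perf_event *bp)
    {
            struct task_struct *tsk = bp->hw.target;

            /* Key off the target's personality, not the debugger's. */
            return tsk && is_compat_thread(task_thread_info(tsk));
    }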
-
Will Deacon authored
update_mmu_cache() consists of a dsb(ishst) instruction so that new user mappings are guaranteed to be visible to the page table walker on exception return. In reality this can be a very expensive operation which is rarely needed. Removing this barrier shows a modest improvement in hackbench scores and, in the worst case, we re-take the user fault and establish that there was nothing to do. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
__flush_tlb_pgtable is used to invalidate intermediate page table entries after they have been cleared and are about to be freed. Since the pXd_clear routines imply memory barriers, we don't need the extra one here. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
mm_cpumask isn't actually used for anything on arm64, so remove all the code trying to keep it up-to-date. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
switch_mm performs some checks to try and avoid entering the ASID allocator:
(1) If we're switching to the init_mm (no user mappings), then simply set a reserved TTBR0 value with no page table (the zero page)
(2) If prev == next *and* the mm_cpumask indicates that we've run on this CPU before, then we can skip the allocator
However, there is plenty of redundancy here. With the new ASID allocator, if prev == next, then we know that our ASID is valid and do not need to worry about re-allocation. Consequently, we can drop the mm_cpumask check in (2) and move the prev == next check before the init_mm check, since if prev == next == init_mm then there's nothing to do. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
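Put together, the reordered fast path might read roughly as follows (simplified sketch; helper names are illustrative):

    static inline void switch_mm(struct mm_struct *prev,
                                 struct mm_struct *next,
                                 struct task_struct *tsk)
    {
            /* A valid ASID is guaranteed; also covers next == init_mm. */
            if (prev == next)
                    return;

            /* No user mappings: install the reserved (zero page) TTBR0. */
            if (next == &init_mm) {
                    cpu_set_reserved_ttbr0();
                    return;
            }

            check_and_switch_context(next, tsk); /* enter the ASID allocator */
    }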
-
Will Deacon authored
The TLB gather code sets fullmm=1 when tearing down the entire address space for an mm_struct on exit or execve. Given that the ASID allocator will never re-allocate a dirty ASID, this flushing is not needed and can simply be avoided in the flushing code. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
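A sketch of the corresponding short-circuit in the arch tlb_flush hook:

    static inline void tlb_flush(struct mmu_gather *tlb)
    {
            struct vm_area_struct vma = { .vm_mm = tlb->mm, };

            /*
             * fullmm: the address space is going away and its (dirty)
             * ASID will never be handed out again, so skip the flush.
             */
            if (tlb->fullmm)
                    return;

            flush_tlb_range(&vma, tlb->start, tlb->end);
    }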
-
Will Deacon authored
The ASID macro returns a 64-bit (long long) value, so there is no need to cast to (unsigned long) before shifting prior to a TLBI operation. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
Our current switch_mm implementation suffers from a number of problems:
(1) The ASID allocator relies on IPIs to synchronise the CPUs on a rollover event
(2) Because of (1), we cannot allocate ASIDs with interrupts disabled and therefore make use of a TIF_SWITCH_MM flag to postpone the actual switch to finish_arch_post_lock_switch
(3) We run context switch with a reserved (invalid) TTBR0 value, even though the ASID and pgd are updated atomically
(4) We take a global spinlock (cpu_asid_lock) during context-switch
(5) We use h/w broadcast TLB operations when they are not required (e.g. in flush_context)
This patch addresses these problems by rewriting the ASID algorithm to match the bitmap-based arch/arm/ implementation more closely. This in turn allows us to remove much of the complication surrounding switch_mm, including the ugly thread flag. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
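In outline, a bitmap-plus-generation scheme works like this (hedged sketch; helpers such as asid_gen() and new_context() are illustrative, and the real fast path can avoid the lock entirely): the mm's context id carries a generation in its upper bits, the common case merely compares generations, and only a rollover allocates from the bitmap under the lock:

    void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
    {
            u64 asid = atomic64_read(&mm->context.id);

            /* Fast path: our generation is current, the ASID is live. */
            if (asid_gen(asid) == atomic64_read(&asid_generation))
                    goto done;

            raw_spin_lock(&cpu_asid_lock);
            asid = new_context(mm, cpu);    /* bitmap alloc, may roll over */
            atomic64_set(&mm->context.id, asid);
            raw_spin_unlock(&cpu_asid_lock);
    done:
            cpu_switch_mm(mm->pgd, mm);     /* TTBR0 switch with new ASID */
    }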
-
Will Deacon authored
There are a number of places where a single CPU is running with a private page-table and we need to perform maintenance on the TLB and I-cache in order to ensure correctness, but do not require the operation to be broadcast to other CPUs. This patch adds local variants of tlb_flush_all and __flush_icache_all to support these use-cases and updates the callers respectively. __local_flush_icache_all also implies an isb, since it is intended to be used synchronously. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Daney <david.daney@cavium.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
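The local variants plausibly swap the inner-shareable operations for non-shareable ones, e.g. (sketch):

    static inline void local_flush_tlb_all(void)
    {
            dsb(nshst);
            asm("tlbi vmalle1");    /* this CPU only, no IS broadcast */
            dsb(nsh);
            isb();
    }

    static inline void __local_flush_icache_all(void)
    {
            asm("ic iallu");
            dsb(nsh);
            isb();                  /* callers rely on synchronous effect */
    }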
-
Will Deacon authored
When cold-booting a CPU, we must invalidate any junk entries from the local TLB prior to enabling the MMU. This doesn't require broadcasting within the inner-shareable domain, so de-scope the operation to apply only to the local CPU. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Will Deacon authored
With commit b08d4640 ("arm64: remove dead code"), cpu_set_idmap_tcr_t0sz is no longer called and can therefore be removed from the kernel. This patch removes the function and effectively inlines the helper function __cpu_set_tcr_t0sz into cpu_set_default_tcr_t0sz. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Andrey Ryabinin authored
In order to not use lengthy (UL(0xffffffffffffffff) << VA_BITS) everywhere, replace it with VA_START. Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
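With 48-bit VAs, for instance, the macro evaluates to the base of the kernel half of the address space:

    #define VA_START        (UL(0xffffffffffffffff) << VA_BITS)
    /* VA_BITS == 48  =>  VA_START == 0xffff000000000000 */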
-
Feng Kan authored
This patch optimizes copy_to_user, copy_from_user and copy_in_user for the arm64 architecture. The copy template is used as the template file for all the copy*.S files. Minor changes were made to it to accommodate the copy to/from/in user files. Signed-off-by: Feng Kan <fkan@apm.com> Signed-off-by: Balamurugan Shanmugam <bshanmugam@apm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Feng Kan authored
This converts memcpy.S to use the copy template file, which was originally based on memcpy.S. Signed-off-by: Feng Kan <fkan@apm.com> Signed-off-by: Balamurugan Shanmugam <bshanmugam@apm.com> [catalin.marinas@arm.com: removed tmp3(w) .req statements as they are not used] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Alim Akhtar authored
This patch updates the defconfig, adding the Samsung serial and Synopsys DesignWare MMC configs needed by Exynos SoCs. Signed-off-by: Alim Akhtar <alim.akhtar@samsung.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 04 Oct, 2015 6 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile
Linus Torvalds authored
Pull strscpy string copy function implementation from Chris Metcalf. Chris sent this during the merge window, but I waffled back and forth on the pull request, which is why it's going in only now.
The new "strscpy()" function is definitely easier to use and more secure than either strncpy() or strlcpy(), both of which are horrible nasty interfaces that have serious and irredeemable problems.
strncpy() has a useless return value, and doesn't NUL-terminate an overlong result. To make matters worse, it pads a short result with zeroes, which is a performance disaster if you have big buffers.
strlcpy(), by contrast, is a mis-designed "fix" for strncpy(), lacking the insane NUL padding, but having a differently broken return value which returns the original length of the source string. Which means that it will read characters past the count from the source buffer, and you have to trust the source to be properly terminated. It also makes error handling fragile, since the test for overflow is unnecessarily subtle.
strscpy() avoids both these problems, guaranteeing the NUL termination (but not excessive padding) if the destination size wasn't zero, and making the overflow condition very obvious by returning -E2BIG. It also doesn't read past the size of the source, and can thus be used for untrusted source data too.
So why did I waffle about this for so long? Every time we introduce a new-and-improved interface, people start doing these interminable series of trivial conversion patches. And every time that happens, somebody makes some silly mistake, and the conversion patch to the improved interface actually makes things worse. Because the patch is mind-numbing and trivial, nobody has the attention span to look at it carefully, and it's usually done over large swaths of source code, which means that not every conversion gets tested.
So I'm pulling the strscpy() support because it *is* a better interface. But I will refuse to pull mindless conversion patches. Use this in places where it makes sense, but don't do trivial patches to fix things that aren't actually known to be broken.
* 'strscpy' of git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
  tile: use global strscpy() rather than private copy
  string: provide strscpy()
  Make asm/word-at-a-time.h available on all architectures
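A short usage sketch of the new interface (the surrounding function is illustrative):

    static void copy_name(char *buf, size_t len, const char *src)
    {
            ssize_t n = strscpy(buf, src, len);

            if (n == -E2BIG)        /* truncated, but still NUL-terminated */
                    pr_warn("name truncated to \"%s\"\n", buf);
    }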
-
git://neil.brown.name/md
Linus Torvalds authored
Pull md fixes from Neil Brown: "Assorted fixes for md in 4.3-rc. Two tagged for -stable, and one is really a cleanup to match and improve kmemcache interface."
* tag 'md/4.3-fixes' of git://neil.brown.name/md:
  md/bitmap: don't pass -1 to bitmap_storage_alloc.
  md/raid1: Avoid raid1 resync getting stuck
  md: drop null test before destroy functions
  md: clear CHANGE_PENDING in readonly array
  md/raid0: apply base queue limits *before* disk_stack_limits
  md/raid5: don't index beyond end of array in need_this_block().
  raid5: update analysis state for failed stripe
  md: wait for pending superblock updates before switching to read-only
-
git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Linus Torvalds authored
Pull MIPS updates from Ralf Baechle: "This week's round of MIPS fixes:
 - Fix JZ4740 build
 - Fix fallback to GFP_DMA
 - FP seccomp in case of ENOSYS
 - Fix bootmem panic
 - A number of FP and CPS fixes
 - Wire up new syscalls
 - Make sure BPF assembler objects can properly be disassembled
 - Fix BPF assembler code for MIPS I"
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
  MIPS: scall: Always run the seccomp syscall filters
  MIPS: Octeon: Fix kernel panic on startup from memory corruption
  MIPS: Fix R2300 FP context switch handling
  MIPS: Fix octeon FP context switch handling
  MIPS: BPF: Fix load delay slots.
  MIPS: BPF: Do all exports of symbols with FEXPORT().
  MIPS: Fix the build on jz4740 after removing the custom gpio.h
  MIPS: CPS: #ifdef on CONFIG_MIPS_MT_SMP rather than CONFIG_MIPS_MT
  MIPS: CPS: Don't include MT code in non-MT kernels.
  MIPS: CPS: Stop dangling delay slot from has_mt.
  MIPS: dma-default: Fix 32-bit fall back to GFP_DMA
  MIPS: Wire up userfaultfd and membarrier syscalls.
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull irq fixes from Thomas Gleixner: "This update contains:
 - Fix for a long standing race affecting /proc/irq/NNN
 - One line fix for ARM GICV3-ITS counting the wrong data
 - Warning silencing in ARM GICV3-ITS. Another GCC trying to be overly clever issue"
* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/gic-v3-its: Count additional LPIs for the aliased devices
  irqchip/gic-v3-its: Silence warning when its_lpi_alloc_chunks gets inlined
  genirq: Fix race in register_irq_proc()
-
Markos Chandras authored
The MIPS syscall handler code used to return -ENOSYS on invalid syscalls. Whilst this is expected, it caused problems for seccomp filters because the said filters never had the chance to run, since the code returned -ENOSYS before triggering them. This caused problems on the chromium testsuite for filters looking for invalid syscalls. This has now changed and the seccomp filters are always run, even if the syscall is invalid. We return -ENOSYS once we return from the seccomp filters. Moreover, similar codepaths have been merged in the process, which simplifies somewhat the overall syscall code. Signed-off-by: Markos Chandras <markos.chandras@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/11236/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
- 03 Oct, 2015 1 commit
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull x86 fixes from Ingo Molnar: "Fixes all around the map: W+X kernel mapping fix, WCHAN fixes, two build failure fixes for corner case configs, x32 header fix and a speling fix"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/headers/uapi: Fix __BITS_PER_LONG value for x32 builds
  x86/mm: Set NX on gap between __ex_table and rodata
  x86/kexec: Fix kexec crash in syscall kexec_file_load()
  x86/process: Unify 32bit and 64bit implementations of get_wchan()
  x86/process: Add proper bound checks in 64bit get_wchan()
  x86, efi, kasan: Fix build failure on !KASAN && KMEMCHECK=y kernels
  x86/hyperv: Fix the build in the !CONFIG_KEXEC_CORE case
  x86/cpufeatures: Correct spelling of the HWP_NOTIFY flag
-