- 28 Apr, 2016 3 commits
- Ezequiel Garcia authored
The rtc-lib dependency is not required; it seems it was just copy-pasted from ARM's Kconfig. If a platform requires rtc-lib, it should select it individually.
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
Signed-off-by: Will Deacon <will.deacon@arm.com>
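For reference, the change described above amounts to dropping a single select line; a sketch (the context lines are placeholders, not the real arm64 select list):

```
 config ARM64
 	def_bool y
-	select RTC_LIB
```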
- Will Deacon authored
Selecting both DEBUG_PAGEALLOC and HIBERNATION results in a build failure:

| kernel/built-in.o: In function `saveable_page':
| memremap.c:(.text+0x100f90): undefined reference to `kernel_page_present'
| kernel/built-in.o: In function `swsusp_save':
| memremap.c:(.text+0x1026f0): undefined reference to `kernel_page_present'
| make: *** [vmlinux] Error 1

James sayeth: "This is caused by DEBUG_PAGEALLOC, which clears the PTE_VALID bit from 'free' pages. Hibernate uses it as a hint that it shouldn't save/access that page. This function is used to test whether the PTE_VALID bit has been cleared by kernel_map_pages(), hibernate is the only user. Fixing this exposes a bigger problem with that configuration though: if the resume kernel has cut free pages out of the linear map, we copy this swiss-cheese view of memory, and try to use it to restore... We can fixup the copy of the linear map, but it then explodes in my lazy 'clean the whole kernel to PoC' after resume, as now both the kernel and linear map have holes in them."

On closer inspection, the whole Kconfig machinery around DEBUG_PAGEALLOC, HIBERNATION, ARCH_SUPPORTS_DEBUG_PAGEALLOC and PAGE_POISONING looks like it might need some affection. In particular, DEBUG_PAGEALLOC has:

> depends on !HIBERNATION || ARCH_SUPPORTS_DEBUG_PAGEALLOC && !PPC && !SPARC

which looks pretty fishy. For the moment, require ARCH_SUPPORTS_DEBUG_PAGEALLOC to depend on !HIBERNATION on arm64 and get allmodconfig building again.
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
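One plausible way to express the constraint the commit describes, in arch/arm64/Kconfig; the exact line in the tree may differ:

```
config ARM64
    def_bool y
    # The hibernate restore path cannot yet cope with the PAGE_SIZE holes
    # that DEBUG_PAGEALLOC punches into the linear map, so only advertise
    # support when hibernation is disabled.
    select ARCH_SUPPORTS_DEBUG_PAGEALLOC if !HIBERNATION
```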
- James Morse authored
Add support for hibernate/suspend-to-disk. Suspend borrows code from cpu_suspend() to write cpu state onto the stack, before calling swsusp_save() to save the memory image. Restore creates a set of temporary page tables, covering only the linear map, copies the restore code to a 'safe' page, then uses the copy to restore the memory image. The copied code executes in the lower half of the address space, and once complete, restores the original kernel's page tables. It then calls into cpu_resume(), and follows the normal cpu_suspend() path back into the suspend code. To restore a kernel using KASLR, the address of the page tables and of cpu_resume() are stored in the hibernate arch-header, and the el2 vectors are pivoted via the 'safe' page in low memory.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Kevin Hilman <khilman@baylibre.com> # Tested on Juno R2
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
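The Kconfig side of hibernate support is mostly about advertising it to kernel/power/Kconfig; a sketch using the generic symbols (gating it on CPU_PM is an assumption, not something visible in this log):

```
config ARCH_HIBERNATION_POSSIBLE
    def_bool y
    depends on CPU_PM    # assumed dependency

# kernel/power/Kconfig can then offer:
#   config HIBERNATION
#       depends on SWAP && ARCH_HIBERNATION_POSSIBLE
```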
- 20 Apr, 2016 1 commit
- Yang Shi authored
HAVE_ARCH_TRANSPARENT_HUGEPAGE is already defined in arch/Kconfig; the ARM64 version is identical to it and defaults to Y. So remove the redundant definition and just select it under CONFIG_ARM64.
Signed-off-by: Yang Shi <yang.shi@linaro.org>
[will: sort into alphabetical order whilst I'm resolving conflicts]
Signed-off-by: Will Deacon <will.deacon@arm.com>
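The shape of that change, as a sketch (context lines illustrative):

```
 config ARM64
 	def_bool y
+	select HAVE_ARCH_TRANSPARENT_HUGEPAGE

-config HAVE_ARCH_TRANSPARENT_HUGEPAGE
-	def_bool y
```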
- 15 Apr, 2016 2 commits
- Ganapatrao Kulkarni authored
Enable NUMA balancing for arm64 platforms. Add pte, pmd protnone helpers for use by automatic NUMA balancing.
Reviewed-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- Ganapatrao Kulkarni authored
Attempt to get the memory and CPU NUMA node via of_numa. If that fails, fall back to the dummy NUMA node and map all memory and CPUs to node 0.
Tested-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Robert Richter <rrichter@cavium.com>
Signed-off-by: Ganapatrao Kulkarni <gkulkarni@caviumnetworks.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- 08 Mar, 2016 2 commits
- Bjorn Helgaas authored
Include pci/hotplug/Kconfig directly from pci/Kconfig, so arches don't have to source both pci/Kconfig and pci/hotplug/Kconfig. Note that this effectively adds pci/hotplug/Kconfig to the following arches, because they already sourced drivers/pci/Kconfig but they previously did not source drivers/pci/hotplug/Kconfig: alpha arm avr32 frv m68k microblaze mn10300 sparc unicore32
Inspired-by-patch-from: Bogicevic Sasa <brutallesale@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
- Bogicevic Sasa authored
Include pci/pcie/Kconfig directly from pci/Kconfig, so arches don't have to source both pci/Kconfig and pci/pcie/Kconfig. Note that this effectively adds pci/pcie/Kconfig to the following arches, because they already sourced drivers/pci/Kconfig but they previously did not source drivers/pci/pcie/Kconfig: alpha avr32 blackfin frv m32r m68k microblaze mn10300 parisc sparc unicore32 xtensa
[bhelgaas: changelog, source pci/pcie/Kconfig at top of pci/Kconfig, whitespace]
Signed-off-by: Sasa Bogicevic <brutallesale@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
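Both PCI commits above follow the same pattern: drivers/pci/Kconfig becomes the single entry point, roughly like this (placement per the commit messages, surrounding content elided):

```
# drivers/pci/Kconfig
source "drivers/pci/pcie/Kconfig"
...
source "drivers/pci/hotplug/Kconfig"

# An arch Kconfig then only needs:
source "drivers/pci/Kconfig"
```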
- 29 Feb, 2016 1 commit
- Marc Zyngier authored
With ARMv8.1 VHE, the architecture is able to (almost) transparently run the kernel at EL2, despite being written for EL1. This patch takes care of the "almost" part, mostly preventing the kernel from dropping from EL2 to EL1, and setting up the HYP configuration.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
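The user-visible piece of VHE support is a Kconfig option; a sketch along the lines of ARM64_VHE (prompt, default and help text are approximations, and whether this exact patch introduced the symbol is not visible from the log):

```
config ARM64_VHE
    bool "Enable support for Virtualization Host Extensions (VHE)"
    default y
    help
      Virtualization Host Extensions allow the kernel to run directly
      at EL2. On CPUs without VHE the kernel keeps running at EL1.
```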
- 26 Feb, 2016 2 commits
- Will Deacon authored
UAO is a feature of ARMv8.2, so add a submenu like we have for 8.1.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
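A sketch of the resulting structure, mirroring the existing ARMv8.1 menu (prompt strings approximated):

```
menu "ARMv8.2 architectural features"

config ARM64_UAO
    bool "Enable support for User Access Override (UAO)"
    default y

endmenu
```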
- Andrew Pinski authored
On ThunderX T88 pass 1.x through 2.1 parts, broadcast TLBI instructions may cause the icache to become corrupted if it contains data for a non-current ASID. This patch implements the workaround (which invalidates the local icache when switching the mm) by using code patching.
Signed-off-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 24 Feb, 2016 4 commits
- Ard Biesheuvel authored
Since arm64 does not use a decompressor that supplies an execution environment where it is feasible to some extent to provide a source of randomness, the arm64 KASLR kernel depends on the bootloader to supply some random bits in the /chosen/kaslr-seed DT property upon kernel entry. On UEFI systems, we can use the EFI_RNG_PROTOCOL, if supplied, to obtain some random bits. At the same time, use it to randomize the offset of the kernel Image in physical memory.
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- Ard Biesheuvel authored
This adds support for KASLR, based on entropy provided by the bootloader in the /chosen/kaslr-seed DT property. Depending on the size of the address space (VA_BITS) and the page size, the entropy in the virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all 4 levels), with the sidenote that displacements that result in the kernel image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB granule kernels, respectively) are not allowed, and will be rounded up to an acceptable value. If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is randomized independently from the core kernel. This makes it less likely that the location of core kernel data structures can be determined by an adversary, but causes all function calls from modules into the core kernel to be resolved via entries in the module PLTs. If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is randomized by choosing a page al...
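A sketch of the Kconfig options this describes; the prompts and the exact select/depends lines are approximations to be checked against the tree:

```
config RANDOMIZE_BASE
    bool "Randomize the address of the kernel image"
    select ARM64_MODULE_PLTS if MODULES
    select RELOCATABLE
    help
      Randomize the virtual address at which the kernel image is
      loaded, using entropy from the /chosen/kaslr-seed DT property.

config RANDOMIZE_MODULE_REGION_FULL
    bool "Randomize the module region independently from the core kernel"
    depends on RANDOMIZE_BASE
    default y
```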
- Ard Biesheuvel authored
This implements CONFIG_RELOCATABLE, which links the final vmlinux image with a dynamic relocation section, allowing the early boot code to perform a relocation to a different virtual address at runtime. This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- Ard Biesheuvel authored
This adds support for emitting PLTs at module load time for relative branches that are out of range. This is a prerequisite for KASLR, which may place the kernel and the modules anywhere in the vmalloc area, making it more likely that branch target offsets exceed the maximum range of +/- 128 MB. In this version, I removed the distinction between relocations against .init executable sections and ordinary executable sections. The reason is that it is hardly worth the trouble, given that .init.text usually does not contain that many far branches, and this version now only reserves PLT entry space for jump and call relocations against undefined symbols (since symbols defined in the same module can be assumed to be within +/- 128 MB). For example, the mac80211.ko module (which is fairly sizable at ~400 KB) built with -mcmodel=large gives the following relocation counts:

                relocs   branches   unique   !local
.text             3925       3347      518      219
.init.text          11          8        7        1
.exit.text           4          4        4        1
.text.unlikely      81         67       36       17

('unique' means branches to unique type/symbol/addend combos, of which !local is the subset referring to undefined symbols) IOW, we are only emitting a single PLT entry for the .init sections, and we are better off just adding it to the core PLT section instead.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 18 Feb, 2016 2 commits
- Ard Biesheuvel authored
This wires up the existing generic huge-vmap feature, which allows ioremap() to use PMD or PUD sized block mappings. It also adds support to the unmap path for dealing with block mappings, which will allow us to unmap the __init region using unmap_kernel_range() in a subsequent patch.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- James Morse authored
'User Access Override' is a new ARMv8.2 feature which allows the unprivileged load and store instructions to be overridden to behave in the normal way. This patch converts {get,put}_user() and friends to use ldtr*/sttr* instructions - so that they can only access EL0 memory - then enables UAO when fs==KERNEL_DS so that these functions can access kernel memory. This allows user space's read/write permissions to be checked against the page tables, instead of testing addr<USER_DS, then using the kernel's read/write permissions.
Signed-off-by: James Morse <james.morse@arm.com>
[catalin.marinas@arm.com: move uao_thread_switch() above dsb()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 16 Feb, 2016 3 commits
- Yang Shi authored
To enable UBSAN on arm64, ARCH_HAS_UBSAN_SANITIZE_ALL needs to be selected. A basic kernel boot test passes on arm64 with CONFIG_UBSAN_SANITIZE_ALL enabled.
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
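The change itself is essentially a one-line select, sketched here together with the generic option it unlocks (the lib/Kconfig.ubsan excerpt is paraphrased):

```
config ARM64
    def_bool y
    select ARCH_HAS_UBSAN_SANITIZE_ALL

# lib/Kconfig.ubsan can then offer:
#   config UBSAN_SANITIZE_ALL
#       bool "Enable instrumentation for the entire kernel"
#       depends on ARCH_HAS_UBSAN_SANITIZE_ALL
```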
- Laura Abbott authored
ARCH_SUPPORTS_DEBUG_PAGEALLOC provides a hook to map and unmap pages for debugging purposes. This requires memory be mapped with PAGE_SIZE mappings since breaking down larger mappings at runtime will lead to TLB conflicts. Check if debug_pagealloc is enabled at runtime and if so, map everything with PAGE_SIZE pages. Implement the functions to actually map/unmap the pages at runtime.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
[catalin.marinas@arm.com: static annotation block_mappings_allowed() and #ifdef]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- Lorenzo Pieralisi authored
The SBBR and ACPI specifications allow ACPI based systems that do not implement PSCI (eg systems with no EL3) to boot through the ACPI parking protocol specification[1]. This patch implements the ACPI parking protocol CPU operations, and adds code that eases parsing the parking protocol data structures to the ARM64 SMP initialization carried out at the same time as cpus enumeration. To wake-up the CPUs from the parked state, this patch implements a wakeup IPI for ARM64 (ie arch_send_wakeup_ipi_mask()) that mirrors the ARM one, so that a specific IPI is sent for wake-up purpose in order to distinguish it from other IPI sources. Given the current ACPI MADT parsing API, the patch implements a glue layer that helps passing MADT GICC data structure from SMP initialization code to the parking protocol implementation somewhat overriding the CPU operations interfaces. This to avoid creating a completely transparent DT/ACPI CPU operations layer that would require crea...
- 21 Jan, 2016 1 commit
- Christoph Hellwig authored
Move the generic implementation to <linux/dma-mapping.h> now that all architectures support it and remove the HAVE_DMA_ATTR Kconfig symbol now that everyone supports them.
[valentinrothberg@gmail.com: remove leftovers in Kconfig]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
Cc: Chris Metcalf <cmetcalf@ezchip.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
Cc: Helge Deller <deller@gmx.de>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Jesper Nilsson <jesper.nilsson@axis.com>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: Ley Foon Tan <lftan@altera.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: Steven Miao <realmz6@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Chris...
- 15 Jan, 2016 1 commit
- Daniel Cashman authored
arm64: arch_mmap_rnd() uses STACK_RND_MASK to generate the random offset for the mmap base address. This value represents a compromise between increased ASLR effectiveness and avoiding address-space fragmentation. Replace it with a Kconfig option, which is sensibly bounded, so that platform developers may choose where to place this compromise. Keep default values as new minimums.
Signed-off-by: Daniel Cashman <dcashman@google.com>
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Don Zickus <dzickus@redhat.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Heinrich Schuchardt <xypron.glpk@gmx.de>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: David Rientjes <rientjes@google.com>
Cc: Mark Salyzyn <salyzyn@android.com>
Cc: Jeff Vander Stoep <jeffv@google.com>
Cc: Nick Kralevich <nnk@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Hector Marco-Gisbert <hecmargi@upv.es>
Cc: Borislav Petkov <bp@suse.de>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
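A sketch of the Kconfig plumbing this introduces, following the generic ARCH_MMAP_RND_BITS scheme; the concrete per-architecture ranges and defaults are not shown in the log:

```
config HAVE_ARCH_MMAP_RND_BITS
    bool

config ARCH_MMAP_RND_BITS_MIN
    int

config ARCH_MMAP_RND_BITS_MAX
    int

config ARCH_MMAP_RND_BITS
    int "Number of bits to use for ASLR of mmap base address" if EXPERT
    range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
    default ARCH_MMAP_RND_BITS_MIN
    depends on HAVE_ARCH_MMAP_RND_BITS
```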
- 09 Jan, 2016 1 commit
- Dan Williams authored
Let all the archs that implement devmem_is_allowed() opt-in to a common definition of CONFIG_STRICT_DEVM in lib/Kconfig.debug.
Cc: Kees Cook <keescook@chromium.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "David S. Miller" <davem@davemloft.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
[heiko: drop 'default y' for s390]
Acked-by: Ingo Molnar <mingo@redhat.com>
Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
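Roughly what the opt-in looks like: the arch advertises that it implements devmem_is_allowed(), and the common option lives in lib/Kconfig.debug (the dependencies and default below are approximations):

```
# arch/arm64/Kconfig
config ARM64
    def_bool y
    select ARCH_HAS_DEVMEM_IS_ALLOWED

# lib/Kconfig.debug
config STRICT_DEVM
    bool "Filter access to /dev/mem"
    depends on ARCH_HAS_DEVMEM_IS_ALLOWED
    default y
```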
- 04 Jan, 2016 1 commit
- Jens Wiklander authored
Adds an implementation for arm-smccc and enables CONFIG_HAVE_SMCCC.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 21 Dec, 2015 2 commits
- David Woods authored
The arm64 MMU supports a Contiguous bit which is a hint that the TTE is one of a set of contiguous entries which can be cached in a single TLB entry. Supporting this bit adds new intermediate huge page sizes. The set of huge page sizes available depends on the base page size. Without using contiguous pages the huge page sizes are as follows.

    4KB:   2MB  1GB
    64KB:  512MB

With a 4KB granule, the contiguous bit groups together sets of 16 pages and with a 64KB granule it groups sets of 32 pages. This enables two new huge page sizes in each case, so that the full set of available sizes is as follows.

    4KB:   64KB  2MB  32MB  1GB
    64KB:  2MB  512MB  16GB

If a 16KB granule is used then the contiguous bit groups 128 pages at the PTE level and 32 pages at the PMD level. If the base page size is set to 64KB then 2MB pages are enabled by default. It is possible in the future to make 2MB the default huge page size for both 4KB and 64KB granules.
Reviewed-by:...
- Stefano Stabellini authored
Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM64. Necessary duplication of paravirt.h and paravirt.c with ARM. The only paravirt interface supported is pv_time_ops.steal_clock, so no runtime pvops patching needed. This allows us to make use of steal_account_process_tick for stolen ticks accounting.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
- 03 Dec, 2015 1 commit
- Will Deacon authored
arm64 relies on the arm_arch_timer for sched_clock, so we can select HAVE_IRQ_TIME_ACCOUNTING and have the core sched-clock code enable the feature at runtime based on the rate.
Reported-by: Mario Smarduch <m.smarduch@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
- 26 Nov, 2015 1 commit
- Andrey Ryabinin authored
On KASAN + 16K_PAGES + 48BIT_VA:

arch/arm64/mm/kasan_init.c: In function ‘kasan_early_init’:
include/linux/compiler.h:484:38: error: call to ‘__compiletime_assert_95’ declared with attribute error: BUILD_BUG_ON failed: !IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE)
_compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)

Currently KASAN will not work on 16K_PAGES and 48BIT_VA, so forbid such a configuration to avoid the above build failure.
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reported-by: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
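One way to express the restriction in arch/arm64/Kconfig; any other conditions on the real select line are not visible from the log:

```
config ARM64
    def_bool y
    select HAVE_ARCH_KASAN if !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
```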
- 24 Nov, 2015 1 commit
- Marc Zyngier authored
Cortex-A57 parts up to r1p2 can misreport Stage 2 translation faults when a Stage 1 permission fault or device alignment fault should have been reported. This patch implements the workaround (which is to validate that the Stage-1 translation actually succeeds) by using code patching.
Cc: stable@vger.kernel.org
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
- 10 Nov, 2015 1 commit
- Yang Shi authored
FRAME_POINTER is defined in lib/Kconfig.debug, so it is unnecessary to redefine it in arch/arm64/Kconfig.debug. ARM64 depends on the frame pointer to get correct stack traces (it also selects ARCH_WANT_FRAME_POINTERS); however, the lib/Kconfig.debug definition allows the option to be disabled. This patch forces FRAME_POINTER to be always on for arm64.
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
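One plausible shape for forcing the option on; whether the actual patch uses a select like this or a def_bool is not visible from the log:

```
config ARM64
    def_bool y
    select ARCH_WANT_FRAME_POINTERS
    select FRAME_POINTER
```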
- 28 Oct, 2015 1 commit
- Kefeng Wang authored
This allows a selectable timer interrupt frequency of 100, 250, 300 and 1000 HZ. Choosing a suitable frequency for the workload at hand can give better performance.
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
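Those four frequencies are exactly what the generic kernel/Kconfig.hz choice provides, so the change presumably boils down to sourcing it instead of hard-coding HZ:

```
# arch/arm64/Kconfig
source "kernel/Kconfig.hz"

# kernel/Kconfig.hz offers the HZ_100 / HZ_250 / HZ_300 / HZ_1000
# choice and sets CONFIG_HZ accordingly.
```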
- 20 Oct, 2015 1 commit
- Catalin Marinas authored
Commit 21539939 (arm64: 36 bit VA) introduced 36-bit VA support for the arm64 kernel when the 16KB page configuration is enabled. While this is a valid hardware configuration, it's not something we want to encourage since it reduces the memory (and I/O) range that the kernel can access. Make this depend on EXPERT to avoid complaints of Linux not mapping the whole RAM, especially on platforms following the ARM recommended memory map.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
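A sketch of hiding the option behind EXPERT; the symbol name follows the arm64 VA_BITS scheme, and the surrounding choice block is omitted:

```
config ARM64_VA_BITS_36
    bool "36-bit" if EXPERT
    depends on ARM64_16K_PAGES
```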
- 19 Oct, 2015 4 commits
- Suzuki K. Poulose authored
36bit VA lets us use 2 level page tables while limiting the available address space to 64GB.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- Suzuki K. Poulose authored
This patch turns on the 16K page support in the kernel. We support 48bit VA (4 level page tables) and 47bit VA (3 level page tables). With 16K we can map 128 entries using the contiguous bit hint at level 3 to map 2M using a single TLB entry. TODO: 16K supports 32 contiguous entries at level 2 to get us 1G (which is not yet supported by the infrastructure). That should be a separate patch altogether.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
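With 16K added, the page-size selection presumably becomes a three-way choice along these lines (prompts approximated):

```
choice
    prompt "Page size"
    default ARM64_4K_PAGES

config ARM64_4K_PAGES
    bool "4KB"

config ARM64_16K_PAGES
    bool "16KB"

config ARM64_64K_PAGES
    bool "64KB"

endchoice
```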
- Suzuki K. Poulose authored
Update the help text for ARM64_64K_PAGES to reflect the reality about AArch32 support.
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- Suzuki K. Poulose authored
We use !CONFIG_ARM64_64K_PAGES for CONFIG_ARM64_4K_PAGES (and vice versa) in code. It has all worked well so far, since we only had two options. Now, with the introduction of 16K, these cases will break. This patch cleans up the code to use the required CONFIG symbol expression, without the assumption that !64K => 4K (and vice versa).
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 15 Oct, 2015 1 commit
- Robin Murphy authored
With iommu_dma_ops in place, hook them up to the configuration code, so IOMMU-fronted devices will get them automatically.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
- 12 Oct, 2015 1 commit
- Andrey Ryabinin authored
This patch adds arch specific code for the kernel address sanitizer (see Documentation/kasan.txt). 1/8 of kernel addresses is reserved for shadow memory. There was no big enough hole for this, so virtual addresses for the shadow were stolen from the vmalloc area. At the early boot stage the whole shadow region is populated with just one physical page (kasan_zero_page). Later, this page is reused as a readonly zero shadow for some memory that KASan currently doesn't track (vmalloc). After mapping the physical memory, pages for shadow memory are allocated and mapped. Functions like memset/memmove/memcpy do a lot of memory accesses. If a bad pointer is passed to one of these functions it is important to catch this. Compiler instrumentation cannot do this since these functions are written in assembly. KASan replaces memory functions with manually instrumented variants. Original functions are declared as weak symbols so strong definitions in mm/kasan/kasan.c can replace them. Original functions have aliases with '__'...
- 09 Oct, 2015 1 commit
- Yang Yingliang authored
When a cpu is disabled, all of its irqs are migrated to another cpu. In some cases the new affinity differs from the old one, so the old affinity needs to be updated; but if irq_set_affinity's return value is IRQ_SET_MASK_OK_DONE, the old affinity cannot be updated. Fix it by using irq_do_set_affinity. Migrating interrupts is also a core code matter, so use the generic function irq_migrate_all_off_this_cpu() from kernel/irq/migration.c to migrate interrupts.
Cc: Jiang Liu <jiang.liu@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
- 07 Oct, 2015 1 commit
- Mark Rutland authored
Now that the arm_pmu framework has been factored out to drivers/perf we can make use of it for arm64, gaining support for heterogeneous PMUs and unifying the two codebases before they diverge further. The as yet unused PMU name for PMUv3 is changed to armv8_pmuv3, matching the style previously applied to the 32-bit PMUs.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>