- 01 May, 2019 1 commit
Mian Yousaf Kaukab authored
Enable CPU vulnerability show functions for spectre_v1, spectre_v2, meltdown and store-bypass.
Signed-off-by: Mian Yousaf Kaukab <ykaukab@suse.de>
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
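As a rough illustration, the "show functions" mentioned above are the per-vulnerability sysfs hooks declared in linux/cpu.h; a hedged sketch of one of them (the mitigation string is only an example, not the arm64 wording):

    #include <linux/device.h>
    #include <linux/kernel.h>

    /* Weak default lives in drivers/base/cpu.c; an architecture overrides it
     * to report its state under /sys/devices/system/cpu/vulnerabilities/. */
    ssize_t cpu_show_spectre_v1(struct device *dev,
                                struct device_attribute *attr, char *buf)
    {
            /* Example text only; the real string depends on the detected mitigation. */
            return sprintf(buf, "Mitigation: __user pointer sanitization\n");
    }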
-
- 20 Mar, 2019 1 commit
Matthias Kaehlcke authored
The arm64 config selects MULTI_IRQ_HANDLER, which was renamed to GENERIC_IRQ_MULTI_HANDLER by commit 4c301f9b ("ARM: Convert to GENERIC_IRQ_MULTI_HANDLER"). The 'new' option is already selected, so just remove the obsolete entry.
Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
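For context, GENERIC_IRQ_MULTI_HANDLER is the option that lets an irqchip driver install the top-level interrupt entry point at run time; a hedged sketch of how a driver registers it (driver and handler names are invented for the example):

    #include <linux/init.h>
    #include <linux/irq.h>

    /* Invented name; on arm64 the GIC driver installs its own handler. */
    static void example_handle_irq(struct pt_regs *regs)
    {
            /* acknowledge the interrupt, dispatch it, signal EOI ... */
    }

    static int __init example_irqchip_init(void)
    {
            /* set_handle_irq() is provided by CONFIG_GENERIC_IRQ_MULTI_HANDLER */
            return set_handle_irq(example_handle_irq);
    }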
-
- 06 Mar, 2019 1 commit
Anshuman Khandual authored
Let arm64 subscribe to the generic HugeTLB page migration framework. Right now this only works for the PMD- and PUD-level HugeTLB page sizes shown below, with the various kernel base page size combinations:

        CONT PTE    PMD    CONT PMD    PUD
        --------    ---    --------    ---
  4K:        NA     2M          NA     1G
 16K:        NA    32M          NA
 64K:        NA   512M          NA

Link: http://lkml.kernel.org/r/1545121450-1663-5-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 28 Feb, 2019 1 commit
Zhang Lei authored
On the Fujitsu-A64FX cores (versions 1.0 and 1.1), a memory access may cause an undefined fault (Data abort, DFSC=0b111111). This fault occurs under a specific hardware condition when a load/store instruction performs an address translation. Any load/store instruction, except non-fault accesses (including Armv8 and SVE ones), might cause this undefined fault. The TCR_ELx.NFD1 bit is used by the kernel when CONFIG_RANDOMIZE_BASE is enabled to mitigate timing attacks against KASLR, where the kernel address space could otherwise be probed using the FFR and suppressed faults on SVE loads. Since this erratum causes spurious exceptions, which may corrupt the exception registers, we clear the TCR_ELx.NFDx bits when booting on an affected CPU.
Signed-off-by: Zhang Lei <zhang.lei@jp.fujitsu.com>
[Generated MIDR value/mask for __cpu_setup(), removed spurious-fault handler and always disabled the NFDx bits on affected CPUs]
Signed-off-by: James Morse <james.morse@arm.com>
Tested-by: zhang.lei <zhang.lei@jp.fujitsu.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 Feb, 2019 1 commit
Christoph Hellwig authored
This API is primarily used through DT entries, but two architectures and two drivers call it directly. So instead of selecting the config symbol for random architectures, pull it in implicitly for the actual users. Also rename the Kconfig option to describe the feature better.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Paul Burton <paul.burton@mips.com> # MIPS
Acked-by: Lee Jones <lee.jones@linaro.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 13 Feb, 2019 3 commits
Christoph Hellwig authored
OF_RESERVED_MEM can be used if we have either CMA or the generic declare-coherent code built in, and we support the early flattened DT. So don't bother making it a user-visible option that is selected by most configs fitting the above category; just select it when the requirements are met.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Rob Herring <robh@kernel.org>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> # arm64
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Paul Burton <paul.burton@mips.com> # MIPS
Acked-by: Catalin Marinas <catalin.marinas@arm.com> # arm64
-
- 06 Feb, 2019 1 commit
Julien Thierry authored
Add a build option and a command line parameter to build and enable support for pseudo-NMIs.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Suggested-by: Daniel Thompson <daniel.thompson@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 21 Jan, 2019 1 commit
Mark Rutland authored
There are shipping arm64 platforms with 256 hardware threads. So that we can make use of these with defconfig, bump the arm64 default NR_CPUS to 256. At the same time, drop a redundant comment: we only have one default for NR_CPUS, so there is nothing to sort.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 28 Dec, 2018 1 commit
Andrey Konovalov authored
Now that all the necessary infrastructure code has been introduced, select HAVE_ARCH_KASAN_SW_TAGS for arm64 to enable the software tag-based KASAN mode.
Link: http://lkml.kernel.org/r/25abce9a21d0c1df2d9d72488aced418c3465d7b.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
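As an illustration of what the software tag-based mode catches, here is a hedged sketch of a deliberately buggy kernel snippet (assuming an arm64 kernel built with the tag-based KASAN mode this entry enables):

    #include <linux/slab.h>

    static void kasan_sw_tags_demo(void)
    {
            u8 *p = kmalloc(64, GFP_KERNEL);

            if (!p)
                    return;
            kfree(p);
            /* Use-after-free: the pointer's tag (kept in the unused top byte of
             * the address) no longer matches the memory's tag after kfree(),
             * so tag-based KASAN reports this access. */
            p[0] = 0x42;
    }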
-
- 21 Dec, 2018 1 commit
Masahiro Yamada authored
The Kconfig lexer supports special characters such as '.' and '/' in the parameter context. In my understanding, the reason is just to support bare file paths in the source statement. I do not see a good reason to complicate Kconfig just to allow this ambiguity. The majority of code already surrounds file paths with double quotes, and it makes sense since file paths are constant string literals. Make it treewide consistent now.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Wolfram Sang <wsa@the-dreams.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
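For instance, a bare statement such as source arch/arm64/Kconfig.platforms becomes source "arch/arm64/Kconfig.platforms" after this change (an illustrative path; the full list of files touched is not reproduced in this log).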
-
- 20 Dec, 2018 1 commit
Sinan Kaya authored
ACPI and PCI are no longer coupled to each other. Specify requirements for both when pulling in code.
Signed-off-by: Sinan Kaya <okaya@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
-
- 13 Dec, 2018 2 commits
Christoph Hellwig authored
All architectures except for sparc64 use the dma-direct code in some form, and even for sparc64 we had the discussion of a direct mapping mode a while ago. In preparation for directly calling the direct mapping code, don't bother making it optional but always build the code in. This is a minor hardship for some powerpc and arm configs that don't pull it in yet (although they should in a release or two), and for sparc64, which currently doesn't need it at all, but it will significantly reduce the ifdef mess we'd otherwise need.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Tested-by: Tony Luck <tony.luck@intel.com>
-
Mark Rutland authored
Now that all the necessary bits are in place for userspace, add the necessary Kconfig logic to allow this to be enabled.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 12 Dec, 2018 2 commits
Ard Biesheuvel authored
This enables the use of per-task stack canary values if GCC has support for emitting the stack canary reference relative to the value of sp_el0, which holds the task struct pointer in the arm64 kernel. The $(eval) extends KBUILD_CFLAGS at the moment the make rule is applied, which means asm-offsets.o (which we rely on for the offset value) is built without the arguments, and everything built afterwards has the options set.
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
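(For reference, the toolchain side of this is GCC's -mstack-protector-guard=sysreg, -mstack-protector-guard-reg=sp_el0 and -mstack-protector-guard-offset=<canary offset> options; the flag spellings here are recalled from the GCC documentation rather than quoted from this entry.)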
-
Robin Murphy authored
Wire up the basic support for hot-adding memory. Since memory hotplug is fairly tightly coupled to sparsemem, we tweak pfn_valid() to also cross-check the presence of a section in the manner of the generic implementation, before falling back to memblock to check for no-map regions within a present section as before. By having arch_add_memory() create the linear mapping first, this then makes everything work in the way that __add_section() expects. We expect hotplug to be ACPI-driven, so the swapper_pg_dir updates should be safe from races by virtue of the global device hotplug lock.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
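A hedged sketch of the pfn_valid() shape described above, using the generic sparsemem helpers rather than the literal arm64 code:

    #include <linux/memblock.h>
    #include <linux/mm.h>

    int pfn_valid(unsigned long pfn)
    {
            phys_addr_t addr = (phys_addr_t)pfn << PAGE_SHIFT;

            /* Cross-check that the section containing this pfn is present. */
            if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                    return 0;
            if (!valid_section(__nr_to_section(pfn_to_section_nr(pfn))))
                    return 0;
            /* Fall back to memblock so no-map regions inside a present
             * section are still rejected. */
            return memblock_is_map_memory(addr);
    }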
-
- 11 Dec, 2018 1 commit
Arnd Bergmann authored
In some randconfig builds, the new CONFIG_ARM64_USER_VA_BITS_52 triggered a build failure:
  arch/arm64/mm/proc.S:287: Error: immediate out of range
As it turns out, we were incorrectly setting PGTABLE_LEVELS here, lacking any other default value. This fixes the calculation of CONFIG_PGTABLE_LEVELS to consider all combinations again.
Fixes: 68d23da4 ("arm64: Kconfig: Re-jig CONFIG options for 52-bit VA")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 10 Dec, 2018 4 commits
Will Deacon authored
Enabling 52-bit VAs for userspace is pretty confusing, since it requires you to select "48-bit" virtual addressing in the Kconfig. Rework the logic so that 52-bit user virtual addressing is advertised in the "Virtual address space size" choice, along with some help text to describe its interaction with Pointer Authentication. The EXPERT-only option to force all user mappings to the 52-bit range is then made available immediately below the VA size selection.
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Steve Capper authored
On arm64, 52-bit VAs are provided to userspace when a hint is supplied to mmap. This helps maintain compatibility with software that expects at most 48-bit VAs to be returned. In order to help identify software that has 48-bit VA assumptions, this patch allows one to compile a kernel where 52-bit VAs are returned by default on HW that supports it. This feature is intended for development systems only.
Signed-off-by: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
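A hedged userspace sketch of the opt-in behaviour this refers to: without the option above, addresses beyond 48 bits are only handed out when the caller passes a high mmap() hint (the hint value here is just an example):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            /* Hint above the 48-bit boundary; on 52-bit-capable hardware and
             * kernels the returned mapping may land in the extended range. */
            void *hint = (void *)(1UL << 50);
            void *p = mmap(hint, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            if (p == MAP_FAILED)
                    return 1;
            printf("mapped at %p\n", p);
            return munmap(p, 4096);
    }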
-
Steve Capper authored
On arm64 there is optional support for a 52-bit virtual address space. To exploit this, one has to be running with a 64KB page size on hardware that supports it. For an arm64 kernel supporting a 48-bit VA with a 64KB page size, some changes are needed to support a 52-bit userspace:
* TCR_EL1.T0SZ needs to be 12 instead of 16,
* TASK_SIZE needs to reflect the new size.
This patch implements the above when support for 52-bit VAs is detected at early boot time. On arm64, userspace address translation is controlled by TTBR0_EL1. As well as userspace, TTBR0_EL1 controls:
* the identity mapping,
* EFI runtime code.
It is possible to run a kernel with an identity mapping that has a larger VA size than userspace (and for this case __cpu_set_tcr_t0sz() would set TCR_EL1.T0SZ as appropriate). However, when the conditions for 52-bit userspace are met, it is possible to keep TCR_EL1.T0SZ fixed at 12. Thus in this patch the TCR_EL1.T0SZ size-changing logic is disabled.
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
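(For reference: TCR_EL1.T0SZ encodes 64 minus the virtual address width, so a 48-bit user address space needs T0SZ=16 and a 52-bit one needs T0SZ=12, which is where the two values above come from.)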
-
Marc Zyngier authored
Now that the infrastructure to handle erratum 1165522 is in place, let's make it a selectable option and add the required documentation.
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 06 Dec, 2018 4 commits
AKASHI Takahiro authored
With this patch, kernel verification can be done without the IMA security subsystem enabled; turn on CONFIG_KEXEC_VERIFY_SIG instead. On x86, a signature is embedded into a PE file (Microsoft's format) header of the binary. Since arm64's "Image" can also be seen as a PE file as long as CONFIG_EFI is enabled, we adopt this format for kernel signing. You can create a signed kernel image with:
  $ sbsign --key ${KEY} --cert ${CERT} Image
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
[will: removed useless pr_debug()]
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Christoph Hellwig authored
These days architectures are mostly out of the business of dealing with struct scatterlist at all, unless they have architecture-specific iommu drivers. Replace the ARCH_HAS_SG_CHAIN symbol with an ARCH_NO_SG_CHAIN one that is only enabled for architectures with horrible legacy iommu drivers like alpha and parisc, and conditionally for arm, which wants to keep it disabled for legacy platforms.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
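For context, the chaining that ARCH_NO_SG_CHAIN opts out of looks roughly like this (hedged sketch; the table sizes are arbitrary):

    #include <linux/scatterlist.h>

    static void chain_example(struct scatterlist *first, struct scatterlist *second)
    {
            sg_init_table(first, 8);
            sg_init_table(second, 8);
            /* The last slot of 'first' becomes a chain entry pointing at
             * 'second', so only 7 of its 8 slots carry data. */
            sg_chain(first, 8, second);
    }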
-
AKASHI Takahiro authored
Modify arm64/Kconfig to enable kexec_file_load support.
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Suzuki K Poulose authored
We have two entries for the ARM64_WORKAROUND_CLEAN_CACHE capability:
1) ARM Errata 826319, 827319, 824069, 819472 on A53 r0p[012]
2) ARM Erratum 819472 on A53 r0p[01]
Both have the same workaround. Merge these entries to avoid duplicate entries for a single capability. Add a new Kconfig entry to control the "capability" entry, making it easier to handle combinations of the CONFIGs.
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 01 Dec, 2018 2 commits
Christoph Hellwig authored
The arm64 codebase implementing coherent DMA allocation for architectures with non-coherent DMA is a good start for a generic implementation, given that it uses the generic remap helpers, provides the atomic pool for allocations that can't sleep, and still is relatively simple and well tested. Move it to kernel/dma and allow architectures to opt into it using a config symbol. Architectures just need to provide a new arch_dma_prep_coherent helper to write back and invalidate the caches for any memory that gets remapped for uncached access.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
The dma remap code only makes sense for non-cache-coherent architectures (or possibly the corner case of highmem CMA allocations) and is currently only used by arm, arm64, csky and xtensa. Split it out into a separate file with a separate Kconfig symbol, which gets the right copyright notice given that this code was written by Laura Abbott while working for Code Aurora at that point.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
- 29 Nov, 2018 1 commit
Catalin Marinas authored
On the affected Cortex-A76 cores (r0p0 to r3p0), if a virtual address for a cacheable mapping of a location is being accessed by a core while another core is remapping the virtual address to a new physical page using the recommended break-before-make sequence, then under very rare circumstances TLBI+DSB completes before a read using the translation being invalidated has been observed by other observers. The workaround repeats the TLBI+DSB operation and is shared with the one for Qualcomm Falkor erratum 1009.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 23 Nov, 2018 3 commits
Christoph Hellwig authored
Let architectures select the syscall support instead of duplicating the kconfig entry.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
Christoph Hellwig authored
Move the definitions to drivers/pci and let the architectures select them. Two small differences from before: PCI_DOMAINS_GENERIC now selects PCI_DOMAINS, cutting down the churn for modern architectures; and arm, which previously was the only architecture to also offer PCI_DOMAINS as a user-visible choice in addition to selecting it from the relevant configs, no longer does so.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Paul Burton <paul.burton@mips.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
Christoph Hellwig authored
There is no good reason to duplicate the PCI menu in every architecture. Instead, provide a selectable HAVE_PCI symbol that indicates availability of PCI support, and a FORCE_PCI symbol to force PCI on, and handle the rest in drivers/pci.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Paul Burton <paul.burton@mips.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
-
- 20 Nov, 2018 1 commit
Ard Biesheuvel authored
On arm64, we use block mappings and contiguous hints to map the linear region, to minimize the TLB footprint. However, this means that the entire region is mapped using read/write permissions, which we cannot modify at page granularity without having to take intrusive measures to prevent TLB conflicts. This means the linear aliases of pages belonging to read-only mappings (executable or otherwise) in the vmalloc region are also mapped read/write, and could potentially be abused to modify things like module code, BPF JIT code or other read-only data. So let's fix this by extending the set_memory_ro/rw routines to take the linear alias into account. The consequence of enabling this is that we can no longer use block mappings or contiguous hints, so in cases where the TLB footprint of the linear region is a bottleneck, performance may be affected. Therefore, allow this feature to be runtime en/disabled by setting rodata=full (or 'on' to disable just this enhancement, or 'off' to disable read-only mappings for code and r/o data entirely) on the kernel command line. Also, allow the default value to be set via a Kconfig option.
Tested-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
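A hedged sketch of the kind of caller whose pages gain a protected linear alias under rodata=full (the buffer handling is invented for the example):

    #include <linux/kernel.h>
    #include <linux/set_memory.h>
    #include <linux/string.h>
    #include <linux/vmalloc.h>

    static void *publish_ro_blob(const void *src, size_t len)
    {
            void *p = vmalloc(PAGE_SIZE);

            if (!p)
                    return NULL;
            memcpy(p, src, min(len, (size_t)PAGE_SIZE));
            /* Make the vmalloc mapping read-only; with rodata=full the linear
             * alias of the backing pages is made read-only as well. */
            set_memory_ro((unsigned long)p, 1);
            return p;
    }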
-
- 31 Oct, 2018 2 commits
Mike Rapoport authored
All architectures use memblock for early memory management. There is no need for the CONFIG_HAVE_MEMBLOCK configuration option.
[rppt@linux.vnet.ibm.com: of/fdt: fixup #ifdefs]
Link: http://lkml.kernel.org/r/20180919103457.GA20545@rapoport-lnx
[rppt@linux.vnet.ibm.com: csky: fixups after bootmem removal]
Link: http://lkml.kernel.org/r/20180926112744.GC4628@rapoport-lnx
[rppt@linux.vnet.ibm.com: remove stale #else and the code it protects]
Link: http://lkml.kernel.org/r/1538067825-24835-1-git-send-email-rppt@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1536927045-23536-4-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Tested-by: Jonathan Cameron <jonathan.cameron@huawei.com>
Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <lftan@altera.com> Cc: Mark Salter <msalter@redhat.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Paul Burton <paul.burton@mips.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Serge Semin <fancer.lancer@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Mike Rapoport authored
All architectures select NO_BOOTMEM, which essentially becomes 'Y' for any kernel configuration, and therefore it can be removed.
[alexander.h.duyck@linux.intel.com: remove now defunct NO_BOOTMEM from depends list for deferred init]
Link: http://lkml.kernel.org/r/20180925201814.3576.15105.stgit@localhost.localdomain
Link: http://lkml.kernel.org/r/1536927045-23536-3-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Chris Zankel <chris@zankel.net> Cc: "David S. Miller" <davem@davemloft.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greentime Hu <green.hu@gmail.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Guan Xuetao <gxt@pku.edu.cn> Cc: Ingo Molnar <mingo@redhat.com> Cc: "James E.J. Bottomley" <jejb@parisc-linux.org> Cc: Jonas Bonn <jonas@southpole.se> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Ley Foon Tan <lftan@altera.com> Cc: Mark Salter <msalter@redhat.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Matt Turner <mattst88@gmail.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Michal Simek <monstr@monstr.eu> Cc: Palmer Dabbelt <palmer@sifive.com> Cc: Paul Burton <paul.burton@mips.com> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Richard Weinberger <richard@nod.at> Cc: Rich Felker <dalias@libc.org> Cc: Russell King <linux@armlinux.org.uk> Cc: Serge Semin <fancer.lancer@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 19 Oct, 2018 1 commit
Christoph Hellwig authored
Now that the generic swiotlb code supports non-coherent DMA, we can switch to it for arm64. For that we need to refactor the existing alloc/free/mmap/pgprot helpers to be used as the architecture hooks, and implement the standard arch_sync_dma_for_{device,cpu} hooks for cache maintenance in the streaming DMA hooks, which also implies using the generic dma_coherent flag in struct device. Note that we need to keep the old is_device_dma_coherent function around for now, so that the shared arm/arm64 Xen code keeps working.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
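A hedged outline of the two hooks named above, with the prototypes as they looked in this era (the bodies are reduced to comments; the real arm64 versions forward to the low-level cache maintenance routines):

    #include <linux/dma-direction.h>
    #include <linux/types.h>

    struct device;

    /* Called before the device touches the buffer: clean/invalidate the CPU
     * caches covering [paddr, paddr + size) as required by 'dir'. */
    void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
                                  size_t size, enum dma_data_direction dir)
    {
            /* e.g. hand off to the arm64 cache maintenance helpers */
    }

    /* Called before the CPU reads a buffer the device has written:
     * invalidate any stale cache lines for DMA_FROM_DEVICE transfers. */
    void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
                               size_t size, enum dma_data_direction dir)
    {
            /* e.g. hand off to the arm64 cache maintenance helpers */
    }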
-
- 03 Oct, 2018 1 commit
Arnd Bergmann authored
arm64_1188873_read_cntvct_el0() is protected by the correct CONFIG_ARM64_ERRATUM_1188873 #ifdef, but the only reference to it is also inside a CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND section, which causes a warning if that is disabled:
  drivers/clocksource/arm_arch_timer.c:323:20: error: 'arm64_1188873_read_cntvct_el0' defined but not used [-Werror=unused-function]
Since the erratum requires that we always apply the workaround in the timer driver, select that symbol as we do for SoC-specific errata.
Fixes: 95b861a4 ("arm64: arch_timer: Add workaround for ARM erratum 1188873")
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 01 Oct, 2018 1 commit
Marc Zyngier authored
When running on Cortex-A76, a timer access from an AArch32 EL0 task may end up with a corrupted value or register. The workaround for this is to trap these accesses at EL1/EL2 and execute them there. This only affects versions r0p0, r1p0 and r2p0 of the CPU.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 27 Sep, 2018 1 commit
Ard Biesheuvel authored
On a randomly chosen distro kernel build for arm64, vmlinux.o shows the following sections, containing jump label entries, and the associated RELA relocation records, respectively:
  ...
  [38088] __jump_table      PROGBITS  0000000000000000  00e19f30
          000000000002ea10  0000000000000000  WA       0     0     8
  [38089] .rela__jump_table RELA      0000000000000000  01fd8bb0
          000000000008be30  0000000000000018   I   38178 38088     8
  ...
In other words, we have 190 KB worth of 'struct jump_entry' instances, and 573 KB worth of RELA entries to relocate each entry's code, target and key members. This means the RELA section occupies 10% of the .init segment, and the two sections combined represent 5% of vmlinux's entire memory footprint. So let's switch from 64-bit absolute references to 32-bit relative references for the code and target fields, and a 64-bit relative reference for the 'key' field (which may reside in another module or the core kernel, which may be more than 4 GB away on arm64 when running with KASLR enabled): this reduces the size of the __jump_table by 33%, and gets rid of the RELA section entirely.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: linux-arm-kernel@lists.infradead.org Cc: linux-s390@vger.kernel.org Cc: Arnd Bergmann <arnd@arndb.de> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Jessica Yu <jeyu@kernel.org>
Link: https://lkml.kernel.org/r/20180919065144.25010-4-ard.biesheuvel@linaro.org
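A hedged sketch of the relative entry layout this describes, mirroring struct jump_entry from include/linux/jump_label.h after the change:

    #include <linux/types.h>

    /* 16 bytes per entry instead of 24, and no RELA records needed, because
     * every field is stored relative to its own address. */
    struct jump_entry {
            s32 code;    /* patched branch/nop location, as an offset from &code */
            s32 target;  /* jump target, as an offset from &target */
            long key;    /* static key, as an offset from &key (may be > 4 GB away) */
    };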
-
- 21 Sep, 2018 1 commit
James Morse authored
include/linux/mmzone.h describes ARCH_HAS_HOLES_MEMORYMODEL as relevant when parts of the memmap have been free()d. This would happen on systems where memory is smaller than a sparsemem section, and the extra struct pages are expensive. pfn_valid() on these systems returns true for the whole sparsemem section, so an extra memmap_valid_within() check is needed. On arm64 we have nomap memory, so we always provide pfn_valid() to test for nomap pages. This means ARCH_HAS_HOLES_MEMORYMODEL's extra checks are already rolled up into pfn_valid(). Remove it.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-