- 12 Dec, 2018 4 commits
-
-
Robin Murphy authored
Wire up the basic support for hot-adding memory. Since memory hotplug is fairly tightly coupled to sparsemem, we tweak pfn_valid() to also cross-check the presence of a section in the manner of the generic implementation, before falling back to memblock to check for no-map regions within a present section as before. By having arch_add_memory() create the linear mapping first, this then makes everything work in the way that __add_section() expects. We expect hotplug to be ACPI-driven, so the swapper_pg_dir updates should be safe from races by virtue of the global device hotplug lock. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
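For reference, the reworked pfn_valid() described above has roughly the following shape (a sketch, not the exact diff):

    int pfn_valid(unsigned long pfn)
    {
            phys_addr_t addr = pfn << PAGE_SHIFT;

            if ((addr >> PAGE_SHIFT) != pfn)
                    return 0;
    #ifdef CONFIG_SPARSEMEM
            /* Cross-check section presence, as the generic implementation does */
            if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
                    return 0;
            if (!valid_section(__nr_to_section(pfn_to_section_nr(pfn))))
                    return 0;
    #endif
            /* Fall back to memblock for no-map regions within a present section */
            return memblock_is_map_memory(addr);
    }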
-
Will Deacon authored
Commit 959bf2fd ("arm64: percpu: Rewrite per-cpu ops to allow use of LSE atomics") introduced alternative code sequences for the arm64 percpu atomics, so that the LSE instructions can be patched in at runtime if they are supported by the CPU. Unfortunately, when patching in the LSE sequence for a value-returning pcpu atomic, the argument registers are the wrong way round. The implementation of this_cpu_add_return() therefore ends up adding uninitialised stack to the percpu variable and returning garbage. As it turns out, there aren't very many users of the value-returning percpu atomics in mainline and we only spotted this due to a failure in the kprobes selftests. In this case, when attempting to single-step over the out-of-line instruction slot, the debug monitors would not be enabled because calling this_cpu_inc_return() on the kernel debug monitor refcount would fail to detect the transition from 0. We would consequently execute past the slot and take an undefined instruction exception from the kernel, resulting in a BUG: | kernel BUG at arch/arm64/kernel/traps.c:421! | PREEMPT SMP | pc : do_undefinstr+0x268/0x278 | lr : do_undefinstr+0x124/0x278 | Process swapper/0 (pid: 1, stack limit = 0x(____ptrval____)) | Call trace: | do_undefinstr+0x268/0x278 | el1_undef+0x10/0x78 | 0xffff00000803c004 | init_kprobes+0x150/0x180 | do_one_initcall+0x74/0x178 | kernel_init_freeable+0x188/0x224 | kernel_init+0x10/0x100 | ret_from_fork+0x10/0x1c Fix the argument order to get the value-returning pcpu atomics working correctly when implemented using the LSE instructions. Reported-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
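As an illustration of the class of operation affected, a value-returning percpu atomic is typically used to detect a 0 -> 1 transition; a hypothetical caller (the counter name below is made up) looks like this:

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(int, dbg_refcount);       /* hypothetical counter */

    static bool first_user(void)
    {
            /* With the broken LSE sequence the return value was garbage,
             * so this transition could be missed. */
            return this_cpu_inc_return(dbg_refcount) == 1;
    }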
-
Mark Rutland authored
While we can export symbols from assembly files, CONFIG_MODVERSIONS requires C declarations of anything that's exported. Let's account for this as other architectures do by placing these declarations in <asm/asm-prototypes.h>, which kbuild will automatically use to generate modversion information for assembly files. Since we already define most prototypes in existing headers, we simply need to include those headers in <asm/asm-prototypes.h>, and don't need to duplicate these. Reviewed-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
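A sketch of what such a header looks like (the exact set of includes is illustrative):

    /* arch/arm64/include/asm/asm-prototypes.h */
    #ifndef __ASM_PROTOTYPES_H
    #define __ASM_PROTOTYPES_H

    /* Pull in existing C prototypes so genksyms can generate CRCs for
     * symbols exported from assembly files. */
    #include <asm-generic/asm-prototypes.h>
    #include <linux/arm-smccc.h>
    #include <asm/ftrace.h>
    #include <asm/page.h>
    #include <asm/string.h>
    #include <asm/uaccess.h>

    #endif /* __ASM_PROTOTYPES_H */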
-
Will Deacon authored
With the introduction of 52-bit virtual addressing for userspace, we are now in a position where the virtual addressing capability of userspace may exceed that of the kernel. Consequently, the VA_BITS definition cannot be used blindly, since it reflects only the size of kernel virtual addresses. This patch introduces MAX_USER_VA_BITS which is either VA_BITS or 52 depending on whether 52-bit virtual addressing has been configured at build time, removing a few places where the 52 is open-coded based on explicit CONFIG_ guards. Signed-off-by: Will Deacon <will.deacon@arm.com>
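A minimal sketch of the resulting definition, assuming the CONFIG_ARM64_USER_VA_BITS_52 symbol used by the related commits below:

    #ifdef CONFIG_ARM64_USER_VA_BITS_52
    #define MAX_USER_VA_BITS        52
    #else
    #define MAX_USER_VA_BITS        VA_BITS
    #endif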
-
- 11 Dec, 2018 3 commits
-
-
Arnd Bergmann authored
In some randconfig builds, the new CONFIG_ARM64_USER_VA_BITS_52 triggered a build failure: arch/arm64/mm/proc.S:287: Error: immediate out of range As it turns out, we were incorrectly setting PGTABLE_LEVELS here, lacking any other default value. This fixes the calculation of CONFIG_PGTABLE_LEVELS to consider all combinations again. Fixes: 68d23da4 ("arm64: Kconfig: Re-jig CONFIG options for 52-bit VA") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Commit 39624469 ("arm64: preempt: Provide our own implementation of asm/preempt.h") extended the preempt count field in struct thread_info to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag indicating whether or not the current task needs rescheduling. Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point to this new field, the assembly usage was left untouched meaning that a 32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns the reschedule flag instead of the count. Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field, we're actually better off reworking the two assembly users so that they operate on the whole 64-bit value in favour of inspecting the thread flags separately in order to determine whether a reschedule is needed. Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reported-by: "kernelci.org bot" <bot@kernelci.org> Tested-by: Kevin Hilman <khilman@baylibre.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
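The layout problem can be pictured as follows (a sketch of the relevant part of struct thread_info; field names follow the description above):

    union {
            u64     preempt_count;          /* what the asm now operates on */
            struct {
    #ifdef CONFIG_CPU_BIG_ENDIAN
                    u32     need_resched;
                    u32     count;
    #else
                    u32     count;
                    u32     need_resched;
    #endif
            } preempt;
    };
    /* A 32-bit load at the TSK_TI_PREEMPT offset on big-endian therefore
     * returns need_resched rather than count. */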
-
Arnd Bergmann authored
This is needed for compilation in some configurations that don't include it implicitly: arch/arm64/kernel/machine_kexec_file.c: In function 'arch_kimage_file_post_load_cleanup': arch/arm64/kernel/machine_kexec_file.c:37:2: error: implicit declaration of function 'vfree'; did you mean 'kvfree'? [-Werror=implicit-function-declaration] Fixes: 52b2a8af ("arm64: kexec_file: load initrd and device-tree") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 10 Dec, 2018 33 commits
-
-
Will Deacon authored
TASK_SIZE is defined using the vabits_user variable for 64-bit tasks, so ensure that this variable is exported to modules to avoid the following build breakage with allmodconfig: | ERROR: "vabits_user" [lib/test_user_copy.ko] undefined! | ERROR: "vabits_user" [drivers/misc/lkdtm/lkdtm.ko] undefined! | ERROR: "vabits_user" [drivers/infiniband/hw/mlx5/mlx5_ib.ko] undefined! Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Merge in kexec_file_load() support from Akashi Takahiro.
-
Will Deacon authored
Pull in KVM workaround for A76 erratum #1165522. Conflicts: arch/arm64/include/asm/cpucaps.h
-
Suzuki K Poulose authored
The __cpu_up() routine ignores the errors reported by the firmware for a CPU bringup operation and looks for the error status set by the booting CPU. If the CPU never entered the kernel, we could end up assuming a stale error status, which otherwise would have been set/cleared appropriately by the booting CPU. Reported-by: Steve Capper <steve.capper@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Rather than add additional variables to detect specific early feature mismatches with secondary CPUs, we can instead dedicate the upper bits of the CPU boot status word to flag specific mismatches. This allows us to communicate both granule and VA-size mismatches back to the primary CPU without the need for additional book-keeping. Tested-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
Enabling 52-bit VAs for userspace is pretty confusing, since it requires you to select "48-bit" virtual addressing in the Kconfig. Rework the logic so that 52-bit user virtual addressing is advertised in the "Virtual address space size" choice, along with some help text to describe its interaction with Pointer Authentication. The EXPERT-only option to force all user mappings to the 52-bit range is then made available immediately below the VA size selection. Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Steve Capper authored
On arm64 52-bit VAs are provided to userspace when a hint is supplied to mmap. This helps maintain compatibility with software that expects at most 48-bit VAs to be returned. In order to help identify software that has 48-bit VA assumptions, this patch allows one to compile a kernel where 52-bit VAs are returned by default on HW that supports it. This feature is intended to be for development systems only. Signed-off-by: Steve Capper <steve.capper@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Steve Capper authored
On arm64 there is optional support for a 52-bit virtual address space. To exploit this, one has to be running with a 64KB page size and be running on hardware that supports this. For an arm64 kernel supporting a 48-bit VA with a 64KB page size, some changes are needed to support a 52-bit userspace: * TCR_EL1.T0SZ needs to be 12 instead of 16, * TASK_SIZE needs to reflect the new size. This patch implements the above when the support for 52-bit VAs is detected at early boot time. On arm64, userspace address translation is controlled by TTBR0_EL1. As well as userspace, TTBR0_EL1 controls: * The identity mapping, * EFI runtime code. It is possible to run a kernel with an identity mapping that has a larger VA size than userspace (and for this case __cpu_set_tcr_t0sz() would set TCR_EL1.T0SZ as appropriate). However, when the conditions for 52-bit userspace are met, it is possible to keep TCR_EL1.T0SZ fixed at 12. Thus in this patch, the TCR_EL1.T0SZ size changing logic is disabled. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
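Worked out, T0SZ simply encodes the VA size as (64 - va_bits), i.e. 64 - 48 = 16 and 64 - 52 = 12; the kernel expresses this with a macro along these lines (a sketch):

    #define TCR_T0SZ_OFFSET         0
    #define TCR_T0SZ(va_bits)       ((UL(64) - (va_bits)) << TCR_T0SZ_OFFSET)
    /* 48-bit userspace -> T0SZ = 16; 52-bit userspace -> T0SZ = 12 */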
-
Steve Capper authored
For cases where there is a mismatch in ARMv8.2-LVA support between CPUs, we have to be careful in allowing secondary CPUs to boot if 52-bit virtual addresses have already been enabled on the boot CPU. This patch adds code to the secondary startup path. If the boot CPU has enabled 52-bit VAs then ID_AA64MMFR2_EL1 is checked to see if the secondary can also enable 52-bit support. If not, the secondary is prevented from booting and an error message is displayed indicating why. Technically this patch could be implemented using the cpufeature code when considering 52-bit userspace support. However, we employ low-level checks here as the cpufeature code won't be able to run if we have mismatched 52-bit kernel VA support. Signed-off-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Steve Capper authored
Enabling 52-bit VAs on arm64 requires that the PGD table expands from 64 entries (for the 48-bit case) to 1024 entries. This quantity, PTRS_PER_PGD is used as follows to compute which PGD entry corresponds to a given virtual address, addr: pgd_index(addr) -> (addr >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1) Userspace addresses are prefixed by 0's, so for a 48-bit userspace address, uva, the following is true: (uva >> PGDIR_SHIFT) & (1024 - 1) == (uva >> PGDIR_SHIFT) & (64 - 1) In other words, a 48-bit userspace address will have the same pgd_index when using PTRS_PER_PGD = 64 and 1024. Kernel addresses are prefixed by 1's so, given a 48-bit kernel address, kva, we have the following inequality: (kva >> PGDIR_SHIFT) & (1024 - 1) != (kva >> PGDIR_SHIFT) & (64 - 1) In other words a 48-bit kernel virtual address will have a different pgd_index when using PTRS_PER_PGD = 64 and 1024. If, however, we note that: kva = 0xFFFF << 48 + lower (where lower[63:48] == 0b) and, PGDIR_SHIFT = 42 (as we are dealing with 64KB PAGE_SIZE) We can consider: (kva >> PGDIR_SHIFT) & (1024 - 1) - (kva >> PGDIR_SHIFT) & (64 - 1) = (0xFFFF << 6) & 0x3FF - (0xFFFF << 6) & 0x3F // "lower" cancels out = 0x3C0 In other words, one can switch PTRS_PER_PGD to the 52-bit value globally provided that they increment ttbr1_el1 by 0x3C0 * 8 = 0x1E00 bytes when running with 48-bit kernel VAs (TCR_EL1.T1SZ = 16). For kernel configuration where 52-bit userspace VAs are possible, this patch offsets ttbr1_el1 and sets PTRS_PER_PGD corresponding to the 52-bit value. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com> Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Steve Capper <steve.capper@arm.com> [will: added comment to TTBR1_BADDR_4852_OFFSET calculation] Signed-off-by: Will Deacon <will.deacon@arm.com>
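The constant mentioned above follows directly from that arithmetic: (1 << 10) - (1 << 6) = 0x3C0 pgd entries of 8 bytes each, i.e. 0x1E00 bytes. A sketch of the definition:

    #define TTBR1_BADDR_4852_OFFSET (((UL(1) << (52 - PGDIR_SHIFT)) - \
                                      (UL(1) << (48 - PGDIR_SHIFT))) * 8)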
-
Steve Capper authored
Now that we have DEFAULT_MAP_WINDOW defined, we can define arch_get_mmap_end and arch_get_mmap_base helpers to allow for high addresses in mmap. Signed-off-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
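A sketch of the arm64 wiring (the exact conditionals in the merged patch may differ): only a hint above the default window unlocks the full TASK_SIZE range.

    #define arch_get_mmap_end(addr) \
            ((addr) > DEFAULT_MAP_WINDOW ? TASK_SIZE : DEFAULT_MAP_WINDOW)

    #define arch_get_mmap_base(addr, base) \
            ((addr) > DEFAULT_MAP_WINDOW ? (base) + TASK_SIZE - DEFAULT_MAP_WINDOW \
                                         : (base))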
-
Steve Capper authored
We wish to introduce a 52-bit virtual address space for userspace but maintain compatibility with software that assumes the maximum VA space size is 48 bit. In order to achieve this, on 52-bit VA systems, we make mmap behave as if it were running on a 48-bit VA system (unless userspace explicitly requests a VA where addr[51:48] != 0). On a system running a 52-bit userspace we need TASK_SIZE to represent the 52-bit limit as it is used in various places to distinguish between kernelspace and userspace addresses. Thus we need a new limit for mmap, stack, ELF loader and EFI (which uses TTBR0) to represent the non-extended VA space. This patch introduces DEFAULT_MAP_WINDOW and DEFAULT_MAP_WINDOW_64 and switches the appropriate logic to use that instead of TASK_SIZE. Signed-off-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
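A minimal sketch of the new limit, assuming the 48-bit default window described above:

    #ifdef CONFIG_ARM64_USER_VA_BITS_52
    #define DEFAULT_MAP_WINDOW_64   (UL(1) << 48)
    #else
    #define DEFAULT_MAP_WINDOW_64   TASK_SIZE_64
    #endif
    /* 32-bit compat tasks continue to use TASK_SIZE_32 as their window. */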
-
Steve Capper authored
This patch adds support for "high" userspace addresses that are optionally supported on the system and have to be requested via a hint mechanism ("high" addr parameter to mmap). Architectures such as powerpc and x86 achieve this by making changes to their architectural versions of arch_get_unmapped_* functions. However, on arm64 we use the generic versions of these functions. Rather than duplicate the generic arch_get_unmapped_* implementations for arm64, this patch instead introduces two architectural helper macros and applies them to arch_get_unmapped_*: arch_get_mmap_end(addr) - get mmap upper limit depending on addr hint arch_get_mmap_base(addr, base) - get mmap_base depending on addr hint If these macros are not defined in architectural code then they default to (TASK_SIZE) and (base) so should not introduce any behavioural changes to architectures that do not define them. Signed-off-by: Steve Capper <steve.capper@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
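The generic fallbacks spelled out above look like this; architectures that do not define the macros see no behavioural change:

    #ifndef arch_get_mmap_end
    #define arch_get_mmap_end(addr)         (TASK_SIZE)
    #endif

    #ifndef arch_get_mmap_base
    #define arch_get_mmap_base(addr, base)  (base)
    #endif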
-
Qian Cai authored
If the kernel is configured with KASAN_EXTRA, the stack size is increased significantly due to setting the GCC -fstack-reuse option to "none" [1]. As a result, it can trigger a stack overrun quite often with a 32k stack size when compiled using GCC 8. For example, this reproducer https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/madvise/madvise06.c can trigger a "corrupted stack end detected inside scheduler" very reliably with CONFIG_SCHED_STACK_END_CHECK enabled. There are other reports at: https://lore.kernel.org/lkml/1542144497.12945.29.camel@gmx.us/ https://lore.kernel.org/lkml/721E7B42-2D55-4866-9C1A-3E8D64F33F9C@gmx.us/ There are just too many functions that end up with a large stack under KASAN_EXTRA, due to large local variables, and that are called over and over again without their stack slots being reused. Some noticeable ones are (stack size in bytes): 7536 shrink_inactive_list, 7440 shrink_page_list, 6560 fscache_stats_show, 3920 jbd2_journal_commit_transaction, 3216 try_to_unmap_one, 3072 migrate_page_move_mapping, 3584 migrate_misplaced_transhuge_page, 3920 ip_vs_lblcr_schedule, 4304 lpfc_nvme_info_show, 3888 lpfc_debugfs_nvmestat_data.constprop. There are another 49 functions over 2k in size when compiling the kernel with "-Wframe-larger-than=" on this machine. Hence, it is too much work to change Makefiles for each object to compile without -fsanitize-address-use-after-scope individually. [1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715#c23 Signed-off-by: Qian Cai <cai@lca.pw> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Will Deacon authored
The dcache_by_line_op macro suffers from a couple of small problems: First, the GAS directives that are currently being used rely on assembler behavior that is not documented, and probably not guaranteed to produce the correct behavior going forward. As a result, we end up with some undefined symbols in cache.o: $ nm arch/arm64/mm/cache.o ... U civac ... U cvac U cvap U cvau This is due to the fact that the comparisons used to select the operation type in the dcache_by_line_op macro are comparing symbols not strings, and even though it seems that GAS is doing the right thing here (undefined symbols by the same name are equal to each other), it seems unwise to rely on this. Second, when patching in a DC CVAP instruction on CPUs that support it, the fallback path consists of a DC CVAU instruction which may be affected by CPU errata that require ARM64_WORKAROUND_CLEAN_CACHE. Solve these issues by unrolling the various maintenance routines and using the conditional directives that are documented as operating on strings. To avoid the complexity of nested alternatives, we move the DC CVAP patching to __clean_dcache_area_pop, falling back to a branch to __clean_dcache_area_poc if DCPOP is not supported by the CPU. Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Suggested-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Marc Zyngier authored
Now that the infrastructure to handle erratum 1165522 is in place, let's make it a selectable option and add the required documentation. Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Marc Zyngier authored
In order to avoid TLB corruption whilst invalidating TLBs on CPUs affected by erratum 1165522, we need to prevent S1 page tables from being usable. For this, we set the EL1 S1 MMU on, and also disable the page table walker (by setting the TCR_EL1.EPD* bits to 1). This ensures that once we switch to the EL1/EL0 translation regime, speculated AT instructions won't be able to parse the page tables. Acked-by: Christoffer Dall <christoffer.dall@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Marc Zyngier authored
In order to ensure that flipping HCR_EL2.TGE is done at the right time when switching translation regime, let's insert the required ISBs that will be patched in when erratum 1165522 is detected. Take this opportunity to add the missing include of asm/alternative.h, which was only getting pulled in by pure luck. Acked-by: Christoffer Dall <christoffer.dall@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Marc Zyngier authored
In order to easily mitigate ARM erratum 1165522, we need to force affected CPUs to run in VHE mode if using KVM. Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Marc Zyngier authored
We are soon going to play with TCR_EL1.EPD{0,1}, so let's add the relevant definitions. Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
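A sketch of the added definitions (EPD0 and EPD1 sit at bits 7 and 23 of TCR_EL1 respectively):

    #define TCR_EPD0_SHIFT          7
    #define TCR_EPD0_MASK           (UL(1) << TCR_EPD0_SHIFT)
    #define TCR_EPD1_SHIFT          23
    #define TCR_EPD1_MASK           (UL(1) << TCR_EPD1_SHIFT)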
-
Marc Zyngier authored
It is a bit odd that we only install stage-2 translation after having cleared HCR_EL2.TGE, which means that there is a window during which AT requests could fail as stage-2 is not configured yet. Let's move stage-2 configuration before we clear TGE, making the guest entry sequence clearer: we first configure all the guest stuff, then only switch to the guest translation regime. While we're at it, do the same thing for !VHE. It doesn't hurt, and keeps things symmetric. Acked-by: Christoffer Dall <christoffer.dall@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Marc Zyngier authored
An SVE system is so far the only case where we mandate VHE. As we're starting to grow this set of requirements, let's slightly rework the way we deal with that situation, allowing for easy extension of this check. Acked-by: Christoffer Dall <christoffer.dall@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Marc Zyngier authored
Contrary to the non-VHE version of the TLB invalidation helpers, the VHE code has interrupts enabled, meaning that we can take an interrupt in the middle of such a sequence, and start running something else with HCR_EL2.TGE cleared. That's really not a good idea. Take the heavy-handed option and disable interrupts in __tlb_switch_to_guest_vhe, restoring them in __tlb_switch_to_host_vhe. The latter also gains an ISB in order to make sure that TGE really has taken effect. Cc: stable@vger.kernel.org Acked-by: Christoffer Dall <christoffer.dall@arm.com> Reviewed-by: James Morse <james.morse@arm.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
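A minimal sketch of the resulting pattern (helper signatures and register accesses are simplified; treat the details as illustrative rather than the exact patch):

    static void __tlb_switch_to_guest_vhe(struct kvm *kvm, unsigned long *flags)
    {
            local_irq_save(*flags);         /* nothing else may run with TGE clear */
            write_sysreg(kvm->arch.vttbr, vttbr_el2);
            write_sysreg(read_sysreg(hcr_el2) & ~HCR_TGE, hcr_el2);
            isb();
    }

    static void __tlb_switch_to_host_vhe(struct kvm *kvm, unsigned long *flags)
    {
            write_sysreg(0, vttbr_el2);
            write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
            isb();                          /* make sure TGE has taken effect... */
            local_irq_restore(*flags);      /* ...before interrupts come back */
    }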
-
Mark Rutland authored
Now that arm64ksyms.c has been reduced to a stub, let's remove it entirely. New exports should be associated with their function definition. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
For a while now it's been possible to use EXPORT_SYMBOL() in assembly files, which allows us to place exports immediately after assembly functions, as we do for C functions. As a step towards removing arm64ksyms.c, let's move the ftrace exports to the assembly files the functions are defined in. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
For a while now it's been possible to use EXPORT_SYMBOL() in assembly files, which allows us to place exports immediately after assembly functions, as we do for C functions. As a step towards removing arm64ksyms.c, let's move the string routine exports to the assembly files the functions are defined in. Routines which should only be exported for !KASAN builds are exported using the EXPORT_SYMBOL_NOKASAN() helper. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
For a while now it's been possible to use EXPORT_SYMBOL() in assembly files, which allows us to place exports immediately after assembly functions, as we do for C functions. As a step towards removing arm64ksyms.c, let's move the uaccess exports to the assembly files the functions are defined in. As we have to include <asm/assembler.h>, the existing includes are fixed to follow the usual ordering conventions. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
For a while now it's been possible to use EXPORT_SYMBOL() in assembly files, which allows us to place exports immediately after assembly functions, as we do for C functions. As a step towards removing arm64ksyms.c, let's move the copy_page and clear_page exports to the assembly files the functions are defined in. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
For a while now it's been possible to use EXPORT_SYMBOL() in assembly files, which allows us to place exports immediately after assembly functions, as we do for C functions. As a step towards removing arm64ksyms.c, let's move the SMCCC exports to the assembly file the functions are defined in. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
For a while now it's been possible to use EXPORT_SYMBOL() in assembly files, which allows us to place exports immediately after assembly functions, as we do for C functions. As a step towards removing arm64ksyms.c, let's move the tishift exports to the assembly file the functions are defined in. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
So that we can export symbols directly from assembly files, let's make use of the generic <asm/export.h>. We have a few symbols that we'll want to conditionally export for !KASAN kernel builds, so we add a helper for that in <asm/assembler.h>. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
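A sketch of the conditional-export helper added to <asm/assembler.h>:

    #ifdef CONFIG_KASAN
    #define EXPORT_SYMBOL_NOKASAN(name)
    #else
    #define EXPORT_SYMBOL_NOKASAN(name)     EXPORT_SYMBOL(name)
    #endif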
-
Mark Rutland authored
Since we define memstart_addr in a C file, we can have the export immediately after the definition of the symbol, as we do elsewhere. As a step towards removing arm64ksyms.c, move the export of memstart_addr to init.c, where the symbol is defined. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Mark Rutland authored
Now that the arm64 bitops are inlines built atop of the regular atomics, we don't need to export anything. Remove the redundant exports. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-