1. 12 Dec, 2018 2 commits
    • arm64: add <asm/asm-prototypes.h> · c3296a13
      Mark Rutland authored
      While we can export symbols from assembly files, CONFIG_MODVERSIONS requires C
      declarations of anything that's exported.
      
      Let's account for this as other architectures do by placing these declarations
      in <asm/asm-prototypes.h>, which kbuild will automatically use to generate
      modversion information for assembly files.
      
      Since we already define most prototypes in existing headers, we simply need to
      include those headers in <asm/asm-prototypes.h> rather than duplicating the
      declarations.
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
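      For reference, a minimal sketch of what such a header can look like. The
      specific include list and comment are illustrative assumptions about the
      arm64 header, not its exact contents:

        /*
         * Sketch of <asm/asm-prototypes.h>: kbuild feeds this header to
         * genksyms when building assembly files, so CONFIG_MODVERSIONS can
         * compute a CRC for each symbol exported from asm. Most prototypes
         * already live in existing headers, so those are included rather
         * than duplicated. (The include list below is an assumption.)
         */
        #ifndef __ASM_PROTOTYPES_H
        #define __ASM_PROTOTYPES_H

        #include <asm/ftrace.h>
        #include <asm/page.h>
        #include <asm/string.h>
        #include <asm/uaccess.h>

        #include <asm-generic/asm-prototypes.h>

        #endif /* __ASM_PROTOTYPES_H */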
    • arm64: mm: Introduce MAX_USER_VA_BITS definition · 9b31cf49
      Will Deacon authored
      With the introduction of 52-bit virtual addressing for userspace, we are
      now in a position where the virtual addressing capability of userspace
      may exceed that of the kernel. Consequently, the VA_BITS definition
      cannot be used blindly, since it reflects only the size of kernel
      virtual addresses.
      
      This patch introduces MAX_USER_VA_BITS which is either VA_BITS or 52
      depending on whether 52-bit virtual addressing has been configured at
      build time, removing a few places where the 52 is open-coded based on
      explicit CONFIG_ guards.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
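      The definition described above reduces to a simple conditional; a sketch
      of its likely shape (the exact header it lives in is not shown here):

        /* Maximum VA width any user task may use; this can exceed the
         * kernel's own VA_BITS when 52-bit user VAs are configured. */
        #ifdef CONFIG_ARM64_USER_VA_BITS_52
        #define MAX_USER_VA_BITS  52
        #else
        #define MAX_USER_VA_BITS  VA_BITS
        #endif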
  2. 11 Dec, 2018 3 commits
    • arm64: fix ARM64_USER_VA_BITS_52 builds · 4d08d20f
      Arnd Bergmann authored
      In some randconfig builds, the new CONFIG_ARM64_USER_VA_BITS_52
      triggered a build failure:
      
      arch/arm64/mm/proc.S:287: Error: immediate out of range
      
      As it turns out, none of the existing default values for PGTABLE_LEVELS
      matched the new option, so it was being set incorrectly.
      This fixes the calculation of CONFIG_PGTABLE_LEVELS to consider all
      page-size/VA-size combinations again.
      
      Fixes: 68d23da4 ("arm64: Kconfig: Re-jig CONFIG options for 52-bit VA")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
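      A sketch of the kind of Kconfig calculation involved; the precise set of
      defaults is an assumption based on the commit description. The key point
      is that the 52-bit user VA option (which implies 64K pages) must also
      yield three levels, otherwise no default matches at all:

        config PGTABLE_LEVELS
                int
                default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36
                default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42
                default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52)
                default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39
                default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
                default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48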
    • arm64: preempt: Fix big-endian when checking preempt count in assembly · 7faa313f
      Will Deacon authored
      Commit 39624469 ("arm64: preempt: Provide our own implementation of
      asm/preempt.h") extended the preempt count field in struct thread_info
      to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag
      indicating whether or not the current task needs rescheduling.
      
      Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point
      to this new field, the assembly usage was left untouched, meaning that a
      32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns
      the reschedule flag instead of the count.
      
      Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field,
      we're actually better off reworking the two assembly users so that they
      operate on the whole 64-bit value, rather than inspecting the thread
      flags separately in order to determine whether a reschedule is needed.
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reported-by: "kernelci.org bot" <bot@kernelci.org>
      Tested-by: Kevin Hilman <khilman@baylibre.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
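      A self-contained sketch of the layout implied above (the names are
      hypothetical, not necessarily the kernel's). It shows why a 32-bit load
      at the start of the 64-bit field returns the flag rather than the count
      on a big-endian machine:

        #include <linux/types.h>

        /* The 64-bit preempt field: a 32-bit count plus a 32-bit
         * need-resched flag sharing a single 64-bit slot. */
        union preempt_layout {
                u64 preempt_count;         /* what a 64-bit asm load sees */
                struct {
        #ifdef CONFIG_CPU_BIG_ENDIAN
                        u32 need_resched;  /* lowest offset on BE... */
                        u32 count;
        #else
                        u32 count;         /* ...but the count comes first on LE */
                        u32 need_resched;
        #endif
                };
        };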
    • arm64: kexec_file: include linux/vmalloc.h · 732291c4
      Arnd Bergmann authored
      This is needed for compilation in some configurations that don't
      include it implicitly:
      
      arch/arm64/kernel/machine_kexec_file.c: In function 'arch_kimage_file_post_load_cleanup':
      arch/arm64/kernel/machine_kexec_file.c:37:2: error: implicit declaration of function 'vfree'; did you mean 'kvfree'? [-Werror=implicit-function-declaration]
      
      Fixes: 52b2a8af ("arm64: kexec_file: load initrd and device-tree")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
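      The fix itself is the one-line include named in the title, a sketch of
      which would sit near the top of machine_kexec_file.c:

        /* Declare vfree() explicitly instead of relying on another header
         * pulling in <linux/vmalloc.h> only in some configurations. */
        #include <linux/vmalloc.h>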
  3. 10 Dec, 2018 33 commits
  4. 07 Dec, 2018 2 commits
    • arm64: cmpxchg: Use "K" instead of "L" for ll/sc immediate constraint · 42305099
      Will Deacon authored
      The "L" AArch64 machine constraint, which we use for the "old" value in
      an LL/SC cmpxchg(), generates an immediate that is suitable for a 64-bit
      logical instruction. However, for cmpxchg() operations on types smaller
      than 64 bits, this constraint can result in an invalid instruction which
      is correctly rejected by GAS, such as EOR W1, W1, #0xffffffff.
      
      Whilst we could special-case the constraint based on the cmpxchg size,
      it's far easier to change the constraint to "K" and put up with using
      a register for large 64-bit immediates. For out-of-line LL/SC atomics,
      this is all moot anyway.
      Reported-by: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
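      An illustrative example of the constraint difference, using a
      hypothetical helper rather than the kernel's actual cmpxchg code: "L"
      accepts any immediate valid for the 64-bit logical instructions, so a
      32-bit EOR can be handed an unencodable immediate, whereas "K" only
      accepts immediates that are also valid in the 32-bit forms:

        /* Hypothetical helper: flip the low byte of a 32-bit value.
         * 0xff is a valid 32-bit bitmask immediate, so "K" accepts it;
         * a value like 0xffffffff would be rejected by the compiler
         * instead of producing an instruction GAS cannot assemble. */
        static inline unsigned int flip_low_byte(unsigned int x)
        {
                unsigned int ret;

                asm("eor %w0, %w1, %2"
                    : "=r" (ret)
                    : "r" (x), "K" (0xffU));
                return ret;
        }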
    • arm64: percpu: Rewrite per-cpu ops to allow use of LSE atomics · 959bf2fd
      Will Deacon authored
      Our percpu code is a bit of an inconsistent mess:
      
        * It rolls its own xchg(), but reuses cmpxchg_local()
        * It uses various different flavours of preempt_{enable,disable}()
        * It returns values even for the non-returning RmW operations
        * It makes no use of LSE atomics outside of the cmpxchg() ops
        * There are individual macros for different sizes of access, but these
          are all funneled through a switch statement rather than dispatched
          directly to the relevant case
      
      This patch rewrites the per-cpu operations to address these shortcomings.
      Whilst the new code is a lot cleaner, the big advantage is that we can
      use the non-returning ST- atomic instructions when we have LSE.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
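      As an illustration of the final point, a standalone sketch of a
      non-returning add using the LSE STADD instruction. This is not the
      kernel's implementation (which patches between LL/SC and LSE
      alternatives at boot) and assumes an ARMv8.1 target (-march=armv8.1-a):

        /* STADD is the no-result form of LDADD: no exclusive-monitor retry
         * loop and no destination register when the old value isn't needed. */
        static inline void stadd_u64(unsigned long *ptr, unsigned long val)
        {
                asm volatile("stadd %[v], %[p]"
                             : [p] "+Q" (*ptr)
                             : [v] "r" (val));
        }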