  1. 10 Feb, 2022 1 commit
  2. 09 Feb, 2022 10 commits
  3. 31 Jan, 2022 2 commits
  4. 25 Jan, 2022 5 commits
  5. 24 Jan, 2022 2 commits
  6. 06 Jan, 2022 1 commit
    • ARM: 9176/1: avoid literal references in inline assembly · 5fe41793
      Ard Biesheuvel authored
      Nathan reports that the new get_current() and per-CPU offset accessors
      may cause problems at build time due to the use of a literal to hold the
      address of the respective variables. This is due to the fact that LLD
      before v14 does not support the PC-relative group relocations that are
      normally used for this, and the fallback relies on literals but does not
      emit the literal pools explicitly using the .ltorg directive.
      
      ./arch/arm/include/asm/current.h:53:6: error: out of range pc-relative fixup value
              asm(LOAD_SYM_ARMV6(%0, __current) : "=r"(cur));
                  ^
      ./arch/arm/include/asm/insn.h:25:2: note: expanded from macro 'LOAD_SYM_ARMV6'
              "       ldr     " #reg ", =" #sym "                     nt"
              ^
      <inline asm>:1:3: note: instantiated into assembly here
                      ldr     r0, =__current
                      ^
      
      Since emitting a literal pool in this particular case is not possible,
      let's avoid LOAD_SYM_ARMV6() entirely here, and use an ordinary C
      assignment instead.
      
      As it turns out, there are other such cases, and here, using .ltorg to
      emit the literal pool within range of the LDR instruction would be
      possible due to the presence of an unconditional branch right after it.
      Unfortunately, putting .ltorg directives in subsections appears to
      confuse the Clang inline assembler, resulting in similar errors even
      though the .ltorg is most definitely within range.
      
      So let's fix this by emitting the literal explicitly instead of relying
      on the assembler to figure it out. This means we have to move the
      fallback out of the LOAD_SYM_ARMV6() macro and into the callers (a
      sketch of the two approaches follows this entry).
      
      Link: https://github.com/ClangBuiltLinux/linux/issues/1551
      
      Fixes: 9c46929e ("ARM: implement THREAD_INFO_IN_TASK for uniprocessor systems")
      Reported-by: Nathan Chancellor <natechancellor@gmail.com>
      Tested-by: Nathan Chancellor <nathan@kernel.org>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
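      A minimal C sketch of the two approaches, assuming a hypothetical global
      symbol some_symbol (this is not the kernel's LOAD_SYM_ARMV6() code, only
      an illustration of the difference between relying on an assembler-managed
      literal pool and emitting the literal word explicitly):

      unsigned long some_symbol;      /* hypothetical symbol, for illustration */

      /*
       * Variant 1: "ldr reg, =sym" asks the assembler to allocate a literal
       * pool entry. From inline assembly there is no guarantee that a pool
       * (.ltorg) is emitted within the ~4 KiB range of the LDR, which is the
       * failure mode quoted above.
       */
      static inline unsigned long load_sym_pool(void)
      {
              unsigned long ret;

              asm("ldr %0, =some_symbol" : "=r" (ret));
              return ret;
      }

      /*
       * Variant 2: emit the literal word explicitly, right after a branch
       * that jumps over it, so no assembler-managed pool is needed and the
       * PC-relative load is always in range.
       */
      static inline unsigned long load_sym_explicit(void)
      {
              unsigned long ret;

              asm("ldr     %0, 1f                 \n\t"
                  "b       2f                     \n\t"
                  ".align  2                      \n\t"
                  "1:      .long some_symbol      \n\t"
                  "2:"
                  : "=r" (ret));
              return ret;
      }

      Variant 2 only mirrors the idea conceptually; in the kernel the fallback
      now lives in the callers of LOAD_SYM_ARMV6(), as described above.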
  7. 05 Jan, 2022 1 commit
  8. 17 Dec, 2021 1 commit
  9. 06 Dec, 2021 13 commits
  10. 03 Dec, 2021 4 commits
    • ARM: riscpc: use GENERIC_IRQ_MULTI_HANDLER · c1fe8d05
      Arnd Bergmann authored
      This is one of the last platforms using the old entry path.
      While this code path is spread over a few files, it is fairly
      straightforward to convert it into an equivalent C version,
      leaving the existing algorithm and all the priority handling
      the same.
      
      Unlike most irqchip drivers, this means reading the status
      register(s) in a loop and always handling the highest-priority
      IRQ first (a C sketch of this pattern follows this entry).
      
      The IOMD_IRQREQC and IOMD_IRQREQD registers are not actually
      used here, but I left the code in place for the time being,
      to keep the conversion as direct as possible. It could be
      removed in a cleanup on top.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      [ardb: drop obsolete IOMD_IRQREQC/IOMD_IRQREQD handling]
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Tested-by: Marc Zyngier <maz@kernel.org>
      Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
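      A hedged C sketch of that pattern (register offsets, the RPC_IRQ_BASE_*
      numbering and the bit-to-priority mapping are placeholders, not the
      actual RiscPC conversion): read the status registers in priority order,
      dispatch one pending IRQ, then start again from the top until nothing
      is pending.

      #include <linux/bitops.h>
      #include <linux/io.h>
      #include <linux/irq.h>
      #include <linux/ptrace.h>
      #include <linux/types.h>

      #define RPC_IRQ_BASE_A  0       /* hypothetical Linux IRQ number bases */
      #define RPC_IRQ_BASE_B  8

      static void __iomem *iomd_base; /* assumed to be ioremap()'ed elsewhere */

      static void rpc_handle_irq_sketch(struct pt_regs *regs)
      {
              bool handled;

              do {
                      u8 status;

                      handled = false;

                      /*
                       * Highest-priority group first; after dispatching one
                       * IRQ, re-read the status registers from the top.
                       */
                      status = readb(iomd_base + 0x14); /* placeholder offset */
                      if (status) {
                              generic_handle_irq(RPC_IRQ_BASE_B + __ffs(status));
                              handled = true;
                              continue;
                      }

                      status = readb(iomd_base + 0x10); /* placeholder offset */
                      if (status) {
                              generic_handle_irq(RPC_IRQ_BASE_A + __ffs(status));
                              handled = true;
                      }
              } while (handled);
      }

      With GENERIC_IRQ_MULTI_HANDLER, a handler of this shape would be
      registered once at init time via set_handle_irq().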
    • ARM: riscpc: drop support for IOMD_IRQREQC/IOMD_IRQREQD IRQ groups · d60ff2e7
      Ard Biesheuvel authored
      Neither IOMD_IRQREQC nor IOMD_IRQREQD is ever defined, so any
      conditionally compiled code that depends on them is dead code and can
      be removed (illustrated by the toy example after this entry).
      Suggested-by: Russell King <linux@armlinux.org.uk>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
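      A toy illustration of the reasoning (standalone, not kernel code): since
      IOMD_IRQREQC is never #defined anywhere, the preprocessor drops the
      guarded block in every configuration, so it is dead code.

      #include <stdio.h>

      int main(void)
      {
      #ifdef IOMD_IRQREQC
              printf("IRQ group C handling\n");       /* never compiled */
      #else
              printf("IOMD_IRQREQC is not defined; group C code is compiled out\n");
      #endif
              return 0;
      }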
    • ARM: implement support for vmap'ed stacks · a1c510d0
      Ard Biesheuvel authored
      Wire up the generic support for managing task stack allocations via vmalloc,
      and implement the entry code that detects whether we faulted because of a
      stack overrun (or a future stack overrun caused by pushing the pt_regs
      array).
      
      While this adds a fair amount of tricky entry asm code, it should be
      noted that it only adds a TST + branch to the svc_entry path. The code
      implementing the non-trivial handling of the overflow stack is emitted
      out-of-line into the .text section.
      
      Since on ARM, we rely on do_translation_fault() to keep PMD level page
      table entries that cover the vmalloc region up to date, we need to
      ensure that we don't hit such a stale PMD entry when accessing the
      stack. So we do a dummy read from the new stack while still running from
      the old one on the context switch path, and bump the vmalloc_seq counter
      when PMD level entries in the vmalloc range are modified, so that the MM
      switch fetches the latest version of the entries (a sketch of the dummy
      read follows this entry).
      
      Note that we need to increase the per-mode stack by 1 word, to gain some
      space to stash a GPR until we know it is safe to touch the stack.
      However, due to the cacheline alignment of the struct, this does not
      actually increase the memory footprint of the struct stack array at all.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Tested-by: Keith Packard <keithpac@amazon.com>
      Tested-by: Marc Zyngier <maz@kernel.org>
      Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
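      A hedged C sketch of the dummy read described above (probe_new_stack()
      is a made-up helper, not the actual arch/arm context-switch code): touch
      the new task's stack while still running on the old one, so that a stale
      vmalloc-area PMD entry is faulted in and fixed up before SP actually
      points into that stack.

      #include <linux/compiler.h>
      #include <linux/sched.h>
      #include <linux/sched/task_stack.h>

      static inline void probe_new_stack(struct task_struct *next)
      {
              /*
               * A single discarded read is enough to trigger the translation
               * fault on a stale PMD entry, so that the fault handler copies
               * the up-to-date entry before we switch onto this stack.
               */
              (void)READ_ONCE(*(unsigned long *)task_stack_page(next));
      }

      The complementary piece described above is bumping the vmalloc_seq
      counter whenever vmalloc-range PMD entries change, so that the MM
      switch path knows it has to pick up the new entries.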
    • ARM: entry: rework stack realignment code in svc_entry · ae5cc07d
      Ard Biesheuvel authored
      The original Thumb-2 enablement patches updated the stack realignment
      code in svc_entry to work around the lack of a STMIB instruction in
      Thumb-2, by subtracting 4 from the frame size, inverting the sense of
      the misalignment check, and changing to a STMIA instruction plus a final
      stack push of a 4-byte quantity, so that the stack ends up aligned at
      the end of the sequence. It also pushes and pops R0 to the
      stack in order to have a temp register that Thumb-2 allows in general
      purpose ALU instructions, as TST using SP is not permitted.
      
      Both are a bit problematic for vmap'ed stacks, as using the stack is
      only permitted after we decide that we did not overflow the stack, or
      have already switched to the overflow stack.
      
      As for the alignment check: the current approach creates a corner case
      where, if the initial SUB of SP ends up right at the start of the stack,
      we will end up subtracting another 8 bytes and overflowing it.  This
      means we would need to add the overflow check *after* the SUB that
      deliberately misaligns the stack. However, this would require us to keep
      local state (i.e., whether we performed the subtract or not) across the
      overflow check, but without any GPRs or stack available.
      
      So let's switch to an approach where we don't use the stack, and where
      the alignment check of the stack pointer occurs in the usual way, as
      this is guaranteed not to result in overflow. This means we will be able
      to do the overflow check first.
      
      While at it, switch to R1 so that the mode stack pointer in R0 remains
      accessible (an illustrative sequence follows this entry).
      Acked-by: Nicolas Pitre <nico@fluxnic.net>
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Tested-by: Marc Zyngier <maz@kernel.org>
      Tested-by: Vladimir Murzin <vladimir.murzin@arm.com> # ARMv7M
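      An ARM-mode assembly sketch of the reworked idea, under the assumptions
      that FRAME_SIZE is a placeholder (a multiple of 8) and that R0 holds a
      value that must be preserved (the mode stack pointer in the real code);
      this is not the kernel's actual svc_entry sequence, and Thumb-2 would
      additionally need an IT block before the conditional SUB:

              .equ    FRAME_SIZE, 72          @ placeholder frame size, multiple of 8

              mov     r1, sp                  @ copy SP into a scratch register
              tst     r1, #4                  @ is SP only 4-byte aligned?
              sub     sp, sp, #FRAME_SIZE     @ carve out the register frame
              subne   sp, sp, #4              @ drop 4 more bytes to realign to 8

              @ SP is now 8-byte aligned and no stack memory has been written,
              @ so the overflow check can run without first spilling a register
              @ to the stack.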