  1. 20 May, 2022 8 commits
    • ARM: 9201/1: spectre-bhb: rely on linker to emit cross-section literal loads · ad12c2f1
      Ard Biesheuvel authored
      The assembler does not permit 'LDR PC, <sym>' when the symbol lives in a
      different section, which is why we have been relying on rather fragile
      open-coded arithmetic to load the address of the vector_swi routine into
      the program counter using a single LDR instruction in the SWI slot in
      the vector table. The literal was moved into a different section in
      commit 19accfd3 ("ARM: move vector stubs") to ensure that the
      vector stubs page does not need to be mapped readable for user space,
      which is the case for the vector page itself, as it carries the kuser
      helpers as well.
      
      So the cross-section literal load is open-coded, and this relies on the
      address of vector_swi being at the very start of the vector stubs page;
      we won't notice if we get it wrong until we boot the kernel and see it
      break. Fortunately, it was guaranteed to break, so this was fragile but
      not problematic.
      
      Now that we have added two other variants of the vector table, we have
      three occurrences of the same trick, and so the size of our
      ISA/compiler/CPU validation space has tripled, in a way that may cause
      regressions to be observed only when the image in question is booted on
      a CPU that exercises that particular vector table.
      
      So let's switch to true cross-section references, and let the linker
      fix them up like it fixes up all the other cross-section references in
      the vector page.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
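
      A minimal standalone sketch of the assembler restriction described
      above, written as GNU C with a top-level asm() block; the section and
      symbol names are invented and this is not the kernel's entry-armv.S
      code. A PC-relative LDR must be resolved by the assembler, so its
      literal has to live in the same section, whereas a plain data word
      naming a symbol in another section simply gets a relocation that the
      linker fixes up:

      /* Stand-in for vector_swi; any symbol in another section will do. */
      void example_target(void) { }

      asm(
      "        .section .example.vectors, \"ax\", %progbits\n"
      "        ldr     pc, .Lliteral           @ OK: literal is in this section\n"
      "        @ ldr   pc, example_target      @ rejected: symbol lives in .text\n"
      ".Lliteral:\n"
      "        .word   example_target          @ cross-section word, fixed up by the linker\n"
      "        .text\n"
      );
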
    • ARM: 9200/1: spectre-bhb: avoid cross-subsection jump using a numbered label · 1290c70d
      Ard Biesheuvel authored
      To avoid potential confusion caused by numbered labels appearing in a
      different order in the assembler output due to the use of subsections,
      use a named local label to jump back into the vector handler code from
      the associated loop8 mitigation sequence.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
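
      The issue can be pictured with a small GNU C sketch using invented
      names (this is not the kernel's vector macro): code placed in
      .subsection 1 is emitted after the subsection-0 code in the assembler
      output, so a numbered label and the "1b" reference that points at it
      can end up far apart and in reversed order in the listing, which is
      easy to misread; a named local label makes the branch target explicit:

      asm(
      "        .text\n"
      ".Lhandler_resume:                       @ named local label in the handler\n"
      "        bx      lr\n"
      "        .subsection 1                   @ out-of-line mitigation code\n"
      "example_loop8_mitigation:\n"
      "        b       .Lhandler_resume        @ unambiguous cross-subsection jump\n"
      "        .previous\n"
      );
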
    • ARM: 9199/1: spectre-bhb: use local DSB and elide ISB in loop8 sequence · 892c608a
      Ard Biesheuvel authored
      The loop8 mitigation for Spectre-BHB requires only a CPU-local DSB
      rather than a system-wide one, which is much more costly. And by the
      same reasoning that justifies omitting the ISB after BPIALL, we can
      elide the ISB here too and rely on the exception return for context
      synchronization.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
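
      For reference, the barrier distinction can be sketched in GNU C inline
      assembly (ARMv7); the helper names are invented and this is not the
      kernel's loop8 macro. A DSB restricted to the non-shareable domain only
      waits for completion on the executing CPU, while DSB SY waits
      system-wide, and the ISB can be skipped when an exception return, which
      is itself a context-synchronizing event, follows shortly afterwards:

      static inline void cpu_local_dsb(void)
      {
              asm volatile("dsb nsh" : : : "memory"); /* CPU-local completion only */
      }

      static inline void system_wide_dsb(void)
      {
              asm volatile("dsb sy" : : : "memory");  /* full system, much costlier */
      }
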
    • ARM: 9198/1: spectre-bhb: simplify BPIALL vector macro · c4f486f1
      Ard Biesheuvel authored
      The BPIALL mitigation for Spectre-BHB adds a single instruction, which
      does not clobber any registers, to the handler sequence. Given that
      these sequences are 10 instructions long, they don't fit neatly into a
      cacheline anyway, so we can simply move that single instruction to the
      start of the unmitigated one, and rearrange the symbol names accordingly.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
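
      The resulting layout can be pictured with a small GNU C sketch using
      invented symbol names (not the kernel's vector macro): since BPIALL
      only reads a register, the mitigated entry point can consist of that
      one MCR placed directly in front of the unmitigated entry point, which
      it then falls through into:

      asm(
      "        .text\n"
      "example_vector_bpiall:\n"
      "        mcr     p15, 0, r0, c7, c5, 6   @ BPIALL: invalidate branch predictors\n"
      "example_vector_plain:                   @ unmitigated entry, reached by fall-through\n"
      "        bx      lr                      @ stand-in for the real handler sequence\n"
      );
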
    • ARM: 9195/1: entry: avoid explicit literal loads · 50807460
      Ard Biesheuvel authored
      ARMv7 has MOVW/MOVT instruction pairs to load symbol addresses into
      registers without having to rely on literal loads that go via the
      D-cache.  For older cores, we now support a similar arrangement, based
      on PC-relative group relocations.
      
      This means we can elide most literal loads entirely from the entry path,
      by switching to the ldr_va macro to emit the appropriate sequence
      depending on the target architecture revision.
      
      While at it, switch to the bl_r macro for invoking the right PABT/DABT
      helpers instead of setting the LR register explicitly, which does not
      play well with cores that speculate across function returns.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
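
      As a rough illustration in GNU C inline assembly (ARMv7), with an
      invented symbol name; the kernel wraps this choice in its ldr_va macro,
      which is not reproduced here. The MOVW/MOVT pair encodes the 32-bit
      address directly in the instruction stream, so no D-cache access is
      needed, unlike a literal load that fetches the address from a nearby
      data word:

      char example_symbol[4];         /* hypothetical symbol to take the address of */

      static inline void *example_symbol_addr(void)
      {
              void *p;

              asm("movw    %0, #:lower16:example_symbol\n\t"
                  "movt    %0, #:upper16:example_symbol"
                  : "=r" (p));
              return p;
      }

      On pre-v7 cores the kernel instead builds the address with PC-relative
      group relocations, as the message above notes, so those parts avoid the
      literal load as well.
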
    • ARM: 9194/1: assembler: simplify ldr_this_cpu for !SMP builds · 952f0331
      Ard Biesheuvel authored
      When CONFIG_SMP is not defined, the CPU offset is always zero, and so
      we can simplify the sequence used to load a per-CPU variable.
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
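
      The reasoning translates into the following C-level sketch, with
      invented names; the kernel's ldr_this_cpu is an assembler macro and is
      not reproduced here. On SMP the per-CPU offset has to be loaded and
      added to the variable's address, but with CONFIG_SMP disabled the
      offset is known to be zero at build time, so the access collapses into
      a plain load of the variable itself:

      extern unsigned long example_percpu_offset(void);  /* hypothetical offset helper */

      unsigned long example_counter;                     /* hypothetical per-CPU variable */

      static inline unsigned long read_example_counter(void)
      {
      #ifdef CONFIG_SMP
              /* SMP: add this CPU's offset to reach its private copy. */
              return *(unsigned long *)((unsigned long)&example_counter +
                                        example_percpu_offset());
      #else
              /* UP: the offset is always zero, so read the variable directly. */
              return example_counter;
      #endif
      }
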
    • ARM: 9192/1: amba: fix memory leak in amba_device_try_add() · 7719a68b
      Wang Kefeng authored
      If amba_device_try_add() returns an error code (other than
      -EPROBE_DEFER), memory is leaked when the amba device fails to read its
      periphid.
      
      unreferenced object 0xc1c60800 (size 1024):
        comm "swapper/0", pid 1, jiffies 4294937333 (age 75.200s)
        hex dump (first 32 bytes):
          40 40 db c1 04 08 c6 c1 04 08 c6 c1 00 00 00 00  @@..............
          00 d9 c1 c1 84 6f 38 c1 00 00 00 00 01 00 00 00  .....o8.........
        backtrace:
          [<(ptrval)>] kmem_cache_alloc_trace+0x168/0x2b4
          [<(ptrval)>] amba_device_alloc+0x38/0x7c
          [<(ptrval)>] of_platform_bus_create+0x2f4/0x4e8
          [<(ptrval)>] of_platform_bus_create+0x380/0x4e8
          [<(ptrval)>] of_platform_bus_create+0x380/0x4e8
          [<(ptrval)>] of_platform_bus_create+0x380/0x4e8
          [<(ptrval)>] of_platform_populate+0x70/0xc4
          [<(ptrval)>] of_platform_default_populate_init+0xb4/0xcc
          [<(ptrval)>] do_one_initcall+0x58/0x218
          [<(ptrval)>] kernel_init_freeable+0x250/0x29c
          [<(ptrval)>] kernel_init+0x24/0x148
          [<(ptrval)>] ret_from_fork+0x14/0x1c
          [<00000000>] 0x0
      unreferenced object 0xc1db4040 (size 64):
        comm "swapper/0", pid 1, jiffies 4294937333 (age 75.200s)
        hex dump (first 32 bytes):
          31 63 30 66 30 30 30 30 2e 77 64 74 00 00 00 00  1c0f0000.wdt....
          00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
        backtrace:
          [<(ptrval)>] __kmalloc_track_caller+0x19c/0x2f8
          [<(ptrval)>] kvasprintf+0x60/0xcc
          [<(ptrval)>] kvasprintf_const+0x54/0x78
          [<(ptrval)>] kobject_set_name_vargs+0x34/0xa8
          [<(ptrval)>] dev_set_name+0x40/0x5c
          [<(ptrval)>] of_device_make_bus_id+0x128/0x1f8
          [<(ptrval)>] of_platform_bus_create+0x4dc/0x4e8
          [<(ptrval)>] of_platform_bus_create+0x380/0x4e8
          [<(ptrval)>] of_platform_bus_create+0x380/0x4e8
          [<(ptrval)>] of_platform_bus_create+0x380/0x4e8
          [<(ptrval)>] of_platform_populate+0x70/0xc4
          [<(ptrval)>] of_platform_default_populate_init+0xb4/0xcc
          [<(ptrval)>] do_one_initcall+0x58/0x218
          [<(ptrval)>] kernel_init_freeable+0x250/0x29c
          [<(ptrval)>] kernel_init+0x24/0x148
          [<(ptrval)>] ret_from_fork+0x14/0x1c
      
      Fix the leaks by adding an amba_device_put() call to release the device
      name and the amba device.
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
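
      A minimal sketch of the ownership rule implied by the fix, with an
      invented caller and base address; this is not the actual diff. A device
      obtained from amba_device_alloc() that cannot be added for any reason
      other than probe deferral must be released with amba_device_put(),
      which frees both the kobject name set via dev_set_name() and the
      struct amba_device itself:

      #include <linux/amba/bus.h>
      #include <linux/ioport.h>
      #include <linux/sizes.h>

      static int example_add_amba_device(const char *name, resource_size_t base)
      {
              struct amba_device *dev;
              int ret;

              dev = amba_device_alloc(name, base, SZ_4K);
              if (!dev)
                      return -ENOMEM;

              ret = amba_device_add(dev, &iomem_resource);
              if (ret) {
                      if (ret != -EPROBE_DEFER)
                              amba_device_put(dev);   /* frees device name and amba device */
                      return ret;
              }

              return 0;
      }
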
    • ARM: 9193/1: amba: Add amba_read_periphid() helper · 1f44de0f
      Wang Kefeng authored
      Add a new amba_read_periphid() helper to simplify error handling.
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
  2. 18 May, 2022 2 commits
  3. 03 Apr, 2022 8 commits
  4. 02 Apr, 2022 22 commits