1. 09 Dec, 2020 6 commits
    • Merge remote-tracking branch 'arm64/for-next/perf' into for-next/core · d8602f8b
      Catalin Marinas authored
      * arm64/for-next/perf:
        perf/imx_ddr: Add system PMU identifier for userspace
        bindings: perf: imx-ddr: add compatible string
        arm64: Fix build failure when HARDLOCKUP_DETECTOR_PERF is enabled
        arm64: Enable perf events based hard lockup detector
        perf/imx_ddr: Add stop event counters support for i.MX8MP
        perf/smmuv3: Support sysfs identifier file
        drivers/perf: hisi: Add identifier sysfs file
        perf: remove duplicate check on fwnode
        driver/perf: Add PMU driver for the ARM DMC-620 memory controller
      d8602f8b
    • Merge branch 'for-next/misc' into for-next/core · ba4259a6
      Catalin Marinas authored
      * for-next/misc:
        : Miscellaneous patches
        arm64: vmlinux.lds.S: Drop redundant *.init.rodata.*
        kasan: arm64: set TCR_EL1.TBID1 when enabled
        arm64: mte: optimize asynchronous tag check fault flag check
        arm64/mm: add fallback option to allocate virtually contiguous memory
        arm64/smp: Drop the macro S(x,s)
        arm64: consistently use reserved_pg_dir
        arm64: kprobes: Remove redundant kprobe_step_ctx
      
      # Conflicts:
      #	arch/arm64/kernel/vmlinux.lds.S
      ba4259a6
    • Merge branch 'for-next/uaccess' into for-next/core · e0f7a8d5
      Catalin Marinas authored
      * for-next/uaccess:
        : uaccess routines clean-up and set_fs() removal
        arm64: mark __system_matches_cap as __maybe_unused
        arm64: uaccess: remove vestigal UAO support
        arm64: uaccess: remove redundant PAN toggling
        arm64: uaccess: remove addr_limit_user_check()
        arm64: uaccess: remove set_fs()
        arm64: uaccess cleanup macro naming
        arm64: uaccess: split user/kernel routines
        arm64: uaccess: refactor __{get,put}_user
        arm64: uaccess: simplify __copy_user_flushcache()
        arm64: uaccess: rename privileged uaccess routines
        arm64: sdei: explicitly simulate PAN/UAO entry
        arm64: sdei: move uaccess logic to arch/arm64/
        arm64: head.S: always initialize PSTATE
        arm64: head.S: cleanup SCTLR_ELx initialization
        arm64: head.S: rename el2_setup -> init_kernel_el
        arm64: add C wrappers for SET_PSTATE_*()
        arm64: ensure ERET from kthread is illegal
      e0f7a8d5
    • Merge branches 'for-next/kvm-build-fix', 'for-next/va-refactor',... · 3c09ec59
      Catalin Marinas authored
      Merge branches 'for-next/kvm-build-fix', 'for-next/va-refactor', 'for-next/lto', 'for-next/mem-hotplug', 'for-next/cppc-ffh', 'for-next/pad-image-header', 'for-next/zone-dma-default-32-bit', 'for-next/signal-tag-bits' and 'for-next/cmdline-extended' into for-next/core
      
      * for-next/kvm-build-fix:
        : Fix KVM build issues with 64K pages
        KVM: arm64: Fix build error in user_mem_abort()
      
      * for-next/va-refactor:
        : VA layout changes
        arm64: mm: don't assume struct page is always 64 bytes
        Documentation/arm64: fix RST layout of memory.rst
        arm64: mm: tidy up top of kernel VA space
        arm64: mm: make vmemmap region a projection of the linear region
        arm64: mm: extend linear region for 52-bit VA configurations
      
      * for-next/lto:
        : Upgrade READ_ONCE() to RCpc acquire on arm64 with LTO
        arm64: lto: Strengthen READ_ONCE() to acquire when CONFIG_LTO=y
        arm64: alternatives: Remove READ_ONCE() usage during patch operation
        arm64: cpufeatures: Add capability for LDAPR instruction
        arm64: alternatives: Split up alternative.h
        arm64: uaccess: move uao_* alternatives to asm-uaccess.h
      
      * for-next/mem-hotplug:
        : Memory hotplug improvements
        arm64/mm/hotplug: Ensure early memory sections are all online
        arm64/mm/hotplug: Enable MEM_OFFLINE event handling
        arm64/mm/hotplug: Register boot memory hot remove notifier earlier
        arm64: mm: account for hotplug memory when randomizing the linear region
      
      * for-next/cppc-ffh:
        : Add CPPC FFH support using arm64 AMU counters
        arm64: abort counter_read_on_cpu() when irqs_disabled()
        arm64: implement CPPC FFH support using AMUs
        arm64: split counter validation function
        arm64: wrap and generalise counter read functions
      
      * for-next/pad-image-header:
        : Pad Image header to 64KB and unmap it
        arm64: head: tidy up the Image header definition
        arm64/head: avoid symbol names pointing into first 64 KB of kernel image
        arm64: omit [_text, _stext) from permanent kernel mapping
      
      * for-next/zone-dma-default-32-bit:
        : Default to 32-bit wide ZONE_DMA (previously reduced to 1GB for RPi4)
        of: unittest: Fix build on architectures without CONFIG_OF_ADDRESS
        mm: Remove examples from enum zone_type comment
        arm64: mm: Set ZONE_DMA size based on early IORT scan
        arm64: mm: Set ZONE_DMA size based on devicetree's dma-ranges
        of: unittest: Add test for of_dma_get_max_cpu_address()
        of/address: Introduce of_dma_get_max_cpu_address()
        arm64: mm: Move zone_dma_bits initialization into zone_sizes_init()
        arm64: mm: Move reserve_crashkernel() into mem_init()
        arm64: Force NO_BLOCK_MAPPINGS if crashkernel reservation is required
        arm64: Ignore any DMA offsets in the max_zone_phys() calculation
      
      * for-next/signal-tag-bits:
        : Expose the FAR_EL1 tag bits in siginfo
        arm64: expose FAR_EL1 tag bits in siginfo
        signal: define the SA_EXPOSE_TAGBITS bit in sa_flags
        signal: define the SA_UNSUPPORTED bit in sa_flags
        arch: provide better documentation for the arch-specific SA_* flags
        signal: clear non-uapi flag bits when passing/returning sa_flags
        arch: move SA_* definitions to generic headers
        parisc: start using signal-defs.h
        parisc: Drop parisc special case for __sighandler_t
      
      * for-next/cmdline-extended:
        : Add support for CONFIG_CMDLINE_EXTENDED
        arm64: Extend the kernel command line from the bootloader
        arm64: kaslr: Refactor early init command line parsing
      3c09ec59
    • perf/imx_ddr: Add system PMU identifier for userspace · 881b0520
      Joakim Zhang authored
      The DDR Perf for i.MX8 is a system PMU whose AXI ID differs from SoC to
      SoC. Expose a system PMU identifier to userspace, which can be read from
      /sys/bus/event_source/devices/<PMU DEVICE>/identifier.
      Signed-off-by: Joakim Zhang <qiangqing.zhang@nxp.com>
      Reviewed-by: John Garry <john.garry@huawei.com>
      Link: https://lore.kernel.org/r/20201130114202.26057-3-qiangqing.zhang@nxp.com
      Signed-off-by: Will Deacon <will@kernel.org>
      881b0520
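
      A minimal sketch of how such a sysfs "identifier" attribute is wired up
      (the struct and identifier string below are illustrative assumptions,
      not the actual fsl_imx8_ddr_perf definitions):

      #include <linux/device.h>
      #include <linux/sysfs.h>

      /* Hypothetical per-PMU data; the real driver keeps this elsewhere. */
      struct ddr_pmu_sketch {
              const char *identifier;         /* e.g. "i.MX8MP" */
      };

      static ssize_t identifier_show(struct device *dev,
                                     struct device_attribute *attr, char *page)
      {
              struct ddr_pmu_sketch *pmu = dev_get_drvdata(dev);

              return sprintf(page, "%s\n", pmu->identifier);
      }
      static DEVICE_ATTR_RO(identifier);

      static struct attribute *ddr_perf_identifier_attrs[] = {
              &dev_attr_identifier.attr,
              NULL,
      };

      static const struct attribute_group ddr_perf_identifier_attr_group = {
              .attrs = ddr_perf_identifier_attrs,
      };

      Adding this group to the PMU's pmu.attr_groups[] is what makes the file
      appear under /sys/bus/event_source/devices/<PMU DEVICE>/identifier.
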
    • Joakim Zhang
      d0c00977
  2. 04 Dec, 2020 1 commit
  3. 03 Dec, 2020 1 commit
  4. 02 Dec, 2020 17 commits
    • arm64: uaccess: remove vestigal UAO support · 1517c4fa
      Mark Rutland authored
      Now that arm64 no longer uses UAO, remove the vestigial feature detection
      code and Kconfig text.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-13-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      1517c4fa
    • arm64: uaccess: remove redundant PAN toggling · 7cf283c7
      Mark Rutland authored
      Some code (e.g. futex) needs to make privileged accesses to userspace
      memory, and uses uaccess_{enable,disable}_privileged() in order to
      permit this. All other uaccess primitives use LDTR/STTR, and never need
      to toggle PAN.
      
      Remove the redundant PAN toggling.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-12-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      7cf283c7
    • arm64: uaccess: remove addr_limit_user_check() · b5a5a01d
      Mark Rutland authored
      Now that set_fs() is gone, addr_limit_user_check() is redundant. Remove
      the checks and associated thread flag.
      
      To ensure that _TIF_WORK_MASK can be used as an immediate value in an
      AND instruction (as it is in `ret_to_user`), TIF_MTE_ASYNC_FAULT is
      renumbered to keep the constituent bits of _TIF_WORK_MASK contiguous.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-11-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      b5a5a01d
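
      To see why the renumbering matters: AArch64 can only encode certain bit
      patterns as logical immediates, and a single contiguous run of flag bits
      always qualifies. An illustrative sketch (the bit numbers are
      assumptions, not the real arm64 layout):

      #include <linux/bits.h>

      #define TIF_SIGPENDING          0
      #define TIF_NEED_RESCHED        1
      #define TIF_NOTIFY_RESUME       2
      #define TIF_FOREIGN_FPSTATE     3
      #define TIF_UPROBE              4
      #define TIF_MTE_ASYNC_FAULT     5       /* renumbered next to the others */

      #define _TIF_WORK_MASK          GENMASK(5, 0)   /* 0x3f: one contiguous run */

      /*
       * 0x3f is a valid AArch64 logical immediate, so ret_to_user can do
       *      and     x2, x19, #_TIF_WORK_MASK
       * directly, without first loading the constant into a register.
       */
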
    • arm64: uaccess: remove set_fs() · 3d2403fd
      Mark Rutland authored
      Now that the uaccess primitives don't take addr_limit into account, we
      have no need to manipulate this via set_fs() and get_fs(). Remove
      support for these, along with some infrastructure this renders
      redundant.
      
      We no longer need to flip UAO to access kernel memory under KERNEL_DS,
      and head.S unconditionally clears UAO for all kernel configurations via
      an ERET in init_kernel_el. Thus, we don't need to dynamically flip UAO,
      nor do we need to context-switch it. However, we still need to adjust
      PAN during SDEI entry.
      
      Masking of __user pointers no longer needs to use the dynamic value of
      addr_limit, and can use a constant derived from the maximum possible
      userspace task size. A new TASK_SIZE_MAX constant is introduced for
      this, which is also used by core code. In configurations supporting
      52-bit VAs, this may include a region of unusable VA space above a
      48-bit TTBR0 limit, but never includes any portion of TTBR1.
      
      Note that TASK_SIZE_MAX is an exclusive limit, while USER_DS and
      KERNEL_DS were inclusive limits, and is converted to a mask by
      subtracting one.
      
      As the SDEI entry code repurposes the otherwise unnecessary
      pt_regs::orig_addr_limit field to store the TTBR1 of the interrupted
      context, for now we rename that to pt_regs::sdei_ttbr1. In future we can
      consider factoring that out.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: James Morse <james.morse@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-10-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      3d2403fd
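
      A rough C rendering of the pointer-masking change described above (the
      in-tree helper is branch-free inline asm; this sketch only shows the
      arithmetic, and assumes TASK_SIZE_MAX is a power of two so that
      subtracting one yields an all-ones mask):

      static inline void __user *mask_user_ptr_sketch(void __user *ptr)
      {
              unsigned long addr = (unsigned long)ptr;
              unsigned long mask = TASK_SIZE_MAX - 1; /* exclusive limit -> mask */

              /* Out-of-range pointers collapse to NULL instead of wrapping. */
              return (void __user *)(addr > mask ? 0UL : addr & mask);
      }
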
    • arm64: uaccess cleanup macro naming · 7b90dc40
      Mark Rutland authored
      Now the uaccess primitives use LDTR/STTR unconditionally, the
      uao_{ldp,stp,user_alternative} asm macros are misnamed, and have a
      redundant argument. Let's remove the redundant argument and rename these
      to user_{ldp,stp,ldst} respectively to clean this up.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-9-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      7b90dc40
    • arm64: uaccess: split user/kernel routines · fc703d80
      Mark Rutland authored
      This patch separates arm64's user and kernel memory access primitives
      into distinct routines, adding new __{get,put}_kernel_nofault() helpers
      to access kernel memory, upon which core code builds larger copy
      routines.
      
      The kernel access routines (using LDR/STR) are not affected by PAN (when
      legitimately accessing kernel memory), nor are they affected by UAO.
      Switching to KERNEL_DS may set UAO, but this does not adversely affect
      the kernel access routines.
      
      The user access routines (using LDTR/STTR) are not affected by PAN (when
      legitimately accessing user memory), but are affected by UAO. As these
      are only legitimate to use under USER_DS with UAO clear, this should not
      be problematic.
      
      Routines performing atomics to user memory (futex and deprecated
      instruction emulation) still need to transiently clear PAN, and these
      are left as-is. These are never used on kernel memory.
      
      Subsequent patches will refactor the uaccess helpers to remove redundant
      code, and will also remove the redundant PAN/UAO manipulation.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-8-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      fc703d80
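
      A hedged usage sketch of how callers pick between the two families once
      they are split (the helper below is hypothetical; real users of kernel
      accesses normally go through copy_from_kernel_nofault(), which also
      disables page faults around the access):

      #include <linux/uaccess.h>

      /* Hypothetical: read one long from either a kernel or a user address. */
      static long read_long_sketch(unsigned long addr, bool kernel,
                                   unsigned long *out)
      {
              if (kernel) {
                      unsigned long val;

                      /* Kernel access: plain LDR; a fault branches to the label. */
                      __get_kernel_nofault(&val, (unsigned long *)addr,
                                           unsigned long, fault);
                      *out = val;
                      return 0;
              }

              /* User access: unprivileged LDTR via the usual get_user(). */
              return get_user(*out, (unsigned long __user *)addr);

      fault:
              return -EFAULT;
      }
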
    • arm64: uaccess: refactor __{get,put}_user · f253d827
      Mark Rutland authored
      As a step towards implementing __{get,put}_kernel_nofault(), this patch
      splits most user-memory specific logic out of __{get,put}_user(), with
      the memory access and fault handling in new __{raw_get,put}_mem()
      helpers.
      
      For now the LDR/LDTR patching is left within the *get_mem() helpers, and
      will be removed in a subsequent patch.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-7-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f253d827
    • arm64: uaccess: simplify __copy_user_flushcache() · 9e94fdad
      Mark Rutland authored
      Currently __copy_user_flushcache() open-codes raw_copy_from_user(), and
      doesn't use uaccess_mask_ptr() on the user address. Let's have it call
      raw_copy_from_user(), which is both a simplification and ensures that
      user pointers are masked under speculation.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Robin Murphy <robin.murphy@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-6-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      9e94fdad
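
      The simplified function reads roughly as below (a sketch; it assumes the
      cache-maintenance helper keeps the __clean_dcache_area_pop() name used
      at the time):

      unsigned long __copy_user_flushcache(void *to, const void __user *from,
                                           unsigned long n)
      {
              unsigned long rc;

              /* raw_copy_from_user() masks 'from' under speculation for us. */
              rc = raw_copy_from_user(to, from, n);

              /* Clean what was actually copied out to the Point of Persistence. */
              __clean_dcache_area_pop(to, n - rc);

              return rc;
      }
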
    • arm64: uaccess: rename privileged uaccess routines · 923e1e7d
      Mark Rutland authored
      We currently have many uaccess_*{enable,disable}*() variants, which
      subsequent patches will cut down as part of removing set_fs() and
      friends. Once this simplification is made, most uaccess routines will
      only need to ensure that the user page tables are mapped in TTBR0, as is
      currently dealt with by uaccess_ttbr0_{enable,disable}().
      
      The existing uaccess_{enable,disable}() routines ensure that user page
      tables are mapped in TTBR0, and also disable PAN protections, which is
      necessary to be able to use atomics on user memory, but also permit
      unrelated privileged accesses to access user memory.
      
      As preparatory step, let's rename uaccess_{enable,disable}() to
      uaccess_{enable,disable}_privileged(), highlighting this caveat and
      discouraging wider misuse. Subsequent patches can reuse the
      uaccess_{enable,disable}() naming for the common case of ensuring the
      user page tables are mapped in TTBR0.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-5-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      923e1e7d
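
      A sketch of the renamed pair (the HW PAN helpers named here are
      placeholders, not the in-tree names; the point is that "privileged"
      buys both the TTBR0 switch and a cleared PAN, which most callers do
      not need):

      static inline void uaccess_enable_privileged(void)
      {
              uaccess_ttbr0_enable();  /* SW PAN: map the user tables in TTBR0 */
              __clear_hw_pan();        /* placeholder: PSTATE.PAN = 0, so plain
                                        * LDR/STR may touch user memory */
      }

      static inline void uaccess_disable_privileged(void)
      {
              __set_hw_pan();          /* placeholder: PSTATE.PAN = 1 again */
              uaccess_ttbr0_disable(); /* SW PAN: back to the reserved TTBR0 */
      }
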
    • arm64: sdei: explicitly simulate PAN/UAO entry · 2376e75c
      Mark Rutland authored
      In preparation for removing addr_limit and set_fs() we must decouple the
      SDEI PAN/UAO manipulation from the uaccess code, and explicitly
      reinitialize these as required.
      
      SDEI enters the kernel with a non-architectural exception, and prior to
      the most recent revision of the specification (ARM DEN 0054B), PSTATE
      bits (e.g. PAN, UAO) are not manipulated in the same way as for
      architectural exceptions. Notably, older versions of the spec can be
      read ambiguously as to whether PSTATE bits are inherited unchanged from
      the interrupted context or whether they are generated from scratch, with
      TF-A doing the latter.
      
      We have three cases to consider:
      
      1) The existing TF-A implementation of SDEI will clear PAN and clear UAO
         (along with other bits in PSTATE) when delivering an SDEI exception.
      
      2) In theory, implementations of SDEI prior to revision B could inherit
         PAN and UAO (along with other bits in PSTATE) unchanged from the
         interrupted context. However, in practice such implementations do not
         exist.
      
      3) Going forward, new implementations of SDEI must clear UAO, and
         depending on SCTLR_ELx.SPAN must either inherit or set PAN.
      
      As we can ignore (2) we can assume that upon SDEI entry, UAO is always
      clear, though PAN may be clear, inherited, or set per SCTLR_ELx.SPAN.
      Therefore, we must explicitly initialize PAN, but do not need to do
      anything for UAO.
      
      Considering what we need to do:
      
      * When set_fs() is removed, force_uaccess_begin() will have no HW
        side-effects. As this only clears UAO, which we can assume has already
        been cleared upon entry, this is not a problem. We do not need to add
        code to manipulate UAO explicitly.
      
      * PAN may be cleared upon entry (in case 1 above), so where a kernel is
        built to use PAN and this is supported by all CPUs, the kernel must
        set PAN upon entry to ensure expected behaviour.
      
      * PAN may be inherited from the interrupted context (in case 3 above),
        and so where a kernel is not built to use PAN or where PAN support is
        not uniform across CPUs, the kernel must clear PAN to ensure expected
        behaviour.
      
      This patch reworks the SDEI code accordingly, explicitly setting PAN to
      the expected state in all cases. To cater for the cases where the kernel
      does not use PAN or this is not uniformly supported by hardware we add a
      new cpu_has_pan() helper which can be used regardless of whether the
      kernel is built to use PAN.
      
      The existing system_uses_ttbr0_pan() is redefined in terms of
      system_uses_hw_pan() both for clarity and as a minor optimization when
      HW PAN is not selected.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-3-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2376e75c
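
      The three cases above reduce to a few lines at SDEI entry; a sketch
      using the helpers named in the commit message (the surrounding function
      is illustrative):

      static void sdei_pstate_fixup_sketch(void)
      {
              if (system_uses_hw_pan()) {
                      /* Kernel relies on HW PAN everywhere: make sure it is set. */
                      set_pstate_pan(1);
              } else if (cpu_has_pan()) {
                      /*
                       * Kernel does not use HW PAN (not built in, or support is
                       * not uniform), but this CPU may have entered with PAN
                       * set: clear it so uaccess behaves as expected.
                       */
                      set_pstate_pan(0);
              }
              /* UAO is already clear on entry (cases 1 and 3), nothing to do. */
      }
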
    • arm64: sdei: move uaccess logic to arch/arm64/ · a0ccf2ba
      Mark Rutland authored
      The SDEI support code is split across arch/arm64/ and drivers/firmware/,
      largely so that the arch-specific portions are under
      arch/arm64, and the management logic is under drivers/firmware/.
      However, exception entry fixups are currently under drivers/firmware.
      
      Let's move the exception entry fixups under arch/arm64/. This
      de-clutters the management logic, and puts all the arch-specific
      portions in one place. Doing this also allows the fixups to be applied
      earlier, so things like PAN and UAO will be in a known good state before
      we run other logic. This will also make subsequent refactoring easier.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201202131558.39270-2-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a0ccf2ba
    • arm64: head.S: always initialize PSTATE · d87a8e65
      Mark Rutland authored
      As with SCTLR_ELx and other control registers, some PSTATE bits are
      UNKNOWN out-of-reset, and we may not be able to rely on hardware or
      firmware to initialize them to our liking prior to entry to the kernel,
      e.g. in the primary/secondary boot paths and return from idle/suspend.
      
      It would be more robust (and easier to reason about) if we consistently
      initialized PSTATE to a default value, as we do with control registers.
      This will ensure that the kernel is not adversely affected by bits it is
      not aware of, e.g. when support for a feature such as PAN/UAO is
      disabled.
      
      This patch ensures that PSTATE is consistently initialized at boot time
      via an ERET. This is not intended to relax the existing requirements
      (e.g. DAIF bits must still be set prior to entering the kernel). For
      features detected dynamically (which may require system-wide support),
      it is still necessary to subsequently modify PSTATE.
      
      As ERET is not always a Context Synchronization Event, an ISB is placed
      before each exception return to ensure updates to control registers have
      taken effect. This handles the kernel being entered with SCTLR_ELx.EOS
      clear (or any future control bits being in an UNKNOWN state).
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201113124937.20574-6-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d87a8e65
    • arm64: head.S: cleanup SCTLR_ELx initialization · 2ffac9e3
      Mark Rutland authored
      Let's make SCTLR_ELx initialization a bit clearer by using meaningful
      names for the initialization values, following the same scheme for
      SCTLR_EL1 and SCTLR_EL2.
      
      These definitions will be used more widely in subsequent patches.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201113124937.20574-5-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2ffac9e3
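
      For illustration, the named values take a shape like the following (the
      RES1/endianness constituents are pre-existing macros; the longer MMU-on
      bit lists are elided, and the exact composition here is an assumption):

      /* Boot-time SCTLR values, named after what they configure. */
      #define INIT_SCTLR_EL1_MMU_OFF \
              (ENDIAN_SET_EL1 | SCTLR_EL1_RES1)

      #define INIT_SCTLR_EL2_MMU_OFF \
              (ENDIAN_SET_EL2 | SCTLR_EL2_RES1)

      /*
       * head.S can then do e.g.:
       *      mov_q   x0, INIT_SCTLR_EL1_MMU_OFF
       *      msr     sctlr_el1, x0
       * instead of composing the value from anonymous bit constants inline.
       */
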
    • arm64: head.S: rename el2_setup -> init_kernel_el · ecbb11ab
      Mark Rutland authored
      For a while now el2_setup has performed some basic initialization of EL1
      even when the kernel is booted at EL1, so the name is a little
      misleading. Further, some comments are stale as with VHE it doesn't drop
      the CPU to EL1.
      
      To clarify things, rename el2_setup to init_kernel_el, and update
      comments to be clearer as to the function's purpose.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201113124937.20574-4-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      ecbb11ab
    • arm64: add C wrappers for SET_PSTATE_*() · 515d5c8a
      Mark Rutland authored
      To make callsites easier to read, add trivial C wrappers for the
      SET_PSTATE_*() helpers, and convert trivial uses over to these. The new
      wrappers will be used further in subsequent patches.
      
      There should be no functional change as a result of this patch.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201113124937.20574-3-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      515d5c8a
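
      The wrappers are essentially one-liners, so callsites read like function
      calls rather than raw asm; a sketch of the shape (the exact set of
      wrapped PSTATE bits may differ):

      #define set_pstate_pan(x)       asm volatile(SET_PSTATE_PAN(x))
      #define set_pstate_uao(x)       asm volatile(SET_PSTATE_UAO(x))
      #define set_pstate_ssbs(x)      asm volatile(SET_PSTATE_SSBS(x))

      /*
       * A caller then writes:
       *      set_pstate_pan(1);
       * instead of:
       *      asm(SET_PSTATE_PAN(1));
       */
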
    • arm64: ensure ERET from kthread is illegal · f80d0340
      Mark Rutland authored
      For consistency, all tasks have a pt_regs reserved at the highest
      portion of their task stack. Among other things, this ensures that a
      task's SP is always pointing within its stack rather than pointing
      immediately past the end.
      
      While it is never legitimate to ERET from a kthread, we take pains to
      initialize pt_regs for kthreads as if this were legitimate. As this is
      never legitimate, the effects of an erroneous return are rarely tested.
      
      Let's simplify things by initializing a kthread's pt_regs such that an
      ERET is caught as an illegal exception return, and removing the explicit
      initialization of other exception context. Note that as
      spectre_v4_enable_task_mitigation() only manipulates the PSTATE within
      the unused regs this is safe to remove.
      
      As user tasks will have their exception context initialized via
      start_thread() or start_compat_thread(), this should only impact cases
      where something has gone very wrong and we'd like that to be clearly
      indicated.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: James Morse <james.morse@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Link: https://lore.kernel.org/r/20201113124937.20574-2-mark.rutland@arm.com
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f80d0340
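
      A sketch of the kthread branch this describes, under the assumption that
      PSTATE.IL (the illegal-execution-state bit) is what makes the ERET
      illegal (simplified; the real copy_thread() handles more state):

      #include <linux/string.h>
      #include <asm/ptrace.h>

      static void init_kthread_regs_sketch(struct pt_regs *childregs)
      {
              /*
               * A kthread has no user context to ERET to. Zero the regs and
               * set PSTATE.IL so that a buggy ERET from them is taken as an
               * illegal exception return instead of silently "returning"
               * somewhere.
               */
              memset(childregs, 0, sizeof(*childregs));
              childregs->pstate = PSR_MODE_EL1h | PSR_IL_BIT;
      }
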
    • of: unittest: Fix build on architectures without CONFIG_OF_ADDRESS · aed5041e
      Catalin Marinas authored
      of_dma_get_max_cpu_address() is not defined if !CONFIG_OF_ADDRESS, so
      return early in of_unittest_dma_get_max_cpu_address().
      
      Fixes: 07d13a1d ("of: unittest: Add test for of_dma_get_max_cpu_address()")
      Reported-by: kernel test robot <lkp@intel.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      aed5041e
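
      The fix follows the usual IS_ENABLED() early-return pattern; a sketch of
      the shape (the body of the test is elided):

      static void __init of_unittest_dma_get_max_cpu_address(void)
      {
              if (!IS_ENABLED(CONFIG_OF_ADDRESS))
                      return; /* of_dma_get_max_cpu_address() is not built */

              /*
               * ... the existing test body, which calls
               * of_dma_get_max_cpu_address(), only runs from here on ...
               */
      }
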
  5. 27 Nov, 2020 3 commits
  6. 25 Nov, 2020 7 commits
  7. 23 Nov, 2020 5 commits