1. 02 Jul, 2015 1 commit
  2. 01 Jul, 2015 1 commit
  3. 30 Jun, 2015 3 commits
  4. 26 Jun, 2015 2 commits
  5. 25 Jun, 2015 2 commits
  6. 19 Jun, 2015 4 commits
  7. 17 Jun, 2015 3 commits
    • arm64: compat: print compat_sp instead of sp · 4e2ee96a
      Vladimir Murzin authored
      We check against compat_sp, but print out arm64's sp - fix it.
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      4e2ee96a
    • arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP · b9bcc919
      Dave P Martin authored
      The memmap freeing code in free_unused_memmap() computes the end of
      each memblock by adding the memblock size onto the base.  However,
      if SPARSEMEM is enabled then the value (start) used for the base
      may already have been rounded downwards to work out which memmap
      entries to free after the previous memblock.
      
      This may cause memmap entries that are in use to get freed.
      
      In general, you're not likely to hit this problem unless there
      are at least 2 memblocks and one of them is not aligned to a
      sparsemem section boundary.  Note that carve-outs can increase
      the number of memblocks by splitting the regions listed in the
      device tree.
      
      This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
      vmemmap code deals with freeing the unused regions of the memmap
      instead of requiring the arch code to do it.
      
      This patch gets the memblock base out of the memblock directly when
      computing the block end address to ensure the correct value is used.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      b9bcc919
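      The gist of the arithmetic, as a minimal standalone sketch (the section size and pfn values below are invented for illustration; this is not the kernel's free_unused_memmap()):
      
        #include <stdio.h>
        
        #define SECTION_PFNS 0x8000UL   /* invented sparsemem section size, in pfns */
        
        int main(void)
        {
                /* Second memblock in the system, not section-aligned. */
                unsigned long base = 0x90100UL, size = 0x4000UL;
        
                /* With SPARSEMEM, 'start' may already have been rounded down to a
                 * section boundary while freeing the previous block's memmap. */
                unsigned long start = base & ~(SECTION_PFNS - 1);
        
                unsigned long buggy_end   = start + size;   /* rounded-down base */
                unsigned long correct_end = base + size;    /* memblock's own base */
        
                /* buggy_end < correct_end: pfns in [buggy_end, correct_end) are
                 * part of the block, yet their memmap entries would be freed. */
                printf("buggy end %#lx, correct end %#lx\n", buggy_end, correct_end);
                return 0;
        }
      
      Computing the block end from the memblock's own base and size keeps the in-use tail of the block out of the range that gets freed.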
    • arm64: entry: fix context tracking for el0_sp_pc · 46b0567c
      Mark Rutland authored
      Commit 6c81fe79 ("arm64: enable context tracking") did not
      update el0_sp_pc to use ct_user_exit, but this appears to have been
      unintentional. In commit 6ab6463a ("arm64: adjust el0_sync so
      that a function can be called") we made x0 available, and in the return
      to userspace we call ct_user_enter in the kernel_exit macro.
      
      Due to this, we currently don't correctly inform RCU of the user->kernel
      transition, and may erroneously account for time spent in the kernel as
      if we were in an extended quiescent state when CONFIG_CONTEXT_TRACKING
      is enabled.
      
      As we do record the kernel->user transition, a userspace application
      making accesses from an unaligned stack pointer can demonstrate the
      imbalance, provoking the following warning:
      
      ------------[ cut here ]------------
      WARNING: CPU: 2 PID: 3660 at kernel/context_tracking.c:75 context_tracking_enter+0xd8/0xe4()
      Modules linked in:
      CPU: 2 PID: 3660 Comm: a.out Not tainted 4.1.0-rc7+ #8
      Hardware name: ARM Juno development board (r0) (DT)
      Call trace:
      [<ffffffc000089914>] dump_backtrace+0x0/0x124
      [<ffffffc000089a48>] show_stack+0x10/0x1c
      [<ffffffc0005b3cbc>] dump_stack+0x84/0xc8
      [<ffffffc0000b3214>] warn_slowpath_common+0x98/0xd0
      [<ffffffc0000b330c>] warn_slowpath_null+0x14/0x20
      [<ffffffc00013ada4>] context_tracking_enter+0xd4/0xe4
      [<ffffffc0005b534c>] preempt_schedule_irq+0xd4/0x114
      [<ffffffc00008561c>] el1_preempt+0x4/0x28
      [<ffffffc0001b8040>] exit_files+0x38/0x4c
      [<ffffffc0000b5b94>] do_exit+0x430/0x978
      [<ffffffc0000b614c>] do_group_exit+0x40/0xd4
      [<ffffffc0000c0208>] get_signal+0x23c/0x4f4
      [<ffffffc0000890b4>] do_signal+0x1ac/0x518
      [<ffffffc000089650>] do_notify_resume+0x5c/0x68
      ---[ end trace 963c192600337066 ]---
      
      This patch adds the missing ct_user_exit to the el0_sp_pc entry path,
      correcting the context tracking for this case.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Fixes: 6c81fe79 ("arm64: enable context tracking")
      Cc: <stable@vger.kernel.org> # v3.17+
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      46b0567c
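      A toy model of the invariant being restored (plain C, not the entry.S change; the function names merely mirror the macros for readability):
      
        #include <stdio.h>
        
        enum ctx { CTX_USER, CTX_KERNEL };
        static enum ctx state = CTX_USER;
        
        static void ct_user_exit(void)  { state = CTX_KERNEL; }    /* user -> kernel */
        
        static void ct_user_enter(void)                            /* kernel -> user */
        {
                if (state != CTX_KERNEL)
                        puts("WARNING: unbalanced context tracking");
                state = CTX_USER;
        }
        
        /* el0_sp_pc exception: 'patched' decides whether the missing
         * ct_user_exit call is made on entry. */
        static void el0_sp_pc(int patched)
        {
                if (patched)
                        ct_user_exit();
                /* ... handle the unaligned SP/PC fault ... */
                ct_user_enter();        /* kernel_exit back to userspace */
        }
        
        int main(void)
        {
                el0_sp_pc(0);   /* old code: warns, as in the trace above */
                el0_sp_pc(1);   /* with the fix: balanced */
                return 0;
        }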
  8. 15 Jun, 2015 1 commit
  9. 12 Jun, 2015 4 commits
  10. 11 Jun, 2015 3 commits
  11. 08 Jun, 2015 1 commit
    • arm64: fix missing syscall trace exit · 04d7e098
      Josh Stone authored
      If a syscall is entered without TIF_SYSCALL_TRACE set, then it goes on
      the fast path.  It's then possible to have TIF_SYSCALL_TRACE added in
      the middle of the syscall, but ret_fast_syscall doesn't check this flag
      again.  This causes a ptrace syscall-exit-stop to be missed.
      
      For instance, from a PTRACE_EVENT_FORK reported during do_fork, the
      tracer might resume with PTRACE_SYSCALL, setting TIF_SYSCALL_TRACE.
      Now the completion of the fork should have a syscall-exit-stop.
      
      Russell King fixed this on arm by re-checking _TIF_SYSCALL_WORK in the
      fast exit path.  Do the same on arm64.
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Josh Stone <jistone@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      04d7e098
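      A toy model of the change (a plain C stand-in, not the entry.S fast path):
      
        #include <stdio.h>
        
        #define TIF_SYSCALL_TRACE (1u << 0)
        
        static unsigned int thread_flags;
        
        static void syscall_trace_exit(void) { puts("syscall-exit-stop reported"); }
        
        /* Fast return path: the fix is to re-read the work flags here. */
        static void ret_fast_syscall(void)
        {
                if (thread_flags & TIF_SYSCALL_TRACE)
                        syscall_trace_exit();
        }
        
        int main(void)
        {
                thread_flags = 0;                    /* fast path taken on entry */
                thread_flags |= TIF_SYSCALL_TRACE;   /* tracer resumes with PTRACE_SYSCALL */
                ret_fast_syscall();                  /* exit stop is no longer missed */
                return 0;
        }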
  12. 05 Jun, 2015 5 commits
  13. 03 Jun, 2015 1 commit
    • arm64: insn: Add aarch64_{get,set}_branch_offset · 10b48f7e
      Marc Zyngier authored
      In order to deal with branches located in alternate sequences,
      but pointing to the main kernel text, it is required to extract
      the relative displacement encoded in the instruction, and to be
      able to update said instruction with a new offset (once it is
      known).
      
      For this, we introduce three new helpers:
      - aarch64_insn_is_branch_imm is a predicate indicating if the
        instruction is an immediate branch
      - aarch64_get_branch_offset returns a signed value representing
        the byte offset encoded in a branch instruction
      - aarch64_set_branch_offset takes an instruction and an offset,
        and returns the corresponding updated instruction.
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      10b48f7e
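      A standalone sketch of the encoding these helpers manipulate, for the unconditional B/BL form only (imm26 in bits [25:0], byte offset = sign-extended imm26 * 4); the kernel helpers also cover the other immediate branch forms:
      
        #include <stdint.h>
        #include <stdio.h>
        
        static long b_get_offset(uint32_t insn)
        {
                long imm = insn & 0x03ffffff;           /* imm26 field */
                if (imm & 0x02000000)                   /* sign-extend 26 bits */
                        imm -= 0x04000000;
                return imm * 4;                         /* byte offset */
        }
        
        static uint32_t b_set_offset(uint32_t insn, long offset)
        {
                uint32_t imm = ((uint32_t)(offset >> 2)) & 0x03ffffff;
                return (insn & ~0x03ffffffu) | imm;     /* keep opcode, replace imm26 */
        }
        
        int main(void)
        {
                uint32_t b_back_8 = 0x17fffffe;         /* "b .-8" */
                printf("offset: %ld\n", b_get_offset(b_back_8));    /* prints -8 */
        
                uint32_t retargeted = b_set_offset(b_back_8, 0x100);
                printf("insn:   0x%08x\n", retargeted); /* 0x14000040, i.e. "b .+0x100" */
                return 0;
        }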
  14. 02 Jun, 2015 4 commits
  15. 01 Jun, 2015 1 commit
  16. 27 May, 2015 4 commits
    • arm64: psci: remove ACPI coupling · c5a13305
      Mark Rutland authored
      The 32-bit ARM port doesn't have ACPI headers, and conditionally
      including them is going to look horrendous. In preparation for sharing
      the PSCI invocation code with 32-bit, move the acpi_psci_* function
      declarations and definitions such that the PSCI client code need not
      include ACPI headers.
      
      While it would seem like we could simply hide the ACPI includes in
      psci.h, the ACPI headers have hilarious circular dependencies which make
      this infeasible without reorganising most of ACPICA. So rather than
      doing that, move the acpi_psci_* prototypes into psci.h.
      
      The psci_acpi_init function is made dependent on CONFIG_ACPI (with a
      stub implementation in asm/psci.h) such that it need not be built for
      32-bit ARM or kernels without ACPI support. The currently missing __init
      annotations are added to the prototypes in the header.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
      Reviewed-by: Al Stone <al.stone@linaro.org>
      Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
      Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      c5a13305
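      The shape of the resulting header interface, as a simplified sketch (a header fragment illustrating the stub pattern, not the verbatim asm/psci.h):
      
        #ifdef CONFIG_ACPI
        /* Real declarations; only the ACPI side ever includes ACPI headers. */
        bool __init acpi_psci_present(void);
        bool __init acpi_psci_use_hvc(void);
        int  __init psci_acpi_init(void);
        #else
        /* Stubs so 32-bit ARM and !ACPI kernels need no ACPI headers at all. */
        static inline bool acpi_psci_present(void) { return false; }
        static inline bool acpi_psci_use_hvc(void) { return false; }
        static inline int  psci_acpi_init(void)    { return 0; }
        #endif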
    • arm64: psci: kill psci_power_state · c8cc4273
      Mark Rutland authored
      A PSCI 1.0 implementation may choose to use the new extended StateID
      format, the presence of which may be queried via the PSCI_FEATURES call.
      The layout of this new StateID format is incompatible with the existing
      format, and so to handle both we must abstract attempts to parse the
      fields.
      
      In preparation for PSCI 1.0 support, this patch introduces
      psci_power_state_loses_context and psci_power_state_is_valid functions
      to query information from a PSCI power state. The state is no longer
      decomposed (and hence the pack/unpack functions are removed); instead
      it is passed around as an opaque u32 token.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      c8cc4273
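      A standalone sketch of the two accessors over the opaque u32, using the PSCI 0.2 power_state layout (bit 16 StateType, bits 25:24 PowerLevel, bits 15:0 StateID); not the kernel's exact code:
      
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>
        
        #define POWER_STATE_ID_MASK    0x0000ffffu
        #define POWER_STATE_TYPE_MASK  (1u << 16)     /* 1 = powerdown, loses context */
        #define POWER_STATE_AFFL_MASK  (3u << 24)
        
        static bool psci_power_state_loses_context(uint32_t state)
        {
                return state & POWER_STATE_TYPE_MASK;
        }
        
        static bool psci_power_state_is_valid(uint32_t state)
        {
                const uint32_t valid = POWER_STATE_ID_MASK |
                                       POWER_STATE_TYPE_MASK |
                                       POWER_STATE_AFFL_MASK;
                return !(state & ~valid);             /* no reserved bits set */
        }
        
        int main(void)
        {
                uint32_t standby = 0x00000001, powerdown = 0x00010001;
        
                printf("loses context: %d %d\n",
                       psci_power_state_loses_context(standby),
                       psci_power_state_loses_context(powerdown));
                printf("valid: %d\n", psci_power_state_is_valid(0x80000000));  /* 0 */
                return 0;
        }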
    • arm64: psci: account for Trusted OS instances · ff3010e6
      Mark Rutland authored
      Software resident in the secure world (a "Trusted OS") may cause CPU_OFF
      calls for the CPU it is resident on to be denied. Such a denial would be
      fatal for the kernel, and so we must detect when this can happen before
      the point of no return.
      
      This patch implements Trusted OS detection for PSCI 0.2+ systems, using
      MIGRATE_INFO_TYPE and MIGRATE_INFO_UP_CPU. When a trusted OS is detected
      as resident on a particular CPU, attempts to hot unplug that CPU will be
      denied early, before they can prove fatal.
      
      Trusted OS migration is not implemented by this patch. Implementation of
      migratable UP trusted OSs seems unlikely, and the right policy for
      migration is unclear (and will likely differ across implementations). As
      such, it is likely that migration will require cooperation with Trusted
      OS drivers.
      
      PSCI implementations prior to 0.2 do not provide the facility to detect
      the presence of a Trusted OS, nor the CPU any such OS is resident on, so
      without additional information it is not possible to handle Trusted OSs
      with PSCI 0.1.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      ff3010e6
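      A toy sketch of the detection and early denial (the two invocation functions are stand-ins, and the kernel maps the returned MPIDR to a logical CPU rather than using the raw value as shown here):
      
        #include <stdio.h>
        
        #define PSCI_TOS_UP_MIGRATE      0   /* values from the PSCI 0.2 spec */
        #define PSCI_TOS_UP_NO_MIGRATE   1
        #define PSCI_TOS_NOT_PRESENT_MP  2
        
        static int resident_cpu = -1;            /* no UP Trusted OS known */
        
        /* Stand-ins for the MIGRATE_INFO_TYPE / MIGRATE_INFO_UP_CPU calls. */
        static long migrate_info_type(void)   { return PSCI_TOS_UP_NO_MIGRATE; }
        static long migrate_info_up_cpu(void) { return 2; /* pretend cpu 2 */ }
        
        static void psci_init_migrate(void)
        {
                long type = migrate_info_type();
        
                if (type == PSCI_TOS_UP_MIGRATE || type == PSCI_TOS_UP_NO_MIGRATE)
                        resident_cpu = (int)migrate_info_up_cpu();
        }
        
        static int cpu_disable(int cpu)
        {
                /* Deny hot-unplug early instead of letting CPU_OFF fail fatally. */
                return cpu == resident_cpu ? -1 : 0;
        }
        
        int main(void)
        {
                psci_init_migrate();
                printf("unplug cpu2: %d, cpu3: %d\n", cpu_disable(2), cpu_disable(3));
                return 0;
        }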
    • arm64: psci: support unsigned return values · a06eed3e
      Mark Rutland authored
      PSCI_VERSION and MIGRATE_INFO_UP_CPU return unsigned values, with
      the latter returning a 64-bit value. However, the PSCI invocation
      functions have prototypes returning int.
      
      This patch upgrades the invocation functions to return unsigned long,
      with a new typedef to keep things legible. As PSCI_VERSION cannot return
      a negative value, the erroneous check against PSCI_RET_NOT_SUPPORTED is
      also removed. The unrelated psci_initcall_t typedef is moved closer to
      its first user, to avoid confusion with the invocation functions.
      
      In preparation for sharing the code with ARM, unsigned long is used in
      preference to u64. In the SMC32 calling convention, the relevant fields
      will be 32 bits wide.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
      Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      a06eed3e
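      A short sketch of the resulting shape (the SMC/HVC trampoline is faked so the snippet stands alone; the function ID and version encoding follow the PSCI spec):
      
        #include <stdio.h>
        
        /* Invocation functions return unsigned long, with a typedef for legibility. */
        typedef unsigned long (psci_fn)(unsigned long, unsigned long,
                                        unsigned long, unsigned long);
        
        /* Stand-in for __invoke_psci_fn_smc/_hvc. */
        static unsigned long fake_invoke(unsigned long fn, unsigned long a0,
                                         unsigned long a1, unsigned long a2)
        {
                (void)a0; (void)a1; (void)a2;
                return fn == 0x84000000UL ? 0x00010000UL : 0;   /* PSCI_VERSION -> 1.0 */
        }
        
        static psci_fn *invoke_psci_fn = fake_invoke;
        
        int main(void)
        {
                unsigned long ver = invoke_psci_fn(0x84000000UL, 0, 0, 0);
                printf("PSCI version %lu.%lu\n", ver >> 16, ver & 0xffff);
                return 0;
        }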