1. 03 Jul, 2015 2 commits
    • arm64: Fix show_unhandled_signal_ratelimited usage · f871d268
      Suzuki K. Poulose authored
      Commit 86dca36e introduced ratelimited usage for
      'unhandled_signal' messages.
      The commit checks the ratelimit irrespective of whether
      the signal is handled or not, which is wrong and leads
      to false reports like the one below in dmesg:
      
      __do_user_fault: 127 callbacks suppressed
      
      Do the ratelimit check only if the signal is unhandled.
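      For illustration, a minimal user-space sketch of the reordering (the
      names, the toy one-message budget and the report_fault() wrapper are
      illustrative, not the kernel's helpers, which go through
      unhandled_signal() and __ratelimit() on a private ratelimit_state):

      #include <stdbool.h>
      #include <stdio.h>

      /* Illustrative stand-ins: each call to ratelimit_allow() is assumed to
       * consume one slot of the private ratelimit state, as __ratelimit()
       * does in the kernel. */
      static bool show_unhandled_signals = true;

      static bool ratelimit_allow(void)
      {
              static int budget = 1;          /* toy one-message budget */
              return budget-- > 0;
      }

      static void report_fault(bool unhandled, const char *comm)
      {
              /*
               * Broken ordering: the ratelimit state is consumed even when the
               * signal is handled, producing bogus "callbacks suppressed" lines:
               *
               *      if (show_unhandled_signals && ratelimit_allow() && unhandled)
               *
               * Fixed ordering: only touch the ratelimit state when the signal
               * is actually unhandled and we intend to print.
               */
              if (unhandled && show_unhandled_signals && ratelimit_allow())
                      printf("%s: unhandled signal\n", comm);
      }

      int main(void)
      {
              report_fault(false, "handled"); /* must not consume the budget */
              report_fault(true, "a.out");    /* is still printed */
              return 0;
      }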
      
      Fixes: 86dca36e ("arm64: use private ratelimit state along with show_unhandled_signals")
      Cc: Vladimir Murzin <Vladimir.Murzin@arm.com>
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • ARM64 / SMP: Switch pr_err() to pr_debug() for disabled GICC entry · f9058929
      Hanjun Guo authored
      It is normal for firmware to present GICC entries (processors) with the
      disabled flag set in the ACPI MADT. Taking a 16-CPU system as an example,
      the ACPI firmware may present 8 CPUs as enabled and the other 8 as
      disabled in the MADT; the disabled CPUs can be hot-added later.
      
      Firmware may also present more CPUs than the hardware actually has, with
      the unused ones disabled, so that they can easily be enabled once the
      hardware gains such CPUs; this keeps the firmware code scalable.
      
      So disabled CPUs in the MADT are not an error; switch pr_err() to
      pr_debug() to make the boot a little quieter by default.
      
      Since the hwid of a disabled CPU is often invalid, and the code checks
      for an invalid hwid first, CPUs intended for later hot-add would be
      filtered out and not counted as possible CPUs. Move the enabled check
      before the hwid one to prepare the code for counting disabled CPUs once
      CPU hot-plug is introduced.
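      A simplified user-space model of the reordered checks (the struct, its
      fields and the INVALID_HWID value are illustrative stand-ins for the
      MADT GICC parsing described above, not the kernel code):

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      #define INVALID_HWID  UINT64_MAX        /* illustrative sentinel */

      /* Simplified stand-in for a parsed MADT GICC entry. */
      struct gicc_entry {
              uint64_t mpidr;
              bool enabled;
      };

      static void map_gicc(const struct gicc_entry *gicc)
      {
              /* Check the enabled flag first: a disabled entry is a normal
               * hot-pluggable or spare CPU, so report it quietly. */
              if (!gicc->enabled) {
                      printf("debug: skipping disabled GICC entry\n");
                      return;
              }

              /* Only then validate the hwid, which is frequently bogus for
               * disabled entries and must not be reported as an error. */
              if (gicc->mpidr == INVALID_HWID) {
                      fprintf(stderr, "error: invalid hwid\n");
                      return;
              }

              printf("registering possible CPU, mpidr=%#llx\n",
                     (unsigned long long)gicc->mpidr);
      }

      int main(void)
      {
              struct gicc_entry spare = { .mpidr = INVALID_HWID, .enabled = false };
              struct gicc_entry boot  = { .mpidr = 0x0, .enabled = true };

              map_gicc(&spare);       /* quiet: not an error */
              map_gicc(&boot);
              return 0;
      }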
      Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org>
      Reviewed-by: Al Stone <ahs3@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  2. 02 Jul, 2015 1 commit
  3. 01 Jul, 2015 1 commit
  4. 30 Jun, 2015 3 commits
  5. 26 Jun, 2015 2 commits
  6. 25 Jun, 2015 2 commits
  7. 19 Jun, 2015 4 commits
  8. 17 Jun, 2015 3 commits
    • arm64: compat: print compat_sp instead of sp · 4e2ee96a
      Vladimir Murzin authored
      We check against compat_sp, but print out arm64's sp - fix it.
      Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: mm: Fix freeing of the wrong memmap entries with !SPARSEMEM_VMEMMAP · b9bcc919
      Dave P Martin authored
      The memmap freeing code in free_unused_memmap() computes the end of
      each memblock by adding the memblock size onto the base.  However,
      if SPARSEMEM is enabled then the value (start) used for the base
      may already have been rounded downwards to work out which memmap
      entries to free after the previous memblock.
      
      This may cause memmap entries that are in use to get freed.
      
      In general, you're not likely to hit this problem unless there
      are at least 2 memblocks and one of them is not aligned to a
      sparsemem section boundary.  Note that carve-outs can increase
      the number of memblocks by splitting the regions listed in the
      device tree.
      
      This problem doesn't occur with SPARSEMEM_VMEMMAP, because the
      vmemmap code deals with freeing the unused regions of the memmap
      instead of requiring the arch code to do it.
      
      This patch gets the memblock base out of the memblock directly when
      computing the block end address to ensure the correct value is used.
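      The arithmetic is easiest to see with made-up PFN values (a
      self-contained sketch of the off-by-a-section end calculation, not the
      kernel's free_unused_memmap()):

      #include <stdint.h>
      #include <stdio.h>

      /* All values are illustrative PFNs. */
      int main(void)
      {
              uint64_t base_pfn = 0x81000;    /* real start of the memblock */
              uint64_t size_pfn = 0x10000;    /* size of the memblock */

              /* With SPARSEMEM, 'start' may already have been rounded down to
               * a section boundary while working out which entries to free
               * after the previous memblock. */
              uint64_t start = 0x80000;

              uint64_t buggy_end   = start + size_pfn;        /* 0x90000: stops short */
              uint64_t correct_end = base_pfn + size_pfn;     /* 0x91000: the real end */

              printf("buggy end = %#llx, correct end = %#llx\n",
                     (unsigned long long)buggy_end,
                     (unsigned long long)correct_end);

              /* The memmap for PFNs 0x90000..0x91000 is still in use, but with
               * the buggy end value it can be handed to the freeing code as
               * part of the "gap" before the next memblock. */
              return 0;
      }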
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: entry: fix context tracking for el0_sp_pc · 46b0567c
      Mark Rutland authored
      Commit 6c81fe79 ("arm64: enable context tracking") did not
      update el0_sp_pc to use ct_user_exit, but this appears to have been
      unintentional. In commit 6ab6463a ("arm64: adjust el0_sync so
      that a function can be called") we made x0 available, and in the return
      to userspace we call ct_user_enter in the kernel_exit macro.
      
      Due to this, we currently don't correctly inform RCU of the user->kernel
      transition, and may erroneously account for time spent in the kernel as
      if we were in an extended quiescent state when CONFIG_CONTEXT_TRACKING
      is enabled.
      
      As we do record the kernel->user transition, a userspace application
      making accesses from an unaligned stack pointer can demonstrate the
      imbalance, provoking the following warning:
      
      ------------[ cut here ]------------
      WARNING: CPU: 2 PID: 3660 at kernel/context_tracking.c:75 context_tracking_enter+0xd8/0xe4()
      Modules linked in:
      CPU: 2 PID: 3660 Comm: a.out Not tainted 4.1.0-rc7+ #8
      Hardware name: ARM Juno development board (r0) (DT)
      Call trace:
      [<ffffffc000089914>] dump_backtrace+0x0/0x124
      [<ffffffc000089a48>] show_stack+0x10/0x1c
      [<ffffffc0005b3cbc>] dump_stack+0x84/0xc8
      [<ffffffc0000b3214>] warn_slowpath_common+0x98/0xd0
      [<ffffffc0000b330c>] warn_slowpath_null+0x14/0x20
      [<ffffffc00013ada4>] context_tracking_enter+0xd4/0xe4
      [<ffffffc0005b534c>] preempt_schedule_irq+0xd4/0x114
      [<ffffffc00008561c>] el1_preempt+0x4/0x28
      [<ffffffc0001b8040>] exit_files+0x38/0x4c
      [<ffffffc0000b5b94>] do_exit+0x430/0x978
      [<ffffffc0000b614c>] do_group_exit+0x40/0xd4
      [<ffffffc0000c0208>] get_signal+0x23c/0x4f4
      [<ffffffc0000890b4>] do_signal+0x1ac/0x518
      [<ffffffc000089650>] do_notify_resume+0x5c/0x68
      ---[ end trace 963c192600337066 ]---
      
      This patch adds the missing ct_user_exit to the el0_sp_pc entry path,
      correcting the context tracking for this case.
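      A toy model of the imbalance, with the context-tracking bookkeeping
      reduced to a single state variable (this is not the kernel's
      context_tracking implementation; it only illustrates why the missing
      ct_user_exit on this path trips a warning at the later kernel->user
      record):

      #include <stdio.h>

      /* Toy bookkeeping: every exception entry from EL0 must record the
       * user->kernel transition (ct_user_exit) so that the kernel->user
       * record on return (ct_user_enter in kernel_exit) stays balanced. */
      enum ctx { CTX_KERNEL, CTX_USER };
      static enum ctx tracked = CTX_USER;

      static void ct_user_exit(void)          /* user -> kernel */
      {
              tracked = CTX_KERNEL;
      }

      static void ct_user_enter(void)         /* kernel -> user */
      {
              if (tracked != CTX_KERNEL)
                      fprintf(stderr, "WARNING: unbalanced context tracking\n");
              tracked = CTX_USER;
      }

      static void el0_sp_pc(int fixed)
      {
              if (fixed)
                      ct_user_exit();         /* the call added by this patch */
              /* ... handle the SP/PC alignment fault ... */
              ct_user_enter();                /* return to userspace */
      }

      int main(void)
      {
              el0_sp_pc(0);   /* missing ct_user_exit: provokes the warning */
              el0_sp_pc(1);   /* balanced */
              return 0;
      }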
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Fixes: 6c81fe79 ("arm64: enable context tracking")
      Cc: <stable@vger.kernel.org> # v3.17+
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  9. 15 Jun, 2015 1 commit
  10. 12 Jun, 2015 4 commits
  11. 11 Jun, 2015 3 commits
  12. 08 Jun, 2015 1 commit
    • arm64: fix missing syscall trace exit · 04d7e098
      Josh Stone authored
      If a syscall is entered without TIF_SYSCALL_TRACE set, then it goes on
      the fast path.  It's then possible to have TIF_SYSCALL_TRACE added in
      the middle of the syscall, but ret_fast_syscall doesn't check this flag
      again.  This causes a ptrace syscall-exit-stop to be missed.
      
      For instance, from a PTRACE_EVENT_FORK reported during do_fork, the
      tracer might resume with PTRACE_SYSCALL, setting TIF_SYSCALL_TRACE.
      Now the completion of the fork should have a syscall-exit-stop.
      
      Russell King fixed this on arm by re-checking _TIF_SYSCALL_WORK in the
      fast exit path.  Do the same on arm64.
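      A toy user-space model of the re-check (the flag bit and function names
      are illustrative; the actual change is in the arm64 entry code's
      fast-path return):

      #include <stdio.h>

      #define _TIF_SYSCALL_WORK     (1u << 0)         /* illustrative bit */

      static unsigned int thread_flags;

      static void syscall_trace_exit(void)
      {
              printf("syscall-exit-stop reported to the tracer\n");
      }

      /* Toy model of the fast return path: the thread flags must be re-read
       * at exit, because a tracer may have set TIF_SYSCALL_TRACE (e.g. after
       * a PTRACE_EVENT_FORK stop) while the syscall was in flight. */
      static void ret_fast_syscall(void)
      {
              if (thread_flags & _TIF_SYSCALL_WORK)   /* the re-check */
                      syscall_trace_exit();
              /* ... restore registers and return to userspace ... */
      }

      int main(void)
      {
              thread_flags = 0;                       /* entered on the fast path */
              thread_flags |= _TIF_SYSCALL_WORK;      /* tracer attached mid-syscall */
              ret_fast_syscall();
              return 0;
      }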
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Cc: Russell King <rmk+kernel@arm.linux.org.uk>
      Signed-off-by: Josh Stone <jistone@redhat.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  13. 05 Jun, 2015 5 commits
  14. 03 Jun, 2015 1 commit
    • arm64: insn: Add aarch64_{get,set}_branch_offset · 10b48f7e
      Marc Zyngier authored
      In order to deal with branches located in alternate sequences,
      but pointing to the main kernel text, it is required to extract
      the relative displacement encoded in the instruction, and to be
      able to update said instruction with a new offset (once it is
      known).
      
      For this, we introduce three new helpers:
      - aarch64_insn_is_branch_imm is a predicate indicating if the
        instruction is an immediate branch
      - aarch64_get_branch_offset returns a signed value representing
        the byte offset encoded in a branch instruction
      - aarch64_set_branch_offset takes an instruction and an offset,
        and returns the corresponding updated instruction.
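      For reference, a self-contained decoder for the imm26 field of a B/BL
      instruction, in the spirit of aarch64_get_branch_offset() (the real
      helpers also cover the other immediate branch forms, which encode
      smaller immediates, and provide the matching encoder):

      #include <stdint.h>
      #include <stdio.h>

      /* Decode the imm26 field of an A64 B/BL instruction: sign-extend the
       * 26-bit word offset and scale it by 4 to get a byte offset. */
      static int32_t b_imm26_to_offset(uint32_t insn)
      {
              int64_t imm26 = insn & 0x03ffffffu;

              if (imm26 & 0x02000000)         /* sign-extend from bit 25 */
                      imm26 -= 0x04000000;

              return (int32_t)(imm26 * 4);    /* word offset -> byte offset */
      }

      int main(void)
      {
              uint32_t b_fwd  = 0x14000004;   /* b . + 0x10 */
              uint32_t b_back = 0x17fffffc;   /* b . - 0x10 */

              printf("%d\n", (int)b_imm26_to_offset(b_fwd));  /*  16 */
              printf("%d\n", (int)b_imm26_to_offset(b_back)); /* -16 */
              return 0;
      }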
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  15. 02 Jun, 2015 4 commits
  16. 01 Jun, 2015 1 commit
  17. 27 May, 2015 2 commits
    • arm64: psci: remove ACPI coupling · c5a13305
      Mark Rutland authored
      The 32-bit ARM port doesn't have ACPI headers, and conditionally
      including them is going to look horrendous. In preparation for sharing
      the PSCI invocation code with 32-bit, move the acpi_psci_* function
      declarations and definitions such that the PSCI client code need not
      include ACPI headers.
      
      While it would seem like we could simply hide the ACPI includes in
      psci.h, the ACPI headers have hilarious circular dependencies which make
      this infeasible without reorganising most of ACPICA. So rather than
      doing that, move the acpi_psci_* prototypes into psci.h.
      
      The psci_acpi_init function is made dependent on CONFIG_ACPI (with a
      stub implementation in asm/psci.h) such that it need not be built for
      32-bit ARM or kernels without ACPI support. The currently missing __init
      annotations are added to the prototypes in the header.
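      Roughly, the header pattern being described (a simplified sketch of the
      asm/psci.h arrangement, not the exact header; the __init stand-in is
      only there so the snippet compiles outside the kernel tree):

      #include <stdbool.h>

      /* Stand-in for the kernel's section attribute, only so this sketch
       * compiles outside the kernel tree. */
      #define __init

      #ifdef CONFIG_ACPI
      int __init psci_acpi_init(void);
      bool __init acpi_psci_present(void);
      bool __init acpi_psci_use_hvc(void);
      #else
      /* Stubs used when ACPI support is not built in (including 32-bit ARM),
       * so the PSCI client code needs no #ifdefs and no ACPI headers. */
      static inline int psci_acpi_init(void) { return 0; }
      static inline bool acpi_psci_present(void) { return false; }
      #endif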
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Hanjun Guo <hanjun.guo@linaro.org>
      Reviewed-by: Al Stone <al.stone@linaro.org>
      Reviewed-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
      Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
    • arm64: psci: kill psci_power_state · c8cc4273
      Mark Rutland authored
      A PSCI 1.0 implementation may choose to use the new extended StateID
      format, the presence of which may be queried via the PSCI_FEATURES call.
      The layout of this new StateID format is incompatible with the existing
      format, and so to handle both we must abstract attempts to parse the
      fields.
      
      In preparation for PSCI 1.0 support, this patch introduces
      psci_power_state_loses_context and psci_power_state_is_valid functions
      to query information from a PSCI power state, which is no longer
      decomposed (and hence the pack/unpack functions are removed). As it is
      no longer decomposed, it is now passed round as an opaque u32 token.
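      A sketch of the two predicates over the opaque u32, assuming the
      original (pre-1.0) power_state layout of StateID in bits [15:0],
      StateType in bit 16 and PowerLevel in bits [25:24]; the macro names are
      illustrative, and the extended StateID format needs different masks:

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      /* Illustrative masks for the original power_state layout. */
      #define PSCI_POWER_STATE_ID_MASK      0x0000ffffu
      #define PSCI_POWER_STATE_TYPE_MASK    (1u << 16)       /* 1 = powerdown */
      #define PSCI_POWER_STATE_AFFL_MASK    (3u << 24)

      static bool psci_power_state_loses_context(uint32_t state)
      {
              return state & PSCI_POWER_STATE_TYPE_MASK;
      }

      static bool psci_power_state_is_valid(uint32_t state)
      {
              const uint32_t valid_mask = PSCI_POWER_STATE_ID_MASK |
                                          PSCI_POWER_STATE_TYPE_MASK |
                                          PSCI_POWER_STATE_AFFL_MASK;

              return !(state & ~valid_mask);
      }

      int main(void)
      {
              uint32_t standby   = 0x0;                           /* retention */
              uint32_t powerdown = PSCI_POWER_STATE_TYPE_MASK;    /* loses context */
              uint32_t bogus     = 1u << 20;                      /* reserved bit */

              printf("%d %d %d\n",
                     psci_power_state_loses_context(powerdown),   /* 1 */
                     psci_power_state_loses_context(standby),     /* 0 */
                     psci_power_state_is_valid(bogus));           /* 0 */
              return 0;
      }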
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>