1. 06 Nov, 2015 1 commit
  2. 05 Nov, 2015 1 commit
    • arm64: cmpxchg_dbl: fix return value type · 57a65667
      Lorenzo Pieralisi authored
      The current arm64 __cmpxchg_double{_mb} implementations carry out the
      compare exchange by first comparing the old values passed in to the
      values read from the pointer provided and by stashing the cumulative
      bitwise difference in a 64-bit register.
      
      By comparing the register content against 0, it is possible to detect
      whether the values read differ from the old values passed in, and hence
      whether the compare exchange has to bail out or carry on and complete
      the operation with the exchange.
      
      Given the current implementation, to detect the cmpxchg operation
      status, the __cmpxchg_double{_mb} functions should return the 64-bit
      stashed bitwise difference so that the caller can detect cmpxchg failure
      by comparing the return value against 0. The current implementation,
      however, declares the return value as an int, so the 64-bit value
      stashing the bitwise difference is truncated before being returned to
      the __cmpxchg_double{_mb} callers; any bitwise difference present in
      the top 32 bits goes undetected, triggering false positives and
      subsequent kernel failures.
      
      This patch fixes the issue by declaring the arm64 __cmpxchg_double{_mb}
      return values as a long, so that the bitwise difference is
      properly propagated on failure, restoring the expected behaviour.
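
      The effect of the truncation can be seen in a minimal user-space
      sketch (not the arm64 implementation; the helper names and values
      below are invented purely for illustration, assuming LP64 where
      long is 64-bit, as on arm64):

      #include <stdint.h>
      #include <stdio.h>

      /* Buggy shape: the 64-bit cumulative difference is truncated to int. */
      static int status_as_int(uint64_t old1, uint64_t old2,
                               uint64_t cur1, uint64_t cur2)
      {
              return (old1 ^ cur1) | (old2 ^ cur2);
      }

      /* Fixed shape: a long return keeps the full 64-bit difference. */
      static long status_as_long(uint64_t old1, uint64_t old2,
                                 uint64_t cur1, uint64_t cur2)
      {
              return (old1 ^ cur1) | (old2 ^ cur2);
      }

      int main(void)
      {
              /* The observed values differ from the old ones only above bit 31. */
              uint64_t old1 = 0, old2 = 0;
              uint64_t cur1 = 1ULL << 32, cur2 = 0;

              /* Callers treat 0 as "old values matched, exchange performed". */
              printf("int return:  %d   -> difference lost, false success\n",
                     status_as_int(old1, old2, cur1, cur2));
              printf("long return: %ld -> difference kept, failure detected\n",
                     status_as_long(old1, old2, cur1, cur2));
              return 0;
      }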
      
      Fixes: e9a4b795 ("arm64: cmpxchg_dbl: patch in lse instructions when supported by the CPU")
      Cc: <stable@vger.kernel.org> # 4.3+
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  3. 02 Nov, 2015 1 commit
  4. 30 Oct, 2015 2 commits
  5. 29 Oct, 2015 3 commits
  6. 28 Oct, 2015 6 commits
  7. 22 Oct, 2015 2 commits
  8. 21 Oct, 2015 19 commits
  9. 20 Oct, 2015 1 commit
    • arm64: Make 36-bit VA depend on EXPERT · 56a3f30e
      Catalin Marinas authored
      Commit 21539939 (arm64: 36 bit VA) introduced 36-bit VA support for
      the arm64 kernel when the 16KB page configuration is enabled. While this
      is a valid hardware configuration, it's not something we want to
      encourage since it reduces the memory (and I/O) range that the kernel
      can access. Make this depend on EXPERT to avoid complaints of Linux not
      mapping the whole RAM, especially on platforms following the ARM
      recommended memory map.
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  10. 19 Oct, 2015 4 commits
    • arm64: Synchronise dump_backtrace() with perf callchain · 9f93f3e9
      Jungseok Lee authored
      Unlike perf callchain relying on walk_stackframe(), dump_backtrace()
      has its own backtrace logic. A major difference between them is the
      moment a symbol is recorded. Perf writes down a symbol *before*
      calling unwind_frame(), but dump_backtrace() prints it out *after*
      unwind_frame(). As a result, dump_backtrace() cannot report the last
      valid symbol. This patch addresses the issue by synchronising
      dump_backtrace() with perf callchain.
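
      The reordering can be illustrated with a small stand-alone toy (not
      the kernel unwinder; the fixed PC array and helpers below are made
      up purely to show the loop ordering):

      #include <stdio.h>

      struct stackframe {
              unsigned long fp;       /* used here as an index into the toy stack */
              unsigned long pc;
      };

      /* Toy "stack": a fixed list of return addresses standing in for frames. */
      static const unsigned long toy_pcs[] = { 0x1000, 0x2000, 0x3000 };

      static int unwind_frame(struct stackframe *frame)
      {
              if (frame->fp + 1 >= sizeof(toy_pcs) / sizeof(toy_pcs[0]))
                      return -1;      /* no caller left */
              frame->fp++;
              frame->pc = toy_pcs[frame->fp];
              return 0;
      }

      int main(void)
      {
              struct stackframe frame = { .fp = 0, .pc = toy_pcs[0] };

              /*
               * Record the current frame *before* unwinding, as perf callchain
               * does; printing only after unwind_frame() would drop the
               * innermost entry (0x1000 here), which is what this patch fixes.
               */
              do {
                      printf("[<%016lx>]\n", frame.pc);
              } while (unwind_frame(&frame) == 0);

              return 0;
      }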
      
      A simple test and its results are as follows:
      
      - crash trigger
      
       $ sudo echo c > /proc/sysrq-trigger
      
      - current status
      
       Call trace:
       [<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
       [<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
       [<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
       [<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
       [<fffffe00001f2638>] __vfs_write+0x44/0x104
       [<fffffe00001f2e60>] vfs_write+0x98/0x1a8
       [<fffffe00001f3730>] SyS_write+0x50/0xb0
      
      - with this change
      
       Call trace:
       [<fffffe00003dc738>] sysrq_handle_crash+0x24/0x30
       [<fffffe00003dd2ac>] __handle_sysrq+0x128/0x19c
       [<fffffe00003dd730>] write_sysrq_trigger+0x60/0x74
       [<fffffe0000249fc4>] proc_reg_write+0x84/0xc0
       [<fffffe00001f2638>] __vfs_write+0x44/0x104
       [<fffffe00001f2e60>] vfs_write+0x98/0x1a8
       [<fffffe00001f3730>] SyS_write+0x50/0xb0
       [<fffffe00000939ec>] el0_svc_naked+0x20/0x28
      
      Note that this patch does not cover the case where the MMU is disabled.
      The last stack frame of swapper, for example, has a PC in the form of a
      physical address. Unfortunately, a simple conversion using phys_to_virt()
      cannot cover all scenarios, since the PC is retrieved from LR - 4, not LR.
      Changing both head.S and unwind_frame() for only a few symbols in *.S is
      too big a tradeoff, so this patch does not handle that case.
      
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: add cpu_idle tracepoints to arch_cpu_idle · 096b3224
      Jisheng Zhang authored
      Currently, if cpuidle is disabled or not supported, powertop reports
      zero wakeups and zero events. This is because the cpu_idle tracepoints
      are missing.
      
      This patch makes the cpu_idle tracepoints always available, even if
      cpuidle is disabled or not supported.
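
      The shape of the change is roughly the following sketch of
      arch/arm64/kernel/process.c (assuming the generic cpu_idle
      tracepoint from trace/events/power.h; not a verbatim copy of the
      patch):

      #include <linux/irqflags.h>
      #include <linux/smp.h>
      #include <trace/events/power.h>
      #include <asm/proc-fns.h>

      void arch_cpu_idle(void)
      {
              /*
               * Emit cpu_idle enter/exit events around the low-level idle
               * call so tools such as powertop can count wakeups even when
               * cpuidle is disabled or not supported.
               */
              trace_cpu_idle_rcuidle(1, smp_processor_id());
              cpu_do_idle();
              local_irq_enable();
              trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
      }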
      Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
      Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: 36 bit VA · 21539939
      Suzuki K. Poulose authored
      With 16K pages, a 36-bit VA lets us use 2-level page tables while
      limiting the available address space to 64GB.
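
      As a back-of-the-envelope check of those numbers (illustrative
      user-space arithmetic only, not kernel code): a 16K page gives a
      14-bit offset, and each 16K table holds 2^11 eight-byte entries,
      so two table levels on top of the offset give 36 bits.

      #include <stdio.h>

      int main(void)
      {
              unsigned int page_shift = 14;           /* 16K page: 14-bit offset     */
              unsigned int bits_per_level = 11;       /* 16K / 8-byte entries = 2^11 */
              unsigned int levels = 2;
              unsigned int va_bits = page_shift + levels * bits_per_level;

              printf("VA bits: %u\n", va_bits);                /* 36    */
              printf("address space: %llu GB\n",
                     (1ULL << va_bits) >> 30);                 /* 64 GB */
              return 0;
      }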
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: Add 16K page size support · 44eaacf1
      Suzuki K. Poulose authored
      This patch turns on 16K page support in the kernel. We support
      48-bit VA (4-level page tables) and 47-bit VA (3-level page
      tables).
      
      With 16K pages we can map 128 contiguous entries at level 3, using
      the contiguous bit hint, so that 2M is covered by a single TLB entry.
      
      TODO: 16K also supports 32 contiguous entries at level 2, giving us
      1G blocks (not yet supported by the infrastructure). That should be
      a separate patch altogether.
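
      As a quick sanity check of the sizes quoted above (illustrative
      arithmetic only, assuming a 14-bit page offset and 11 bits of table
      index per level with the 16K granule):

      #include <stdio.h>

      int main(void)
      {
              unsigned long long page     = 1ULL << 14;        /* 16K granule            */
              unsigned long long l2_block = 1ULL << (14 + 11); /* one level-2 entry: 32M */

              printf("level 3: 128 contiguous x 16K = %lluM\n",
                     (128 * page) >> 20);                      /* 2M */
              printf("level 2:  32 contiguous x 32M = %lluG\n",
                     (32 * l2_block) >> 30);                   /* 1G */
              return 0;
      }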
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Christoffer Dall <christoffer.dall@linaro.org>
      Cc: Steve Capper <steve.capper@linaro.org>
      Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>