1. 02 Sep, 2016 2 commits
  2. 01 Sep, 2016 4 commits
  3. 31 Aug, 2016 7 commits
  4. 26 Aug, 2016 5 commits
    • arm64: errata: Pass --fix-cortex-a53-843419 to ld if workaround enabled · 6ffe9923
      Will Deacon authored
      Cortex-A53 erratum 843419 is worked around by the linker, although it is
      a configure-time option to GCC as to whether ld is actually asked to
      apply the workaround or not.
      
      This patch ensures that we pass --fix-cortex-a53-843419 to the linker
      when both CONFIG_ARM64_ERRATUM_843419=y and the linker supports the
      option.
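The build-time probe the patch describes can be sketched in shell. This is a hedged approximation of the idea, not the kernel's actual Kbuild code; the `supported` variable is illustrative:

```shell
# Probe whether the host linker understands the erratum workaround flag,
# and only add it to LDFLAGS if it does -- mirroring what the patch does
# when CONFIG_ARM64_ERRATUM_843419=y.
if ld --help 2>/dev/null | grep -q 'fix-cortex-a53-843419'; then
  LDFLAGS="$LDFLAGS --fix-cortex-a53-843419"
  supported=yes
else
  supported=no
fi
echo "ld workaround flag supported: $supported"
```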
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • Revert "arm64: hibernate: Refuse to hibernate if the boot cpu is offline" · b2d8b0cb
      James Morse authored
      Now that we use the MPIDR to resume on the same CPU that we hibernated
      on, we no longer need to refuse to hibernate when the boot CPU is
      offline (which we can't reliably detect anyway, since kexec may cause
      logical CPUs to be renumbered).
      
      This reverts commit 1fe492ce.
      Signed-off-by: James Morse <james.morse@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: hibernate: Resume when hibernate image created on non-boot CPU · 8ec058fd
      James Morse authored
      disable_nonboot_cpus() assumes that the lowest numbered online CPU is
      the boot CPU, and that this is the correct CPU to run any power
      management code on.
      
      On arm64 CPU0 can be taken offline. For hibernate/resume this means we
      may hibernate on a CPU other than CPU0. If the system is rebooted with
      kexec 'CPU0' will be assigned to a different CPU. This complicates
      hibernate/resume as now we can't trust the CPU numbers.
      
      We currently forbid hibernate if CPU0 has been hotplugged out, to
      avoid this situation when kexec is not involved.
      
      Save the MPIDR of the CPU we hibernated on in the hibernate
      arch-header, and use hibernate_resume_nonboot_cpu_disable() to ensure
      we resume on the CPU with that MPIDR. This allows us to
      hibernate/resume on any CPU, even if the logical numbers have been
      shuffled by kexec.
      Signed-off-by: James Morse <james.morse@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • cpu/hotplug: Allow suspend/resume CPU to be specified · d391e552
      James Morse authored
      disable_nonboot_cpus() assumes that the lowest numbered online CPU is
      the boot CPU, and that this is the correct CPU to run any power
      management code on.
      
      On x86 this is always correct, as CPU0 cannot (easily) be taken offline.
      
      On arm64 CPU0 can be taken offline. For hibernate/resume this means we
      may hibernate on a CPU other than CPU0. If the system is rebooted with
      kexec 'CPU0' will be assigned to a different physical CPU. This
      complicates hibernate/resume as now we can't trust the CPU numbers.
      Arch code can find the correct physical CPU and ensure it is online
      before resume from hibernate begins, but it also needs to influence
      disable_nonboot_cpus()'s choice of CPU.
      
      Rename disable_nonboot_cpus() as freeze_secondary_cpus() and add an
      argument indicating which CPU should be left standing. Follow the logic
      in migrate_to_reboot_cpu() to use the lowest numbered online CPU if the
      requested CPU is not online.
      Add disable_nonboot_cpus() as an inline function that has the existing
      behaviour.
      
      Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: always enable DEBUG_RODATA and remove the Kconfig option · 40982fd6
      Mark Rutland authored
      Follow the example set by x86 in commit 9ccaf77c ("x86/mm:
      Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option"), and
      make these protections a fundamental security feature rather than an
      opt-in. This also results in a minor code simplification.
      
      For those rare cases when users wish to disable this protection (e.g.
      for debugging), this can be done by passing 'rodata=off' on the command
      line.
      
      As DEBUG_RODATA_ALIGN is only intended to address a performance/memory
      tradeoff, and does not affect correctness, this is left user-selectable.
      DEBUG_MODULE_RONX is also left user-selectable until the core code
      provides a boot-time option to disable the protection for debugging
      use-cases.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Kees Cook <keescook@chromium.org>
      Acked-by: Laura Abbott <labbott@redhat.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  5. 25 Aug, 2016 6 commits
    • arm64: mark reserved memblock regions explicitly in iomem · e7cd1903
      AKASHI Takahiro authored
      Kdump (kexec-tools) parses /proc/iomem to identify all the memory
      regions on the system. Since the current kernel names "nomap" regions,
      like UEFI runtime services code/data, as "System RAM", kexec-tools sets
      up the ELF core header to include them in a crash dump file
      (/proc/vmcore).
      
      Then crash dump kernel parses UEFI memory map again, re-marks those regions
      as "nomap" and does not create a memory mapping for them unlike the other
      areas of System RAM. In this case, copying /proc/vmcore through
      copy_oldmem_page() on crash dump kernel will end up with a kernel abort,
      as reported in [1].
      
      This patch names all the "nomap" regions explicitly as "reserved" so that
      we can exclude them from a crash dump file. acpi_os_ioremap() must also
      be modified because those regions have WB attributes [2].
      
      Apart from kdump, this change also matches x86's use of acpi (and
      /proc/iomem).
      
      [1] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-August/448186.html
      [2] http://lists.infradead.org/pipermail/linux-arm-kernel/2016-August/450089.html
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: James Morse <james.morse@arm.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: hibernate: Support DEBUG_PAGEALLOC · 5ebe3a44
      James Morse authored
      DEBUG_PAGEALLOC removes the valid bit of page table entries to prevent
      any access to unallocated memory. Hibernate uses this as a hint that those
      pages don't need to be saved/restored. This patch adds the
      kernel_page_present() function it uses.
      
      hibernate.c copies the resume kernel's linear map for use during
      restore. Add _copy_pte() to fill in the holes made by DEBUG_PAGEALLOC
      in the resume kernel, so we can restore data the original kernel had
      at these addresses.
      
      Finally, DEBUG_PAGEALLOC means the linear-map alias of KERNEL_START to
      KERNEL_END may have holes in it, so we can't lazily clean this whole
      area to the PoC. Only clean the new mmuoff region, and the kernel/kvm
      idmaps.
      
      This reverts commit da24eb1f.
      Reported-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: vmlinux.ld: Add mmuoff data sections and move mmuoff text into idmap · b6113038
      James Morse authored
      Resume from hibernate needs to clean any text executed by the kernel with
      the MMU off to the PoC. Collect these functions together into the
      .idmap.text section as all this code is tightly coupled and also needs
      the same cleaning after resume.
      
      Data is more complicated: secondary_holding_pen_release is written
      with the MMU on, cleaned and invalidated, then read with the MMU off.
      In contrast, __boot_cpu_mode is written with the MMU off and the
      corresponding cache line is invalidated, so when we read it with the
      MMU on we don't get stale data. These cache maintenance operations
      conflict with each other if the values lie within a Cache Writeback
      Granule (CWG) of each other.

      Collect the data into two sections, .mmuoff.data.read and
      .mmuoff.data.write; the linker script ensures the .mmuoff.data.write
      section is aligned to the architectural maximum CWG of 2KB.
      Signed-off-by: James Morse <james.morse@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Create sections.h · ee78fdc7
      James Morse authored
      Each time new section markers are added, kernel/vmlinux.ld.S is updated,
      and new extern char __start_foo[] definitions are scattered through the
      tree.
      
      Create asm/include/sections.h to collect these definitions (and include
      the existing asm-generic version).
      Signed-off-by: James Morse <james.morse@arm.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: Introduce execute-only page access permissions · cab15ce6
      Catalin Marinas authored
      The ARMv8 architecture allows execute-only user permissions by
      clearing the PTE_UXN and PTE_USER bits. However, a kernel running on a
      CPU implementation without User Access Override (ARMv8.2 onwards) can
      still access such a page, so execute-only page permissions do not
      protect against read(2)/write(2) etc. accesses. Systems requiring such
      protection must enable features like SECCOMP.
      
      This patch changes the arm64 __P100 and __S100 protection_map[] macros
      to the new __PAGE_EXECONLY attributes. A side effect is that
      pte_user() no longer triggers for __PAGE_EXECONLY since PTE_USER isn't
      set. To work around this, the check is done on the PTE_NG bit via the
      pte_ng() macro. VM_READ is also checked now for page faults.
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: kprobe: Always clear pstate.D in breakpoint exception handler · 7419333f
      Pratyush Anand authored
      Whenever we hit a kprobe from a non-kprobe debug exception handler, we
      get an endless stream of "Unexpected kernel single-step exception at
      EL1" messages.
      
      PSTATE.D is the debug exception mask bit. It is set whenever we enter
      an exception mode; while it is set, Watchpoint, Breakpoint, and
      Software Step exceptions are masked. However, Breakpoint Instruction
      exceptions can never be masked, so if we ever execute a BRK
      instruction, irrespective of the D bit setting, we receive the
      corresponding breakpoint exception.
      
      For example:
      
      - We are executing a kprobe pre/post handler, and a kprobe has been
        inserted in one of the instructions of a function called by the
        handler. It executes a BRK instruction and we land in the
        KPROBE_REENTER case. (This case is already handled by the current
        code.)

      - We are executing a uprobe handler, or any other BRK handler such as
        WARN_ON (BRK BUG_BRK_IMM), and we trace that path using kprobes. We
        then enter the kprobe breakpoint handler from another BRK handler.
        (This case is not currently handled.)
      
      In all such cases the kprobe breakpoint exception is raised while we
      are already in debug exception mode. The SPSR's D bit (bit 9) shows the
      value of PSTATE.D immediately before the exception was taken, so in the
      cases above we would find it set in the kprobe breakpoint handler. A
      single-step exception will always follow a kprobe breakpoint exception;
      however, it is only raised gracefully if we clear the D bit while
      returning from the breakpoint exception. If the D bit is left set, the
      result is an undefined exception, and when its handler enables debug, a
      single-step exception is generated that is never handled (the address
      does not match, so it is treated as unexpected).
      
      This patch clears the D flag unconditionally in setup_singlestep(), so
      that we always get the single-step exception correctly after returning
      from the breakpoint exception. It also removes the statement that sets
      the D flag on the KPROBE_REENTER return path, because the debug
      exception for KPROBE_REENTER always takes place in a debug exception
      state, so the D flag is already set in that case.
      Acked-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  6. 22 Aug, 2016 10 commits
  7. 21 Aug, 2016 3 commits
  8. 20 Aug, 2016 2 commits
    • parisc: Fix order of EREFUSED define in errno.h · 3eb53b20
      Helge Deller authored
      When building gccgo in userspace, errno.h gets parsed and the Go
      include file sysinfo.go is generated.

      Since EREFUSED is defined to the same value as ECONNREFUSED, and
      ECONNREFUSED is defined later in errno.h, Go complains that EREFUSED
      isn't defined yet.
      
      Fix this trivial problem by moving the define of EREFUSED down after
      ECONNREFUSED in errno.h (and clean up the indenting while touching this line).
      Signed-off-by: Helge Deller <deller@gmx.de>
      Cc: stable@vger.kernel.org
    • parisc: Fix automatic selection of cr16 clocksource · ae141830
      Helge Deller authored
      Commit 54b66800 (parisc: Add native high-resolution sched_clock()
      implementation) added support to use the CPU-internal cr16 counters as reliable
      clocksource with the help of HAVE_UNSTABLE_SCHED_CLOCK.
      
      Sadly, the commit failed to remove the hack which prevented cr16 from
      becoming the default clocksource even on SMP systems.
      Signed-off-by: Helge Deller <deller@gmx.de>
      Cc: stable@vger.kernel.org # 4.7+
  9. 19 Aug, 2016 1 commit
    • Make the hardened user-copy code depend on having a hardened allocator · 6040e576
      Linus Torvalds authored
      The kernel test robot reported a usercopy failure in the new hardened
      sanity checks, due to a page-crossing copy of the FPU state into the
      task structure.
      
      This happened because the kernel test robot was testing with SLOB, which
      doesn't actually do the required book-keeping for slab allocations, and
      as a result the hardening code didn't realize that the task struct
      allocation was one single allocation - and the sanity checks fail.
      
      Since SLOB doesn't even claim to support hardening (and you really
      shouldn't use it), the straightforward solution is to just make the
      usercopy hardening code depend on the allocator supporting it.
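In Kconfig terms, the straightforward fix is a `depends on` line. A hedged sketch of the dependency (the symbol names match the real ones from this era, but the fragment is an approximation, not the exact diff):

```kconfig
config HARDENED_USERCOPY
	bool "Harden memory copies between kernel and userspace"
	depends on HAVE_HARDENED_USERCOPY_ALLOCATOR
	help
	  Sanity-check object sizes on copy_to_user()/copy_from_user(),
	  relying on an allocator (SLAB/SLUB, not SLOB) that tracks the
	  bounds of its allocations.
```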
      Reported-by: kernel test robot <xiaolong.ye@intel.com>
      Cc: Kees Cook <keescook@chromium.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>