1. 04 Jul, 2017 3 commits
    • powerpc/Kconfig: Enable STRICT_KERNEL_RWX for some configs · 1e0fc9d1
      Balbir Singh authored
      All code that patches kernel text has been moved over to using
      patch_instruction() and patch_instruction() is able to cope with the
      kernel text being read only.
      
      The linker script has been updated to ensure the read only data ends
      on a large page boundary, so it and the preceding kernel text can be
      marked R_X. We also have implementations of mark_rodata_ro() for Hash
      and Radix MMU modes.
      
      There are some corner-cases missing when the kernel is built
      relocatable, so for now make it depend on !RELOCATABLE.
      
      There's also a temporary workaround to depend on !HIBERNATION to avoid
      a build failure, that will be removed once we've merged with the PM
      tree.
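      The dependency described above can be sketched as a Kconfig fragment. This is an illustrative sketch, not the literal hunk from the commit; the exact symbol and placement in arch/powerpc/Kconfig may differ:

```kconfig
# Hypothetical sketch: gate strict kernel RWX on the configurations the
# change log says are supported, and carry the temporary !HIBERNATION
# workaround until the PM tree is merged.
config PPC
	select ARCH_HAS_STRICT_KERNEL_RWX	if (PPC_BOOK3S_64 && !RELOCATABLE && !HIBERNATION)
```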
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      [mpe: Make it depend on !RELOCATABLE, munge change log]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/radix: Implement STRICT_RWX/mark_rodata_ro() for Radix · 7614ff32
      Balbir Singh authored
      The Radix linear mapping code (create_physical_mapping()) tries to use
      the largest page size it can at each step. Currently the only reason
      it steps down to a smaller page size is if the start addr is
      unaligned (never happens in practice), or the end of memory is not
      aligned to a huge page boundary.
      
      To support STRICT_RWX we need to break the mapping at __init_begin,
      so that the text and rodata prior to that can be marked R_X and the
      regular pages after can be marked RW.
      
      Having done that we can now implement mark_rodata_ro() for Radix,
      knowing that we won't need to split any mappings.
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      [mpe: Split down to PAGE_SIZE, not 2MB, rewrite change log]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
    • powerpc/mm/hash: Implement mark_rodata_ro() for hash · cd65d697
      Balbir Singh authored
      With hash we update the bolted PTE to mark it read-only. We rely
      on the MMU_FTR_KERNEL_RO feature to generate the correct
      permissions for read-only text. The radix implementation just
      prints a warning for now.
      Signed-off-by: Balbir Singh <bsingharora@gmail.com>
      [mpe: Make the warning louder when we don't have MMU_FTR_KERNEL_RO]
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
  2. 03 Jul, 2017 17 commits
  3. 02 Jul, 2017 19 commits
  4. 28 Jun, 2017 1 commit
    • spin loop primitives for busy waiting · fd851a3c
      Nicholas Piggin authored
      Current busy-wait loops are implemented by repeatedly calling
      cpu_relax(), which gives the architecture a low-latency hook to
      improve power usage and/or SMT resource contention.
      
      This poses some difficulties for powerpc, which has SMT priority setting
      instructions (priorities determine how ifetch cycles are apportioned).
      powerpc's cpu_relax() is implemented by setting a low priority then
      setting normal priority. This has several problems:
      
       - Changing thread priority can have some execution cost and potential
         impact to other threads in the core. It's inefficient to execute them
         every time around a busy-wait loop.
      
       - Depending on implementation details, a `low ; medium` sequence may
         not have much if any effect. Some software with a similar pattern
         actually inserts a lot of nops in between, in order to cause a few
         fetch cycles at the low priority.
      
       - The busy-wait loop runs with regular priority. This might only be a few
         fetch cycles, but if there are several threads running such loops, they
         could cause a noticeable impact on a non-idle thread.
      
      Implement spin_begin and spin_end primitives that can be used around
      busy-wait loops; they default to no-ops. Also add spin_cpu_relax,
      which defaults to cpu_relax().
      
      This will allow architectures to hook the entry and exit of busy-wait
      loops, and will allow powerpc to set low SMT priority at entry, and
      normal priority at exit.
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>