1. 06 May, 2016 1 commit
  2. 05 May, 2016 1 commit
  3. 03 May, 2016 2 commits
    • arm64: always use STRICT_MM_TYPECHECKS · 2326df55
      Yang Shi authored
      Inspired by the powerpc counterpart [1], which shows that enabling
      STRICT_MM_TYPECHECKS with a modern compiler has no negative effect on
      code generation.
      
      And Arnd's comment [2] on that patch notes that STRICT_MM_TYPECHECKS
      could be the default as long as the architecture can pass structures
      in registers as function arguments. ARM64 can do so as long as the
      structure is no larger than 16 bytes, and all page table value types
      on ARM64 are u64.
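
      For reference, STRICT_MM_TYPECHECKS toggles roughly the following
      shape of definitions (a sketch in the style of the arm64
      pgtable-types.h header; the exact layout in the tree may differ):

         #ifdef STRICT_MM_TYPECHECKS
         /* Wrap the raw u64 in a struct so that mixing up pte/pmd/pgd
          * values becomes a compile-time type error. */
         typedef struct { pteval_t pte; } pte_t;
         #define pte_val(x)      ((x).pte)
         #define __pte(x)        ((pte_t) { (x) })
         #else
         /* Bare integer: no type safety, historically assumed cheaper. */
         typedef pteval_t pte_t;
         #define pte_val(x)      (x)
         #define __pte(x)        (x)
         #endif

      A struct wrapping a single u64 is 8 bytes, well under the 16-byte
      limit, so it is passed in a register just like the bare integer.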
      
      The disassembly below demonstrates the register passing; entry has
      type pte_t:
      
                  entry = arch_make_huge_pte(entry, vma, page, writable);
         0xffff00000826fc38 <+80>:    and     x0, x0, #0xfffffffffffffffd
         0xffff00000826fc3c <+84>:    mov     w3, w21
         0xffff00000826fc40 <+88>:    mov     x2, x20
         0xffff00000826fc44 <+92>:    mov     x1, x19
         0xffff00000826fc48 <+96>:    orr     x0, x0, #0x400
         0xffff00000826fc4c <+100>:   bl      0xffff00000809bcc0 <arch_make_huge_pte>
      
      [1] http://www.spinics.net/lists/linux-mm/msg105951.html
      [2] http://www.spinics.net/lists/linux-mm/msg105969.html
      
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: kvm: Fix kvm teardown for systems using the extended idmap · c612505f
      James Morse authored
      If memory is located above 1<<VA_BITS, kvm adds an extra level to its page
      tables, merging the runtime tables and boot tables that contain the idmap.
      This lets us avoid the trampoline dance during initialisation.
      
      This also means no trampoline page is mapped, so
      __cpu_reset_hyp_mode() can't call __kvm_hyp_reset() through that page.
      The good news is that the idmap is still mapped, so we don't need the
      trampoline page. The bad news is that we can't call it directly, as
      the idmap is above HYP_PAGE_OFFSET and so its address would be masked
      by kvm_call_hyp.
      
      Add a function, __extended_idmap_trampoline, which branches into
      __kvm_hyp_reset in the idmap, and change kvm_hyp_reset_entry() to
      return this address when __kvm_cpu_uses_extended_idmap() is true. In
      this case __kvm_hyp_reset() will still switch to the boot tables
      (which are the merged tables already in use) and branch into the idmap
      (where it already is), as sketched below.
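
      The address selection reads roughly as follows (a hedged sketch of
      the kvm_hyp_reset_entry() logic; the exact symbol arithmetic in the
      arm64 tree may differ):

         static inline unsigned long kvm_hyp_reset_entry(void)
         {
                 if (!__kvm_cpu_uses_extended_idmap()) {
                         /* Trampoline page is mapped at TRAMPOLINE_VA:
                          * return the HYP alias of __kvm_hyp_reset inside
                          * that page. */
                         return TRAMPOLINE_VA +
                                ((unsigned long)__kvm_hyp_reset -
                                 ((unsigned long)__hyp_idmap_text_start &
                                  PAGE_MASK));
                 }

                 /* Merged tables: no trampoline page, but the idmap is
                  * mapped. Branch via a stub that lives below
                  * HYP_PAGE_OFFSET, since the idmap address itself would
                  * be masked by kvm_call_hyp. */
                 return (unsigned long)kvm_ksym_ref(__extended_idmap_trampoline);
         }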
      
      This fixes boot failures on these systems, where we fail to execute the
      missing trampoline page when tearing down kvm in init_subsystems():
      [    2.508922] kvm [1]: 8-bit VMID
      [    2.512057] kvm [1]: Hyp mode initialized successfully
      [    2.517242] kvm [1]: interrupt-controller@e1140000 IRQ13
      [    2.522622] kvm [1]: timer IRQ3
      [    2.525783] Kernel panic - not syncing: HYP panic:
      [    2.525783] PS:200003c9 PC:0000007ffffff820 ESR:86000005
      [    2.525783] FAR:0000007ffffff820 HPFAR:00000000003ffff0 PAR:0000000000000000
      [    2.525783] VCPU:          (null)
      [    2.525783]
      [    2.547667] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G        W       4.6.0-rc5+ #1
      [    2.555137] Hardware name: Default string Default string/Default string, BIOS ROD0084E 09/03/2015
      [    2.563994] Call trace:
      [    2.566432] [<ffffff80080888d0>] dump_backtrace+0x0/0x240
      [    2.571818] [<ffffff8008088b24>] show_stack+0x14/0x20
      [    2.576858] [<ffffff80083423ac>] dump_stack+0x94/0xb8
      [    2.581899] [<ffffff8008152130>] panic+0x10c/0x250
      [    2.586677] [<ffffff8008152024>] panic+0x0/0x250
      [    2.591281] SMP: stopping secondary CPUs
      [    3.649692] SMP: failed to stop secondary CPUs 0-2,4-7
      [    3.654818] Kernel Offset: disabled
      [    3.658293] Memory Limit: none
      [    3.661337] ---[ end Kernel panic - not syncing: HYP panic:
      [    3.661337] PS:200003c9 PC:0000007ffffff820 ESR:86000005
      [    3.661337] FAR:0000007ffffff820 HPFAR:00000000003ffff0 PAR:0000000000000000
      [    3.661337] VCPU:          (null)
      [    3.661337]
      Reported-by: Will Deacon <will.deacon@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  4. 28 Apr, 2016 17 commits
  5. 26 Apr, 2016 8 commits
  6. 25 Apr, 2016 9 commits
  7. 22 Apr, 2016 2 commits
    • mm: replace open coded page to virt conversion with page_to_virt() · 1dff8083
      Ard Biesheuvel authored
      The open-coded conversion from struct page address to virtual address
      in lowmem_page_address() involves an intermediate conversion to a pfn
      number/physical address. Since the placement of the struct page array
      relative to the linear mapping may be completely independent of the
      placement of physical RAM (as is the case for arm64 after commit
      dfd55ad8 'arm64: vmemmap: use virtual projection of linear region'),
      the conversion to a physical address and back again should cancel out.
      Unfortunately, the shifting and pointer arithmetic involved prevent
      this from happening: the resulting calculation essentially subtracts
      the address of the start of physical memory and adds it back again, in
      a way that prevents the compiler from optimizing it away.
      
      Since the start of physical memory is not a build-time constant on
      arm64, the resulting conversion involves an unnecessary memory access,
      which we would like to get rid of. So replace the open-coded
      conversion with a call to page_to_virt(), and use the open-coded
      conversion as its default definition, to be overridden by the
      architecture if desired. The existing arch-specific definitions of
      page_to_virt are all equivalent to this default definition, so by
      itself this patch is a no-op.
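
      Concretely, the change amounts to something like the following in
      include/linux/mm.h (a sketch; the commit itself has the exact
      context):

         /* Default: round-trip through the pfn/physical address.
          * Architectures where this round trip is not free (e.g. arm64)
          * can provide their own page_to_virt in asm/page.h. */
         #ifndef page_to_virt
         #define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
         #endif

         static __always_inline void *lowmem_page_address(const struct page *page)
         {
                 return page_to_virt(page);
         }
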
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • openrisc: drop wrongly typed definition of page_to_virt() · 86d618cd
      Ard Biesheuvel authored
      To align with generic code and the other architectures, which expect
      the page_to_virt macro to produce an expression of type 'void *', drop
      the arch-specific definition, which is never referenced anyway.
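
      For illustration only (a hypothetical macro, not the exact openrisc
      definition), a conversion of this shape yields an integer-typed
      expression, whereas generic code expects void *:

         /* Integer-typed result: callers expecting a pointer need a cast
          * at every use site. */
         #define page_to_virt_int(page) \
                 ((((page) - mem_map) << PAGE_SHIFT) + PAGE_OFFSET)

         /* void *-typed result, matching the generic default. */
         #define page_to_virt(page) \
                 ((void *)((((page) - mem_map) << PAGE_SHIFT) + PAGE_OFFSET))
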
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>