1. 24 Feb, 2016 1 commit
    • arm64: add support for module PLTs · fd045f6c
      Ard Biesheuvel authored
      This adds support for emitting PLTs at module load time for relative
      branches that are out of range. This is a prerequisite for KASLR, which
      may place the kernel and the modules anywhere in the vmalloc area,
      making it more likely that branch target offsets exceed the maximum
      range of +/- 128 MB.
      
      In this version, I removed the distinction between relocations against
      .init executable sections and ordinary executable sections. The reason
      is that it is hardly worth the trouble, given that .init.text usually
      does not contain that many far branches, and this version now only
      reserves PLT entry space for jump and call relocations against undefined
      symbols (since symbols defined in the same module can be assumed to be
      within +/- 128 MB).
      
      For example, the mac80211.ko module (which is fairly sizable at ~400 KB)
      built with -mcmodel=large gives the following relocation counts:
      
                          relocs    branches   unique     !local
        .text              3925       3347       518        219
        .init.text           11          8         7          1
        .exit.text            4          4         4          1
        .text.unlikely       81         67        36         17
      
      ('unique' means branches to unique type/symbol/addend combos, of which
      !local is the subset referring to undefined symbols)
      
      IOW, we are only emitting a single PLT entry for the .init sections, and
      we are better off just adding it to the core PLT section instead.
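
      A module PLT entry here is just a small veneer that can reach an
      arbitrary 64-bit address. As a hedged sketch of the idea (the struct
      layout, instruction sequence and use of x16 below are illustrative,
      not necessarily the kernel's exact encoding):

        #include <linux/types.h>

        /*
         * Hedged sketch: the module loader points an out-of-range B/BL at an
         * entry like this, which materialises the full target address in a
         * scratch register and branches indirectly, so it can reach anywhere
         * in the vmalloc area.
         */
        struct plt_entry {
                __le32  mov0;   /* movn/movz x16, #imm16           */
                __le32  mov1;   /* movk      x16, #imm16, lsl #16  */
                __le32  mov2;   /* movk      x16, #imm16, lsl #32  */
                __le32  br;     /* br        x16                   */
        };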
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      fd045f6c
  2. 23 Feb, 2016 3 commits
  3. 19 Feb, 2016 1 commit
  4. 18 Feb, 2016 17 commits
  5. 16 Feb, 2016 18 commits
    • arm64: use local label prefixes for __reg_num symbols · 7abc7d83
      Ard Biesheuvel authored
      The __reg_num_xNN symbols that are used to implement the msr_s and
      mrs_s macros are recorded in the ELF metadata of each object file.
      This does not affect the size of the final binary, but it does clutter
      the output of tools like readelf, i.e.,
      
        $ readelf -a vmlinux |grep -c __reg_num_x
        50976
      
      So let's use symbols with the .L prefix; these are strictly local
      and don't end up in the object files.
      
        $ readelf -a vmlinux |grep -c __reg_num_x
        0
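
      For illustration, a hedged standalone example of the difference (this
      is not the kernel's sysreg.h, just the same assembler behaviour):

        /*
         * Hedged illustration: assembler symbols starting with ".L" are local
         * labels and never reach the object file's symbol table, while plain
         * .equ symbols are recorded there and show up in readelf output.
         */
        static inline void reg_num_demo(void)
        {
                asm volatile(
                        ".equ   .L__reg_num_x0, 0\n"    /* local: not emitted  */
                        ".equ   __reg_num_x1, 1\n"      /* lands in the symtab */
                        "nop\n");
        }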
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      7abc7d83
    • arm64: vdso: Mark vDSO code as read-only · 88d8a799
      David Brown authored
      Although the arm64 vDSO is cleanly separated by code/data with the
      code being read-only in userspace mappings, the code page is still
      writable from the kernel.  There have been exploits (such as
      http://itszn.com/blog/?p=21) that take advantage of this on x86 to go
      from a bad kernel write to full root.
      
      Prevent this specific exploit on arm64 by putting the vDSO code page
      in read-only memory as well.
      
      Before the change:
      [    3.138366] vdso: 2 pages (1 code @ ffffffc000a71000, 1 data @ ffffffc000a70000)
      ---[ Kernel Mapping ]---
      0xffffffc000000000-0xffffffc000082000         520K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc000082000-0xffffffc000200000        1528K     ro x  SHD AF            UXN MEM/NORMAL
      0xffffffc000200000-0xffffffc000800000           6M     ro x  SHD AF        BLK UXN MEM/NORMAL
      0xffffffc000800000-0xffffffc0009b6000        1752K     ro x  SHD AF            UXN MEM/NORMAL
      0xffffffc0009b6000-0xffffffc000c00000        2344K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc000c00000-0xffffffc008000000         116M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc00c000000-0xffffffc07f000000        1840M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc800000000-0xffffffc840000000           1G     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc840000000-0xffffffc87ae00000         942M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc87ae00000-0xffffffc87ae70000         448K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87af80000-0xffffffc87af8a000          40K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87af8b000-0xffffffc87b000000         468K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87b000000-0xffffffc87fe00000          78M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc87fe00000-0xffffffc87ff50000        1344K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87ff90000-0xffffffc87ffa0000          64K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87fff0000-0xffffffc880000000          64K     RW NX SHD AF            UXN MEM/NORMAL
      
      After:
      [    3.138368] vdso: 2 pages (1 code @ ffffffc0006de000, 1 data @ ffffffc000a74000)
      ---[ Kernel Mapping ]---
      0xffffffc000000000-0xffffffc000082000         520K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc000082000-0xffffffc000200000        1528K     ro x  SHD AF            UXN MEM/NORMAL
      0xffffffc000200000-0xffffffc000800000           6M     ro x  SHD AF        BLK UXN MEM/NORMAL
      0xffffffc000800000-0xffffffc0009b8000        1760K     ro x  SHD AF            UXN MEM/NORMAL
      0xffffffc0009b8000-0xffffffc000c00000        2336K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc000c00000-0xffffffc008000000         116M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc00c000000-0xffffffc07f000000        1840M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc800000000-0xffffffc840000000           1G     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc840000000-0xffffffc87ae00000         942M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc87ae00000-0xffffffc87ae70000         448K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87af80000-0xffffffc87af8a000          40K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87af8b000-0xffffffc87b000000         468K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87b000000-0xffffffc87fe00000          78M     RW NX SHD AF        BLK UXN MEM/NORMAL
      0xffffffc87fe00000-0xffffffc87ff50000        1344K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87ff90000-0xffffffc87ffa0000          64K     RW NX SHD AF            UXN MEM/NORMAL
      0xffffffc87fff0000-0xffffffc880000000          64K     RW NX SHD AF            UXN MEM/NORMAL
      
      Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the
      PaX Team, Brad Spengler, and Kees Cook.
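
      The patch achieves this by placing the vDSO code page in read-only
      memory at mapping time; purely as a hedged illustration of the end
      state (not the mechanism the patch uses), revoking kernel write access
      to a page looks like this with the generic helper:

        #include <linux/set_memory.h>   /* modern home of set_memory_ro() */

        /*
         * Hedged sketch of the end state only: once the vDSO code page is
         * treated like rodata, a stray kernel write to it faults instead of
         * silently patching what every process executes.
         */
        static int make_vdso_text_ro(unsigned long text_page)
        {
                return set_memory_ro(text_page, 1);     /* one page of vDSO code */
        }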
      Signed-off-by: David Brown <david.brown@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [catalin.marinas@arm.com: removed superfluous __PAGE_ALIGNED_DATA]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      88d8a799
    • arm64: ubsan: select ARCH_HAS_UBSAN_SANITIZE_ALL · f0b7f8a4
      Yang Shi authored
      To enable UBSAN on arm64, ARCH_HAS_UBSAN_SANITIZE_ALL needs to be selected.
      
      Basic kernel bootup test is passed on arm64 with CONFIG_UBSAN_SANITIZE_ALL
      enabled.
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f0b7f8a4
    • arm64: replace read_lock to rcu lock in call_step_hook · cf0a2543
      Yang Shi authored
      BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
      in_atomic(): 1, irqs_disabled(): 128, pid: 383, name: sh
      Preemption disabled at:[<ffff800000124c18>] kgdb_cpu_enter+0x158/0x6b8
      
      CPU: 3 PID: 383 Comm: sh Tainted: G        W       4.1.13-rt13 #2
      Hardware name: Freescale Layerscape 2085a RDB Board (DT)
      Call trace:
      [<ffff8000000885e8>] dump_backtrace+0x0/0x128
      [<ffff800000088734>] show_stack+0x24/0x30
      [<ffff80000079a7c4>] dump_stack+0x80/0xa0
      [<ffff8000000bd324>] ___might_sleep+0x18c/0x1a0
      [<ffff8000007a20ac>] __rt_spin_lock+0x2c/0x40
      [<ffff8000007a2268>] rt_read_lock+0x40/0x58
      [<ffff800000085328>] single_step_handler+0x38/0xd8
      [<ffff800000082368>] do_debug_exception+0x58/0xb8
      Exception stack(0xffff80834a1e7c80 to 0xffff80834a1e7da0)
      7c80: ffffff9c ffffffff 92c23ba0 0000ffff 4a1e7e40 ffff8083 001bfcc4 ffff8000
      7ca0: f2000400 00000000 00000000 00000000 4a1e7d80 ffff8083 0049501c ffff8000
      7cc0: 00005402 00000000 00aaa210 ffff8000 4a1e7ea0 ffff8083 000833f4 ffff8000
      7ce0: ffffff9c ffffffff 92c23ba0 0000ffff 4a1e7ea0 ffff8083 001bfcc0 ffff8000
      7d00: 4a0fc400 ffff8083 00005402 00000000 4a1e7d40 ffff8083 00490324 ffff8000
      7d20: ffffff9c 00000000 92c23ba0 0000ffff 000a0000 00000000 00000000 00000000
      7d40: 00000008 00000000 00080000 00000000 92c23b8b 0000ffff 92c23b8e 0000ffff
      7d60: 00000038 00000000 00001cb2 00000000 00000005 00000000 92d7b498 0000ffff
      7d80: 01010101 01010101 92be9000 0000ffff 00000000 00000000 00000030 00000000
      [<ffff8000000833f4>] el1_dbg+0x18/0x6c
      
      This issue is similar to 62c6c61a ("arm64: replace read_lock to rcu lock in
      call_break_hook"), but shows up in single_step_handler instead.
      
      This also solves a silent hang in the kgdbts boot test on the 4.4 -rt kernel.
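
      A hedged sketch of the conversion pattern described above (structure
      and names are illustrative, not the exact debug-monitors.c code):

        #include <linux/rculist.h>
        #include <asm/ptrace.h>

        struct step_hook_sketch {
                struct list_head node;
                int (*fn)(struct pt_regs *regs, unsigned int esr);
        };

        static LIST_HEAD(step_hook_list);

        /*
         * Walk the hooks under rcu_read_lock(), which is safe in the atomic
         * debug-exception context, unlike a rwlock that can sleep on -rt.
         * Writers would add/remove entries with list_*_rcu() under a spinlock.
         */
        static int call_step_hook_sketch(struct pt_regs *regs, unsigned int esr)
        {
                struct step_hook_sketch *hook;
                int retval = 0;         /* 0: not handled (illustrative) */

                rcu_read_lock();
                list_for_each_entry_rcu(hook, &step_hook_list, node)
                        retval = hook->fn(regs, esr);
                rcu_read_unlock();

                return retval;
        }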
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      cf0a2543
    • arm64: ptdump: Indicate whether memory should be faulting · d7e9d594
      Laura Abbott authored
      With CONFIG_DEBUG_PAGEALLOC, pages do not have the valid bit
      set when free in the buddy allocator. Add an indication to
      the page table dumping code that the valid bit is not set,
      'F' for fault, to make this easier to understand.
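
      A hedged sketch of the reporting idea (field layout and helper are
      illustrative, not the actual ptdump code):

        /*
         * Hedged sketch: if the descriptor's valid bit (bit 0) is clear,
         * report "F" so the reader knows an access to the range would fault,
         * rather than printing permission attributes that no longer apply.
         */
        static const char *prot_summary(unsigned long desc)
        {
                if (!(desc & 1UL))
                        return "F";     /* e.g. freed by DEBUG_PAGEALLOC */
                return "mapped";        /* decode RW/ro/x etc. here */
        }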
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d7e9d594
    • arm64: Add support for ARCH_SUPPORTS_DEBUG_PAGEALLOC · 83863f25
      Laura Abbott authored
      ARCH_SUPPORTS_DEBUG_PAGEALLOC provides a hook to map and unmap
      pages for debugging purposes. This requires memory be mapped
      with PAGE_SIZE mappings since breaking down larger mappings
      at runtime will lead to TLB conflicts. Check if debug_pagealloc
      is enabled at runtime and if so, map everything with PAGE_SIZE
      pages. Implement the functions to actually map/unmap the
      pages at runtime.
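
      As a hedged sketch, the hook that DEBUG_PAGEALLOC expects looks roughly
      like this (the permission-change helper is hypothetical):

        #include <linux/mm.h>

        #ifdef CONFIG_DEBUG_PAGEALLOC
        /*
         * Hedged sketch: called when pages enter/leave the buddy allocator.
         * With everything mapped at PAGE_SIZE granularity, toggling the valid
         * bit on the linear-map entries makes use-after-free accesses fault.
         */
        void __kernel_map_pages(struct page *page, int numpages, int enable)
        {
                unsigned long addr = (unsigned long)page_address(page);

                set_range_valid(addr, numpages, enable); /* hypothetical helper */
        }
        #endif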
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      [catalin.marinas@arm.com: static annotation block_mappings_allowed() and #ifdef]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      83863f25
    • arm64: Drop alloc function from create_mapping · 132233a7
      Laura Abbott authored
      create_mapping is only used in fixmap_remap_fdt. All the create_mapping
      calls need to happen on existing translation table pages without
      additional allocations. Rather than have an alloc function be called
      and fail, just set it to NULL and catch its use. Also change
      the name to create_mapping_noalloc to better capture what exactly is
      going on.
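
      A hedged sketch of the shape of the change (the internal mapping
      routine and its parameter list are abbreviated and illustrative):

        /*
         * Hedged sketch: early fixmap users must never need a new table page,
         * so the allocator callback is passed as NULL, and the table-walking
         * code traps any attempt to use it (e.g. BUG_ON(!pgtable_alloc)).
         */
        static void create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
                                           phys_addr_t size, pgprot_t prot)
        {
                __create_mapping(&init_mm, pgd_offset_k(virt), phys, virt, size,
                                 prot, NULL /* pgtable_alloc: never called */);
        }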
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      132233a7
    • arm64: prefetch: add missing #include for spin_lock_prefetch · afb83cc3
      Will Deacon authored
      As of 52e662326e1e ("arm64: prefetch: don't provide spin_lock_prefetch
      with LSE"), spin_lock_prefetch is patched at runtime when the LSE atomics
      are in use. This relies on the ARM64_LSE_ATOMIC_INSN macro to drive
      the alternatives framework, but that macro is only available via
      asm/lse.h, which isn't explicitly included in processor.h. Consequently,
      drivers can run into build failures such as:
      
         In file included from include/linux/prefetch.h:14:0,
                          from drivers/net/ethernet/intel/i40e/i40e_txrx.c:27:
         arch/arm64/include/asm/processor.h: In function 'spin_lock_prefetch':
         arch/arm64/include/asm/processor.h:183:15: error: expected string literal before 'ARM64_LSE_ATOMIC_INSN'
           asm volatile(ARM64_LSE_ATOMIC_INSN(
      
      This patch adds the missing include and gets things building again.
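
      A hedged sketch of the fix and of why the include matters (the function
      body is illustrative of the pattern rather than a quote of processor.h):

        #include <asm/lse.h>    /* provides ARM64_LSE_ATOMIC_INSN */

        /*
         * Without the include above, ARM64_LSE_ATOMIC_INSN is an undeclared
         * identifier inside asm(), which is exactly the "expected string
         * literal" error quoted in the i40e build failure.
         */
        static inline void spin_lock_prefetch_sketch(const void *ptr)
        {
                asm volatile(ARM64_LSE_ATOMIC_INSN(
                             "prfm pstl1strm, %a0",     /* LL/SC: prefetch */
                             "nop")                     /* LSE: do nothing */
                             : : "p" (ptr));
        }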
      Reported-by: kbuild test robot <fengguang.wu@intel.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      afb83cc3
    • arm64: lib: patch in prfm for copy_page if requested · 60e0a09d
      Andrew Pinski authored
      On ThunderX T88 pass 1 and pass 2, there is no hardware prefetching so
      we need to patch in explicit software prefetching instructions.
      
      Prefetching improves this code by 60% over the original code and 2x
      over the code without prefetching for the affected hardware using the
      benchmark code at https://github.com/apinski-cavium/copy_page_benchmark
      Signed-off-by: Andrew Pinski <apinski@cavium.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Andrew Pinski <apinski@cavium.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      60e0a09d
    • arm64: lib: improve copy_page to deal with 128 bytes at a time · 223e23e8
      Will Deacon authored
      We want to avoid lots of different copy_page implementations, settling
      for something that is "good enough" everywhere and hopefully easy to
      understand and maintain whilst we're at it.
      
      This patch reworks our copy_page implementation based on discussions
      with Cavium on the list and benchmarking on Cortex-A processors so that:
      
        - The loop is unrolled to copy 128 bytes per iteration
      
        - The reads are offset so that we read from the next 128-byte block
          in the same iteration that we store the previous block
      
        - Explicit prefetch instructions are removed for now, since they hurt
          performance on CPUs with hardware prefetching
      
        - The loop exit condition is calculated at the start of the loop
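
      The structure above, roughly in C (a hedged illustration only; the real
      routine is hand-written assembly using ldp/stp pairs):

        #include <stdint.h>
        #include <string.h>

        #define PAGE_SZ 4096

        /* Read the next 128-byte block in the same iteration that stores the
         * previous one; 128 bytes are copied per loop iteration. */
        void copy_page_sketch(void *to, const void *from)
        {
                const uint8_t *s = from;
                uint8_t *d = to;
                uint8_t cur[128], next[128];
                size_t off;

                memcpy(cur, s, sizeof(cur));
                for (off = 128; off < PAGE_SZ; off += 128) {
                        memcpy(next, s + off, sizeof(next));     /* load next  */
                        memcpy(d + off - 128, cur, sizeof(cur)); /* store prev */
                        memcpy(cur, next, sizeof(next));
                }
                memcpy(d + PAGE_SZ - 128, cur, sizeof(cur));     /* final block */
        }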
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Andrew Pinski <apinski@cavium.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      223e23e8
    • arm64: prefetch: add alternative pattern for CPUs without a prefetcher · d5370f75
      Will Deacon authored
      Most CPUs have a hardware prefetcher which generally performs better
      without explicit prefetch instructions issued by software, however
      some CPUs (e.g. Cavium ThunderX) rely solely on explicit prefetch
      instructions.
      
      This patch adds an alternative pattern (ARM64_HAS_NO_HW_PREFETCH) to
      allow our library code to make use of explicit prefetch instructions
      during things like copy routines only when the CPU does not have the
      capability to perform the prefetching itself.
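
      A hedged sketch of how such an alternative can be consumed (register
      operand, offset and prefetch hint are illustrative):

        #include <asm/alternative.h>
        #include <asm/cpufeature.h>

        /*
         * Hedged sketch: the default text is a nop; on CPUs flagged with
         * ARM64_HAS_NO_HW_PREFETCH the alternatives framework patches in an
         * explicit prfm at boot.
         */
        static inline void prefetch_if_needed(const void *p)
        {
                asm volatile(ALTERNATIVE("nop",
                                         "prfm pldl1strm, [%0, #256]",
                                         ARM64_HAS_NO_HW_PREFETCH)
                             : : "r" (p));
        }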
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Andrew Pinski <apinski@cavium.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d5370f75
    • arm64: prefetch: don't provide spin_lock_prefetch with LSE · cd5e10bd
      Will Deacon authored
      The LSE atomics rely on us not dirtying data at L1 if we can avoid it,
      otherwise many of the potential scalability benefits are lost.
      
      This patch replaces spin_lock_prefetch with a nop when the LSE atomics
      are in use, so that users don't shoot themselves in the foot by causing
      needless coherence traffic at L1.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Tested-by: Andrew Pinski <apinski@cavium.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      cd5e10bd
    • arm64: kernel: implement ACPI parking protocol · 5e89c55e
      Lorenzo Pieralisi authored
      The SBBR and ACPI specifications allow ACPI based systems that do not
      implement PSCI (eg systems with no EL3) to boot through the ACPI parking
      protocol specification[1].
      
      This patch implements the ACPI parking protocol CPU operations, and adds
      code that eases parsing of the parking protocol data structures during
      the ARM64 SMP initialization carried out at the same time as CPU enumeration.
      
      To wake-up the CPUs from the parked state, this patch implements a
      wakeup IPI for ARM64 (ie arch_send_wakeup_ipi_mask()) that mirrors the
      ARM one, so that a specific IPI is sent for wake-up purpose in order
      to distinguish it from other IPI sources.
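
      As a hedged sketch, the dedicated wake-up IPI boils down to something
      like the following (the IPI identifier and cross-call helper are
      illustrative):

        #include <linux/cpumask.h>

        /*
         * Hedged sketch: parked CPUs get their own IPI so the wake-up can be
         * told apart from reschedule, call-function and other IPI sources.
         */
        void arch_send_wakeup_ipi_mask(const struct cpumask *mask)
        {
                smp_cross_call(mask, IPI_WAKEUP);       /* IPI_WAKEUP: illustrative */
        }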
      
      Given the current ACPI MADT parsing API, the patch implements a glue
      layer that helps pass the MADT GICC data structure from the SMP
      initialization code to the parking protocol implementation, somewhat
      overriding the CPU operations interfaces. This is to avoid creating a
      completely transparent DT/ACPI CPU operations layer that would require
      opaque structure handling for CPU data (DT represents CPUs through DT
      nodes, ACPI through static MADT table entries), which seems overkill
      given that ACPI on ARM64 mandates only two booting protocols (PSCI and
      the parking protocol), so there is no need for further protocol additions.
      
      Based on the original work by Mark Salter <msalter@redhat.com>
      
      [1] https://acpica.org/sites/acpica/files/MP%20Startup%20for%20ARM%20platforms.docx
      Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Tested-by: Loc Ho <lho@apm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Hanjun Guo <hanjun.guo@linaro.org>
      Cc: Sudeep Holla <sudeep.holla@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Al Stone <ahs3@redhat.com>
      [catalin.marinas@arm.com: Added WARN_ONCE(!acpi_parking_protocol_valid() on the IPI]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      5e89c55e
    • arm64: mm: create new fine-grained mappings at boot · 068a17a5
      Mark Rutland authored
      At boot we may change the granularity of the tables mapping the kernel
      (by splitting or making sections). This may happen when we create the
      linear mapping (in __map_memblock), or at any point we try to apply
      fine-grained permissions to the kernel (e.g. fixup_executable,
      mark_rodata_ro, fixup_init).
      
      Changing the active page tables in this manner may result in multiple
      entries for the same address being allocated into TLBs, risking problems
      such as TLB conflict aborts or issues derived from the amalgamation of
      TLB entries. Generally, a break-before-make (BBM) approach is necessary
      to avoid conflicts, but we cannot do this for the kernel tables as it
      risks unmapping text or data being used to do so.
      
      Instead, we can create a new set of tables from scratch in the safety of
      the existing mappings, and subsequently migrate over to these using the
      new cpu_replace_ttbr1 helper, which avoids the two sets of tables being
      active simultaneously.
      
      To avoid issues when we later modify permissions of the page tables
      (e.g. in fixup_init), we must create the page tables at a granularity
      such that later modification does not result in splitting of tables.
      
      This patch applies this strategy, creating a new set of fine-grained
      page tables from scratch, and safely migrating to them. The existing
      fixmap and kasan shadow page tables are reused in the new fine-grained
      tables.
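
      A hedged sketch of the switch-over step (table population is elided;
      the helper is the new cpu_replace_ttbr1 mentioned above):

        /*
         * Hedged sketch: the new tables are fully populated while still
         * running on the old ones, then TTBR1 is moved over via the helper,
         * which goes through the idmap so both sets are never live at once.
         */
        static void switch_to_fine_grained_tables(pgd_t *new_pgd)
        {
                /* ... map kernel segments and memblock memory into new_pgd ... */
                cpu_replace_ttbr1(new_pgd);
        }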
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      068a17a5
    • arm64: ensure _stext and _etext are page-aligned · fca082bf
      Mark Rutland authored
      Currently we have separate ALIGN_DEBUG_RO{,_MIN} directives to align
      _etext and __init_begin. While we ensure that __init_begin is
      page-aligned, we do not provide the same guarantee for _etext. This is
      not problematic currently as the alignment of __init_begin is sufficient
      to prevent issues when we modify permissions.
      
      Subsequent patches will assume page alignment of segments of the kernel
      we wish to map with different permissions. To ensure this, move _etext
      after the ALIGN_DEBUG_RO_MIN for the init section. This renders the
      prior ALIGN_DEBUG_RO irrelevant, and hence it is removed. Likewise,
      upgrade to ALIGN_DEBUG_RO_MIN(PAGE_SIZE) for _stext.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      fca082bf
    • arm64: mm: allow passing a pgdir to alloc_init_* · 11509a30
      Mark Rutland authored
      To allow us to initialise pgdirs which are fixmapped, allow explicitly
      passing a pgdir rather than an mm. A new __create_pgd_mapping function
      is added for this, with existing __create_mapping callers migrated to
      this.
      
      The mm argument was previously only used at the top level. Now that it
      is redundant at all levels, it is removed. To indicate its newfound
      similarity to alloc_init_{pud,pmd,pte}, __create_mapping is renamed to
      init_pgd.
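
      A hedged sketch of the new entry point's shape (the parameter list is
      illustrative):

        /*
         * Hedged sketch: callers now name the pgdir to populate explicitly,
         * which is what makes it possible to initialise a pgdir that is only
         * reachable through the fixmap rather than via an mm.
         */
        static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
                                         unsigned long virt, phys_addr_t size,
                                         pgprot_t prot,
                                         phys_addr_t (*pgtable_alloc)(void))
        {
                init_pgd(pgdir, phys, virt, size, prot, pgtable_alloc);
        }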
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      11509a30
    • arm64: mm: allocate pagetables anywhere · cdef5f6e
      Mark Rutland authored
      Now that create_mapping uses fixmap slots to modify pte, pmd, and pud
      entries, we can access page tables anywhere in physical memory,
      regardless of the extent of the linear mapping.
      
      Given that, we no longer need to limit memblock allocations during page
      table creation, and can leave the limit as its default
      MEMBLOCK_ALLOC_ANYWHERE.
      
      We never add memory which will fall outside of the linear map range
      given phys_offset and MAX_MEMBLOCK_ADDR are configured appropriately, so
      any tables we create will fall in the linear map of the final tables.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      cdef5f6e
    • arm64: mm: use fixmap when creating page tables · f4710445
      Mark Rutland authored
      As a preparatory step to allow us to allocate early page tables from
      unmapped memory using memblock_alloc, modify the __create_mapping
      callees to map and unmap the tables they modify using fixmap entries.
      
      All but the top-level pgd initialisation is performed via the fixmap.
      Subsequent patches will inject the pgd physical address, and migrate to
      using the FIX_PGD slot.
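
      A hedged sketch of the map/modify/unmap pattern this introduces (slot
      and helper names follow the fixmap convention but are illustrative):

        #include <asm/fixmap.h>

        /*
         * Hedged sketch: each table level gets a dedicated fixmap slot, so a
         * table page can be edited through a temporary mapping even if the
         * memory it was allocated from is not covered by the linear map yet.
         */
        static pte_t *pte_set_fixmap_sketch(phys_addr_t pte_phys)
        {
                return (pte_t *)set_fixmap_offset(FIX_PTE, pte_phys);
        }

        static void pte_clear_fixmap_sketch(void)
        {
                clear_fixmap(FIX_PTE);
        }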
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Tested-by: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f4710445