    arm64: mm: place empty_zero_page in bss · 5227cfa7
    Mark Rutland authored
    Currently the zero page is set up in paging_init, and thus we cannot use
    the zero page earlier. We use the zero page as a reserved TTBR value
    from which no TLB entries may be allocated (e.g. when uninstalling the
    idmap). To enable such usage earlier (as may be required for invasive
    changes to the kernel page tables), and to minimise the time that the
    idmap is active, we need to be able to use the zero page before
    paging_init.
    
    This patch follows the example set by x86, by allocating the zero page
    at compile time, in .bss. This means that the zero page itself is
    available immediately upon entry to start_kernel (as we zero .bss before
    this), and also means that the zero page takes up no space in the raw
    Image binary. The associated struct page is allocated in bootmem_init,
    and remains unavailable until this time.
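    The mechanism above can be illustrated with a minimal userland sketch (not the kernel code itself; PAGE_SIZE and the attribute spelling here are illustrative assumptions). A zero-initialized, page-aligned global lands in .bss, so it occupies no space in the on-disk binary, yet is guaranteed zeroed before main() runs, just as the kernel's own .bss is zeroed before start_kernel:

    ```c
    #include <assert.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096UL  /* assumed page size for illustration */

    /* Zero-initialized and page-aligned: placed in .bss by the toolchain,
     * so it adds nothing to the binary image and needs no runtime setup. */
    static unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
            __attribute__((aligned(PAGE_SIZE)));

    int main(void)
    {
            /* Usable immediately: the loader zeroed .bss before entry. */
            assert(((unsigned long)empty_zero_page % PAGE_SIZE) == 0);
            for (unsigned long i = 0; i < PAGE_SIZE / sizeof(unsigned long); i++)
                    assert(empty_zero_page[i] == 0);
            puts("zero page aligned and zeroed");
            return 0;
    }
    ```

    The same property is what lets the kernel hand the zero page's address to TTBR as a reserved table before any allocator is up.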
    
    Outside of arch code, the only users of empty_zero_page assume that the
    empty_zero_page symbol refers to the zeroed memory itself, and that
    ZERO_PAGE(x) must be used to acquire the associated struct page,
    ZERO_PAGE(x) must be used to acquire the associated struct page,
    following the example of x86. This patch also brings arm64 in line
    with these assumptions.
    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
    Tested-by: Jeremy Linton <jeremy.linton@arm.com>
    Cc: Laura Abbott <labbott@fedoraproject.org>
    Cc: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>