Commit 3567fa63 authored by Ard Biesheuvel, committed by Catalin Marinas

arm64: kaslr: Adjust randomization range dynamically

Currently, we base the KASLR randomization range on a rough estimate of
the available space in the upper VA region: the lower 1/4th has the
module region and the upper 1/4th has the fixmap, vmemmap and PCI I/O
ranges, and so we pick a random location in the remaining space in the
middle.
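For concreteness, the old computation (removed by the final hunk of the diff below) can be modeled in userspace. This is only a sketch, assuming VA_BITS_MIN = 48 (4k pages on non-LVA hardware) and expanding the kernel's BIT()/GENMASK() helpers into plain shifts and masks:

  #include <stdint.h>

  /* Sketch of the old fixed-range offset computation; VA_BITS_MIN = 48
   * is an assumed configuration value.
   */
  #define VA_BITS_MIN 48

  static uint64_t old_kaslr_offset(uint64_t seed)
  {
          /* BIT(VA_BITS_MIN - 3) + (seed & GENMASK(VA_BITS_MIN - 3, 0)):
           * a fixed base of 2^45 plus 46 random bits, i.e. an offset in
           * [2^45, 3 * 2^45), the middle half of a 2^47-byte window.
           */
          return (1ULL << (VA_BITS_MIN - 3)) +
                 (seed & ((1ULL << (VA_BITS_MIN - 2)) - 1));
  }

Whatever the actual VA configuration, this window is fixed at build time, which is what breaks once the regions around it grow.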

Once we enable support for 5-level paging with 4k pages, this no longer
works: the vmemmap region, being dimensioned to cover a 52-bit linear
region, takes up so much space in the upper VA region (the size of which
is based on a 48-bit VA space for compatibility with non-LVA hardware)
that the region above the vmalloc region takes up more than a quarter of
the available space.

So instead of a heuristic, let's derive the randomization range from the
actual boundaries of the vmalloc region.
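The replacement can be modeled the same way. A minimal sketch, where the KIMAGE_VADDR and VMALLOC_END values are assumed placeholders (the kernel takes the real definitions from asm/memory.h and asm/pgtable.h, the latter being why the diff below adds that include):

  #include <stdint.h>
  #include <stdio.h>

  /* Assumed example addresses; in the kernel these come from headers. */
  #define KIMAGE_VADDR 0xffff800000000000ULL
  #define VMALLOC_END  0xfffffd0000000000ULL

  static uint64_t new_kaslr_offset(uint64_t seed)
  {
          uint64_t range = (VMALLOC_END - KIMAGE_VADDR) / 2;

          /* The multiply-high scales the uniform 64-bit seed into
           * [0, range) without requiring range to be a power of two;
           * adding range / 2 keeps the offset inside the middle half
           * of the [KIMAGE_VADDR, VMALLOC_END) span.
           */
          return range / 2 + (uint64_t)(((__uint128_t)range * seed) >> 64);
  }

  int main(void)
  {
          /* Extreme seeds land at the quarter and three-quarter marks. */
          printf("%#llx\n", (unsigned long long)new_kaslr_offset(0));
          printf("%#llx\n", (unsigned long long)new_kaslr_offset(~0ULL));
          return 0;
  }

(__uint128_t is a GCC/Clang extension, the same one the kernel code itself relies on here.)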
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20231213084024.2367360-16-ardb@google.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
parent d432b8d5
diff --git a/arch/arm64/kernel/image-vars.h b/arch/arm64/kernel/image-vars.h
@@ -36,6 +36,8 @@ PROVIDE(__pi___memcpy = __pi_memcpy);
 PROVIDE(__pi___memmove = __pi_memmove);
 PROVIDE(__pi___memset = __pi_memset);
 
+PROVIDE(__pi_vabits_actual = vabits_actual);
+
 #ifdef CONFIG_KVM
 /*
diff --git a/arch/arm64/kernel/pi/kaslr_early.c b/arch/arm64/kernel/pi/kaslr_early.c
@@ -14,6 +14,7 @@
 
 #include <asm/archrandom.h>
 #include <asm/memory.h>
+#include <asm/pgtable.h>
 
 /* taken from lib/string.c */
 static char *__strstr(const char *s1, const char *s2)
@@ -87,7 +88,7 @@ static u64 get_kaslr_seed(void *fdt)
 
 asmlinkage u64 kaslr_early_init(void *fdt)
 {
-	u64 seed;
+	u64 seed, range;
 
 	if (is_kaslr_disabled_cmdline(fdt))
 		return 0;
@@ -102,9 +103,9 @@ asmlinkage u64 kaslr_early_init(void *fdt)
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
-	 * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
-	 * the lower and upper quarters to avoid colliding with other
-	 * allocations.
+	 * 'middle' half of the VMALLOC area, and stay clear of the lower and
+	 * upper quarters to avoid colliding with other allocations.
 	 */
-	return BIT(VA_BITS_MIN - 3) + (seed & GENMASK(VA_BITS_MIN - 3, 0));
+	range = (VMALLOC_END - KIMAGE_VADDR) / 2;
+	return range / 2 + (((__uint128_t)range * seed) >> 64);
 }
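A note on the arithmetic in the final hunk: taking the high 64 bits of the 128-bit product range * seed is the standard multiply-high reduction for scaling a uniform 64-bit value into [0, range) when range is not a power of two, and it avoids the modulo bias that a plain 'seed % range' would introduce. The added range / 2 anchors the window at the lower quarter of the vmalloc span, so offsets cover its middle half, preserving the old scheme's intent with dynamically derived bounds.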