Commit 857e17c2 authored by Linus Torvalds

Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 fixes from Catalin Marinas:

 - keep the tail of an unaligned initrd reserved

 - adjust ftrace_make_call() to deal with the relative nature of PLTs

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  arm64/module: ftrace: deal with place relative nature of PLTs
  arm64: mm: Ensure tail of unaligned initrd is reserved
parents e9e1a2e7 4e69ecf4
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -103,10 +103,15 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
 	 * to be revisited if support for multiple ftrace entry points
 	 * is added in the future, but for now, the pr_err() below
 	 * deals with a theoretical issue only.
+	 *
+	 * Note that PLTs are place relative, and plt_entries_equal()
+	 * checks whether they point to the same target. Here, we need
+	 * to check if the actual opcodes are in fact identical,
+	 * regardless of the offset in memory so use memcmp() instead.
 	 */
 	trampoline = get_plt_entry(addr, mod->arch.ftrace_trampoline);
-	if (!plt_entries_equal(mod->arch.ftrace_trampoline,
-			       &trampoline)) {
+	if (memcmp(mod->arch.ftrace_trampoline, &trampoline,
+		   sizeof(trampoline))) {
 		if (plt_entry_is_initialized(mod->arch.ftrace_trampoline)) {
 			pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
 			return -EINVAL;
...
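For context, plt_entries_equal() resolves each entry's branch target relative to that entry's own address, so comparing the installed trampoline against a local copy by target can disagree even when the opcodes are byte-for-byte identical; that is why the hunk above switches to memcmp(). Below is a minimal userspace sketch of that pitfall, assuming simplified stand-ins: struct fake_plt, make_entry() and resolve() are illustrative only, not the kernel's struct plt_entry, get_plt_entry() or plt_entries_equal().

/*
 * Toy illustration (userspace, not kernel code) of why a target-based
 * comparison misbehaves for a copy of a place-relative PLT entry,
 * while memcmp() on the opcodes does not.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct fake_plt {
	int64_t pcrel;	/* stand-in for PC-relative ADRP/ADD opcodes */
};

/* Build an entry that branches to 'target' when placed at 'place'. */
static struct fake_plt make_entry(uintptr_t target, uintptr_t place)
{
	return (struct fake_plt){ .pcrel = (int64_t)(target - place) };
}

/* Resolve the target the place-relative way: against the entry's own address. */
static uintptr_t resolve(const struct fake_plt *e)
{
	return (uintptr_t)e + (uintptr_t)e->pcrel;
}

int main(void)
{
	uintptr_t ftrace_entry = 0x40800000;	/* pretend branch target */
	struct fake_plt installed, copy;

	/* Entry generated for its actual location, like get_plt_entry() does. */
	installed = make_entry(ftrace_entry, (uintptr_t)&installed);

	/* Byte-identical copy living at a different address (think: on the stack). */
	copy = installed;

	printf("opcodes identical (memcmp):      %s\n",
	       memcmp(&installed, &copy, sizeof(copy)) == 0 ? "yes" : "no");
	printf("targets equal (place relative):  %s\n",
	       resolve(&installed) == resolve(&copy) ? "yes" : "no");
	return 0;
}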
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -363,7 +363,7 @@ void __init arm64_memblock_init(void)
 		 * Otherwise, this is a no-op
 		 */
 		u64 base = phys_initrd_start & PAGE_MASK;
-		u64 size = PAGE_ALIGN(phys_initrd_size);
+		u64 size = PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;
 
 		/*
 		 * We can only add back the initrd memory if we don't end up
...
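The one-line change above matters when the initrd does not start on a page boundary: page-aligning only the size can leave the final partial page of the initrd outside the reserved region, whereas aligning the end address up and measuring from the rounded-down base always covers the tail. A small userspace sketch with made-up addresses; PAGE_SIZE, PAGE_MASK and PAGE_ALIGN are local stand-ins mirroring the kernel's 4 KiB-page definitions.

/* Demonstrates old vs. new initrd reservation sizing for an unaligned start. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096ULL
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

int main(void)
{
	uint64_t phys_initrd_start = 0x41200800;	/* not page aligned */
	uint64_t phys_initrd_size  = 0x1000;		/* 4 KiB payload */
	uint64_t initrd_end = phys_initrd_start + phys_initrd_size;

	uint64_t base = phys_initrd_start & PAGE_MASK;

	/* Old: aligns only the size, so the tail page can be left out. */
	uint64_t old_size = PAGE_ALIGN(phys_initrd_size);

	/* New: align the end address up, then measure from the aligned base. */
	uint64_t new_size = PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;

	printf("initrd occupies  [%#llx, %#llx)\n",
	       (unsigned long long)phys_initrd_start,
	       (unsigned long long)initrd_end);
	printf("old reservation  [%#llx, %#llx)  tail covered: %s\n",
	       (unsigned long long)base, (unsigned long long)(base + old_size),
	       base + old_size >= initrd_end ? "yes" : "no");
	printf("new reservation  [%#llx, %#llx)  tail covered: %s\n",
	       (unsigned long long)base, (unsigned long long)(base + new_size),
	       base + new_size >= initrd_end ? "yes" : "no");
	return 0;
}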