Commit 4ed28639 authored by Michal Hocko, committed by Linus Torvalds

fs, elf: drop MAP_FIXED usage from elf_map

Both load_elf_interp and load_elf_binary rely on elf_map to map segments
at a controlled address, and they use MAP_FIXED to enforce that.  This
is, however, dangerous: it is prone to silent data corruption, which can
even be exploitable.
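
To see the hazard concretely, here is a minimal userspace sketch (a
hypothetical demo program, not part of this patch; the kernel-side
vm_mmap call in elf_map behaves the same way): MAP_FIXED silently drops
whatever was mapped in the requested range.

	/* fixed_clobber.c - MAP_FIXED silently replaces an existing
	 * mapping (error checking omitted for brevity) */
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long pagesz = sysconf(_SC_PAGESIZE);

		/* An existing mapping holding data we care about. */
		char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		strcpy(p, "precious data");

		/* Mapping over it with MAP_FIXED succeeds and throws the
		 * old contents away - no error or warning is reported. */
		char *q = mmap(p, pagesz, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

		printf("same address: %d, contents now: \"%s\"\n", q == p, q);
		return 0;
	}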

Take CVE-2017-1000253 as an example.  At the time (before commit
eab09532: "binfmt_elf: use ELF_ET_DYN_BASE only for PIE"),
ELF_ET_DYN_BASE was at TASK_SIZE / 3 * 2, which on the 32-bit (legacy)
memory layout is not far from the stack top: with a 3GB TASK_SIZE that
puts the ELF base at 2GB, only about 1GB below the stack.  With some
luck we could therefore end up mapping over the existing stack.

The issue has since been fixed (a87938b2: "fs/binfmt_elf.c: fix bug in
loading of PIE binaries"), ELF_ET_DYN_BASE has moved much further from
the stack (eab09532, and later c715b72c: "mm: revert x86_64 and arm64
ELF_ET_DYN_BASE base changes"), and excessive stack consumption early
during execve was fully stopped by da029c11 ("exec: Limit arg stack to
at most 75% of _STK_LIM").  So we should be safe, and any attack should
be impractical.  On the other hand, this is too subtle an assumption:
it can break quite easily and would be hard to spot.

I believe the MAP_FIXED usage in load_elf_binary (et al.) is still
fundamentally dangerous.  Moreover, it shouldn't even be needed.  We are
at an early stage of process setup, so there should be no unrelated
mappings (except for the stack and the loader), and mmap for a given
address should succeed even without MAP_FIXED.  If that is not the case,
something is terribly wrong, and we should rather fail than silently
corrupt the underlying mapping.
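
For comparison (again a hypothetical sketch, not from this patch):
without MAP_FIXED the requested address is only a hint, so a clash never
corrupts anything - the kernel simply places the mapping elsewhere,
which the caller can detect by comparing the returned address with the
hint.

	/* hint_only.c - without MAP_FIXED the address is just a hint */
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long pagesz = sysconf(_SC_PAGESIZE);
		char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		/* Request the already-occupied address p without MAP_FIXED:
		 * the kernel picks a different free range instead. */
		char *q = mmap(p, pagesz, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		printf("hint honoured: %s\n", q == p ? "yes" : "no, moved");
		return 0;
	}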

Address this issue by changing MAP_FIXED to the newly added
MAP_FIXED_NOREPLACE.  This means that mmap will fail if there is an
existing mapping clashing with the requested one, instead of clobbering
it.
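
The new behaviour, sketched the same way (hypothetical demo;
MAP_FIXED_NOREPLACE needs a kernel with this series applied, and the
fallback define mirrors the uapi value in the hunk below):

	/* noreplace.c - MAP_FIXED_NOREPLACE refuses to clobber */
	#include <errno.h>
	#include <stdio.h>
	#include <sys/mman.h>
	#include <unistd.h>

	#ifndef MAP_FIXED_NOREPLACE		/* older libc headers */
	#define MAP_FIXED_NOREPLACE 0x100000	/* value from mman-common.h below */
	#endif

	int main(void)
	{
		long pagesz = sysconf(_SC_PAGESIZE);
		char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		/* The same clashing request as with MAP_FIXED, but the
		 * kernel now fails with EEXIST instead of unmapping p. */
		char *q = mmap(p, pagesz, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED_NOREPLACE,
			       -1, 0);
		if (q == MAP_FAILED && errno == EEXIST)
			printf("clash detected, existing mapping left intact\n");
		return 0;
	}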

[mhocko@suse.com: fix build]
[akpm@linux-foundation.org: coding-style fixes]
[avagin@openvz.org: don't use the same value for MAP_FIXED_NOREPLACE and MAP_SYNC]
  Link: http://lkml.kernel.org/r/20171218184916.24445-1-avagin@openvz.org
Link: http://lkml.kernel.org/r/20171213092550.2774-3-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrei Vagin <avagin@openvz.org>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Kees Cook <keescook@chromium.org>
Cc: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Cc: Joel Stanley <joel@jms.id.au>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent a4ff8e86
fs/binfmt_elf.c:

@@ -377,6 +377,11 @@ static unsigned long elf_map(struct file *filep, unsigned long addr,
 	} else
 		map_addr = vm_mmap(filep, addr, size, prot, type, off);
 
+	if ((type & MAP_FIXED_NOREPLACE) && BAD_ADDR(map_addr))
+		pr_info("%d (%s): Uhuuh, elf segment at %p requested but the memory is mapped already\n",
+				task_pid_nr(current), current->comm,
+				(void *)addr);
+
 	return(map_addr);
 }
 
@@ -575,7 +580,7 @@ static unsigned long load_elf_interp(struct elfhdr *interp_elf_ex,
 			elf_prot |= PROT_EXEC;
 		vaddr = eppnt->p_vaddr;
 		if (interp_elf_ex->e_type == ET_EXEC || load_addr_set)
-			elf_type |= MAP_FIXED;
+			elf_type |= MAP_FIXED_NOREPLACE;
 		else if (no_base && interp_elf_ex->e_type == ET_DYN)
 			load_addr = -vaddr;
 
@@ -939,7 +944,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 		 * the ET_DYN load_addr calculations, proceed normally.
 		 */
 		if (loc->elf_ex.e_type == ET_EXEC || load_addr_set) {
-			elf_flags |= MAP_FIXED;
+			elf_flags |= MAP_FIXED_NOREPLACE;
 		} else if (loc->elf_ex.e_type == ET_DYN) {
 			/*
 			 * This logic is run once for the first LOAD Program
@@ -975,7 +980,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
 			load_bias = ELF_ET_DYN_BASE;
 			if (current->flags & PF_RANDOMIZE)
 				load_bias += arch_mmap_rnd();
-			elf_flags |= MAP_FIXED;
+			elf_flags |= MAP_FIXED_NOREPLACE;
 		} else
 			load_bias = 0;
 
@@ -1235,7 +1240,7 @@ static int load_elf_library(struct file *file)
 			(eppnt->p_filesz +
 			   ELF_PAGEOFFSET(eppnt->p_vaddr)),
 			PROT_READ | PROT_WRITE | PROT_EXEC,
-			MAP_FIXED | MAP_PRIVATE | MAP_DENYWRITE,
+			MAP_FIXED_NOREPLACE | MAP_PRIVATE | MAP_DENYWRITE,
 			(eppnt->p_offset -
 			   ELF_PAGEOFFSET(eppnt->p_vaddr)));
 	if (error != ELF_PAGESTART(eppnt->p_vaddr))
include/uapi/asm-generic/mman-common.h:

@@ -26,7 +26,9 @@
 #else
 # define MAP_UNINITIALIZED 0x0		/* Don't support this flag */
 #endif
-#define MAP_FIXED_NOREPLACE 0x80000	/* MAP_FIXED which doesn't unmap underlying mapping */
+
+/* 0x0100 - 0x80000 flags are defined in asm-generic/mman.h */
+#define MAP_FIXED_NOREPLACE	0x100000	/* MAP_FIXED which doesn't unmap underlying mapping */
 
 /*
  * Flags for mlock