Commit 5eb9bac0 authored by Anton Blanchard, committed by Benjamin Herrenschmidt

powerpc: Rearrange SLB preload code

With the new top down layout it is likely that the pc and stack will be in the
same segment, because the pc is most likely in a library allocated via a top
down mmap. Right now we bail out early if these segments match.

Rearrange the SLB preload code to sanity check that none of the SLB preload
addresses are in the kernel, then check all addresses for conflicts.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
parent 30d0b368
@@ -218,23 +218,18 @@ void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
 	else
 		unmapped_base = TASK_UNMAPPED_BASE_USER64;
 
-	if (is_kernel_addr(pc))
-		return;
-	slb_allocate(pc);
-
-	if (esids_match(pc,stack))
-		return;
-
-	if (is_kernel_addr(stack))
-		return;
-	slb_allocate(stack);
-
-	if (esids_match(pc,unmapped_base) || esids_match(stack,unmapped_base))
-		return;
-
-	if (is_kernel_addr(unmapped_base))
-		return;
-	slb_allocate(unmapped_base);
+	if (is_kernel_addr(pc) || is_kernel_addr(stack) ||
+	    is_kernel_addr(unmapped_base))
+		return;
+
+	slb_allocate(pc);
+
+	if (!esids_match(pc, stack))
+		slb_allocate(stack);
+
+	if (!esids_match(pc, unmapped_base) &&
+	    !esids_match(stack, unmapped_base))
+		slb_allocate(unmapped_base);
 }
 
 static inline void patch_slb_encoding(unsigned int *insn_addr,
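The rearranged flow is easier to follow in isolation. The sketch below is a small, self-contained userspace mock of the new preload ordering, for illustration only: it assumes a 64-bit build, 256MB segments (so the ESID is just the address shifted right by 28 bits) and a hypothetical kernel base constant, whereas the real code uses is_kernel_addr(), esids_match() and slb_allocate() from the kernel's SLB code.

/*
 * Illustrative userspace mock of the rearranged preload logic.
 * Assumes 256MB segments (ESID = addr >> 28) and a 64-bit kernel base
 * at 0xc000000000000000; not the actual kernel implementation.
 */
#include <stdio.h>

#define SID_SHIFT	28
#define KERNEL_START	0xc000000000000000UL	/* assumed ppc64 kernel base */

static int is_kernel_addr(unsigned long addr)
{
	return addr >= KERNEL_START;
}

static int esids_match(unsigned long addr1, unsigned long addr2)
{
	/* Two addresses conflict if they fall in the same 256MB segment. */
	return (addr1 >> SID_SHIFT) == (addr2 >> SID_SHIFT);
}

static void slb_allocate(unsigned long addr)
{
	printf("preload ESID 0x%lx\n", addr >> SID_SHIFT);
}

static void preload(unsigned long pc, unsigned long stack,
		    unsigned long unmapped_base)
{
	/* First, bail out once if any address is a kernel address. */
	if (is_kernel_addr(pc) || is_kernel_addr(stack) ||
	    is_kernel_addr(unmapped_base))
		return;

	slb_allocate(pc);

	/* Only preload the stack if it lives in a different segment. */
	if (!esids_match(pc, stack))
		slb_allocate(stack);

	/* Likewise for the mmap base. */
	if (!esids_match(pc, unmapped_base) &&
	    !esids_match(stack, unmapped_base))
		slb_allocate(unmapped_base);
}

int main(void)
{
	/* pc and stack share a segment: only two entries are preloaded. */
	preload(0x3fff80001000UL, 0x3fff8fff0000UL, 0x100000000UL);
	return 0;
}

With this ordering, the pc/stack case described in the commit message no longer causes an early return: the stack entry is simply skipped as a duplicate, and the mmap base is still preloaded.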