Commit f1b92bb6 authored by Borislav Petkov, committed by Ingo Molnar

x86/ftrace, x86/asm: Kill ftrace_caller_end label

One of ftrace_caller_end and ftrace_return is redundant, so unify them.
Rename ftrace_return to ftrace_epilogue to convey that everything after
that label is, like an afterword, work which happens *after* the
ftrace call, e.g., the function graph tracer, for one.

Steve wants this to rather mean "[a]n event which reflects meaningfully
on a recently ended conflict or struggle." I can imagine that ftrace can
be a struggle sometimes.

Anyway, beef up the comment about the code contents and layout before
the ftrace_epilogue label.
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1455612202-14414-4-git-send-email-bp@alien8.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 4f6c8938
@@ -697,9 +697,8 @@ static inline void tramp_free(void *tramp) { }
 #endif
 /* Defined as markers to the end of the ftrace default trampolines */
-extern void ftrace_caller_end(void);
 extern void ftrace_regs_caller_end(void);
-extern void ftrace_return(void);
+extern void ftrace_epilogue(void);
 extern void ftrace_caller_op_ptr(void);
 extern void ftrace_regs_caller_op_ptr(void);
@@ -746,7 +745,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
                 op_offset = (unsigned long)ftrace_regs_caller_op_ptr;
         } else {
                 start_offset = (unsigned long)ftrace_caller;
-                end_offset = (unsigned long)ftrace_caller_end;
+                end_offset = (unsigned long)ftrace_epilogue;
                 op_offset = (unsigned long)ftrace_caller_op_ptr;
         }
@@ -754,7 +753,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
         /*
          * Allocate enough size to store the ftrace_caller code,
-         * the jmp to ftrace_return, as well as the address of
+         * the jmp to ftrace_epilogue, as well as the address of
          * the ftrace_ops this trampoline is used for.
          */
         trampoline = alloc_tramp(size + MCOUNT_INSN_SIZE + sizeof(void *));
@@ -772,8 +771,8 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
         ip = (unsigned long)trampoline + size;
-        /* The trampoline ends with a jmp to ftrace_return */
-        jmp = ftrace_jmp_replace(ip, (unsigned long)ftrace_return);
+        /* The trampoline ends with a jmp to ftrace_epilogue */
+        jmp = ftrace_jmp_replace(ip, (unsigned long)ftrace_epilogue);
         memcpy(trampoline + size, jmp, MCOUNT_INSN_SIZE);
         /*
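(Reviewer aside, not part of the patch: a minimal userspace C sketch of the trampoline image that create_trampoline() assembles above. The sizes mirror the alloc_tramp(size + MCOUNT_INSN_SIZE + sizeof(void *)) call; the helper and variable names below are made up for illustration and the byte contents are placeholders.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MCOUNT_INSN_SIZE 5              /* one x86 rel32 jmp/call */

/*
 * Sketch of the buffer layout create_trampoline() builds:
 *   [0, size)                          copy of ftrace_caller .. ftrace_epilogue
 *   [size, size + MCOUNT_INSN_SIZE)    trailing jmp to ftrace_epilogue
 *   [..., ... + sizeof(void *))        pointer to the owning ftrace_ops
 */
static unsigned char *sketch_trampoline(const unsigned char *caller_code, size_t size,
                                        const unsigned char *jmp_insn, const void *ops)
{
        unsigned char *trampoline = malloc(size + MCOUNT_INSN_SIZE + sizeof(void *));

        if (!trampoline)
                return NULL;

        memcpy(trampoline, caller_code, size);                  /* copied code */
        memcpy(trampoline + size, jmp_insn, MCOUNT_INSN_SIZE);  /* appended jmp */
        memcpy(trampoline + size + MCOUNT_INSN_SIZE, &ops, sizeof(void *));
        return trampoline;
}

int main(void)
{
        unsigned char code[96] = { 0x90 };              /* placeholder code bytes */
        unsigned char jmp[MCOUNT_INSN_SIZE] = { 0xe9 }; /* placeholder jmp bytes */
        int fake_ops;
        unsigned char *t = sketch_trampoline(code, sizeof(code), jmp, &fake_ops);

        printf("trampoline allocation: %zu bytes\n",
               sizeof(code) + MCOUNT_INSN_SIZE + sizeof(void *));
        free(t);
        return 0;
}

The real create_trampoline() additionally patches the copied code (via the *_op_ptr markers) to reference the stored ftrace_ops pointer; that step is omitted from this sketch.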
@@ -168,12 +168,14 @@ GLOBAL(ftrace_call)
         restore_mcount_regs
         /*
-         * The copied trampoline must call ftrace_return as it
+         * The copied trampoline must call ftrace_epilogue as it
          * still may need to call the function graph tracer.
+         *
+         * The code up to this label is copied into trampolines so
+         * think twice before adding any new code or changing the
+         * layout here.
          */
-GLOBAL(ftrace_caller_end)
-GLOBAL(ftrace_return)
+GLOBAL(ftrace_epilogue)
 #ifdef CONFIG_FUNCTION_GRAPH_TRACER
 GLOBAL(ftrace_graph_call)
@@ -244,14 +246,14 @@ GLOBAL(ftrace_regs_call)
         popfq
         /*
-         * As this jmp to ftrace_return can be a short jump
+         * As this jmp to ftrace_epilogue can be a short jump
          * it must not be copied into the trampoline.
          * The trampoline will add the code to jump
          * to the return.
          */
 GLOBAL(ftrace_regs_caller_end)
-        jmp ftrace_return
+        jmp ftrace_epilogue
 END(ftrace_regs_caller)
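(Another illustrative aside, not from the patch: the hunk above keeps the rule that the exit jump of ftrace_regs_caller must not be copied, because the assembler may emit it as a 2-byte short jump whose rel8 displacement is only meaningful at its original location. The trampoline instead gets its own 5-byte rel32 jump to ftrace_epilogue appended, built in the kernel by ftrace_jmp_replace(). Below is a rough userspace sketch of that rel32 encoding; make_jmp() and the addresses are hypothetical.)

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define JMP32_INSN_SIZE 5       /* 0xe9 opcode + 32-bit displacement */

/*
 * Build a "jmp rel32" at address ip that lands on addr.  The
 * displacement is relative to the end of the instruction (ip + 5),
 * which is why the jump has to be regenerated for the trampoline's
 * own address rather than copied out of ftrace_regs_caller.
 */
static void make_jmp(uint8_t buf[JMP32_INSN_SIZE], uint64_t ip, uint64_t addr)
{
        int32_t rel = (int32_t)(addr - (ip + JMP32_INSN_SIZE));

        buf[0] = 0xe9;                          /* jmp rel32 */
        memcpy(&buf[1], &rel, sizeof(rel));
}

int main(void)
{
        uint8_t jmp[JMP32_INSN_SIZE];

        make_jmp(jmp, 0x1000, 0x1234);          /* purely illustrative addresses */
        printf("%02x %02x %02x %02x %02x\n",
               jmp[0], jmp[1], jmp[2], jmp[3], jmp[4]);
        return 0;
}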