Commit 59da2a06 authored by Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching

Pull livepatching updates from Jiri Kosina:

 - removal of dead code (Kamalesh Babulal)

 - documentation update (Miroslav Benes)

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/livepatching:
  livepatch: doc: remove the limitation for schedule() patching
  powerpc/livepatch: Remove klp_write_module_reloc() stub
parents ebb4949e 372e2db7
@@ -329,25 +329,6 @@ The current Livepatch implementation has several limitations:
     by "notrace".
-  + Anything inlined into __schedule() can not be patched.
-    The switch_to macro is inlined into __schedule(). It switches the
-    context between two processes in the middle of the macro. It does
-    not save RIP in x86_64 version (contrary to 32-bit version). Instead,
-    the currently used __schedule()/switch_to() handles both processes.
-    Now, let's have two different tasks. One calls the original
-    __schedule(), its registers are stored in a defined order and it
-    goes to sleep in the switch_to macro and some other task is restored
-    using the original __schedule(). Then there is the second task which
-    calls patched __schedule(), it goes to sleep there and the first task
-    is picked by the patched __schedule(). Its RSP is restored and now
-    the registers should be restored as well. But the order is different
-    in the new patched __schedule(), so...
-    There is work in progress to remove this limitation.
   + Livepatch modules can not be removed.
     The current implementation just redirects the functions at the very
@@ -28,13 +28,6 @@ static inline int klp_check_compiler_support(void)
 	return 0;
 }
-static inline int klp_write_module_reloc(struct module *mod, unsigned long
-		type, unsigned long loc, unsigned long value)
-{
-	/* This requires infrastructure changes; we need the loadinfos. */
-	return -ENOSYS;
-}
 static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
 {
 	regs->nip = ip;