Commit 54cfdb3e authored by Paolo Bonzini

KVM: emulate: speed up emulated moves

We can just blindly move all 16 bytes of ctxt->src's value to ctxt->dst.
write_register_operand will take care of writing only the lower bytes.
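
As a rough illustration of why this is safe, the sketch below mimics the register writeback step with hypothetical, pared-down types (operand_sketch and writeback_sketch are illustrative names, not kernel code): only the lower op->bytes of the copied value are committed to the destination register, with 32-bit writes zero-extending as on real x86, so the extra bytes copied into dst.valptr never become architecturally visible.

#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the emulator's operand; not kernel code. */
struct operand_sketch {
        unsigned int bytes;   /* architectural operand size: 1, 2, 4 or 8 */
        char valptr[16];      /* full 16-byte value buffer, as in this patch */
        uint64_t *reg;        /* destination general-purpose register */
};

/* Writeback truncates to the operand size, so a blind 16-byte copy into
 * valptr beforehand is harmless. */
void writeback_sketch(struct operand_sketch *op)
{
        uint64_t val;

        memcpy(&val, op->valptr, sizeof(val));
        switch (op->bytes) {
        case 1: *op->reg = (*op->reg & ~0xffull)   | (uint8_t)val;  break;
        case 2: *op->reg = (*op->reg & ~0xffffull) | (uint16_t)val; break;
        case 4: *op->reg = (uint32_t)val; break; /* 32-bit writes zero-extend */
        case 8: *op->reg = val; break;
        }
}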

Avoiding a call to memcpy (the compiler optimizes it out) gains about
200 cycles on kvm-unit-tests for register-to-register moves, and makes
them about as fast as arithmetic instructions.
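
The saving comes from the length now being a compile-time constant: sizeof(ctxt->src.valptr) is 16, so the compiler can expand the memcpy inline (typically into a couple of 8-byte moves) instead of emitting a call with a run-time length. A minimal standalone sketch of that difference, using illustrative names rather than the kernel's types:

#include <string.h>

struct op_sketch { char valptr[16]; };

/* Constant length: compilers normally expand this inline, with no call. */
void mov_fixed(struct op_sketch *dst, const struct op_sketch *src)
{
        memcpy(dst->valptr, src->valptr, sizeof(src->valptr));
}

/* Run-time length: much more likely to remain an actual memcpy call. */
void mov_variable(struct op_sketch *dst, const struct op_sketch *src, size_t len)
{
        memcpy(dst->valptr, src->valptr, len);
}

The em_mov() change below does exactly the former: its length argument goes from the variable ctxt->op_bytes to a constant.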

We could perhaps get a larger speedup by moving all instructions _except_
moves out of x86_emulate_insn, removing opcode_len, and replacing the
switch statement with an inlined em_mov.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
parent d40a6898
@@ -233,7 +233,7 @@ struct operand {
 	union {
 		unsigned long val;
 		u64 val64;
-		char valptr[sizeof(unsigned long) + 2];
+		char valptr[sizeof(sse128_t)];
 		sse128_t vec_val;
 		u64 mm_val;
 		void *data;
@@ -2990,7 +2990,7 @@ static int em_rdpmc(struct x86_emulate_ctxt *ctxt)
 static int em_mov(struct x86_emulate_ctxt *ctxt)
 {
-	memcpy(ctxt->dst.valptr, ctxt->src.valptr, ctxt->op_bytes);
+	memcpy(ctxt->dst.valptr, ctxt->src.valptr, sizeof(ctxt->src.valptr));
 	return X86EMUL_CONTINUE;
 }