- 30 Jan, 2008 40 commits
-
Avi Kivity authored
If all we're doing is increasing permissions on a pte (typical for demand paging), then there's no need to flush remote tlbs. Worst case they'll get a spurious page fault. Signed-off-by: Avi Kivity <avi@qumranet.com>
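A minimal sketch of the test, as a standalone helper with a hypothetical pte bit layout (not the actual kernel code): a remote flush is only needed when the new pte removes a permission the old one granted.

    #include <stdbool.h>
    #include <stdint.h>

    #define PTE_PRESENT  (1ULL << 0)   /* hypothetical bit layout */
    #define PTE_WRITABLE (1ULL << 1)

    /* Flush remote tlbs only if some permission bit was removed;
     * a pure permission upgrade can at worst cause a spurious,
     * self-correcting page fault on another vcpu. */
    static bool need_remote_tlb_flush(uint64_t old_pte, uint64_t new_pte)
    {
            uint64_t perms = PTE_PRESENT | PTE_WRITABLE;

            return (old_pte & perms & ~new_pte) != 0;
    }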
-
Avi Kivity authored
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
I spent an hour wondering why I was seeing so many guest page faults on FC6 i386. It turns out the bypass wasn't implemented for nonpae. Implement it so it doesn't happen again. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Split kvm_arch_vcpu_create() into kvm_arch_vcpu_create() and kvm_arch_vcpu_setup(), enabling preemption notification between the two. This means that we can now do vcpu_load() within kvm_arch_vcpu_setup(). Signed-off-by: Avi Kivity <avi@qumranet.com>
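Sketched as declarations (simplified; the real prototypes live in the kvm headers), the split looks like this:

    struct kvm;
    struct kvm_vcpu;

    /* Phase 1: allocate and initialize the vcpu, without anything
     * that requires the vcpu to be loaded on a physical cpu. */
    struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm, unsigned int id);

    /* Phase 2: runs after the caller has registered preemption
     * notifiers, so it may safely call vcpu_load() internally. */
    int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu);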
-
Zhang Xiantao authored
Move the !user_alloc case to kvm_arch to avoid unnecessary code logic on non-x86 platforms. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Instead of incrementally changing the mmu cache size for every memory slot operation, recalculate it from scratch. This is simpler and safer. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
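A standalone sketch of the recalculation, assuming simplified slot and constant names:

    #define KVM_MEMORY_SLOTS   8     /* assumed slot count   */
    #define PERMILLE_MMU_PAGES 20    /* assumed sizing ratio */
    #define MIN_MMU_PAGES      64    /* assumed lower bound  */

    struct memslot { unsigned long npages; };

    /* Derive the shadow-page budget from the current total of
     * guest pages instead of tracking incremental deltas. */
    static unsigned int calculate_mmu_pages(const struct memslot *slots)
    {
            unsigned long nr_pages = 0;
            unsigned int nr_mmu_pages;
            int i;

            for (i = 0; i < KVM_MEMORY_SLOTS; i++)
                    nr_pages += slots[i].npages;

            nr_mmu_pages = nr_pages * PERMILLE_MMU_PAGES / 1000;
            if (nr_mmu_pages < MIN_MMU_PAGES)
                    nr_mmu_pages = MIN_MMU_PAGES;
            return nr_mmu_pages;
    }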
-
Avi Kivity authored
Instead of fetching one byte at a time, prefetch 15 bytes (or until the next page boundary) to avoid guest page table walks. Signed-off-by: Avi Kivity <avi@qumranet.com>
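The window computation itself, as a hedged standalone helper (x86 instructions are at most 15 bytes long):

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096ULL

    /* Prefetch up to 15 bytes, but never past the end of the page
     * holding rip, so no extra guest page table walk is needed. */
    static size_t insn_fetch_window(uint64_t rip)
    {
            uint64_t to_page_end = PAGE_SIZE - (rip & (PAGE_SIZE - 1));

            return to_page_end < 15 ? (size_t)to_page_end : 15;
    }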
-
Avi Kivity authored
Theoretically used to access memory known to be ordinary RAM, it was never implemented. It is questionable whether it is possible to implement it correctly. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Izik Eidus authored
Improve dirty bit setting for pages that kvm releases. Until now, every page we released was marked dirty; from now on, only pages that could potentially have been dirtied are marked dirty. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
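A sketch of the distinction, using the kvm_release_page_dirty()/kvm_release_page_clean() pair; the caller shape is illustrative:

    #include <stdbool.h>

    struct page;

    void kvm_release_page_dirty(struct page *page);  /* marks dirty */
    void kvm_release_page_clean(struct page *page);  /* does not    */

    /* Only mappings the guest could have written need the dirty
     * release path; read-only mappings are released clean. */
    static void release_guest_page(struct page *page, bool was_writable)
    {
            if (was_writable)
                    kvm_release_page_dirty(page);
            else
                    kvm_release_page_clean(page);
    }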
-
Izik Eidus authored
When we map a page, we check whether some other vcpu mapped it for us and if so, bail out. But we should decrease the refcount on the page as we do so. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
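Illustrative shape of the fix (simplified spte handling; not the actual diff):

    struct page;

    void kvm_release_page_clean(struct page *page);

    /* If another vcpu won the race and the spte is already present,
     * drop the page reference taken during lookup before bailing. */
    static int try_map_guest_page(unsigned long long *sptep,
                                  unsigned long long new_spte,
                                  struct page *page)
    {
            if (*sptep & 1 /* present bit, simplified */) {
                    kvm_release_page_clean(page);
                    return 0;
            }
            *sptep = new_spte;
            return 1;
    }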
-
Avi Kivity authored
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
This patch moves the structures kvm_cpuid_entry and kvm_cpuid from include/linux/kvm.h to include/asm-x86/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
Move the structures kvm_sregs, kvm_msr_entry, kvm_msrs and kvm_msr_list from include/linux/kvm.h to include/asm-x86/kvm.h. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
This patch moves the structures kvm_segment and kvm_dtable from include/linux/kvm.h to include/asm-x86/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
This patch moves the structure lapic_state from include/linux/kvm.h to include/asm-x86/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
This patch moves the structure kvm_regs to include/asm-x86/kvm.h. Each architecture will need to create its own version of this structure. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
This patch moves the structures kvm_pic_state and kvm_ioapic_state to include/asm-x86/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jerone Young authored
This patch moves struct kvm_memory_alias from include/linux/kvm.h to include/asm-x86/kvm.h. Also have include/linux/kvm.h include include/asm/kvm.h. Signed-off-by: Jerone Young <jyoung5@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
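Taken together, this series of moves leaves the headers layered roughly like this (forward declarations only; illustrative, not the exact file contents):

    /* include/asm-x86/kvm.h: x86-specific userspace ABI */
    struct kvm_regs;
    struct kvm_segment;
    struct kvm_pic_state;
    struct kvm_memory_alias;
    /* ... */

    /* include/linux/kvm.h: arch-neutral ABI, pulling in the arch part */
    #include <asm/kvm.h>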
-
Hollis Blanchard authored
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Hollis Blanchard authored
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Hollis Blanchard authored
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Izik Eidus authored
Use kvm_write_guest_page() with empty_zero_page, instead of doing kmap and memset. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
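The simplification in sketch form (the helper's signature matches the existing kvm one; the wrapper is illustrative):

    struct kvm;
    typedef unsigned long long gfn_t;

    extern unsigned long empty_zero_page[];   /* the kernel's page of zeros */

    int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn, const void *data,
                             int offset, int len);

    /* One guest-write call replaces the kmap()/memset()/kunmap()
     * sequence. */
    static int zero_guest_page(struct kvm *kvm, gfn_t gfn, int page_size)
    {
            return kvm_write_guest_page(kvm, gfn, empty_zero_page,
                                        0, page_size);
    }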
-
Izik Eidus authored
Things are simpler and more regular this way. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Jan Kiszka authored
Ensure that segment.base == segment.selector << 4 when entering real mode on Intel so that the CPU will not bark at us. This fixes some old protected mode demo from http://www.x86.org/articles/pmbasics/tspec_a1_doc.htm. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
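The invariant itself, as a hedged standalone helper with a simplified segment cache:

    #include <stdint.h>

    struct segment {
            uint64_t base;
            uint16_t selector;
    };

    /* In real mode VMX insists that base == selector << 4, so fix
     * the cached state up before entering the guest. */
    static void fixup_rmode_segment(struct segment *seg)
    {
            seg->base = (uint64_t)seg->selector << 4;
    }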
-
Zhang Xiantao authored
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Move the kvm_mmu init and exit functionality out of kvm_main.c. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Zhang Xiantao authored
Meanwhile, keep the interface in common code, and leave as much logic in common as possible. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Amit Shah authored
Instead of having each architecture do it individually, we do this in the arch-independent code (just x86 as of now). [avi: add svm to the mix, which was added to mainline during the 2.6.24-rc process] Signed-off-by: Amit Shah <amit.shah@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
This is in addition to the current virtual cpu statistics. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
-
Avi Kivity authored
Measure the number of times we switch the fpu state. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Avi Kivity authored
This is a little more accurate (since it counts actual reloads, not potential reloads), and reverses the sense of the statistic so that it measures a bad event, like most of the other stats (i.e. we want to minimize all counters). Signed-off-by: Avi Kivity <avi@qumranet.com>
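A sketch of the reversed counter (simplified state; illustrative only):

    #include <stdbool.h>

    struct vcpu_stats { unsigned long fpu_reload; };

    /* Count only actual reloads, so the statistic measures a cost
     * to be minimized, like the other counters. */
    static void load_guest_fpu(struct vcpu_stats *st, bool *fpu_loaded)
    {
            if (*fpu_loaded)
                    return;
            *fpu_loaded = true;
            st->fpu_reload++;
    }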
-
Zhang Xiantao authored
Add two arch hooks to handle kvm_create_vm and kvm_destroy_vm. For now, just keep the io_bus init and destroy in common code. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
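The hook pair in declaration form (simplified; common code calls the create hook and then sets up io_bus itself):

    struct kvm;

    struct kvm *kvm_arch_create_vm(void);       /* arch allocates struct kvm */
    void kvm_arch_destroy_vm(struct kvm *kvm);  /* arch tears it down        */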
-
Zhang Xiantao authored
Drop the __init attributes, since their callers are not declared with __init. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
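The rule being enforced, in a minimal illustration (hypothetical function names):

    /* __init code is discarded once boot finishes, so a function
     * reachable from a non-__init caller must not be __init:
     * calling it later would jump into freed memory. */
    void setup_msr_list(void);          /* hypothetical; no longer __init */

    static long hypothetical_ioctl(void) /* not __init: runs anytime */
    {
            setup_msr_list();
            return 0;
    }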
-