- 08 Apr, 2012 30 commits
-
-
Scott Wood authored
DO_KVM will need to identify the particular exception type. There is an existing set of arbitrary numbers that Linux passes, but it's an undocumented mess that sort of corresponds to server/classic exception vectors but not really. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Scott Wood authored
tlbilx is the new, preferred invalidation instruction. It is not found on e500 prior to e500mc, but there should be no harm in supporting it on all e500. Based on code from Ashish Kalra <Ashish.Kalra@freescale.com>. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Scott Wood authored
Rather than invalidate everything when a TLB1 entry needs to be taken down, keep track of which host TLB1 entries are used for a given guest TLB1 entry, and invalidate just those entries. Based on code from Ashish Kalra <Ashish.Kalra@freescale.com> and Liu Yu <yu.liu@freescale.com>. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
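A minimal sketch of the bookkeeping this describes, with hypothetical names (guest_tlb1_ref, host_map, host_tlb1_invalidate) rather than the actual e500 structures:

struct guest_tlb1_ref {
        unsigned long host_map;         /* one bit per host TLB1 entry */
};

/* Invalidate only the host TLB1 entries that back this guest entry,
 * assuming the number of host TLB1 entries fits in one unsigned long. */
static void guest_tlb1_invalidate(struct guest_tlb1_ref *ref)
{
        unsigned int esel;

        for_each_set_bit(esel, &ref->host_map, BITS_PER_LONG)
                host_tlb1_invalidate(esel);     /* hypothetical helper */

        ref->host_map = 0;
}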
-
Scott Wood authored
The PID handling is e500v1/v2-specific, and is moved to e500.c. The MMU sregs code and kvmppc_core_vcpu_translate will be shared with e500mc, and is moved from e500.c to e500_tlb.c. Partially based on patches from Liu Yu <yu.liu@freescale.com>. Signed-off-by: Scott Wood <scottwood@freescale.com> [agraf: fix bisectability] Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Scott Wood authored
Move vcpu to the beginning of vcpu_e500 to give it appropriate prominence, especially if more fields end up getting added to the end of vcpu_e500 (and vcpu ends up in the middle). Remove gratuitous "extern" and add parameter names to prototypes. Signed-off-by: Scott Wood <scottwood@freescale.com> [agraf: fix bisectability] Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Scott Wood authored
Keeping two separate headers for e500-specific things was a pain, and wasn't even organized along any logical boundary. There was TLB stuff in <asm/kvm_e500.h> despite the existence of arch/powerpc/kvm/e500_tlb.h, and nothing in <asm/kvm_e500.h> needed to be referenced from outside arch/powerpc/kvm. Signed-off-by: Scott Wood <scottwood@freescale.com> [agraf: fix bisectability] Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Scott Wood authored
This is in preparation for merging in the contents of arch/powerpc/include/asm/kvm_e500.h. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Scott Wood authored
e500mc will want to do lpid allocation/deallocation here. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Scott Wood authored
This gives us a place to put load/put actions that correspond to code that is booke-specific but not specific to a particular core. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Scott Wood authored
We'll use it on e500mc as well. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Scott Wood authored
Split e500 (v1/v2) and e500mc/e5500 to allow optimization of feature checks that differ between the two. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Scott Wood authored
Currently 32-bit only cares about this for choice of exception vector, which is done in core-specific code. However, KVM will want to distinguish as well. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Takuya Yoshikawa authored
Now that we do neither double buffering nor heuristic selection of the write protection method, these are not needed anymore. Note: some drivers have their own implementation of set_bit_le() and making it generic needs a bit of work; so we use test_and_set_bit_le() for now and will later replace it with a generic set_bit_le(). Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Takuya Yoshikawa authored
We have seen some problems with the current implementation of get_dirty_log(), which uses synchronize_srcu_expedited() for updating dirty bitmaps; e.g. it is noticeable that this sometimes gives us latency on the order of milliseconds when we use VGA displays. Furthermore, the recent discussion on the thread "srcu: Implement call_srcu()" http://lkml.org/lkml/2012/1/31/211 also motivated us to implement get_dirty_log() without SRCU. This patch achieves that goal without sacrificing the performance of either VGA or live migration: in practice the new code is much faster than the old one unless we have too many dirty pages.

Implementation: the key part of the implementation is the use of the xchg() operation for clearing dirty bits atomically. Since this allows us to update only BITS_PER_LONG pages at once, we need to iterate over the dirty bitmap until every dirty bit is cleared again for the next call. Although some people may worry about using the atomic memory instruction many times on the concurrently accessible bitmap, it is usually accessed with mmu_lock held and we rarely see concurrent accesses: so what we need to care about is the pure xchg() overhead. Another point to note is that we do not use for_each_set_bit() to check which of the BITS_PER_LONG pages in each word are actually dirty. Instead we simply use __ffs() in a loop. This is much faster than repeatedly calling find_next_bit().

Performance: the dirty-log-perf unit test showed nice improvements, sometimes faster than before, except for some extreme cases; in such cases we can get dirty page information faster than userspace can process it. For real workloads, both VGA and live migration, we have observed pure improvements: when the guest was reading a file during live migration, we originally saw a few ms of latency, but with the new method the latency was less than 200us.

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
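The description above boils down to a short loop; here is a hedged sketch of the pattern (write_protect_page() is a hypothetical stand-in for the real write-protection call, and the handling of the userspace copy is simplified):

static void sync_dirty_bitmap(unsigned long *dirty_bitmap, unsigned long *dest,
                              unsigned long nr_pages)
{
        unsigned long i;

        for (i = 0; i < BITS_TO_LONGS(nr_pages); i++) {
                unsigned long mask;

                if (!dirty_bitmap[i])
                        continue;

                /* Atomically fetch and clear one word of dirty bits. */
                mask = xchg(&dirty_bitmap[i], 0);
                dest[i] = mask;                 /* snapshot for userspace */

                /* Walk the set bits with __ffs() instead of find_next_bit(). */
                while (mask) {
                        unsigned long bit = __ffs(mask);

                        write_protect_page(i * BITS_PER_LONG + bit);
                        mask &= mask - 1;       /* clear the lowest set bit */
                }
        }
}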
-
Takuya Yoshikawa authored
We dropped such mappings when we enabled dirty logging and will never create new ones until we stop the logging. For this we introduce a new function which can be used to write protect a range of PT level pages: although we do not need to care about a range of pages at this point, the following patch will need this feature to optimize the write protection of many pages. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Takuya Yoshikawa authored
We will use this in the following patch to implement another function which needs to write protect pages using the rmap information. Note that there is a small change in debug printing for large pages: we do not differentiate them from others to avoid duplicating code. Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Eric B Munson authored
check_and_clear_guest_paused does not need to be exported as it isn't used by any modules, remove the export. Signed-off-by: Eric B Munson <emunson@mgebm.net> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Marcelo Tosatti authored
S390's kvm_vcpu_stat does not contain a halt_wakeup member. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Eric B Munson authored
A suspended VM can cause spurious soft lockup warnings. To avoid these, the watchdog now checks if the kernel knows it was stopped by the host and skips the warning if so. When the watchdog is reset successfully, clear the guest paused flag. Signed-off-by: Eric B Munson <emunson@mgebm.net> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
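In rough terms the check looks like this (a sketch, not the actual watchdog code; is_softlockup() stands in for the detector's existing logic):

static void watchdog_check_softlockup(void)
{
        if (!is_softlockup())           /* hypothetical existing check */
                return;

        /* The host set the "guest paused" flag while we were stopped;
         * clear it and treat this as a false positive. */
        if (kvm_check_and_clear_guest_paused())
                return;

        pr_emerg("BUG: soft lockup detected\n");
}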
-
Eric B Munson authored
Now that we have a flag that will tell the guest it was suspended, create an interface for that communication using a KVM ioctl. Signed-off-by: Eric B Munson <emunson@mgebm.net> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Eric B Munson authored
When a host stops or suspends a VM it will set a flag to show this. The watchdog will use these functions to determine if a softlockup is real, or the result of a suspended VM. Signed-off-by: Eric B Munson <emunson@mgebm.net> asm-generic changes Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Eric B Munson authored
This flag will be used to check if the vm was stopped by the host when a soft lockup was detected. The host will set the flag when it stops the guest. On resume, the guest will check this flag if a soft lockup is detected and skip issuing the warning. Signed-off-by: Eric B Munson <emunson@mgebm.net> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Alexander Graf authored
On PowerPC, we sometimes use a waitqueue per core, not per thread, so we can't always use the vcpu internal waitqueue. This code has been generalized by Christoffer Dall recently, but unfortunately broke compilation for PowerPC. At the time the helper function is defined, struct kvm_vcpu is not declared yet, so we can't dereference it. This patch moves all logic into the generic inline function, at which time we have all information necessary. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Christoffer Dall authored
The kvm_vcpu_kick function performs roughly the same functionality on almost all architectures, so we shouldn't have separate copies. PowerPC keeps a pointer to interchanging waitqueues on the vcpu_arch structure, and to accommodate this special need a __KVM_HAVE_ARCH_VCPU_GET_WQ define and an accompanying function kvm_arch_vcpu_wq have been defined. For all other architectures this is a generic inline that just returns &vcpu->wq. Acked-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
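Based on the commit text, the generic fallback is essentially the following (a sketch, not a verbatim copy of the header):

/* PowerPC defines __KVM_HAVE_ARCH_VCPU_GET_WQ and supplies its own
 * kvm_arch_vcpu_wq(); every other architecture gets the trivial inline. */
#ifndef __KVM_HAVE_ARCH_VCPU_GET_WQ
static inline wait_queue_head_t *kvm_arch_vcpu_wq(struct kvm_vcpu *vcpu)
{
        return &vcpu->wq;
}
#endif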
-
Avi Kivity authored
Deprecated in favour of tracepoints. Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Jason Wang authored
Also count the exits of fast-path. Signed-off-by: Jason Wang <jasowang@redhat.com> Acked-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Amos Kong authored
kvm_io_bus devices are used for ioevent, pit, pic, ioapic and coalesced_mmio. Currently Qemu only emulates one PCI bus; it contains 32 slots and each slot contains 8 functions, so the maximum number of supported PCI devices is 1 * 32 * 8 = 256. One virtio-blk takes one iobus device, while one virtio-net (vhost=on) takes two iobus devices. The maximum number of coalesced mmio zones is 100, and each zone has an iobus device. So 300 io_bus devices are not enough. Set an upper bound for kvm_io_range to limit userspace; 1000 is a very large limit and does not bloat the typical user. Signed-off-by: Amos Kong <akong@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
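For reference, the arithmetic behind the new bound, written out as it might sit next to the limit (the constant name is an assumption for illustration, not necessarily the actual symbol):

/*
 * 1 PCI bus * 32 slots * 8 functions   = 256 possible PCI devices
 * virtio-blk                           -> 1 io_bus device each
 * virtio-net (vhost=on)                -> 2 io_bus devices each
 * coalesced MMIO zones (up to 100)     -> 1 io_bus device each
 *
 * A cap of 300 is therefore too tight; 1000 leaves headroom while still
 * bounding what userspace can register.
 */
#define NR_IOBUS_DEVS 1000      /* name assumed for illustration */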
-
Amos Kong authored
This patch makes it possible to resize the kvm_io_range array dynamically. Signed-off-by: Amos Kong <akong@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
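A simplified sketch of the grow-on-register idea (the real code also keeps the range array sorted and publishes the new bus under RCU; the struct layout and names here are assumptions for illustration):

struct io_range {                       /* stand-in for kvm_io_range */
        u64 addr;
        int len;
};

struct io_bus {                         /* stand-in for kvm_io_bus */
        int dev_count;
        struct io_range range[];
};

/* Allocate a copy of the bus with room for one more range and append it. */
static struct io_bus *io_bus_grow(struct io_bus *old, struct io_range *new_range)
{
        struct io_bus *new;

        new = kzalloc(sizeof(*new) +
                      (old->dev_count + 1) * sizeof(struct io_range), GFP_KERNEL);
        if (!new)
                return NULL;

        memcpy(new, old,
               sizeof(*new) + old->dev_count * sizeof(struct io_range));
        new->range[new->dev_count++] = *new_range;
        return new;
}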
-
Liu, Jinsong authored
Intel recently released two new features, HLE and RTM. Refer to http://software.intel.com/file/41417. This patch exposes them to the guest. Signed-off-by: Liu, Jinsong <jinsong.liu@intel.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
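For context, HLE and RTM are reported in CPUID leaf 7 (ECX=0), EBX bits 4 and 11; a hedged sketch of letting those bits through to the guest (the macro names and helper are illustrative, not KVM's actual cpuid code):

#define CPUID7_EBX_HLE  (1u << 4)       /* Hardware Lock Elision */
#define CPUID7_EBX_RTM  (1u << 11)      /* Restricted Transactional Memory */

/* Pass the host's HLE/RTM capability bits through in the guest's leaf 7. */
static u32 guest_cpuid7_ebx(u32 host_ebx, u32 guest_ebx)
{
        return guest_ebx | (host_ebx & (CPUID7_EBX_HLE | CPUID7_EBX_RTM));
}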
-
Linus Torvalds authored
-
- 07 Apr, 2012 10 commits
-
-
Merge tag 'regmap-3.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap
Linus Torvalds authored
Pull two more small regmap fixes from Mark Brown:
 - Now we have users for it that aren't running Android it turns out that regcache_sync_region() is much more useful to drivers if it's exported for use by modules. Who knew?
 - Make sure we don't divide by zero when doing debugfs dumps of rbtrees, not visible up until now because everything was providing at least some cache on startup.

* tag 'regmap-3.4-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
  regmap: prevent division by zero in rbtree_show
  regmap: Export regcache_sync_region()
-
Merge branch 'kvm-updates/3.4' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Linus Torvalds authored
Pull a few KVM fixes from Avi Kivity:
 "A bunch of powerpc KVM fixes, a guest and a host RCU fix (unrelated), and a small build fix."

* 'kvm-updates/3.4' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM: Resolve RCU vs. async page fault problem
  KVM: VMX: vmx_set_cr0 expects kvm->srcu locked
  KVM: PMU: Fix integer constant is too large warning in kvm_pmu_set_msr()
  KVM: PPC: Book3S: PR: Fix preemption
  KVM: PPC: Save/Restore CR over vcpu_run
  KVM: PPC: Book3S HV: Save and restore CR in __kvmppc_vcore_entry
  KVM: PPC: Book3S HV: Fix kvm_alloc_linear in case where no linears exist
  KVM: PPC: Book3S: Compile fix for ppc32 in HIOR access code
-
Merge tag 'sh-for-linus' of git://github.com/pmundt/linux-sh
Linus Torvalds authored
Pull SuperH fixes from Paul Mundt.

* tag 'sh-for-linus' of git://github.com/pmundt/linux-sh:
  sh: fix clock-sh7757 for the latest sh_mobile_sdhi driver
  serial: sh-sci: use serial_port_in/out vs sci_in/out.
  sh: vsyscall: Fix up .eh_frame generation.
  sh: dma: Fix up device attribute mismatch from sysdev fallout.
  sh: dwarf unwinder depends on SHcompact.
  sh: fix up fallout from system.h disintegration.
-
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security
Linus Torvalds authored
Pull security layer fixlet from James Morris.

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
  sysctl: fix write access to dmesg_restrict/kptr_restrict
-
Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux
Linus Torvalds authored
Pull ACPI & Power Management patches from Len Brown:
 "Two fixes for cpuidle merge-window changes, plus a URL fix in MAINTAINERS"

* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux:
  MAINTAINERS: Update git url for ACPI
  cpuidle: Fix panic in CPU off-lining with no idle driver
  ACPI processor: Use safe_halt() rather than halt() in acpi_idle_play_dead()
-
Merge branch '3.4-rc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending
Linus Torvalds authored
Pull target fixes from Nicholas Bellinger:
 "Pull two tcm_fc fabric related fixes for -rc2: Note that both have been CC'ed to stable, and patch #1 is the important one that addresses a memory corruption bug related to FC exchange timeouts + command abort. Thanks again to MDR for tracking down this issue!"

* '3.4-rc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending:
  tcm_fc: Do not free tpg structure during wq allocation failure
  tcm_fc: Add abort flag for gracefully handling exchange timeout
-
Mark Rustad authored
Avoid freeing a registered tpg structure if an alloc_workqueue call fails. This fixes a bug where the failure was leaking memory associated with se_portal_group setup during the original core_tpg_register() call. Signed-off-by: Mark Rustad <mark.d.rustad@intel.com> Acked-by: Kiran Patil <Kiran.patil@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Mark Rustad authored
Add abort flag and use it to terminate processing when an exchange is timed out or is reset. The abort flag is used in place of the transport_generic_free_cmd function call in the reset and timeout cases, because calling that function in that context would free memory that was in use. The aborted flag allows the lifetime to be managed in a more normal way, while truncating the processing. This change eliminates a source of memory corruption which manifested in a variety of ugly ways. (nab: Drop unused struct fc_exch *ep in ft_recv_seq) Signed-off-by: Mark Rustad <mark.d.rustad@intel.com> Acked-by: Kiran Patil <Kiran.patil@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
-
Len Brown authored
-
Igor Murzov authored
Signed-off-by: Igor Murzov <e-mail@date.by> Signed-off-by: Len Brown <len.brown@intel.com>
-