- 22 Apr, 2013 12 commits
-
-
Rusty Russell authored
Now that we've adjusted all the code, we can simply set switcher_addr to wherever it needs to go below the fixmaps, rather than asserting that it should be so. With large NR_CPUS and PAE, people were hitting the "mapping switcher would thwack fixmap" message. Reported-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
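A minimal sketch of the idea, assuming the lguest names (switcher_addr, TOTAL_SWITCHER_PAGES); not the verbatim patch:

    /* Derive the Switcher base from the bottom of the fixmap area,
     * instead of asserting that a hard-coded address still fits. */
    switcher_addr = FIXADDR_START - TOTAL_SWITCHER_PAGES * PAGE_SIZE;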
-
Rusty Russell authored
This optimizes the frobbing of our Switcher map. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
It's always the same, so there's no need to put in the PTE every time we're about to run. Keep a flag to track whether the pagetable has the Switcher entries allocated, and when allocating always initialize the Switcher text PTE. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
We currently use the whole top PGD entry for the switcher, so we simply share a fixed page of PTEs between all guests (actually, it's one per Host CPU, to ensure isolation between guests). This changes to a scheme where every guest has its own mappings. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
We will need this in page_table.c soon. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
We want a separate find_pte() function so we can call it for populating the switcher PTE entries. We can also use it in page_writable(). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
This is a bit neater: we can immediately return if a PTE/PGD/PMD entry is invalid (which also kills the guest). It means we don't risk using invalid entries as we reshuffle the code. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
i.e. SHARED_SWITCHER_PAGES == 1. The Switcher text is well under a page, and assuming this is a minor simplification: it's nice to have *one* simplification in a patch series! Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
There is a single page with the Switcher in it, but it's followed by 2 pages per Host CPU. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
We can use switcher_addr directly. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
We currently assume that the Switcher is in the top pgd; we want to remove this assumption, so check that the vaddr is OK, rather than checking the pgd index. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
We currently use the whole top PGD entry for the switcher, but that's hitting the fixmap in some configurations (mainly, large NR_CPUS). Introduce a variable, currently set to the constant. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 15 Apr, 2013 1 commit
-
-
Amit Shah authored
Returning EMFILE (process has too many open files) is incorrect for indicating that a port is already open by another process. Use EBUSY for that. This does change what we report to userspace, but I believe userspace can look at it this way: it gets EBUSY, a new error code, instead of EMFILE. It's still an error, and that's not changing. Reported-by: Mateusz Guzik <mguzik@redhat.com> Signed-off-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
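A hedged sketch of the open-path check being changed (the exact function in virtio_console.c may differ):

    /* Port already opened by another process: was -EMFILE, which
     * wrongly says "this process has too many open files". */
    if (port->guest_connected)
            return -EBUSY;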
-
- 08 Apr, 2013 6 commits
-
-
Wanlong Gao authored
Add a hot cpu notifier to reset the request virtqueue affinity when doing CPU hotplug. Cc: linux-scsi@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com> Reviewed-by: Asias He <asias@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
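A hedged sketch of such a notifier, using the 3.9-era hotcpu API; the affinity helper name is illustrative:

    static int virtscsi_cpu_callback(struct notifier_block *nfb,
                                     unsigned long action, void *hcpu)
    {
            struct virtio_scsi *vscsi =
                    container_of(nfb, struct virtio_scsi, nb);

            switch (action) {
            case CPU_ONLINE:
            case CPU_ONLINE_FROZEN:
            case CPU_DEAD:
            case CPU_DEAD_FROZEN:
                    __virtscsi_set_affinity(vscsi, true); /* re-spread vq IRQs */
                    break;
            }
            return NOTIFY_OK;
    }

    /* at probe: vscsi->nb.notifier_call = virtscsi_cpu_callback;
     *           register_hotcpu_notifier(&vscsi->nb); */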
-
Paolo Bonzini authored
This patch adds queue steering to virtio-scsi. When a target is sent multiple requests, we always drive them to the same queue so that FIFO processing order is kept. However, if a target was idle, we can choose a queue arbitrarily. In this case the queue is chosen according to the current VCPU, so the driver expects the number of request queues to be equal to the number of VCPUs. This makes it easy and fast to select the queue, and also lets the driver optimize the IRQ affinity for the virtqueues (each virtqueue's affinity is set to the CPU that "owns" the queue).

The speedup comes from improving cache locality and giving CPU affinity to the virtqueues, which is why this scheme was selected. Assuming that the thread sending requests to the device is I/O-bound, it is likely to be sleeping at the time the ISR is executed, so executing the ISR on the same processor that sent the requests is cheap. However, the kernel will not execute the ISR on the "best" processor unless the affinity is set explicitly: in practice there are many such I/O-bound processes, and thus many otherwise idle processors, so the kernel ends up executing the ISR on a random processor rather than the one that is sending requests to the device.

The alternative to per-CPU virtqueues is per-target virtqueues. To achieve the same locality, we could dynamically choose the virtqueue's affinity based on the CPU of the last task that sent a request. This is less appealing because we do not set the affinity directly; we only provide a hint to irqbalanced running in userspace, and dynamically changing the affinity only works if userspace applies the hint fast enough.

Cc: linux-scsi@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Reviewed-by: Asias He <asias@redhat.com>
Tested-by: Venkatesh Srinivas <venkateshs@google.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
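A hedged sketch of the steering rule described above (field names approximate the driver rather than quote it; tgt->reqs is decremented on completion, not shown):

    static struct virtio_scsi_vq *virtscsi_pick_vq(struct virtio_scsi *vscsi,
                                    struct virtio_scsi_target_state *tgt)
    {
            struct virtio_scsi_vq *vq;
            unsigned long flags;
            u32 queue_num;

            spin_lock_irqsave(&tgt->tgt_lock, flags);
            if (tgt->reqs++ != 0) {
                    /* Target busy: keep using its queue, preserving FIFO order. */
                    vq = tgt->req_vq;
            } else {
                    /* Target idle: take the queue "owned" by the current CPU. */
                    queue_num = smp_processor_id() % vscsi->num_queues;
                    tgt->req_vq = vq = &vscsi->req_vqs[queue_num];
            }
            spin_unlock_irqrestore(&tgt->tgt_lock, flags);
            return vq;
    }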
-
Paolo Bonzini authored
Avoid duplicated code in all of the callers. Cc: linux-scsi@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com> Reviewed-by: Asias He <asias@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Paolo Bonzini authored
This will be needed soon in order to retrieve the per-target struct. Cc: linux-scsi@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com> Reviewed-by: Asias He <asias@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Wanlong Gao authored
virtio_scsi_target_state is now empty. We will find new uses for it in the next few patches, so this patch does not drop it completely. As James suggested, we use the target_alloc and target_destroy entries in the host template to allocate and destroy the virtio_scsi_target_state of each target, and attach this struct to scsi_target->hostdata. Now we can get at it from the sdev with scsi_target(sdev)->hostdata. No messing around with fixed-size arrays or bulk memory allocation, and no need to pass in the maximum target size as a parameter: everything now happens dynamically. Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: linux-scsi@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com> Reviewed-by: Asias He <asias@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
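A hedged sketch of the two hooks (field names approximate):

    static int virtscsi_target_alloc(struct scsi_target *starget)
    {
            struct virtio_scsi_target_state *tgt =
                    kmalloc(sizeof(*tgt), GFP_KERNEL);
            if (!tgt)
                    return -ENOMEM;
            spin_lock_init(&tgt->tgt_lock);
            starget->hostdata = tgt;   /* later: scsi_target(sdev)->hostdata */
            return 0;
    }

    static void virtscsi_target_destroy(struct scsi_target *starget)
    {
            kfree(starget->hostdata);
    }

    /* in the scsi_host_template:
     *      .target_alloc   = virtscsi_target_alloc,
     *      .target_destroy = virtscsi_target_destroy,
     */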
-
Wei Yongjun authored
Those symbols are only used within this file, and should be static. Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn> Acked-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
- 02 Apr, 2013 3 commits
-
-
Wei Yongjun authored
Fix to return a negative error code from the error handling case instead of 0, as returned elsewhere in this function. Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn> Acked-by: Sjur Brændeland <sjur.brandeland@stericsson.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
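The bug class being fixed, as a generic sketch (names hypothetical):

    ptr = alloc_something();            /* hypothetical allocator */
    if (!ptr) {
            err = -ENOMEM;              /* was missing: err stayed 0 */
            goto err_out;
    }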
-
Amos Kong authored
Some header files were split or moved to uapi/ without updating MAINTAINERS. Signed-off-by: Amos Kong <kongjianjun@gmail.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Paul Bolle authored
virtio_balloon.h exports "u16" and "u64" to userspace. Use "__u16" and "__u64" instead. Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
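For reference, the exported stats entry with the userspace-safe fixed-size types:

    struct virtio_balloon_stat {
            __u16 tag;
            __u64 val;
    } __attribute__((packed));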
-
- 24 Mar, 2013 2 commits
-
-
Sjur Brændeland authored
Check that vringh_config is not NULL before using it. Signed-off-by: Sjur Brændeland <sjur.brandeland@stericsson.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Sjur Brændeland authored
Check the correct return value from vringh_notify_enable_kern(): it returns false if more packets are available, not true. Signed-off-by: Sjur Brændeland <sjur.brandeland@stericsson.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
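A hedged sketch of the corrected pattern (receive_one() is a hypothetical consumer):

    for (;;) {
            while (receive_one(vrh) > 0)
                    ;                                /* drain available packets */
            if (vringh_notify_enable_kern(vrh))
                    break;                           /* ring empty, kicks re-armed */
            vringh_notify_disable_kern(vrh);         /* false: more arrived, poll on */
    }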
-
- 20 Mar, 2013 16 commits
-
-
Rusty Russell authored
Make the rest of the paths use virtqueue_add_sgs() or virtqueue_add_outbuf(). Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
virtqueue_add_buf() is going away, replaced with virtqueue_add_sgs(), which takes multiple terminated scatterlists. Cc: Eric Van Hensbergen <ericvh@gmail.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
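A sketch of a typical conversion (buffer names illustrative). Before, one array with the out entries followed by the in entries:

    struct scatterlist sg[2];

    sg_init_table(sg, 2);
    sg_set_buf(&sg[0], req, req_len);
    sg_set_buf(&sg[1], resp, resp_len);
    virtqueue_add_buf(vq, sg, 1, 1, data, GFP_ATOMIC);

After, one terminated scatterlist per direction:

    struct scatterlist out_sg, in_sg;
    struct scatterlist *sgs[] = { &out_sg, &in_sg };

    sg_init_one(&out_sg, req, req_len);
    sg_init_one(&in_sg, resp, resp_len);
    virtqueue_add_sgs(vq, sgs, 1, 1, data, GFP_ATOMIC);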
-
Rusty Russell authored
We never add buffers with input and output parts, so use the new accessors. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
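A sketch of the one-direction case (output shown; virtqueue_add_inbuf is the mirror):

    sg_init_one(&sg, buf, len);

    /* before: */
    virtqueue_add_buf(vq, &sg, 1, 0, buf, GFP_ATOMIC);

    /* after: */
    virtqueue_add_outbuf(vq, &sg, 1, buf, GFP_ATOMIC);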
-
Rusty Russell authored
We never add buffers with input and output parts, so use the new accessors. Cc: Ohad Ben-Cohen <ohad@wizery.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
We never add buffers with input and output parts, so use the new accessors. Cc: Sjur Brendeland <sjur.brandeland@stericsson.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
We never add buffers with input and output parts, so use the new accessors. Acked-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
We never add buffers with input and output parts, so use the new accessors. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Reviewed-by: Asias He <asias@redhat.com>
-
Rusty Russell authored
We never add buffers with input and output parts, so use the new accessors. Cc: "Michael S. Tsirkin" <mst@redhat.com> Reviewed-by: Asias He <asias@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
It's a bit cleaner to hand multiple sgs, rather than one big one. Cc: "Michael S. Tsirkin" <mst@redhat.com> Tested-by: Wanlong Gao <gaowanlong@cn.fujitsu.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
It's a bit clearer, and add_buf is going away. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Reviewed-by: Asias He <asias@redhat.com>
-
Wanlong Gao authored
Using the new virtqueue_add_sgs function lets us simplify the queueing path. In particular, all data protected by the tgt_lock is just gone (multiqueue will find a new use for the lock). Signed-off-by: Wanlong Gao <gaowanlong@cn.fujitsu.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Asias He <asias@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
It's simply a flag as to whether we have data now, so make it an explicit function parameter rather than a member of struct virtblk_req. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Reviewed-by: Asias He <asias@redhat.com>
-
Paolo Bonzini authored
(This is a respin of Paolo Bonzini's patch, but it calls virtqueue_add_sgs() instead of his multi-part API). This is similar to the previous patch, but a bit more radical because the bio and req paths now share the buffer construction code. Because the req path doesn't use vbr->sg, however, we need to add a couple of arguments to __virtblk_add_req. We also need to teach __virtblk_add_req how to build SCSI command requests. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Reviewed-by: Asias He <asias@redhat.com>
-
Paolo Bonzini authored
(This is a respin of Paolo Bonzini's patch, but it calls virtqueue_add_sgs() instead of his multi-part API). Move the creation of the request header and response footer to __virtblk_add_req. vbr->sg now contains only the data scatterlist; the header/footer are added separately using virtqueue_add_sgs(). With this change, virtio-blk (with use_bio) no longer relies on the virtio functions ignoring the end markers in a scatterlist. The next patch will do the same for the other path. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Reviewed-by: Asias He <asias@redhat.com>
-
Paolo Bonzini authored
Right now, both virtblk_add_req and virtblk_add_req_wait call virtqueue_add_buf. To prepare for the next patches, abstract the call to virtqueue_add_buf into a new function __virtblk_add_req, and include the waiting logic directly in virtblk_add_req. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Asias He <asias@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Rusty Russell authored
As expected, the simplified accessors are faster.

for i in `seq 50`; do /usr/bin/time -f 'Wall time:%e' ./vringh_test --indirect --eventidx --parallel --fast-vringh; done 2>&1 | stats --trim-outliers:

Before:
Using CPUS 0 and 3
Guest: notified 0, pinged 39062-39063(39063)
Host: notified 39062-39063(39063), pinged 0
Wall time:1.760000-2.220000(1.789167)

After:
Using CPUS 0 and 3
Guest: notified 0, pinged 39037-39063(39062)
Host: notified 39037-39063(39062), pinged 0
Wall time:1.640000-1.810000(1.676875)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-