- 24 May, 2011 11 commits
-
-
Lars Ellenberg authored
An administrative detach used to request a state change directly to D_DISKLESS, first suspending IO to avoid the last put_ldev() occurring from an endio handler, potentially in irq context. This is not enough on the receiving side (typically secondary): we may miss some peer_req on the way to local disk, which then may do the last put_ldev() from their drbd_peer_request_endio(). This patch makes the detach always go through the intermediate D_FAILED state. We may consider renaming it D_DETACHING. An alternative approach would be to create yet another work item to be scheduled on the worker, do the destructor work from there, and get the timing right. Manually picked commit 564040f from the drbd 8.4 branch. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
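A minimal sketch of the change, assuming drbd 8.3-style helpers (drbd_suspend_io(), drbd_request_state(), the NS() shorthand); the actual detach handler differs in detail:

```c
/* Sketch: the administrative detach now requests the intermediate
 * D_FAILED state; the state machine later completes the transition to
 * D_DISKLESS once every local reference has done its put_ldev(). */
static int drbd_admin_detach_sketch(struct drbd_conf *mdev)
{
	drbd_suspend_io(mdev);	/* keep new IO out while detaching */
	/* was: drbd_request_state(mdev, NS(disk, D_DISKLESS)); */
	return drbd_request_state(mdev, NS(disk, D_FAILED));
}
```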
-
Philipp Reisner authored
The old (optimistic) implementation could shrink the bio size on a primary device. Shrinking the bio size on a primary device is bad, since we might still get BIOs with the old (bigger) size shortly after we published the new size. The new implementation is more conservative, and eventually increases the max_bio_size on a primary device (which is valid). It does so when it knows both the local limit AND the remote limit. We cache the last seen max_bio_size of the peer in the meta data, and rely on that to make the operation of single nodes more efficient. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
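A rough, self-contained illustration of the conservative rule (the helper name is made up; in the driver this is derived from the local queue limits and the peer's limit cached in the meta data):

```c
/* Never shrink the limit on a Primary; only grow it once both the
 * local limit and the peer's limit are known. */
static unsigned int new_max_bio_size(unsigned int cur,
				     unsigned int local_max,
				     unsigned int peer_max)
{
	unsigned int both = local_max < peer_max ? local_max : peer_max;

	return both > cur ? both : cur;	/* grow if safe, never shrink */
}
```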
-
Philipp Reisner authored
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
-
Philipp Reisner authored
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
-
Philipp Reisner authored
It seems that the real cause of all the issues was that we did not notice in drbd_try_connect() when the other guy closes one socket, if the round trip time gets higher than 100ms. That 100ms was hard coded! Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
-
Lars Ellenberg authored
It is no longer sufficient to trigger on local WRITE; we need to check (rq_state & RQ_IN_ACT_LOG) before calling drbd_al_complete_io in the error path as well. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
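An abridged sketch of the guard (drbd 8.3 names assumed: rq_state, RQ_IN_ACT_LOG, drbd_al_complete_io()); the surrounding completion logic is omitted:

```c
/* Error path of request completion: only requests that were actually
 * accounted in the activity log may give back their AL reference.
 * Triggering on "local WRITE" alone is no longer sufficient. */
static void req_error_complete_sketch(struct drbd_conf *mdev,
				      struct drbd_request *req)
{
	if (req->rq_state & RQ_IN_ACT_LOG)
		drbd_al_complete_io(mdev, req->sector);
}
```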
-
Philipp Reisner authored
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
-
Lars Ellenberg authored
If there is no replication traffic within the idle timeout (ping-int seconds), DRBD will send a P_PING, and adjust the timeout to ping-timeout. If there is no P_PING_ACK received within this ping-timeout, DRBD finally drops the connection, and tries to re-establish it. To decide which timeout was active, we compared the current timeout with the ping-timeout, and dropped the connection if they matched. By default, ping-int is 10 seconds and ping-timeout is 500 ms. Unfortunately, if you configure ping-timeout to be the same as ping-int, expiry of the idle timeout was mistaken for a missing ping ack and caused an immediate reconnection attempt. Fix: allow both timeouts to be equal; use a local variable to store which timeout is active. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
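Illustrative pseudo-C only (all helper names and timeout variables are made up); the point is that a local flag records which timeout is armed instead of inferring it by comparing timeout values:

```c
static void asender_loop_sketch(void)
{
	int ping_timeout_active = 0;	/* which timeout is currently armed? */

	for (;;) {
		if (wait_for_packet() == -ETIMEDOUT) {
			if (ping_timeout_active) {
				drop_connection();	/* missing P_PING_ACK */
				return;
			}
			send_ping();			/* idle (ping-int) timeout expired */
			arm_timeout_ms(ping_timeout_ms);
			ping_timeout_active = 1;
			continue;
		}
		/* got a packet (e.g. P_PING_ACK): back to the idle timeout */
		ping_timeout_active = 0;
		arm_timeout_ms(ping_int_ms);
	}
}
```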
-
Lars Ellenberg authored
We limit ourselves to a configurable maximum number of pages used as temporary bio pages. If the configured "max_buffers" is not big enough to match the bandwidth of the respective deployment, a distributed deadlock could be triggered by e.g. fast online verify and heavy application IO. TCP connections would block on congestion, because both receivers would wait on pages to become available. Fortunately the respective senders in this case would be able to give back some pages already. So do that. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
-
Lars Ellenberg authored
For some time we contemplated calling the "struct lru_cache" a "struct tracked_set", and some comments kept the ts_ prefix. Fix those to match the member field names. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
-
Philipp Reisner authored
In case a write fails on the local disk, go into the D_INCONSISTENT disk state. That causes future reads of that block to be shipped to the peer. Read retry remote was already in place. Actually, the documentation needs to get fixed now, since the application is still shielded from the error (as long as only a single disk fails). The difference to detach is that we keep the disk, and therefore might keep all the other, still working, sectors up to date. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
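A hedged sketch of the idea using drbd 8.3-style helpers (drbd_set_out_of_sync(), drbd_force_state(), NS()); the real error handling is driven by the configured on-io-error policy and differs in detail:

```c
/* On a failed local WRITE: keep the disk, but mark the device
 * D_INCONSISTENT and flag the failed block as out of sync, so future
 * reads of that block are served by the peer. */
static void local_write_error_sketch(struct drbd_conf *mdev,
				     sector_t sector, int size)
{
	drbd_set_out_of_sync(mdev, sector, size);
	drbd_force_state(mdev, NS(disk, D_INCONSISTENT));
}
```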
-
- 19 May, 2011 1 commit
-
-
Jens Axboe authored
Merge branches 'for-jens/xen-backend-fixes' and 'for-jens/xen-blkback-v3.3' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen into for-2.6.40/drivers
-
- 18 May, 2011 3 commits
-
-
Konrad Rzeszutek Wilk authored
If the backends which use these two functions are compiled as modules, the two functions need to be exported. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
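The two functions are not named in this excerpt; as a generic illustration (hypothetical function name), this is the export that makes a symbol linkable from backends built as modules:

```c
#include <linux/module.h>

/* Hypothetical example -- EXPORT_SYMBOL_GPL() is what allows a
 * modular backend to link against the function. */
int xen_example_helper(void)
{
	return 0;
}
EXPORT_SYMBOL_GPL(xen_example_helper);
```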
-
Konrad Rzeszutek Wilk authored
We supported the M2P (and P2M) override only for GNTMAP_contains_pte type mappings, meaning that the grant operations would "contain the machine address of the PTE to update". If the flag is unset, then the grant operation "contains a host virtual address". The latter case means that the hypervisor takes care of updating our page table (specifically the PTE entry) with the guest's MFN, so we should not try to do anything with the PTE ourselves. Previous to this patch we would try to clear the PTE, which resulted in the Xen hypervisor being upset with us:
(XEN) mm.c:1066:d0 Attempt to implicitly unmap a granted PTE c0100000ccc59067
(XEN) domain_crash called from mm.c:1067
(XEN) Domain 0 (vcpu#0) crashed on cpu#3:
(XEN) ----[ Xen-4.0-110228 x86_64 debug=y Not tainted ]----
and crashing us. This patch allows us to inhibit the PTE clearing in the PV guest if GNTMAP_contains_pte is not set. On the m2p_remove_override path we provide the same parameter. Sadly, in the grant-table driver we do not have a mechanism to tell m2p_remove_override whether to clear the PTE or not. Since the grant-table driver is used by user space, we can safely assume that it operates only on PTEs. Hence the implementation returns -EOPNOTSUPP for the !GNTMAP_contains_pte case. In the future we can implement support for this; it will require some extra accounting structure to keep track of the page[i] and the flag. [v1: Added documentation details, made it return -EOPNOTSUPP instead of trying to do a half-way implementation] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
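A hedged sketch of the two halves: only clear the PTE when the mapping really is a GNTMAP_contains_pte one, and have the grant-table (user-space) driver refuse the other case with -EOPNOTSUPP. Function names and the simplified signatures are illustrative, not the real ones:

```c
/* Only touch the PTE ourselves for GNTMAP_contains_pte mappings; for
 * host-virtual-address mappings the hypervisor owns the PTE update,
 * and clearing it here makes Xen crash the domain. */
static int m2p_remove_override_sketch(struct page *page, bool clear_pte)
{
	if (clear_pte) {
		/* ... clear the PTE as before ... */
	}
	/* otherwise: leave the page tables alone */
	return 0;
}

/* The grant-table driver only ever operates on PTE mappings from user
 * space, so the !GNTMAP_contains_pte case is simply not supported: */
static int gnttab_unmap_sketch(unsigned int flags)
{
	if (!(flags & GNTMAP_contains_pte))
		return -EOPNOTSUPP;	/* not implemented (yet) */
	/* ... PTE-based teardown, passing clear_pte = true ... */
	return 0;
}
```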
-
Jan Beulich authored
The sector number on empty barrier requests may (will?) be -1, which, given that it is being treated as an unsigned 64-bit quantity, will almost always exceed the actual (virtual) disk's size. Inspired by Konrad's "When writting barriers set the sector number to zero...". While at it, also add overflow checking to the math in vbd_translate(). Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
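A self-contained sketch of the vbd_translate()-style range check: empty (zero-sector) barriers skip the check entirely, and the addition is guarded against wraparound so a sector number of (u64)-1 cannot slip past the end-of-device test. Names and types are illustrative:

```c
static int check_extent_sketch(unsigned long long sector,
			       unsigned long long nr_sects,
			       unsigned long long disk_size)
{
	if (nr_sects) {
		unsigned long long end = sector + nr_sects;

		if (end < sector)	/* addition wrapped around: reject */
			return -1;
		if (end > disk_size)	/* beyond the (virtual) disk */
			return -1;
	}
	return 0;	/* empty barrier: nothing to range-check */
}
```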
-
- 13 May, 2011 1 commit
-
-
Laszlo Ersek authored
vbd_resize() up_read()s xs_state.suspend_mutex twice in a row via double xenbus_transaction_end() calls. The next down_read() in xenbus_transaction_start() (e.g. at the next resize attempt) then hangs. Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=618317 Acked-by: Jan Beulich <jbeulich@novell.com> Acked-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: Laszlo Ersek <lersek@redhat.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
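A sketch of the intended pairing with the real xenbus API (the actual vbd_resize() writes more nodes and has fuller error handling): each xenbus_transaction_start() must be balanced by exactly one xenbus_transaction_end(), otherwise xs_state.suspend_mutex is released twice.

```c
#include <xen/xenbus.h>

static void vbd_resize_sketch(struct xenbus_device *dev,
			      unsigned long long new_size)
{
	struct xenbus_transaction xbt;
	int err;

again:
	err = xenbus_transaction_start(&xbt);
	if (err)
		return;

	err = xenbus_printf(xbt, dev->nodename, "sectors", "%llu", new_size);
	if (err) {
		xenbus_transaction_end(xbt, 1);	/* abort: end exactly once */
		return;
	}

	err = xenbus_transaction_end(xbt, 0);	/* commit: end exactly once */
	if (err == -EAGAIN)
		goto again;
}
```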
-
- 12 May, 2011 20 commits
-
-
Konrad Rzeszutek Wilk authored
The recent changes caused this field of the structure to be offset a bit. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
And not depend on the driver being built with -DDEBUG flag. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
No need for that '_st' suffix; xen_blkif is more apt. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
No point in having this in the blkif.h file; it is not used by the frontend. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
CHECK: multiple assignments should be avoided Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
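A tiny before/after of the checkpatch complaint (the variable names are illustrative):

```c
/* before: checkpatch flags the chained assignment */
blkif->st_rd_req = blkif->st_wr_req = 0;

/* after: one assignment per statement */
blkif->st_rd_req = 0;
blkif->st_wr_req = 0;
```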
-
Konrad Rzeszutek Wilk authored
Break up the macro usage. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
with more details. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
From the blkif.h header, which was exposed to the frontend. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
It is not really used for anything. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
To make it easier to read. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
And also make them uniform and prefix the message with 'xen-blkback'. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Konrad Rzeszutek Wilk authored
If the backend supports the 'feature-flush-cache' mode, use that instead of the 'feature-barrier' support. Currently there are three backends that support the 'feature-flush-cache' mode: NetBSD, Solaris, and the Linux kernel. The 'flush' option is a much more lightweight version than the 'barrier' support, so let's try to use it, as there are no filesystems in the kernel that use full barriers anymore. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
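A hedged sketch of the frontend-side probe using the real xenbus_gather() call; field names such as feature_flush follow blkfront conventions, but the surrounding logic is abridged:

```c
static void blkfront_probe_flush_sketch(struct blkfront_info *info)
{
	int flush = 0, barrier = 0;

	/* Does the backend advertise the lightweight cache flush? */
	xenbus_gather(XBT_NIL, info->xbdev->otherend,
		      "feature-flush-cache", "%d", &flush, NULL);
	xenbus_gather(XBT_NIL, info->xbdev->otherend,
		      "feature-barrier", "%d", &barrier, NULL);

	if (flush)
		info->feature_flush = REQ_FLUSH;		/* prefer the flush op */
	else if (barrier)
		info->feature_flush = REQ_FLUSH | REQ_FUA;	/* old barrier path */
	else
		info->feature_flush = 0;
}
```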
-
Konrad Rzeszutek Wilk authored
The operation BLKIF_OP_WRITE_FLUSH_CACHE has existed in the Xen tree header file for years, but it was never present in the Linux tree because neither the frontend nor the backend supported this interface. Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
Marek Marczykowski authored
The barrier variable is an int, not a long. This overflow caused other variables to be overwritten: "err" (in PV code) and "binfo" (in xenlinux code - drivers/xen/blkfront/blkfront.c). The latter caused incorrect device flags (RO/removable etc). Signed-off-by: Marek Marczykowski <marmarek@mimuw.edu.pl> Acked-by: Ian Campbell <Ian.Campbell@citrix.com> [v1: Changed title] Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
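A sketch of the type/format agreement (the surrounding blkfront code is abridged): reading 'feature-barrier' with a long-sized format into an int-sized variable is what clobbered the neighbouring stack slots.

```c
static int read_feature_barrier_sketch(struct blkfront_info *info)
{
	int barrier = 0;	/* int, matching the "%d" format below */
	int err;

	/* The format string and the variable's type must agree; "%lu"
	 * into an int overflows onto whatever is stored next to it. */
	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
			    "feature-barrier", "%d", &barrier, NULL);
	return err ? 0 : barrier;
}
```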
-
- 11 May, 2011 1 commit
-
-
Konrad Rzeszutek Wilk authored
Suggested-by: Ian Campbell <Ian.Campbell@eu.citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
-
- 06 May, 2011 3 commits
-
-
Jens Axboe authored
drivers/block/cciss.c: In function ‘cciss_send_reset’:
drivers/block/cciss.c:2515:2: error: implicit declaration of function ‘fill_cmd’
drivers/block/cciss.c: At top level:
drivers/block/cciss.c:2531:12: error: conflicting types for ‘fill_cmd’
drivers/block/cciss.c:2534:1: note: an argument type that has a default promotion can’t match an empty parameter name list declaration
drivers/block/cciss.c:2515:18: note: previous implicit declaration of ‘fill_cmd’ was here
make[1]: *** [drivers/block/cciss.o] Error 1
make: *** [drivers/block/cciss.o] Error 2
Move fill_cmd() to above where it is first used. Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
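For illustration, the ordering problem in miniature (hypothetical, simplified signature; the real fill_cmd() takes several more arguments): calling a function before any declaration is visible creates an implicit declaration that later conflicts with the real definition, hence the errors above. Defining fill_cmd() above its first caller, as the commit does, resolves it.

```c
/* Defined above its first caller, so the proper declaration is in
 * scope before cciss_send_reset_sketch() uses it. */
static int fill_cmd(int cmd)
{
	return cmd;
}

static int cciss_send_reset_sketch(void)
{
	return fill_cmd(0);
}
```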
-
Stephen M. Cameron authored
This allows the number of commands reserved for use by SCSI tape drives and medium changers to be adjusted at driver load time via the kernel parameter cciss_tape_cmds, with a default value of 6 and a range of 2 - 16 inclusive. Previously, the driver limited the number of commands which could be queued to the SCSI half of the driver to only 2. This fixes the problem that if you had more than two tape drives, you couldn't, for example, erase or rewind them all at the same time. Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com> Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
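A sketch of the load-time knob; the parameter name and range follow the commit message, while the clamping helper is illustrative:

```c
#include <linux/module.h>
#include <linux/moduleparam.h>

static int cciss_tape_cmds = 6;	/* default per the commit message */
module_param(cciss_tape_cmds, int, 0644);
MODULE_PARM_DESC(cciss_tape_cmds,
	"number of commands reserved for SCSI tape drives/medium changers (2-16, default 6)");

/* Keep the value within the documented 2-16 range at load time. */
static void cciss_clamp_tape_cmds_sketch(void)
{
	if (cciss_tape_cmds < 2)
		cciss_tape_cmds = 2;
	if (cciss_tape_cmds > 16)
		cciss_tape_cmds = 16;
}
```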
-
Stephen M. Cameron authored
It causes NMIs which are undesirable at best, unsurvivable at worst. Prefer the soft reset instead. Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com> Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
-