- 30 Apr, 2014 15 commits
-
-
Lars Ellenberg authored
In the drbd make request function, specifically in drbd_send_and_submit(), we decide whether we want to send the actual write request, or only a "set this block out of sync" information. We do so based on the current connection state, while holding the req_lock. The connection state is not supposed to change while holding the req_lock. But in drbd_start_resync, we changed that state anyway, while holding only the global_state_lock, which is enough to change sync-after dependencies (paused vs active resync), but not good enough to change the connection state. Fix: in drbd_start_resync, first grab the req_lock to serialize with drbd_send_and_submit(), before grabbing the global_state_lock to be able to evaluate the sync-after dependencies. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
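A minimal sketch of the described lock ordering; the structure and lock names follow drbd 8.4-era conventions and are assumptions, not the verbatim patch:

/* Take the req_lock first to serialize with drbd_send_and_submit(),
 * then the global_state_lock that guards sync-after dependencies. */
static void start_resync_locked(struct drbd_device *device)
{
	spin_lock_irq(&device->resource->req_lock);
	write_lock(&global_state_lock);		/* irqs already off */

	/* ... now safe to evaluate sync-after dependencies and to
	 * change the connection state ... */

	write_unlock(&global_state_lock);
	spin_unlock_irq(&device->resource->req_lock);
}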
-
Lars Ellenberg authored
Allow the user of REQ_DISCARD. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Lars Ellenberg authored
Note that I do NOT call __drbd_chk_io_error for failed REQ_DISCARD. That may be wrong, though, or needs to differ between EOPNOTSUPP and other errors... Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Lars Ellenberg authored
If the receiver needs to serve a discard request on a queue that does not announce itself as discard capable, it falls back to synchronous blkdev_issue_zeroout(). We expect only "reasonably" large (up to one activity log extent?) discard requests. We do this so as not to block the receiver for too long in this fallback code path, and so as not to set/clear too many bits inside one spin_lock_irqsave() section in drbd_set_in_sync/drbd_set_out_of_sync. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
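A hedged sketch of such a fallback path with era-appropriate signatures; everything except blk_queue_discard(), blkdev_issue_discard() and blkdev_issue_zeroout() is a hypothetical name:

#include <linux/blkdev.h>

/* Serve a peer's discard on a backing device that may not support it. */
static int peer_discard_or_zeroout(struct block_device *bdev,
				   sector_t sector, sector_t nr_sects)
{
	struct request_queue *q = bdev_get_queue(bdev);

	if (blk_queue_discard(q))
		return blkdev_issue_discard(bdev, sector, nr_sects,
					    GFP_NOIO, 0);

	/* Fallback: synchronously write zeroes over the range. */
	return blkdev_issue_zeroout(bdev, sector, nr_sects, GFP_NOIO);
}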
-
Lars Ellenberg authored
We plan to use genl_family->parallel_ops = true in the future, but need to review all possible interactions first. For now, only selectively drop genl_lock() in drbd_set_role(), instead serializing on our own internal resource->conf_update mutex. We now can be promoted/demoted on many resources in parallel, which may significantly improve cluster failover times when fencing is required. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Lars Ellenberg authored
Because all administrative requests via genetlink have been globally serialized via genl_lock(), we used to have one static struct drbd_config_context "admin context". Move this on-stack to the respective callback functions. This will allow us to selectively drop the genl_lock() (or use genl_family->parallel_ops) in the future. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Philipp Reisner authored
When a 'cluster wide' disconnect executes, the result comes back from the peer, and the connection breaks immediately after that, then _conn_rq_cond() reported back SS_CW_SUCCESS. Therefore _conn_request_state() calls conn_set_state(), which has a BUG() in it. The BUG() is hit because conn_is_valid_transition() does not like the transaction, which goes back to is_valid_soft_transition() returning SS_OUTDATE_WO_CONN. The fix is to consider an error reported by is_valid_soft_transition() even when the peer agreed to the transaction. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Lars Ellenberg authored
Before, application IO could pre-empt resync activity for up to a hardcoded 20 seconds per resync request. A very busy server could throttle the effective resync bandwidth down to one request per 20 seconds. Now, we only let application IO pre-empt resync traffic while the current resync rate estimate is above c-min-rate. If you disable the c-min-rate throttle feature (set c-min-rate = 0), application IO will no longer pre-empt resync traffic at all. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Lars Ellenberg authored
If max-buffers and socket buffer sizes are "too small" for the chosen resync rate, this could potentially lead to a distributed deadlock, which may or may not resolve itself via the "ko-count" and request timeout mechanism, or could be resolved by a forced disconnect. One option to deal with this is proper configuration: use larger max-buffers and socket buffer settings, or reduce the resync rate. But even with bad configuration we should not deadlock, but "gracefully" recover. The issue is avoided by using only up to max-buffers/2 for resync requests, and by using max-buffers not as a hard limit for data buffer allocations, but as a throttle threshold only. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Lars Ellenberg authored
While merging adjacent dirty blocks into resync requests, the resync rate throttle was disregarded. For very low resync rates, the effective rate may have exceeded the intended rate by a larger margin. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Lars Ellenberg authored
If we don't make resync or verify progress for "too long", we want to flag it as "stalled". Since the 2010 change "use rolling marks for resync speed calculation", this "too long" was off by a factor of HZ. With HZ 250, it would have been flagged as stalled after 100 minutes. Hardcode 3 minutes instead. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Philipp Reisner authored
If a user forces the operation, he takes the blame in case the peer does not have enough space. No reason to deny this... Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Philipp Reisner authored
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Philipp Reisner authored
We now clear the susp_fen flag if we are not going to call a fencing handler, because setting the susp_fen flag needs to be edge-triggered, not level-triggered. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Philipp Reisner authored
When we need to outdate the peer while being promoted to primary, and the connection gets established at the same time, we deadlock in drbd_try_outdate_peer() when trying to clear the susp_fen bit. Fix this by setting the STATE_SENT bit while holding the mutex. Using drbd_change_state(.. , CS_HARD, ..), which does not block until STATE_SENT is cleared, is only for clarity. It does not contribute anything to the fix. Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com> Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com> Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 23 Apr, 2014 3 commits
-
-
Asai Thambi S P authored
A hardware quirk in P320h/P420m interferes with PCIe transactions on some AMD chipsets, making P320h/P420m unusable. This workaround disables the ERO and NoSnoop bits in the parent and root complex for normal functioning of these devices. NOTE: This workaround is specific to AMD chipsets with a PCIe upstream device with device id 0x5aXX. Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com> Signed-off-by: Sam Bradshaw <sbradshaw@micron.com> Cc: stable@kernel.org Signed-off-by: Jens Axboe <axboe@fb.com>
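A hedged sketch of the described register tweak; the helper name is made up, while the capability accessor and bit definitions are standard PCI API:

#include <linux/pci.h>

/* Clear Enable Relaxed Ordering and Enable No Snoop in the Device
 * Control register of an upstream bridge, as the workaround does for
 * the affected AMD chipsets. */
static void disable_ero_nosnoop(struct pci_dev *bridge)
{
	pcie_capability_clear_word(bridge, PCI_EXP_DEVCTL,
				   PCI_EXP_DEVCTL_RELAX_EN |
				   PCI_EXP_DEVCTL_NOSNOOP_EN);
}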
-
Asai Thambi S P authored
In module exit, dfs_parent and its subtree were removed before unregistering with pci. When removal of the debugfs entry for each device is then attempted in the pci_remove() context, the entries no longer exist, as dfs_parent and its children were already ripped apart. Modified to first unregister with pci and then remove dfs_parent. Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com> Cc: stable@kernel.org Signed-off-by: Jens Axboe <axboe@fb.com>
-
Asai Thambi S P authored
Increased timeout for STANDBY IMMEDIATE command to 2 minutes. Signed-off-by: Selvan Mani <smani@micron.com> Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com> Cc: stable@kernel.org Signed-off-by: Jens Axboe <axboe@fb.com>
-
- 22 Apr, 2014 2 commits
-
-
Alexander Gordeev authored
As a result of the deprecation of the MSI-X/MSI enablement functions pci_enable_msix() and pci_enable_msi_block(), all drivers using these two interfaces need to be updated to use the new pci_enable_msi_range() or pci_enable_msi_exact() and pci_enable_msix_range() or pci_enable_msix_exact() interfaces. Signed-off-by: Alexander Gordeev <agordeev@redhat.com> Cc: Mike Miller <mike.miller@hp.com> Cc: iss_storagedev@hp.com Cc: Jens Axboe <axboe@kernel.dk> Cc: linux-pci@vger.kernel.org Signed-off-by: Jens Axboe <axboe@fb.com>
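A hedged example of this kind of conversion; the helper and vector count are made up, only the PCI interfaces are real:

#include <linux/pci.h>

/* Old: pci_enable_msix(pdev, entries, nvec) succeeded only for exactly
 * nvec vectors. New: accept anything from 1 up to nvec. */
static int setup_msix(struct pci_dev *pdev, struct msix_entry *entries,
		      int nvec)
{
	int ret = pci_enable_msix_range(pdev, entries, 1, nvec);

	if (ret < 0)
		return ret;	/* no vectors could be allocated */
	return ret;		/* number of vectors actually allocated */
}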
-
Alexander Gordeev authored
Function pci_enable_msix_exact() is a variation of pci_enable_msix_range() that allows a device driver to request a particular number of MSI-X interrupts, rather than any number within a specified range. Signed-off-by: Alexander Gordeev <agordeev@redhat.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Cc: Kyungmin Park <kyungmin.park@samsung.com> Cc: linux-pci@vger.kernel.org Signed-off-by: Jens Axboe <axboe@fb.com>
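Roughly, the exact variant can be expressed in terms of the range variant; a sketch, not necessarily the verbatim in-tree definition:

/* Request exactly nvec MSI-X vectors, or fail. */
static inline int enable_msix_exact_sketch(struct pci_dev *dev,
					   struct msix_entry *entries,
					   int nvec)
{
	int rc = pci_enable_msix_range(dev, entries, nvec, nvec);

	if (rc < 0)
		return rc;	/* error: exact allocation not possible */
	return 0;		/* success: exactly nvec vectors allocated */
}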
-
- 16 Apr, 2014 14 commits
-
-
Jens Axboe authored
bsg currently checks ->request_fn to check whether a queue can handle struct request. But with blk-mq, we don't have a request_fn yet we are still request based. Add a queue_is_rq_based() helper and use that in bsg; I'm guessing this is not the last place we need to update for this. Besides, it better explains what is being checked. Signed-off-by: Jens Axboe <axboe@fb.com>
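A sketch of what such a helper checks; the in-tree version may differ in detail:

/* A queue is request-based if it has a legacy request_fn or blk-mq ops. */
static inline bool queue_is_rq_based(struct request_queue *q)
{
	return q->request_fn || q->mq_ops;
}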
-
Jens Axboe authored
Signed-off-by: Jens Axboe <axboe@fb.com>
-
Jens Axboe authored
Since we are now, by default, applying timer slack to expiry times, the logic for when to modify a timer in the block code is suboptimal. The block layer keeps a forward rolling timer per queue for all requests, and modifies this timer if a request has a shorter timeout than the current expiry time. However, this breaks down when slack gets applied to our rounded timer values: each new request ends up modifying the timer, since we're still a little in front of the timer + slack. Fix this by allowing a tolerance of HZ / 2; the timeout handling doesn't need to be very precise. This drastically cuts down the number of timer modifications we have to make. Signed-off-by: Jens Axboe <axboe@fb.com>
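A hedged sketch of the tolerance check when (re)arming the per-queue timeout timer; the helper and variable names are illustrative:

#include <linux/blkdev.h>

static void arm_queue_timeout(struct request_queue *q, unsigned long expiry)
{
	/* Only move the rolling timer if the new expiry is at least
	 * HZ / 2 earlier than the armed one; timer slack otherwise
	 * leaves us always slightly in front, so every new request
	 * would modify the timer. */
	if (!timer_pending(&q->timeout) ||
	    time_after(q->timeout.expires, expiry + HZ / 2))
		mod_timer(&q->timeout, expiry);
}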
-
Christoph Hellwig authored
This allows mirroring the blk-mq code flow for a more readable I/O completion handler in SCSI. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
We will use this work_struct to requeue scsi commands from the completion handler as well, so give it a more generic name. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
This allows requeueing a request that has already been accepted by ->queue_rq(). This is needed by the SCSI layer in various error conditions. The existing internal blk_mq_requeue_request is renamed to __blk_mq_requeue_request, as it is a lower-level building block for this functionality. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
Add a helper to unconditionally kick contexts of a queue. This will be needed by the SCSI layer to provide fair queueing between multiple devices on a single host. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
Add a blk-mq equivalent to blk_delay_queue so that the scsi layer can ask to be kicked again after a delay. Signed-off-by: Christoph Hellwig <hch@lst.de> Modified by me to kill the unnecessary preempt disable/enable in the delayed workqueue handler. Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
Add two unlikely branches to make sure the resid is initialized correctly for bidi request pairs, and that the second request gets properly freed. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
Split out the bottom half of blk_mq_end_io so that drivers can perform work when they know a request has been completed, but before it has been freed. This also obsoletes blk_mq_end_io_partial as drivers can now pass any value to blk_update_request directly. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
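A hedged sketch of the resulting split at the blk-mq completion entry point; a driver that wants to look at a request after completion but before it is freed can call the two halves itself:

/* Sketch: complete the bios first, free the request last. */
static void end_io_sketch(struct request *rq, int error)
{
	/* Update byte counts, complete bios, do accounting; rq stays valid. */
	blk_update_request(rq, error, blk_rq_bytes(rq));

	/* A driver may inspect rq here, before it goes away. */

	/* Release the tag and free the request. */
	__blk_mq_end_io(rq, error);
}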
-
Jens Axboe authored
blk_mq_work_fn() is always invoked off the bound workqueues, so it can happily preempt among the queues in that set without causing any issues for blk-mq. Signed-off-by: Jens Axboe <axboe@fb.com>
-
Jens Axboe authored
On UP, or with CONFIG_PREEMPT_NONE, preempt_count() will return 0, and what we really want to check is whether or not we are on the right CPU. So don't make PREEMPT part of this; just test the CPU in the mask directly. Signed-off-by: Jens Axboe <axboe@fb.com>
-
Jens Axboe authored
The friendly Intel kbuild test robot reported: drivers/cdrom/gdrom.c: In function 'gdrom_readdisk_dma': drivers/cdrom/gdrom.c:605:3: error: 'struct request' has no member named 'buffer' Convert that from req->buffer to bio_data(rq->bio). Apparently my grep missed this one, and I don't build for Sega Dreamcast enough. Signed-off-by: Jens Axboe <axboe@fb.com>
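The conversion itself is mechanical; a small illustrative helper, assuming the request carries its data in the first bio:

#include <linux/blkdev.h>
#include <linux/bio.h>

/* Where a driver used the removed req->buffer, take the pointer from
 * the first bio instead. */
static void *request_data_ptr(struct request *req)
{
	return bio_data(req->bio);
}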
-
- 15 Apr, 2014 6 commits
-
-
Christoph Hellwig authored
Instead of setting the REQ_QUEUED flag on each of them, just take it into account in the only macro checking it. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
Add a new blk_mq_tag_set structure that gets set up before we initialize the queue. A single blk_mq_tag_set structure can be shared by multiple queues. Signed-off-by: Christoph Hellwig <hch@lst.de> Modular export of blk_mq_{alloc,free}_tagset added by me. Signed-off-by: Jens Axboe <axboe@fb.com>
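A hedged sketch of how a driver fills in and registers such a set; all my_* names, the field values and the per-request pdu are made up, and the signatures follow the 3.15-era API:

#include <linux/blk-mq.h>

struct my_cmd { int tag; };			/* hypothetical per-request pdu */

struct my_dev {					/* hypothetical driver state */
	struct blk_mq_tag_set	tag_set;
	struct request_queue	*queue;
};

static int my_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *rq)
{
	return BLK_MQ_RQ_QUEUE_OK;		/* pretend we handled it */
}

static struct blk_mq_ops my_mq_ops = {
	.queue_rq	= my_queue_rq,
	.map_queue	= blk_mq_map_queue,
};

static int my_setup_queue(struct my_dev *dev)
{
	int ret;

	dev->tag_set.ops = &my_mq_ops;
	dev->tag_set.nr_hw_queues = 1;
	dev->tag_set.queue_depth = 64;
	dev->tag_set.numa_node = NUMA_NO_NODE;
	dev->tag_set.cmd_size = sizeof(struct my_cmd);
	dev->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;

	ret = blk_mq_alloc_tag_set(&dev->tag_set);
	if (ret)
		return ret;

	dev->queue = blk_mq_init_queue(&dev->tag_set);
	if (IS_ERR(dev->queue)) {
		blk_mq_free_tag_set(&dev->tag_set);
		return PTR_ERR(dev->queue);
	}
	return 0;
}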
-
Christoph Hellwig authored
If we want to share tag and request allocation between queues we cannot initialize the request at init/free time, but need to initialize it at allocation time as it might get used for different queues over its lifetime. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
The current blk_mq_init_commands/blk_mq_free_commands interface has two problems: 1) Because only the constructor is passed to blk_mq_init_commands, there is no easy way to clean up when a command initialization fails. The current code simply leaks the allocations done in the constructor. 2) There is no good place to call blk_mq_free_commands: before blk_cleanup_queue there is no guarantee that all outstanding commands have completed, so we can't free them yet. After blk_cleanup_queue the queue has usually been freed. This can be worked around by grabbing an unconditional reference before calling blk_cleanup_queue and dropping it after blk_mq_free_commands is done, although that's not exactly pretty and driver writers are guaranteed to get it wrong sooner or later. Both issues are easily fixed by making the request constructor and destructor normal blk_mq_ops methods. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
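A sketch of what moving construction and destruction into blk_mq_ops looks like from a driver's point of view; the my_* hooks are hypothetical and the signatures follow the 3.15-era API:

/* Called once for every preallocated request; failure unwinds cleanly. */
static int my_init_request(void *data, struct request *rq,
			   unsigned int hctx_idx, unsigned int request_idx,
			   unsigned int numa_node)
{
	/* one-time setup of the per-request pdu goes here */
	return 0;
}

/* Matching destructor, called by the core for every request it tears down. */
static void my_exit_request(void *data, struct request *rq,
			    unsigned int hctx_idx, unsigned int request_idx)
{
	/* release whatever my_init_request() acquired */
}

static struct blk_mq_ops my_mq_ops = {
	.queue_rq	= my_queue_rq,
	.map_queue	= blk_mq_map_queue,
	.init_request	= my_init_request,
	.exit_request	= my_exit_request,
};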
-
Christoph Hellwig authored
Drivers shouldn't have to care about the block layer setting aside a request to implement the flush state machine. We already override the mq context and tag to make it more transparent, but so far haven't dealt with the driver private data in the request. Make sure to override this as well, and while we're at it, add a proper helper sitting in blk-mq.c that implements the full impersonation. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
-
Christoph Hellwig authored
Drivers can reach their private data easily using the blk_mq_rq_to_pdu helper and don't need req->special. By not initializing it code can be simplified nicely, and we also shave off a few more instructions from the I/O path. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
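A sketch of the helper and its use; the pdu lives directly behind the request in the same allocation, sized by cmd_size from the tag set:

/* The driver pdu is allocated immediately after the request itself. */
static inline void *rq_to_pdu_sketch(struct request *rq)
{
	return rq + 1;
}

/* Typical use in a driver: struct my_cmd *cmd = blk_mq_rq_to_pdu(rq); */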
-