- 17 Oct, 2023 7 commits
-
-
Ming Lei authored
Rename the mm_lock field of ublk_device to lock, so that this lock can be reused for protecting access to ub->ub_disk, which will be used for simplifying ublk_abort_queue() by quiescing the queue in the next patch. Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20231009093324.957829-5-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Ming Lei authored
ublk_cancel_dev() just calls ublk_cancel_queue() to cancel all pending io commands after the ublk request queue is idle. The only protection needed is for the read & write of ubq->nr_io_ready and to avoid duplicate command cancellation, so add a per-queue lock with a cancel flag to provide this protection, and move ublk_cancel_dev() out of ub->mutex. Then we no longer need to call io_uring_cmd_complete_in_task() to cancel pending commands, and the same cancel logic will be reused for the cancelable uring command. This patch basically reverts commit ac5902f8 ("ublk: fix AB-BA lockdep warning"). Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20231009093324.957829-4-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
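A minimal sketch of the locking idea, using illustrative names rather than the real ublk_queue layout: a per-queue spinlock plus a cancel flag is enough to serialize cancellation and make sure pending commands are only canceled once, without holding ub->mutex.

  #include <linux/spinlock.h>
  #include <linux/types.h>

  /* Sketch only: field and function names are not the actual ublk code. */
  struct ubq_cancel_state {
          spinlock_t cancel_lock;         /* protects the fields below */
          bool canceling;                 /* set once cancellation has started */
          unsigned int nr_io_ready;       /* number of ready io commands */
  };

  /* Returns true only for the first caller, so pending io commands are
   * canceled exactly once; cancel_lock is spin_lock_init()'d at queue setup. */
  static bool ubq_mark_canceling(struct ubq_cancel_state *ubq)
  {
          bool first;

          spin_lock(&ubq->cancel_lock);
          first = !ubq->canceling;
          ubq->canceling = true;
          spin_unlock(&ubq->cancel_lock);

          return first;
  }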
-
Ming Lei authored
In a well-behaved ublk server implementation, ublk io commands won't be linked into any link chain. They are also always handled in no-wait style, so an io command is normally handled in the submitter task context. However, the server may set IOSQE_ASYNC, or an io command may be linked into a chain by mistake, and then we can still run into io-wq context where ctx->uring_lock isn't held. So in the IO_URING_F_UNLOCKED case, schedule the command via io_uring_cmd_complete_in_task() to force it to run in the submitter task. Then ublk_ch_uring_cmd_local() is guaranteed to run with the ctx uring_lock held, and we no longer need to worry about synchronization across the submission code path. Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20231009093324.957829-3-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
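A sketch of the dispatch decision described above; ublk_ch_uring_cmd_local() is the helper named in the commit text, while ublk_ch_uring_cmd_cb and the prototypes shown are assumptions for illustration.

  #include <linux/io_uring.h>

  static void ublk_ch_uring_cmd_cb(struct io_uring_cmd *cmd,
                                   unsigned int issue_flags);   /* hypothetical task-work callback */
  static int ublk_ch_uring_cmd_local(struct io_uring_cmd *cmd,
                                     unsigned int issue_flags); /* runs with ctx->uring_lock held */

  static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
  {
          /* IOSQE_ASYNC or a mistakenly linked command may arrive from io-wq
           * without ctx->uring_lock; punt it back to the submitter task. */
          if (issue_flags & IO_URING_F_UNLOCKED) {
                  io_uring_cmd_complete_in_task(cmd, ublk_ch_uring_cmd_cb);
                  return -EIOCBQUEUED;
          }

          return ublk_ch_uring_cmd_local(cmd, issue_flags);
  }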
-
Ming Lei authored
ublk_abort_queue() is called from ublk_daemon_monitor_work(), where the device is guaranteed to be live because the monitor work is canceled when the device is removed, so there is no need to take a device reference. Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20231009093324.957829-2-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Mike Christie authored
We are converting tcmu applications to ublk, but have systems with up to 1k devices. This patch allows us to configure the maximum number of ublk devices from userspace with the ublks_max modparam. Signed-off-by: Mike Christie <michael.christie@oracle.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20231012150600.6198-3-michael.christie@oracle.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
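A sketch of such a module parameter; the default value and permission bits here are assumptions.

  #include <linux/module.h>

  static unsigned int ublks_max = 64;    /* default cap on ublk devices */
  module_param(ublks_max, uint, 0444);
  MODULE_PARM_DESC(ublks_max, "max number of ublk devices allowed to add (default: 64)");

With this in place, something like `modprobe ublk_drv ublks_max=1024` raises the cap for setups with ~1k devices.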
-
Mike Christie authored
The dev_id/ub_number is used as the minor number of the ublk device's char device, so it has to fit into MINORMASK. This patch adds checks to prevent userspace from passing a number that's too large, and limits what the ublk_index_idr can allocate for the case where userspace has the kernel allocate the dev_id/ub_number. Signed-off-by: Mike Christie <michael.christie@oracle.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20231012150600.6198-2-michael.christie@oracle.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
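A simplified sketch of the two bounds (locking, error paths and GFP flags are assumptions): a user-chosen dev_id must fit the char-device minor space, and kernel-side allocation is confined to the same range.

  #include <linux/idr.h>
  #include <linux/kdev_t.h>   /* MINORMASK */
  #include <linux/errno.h>

  static DEFINE_IDR(ublk_index_idr);

  static int ublk_alloc_dev_id(void *ub, int idx)
  {
          if (idx >= 0) {
                  /* userspace picked the id: it becomes the chardev minor */
                  if (idx > MINORMASK)
                          return -EINVAL;
                  return idr_alloc(&ublk_index_idr, ub, idx, idx + 1, GFP_KERNEL);
          }
          /* let the idr pick, but never an id above MINORMASK */
          return idr_alloc(&ublk_index_idr, ub, 0, MINORMASK + 1, GFP_KERNEL);
  }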
-
Jens Axboe authored
Merge in io_uring fixes, as the ublk changes simplifying cancelations and aborts depend on the two patches from Ming adding cancelation support for uring_cmd.

* for-6.7/io_uring:
  io_uring/kbuf: Use slab for struct io_buffer objects
  io_uring/kbuf: Allow the full buffer id space for provided buffers
  io_uring/kbuf: Fix check of BID wrapping in provided buffers
  io_uring/rsrc: cleanup io_pin_pages()
  io_uring: cancelable uring_cmd
  io_uring: retain top 8bits of uring_cmd flags for kernel internal use
  io_uring: add IORING_OP_WAITID support
  exit: add internal include file with helpers
  exit: add kernel_waitid_prepare() helper
  exit: move core of do_wait() into helper
  exit: abstract out should_wake helper for child_wait_callback()
  io_uring/rw: add support for IORING_OP_READ_MULTISHOT
  io_uring/rw: mark readv/writev as vectored in the opcode definition
  io_uring/rw: split io_read() into a helper
-
- 12 Oct, 2023 1 commit
-
-
Jens Axboe authored
Merge tag 'md-next-20231012' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-6.7/block

Pull MD changes from Song:
"1. Rewrite mddev_suspend(), by Yu Kuai;
 2. Simplify md_seq_ops, by Yu Kuai;
 3. Reduce unnecessary locking in array_state_store(), by Mariusz Tkaczyk."

* tag 'md-next-20231012' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md: (23 commits)
  md: rename __mddev_suspend/resume() back to mddev_suspend/resume()
  md: remove old apis to suspend the array
  md: suspend array in md_start_sync() if array need reconfiguration
  md/raid5: replace suspend with quiesce() callback
  md/md-linear: cleanup linear_add()
  md: cleanup mddev_create/destroy_serial_pool()
  md: use new apis to suspend array before mddev_create/destroy_serial_pool
  md: use new apis to suspend array for ioctls involed array reconfiguration
  md: use new apis to suspend array for adding/removing rdev from state_store()
  md: use new apis to suspend array for sysfs apis
  md/raid5: use new apis to suspend array
  md/raid5-cache: use new apis to suspend array
  md/md-bitmap: use new apis to suspend array for location_store()
  md/dm-raid: use new apis to suspend array
  md: add new helpers to suspend/resume and lock/unlock array
  md: add new helpers to suspend/resume array
  md: replace is_md_suspended() with 'mddev->suspended' in md_check_recovery()
  md/raid5-cache: use READ_ONCE/WRITE_ONCE for 'conf->log'
  md: use READ_ONCE/WRITE_ONCE for 'suspend_lo' and 'suspend_hi'
  md/raid1: don't split discard io for write behind
  ...
-
- 11 Oct, 2023 20 commits
-
-
Song Liu authored
From Yu Kuai, written by Song Liu

Recent tests with raid10 revealed many issues with the following scenarios:
- add or remove disks to the array
- issue io to the array

At first, we fixed each problem independently, accepting that io can run concurrently with array reconfiguration. However, with more issues being reported continuously, I am hoping to fix these problems thoroughly. Refer to how the block layer protects io against queue reconfiguration (for example, changing the elevator):

  blk_mq_freeze_queue
  -> wait for all io to be done, and prevent new io from being dispatched
  // reconfiguration
  blk_mq_unfreeze_queue

I think we can do something similar to synchronize io with array reconfiguration. The current synchronization works as follows. For the reconfiguration operation:
1. Hold 'reconfig_mutex';
2. Check that the rdev can be added/removed; one condition is that there is no IO (for example, check nr_pending).
3. Do the actual operations to add/remove an rdev; one step is to set/clear a pointer to the rdev.
4. Check if there is still no IO on this rdev; if not, revert the change.

The IO path uses rcu_read_lock/unlock() to access the rdev:
- rcu is used wrongly;
- there are lots of places where an old rdev can be read, and many of them don't handle the old value correctly;
- between step 3 and 4, if new io is dispatched, NULL will be read for the rdev, and data will be lost if step 4 fails.

The new synchronization is similar to blk_mq_freeze_queue(). To add or remove a disk:
1. Suspend the array, that is, stop new IO from being dispatched and wait for inflight IO to finish.
2. Add or remove rdevs to/from the array;
3. Resume the array.

The IO path doesn't need to change for now, and all the rcu implementation can be removed.

The main work is divided into 3 steps. First, make sure the new apis to suspend the array are general:
- make sure suspending the array will wait for io to be done (done by [1]);
- make sure suspending the array can be called for all personalities (done by [2]);
- make sure suspending the array can be called at any time (done by [3]);
- make sure suspending the array doesn't rely on 'reconfig_mutex' (PATCH 3-5).

Second, replace the old apis with the new ones (PATCH 6-16). Specifically, the synchronization is changed from:

  lock reconfig_mutex
  suspend array
  make changes
  resume array
  unlock reconfig_mutex

to:

  suspend array
  lock reconfig_mutex
  make changes
  unlock reconfig_mutex
  resume array

Finally, for the remaining paths that involve reconfiguration, suspend the array first (PATCH 11, 12, [4] and PATCH 17).

Preparatory work:
[1] https://lore.kernel.org/all/20230621165110.1498313-1-yukuai1@huaweicloud.com/
[2] https://lore.kernel.org/all/20230628012931.88911-2-yukuai1@huaweicloud.com/
[3] https://lore.kernel.org/all/20230825030956.1527023-1-yukuai1@huaweicloud.com/
[4] https://lore.kernel.org/all/20230825031622.1530464-1-yukuai1@huaweicloud.com/

* md-suspend-rewrite:
  md: rename __mddev_suspend/resume() back to mddev_suspend/resume()
  md: remove old apis to suspend the array
  md: suspend array in md_start_sync() if array need reconfiguration
  md/raid5: replace suspend with quiesce() callback
  md/md-linear: cleanup linear_add()
  md: cleanup mddev_create/destroy_serial_pool()
  md: use new apis to suspend array before mddev_create/destroy_serial_pool
  md: use new apis to suspend array for ioctls involed array reconfiguration
  md: use new apis to suspend array for adding/removing rdev from state_store()
  md: use new apis to suspend array for sysfs apis
  md/raid5: use new apis to suspend array
  md/raid5-cache: use new apis to suspend array
  md/md-bitmap: use new apis to suspend array for location_store()
  md/dm-raid: use new apis to suspend array
  md: add new helpers to suspend/resume and lock/unlock array
  md: add new helpers to suspend/resume array
  md: replace is_md_suspended() with 'mddev->suspended' in md_check_recovery()
  md/raid5-cache: use READ_ONCE/WRITE_ONCE for 'conf->log'
  md: use READ_ONCE/WRITE_ONCE for 'suspend_lo' and 'suspend_hi'
-
Yu Kuai authored
Now that the old apis are removed, __mddev_suspend/resume() can be renamed to their original names. This is done by:
  sed -i "s/__mddev_suspend/mddev_suspend/g" *.[ch]
  sed -i "s/__mddev_resume/mddev_resume/g" *.[ch]
Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-20-yukuai1@huaweicloud.com
-
Yu Kuai authored
Now that mddev_suspend() and mddev_resume() are not used anywhere, remove them, and remove 'MD_ALLOW_SB_UPDATE' and 'MD_UPDATING_SB' as well. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-19-yukuai1@huaweicloud.com
-
Yu Kuai authored
So that io won't run concurrently with array reconfiguration; it's safe to suspend the array directly because normal io doesn't rely on md_start_sync(). Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-18-yukuai1@huaweicloud.com
-
Yu Kuai authored
raid5 is the only personality that suspends the array in the check_reshape() and start_reshape() callbacks. Suspend and the quiesce() callback can both wait for all normal io to be done and prevent new io from being dispatched; the difference is that suspend is implemented in the common layer while the quiesce() callback is implemented in raid5. In order to clean up all the usage of mddev_suspend(), the new api __mddev_suspend() needs to be called before 'reconfig_mutex' is held, and it's not good to affect all the personalities in the common layer just for raid5. Hence replace suspend with the quiesce() callback, preparing to remove all the users of mddev_suspend(). Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-17-yukuai1@huaweicloud.com
-
Yu Kuai authored
Now that the caller already suspends the array, there is no need to suspend it in linear_add(). Note that mddev_suspend/resume() is not used anymore. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-16-yukuai1@huaweicloud.com
-
Yu Kuai authored
Now that, except for stopping the array, all the callers already suspend the array, there is no need to suspend it here anymore; hence remove the second parameter. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-15-yukuai1@huaweicloud.com
-
Yu Kuai authored
mddev_create/destroy_serial_pool() will be called from several places where mddev_suspend() will be called later. Prepare to remove the mddev_suspend() from mddev_create/destroy_serial_pool(). Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-14-yukuai1@huaweicloud.com
-
Yu Kuai authored
'reconfig_mutex' will be grabbed before these ioctls; suspend the array before holding the lock, so that io won't run concurrently with array reconfiguration through ioctls. This is not a hot path, so performance is not a concern. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-13-yukuai1@huaweicloud.com
-
Yu Kuai authored
Users can write 'remove' and 're-add' to trigger array reconfiguration through sysfs; suspend the array in this case so that io won't run concurrently with the reconfiguration. And now that all callers of add_bound_rdev() already suspend the array, remove mddev_suspend/resume() from add_bound_rdev() as well. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-12-yukuai1@huaweicloud.com
-
Yu Kuai authored
Convert to use the new apis in the following sysfs apis:
  - level_store
  - suspend_lo_store
  - suspend_hi_store
  - serialize_policy_store
These are not hot paths, so performance is not a concern. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-11-yukuai1@huaweicloud.com
-
Yu Kuai authored
Convert to use the new apis; the old ones will be removed eventually. These are not hot paths, so performance is not a concern. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-10-yukuai1@huaweicloud.com
-
Yu Kuai authored
Convert to use the new apis; the old ones will be removed eventually. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-9-yukuai1@huaweicloud.com
-
Yu Kuai authored
Convert to use the new apis; the old ones will be removed eventually. This is not a hot path, so performance is not a concern. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-8-yukuai1@huaweicloud.com
-
Yu Kuai authored
Convert to use the new apis; the old ones will be removed eventually. These are not hot paths, so performance is not a concern. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-7-yukuai1@huaweicloud.com
-
Yu Kuai authored
The new helpers suspend the array first and then lock it. Prepare to refactor from:
  mddev_lock/lock_nointr
  mddev_suspend
  ...
  mddev_resume
  mddev_unlock
to:
  mddev_suspend_and_lock/lock_nointr
  ...
  mddev_unlock_and_resume
After all the use cases are refactored, mddev_suspend/resume() will be removed. And mddev_suspend_and_lock() will also replace mddev_lock() for the cases where the array will be reconfigured, in order to synchronize with io and prevent problems in many corner cases. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-6-yukuai1@huaweicloud.com
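A sketch of what a converted call site looks like; the helper names come from this series, while the prototypes and error handling shown are assumptions.

  struct mddev;
  int mddev_suspend_and_lock(struct mddev *mddev);      /* suspend, then take reconfig_mutex */
  void mddev_unlock_and_resume(struct mddev *mddev);    /* drop reconfig_mutex, then resume io */

  static int reconfigure_example(struct mddev *mddev)
  {
          int err = mddev_suspend_and_lock(mddev);

          if (err)
                  return err;

          /* ... make changes that must not race with normal io ... */

          mddev_unlock_and_resume(mddev);
          return 0;
  }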
-
Yu Kuai authored
Advantages of the new apis:
  - reconfig_mutex is not required;
  - the weird logic where suspending the array holds 'reconfig_mutex' for md_check_recovery() to update the superblock is not needed;
  - the special handling, 'pers->prepare_suspend', for raid456 is not needed;
  - it's safe to call them at any time once mddev is allocated, and they are designed to be used from slow paths where the array configuration is changed;
  - the new helpers are designed to be called before mddev_lock(), hence they can be interrupted by the user as well.
Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-5-yukuai1@huaweicloud.com
-
Yu Kuai authored
Prepare to clean up pers->prepare_suspend(), which is used to fix a deadlock in raid456 by returning an error for io that is waiting for reshape to make progress in mddev_suspend(). This change will allow reshape to make progress while waiting for io to be done in mddev_suspend() in the following patches. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-4-yukuai1@huaweicloud.com
-
Yu Kuai authored
'conf->log' is set with 'reconfig_mutex' grabbed; however, readers are not protected, hence protect it with READ_ONCE/WRITE_ONCE to prevent reading abnormal values. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-3-yukuai1@huaweicloud.com
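A minimal sketch of the pattern, using stand-in types rather than the raid5-cache structures.

  #include <linux/compiler.h>   /* READ_ONCE/WRITE_ONCE */
  #include <linux/types.h>

  struct r5l_log;

  struct conf_example {
          struct r5l_log *log;   /* written under reconfig_mutex, read locklessly */
  };

  static bool log_enabled(struct conf_example *conf)
  {
          /* reader: a single load the compiler cannot tear or replay */
          return READ_ONCE(conf->log) != NULL;
  }

  static void install_log(struct conf_example *conf, struct r5l_log *log)
  {
          /* writer: caller holds reconfig_mutex */
          WRITE_ONCE(conf->log, log);
  }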
-
Yu Kuai authored
Protect 'suspend_lo' and 'suspend_hi' with READ_ONCE/WRITE_ONCE to prevent reading abnormal values. Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231010151958.145896-2-yukuai1@huaweicloud.com
-
- 09 Oct, 2023 1 commit
-
-
Yu Kuai authored
Currently, discard io is treated the same as normal write io, and for the write-behind case, io size is limited to BIO_MAX_VECS * (PAGE_SIZE >> 9) sectors. For a 0.5KB sector size and 4KB PAGE_SIZE, this is just 1MB. As a consequence, if 'WriteMostly' is set on one of the underlying disks, then discard io will be split into 1MB chunks and it will take a long time for the discard to finish. Fix this problem by disabling write behind for discard io. Reported-by: Roman Mamedov <rm@romanrm.net> Closes: https://lore.kernel.org/all/6a1165f7-c792-c054-b8f0-1ad4f7b8ae01@ultracoder.org/ Reported-and-tested-by: Kirill Kirilenko <kirill@ultracoder.org> Signed-off-by: Yu Kuai <yukuai3@huawei.com> Signed-off-by: Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20231007112105.407449-1-yukuai1@huaweicloud.com
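A sketch of the shape of the fix, not the exact raid1 hunk; rdev_is_write_mostly stands in for raid1's test_bit(WriteMostly, &rdev->flags) check.

  #include <linux/bio.h>

  /* Without this, a discard inherited the behind-io limit of
   * BIO_MAX_VECS * (PAGE_SIZE >> 9) sectors -- 1 MiB with 4K pages -- and was
   * split accordingly. */
  static bool use_write_behind(bool rdev_is_write_mostly, struct bio *bio)
  {
          return rdev_is_write_mostly && bio_op(bio) != REQ_OP_DISCARD;
  }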
-
- 05 Oct, 2023 3 commits
-
-
Gabriel Krisman Bertazi authored
The allocation of struct io_buffer for metadata of provided buffers is done through a custom allocator that directly gets pages and fragments them. But slab would do just fine, as this is not a hot path (in fact, it is a deprecated feature) and, by keeping a custom allocator implementation, we lose benefits like tracking, poisoning and sanitizers. Finally, the custom code is more complex and requires keeping the list of pages in struct ctx for no good reason. This patch cleans this path up and just uses slab. I microbenchmarked it by forcing the allocation of a large number of objects with the least number of io_uring commands possible (keeping nbufs=USHRT_MAX), with and without the patch. There is a slight increase in time spent in the allocation with slab, of course, but even when allocating until system resource exhaustion, which is not very realistic and happened at around 1/2 billion provided buffers for me, it wasn't a significant hit in system time. Especially if we think of a real-world scenario, an application doing register/unregister of provided buffers will hit ctx->io_buffers_cache more often than actually going to slab. Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de> Link: https://lore.kernel.org/r/20231005000531.30800-4-krisman@suse.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
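A rough sketch of what "just uses slab" can look like; the cache name, flags and the stand-in struct layout are assumptions, not the io_uring code.

  #include <linux/slab.h>
  #include <linux/list.h>
  #include <linux/types.h>
  #include <linux/errno.h>

  /* stand-in with a plausible layout for io_uring's internal struct io_buffer */
  struct io_buffer_example {
          struct list_head list;
          __u64 addr;
          __u32 len;
          __u16 bid;
          __u16 bgid;
  };

  static struct kmem_cache *io_buf_cachep;

  static int io_buffer_cache_init(void)
  {
          io_buf_cachep = kmem_cache_create("io_buffer",
                                            sizeof(struct io_buffer_example),
                                            0, SLAB_ACCOUNT, NULL);
          return io_buf_cachep ? 0 : -ENOMEM;
  }

  static struct io_buffer_example *io_buffer_alloc(void)
  {
          return kmem_cache_alloc(io_buf_cachep, GFP_KERNEL_ACCOUNT);
  }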
-
Gabriel Krisman Bertazi authored
nbufs tracks the number of buffers and not the last bgid. With 16 bits we have 2^16 valid buffers, but the check mistakenly rejects the last bid. Let's fix it to make the interface consistent with the documentation. Fixes: ddf0322d ("io_uring: add IORING_OP_PROVIDE_BUFFERS") Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de> Link: https://lore.kernel.org/r/20231005000531.30800-3-krisman@suse.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Gabriel Krisman Bertazi authored
Commit 3851d25c ("io_uring: check for rollover of buffer ID when providing buffers") introduced a check to prevent wrapping the BID counter when sqe->off is provided, but it is off by one and too restrictive, rejecting the last possible BID (65534). I.e., the following fails with -EINVAL:
  io_uring_prep_provide_buffers(sqe, addr, size, 0xFFFF, 0, 0);
Fixes: 3851d25c ("io_uring: check for rollover of buffer ID when providing buffers") Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de> Link: https://lore.kernel.org/r/20231005000531.30800-2-krisman@suse.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
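An illustrative version of the corrected bound, not the actual io_uring code: BIDs are 16-bit, so the last valid id is USHRT_MAX, and a range may only be refused when its last id would exceed that.

  #include <linux/errno.h>
  #include <linux/limits.h>   /* USHRT_MAX */
  #include <linux/types.h>

  static int check_bid_range(u32 first_bid, u32 nbufs)
  {
          if (!nbufs)
                  return -EINVAL;
          /* reject only if the *last* id, first_bid + nbufs - 1, overflows the
           * 16-bit space; a ">=" style check here is off by one. */
          if (first_bid + nbufs - 1 > USHRT_MAX)
                  return -E2BIG;
          return 0;
  }

With this, the example from the commit message (nbufs=0xFFFF starting at bid 0, last bid 65534) passes, while a range ending past 65535 is still rejected.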
-
- 04 Oct, 2023 5 commits
-
-
Jan Höppner authored
The length values for the volume label type and volume label id are hard-coded in several places. Provide defines for those values and replace all occurrences accordingly. Note that a length is defined and used, and not a size, since the volume label type string and volume label id string are not NUL-terminated. Signed-off-by: Jan Höppner <hoeppner@linux.ibm.com> Reviewed-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Link: https://lore.kernel.org/r/20230915131001.697070-4-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
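A sketch with hypothetical macro names and values, just to show the length-not-size point.

  #define DASD_VOL_TYPE_LEN 4   /* e.g. "VOL1", "LNX1", "CMS1" */
  #define DASD_VOL_ID_LEN   6   /* volume serial */

  struct vol_label_example {
          char type[DASD_VOL_TYPE_LEN];   /* fixed width, no trailing NUL */
          char id[DASD_VOL_ID_LEN];
  };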
-
Jan Höppner authored
strncpy() is deprecated and needs to be replaced. The volume label information strings are not NUL-terminated, and strncpy() can simply be replaced with memcpy(). To enhance the readability of find_label() alongside this change, the following improvements are made:
  - Introduce the array dasd_vollabels[] containing all information necessary for label detection.
  - Provide a helper function to obtain an index value corresponding to a volume label type. This allows the use of a switch statement to reduce indentation levels.
  - The 'temp' variable is used to check against valid volume label types. In the good case, this variable already contains the volume label type, making it unnecessary to copy the information again from e.g. label->vol.vollbl. Remove the 'temp' variable and the second copy as all information is already provided.
  - Remove the 'found' variable and replace it with early returns.
Signed-off-by: Jan Höppner <hoeppner@linux.ibm.com> Reviewed-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Link: https://lore.kernel.org/r/20230915131001.697070-3-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
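An illustrative table-driven version of the idea; the names, layout and ASCII strings are assumptions rather than the dasd code.

  #include <linux/kernel.h>
  #include <linux/string.h>

  enum vol_type_idx { VOL_TYPE_VOL1, VOL_TYPE_LNX1, VOL_TYPE_CMS1, VOL_TYPE_UNKNOWN };

  static const char *const vol_types_example[] = {
          [VOL_TYPE_VOL1] = "VOL1",
          [VOL_TYPE_LNX1] = "LNX1",
          [VOL_TYPE_CMS1] = "CMS1",
  };

  /* Map a raw 4-byte label type field to an index so the caller can use a
   * switch statement; memcmp/memcpy are used since the field has no NUL. */
  static enum vol_type_idx vol_type_index(const char *raw)
  {
          unsigned int i;

          for (i = 0; i < ARRAY_SIZE(vol_types_example); i++)
                  if (!memcmp(raw, vol_types_example[i], 4))
                          return i;
          return VOL_TYPE_UNKNOWN;
  }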
-
Jan Höppner authored
The data holding the volume label information is zeroed in case no valid volume label was found. Since the label information isn't used in that case, zeroing the data doesn't provide any value whatsoever. Remove the unnecessary memset() call accordingly. Signed-off-by: Jan Höppner <hoeppner@linux.ibm.com> Reviewed-by: Stefan Haberland <sth@linux.ibm.com> Signed-off-by: Stefan Haberland <sth@linux.ibm.com> Link: https://lore.kernel.org/r/20230915131001.697070-2-sth@linux.ibm.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Justin Stitt authored
`strncpy` is deprecated for use on NUL-terminated destination strings [1]. `aoe_iflist` is expected to be NUL-terminated, which is evident by its use with string apis later on like `strspn`:
  | p = aoe_iflist + strspn(aoe_iflist, WHITESPACE);
It also seems `aoe_iflist` does not need to be NUL-padded, which means `strscpy` [2] is a suitable replacement due to the fact that it guarantees NUL-termination on the destination buffer while not unnecessarily NUL-padding.
Link: https://www.kernel.org/doc/html/latest/process/deprecated.html#strncpy-on-nul-terminated-strings [1]
Link: https://manpages.debian.org/testing/linux-manual-4.8/strscpy.9.en.html [2]
Link: https://github.com/KSPP/linux/issues/90
Cc: linux-hardening@vger.kernel.org Cc: Kees Cook <keescook@chromium.org> Cc: Xu Panda <xu.panda@zte.com.cn> Cc: Yang Yang <yang.yang29@zte.com> Signed-off-by: Justin Stitt <justinstitt@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20230919-strncpy-drivers-block-aoe-aoenet-c-v2-1-3d5d158410e9@google.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
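A sketch of the substitution; the buffer size, function name and warning are assumptions.

  #include <linux/printk.h>
  #include <linux/string.h>

  static char aoe_iflist[256];

  static void aoe_iflist_set(const char *value)
  {
          /* strscpy() bounds the copy, always NUL-terminates and returns
           * -E2BIG on truncation, unlike strncpy(). */
          if (strscpy(aoe_iflist, value, sizeof(aoe_iflist)) < 0)
                  pr_warn("aoe: interface list truncated\n");
  }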
-
Justin Stitt authored
`strncpy` is deprecated for use on NUL-terminated destination strings [1]. We should favor a more robust and less ambiguous interface. We expect that both `nullb->disk_name` and `disk->disk_name` be NUL-terminated:
  | snprintf(nullb->disk_name, sizeof(nullb->disk_name),
  |          "%s", config_item_name(&dev->group.cg_item));
  ...
  | pr_info("disk %s created\n", nullb->disk_name);
It seems like NUL-padding may be required due to __assign_disk_name() utilizing a memcpy as opposed to a `str*cpy` api:
  | static inline void __assign_disk_name(char *name, struct gendisk *disk)
  | {
  |         if (disk)
  |                 memcpy(name, disk->disk_name, DISK_NAME_LEN);
  |         else
  |                 memset(name, 0, DISK_NAME_LEN);
  | }
Then we go and print it with `__print_disk_name` which wraps `nullb_trace_disk_name()`:
  | #define __print_disk_name(name) nullb_trace_disk_name(p, name)
This function obviously expects a NUL-terminated string:
  | const char *nullb_trace_disk_name(struct trace_seq *p, char *name)
  | {
  |         const char *ret = trace_seq_buffer_ptr(p);
  |
  |         if (name && *name)
  |                 trace_seq_printf(p, "disk=%s, ", name);
  |         trace_seq_putc(p, 0);
  |
  |         return ret;
  | }
From the above, we need both 1) a NUL-terminated string and 2) a NUL-padded string. So, let's use strscpy_pad() as per Kees' suggestion from v1.
Link: https://www.kernel.org/doc/html/latest/process/deprecated.html#strncpy-on-nul-terminated-strings [1]
Link: https://github.com/KSPP/linux/issues/90
Cc: linux-hardening@vger.kernel.org Cc: Kees Cook <keescook@chromium.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Nathan Chancellor <nathan@kernel.org> Signed-off-by: Justin Stitt <justinstitt@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20230919-strncpy-drivers-block-null_blk-main-c-v3-1-10cf0a87a2c3@google.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
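A sketch of the pad variant; DISK_NAME_LEN_EXAMPLE stands in for the kernel's DISK_NAME_LEN and the helper is illustrative.

  #include <linux/string.h>

  #define DISK_NAME_LEN_EXAMPLE 32

  static void copy_disk_name(char dst[DISK_NAME_LEN_EXAMPLE], const char *src)
  {
          /* NUL-terminates like strscpy() and also zero-fills the remainder,
           * so a later memcpy() of the full buffer cannot pick up stale bytes. */
          strscpy_pad(dst, src, DISK_NAME_LEN_EXAMPLE);
  }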
-
- 03 Oct, 2023 2 commits
-
-
Joel Granados authored
This commit comes at the tail end of a greater effort to remove the empty elements at the end of the ctl_table arrays (sentinels), which will reduce the overall build-time size of the kernel and run-time memory bloat by ~64 bytes per sentinel (further information: https://lore.kernel.org/all/ZO5Yx5JFogGi%2FcBo@bombadil.infradead.org/). Remove the sentinel element from cdrom_table. Signed-off-by: Joel Granados <j.granados@samsung.com> Link: https://lore.kernel.org/lkml/20231002-jag-sysctl_remove_empty_elem_drivers-v2-1-02dd0d46f71e@samsung.com Reviewed-by: Phillip Potter <phil@philpotter.co.uk> Link: https://lore.kernel.org/lkml/20231002214528.15529-1-phil@philpotter.co.uk Signed-off-by: Phillip Potter <phil@philpotter.co.uk> Link: https://lore.kernel.org/r/20231002220203.15637-2-phil@philpotter.co.uk Signed-off-by: Jens Axboe <axboe@kernel.dk>
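A sketch of a sentinel-free table; the entry shown is illustrative. The series relies on register_sysctl() passing the table size via ARRAY_SIZE(), so the empty terminator entry is no longer needed.

  #include <linux/sysctl.h>

  static int cdrom_autoclose_example;

  static struct ctl_table cdrom_table_example[] = {
          {
                  .procname     = "autoclose",
                  .data         = &cdrom_autoclose_example,
                  .maxlen       = sizeof(int),
                  .mode         = 0644,
                  .proc_handler = proc_dointvec,
          },
          /* previously: an all-zero { } sentinel terminated the array here */
  };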
-
Jens Axboe authored
This function is overly convoluted, with a goto error path and checks under the mmap_read_lock() that don't need to be there at all. Rearrange it a bit so the checks and errors fall out naturally, rather than needing to jump around for them. Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
- 29 Sep, 2023 1 commit
-
-
Jens Axboe authored
Merge tag 'md-next-20230927' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-6.7/block

Pull MD updates from Song:
"1. Make rdev add/remove independent from daemon thread, by Yu Kuai;
 2. Refactor code around quiesce() and mddev_suspend(), by Yu Kuai."

* tag 'md-next-20230927' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
  md: replace deprecated strncpy with memcpy
  md/md-linear: Annotate struct linear_conf with __counted_by
  md: don't check 'mddev->pers' and 'pers->quiesce' from suspend_lo_store()
  md: don't check 'mddev->pers' from suspend_hi_store()
  md-bitmap: suspend array earlier in location_store()
  md-bitmap: remove the checking of 'pers->quiesce' from location_store()
  md: don't rely on 'mddev->pers' to be set in mddev_suspend()
  md: initialize 'writes_pending' while allocating mddev
  md: initialize 'active_io' while allocating mddev
  md: delay remove_and_add_spares() for read only array to md_start_sync()
  md: factor out a helper rdev_addable() from remove_and_add_spares()
  md: factor out a helper rdev_is_spare() from remove_and_add_spares()
  md: factor out a helper rdev_removeable() from remove_and_add_spares()
  md: delay choosing sync action to md_start_sync()
  md: factor out a helper to choose sync action from md_check_recovery()
  md: use separate work_struct for md_start_sync()
-