- 25 Apr, 2022 5 commits
-
-
Pavel Begunkov authored
Don't hand-code wq_stack_add_head() to ->free_list, which serves for recycling io_kiocb; add a helper doing it for us. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/f206f575486a8dd3d52f074ab37ed146b2d215b7.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
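For readers skimming the log, a minimal sketch of the recycling pattern this refers to is below; the stand-in struct layouts and the io_req_add_to_cache() name are illustrative assumptions, not necessarily the exact upstream definitions.

/* stand-in types: ->free_list is an intrusive stack of recyclable requests */
struct io_wq_work_node {
	struct io_wq_work_node *next;
};

struct io_kiocb {
	struct io_wq_work_node comp_list;	/* link used while the req sits in the cache */
	/* ... */
};

struct io_ring_ctx {
	struct {
		struct io_wq_work_node free_list;	/* stack head */
	} submit_state;
	/* ... */
};

/* generic stack push, as wq_stack_add_head() does */
static inline void wq_stack_add_head(struct io_wq_work_node *node,
				     struct io_wq_work_node *stack)
{
	node->next = stack->next;
	stack->next = node;
}

/* the kind of helper this commit adds: recycle a request into the cache */
static inline void io_req_add_to_cache(struct io_kiocb *req, struct io_ring_ctx *ctx)
{
	wq_stack_add_head(&req->comp_list, &ctx->submit_state.free_list);
}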
-
Pavel Begunkov authored
Add io_req_cache_empty(), which checks if there are requests in the inline req cache or not. It'll be needed in the future, but also nicely cleans up a few spots poking into ->free_list directly. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/b18662389f3fb483d0bd07906647f65f6037475a.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
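Purely illustrative, reusing the stand-in structs from the sketch above: the check boils down to whether the free-list stack head points at anything.

/* sketch: true if the inline request cache has nothing to hand out */
static inline bool io_req_cache_empty(struct io_ring_ctx *ctx)
{
	return !ctx->submit_state.free_list.next;
}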
-
Pavel Begunkov authored
io_flush_cached_reqs() isn't descriptive and has only one caller, so inline it into __io_alloc_req_refill(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/ec38abe65a883d9fe6b169793119ce86806655a4.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
All good users should not set IOSQE_IO_*LINK flags for the last request of a link. io_uring flushes collected links at the end of submission, but it's not the optimal way, so we don't care too much about it. Replace the io_queue_sqe() call with io_queue_sqe_fallback(), as the former is inlined and would generate a bunch of extra code. This will also help compilers with the submission path inlining.
size ./fs/io_uring.o (before): text 87265, data 13734, bss 8, dec 101007, hex 18a8f
size ./fs/io_uring.o (after):  text 87073, data 13734, bss 8, dec 100815, hex 189cf
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/01fb5e417ef49925d544a0b0bae30409845ed2b4.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
We can do CQE filling a bit more efficiently when req->cqe is fully filled, by memcpy()'ing it to userspace instead of doing it field by field. It's easier on register spilling, removes a couple of extra loads/stores and write-combines two u32 memory writes. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/ee3f514ff28b1fe3347a8eca93a9d91647f2eaad.1649771823.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
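A hedged sketch of the difference, using the stable UAPI CQE layout (user_data, res, flags); the function names here are illustrative, not the kernel's.

#include <stdint.h>
#include <string.h>

struct io_uring_cqe {		/* 16-byte UAPI layout (without the big-CQE extras) */
	uint64_t user_data;
	int32_t  res;
	uint32_t flags;
};

/* before: three separate stores, more register pressure */
static void cqe_fill_fields(struct io_uring_cqe *dst, const struct io_uring_cqe *src)
{
	dst->user_data = src->user_data;
	dst->res       = src->res;
	dst->flags     = src->flags;
}

/* after: one memcpy of the whole CQE; the compiler is free to merge the two
 * 32-bit stores (res + flags) into a single 64-bit write */
static void cqe_fill_memcpy(struct io_uring_cqe *dst, const struct io_uring_cqe *src)
{
	memcpy(dst, src, sizeof(*dst));
}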
-
- 24 Apr, 2022 29 commits
-
-
Pavel Begunkov authored
We already have req->{result,user_data,cflags}, which mimic struct io_uring_cqe and are intended to store CQE data. Combine them into a struct io_uring_cqe field. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/e1efe65d5005cd6a9ec3440767eb15a9fa9351cf.1649771823.git.asml.silence@gmail.com [axboe: add mirror cqe to cater to fd union] Signed-off-by: Jens Axboe <axboe@kernel.dk>
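Roughly, the change looks like the sketch below, continuing the CQE layout shown earlier; the field sets are simplified, the real io_kiocb has many more members, and the mirror-cqe detail from the [axboe] note is omitted.

/* before: loose fields that only mirror a CQE */
struct io_kiocb_old {
	uint64_t user_data;
	int32_t  result;
	uint32_t cflags;
	/* ... */
};

/* after: the same data stored as a ready-to-post CQE */
struct io_kiocb_new {
	struct io_uring_cqe cqe;	/* { user_data, res, flags }, see the sketch above */
	/* ... */
};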
-
Pavel Begunkov authored
Rename io_sqe_file_register(), so the name better reflects what the function is doing. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/d5091518883786969e244d2f0854a47bbdaa5061.1649334991.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Merge io_sqe_file_register() and __io_sqe_files_scm(). The only real difference left between them is from where we get an skb. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/dddda3039c71fcbec24b3465cbe8c7e7ae7bb0e8.1649334991.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
There is an old API nuisance where io_uring's SCM accounting functions traverse fixed file tables and so require them to be set in advance, which leads to some implicit rules of how io_sqe_file_register() should be used. __io_sqe_files_scm() now works with only one file at a time, so pass a file directly and get rid of all fixed table dereferencing inside. Clean up io_sqe_file_register() callers. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/fb32031d892e61a7748c70da7999725d5e798671.1649334991.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
__io_sqe_files_scm() is now called only from one place, passing a single file, so the nr argument can be killed and __io_sqe_files_scm() simplified. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/66b492bc66dc8356d45d64076bb31d677d11a7c9.1649334991.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Channel all SCM accounting through io_sqe_file_register(), so we do it uniformly for updates and initial registration and can kill duplicated code. Registration might be slightly slower in some cases, but since we now skip most of the SCM accounting in the first place, it's not a problem. Moreover, it's nicer for an empty set registration, as we don't even try to allocate an skb for them anymore. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/6c9afbeb22812777d0c43e52353b63db5b87ed1e.1649334991.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
io_uring deals with file reference loops by registering all fixed files in the SCM/GC infrastructure. However, only a small subset of all file types can keep long-term references to other files, and those that don't are not interesting for the garbage collector, as they can't be in a reference loop. They can neither be directly recycled by GC nor affect loop searching. Let's skip io_uring SCM accounting for loop-less files, i.e. all but af_unix sockets, considerably improving fixed file update performance and greatly helping with memory footprint. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/9c44ecf6e89d69130a8c4360cce2183ffc5ddd6f.1649277098.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Jens Axboe authored
We don't need to call this for every loop. This is particularly troublesome if we are task_work intensive, and get woken more often than we desire due to that. Just do it at the end, that's always safe as we initialize the waitqueue list head anyway. This can save a considerable amount of hammering on the waitqueue lock, which is also hot from the request completion side. Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
A small refactoring for io_req_add_compl_list() deduplicating some code. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/f0a5272b45efe4ffc41cb79b99784e39c699aade.1648209006.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Some tooling keeps complaining about self-assignment in io_for_each_link(); the code is correct, but let's work around it anyway. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/f0de77b0b0f8309554ba6fba34327b7813bcc3ff.1648209006.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
In most cases io_put_task() is called from the submitter task and goes through a highly optimised fast path, which has to be inlined. The other branch, though, is bulkier, and we don't care about it as much because it implies atomics and other heavy calls. Extract it into a helper, which is expected not to be inlined.
size ./fs/io_uring.o (before): text 89328, data 13646, bss 8, dec 102982, hex 19246
size ./fs/io_uring.o (after):  text 89096, data 13646, bss 8, dec 102750, hex 1915e
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/dec213db0e0b8605132da81e0a0be687a4d140cb.1648209006.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
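The shape of the split is roughly as below; the types and names are stand-ins, since the real code works on struct io_uring_task counters and kernel atomics.

/* stand-in for the per-task io_uring state */
struct toy_tctx {
	unsigned long cached_refs;	/* touched only by the submitter task */
	long remote_refs;		/* the real code uses atomics here */
};

/* bulky branch kept out of line; the real code does atomics and wakeups here */
__attribute__((noinline))
static void toy_put_task_remote(struct toy_tctx *tctx, int nr)
{
	tctx->remote_refs -= nr;	/* placeholder for the atomic slow path */
}

/* hot path stays small and inlinable */
static inline void toy_put_task(struct toy_tctx *tctx, int nr, int is_submitter)
{
	if (is_submitter)
		tctx->cached_refs += nr;	/* fast: plain arithmetic, no atomics */
	else
		toy_put_task_remote(tctx, nr);	/* rare, heavy path */
}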
-
Pavel Begunkov authored
Refactor io_ring_submit_[un]lock(), make it accept issue_flags and remove manual IO_URING_F_UNLOCKED checks. It also allows us to place lockdep annotations inside instead of sprinkling them in a bunch of places. There is only one user that doesn't fit now, so hand-code locking in __io_rsrc_put_work(). Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/e55c2c06767676a801252e8094c9ab09912487a4.1648209006.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
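A sketch of the resulting helpers, assuming the kernel mutex and lockdep APIs; the IO_URING_F_UNLOCKED bit value and the trimmed-down ctx struct are assumptions, but the semantics match the description: if the issue path came in unlocked, take the lock, and always assert it is held afterwards.

#include <linux/mutex.h>
#include <linux/lockdep.h>

#define IO_URING_F_UNLOCKED	(1U << 0)	/* illustrative bit value */

struct io_ring_ctx {			/* stand-in, only the lock member shown */
	struct mutex uring_lock;
	/* ... */
};

static inline void io_ring_submit_lock(struct io_ring_ctx *ctx, unsigned int issue_flags)
{
	if (issue_flags & IO_URING_F_UNLOCKED)
		mutex_lock(&ctx->uring_lock);
	lockdep_assert_held(&ctx->uring_lock);
}

static inline void io_ring_submit_unlock(struct io_ring_ctx *ctx, unsigned int issue_flags)
{
	lockdep_assert_held(&ctx->uring_lock);
	if (issue_flags & IO_URING_F_UNLOCKED)
		mutex_unlock(&ctx->uring_lock);
}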
-
Pavel Begunkov authored
Both submission and iopolling require holding uring_lock. IOPOLL users can do both in a single syscall; however, it would still do two pairs of lock/unlock. Optimise this case by combining the locking into one lock/unlock pair, which is especially nice for low QD. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/034b6c41658648ad3ad3c9485ac8eb546f010bc4.1647957378.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
The syscall should only iopoll for events when it's an IOPOLL ring and SQPOLL is not in use. Instead of checking both flags every time, we can save the decision in the ring state so it's easier to use. We don't care much about an extra 'if' there, but it would be inconvenient to copy-paste this chunk of checks in future patches. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/7fd2f8fc2606305aa06dd8c0ff8f76a66b39c383.1647957378.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
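For illustration, the precompute-at-setup idea looks something like this; the flag and field names are stand-ins for the real IORING_SETUP_* bits and ctx field.

#define TOY_SETUP_IOPOLL	(1U << 0)	/* stand-in for IORING_SETUP_IOPOLL */
#define TOY_SETUP_SQPOLL	(1U << 1)	/* stand-in for IORING_SETUP_SQPOLL */

struct toy_ring {
	unsigned int flags;
	unsigned char syscall_iopoll;	/* decision cached at ring setup */
};

static void toy_ring_setup(struct toy_ring *ring, unsigned int flags)
{
	ring->flags = flags;
	/* iopoll from the syscall only on IOPOLL rings without SQPOLL */
	ring->syscall_iopoll = (flags & TOY_SETUP_IOPOLL) &&
			       !(flags & TOY_SETUP_SQPOLL);
}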
-
Pavel Begunkov authored
IOPOLL doesn't use additional arguments like sigsets, but it still needs some basic verification, which is currently done by io_get_ext_arg(). This patch adds a separate function for the IOPOLL path, which is a bit simpler and doesn't do the extra work. This prepares us for further patches, which would otherwise have hurt inlining in the hot path. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/71b23fca412e3374b74be7711cfd42a3d9d5dfe0.1647957378.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Move the fast check out of io_queue_next(); it makes the req->flags checks in __io_submit_flush_completions() a bit clearer and grants us better control, e.g. we can remove the now-unjustified unlikely() in __io_submit_flush_completions(). Also, we don't care about having this check in io_free_req(), as the function is a slow path and io_req_find_next() handles it correctly. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/1f9e1cc80adbb11b37017d511df4a2c6141a3f08.1647897811.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
There is a new (req->flags & REQ_F_POLLED) check in __io_submit_flush_completions() for poll recycling; however, io_free_batch_list() is a much better place for it. First, we prefer it after putting the last req ref, just to avoid potential problems in the future. Also, it'll enable the recycling for IOPOLL and place it closer to where all the other req->flags bits are cleaned up. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/31dfe1dafda66ba3ce36b301884ec7e162c777d1.1647897811.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
We do several req->flags checks in the fast path of io_free_batch_list(): one explicit check of REQ_F_REFCOUNT, and two others hidden in io_queue_next() and io_dismantle_req(). Moreover, there is an io_req_put_rsrc_locked() call in between, so there is no hope req->flags will be preserved in registers. All those flags are, if not a slow path, then definitely a slower path, so put them all under a single flags mask check and save several memory reloads and ifs. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/0fb493f73f2009aea395c570c2932fecaa4e1244.1647897811.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
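The pattern reads roughly as follows; the flag bits and the composite mask name are illustrative, not the exact kernel constants.

#define REQ_F_REFCOUNT	(1U << 0)	/* illustrative bit assignments */
#define REQ_F_LINK	(1U << 1)
#define REQ_F_INFLIGHT	(1U << 2)

/* everything that forces the slower teardown path */
#define REQ_F_SLOW_CLEANUP	(REQ_F_REFCOUNT | REQ_F_LINK | REQ_F_INFLIGHT)

static void toy_free_one(unsigned int req_flags)
{
	/* one load and one branch in the common case... */
	if (req_flags & REQ_F_SLOW_CLEANUP) {
		/* ...instead of testing each flag separately: drop the refcount,
		 * queue the next linked request, dismantle per-req state, etc. */
	}
	/* fast-path freeing continues here */
}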
-
Pavel Begunkov authored
Move the fast path from io_req_find_next() into callers. It prepares us for further changes. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/10bd0e564472dde0c7f8d90ae317d05356cd565a.1647897811.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
Now io_commit_cqring() is simple and it tolerates well being called without a new CQE filled, so kill a bunch of no-longer-needed guards. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/36aed692dff402bba00a444a63a9cd2e97a340ea.1647897811.git.asml.silence@gmail.com [axboe: fold in followup fix] Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Pavel Begunkov authored
There should be no completions stashed when we first get into tctx_task_work(), so move the completion flushing checks a bit later, after we've had a chance to execute some task work. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Link: https://lore.kernel.org/r/c6765c804f3c438591b9825ab9c43d22039073c4.1647897811.git.asml.silence@gmail.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull scheduler fix from Borislav Petkov: - Fix a corner case when calculating sched runqueue variables That fix also removes a check for a zero divisor in the code, without mentioning it. Vincent clarified that it's ok after I whined about it: https://lore.kernel.org/all/CAKfTPtD2QEyZ6ADd5WrwETMOX0XOwJGnVddt7VHgfURdqgOS-Q@mail.gmail.com/ * tag 'sched_urgent_for_v5.18_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: sched/pelt: Fix attach_entity_load_avg() corner case
-
git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Linus Torvalds authored
Pull powerpc fixes from Michael Ellerman: - Partly revert a change to our timer_interrupt() that caused lockups with high res timers disabled. - Fix a bug in KVM TCE handling that could corrupt kernel memory. - Two commits fixing Power9/Power10 perf alternative event selection. Thanks to Alexey Kardashevskiy, Athira Rajeev, David Gibson, Frederic Barrat, Madhavan Srinivasan, Miguel Ojeda, and Nicholas Piggin. * tag 'powerpc-5.18-3' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: powerpc/perf: Fix 32bit compile powerpc/perf: Fix power10 event alternatives powerpc/perf: Fix power9 event alternatives KVM: PPC: Fix TCE handling for VFIO powerpc/time: Always set decrementer in timer_interrupt()
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull perf fixes from Borislav Petkov: - Add Sapphire Rapids CPU support - Fix a perf vmalloc-ed buffer mapping error (PERF_USE_VMALLOC in use) * tag 'perf_urgent_for_v5.18_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: perf/x86/cstate: Add SAPPHIRERAPIDS_X CPU support perf/core: Fix perf_mmap fail when CONFIG_PERF_USE_VMALLOC enabled
-
git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras
Linus Torvalds authored
Pull EDAC fix from Borislav Petkov: - Read the reported error count from the proper register on synopsys_edac * tag 'edac_urgent_for_v5.18_rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras: EDAC/synopsys: Read the error count from the correct register
-
Linus Torvalds authored
Since commit 559089e0 ("vmalloc: replace VM_NO_HUGE_VMAP with VM_ALLOW_HUGE_VMAP"), the use of hugepage mappings for vmalloc is an opt-in strategy, because it caused a number of problems that weren't noticed until x86 enabled it too. One of the issues was fixed by Nick Piggin in commit 3b8000ae ("mm/vmalloc: huge vmalloc backing pages should be split rather than compound"), but I'm still worried about page protection issues, and VM_FLUSH_RESET_PERMS in particular. However, like the hash table allocation case (commit f2edd118: "page_alloc: use vmalloc_huge for large system hash"), the use of kvmalloc() should be safe from any such games, since the returned pointer might be a SLUB allocation, and as such no user should reasonably be using it in any odd ways. We also know that the allocations are fairly large, since it falls back to the vmalloc case only when a kmalloc() fails. So using a hugepage mapping seems both safe and relevant. This patch does show a weakness in the opt-in strategy: since the opt-in flag is in the 'vm_flags', not the usual gfp_t allocation flags, very few of the usual interfaces actually expose it. That's not much of an issue in this case that already used one of the fairly specialized low-level vmalloc interfaces for the allocation, but for a lot of other vmalloc() users that might want to opt in, it's going to be very inconvenient. We'll either have to fix any compatibility problems, or expose it in the gfp flags (__GFP_COMP would have made a lot of sense) to allow normal vmalloc() users to use hugepage mappings. That said, the cases that really matter were probably already taken care of by the hash table allocation. Link: https://lore.kernel.org/all/20220415164413.2727220-1-song@kernel.org/ Link: https://lore.kernel.org/all/CAHk-=whao=iosX1s5Z4SF-ZGa-ebAukJoAdUJFk5SPwnofV+Vg@mail.gmail.com/ Cc: Nicholas Piggin <npiggin@gmail.com> Cc: Paul Menzel <pmenzel@molgen.mpg.de> Cc: Song Liu <songliubraving@fb.com> Cc: Rick Edgecombe <rick.p.edgecombe@intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Song Liu authored
Use vmalloc_huge() in alloc_large_system_hash() so that large system hash (>= PMD_SIZE) could benefit from huge pages. Note that vmalloc_huge only allocates huge pages for systems with HAVE_ARCH_HUGE_VMALLOC. Signed-off-by: Song Liu <song@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Rik van Riel <riel@surriel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
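vmalloc_huge(size, gfp_mask) is the interface introduced for this opt-in; a hedged sketch of the kind of call-site change described here follows, where the exact gfp flags used by alloc_large_system_hash() are an assumption for illustration.

#include <linux/vmalloc.h>
#include <linux/gfp.h>

static void *hash_table_alloc(unsigned long size)
{
	/* previously roughly: return __vmalloc(size, GFP_ATOMIC);
	 * now: allow hugepage backing where HAVE_ARCH_HUGE_VMALLOC supports it,
	 * falling back to plain vmalloc behaviour everywhere else */
	return vmalloc_huge(size, GFP_ATOMIC);
}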
-
git://git.samba.org/ksmbd
Linus Torvalds authored
Pull ksmbd server fixes from Steve French: - cap maximum sector size reported to avoid mount problems - reference count fix - fix filename rename race * tag '5.18-rc3-ksmbd-fixes' of git://git.samba.org/ksmbd: ksmbd: set fixed sector size to FS_SECTOR_SIZE_INFORMATION ksmbd: increment reference count of parent fp ksmbd: remove filename in ksmbd_file
-
- 23 Apr, 2022 6 commits
-
-
git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc
Linus Torvalds authored
Pull ARC fixes from Vineet Gupta: - Assorted fixes * tag 'arc-5.18-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc: ARC: remove redundant READ_ONCE() in cmpxchg loop ARC: atomic: cleanup atomic-llsc definitions arc: drop definitions of pgd_index() and pgd_offset{, _k}() entirely ARC: dts: align SPI NOR node name with dtschema ARC: Remove a redundant memset() ARC: fix typos in comments ARC: entry: fix syscall_trace_exit argument
-
git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds authored
Pull SCSI fix from James Bottomley: "One fix for an information leak caused by copying a buffer to userspace without checking for error first in the sr driver" * tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: scsi: sr: Do not leak information in ioctl
-
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip
Linus Torvalds authored
Pull xen fixes from Juergen Gross: "A simple cleanup patch and a refcount fix for Xen on Arm" * tag 'for-linus-5.18-rc4-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: arm/xen: Fix some refcount leaks xen: Convert kmap() to kmap_local_page()
-
git://anongit.freedesktop.org/drm/drm
Linus Torvalds authored
Pull more drm fixes from Dave Airlie: "Maarten was away, so Maxine stepped up and sent me the drm-fixes merge, so no point leaving it for another week. The big change is an OF revert around bridge/panels, it may have some driver fallout, but hopefully this revert gets them shook out in the next week easier. Otherwise it's a bunch of locking/refcounts across drivers, a radeon dma_resv logic fix and some raspberry pi panel fixes. panel: - revert of patch that broke panel/bridge issues dma-buf: - remove unused header file. amdgpu: - partial revert of locking change radeon: - fix dma_resv logic inversion panel: - pi touchscreen panel init fixes vc4: - build fix - runtime pm refcount fix vmwgfx: - refcounting fix" * tag 'drm-fixes-2022-04-23' of git://anongit.freedesktop.org/drm/drm: drm/amdgpu: partial revert "remove ctx->lock" v2 Revert "drm: of: Lookup if child node has panel or bridge" Revert "drm: of: Properly try all possible cases for bridge/panel detection" drm/vc4: Use pm_runtime_resume_and_get to fix pm_runtime_get_sync() usage drm/vmwgfx: Fix gem refcounting and memory evictions drm/vc4: Fix build error when CONFIG_DRM_VC4=y && CONFIG_RASPBERRYPI_FIRMWARE=m drm/panel/raspberrypi-touchscreen: Initialise the bridge in prepare drm/panel/raspberrypi-touchscreen: Avoid NULL deref if not initialised dma-buf-map: remove renamed header file drm/radeon: fix logic inversion in radeon_sync_resv
-
git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input
Linus Torvalds authored
Pull input fixes from Dmitry Torokhov: - a new set of keycodes to be used by marine navigation systems - minor fixes to omap4-keypad and cypress-sf drivers * tag 'input-for-v5.18-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input: Input: add Marine Navigation Keycodes Input: omap4-keypad - fix pm_runtime_get_sync() error checking Input: cypress-sf - register a callback to disable the regulators
-
git://git.kernel.dk/linux-block
Linus Torvalds authored
Pull block fixes from Jens Axboe: "Just two small regression fixes for bcache" * tag 'block-5.18-2022-04-22' of git://git.kernel.dk/linux-block: bcache: fix wrong bdev parameter when calling bio_alloc_clone() in do_bio_hook() bcache: put bch_bio_map() back to correct location in journal_write_unlocked()
-