- 22 Oct, 2023 40 commits
-
Kent Overstreet authored
We can't reuse bios without reinitializing them, and in the retry path it's safer to just make sure we don't reuse them at all. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
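A minimal userspace sketch of the pattern described above (not the bcachefs code - `struct io_request` and `submit_io()` are hypothetical stand-ins for `struct bio` and bio submission): the retry loop builds a fresh request per attempt instead of resubmitting one whose internal state was already consumed by completion.

```c
/* Illustrative userspace model of the retry pattern, not bcachefs code:
 * struct io_request stands in for struct bio; submit_io() for submission. */
#include <stdio.h>
#include <stdlib.h>

struct io_request {
	size_t	offset;		/* advanced as the request completes */
	size_t	len;
};

/* Pretend submission: consumes the request's iterator and may fail. */
static int submit_io(struct io_request *req)
{
	req->offset += req->len;	/* completion advances internal state */
	req->len = 0;
	return rand() % 2 ? -1 : 0;	/* simulated transient error */
}

static int do_read_retry(size_t offset, size_t len, int max_attempts)
{
	for (int attempt = 0; attempt < max_attempts; attempt++) {
		/* Build a fresh request each time instead of resubmitting the
		 * old one, whose internal state was consumed by completion. */
		struct io_request req = { .offset = offset, .len = len };

		if (!submit_io(&req))
			return 0;
	}
	return -1;
}

int main(void)
{
	printf("read %s\n", do_read_retry(0, 4096, 3) ? "failed" : "ok");
	return 0;
}
```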
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This is dead code. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This works around a bug where bio_full() doesn't check for bio->bi_iter.bi_size overflowing - and we don't really want to build bios that big anyway. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
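A hedged userspace sketch of the idea behind the workaround, with a hypothetical `struct request` standing in for `struct bio`: cap the accumulated byte count explicitly so a 32-bit counter like bi_iter.bi_size can never wrap, rather than relying only on a segment-count "full" check.

```c
/* Illustrative sketch (not the kernel code): cap how much we add to a
 * request so a 32-bit byte counter like bi_size can never wrap. */
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_REQUEST_BYTES	(UINT_MAX >> 1)	/* stay far below UINT_MAX */

struct request {
	unsigned int	size;	/* models bio->bi_iter.bi_size */
};

static bool request_add(struct request *req, unsigned int len)
{
	/* Refuse the add if it would push size past the cap; checking the
	 * byte count like this also guards against unsigned overflow, which
	 * a plain "is the segment list full" check would miss. */
	if (len > MAX_REQUEST_BYTES - req->size)
		return false;

	req->size += len;
	return true;
}

int main(void)
{
	struct request req = { 0 };
	unsigned int added = 0;

	while (request_add(&req, 1U << 20))
		added++;

	printf("accepted %u MiB before capping\n", added);
	return 0;
}
```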
-
Kent Overstreet authored
The iterator counting assumed we're doing an obvious optimization when only updating the refcount on indirect extents - but we're not doing it yet. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Previously, we'd go into an infinite loop. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
We definitely don't need an exclusive inode lock for readdir. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
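A userspace illustration of the locking principle with pthread rwlocks (not the bcachefs locking code): a read-only operation like readdir only needs the shared side of the lock, so concurrent readers of the same directory don't serialize behind each other.

```c
/* Userspace illustration of the principle only: readers take the shared
 * side, so many readdir-style walks can run concurrently; only actual
 * modification needs the exclusive side. */
#include <pthread.h>
#include <stdio.h>

struct dir {
	pthread_rwlock_t	lock;
	int			nr_entries;
};

static void dir_readdir(struct dir *d)
{
	pthread_rwlock_rdlock(&d->lock);	/* shared: many readers at once */
	for (int i = 0; i < d->nr_entries; i++)
		printf("entry %d\n", i);
	pthread_rwlock_unlock(&d->lock);
}

static void dir_create(struct dir *d)
{
	pthread_rwlock_wrlock(&d->lock);	/* exclusive: modification */
	d->nr_entries++;
	pthread_rwlock_unlock(&d->lock);
}

int main(void)
{
	struct dir d = { .lock = PTHREAD_RWLOCK_INITIALIZER, .nr_entries = 2 };

	dir_create(&d);
	dir_readdir(&d);
	return 0;
}
```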
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
We have to free the old (in memory) btree node _before_ unlocking the new nodes - else, some other thread with a read lock on the old node could see stale data after another thread has already updated the new node. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
The generic IO path now handles inode updates for i_size and i_sectors - this means we can drop a fair amount of code from fs-io.c. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
As before - we're moving non-Linux-specific code out of fs-io.c. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
The next few patches continue moving the logic around i_size/i_sectors updates to io.c, and better separating the Linux VFS-specific code from core bcachefs code, to better support the fuse port. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Moving bch2_extent_update() to io.c will be greatly simplified if we no longer have to keep ei_inode.bi_size/bi_sectors up to date. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
In bch2_extent_update(), we have to update the inode if i_size is changing (the file is being extended) or if i_sectors is changing, but we want to avoid touching the inode if it's not necessary. Change sum_sector_overwrites() to also check whether there's already data above where we're writing to - if there is, we're definitely not extending the file. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
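An illustrative sketch of that decision, using hypothetical names (`struct overwrite_sum`, `inode_update_needed()`) rather than the real sum_sector_overwrites() interface: the inode only needs touching when the write extends i_size or changes i_sectors, and having seen data above the write's end while summing overwrites rules out extending.

```c
/* Illustrative decision logic only, not the bcachefs functions. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct overwrite_sum {
	int64_t	sectors_delta;		/* allocated minus overwritten sectors */
	bool	data_above_write_end;	/* existing extents past the write */
};

static bool inode_update_needed(const struct overwrite_sum *sum,
				uint64_t write_end, uint64_t i_size)
{
	/* Data already present above the write means we can't be extending. */
	bool extending = write_end > i_size && !sum->data_above_write_end;

	return extending || sum->sectors_delta != 0;
}

int main(void)
{
	struct overwrite_sum sum = {
		.sectors_delta		= 0,	/* pure overwrite of allocated data */
		.data_above_write_end	= true,
	};

	printf("update inode? %s\n",
	       inode_update_needed(&sum, 4096, 8192) ? "yes" : "no");
	return 0;
}
```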
-
Kent Overstreet authored
This deserves a unit test. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
The tweaks to ctx->pos handling are also to help the fuse port. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
We can't use the page lock to protect it, because on writeback IO error we need to access the page state before calling end_page_writeback() - and the page lock semantics are completely insane, so that deadlocks. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Disk space accounting for erasure coding + compression was completely broken - we need to calculate the parity sectors delta the same way we calculate disk_sectors, by calculating the old and new usage and subtracting to get the difference. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
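A small worked example of the delta-by-subtraction approach, with made-up numbers and a hypothetical `parity_sectors()` helper (not the bcachefs accounting code): compute parity usage for the old and new versions of the extent separately, then account only the difference - the same way disk_sectors is handled.

```c
/* Worked example of accounting deltas by subtracting old from new usage. */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical model: parity usage scales with the compressed size
 * actually stored in the stripe, rounded up. */
static int64_t parity_sectors(int64_t data_sectors,
			      unsigned nr_data, unsigned nr_parity)
{
	return (data_sectors * nr_parity + nr_data - 1) / nr_data;
}

int main(void)
{
	unsigned nr_data = 4, nr_parity = 2;

	/* Extent shrinks from 128 to 96 compressed sectors after a rewrite: */
	int64_t old_disk_sectors = 128, new_disk_sectors = 96;

	int64_t old_parity = parity_sectors(old_disk_sectors, nr_data, nr_parity);
	int64_t new_parity = parity_sectors(new_disk_sectors, nr_data, nr_parity);

	/* Account deltas, not absolute values: */
	int64_t disk_sectors_delta	= new_disk_sectors - old_disk_sectors;
	int64_t parity_sectors_delta	= new_parity - old_parity;

	printf("disk sectors delta %lld, parity sectors delta %lld\n",
	       (long long) disk_sectors_delta, (long long) parity_sectors_delta);
	return 0;
}
```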
-
Kent Overstreet authored
The bkey_s_c returned by btree_iter_(peek|next) points into the btree iter type, so advancing the iterator and then using the one previously returned is a bug... Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
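A userspace model of the lifetime rule (the iterator and its API here are hypothetical, not the btree iterator): the value peek() hands back lives inside the iterator, so it's invalidated as soon as the iterator advances, and must be copied out or fully used first.

```c
/* Model of a peek()/next() API whose return value points into the
 * iterator itself - not the bcachefs btree iterator code. */
#include <stdio.h>
#include <string.h>

struct iter {
	int	pos;
	char	buf[32];	/* peek() returns a pointer into this */
};

static const char *iter_peek(struct iter *it)
{
	snprintf(it->buf, sizeof(it->buf), "key %d", it->pos);
	return it->buf;
}

static void iter_next(struct iter *it)
{
	it->pos++;		/* the next peek() will overwrite buf */
}

int main(void)
{
	struct iter it = { .pos = 0 };
	char saved[32];

	/* BUG: k points into the iterator, so after iter_next() + iter_peek()
	 * it silently refers to the *new* position:
	 *
	 *	const char *k = iter_peek(&it);
	 *	iter_next(&it);
	 *	use(k);
	 */

	/* OK: copy out (or finish using) the value before advancing: */
	snprintf(saved, sizeof(saved), "%s", iter_peek(&it));
	iter_next(&it);
	printf("previous key was: %s\n", saved);
	return 0;
}
```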
-
Kent Overstreet authored
This makes the disk accounting code saner, and it's not clear why we'd ever want the same data to be in multiple stripes simultaneously. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
We were looking up the wrong entry in the stripes radix tree. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
If an extent only contained cached or erasure coded pointers, there won't be any devices in the normal dirty replicas list, and thus no entry to update. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Justin Husted authored
Running the filesystem under valgrind exposed some garbage data being written to disk in bch2_journal_super_entries_add_common(), in the portion which encodes bch_replica_entry objects. Signed-off-by: Justin Husted <sigstop@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
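A generic illustration of this class of bug with a hypothetical on-disk entry layout (not the bcachefs superblock code): when a variable-length entry is rounded up to a padded size, the tail has to be zeroed explicitly or uninitialized bytes get written out.

```c
/* Hypothetical variable-length entry rounded up to 8 bytes; the padding
 * after the used fields must be zeroed before the buffer goes to disk. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct entry {
	unsigned char	nr_devs;
	unsigned char	devs[];		/* nr_devs bytes, then padding to 8 */
};

static size_t entry_bytes(unsigned nr_devs)
{
	return (sizeof(struct entry) + nr_devs + 7) & ~(size_t) 7;
}

static void encode_entry(void *buf, const unsigned char *devs, unsigned nr_devs)
{
	struct entry *e = buf;

	/* Zero the whole rounded-up slot first, so the padding bytes the
	 * fields below don't cover are defined before being written out. */
	memset(buf, 0, entry_bytes(nr_devs));

	e->nr_devs = nr_devs;
	memcpy(e->devs, devs, nr_devs);
}

int main(void)
{
	unsigned char devs[] = { 0, 2, 3 };
	size_t bytes = entry_bytes(sizeof(devs));
	void *buf = malloc(bytes);

	if (!buf)
		return 1;

	encode_entry(buf, devs, sizeof(devs));
	printf("encoded %zu bytes\n", bytes);
	free(buf);
	return 0;
}
```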
-
Justin Husted authored
Running the filesystem under valgrind exposed a path where the max_stale variable in bch2_gc_btree() might not be initialized before use in a rare case when there are no btree nodes in a transaction. Signed-off-by: Justin Husted <sigstop@gmail.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
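A minimal illustration of the bug class, with hypothetical names: an accumulator assigned only inside a loop is garbage when the loop body never runs, so it needs an initializer up front.

```c
/* Hypothetical names; the point is the initializer, which keeps the
 * result defined even when there are no nodes to walk. */
#include <stdio.h>

static unsigned gc_max_stale(const unsigned *stale, unsigned nr_nodes)
{
	unsigned max_stale = 0;	/* defined even when nr_nodes == 0 */

	for (unsigned i = 0; i < nr_nodes; i++)
		if (stale[i] > max_stale)
			max_stale = stale[i];

	return max_stale;
}

int main(void)
{
	printf("max stale with no nodes: %u\n", gc_max_stale(NULL, 0));
	return 0;
}
```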
-
Kent Overstreet authored
bch2_extent_atomic_end counts the number of iterators required for marking overwrites - but journal replay never marks overwrites, so that part was incorrect. And counting iterators for the key being inserted should be unnecessary, because we already did that before the key was first journalled. This should fix an iterator overflow bug - the iterators for walking overwrites were totally unneeded. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This fixes a bug in io.c bch2_write_index_default() - it was missing the traverse call; but bch2_extent_atomic_end() now returns an error, so it can just do the traverse itself. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This refactoring makes the code easier to understand by separating the bcachefs btree transactional code from the Linux VFS code - but more importantly, it's also to share code with the fuse port. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
With the refactoring that's coming to add fuse support, we want bch2_hash_info_init() to be cheaper so we don't have to rely on anything cached besides the inode in the btree. Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
We currently don't have a way to propagate inode io opts to indirect extents. This is a problem... Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-