- 22 Oct, 2023 (40 commits)
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
roundup_pow_of_two(0) is undefined.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
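A minimal sketch of the guard, as kernel C (the helper name is illustrative): with n == 0, __roundup_pow_of_two() computes 1UL << fls_long(0 - 1), i.e. a shift by BITS_PER_LONG, which is undefined behaviour - so clamp the input first.

    #include <linux/log2.h>
    #include <linux/minmax.h>

    /* Clamp to at least 1 so roundup_pow_of_two() has defined input. */
    static inline unsigned long safe_roundup_pow_of_two(unsigned long n)
    {
            return roundup_pow_of_two(max(n, 1UL));
    }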
-
Kent Overstreet authored
It's possible to get -EIO in __btree_iter_traverse_all() after looping, with orig_iter NULL.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
When grab_cache_page_write_begin() fails but we did pin some pages, we shouldn't return -ENOMEM; we should do a partial write.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
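A hedged sketch of the pattern (the exact grab_cache_page_write_begin() signature varies by kernel version - older kernels take a third flags argument - and the surrounding buffer handling is elided; the helper name is illustrative):

    #include <linux/pagemap.h>

    /*
     * Pin the pages covering the write; if pinning fails partway
     * through, shrink the write to what we pinned rather than
     * returning -ENOMEM.
     */
    static int pin_write_pages(struct address_space *mapping, pgoff_t index,
                               struct page **pages, unsigned *nr_pages)
    {
            unsigned i;

            for (i = 0; i < *nr_pages; i++) {
                    pages[i] = grab_cache_page_write_begin(mapping, index + i);
                    if (!pages[i]) {
                            if (!i)
                                    return -ENOMEM; /* pinned nothing */
                            *nr_pages = i;          /* do a partial write */
                            break;
                    }
            }
            return 0;
    }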
-
Kent Overstreet authored
This is the start of some refactoring work to make less code depend on the linux VFS - here the inode cache - to make e.g. the fuse port easier.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
The btree_trans struct needs to memoize/cache btree iterators, so that on transaction restart we don't have to completely redo btree lookups, and so that we can do them all at once in the correct order when the transaction had to restart to avoid a deadlock.

This switches the btree iterator lookups to work based on iterator position, instead of trying to match them up based on the stack trace.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
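A minimal sketch of lookup-by-position; trans_for_each_iter() and bch2_trans_iter_alloc() here are hypothetical stand-ins for whatever the transaction layer provides:

    /*
     * Reuse an iterator the transaction already has at this position,
     * instead of trying to match call sites up by stack trace.
     */
    static struct btree_iter *trans_get_iter(struct btree_trans *trans,
                                             enum btree_id btree_id,
                                             struct bpos pos)
    {
            struct btree_iter *iter;

            trans_for_each_iter(trans, iter)                /* hypothetical */
                    if (iter->btree_id == btree_id &&
                        !bkey_cmp(iter->pos, pos))
                            return iter;

            return bch2_trans_iter_alloc(trans, btree_id, pos); /* hypothetical */
    }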
-
Kent Overstreet authored
Will be replaced by cached btree iterators.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Change it to match for_each_btree_key().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
We shouldn't ever be writing past i_size - but apparently there's still a bug to track down.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
In order to avoid trying to allocate too many btree iterators, bch2_extent_atomic_end() needs to count how many iterators are going to be needed for insertions and overwrites - but we weren't counting the iterators for deleting a reflink_v when the refcount goes to 0.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
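A hedged sketch of the counting logic, with illustrative types (the real code inspects the keys being overwritten, not an array like this):

    struct overwrite {
            bool    is_reflink_p;
            bool    drops_last_ref;         /* reflink_v refcount would hit 0 */
    };

    static unsigned count_trans_iters(const struct overwrite *o, unsigned nr)
    {
            unsigned i, nr_iters = 1;       /* the insert itself */

            for (i = 0; i < nr; i++) {
                    nr_iters++;             /* the overwrite */
                    if (o[i].is_reflink_p && o[i].drops_last_ref)
                            nr_iters++;     /* deleting the reflink_v */
            }
            return nr_iters;
    }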
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
If the user buffer isn't aligned to the filesystem block size, then on a large enough IO - one that won't fit into a single bio - bio_iov_iter_get_pages() won't necessarily return a bio with the proper alignment.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
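A minimal sketch of detecting the problem - checking that every segment of the bio that bio_iov_iter_get_pages() built is aligned to the filesystem block size, so an unaligned bio can be trimmed or bounced (the helper name is illustrative):

    #include <linux/bio.h>

    static bool bio_is_block_aligned(struct bio *bio, unsigned block_bytes)
    {
            struct bio_vec bv;
            struct bvec_iter iter;

            bio_for_each_segment(bv, bio, iter)
                    if ((bv.bv_offset | bv.bv_len) & (block_bytes - 1))
                            return false;
            return true;
    }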
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
When an extent is erasure coded, we need to record a replicas entry to indicate that data is present on the devices that extent has pointers to - but nr_required should be 0, because it's erasure coded.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
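A hedged sketch of the idea, assuming the replicas entry layout of this era (data_type, nr_devs, nr_required, devs[]); the device list parameter is illustrative - the real code walks the extent's pointers - and the enum spelling varies across versions:

    static void ec_extent_to_replicas(struct bch_replicas_entry *e,
                                      const u8 *devs, unsigned nr_devs)
    {
            unsigned i;

            e->data_type    = BCH_DATA_USER;
            e->nr_required  = 0;    /* erasure coded: no single copy required */
            e->nr_devs      = 0;

            for (i = 0; i < nr_devs; i++)
                    e->devs[e->nr_devs++] = devs[i];
    }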
-
Kent Overstreet authored
Somewhat tricky and ugly, because iterating over extents backwards is a pain.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Last of the basic operations for iterating forwards and backwards over the btree: we now have
- peek(), returns key >= iter->pos
- next(), returns key > iter->pos
- peek_prev(), returns key <= iter->pos
- prev(), returns key < iter->pos
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
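For example, walking a btree backwards becomes a simple loop - a sketch assuming the bch2_btree_iter_*() spellings of the operations above, with process_key() as a hypothetical callback and error handling reduced to bkey_err():

    struct bkey_s_c k;

    for (k = bch2_btree_iter_peek_prev(iter);       /* key <= iter->pos */
         !bkey_err(k) && k.k;
         k = bch2_btree_iter_prev(iter))            /* key <  iter->pos */
            process_key(k);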
-
Kent Overstreet authored
When converting from PAGE_SIZE to block_size, the .mkwrite path was missed.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
bcachefs used to work mostly in terms of PAGE_SIZE, not block size, at the vfs level - but that has since been fixed.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Call bch2_btree_iter_verify() from bch2_btree_node_iter_fix(); also verify in btree_iter_peek_uptodate() that iter->k matches what's in the btree.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Any time we're modifying what's in the btree, iterators potentially have to be updated - this one was exposed by the reflink code.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
The allocator needs to make sure there's buckets available on the RESERVE_NONE freelist if at all possible - otherwise foreground IO will get stuck.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
.key_debugcheck no longer needs to take a pointer to the btree node. Also, try to make sure keys are checked wherever we're inserting or modifying keys in the btree.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
With multiple iterators, if another iterator points to the key being modified, we need to call bch2_btree_node_iter_fix() to re-unpack the key into that iterator's iter->k.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
_iter, not iter
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Move extents instead of copying them - this way, we can iterate over only live extents, not the entire keyspace. Also, this means we can mostly skip running triggers.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Be more rigorous about noting when the key the iterator currently points to has changed - this should also give us a nice performance improvement, since we won't have to check as often whether we need to skip backwards over other bsets.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Just for consistency.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
This was spotted when the move_extent() path tried to allocate a bio for a reflink_p extent, but adding pages to the bio failed because we overflowed bi_max_vecs. Oops.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
rbio->c wasn't being initialized in the move path.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-