- 25 Sep, 2008 40 commits
-
Chris Mason authored
This properly reflects the first 1MB we skip at the start of the device.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
btrfs_invalidatepage is not allowed to leave pages around on the lru. Any such pages will trigger an oops later on because the VM will see page->private and assume it is a buffer head. This also forces extra flushes of the async work queues before dropping all the pages on the btree inode during unmount. Leftover items on the work queues are one possible cause of busy state ranges during truncate_inode_pages.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
The btree inode should only have a single extent_map in the cache; it doesn't make sense to ever drop it.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
It was testing the bio before doing logical->physical mapping, so the test was always wrong.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
The data read retry code needs to find the logical disk block before it can resubmit new bios. But, finding this block isn't allowed to take the fs_mutex because that will deadlock with a number of different callers.

This changes the retry code to use the extent map cache instead, but that requires the extent map cache to have the extent we're looking for. This is a problem because btrfs_drop_extent_cache just drops the entire extent instead of the little tiny part it is invalidating.

The bulk of the code in this patch changes btrfs_drop_extent_cache to invalidate only a portion of the extent cache, and changes btrfs_get_extent to deal with the results.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
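Roughly, the new partial invalidation behaves like the sketch below (hypothetical names, self-contained, and not the actual btrfs_drop_extent_cache code): a cached mapping is trimmed so that only the part overlapping the invalidated range is dropped, while any surviving front or back piece stays cached.

    #include <stdint.h>

    struct cached_extent {
            uint64_t start;         /* logical start of the cached mapping */
            uint64_t len;           /* length of the cached mapping */
    };

    /*
     * Drop only the part of *em that overlaps [start, end); any surviving
     * front and/or back piece is written to out[] so it can stay cached.
     * Returns the number of surviving pieces (0, 1 or 2).
     */
    static int trim_cached_extent(const struct cached_extent *em,
                                  uint64_t start, uint64_t end,
                                  struct cached_extent out[2])
    {
            uint64_t em_end = em->start + em->len;
            int n = 0;

            if (end <= em->start || start >= em_end) {
                    out[n++] = *em;         /* no overlap: keep it whole */
                    return n;
            }
            if (start > em->start) {        /* piece before the dropped range */
                    out[n].start = em->start;
                    out[n].len = start - em->start;
                    n++;
            }
            if (end < em_end) {             /* piece after the dropped range */
                    out[n].start = end;
                    out[n].len = em_end - end;
                    n++;
            }
            return n;
    }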
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
This isn't required anymore because we don't reallocate blocks that have already been written in this transaction.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
This significantly improves streaming write performance by allowing concurrency in the data checksumming.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
This allows checksumming to happen in parallel among many cpus, and keeps us from bogging down pdflush with the checksumming code.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
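As a rough illustration of the hand-off (not the actual patch; whether btrfs uses a plain kernel workqueue or its own thread pool is not stated here, and compute_csums()/queue_csum() are hypothetical names), the submitting context only queues a job, and a worker cpu checksums the pages before the bio goes to disk:

    #include <linux/workqueue.h>
    #include <linux/slab.h>
    #include <linux/bio.h>

    struct csum_job {
            struct work_struct work;
            struct bio *bio;                /* data pages awaiting checksums */
    };

    static struct workqueue_struct *csum_wq;

    static void compute_csums(struct bio *bio)
    {
            /* walk bio->bi_io_vec and checksum each page (omitted) */
    }

    static void csum_worker(struct work_struct *work)
    {
            struct csum_job *job = container_of(work, struct csum_job, work);

            compute_csums(job->bio);        /* runs on a worker cpu, not pdflush */
            submit_bio(WRITE, job->bio);    /* issue the write once csums are done */
            kfree(job);
    }

    /* Called from the write path: hand the bio off and return immediately. */
    static int queue_csum(struct bio *bio)
    {
            struct csum_job *job = kmalloc(sizeof(*job), GFP_NOFS);

            if (!job)
                    return -ENOMEM;
            job->bio = bio;
            INIT_WORK(&job->work, csum_worker);
            queue_work(csum_wq, &job->work);
            return 0;
    }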
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Block headers now store the chunk tree uuid. Chunk items record the device uuid for each stripe. Device extent items record better back refs to the chunk tree. Block groups record better back refs to the chunk tree.

The chunk tree format has also changed. The objectid of BTRFS_CHUNK_ITEM_KEY used to be the logical offset of the chunk. Now it is a chunk tree id, with the logical offset being stored in the offset field of the key. This allows a single chunk tree to record multiple logical address spaces, upping the number of bytes indexed by a chunk tree from 2^64 to 2^128.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
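For reference, a sketch of the key layout involved (the field layout matches the btrfs key; the numeric value of BTRFS_CHUNK_ITEM_KEY is deliberately left out, and the comments only paraphrase the change described above):

    #include <linux/types.h>

    struct btrfs_key {
            u64 objectid;   /* was: logical offset of the chunk
                             * now: id of the chunk tree owning this address space */
            u8  type;       /* BTRFS_CHUNK_ITEM_KEY */
            u64 offset;     /* now: logical offset of the chunk within that space */
    };

    /*
     * With a 64-bit chunk tree id in objectid and a 64-bit logical offset in
     * offset, one chunk tree can index multiple 2^64-byte address spaces,
     * i.e. 2^128 bytes in total.
     */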
-
Chris Mason authored
This includes fixing a missing spinlock init call that caused an oops on mount for most kernels other than 2.6.25.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
On huge machines, delayed allocation may try to allocate massive extents. This change allows btrfs_alloc_extent to return something smaller than the caller asked for, and the data allocation routines will loop over the allocations until they fill the whole delayed allocation.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
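A minimal sketch of that loop, assuming a hypothetical alloc_data_extent() that may hand back less than was asked for:

    #include <linux/types.h>

    /* hypothetical allocator: on success, *found may be smaller than num_bytes */
    int alloc_data_extent(u64 start, u64 num_bytes, u64 *found);

    static int fill_delalloc_range(u64 start, u64 num_bytes)
    {
            while (num_bytes > 0) {
                    u64 found = 0;
                    int ret = alloc_data_extent(start, num_bytes, &found);

                    if (ret)
                            return ret;     /* e.g. -ENOSPC */
                    start += found;         /* accept the smaller extent ... */
                    num_bytes -= found;     /* ... and ask again for the rest */
            }
            return 0;
    }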
-
Miguel authored
bio_endio() changed its prototype in Linux 2.6.24; support older kernels by using the older prototype.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
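A typical shape for this kind of compatibility fix (approximate; btrfs_end_io_compat() is a made-up name, and only the prototype difference the message mentions is shown):

    #include <linux/version.h>
    #include <linux/bio.h>

    #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 24)
    static int btrfs_end_io_compat(struct bio *bio, unsigned int bytes_done, int err)
    #else
    static void btrfs_end_io_compat(struct bio *bio, int err)
    #endif
    {
    #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 24)
            if (bio->bi_size)       /* older kernels may call this on partial */
                    return 1;       /* completion; wait until it is all done */
    #endif
            /* ... shared completion handling ... */
    #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 24)
            return 0;
    #endif
    }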
-
Miguel authored
write_cache_pages doesn't exist in Linux 2.6.20; change the #if condition to match.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Miguel authored
Fix for an endianness bug when using btrfs v0.13 with kernels older than 2.6.23.

Problem: as of v0.13, btrfs-progs uses a crc32c.c equivalent to the one found in linux-2.6.23/lib/libcrc32c.c. Since crc32c_le() changed in Linux 2.6.23, running btrfs v0.13 with older kernels gives a mismatch between the crc32c_le() in btrfs-progs and the one in the kernel's libcrc32c. This mismatch causes a bug when using btrfs on big endian machines.

Solution: a btrfs_crc32c() macro that, when compiling for kernels older than 2.6.23, does endianness conversion on the parameters and return value of crc32c(). This endianness conversion nullifies the differences in the crc32c_le() implementations. For kernel 2.6.23 or newer, it simply calls crc32c().

Signed-off-by: Miguel Sousa Filipe <miguel.filipe@gmail.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>
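A sketch of the macro described above (approximate, not the verbatim patch): on pre-2.6.23 kernels the seed and result are byte-swapped around crc32c() so the value matches the newer crc32c_le() used by btrfs-progs; on 2.6.23 and later it is a plain crc32c() call.

    #include <linux/version.h>
    #include <linux/crc32c.h>
    #include <asm/byteorder.h>

    #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 23)
    #define btrfs_crc32c(seed, data, length) \
            __cpu_to_le32(crc32c(__le32_to_cpu(seed), (data), (length)))
    #else
    #define btrfs_crc32c(seed, data, length) \
            crc32c((seed), (data), (length))
    #endif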
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
This adds basic O_DIRECT read and write support. In the write case, we just do a normal buffered write followed by a cache flush. O_DIRECT + O_SYNC are required to trigger metadata syncs. In the read case, there is a basic btrfs_get_block call for use by the generic O_DIRECT code. This does honor multi-volume mapping rules but it skips all checksumming.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
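On the read side, the generic O_DIRECT code works through a get_block-style callback; a rough sketch of its shape (lookup_disk_block() is a hypothetical helper, and as the message says, no checksumming happens on this path):

    #include <linux/fs.h>
    #include <linux/buffer_head.h>

    /* hypothetical: map a logical file block to a physical block on one device */
    int lookup_disk_block(struct inode *inode, sector_t iblock,
                          sector_t *phys, struct block_device **bdev);

    static int btrfs_get_block_sketch(struct inode *inode, sector_t iblock,
                                      struct buffer_head *bh_result, int create)
    {
            sector_t phys;
            struct block_device *bdev;
            int ret = lookup_disk_block(inode, iblock, &phys, &bdev);

            if (ret)
                    return ret;
            map_bh(bh_result, inode->i_sb, phys);   /* mark mapped, set block nr */
            bh_result->b_bdev = bdev;               /* honour multi-volume mapping */
            return 0;
    }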
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Before, this was done by the bio end_io routine; the work queue code is able to scale much better with faster IO subsystems.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Before, metadata checksumming was done by the callers of read_tree_block, which would set EXTENT_CSUM bits in the extent tree to show that a given range of pages was already checksummed and didn't need to be verified again. But, those bits could go away via try_to_releasepage, and the end result was bogus checksum failures on pages that never left the cache.

The new code validates checksums when the page is read. It is a little tricky because metadata blocks can span pages and a single read may end up going via multiple bios.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
When a block is freed, it can be immediately reused if it is from the current transaction. But, an extra check is required to make sure the block had not been written yet. If it were reused after being written, the transid in the block header might match the transid of the next time the block was allocated. The parent node records the transaction ID of the block it is pointing to, and this is used as part of validating the block on reads. So, there can only be one version of a block per transaction.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
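Stated as a predicate (hypothetical names; this only restates the rule above):

    #include <linux/types.h>

    /*
     * A freed block may only be handed out again inside the same transaction
     * if it was never written, otherwise its header transid could collide
     * with the transid the parent node records for the next allocation.
     */
    static bool can_reuse_freed_block(u64 block_transid, u64 running_transid,
                                      bool block_was_written)
    {
            return block_transid == running_transid && !block_was_written;
    }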
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Signed-off-by: Chris Mason <chris.mason@oracle.com>
-
Chris Mason authored
Checksums were only verified by btrfs_read_tree_block, which meant the functions to probe the page cache for blocks were not validating checksums. Normally this is fine because the buffers will only be in cache if they have already been validated. But, there is a window while the buffer is being read from disk where it could be up to date in the cache but not yet verified.

This patch makes sure all buffers go through checksum verification before they are used. This is safer, and it prevents modification of buffers before they go through the csum code.

Signed-off-by: Chris Mason <chris.mason@oracle.com>
-