Andrew Morton authored
ext2 and ext3 implement a custom LRU cache of buffer_heads - the eight
most-recently-used inode bitmap buffers and the eight MRU block bitmap
buffers.

I don't like them, for a number of reasons:

- The code is duplicated between filesystems.

- The functionality is unavailable to other filesystems.

- The LRU only applies to bitmap buffers.  And not, say, indirects.

- The LRUs are subtly dependent upon lock_super() for protection: without
  lock_super() protection a bitmap could be evicted and freed while in use.

  Removing this dependence on lock_super() also gets us one step on the way
  toward getting that semaphore out of the ext2 block allocator - it causes
  significant contention under some loads and should be a spinlock.

- The LRUs pin 64 kbytes per mounted filesystem.

Now, we could just delete those LRUs and rely on the VM to manage the
memory.  But that would introduce significant lock contention in
__find_get_block() - the blockdev mapping's private_lock and the page lock
are heavily used.

So this patch introduces a transparent per-CPU buffer_head LRU which is
hidden inside __find_get_block(), __getblk() and __bread().  It is designed
to shorten code paths and to reduce lock contention.

It uses a seven-slot LRU.  It achieves a 99% hit rate in `dbench 64'.  It
provides benefit to all filesystems.

The next patches remove the open-coded LRUs from ext2 and ext3.  Taken
together, these patches are a code cleanup (300-400 lines gone), and they
reduce lock contention.

Anton tested these patches on a 32-way machine and demonstrated a
throughput improvement of up to 15% on RAM-only dbench runs.  See
http://samba.org/~anton/linux/2.5.24/dbench/

Most of this benefit comes from avoiding find_get_page() on the blockdev
mapping, because the generic LRU copes with indirect blocks as well as
bitmaps.
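For readers unfamiliar with the mechanism, here is a minimal user-space
sketch of the idea: a small per-CPU (modelled below as per-thread) MRU
array that is consulted before the slow, lock-protected lookup.  This is
not the kernel code (which lives in fs/buffer.c and also handles
refcounting and preemption); the lookup_slow() stub, the
find_get_block_sketch() name and the per-thread storage are illustrative
assumptions, not the actual implementation.

	/*
	 * Sketch only: a seven-slot per-thread MRU cache checked before a
	 * slow fallback lookup.  Names here are hypothetical stand-ins for
	 * the kernel's per-CPU bh LRU described above.
	 */
	#include <stddef.h>
	#include <stdlib.h>
	#include <string.h>

	#define BH_LRU_SIZE 7		/* seven-slot LRU, as described above */

	struct buffer_head {
		unsigned long long blocknr;	/* block this buffer caches */
		/* ... data, state, refcount would live here ... */
	};

	struct bh_lru {
		struct buffer_head *bhs[BH_LRU_SIZE];	/* most-recent first */
	};

	/* One LRU per CPU in the kernel; per-thread here for illustration. */
	static _Thread_local struct bh_lru bh_lru;

	/* Stand-in for the contended blockdev-mapping lookup (slow path). */
	static struct buffer_head *lookup_slow(unsigned long long block)
	{
		struct buffer_head *bh = calloc(1, sizeof(*bh));
		if (bh)
			bh->blocknr = block;
		return bh;
	}

	/*
	 * Fast-path lookup: scan the small MRU array first; on a hit, move
	 * the entry to the front; on a miss, fall back to the slow path and
	 * insert the result at the front, evicting the least-recent slot.
	 */
	struct buffer_head *find_get_block_sketch(unsigned long long block)
	{
		struct bh_lru *lru = &bh_lru;
		struct buffer_head *bh;
		int i;

		for (i = 0; i < BH_LRU_SIZE; i++) {
			bh = lru->bhs[i];
			if (bh && bh->blocknr == block) {
				/* Hit: shift entries [0, i) down one slot. */
				memmove(&lru->bhs[1], &lru->bhs[0],
					i * sizeof(*lru->bhs));
				lru->bhs[0] = bh;
				return bh;
			}
		}

		bh = lookup_slow(block);	/* miss: take the contended path */
		if (bh) {
			/* Insert at the front, dropping the last slot. */
			memmove(&lru->bhs[1], &lru->bhs[0],
				(BH_LRU_SIZE - 1) * sizeof(*lru->bhs));
			lru->bhs[0] = bh;
		}
		return bh;
	}

Because the array is tiny and private to the CPU, a hit needs no locks at
all; only a miss falls through to the contended lookup, which is why the
high hit rate quoted above translates directly into reduced contention.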
e7ae11b6