- 21 Jun, 2016 4 commits
-
-
Dave Chinner authored
-
Dave Chinner authored
-
Christoph Hellwig authored
Instead punch the hole first, and then use our zeroing helper to zero out the edge blocks. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
- 20 Jun, 2016 12 commits
-
-
Christoph Hellwig authored
We now skip holes in it, so no need to have the caller do it as well. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Christoph Hellwig authored
We'll want to use this code for large offsets now that we're skipping holes and unwritten extents efficiently. Also rename it to xfs_zero_range to be a bit more descriptive, and tell the caller if we actually did any zeroing. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Christoph Hellwig authored
Note that this removes support for the untested FIEMAP_FLAG_XATTR. It could be added relatively easily with iomap ops for the attr fork, but without test coverage I don't feel safe doing this. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Christoph Hellwig authored
Convert XFS to use the new iomap based multipage write path. This involves implementing the ->iomap_begin and ->iomap_end methods, and switching the buffered file write, page_mkwrite and xfs_iozero paths to the new iomap helpers. With this change __xfs_get_blocks will never be used for buffered writes, and the code handling them can be removed. Based on earlier code from Dave Chinner. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Christoph Hellwig authored
Currently zeroing out blocks and waiting for writeout is a bit of a mess in truncate. This patch gives it a clear order in preparation for the iomap path (sketched below):

 (1) first wait for any direct I/O to complete, to prevent any races against it
 (2) then perform the actual zeroing, and only use the truncate_page helpers for truncating down; the truncate-up case is already handled by the separate call to xfs_zero_eof
 (3) only then write back dirty data, as zeroing blocks may cause dirty pages when using either xfs_zero_eof or the new iomap infrastructure

Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
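In kernel-style pseudo-C, the order above looks roughly like this (a sketch only; helper names are taken from the text, and the real xfs_setattr_size() code differs in detail):

    /* (1) drain direct I/O so the zeroing below cannot race with it */
    inode_dio_wait(inode);

    /* (2) do the actual zeroing */
    if (newsize < oldsize)          /* truncate down: partial-block zeroing */
            error = block_truncate_page(mapping, newsize, xfs_get_blocks);
    else                            /* truncate up: handled by xfs_zero_eof */
            error = xfs_zero_eof(ip, newsize, oldsize, &did_zeroing);

    /* (3) only now write back dirty data, including any pages the
     *     zeroing above may have dirtied */
    if (!error)
            error = filemap_write_and_wait_range(mapping, 0, -1);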
-
Christoph Hellwig authored
And ensure it works for RT subvolume files and set the block device, both of which will be needed to use the function in the buffered write path. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Christoph Hellwig authored
Add a simple fiemap implementation based on iomap_ops, partially based on a previous implementation from Bob Peterson <rpeterso@redhat.com>. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Christoph Hellwig authored
This avoids needing a separate, inefficient get_block-based DAX zero_range implementation in file systems. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Christoph Hellwig authored
Add infrastructure for multipage buffered writes. This is implemented using a main iterator that applies an actor function to a range that can be written. This infrastructure is used to implement a buffered write helper, one to zero file ranges, and one to implement the ->page_mkwrite VM operation. All of them borrow a fair amount of code from fs/buffer.c for now, by using an internal version of __block_write_begin that gets passed an iomap and builds the corresponding buffer heads. The file system gets a set of paired ->iomap_begin and ->iomap_end calls which allow it to map/reserve a range and get a notification once the write code is finished with it. Based on earlier code from Dave Chinner. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
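The shape of this API, as described above, is a pair of callbacks driven by an iterator that hands each mapped chunk to an actor. A simplified kernel-style sketch, not the exact in-tree signatures (names follow the commit text; the struct iomap fields are pared down for illustration):

    struct iomap {
            sector_t blkno;         /* disk location, or a hole/delalloc marker */
            loff_t offset;          /* file offset of this mapping */
            u64 length;             /* length of this mapping */
            int type;               /* hole, delalloc, mapped, unwritten */
    };

    struct iomap_ops {
            /* map/reserve up to @length bytes at @pos, describe it in @iomap */
            int (*iomap_begin)(struct inode *inode, loff_t pos, loff_t length,
                               unsigned flags, struct iomap *iomap);
            /* notify the fs that the write code is finished with the range */
            int (*iomap_end)(struct inode *inode, loff_t pos, loff_t length,
                             ssize_t written, unsigned flags, struct iomap *iomap);
    };

    /* one begin/actor/end cycle; callers loop until the whole range is done */
    static loff_t
    iomap_apply(struct inode *inode, loff_t pos, loff_t length, unsigned flags,
                const struct iomap_ops *ops, void *data,
                loff_t (*actor)(struct inode *, loff_t, loff_t, void *,
                                struct iomap *))
    {
            struct iomap iomap = { 0 };
            loff_t written;
            int ret;

            ret = ops->iomap_begin(inode, pos, length, flags, &iomap);
            if (ret)
                    return ret;
            written = actor(inode, pos, length, data, &iomap);
            if (ops->iomap_end)
                    ops->iomap_end(inode, pos, length,
                                   written > 0 ? written : 0, flags, &iomap);
            return written;
    }

The buffered write helper, the zeroing helper, and ->page_mkwrite then differ only in the actor they pass in.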
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
- 01 Jun, 2016 5 commits
-
-
Dave Chinner authored
When we have a lot of metadata to flush from the AIL, the buffer list can get very long. The current submission code tries to batch submission to optimise the IO order of the metadata (i.e. ascending block order) to maximise block layer merging of IO to adjacent metadata blocks.

Unfortunately, the method used can result in long lock hold times, as buffers locked early in the buffer list might not be dispatched until the end of the IO list processing. This is because sorting does not occur until after the buffer list has been processed and the buffers that are going to be submitted are locked. Hence when the buffer list is several thousand buffers long, the lock hold times before IO dispatch can be significant.

To fix this, sort the buffer list before we start trying to lock and submit buffers (sketched below). This means we can now submit buffers immediately after they are locked, allowing merging to occur immediately on the plug and dispatch to occur as quickly as possible. This means there is minimal delay between locking the buffer and IO submission occurring, hence reducing the worst case lock hold times seen during delayed write buffer IO submission significantly. Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
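A sketch of the reordered submission loop (illustrative and heavily simplified from the delwri submission path; error, pinning, and reference handling omitted):

    struct blk_plug plug;

    /* sort first, while nothing is locked ... */
    list_sort(NULL, buffer_list, xfs_buf_cmp);      /* ascending block order */

    blk_start_plug(&plug);
    list_for_each_entry_safe(bp, n, buffer_list, b_list) {
            if (!xfs_buf_trylock(bp))
                    continue;
            /* ... so each buffer is submitted immediately after locking,
             * merging on the plug with minimal lock hold time */
            xfs_buf_submit(bp);
    }
    blk_finish_plug(&plug);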
-
Christoph Hellwig authored
And the same for XFS_IOC_THAW. Even though we now have a common version of the ioctl, we still need to provide the old name for anyone using it. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Eric Sandeen authored
Al Viro noticed that xfs_lock_inodes should be static, and that led to ... a few more. These are just the easy ones, others require moving functions higher in source files, so that's not done here to keep this review simple. Signed-off-by: Eric Sandeen <sandeen@sandeen.net> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Brian Foster authored
The static checker reports that after commit 8d99fe92 ("xfs: fix efi/efd error handling to avoid fs shutdown hangs"), the code has been reworked such that error == -EFSCORRUPTED is not possible in this codepath. Remove the spurious error check and just use SHUTDOWN_META_IO_ERROR unconditionally. Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dave Chinner <david@fromorbit.com>
-
Brian Foster authored
Multi-block buffers are logged based on buffer offset in xfs_trans_log_buf(). xfs_buf_item_log() ultimately walks each mapping in the buffer and marks the associated range to be logged in the xfs_buf_log_format bitmap for that mapping. This code is broken, however, in that it marks the actual buffer offsets of the associated range in each bitmap rather than shifting to the byte range for that particular mapping.

For example, on a 4k fsb fs, buffer offset 4096 refers to the first byte of the second mapping in the buffer. This means byte 0 of the second log format bitmap should be tagged as dirty. Instead, the current code marks byte offset 4096 of the second log format bitmap, which is invalid and potentially out of range of the mapping.

As a result of this, the log item format code invoked at transaction commit time is not able to correctly identify what parts of the buffer to copy into log vectors. This can lead to NULL log vector pointer dereferences in CIL push context if the item format code was not able to locate any dirty ranges at all. This crash has been reproduced on a 4k FSB filesystem using 16k directory blocks where an unlink operation happened not to log anything in the first block of the mapping. The logged offsets were all beyond 4k, marked as such in the subsequent log format mappings, and thus left the transaction with an xfs_log_item that is marked DIRTY but without any logged regions.

Further, even when the logged regions are marked correctly in the buffer log format bitmaps, the format code doesn't copy the correct ranges of the buffer into the log. This means that any logged region beyond the first block of a multi-block buffer is subject to corruption after a crash and log recovery sequence. This is due to a failure to convert the mapping bm_len field from basic blocks to bytes in the buffer offset tracking code in xfs_buf_item_format().

Update xfs_buf_item_log() to convert buffer offsets to segment-relative offsets when logging multi-block buffers (see the sketch below). This ensures that the modified regions of a buffer are logged correctly and avoids the aforementioned crash. Also update xfs_buf_item_format() to correctly track the source offset into the buffer for the log vector formatting code. This ensures that the correct data is copied into the log. Signed-off-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
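In outline, the corrected bitmap marking walks the mappings like this (illustrative names, not the exact xfs_buf_item_log() code; mark_bitmap() stands in for the per-segment bitmap helper):

    uint seg_start = 0, seg_len, seg_end;

    for (i = 0; i < bip->bli_format_count; i++) {
            seg_len = BBTOB(bp->b_maps[i].bm_len);  /* basic blocks -> bytes */
            seg_end = seg_start + seg_len - 1;
            if (first <= seg_end && last >= seg_start) {
                    /* clamp the logged byte range to this mapping, then
                     * rebase it so byte 0 is the start of the mapping */
                    uint lo = max(first, seg_start) - seg_start;
                    uint hi = min(last, seg_end) - seg_start;
                    mark_bitmap(&bip->bli_formats[i], lo, hi);
            }
            seg_start += seg_len;
    }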
-
- 29 May, 2016 3 commits
-
-
Linus Torvalds authored
-
George Spelvin authored
The self-test was updated to cover zero-length strings; the function needs to be updated, too. Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: George Spelvin <linux@sciencehorizons.net> Fixes: fcfd2fbf ("fs/namei.c: Add hashlen_string() function") Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
George Spelvin authored
The original name was simply hash_string(), but that conflicted with a function with that name in drivers/base/power/trace.c, and I decided that calling it "hashlen_" was better anyway. But you have to do it in two places. [ This caused build errors for architectures that don't define CONFIG_DCACHE_WORD_ACCESS - Linus ] Signed-off-by: George Spelvin <linux@sciencehorizons.net> Reported-by: Guenter Roeck <linux@roeck-us.net> Fixes: fcfd2fbf ("fs/namei.c: Add hashlen_string() function") Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 28 May, 2016 16 commits
-
-
Mikulas Patocka authored
The HPFS filesystem used generic_show_options to produce the string that is displayed in /proc/mounts. However, there is a problem: the options may disappear after a remount. If we mount the filesystem with option1 and then remount it with option2, /proc/mounts should show both option1 and option2, but it only shows option2 because the whole option string is replaced with replace_mount_options in hpfs_remount_fs. To fix this bug, implement the hpfs_show_options function that prints the options that are currently selected. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
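A minimal sketch of the approach (the option and field names here are illustrative, not the exact hpfs ones):

    static int hpfs_show_options(struct seq_file *seq, struct dentry *root)
    {
            struct hpfs_sb_info *sbi = hpfs_sb(root->d_sb);

            /* print each option from the current in-memory state, so a
             * remount can never make previously set options disappear */
            seq_printf(seq, ",uid=%u", from_kuid_munged(&init_user_ns, sbi->sb_uid));
            seq_printf(seq, ",gid=%u", from_kgid_munged(&init_user_ns, sbi->sb_gid));
            if (sbi->sb_chk)
                    seq_puts(seq, ",check");
            return 0;
    }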
-
Mikulas Patocka authored
Commit c8f33d0b ("affs: kstrdup() memory handling") checks if the kstrdup function returns NULL due to an out-of-memory condition. However, if we are remounting a filesystem with no change to the filesystem-specific options, the parameter data is NULL. In this case kstrdup returns NULL (because it was passed a NULL parameter), although no out-of-memory condition exists. The mount syscall then fails with ENOMEM. This patch fixes the bug: we fail with ENOMEM only if data is non-NULL. The patch also changes the call to replace_mount_options - if we didn't pass any filesystem-specific options, we don't call replace_mount_options (and thus don't erase the existing reported options). Fixes: c8f33d0b ("affs: kstrdup() memory handling") Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: stable@vger.kernel.org # v4.1+ Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Mikulas Patocka authored
Commit ce657611 ("hpfs: kstrdup() out of memory handling") checks if the kstrdup function returns NULL due to an out-of-memory condition. However, if we are remounting a filesystem with no change to the filesystem-specific options, the parameter data is NULL. In this case kstrdup returns NULL (because it was passed a NULL parameter), although no out-of-memory condition exists. The mount syscall then fails with ENOMEM. This patch fixes the bug: we fail with ENOMEM only if data is non-NULL (sketched below). The patch also changes the call to replace_mount_options - if we didn't pass any filesystem-specific options, we don't call replace_mount_options (and thus don't erase the existing reported options). Fixes: ce657611 ("hpfs: kstrdup() out of memory handling") Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
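Both this and the affs fix above reduce to the same remount pattern; a minimal sketch:

    static int remount_fs(struct super_block *sb, int *flags, char *data)
    {
            char *new_opts = NULL;

            if (data) {
                    new_opts = kstrdup(data, GFP_KERNEL);
                    if (!new_opts)
                            return -ENOMEM; /* a real OOM, not a NULL @data */
            }

            /* ... parse and apply the options ... */

            if (new_opts)
                    replace_mount_options(sb, new_opts);
            /* else: keep the previously reported options intact */
            return 0;
    }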
-
git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Linus Torvalds authored
Pull more MIPS updates from Ralf Baechle:
 "This is the second batch of MIPS patches for 4.7. Summary:

  CPS:
   - Copy EVA configuration when starting secondary VPs.

  EIC:
   - Clear Status IPL.

  Lasat:
   - Fix a few off by one bugs.

  lib:
   - Mark intrinsics notrace. Not only are the intrinsics uninteresting for debugging, but also tracing them might result in infinite recursion.

  MAINTAINERS:
   - Add file patterns for MIPS BRCM device tree bindings.
   - Add file patterns for mips device tree bindings.

  MT7628:
   - Fix MT7628 pinmux typos.
   - wled_an pinmux gpio.
   - EPHY LEDs pinmux support.

  Pistachio:
   - Enable KASLR.

  VDSO:
   - Build microMIPS VDSO for microMIPS kernels.
   - Fix aliasing warning by building with `-fno-strict-aliasing'.

  Misc:
   - Add missing FROZEN hotplug notifier transitions.
   - Fix clk binding example for various PIC32 devices.
   - Fix cpu interrupt controller node-names in the DT files.
   - Fix XPA CPU feature separation.
   - Fix write_gc0_* macros when writing zero.
   - Add inline asm encoding helpers.
   - Add missing VZ accessor microMIPS encodings.
   - Fix little endian microMIPS MSA encodings.
   - Add 64-bit HTW fields and fix its configuration.
   - Fix sigreturn via VDSO on microMIPS kernel.
   - Lots of typo fixes.
   - Add definitions of SegCtl registers and use them"

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (49 commits)
  MIPS: Add missing FROZEN hotplug notifier transitions
  MIPS: Build microMIPS VDSO for microMIPS kernels
  MIPS: Fix sigreturn via VDSO on microMIPS kernel
  MIPS: devicetree: fix cpu interrupt controller node-names
  MIPS: VDSO: Build with `-fno-strict-aliasing'
  MIPS: Pistachio: Enable KASLR
  MIPS: lib: Mark intrinsics notrace
  MIPS: Fix 64-bit HTW configuration
  MIPS: Add 64-bit HTW fields
  MAINTAINERS: Add file patterns for mips device tree bindings
  MAINTAINERS: Add file patterns for mips brcm device tree bindings
  MIPS: Simplify DSP instruction encoding macros
  MIPS: Add missing tlbinvf/XPA microMIPS encodings
  MIPS: Fix little endian microMIPS MSA encodings
  MIPS: Add missing VZ accessor microMIPS encodings
  MIPS: Add inline asm encoding helpers
  MIPS: Spelling fix lets -> let's
  MIPS: VR41xx: Fix typo
  MIPS: oprofile: Fix typo
  MIPS: math-emu: Fix typo
  ...
-
Guenter Roeck authored
Various builds (such as i386:allmodconfig) fail with:

  fs/binfmt_aout.c:133:2: error: expected identifier or '(' before 'return'
  fs/binfmt_aout.c:134:1: error: expected identifier or '(' before '}' token

[ Oops. My bad, I had stupidly thought that "allmodconfig" covered this on x86-64 too, but it obviously doesn't. Egg on my face. - Linus ] Fixes: 5d22fc25 ("mm: remove more IS_ERR_VALUE abuses") Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
git://ftp.sciencehorizons.net/linux
Linus Torvalds authored
Pull string hash improvements from George Spelvin:
 "This series does several related things:

   - Makes the dcache hash (fs/namei.c) useful for general kernel use. (Thanks to Bruce for noticing the zero-length corner case)

   - Converts the string hashes in <linux/sunrpc/svcauth.h> to use the above.

   - Avoids 64-bit multiplies in hash_64() on 32-bit platforms. Two 32-bit multiplies will do well enough.

   - Rids the world of the bad hash multipliers in hash_32. This finishes the job started in commit 689de1d6 ("Minimal fix-up of bad hashing behavior of hash_64()"). The vast majority of Linux architectures have hardware support for 32x32-bit multiply and so derive no benefit from "simplified" multipliers. The few processors that do not (68000, h8/300 and some models of Microblaze) have arch-specific implementations added. Those patches are last in the series.

   - Overhauls the dcache hash mixing. The patch in commit 0fed3ac8 ("namei: Improve hash mixing if CONFIG_DCACHE_WORD_ACCESS") was an off-the-cuff suggestion. Replaced with a much more careful design that's simultaneously faster and better. (My own invention, as there was nothing suitable in the literature I could find. Comments welcome!)

   - Modify the hash_name() loop to skip the initial HASH_MIX(). This would let us salt the hash if we ever wanted to.

   - Sort out partial_name_hash(). The hash function is declared as using a long state, even though it's truncated to 32 bits at the end and the extra internal state contributes nothing to the result. And some callers do odd things:
      - fs/hfs/string.c only allocates 32 bits of state
      - fs/hfsplus/unicode.c uses it to hash 16-bit unicode symbols, not bytes

   - Modify bytemask_from_count to handle inputs of 1..sizeof(long) rather than 0..sizeof(long)-1. This would simplify users other than full_name_hash"

  Special thanks to Bruce Fields for testing and finding bugs in v1. (I learned some humbling lessons about "obviously correct" code.)

  On the arch-specific front, the m68k assembly has been tested in a standalone test harness, I've been in contact with the Microblaze maintainers who mostly don't care, as the hardware multiplier is never omitted in real-world applications, and I haven't heard anything from the H8/300 world"

* 'hash' of git://ftp.sciencehorizons.net/linux:
  h8300: Add <asm/hash.h>
  microblaze: Add <asm/hash.h>
  m68k: Add <asm/hash.h>
  <linux/hash.h>: Add support for architecture-specific functions
  fs/namei.c: Improve dcache hash function
  Eliminate bad hash multipliers from hash_32() and hash_64()
  Change hash_64() return value to 32 bits
  <linux/sunrpc/svcauth.h>: Define hash_str() in terms of hashlen_string()
  fs/namei.c: Add hashlen_string() function
  Pull out string hash to <linux/stringhash.h>
-
George Spelvin authored
This will improve the performance of hash_32() and hash_64(), but due to complete lack of multi-bit shift instructions on H8, performance will still be bad in surrounding code. Designing H8-specific hash algorithms to work around that is a separate project. (But if the maintainers would like to get in touch...) Signed-off-by: George Spelvin <linux@sciencehorizons.net> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: uclinux-h8-devel@lists.sourceforge.jp
-
George Spelvin authored
Microblaze is an FPGA soft core that can be configured various ways. If it is configured without a multiplier, the standard __hash_32() will require a call to __mulsi3, which is a slow software loop. Instead, use a shift-and-add sequence for the constant multiply. GCC knows how to do this, but it's not as clever as some. Signed-off-by: George Spelvin <linux@sciencehorizons.net> Cc: Alistair Francis <alistair.francis@xilinx.com> Cc: Michal Simek <michal.simek@xilinx.com>
-
George Spelvin authored
This provides a multiply by constant GOLDEN_RATIO_32 = 0x61C88647 for the original mc68000, which lacks a 32x32-bit multiply instruction. Yes, the amount of optimization effort put in is excessive. :-) Shift-add chain found by Yevgen Voronenko's Hcub algorithm at http://spiral.ece.cmu.edu/mcm/gen.html Signed-off-by: George Spelvin <linux@sciencehorizons.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Andreas Schwab <schwab@linux-m68k.org> Cc: Philippe De Muyter <phdm@macq.eu> Cc: linux-m68k@lists.linux-m68k.org
-
George Spelvin authored
This is just the infrastructure; there are no users yet. This is modelled on CONFIG_ARCH_RANDOM; a CONFIG_ symbol declares the existence of <asm/hash.h>. That file may define its own versions of various functions, and define HAVE_* symbols (no CONFIG_ prefix!) to suppress the generic ones. Included is a self-test (in lib/test_hash.c) that verifies the basics. It is NOT in general required that the arch-specific functions compute the same thing as the generic, but if a HAVE_* symbol is defined with the value 1, then equality is tested. Signed-off-by: George Spelvin <linux@sciencehorizons.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Andreas Schwab <schwab@linux-m68k.org> Cc: Philippe De Muyter <phdm@macq.eu> Cc: linux-m68k@lists.linux-m68k.org Cc: Alistair Francis <alistai@xilinx.com> Cc: Michal Simek <michal.simek@xilinx.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: uclinux-h8-devel@lists.sourceforge.jp
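An architecture would opt in along these lines (a hypothetical arch; the exact HAVE_* symbol spelling follows the convention checked in <linux/hash.h>, and CONFIG_ARCH_HASH-style enabling happens in the arch Kconfig):

    /* arch/foo/include/asm/hash.h -- hypothetical example */
    #ifndef _ASM_FOO_HASH_H
    #define _ASM_FOO_HASH_H

    /* value 1: the lib/test_hash.c self-test checks equality with the generic version */
    #define HAVE_ARCH__HASH_32 1

    static inline u32 __attribute_const__ __hash_32(u32 x)
    {
            /* arch-optimized multiply by GOLDEN_RATIO_32 = 0x61C88647 */
            return x * 0x61C88647;
    }

    #endif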
-
George Spelvin authored
Patch 0fed3ac8 improved the hash mixing, but the function is slower than necessary; there's a 7-instruction dependency chain (10 on x86) each loop iteration. Word-at-a-time access is a very tight loop (which is good, because link_path_walk() is one of the hottest code paths in the entire kernel), and the hash mixing function must not have a longer latency to avoid slowing it down.

There do not appear to be any published fast hash functions that:
 1) Operate on the input a word at a time, and
 2) Don't need to know the length of the input beforehand, and
 3) Have a single iterated mixing function, not needing conditional branches or unrolling to distinguish different loop iterations.

One of the algorithms which comes closest is Yann Collet's xxHash, but that's two dependent multiplies per word, which is too much.

The key insights in this design are:
 1) Barring expensive ops like multiplies, to diffuse one input bit across 64 bits of hash state takes at least log2(64) = 6 sequentially dependent instructions. That is more cycles than we'd like.
 2) An operation like "hash ^= hash << 13" requires a second temporary register anyway, and on a 2-operand machine like x86, it's three instructions.
 3) A better use of a second register is to hold a two-word hash state. With careful design, no temporaries are needed at all, so it doesn't increase register pressure. And this gets rid of register copying on 2-operand machines, so the code is smaller and faster.
 4) Using two words of state weakens the requirement for one-round mixing; we now have two rounds of mixing before cancellation is possible.
 5) A two-word hash state also allows operations on both halves to be done in parallel, so on a superscalar processor we get more mixing in fewer cycles.

I ended up using a mixing function inspired by the ChaCha and Speck round functions. It is 6 simple instructions and 3 cycles per iteration (assuming multiply by 9 can be done by an "lea" instruction):

    x ^= *input++;
    y ^= x;  x = ROL(x, K1);
    x += y;  y = ROL(y, K2);
    y *= 9;

Not only is this reversible, two consecutive rounds are reversible: if you are given the initial and final states, but not the intermediate state, it is possible to compute both input words. This means that at least 3 words of input are required to create a collision. (It also has the property, used by hash_name() to avoid a branch, that it hashes all-zero to all-zero.)

The rotate constants K1 and K2 were found by experiment. The search took a sample of random initial states (I used 1023) and considered the effect of flipping each of the 64 input bits on each of the 128 output bits two rounds later. Each of the 8192 pairs can be considered a biased coin, and adding up the Shannon entropy of all of them produces a score. The best-scoring shifts also did well in other tests (flipping bits in y, trying 3 or 4 rounds of mixing, flipping all 64*63/2 pairs of input bits), so the choice was made with the additional constraint that the sum of the shifts is odd and not too close to the word size.

The final state is then folded into a 32-bit hash value by a less carefully optimized multiply-based scheme. This also has to be fast, as pathname components tend to be short (the most common case is one iteration!), but there's some room for latency, as there is a fair bit of intervening logic before the hash value is used for anything.

(Performance verified with "bonnie++ -s 0 -n 1536:-2" on tmpfs. I need a better benchmark; the numbers seem to show a slight dip in performance between 4.6.0 and this patch, but they're too noisy to quote.)

Special thanks to Bruce Fields for diligent testing which uncovered a nasty fencepost error in an earlier version of this patch. [checkpatch.pl formatting complaints noted and respectfully disagreed with.] Signed-off-by: George Spelvin <linux@sciencehorizons.net> Tested-by: J. Bruce Fields <bfields@redhat.com>
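The round is easy to play with outside the kernel. A self-contained user-space sketch (the rotate constants here are placeholders, since the commit only says K1 and K2 were found by experiment):

    #include <stdint.h>
    #include <stdio.h>

    #define ROL64(v, n) (((v) << (n)) | ((v) >> (64 - (n))))
    #define K1 12   /* placeholder rotate constants, not the kernel's */
    #define K2 45

    /* one mixing round: 6 simple ops, 3 cycles of dependent latency */
    static void hash_mix(uint64_t *x, uint64_t *y, uint64_t input)
    {
            *x ^= input;
            *y ^= *x;  *x = ROL64(*x, K1);
            *x += *y;  *y = ROL64(*y, K2);
            *y *= 9;
    }

    int main(void)
    {
            uint64_t x = 0, y = 0;
            const uint64_t words[] = { 0x736b636f6c2f7772ULL, 0x2e632f69656d616eULL };

            for (unsigned i = 0; i < 2; i++)
                    hash_mix(&x, &y, words[i]);
            printf("state: %016llx %016llx\n",
                   (unsigned long long)x, (unsigned long long)y);
            /* note: feeding all-zero input leaves the all-zero state unchanged,
             * the property hash_name() exploits to avoid a branch */
            return 0;
    }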
-
George Spelvin authored
The "simplified" prime multipliers made very bad hash functions, so get rid of them. This completes the work of 689de1d6. To avoid the inefficiency which was the motivation for the "simplified" multipliers, hash_64() on 32-bit systems is changed to use a different algorithm. It makes two calls to hash_32() instead. drivers/media/usb/dvb-usb-v2/af9015.c uses the old GOLDEN_RATIO_PRIME_32 for some horrible reason, so it inherits a copy of the old definition. Signed-off-by: George Spelvin <linux@sciencehorizons.net> Cc: Antti Palosaari <crope@iki.fi> Cc: Mauro Carvalho Chehab <m.chehab@samsung.com>
-
George Spelvin authored
That's all that's ever asked for, and it makes the return type of hash_long() consistent. It also allows (upcoming patch) an optimized implementation of hash_64 on 32-bit machines. I tried adding a BUILD_BUG_ON to ensure the number of bits requested was never more than 32 (most callers use a compile-time constant), but adding <linux/bug.h> to <linux/hash.h> breaks the tools/perf compiler unless tools/perf/MANIFEST is updated, and understanding that code base well enough to update it is too much trouble. I did the rest of an allyesconfig build with such a check, and nothing tripped. Signed-off-by: George Spelvin <linux@sciencehorizons.net>
-
George Spelvin authored
Finally, the first use of previous two patches: eliminate the separate ad-hoc string hash functions in the sunrpc code. Now hash_str() is a wrapper around hash_string(), and hash_mem() is likewise a wrapper around full_name_hash(). Note that sunrpc code *does* call hash_mem() with a zero length, which is why the previous patch needed to handle that in full_name_hash(). (Thanks, Bruce, for finding that!) This also eliminates the only caller of hash_long which asks for more than 32 bits of output. The comment about the quality of hashlen_string() and full_name_hash() is jumping the gun by a few patches; they aren't very impressive now, but will be improved greatly later in the series. Signed-off-by: George Spelvin <linux@sciencehorizons.net> Tested-by: J. Bruce Fields <bfields@redhat.com> Acked-by: J. Bruce Fields <bfields@redhat.com> Cc: Jeff Layton <jlayton@poochiereds.net> Cc: linux-nfs@vger.kernel.org
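The resulting wrappers are tiny; roughly (hedged against the exact in-tree spelling, and note that full_name_hash() at this point in history takes no salt argument):

    static inline unsigned long hash_str(char const *name, int bits)
    {
            return hashlen_hash(hashlen_string(name)) >> (32 - bits);
    }

    static inline unsigned long hash_mem(char const *buf, int length, int bits)
    {
            return full_name_hash(buf, length) >> (32 - bits);
    }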
-
George Spelvin authored
We'd like to make more use of the highly-optimized dcache hash functions throughout the kernel, rather than have every subsystem create its own, and a function that hashes basic null-terminated strings is required for that. (The name is to emphasize that it returns both hash and length.) It's actually useful in the dcache itself, specifically d_alloc_name(). Other uses in the next patch. full_name_hash() is also tweaked to make it more generally useful: 1) Take a "char *" rather than "unsigned char *" argument, to be consistent with hash_name(). 2) Handle zero-length inputs. If we want more callers, we don't want to make them worry about corner cases. Signed-off-by: George Spelvin <linux@sciencehorizons.net>
-
George Spelvin authored
... so they can be used without the rest of <linux/dcache.h>. The hashlen_* macros will make sense in the next patch. Signed-off-by: George Spelvin <linux@sciencehorizons.net>
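The hashlen_* macros just pack a 32-bit hash and a 32-bit length into one 64-bit value; roughly (see <linux/stringhash.h> for the real definitions):

    #define hashlen_hash(hashlen)      ((u32)(hashlen))               /* low half */
    #define hashlen_len(hashlen)       ((u32)((u64)(hashlen) >> 32))  /* high half */
    #define hashlen_create(hash, len)  ((u64)(len) << 32 | (u32)(hash))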
-