- 25 Aug, 2018 1 commit
-
-
Horia Geantă authored
Descriptor address needs to be swapped to CPU endianness before being DMA unmapped. Cc: <stable@vger.kernel.org> # 4.8+ Fixes: 261ea058 ("crypto: caam - handle core endianness != caam endianness") Reported-by: Laurentiu Tudor <laurentiu.tudor@nxp.com> Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 07 Aug, 2018 9 commits
-
-
Ard Biesheuvel authored
Enhance the GHASH implementation that uses 64-bit polynomial multiplication by adding support for 4-way aggregation. This more than doubles the performance, from 2.4 cycles per byte to 1.1 cpb on Cortex-A53. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
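For reference, the standard aggregation identity this kind of optimisation relies on (textbook GHASH algebra, not quoted from the patch itself): with hash key H and input blocks X_i, four blocks can be folded with a single modular reduction,

    Y_{i+4} = (Y_i \oplus X_{i+1})H^4 \oplus X_{i+2}H^3 \oplus X_{i+3}H^2 \oplus X_{i+4}H \quad \text{in } GF(2^{128})

so the four carry-less multiplications can proceed independently and only their sum needs to be reduced modulo the field polynomial.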
-
Ard Biesheuvel authored
Checking the TIF_NEED_RESCHED flag is disproportionately costly on cores with fast crypto instructions and comparatively slow memory accesses. On algorithms such as GHASH, which executes at ~1 cycle per byte on cores that implement support for 64-bit polynomial multiplication, there is really no need to check the TIF_NEED_RESCHED flag particularly often, and so we can remove the NEON yield check from the assembler routines. However, unlike the AEAD or skcipher APIs, the shash/ahash APIs take arbitrary input lengths, and so there needs to be some sanity check to ensure that we don't hog the CPU for excessive amounts of time. So let's simply cap the maximum input size that is processed in one go to 64 KB. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
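A minimal sketch of the capping idea, assuming hypothetical helper and context-type names (the patch's actual glue code differs):

    #include <asm/neon.h>           /* kernel_neon_begin()/kernel_neon_end() */
    #include <linux/sizes.h>        /* SZ_64K */
    #include <linux/minmax.h>       /* min_t() */
    #include <linux/types.h>

    /* Hedged sketch, not the patch's code: never process more than 64 KiB
     * inside a single kernel-mode NEON region, so one large hash update
     * cannot monopolise the CPU now that the per-block yield check is gone. */
    static void ghash_update_capped(struct my_ghash_ctx *ctx,   /* hypothetical ctx type */
                                    const u8 *src, unsigned int len)
    {
            do {
                    unsigned int chunk = min_t(unsigned int, len, SZ_64K);

                    kernel_neon_begin();
                    my_ghash_do_update(ctx, src, chunk);    /* hypothetical NEON core */
                    kernel_neon_end();

                    src += chunk;
                    len -= chunk;
            } while (len);
    }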
-
kbuild test robot authored
Fixes: 915e4e84 ("crypto: hisilicon - SEC security accelerator driver") Signed-off-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Colin Ian King authored
Variable esign is assigned but never used, hence it is redundant and can be removed. This cleans up the clang warning: warning: variable 'esign' set but not used [-Wunused-but-set-variable] Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Squeeze out another 5% of performance by minimizing the number of invocations of kernel_neon_begin()/kernel_neon_end() on the common path, which also allows some reloads of the key schedule to be optimized away. The resulting code runs at 2.3 cycles per byte on a Cortex-A53. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Implement a faster version of the GHASH transform which amortizes the reduction modulo the characteristic polynomial across two input blocks at a time. On a Cortex-A53, the gcm(aes) performance increases 24%, from 3.0 cycles per byte to 2.4 cpb for large input sizes. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Update the core AES/GCM transform and the associated plumbing to operate on 2 AES/GHASH blocks at a time. By itself, this is not expected to result in a noticeable speedup, but it paves the way for reimplementing the GHASH component using 2-way aggregation. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Herbert Xu authored
Merge crypto-2.6 to pick up NEON yield revert.
-
Ard Biesheuvel authored
As it turns out, checking the TIF_NEED_RESCHED flag after each iteration results in a significant performance regression (~10%) when running fast algorithms (i.e., ones that use special instructions and operate in the < 4 cycles per byte range) on in-order cores with comparatively slow memory accesses such as the Cortex-A53. Given the speed of these ciphers, and the fact that the page based nature of the AEAD scatterwalk API guarantees that the core NEON transform is never invoked with more than a single page's worth of input, we can estimate the worst case duration of any resulting scheduling blackout: on a 1 GHz Cortex-A53 running with 64k pages, processing a page's worth of input at 4 cycles per byte results in a delay of ~250 us, which is a reasonable upper bound. So let's remove the yield checks from the fused AES-CCM and AES-GCM routines entirely. This reverts commit 7b67ae4d and partially reverts commit 7c50136a. Fixes: 7c50136a ("crypto: arm64/aes-ghash - yield NEON after every ...") Fixes: 7b67ae4d ("crypto: arm64/aes-ccm - yield NEON after every ...") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
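For completeness, the ~250 us figure follows directly from the numbers quoted above (64 KiB page, 4 cycles per byte, 1 GHz clock):

    65536\ \text{bytes} \times 4\ \tfrac{\text{cycles}}{\text{byte}} = 262144\ \text{cycles}, \qquad \frac{262144\ \text{cycles}}{10^{9}\ \text{cycles/s}} \approx 262\ \mu\text{s} \approx 250\ \mu\text{s}

which is the worst-case scheduling blackout for a single page's worth of input.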
-
- 03 Aug, 2018 24 commits
-
-
Eric Biggers authored
Make the DH key encoding return -EINVAL when the supplied buffer length does not match crypto_dh_key_len(), rather than overflowing the buffer. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
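A hedged illustration of the idea; the wrapper name below is invented, only crypto_dh_key_len() and crypto_dh_encode_key() are real kernel APIs:

    #include <crypto/dh.h>
    #include <linux/errno.h>

    /* Illustrative only: refuse a mismatched buffer length up front instead
     * of writing past the end of the caller's buffer. */
    static int dh_encode_key_checked(char *buf, unsigned int len,
                                     const struct dh *params)
    {
            if (crypto_dh_key_len(params) != len)
                    return -EINVAL;

            return crypto_dh_encode_key(buf, len, params);
    }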
-
Eric Biggers authored
DH_KPP_SECRET_MIN_SIZE was not increased to include 'q_size', causing an out-of-bounds write of 4 bytes in crypto_dh_encode_key() and an out-of-bounds read of 4 bytes in crypto_dh_decode_key(). Fix it, and fix the lengths of the test vectors to match. Reported-by: syzbot+6d38d558c25b53b8f4ed@syzkaller.appspotmail.com Fixes: e3fe0ae1 ("crypto: dh - add public key verification test") Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tom Lendacky authored
Should the PSP initialization fail, the PSP data structure will be freed and the value contained in the sp_device struct set to NULL. At module unload, psp_dev_destroy() does not check if the pointer value is NULL and will end up dereferencing a NULL pointer. Add a pointer check of the psp_data field in the sp_device struct in psp_dev_destroy() and return immediately if it is NULL. Cc: <stable@vger.kernel.org> # 4.16.x- Fixes: 2a6170df ("crypto: ccp: Add Platform Security Processor (PSP) device support") Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Acked-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
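A minimal sketch of the guard being described (simplified, not the driver's literal diff):

    /* Hedged sketch: bail out early when PSP init failed and psp_data was
     * left NULL, instead of dereferencing a NULL pointer at module unload. */
    void psp_dev_destroy(struct sp_device *sp)
    {
            struct psp_device *psp = sp->psp_data;

            if (!psp)
                    return;

            /* ... normal teardown (free IRQ, destroy sysfs entries, ...) ... */
    }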
-
Eric Biggers authored
The 4-way ChaCha20 NEON code implements 16-bit rotates with vrev32.16, but the one-way code (used on remainder blocks) implements it with vshl + vsri, which is slower. Switch the one-way code to vrev32.16 too. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
The ccree driver had a sanity check that we are not asked to encrypt an XTS buffer bigger than a sane sector size since XTS IV needs to include the sector number in the IV so this is not expected in any real use case. Unfortunately, this breaks cryptsetup benchmark test which has a synthetic performance test using 64k buffer of data with the same IV. Remove the sanity check and allow the user to hang themselves and/or run benchmarks if they so wish. Reported-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
In a certain error path, req_ctx->iv was being freed despite never having been allocated, because it was not initialized to NULL. Rather than play whack-a-mole with the structure's various fields, zero it before use. This fixes a kernel panic that may occur if an invalid buffer size is requested, triggering the bug above. Fixes: 63ee04c8 ("crypto: ccree - add skcipher support") Reported-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
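A hedged sketch of the approach (the request-context type and processing function are illustrative names, not the ccree driver's own):

    #include <crypto/skcipher.h>
    #include <linux/string.h>
    #include <linux/types.h>

    struct my_cipher_req_ctx {      /* hypothetical per-request context */
            u8 *iv;
            /* ... */
    };

    /* Illustrative only: zero the whole per-request context before it is
     * used, so an error path that frees req_ctx->iv never sees a stale,
     * never-allocated pointer. */
    static int my_cipher_process(struct skcipher_request *req)
    {
            struct my_cipher_req_ctx *req_ctx = skcipher_request_ctx(req);

            memset(req_ctx, 0, sizeof(*req_ctx));

            /* ... request setup; kfree(req_ctx->iv) on error is now harmless ... */
            return 0;
    }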
-
Gilad Ben-Yossef authored
IV generation is not available via the skcipher interface. Remove the left over support of it from the ablkcipher days. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
Drop the explicit setting of CRYPTO_ALG_TYPE_AEAD or CRYPTO_ALG_TYPE_SKCIPHER flags during alg registration as they are set anyway by the framework. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Like the skcipher_walk and blkcipher_walk cases: scatterwalk_done() is only meant to be called after a nonzero number of bytes have been processed, since scatterwalk_pagedone() will flush the dcache of the *previous* page. But in the error case of ablkcipher_walk_done(), e.g. if the input wasn't an integer number of blocks, scatterwalk_done() was actually called after advancing 0 bytes. This caused a crash ("BUG: unable to handle kernel paging request") during '!PageSlab(page)' on architectures like arm and arm64 that define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was page-aligned as in that case walk->offset == 0. Fix it by reorganizing ablkcipher_walk_done() to skip the scatterwalk_advance() and scatterwalk_done() if an error has occurred. Reported-by: Liu Chao <liuchao741@huawei.com> Fixes: bf06099d ("crypto: skcipher - Add ablkcipher_walk interfaces") Cc: <stable@vger.kernel.org> # v2.6.35+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Like the skcipher_walk case: scatterwalk_done() is only meant to be called after a nonzero number of bytes have been processed, since scatterwalk_pagedone() will flush the dcache of the *previous* page. But in the error case of blkcipher_walk_done(), e.g. if the input wasn't an integer number of blocks, scatterwalk_done() was actually called after advancing 0 bytes. This caused a crash ("BUG: unable to handle kernel paging request") during '!PageSlab(page)' on architectures like arm and arm64 that define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was page-aligned as in that case walk->offset == 0. Fix it by reorganizing blkcipher_walk_done() to skip the scatterwalk_advance() and scatterwalk_done() if an error has occurred. This bug was found by syzkaller fuzzing.

Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
            struct sockaddr_alg addr = {
                    .salg_type = "skcipher",
                    .salg_name = "ecb(aes-generic)",
            };
            char buffer[4096] __attribute__((aligned(4096))) = { 0 };
            int fd;

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
            fd = accept(fd, NULL, NULL);
            write(fd, buffer, 15);
            read(fd, buffer, 15);
    }

Reported-by: Liu Chao <liuchao741@huawei.com> Fixes: 5cde0af2 ("[CRYPTO] cipher: Added block cipher type") Cc: <stable@vger.kernel.org> # v2.6.19+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
scatterwalk_done() is only meant to be called after a nonzero number of bytes have been processed, since scatterwalk_pagedone() will flush the dcache of the *previous* page. But in the error case of skcipher_walk_done(), e.g. if the input wasn't an integer number of blocks, scatterwalk_done() was actually called after advancing 0 bytes. This caused a crash ("BUG: unable to handle kernel paging request") during '!PageSlab(page)' on architectures like arm and arm64 that define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was page-aligned as in that case walk->offset == 0. Fix it by reorganizing skcipher_walk_done() to skip the scatterwalk_advance() and scatterwalk_done() if an error has occurred. This bug was found by syzkaller fuzzing.

Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
            struct sockaddr_alg addr = {
                    .salg_type = "skcipher",
                    .salg_name = "cbc(aes-generic)",
            };
            char buffer[4096] __attribute__((aligned(4096))) = { 0 };
            int fd;

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
            fd = accept(fd, NULL, NULL);
            write(fd, buffer, 15);
            read(fd, buffer, 15);
    }

Reported-by: Liu Chao <liuchao741@huawei.com> Fixes: b286d8b1 ("crypto: skcipher - Add skcipher walk interface") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
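The control-flow change shared by the three walker fixes above can be sketched roughly as follows; scatterwalk_advance() and scatterwalk_done() are the real helpers, while the walk structure and function name are simplified stand-ins, not the literal kernel diff:

    #include <crypto/scatterwalk.h>

    struct my_walk {                        /* hypothetical, heavily simplified */
            struct scatter_walk out;
            unsigned int nbytes;
            unsigned int total;
    };

    /* Hedged sketch: on error nothing was processed, so neither advance the
     * scatterwalk nor call scatterwalk_done(), which would otherwise flush
     * the dcache of a "previous" page that does not exist when
     * walk->offset == 0. */
    static int my_walk_done(struct my_walk *walk, int err)
    {
            unsigned int n = 0;

            if (!err) {
                    n = walk->nbytes;
                    scatterwalk_advance(&walk->out, n);
                    scatterwalk_done(&walk->out, 1, walk->total - n);
            }

            walk->total -= n;
            return err;
    }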
-
Eric Biggers authored
Setting 'walk->nbytes = walk->total' in skcipher_walk_first() doesn't make sense because actually walk->nbytes needs to be set to the length of the first step in the walk, which may be less than walk->total. This is done by skcipher_walk_next() which is called immediately afterwards. Also walk->nbytes was already set to 0 in skcipher_walk_skcipher(), which is a better default value in case it's forgotten to be set later. Therefore, remove the unnecessary assignment to walk->nbytes. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
scatterwalk_samebuf() is never used. Remove it. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
All callers pass chain=0 to scatterwalk_crypto_chain(). Remove this unneeded parameter. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The ALIGN() macro needs to be passed the alignment, not the alignmask (which is the alignment minus 1). Fixes: b286d8b1 ("crypto: skcipher - Add skcipher walk interface") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
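A small hedged illustration of the bug class (not the patched line itself):

    #include <linux/kernel.h>       /* ALIGN() */

    /* ALIGN() expects the alignment (a power of two), not the alignmask
     * (alignment - 1), so the mask has to be converted first. */
    static unsigned long round_up_to_alignment(unsigned long addr,
                                               unsigned long alignmask)
    {
            return ALIGN(addr, alignmask + 1);      /* correct */
            /* return ALIGN(addr, alignmask); */    /* wrong: uses the mask as the alignment */
    }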
-
Jonathan Cameron authored
Enable all 4 SEC units available on d05 boards. Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Jonathan Cameron authored
This accelerator is found inside hisilicon hip06 and hip07 SoCs. Each instance provides a number of queues which feed a different number of backend acceleration units. The queues operate in an out-of-order mode in the interests of throughput. The silicon does not track dependencies between multiple 'messages' or update the IVs as appropriate for chaining, so where relevant we need to do this in software. Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Jonathan Cameron authored
The hip06 and hip07 SoCs contain a number of these crypto units which accelerate AES and DES operations. Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Avoid RCU stalls in the case of non-preemptible kernel and lengthy speed tests by rescheduling when advancing from one block size to another. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
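A hedged sketch of the pattern (the per-size test helper is an invented name; cond_resched() is the real primitive):

    #include <linux/kernel.h>       /* ARRAY_SIZE() */
    #include <linux/sched.h>        /* cond_resched() */

    /* Illustrative only: yield between block sizes so a long speed run
     * cannot starve RCU on a non-preemptible kernel. */
    static void run_speed_tests(void)
    {
            static const unsigned int sizes[] = { 16, 64, 256, 1024, 4096, 8192 };
            unsigned int i;

            for (i = 0; i < ARRAY_SIZE(sizes); i++) {
                    do_one_speed_test(sizes[i]);    /* hypothetical helper */
                    cond_resched();
            }
    }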
-
Jia-Ju Bai authored
__virtio_crypto_ablkcipher_do_req() is never called in atomic context. It is only called by virtio_crypto_ablkcipher_crypt_req(), which is only called by virtcrypto_find_vqs(), which is never called in atomic context. __virtio_crypto_ablkcipher_do_req() calls kzalloc_node() with GFP_ATOMIC, which is not necessary; GFP_ATOMIC can be replaced with GFP_KERNEL. This was found by a static analysis tool named DCNS that I wrote. I also manually checked the kernel code before reporting it. Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
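The shape of this class of change, as a hedged sketch (allocation site and names are illustrative, not the driver's exact code):

    #include <linux/slab.h>

    /* Illustrative only: the caller can sleep, so there is no need for the
     * more constrained atomic allocation. */
    static void *alloc_request_data(size_t size, int node)
    {
            /* was: kzalloc_node(size, GFP_ATOMIC, node); */
            return kzalloc_node(size, GFP_KERNEL, node);
    }

The same reasoning applies to the two GFP_ATOMIC conversions that follow.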
-
Jia-Ju Bai authored
adf_dev_aer_schedule_reset() is never called in atomic context, as it calls wait_for_completion_timeout(). adf_dev_aer_schedule_reset() calls kzalloc() with GFP_ATOMIC, which is not necessary; GFP_ATOMIC can be replaced with GFP_KERNEL. This was found by a static analysis tool named DCNS that I wrote. I also manually checked the kernel code before reporting it. Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Jia-Ju Bai authored
crypto_alloc_context() is only called by nitrox_skcipher_init(), which is never called in atomic context. crypto_alloc_context() calls dma_pool_alloc() with GFP_ATOMIC, which is not necessary; GFP_ATOMIC can be replaced with GFP_KERNEL. This was found by a static analysis tool named DCNS that I wrote. I also manually checked the kernel code before reporting it. Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Stephan Müller authored
The cipher implementations of the kernel crypto API favor in-place cipher operations. Thus, switch the CTR cipher operation in the DRBG to perform in-place operations. This is implemented by using the output buffer as the input buffer and zeroizing it before the cipher operation, implementing a CTR encryption of a NULL buffer. The speed improvement is quite visible with the following comparison using the LRNG implementation.

Without the patch set:

      16 bytes |  12.267661 MB/s |   61338304 bytes | 5000000213 ns
      32 bytes |  23.603770 MB/s |  118018848 bytes | 5000000073 ns
      64 bytes |  46.732262 MB/s |  233661312 bytes | 5000000241 ns
     128 bytes |  90.038042 MB/s |  450190208 bytes | 5000000244 ns
     256 bytes | 160.399616 MB/s |  801998080 bytes | 5000000393 ns
     512 bytes | 259.878400 MB/s | 1299392000 bytes | 5000001675 ns
    1024 bytes | 386.050662 MB/s | 1930253312 bytes | 5000001661 ns
    2048 bytes | 493.641728 MB/s | 2468208640 bytes | 5000001598 ns
    4096 bytes | 581.835981 MB/s | 2909179904 bytes | 5000003426 ns

With the patch set:

      16 bytes |  17.051142 MB/s |   85255712 bytes | 5000000854 ns
      32 bytes |  32.695898 MB/s |  163479488 bytes | 5000000544 ns
      64 bytes |  64.490739 MB/s |  322453696 bytes | 5000000954 ns
     128 bytes | 123.285043 MB/s |  616425216 bytes | 5000000201 ns
     256 bytes | 233.434573 MB/s | 1167172864 bytes | 5000000573 ns
     512 bytes | 384.405197 MB/s | 1922025984 bytes | 5000000671 ns
    1024 bytes | 566.313370 MB/s | 2831566848 bytes | 5000001080 ns
    2048 bytes | 744.518042 MB/s | 3722590208 bytes | 5000000926 ns
    4096 bytes | 867.501670 MB/s | 4337508352 bytes | 5000002181 ns

Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
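A hedged sketch of the in-place idea using the generic skcipher API (not the DRBG's actual code path): CTR-encrypting an all-zero buffer in place yields the keystream directly, so the output buffer can double as the input buffer.

    #include <crypto/skcipher.h>
    #include <linux/scatterlist.h>
    #include <linux/string.h>

    /* Illustrative only: src == dst, and the "plaintext" is a zeroized
     * output buffer, so the result is a plain CTR keystream. */
    static int ctr_generate_in_place(struct skcipher_request *req,
                                     u8 *outbuf, unsigned int len, u8 *iv)
    {
            struct scatterlist sg;

            memset(outbuf, 0, len);
            sg_init_one(&sg, outbuf, len);
            skcipher_request_set_crypt(req, &sg, &sg, len, iv);

            return crypto_skcipher_encrypt(req);
    }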
-
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux
Herbert Xu authored
Merge mainline to pick up c7513c2a ("crypto/arm64: aes-ce-gcm - add missing kernel_neon_begin/end pair").
-
- 31 Jul, 2018 1 commit
-
-
Ard Biesheuvel authored
Calling pmull_gcm_encrypt_block() requires kernel_neon_begin() and kernel_neon_end() to be used since the routine touches the NEON register file. Add the missing calls. Also, since NEON register contents are not preserved outside of a kernel mode NEON region, pass the key schedule array again. Fixes: 7c50136a ("crypto: arm64/aes-ghash - yield NEON after every ...") Acked-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
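The calling convention being restored can be sketched as follows; the assembler routine's name and argument list here are illustrative, not the real pmull_gcm_encrypt_block() prototype:

    #include <asm/neon.h>
    #include <linux/types.h>

    /* Hedged sketch: every call into NEON assembler must sit inside a
     * kernel_neon_begin()/kernel_neon_end() pair, and anything that lived
     * in NEON registers (such as the expanded key schedule) must be passed
     * in again, since register contents are not preserved across the
     * boundary. */
    static void encrypt_one_block(u8 dst[16], const u8 src[16],
                                  const u32 *key_schedule, int rounds)
    {
            kernel_neon_begin();
            neon_aes_encrypt_block(dst, src, key_schedule, rounds);  /* hypothetical asm routine */
            kernel_neon_end();
    }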
-
- 29 Jul, 2018 5 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Linus Torvalds authored
Pull ext4 fixes from Ted Ts'o: "Some miscellaneous ext4 fixes for 4.18; one fix is for a regression introduced in 4.18-rc4. Sorry for the late-breaking pull. I was originally going to wait for the next merge window, but Eric Whitney found a regression introduced in 4.18-rc4, so I decided to push out the regression fix plus the other fixes now. (The other commits have been baking in linux-next since early July)"

* tag 'ext4_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
  ext4: fix check to prevent initializing reserved inodes
  ext4: check for allocation block validity with block group locked
  ext4: fix inline data updates with checksums enabled
  ext4: clear mmp sequence number when remounting read-only
  ext4: fix false negatives *and* false positives in ext4_check_descriptors()
-
Linus Torvalds authored
Anatoly Trosinenko reports that a corrupted squashfs image can cause a kernel oops. It turns out that squashfs can end up being confused about negative fragment lengths. The regular squashfs_read_data() does check for negative lengths, but squashfs_read_metadata() did not, and the fragment size code just blindly trusted the on-disk value. Fix both the fragment parsing and the metadata reading code. Reported-by: Anatoly Trosinenko <anatoly.trosinenko@gmail.com> Cc: Al Viro <viro@zeniv.linux.org.uk> Cc: Phillip Lougher <phillip@squashfs.org.uk> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Theodore Ts'o authored
Commit 8844618d: "ext4: only look at the bg_flags field if it is valid" will complain if block group zero does not have the EXT4_BG_INODE_ZEROED flag set. Unfortunately, this is not correct, since a freshly created file system has this flag cleared. It gets set almost immediately after the file system is mounted read-write --- but the following somewhat unlikely sequence will end up triggering a false positive report of a corrupted file system:

    mkfs.ext4 /dev/vdc
    mount -o ro /dev/vdc /vdc
    mount -o remount,rw /dev/vdc

Instead, when initializing the inode table for block group zero, test to make sure that the itable_unused count is not too large, since that is the case that will result in some or all of the reserved inodes getting cleared. This fixes the failures reported by Eric Whitney when running generic/230 and generic/231 in the nojournal test case. Fixes: 8844618d ("ext4: only look at the bg_flags field if it is valid") Reported-by: Eric Whitney <enwlinux@gmail.com> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
-
git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
Linus Torvalds authored
Pull random fixes from Ted Ts'o: "In reaction to the fixes to address CVE-2018-1108, certain systemd versions shipped by some Linux distributions, in some cases combined with patches to libcrypt for FIPS/FEDRAMP compliance, have led to boot-time stalls on some hardware. The reaction by some distros and Linux sysadmins has been to install packages that try to do complicated things with the CPU and hope that leads to randomness. To mitigate this, if RDRAND is available, mix it into entropy provided by userspace. It won't hurt, and it will probably help"

* tag 'random_for_linus_stable' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
  random: mix rdrand with entropy sent in from userspace
-