- 21 Sep, 2018 7 commits
-
-
Ard Biesheuvel authored
The Crypto Extension instantiation of the aes-modes.S collection of skciphers uses only 15 NEON registers for the round key array, whereas the pure NEON flavor uses 16 NEON registers for the AES S-box. This means we have a spare register available that we can use to hold the XTS mask vector, removing the need to reload it at every iteration of the inner loop. Since the pure NEON version does not permit this optimization, tweak the macros so we can factor out this functionality. Also, replace the literal load with a short sequence to compose the mask vector. On Cortex-A53, this results in a ~4% speedup. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
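For reference, the mask being composed is the XTS GF(2^128) multiplier; a minimal standalone C sketch of the per-block tweak update that the held mask vector accelerates (an illustration, not the kernel code):

    #include <stdint.h>

    /* Multiply the 128-bit XTS tweak by x in GF(2^128): shift the
     * little-endian block left by one bit and, on carry-out, reduce
     * with the polynomial x^128 + x^7 + x^2 + x + 1 (0x87). */
    static void xts_mult_x(uint8_t tweak[16])
    {
        uint8_t carry = 0;

        for (int i = 0; i < 16; i++) {
            uint8_t next = tweak[i] >> 7;

            tweak[i] = (uint8_t)(tweak[i] << 1) | carry;
            carry = next;
        }
        if (carry)
            tweak[0] ^= 0x87;
    }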
-
Ard Biesheuvel authored
Currently, we rely on the generic CTS chaining mode wrapper to instantiate the cts(cbc(aes)) skcipher. Due to the high performance of the ARMv8 Crypto Extensions AES instructions (~1 cycle per byte), any overhead in the chaining mode layers is amplified, and so it pays off considerably to fold the CTS handling into the SIMD routines. On Cortex-A53, this results in a ~50% speedup for smaller input sizes. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
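A minimal sketch of the ciphertext-stealing shape being folded in (CBC-CS3, as cts(cbc(aes)) implements it): zero-pad the tail, run ordinary CBC, then swap and truncate the last two ciphertext blocks. Here aes_encrypt_block() is a hypothetical one-block helper and len must cover at least two blocks:

    #include <stdint.h>
    #include <string.h>

    void aes_encrypt_block(uint8_t dst[16], const uint8_t src[16]);

    static void cbc_cs3_encrypt(uint8_t *out, const uint8_t *in,
                                size_t len, const uint8_t iv[16])
    {
        size_t tail = len % 16 ? len % 16 : 16; /* bytes in final block */
        size_t head = len - tail - 16;          /* full CBC prefix */
        uint8_t prev[16], x[16], d[16];
        size_t i, j;

        memcpy(prev, iv, 16);
        for (i = 0; i < head; i += 16) {        /* ordinary CBC */
            for (j = 0; j < 16; j++)
                d[j] = in[i + j] ^ prev[j];
            aes_encrypt_block(prev, d);
            memcpy(out + i, prev, 16);
        }
        for (j = 0; j < 16; j++)                /* next-to-last block */
            d[j] = in[head + j] ^ prev[j];
        aes_encrypt_block(x, d);
        memset(d, 0, 16);                       /* zero-padded tail */
        memcpy(d, in + head + 16, tail);
        for (j = 0; j < 16; j++)
            d[j] ^= x[j];
        aes_encrypt_block(out + head, d);       /* swapped: E(tail) first */
        memcpy(out + head + 16, x, tail);       /* then truncated E(prev) */
    }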
-
Ard Biesheuvel authored
The reasoning of commit f10dc56c ("crypto: arm64 - revert NEON yield for fast AEAD implementations") applies equally to skciphers: the walk API already guarantees that the input size of each call into the NEON code is bounded to the size of a page, and so there is no need for an additional TIF_NEED_RESCHED flag check inside the inner loop. So revert the skcipher changes to aes-modes.S (but retain the mac ones). This partially reverts commit 0c8f838a. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
For some reason, the asmlinkage prototypes of the NEON routines take u8[] arguments for the round key arrays, while the actual round keys are arrays of u32, and so passing them into those routines requires u8* casts at each occurrence. Fix that. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Srikanth Jampala authored
Use dma_pool_zalloc() instead of dma_pool_alloc() with the __GFP_ZERO flag. The crypto DMA pool is renamed to "nitrox-context". Signed-off-by: Srikanth Jampala <Jampala.Srikanth@cavium.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
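The substitution, sketched (the pool and variable names here are illustrative):

    /* before: allocate, then rely on __GFP_ZERO to clear */
    ctx = dma_pool_alloc(ndev->ctx_pool, GFP_KERNEL | __GFP_ZERO, &dma);

    /* after: dma_pool_zalloc() makes the zeroing explicit */
    ctx = dma_pool_zalloc(ndev->ctx_pool, GFP_KERNEL, &dma);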
-
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Herbert Xu authored
Merge crypto-2.6 to resolve caam conflict with skcipher conversion.
-
Horia Geantă authored
In some cases the zero-length hw_desc array at the end of the ablkcipher_edesc struct requires 4 bytes of tail padding. Due to the tail padding and the way the pointers to the S/G table and IV are computed: edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) + desc_bytes; iv = (u8 *)edesc->hw_desc + desc_bytes + sec4_sg_bytes; the first 4 bytes of the IV are overwritten by the S/G table. Update the computation of the pointer to the S/G table to rely on the offset of the hw_desc member and not on the sizeof() operator. Cc: <stable@vger.kernel.org> # 4.13+ Fixes: 115957bb ("crypto: caam - fix IV DMA mapping and updating") Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
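The shape of the fix, per the description above: derive the S/G table address from the offset of the hw_desc member, so the tail padding counted by sizeof() can no longer leak into the computation:

    /* before: sizeof() includes the tail padding after hw_desc[] */
    edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) +
                     desc_bytes;

    /* after: offsetof() ends exactly where hw_desc[] begins */
    edesc->sec4_sg = (void *)edesc + offsetof(struct ablkcipher_edesc,
                                              hw_desc) + desc_bytes;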
-
- 14 Sep, 2018 5 commits
-
-
Srikanth Jampala authored
Add support for configuring SR-IOV using the sysfs interface. Supported VF modes are 16, 32, 64 and 128. Group the hardware configuration functions into the "nitrox_hal.h" file. Change the driver version to "1.1". Signed-off-by: Srikanth Jampala <Jampala.Srikanth@cavium.com> Reviewed-by: Gadam Sreerama <sgadam@cavium.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Mikulas Patocka authored
This patch fixes gcmaes_crypt_by_sg so that it won't use memory allocation if the data doesn't cross a page boundary. Authenticated encryption may be used by dm-crypt. If the encryption or decryption fails, it would result in an I/O error and filesystem corruption. The function gcmaes_crypt_by_sg is using a GFP_ATOMIC allocation that can fail anytime. This patch fixes the logic so that it won't attempt the failing allocation if the data doesn't cross a page boundary. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
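A sketch of the gating test (assuming the fast path requires the whole buffer to sit inside one page, so the scatterlist can be walked without the GFP_ATOMIC allocation):

    #include <linux/mm.h>   /* offset_in_page(), PAGE_SIZE */

    /* True if [p, p + len) lies entirely within a single page. */
    static bool fits_in_one_page(const void *p, unsigned long len)
    {
        return offset_in_page(p) + len <= PAGE_SIZE;
    }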
-
kbuild test robot authored
Fixes: b7637754 ("crc-t10dif: Pick better transform if one becomes available") Signed-off-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Kees Cook authored
In the quest to remove all stack VLA usage from the kernel[1], this uses the new HASH_MAX_DIGESTSIZE from the crypto layer to allocate the upper bounds on stack usage. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Acked-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
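The pattern, sketched (the verity field names here are assumptions):

    /* before: VLA sized by the runtime digest size */
    u8 digest[v->digest_size];

    /* after: a fixed upper bound from the crypto layer, plus a check */
    u8 digest[HASH_MAX_DIGESTSIZE];

    if (v->digest_size > HASH_MAX_DIGESTSIZE)
        return -EINVAL;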
-
Ondrej Mosnacek authored
It turns out OSXSAVE needs to be checked only for AVX, not for SSE. Without this patch the affected modules refuse to load on CPUs with SSE2 but without AVX support. Fixes: 877ccce7 ("crypto: x86/aegis,morus - Fix and simplify CPUID checks") Cc: <stable@vger.kernel.org> # 4.18 Reported-by: Zdenek Kaspar <zkaspar82@gmail.com> Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 13 Sep, 2018 1 commit
-
-
Brijesh Singh authored
Currently, the CCP driver assumes that the SEV command issued to the PSP will always return (i.e. it will never hang). But recently, firmware bugs have shown that a command can hang. Since some of the SEV commands are used in probe routines, this can cause boot hangs and/or loss of virtualization capabilities. To protect against firmware bugs, add a timeout in the SEV command execution flow. If a command does not complete within the specified timeout then return -ETIMEDOUT and stop the driver from executing any further commands since the state of the SEV firmware is unknown. Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Gary Hook <Gary.Hook@amd.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: linux-kernel@vger.kernel.org Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
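A sketch of the timeout flow described above (field and variable names are assumptions, not the driver's actual code):

    long ret;

    /* Wait for the PSP command-complete interrupt, but give up after
     * a fixed timeout instead of blocking forever on buggy firmware. */
    ret = wait_event_timeout(sev->int_queue, sev->int_rcvd,
                             msecs_to_jiffies(sev_cmd_timeout_ms));
    if (!ret) {
        sev->timed_out = true;  /* refuse all further SEV commands */
        return -ETIMEDOUT;
    }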
-
- 04 Sep, 2018 24 commits
-
-
Eric Biggers authored
Optimize ChaCha20 NEON performance by: - Implementing the 8-bit rotations using the 'vtbl.8' instruction. - Streamlining the part that adds the original state and XORs the data. - Making some other small tweaks. On ARM Cortex-A7, these optimizations improve ChaCha20 performance from about 12.08 cycles per byte to about 11.37 -- a 5.9% improvement. There is a tradeoff involved with the 'vtbl.8' rotation method since there is at least one CPU (Cortex-A53) where it's not the fastest. But it seems to be a better default; see the added comment. Overall, this patch reduces Cortex-A53 performance by less than 0.5%. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
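The 'vtbl.8' trick works because rotating each 32-bit lane left by 8 bits is a fixed byte permutation; a C sketch of the equivalence on a little-endian lane:

    #include <stdint.h>

    static uint32_t rol32(uint32_t x, unsigned int n)
    {
        return (x << n) | (x >> (32 - n));
    }

    /* rol32(x, 8) maps bytes [b0 b1 b2 b3] to [b3 b0 b1 b2], i.e. a
     * table lookup with indices {3,0,1,2} per lane: one vtbl.8 per
     * register instead of a shift/shift/or sequence. */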
-
Martin K. Petersen authored
Add a way to print the currently active CRC algorithm in: /sys/module/crc_t10dif/parameters/transform Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
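One way such a read-only parameter can be wired up, sketched (the tfm global and its RCU protection are assumptions carried over from the series):

    static int transform_show(char *buffer, const struct kernel_param *kp)
    {
        struct crypto_shash *tfm;
        int len;

        rcu_read_lock();
        tfm = rcu_dereference(crct10dif_tfm);   /* assumed global */
        len = sprintf(buffer, "%s\n",
                      crypto_tfm_alg_driver_name(crypto_shash_tfm(tfm)));
        rcu_read_unlock();
        return len;
    }

    module_param_call(transform, NULL, transform_show, NULL, 0444);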
-
Martin K. Petersen authored
The T10 CRC library is linked into the kernel thanks to block and SCSI. The crypto accelerators are typically loaded later as modules and are therefore not available when the T10 CRC library is initialized. Use the crypto notifier facility to trigger a switch to a better algorithm if one becomes available after the initial hash has been registered. Use RCU to protect the original transform while the new one is being set up. Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Suggested-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
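A sketch of the RCU-protected swap (names assumed; the point is that readers only ever dereference the tfm pointer under rcu_read_lock()):

    static void switch_transform(struct crypto_shash *new_tfm)
    {
        struct crypto_shash *old;

        mutex_lock(&crct10dif_mutex);
        old = rcu_dereference_protected(crct10dif_tfm,
                                        lockdep_is_held(&crct10dif_mutex));
        rcu_assign_pointer(crct10dif_tfm, new_tfm);
        mutex_unlock(&crct10dif_mutex);

        synchronize_rcu();          /* wait out in-flight readers */
        crypto_free_shash(old);
    }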
-
Martin K. Petersen authored
Introduce a facility that can be used to receive a notification callback when a new algorithm becomes available. This can be used by existing crypto registrations to trigger a switch from a software-only algorithm to a hardware-accelerated version. A new CRYPTO_MSG_ALG_LOADED state is introduced to the existing crypto notification chain, and the register/unregister functions are exported so they can be called by subsystems outside of crypto. Suggested-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
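A sketch of a consumer of the new notification (the callback body and work item are illustrative):

    static int crc_alg_loaded(struct notifier_block *nb,
                              unsigned long msg, void *data)
    {
        struct crypto_alg *alg = data;

        if (msg != CRYPTO_MSG_ALG_LOADED ||
            strncmp(alg->cra_name, "crct10dif", 9))
            return NOTIFY_DONE;

        schedule_work(&crct10dif_rehash_work);  /* switch off the chain */
        return NOTIFY_OK;
    }

    static struct notifier_block crc_nb = {
        .notifier_call = crc_alg_loaded,
    };

    /* module init/exit: */
    crypto_register_notifier(&crc_nb);
    crypto_unregister_notifier(&crc_nb);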
-
Ard Biesheuvel authored
The arm64 implementation of the CRC-T10DIF algorithm uses the 64x64 bit polynomial multiplication instructions, which are optional in the architecture, and if these instructions are not available, we fall back to the C routine which is slow and inefficient. So let's reuse the 64x64 bit PMULL alternative from the GHASH driver that uses a sequence of ~40 instructions involving 8x8 bit PMULL and some shifting and masking. This is a lot slower than the original, but it is still twice as fast as the current [unoptimized] C code on Cortex-A53, and it is time invariant and much easier on the D-cache. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Reorganize the CRC-T10DIF asm routine so we can easily instantiate an alternative version based on 8x8 polynomial multiplication in a subsequent patch. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Now that the scalar fallbacks have been moved out of this driver into the core crc32()/crc32c() routines, we are left with a CRC32 crypto API driver for arm64 that is based only on 64x64 polynomial multiplication, which is an optional instruction in the ARMv8 architecture, and is less and less likely to be available on cores that do not also implement the CRC32 instructions, given that those are mandatory in the architecture as of ARMv8.1. Since the scalar instructions do not require the special handling that SIMD instructions do, and since they turn out to be considerably faster on some cores (Cortex-A53) as well, there is really no point in keeping this code around, so let's just remove it. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Replace the literal load of the addend vector with a sequence that performs each add individually. This sequence is only 2 instructions longer than the original, and 2% faster on Cortex-A53. This is an improvement by itself, but also works around a Clang issue, whose integrated assembler does not implement the GNU ARM asm syntax completely, and does not support the =literal notation for FP registers (more info at https://bugs.llvm.org/show_bug.cgi?id=38642) Cc: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Speed up the GHASH algorithm based on 64-bit polynomial multiplication by adding support for 4-way aggregation. This improves throughput by ~85% on Cortex-A53, from 1.7 cycles per byte to 0.9 cycles per byte. When combined with AES into GCM, throughput improves by ~25%, from 3.8 cycles per byte to 3.0 cycles per byte. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
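For reference, the aggregation relies on multiplication distributing over XOR in GF(2^128): with H^2, H^3 and H^4 precomputed, the serial chain over four blocks B1..B4 collapses to

    Y' = (Y ^ B1)*H^4 ^ B2*H^3 ^ B3*H^2 ^ B4*H

turning four dependent multiplications into four independent ones that pipeline well.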
-
Ard Biesheuvel authored
As it turns out, the AVX2 multibuffer SHA routines are currently broken [0], in a way that would have likely been noticed if this code were in wide use. Since the code is too complicated to be maintained by anyone except the original authors, and since the performance benefits for real-world use cases are debatable to begin with, it is better to drop it entirely for the moment. [0] https://marc.info/?l=linux-crypto-vger&m=153476243825350&w=2 Suggested-by: Eric Biggers <ebiggers@google.com> Cc: Megha Dey <megha.dey@linux.intel.com> Cc: Tim Chen <tim.c.chen@linux.intel.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tudor Ambarus authored
Adopt the SPDX license identifiers to ease license compliance management. Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
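For example, a one-line tag replacing a block of license boilerplate (the identifier shown is illustrative and must match each file's actual license):

    // SPDX-License-Identifier: GPL-2.0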
-
Brijesh Singh authored
Currently, the CCP driver assumes that the SEV command issued to the PSP will always return (i.e. it will never hang). But recently, firmware bugs have shown that a command can hang. Since some of the SEV commands are used in probe routines, this can cause boot hangs and/or loss of virtualization capabilities. To protect against firmware bugs, add a timeout in the SEV command execution flow. If a command does not complete within the specified timeout then return -ETIMEDOUT and stop the driver from executing any further commands since the state of the SEV firmware is unknown. Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Gary Hook <Gary.Hook@amd.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: linux-kernel@vger.kernel.org Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Kees Cook authored
In the quest to remove all stack VLA usage from the kernel[1], this uses the newly defined max alignment to perform unaligned hashing to avoid VLAs, and drops the helper function while adding sanity checks on the resulting buffer sizes. Additionally, the __aligned_largest macro is removed since this helper was the only user. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Kees Cook authored
In the quest to remove all stack VLA usage from the kernel[1], this uses the new upper bound for the stack buffer. Also adds a sanity check. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Kees Cook authored
In the quest to remove all stack VLA usage from the kernel[1], this exposes a new general upper bound on crypto blocksize and alignmask (higher than for the existing cipher limits) for VLA removal, and introduces new checks. At present, the highest cra_alignmask in the kernel is 63. The highest cra_blocksize is 144 (SHA3_224_BLOCK_SIZE, 18 8-byte words). For the new blocksize limit, I went with 160 (20 8-byte words). [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Kees Cook authored
In the quest to remove all stack VLA usage from the kernel[1], this removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize()) by using the maximum allowable size (which is now more clearly captured in a macro), along with a few other cases. Similar limits are turned into macros as well. A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the largest digest size and that sizeof(struct sha3_state) (360) is the largest descriptor size. The corresponding maximums are reduced. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
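With a constant bound available, the stack descriptor macro no longer needs crypto_shash_descsize() in its declaration; a sketch of the resulting shape (modulo the exact macro body):

    #define SHASH_DESC_ON_STACK(shash, ctx)                            \
        char __##shash##_desc[sizeof(struct shash_desc) +              \
                              HASH_MAX_DESCSIZE] CRYPTO_MINALIGN_ATTR; \
        struct shash_desc *shash = (struct shash_desc *)__##shash##_desc

    /* usage is unchanged: */
    SHASH_DESC_ON_STACK(desc, tfm);
    desc->tfm = tfm;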
-
Ard Biesheuvel authored
In the quest to remove all stack VLA usage from the kernel[1], this drops AHASH_REQUEST_ON_STACK by preallocating the ahash request area combined with the skcipher area (which are not used at the same time). [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Kees Cook authored
In the quest to remove all stack VLA usage from the kernel[1], this uses the upper bounds on blocksize. Since this is always a cipher blocksize, use the existing cipher max blocksize. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Kees Cook authored
In the quest to remove all stack VLA usage from the kernel[1], this uses the maximum blocksize and adds a sanity check. For xcbc, the blocksize must always be 16, so use that, since it's already being enforced during instantiation. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Jason A. Donenfeld authored
These are unused, undesired, and have never actually been used by anybody. The original authors of this code have changed their mind about its inclusion. While originally proposed for disk encryption on low-end devices, the idea was discarded [1] in favor of something else before that could really get going. Therefore, this patch removes Speck. [1] https://marc.info/?l=linux-crypto-vger&m=153359499015659 Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Acked-by: Eric Biggers <ebiggers@google.com> Cc: stable@vger.kernel.org Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Convert driver from deprecated ablkcipher API to skcipher. Link: https://www.mail-archive.com/search?l=mid&q=20170728085622.GC19664@gondor.apana.org.au Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
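For orientation, the user-visible shape of such a conversion (a generic before/after sketch, not the caam diff itself):

    /* before: deprecated ablkcipher API */
    struct crypto_ablkcipher *tfm =
        crypto_alloc_ablkcipher("cbc(aes)", 0, 0);
    struct ablkcipher_request *req =
        ablkcipher_request_alloc(tfm, GFP_KERNEL);
    ablkcipher_request_set_crypt(req, src, dst, nbytes, iv);
    crypto_ablkcipher_encrypt(req);

    /* after: skcipher API */
    struct crypto_skcipher *tfm =
        crypto_alloc_skcipher("cbc(aes)", 0, 0);
    struct skcipher_request *req =
        skcipher_request_alloc(tfm, GFP_KERNEL);
    skcipher_request_set_crypt(req, src, dst, nbytes, iv);
    crypto_skcipher_encrypt(req);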
-
Horia Geantă authored
Convert driver from deprecated ablkcipher API to skcipher. Link: https://www.mail-archive.com/search?l=mid&q=20170728085622.GC19664@gondor.apana.org.au Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
IV generation is done only at AEAD level. Support in ablkcipher is not needed, thus remove the dead code. Link: https://www.mail-archive.com/search?l=mid&q=20160901101257.GA3362@gondor.apana.org.au Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
IV generation is done only at AEAD level. Support in ablkcipher is not needed, thus remove the dead code. Link: https://www.mail-archive.com/search?l=mid&q=20160901101257.GA3362@gondor.apana.org.au Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 02 Sep, 2018 3 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux
Linus Torvalds authored
Pull devicetree updates from Rob Herring: "A couple of new helper functions in preparation for some tree wide clean-ups. I'm sending these new helpers now for rc2 in order to simplify the dependencies on subsequent cleanups across the tree in 4.20"

* tag 'devicetree-fixes-for-4.19' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
  of: Add device_type access helper functions
  of: add node name compare helper functions
  of: add helper to lookup compatible child node
-
git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Linus Torvalds authored
Pull ARM SoC fixes from Olof Johansson: "First batch of fixes post-merge window:

- A handful of devicetree changes for i.MX2{3,8} to change over to new panel bindings. The platforms were moved from legacy framebuffers to DRM and some development board panels hadn't yet been converted.
- OMAP fixes related to ti-sysc driver conversion fallout, fixing some register offsets, no_console_suspend fixes, etc.
- Droid4 changes to fix flaky eMMC probing and vibrator DTS mismerge.
- Fixed 0755->0644 permissions on a newly added file.
- Defconfig changes to make ARM Versatile more useful with QEMU (helps testing).
- Enable defconfig options for new TI SoC platform that was merged this window (AM6)"

* tag 'armsoc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc:
  arm64: defconfig: Enable TI's AM6 SoC platform
  ARM: defconfig: Update the ARM Versatile defconfig
  ARM: dts: omap4-droid4: Fix emmc errors seen on some devices
  ARM: dts: Fix file permission for am335x-osd3358-sm-red.dts
  ARM: imx_v6_v7_defconfig: Select CONFIG_DRM_PANEL_SEIKO_43WVF1G
  ARM: mxs_defconfig: Select CONFIG_DRM_PANEL_SEIKO_43WVF1G
  ARM: dts: imx23-evk: Convert to the new display bindings
  ARM: dts: imx23-evk: Move regulators outside simple-bus
  ARM: dts: imx28-evk: Convert to the new display bindings
  ARM: dts: imx28-evk: Move regulators outside simple-bus
  Revert "ARM: dts: imx7d: Invert legacy PCI irq mapping"
  arm: dts: am4372: setup rtc as system-power-controller
  ARM: dts: omap4-droid4: fix vibrations on Droid 4
  bus: ti-sysc: Fix no_console_suspend handling
  bus: ti-sysc: Fix module register ioremap for larger offsets
  ARM: OMAP2+: Fix module address for modules using mpu_rt_idx
  ARM: OMAP2+: Fix null hwmod for ti-sysc debug
-