- 28 Sep, 2018 7 commits
-
-
Kees Cook authored
In the quest to remove all stack VLA usage from the kernel[1], this replaces struct crypto_skcipher and SKCIPHER_REQUEST_ON_STACK() usage with struct crypto_sync_skcipher and SYNC_SKCIPHER_REQUEST_ON_STACK(), which uses a fixed stack size. [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Trond Myklebust <trond.myklebust@hammerspace.com> Cc: Anna Schumaker <anna.schumaker@netapp.com> Cc: "J. Bruce Fields" <bfields@fieldses.org> Cc: Jeff Layton <jlayton@kernel.org> Cc: YueHaibing <yuehaibing@huawei.com> Cc: linux-nfs@vger.kernel.org Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Kees Cook authored
In preparation for removal of VLAs due to skcipher requests on the stack via SKCIPHER_REQUEST_ON_STACK() usage, this introduces the infrastructure for the "sync skcipher" tfm, which handles the on-stack cases of skcipher; these are always non-ASYNC and have a known, limited request size.

The crypto API additions:

        struct crypto_sync_skcipher (wrapper for struct crypto_skcipher)
        crypto_alloc_sync_skcipher()
        crypto_free_sync_skcipher()
        crypto_sync_skcipher_setkey()
        crypto_sync_skcipher_get_flags()
        crypto_sync_skcipher_set_flags()
        crypto_sync_skcipher_clear_flags()
        crypto_sync_skcipher_blocksize()
        crypto_sync_skcipher_ivsize()
        crypto_sync_skcipher_reqtfm()
        skcipher_request_set_sync_tfm()
        SYNC_SKCIPHER_REQUEST_ON_STACK() (with tfm type check)

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
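A minimal usage sketch of the new API (not taken from the patch; the "cbc(aes)" choice, the function name and the buffer handling are assumptions):

#include <crypto/skcipher.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Hypothetical caller: encrypt a scatterlist in place with cbc(aes). */
static int example_encrypt(const u8 *key, unsigned int keylen, u8 *iv,
                           struct scatterlist *sg, unsigned int len)
{
        struct crypto_sync_skcipher *tfm;
        int err;

        tfm = crypto_alloc_sync_skcipher("cbc(aes)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_sync_skcipher_setkey(tfm, key, keylen);
        if (!err) {
                /* fixed-size request on the stack -- no VLA */
                SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);

                skcipher_request_set_sync_tfm(req, tfm);
                skcipher_request_set_callback(req, 0, NULL, NULL);
                skcipher_request_set_crypt(req, sg, sg, len, iv);
                err = crypto_skcipher_encrypt(req);
                skcipher_request_zero(req);
        }

        crypto_free_sync_skcipher(tfm);
        return err;
}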
-
Dan Aloni authored
The encryption mode of pkcs1pad never uses out_sg and out_buf, so there is no need to allocate the buffer, which at present is not even freed.

CC: Herbert Xu <herbert@gondor.apana.org.au>
CC: linux-crypto@vger.kernel.org
CC: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Dan Aloni <dan@kernelim.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Christoph Manszewski authored
Add support for the aes counter (ctr) block cipher mode of operation for Exynos hardware. In contrast to the ecb and cbc modes, aes-ctr allows encryption/decryption of requests whose size is not a multiple of 16 bytes, while the hardware requires block sizes that are a multiple of 16 bytes. To bridge this, copy the request source and destination memory and align its size up to 16. That way the hardware processes additional bytes, which are omitted when copying the result back to its original destination.

Tested on Odroid-U3 with an Exynos 4412 CPU, kernel 4.19-rc2, with the crypto run-time self test (testmgr).

Signed-off-by: Christoph Manszewski <c.manszewski@samsung.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Kamil Konieczny <k.konieczny@partner.samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
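A sketch of the copy-and-pad idea described above (the helper name and allocation details are hypothetical, not the driver's code):

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <crypto/aes.h>

/* Hypothetical helper: return a zero-padded copy of src whose size is
 * rounded up to the 16-byte block size the hardware requires. */
static u8 *ctr_copy_and_pad(const u8 *src, unsigned int len,
                            unsigned int *padded_len)
{
        u8 *buf;

        *padded_len = ALIGN(len, AES_BLOCK_SIZE);
        buf = kzalloc(*padded_len, GFP_KERNEL);
        if (!buf)
                return NULL;

        /* the zeroed tail is processed by the hardware, then omitted
         * when the result is copied back to the original destination */
        memcpy(buf, src, len);
        return buf;
}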
-
Christoph Manszewski authored
Modifications in s5p-sss.c:
- remove unnecessary 'goto' statements (making the code shorter),
- change uint_8 and uint_32 to the u8 and u32 types (for consistency in the driver and shorter code).

Signed-off-by: Christoph Manszewski <c.manszewski@samsung.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Kamil Konieczny <k.konieczny@partner.samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Christoph Manszewski authored
Fix misalignment of continued argument list. Signed-off-by: Christoph Manszewski <c.manszewski@samsung.com> Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org> Acked-by: Kamil Konieczny <k.konieczny@partner.samsung.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Christoph Manszewski authored
Remove a race condition introduced by the error paths of s5p_aes_interrupt() and s5p_aes_crypt_start(): setting the busy field of struct s5p_aes_dev to false made it possible for s5p_tasklet_cb() to change the req field before s5p_aes_complete() was called. Change the first parameter of s5p_aes_complete() to struct ablkcipher_request, and before spin_unlock make a copy of the currently handled request, to ensure s5p_aes_complete() is called with the correct request.

Signed-off-by: Christoph Manszewski <c.manszewski@samsung.com>
Acked-by: Kamil Konieczny <k.konieczny@partner.samsung.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 21 Sep, 2018 30 commits
-
-
Stefan Agner authored
The table id (second) argument to MODULE_DEVICE_TABLE is usually referenced elsewhere in the file; this is not the case for CPU-feature tables, which leads to a warning when building the kernel with Clang:

arch/arm/crypto/crc32-ce-glue.c:239:33: warning: variable 'crc32_cpu_feature' is not needed and will not be emitted [-Wunneeded-internal-declaration]
static const struct cpu_feature crc32_cpu_feature[] = {
                                ^

Avoid the warning by using __maybe_unused, similar to commit 1f318a8b ("modules: mark __inittest/__exittest as __maybe_unused").

Fixes: 2a9faf8b ("crypto: arm/crc32 - enable module autoloading based on CPU feature bits")
Signed-off-by: Stefan Agner <stefan@agner.ch>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
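The resulting pattern, sketched (the entry list is illustrative rather than copied from the file):

#include <linux/cpufeature.h>
#include <linux/module.h>

/* only consumed by the module alias machinery, hence __maybe_unused */
static const struct cpu_feature __maybe_unused crc32_cpu_feature[] = {
        { cpu_feature(CRC32) }, { }
};
MODULE_DEVICE_TABLE(cpu, crc32_cpu_feature);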
-
Stefan Agner authored
The table id (second) argument to MODULE_DEVICE_TABLE is usually referenced elsewhere in the file; this is not the case for CPU-feature tables, which leads to warnings when building the kernel with Clang:

arch/arm/crypto/aes-ce-glue.c:450:1: warning: variable 'cpu_feature_match_AES' is not needed and will not be emitted [-Wunneeded-internal-declaration]
module_cpu_feature_match(AES, aes_init);
^

Avoid the warnings by using __maybe_unused, similar to commit 1f318a8b ("modules: mark __inittest/__exittest as __maybe_unused").

Fixes: 67bad2fd ("cpu: add generic support for CPU feature based module autoloading")
Signed-off-by: Stefan Agner <stefan@agner.ch>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Janakarajan Natarajan authored
During PSP initialization there is an attempt to update the SEV firmware by looking in /lib/firmware/amd/. Currently, sev.fw is the expected name of the firmware blob. This patch allows for firmware filenames based on the family and model of the processor: model-specific firmware files are given the highest priority, followed by firmware covering a subset of models; failing both, fall back to looking for sev.fw.

Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
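A hedged sketch of that lookup order (the filename patterns and the helper name are hypothetical; the real patterns are defined by the driver):

#include <linux/device.h>
#include <linux/firmware.h>
#include <linux/kernel.h>

static int sev_lookup_fw(struct device *dev, const struct firmware **fw,
                         unsigned int family, unsigned int model)
{
        char name[48];

        /* 1) model-specific blob gets the highest priority */
        snprintf(name, sizeof(name), "amd/sev_fam%02xh_model%02xh.fw",
                 family, model);
        if (!firmware_request_nowarn(fw, name, dev))
                return 0;

        /* 2) blob covering a subset of models */
        snprintf(name, sizeof(name), "amd/sev_fam%02xh_model%01xxh.fw",
                 family, model >> 4);
        if (!firmware_request_nowarn(fw, name, dev))
                return 0;

        /* 3) legacy fallback */
        return request_firmware(fw, "amd/sev.fw", dev);
}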
-
Janakarajan Natarajan authored
Under certain configurations the SEV functions can be defined as no-ops, in which case 'error' can be left uninitialized. Initialize the variable to 0.

Cc: Dan Carpenter <Dan.Carpenter@oracle.com>
Reported-by: Dan Carpenter <Dan.Carpenter@oracle.com>
Signed-off-by: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Acked-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ondrej Mosnacek authored
This patch simplifies the LRW template to recompute the LRW tweaks from scratch in the second pass and thus also removes the need to allocate a dynamic buffer using kmalloc(). As discussed at [1], the use of kmalloc causes deadlocks with dm-crypt.

PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)

The results show that the new code has about the same performance as the old code. For the 512-byte message it seems to be even slightly faster, but that might be just noise.

Before:

ALGORITHM   KEY (b)   DATA (B)   TIME ENC (ns)   TIME DEC (ns)
 lrw(aes)       256         64             200             203
 lrw(aes)       320         64             202             204
 lrw(aes)       384         64             204             205
 lrw(aes)       256        512             415             415
 lrw(aes)       320        512             432             440
 lrw(aes)       384        512             449             451
 lrw(aes)       256       4096            1838            1995
 lrw(aes)       320       4096            2123            1980
 lrw(aes)       384       4096            2100            2119
 lrw(aes)       256      16384            7183            6954
 lrw(aes)       320      16384            7844            7631
 lrw(aes)       384      16384            8256            8126
 lrw(aes)       256      32768           14772           14484
 lrw(aes)       320      32768           15281           15431
 lrw(aes)       384      32768           16469           16293

After:

ALGORITHM   KEY (b)   DATA (B)   TIME ENC (ns)   TIME DEC (ns)
 lrw(aes)       256         64             197             196
 lrw(aes)       320         64             200             197
 lrw(aes)       384         64             203             199
 lrw(aes)       256        512             385             380
 lrw(aes)       320        512             401             395
 lrw(aes)       384        512             415             415
 lrw(aes)       256       4096            1869            1846
 lrw(aes)       320       4096            2080            1981
 lrw(aes)       384       4096            2160            2109
 lrw(aes)       256      16384            7077            7127
 lrw(aes)       320      16384            7807            7766
 lrw(aes)       384      16384            8108            8357
 lrw(aes)       256      32768           14111           14454
 lrw(aes)       320      32768           15268           15082
 lrw(aes)       384      32768           16581           16250

[1] https://lkml.org/lkml/2018/8/23/1315

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ondrej Mosnacek authored
This patch rewrites the tweak computation to a slightly simpler method that performs fewer bswaps. Based on performance measurements the new code seems to provide slightly better performance than the old one.

PERFORMANCE MEASUREMENTS (x86_64)
Performed using: https://gitlab.com/omos/linux-crypto-bench
Crypto driver used: lrw(ecb-aes-aesni)

Before:

ALGORITHM   KEY (b)   DATA (B)   TIME ENC (ns)   TIME DEC (ns)
 lrw(aes)       256         64             204             286
 lrw(aes)       320         64             227             203
 lrw(aes)       384         64             208             204
 lrw(aes)       256        512             441             439
 lrw(aes)       320        512             456             455
 lrw(aes)       384        512             469             483
 lrw(aes)       256       4096            2136            2190
 lrw(aes)       320       4096            2161            2213
 lrw(aes)       384       4096            2295            2369
 lrw(aes)       256      16384            7692            7868
 lrw(aes)       320      16384            8230            8691
 lrw(aes)       384      16384            8971            8813
 lrw(aes)       256      32768           15336           15560
 lrw(aes)       320      32768           16410           16346
 lrw(aes)       384      32768           18023           17465

After:

ALGORITHM   KEY (b)   DATA (B)   TIME ENC (ns)   TIME DEC (ns)
 lrw(aes)       256         64             200             203
 lrw(aes)       320         64             202             204
 lrw(aes)       384         64             204             205
 lrw(aes)       256        512             415             415
 lrw(aes)       320        512             432             440
 lrw(aes)       384        512             449             451
 lrw(aes)       256       4096            1838            1995
 lrw(aes)       320       4096            2123            1980
 lrw(aes)       384       4096            2100            2119
 lrw(aes)       256      16384            7183            6954
 lrw(aes)       320      16384            7844            7631
 lrw(aes)       384      16384            8256            8126
 lrw(aes)       256      32768           14772           14484
 lrw(aes)       320      32768           15281           15431
 lrw(aes)       384      32768           16469           16293

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ondrej Mosnacek authored
This patch adds a test vector for lrw(aes) that triggers wrap-around of the counter, which is a tricky corner case. Suggested-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ondrej Mosnacek authored
When the LRW block counter overflows, the current implementation returns 128 as the index to the precomputed multiplication table, which has 128 entries. This patch fixes it to return the correct value (127). Fixes: 64470f1b ("[CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode") Cc: <stable@vger.kernel.org> # 2.6.20+ Reported-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
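A sketch of the corrected index computation, following the upstream lrw.c logic (ffz() finds the lowest zero bit):

#include <linux/bitops.h>

static int next_index(u32 *counter)
{
        int i, res = 0;

        for (i = 0; i < 4; i++) {
                if (counter[i] + 1 != 0)
                        return res + ffz(counter[i]++);
                counter[i] = 0;
                res += 32;
        }

        /* the counter wrapped from all-ones to all-zeros: the highest
         * valid table index, 127, is the correct return value here */
        return 127;
}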
-
Horia Geantă authored
ghash is a keyed hash algorithm, thus setkey needs to be called. Otherwise the following error occurs:

$ modprobe tcrypt mode=318 sec=1
testing speed of async ghash-generic (ghash-generic)
tcrypt: test 0 ( 16 byte blocks, 16 bytes per update, 1 updates):
tcrypt: hashing failed ret=-126

Cc: <stable@vger.kernel.org> # 4.6+
Fixes: 0660511c ("crypto: tcrypt - Use ahash")
Tested-by: Franck Lenormand <franck.lenormand@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
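A minimal sketch of the required step (the key length and buffer are assumptions); note that -126 above is -ENOKEY, the error keyed hashes return until a key has been set:

#include <crypto/hash.h>

static int set_ghash_key(struct crypto_ahash *tfm, const u8 *key)
{
        /* ghash takes a 16-byte key; without this call every digest
         * request fails with -ENOKEY */
        return crypto_ahash_setkey(tfm, key, 16);
}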
-
Horia Geantă authored
Enable CAAM (Cryptographic Accelerator and Assurance Module) driver for QorIQ Data Path Acceleration Architecture (DPAA) v2. It handles DPSECI (Data Path SEC Interface) DPAA2 objects that sit on the Management Complex (MC) fsl-mc bus. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Add support for unkeyed and keyed (hmac) md5, sha algorithms. Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
The caam/qi2 driver will support ahash algorithms, so move ahash descriptor generation into a shared location.

Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Add support to submit the following skcipher algorithms via the DPSECI backend:

        cbc({aes,des,des3_ede})
        ctr(aes), rfc3686(ctr(aes))
        xts(aes)

Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Add a CAAM driver that works using the DPSECI backend, i.e. manages DPSECI DPAA2 objects sitting on the Management Complex (MC) fsl-mc bus.

Data transfers (crypto requests) are sent/received to/from the CAAM crypto engine via the Queue Interface (v2), this being similar to the existing caam/qi. OTOH, configuration/setup (obtaining virtual queue IDs, authorization etc.) is done by sending commands to the MC f/w.

Note that the CAAM accelerator included in DPAA2 platforms still has Job Rings; however, the driver being added does not handle access via this backend. Kconfig & Makefile are updated such that the DPAA2-CAAM (a.k.a. "caam/qi2") driver does not depend on the caam/jr or caam/qi backends - which rely on platform bus support (ctrl.c).

Support for the following aead and authenc algorithms is also added in this patch:

- aead:
        gcm(aes)
        rfc4106(gcm(aes))
        rfc4543(gcm(aes))
- authenc:
        authenc(hmac({md5,sha*}),cbc({aes,des,des3_ede}))
        echainiv(authenc(hmac({md5,sha*}),cbc({aes,des,des3_ede})))
        authenc(hmac({md5,sha*}),rfc3686(ctr(aes)))
        seqiv(authenc(hmac({md5,sha*}),rfc3686(ctr(aes))))

Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Add support to translate error codes returned by QI v2, i.e. Queue Interface present on DataPath Acceleration Architecture v2 (DPAA2). Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Add the low-level API that allows managing DPSECI DPAA2 objects that sit on the Management Complex (MC) fsl-mc bus. The API is compatible with MC firmware 10.2.0+.

Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Fix the following sparse endianness warnings:

drivers/crypto/caam/regs.h:95:1: sparse: incorrect type in return expression (different base types)
drivers/crypto/caam/regs.h:95:1:    expected unsigned int
drivers/crypto/caam/regs.h:95:1:    got restricted __le32 [usertype] <noident>
drivers/crypto/caam/regs.h:95:1: sparse: incorrect type in return expression (different base types)
drivers/crypto/caam/regs.h:95:1:    expected unsigned int
drivers/crypto/caam/regs.h:95:1:    got restricted __be32 [usertype] <noident>
drivers/crypto/caam/regs.h:92:1: sparse: cast to restricted __le32
drivers/crypto/caam/regs.h:92:1: sparse: cast to restricted __be32

Fixes: 261ea058 ("crypto: caam - handle core endianness != caam endianness")
Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Add support for Congestion State Change Notifications (CSCN), which allow DPIO users to be notified when a congestion group changes its state (due to hitting the entrance / exit threshold). Acked-by: Li Yang <leoyang.li@nxp.com> Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Add support for the dpaa2_fd_list format, i.e. the dpaa2_fl_entry structure and its accessors. Frame list entries (FLEs) are similar, but not identical to FDs:

+ "F" (final) bit
- FMT[b'01] is reserved
- DD, SC, DROPP bits (covered by the "FD compatibility" field in the FLE case)
- FLC[5:0] not used for stashing

Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
This commit adds back functions removed in commit a211c817 ("staging: fsl-mc/dpio: remove couple of unused functions"), since the dpseci object will make use of them.

Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Acked-by: Laurentiu Tudor <laurentiu.tudor@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
In commit 9f480fae ("crypto: chacha20 - Fix keystream alignment for chacha20_block()"), I had missed that chacha20_block() can be called directly on the buffer passed to get_random_bytes(), which can have any alignment. So, while my commit didn't break anything, it didn't fully solve the alignment problems. Revert my solution and just update chacha20_block() to use put_unaligned_le32(), so the output buffer need not be aligned. This is simpler, and on many CPUs it's the same speed. But, I kept the 'tmp' buffers in extract_crng_user() and _get_random_bytes() 4-byte aligned, since that alignment is actually needed for _crng_backtrack_protect() too. Reported-by: Stephan Müller <smueller@chronox.de> Cc: Theodore Ts'o <tytso@mit.edu> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
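A sketch of the output step (the state handling is elided and the function name is made up):

#include <asm/unaligned.h>
#include <linux/types.h>

static void store_keystream(const u32 x[16], u8 *out)
{
        int i;

        /* byte-wise stores: `out` may have any alignment */
        for (i = 0; i < 16; i++)
                put_unaligned_le32(x[i], out + i * sizeof(u32));
}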
-
Ondrej Mosnacek authored
Since commit acb9b159 ("crypto: gf128mul - define gf128mul_x_* in gf128mul.h"), the gf128mul_x_*() functions are very fast and therefore caching the computed XTS tweaks has only negligible advantage over computing them twice. In fact, since the current caching implementation limits the size of the calls to the child ecb(...) algorithm to PAGE_SIZE (usually 4096 B), it is often actually slower than the simple recomputing implementation.

This patch simplifies the XTS template to recompute the XTS tweaks from scratch in the second pass and thus also removes the need to allocate a dynamic buffer using kmalloc(). As discussed at [1], the use of kmalloc causes deadlocks with dm-crypt.

PERFORMANCE RESULTS

I measured the time to encrypt/decrypt a memory buffer of varying sizes with xts(ecb-aes-aesni) using a tool I wrote ([2]), and the results suggest that after this patch the performance is either better or comparable for both small and large buffers. Note that there is a lot of noise in the measurements, but the overall difference is easy to see.

Old code:

ALGORITHM   KEY (b)   DATA (B)   TIME ENC (ns)   TIME DEC (ns)
 xts(aes)       256         64             331             328
 xts(aes)       384         64             332             333
 xts(aes)       512         64             338             348
 xts(aes)       256        512             889             920
 xts(aes)       384        512            1019             993
 xts(aes)       512        512            1032             990
 xts(aes)       256       4096            2152            2292
 xts(aes)       384       4096            2453            2597
 xts(aes)       512       4096            3041            2641
 xts(aes)       256      16384            9443            8027
 xts(aes)       384      16384            8536            8925
 xts(aes)       512      16384            9232            9417
 xts(aes)       256      32768           16383           14897
 xts(aes)       384      32768           17527           16102
 xts(aes)       512      32768           18483           17322

New code:

ALGORITHM   KEY (b)   DATA (B)   TIME ENC (ns)   TIME DEC (ns)
 xts(aes)       256         64             328             324
 xts(aes)       384         64             324             319
 xts(aes)       512         64             320             322
 xts(aes)       256        512             476             473
 xts(aes)       384        512             509             492
 xts(aes)       512        512             531             514
 xts(aes)       256       4096            2132            1829
 xts(aes)       384       4096            2357            2055
 xts(aes)       512       4096            2178            2027
 xts(aes)       256      16384            6920            6983
 xts(aes)       384      16384            8597            7505
 xts(aes)       512      16384            7841            8164
 xts(aes)       256      32768           13468           12307
 xts(aes)       384      32768           14808           13402
 xts(aes)       512      32768           15753           14636

[1] https://lkml.org/lkml/2018/8/23/1315
[2] https://gitlab.com/omos/linux-crypto-bench

Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
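The recomputation works because the tweak sequence is a simple chain of multiplications by x; a sketch, assuming a le128 tweak as used by the XTS template:

#include <crypto/gf128mul.h>

/* advance the XTS tweak from block i to block i+1 in place */
static void next_tweak(le128 *t)
{
        gf128mul_x_ble(t, t);
}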
-
Ard Biesheuvel authored
The Crypto Extension instantiation of the aes-modes.S collection of skciphers uses only 15 NEON registers for the round key array, whereas the pure NEON flavor uses 16 NEON registers for the AES S-box. This means we have a spare register available that we can use to hold the XTS mask vector, removing the need to reload it at every iteration of the inner loop. Since the pure NEON version does not permit this optimization, tweak the macros so we can factor out this functionality. Also, replace the literal load with a short sequence to compose the mask vector. On Cortex-A53, this results in a ~4% speedup. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Currently, we rely on the generic CTS chaining mode wrapper to instantiate the cts(cbc(aes)) skcipher. Due to the high performance of the ARMv8 Crypto Extensions AES instructions (~1 cycle per byte), any overhead in the chaining mode layers is amplified, and so it pays off considerably to fold the CTS handling into the SIMD routines. On Cortex-A53, this results in a ~50% speedup for smaller input sizes.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
The reasoning of commit f10dc56c ("crypto: arm64 - revert NEON yield for fast AEAD implementations") applies equally to skciphers: the walk API already guarantees that the input size of each call into the NEON code is bounded to the size of a page, and so there is no need for an additional TIF_NEED_RESCHED flag check inside the inner loop. So revert the skcipher changes to aes-modes.S (but retain the mac ones). This partially reverts commit 0c8f838a.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
For some reason, the asmlinkage prototypes of the NEON routines take u8[] arguments for the round key arrays, while the actual round keys are arrays of u32, and so passing them into those routines requires u8* casts at each occurrence. Fix that. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
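Illustrative before/after of one such prototype (the exact parameter list is an assumption, not copied from the patch):

#include <linux/linkage.h>
#include <linux/types.h>

/* old shape, forcing a (u8 *) cast at every call site:
 *   asmlinkage void aes_ecb_encrypt(u8 out[], u8 const in[],
 *                                   u8 const rk[], int rounds, int blocks);
 * new shape, matching the real u32 round-key array: */
asmlinkage void aes_ecb_encrypt(u8 out[], u8 const in[],
                                u32 const rk[], int rounds, int blocks);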
-
Srikanth Jampala authored
Use dma_pool_zalloc() instead of dma_pool_alloc() with the __GFP_ZERO flag, and rename the crypto dma pool to "nitrox-context".

Signed-off-by: Srikanth Jampala <Jampala.Srikanth@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
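The two calls are equivalent; a sketch (the pool and handle names are placeholders, not the driver's):

#include <linux/dmapool.h>
#include <linux/gfp.h>

static void *alloc_ctx(struct dma_pool *pool, dma_addr_t *dma)
{
        /* before: dma_pool_alloc(pool, GFP_KERNEL | __GFP_ZERO, dma) */
        return dma_pool_zalloc(pool, GFP_KERNEL, dma);
}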
-
Herbert Xu authored
Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 to resolve the caam conflict with the skcipher conversion.
-
Horia Geantă authored
In some cases the zero-length hw_desc array at the end of the ablkcipher_edesc struct requires 4 bytes of tail padding. Due to the tail padding and the way pointers to the S/G table and IV are computed:

        edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) +
                         desc_bytes;
        iv = (u8 *)edesc->hw_desc + desc_bytes + sec4_sg_bytes;

the first 4 bytes of the IV are overwritten by the S/G table. Update the computation of the pointer to the S/G table to rely on the offset of the hw_desc member and not on the sizeof() operator.

Cc: <stable@vger.kernel.org> # 4.13+
Fixes: 115957bb ("crypto: caam - fix IV DMA mapping and updating")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
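A sketch of the corrected pointer math with a stand-in struct (the real one is ablkcipher_edesc): offsetof() excludes the tail padding that sizeof() counts, so the S/G table can no longer land on top of the IV:

#include <linux/stddef.h>
#include <linux/types.h>

struct edesc_like {
        u64 iv_dma;             /* 8-byte member sets struct alignment to 8 */
        u32 sec4_sg_bytes;
        u32 hw_desc[];          /* starts at offset 12, but sizeof() is 16 */
};

static void *sec4_sg_ptr(struct edesc_like *edesc, unsigned int desc_bytes)
{
        /* was: (void *)edesc + sizeof(struct edesc_like) + desc_bytes */
        return (void *)edesc + offsetof(struct edesc_like, hw_desc) +
               desc_bytes;
}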
-
- 14 Sep, 2018 3 commits
-
-
Srikanth Jampala authored
Added support to configure SR-IOV using the sysfs interface; the supported VF modes are 16, 32, 64 and 128. Grouped the hardware configuration functions into the "nitrox_hal.h" file and changed the driver version to "1.1".

Signed-off-by: Srikanth Jampala <Jampala.Srikanth@cavium.com>
Reviewed-by: Gadam Sreerama <sgadam@cavium.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Mikulas Patocka authored
This patch fixes gcmaes_crypt_by_sg so that it won't use memory allocation if the data doesn't cross a page boundary. Authenticated encryption may be used by dm-crypt; if the encryption or decryption failed, it would result in an I/O error and filesystem corruption. The function gcmaes_crypt_by_sg is using a GFP_ATOMIC allocation that can fail at any time. This patch fixes the logic so that it won't attempt the failing allocation if the data doesn't cross a page boundary.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
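A sketch of the "stays within one page" condition for a single scatterlist entry (the function name is made up):

#include <linux/mm.h>
#include <linux/scatterlist.h>

static bool sg_fits_in_one_page(struct scatterlist *sg)
{
        /* a region crosses a page boundary iff offset + length
         * runs past PAGE_SIZE */
        return sg->offset + sg->length <= PAGE_SIZE;
}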
-
kbuild test robot authored
Fixes: b7637754 ("crc-t10dif: Pick better transform if one becomes available") Signed-off-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-