- 09 Dec, 2022 34 commits
-
Linus Walleij authored
It turns out we can just modify the newer STM32 CRYP driver to be used with the Ux500, and now that we have done that, delete the old and sparsely maintained Ux500 CRYP driver. Cc: Lionel Debieve <lionel.debieve@foss.st.com> Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Alexandre Torgue <alexandre.torgue@foss.st.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Linus Walleij authored
This adds a few small quirks to handle the differences between the STM32 and Ux500 cryp blocks. The following differences are handled with special bool switch bits in the capabilities:

- The main difference is that some registers are removed, so we add register offsets for all registers in the per-variant data, then assign the right offsets for the Ux500 vs the STM32 variants.
- The Ux500 does not support the AEAD algorithms gcm(aes) and ccm(aes). Avoid registering them when running on Ux500.
- The Ux500 has a special "linear" key format and does some elaborate bit swizzling of the key bits before writing them into the key registers. This is written as an "application note" inside the DB8500 design specification, and seems to be the result of some mishap when assigning the data lines to register bits. (STM32 has clearly fixed this.)
- The Ux500 does not have the KP "key prepare" bit in the CR register. Instead, we need to set the KSE "key schedule encryption" bit, which does the same thing but sits in bit 11 rather than being a special "algorithm type" as on STM32. The algorithm must however be specified as AES ECB while doing this.
- The Ux500 cannot just read out the IV registers: we need to set the KEYRDEN "key read enable" bit, as this protects not just the key but also the IV from being read out. Enable this bit before reading out the IV and disable it afterwards.

Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Alexandre Torgue <alexandre.torgue@foss.st.com> Acked-by: Lionel Debieve <lionel.debieve@foss.st.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
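A minimal C sketch of how such per-variant capability data might be laid out; the struct name, field names and register offsets here are illustrative, not the actual stm32-cryp driver definitions:

```c
#include <linux/types.h>

/* Illustrative per-variant data: register offsets plus bool quirk
 * switches. The real driver's struct and field names may differ. */
struct cryp_variant_caps {
	u32 cr;			/* per-variant register offsets */
	u32 sr;
	u32 din;
	u32 dout;
	bool linear_aes_key;	/* Ux500: swizzled "linear" key format */
	bool kp_mode;		/* has the KP bit (STM32); else use KSE */
	bool iv_protection;	/* KEYRDEN must gate IV readout (Ux500) */
	bool aeads_support;	/* gcm(aes)/ccm(aes) available */
};

static const struct cryp_variant_caps ux500_caps = {
	.cr = 0x00, .sr = 0x04,	/* placeholder offsets */
	.din = 0x08, .dout = 0x10,
	.linear_aes_key = true,
	.kp_mode = false,	/* set KSE (bit 11) with AES ECB instead */
	.iv_protection = true,
	.aeads_support = false,
};
```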
-
Linus Walleij authored
The Ux500 cryp and hash drivers are older versions of the hardware managed by the stm32 driver. Instead of trying to improve the Ux500 cryp and hash drivers, start to switch over to the modern and better-maintained STM32 drivers. Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Alexandre Torgue <alexandre.torgue@foss.st.com> Acked-by: Lionel Debieve <lionel.debieve@foss.st.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Linus Walleij authored
This adds device tree bindings for the Ux500 CRYP block as a compatible in the STM32 CRYP bindings. The Ux500 CRYP binding has been used for ages in the kernel device tree for Ux500 but was never documented, so fill in the gap by making it a sibling of the STM32 CRYP block, which is what it is. The relationship to the existing STM32 CRYP block is pretty obvious when looking at the register map, and I have written patches to reuse the STM32 CRYP driver on the Ux500. The two properties added are DMA channels and power domain. Power domains are a generic SoC feature and the STM32 variant also has DMA channels. Cc: devicetree@vger.kernel.org Cc: Rob Herring <robh+dt@kernel.org> Cc: Krzysztof Kozlowski <krzysztof.kozlowski+dt@linaro.org> Cc: Lionel Debieve <lionel.debieve@foss.st.com> Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com> Cc: Alexandre Torgue <alexandre.torgue@foss.st.com> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Xiongfeng Wang authored
for_each_pci_dev() is implemented by pci_get_device(). The comment of pci_get_device() says that it will increase the reference count for the returned pci_dev and also decrease the reference count for the input pci_dev @from if it is not NULL. If we break out of the for_each_pci_dev() loop with pdev not NULL, we need to call pci_dev_put() to decrease the reference count. Add a new struct 'amd_geode_priv' to record the pointer to the pci_dev and the membase, and then add the missing pci_dev_put() for the normal and error paths. Fixes: ef5d8627 ("[PATCH] Add Geode HW RNG driver") Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
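A hedged sketch of the leak pattern and its fix; the vendor match, the 'setup_fails()' helper and the error values are illustrative stand-ins, not the geode-rng code:

```c
#include <linux/module.h>
#include <linux/pci.h>

static struct pci_dev *held_dev;

/* Hypothetical stand-in for whatever device setup might fail. */
static bool setup_fails(struct pci_dev *pdev)
{
	return false;
}

static int __init example_init(void)
{
	struct pci_dev *pdev = NULL;

	/* pci_get_device(), which backs this loop, drops the reference
	 * on the previous device and takes one on the one it returns. */
	for_each_pci_dev(pdev) {
		if (pdev->vendor == PCI_VENDOR_ID_AMD)
			break;		/* exit the loop holding a reference */
	}
	if (!pdev)
		return -ENODEV;

	if (setup_fails(pdev)) {
		pci_dev_put(pdev);	/* error path: drop the reference */
		return -EIO;
	}

	held_dev = pdev;		/* keep it for the driver's lifetime */
	return 0;
}

static void __exit example_exit(void)
{
	pci_dev_put(held_dev);		/* normal path: drop it on unload */
}
```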
-
Xiongfeng Wang authored
for_each_pci_dev() is implemented by pci_get_device(). The comment of pci_get_device() says that it will increase the reference count for the returned pci_dev and also decrease the reference count for the input pci_dev @from if it is not NULL. If we break out of the for_each_pci_dev() loop with pdev not NULL, we need to call pci_dev_put() to decrease the reference count. Add the missing pci_dev_put() for the normal and error paths. Fixes: 96d63c02 ("[PATCH] Add AMD HW RNG driver") Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
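As a sketch of the conversion this run of patches applies, assuming the helper names added earlier in this series (crypto_skcipher_ctx_dma() and CRYPTO_DMA_PADDING) and a made-up driver context:

```c
#include <linux/string.h>
#include <crypto/algapi.h>
#include <crypto/internal/skcipher.h>

/* Hypothetical driver context; only the conversion pattern matters. */
struct my_drv_ctx {
	u8 key[32];
};

static int my_setkey(struct crypto_skcipher *tfm, const u8 *key,
		     unsigned int keylen)
{
	/* Was crypto_skcipher_ctx(tfm), which only guarantees kmalloc
	 * alignment; the _dma variant returns a DMA-aligned pointer. */
	struct my_drv_ctx *ctx = crypto_skcipher_ctx_dma(tfm);

	if (keylen > sizeof(ctx->key))
		return -EINVAL;
	memcpy(ctx->key, key, keylen);
	return 0;
}

static struct skcipher_alg my_alg = {
	/* Reserve the padding that the _dma helper skips over. */
	.base.cra_ctxsize = sizeof(struct my_drv_ctx) + CRYPTO_DMA_PADDING,
	.setkey		  = my_setkey,
};
```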
-
Herbert Xu authored
This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gaosheng Cui authored
Smatch reports the following warning: drivers/crypto/img-hash.c:366 img_hash_dma_task() warn: variable dereferenced before check 'hdev->req' The dereference of 'hdev->req' should be done after the check; fix it. Fixes: d358f1ab ("crypto: img-hash - Add Imagination Technologies hw hash accelerator") Fixes: 10badea2 ("crypto: img-hash - Fix null pointer exception") Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
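The generic shape of this class of bug and its fix; 'my_hash_dev' and 'process()' are stand-ins, not the img-hash code:

```c
struct ahash_request;

struct my_hash_dev {				/* stand-in device struct */
	struct ahash_request *req;
};

static void process(struct ahash_request *req)	/* stand-in for DMA work */
{
}

static void dma_task(struct my_hash_dev *hdev)
{
	struct ahash_request *req;

	/* Buggy form: 'req = hdev->req; ...' before the NULL check,
	 * so the pointer is used before it is validated. Fixed form:
	 * check first, then use. */
	if (!hdev->req)
		return;
	req = hdev->req;
	process(req);
}
```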
-
Ard Biesheuvel authored
Use the frame_push and frame_pop macros to set up the stack frame so that return address protections will be enabled automatically when configured. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Use the frame_push and frame_pop macros to set up the stack frame so that return address protections will be enabled automatically when configured. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Use the frame_push and frame_pop macros to create the stack frames in the AES chaining mode wrappers so that they will get PAC and/or shadow call stack protection when configured. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Use the frame_push and frame_pop macros consistently to create the stack frame, so that we will get PAC and/or shadow call stack handling as well when enabled. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch fixes the sparse warning about arrays of flexible structures by removing an unnecessary use of them in struct __crypto_ctx. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
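For illustration, the construct sparse objects to looks like this (a generic example, not the actual __crypto_ctx definition):

```c
/* An element type that ends in a flexible array member... */
struct elem {
	int len;
	int data[];		/* flexible array member */
};

/* ...cannot soundly be used as an array element: sparse warns
 * "array of flexible structures", since each element's flexible
 * tail would overlap the next element. */
struct container {
	struct elem items[4];
};
```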
-
Giovanni Cabiddu authored
The acomp API allows sending requests with a NULL destination buffer. In this case, the algorithm implementation needs to allocate the destination scatter list, perform the operation and return the buffer to the user. For decompression, data is likely to expand and be bigger than the allocated buffer. This implements a re-submission mechanism for decompression requests that is triggered if the destination buffer, allocated by the driver, is not big enough to store the output from decompression. If an overflow is detected when processing the callback for a decompression request with a NULL destination buffer, a workqueue is scheduled. This allocates a new scatter list of size CRYPTO_ACOMP_DST_MAX, now 128KB, creates a new firmware scatter list and resubmits the job to the hardware accelerator. Suggested-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Giovanni Cabiddu authored
The acomp API allows sending requests with a NULL destination buffer. In this case, the algorithm implementation needs to allocate the destination scatter list, perform the operation and return the buffer to the user. For decompression, data is likely to expand and be bigger than the allocated buffer. Define the maximum size (128KB) that acomp implementations will allocate for decompression operations as destination buffer when they receive a request with a NULL destination buffer. Suggested-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
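A hedged sketch of the caller side of such a NULL-destination request; the completion is assumed synchronous and error handling is trimmed for brevity:

```c
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <crypto/acompress.h>

static int decompress_null_dst(struct scatterlist *src, unsigned int slen)
{
	struct crypto_acomp *tfm;
	struct acomp_req *req;
	int ret;

	tfm = crypto_alloc_acomp("deflate", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = acomp_request_alloc(tfm);
	if (!req) {
		crypto_free_acomp(tfm);
		return -ENOMEM;
	}

	/* dst == NULL, dlen == 0: the implementation allocates the
	 * destination itself, bounded by CRYPTO_ACOMP_DST_MAX (128KB). */
	acomp_request_set_params(req, src, NULL, slen, 0);

	ret = crypto_acomp_decompress(req);
	/* On success, req->dst holds the allocated scatter list and
	 * req->dlen the number of output bytes. */

	acomp_request_free(req);
	crypto_free_acomp(tfm);
	return ret;
}
```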
-
Giovanni Cabiddu authored
Enable deflate for QAT GEN4 devices. This adds (1) logic to create configuration entries at probe time for the compression instances for QAT GEN4 devices; (2) the implementation of QAT GEN4 specific compression operations, required since the creation of the compression request template is different between GEN2 and GEN4; and (3) updates to the firmware API related to compression for GEN4. The implementation configures the device to produce data compressed dynamically, optimized for throughput over compression ratio. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com> Reviewed-by: Adam Guerin <adam.guerin@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Giovanni Cabiddu authored
Add infrastructure for implementing the acomp APIs in the QAT driver and expose the deflate algorithm for QAT GEN2 devices. This adds (1) the compression service, which includes logic to create, allocate and handle compression instances; (2) logic to create configuration entries at probe time for the compression instances; (3) updates to the firmware API for allowing the compression service; and (4) a back-end for deflate that implements the acomp API for QAT GEN2 devices. The implementation configures the device to produce data compressed statically, optimized for throughput over compression ratio. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com> Reviewed-by: Adam Guerin <adam.guerin@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
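For orientation, a minimal sketch of what registering such an acomp back-end looks like; the names and empty callbacks are stand-ins, not the QAT implementation:

```c
#include <linux/module.h>
#include <crypto/internal/acomp.h>

static int my_deflate_compress(struct acomp_req *req)
{
	return -ENOSYS;		/* real driver: build and send a FW request */
}

static int my_deflate_decompress(struct acomp_req *req)
{
	return -ENOSYS;
}

static struct acomp_alg my_deflate = {		/* illustrative names */
	.compress	= my_deflate_compress,
	.decompress	= my_deflate_decompress,
	.base = {
		.cra_name	 = "deflate",
		.cra_driver_name = "deflate_example",
		.cra_priority	 = 4001,
		.cra_module	 = THIS_MODULE,
	},
};

static int __init my_init(void)
{
	return crypto_register_acomp(&my_deflate);
}

static void __exit my_exit(void)
{
	crypto_unregister_acomp(&my_deflate);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");
```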
-
Giovanni Cabiddu authored
Rename qat_crypto_dev_config() to adf_gen2_dev_config() and relocate it to the newly created file adf_gen2_config.c. This function is specific to QAT GEN2 devices and will also be used to configure the compression service. In addition, change the drivers to use the dev_config() in the hardware data structure (which for GEN2 devices now points to adf_gen2_dev_config()), for consistency. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com> Reviewed-by: Adam Guerin <adam.guerin@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Giovanni Cabiddu authored
Move qat_algs_alloc_flags() from qat_crypto.h to qat_bl.h as it will also be used by the compression logic. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com> Reviewed-by: Adam Guerin <adam.guerin@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Giovanni Cabiddu authored
Move the structures qat_instance_backlog and qat_alg_req from qat_crypto.h to qat_algs_send.h since they are not unique to crypto. Both structures will be used by the compression service to support requests with the CRYPTO_TFM_REQ_MAY_BACKLOG flag set. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com> Reviewed-by: Adam Guerin <adam.guerin@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Giovanni Cabiddu authored
The compression service requires an additional pre-allocated buffer for each destination scatter list. Extend the function qat_alg_sgl_to_bufl() to take an additional structure that contains the DMA address and the size of the extra buffer, which will be appended to the destination FW SGL. The logic that unmaps buffers in qat_alg_free_bufl() has been changed to start unmapping from buffer 0 instead of skipping the initial num_buff - num_mapped_bufs buffers, as that functionality was not used in the code. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com> Reviewed-by: Adam Guerin <adam.guerin@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
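A hypothetical sketch of what such an extra-buffer descriptor could look like; the actual struct name and fields in the QAT driver may differ:

```c
#include <linux/types.h>

/* Hypothetical descriptor for the pre-allocated extra buffer. */
struct extra_dst_buff {
	dma_addr_t addr;	/* DMA address of the pre-allocated buffer */
	unsigned int sz;	/* its size; appended to the dest FW SGL */
};
```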
-
Giovanni Cabiddu authored
The structure qat_crypto_request_buffs, which contains the source and destination buffer lists and the corresponding sizes and DMA addresses, is also required for the compression service. Rename it qat_request_buffs and move it to qat_bl.h. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com> Reviewed-by: Adam Guerin <adam.guerin@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Giovanni Cabiddu authored
The functions qat_alg_sgl_to_bufl() and qat_alg_free_bufl() take as argument a qat_crypto_instance and a qat_crypto_request structure. These two structures are used only to get a reference to the adf_accel_dev and qat_crypto_request_buffs. In order to reuse these functions for the compression service, change the signature so that they take adf_accel_dev and qat_crypto_request_buffs. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com> Reviewed-by: Adam Guerin <adam.guerin@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
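Illustrative before/after prototypes for the described change; the exact argument lists are assumptions, only the swapped-in parameter types come from the commit text:

```c
#include <linux/types.h>

struct adf_accel_dev;
struct qat_crypto_instance;
struct qat_crypto_request;
struct qat_crypto_request_buffs;
struct scatterlist;

/* Before: tied to the crypto service, which is used only to reach
 * the accel device and the request's buffer bookkeeping. */
int qat_alg_sgl_to_bufl(struct qat_crypto_instance *inst,
			struct scatterlist *sgl,
			struct scatterlist *sglout,
			struct qat_crypto_request *qat_req,
			gfp_t flags);

/* After: any service with a device handle and a buffer struct,
 * including compression, can call it. */
int qat_alg_sgl_to_bufl(struct adf_accel_dev *accel_dev,
			struct scatterlist *sgl,
			struct scatterlist *sglout,
			struct qat_crypto_request_buffs *buf,
			gfp_t flags);
```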
-
Giovanni Cabiddu authored
Rename the functions qat_alg_sgl_to_bufl() and qat_alg_free_bufl() as qat_bl_sgl_to_bufl() and qat_bl_free_bufl() after their relocation into the qat_bl module. This commit does not implement any functional change. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Giovanni Cabiddu authored
Move the logic that maps, unmaps and converts scatterlists into QAT bufferlists from qat_algs.c to a new module, qat_bl. This is to allow reuse of the logic by the data compression service. This commit does not implement any functional change. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Wojciech Ziemba <wojciech.ziemba@intel.com> Reviewed-by: Adam Guerin <adam.guerin@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 05 Dec, 2022 1 commit
-
Herbert Xu authored
Directly including asm/cache.h leads to build failures on powerpc so replace it with linux/cache.h instead. Fixes: e634ac4a ("crypto: api - Add crypto_tfm_ctx_dma") Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
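The pattern, in brief; the linux/ wrapper resolves to the right architecture header:

```c
/* Include the portable wrapper rather than the arch header directly;
 * <linux/cache.h> pulls in <asm/cache.h> where appropriate. */
#include <linux/cache.h>
```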
-
- 02 Dec, 2022 5 commits
-
Tianjia Zhang authored
Commit d2825fa9 ("crypto: sm3,sm4 - move into crypto directory") moves the SM3 and SM4 stand-alone libraries and the algorithm implementations for the Crypto API into the same directory, and modifies the corresponding Kconfig relationships: CONFIG_CRYPTO_SM3/4 now corresponds to the stand-alone SM3/4 library, and CONFIG_CRYPTO_SM3/4_GENERIC to the algorithm implementation for the Crypto API. Therefore, this module needs to depend on the correct algorithm. Fixes: d2825fa9 ("crypto: sm3,sm4 - move into crypto directory") Cc: Jason A. Donenfeld <Jason@zx2c4.com> Cc: stable@vger.kernel.org # v5.19+ Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This driver has been implicitly relying on kmalloc alignment to be sufficient for DMA. This may no longer be the case with upcoming arm64 changes. This patch changes it to explicitly request DMA alignment from the Crypto API. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch adds helpers to access the kpp context structure and request context structure with an added alignment for DMA access. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch adds helpers to access the akcipher context structure and request context structure with an added alignment for DMA access. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Previously we limited the maximum alignment mask to 63. This is mostly due to stack usage for shash. This patch introduces a separate limit for shash algorithms and increases the general limit to 127 which is the value that we need for DMA allocations on arm64. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
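A small sketch of how an alignmask translates into pointer alignment, and why 127 suffices for arm64's 128-byte DMA granule; 'ctx_align()' is illustrative, not a Crypto API function:

```c
#include <linux/kernel.h>

/* Illustrative only: a mask of N yields (N + 1)-byte alignment, so an
 * alignmask of 127 rounds context pointers up to 128 bytes, matching
 * ARCH_DMA_MINALIGN on arm64. align_mask + 1 must be a power of two. */
static inline void *ctx_align(void *p, unsigned int align_mask)
{
	return PTR_ALIGN(p, align_mask + 1);
}
```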
-