Commit 5d95ff84 authored by Linus Torvalds

Merge tag 'v6.5-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto updates from Herbert Xu:
 "API:
   - Add linear akcipher/sig API
   - Add tfm cloning (hmac, cmac)
   - Add statesize to crypto_ahash

  Algorithms:
   - Allow only odd e and restrict value in FIPS mode for RSA
   - Replace LFSR with SHA3-256 in jitter
   - Add interface for gathering of raw entropy in jitter

  Drivers:
   - Fix race on data_avail and actual data in hwrng/virtio
   - Add hash and HMAC support in starfive
   - Add RSA algo support in starfive
   - Add support for PCI device 0x156E in ccp"

* tag 'v6.5-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (85 commits)
  crypto: akcipher - Do not copy dst if it is NULL
  crypto: sig - Fix verify call
  crypto: akcipher - Set request tfm on sync path
  crypto: sm2 - Provide sm2_compute_z_digest when sm2 is disabled
  hwrng: imx-rngc - switch to DEFINE_SIMPLE_DEV_PM_OPS
  hwrng: st - keep clock enabled while hwrng is registered
  hwrng: st - support compile-testing
  hwrng: imx-rngc - fix the timeout for init and self check
  KEYS: asymmetric: Use new crypto interface without scatterlists
  KEYS: asymmetric: Move sm2 code into x509_public_key
  KEYS: Add forward declaration in asymmetric-parser.h
  crypto: sig - Add interface for sign/verify
  crypto: akcipher - Add sync interface without SG lists
  crypto: cipher - On clone do crypto_mod_get()
  crypto: api - Add __crypto_alloc_tfmgfp
  crypto: api - Remove crypto_init_ops()
  crypto: rsa - allow only odd e and restrict value in FIPS mode
  crypto: geniv - Split geniv out of AEAD Kconfig option
  crypto: algboss - Add missing dependency on RNG2
  crypto: starfive - Add RSA algo support
  ...
parents d85a143b 486bfb05
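One headline item above is the new sync akcipher interface without SG lists. Based on the signatures added in this diff, a kernel-side sketch of calling it could look as follows (the algorithm name and error handling are illustrative, key setup is elided, and the helper is declared in a kernel-internal header):

```c
#include <linux/err.h>
#include <crypto/akcipher.h>

/* Sketch: encrypt a flat buffer via the new scatterlist-free helper.
 * crypto_akcipher_sync_encrypt() copies src into an internal buffer,
 * runs the request synchronously and copies the result back to dst. */
static int example_sync_encrypt(const void *src, unsigned int slen,
				void *dst, unsigned int dlen)
{
	struct crypto_akcipher *tfm;
	int ret;

	tfm = crypto_alloc_akcipher("rsa", 0, 0);	/* illustrative */
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* Returns 0 on success; no scatterlist setup required. */
	ret = crypto_akcipher_sync_encrypt(tfm, src, slen, dst, dlen);

	crypto_free_akcipher(tfm);
	return ret;
}
```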
......@@ -27,7 +27,18 @@ Description: (RW) Reports the current configuration of the QAT device.
* sym;asym: the device is configured for running crypto
services
* asym;sym: identical to sym;asym
* dc: the device is configured for running compression services
* sym: the device is configured for running symmetric crypto
services
* asym: the device is configured for running asymmetric crypto
services
* asym;dc: the device is configured for running asymmetric
crypto services and compression services
* dc;asym: identical to asym;dc
* sym;dc: the device is configured for running symmetric crypto
services and compression services
* dc;sym: identical to sym;dc
It is possible to set the configuration only if the device
is in the `down` state (see /sys/bus/pci/devices/<BDF>/qat/state)
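Analogous to the pm_idle_enabled example further below, a service reconfiguration could look like this (assuming the attribute file, whose name is truncated out of this excerpt, is called cfg_services):

```shell
# cat /sys/bus/pci/devices/<BDF>/qat/state
up
# echo down > /sys/bus/pci/devices/<BDF>/qat/state
# echo "sym;dc" > /sys/bus/pci/devices/<BDF>/qat/cfg_services
# echo up > /sys/bus/pci/devices/<BDF>/qat/state
```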
......@@ -47,3 +58,38 @@ Description: (RW) Reports the current configuration of the QAT device.
dc
This attribute is only available for qat_4xxx devices.
What: /sys/bus/pci/devices/<BDF>/qat/pm_idle_enabled
Date: June 2023
KernelVersion: 6.5
Contact: qat-linux@intel.com
Description: (RW) This configuration option provides a way to force the device to remain in
the MAX power state.
If idle support is enabled, the device will transition to the `MIN` power state when
idle; otherwise it will stay in the MAX power state.
Write to the file to enable or disable idle support.
The values are:
* 0: idle support is disabled
* 1: idle support is enabled
Default value is 1.
It is possible to set the pm_idle_enabled value only if the device
is in the `down` state (see /sys/bus/pci/devices/<BDF>/qat/state)
The following example shows how to change the pm_idle_enabled setting of
a device::
# cat /sys/bus/pci/devices/<BDF>/qat/state
up
# cat /sys/bus/pci/devices/<BDF>/qat/pm_idle_enabled
1
# echo down > /sys/bus/pci/devices/<BDF>/qat/state
# echo 0 > /sys/bus/pci/devices/<BDF>/qat/pm_idle_enabled
# echo up > /sys/bus/pci/devices/<BDF>/qat/state
# cat /sys/bus/pci/devices/<BDF>/qat/pm_idle_enabled
0
This attribute is only available for qat_4xxx devices.
......@@ -24,12 +24,20 @@ properties:
deprecated: true
description: Kept only for ABI backward compatibility
- items:
- enum:
- qcom,ipq4019-qce
- qcom,sm8150-qce
- const: qcom,qce
- items:
- enum:
- qcom,ipq6018-qce
- qcom,ipq8074-qce
- qcom,msm8996-qce
- qcom,qcm2290-qce
- qcom,sdm845-qce
- qcom,sm6115-qce
- const: qcom,ipq4019-qce
- const: qcom,qce
......@@ -46,16 +54,12 @@ properties:
maxItems: 1
clocks:
items:
- description: iface clocks register interface.
- description: bus clocks data transfer interface.
- description: core clocks rest of the crypto block.
minItems: 1
maxItems: 3
clock-names:
items:
- const: iface
- const: bus
- const: core
minItems: 1
maxItems: 3
iommus:
minItems: 1
......@@ -89,9 +93,37 @@ allOf:
enum:
- qcom,crypto-v5.1
- qcom,crypto-v5.4
- qcom,ipq4019-qce
- qcom,ipq6018-qce
- qcom,ipq8074-qce
- qcom,msm8996-qce
- qcom,sdm845-qce
then:
properties:
clocks:
maxItems: 3
clock-names:
items:
- const: iface
- const: bus
- const: core
required:
- clocks
- clock-names
- if:
properties:
compatible:
contains:
enum:
- qcom,qcm2290-qce
- qcom,sm6115-qce
then:
properties:
clocks:
maxItems: 1
clock-names:
items:
- const: core
required:
- clocks
- clock-names
......
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/crypto/starfive,jh7110-crypto.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: StarFive Cryptographic Module
maintainers:
- Jia Jie Ho <jiajie.ho@starfivetech.com>
- William Qiu <william.qiu@starfivetech.com>
properties:
compatible:
const: starfive,jh7110-crypto
reg:
maxItems: 1
clocks:
items:
- description: Hardware reference clock
- description: AHB reference clock
clock-names:
items:
- const: hclk
- const: ahb
interrupts:
maxItems: 1
resets:
maxItems: 1
dmas:
items:
- description: TX DMA channel
- description: RX DMA channel
dma-names:
items:
- const: tx
- const: rx
required:
- compatible
- reg
- clocks
- clock-names
- resets
- dmas
- dma-names
additionalProperties: false
examples:
- |
crypto: crypto@16000000 {
compatible = "starfive,jh7110-crypto";
reg = <0x16000000 0x4000>;
clocks = <&clk 15>, <&clk 16>;
clock-names = "hclk", "ahb";
interrupts = <28>;
resets = <&reset 3>;
dmas = <&dma 1 2>,
<&dma 0 2>;
dma-names = "tx", "rx";
};
...
......@@ -20265,6 +20265,13 @@ F: Documentation/devicetree/bindings/clock/starfive,jh71*.yaml
F: drivers/clk/starfive/clk-starfive-jh71*
F: include/dt-bindings/clock/starfive?jh71*.h
STARFIVE CRYPTO DRIVER
M: Jia Jie Ho <jiajie.ho@starfivetech.com>
M: William Qiu <william.qiu@starfivetech.com>
S: Supported
F: Documentation/devicetree/bindings/crypto/starfive*
F: drivers/crypto/starfive/
STARFIVE JH71X0 PINCTRL DRIVERS
M: Emil Renner Berthing <kernel@esmil.dk>
M: Jianlong Huang <jianlong.huang@starfivetech.com>
......
......@@ -26,8 +26,8 @@
#include "sha1.h"
asmlinkage void sha1_transform_neon(void *state_h, const char *data,
unsigned int rounds);
asmlinkage void sha1_transform_neon(struct sha1_state *state_h,
const u8 *data, int rounds);
static int sha1_neon_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
......@@ -39,8 +39,7 @@ static int sha1_neon_update(struct shash_desc *desc, const u8 *data,
return sha1_update_arm(desc, data, len);
kernel_neon_begin();
sha1_base_do_update(desc, data, len,
(sha1_block_fn *)sha1_transform_neon);
sha1_base_do_update(desc, data, len, sha1_transform_neon);
kernel_neon_end();
return 0;
......@@ -54,9 +53,8 @@ static int sha1_neon_finup(struct shash_desc *desc, const u8 *data,
kernel_neon_begin();
if (len)
sha1_base_do_update(desc, data, len,
(sha1_block_fn *)sha1_transform_neon);
sha1_base_do_finalize(desc, (sha1_block_fn *)sha1_transform_neon);
sha1_base_do_update(desc, data, len, sha1_transform_neon);
sha1_base_do_finalize(desc, sha1_transform_neon);
kernel_neon_end();
return sha1_base_finish(desc, out);
......
......@@ -21,8 +21,8 @@
#include "sha256_glue.h"
asmlinkage void sha256_block_data_order_neon(u32 *digest, const void *data,
unsigned int num_blks);
asmlinkage void sha256_block_data_order_neon(struct sha256_state *digest,
const u8 *data, int num_blks);
static int crypto_sha256_neon_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
......@@ -34,8 +34,7 @@ static int crypto_sha256_neon_update(struct shash_desc *desc, const u8 *data,
return crypto_sha256_arm_update(desc, data, len);
kernel_neon_begin();
sha256_base_do_update(desc, data, len,
(sha256_block_fn *)sha256_block_data_order_neon);
sha256_base_do_update(desc, data, len, sha256_block_data_order_neon);
kernel_neon_end();
return 0;
......@@ -50,9 +49,8 @@ static int crypto_sha256_neon_finup(struct shash_desc *desc, const u8 *data,
kernel_neon_begin();
if (len)
sha256_base_do_update(desc, data, len,
(sha256_block_fn *)sha256_block_data_order_neon);
sha256_base_do_finalize(desc,
(sha256_block_fn *)sha256_block_data_order_neon);
sha256_block_data_order_neon);
sha256_base_do_finalize(desc, sha256_block_data_order_neon);
kernel_neon_end();
return sha256_base_finish(desc, out);
......
......@@ -20,8 +20,8 @@
MODULE_ALIAS_CRYPTO("sha384-neon");
MODULE_ALIAS_CRYPTO("sha512-neon");
asmlinkage void sha512_block_data_order_neon(u64 *state, u8 const *src,
int blocks);
asmlinkage void sha512_block_data_order_neon(struct sha512_state *state,
const u8 *src, int blocks);
static int sha512_neon_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
......@@ -33,8 +33,7 @@ static int sha512_neon_update(struct shash_desc *desc, const u8 *data,
return sha512_arm_update(desc, data, len);
kernel_neon_begin();
sha512_base_do_update(desc, data, len,
(sha512_block_fn *)sha512_block_data_order_neon);
sha512_base_do_update(desc, data, len, sha512_block_data_order_neon);
kernel_neon_end();
return 0;
......@@ -49,9 +48,8 @@ static int sha512_neon_finup(struct shash_desc *desc, const u8 *data,
kernel_neon_begin();
if (len)
sha512_base_do_update(desc, data, len,
(sha512_block_fn *)sha512_block_data_order_neon);
sha512_base_do_finalize(desc,
(sha512_block_fn *)sha512_block_data_order_neon);
sha512_block_data_order_neon);
sha512_base_do_finalize(desc, sha512_block_data_order_neon);
kernel_neon_end();
return sha512_base_finish(desc, out);
......
......@@ -12,8 +12,9 @@
#include <crypto/internal/simd.h>
#include <crypto/sha2.h>
#include <crypto/sha256_base.h>
#include <linux/types.h>
#include <linux/module.h>
#include <linux/string.h>
#include <linux/types.h>
MODULE_DESCRIPTION("SHA-224/SHA-256 secure hash for arm64");
MODULE_AUTHOR("Andy Polyakov <appro@openssl.org>");
......
......@@ -71,8 +71,15 @@ config CRYPTO_AEAD
config CRYPTO_AEAD2
tristate
select CRYPTO_ALGAPI2
select CRYPTO_NULL2
select CRYPTO_RNG2
config CRYPTO_SIG
tristate
select CRYPTO_SIG2
select CRYPTO_ALGAPI
config CRYPTO_SIG2
tristate
select CRYPTO_ALGAPI2
config CRYPTO_SKCIPHER
tristate
......@@ -82,7 +89,6 @@ config CRYPTO_SKCIPHER
config CRYPTO_SKCIPHER2
tristate
select CRYPTO_ALGAPI2
select CRYPTO_RNG2
config CRYPTO_HASH
tristate
......@@ -143,12 +149,14 @@ config CRYPTO_MANAGER
config CRYPTO_MANAGER2
def_tristate CRYPTO_MANAGER || (CRYPTO_MANAGER!=n && CRYPTO_ALGAPI=y)
select CRYPTO_ACOMP2
select CRYPTO_AEAD2
select CRYPTO_HASH2
select CRYPTO_SKCIPHER2
select CRYPTO_AKCIPHER2
select CRYPTO_SIG2
select CRYPTO_HASH2
select CRYPTO_KPP2
select CRYPTO_ACOMP2
select CRYPTO_RNG2
select CRYPTO_SKCIPHER2
config CRYPTO_USER
tristate "Userspace cryptographic algorithm configuration"
......@@ -833,13 +841,16 @@ config CRYPTO_GCM
This is required for IPSec ESP (XFRM_ESP).
config CRYPTO_SEQIV
tristate "Sequence Number IV Generator"
config CRYPTO_GENIV
tristate
select CRYPTO_AEAD
select CRYPTO_SKCIPHER
select CRYPTO_NULL
select CRYPTO_RNG_DEFAULT
select CRYPTO_MANAGER
select CRYPTO_RNG_DEFAULT
config CRYPTO_SEQIV
tristate "Sequence Number IV Generator"
select CRYPTO_GENIV
help
Sequence Number IV generator
......@@ -850,10 +861,7 @@ config CRYPTO_SEQIV
config CRYPTO_ECHAINIV
tristate "Encrypted Chain IV Generator"
select CRYPTO_AEAD
select CRYPTO_NULL
select CRYPTO_RNG_DEFAULT
select CRYPTO_MANAGER
select CRYPTO_GENIV
help
Encrypted Chain IV generator
......@@ -1277,6 +1285,7 @@ endif # if CRYPTO_DRBG_MENU
config CRYPTO_JITTERENTROPY
tristate "CPU Jitter Non-Deterministic RNG (Random Number Generator)"
select CRYPTO_RNG
select CRYPTO_SHA3
help
CPU Jitter RNG (Random Number Generator) from the Jitterentropy library
......@@ -1287,6 +1296,26 @@ config CRYPTO_JITTERENTROPY
See https://www.chronox.de/jent.html
config CRYPTO_JITTERENTROPY_TESTINTERFACE
bool "CPU Jitter RNG Test Interface"
depends on CRYPTO_JITTERENTROPY
help
The test interface allows a privileged process to capture
the raw unconditioned high resolution time stamp noise that
is collected by the Jitter RNG for statistical analysis. As
this data is used at the same time to generate random bits,
the Jitter RNG operates in an insecure mode as long as the
recording is enabled. This interface is therefore only
intended for testing purposes and is not suitable for
production systems.
The raw noise data can be obtained using the jent_raw_hires
debugfs file. Using the kernel command-line option
jitterentropy_testing.boot_raw_hires_test=1, the raw noise of
the first 1000 entropy events since boot can be sampled.
If unsure, select N.
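As a rough sketch (the debugfs directory name is an assumption derived from the module option above; the exact file location may differ), the raw samples could be captured for offline statistical analysis like so:

```shell
# Boot with: jitterentropy_testing.boot_raw_hires_test=1
# Then copy out the recorded raw time-stamp noise; note the Jitter RNG
# runs in an insecure mode while the recording is enabled.
dd if=/sys/kernel/debug/jitterentropy_testing/jent_raw_hires \
   of=raw_hires.bin bs=4 count=1000
```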
config CRYPTO_KDF800108_CTR
tristate
select CRYPTO_HMAC
......@@ -1372,6 +1401,9 @@ config CRYPTO_STATS
help
Enable the gathering of crypto stats.
Enabling this option reduces the performance of the crypto API. It
should only be enabled when there is actually a use case for it.
This collects data sizes, numbers of requests, and numbers
of errors processed by:
- AEAD ciphers (encrypt, decrypt)
......
......@@ -14,7 +14,7 @@ crypto_algapi-y := algapi.o scatterwalk.o $(crypto_algapi-y)
obj-$(CONFIG_CRYPTO_ALGAPI2) += crypto_algapi.o
obj-$(CONFIG_CRYPTO_AEAD2) += aead.o
obj-$(CONFIG_CRYPTO_AEAD2) += geniv.o
obj-$(CONFIG_CRYPTO_GENIV) += geniv.o
obj-$(CONFIG_CRYPTO_SKCIPHER2) += skcipher.o
obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o
......@@ -25,6 +25,7 @@ crypto_hash-y += shash.o
obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o
obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o
obj-$(CONFIG_CRYPTO_SIG2) += sig.o
obj-$(CONFIG_CRYPTO_KPP2) += kpp.o
dh_generic-y := dh.o
......@@ -171,6 +172,7 @@ CFLAGS_jitterentropy.o = -O0
KASAN_SANITIZE_jitterentropy.o = n
UBSAN_SANITIZE_jitterentropy.o = n
jitterentropy_rng-y := jitterentropy.o jitterentropy-kcapi.o
obj-$(CONFIG_CRYPTO_JITTERENTROPY_TESTINTERFACE) += jitterentropy-testing.o
obj-$(CONFIG_CRYPTO_TEST) += tcrypt.o
obj-$(CONFIG_CRYPTO_GHASH) += ghash-generic.o
obj-$(CONFIG_CRYPTO_POLYVAL) += polyval-generic.o
......
// SPDX-License-Identifier: GPL-2.0-or-later
#ifndef _AEGIS_NEON_H
#define _AEGIS_NEON_H
void crypto_aegis128_init_neon(void *state, const void *key, const void *iv);
void crypto_aegis128_update_neon(void *state, const void *msg);
void crypto_aegis128_encrypt_chunk_neon(void *state, void *dst, const void *src,
unsigned int size);
void crypto_aegis128_decrypt_chunk_neon(void *state, void *dst, const void *src,
unsigned int size);
int crypto_aegis128_final_neon(void *state, void *tag_xor,
unsigned int assoclen,
unsigned int cryptlen,
unsigned int authsize);
#endif
......@@ -16,6 +16,7 @@
#define AEGIS_BLOCK_SIZE 16
#include <stddef.h>
#include "aegis-neon.h"
extern int aegis128_have_aes_insn;
......
......@@ -7,17 +7,7 @@
#include <asm/neon.h>
#include "aegis.h"
void crypto_aegis128_init_neon(void *state, const void *key, const void *iv);
void crypto_aegis128_update_neon(void *state, const void *msg);
void crypto_aegis128_encrypt_chunk_neon(void *state, void *dst, const void *src,
unsigned int size);
void crypto_aegis128_decrypt_chunk_neon(void *state, void *dst, const void *src,
unsigned int size);
int crypto_aegis128_final_neon(void *state, void *tag_xor,
unsigned int assoclen,
unsigned int cryptlen,
unsigned int authsize);
#include "aegis-neon.h"
int aegis128_have_aes_insn __ro_after_init;
......
......@@ -31,12 +31,6 @@ struct ahash_request_priv {
void *ubuf[] CRYPTO_MINALIGN_ATTR;
};
static inline struct ahash_alg *crypto_ahash_alg(struct crypto_ahash *hash)
{
return container_of(crypto_hash_alg_common(hash), struct ahash_alg,
halg);
}
static int hash_walk_next(struct crypto_hash_walk *walk)
{
unsigned int alignmask = walk->alignmask;
......@@ -432,6 +426,8 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
hash->setkey = ahash_nosetkey;
crypto_ahash_set_statesize(hash, alg->halg.statesize);
if (tfm->__crt_alg->cra_type != &crypto_ahash_type)
return crypto_init_shash_ops_async(tfm);
......@@ -573,6 +569,7 @@ struct crypto_ahash *crypto_clone_ahash(struct crypto_ahash *hash)
nhash->import = hash->import;
nhash->setkey = hash->setkey;
nhash->reqsize = hash->reqsize;
nhash->statesize = hash->statesize;
if (tfm->__crt_alg->cra_type != &crypto_ahash_type)
return crypto_clone_shash_ops_async(nhash, hash);
......
......@@ -10,6 +10,7 @@
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/scatterlist.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/string.h>
......@@ -17,6 +18,8 @@
#include "internal.h"
#define CRYPTO_ALG_TYPE_AHASH_MASK 0x0000000e
static int __maybe_unused crypto_akcipher_report(
struct sk_buff *skb, struct crypto_alg *alg)
{
......@@ -105,7 +108,7 @@ static const struct crypto_type crypto_akcipher_type = {
.report_stat = crypto_akcipher_report_stat,
#endif
.maskclear = ~CRYPTO_ALG_TYPE_MASK,
.maskset = CRYPTO_ALG_TYPE_MASK,
.maskset = CRYPTO_ALG_TYPE_AHASH_MASK,
.type = CRYPTO_ALG_TYPE_AKCIPHER,
.tfmsize = offsetof(struct crypto_akcipher, base),
};
......@@ -186,5 +189,124 @@ int akcipher_register_instance(struct crypto_template *tmpl,
}
EXPORT_SYMBOL_GPL(akcipher_register_instance);
int crypto_akcipher_sync_prep(struct crypto_akcipher_sync_data *data)
{
unsigned int reqsize = crypto_akcipher_reqsize(data->tfm);
struct akcipher_request *req;
struct scatterlist *sg;
unsigned int mlen;
unsigned int len;
u8 *buf;
if (data->dst)
mlen = max(data->slen, data->dlen);
else
mlen = data->slen + data->dlen;
len = sizeof(*req) + reqsize + mlen;
if (len < mlen)
return -EOVERFLOW;
req = kzalloc(len, GFP_KERNEL);
if (!req)
return -ENOMEM;
data->req = req;
akcipher_request_set_tfm(req, data->tfm);
buf = (u8 *)(req + 1) + reqsize;
data->buf = buf;
memcpy(buf, data->src, data->slen);
sg = &data->sg;
sg_init_one(sg, buf, mlen);
akcipher_request_set_crypt(req, sg, data->dst ? sg : NULL,
data->slen, data->dlen);
crypto_init_wait(&data->cwait);
akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
crypto_req_done, &data->cwait);
return 0;
}
EXPORT_SYMBOL_GPL(crypto_akcipher_sync_prep);
int crypto_akcipher_sync_post(struct crypto_akcipher_sync_data *data, int err)
{
err = crypto_wait_req(err, &data->cwait);
if (data->dst)
memcpy(data->dst, data->buf, data->dlen);
data->dlen = data->req->dst_len;
kfree_sensitive(data->req);
return err;
}
EXPORT_SYMBOL_GPL(crypto_akcipher_sync_post);
int crypto_akcipher_sync_encrypt(struct crypto_akcipher *tfm,
const void *src, unsigned int slen,
void *dst, unsigned int dlen)
{
struct crypto_akcipher_sync_data data = {
.tfm = tfm,
.src = src,
.dst = dst,
.slen = slen,
.dlen = dlen,
};
return crypto_akcipher_sync_prep(&data) ?:
crypto_akcipher_sync_post(&data,
crypto_akcipher_encrypt(data.req));
}
EXPORT_SYMBOL_GPL(crypto_akcipher_sync_encrypt);
int crypto_akcipher_sync_decrypt(struct crypto_akcipher *tfm,
const void *src, unsigned int slen,
void *dst, unsigned int dlen)
{
struct crypto_akcipher_sync_data data = {
.tfm = tfm,
.src = src,
.dst = dst,
.slen = slen,
.dlen = dlen,
};
return crypto_akcipher_sync_prep(&data) ?:
crypto_akcipher_sync_post(&data,
crypto_akcipher_decrypt(data.req)) ?:
data.dlen;
}
EXPORT_SYMBOL_GPL(crypto_akcipher_sync_decrypt);
static void crypto_exit_akcipher_ops_sig(struct crypto_tfm *tfm)
{
struct crypto_akcipher **ctx = crypto_tfm_ctx(tfm);
crypto_free_akcipher(*ctx);
}
int crypto_init_akcipher_ops_sig(struct crypto_tfm *tfm)
{
struct crypto_akcipher **ctx = crypto_tfm_ctx(tfm);
struct crypto_alg *calg = tfm->__crt_alg;
struct crypto_akcipher *akcipher;
if (!crypto_mod_get(calg))
return -EAGAIN;
akcipher = crypto_create_tfm(calg, &crypto_akcipher_type);
if (IS_ERR(akcipher)) {
crypto_mod_put(calg);
return PTR_ERR(akcipher);
}
*ctx = akcipher;
tfm->exit = crypto_exit_akcipher_ops_sig;
return 0;
}
EXPORT_SYMBOL_GPL(crypto_init_akcipher_ops_sig);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Generic public key cipher type");
......@@ -345,15 +345,6 @@ struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
}
EXPORT_SYMBOL_GPL(crypto_alg_mod_lookup);
static int crypto_init_ops(struct crypto_tfm *tfm, u32 type, u32 mask)
{
const struct crypto_type *type_obj = tfm->__crt_alg->cra_type;
if (type_obj)
return type_obj->init(tfm, type, mask);
return 0;
}
static void crypto_exit_ops(struct crypto_tfm *tfm)
{
const struct crypto_type *type = tfm->__crt_alg->cra_type;
......@@ -395,25 +386,21 @@ void crypto_shoot_alg(struct crypto_alg *alg)
}
EXPORT_SYMBOL_GPL(crypto_shoot_alg);
struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
u32 mask)
struct crypto_tfm *__crypto_alloc_tfmgfp(struct crypto_alg *alg, u32 type,
u32 mask, gfp_t gfp)
{
struct crypto_tfm *tfm = NULL;
unsigned int tfm_size;
int err = -ENOMEM;
tfm_size = sizeof(*tfm) + crypto_ctxsize(alg, type, mask);
tfm = kzalloc(tfm_size, GFP_KERNEL);
tfm = kzalloc(tfm_size, gfp);
if (tfm == NULL)
goto out_err;
tfm->__crt_alg = alg;
refcount_set(&tfm->refcnt, 1);
err = crypto_init_ops(tfm, type, mask);
if (err)
goto out_free_tfm;
if (!tfm->exit && alg->cra_init && (err = alg->cra_init(tfm)))
goto cra_init_failed;
......@@ -421,7 +408,6 @@ struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
cra_init_failed:
crypto_exit_ops(tfm);
out_free_tfm:
if (err == -EAGAIN)
crypto_shoot_alg(alg);
kfree(tfm);
......@@ -430,6 +416,13 @@ struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
out:
return tfm;
}
EXPORT_SYMBOL_GPL(__crypto_alloc_tfmgfp);
struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
u32 mask)
{
return __crypto_alloc_tfmgfp(alg, type, mask, GFP_KERNEL);
}
EXPORT_SYMBOL_GPL(__crypto_alloc_tfm);
/*
......
......@@ -6,13 +6,15 @@
*/
#define pr_fmt(fmt) "X.509: "fmt
#include <crypto/hash.h>
#include <crypto/sm2.h>
#include <keys/asymmetric-parser.h>
#include <keys/asymmetric-subtype.h>
#include <keys/system_keyring.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <keys/asymmetric-subtype.h>
#include <keys/asymmetric-parser.h>
#include <keys/system_keyring.h>
#include <crypto/hash.h>
#include <linux/string.h>
#include "asymmetric_keys.h"
#include "x509_parser.h"
......@@ -30,9 +32,6 @@ int x509_get_sig_params(struct x509_certificate *cert)
pr_devel("==>%s()\n", __func__);
sig->data = cert->tbs;
sig->data_size = cert->tbs_size;
sig->s = kmemdup(cert->raw_sig, cert->raw_sig_size, GFP_KERNEL);
if (!sig->s)
return -ENOMEM;
......@@ -65,7 +64,21 @@ int x509_get_sig_params(struct x509_certificate *cert)
desc->tfm = tfm;
ret = crypto_shash_digest(desc, cert->tbs, cert->tbs_size, sig->digest);
if (strcmp(cert->pub->pkey_algo, "sm2") == 0) {
ret = strcmp(sig->hash_algo, "sm3") != 0 ? -EINVAL :
crypto_shash_init(desc) ?:
sm2_compute_z_digest(desc, cert->pub->key,
cert->pub->keylen, sig->digest) ?:
crypto_shash_init(desc) ?:
crypto_shash_update(desc, sig->digest,
sig->digest_size) ?:
crypto_shash_finup(desc, cert->tbs, cert->tbs_size,
sig->digest);
} else {
ret = crypto_shash_digest(desc, cert->tbs, cert->tbs_size,
sig->digest);
}
if (ret < 0)
goto error_2;
......
......@@ -90,3 +90,31 @@ void crypto_cipher_decrypt_one(struct crypto_cipher *tfm,
cipher_crypt_one(tfm, dst, src, false);
}
EXPORT_SYMBOL_NS_GPL(crypto_cipher_decrypt_one, CRYPTO_INTERNAL);
struct crypto_cipher *crypto_clone_cipher(struct crypto_cipher *cipher)
{
struct crypto_tfm *tfm = crypto_cipher_tfm(cipher);
struct crypto_alg *alg = tfm->__crt_alg;
struct crypto_cipher *ncipher;
struct crypto_tfm *ntfm;
if (alg->cra_init)
return ERR_PTR(-ENOSYS);
if (unlikely(!crypto_mod_get(alg)))
return ERR_PTR(-ESTALE);
ntfm = __crypto_alloc_tfmgfp(alg, CRYPTO_ALG_TYPE_CIPHER,
CRYPTO_ALG_TYPE_MASK, GFP_ATOMIC);
if (IS_ERR(ntfm)) {
crypto_mod_put(alg);
return ERR_CAST(ntfm);
}
ntfm->crt_flags = tfm->crt_flags;
ncipher = __crypto_cipher_cast(ntfm);
return ncipher;
}
EXPORT_SYMBOL_GPL(crypto_clone_cipher);
......@@ -198,13 +198,14 @@ static int crypto_cmac_digest_final(struct shash_desc *pdesc, u8 *out)
return 0;
}
static int cmac_init_tfm(struct crypto_tfm *tfm)
static int cmac_init_tfm(struct crypto_shash *tfm)
{
struct shash_instance *inst = shash_alg_instance(tfm);
struct cmac_tfm_ctx *ctx = crypto_shash_ctx(tfm);
struct crypto_cipher_spawn *spawn;
struct crypto_cipher *cipher;
struct crypto_instance *inst = (void *)tfm->__crt_alg;
struct crypto_cipher_spawn *spawn = crypto_instance_ctx(inst);
struct cmac_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
spawn = shash_instance_ctx(inst);
cipher = crypto_spawn_cipher(spawn);
if (IS_ERR(cipher))
return PTR_ERR(cipher);
......@@ -212,11 +213,26 @@ static int cmac_init_tfm(struct crypto_tfm *tfm)
ctx->child = cipher;
return 0;
};
}
static int cmac_clone_tfm(struct crypto_shash *tfm, struct crypto_shash *otfm)
{
struct cmac_tfm_ctx *octx = crypto_shash_ctx(otfm);
struct cmac_tfm_ctx *ctx = crypto_shash_ctx(tfm);
struct crypto_cipher *cipher;
cipher = crypto_clone_cipher(octx->child);
if (IS_ERR(cipher))
return PTR_ERR(cipher);
ctx->child = cipher;
static void cmac_exit_tfm(struct crypto_tfm *tfm)
return 0;
}
static void cmac_exit_tfm(struct crypto_shash *tfm)
{
struct cmac_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
struct cmac_tfm_ctx *ctx = crypto_shash_ctx(tfm);
crypto_free_cipher(ctx->child);
}
......@@ -274,13 +290,13 @@ static int cmac_create(struct crypto_template *tmpl, struct rtattr **tb)
~(crypto_tfm_ctx_alignment() - 1))
+ alg->cra_blocksize * 2;
inst->alg.base.cra_init = cmac_init_tfm;
inst->alg.base.cra_exit = cmac_exit_tfm;
inst->alg.init = crypto_cmac_digest_init;
inst->alg.update = crypto_cmac_digest_update;
inst->alg.final = crypto_cmac_digest_final;
inst->alg.setkey = crypto_cmac_digest_setkey;
inst->alg.init_tfm = cmac_init_tfm;
inst->alg.clone_tfm = cmac_clone_tfm;
inst->alg.exit_tfm = cmac_exit_tfm;
inst->free = shash_free_singlespawn_instance;
......
......@@ -177,6 +177,7 @@ static int hmac_clone_tfm(struct crypto_shash *dst, struct crypto_shash *src)
static void hmac_exit_tfm(struct crypto_shash *parent)
{
struct hmac_ctx *ctx = hmac_ctx(parent);
crypto_free_shash(ctx->hash);
}
......
......@@ -18,9 +18,12 @@
#include <linux/numa.h>
#include <linux/refcount.h>
#include <linux/rwsem.h>
#include <linux/scatterlist.h>
#include <linux/sched.h>
#include <linux/types.h>
struct akcipher_request;
struct crypto_akcipher;
struct crypto_instance;
struct crypto_template;
......@@ -32,6 +35,19 @@ struct crypto_larval {
bool test_started;
};
struct crypto_akcipher_sync_data {
struct crypto_akcipher *tfm;
const void *src;
void *dst;
unsigned int slen;
unsigned int dlen;
struct akcipher_request *req;
struct crypto_wait cwait;
struct scatterlist sg;
u8 *buf;
};
enum {
CRYPTOA_UNSPEC,
CRYPTOA_ALG,
......@@ -102,6 +118,8 @@ void crypto_remove_spawns(struct crypto_alg *alg, struct list_head *list,
struct crypto_alg *nalg);
void crypto_remove_final(struct list_head *list);
void crypto_shoot_alg(struct crypto_alg *alg);
struct crypto_tfm *__crypto_alloc_tfmgfp(struct crypto_alg *alg, u32 type,
u32 mask, gfp_t gfp);
struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
u32 mask);
void *crypto_create_tfm_node(struct crypto_alg *alg,
......@@ -109,6 +127,10 @@ void *crypto_create_tfm_node(struct crypto_alg *alg,
void *crypto_clone_tfm(const struct crypto_type *frontend,
struct crypto_tfm *otfm);
int crypto_akcipher_sync_prep(struct crypto_akcipher_sync_data *data);
int crypto_akcipher_sync_post(struct crypto_akcipher_sync_data *data, int err);
int crypto_init_akcipher_ops_sig(struct crypto_tfm *tfm);
static inline void *crypto_create_tfm(struct crypto_alg *alg,
const struct crypto_type *frontend)
{
......
......@@ -2,7 +2,7 @@
* Non-physical true random number generator based on timing jitter --
* Linux Kernel Crypto API specific code
*
* Copyright Stephan Mueller <smueller@chronox.de>, 2015
* Copyright Stephan Mueller <smueller@chronox.de>, 2015 - 2023
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
......@@ -37,6 +37,8 @@
* DAMAGE.
*/
#include <crypto/hash.h>
#include <crypto/sha3.h>
#include <linux/fips.h>
#include <linux/kernel.h>
#include <linux/module.h>
......@@ -46,6 +48,8 @@
#include "jitterentropy.h"
#define JENT_CONDITIONING_HASH "sha3-256-generic"
/***************************************************************************
* Helper function
***************************************************************************/
......@@ -60,11 +64,6 @@ void jent_zfree(void *ptr)
kfree_sensitive(ptr);
}
void jent_memcpy(void *dest, const void *src, unsigned int n)
{
memcpy(dest, src, n);
}
/*
* Obtain a high-resolution time stamp value. The time stamp is used to measure
* the execution time of a given code path and its variations. Hence, the time
......@@ -89,6 +88,92 @@ void jent_get_nstime(__u64 *out)
tmp = ktime_get_ns();
*out = tmp;
jent_raw_hires_entropy_store(tmp);
}
int jent_hash_time(void *hash_state, __u64 time, u8 *addtl,
unsigned int addtl_len, __u64 hash_loop_cnt,
unsigned int stuck)
{
struct shash_desc *hash_state_desc = (struct shash_desc *)hash_state;
SHASH_DESC_ON_STACK(desc, hash_state_desc->tfm);
u8 intermediary[SHA3_256_DIGEST_SIZE];
__u64 j = 0;
int ret;
desc->tfm = hash_state_desc->tfm;
if (sizeof(intermediary) != crypto_shash_digestsize(desc->tfm)) {
pr_warn_ratelimited("Unexpected digest size\n");
return -EINVAL;
}
/*
* This loop fills a buffer which is injected into the entropy pool.
* The main reason for this loop is to execute something over which we
* can perform a timing measurement. The injection of the resulting
* data into the pool is performed to ensure the result is used and
* the compiler cannot optimize the loop away in case the result is not
* used at all. Yet that data is considered "additional information"
in the terminology of SP800-90A, without any entropy.
*
* Note, it does not matter which or how much data you inject, we are
* interested in one Keccak1600 compression operation performed with
* the crypto_shash_final.
*/
for (j = 0; j < hash_loop_cnt; j++) {
ret = crypto_shash_init(desc) ?:
crypto_shash_update(desc, intermediary,
sizeof(intermediary)) ?:
crypto_shash_finup(desc, addtl, addtl_len, intermediary);
if (ret)
goto err;
}
/*
* Inject the data from the previous loop into the pool. This data is
* not considered to contain any entropy, but it stirs the pool a bit.
*/
ret = crypto_shash_update(desc, intermediary, sizeof(intermediary));
if (ret)
goto err;
/*
* Insert the time stamp into the hash context representing the pool.
*
* If the time stamp is stuck, do not finally insert the value into the
* entropy pool. Although this operation should not do any harm even
* when the time stamp has no entropy, SP800-90B requires that any
conditioning operation have an identical amount of input data
* according to section 3.1.5.
*/
if (!stuck) {
ret = crypto_shash_update(hash_state_desc, (u8 *)&time,
sizeof(__u64));
}
err:
shash_desc_zero(desc);
memzero_explicit(intermediary, sizeof(intermediary));
return ret;
}
int jent_read_random_block(void *hash_state, char *dst, unsigned int dst_len)
{
struct shash_desc *hash_state_desc = (struct shash_desc *)hash_state;
u8 jent_block[SHA3_256_DIGEST_SIZE];
/* Obtain data from entropy pool and re-initialize it */
int ret = crypto_shash_final(hash_state_desc, jent_block) ?:
crypto_shash_init(hash_state_desc) ?:
crypto_shash_update(hash_state_desc, jent_block,
sizeof(jent_block));
if (!ret && dst_len)
memcpy(dst, jent_block, dst_len);
memzero_explicit(jent_block, sizeof(jent_block));
return ret;
}
/***************************************************************************
@@ -98,32 +183,82 @@ void jent_get_nstime(__u64 *out)
struct jitterentropy {
spinlock_t jent_lock;
struct rand_data *entropy_collector;
struct crypto_shash *tfm;
struct shash_desc *sdesc;
};
static int jent_kcapi_init(struct crypto_tfm *tfm)
static void jent_kcapi_cleanup(struct crypto_tfm *tfm)
{
struct jitterentropy *rng = crypto_tfm_ctx(tfm);
int ret = 0;
rng->entropy_collector = jent_entropy_collector_alloc(1, 0);
if (!rng->entropy_collector)
ret = -ENOMEM;
spin_lock(&rng->jent_lock);
spin_lock_init(&rng->jent_lock);
return ret;
}
if (rng->sdesc) {
shash_desc_zero(rng->sdesc);
kfree(rng->sdesc);
}
rng->sdesc = NULL;
static void jent_kcapi_cleanup(struct crypto_tfm *tfm)
{
struct jitterentropy *rng = crypto_tfm_ctx(tfm);
if (rng->tfm)
crypto_free_shash(rng->tfm);
rng->tfm = NULL;
spin_lock(&rng->jent_lock);
if (rng->entropy_collector)
jent_entropy_collector_free(rng->entropy_collector);
rng->entropy_collector = NULL;
spin_unlock(&rng->jent_lock);
}
static int jent_kcapi_init(struct crypto_tfm *tfm)
{
struct jitterentropy *rng = crypto_tfm_ctx(tfm);
struct crypto_shash *hash;
struct shash_desc *sdesc;
int size, ret = 0;
spin_lock_init(&rng->jent_lock);
/*
* Use SHA3-256 as conditioner. We allocate only the generic
* implementation as we are not interested in high-performance. The
* execution time of the SHA3 operation is measured and adds to the
* Jitter RNG's unpredictable behavior. If we have a slower hash
* implementation, the execution timing variations are larger. When
* using a fast implementation, we would need to call it more often
* as its variations are lower.
*/
hash = crypto_alloc_shash(JENT_CONDITIONING_HASH, 0, 0);
if (IS_ERR(hash)) {
pr_err("Cannot allocate conditioning digest\n");
return PTR_ERR(hash);
}
rng->tfm = hash;
size = sizeof(struct shash_desc) + crypto_shash_descsize(hash);
sdesc = kmalloc(size, GFP_KERNEL);
if (!sdesc) {
ret = -ENOMEM;
goto err;
}
sdesc->tfm = hash;
crypto_shash_init(sdesc);
rng->sdesc = sdesc;
rng->entropy_collector = jent_entropy_collector_alloc(1, 0, sdesc);
if (!rng->entropy_collector) {
ret = -ENOMEM;
goto err;
}
spin_lock_init(&rng->jent_lock);
return 0;
err:
jent_kcapi_cleanup(tfm);
return ret;
}
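The descriptor allocation above uses a common kernel pattern: a fixed header followed by an algorithm-specific context whose size is only known at runtime (`crypto_shash_descsize()`), covered by a single allocation. A userspace sketch with illustrative `toy_*` names:

```c
#include <stdlib.h>
#include <string.h>

/* Fixed header plus flexible per-algorithm state, mirroring
 * sizeof(struct shash_desc) + crypto_shash_descsize(hash). */
struct toy_desc {
	void *tfm;           /* stands in for struct crypto_shash * */
	unsigned char ctx[]; /* flexible array: per-algorithm state */
};

static struct toy_desc *toy_desc_alloc(void *tfm, size_t descsize)
{
	/* one allocation covers header and context, like the kernel code */
	struct toy_desc *d = malloc(sizeof(*d) + descsize);

	if (!d)
		return NULL;
	d->tfm = tfm;
	memset(d->ctx, 0, descsize);
	return d;
}
```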
static int jent_kcapi_random(struct crypto_rng *tfm,
const u8 *src, unsigned int slen,
u8 *rdata, unsigned int dlen)
@@ -180,20 +315,34 @@ static struct rng_alg jent_alg = {
.cra_module = THIS_MODULE,
.cra_init = jent_kcapi_init,
.cra_exit = jent_kcapi_cleanup,
}
};
static int __init jent_mod_init(void)
{
SHASH_DESC_ON_STACK(desc, tfm);
struct crypto_shash *tfm;
int ret = 0;
ret = jent_entropy_init();
jent_testing_init();
tfm = crypto_alloc_shash(JENT_CONDITIONING_HASH, 0, 0);
if (IS_ERR(tfm)) {
jent_testing_exit();
return PTR_ERR(tfm);
}
desc->tfm = tfm;
crypto_shash_init(desc);
ret = jent_entropy_init(desc);
shash_desc_zero(desc);
crypto_free_shash(tfm);
if (ret) {
/* Handle permanent health test error */
if (fips_enabled)
panic("jitterentropy: Initialization failed with host not compliant with requirements: %d\n", ret);
jent_testing_exit();
pr_info("jitterentropy: Initialization failed with host not compliant with requirements: %d\n", ret);
return -EFAULT;
}
@@ -202,6 +351,7 @@ static int __init jent_mod_init(void)
static void __exit jent_mod_exit(void)
{
jent_testing_exit();
crypto_unregister_rng(&jent_alg);
}
......
/* SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause */
/*
* Test interface for Jitter RNG.
*
* Copyright (C) 2023, Stephan Mueller <smueller@chronox.de>
*/
#include <linux/debugfs.h>
#include <linux/module.h>
#include <linux/uaccess.h>
#include "jitterentropy.h"
#define JENT_TEST_RINGBUFFER_SIZE (1<<10)
#define JENT_TEST_RINGBUFFER_MASK (JENT_TEST_RINGBUFFER_SIZE - 1)
struct jent_testing {
u32 jent_testing_rb[JENT_TEST_RINGBUFFER_SIZE];
u32 rb_reader;
atomic_t rb_writer;
atomic_t jent_testing_enabled;
spinlock_t lock;
wait_queue_head_t read_wait;
};
static struct dentry *jent_raw_debugfs_root = NULL;
/*************************** Generic Data Handling ****************************/
/*
* boot variable:
* 0 ==> No boot test, gathering of runtime data allowed
* 1 ==> Boot test enabled and ready for collecting data, gathering runtime
* data is disabled
* 2 ==> Boot test completed and disabled, gathering of runtime data is
* disabled
*/
static void jent_testing_reset(struct jent_testing *data)
{
unsigned long flags;
spin_lock_irqsave(&data->lock, flags);
data->rb_reader = 0;
atomic_set(&data->rb_writer, 0);
spin_unlock_irqrestore(&data->lock, flags);
}
static void jent_testing_data_init(struct jent_testing *data, u32 boot)
{
/*
 * The boot time testing implies we have a running test. If the
 * caller wants to clear it, they have to unset the boot_test flag
 * at runtime via sysfs to enable regular runtime testing.
 */
if (boot)
return;
jent_testing_reset(data);
atomic_set(&data->jent_testing_enabled, 1);
pr_warn("Enabling data collection\n");
}
static void jent_testing_fini(struct jent_testing *data, u32 boot)
{
/* If we have boot data, we do not reset yet to allow data to be read */
if (boot)
return;
atomic_set(&data->jent_testing_enabled, 0);
jent_testing_reset(data);
pr_warn("Disabling data collection\n");
}
static bool jent_testing_store(struct jent_testing *data, u32 value,
u32 *boot)
{
unsigned long flags;
if (!atomic_read(&data->jent_testing_enabled) && (*boot != 1))
return false;
spin_lock_irqsave(&data->lock, flags);
/*
* Disable entropy testing for boot time testing after ring buffer
* is filled.
*/
if (*boot) {
if (((u32)atomic_read(&data->rb_writer)) >
JENT_TEST_RINGBUFFER_SIZE) {
*boot = 2;
pr_warn_once("One time data collection test disabled\n");
spin_unlock_irqrestore(&data->lock, flags);
return false;
}
if (atomic_read(&data->rb_writer) == 1)
pr_warn("One time data collection test enabled\n");
}
data->jent_testing_rb[((u32)atomic_read(&data->rb_writer)) &
JENT_TEST_RINGBUFFER_MASK] = value;
atomic_inc(&data->rb_writer);
spin_unlock_irqrestore(&data->lock, flags);
if (wq_has_sleeper(&data->read_wait))
wake_up_interruptible(&data->read_wait);
return true;
}
static bool jent_testing_have_data(struct jent_testing *data)
{
return ((((u32)atomic_read(&data->rb_writer)) &
JENT_TEST_RINGBUFFER_MASK) !=
(data->rb_reader & JENT_TEST_RINGBUFFER_MASK));
}
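The test interface's ring buffer is a power of two in size, so the monotonically increasing reader/writer counters reduce to a slot index with a mask instead of a modulo. A small sketch (illustrative `toy_*` names) of that indexing:

```c
#include <stdint.h>
#include <stdbool.h>

#define RB_SIZE (1u << 10)
#define RB_MASK (RB_SIZE - 1)

struct toy_rb {
	uint32_t slots[RB_SIZE];
	uint32_t reader;
	uint32_t writer;
};

/* Counters only ever increase; the mask picks the slot. */
static void toy_rb_store(struct toy_rb *rb, uint32_t value)
{
	rb->slots[rb->writer & RB_MASK] = value;
	rb->writer++;
}

/* Data is available while the masked indices differ. */
static bool toy_rb_have_data(const struct toy_rb *rb)
{
	return (rb->writer & RB_MASK) != (rb->reader & RB_MASK);
}
```

Note that a writer exactly one full buffer ahead looks identical to an empty buffer, which is one reason the boot-time collection mode stops after `JENT_TEST_RINGBUFFER_SIZE` samples.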
static int jent_testing_reader(struct jent_testing *data, u32 *boot,
u8 *outbuf, u32 outbuflen)
{
unsigned long flags;
int collected_data = 0;
jent_testing_data_init(data, *boot);
while (outbuflen) {
u32 writer = (u32)atomic_read(&data->rb_writer);
spin_lock_irqsave(&data->lock, flags);
/* We have no data or reached the writer. */
if (!writer || (writer == data->rb_reader)) {
spin_unlock_irqrestore(&data->lock, flags);
/*
* Now we gathered all boot data, enable regular data
* collection.
*/
if (*boot) {
*boot = 0;
goto out;
}
wait_event_interruptible(data->read_wait,
jent_testing_have_data(data));
if (signal_pending(current)) {
collected_data = -ERESTARTSYS;
goto out;
}
continue;
}
/* We copy out word-wise */
if (outbuflen < sizeof(u32)) {
spin_unlock_irqrestore(&data->lock, flags);
goto out;
}
memcpy(outbuf, &data->jent_testing_rb[data->rb_reader],
sizeof(u32));
data->rb_reader++;
spin_unlock_irqrestore(&data->lock, flags);
outbuf += sizeof(u32);
outbuflen -= sizeof(u32);
collected_data += sizeof(u32);
}
out:
jent_testing_fini(data, *boot);
return collected_data;
}
static int jent_testing_extract_user(struct file *file, char __user *buf,
size_t nbytes, loff_t *ppos,
int (*reader)(u8 *outbuf, u32 outbuflen))
{
u8 *tmp, *tmp_aligned;
int ret = 0, large_request = (nbytes > 256);
if (!nbytes)
return 0;
/*
 * The intention of this interface is the collection of at least
 * 1000 samples due to the SP800-90B requirements. So we make no
 * effort to avoid allocating more memory than the user actually
 * requested; we simply allocate sufficient memory to always hold
 * that amount of data.
 */
tmp = kmalloc(JENT_TEST_RINGBUFFER_SIZE + sizeof(u32), GFP_KERNEL);
if (!tmp)
return -ENOMEM;
tmp_aligned = PTR_ALIGN(tmp, sizeof(u32));
while (nbytes) {
int i;
if (large_request && need_resched()) {
if (signal_pending(current)) {
if (ret == 0)
ret = -ERESTARTSYS;
break;
}
schedule();
}
i = min_t(int, nbytes, JENT_TEST_RINGBUFFER_SIZE);
i = reader(tmp_aligned, i);
if (i <= 0) {
if (i < 0)
ret = i;
break;
}
if (copy_to_user(buf, tmp_aligned, i)) {
ret = -EFAULT;
break;
}
nbytes -= i;
buf += i;
ret += i;
}
kfree_sensitive(tmp);
if (ret > 0)
*ppos += ret;
return ret;
}
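The extraction path over-allocates its bounce buffer by `sizeof(u32)` and rounds the pointer up with `PTR_ALIGN()` so the word-wise copies out of the ring buffer always hit an aligned address. A userspace equivalent of that macro (illustrative `toy_*` name; `a` must be a power of two):

```c
#include <stdint.h>

/* Round p up to the next multiple of the power-of-two alignment a. */
static void *toy_ptr_align(void *p, uintptr_t a)
{
	return (void *)(((uintptr_t)p + a - 1) & ~(a - 1));
}
```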
/************** Raw High-Resolution Timer Entropy Data Handling **************/
static u32 boot_raw_hires_test = 0;
module_param(boot_raw_hires_test, uint, 0644);
MODULE_PARM_DESC(boot_raw_hires_test,
"Enable gathering boot time high resolution timer entropy of the first Jitter RNG entropy events");
static struct jent_testing jent_raw_hires = {
.rb_reader = 0,
.rb_writer = ATOMIC_INIT(0),
.lock = __SPIN_LOCK_UNLOCKED(jent_raw_hires.lock),
.read_wait = __WAIT_QUEUE_HEAD_INITIALIZER(jent_raw_hires.read_wait)
};
int jent_raw_hires_entropy_store(__u32 value)
{
return jent_testing_store(&jent_raw_hires, value, &boot_raw_hires_test);
}
EXPORT_SYMBOL(jent_raw_hires_entropy_store);
static int jent_raw_hires_entropy_reader(u8 *outbuf, u32 outbuflen)
{
return jent_testing_reader(&jent_raw_hires, &boot_raw_hires_test,
outbuf, outbuflen);
}
static ssize_t jent_raw_hires_read(struct file *file, char __user *to,
size_t count, loff_t *ppos)
{
return jent_testing_extract_user(file, to, count, ppos,
jent_raw_hires_entropy_reader);
}
static const struct file_operations jent_raw_hires_fops = {
.owner = THIS_MODULE,
.read = jent_raw_hires_read,
};
/******************************* Initialization *******************************/
void jent_testing_init(void)
{
jent_raw_debugfs_root = debugfs_create_dir(KBUILD_MODNAME, NULL);
debugfs_create_file_unsafe("jent_raw_hires", 0400,
jent_raw_debugfs_root, NULL,
&jent_raw_hires_fops);
}
EXPORT_SYMBOL(jent_testing_init);
void jent_testing_exit(void)
{
debugfs_remove_recursive(jent_raw_debugfs_root);
}
EXPORT_SYMBOL(jent_testing_exit);
@@ -2,7 +2,7 @@
* Non-physical true random number generator based on timing jitter --
* Jitter RNG standalone code.
*
* Copyright Stephan Mueller <smueller@chronox.de>, 2015 - 2020
* Copyright Stephan Mueller <smueller@chronox.de>, 2015 - 2023
*
* Design
* ======
@@ -47,7 +47,7 @@
/*
* This Jitterentropy RNG is based on the jitterentropy library
* version 2.2.0 provided at https://www.chronox.de/jent.html
* version 3.4.0 provided at https://www.chronox.de/jent.html
*/
#ifdef __OPTIMIZE__
@@ -57,18 +57,19 @@
typedef unsigned long long __u64;
typedef long long __s64;
typedef unsigned int __u32;
typedef unsigned char u8;
#define NULL ((void *) 0)
/* The entropy pool */
struct rand_data {
/* SHA3-256 is used as conditioner */
#define DATA_SIZE_BITS 256
/* all data values that are vital to maintain the security
* of the RNG are marked as SENSITIVE. A user must not
* access that information while the RNG executes its loops to
* calculate the next random value. */
__u64 data; /* SENSITIVE Actual random number */
__u64 old_data; /* SENSITIVE Previous random number */
void *hash_state; /* SENSITIVE hash state entropy pool */
__u64 prev_time; /* SENSITIVE Previous time stamp */
#define DATA_SIZE_BITS ((sizeof(__u64)) * 8)
__u64 last_delta; /* SENSITIVE stuck test */
__s64 last_delta2; /* SENSITIVE stuck test */
unsigned int osr; /* Oversample rate */
@@ -117,7 +118,6 @@ struct rand_data {
* zero). */
#define JENT_ESTUCK 8 /* Too many stuck results during init. */
#define JENT_EHEALTH 9 /* Health test failed during initialization */
#define JENT_ERCT 10 /* RCT failed during initialization */
/*
* The output n bits can receive more than n bits of min entropy, of course,
@@ -302,15 +302,13 @@ static int jent_permanent_health_failure(struct rand_data *ec)
* an entropy collection.
*
* Input:
* @ec entropy collector struct -- may be NULL
* @bits is the number of low bits of the timer to consider
* @min is the number of bits we shift the timer value to the right at
* the end to make sure we have a guaranteed minimum value
*
* @return Newly calculated loop counter
*/
static __u64 jent_loop_shuffle(struct rand_data *ec,
unsigned int bits, unsigned int min)
static __u64 jent_loop_shuffle(unsigned int bits, unsigned int min)
{
__u64 time = 0;
__u64 shuffle = 0;
@@ -318,12 +316,7 @@ static __u64 jent_loop_shuffle(struct rand_data *ec,
unsigned int mask = (1<<bits) - 1;
jent_get_nstime(&time);
/*
* Mix the current state of the random number into the shuffle
* calculation to balance that shuffle a bit more.
*/
if (ec)
time ^= ec->data;
/*
* We fold the time value as much as possible to ensure that as many
* bits of the time stamp are included as possible.
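The fold itself is in the elided part of the hunk. Based on the upstream jitterentropy library it can be sketched as follows (the `toy_*` name is illustrative): XOR successive `bits`-wide slices of the 64-bit time stamp into an accumulator, then add 2^min so the derived loop count never falls below a guaranteed minimum.

```c
#include <stdint.h>

/* Fold a 64-bit time stamp into a bounded loop count. */
static uint64_t toy_loop_shuffle(uint64_t time, unsigned int bits,
				 unsigned int min)
{
	uint64_t shuffle = 0;
	uint64_t mask = (1ull << bits) - 1;
	unsigned int i;

	/* enough iterations to consume all 64 bits, bits at a time */
	for (i = 0; i < (64 + bits - 1) / bits; i++) {
		shuffle ^= time & mask;
		time >>= bits;
	}
	return shuffle + (1ull << min);
}
```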
@@ -345,81 +338,32 @@
* execution time jitter
*
* This function injects the individual bits of the time value into the
* entropy pool using an LFSR.
* entropy pool using a hash.
*
* The code is deliberately inefficient with respect to the bit shifting
* and shall stay that way. This function is the root cause why the code
 * shall be compiled without optimization. This function not only acts as
 * a folding operation, but this function's execution is used to measure
* the CPU execution time jitter. Any change to the loop in this function
* implies that careful retesting must be done.
*
* @ec [in] entropy collector struct
* @time [in] time stamp to be injected
* @loop_cnt [in] if a value not equal to 0 is set, use the given value as
* number of loops to perform the folding
* @stuck [in] Is the time stamp identified as stuck?
* ec [in] entropy collector
* time [in] time stamp to be injected
* stuck [in] Is the time stamp identified as stuck?
*
* Output:
* updated ec->data
*
* @return Number of loops the folding operation is performed
* updated hash context in the entropy collector or error code
*/
static void jent_lfsr_time(struct rand_data *ec, __u64 time, __u64 loop_cnt,
int stuck)
static int jent_condition_data(struct rand_data *ec, __u64 time, int stuck)
{
unsigned int i;
__u64 j = 0;
__u64 new = 0;
#define MAX_FOLD_LOOP_BIT 4
#define MIN_FOLD_LOOP_BIT 0
__u64 fold_loop_cnt =
jent_loop_shuffle(ec, MAX_FOLD_LOOP_BIT, MIN_FOLD_LOOP_BIT);
/*
* testing purposes -- allow test app to set the counter, not
* needed during runtime
*/
if (loop_cnt)
fold_loop_cnt = loop_cnt;
for (j = 0; j < fold_loop_cnt; j++) {
new = ec->data;
for (i = 1; (DATA_SIZE_BITS) >= i; i++) {
__u64 tmp = time << (DATA_SIZE_BITS - i);
tmp = tmp >> (DATA_SIZE_BITS - 1);
/*
 * Fibonacci LFSR with polynomial of
* x^64 + x^61 + x^56 + x^31 + x^28 + x^23 + 1 which is
* primitive according to
* http://poincare.matf.bg.ac.rs/~ezivkovm/publications/primpol1.pdf
* (the shift values are the polynomial values minus one
* due to counting bits from 0 to 63). As the current
* position is always the LSB, the polynomial only needs
* to shift data in from the left without wrap.
*/
tmp ^= ((new >> 63) & 1);
tmp ^= ((new >> 60) & 1);
tmp ^= ((new >> 55) & 1);
tmp ^= ((new >> 30) & 1);
tmp ^= ((new >> 27) & 1);
tmp ^= ((new >> 22) & 1);
new <<= 1;
new ^= tmp;
}
}
/*
* If the time stamp is stuck, do not finally insert the value into
* the entropy pool. Although this operation should not do any harm
* even when the time stamp has no entropy, SP800-90B requires that
* any conditioning operation (SP800-90B considers the LFSR to be a
* conditioning operation) to have an identical amount of input
* data according to section 3.1.5.
*/
if (!stuck)
ec->data = new;
#define SHA3_HASH_LOOP (1<<3)
struct {
int rct_count;
unsigned int apt_observations;
unsigned int apt_count;
unsigned int apt_base;
} addtl = {
ec->rct_count,
ec->apt_observations,
ec->apt_count,
ec->apt_base
};
return jent_hash_time(ec->hash_state, time, (u8 *)&addtl, sizeof(addtl),
SHA3_HASH_LOOP, stuck);
}
/*
@@ -453,7 +397,7 @@ static void jent_memaccess(struct rand_data *ec, __u64 loop_cnt)
#define MAX_ACC_LOOP_BIT 7
#define MIN_ACC_LOOP_BIT 0
__u64 acc_loop_cnt =
jent_loop_shuffle(ec, MAX_ACC_LOOP_BIT, MIN_ACC_LOOP_BIT);
jent_loop_shuffle(MAX_ACC_LOOP_BIT, MIN_ACC_LOOP_BIT);
if (NULL == ec || NULL == ec->mem)
return;
@@ -521,14 +465,15 @@ static int jent_measure_jitter(struct rand_data *ec)
stuck = jent_stuck(ec, current_delta);
/* Now call the next noise sources which also injects the data */
jent_lfsr_time(ec, current_delta, 0, stuck);
if (jent_condition_data(ec, current_delta, stuck))
stuck = 1;
return stuck;
}
/*
* Generator of one 64 bit random number
* Function fills rand_data->data
* Function fills rand_data->hash_state
*
* @ec [in] Reference to entropy collector
*/
@@ -575,7 +520,7 @@ static void jent_gen_entropy(struct rand_data *ec)
* @return 0 when request is fulfilled or an error
*
* The following error codes can occur:
* -1 entropy_collector is NULL
* -1 entropy_collector is NULL or the generation failed
* -2 Intermittent health failure
* -3 Permanent health failure
*/
@@ -605,7 +550,7 @@ int jent_read_entropy(struct rand_data *ec, unsigned char *data,
* Perform startup health tests and return permanent
* error if it fails.
*/
if (jent_entropy_init())
if (jent_entropy_init(ec->hash_state))
return -3;
return -2;
@@ -615,7 +560,8 @@ int jent_read_entropy(struct rand_data *ec, unsigned char *data,
tocopy = (DATA_SIZE_BITS / 8);
else
tocopy = len;
jent_memcpy(p, &ec->data, tocopy);
if (jent_read_random_block(ec->hash_state, p, tocopy))
return -1;
len -= tocopy;
p += tocopy;
@@ -629,7 +575,8 @@ int jent_read_entropy(struct rand_data *ec, unsigned char *data,
***************************************************************************/
struct rand_data *jent_entropy_collector_alloc(unsigned int osr,
unsigned int flags)
unsigned int flags,
void *hash_state)
{
struct rand_data *entropy_collector;
@@ -656,6 +603,8 @@ struct rand_data *jent_entropy_collector_alloc(unsigned int osr,
osr = 1; /* minimum sampling rate is 1 */
entropy_collector->osr = osr;
entropy_collector->hash_state = hash_state;
/* fill the data pad with non-zero values */
jent_gen_entropy(entropy_collector);
@@ -669,7 +618,7 @@ void jent_entropy_collector_free(struct rand_data *entropy_collector)
jent_zfree(entropy_collector);
}
int jent_entropy_init(void)
int jent_entropy_init(void *hash_state)
{
int i;
__u64 delta_sum = 0;
@@ -682,6 +631,7 @@ int jent_entropy_init(void)
/* Required for RCT */
ec.osr = 1;
ec.hash_state = hash_state;
/* We could perform statistical tests here, but the problem is
* that we only have a few loop counts to do testing. These
@@ -719,7 +669,7 @@ int jent_entropy_init(void)
/* Invoke core entropy collection logic */
jent_get_nstime(&time);
ec.prev_time = time;
jent_lfsr_time(&ec, time, 0, 0);
jent_condition_data(&ec, time, 0);
jent_get_nstime(&time2);
/* test whether timer works */
@@ -762,14 +712,12 @@ int jent_entropy_init(void)
if ((nonstuck % JENT_APT_WINDOW_SIZE) == 0) {
jent_apt_reset(&ec,
delta & JENT_APT_WORD_MASK);
if (jent_health_failure(&ec))
return JENT_EHEALTH;
}
}
/* Validate RCT */
if (jent_rct_failure(&ec))
return JENT_ERCT;
/* Validate health test result */
if (jent_health_failure(&ec))
return JENT_EHEALTH;
/* test whether we have an increasing timer */
if (!(time2 > time))
......
@@ -2,14 +2,28 @@
extern void *jent_zalloc(unsigned int len);
extern void jent_zfree(void *ptr);
extern void jent_memcpy(void *dest, const void *src, unsigned int n);
extern void jent_get_nstime(__u64 *out);
extern int jent_hash_time(void *hash_state, __u64 time, u8 *addtl,
unsigned int addtl_len, __u64 hash_loop_cnt,
unsigned int stuck);
int jent_read_random_block(void *hash_state, char *dst, unsigned int dst_len);
struct rand_data;
extern int jent_entropy_init(void);
extern int jent_entropy_init(void *hash_state);
extern int jent_read_entropy(struct rand_data *ec, unsigned char *data,
unsigned int len);
extern struct rand_data *jent_entropy_collector_alloc(unsigned int osr,
unsigned int flags);
unsigned int flags,
void *hash_state);
extern void jent_entropy_collector_free(struct rand_data *entropy_collector);
#ifdef CONFIG_CRYPTO_JITTERENTROPY_TESTINTERFACE
int jent_raw_hires_entropy_store(__u32 value);
void jent_testing_init(void);
void jent_testing_exit(void);
#else /* CONFIG_CRYPTO_JITTERENTROPY_TESTINTERFACE */
static inline int jent_raw_hires_entropy_store(__u32 value) { return 0; }
static inline void jent_testing_init(void) { }
static inline void jent_testing_exit(void) { }
#endif /* CONFIG_CRYPTO_JITTERENTROPY_TESTINTERFACE */
@@ -205,6 +205,32 @@ static int rsa_check_key_length(unsigned int len)
return -EINVAL;
}
static int rsa_check_exponent_fips(MPI e)
{
MPI e_max = NULL;
/* check if odd */
if (!mpi_test_bit(e, 0)) {
return -EINVAL;
}
/* check if 2^16 < e < 2^256. */
if (mpi_cmp_ui(e, 65536) <= 0) {
return -EINVAL;
}
	e_max = mpi_alloc(0);
	if (!e_max)
		return -ENOMEM;
	mpi_set_bit(e_max, 256);
if (mpi_cmp(e, e_max) >= 0) {
mpi_free(e_max);
return -EINVAL;
}
mpi_free(e_max);
return 0;
}
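The FIPS constraint above (e odd and 2^16 < e < 2^256) can be checked on a plain big-endian byte string without MPI arithmetic; a userspace sketch with an illustrative `toy_*` name. The key observation: once leading zero bytes are stripped, any *odd* value with at least 3 significant bytes is already >= 0x010001 = 65537.

```c
#include <stddef.h>
#include <stdint.h>

/* e is a big-endian byte string; return 0 if 2^16 < e < 2^256 and odd. */
static int toy_check_exponent_fips(const uint8_t *e, size_t len)
{
	/* skip leading zero bytes */
	while (len && *e == 0) {
		e++;
		len--;
	}
	/* must be odd (a zero value is even) */
	if (!len || !(e[len - 1] & 1))
		return -1;
	/* 2^16 < e: odd and >= 3 significant bytes implies e >= 65537.
	 * e < 2^256: at most 32 significant bytes. */
	if (len < 3 || len > 32)
		return -1;
	return 0;
}
```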
static int rsa_set_pub_key(struct crypto_akcipher *tfm, const void *key,
unsigned int keylen)
{
@@ -232,6 +258,11 @@ static int rsa_set_pub_key(struct crypto_akcipher *tfm, const void *key,
return -EINVAL;
}
if (fips_enabled && rsa_check_exponent_fips(mpi_key->e)) {
rsa_free_mpi_key(mpi_key);
return -EINVAL;
}
return 0;
err:
@@ -290,6 +321,11 @@ static int rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key,
return -EINVAL;
}
if (fips_enabled && rsa_check_exponent_fips(mpi_key->e)) {
rsa_free_mpi_key(mpi_key);
return -EINVAL;
}
return 0;
err:
......
@@ -597,7 +597,7 @@ struct crypto_shash *crypto_clone_shash(struct crypto_shash *hash)
return hash;
}
if (!alg->clone_tfm)
if (!alg->clone_tfm && (alg->init_tfm || alg->base.cra_init))
return ERR_PTR(-ENOSYS);
nhash = crypto_clone_tfm(&crypto_shash_type, tfm);
@@ -606,11 +606,13 @@ struct crypto_shash *crypto_clone_shash(struct crypto_shash *hash)
nhash->descsize = hash->descsize;
if (alg->clone_tfm) {
err = alg->clone_tfm(nhash, hash);
if (err) {
crypto_free_shash(nhash);
return ERR_PTR(err);
}
}
return nhash;
}
......
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Public Key Signature Algorithm
*
* Copyright (c) 2023 Herbert Xu <herbert@gondor.apana.org.au>
*/
#include <crypto/akcipher.h>
#include <crypto/internal/sig.h>
#include <linux/cryptouser.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/scatterlist.h>
#include <linux/seq_file.h>
#include <linux/string.h>
#include <net/netlink.h>
#include "internal.h"
#define CRYPTO_ALG_TYPE_SIG_MASK 0x0000000e
static const struct crypto_type crypto_sig_type;
static inline struct crypto_sig *__crypto_sig_tfm(struct crypto_tfm *tfm)
{
return container_of(tfm, struct crypto_sig, base);
}
static int crypto_sig_init_tfm(struct crypto_tfm *tfm)
{
if (tfm->__crt_alg->cra_type != &crypto_sig_type)
return crypto_init_akcipher_ops_sig(tfm);
return 0;
}
static void __maybe_unused crypto_sig_show(struct seq_file *m,
struct crypto_alg *alg)
{
seq_puts(m, "type : sig\n");
}
static int __maybe_unused crypto_sig_report(struct sk_buff *skb,
struct crypto_alg *alg)
{
struct crypto_report_akcipher rsig = {};
strscpy(rsig.type, "sig", sizeof(rsig.type));
return nla_put(skb, CRYPTOCFGA_REPORT_AKCIPHER, sizeof(rsig), &rsig);
}
static int __maybe_unused crypto_sig_report_stat(struct sk_buff *skb,
struct crypto_alg *alg)
{
struct crypto_stat_akcipher rsig = {};
strscpy(rsig.type, "sig", sizeof(rsig.type));
return nla_put(skb, CRYPTOCFGA_STAT_AKCIPHER, sizeof(rsig), &rsig);
}
static const struct crypto_type crypto_sig_type = {
.extsize = crypto_alg_extsize,
.init_tfm = crypto_sig_init_tfm,
#ifdef CONFIG_PROC_FS
.show = crypto_sig_show,
#endif
#if IS_ENABLED(CONFIG_CRYPTO_USER)
.report = crypto_sig_report,
#endif
#ifdef CONFIG_CRYPTO_STATS
.report_stat = crypto_sig_report_stat,
#endif
.maskclear = ~CRYPTO_ALG_TYPE_MASK,
.maskset = CRYPTO_ALG_TYPE_SIG_MASK,
.type = CRYPTO_ALG_TYPE_SIG,
.tfmsize = offsetof(struct crypto_sig, base),
};
struct crypto_sig *crypto_alloc_sig(const char *alg_name, u32 type, u32 mask)
{
return crypto_alloc_tfm(alg_name, &crypto_sig_type, type, mask);
}
EXPORT_SYMBOL_GPL(crypto_alloc_sig);
int crypto_sig_maxsize(struct crypto_sig *tfm)
{
struct crypto_akcipher **ctx = crypto_sig_ctx(tfm);
return crypto_akcipher_maxsize(*ctx);
}
EXPORT_SYMBOL_GPL(crypto_sig_maxsize);
int crypto_sig_sign(struct crypto_sig *tfm,
const void *src, unsigned int slen,
void *dst, unsigned int dlen)
{
struct crypto_akcipher **ctx = crypto_sig_ctx(tfm);
struct crypto_akcipher_sync_data data = {
.tfm = *ctx,
.src = src,
.dst = dst,
.slen = slen,
.dlen = dlen,
};
return crypto_akcipher_sync_prep(&data) ?:
crypto_akcipher_sync_post(&data,
crypto_akcipher_sign(data.req));
}
EXPORT_SYMBOL_GPL(crypto_sig_sign);
int crypto_sig_verify(struct crypto_sig *tfm,
const void *src, unsigned int slen,
const void *digest, unsigned int dlen)
{
struct crypto_akcipher **ctx = crypto_sig_ctx(tfm);
struct crypto_akcipher_sync_data data = {
.tfm = *ctx,
.src = src,
.slen = slen,
.dlen = dlen,
};
int err;
err = crypto_akcipher_sync_prep(&data);
if (err)
return err;
memcpy(data.buf + slen, digest, dlen);
return crypto_akcipher_sync_post(&data,
crypto_akcipher_verify(data.req));
}
EXPORT_SYMBOL_GPL(crypto_sig_verify);
int crypto_sig_set_pubkey(struct crypto_sig *tfm,
const void *key, unsigned int keylen)
{
struct crypto_akcipher **ctx = crypto_sig_ctx(tfm);
return crypto_akcipher_set_pub_key(*ctx, key, keylen);
}
EXPORT_SYMBOL_GPL(crypto_sig_set_pubkey);
int crypto_sig_set_privkey(struct crypto_sig *tfm,
const void *key, unsigned int keylen)
{
struct crypto_akcipher **ctx = crypto_sig_ctx(tfm);
return crypto_akcipher_set_priv_key(*ctx, key, keylen);
}
EXPORT_SYMBOL_GPL(crypto_sig_set_privkey);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Public Key Signature Algorithms");
@@ -13,11 +13,14 @@
#include <crypto/internal/akcipher.h>
#include <crypto/akcipher.h>
#include <crypto/hash.h>
#include <crypto/sm3.h>
#include <crypto/rng.h>
#include <crypto/sm2.h>
#include "sm2signature.asn1.h"
/* The default user id as specified in GM/T 0009-2012 */
#define SM2_DEFAULT_USERID "1234567812345678"
#define SM2_DEFAULT_USERID_LEN 16
#define MPI_NBYTES(m) ((mpi_get_nbits(m) + 7) / 8)
struct ecc_domain_parms {
......@@ -60,6 +63,9 @@ static const struct ecc_domain_parms sm2_ecp = {
.h = 1
};
static int __sm2_set_pub_key(struct mpi_ec_ctx *ec,
const void *key, unsigned int keylen);
static int sm2_ec_ctx_init(struct mpi_ec_ctx *ec)
{
const struct ecc_domain_parms *ecp = &sm2_ecp;
@@ -213,12 +219,13 @@ int sm2_get_signature_s(void *context, size_t hdrlen, unsigned char tag,
return 0;
}
static int sm2_z_digest_update(struct sm3_state *sctx,
static int sm2_z_digest_update(struct shash_desc *desc,
MPI m, unsigned int pbytes)
{
static const unsigned char zero[32];
unsigned char *in;
unsigned int inlen;
int err;
in = mpi_get_buffer(m, &inlen, NULL);
if (!in)
......@@ -226,21 +233,22 @@ static int sm2_z_digest_update(struct sm3_state *sctx,
if (inlen < pbytes) {
/* padding with zero */
sm3_update(sctx, zero, pbytes - inlen);
sm3_update(sctx, in, inlen);
err = crypto_shash_update(desc, zero, pbytes - inlen) ?:
crypto_shash_update(desc, in, inlen);
} else if (inlen > pbytes) {
/* skip the starting zero */
sm3_update(sctx, in + inlen - pbytes, pbytes);
err = crypto_shash_update(desc, in + inlen - pbytes, pbytes);
} else {
sm3_update(sctx, in, inlen);
err = crypto_shash_update(desc, in, inlen);
}
kfree(in);
return 0;
return err;
}
static int sm2_z_digest_update_point(struct sm3_state *sctx,
MPI_POINT point, struct mpi_ec_ctx *ec, unsigned int pbytes)
static int sm2_z_digest_update_point(struct shash_desc *desc,
MPI_POINT point, struct mpi_ec_ctx *ec,
unsigned int pbytes)
{
MPI x, y;
int ret = -EINVAL;
@@ -248,50 +256,68 @@ static int sm2_z_digest_update_point(struct sm3_state *sctx,
x = mpi_new(0);
y = mpi_new(0);
if (!mpi_ec_get_affine(x, y, point, ec) &&
!sm2_z_digest_update(sctx, x, pbytes) &&
!sm2_z_digest_update(sctx, y, pbytes))
ret = 0;
ret = mpi_ec_get_affine(x, y, point, ec) ? -EINVAL :
sm2_z_digest_update(desc, x, pbytes) ?:
sm2_z_digest_update(desc, y, pbytes);
mpi_free(x);
mpi_free(y);
return ret;
}
int sm2_compute_z_digest(struct crypto_akcipher *tfm,
const unsigned char *id, size_t id_len,
unsigned char dgst[SM3_DIGEST_SIZE])
int sm2_compute_z_digest(struct shash_desc *desc,
const void *key, unsigned int keylen, void *dgst)
{
struct mpi_ec_ctx *ec = akcipher_tfm_ctx(tfm);
uint16_t bits_len;
unsigned char entl[2];
struct sm3_state sctx;
struct mpi_ec_ctx *ec;
unsigned int bits_len;
unsigned int pbytes;
u8 entl[2];
int err;
if (id_len > (USHRT_MAX / 8) || !ec->Q)
return -EINVAL;
ec = kmalloc(sizeof(*ec), GFP_KERNEL);
if (!ec)
return -ENOMEM;
err = __sm2_set_pub_key(ec, key, keylen);
if (err)
goto out_free_ec;
bits_len = (uint16_t)(id_len * 8);
bits_len = SM2_DEFAULT_USERID_LEN * 8;
entl[0] = bits_len >> 8;
entl[1] = bits_len & 0xff;
pbytes = MPI_NBYTES(ec->p);
/* ZA = H256(ENTLA | IDA | a | b | xG | yG | xA | yA) */
sm3_init(&sctx);
sm3_update(&sctx, entl, 2);
sm3_update(&sctx, id, id_len);
if (sm2_z_digest_update(&sctx, ec->a, pbytes) ||
sm2_z_digest_update(&sctx, ec->b, pbytes) ||
sm2_z_digest_update_point(&sctx, ec->G, ec, pbytes) ||
sm2_z_digest_update_point(&sctx, ec->Q, ec, pbytes))
return -EINVAL;
err = crypto_shash_init(desc);
if (err)
goto out_deinit_ec;
sm3_final(&sctx, dgst);
return 0;
err = crypto_shash_update(desc, entl, 2);
if (err)
goto out_deinit_ec;
err = crypto_shash_update(desc, SM2_DEFAULT_USERID,
SM2_DEFAULT_USERID_LEN);
if (err)
goto out_deinit_ec;
err = sm2_z_digest_update(desc, ec->a, pbytes) ?:
sm2_z_digest_update(desc, ec->b, pbytes) ?:
sm2_z_digest_update_point(desc, ec->G, ec, pbytes) ?:
sm2_z_digest_update_point(desc, ec->Q, ec, pbytes);
if (err)
goto out_deinit_ec;
err = crypto_shash_final(desc, dgst);
out_deinit_ec:
sm2_ec_ctx_deinit(ec);
out_free_ec:
kfree(ec);
return err;
}
EXPORT_SYMBOL(sm2_compute_z_digest);
EXPORT_SYMBOL_GPL(sm2_compute_z_digest);
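The ENTL prefix hashed at the start of the Z digest is simply the user ID length in *bits*, encoded as two big-endian bytes per GM/T 0009-2012. A sketch of the encoding (illustrative `toy_*` name):

```c
#include <stdint.h>

/* ENTL = user ID length in bits, as a 2-byte big-endian value. */
static void toy_entl(unsigned int id_len_bytes, uint8_t entl[2])
{
	unsigned int bits = id_len_bytes * 8;

	entl[0] = (uint8_t)(bits >> 8);
	entl[1] = (uint8_t)(bits & 0xff);
}
```

For the default 16-byte user ID this yields 128 bits, i.e. the bytes 0x00 0x80.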
static int _sm2_verify(struct mpi_ec_ctx *ec, MPI hash, MPI sig_r, MPI sig_s)
{
......@@ -391,6 +417,14 @@ static int sm2_set_pub_key(struct crypto_akcipher *tfm,
const void *key, unsigned int keylen)
{
struct mpi_ec_ctx *ec = akcipher_tfm_ctx(tfm);
return __sm2_set_pub_key(ec, key, keylen);
}
static int __sm2_set_pub_key(struct mpi_ec_ctx *ec,
const void *key, unsigned int keylen)
{
MPI a;
int rc;
......
@@ -335,9 +335,20 @@ config HW_RANDOM_HISI
If unsure, say Y.
config HW_RANDOM_HISTB
tristate "Hisilicon STB Random Number Generator support"
depends on ARCH_HISI || COMPILE_TEST
default ARCH_HISI
help
This driver provides kernel-side support for the Random Number
Generator hardware found on Hisilicon Hi37xx SoC.
To compile this driver as a module, choose M here: the
module will be called histb-rng.
config HW_RANDOM_ST
tristate "ST Microelectronics HW Random Number Generator support"
depends on HW_RANDOM && ARCH_STI
depends on HW_RANDOM && (ARCH_STI || COMPILE_TEST)
help
This driver provides kernel-side support for the Random Number
Generator hardware found on STi series of SoCs.
@@ -400,9 +411,9 @@ config HW_RANDOM_POLARFIRE_SOC
config HW_RANDOM_MESON
tristate "Amlogic Meson Random Number Generator support"
depends on HW_RANDOM
depends on ARCH_MESON || COMPILE_TEST
default y
depends on HAS_IOMEM && OF
default HW_RANDOM if ARCH_MESON
help
This driver provides kernel-side support for the Random Number
Generator hardware found on Amlogic Meson SoCs.
@@ -427,9 +438,9 @@ config HW_RANDOM_CAVIUM
config HW_RANDOM_MTK
tristate "Mediatek Random Number Generator support"
depends on HW_RANDOM
depends on ARCH_MEDIATEK || COMPILE_TEST
default y
depends on HAS_IOMEM && OF
default HW_RANDOM if ARCH_MEDIATEK
help
This driver provides kernel-side support for the Random Number
Generator hardware found on Mediatek SoCs.
@@ -456,7 +467,8 @@ config HW_RANDOM_S390
config HW_RANDOM_EXYNOS
tristate "Samsung Exynos True Random Number Generator support"
depends on ARCH_EXYNOS || COMPILE_TEST
default HW_RANDOM
depends on HAS_IOMEM
default HW_RANDOM if ARCH_EXYNOS
help
This driver provides support for the True Random Number
Generator available in Exynos SoCs.
......@@ -483,7 +495,8 @@ config HW_RANDOM_OPTEE
config HW_RANDOM_NPCM
tristate "NPCM Random Number Generator support"
depends on ARCH_NPCM || COMPILE_TEST
default HW_RANDOM
depends on HAS_IOMEM
default HW_RANDOM if ARCH_NPCM
help
This driver provides support for the Random Number
Generator hardware available in Nuvoton NPCM SoCs.
......
......@@ -29,6 +29,7 @@ obj-$(CONFIG_HW_RANDOM_NOMADIK) += nomadik-rng.o
obj-$(CONFIG_HW_RANDOM_PSERIES) += pseries-rng.o
obj-$(CONFIG_HW_RANDOM_POWERNV) += powernv-rng.o
obj-$(CONFIG_HW_RANDOM_HISI) += hisi-rng.o
obj-$(CONFIG_HW_RANDOM_HISTB) += histb-rng.o
obj-$(CONFIG_HW_RANDOM_BCM2835) += bcm2835-rng.o
obj-$(CONFIG_HW_RANDOM_IPROC_RNG200) += iproc-rng200.o
obj-$(CONFIG_HW_RANDOM_ST) += st-rng.o
......
......@@ -23,14 +23,49 @@
#define RNM_PF_RANDOM 0x400
#define RNM_TRNG_RESULT 0x408
/* Extended TRNG Read and Status Registers */
#define RNM_PF_TRNG_DAT 0x1000
#define RNM_PF_TRNG_RES 0x1008
struct cn10k_rng {
void __iomem *reg_base;
struct hwrng ops;
struct pci_dev *pdev;
/* Octeon CN10K-A A0/A1, CNF10K-A A0/A1 and CNF10K-B A0/B0
 * do not support extended TRNG registers
 */
bool extended_trng_regs;
};
#define PLAT_OCTEONTX_RESET_RNG_EBG_HEALTH_STATE 0xc2000b0f
#define PCI_SUBSYS_DEVID_CN10K_A_RNG 0xB900
#define PCI_SUBSYS_DEVID_CNF10K_A_RNG 0xBA00
#define PCI_SUBSYS_DEVID_CNF10K_B_RNG 0xBC00
static bool cn10k_is_extended_trng_regs_supported(struct pci_dev *pdev)
{
/* CN10K-A A0/A1 */
if ((pdev->subsystem_device == PCI_SUBSYS_DEVID_CN10K_A_RNG) &&
(!pdev->revision || (pdev->revision & 0xff) == 0x50 ||
(pdev->revision & 0xff) == 0x51))
return false;
/* CNF10K-A A0 */
if ((pdev->subsystem_device == PCI_SUBSYS_DEVID_CNF10K_A_RNG) &&
(!pdev->revision || (pdev->revision & 0xff) == 0x60 ||
(pdev->revision & 0xff) == 0x61))
return false;
/* CNF10K-B A0/B0 */
if ((pdev->subsystem_device == PCI_SUBSYS_DEVID_CNF10K_B_RNG) &&
(!pdev->revision || (pdev->revision & 0xff) == 0x70 ||
(pdev->revision & 0xff) == 0x74))
return false;
return true;
}
static unsigned long reset_rng_health_state(struct cn10k_rng *rng)
{
struct arm_smccc_res res;
......@@ -63,9 +98,23 @@ static int check_rng_health(struct cn10k_rng *rng)
return 0;
}
static void cn10k_read_trng(struct cn10k_rng *rng, u64 *value)
/* Returns true when valid data is available, otherwise returns false */
static bool cn10k_read_trng(struct cn10k_rng *rng, u64 *value)
{
u16 retry_count = 0;
u64 upper, lower;
u64 status;
if (rng->extended_trng_regs) {
do {
*value = readq(rng->reg_base + RNM_PF_TRNG_DAT);
if (*value)
return true;
status = readq(rng->reg_base + RNM_PF_TRNG_RES);
if (!status && (retry_count++ > 0x1000))
return false;
} while (!status);
}
*value = readq(rng->reg_base + RNM_PF_RANDOM);
......@@ -82,6 +131,7 @@ static void cn10k_read_trng(struct cn10k_rng *rng, u64 *value)
*value = (upper & 0xFFFFFFFF00000000) | (lower & 0xFFFFFFFF);
}
return true;
}
static int cn10k_rng_read(struct hwrng *hwrng, void *data,
......@@ -100,7 +150,8 @@ static int cn10k_rng_read(struct hwrng *hwrng, void *data,
size = max;
while (size >= 8) {
cn10k_read_trng(rng, &value);
if (!cn10k_read_trng(rng, &value))
goto out;
*((u64 *)pos) = value;
size -= 8;
......@@ -108,7 +159,8 @@ static int cn10k_rng_read(struct hwrng *hwrng, void *data,
}
if (size > 0) {
cn10k_read_trng(rng, &value);
if (!cn10k_read_trng(rng, &value))
goto out;
while (size > 0) {
*pos = (u8)value;
......@@ -118,6 +170,7 @@ static int cn10k_rng_read(struct hwrng *hwrng, void *data,
}
}
out:
return max - size;
}
......@@ -147,6 +200,8 @@ static int cn10k_rng_probe(struct pci_dev *pdev, const struct pci_device_id *id)
rng->ops.read = cn10k_rng_read;
rng->ops.priv = (unsigned long)rng;
rng->extended_trng_regs = cn10k_is_extended_trng_regs_supported(pdev);
reset_rng_health_state(rng);
err = devm_hwrng_register(&pdev->dev, &rng->ops);
......
// SPDX-License-Identifier: GPL-2.0-or-later OR MIT
/*
* Device driver for True RNG in HiSTB SoCs
*
* Copyright (c) 2023 David Yang
*/
#include <crypto/internal/rng.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/hw_random.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/kernel.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#define HISTB_TRNG_CTRL 0x0
#define RNG_CTRL 0x0
#define RNG_SOURCE GENMASK(1, 0)
#define DROP_ENABLE BIT(5)
#define POST_PROCESS_ENABLE BIT(7)
#define POST_PROCESS_DEPTH GENMASK(15, 8)
#define HISTB_TRNG_NUMBER 0x4
#define HISTB_TRNG_STAT 0x8
#define RNG_NUMBER 0x4
#define RNG_STAT 0x8
#define DATA_COUNT GENMASK(2, 0) /* max 4 */
struct histb_trng_priv {
struct histb_rng_priv {
struct hwrng rng;
void __iomem *base;
};
......@@ -35,19 +31,19 @@ struct histb_trng_priv {
* depth = 1 -> ~1ms
* depth = 255 -> ~16ms
*/
static int histb_trng_wait(void __iomem *base)
static int histb_rng_wait(void __iomem *base)
{
u32 val;
return readl_relaxed_poll_timeout(base + HISTB_TRNG_STAT, val,
return readl_relaxed_poll_timeout(base + RNG_STAT, val,
val & DATA_COUNT, 1000, 30 * 1000);
}
static void histb_trng_init(void __iomem *base, unsigned int depth)
static void histb_rng_init(void __iomem *base, unsigned int depth)
{
u32 val;
val = readl_relaxed(base + HISTB_TRNG_CTRL);
val = readl_relaxed(base + RNG_CTRL);
val &= ~RNG_SOURCE;
val |= 2;
......@@ -58,72 +54,72 @@ static void histb_trng_init(void __iomem *base, unsigned int depth)
val |= POST_PROCESS_ENABLE;
val |= DROP_ENABLE;
writel_relaxed(val, base + HISTB_TRNG_CTRL);
writel_relaxed(val, base + RNG_CTRL);
}
static int histb_trng_read(struct hwrng *rng, void *data, size_t max, bool wait)
static int histb_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
{
struct histb_trng_priv *priv = container_of(rng, typeof(*priv), rng);
struct histb_rng_priv *priv = container_of(rng, typeof(*priv), rng);
void __iomem *base = priv->base;
for (int i = 0; i < max; i += sizeof(u32)) {
if (!(readl_relaxed(base + HISTB_TRNG_STAT) & DATA_COUNT)) {
if (!(readl_relaxed(base + RNG_STAT) & DATA_COUNT)) {
if (!wait)
return i;
if (histb_trng_wait(base)) {
if (histb_rng_wait(base)) {
pr_err("failed to generate random number, generated %d\n",
i);
return i ? i : -ETIMEDOUT;
}
}
*(u32 *) (data + i) = readl_relaxed(base + HISTB_TRNG_NUMBER);
*(u32 *) (data + i) = readl_relaxed(base + RNG_NUMBER);
}
return max;
}
static unsigned int histb_trng_get_depth(void __iomem *base)
static unsigned int histb_rng_get_depth(void __iomem *base)
{
return (readl_relaxed(base + HISTB_TRNG_CTRL) & POST_PROCESS_DEPTH) >> 8;
return (readl_relaxed(base + RNG_CTRL) & POST_PROCESS_DEPTH) >> 8;
}
static ssize_t
depth_show(struct device *dev, struct device_attribute *attr, char *buf)
{
struct histb_trng_priv *priv = dev_get_drvdata(dev);
struct histb_rng_priv *priv = dev_get_drvdata(dev);
void __iomem *base = priv->base;
return sprintf(buf, "%d\n", histb_trng_get_depth(base));
return sprintf(buf, "%d\n", histb_rng_get_depth(base));
}
static ssize_t
depth_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct histb_trng_priv *priv = dev_get_drvdata(dev);
struct histb_rng_priv *priv = dev_get_drvdata(dev);
void __iomem *base = priv->base;
unsigned int depth;
if (kstrtouint(buf, 0, &depth))
return -ERANGE;
histb_trng_init(base, depth);
histb_rng_init(base, depth);
return count;
}
static DEVICE_ATTR_RW(depth);
static struct attribute *histb_trng_attrs[] = {
static struct attribute *histb_rng_attrs[] = {
&dev_attr_depth.attr,
NULL,
};
ATTRIBUTE_GROUPS(histb_trng);
ATTRIBUTE_GROUPS(histb_rng);
static int histb_trng_probe(struct platform_device *pdev)
static int histb_rng_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct histb_trng_priv *priv;
struct histb_rng_priv *priv;
void __iomem *base;
int ret;
......@@ -133,17 +129,17 @@ static int histb_trng_probe(struct platform_device *pdev)
base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(base))
return -ENOMEM;
return PTR_ERR(base);
histb_trng_init(base, 144);
if (histb_trng_wait(base)) {
histb_rng_init(base, 144);
if (histb_rng_wait(base)) {
dev_err(dev, "cannot bring up device\n");
return -ENODEV;
}
priv->base = base;
priv->rng.name = pdev->name;
priv->rng.read = histb_trng_read;
priv->rng.read = histb_rng_read;
ret = devm_hwrng_register(dev, &priv->rng);
if (ret) {
dev_err(dev, "failed to register hwrng: %d\n", ret);
......@@ -155,22 +151,23 @@ static int histb_trng_probe(struct platform_device *pdev)
return 0;
}
static const struct of_device_id histb_trng_of_match[] = {
{ .compatible = "hisilicon,histb-trng", },
static const struct of_device_id histb_rng_of_match[] = {
{ .compatible = "hisilicon,histb-rng", },
{ }
};
MODULE_DEVICE_TABLE(of, histb_rng_of_match);
static struct platform_driver histb_trng_driver = {
.probe = histb_trng_probe,
static struct platform_driver histb_rng_driver = {
.probe = histb_rng_probe,
.driver = {
.name = "histb-trng",
.of_match_table = histb_trng_of_match,
.dev_groups = histb_trng_groups,
.name = "histb-rng",
.of_match_table = histb_rng_of_match,
.dev_groups = histb_rng_groups,
},
};
module_platform_driver(histb_trng_driver);
module_platform_driver(histb_rng_driver);
MODULE_DESCRIPTION("HiSTB True RNG");
MODULE_DESCRIPTION("Hisilicon STB random number generator driver");
MODULE_LICENSE("Dual MIT/GPL");
MODULE_AUTHOR("David Yang <mmyangfl@gmail.com>");
......@@ -17,6 +17,7 @@
#include <linux/hw_random.h>
#include <linux/completion.h>
#include <linux/io.h>
#include <linux/bitfield.h>
#define RNGC_VER_ID 0x0000
#define RNGC_COMMAND 0x0004
......@@ -26,7 +27,7 @@
#define RNGC_FIFO 0x0014
/* the fields in the ver id register */
#define RNGC_TYPE_SHIFT 28
#define RNG_TYPE GENMASK(31, 28)
#define RNGC_VER_MAJ_SHIFT 8
/* the rng_type field */
......@@ -34,20 +35,19 @@
#define RNGC_TYPE_RNGC 0x2
#define RNGC_CMD_CLR_ERR 0x00000020
#define RNGC_CMD_CLR_INT 0x00000010
#define RNGC_CMD_SEED 0x00000002
#define RNGC_CMD_SELF_TEST 0x00000001
#define RNGC_CMD_CLR_ERR BIT(5)
#define RNGC_CMD_CLR_INT BIT(4)
#define RNGC_CMD_SEED BIT(1)
#define RNGC_CMD_SELF_TEST BIT(0)
#define RNGC_CTRL_MASK_ERROR 0x00000040
#define RNGC_CTRL_MASK_DONE 0x00000020
#define RNGC_CTRL_AUTO_SEED 0x00000010
#define RNGC_CTRL_MASK_ERROR BIT(6)
#define RNGC_CTRL_MASK_DONE BIT(5)
#define RNGC_CTRL_AUTO_SEED BIT(4)
#define RNGC_STATUS_ERROR 0x00010000
#define RNGC_STATUS_FIFO_LEVEL_MASK 0x00000f00
#define RNGC_STATUS_FIFO_LEVEL_SHIFT 8
#define RNGC_STATUS_SEED_DONE 0x00000020
#define RNGC_STATUS_ST_DONE 0x00000010
#define RNGC_STATUS_ERROR BIT(16)
#define RNGC_STATUS_FIFO_LEVEL_MASK GENMASK(11, 8)
#define RNGC_STATUS_SEED_DONE BIT(5)
#define RNGC_STATUS_ST_DONE BIT(4)
#define RNGC_ERROR_STATUS_STAT_ERR 0x00000008
......@@ -110,7 +110,7 @@ static int imx_rngc_self_test(struct imx_rngc *rngc)
cmd = readl(rngc->base + RNGC_COMMAND);
writel(cmd | RNGC_CMD_SELF_TEST, rngc->base + RNGC_COMMAND);
ret = wait_for_completion_timeout(&rngc->rng_op_done, RNGC_TIMEOUT);
ret = wait_for_completion_timeout(&rngc->rng_op_done, msecs_to_jiffies(RNGC_TIMEOUT));
imx_rngc_irq_mask_clear(rngc);
if (!ret)
return -ETIMEDOUT;
......@@ -122,7 +122,6 @@ static int imx_rngc_read(struct hwrng *rng, void *data, size_t max, bool wait)
{
struct imx_rngc *rngc = container_of(rng, struct imx_rngc, rng);
unsigned int status;
unsigned int level;
int retval = 0;
while (max >= sizeof(u32)) {
......@@ -132,11 +131,7 @@ static int imx_rngc_read(struct hwrng *rng, void *data, size_t max, bool wait)
if (status & RNGC_STATUS_ERROR)
break;
/* how many random numbers are in FIFO? [0-16] */
level = (status & RNGC_STATUS_FIFO_LEVEL_MASK) >>
RNGC_STATUS_FIFO_LEVEL_SHIFT;
if (level) {
if (status & RNGC_STATUS_FIFO_LEVEL_MASK) {
/* retrieve a random number from FIFO */
*(u32 *)data = readl(rngc->base + RNGC_FIFO);
......@@ -187,9 +182,7 @@ static int imx_rngc_init(struct hwrng *rng)
cmd = readl(rngc->base + RNGC_COMMAND);
writel(cmd | RNGC_CMD_SEED, rngc->base + RNGC_COMMAND);
ret = wait_for_completion_timeout(&rngc->rng_op_done,
RNGC_TIMEOUT);
ret = wait_for_completion_timeout(&rngc->rng_op_done, msecs_to_jiffies(RNGC_TIMEOUT));
if (!ret) {
ret = -ETIMEDOUT;
goto err;
......@@ -229,7 +222,7 @@ static void imx_rngc_cleanup(struct hwrng *rng)
imx_rngc_irq_mask_clear(rngc);
}
static int imx_rngc_probe(struct platform_device *pdev)
static int __init imx_rngc_probe(struct platform_device *pdev)
{
struct imx_rngc *rngc;
int ret;
......@@ -256,7 +249,7 @@ static int imx_rngc_probe(struct platform_device *pdev)
return irq;
ver_id = readl(rngc->base + RNGC_VER_ID);
rng_type = ver_id >> RNGC_TYPE_SHIFT;
rng_type = FIELD_GET(RNG_TYPE, ver_id);
/*
* This driver supports only RNGC and RNGB. (There's a different
* driver for RNGA.)
......@@ -305,7 +298,7 @@ static int imx_rngc_probe(struct platform_device *pdev)
return 0;
}
static int __maybe_unused imx_rngc_suspend(struct device *dev)
static int imx_rngc_suspend(struct device *dev)
{
struct imx_rngc *rngc = dev_get_drvdata(dev);
......@@ -314,7 +307,7 @@ static int __maybe_unused imx_rngc_suspend(struct device *dev)
return 0;
}
static int __maybe_unused imx_rngc_resume(struct device *dev)
static int imx_rngc_resume(struct device *dev)
{
struct imx_rngc *rngc = dev_get_drvdata(dev);
......@@ -323,10 +316,10 @@ static int __maybe_unused imx_rngc_resume(struct device *dev)
return 0;
}
static SIMPLE_DEV_PM_OPS(imx_rngc_pm_ops, imx_rngc_suspend, imx_rngc_resume);
static DEFINE_SIMPLE_DEV_PM_OPS(imx_rngc_pm_ops, imx_rngc_suspend, imx_rngc_resume);
static const struct of_device_id imx_rngc_dt_ids[] = {
{ .compatible = "fsl,imx25-rngb", .data = NULL, },
{ .compatible = "fsl,imx25-rngb" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, imx_rngc_dt_ids);
......@@ -334,7 +327,7 @@ MODULE_DEVICE_TABLE(of, imx_rngc_dt_ids);
static struct platform_driver imx_rngc_driver = {
.driver = {
.name = KBUILD_MODNAME,
.pm = &imx_rngc_pm_ops,
.pm = pm_sleep_ptr(&imx_rngc_pm_ops),
.of_match_table = imx_rngc_dt_ids,
},
};
......
......@@ -42,7 +42,6 @@
struct st_rng_data {
void __iomem *base;
struct clk *clk;
struct hwrng ops;
};
......@@ -85,26 +84,18 @@ static int st_rng_probe(struct platform_device *pdev)
if (IS_ERR(base))
return PTR_ERR(base);
clk = devm_clk_get(&pdev->dev, NULL);
clk = devm_clk_get_enabled(&pdev->dev, NULL);
if (IS_ERR(clk))
return PTR_ERR(clk);
ret = clk_prepare_enable(clk);
if (ret)
return ret;
ddata->ops.priv = (unsigned long)ddata;
ddata->ops.read = st_rng_read;
ddata->ops.name = pdev->name;
ddata->base = base;
ddata->clk = clk;
dev_set_drvdata(&pdev->dev, ddata);
ret = devm_hwrng_register(&pdev->dev, &ddata->ops);
if (ret) {
dev_err(&pdev->dev, "Failed to register HW RNG\n");
clk_disable_unprepare(clk);
return ret;
}
......@@ -113,15 +104,6 @@ static int st_rng_probe(struct platform_device *pdev)
return 0;
}
static int st_rng_remove(struct platform_device *pdev)
{
struct st_rng_data *ddata = dev_get_drvdata(&pdev->dev);
clk_disable_unprepare(ddata->clk);
return 0;
}
static const struct of_device_id st_rng_match[] __maybe_unused = {
{ .compatible = "st,rng" },
{},
......@@ -134,7 +116,6 @@ static struct platform_driver st_rng_driver = {
.of_match_table = of_match_ptr(st_rng_match),
},
.probe = st_rng_probe,
.remove = st_rng_remove
};
module_platform_driver(st_rng_driver);
......
......@@ -4,6 +4,7 @@
* Copyright (C) 2007, 2008 Rusty Russell IBM Corporation
*/
#include <asm/barrier.h>
#include <linux/err.h>
#include <linux/hw_random.h>
#include <linux/scatterlist.h>
......@@ -37,13 +38,13 @@ struct virtrng_info {
static void random_recv_done(struct virtqueue *vq)
{
struct virtrng_info *vi = vq->vdev->priv;
unsigned int len;
/* We can get spurious callbacks, e.g. shared IRQs + virtio_pci. */
if (!virtqueue_get_buf(vi->vq, &vi->data_avail))
if (!virtqueue_get_buf(vi->vq, &len))
return;
vi->data_idx = 0;
smp_store_release(&vi->data_avail, len);
complete(&vi->have_data);
}
......@@ -52,7 +53,6 @@ static void request_entropy(struct virtrng_info *vi)
struct scatterlist sg;
reinit_completion(&vi->have_data);
vi->data_avail = 0;
vi->data_idx = 0;
sg_init_one(&sg, vi->data, sizeof(vi->data));
......@@ -88,7 +88,7 @@ static int virtio_read(struct hwrng *rng, void *buf, size_t size, bool wait)
read = 0;
/* copy available data */
if (vi->data_avail) {
if (smp_load_acquire(&vi->data_avail)) {
chunk = copy_data(vi, buf, size);
size -= chunk;
read += chunk;
......
......@@ -807,5 +807,6 @@ config CRYPTO_DEV_SA2UL
acceleration for cryptographic algorithms on these devices.
source "drivers/crypto/aspeed/Kconfig"
source "drivers/crypto/starfive/Kconfig"
endif # CRYPTO_HW
......@@ -50,3 +50,4 @@ obj-y += xilinx/
obj-y += hisilicon/
obj-$(CONFIG_CRYPTO_DEV_AMLOGIC_GXL) += amlogic/
obj-y += intel/
obj-y += starfive/
......@@ -389,7 +389,7 @@ static struct i2c_driver atmel_ecc_driver = {
.name = "atmel-ecc",
.of_match_table = of_match_ptr(atmel_ecc_dt_ids),
},
.probe_new = atmel_ecc_probe,
.probe = atmel_ecc_probe,
.remove = atmel_ecc_remove,
.id_table = atmel_ecc_id,
};
......
......@@ -141,7 +141,7 @@ static const struct i2c_device_id atmel_sha204a_id[] = {
MODULE_DEVICE_TABLE(i2c, atmel_sha204a_id);
static struct i2c_driver atmel_sha204a_driver = {
.probe_new = atmel_sha204a_probe,
.probe = atmel_sha204a_probe,
.remove = atmel_sha204a_remove,
.id_table = atmel_sha204a_id,
......
......@@ -162,6 +162,15 @@ config CRYPTO_DEV_FSL_CAAM_PRNG_API
config CRYPTO_DEV_FSL_CAAM_BLOB_GEN
bool
config CRYPTO_DEV_FSL_CAAM_RNG_TEST
bool "Test caam rng"
select CRYPTO_DEV_FSL_CAAM_RNG_API
help
Selecting this will enable a self-test to run for the
caam RNG.
This test is several minutes long and executes
just before the RNG is registered with the hw_random API.
endif # CRYPTO_DEV_FSL_CAAM_JR
endif # CRYPTO_DEV_FSL_CAAM
......
......@@ -172,6 +172,50 @@ static void caam_cleanup(struct hwrng *rng)
kfifo_free(&ctx->fifo);
}
#ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_RNG_TEST
static inline void test_len(struct hwrng *rng, size_t len, bool wait)
{
u8 *buf;
int read_len;
struct caam_rng_ctx *ctx = to_caam_rng_ctx(rng);
struct device *dev = ctx->ctrldev;
buf = kcalloc(CAAM_RNG_MAX_FIFO_STORE_SIZE, sizeof(u8), GFP_KERNEL);
while (len > 0) {
read_len = rng->read(rng, buf, len, wait);
if (read_len < 0 || (read_len == 0 && wait)) {
dev_err(dev, "RNG Read FAILED received %d bytes\n",
read_len);
kfree(buf);
return;
}
print_hex_dump_debug("random bytes@: ",
DUMP_PREFIX_ADDRESS, 16, 4,
buf, read_len, 1);
len = len - read_len;
}
kfree(buf);
}
static inline void test_mode_once(struct hwrng *rng, bool wait)
{
test_len(rng, 32, wait);
test_len(rng, 64, wait);
test_len(rng, 128, wait);
}
static void self_test(struct hwrng *rng)
{
pr_info("Executing RNG SELF-TEST with wait\n");
test_mode_once(rng, true);
}
#endif
static int caam_init(struct hwrng *rng)
{
struct caam_rng_ctx *ctx = to_caam_rng_ctx(rng);
......@@ -258,6 +302,10 @@ int caam_rng_init(struct device *ctrldev)
return ret;
}
#ifdef CONFIG_CRYPTO_DEV_FSL_CAAM_RNG_TEST
self_test(&ctx->rng);
#endif
devres_close_group(ctrldev, caam_rng_init);
return 0;
}
......@@ -95,6 +95,7 @@ struct caam_drv_private {
u8 blob_present; /* Nonzero if BLOB support present in device */
u8 mc_en; /* Nonzero if MC f/w is active */
u8 optee_en; /* Nonzero if OP-TEE f/w is active */
bool pr_support; /* RNG prediction resistance available */
int secvio_irq; /* Security violation interrupt number */
int virt_en; /* Virtualization enabled in CAAM */
int era; /* CAAM Era (internal HW revision) */
......
......@@ -3,7 +3,7 @@
* CAAM hardware register-level view
*
* Copyright 2008-2011 Freescale Semiconductor, Inc.
* Copyright 2018 NXP
* Copyright 2018, 2023 NXP
*/
#ifndef REGS_H
......@@ -523,6 +523,8 @@ struct rng4tst {
#define RTSDCTL_ENT_DLY_MASK (0xffff << RTSDCTL_ENT_DLY_SHIFT)
#define RTSDCTL_ENT_DLY_MIN 3200
#define RTSDCTL_ENT_DLY_MAX 12800
#define RTSDCTL_SAMP_SIZE_MASK 0xffff
#define RTSDCTL_SAMP_SIZE_VAL 512
u32 rtsdctl; /* seed control register */
union {
u32 rtsblim; /* PRGM=1: sparse bit limit register */
......@@ -534,7 +536,15 @@ struct rng4tst {
u32 rtfrqmax; /* PRGM=1: freq. count max. limit register */
u32 rtfrqcnt; /* PRGM=0: freq. count register */
};
u32 rsvd1[40];
union {
u32 rtscmc; /* statistical check run monobit count */
u32 rtscml; /* statistical check run monobit limit */
};
union {
u32 rtscrc[6]; /* statistical check run length count */
u32 rtscrl[6]; /* statistical check run length limit */
};
u32 rsvd1[33];
#define RDSTA_SKVT 0x80000000
#define RDSTA_SKVN 0x40000000
#define RDSTA_PR0 BIT(4)
......
......@@ -67,6 +67,11 @@ int psp_send_platform_access_msg(enum psp_platform_access_msg msg,
return -ENODEV;
pa_dev = psp->platform_access_data;
if (!pa_dev->vdata->cmdresp_reg || !pa_dev->vdata->cmdbuff_addr_lo_reg ||
!pa_dev->vdata->cmdbuff_addr_hi_reg)
return -ENODEV;
cmd = psp->io_regs + pa_dev->vdata->cmdresp_reg;
lo = psp->io_regs + pa_dev->vdata->cmdbuff_addr_lo_reg;
hi = psp->io_regs + pa_dev->vdata->cmdbuff_addr_hi_reg;
......
......@@ -361,6 +361,14 @@ static const struct tee_vdata teev1 = {
.ring_rptr_reg = 0x10554, /* C2PMSG_21 */
};
static const struct tee_vdata teev2 = {
.cmdresp_reg = 0x10944, /* C2PMSG_17 */
.cmdbuff_addr_lo_reg = 0x10948, /* C2PMSG_18 */
.cmdbuff_addr_hi_reg = 0x1094c, /* C2PMSG_19 */
.ring_wptr_reg = 0x10950, /* C2PMSG_20 */
.ring_rptr_reg = 0x10954, /* C2PMSG_21 */
};
static const struct platform_access_vdata pa_v1 = {
.cmdresp_reg = 0x10570, /* C2PMSG_28 */
.cmdbuff_addr_lo_reg = 0x10574, /* C2PMSG_29 */
......@@ -369,6 +377,11 @@ static const struct platform_access_vdata pa_v1 = {
.doorbell_cmd_reg = 0x10a40, /* C2PMSG_80 */
};
static const struct platform_access_vdata pa_v2 = {
.doorbell_button_reg = 0x10a24, /* C2PMSG_73 */
.doorbell_cmd_reg = 0x10a40, /* C2PMSG_80 */
};
static const struct psp_vdata pspv1 = {
.sev = &sevv1,
.feature_reg = 0x105fc, /* C2PMSG_63 */
......@@ -399,6 +412,22 @@ static const struct psp_vdata pspv4 = {
.intsts_reg = 0x10694, /* P2CMSG_INTSTS */
};
static const struct psp_vdata pspv5 = {
.tee = &teev2,
.platform_access = &pa_v2,
.feature_reg = 0x109fc, /* C2PMSG_63 */
.inten_reg = 0x10510, /* P2CMSG_INTEN */
.intsts_reg = 0x10514, /* P2CMSG_INTSTS */
};
static const struct psp_vdata pspv6 = {
.sev = &sevv2,
.tee = &teev2,
.feature_reg = 0x109fc, /* C2PMSG_63 */
.inten_reg = 0x10510, /* P2CMSG_INTEN */
.intsts_reg = 0x10514, /* P2CMSG_INTSTS */
};
#endif
static const struct sp_dev_vdata dev_vdata[] = {
......@@ -451,6 +480,18 @@ static const struct sp_dev_vdata dev_vdata[] = {
.bar = 2,
#ifdef CONFIG_CRYPTO_DEV_SP_PSP
.psp_vdata = &pspv3,
#endif
},
{ /* 7 */
.bar = 2,
#ifdef CONFIG_CRYPTO_DEV_SP_PSP
.psp_vdata = &pspv5,
#endif
},
{ /* 8 */
.bar = 2,
#ifdef CONFIG_CRYPTO_DEV_SP_PSP
.psp_vdata = &pspv6,
#endif
},
};
......@@ -463,6 +504,8 @@ static const struct pci_device_id sp_pci_table[] = {
{ PCI_VDEVICE(AMD, 0x14CA), (kernel_ulong_t)&dev_vdata[5] },
{ PCI_VDEVICE(AMD, 0x15C7), (kernel_ulong_t)&dev_vdata[6] },
{ PCI_VDEVICE(AMD, 0x1649), (kernel_ulong_t)&dev_vdata[6] },
{ PCI_VDEVICE(AMD, 0x17E0), (kernel_ulong_t)&dev_vdata[7] },
{ PCI_VDEVICE(AMD, 0x156E), (kernel_ulong_t)&dev_vdata[8] },
/* Last entry must be zero */
{ 0, }
};
......
......@@ -82,10 +82,3 @@ config CRYPTO_DEV_HISI_TRNG
select CRYPTO_RNG
help
Support for HiSilicon TRNG Driver.
config CRYPTO_DEV_HISTB_TRNG
tristate "Support for HiSTB TRNG Driver"
depends on ARCH_HISI || COMPILE_TEST
select HW_RANDOM
help
Support for HiSTB TRNG Driver.
......@@ -5,4 +5,4 @@ obj-$(CONFIG_CRYPTO_DEV_HISI_SEC2) += sec2/
obj-$(CONFIG_CRYPTO_DEV_HISI_QM) += hisi_qm.o
hisi_qm-objs = qm.o sgl.o debugfs.o
obj-$(CONFIG_CRYPTO_DEV_HISI_ZIP) += zip/
obj-y += trng/
obj-$(CONFIG_CRYPTO_DEV_HISI_TRNG) += trng/
obj-$(CONFIG_CRYPTO_DEV_HISI_TRNG) += hisi-trng-v2.o
hisi-trng-v2-objs = trng.o
obj-$(CONFIG_CRYPTO_DEV_HISTB_TRNG) += histb-trng.o
histb-trng-objs += trng-stb.o
......@@ -1175,9 +1175,9 @@ static int aead_perform(struct aead_request *req, int encrypt,
/* The 12 hmac bytes are scattered,
* we need to copy them into a safe buffer */
req_ctx->hmac_virt = dma_pool_alloc(buffer_pool, flags, &dma);
crypt->icv_rev_aes = dma;
if (unlikely(!req_ctx->hmac_virt))
goto free_buf_dst;
crypt->icv_rev_aes = dma;
if (!encrypt) {
scatterwalk_map_and_copy(req_ctx->hmac_virt,
req->src, cryptlen, authsize, 0);
......
......@@ -72,7 +72,7 @@ enum icp_qat_4xxx_slice_mask {
ICP_ACCEL_4XXX_MASK_COMPRESS_SLICE = BIT(3),
ICP_ACCEL_4XXX_MASK_UCS_SLICE = BIT(4),
ICP_ACCEL_4XXX_MASK_EIA3_SLICE = BIT(5),
ICP_ACCEL_4XXX_MASK_SMX_SLICE = BIT(6),
ICP_ACCEL_4XXX_MASK_SMX_SLICE = BIT(7),
};
void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data, u32 dev_id);
......
......@@ -7,6 +7,7 @@
#include <adf_accel_devices.h>
#include <adf_cfg.h>
#include <adf_common_drv.h>
#include <adf_dbgfs.h>
#include "adf_4xxx_hw_data.h"
#include "qat_compression.h"
......@@ -24,11 +25,25 @@ MODULE_DEVICE_TABLE(pci, adf_pci_tbl);
enum configs {
DEV_CFG_CY = 0,
DEV_CFG_DC,
DEV_CFG_SYM,
DEV_CFG_ASYM,
DEV_CFG_ASYM_SYM,
DEV_CFG_ASYM_DC,
DEV_CFG_DC_ASYM,
DEV_CFG_SYM_DC,
DEV_CFG_DC_SYM,
};
static const char * const services_operations[] = {
ADF_CFG_CY,
ADF_CFG_DC,
ADF_CFG_SYM,
ADF_CFG_ASYM,
ADF_CFG_ASYM_SYM,
ADF_CFG_ASYM_DC,
ADF_CFG_DC_ASYM,
ADF_CFG_SYM_DC,
ADF_CFG_DC_SYM,
};
static void adf_cleanup_accel(struct adf_accel_dev *accel_dev)
......@@ -37,8 +52,8 @@ static void adf_cleanup_accel(struct adf_accel_dev *accel_dev)
adf_clean_hw_data_4xxx(accel_dev->hw_device);
accel_dev->hw_device = NULL;
}
adf_dbgfs_exit(accel_dev);
adf_cfg_dev_remove(accel_dev);
debugfs_remove(accel_dev->debugfs_dir);
adf_devmgr_rm_dev(accel_dev, NULL);
}
......@@ -241,6 +256,21 @@ static int adf_comp_dev_config(struct adf_accel_dev *accel_dev)
return ret;
}
static int adf_no_dev_config(struct adf_accel_dev *accel_dev)
{
unsigned long val;
int ret;
val = 0;
ret = adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, ADF_NUM_DC,
&val, ADF_DEC);
if (ret)
return ret;
return adf_cfg_add_key_value_param(accel_dev, ADF_KERNEL_SEC, ADF_NUM_CY,
&val, ADF_DEC);
}
int adf_gen4_dev_config(struct adf_accel_dev *accel_dev)
{
char services[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {0};
......@@ -265,11 +295,15 @@ int adf_gen4_dev_config(struct adf_accel_dev *accel_dev)
switch (ret) {
case DEV_CFG_CY:
case DEV_CFG_ASYM_SYM:
ret = adf_crypto_dev_config(accel_dev);
break;
case DEV_CFG_DC:
ret = adf_comp_dev_config(accel_dev);
break;
default:
ret = adf_no_dev_config(accel_dev);
break;
}
if (ret)
......@@ -289,7 +323,6 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct adf_accel_dev *accel_dev;
struct adf_accel_pci *accel_pci_dev;
struct adf_hw_device_data *hw_data;
char name[ADF_DEVICE_NAME_LENGTH];
unsigned int i, bar_nr;
unsigned long bar_mask;
struct adf_bar *bar;
......@@ -348,12 +381,6 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto out_err;
}
/* Create dev top level debugfs entry */
snprintf(name, sizeof(name), "%s%s_%s", ADF_DEVICE_NAME_PREFIX,
hw_data->dev_class->name, pci_name(pdev));
accel_dev->debugfs_dir = debugfs_create_dir(name, NULL);
/* Create device configuration table */
ret = adf_cfg_dev_add(accel_dev);
if (ret)
......@@ -410,6 +437,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto out_err;
}
adf_dbgfs_init(accel_dev);
ret = adf_dev_up(accel_dev, true);
if (ret)
goto out_err_dev_stop;
......
......@@ -16,6 +16,7 @@
#include <adf_accel_devices.h>
#include <adf_common_drv.h>
#include <adf_cfg.h>
#include <adf_dbgfs.h>
#include "adf_c3xxx_hw_data.h"
static const struct pci_device_id adf_pci_tbl[] = {
......@@ -65,8 +66,8 @@ static void adf_cleanup_accel(struct adf_accel_dev *accel_dev)
kfree(accel_dev->hw_device);
accel_dev->hw_device = NULL;
}
adf_dbgfs_exit(accel_dev);
adf_cfg_dev_remove(accel_dev);
debugfs_remove(accel_dev->debugfs_dir);
adf_devmgr_rm_dev(accel_dev, NULL);
}
......@@ -75,7 +76,6 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct adf_accel_dev *accel_dev;
struct adf_accel_pci *accel_pci_dev;
struct adf_hw_device_data *hw_data;
char name[ADF_DEVICE_NAME_LENGTH];
unsigned int i, bar_nr;
unsigned long bar_mask;
int ret;
......@@ -142,12 +142,6 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto out_err;
}
/* Create dev top level debugfs entry */
snprintf(name, sizeof(name), "%s%s_%s", ADF_DEVICE_NAME_PREFIX,
hw_data->dev_class->name, pci_name(pdev));
accel_dev->debugfs_dir = debugfs_create_dir(name, NULL);
/* Create device configuration table */
ret = adf_cfg_dev_add(accel_dev);
if (ret)
......@@ -199,6 +193,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto out_err_free_reg;
}
adf_dbgfs_init(accel_dev);
ret = adf_dev_up(accel_dev, true);
if (ret)
goto out_err_dev_stop;
......
......@@ -16,6 +16,7 @@
#include <adf_accel_devices.h>
#include <adf_common_drv.h>
#include <adf_cfg.h>
#include <adf_dbgfs.h>
#include "adf_c3xxxvf_hw_data.h"
static const struct pci_device_id adf_pci_tbl[] = {
......@@ -64,8 +65,8 @@ static void adf_cleanup_accel(struct adf_accel_dev *accel_dev)
kfree(accel_dev->hw_device);
accel_dev->hw_device = NULL;
}
adf_dbgfs_exit(accel_dev);
adf_cfg_dev_remove(accel_dev);
debugfs_remove(accel_dev->debugfs_dir);
pf = adf_devmgr_pci_to_accel_dev(accel_pci_dev->pci_dev->physfn);
adf_devmgr_rm_dev(accel_dev, pf);
}
......@@ -76,7 +77,6 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct adf_accel_dev *pf;
struct adf_accel_pci *accel_pci_dev;
struct adf_hw_device_data *hw_data;
char name[ADF_DEVICE_NAME_LENGTH];
unsigned int i, bar_nr;
unsigned long bar_mask;
int ret;
......@@ -123,12 +123,6 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
hw_data->ae_mask = hw_data->get_ae_mask(hw_data);
accel_pci_dev->sku = hw_data->get_sku(hw_data);
/* Create dev top level debugfs entry */
snprintf(name, sizeof(name), "%s%s_%s", ADF_DEVICE_NAME_PREFIX,
hw_data->dev_class->name, pci_name(pdev));
accel_dev->debugfs_dir = debugfs_create_dir(name, NULL);
/* Create device configuration table */
ret = adf_cfg_dev_add(accel_dev);
if (ret)
......@@ -173,6 +167,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* Completion for VF2PF request/response message exchange */
init_completion(&accel_dev->vf.msg_received);
adf_dbgfs_init(accel_dev);
ret = adf_dev_up(accel_dev, false);
if (ret)
goto out_err_dev_stop;
......
......@@ -16,6 +16,7 @@
#include <adf_accel_devices.h>
#include <adf_common_drv.h>
#include <adf_cfg.h>
#include <adf_dbgfs.h>
#include "adf_c62x_hw_data.h"
static const struct pci_device_id adf_pci_tbl[] = {
......@@ -65,8 +66,8 @@ static void adf_cleanup_accel(struct adf_accel_dev *accel_dev)
kfree(accel_dev->hw_device);
accel_dev->hw_device = NULL;
}
adf_dbgfs_exit(accel_dev);
adf_cfg_dev_remove(accel_dev);
debugfs_remove(accel_dev->debugfs_dir);
adf_devmgr_rm_dev(accel_dev, NULL);
}
......@@ -75,7 +76,6 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct adf_accel_dev *accel_dev;
struct adf_accel_pci *accel_pci_dev;
struct adf_hw_device_data *hw_data;
char name[ADF_DEVICE_NAME_LENGTH];
unsigned int i, bar_nr;
unsigned long bar_mask;
int ret;
......@@ -142,12 +142,6 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto out_err;
}
/* Create dev top level debugfs entry */
snprintf(name, sizeof(name), "%s%s_%s", ADF_DEVICE_NAME_PREFIX,
hw_data->dev_class->name, pci_name(pdev));
accel_dev->debugfs_dir = debugfs_create_dir(name, NULL);
/* Create device configuration table */
ret = adf_cfg_dev_add(accel_dev);
if (ret)
......@@ -199,6 +193,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto out_err_free_reg;
}
adf_dbgfs_init(accel_dev);
ret = adf_dev_up(accel_dev, true);
if (ret)
goto out_err_dev_stop;
......
......@@ -16,6 +16,7 @@
#include <adf_accel_devices.h>
#include <adf_common_drv.h>
#include <adf_cfg.h>
#include <adf_dbgfs.h>
#include "adf_c62xvf_hw_data.h"
static const struct pci_device_id adf_pci_tbl[] = {
......@@ -64,8 +65,8 @@ static void adf_cleanup_accel(struct adf_accel_dev *accel_dev)
kfree(accel_dev->hw_device);
accel_dev->hw_device = NULL;
}
adf_dbgfs_exit(accel_dev);
adf_cfg_dev_remove(accel_dev);
debugfs_remove(accel_dev->debugfs_dir);
pf = adf_devmgr_pci_to_accel_dev(accel_pci_dev->pci_dev->physfn);
adf_devmgr_rm_dev(accel_dev, pf);
}
......@@ -76,7 +77,6 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct adf_accel_dev *pf;
struct adf_accel_pci *accel_pci_dev;
struct adf_hw_device_data *hw_data;
char name[ADF_DEVICE_NAME_LENGTH];
unsigned int i, bar_nr;
unsigned long bar_mask;
int ret;
......@@ -123,12 +123,6 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
hw_data->ae_mask = hw_data->get_ae_mask(hw_data);
accel_pci_dev->sku = hw_data->get_sku(hw_data);
/* Create dev top level debugfs entry */
snprintf(name, sizeof(name), "%s%s_%s", ADF_DEVICE_NAME_PREFIX,
hw_data->dev_class->name, pci_name(pdev));
accel_dev->debugfs_dir = debugfs_create_dir(name, NULL);
/* Create device configuration table */
ret = adf_cfg_dev_add(accel_dev);
if (ret)
......@@ -173,6 +167,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* Completion for VF2PF request/response message exchange */
init_completion(&accel_dev->vf.msg_received);
adf_dbgfs_init(accel_dev);
ret = adf_dev_up(accel_dev, false);
if (ret)
goto out_err_dev_stop;
......
......@@ -27,7 +27,9 @@ intel_qat-objs := adf_cfg.o \
qat_hal.o \
qat_bl.o
intel_qat-$(CONFIG_DEBUG_FS) += adf_transport_debug.o
intel_qat-$(CONFIG_DEBUG_FS) += adf_transport_debug.o \
adf_dbgfs.o
intel_qat-$(CONFIG_PCI_IOV) += adf_sriov.o adf_vf_isr.o adf_pfvf_utils.o \
adf_pfvf_pf_msg.o adf_pfvf_pf_proto.o \
adf_pfvf_vf_msg.o adf_pfvf_vf_proto.o \
......
......@@ -202,7 +202,7 @@ struct adf_hw_device_data {
int (*ring_pair_reset)(struct adf_accel_dev *accel_dev, u32 bank_nr);
void (*reset_device)(struct adf_accel_dev *accel_dev);
void (*set_msix_rttable)(struct adf_accel_dev *accel_dev);
char *(*uof_get_name)(struct adf_accel_dev *accel_dev, u32 obj_num);
const char *(*uof_get_name)(struct adf_accel_dev *accel_dev, u32 obj_num);
u32 (*uof_get_num_objs)(void);
u32 (*uof_get_ae_mask)(struct adf_accel_dev *accel_dev, u32 obj_num);
int (*dev_config)(struct adf_accel_dev *accel_dev);
......
......@@ -13,7 +13,7 @@ static int adf_ae_fw_load_images(struct adf_accel_dev *accel_dev, void *fw_addr,
struct adf_fw_loader_data *loader_data = accel_dev->fw_loader;
struct adf_hw_device_data *hw_device = accel_dev->hw_device;
struct icp_qat_fw_loader_handle *loader;
char *obj_name;
const char *obj_name;
u32 num_objs;
u32 ae_mask;
int i;
......
......@@ -286,7 +286,6 @@ int adf_init_admin_pm(struct adf_accel_dev *accel_dev, u32 idle_delay)
return adf_send_admin(accel_dev, &req, &resp, ae_mask);
}
EXPORT_SYMBOL_GPL(adf_init_admin_pm);
int adf_init_admin_comms(struct adf_accel_dev *accel_dev)
{
......
......@@ -74,15 +74,30 @@ int adf_cfg_dev_add(struct adf_accel_dev *accel_dev)
INIT_LIST_HEAD(&dev_cfg_data->sec_list);
init_rwsem(&dev_cfg_data->lock);
accel_dev->cfg = dev_cfg_data;
return 0;
}
EXPORT_SYMBOL_GPL(adf_cfg_dev_add);
/* accel_dev->debugfs_dir should always be non-NULL here */
dev_cfg_data->debug = debugfs_create_file("dev_cfg", S_IRUSR,
void adf_cfg_dev_dbgfs_add(struct adf_accel_dev *accel_dev)
{
struct adf_cfg_device_data *dev_cfg_data = accel_dev->cfg;
dev_cfg_data->debug = debugfs_create_file("dev_cfg", 0400,
accel_dev->debugfs_dir,
dev_cfg_data,
&qat_dev_cfg_fops);
return 0;
}
EXPORT_SYMBOL_GPL(adf_cfg_dev_add);
void adf_cfg_dev_dbgfs_rm(struct adf_accel_dev *accel_dev)
{
struct adf_cfg_device_data *dev_cfg_data = accel_dev->cfg;
if (!dev_cfg_data)
return;
debugfs_remove(dev_cfg_data->debug);
dev_cfg_data->debug = NULL;
}
static void adf_cfg_section_del_all(struct list_head *head);
......@@ -116,7 +131,6 @@ void adf_cfg_dev_remove(struct adf_accel_dev *accel_dev)
down_write(&dev_cfg_data->lock);
adf_cfg_section_del_all(&dev_cfg_data->sec_list);
up_write(&dev_cfg_data->lock);
debugfs_remove(dev_cfg_data->debug);
kfree(dev_cfg_data);
accel_dev->cfg = NULL;
}
......
......@@ -31,6 +31,8 @@ struct adf_cfg_device_data {
int adf_cfg_dev_add(struct adf_accel_dev *accel_dev);
void adf_cfg_dev_remove(struct adf_accel_dev *accel_dev);
void adf_cfg_dev_dbgfs_add(struct adf_accel_dev *accel_dev);
void adf_cfg_dev_dbgfs_rm(struct adf_accel_dev *accel_dev);
int adf_cfg_section_add(struct adf_accel_dev *accel_dev, const char *name);
void adf_cfg_del_all(struct adf_accel_dev *accel_dev);
int adf_cfg_add_key_value_param(struct adf_accel_dev *accel_dev,
......
......@@ -25,7 +25,15 @@
#define ADF_DC "Dc"
#define ADF_CFG_DC "dc"
#define ADF_CFG_CY "sym;asym"
#define ADF_CFG_SYM "sym"
#define ADF_CFG_ASYM "asym"
#define ADF_CFG_ASYM_SYM "asym;sym"
#define ADF_CFG_ASYM_DC "asym;dc"
#define ADF_CFG_DC_ASYM "dc;asym"
#define ADF_CFG_SYM_DC "sym;dc"
#define ADF_CFG_DC_SYM "dc;sym"
#define ADF_SERVICES_ENABLED "ServicesEnabled"
#define ADF_PM_IDLE_SUPPORT "PmIdleSupport"
#define ADF_ETRMGR_COALESCING_ENABLED "InterruptCoalescingEnabled"
#define ADF_ETRMGR_COALESCING_ENABLED_FORMAT \
ADF_ETRMGR_BANK "%d" ADF_ETRMGR_COALESCING_ENABLED
......
......@@ -187,7 +187,7 @@ void qat_uclo_del_obj(struct icp_qat_fw_loader_handle *handle);
int qat_uclo_wr_mimage(struct icp_qat_fw_loader_handle *handle, void *addr_ptr,
int mem_size);
int qat_uclo_map_obj(struct icp_qat_fw_loader_handle *handle,
void *addr_ptr, u32 mem_size, char *obj_name);
void *addr_ptr, u32 mem_size, const char *obj_name);
int qat_uclo_set_cfg_ae_mask(struct icp_qat_fw_loader_handle *handle,
unsigned int cfg_ae_mask);
int adf_init_misc_wq(void);
......
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright(c) 2023 Intel Corporation */
#include <linux/debugfs.h>
#include "adf_accel_devices.h"
#include "adf_cfg.h"
#include "adf_common_drv.h"
#include "adf_dbgfs.h"
/**
* adf_dbgfs_init() - add persistent debugfs entries
* @accel_dev: Pointer to acceleration device.
*
* This function creates debugfs entries that are persistent through a device
* state change (from up to down or vice versa).
*/
void adf_dbgfs_init(struct adf_accel_dev *accel_dev)
{
char name[ADF_DEVICE_NAME_LENGTH];
void *ret;
/* Create dev top level debugfs entry */
snprintf(name, sizeof(name), "%s%s_%s", ADF_DEVICE_NAME_PREFIX,
accel_dev->hw_device->dev_class->name,
pci_name(accel_dev->accel_pci_dev.pci_dev));
ret = debugfs_create_dir(name, NULL);
if (IS_ERR_OR_NULL(ret))
return;
accel_dev->debugfs_dir = ret;
adf_cfg_dev_dbgfs_add(accel_dev);
}
EXPORT_SYMBOL_GPL(adf_dbgfs_init);
/**
* adf_dbgfs_exit() - remove persistent debugfs entries
* @accel_dev: Pointer to acceleration device.
*/
void adf_dbgfs_exit(struct adf_accel_dev *accel_dev)
{
adf_cfg_dev_dbgfs_rm(accel_dev);
debugfs_remove(accel_dev->debugfs_dir);
}
EXPORT_SYMBOL_GPL(adf_dbgfs_exit);
/**
* adf_dbgfs_add() - add non-persistent debugfs entries
* @accel_dev: Pointer to acceleration device.
*
* This function creates debugfs entries that are not persistent through
* a device state change (from up to down or vice versa).
*/
void adf_dbgfs_add(struct adf_accel_dev *accel_dev)
{
if (!accel_dev->debugfs_dir)
return;
}
/**
* adf_dbgfs_rm() - remove non-persistent debugfs entries
* @accel_dev: Pointer to acceleration device.
*/
void adf_dbgfs_rm(struct adf_accel_dev *accel_dev)
{
if (!accel_dev->debugfs_dir)
return;
}
/* SPDX-License-Identifier: GPL-2.0-only */
/* Copyright(c) 2023 Intel Corporation */
#ifndef ADF_DBGFS_H
#define ADF_DBGFS_H
#ifdef CONFIG_DEBUG_FS
void adf_dbgfs_init(struct adf_accel_dev *accel_dev);
void adf_dbgfs_add(struct adf_accel_dev *accel_dev);
void adf_dbgfs_rm(struct adf_accel_dev *accel_dev);
void adf_dbgfs_exit(struct adf_accel_dev *accel_dev);
#else
static inline void adf_dbgfs_init(struct adf_accel_dev *accel_dev)
{
}
static inline void adf_dbgfs_add(struct adf_accel_dev *accel_dev)
{
}
static inline void adf_dbgfs_rm(struct adf_accel_dev *accel_dev)
{
}
static inline void adf_dbgfs_exit(struct adf_accel_dev *accel_dev)
{
}
#endif
#endif
......@@ -23,15 +23,25 @@ struct adf_gen4_pm_data {
static int send_host_msg(struct adf_accel_dev *accel_dev)
{
char pm_idle_support_cfg[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {};
void __iomem *pmisc = adf_get_pmisc_base(accel_dev);
bool pm_idle_support;
u32 msg;
int ret;
msg = ADF_CSR_RD(pmisc, ADF_GEN4_PM_HOST_MSG);
if (msg & ADF_GEN4_PM_MSG_PENDING)
return -EBUSY;
adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC,
ADF_PM_IDLE_SUPPORT, pm_idle_support_cfg);
ret = kstrtobool(pm_idle_support_cfg, &pm_idle_support);
if (ret)
pm_idle_support = true;
/* Send HOST_MSG */
msg = FIELD_PREP(ADF_GEN4_PM_MSG_PAYLOAD_BIT_MASK, PM_SET_MIN);
msg = FIELD_PREP(ADF_GEN4_PM_MSG_PAYLOAD_BIT_MASK,
pm_idle_support ? PM_SET_MIN : PM_NO_CHANGE);
msg |= ADF_GEN4_PM_MSG_PENDING;
ADF_CSR_WR(pmisc, ADF_GEN4_PM_HOST_MSG, msg);
......
......@@ -37,6 +37,7 @@
#define ADF_GEN4_PM_DEFAULT_IDLE_FILTER (0x0)
#define ADF_GEN4_PM_MAX_IDLE_FILTER (0x7)
#define ADF_GEN4_PM_DEFAULT_IDLE_SUPPORT (0x1)
int adf_gen4_enable_pm(struct adf_accel_dev *accel_dev);
bool adf_gen4_handle_pm_interrupt(struct adf_accel_dev *accel_dev);
......
......@@ -7,6 +7,7 @@
#include "adf_accel_devices.h"
#include "adf_cfg.h"
#include "adf_common_drv.h"
#include "adf_dbgfs.h"
static LIST_HEAD(service_table);
static DEFINE_MUTEX(service_lock);
......@@ -216,6 +217,9 @@ static int adf_dev_start(struct adf_accel_dev *accel_dev)
clear_bit(ADF_STATUS_STARTED, &accel_dev->status);
return -EFAULT;
}
adf_dbgfs_add(accel_dev);
return 0;
}
......@@ -240,6 +244,8 @@ static void adf_dev_stop(struct adf_accel_dev *accel_dev)
!test_bit(ADF_STATUS_STARTING, &accel_dev->status))
return;
adf_dbgfs_rm(accel_dev);
clear_bit(ADF_STATUS_STARTING, &accel_dev->status);
clear_bit(ADF_STATUS_STARTED, &accel_dev->status);
......
......@@ -78,6 +78,13 @@ static ssize_t state_store(struct device *dev, struct device_attribute *attr,
static const char * const services_operations[] = {
ADF_CFG_CY,
ADF_CFG_DC,
ADF_CFG_SYM,
ADF_CFG_ASYM,
ADF_CFG_ASYM_SYM,
ADF_CFG_ASYM_DC,
ADF_CFG_DC_ASYM,
ADF_CFG_SYM_DC,
ADF_CFG_DC_SYM,
};
static ssize_t cfg_services_show(struct device *dev, struct device_attribute *attr,
......@@ -145,12 +152,65 @@ static ssize_t cfg_services_store(struct device *dev, struct device_attribute *a
return count;
}
static ssize_t pm_idle_enabled_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
char pm_idle_enabled[ADF_CFG_MAX_VAL_LEN_IN_BYTES] = {};
struct adf_accel_dev *accel_dev;
int ret;
accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
if (!accel_dev)
return -EINVAL;
ret = adf_cfg_get_param_value(accel_dev, ADF_GENERAL_SEC,
ADF_PM_IDLE_SUPPORT, pm_idle_enabled);
if (ret)
return sysfs_emit(buf, "1\n");
return sysfs_emit(buf, "%s\n", pm_idle_enabled);
}
static ssize_t pm_idle_enabled_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
unsigned long pm_idle_enabled_cfg_val;
struct adf_accel_dev *accel_dev;
bool pm_idle_enabled;
int ret;
ret = kstrtobool(buf, &pm_idle_enabled);
if (ret)
return ret;
pm_idle_enabled_cfg_val = pm_idle_enabled;
accel_dev = adf_devmgr_pci_to_accel_dev(to_pci_dev(dev));
if (!accel_dev)
return -EINVAL;
if (adf_dev_started(accel_dev)) {
dev_info(dev, "Device qat_dev%d must be down to set pm_idle_enabled.\n",
accel_dev->accel_id);
return -EINVAL;
}
ret = adf_cfg_add_key_value_param(accel_dev, ADF_GENERAL_SEC,
ADF_PM_IDLE_SUPPORT, &pm_idle_enabled_cfg_val,
ADF_DEC);
if (ret)
return ret;
return count;
}
static DEVICE_ATTR_RW(pm_idle_enabled);
static DEVICE_ATTR_RW(state);
static DEVICE_ATTR_RW(cfg_services);
static struct attribute *qat_attrs[] = {
&dev_attr_state.attr,
&dev_attr_cfg_services.attr,
&dev_attr_pm_idle_enabled.attr,
NULL,
};
......
......@@ -87,8 +87,7 @@ enum icp_qat_capabilities_mask {
ICP_ACCEL_CAPABILITIES_AUTHENTICATION = BIT(3),
ICP_ACCEL_CAPABILITIES_RESERVED_1 = BIT(4),
ICP_ACCEL_CAPABILITIES_COMPRESSION = BIT(5),
ICP_ACCEL_CAPABILITIES_LZS_COMPRESSION = BIT(6),
ICP_ACCEL_CAPABILITIES_RAND = BIT(7),
/* Bits 6-7 are currently reserved */
ICP_ACCEL_CAPABILITIES_ZUC = BIT(8),
ICP_ACCEL_CAPABILITIES_SHA3 = BIT(9),
/* Bits 10-11 are currently reserved */
......
......@@ -106,7 +106,6 @@ static int qat_get_inter_state_size(enum icp_qat_hw_auth_algo qat_hash_alg)
default:
return -EFAULT;
}
return -EFAULT;
}
static int qat_alg_do_precomputes(struct icp_qat_hw_auth_algo_blk *hash,
......
......@@ -170,15 +170,14 @@ static void qat_dh_cb(struct icp_qat_fw_pke_resp *resp)
}
areq->dst_len = req->ctx.dh->p_size;
dma_unmap_single(dev, req->out.dh.r, req->ctx.dh->p_size,
DMA_FROM_DEVICE);
if (req->dst_align) {
scatterwalk_map_and_copy(req->dst_align, areq->dst, 0,
areq->dst_len, 1);
kfree_sensitive(req->dst_align);
}
dma_unmap_single(dev, req->out.dh.r, req->ctx.dh->p_size,
DMA_FROM_DEVICE);
dma_unmap_single(dev, req->phy_in, sizeof(struct qat_dh_input_params),
DMA_TO_DEVICE);
dma_unmap_single(dev, req->phy_out,
......@@ -521,12 +520,14 @@ static void qat_rsa_cb(struct icp_qat_fw_pke_resp *resp)
err = (err == ICP_QAT_FW_COMN_STATUS_FLAG_OK) ? 0 : -EINVAL;
kfree_sensitive(req->src_align);
dma_unmap_single(dev, req->in.rsa.enc.m, req->ctx.rsa->key_sz,
DMA_TO_DEVICE);
kfree_sensitive(req->src_align);
areq->dst_len = req->ctx.rsa->key_sz;
dma_unmap_single(dev, req->out.rsa.enc.c, req->ctx.rsa->key_sz,
DMA_FROM_DEVICE);
if (req->dst_align) {
scatterwalk_map_and_copy(req->dst_align, areq->dst, 0,
areq->dst_len, 1);
......@@ -534,9 +535,6 @@ static void qat_rsa_cb(struct icp_qat_fw_pke_resp *resp)
kfree_sensitive(req->dst_align);
}
dma_unmap_single(dev, req->out.rsa.enc.c, req->ctx.rsa->key_sz,
DMA_FROM_DEVICE);
dma_unmap_single(dev, req->phy_in, sizeof(struct qat_rsa_input_params),
DMA_TO_DEVICE);
dma_unmap_single(dev, req->phy_out,
......
......@@ -1685,7 +1685,7 @@ static void qat_uclo_del_mof(struct icp_qat_fw_loader_handle *handle)
}
static int qat_uclo_seek_obj_inside_mof(struct icp_qat_mof_handle *mobj_handle,
char *obj_name, char **obj_ptr,
const char *obj_name, char **obj_ptr,
unsigned int *obj_size)
{
struct icp_qat_mof_objhdr *obj_hdr = mobj_handle->obj_table.obj_hdr;
......@@ -1837,8 +1837,8 @@ static int qat_uclo_check_mof_format(struct icp_qat_mof_file_hdr *mof_hdr)
static int qat_uclo_map_mof_obj(struct icp_qat_fw_loader_handle *handle,
struct icp_qat_mof_file_hdr *mof_ptr,
u32 mof_size, char *obj_name, char **obj_ptr,
unsigned int *obj_size)
u32 mof_size, const char *obj_name,
char **obj_ptr, unsigned int *obj_size)
{
struct icp_qat_mof_chunkhdr *mof_chunkhdr;
unsigned int file_id = mof_ptr->file_id;
......@@ -1888,7 +1888,7 @@ static int qat_uclo_map_mof_obj(struct icp_qat_fw_loader_handle *handle,
}
int qat_uclo_map_obj(struct icp_qat_fw_loader_handle *handle,
void *addr_ptr, u32 mem_size, char *obj_name)
void *addr_ptr, u32 mem_size, const char *obj_name)
{
char *obj_addr;
u32 obj_size;
......
......@@ -16,6 +16,7 @@
#include <adf_accel_devices.h>
#include <adf_common_drv.h>
#include <adf_cfg.h>
#include <adf_dbgfs.h>
#include "adf_dh895xcc_hw_data.h"
static const struct pci_device_id adf_pci_tbl[] = {
......@@ -65,8 +66,8 @@ static void adf_cleanup_accel(struct adf_accel_dev *accel_dev)
kfree(accel_dev->hw_device);
accel_dev->hw_device = NULL;
}
adf_dbgfs_exit(accel_dev);
adf_cfg_dev_remove(accel_dev);
debugfs_remove(accel_dev->debugfs_dir);
adf_devmgr_rm_dev(accel_dev, NULL);
}
......@@ -75,7 +76,6 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct adf_accel_dev *accel_dev;
struct adf_accel_pci *accel_pci_dev;
struct adf_hw_device_data *hw_data;
char name[ADF_DEVICE_NAME_LENGTH];
unsigned int i, bar_nr;
unsigned long bar_mask;
int ret;
......@@ -140,12 +140,6 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto out_err;
}
/* Create dev top level debugfs entry */
snprintf(name, sizeof(name), "%s%s_%s", ADF_DEVICE_NAME_PREFIX,
hw_data->dev_class->name, pci_name(pdev));
accel_dev->debugfs_dir = debugfs_create_dir(name, NULL);
/* Create device configuration table */
ret = adf_cfg_dev_add(accel_dev);
if (ret)
......@@ -199,6 +193,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto out_err_free_reg;
}
adf_dbgfs_init(accel_dev);
ret = adf_dev_up(accel_dev, true);
if (ret)
goto out_err_dev_stop;
......
......@@ -297,7 +297,7 @@ static int mv_cesa_des_setkey(struct crypto_skcipher *cipher, const u8 *key,
static int mv_cesa_des3_ede_setkey(struct crypto_skcipher *cipher,
const u8 *key, unsigned int len)
{
struct mv_cesa_des_ctx *ctx = crypto_skcipher_ctx(cipher);
struct mv_cesa_des3_ctx *ctx = crypto_skcipher_ctx(cipher);
int err;
err = verify_skcipher_des3_key(cipher, key);
......
......@@ -141,6 +141,8 @@ int otx2_cpt_attach_rscrs_msg(struct otx2_cptlfs_info *lfs)
req->hdr.sig = OTX2_MBOX_REQ_SIG;
req->hdr.pcifunc = 0;
req->cptlfs = lfs->lfs_num;
req->cpt_blkaddr = lfs->blkaddr;
req->modify = 1;
ret = otx2_cpt_send_mbox_msg(mbox, lfs->pdev);
if (ret)
return ret;
......@@ -168,6 +170,7 @@ int otx2_cpt_detach_rsrcs_msg(struct otx2_cptlfs_info *lfs)
req->hdr.id = MBOX_MSG_DETACH_RESOURCES;
req->hdr.sig = OTX2_MBOX_REQ_SIG;
req->hdr.pcifunc = 0;
req->cptlfs = 1;
ret = otx2_cpt_send_mbox_msg(mbox, lfs->pdev);
if (ret)
return ret;
......
......@@ -19,6 +19,7 @@ struct otx2_cptvf_dev {
struct otx2_mbox pfvf_mbox;
struct work_struct pfvf_mbox_work;
struct workqueue_struct *pfvf_mbox_wq;
int blkaddr;
void *bbuf_base;
unsigned long cap_flag;
};
......