Commit 83ee7650 authored by Christian Lamparter, committed by Greg Kroah-Hartman

crypto: crypto4xx - fix cfb and ofb "overran dst buffer" issues

commit 7e92e171 upstream.

Currently, the crypto4xx driver's CFB and OFB AES ciphers are
failing testmgr's test vectors:

|cfb-aes-ppc4xx encryption overran dst buffer on test vector 3, cfg="in-place"
|ofb-aes-ppc4xx encryption overran dst buffer on test vector 1, cfg="in-place"

This is because of a very subtle "bug" in the hardware that
is indirectly mentioned in section 18.1.3.5 Encryption/Decryption
of the hardware spec:

the OFB and CFB modes for AES are listed there as operation
modes for >>> "Block ciphers" <<<. Which kind of makes sense,
but we would like them to be treated as stream ciphers, just
like the CTR mode.
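
To illustrate why this matters, here is a minimal software sketch of
OFB used as a stream cipher. It is not the driver's code, and the
single-block aes_encrypt_block() primitive is assumed for the example:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define AES_BLOCK_SIZE 16

/* assumed single-block AES primitive; any AES library would do */
void aes_encrypt_block(const uint8_t key[16], const uint8_t in[16],
                       uint8_t out[16]);

/*
 * OFB as a stream cipher: encrypting the IV over and over yields a
 * keystream that is XORed into the data. Only "n" bytes of the final
 * keystream block are consumed, so src/dst need not be a multiple of
 * AES_BLOCK_SIZE long. A hardware engine that insists on writing
 * whole 16-byte blocks runs past the end of such a dst buffer.
 */
void ofb_crypt(const uint8_t key[16], uint8_t iv[16],
               const uint8_t *src, uint8_t *dst, size_t len)
{
        uint8_t ks[AES_BLOCK_SIZE];
        size_t i, n;

        while (len) {
                n = len < AES_BLOCK_SIZE ? len : AES_BLOCK_SIZE;
                aes_encrypt_block(key, iv, ks);
                memcpy(iv, ks, AES_BLOCK_SIZE);   /* OFB feedback */
                for (i = 0; i < n; i++)
                        dst[i] = src[i] ^ ks[i];
                src += n;
                dst += n;
                len -= n;
        }
}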

To work around this issue and stop the hardware from causing
"overran dst buffer" errors on ciphertexts that are not a multiple
of 16 (AES_BLOCK_SIZE), we force the driver to use the scatter
buffers as the go-between.

As a bonus, this patch also removes the now-redundant
pd_uinfo->num_gd and pd_uinfo->num_sd setters, since those values
have already been assigned earlier in the function.

Cc: stable@vger.kernel.org
Fixes: f2a13e7c ("crypto: crypto4xx - enable AES RFC3686, ECB, CFB and OFB offloads")
Signed-off-by: Christian Lamparter <chunkeey@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 0b2f2b9c
--- a/drivers/crypto/amcc/crypto4xx_core.c
+++ b/drivers/crypto/amcc/crypto4xx_core.c
@@ -714,7 +714,23 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
 	size_t offset_to_sr_ptr;
 	u32 gd_idx = 0;
 	int tmp;
-	bool is_busy;
+	bool is_busy, force_sd;
+
+	/*
+	 * There's a very subtile/disguised "bug" in the hardware that
+	 * gets indirectly mentioned in 18.1.3.5 Encryption/Decryption
+	 * of the hardware spec:
+	 * *drum roll* the AES/(T)DES OFB and CFB modes are listed as
+	 * operation modes for >>> "Block ciphers" <<<.
+	 *
+	 * To workaround this issue and stop the hardware from causing
+	 * "overran dst buffer" on crypttexts that are not a multiple
+	 * of 16 (AES_BLOCK_SIZE), we force the driver to use the
+	 * scatter buffers.
+	 */
+	force_sd = (req_sa->sa_command_1.bf.crypto_mode9_8 == CRYPTO_MODE_CFB
+		|| req_sa->sa_command_1.bf.crypto_mode9_8 == CRYPTO_MODE_OFB)
+		&& (datalen % AES_BLOCK_SIZE);
 
 	/* figure how many gd are needed */
 	tmp = sg_nents_for_len(src, assoclen + datalen);
@@ -732,7 +748,7 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
 	}
 
 	/* figure how many sd are needed */
-	if (sg_is_last(dst)) {
+	if (sg_is_last(dst) && force_sd == false) {
 		num_sd = 0;
 	} else {
 		if (datalen > PPC4XX_SD_BUFFER_SIZE) {
@@ -807,9 +823,10 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
 	pd->sa_len = sa_len;
 
 	pd_uinfo = &dev->pdr_uinfo[pd_entry];
-	pd_uinfo->async_req = req;
 	pd_uinfo->num_gd = num_gd;
 	pd_uinfo->num_sd = num_sd;
+	pd_uinfo->dest_va = dst;
+	pd_uinfo->async_req = req;
 
 	if (iv_len)
 		memcpy(pd_uinfo->sr_va->save_iv, iv, iv_len);
@@ -828,7 +845,6 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
 		/* get first gd we are going to use */
 		gd_idx = fst_gd;
 		pd_uinfo->first_gd = fst_gd;
-		pd_uinfo->num_gd = num_gd;
 		gd = crypto4xx_get_gdp(dev, &gd_dma, gd_idx);
 		pd->src = gd_dma;
 		/* enable gather */
@@ -865,17 +881,14 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
 		 * Indicate gather array is not used
 		 */
 		pd_uinfo->first_gd = 0xffffffff;
-		pd_uinfo->num_gd = 0;
 	}
-	if (sg_is_last(dst)) {
+	if (!num_sd) {
 		/*
 		 * we know application give us dst a whole piece of memory
 		 * no need to use scatter ring.
 		 */
 		pd_uinfo->using_sd = 0;
 		pd_uinfo->first_sd = 0xffffffff;
-		pd_uinfo->num_sd = 0;
-		pd_uinfo->dest_va = dst;
 		sa->sa_command_0.bf.scatter = 0;
 		pd->dest = (u32)dma_map_page(dev->core_dev->device,
 					     sg_page(dst), dst->offset,
@@ -889,9 +902,7 @@ int crypto4xx_build_pd(struct crypto_async_request *req,
 		nbytes = datalen;
 		sa->sa_command_0.bf.scatter = 1;
 		pd_uinfo->using_sd = 1;
-		pd_uinfo->dest_va = dst;
 		pd_uinfo->first_sd = fst_sd;
-		pd_uinfo->num_sd = num_sd;
 		sd = crypto4xx_get_sdp(dev, &sd_dma, sd_idx);
 		pd->dest = sd_dma;
 		/* setup scatter descriptor */
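
For context, a request of the kind that used to trigger the overrun
can be sketched with the kernel skcipher API: an in-place "cfb(aes)"
operation whose length is not a multiple of AES_BLOCK_SIZE, roughly
what testmgr's "in-place" config exercises. This is illustrative only
and not part of the patch; the demo_cfb_partial() name and the
25-byte length are arbitrary:

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int demo_cfb_partial(void)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	u8 key[16] = {}, iv[16] = {};
	u8 *buf;
	int err;

	tfm = crypto_alloc_skcipher("cfb(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_skcipher_setkey(tfm, key, sizeof(key));
	if (err)
		goto free_tfm;

	/* 25 bytes: deliberately not a multiple of AES_BLOCK_SIZE */
	buf = kzalloc(25, GFP_KERNEL);
	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!buf || !req) {
		err = -ENOMEM;
		goto free_all;
	}

	sg_init_one(&sg, buf, 25);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				      crypto_req_done, &wait);
	/* in-place: src == dst; pre-patch, the HW wrote past buf[24] */
	skcipher_request_set_crypt(req, &sg, &sg, 25, iv);
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

free_all:
	skcipher_request_free(req);
	kfree(buf);
free_tfm:
	crypto_free_skcipher(tfm);
	return err;
}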