Commit ccc9d4a6 authored by Linus Torvalds

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto update from Herbert Xu:
 "API:

   - Add support for cipher output IVs in testmgr
   - Add missing crypto_ahash_blocksize helper
   - Mark authenc and des ciphers as not allowed under FIPS.

Algorithms:

   - Add CRC support to 842 compression
   - Add keywrap algorithm
   - A number of changes to the akcipher interface:
      + Separate functions for setting public/private keys.
      + Use SG lists.

Drivers:

   - Add Intel SHA Extension optimised SHA1 and SHA256
   - Use dma_map_sg instead of custom functions in crypto drivers
   - Add support for STM32 RNG
   - Add support for ST RNG
   - Add Device Tree support to exynos RNG driver
   - Add support for mxs-dcp crypto device on MX6SL
   - Add xts(aes) support to caam
   - Add ctr(aes) and xts(aes) support to qat
   - A large set of fixes from Russell King for the marvell/cesa driver"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (115 commits)
  crypto: asymmetric_keys - Fix unaligned access in x509_get_sig_params()
  crypto: akcipher - Don't #include crypto/public_key.h as the contents aren't used
  hwrng: exynos - Add Device Tree support
  hwrng: exynos - Fix missing configuration after suspend to RAM
  hwrng: exynos - Add timeout for waiting on init done
  dt-bindings: rng: Describe Exynos4 PRNG bindings
  crypto: marvell/cesa - use __le32 for hardware descriptors
  crypto: marvell/cesa - fix missing cpu_to_le32() in mv_cesa_dma_add_op()
  crypto: marvell/cesa - use memcpy_fromio()/memcpy_toio()
  crypto: marvell/cesa - use gfp_t for gfp flags
  crypto: marvell/cesa - use dma_addr_t for cur_dma
  crypto: marvell/cesa - use readl_relaxed()/writel_relaxed()
  crypto: caam - fix indentation of close braces
  crypto: caam - only export the state we really need to export
  crypto: caam - fix non-block aligned hash calculation
  crypto: caam - avoid needlessly saving and restoring caam_hash_ctx
  crypto: caam - print errno code when hash registration fails
  crypto: marvell/cesa - fix memory leak
  crypto: marvell/cesa - fix first-fragment handling in mv_cesa_ahash_dma_last_req()
  crypto: marvell/cesa - rearrange handling for sw padded hashes
  ...
parents 66ef3493 271817a3
Exynos Pseudo Random Number Generator
Required properties:
- compatible : Should be "samsung,exynos4-rng".
- reg : Specifies base physical address and size of the registers map.
- clocks : Phandle to clock-controller plus clock-specifier pair.
- clock-names : "secss" as a clock name.
Example:
rng@10830400 {
	compatible = "samsung,exynos4-rng";
	reg = <0x10830400 0x200>;
	clocks = <&clock CLK_SSS>;
	clock-names = "secss";
};
STMicroelectronics HW Random Number Generator
----------------------------------------------
Required parameters:
compatible : Should be "st,rng"
reg : Base address and size of IP's register map.
clocks : Phandle to device's clock (See: ../clocks/clock-bindings.txt)
Example:
rng@fee80000 {
	compatible = "st,rng";
	reg = <0xfee80000 0x1000>;
	clocks = <&clk_sysin>;
};
STMicroelectronics STM32 HW RNG
===============================
The STM32 hardware random number generator is a simple fixed purpose IP and
is fully separated from other crypto functions.
Required properties:
- compatible : Should be "st,stm32-rng"
- reg : Should be register base and length as documented in the datasheet
- interrupts : The designated IRQ line for the RNG
- clocks : The clock needed to enable the RNG
Example:
rng: rng@50060800 {
	compatible = "st,stm32-rng";
	reg = <0x50060800 0x400>;
	interrupts = <80>;
	clocks = <&rcc 0 38>;
};
...@@ -3,7 +3,7 @@ Introduction:
The hw_random framework is software that makes use of a
special hardware feature on your CPU or motherboard,
a Random Number Generator (RNG). The software has two parts:
-a core providing the /dev/hw_random character device and its
+a core providing the /dev/hwrng character device and its
sysfs support, plus a hardware-specific driver that plugs
into that core.
...@@ -14,7 +14,7 @@ Introduction:
http://sourceforge.net/projects/gkernel/
-Those tools use /dev/hw_random to fill the kernel entropy pool,
+Those tools use /dev/hwrng to fill the kernel entropy pool,
which is used internally and exported by the /dev/urandom and
/dev/random special files.
...@@ -32,13 +32,13 @@ Theory of operation:
The rng-tools package uses such tests in "rngd", and lets you
run them by hand with a "rngtest" utility.
-/dev/hw_random is char device major 10, minor 183.
+/dev/hwrng is char device major 10, minor 183.
CLASS DEVICE. There is a /sys/class/misc/hw_random node with
two unique attributes, "rng_available" and "rng_current". The
"rng_available" attribute lists the hardware-specific drivers
available, while "rng_current" lists the one which is currently
-connected to /dev/hw_random. If your system has more than one
+connected to /dev/hwrng. If your system has more than one
RNG available, you may change the one used by writing a name from
the list in "rng_available" into "rng_current".
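As a concrete illustration of the sysfs interface described above, the following stand-alone C sketch lists the available back-ends and selects one by writing to rng_current (the name "exynos" written below is only a placeholder; use a name printed by rng_available):

#include <stdio.h>

int main(void)
{
	char buf[256];
	FILE *f = fopen("/sys/class/misc/hw_random/rng_available", "r");

	if (!f)
		return 1;
	if (fgets(buf, sizeof(buf), f))
		printf("available: %s", buf);
	fclose(f);

	f = fopen("/sys/class/misc/hw_random/rng_current", "w");
	if (!f)
		return 1;
	fputs("exynos", f);	/* placeholder: pick a name from rng_available */
	fclose(f);
	return 0;
}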
......
...@@ -1529,6 +1529,7 @@ W: http://www.stlinux.com
S: Maintained
F: arch/arm/mach-sti/
F: arch/arm/boot/dts/sti*
+F: drivers/char/hw_random/st-rng.c
F: drivers/clocksource/arm_global_timer.c
F: drivers/clocksource/clksrc_st_lpc.c
F: drivers/i2c/busses/i2c-st.c
...@@ -6587,6 +6588,13 @@ M: Guenter Roeck <linux@roeck-us.net>
S: Maintained
F: drivers/net/dsa/mv88e6352.c

+MARVELL CRYPTO DRIVER
+M: Boris Brezillon <boris.brezillon@free-electrons.com>
+M: Arnaud Ebalard <arno@natisbad.org>
+F: drivers/crypto/marvell/
+S: Maintained
+L: linux-crypto@vger.kernel.org
+
MARVELL GIGABIT ETHERNET DRIVERS (skge/sky2)
M: Mirko Lindner <mlindner@marvell.com>
M: Stephen Hemminger <stephen@networkplumber.org>
......
...@@ -610,5 +610,19 @@ &pinctrl_pwm1_chan2_default
			clocks = <&clk_sysin>;
			st,pwm-num-chan = <4>;
		};
+
+		rng10: rng@08a89000 {
+			compatible = "st,rng";
+			reg = <0x08a89000 0x1000>;
+			clocks = <&clk_sysin>;
+			status = "okay";
+		};
+
+		rng11: rng@08a8a000 {
+			compatible = "st,rng";
+			reg = <0x08a8a000 0x1000>;
+			clocks = <&clk_sysin>;
+			status = "okay";
+		};
	};
};
...@@ -174,6 +174,13 @@ rcc: rcc@40023810 {
			reg = <0x40023800 0x400>;
			clocks = <&clk_hse>;
		};
+
+		rng: rng@50060800 {
+			compatible = "st,stm32-rng";
+			reg = <0x50060800 0x400>;
+			interrupts = <80>;
+			clocks = <&rcc 0 38>;
+		};
	};
};
......
...@@ -19,7 +19,7 @@
#include <crypto/sha.h>

/* must be big enough for the largest SHA variant */
-#define SHA_MAX_STATE_SIZE 16
+#define SHA_MAX_STATE_SIZE (SHA512_DIGEST_SIZE / 4)
#define SHA_MAX_BLOCK_SIZE SHA512_BLOCK_SIZE

struct s390_sha_ctx {
......
...@@ -171,9 +171,11 @@ asinstr += $(call as-instr,pshufb %xmm0$(comma)%xmm0,-DCONFIG_AS_SSSE3=1)
asinstr += $(call as-instr,crc32l %eax$(comma)%eax,-DCONFIG_AS_CRC32=1)
avx_instr := $(call as-instr,vxorps %ymm0$(comma)%ymm1$(comma)%ymm2,-DCONFIG_AS_AVX=1)
avx2_instr :=$(call as-instr,vpbroadcastb %xmm0$(comma)%ymm1,-DCONFIG_AS_AVX2=1)
+sha1_ni_instr :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA1_NI=1)
+sha256_ni_instr :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,-DCONFIG_AS_SHA256_NI=1)

-KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr)
-KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr)
+KBUILD_AFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(sha1_ni_instr) $(sha256_ni_instr)
+KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr) $(avx2_instr) $(sha1_ni_instr) $(sha256_ni_instr)

LDFLAGS := -m elf_$(UTS_MACHINE)
......
...@@ -5,6 +5,8 @@
avx_supported := $(call as-instr,vpxor %xmm0$(comma)%xmm0$(comma)%xmm0,yes,no)
avx2_supported := $(call as-instr,vpgatherdd %ymm0$(comma)(%eax$(comma)%ymm1\
			$(comma)4)$(comma)%ymm2,yes,no)
+sha1_ni_supported :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,yes,no)
+sha256_ni_supported :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,yes,no)

obj-$(CONFIG_CRYPTO_GLUE_HELPER_X86) += glue_helper.o
...@@ -91,9 +93,15 @@ ifeq ($(avx2_supported),yes)
	sha1-ssse3-y += sha1_avx2_x86_64_asm.o
	poly1305-x86_64-y += poly1305-avx2-x86_64.o
endif
+ifeq ($(sha1_ni_supported),yes)
+sha1-ssse3-y += sha1_ni_asm.o
+endif
crc32c-intel-y := crc32c-intel_glue.o
crc32c-intel-$(CONFIG_64BIT) += crc32c-pcl-intel-asm_64.o
crc32-pclmul-y := crc32-pclmul_asm.o crc32-pclmul_glue.o
sha256-ssse3-y := sha256-ssse3-asm.o sha256-avx-asm.o sha256-avx2-asm.o sha256_ssse3_glue.o
+ifeq ($(sha256_ni_supported),yes)
+sha256-ssse3-y += sha256_ni_asm.o
+endif
sha512-ssse3-y := sha512-ssse3-asm.o sha512-avx-asm.o sha512-avx2-asm.o sha512_ssse3_glue.o
crct10dif-pclmul-y := crct10dif-pcl-asm_64.o crct10dif-pclmul_glue.o
...@@ -330,7 +330,7 @@ ENDPROC(crc_pcl)
	## PCLMULQDQ tables
	## Table is 128 entries x 2 words (8 bytes) each
	################################################################
-.section .rotata, "a", %progbits
+.section .rodata, "a", %progbits
.align 8
K_table:
	.long 0x493c7d27, 0x00000001
......
/*
* Intel SHA Extensions optimized implementation of a SHA-1 update function
*
* This file is provided under a dual BSD/GPLv2 license. When using or
* redistributing this file, you may do so under either license.
*
* GPL LICENSE SUMMARY
*
* Copyright(c) 2015 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
*
* Contact Information:
* Sean Gulley <sean.m.gulley@intel.com>
* Tim Chen <tim.c.chen@linux.intel.com>
*
* BSD LICENSE
*
* Copyright(c) 2015 Intel Corporation.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Intel Corporation nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*/
#include <linux/linkage.h>
#define DIGEST_PTR %rdi /* 1st arg */
#define DATA_PTR %rsi /* 2nd arg */
#define NUM_BLKS %rdx /* 3rd arg */
#define RSPSAVE %rax
/* gcc conversion */
#define FRAME_SIZE 32 /* space for 2x16 bytes */
#define ABCD %xmm0
#define E0 %xmm1 /* Need two E's b/c they ping pong */
#define E1 %xmm2
#define MSG0 %xmm3
#define MSG1 %xmm4
#define MSG2 %xmm5
#define MSG3 %xmm6
#define SHUF_MASK %xmm7
/*
* Intel SHA Extensions optimized implementation of a SHA-1 update function
*
* The function takes a pointer to the current hash values, a pointer to the
* input data, and a number of 64 byte blocks to process. Once all blocks have
* been processed, the digest pointer is updated with the resulting hash value.
* The function only processes complete blocks, there is no functionality to
* store partial blocks. All message padding and hash value initialization must
* be done outside the update function.
*
* The indented lines in the loop are instructions related to rounds processing.
* The non-indented lines are instructions related to the message schedule.
*
* void sha1_ni_transform(uint32_t *digest, const void *data,
uint32_t numBlocks)
* digest : pointer to digest
* data: pointer to input data
* numBlocks: Number of blocks to process
*/
.text
.align 32
ENTRY(sha1_ni_transform)
mov %rsp, RSPSAVE
sub $FRAME_SIZE, %rsp
and $~0xF, %rsp
shl $6, NUM_BLKS /* convert to bytes */
jz .Ldone_hash
add DATA_PTR, NUM_BLKS /* pointer to end of data */
/* load initial hash values */
pinsrd $3, 1*16(DIGEST_PTR), E0
movdqu 0*16(DIGEST_PTR), ABCD
pand UPPER_WORD_MASK(%rip), E0
pshufd $0x1B, ABCD, ABCD
movdqa PSHUFFLE_BYTE_FLIP_MASK(%rip), SHUF_MASK
.Lloop0:
/* Save hash values for addition after rounds */
movdqa E0, (0*16)(%rsp)
movdqa ABCD, (1*16)(%rsp)
/* Rounds 0-3 */
movdqu 0*16(DATA_PTR), MSG0
pshufb SHUF_MASK, MSG0
paddd MSG0, E0
movdqa ABCD, E1
sha1rnds4 $0, E0, ABCD
/* Rounds 4-7 */
movdqu 1*16(DATA_PTR), MSG1
pshufb SHUF_MASK, MSG1
sha1nexte MSG1, E1
movdqa ABCD, E0
sha1rnds4 $0, E1, ABCD
sha1msg1 MSG1, MSG0
/* Rounds 8-11 */
movdqu 2*16(DATA_PTR), MSG2
pshufb SHUF_MASK, MSG2
sha1nexte MSG2, E0
movdqa ABCD, E1
sha1rnds4 $0, E0, ABCD
sha1msg1 MSG2, MSG1
pxor MSG2, MSG0
/* Rounds 12-15 */
movdqu 3*16(DATA_PTR), MSG3
pshufb SHUF_MASK, MSG3
sha1nexte MSG3, E1
movdqa ABCD, E0
sha1msg2 MSG3, MSG0
sha1rnds4 $0, E1, ABCD
sha1msg1 MSG3, MSG2
pxor MSG3, MSG1
/* Rounds 16-19 */
sha1nexte MSG0, E0
movdqa ABCD, E1
sha1msg2 MSG0, MSG1
sha1rnds4 $0, E0, ABCD
sha1msg1 MSG0, MSG3
pxor MSG0, MSG2
/* Rounds 20-23 */
sha1nexte MSG1, E1
movdqa ABCD, E0
sha1msg2 MSG1, MSG2
sha1rnds4 $1, E1, ABCD
sha1msg1 MSG1, MSG0
pxor MSG1, MSG3
/* Rounds 24-27 */
sha1nexte MSG2, E0
movdqa ABCD, E1
sha1msg2 MSG2, MSG3
sha1rnds4 $1, E0, ABCD
sha1msg1 MSG2, MSG1
pxor MSG2, MSG0
/* Rounds 28-31 */
sha1nexte MSG3, E1
movdqa ABCD, E0
sha1msg2 MSG3, MSG0
sha1rnds4 $1, E1, ABCD
sha1msg1 MSG3, MSG2
pxor MSG3, MSG1
/* Rounds 32-35 */
sha1nexte MSG0, E0
movdqa ABCD, E1
sha1msg2 MSG0, MSG1
sha1rnds4 $1, E0, ABCD
sha1msg1 MSG0, MSG3
pxor MSG0, MSG2
/* Rounds 36-39 */
sha1nexte MSG1, E1
movdqa ABCD, E0
sha1msg2 MSG1, MSG2
sha1rnds4 $1, E1, ABCD
sha1msg1 MSG1, MSG0
pxor MSG1, MSG3
/* Rounds 40-43 */
sha1nexte MSG2, E0
movdqa ABCD, E1
sha1msg2 MSG2, MSG3
sha1rnds4 $2, E0, ABCD
sha1msg1 MSG2, MSG1
pxor MSG2, MSG0
/* Rounds 44-47 */
sha1nexte MSG3, E1
movdqa ABCD, E0
sha1msg2 MSG3, MSG0
sha1rnds4 $2, E1, ABCD
sha1msg1 MSG3, MSG2
pxor MSG3, MSG1
/* Rounds 48-51 */
sha1nexte MSG0, E0
movdqa ABCD, E1
sha1msg2 MSG0, MSG1
sha1rnds4 $2, E0, ABCD
sha1msg1 MSG0, MSG3
pxor MSG0, MSG2
/* Rounds 52-55 */
sha1nexte MSG1, E1
movdqa ABCD, E0
sha1msg2 MSG1, MSG2
sha1rnds4 $2, E1, ABCD
sha1msg1 MSG1, MSG0
pxor MSG1, MSG3
/* Rounds 56-59 */
sha1nexte MSG2, E0
movdqa ABCD, E1
sha1msg2 MSG2, MSG3
sha1rnds4 $2, E0, ABCD
sha1msg1 MSG2, MSG1
pxor MSG2, MSG0
/* Rounds 60-63 */
sha1nexte MSG3, E1
movdqa ABCD, E0
sha1msg2 MSG3, MSG0
sha1rnds4 $3, E1, ABCD
sha1msg1 MSG3, MSG2
pxor MSG3, MSG1
/* Rounds 64-67 */
sha1nexte MSG0, E0
movdqa ABCD, E1
sha1msg2 MSG0, MSG1
sha1rnds4 $3, E0, ABCD
sha1msg1 MSG0, MSG3
pxor MSG0, MSG2
/* Rounds 68-71 */
sha1nexte MSG1, E1
movdqa ABCD, E0
sha1msg2 MSG1, MSG2
sha1rnds4 $3, E1, ABCD
pxor MSG1, MSG3
/* Rounds 72-75 */
sha1nexte MSG2, E0
movdqa ABCD, E1
sha1msg2 MSG2, MSG3
sha1rnds4 $3, E0, ABCD
/* Rounds 76-79 */
sha1nexte MSG3, E1
movdqa ABCD, E0
sha1rnds4 $3, E1, ABCD
/* Add current hash values with previously saved */
sha1nexte (0*16)(%rsp), E0
paddd (1*16)(%rsp), ABCD
/* Increment data pointer and loop if more to process */
add $64, DATA_PTR
cmp NUM_BLKS, DATA_PTR
jne .Lloop0
/* Write hash values back in the correct order */
pshufd $0x1B, ABCD, ABCD
movdqu ABCD, 0*16(DIGEST_PTR)
pextrd $3, E0, 1*16(DIGEST_PTR)
.Ldone_hash:
mov RSPSAVE, %rsp
ret
ENDPROC(sha1_ni_transform)
.data
.align 64
PSHUFFLE_BYTE_FLIP_MASK:
.octa 0x000102030405060708090a0b0c0d0e0f
UPPER_WORD_MASK:
.octa 0xFFFFFFFF000000000000000000000000
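The header comment above fully specifies the calling convention of sha1_ni_transform: digest in/out through the first argument, whole 64-byte blocks only, padding and state initialization done by the caller. A minimal, hypothetical C caller sketch (this is not the kernel's glue code, which lives in arch/x86/crypto/sha1_ssse3_glue.c; sha1_ni_hash_blocks is an invented helper name):

#include <stddef.h>
#include <stdint.h>

void sha1_ni_transform(uint32_t *digest, const void *data, uint32_t numBlocks);

/* Feed only whole 64-byte blocks to the assembly routine; initial state
 * setup and final padding are the caller's job, as stated above.
 * Returns the number of bytes consumed. */
static size_t sha1_ni_hash_blocks(uint32_t state[5], const uint8_t *data, size_t len)
{
	size_t blocks = len / 64;

	if (blocks)
		sha1_ni_transform(state, data, (uint32_t)blocks);
	return blocks * 64;
}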
...@@ -41,19 +41,11 @@
asmlinkage void sha512_transform_ssse3(u64 *digest, const char *data,
				       u64 rounds);
-#ifdef CONFIG_AS_AVX
-asmlinkage void sha512_transform_avx(u64 *digest, const char *data,
-				     u64 rounds);
-#endif
-#ifdef CONFIG_AS_AVX2
-asmlinkage void sha512_transform_rorx(u64 *digest, const char *data,
-				      u64 rounds);
-#endif
-static void (*sha512_transform_asm)(u64 *, const char *, u64);
+typedef void (sha512_transform_fn)(u64 *digest, const char *data, u64 rounds);

-static int sha512_ssse3_update(struct shash_desc *desc, const u8 *data,
-			       unsigned int len)
+static int sha512_update(struct shash_desc *desc, const u8 *data,
+			 unsigned int len, sha512_transform_fn *sha512_xform)
{
	struct sha512_state *sctx = shash_desc_ctx(desc);
...@@ -66,14 +58,14 @@ static int sha512_ssse3_update(struct shash_desc *desc, const u8 *data,
	kernel_fpu_begin();
	sha512_base_do_update(desc, data, len,
-			      (sha512_block_fn *)sha512_transform_asm);
+			      (sha512_block_fn *)sha512_xform);
	kernel_fpu_end();

	return 0;
}

-static int sha512_ssse3_finup(struct shash_desc *desc, const u8 *data,
-			      unsigned int len, u8 *out)
+static int sha512_finup(struct shash_desc *desc, const u8 *data,
+			unsigned int len, u8 *out, sha512_transform_fn *sha512_xform)
{
	if (!irq_fpu_usable())
		return crypto_sha512_finup(desc, data, len, out);
...@@ -81,20 +73,32 @@ static int sha512_ssse3_finup(struct shash_desc *desc, const u8 *data,
	kernel_fpu_begin();
	if (len)
		sha512_base_do_update(desc, data, len,
-				      (sha512_block_fn *)sha512_transform_asm);
-	sha512_base_do_finalize(desc, (sha512_block_fn *)sha512_transform_asm);
+				      (sha512_block_fn *)sha512_xform);
+	sha512_base_do_finalize(desc, (sha512_block_fn *)sha512_xform);
	kernel_fpu_end();

	return sha512_base_finish(desc, out);
}

+static int sha512_ssse3_update(struct shash_desc *desc, const u8 *data,
+			       unsigned int len)
+{
+	return sha512_update(desc, data, len, sha512_transform_ssse3);
+}
+
+static int sha512_ssse3_finup(struct shash_desc *desc, const u8 *data,
+			      unsigned int len, u8 *out)
+{
+	return sha512_finup(desc, data, len, out, sha512_transform_ssse3);
+}
+
/* Add padding and return the message digest. */
static int sha512_ssse3_final(struct shash_desc *desc, u8 *out)
{
	return sha512_ssse3_finup(desc, NULL, 0, out);
}

-static struct shash_alg algs[] = { {
+static struct shash_alg sha512_ssse3_algs[] = { {
	.digestsize = SHA512_DIGEST_SIZE,
	.init = sha512_base_init,
	.update = sha512_ssse3_update,
...@@ -126,8 +130,25 @@ static struct shash_alg algs[] = { {
	}
} };
static int register_sha512_ssse3(void)
{
if (boot_cpu_has(X86_FEATURE_SSSE3))
return crypto_register_shashes(sha512_ssse3_algs,
ARRAY_SIZE(sha512_ssse3_algs));
return 0;
}
static void unregister_sha512_ssse3(void)
{
if (boot_cpu_has(X86_FEATURE_SSSE3))
crypto_unregister_shashes(sha512_ssse3_algs,
ARRAY_SIZE(sha512_ssse3_algs));
}
#ifdef CONFIG_AS_AVX #ifdef CONFIG_AS_AVX
static bool __init avx_usable(void) asmlinkage void sha512_transform_avx(u64 *digest, const char *data,
u64 rounds);
static bool avx_usable(void)
{ {
if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) {
if (cpu_has_avx) if (cpu_has_avx)
...@@ -137,47 +158,185 @@ static bool __init avx_usable(void) ...@@ -137,47 +158,185 @@ static bool __init avx_usable(void)
return true; return true;
} }
#endif
static int __init sha512_ssse3_mod_init(void) static int sha512_avx_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{ {
/* test for SSSE3 first */ return sha512_update(desc, data, len, sha512_transform_avx);
if (cpu_has_ssse3) }
sha512_transform_asm = sha512_transform_ssse3;
#ifdef CONFIG_AS_AVX static int sha512_avx_finup(struct shash_desc *desc, const u8 *data,
/* allow AVX to override SSSE3, it's a little faster */ unsigned int len, u8 *out)
if (avx_usable()) { {
#ifdef CONFIG_AS_AVX2 return sha512_finup(desc, data, len, out, sha512_transform_avx);
if (boot_cpu_has(X86_FEATURE_AVX2)) }
sha512_transform_asm = sha512_transform_rorx;
else /* Add padding and return the message digest. */
#endif static int sha512_avx_final(struct shash_desc *desc, u8 *out)
sha512_transform_asm = sha512_transform_avx; {
return sha512_avx_finup(desc, NULL, 0, out);
}
static struct shash_alg sha512_avx_algs[] = { {
.digestsize = SHA512_DIGEST_SIZE,
.init = sha512_base_init,
.update = sha512_avx_update,
.final = sha512_avx_final,
.finup = sha512_avx_finup,
.descsize = sizeof(struct sha512_state),
.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-avx",
.cra_priority = 160,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
} }
#endif }, {
.digestsize = SHA384_DIGEST_SIZE,
.init = sha384_base_init,
.update = sha512_avx_update,
.final = sha512_avx_final,
.finup = sha512_avx_finup,
.descsize = sizeof(struct sha512_state),
.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-avx",
.cra_priority = 160,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
} };
if (sha512_transform_asm) { static int register_sha512_avx(void)
#ifdef CONFIG_AS_AVX {
if (sha512_transform_asm == sha512_transform_avx) if (avx_usable())
pr_info("Using AVX optimized SHA-512 implementation\n"); return crypto_register_shashes(sha512_avx_algs,
#ifdef CONFIG_AS_AVX2 ARRAY_SIZE(sha512_avx_algs));
else if (sha512_transform_asm == sha512_transform_rorx) return 0;
pr_info("Using AVX2 optimized SHA-512 implementation\n"); }
static void unregister_sha512_avx(void)
{
if (avx_usable())
crypto_unregister_shashes(sha512_avx_algs,
ARRAY_SIZE(sha512_avx_algs));
}
#else
static inline int register_sha512_avx(void) { return 0; }
static inline void unregister_sha512_avx(void) { }
#endif #endif
else
#if defined(CONFIG_AS_AVX2) && defined(CONFIG_AS_AVX)
asmlinkage void sha512_transform_rorx(u64 *digest, const char *data,
u64 rounds);
static int sha512_avx2_update(struct shash_desc *desc, const u8 *data,
unsigned int len)
{
return sha512_update(desc, data, len, sha512_transform_rorx);
}
static int sha512_avx2_finup(struct shash_desc *desc, const u8 *data,
unsigned int len, u8 *out)
{
return sha512_finup(desc, data, len, out, sha512_transform_rorx);
}
/* Add padding and return the message digest. */
static int sha512_avx2_final(struct shash_desc *desc, u8 *out)
{
return sha512_avx2_finup(desc, NULL, 0, out);
}
static struct shash_alg sha512_avx2_algs[] = { {
.digestsize = SHA512_DIGEST_SIZE,
.init = sha512_base_init,
.update = sha512_avx2_update,
.final = sha512_avx2_final,
.finup = sha512_avx2_finup,
.descsize = sizeof(struct sha512_state),
.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-avx2",
.cra_priority = 170,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
}, {
.digestsize = SHA384_DIGEST_SIZE,
.init = sha384_base_init,
.update = sha512_avx2_update,
.final = sha512_avx2_final,
.finup = sha512_avx2_finup,
.descsize = sizeof(struct sha512_state),
.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-avx2",
.cra_priority = 170,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_module = THIS_MODULE,
}
} };
static bool avx2_usable(void)
{
if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2) &&
boot_cpu_has(X86_FEATURE_BMI2))
return true;
return false;
}
static int register_sha512_avx2(void)
{
if (avx2_usable())
return crypto_register_shashes(sha512_avx2_algs,
ARRAY_SIZE(sha512_avx2_algs));
return 0;
}
static void unregister_sha512_avx2(void)
{
if (avx2_usable())
crypto_unregister_shashes(sha512_avx2_algs,
ARRAY_SIZE(sha512_avx2_algs));
}
#else
static inline int register_sha512_avx2(void) { return 0; }
static inline void unregister_sha512_avx2(void) { }
#endif #endif
pr_info("Using SSSE3 optimized SHA-512 implementation\n");
return crypto_register_shashes(algs, ARRAY_SIZE(algs)); static int __init sha512_ssse3_mod_init(void)
{
if (register_sha512_ssse3())
goto fail;
if (register_sha512_avx()) {
unregister_sha512_ssse3();
goto fail;
}
if (register_sha512_avx2()) {
unregister_sha512_avx();
unregister_sha512_ssse3();
goto fail;
} }
pr_info("Neither AVX nor SSSE3 is available/usable.\n");
return 0;
fail:
return -ENODEV; return -ENODEV;
} }
static void __exit sha512_ssse3_mod_fini(void) static void __exit sha512_ssse3_mod_fini(void)
{ {
crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); unregister_sha512_avx2();
unregister_sha512_avx();
unregister_sha512_ssse3();
} }
module_init(sha512_ssse3_mod_init); module_init(sha512_ssse3_mod_init);
......
...@@ -348,6 +348,13 @@ config CRYPTO_XTS
	  key size 256, 384 or 512 bits. This implementation currently
	  can't handle a sectorsize which is not a multiple of 16 bytes.

+config CRYPTO_KEYWRAP
+	tristate "Key wrapping support"
+	select CRYPTO_BLKCIPHER
+	help
+	  Support for key wrapping (NIST SP800-38F / RFC3394) without
+	  padding.
+
comment "Hash modes"

config CRYPTO_CMAC
...@@ -597,17 +604,18 @@ config CRYPTO_SHA1
	  SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).

config CRYPTO_SHA1_SSSE3
-	tristate "SHA1 digest algorithm (SSSE3/AVX/AVX2)"
+	tristate "SHA1 digest algorithm (SSSE3/AVX/AVX2/SHA-NI)"
	depends on X86 && 64BIT
	select CRYPTO_SHA1
	select CRYPTO_HASH
	help
	  SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2) implemented
	  using Supplemental SSE3 (SSSE3) instructions or Advanced Vector
-	  Extensions (AVX/AVX2), when available.
+	  Extensions (AVX/AVX2) or SHA-NI(SHA Extensions New Instructions),
+	  when available.

config CRYPTO_SHA256_SSSE3
-	tristate "SHA256 digest algorithm (SSSE3/AVX/AVX2)"
+	tristate "SHA256 digest algorithm (SSSE3/AVX/AVX2/SHA-NI)"
	depends on X86 && 64BIT
	select CRYPTO_SHA256
	select CRYPTO_HASH
...@@ -615,7 +623,8 @@ config CRYPTO_SHA256_SSSE3
	  SHA-256 secure hash standard (DFIPS 180-2) implemented
	  using Supplemental SSE3 (SSSE3) instructions, or Advanced Vector
	  Extensions version 1 (AVX1), or Advanced Vector Extensions
-	  version 2 (AVX2) instructions, when available.
+	  version 2 (AVX2) instructions, or SHA-NI (SHA Extensions New
+	  Instructions) when available.

config CRYPTO_SHA512_SSSE3
	tristate "SHA512 digest algorithm (SSSE3/AVX/AVX2)"
......
...@@ -31,10 +31,13 @@ obj-$(CONFIG_CRYPTO_HASH2) += crypto_hash.o
obj-$(CONFIG_CRYPTO_PCOMP2) += pcompress.o
obj-$(CONFIG_CRYPTO_AKCIPHER2) += akcipher.o

-$(obj)/rsakey-asn1.o: $(obj)/rsakey-asn1.c $(obj)/rsakey-asn1.h
-clean-files += rsakey-asn1.c rsakey-asn1.h
+$(obj)/rsapubkey-asn1.o: $(obj)/rsapubkey-asn1.c $(obj)/rsapubkey-asn1.h
+$(obj)/rsaprivkey-asn1.o: $(obj)/rsaprivkey-asn1.c $(obj)/rsaprivkey-asn1.h
+clean-files += rsapubkey-asn1.c rsapubkey-asn1.h
+clean-files += rsaprivkey-asn1.c rsaprivkey-asn1.h

-rsa_generic-y := rsakey-asn1.o
+rsa_generic-y := rsapubkey-asn1.o
+rsa_generic-y += rsaprivkey-asn1.o
rsa_generic-y += rsa.o
rsa_generic-y += rsa_helper.o
obj-$(CONFIG_CRYPTO_RSA) += rsa_generic.o
...@@ -67,6 +70,7 @@ obj-$(CONFIG_CRYPTO_CTS) += cts.o
obj-$(CONFIG_CRYPTO_LRW) += lrw.o
obj-$(CONFIG_CRYPTO_XTS) += xts.o
obj-$(CONFIG_CRYPTO_CTR) += ctr.o
+obj-$(CONFIG_CRYPTO_KEYWRAP) += keywrap.o
obj-$(CONFIG_CRYPTO_GCM) += gcm.o
obj-$(CONFIG_CRYPTO_CCM) += ccm.o
obj-$(CONFIG_CRYPTO_CHACHA20POLY1305) += chacha20poly1305.o
......
...@@ -21,7 +21,6 @@
#include <linux/cryptouser.h>
#include <net/netlink.h>
#include <crypto/akcipher.h>
-#include <crypto/public_key.h>
#include "internal.h"

#ifdef CONFIG_NET
......
...@@ -49,11 +49,12 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
	sinfo->sig.digest_size = digest_size = crypto_shash_digestsize(tfm);

	ret = -ENOMEM;
-	digest = kzalloc(digest_size + desc_size, GFP_KERNEL);
+	digest = kzalloc(ALIGN(digest_size, __alignof__(*desc)) + desc_size,
+			 GFP_KERNEL);
	if (!digest)
		goto error_no_desc;

-	desc = digest + digest_size;
+	desc = PTR_ALIGN(digest + digest_size, __alignof__(*desc));
	desc->tfm = tfm;
	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
......
...@@ -546,9 +546,9 @@ int x509_decode_time(time64_t *_t, size_t hdrlen,
	if (year < 1970 ||
	    mon < 1 || mon > 12 ||
	    day < 1 || day > mon_len ||
-	    hour < 0 || hour > 23 ||
-	    min < 0 || min > 59 ||
-	    sec < 0 || sec > 59)
+	    hour > 23 ||
+	    min > 59 ||
+	    sec > 59)
		goto invalid_time;

	*_t = mktime64(year, mon, day, hour, min, sec);
......
...@@ -194,14 +194,15 @@ int x509_get_sig_params(struct x509_certificate *cert)
	 * digest storage space.
	 */
	ret = -ENOMEM;
-	digest = kzalloc(digest_size + desc_size, GFP_KERNEL);
+	digest = kzalloc(ALIGN(digest_size, __alignof__(*desc)) + desc_size,
+			 GFP_KERNEL);
	if (!digest)
		goto error;

	cert->sig.digest = digest;
	cert->sig.digest_size = digest_size;

-	desc = digest + digest_size;
+	desc = PTR_ALIGN(digest + digest_size, __alignof__(*desc));
	desc->tfm = tfm;
	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
......
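Both alignment hunks above apply the same pattern: round the digest area up to the descriptor's alignment so that a single allocation can hold the digest bytes followed by a correctly aligned descriptor, which is what fixes the unaligned access. A stand-alone C sketch of that pattern (illustrative names only, not the kernel code):

#include <stddef.h>
#include <stdlib.h>

struct desc { long state[8]; };	/* stand-in for the hash descriptor */

#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((size_t)(a) - 1))

/* One allocation: digest bytes first, then a naturally aligned descriptor. */
static struct desc *alloc_digest_and_desc(size_t digest_size, unsigned char **digest)
{
	size_t offset = ALIGN_UP(digest_size, _Alignof(struct desc));
	unsigned char *buf = calloc(1, offset + sizeof(struct desc));

	if (!buf)
		return NULL;
	*digest = buf;				/* digest storage */
	return (struct desc *)(buf + offset);	/* descriptor on an aligned boundary */
}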
...@@ -98,10 +98,6 @@ void jent_get_nstime(__u64 *out)
	 * If random_get_entropy does not return a value (which is possible on,
	 * for example, MIPS), invoke __getnstimeofday
	 * hoping that there are timers we can work with.
-	 *
-	 * The list of available timers can be obtained from
-	 * /sys/devices/system/clocksource/clocksource0/available_clocksource
-	 * and are registered with clocksource_register()
	 */
	if ((0 == tmp) &&
	    (0 == __getnstimeofday(&ts))) {
......
...@@ -97,24 +97,21 @@ static int rsa_enc(struct akcipher_request *req) ...@@ -97,24 +97,21 @@ static int rsa_enc(struct akcipher_request *req)
goto err_free_c; goto err_free_c;
} }
m = mpi_read_raw_data(req->src, req->src_len);
if (!m) {
ret = -ENOMEM; ret = -ENOMEM;
m = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!m)
goto err_free_c; goto err_free_c;
}
ret = _rsa_enc(pkey, c, m); ret = _rsa_enc(pkey, c, m);
if (ret) if (ret)
goto err_free_m; goto err_free_m;
ret = mpi_read_buffer(c, req->dst, req->dst_len, &req->dst_len, &sign); ret = mpi_write_to_sgl(c, req->dst, &req->dst_len, &sign);
if (ret) if (ret)
goto err_free_m; goto err_free_m;
if (sign < 0) { if (sign < 0)
ret = -EBADMSG; ret = -EBADMSG;
goto err_free_m;
}
err_free_m: err_free_m:
mpi_free(m); mpi_free(m);
...@@ -145,25 +142,21 @@ static int rsa_dec(struct akcipher_request *req) ...@@ -145,25 +142,21 @@ static int rsa_dec(struct akcipher_request *req)
goto err_free_m; goto err_free_m;
} }
c = mpi_read_raw_data(req->src, req->src_len);
if (!c) {
ret = -ENOMEM; ret = -ENOMEM;
c = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!c)
goto err_free_m; goto err_free_m;
}
ret = _rsa_dec(pkey, m, c); ret = _rsa_dec(pkey, m, c);
if (ret) if (ret)
goto err_free_c; goto err_free_c;
ret = mpi_read_buffer(m, req->dst, req->dst_len, &req->dst_len, &sign); ret = mpi_write_to_sgl(m, req->dst, &req->dst_len, &sign);
if (ret) if (ret)
goto err_free_c; goto err_free_c;
if (sign < 0) { if (sign < 0)
ret = -EBADMSG; ret = -EBADMSG;
goto err_free_c;
}
err_free_c: err_free_c:
mpi_free(c); mpi_free(c);
err_free_m: err_free_m:
...@@ -193,24 +186,21 @@ static int rsa_sign(struct akcipher_request *req) ...@@ -193,24 +186,21 @@ static int rsa_sign(struct akcipher_request *req)
goto err_free_s; goto err_free_s;
} }
m = mpi_read_raw_data(req->src, req->src_len);
if (!m) {
ret = -ENOMEM; ret = -ENOMEM;
m = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!m)
goto err_free_s; goto err_free_s;
}
ret = _rsa_sign(pkey, s, m); ret = _rsa_sign(pkey, s, m);
if (ret) if (ret)
goto err_free_m; goto err_free_m;
ret = mpi_read_buffer(s, req->dst, req->dst_len, &req->dst_len, &sign); ret = mpi_write_to_sgl(s, req->dst, &req->dst_len, &sign);
if (ret) if (ret)
goto err_free_m; goto err_free_m;
if (sign < 0) { if (sign < 0)
ret = -EBADMSG; ret = -EBADMSG;
goto err_free_m;
}
err_free_m: err_free_m:
mpi_free(m); mpi_free(m);
...@@ -241,7 +231,8 @@ static int rsa_verify(struct akcipher_request *req) ...@@ -241,7 +231,8 @@ static int rsa_verify(struct akcipher_request *req)
goto err_free_m; goto err_free_m;
} }
s = mpi_read_raw_data(req->src, req->src_len); ret = -ENOMEM;
s = mpi_read_raw_from_sgl(req->src, req->src_len);
if (!s) { if (!s) {
ret = -ENOMEM; ret = -ENOMEM;
goto err_free_m; goto err_free_m;
...@@ -251,14 +242,12 @@ static int rsa_verify(struct akcipher_request *req) ...@@ -251,14 +242,12 @@ static int rsa_verify(struct akcipher_request *req)
if (ret) if (ret)
goto err_free_s; goto err_free_s;
ret = mpi_read_buffer(m, req->dst, req->dst_len, &req->dst_len, &sign); ret = mpi_write_to_sgl(m, req->dst, &req->dst_len, &sign);
if (ret) if (ret)
goto err_free_s; goto err_free_s;
if (sign < 0) { if (sign < 0)
ret = -EBADMSG; ret = -EBADMSG;
goto err_free_s;
}
err_free_s: err_free_s:
mpi_free(s); mpi_free(s);
...@@ -282,13 +271,30 @@ static int rsa_check_key_length(unsigned int len) ...@@ -282,13 +271,30 @@ static int rsa_check_key_length(unsigned int len)
return -EINVAL; return -EINVAL;
} }
static int rsa_setkey(struct crypto_akcipher *tfm, const void *key, static int rsa_set_pub_key(struct crypto_akcipher *tfm, const void *key,
unsigned int keylen)
{
struct rsa_key *pkey = akcipher_tfm_ctx(tfm);
int ret;
ret = rsa_parse_pub_key(pkey, key, keylen);
if (ret)
return ret;
if (rsa_check_key_length(mpi_get_size(pkey->n) << 3)) {
rsa_free_key(pkey);
ret = -EINVAL;
}
return ret;
}
static int rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key,
unsigned int keylen) unsigned int keylen)
{ {
struct rsa_key *pkey = akcipher_tfm_ctx(tfm); struct rsa_key *pkey = akcipher_tfm_ctx(tfm);
int ret; int ret;
ret = rsa_parse_key(pkey, key, keylen); ret = rsa_parse_priv_key(pkey, key, keylen);
if (ret) if (ret)
return ret; return ret;
...@@ -299,6 +305,13 @@ static int rsa_setkey(struct crypto_akcipher *tfm, const void *key, ...@@ -299,6 +305,13 @@ static int rsa_setkey(struct crypto_akcipher *tfm, const void *key,
return ret; return ret;
} }
static int rsa_max_size(struct crypto_akcipher *tfm)
{
struct rsa_key *pkey = akcipher_tfm_ctx(tfm);
return pkey->n ? mpi_get_size(pkey->n) : -EINVAL;
}
static void rsa_exit_tfm(struct crypto_akcipher *tfm) static void rsa_exit_tfm(struct crypto_akcipher *tfm)
{ {
struct rsa_key *pkey = akcipher_tfm_ctx(tfm); struct rsa_key *pkey = akcipher_tfm_ctx(tfm);
...@@ -311,7 +324,9 @@ static struct akcipher_alg rsa = { ...@@ -311,7 +324,9 @@ static struct akcipher_alg rsa = {
.decrypt = rsa_dec, .decrypt = rsa_dec,
.sign = rsa_sign, .sign = rsa_sign,
.verify = rsa_verify, .verify = rsa_verify,
.setkey = rsa_setkey, .set_priv_key = rsa_set_priv_key,
.set_pub_key = rsa_set_pub_key,
.max_size = rsa_max_size,
.exit = rsa_exit_tfm, .exit = rsa_exit_tfm,
.base = { .base = {
.cra_name = "rsa", .cra_name = "rsa",
......
...@@ -15,7 +15,8 @@
#include <linux/err.h>
#include <linux/fips.h>
#include <crypto/internal/rsa.h>
-#include "rsakey-asn1.h"
+#include "rsapubkey-asn1.h"
+#include "rsaprivkey-asn1.h"

int rsa_get_n(void *context, size_t hdrlen, unsigned char tag,
	      const void *value, size_t vlen)
...@@ -94,7 +95,7 @@ void rsa_free_key(struct rsa_key *key)
EXPORT_SYMBOL_GPL(rsa_free_key);

/**
- * rsa_parse_key() - extracts an rsa key from BER encoded buffer
+ * rsa_parse_pub_key() - extracts an rsa public key from BER encoded buffer
 *                    and stores it in the provided struct rsa_key
 *
 * @rsa_key:	struct rsa_key key representation
...@@ -103,13 +104,13 @@ EXPORT_SYMBOL_GPL(rsa_free_key);
 *
 * Return:	0 on success or error code in case of error
 */
-int rsa_parse_key(struct rsa_key *rsa_key, const void *key,
+int rsa_parse_pub_key(struct rsa_key *rsa_key, const void *key,
		  unsigned int key_len)
{
	int ret;

	free_mpis(rsa_key);
-	ret = asn1_ber_decoder(&rsakey_decoder, rsa_key, key, key_len);
+	ret = asn1_ber_decoder(&rsapubkey_decoder, rsa_key, key, key_len);
	if (ret < 0)
		goto error;
...@@ -118,4 +119,31 @@ int rsa_parse_key(struct rsa_key *rsa_key, const void *key,
	free_mpis(rsa_key);
	return ret;
}
-EXPORT_SYMBOL_GPL(rsa_parse_key);
+EXPORT_SYMBOL_GPL(rsa_parse_pub_key);
+
+/**
+ * rsa_parse_priv_key() - extracts an rsa private key from BER encoded buffer
+ *                        and stores it in the provided struct rsa_key
+ *
+ * @rsa_key:	struct rsa_key key representation
+ * @key:	key in BER format
+ * @key_len:	length of key
+ *
+ * Return:	0 on success or error code in case of error
+ */
+int rsa_parse_priv_key(struct rsa_key *rsa_key, const void *key,
+		       unsigned int key_len)
+{
+	int ret;
+
+	free_mpis(rsa_key);
+	ret = asn1_ber_decoder(&rsaprivkey_decoder, rsa_key, key, key_len);
+	if (ret < 0)
+		goto error;
+
+	return 0;
+error:
+	free_mpis(rsa_key);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(rsa_parse_priv_key);
RsaKey ::= SEQUENCE {
n INTEGER ({ rsa_get_n }),
e INTEGER ({ rsa_get_e }),
d INTEGER ({ rsa_get_d })
}
RsaPrivKey ::= SEQUENCE {
version INTEGER,
n INTEGER ({ rsa_get_n }),
e INTEGER ({ rsa_get_e }),
d INTEGER ({ rsa_get_d }),
prime1 INTEGER,
prime2 INTEGER,
exponent1 INTEGER,
exponent2 INTEGER,
coefficient INTEGER
}
RsaPubKey ::= SEQUENCE {
n INTEGER ({ rsa_get_n }),
e INTEGER ({ rsa_get_e })
}
...@@ -91,7 +91,7 @@ static void crypto_exit_skcipher_ops_blkcipher(struct crypto_tfm *tfm)
	crypto_free_blkcipher(*ctx);
}

-int crypto_init_skcipher_ops_blkcipher(struct crypto_tfm *tfm)
+static int crypto_init_skcipher_ops_blkcipher(struct crypto_tfm *tfm)
{
	struct crypto_alg *calg = tfm->__crt_alg;
	struct crypto_skcipher *skcipher = __crypto_skcipher_cast(tfm);
...@@ -182,7 +182,7 @@ static void crypto_exit_skcipher_ops_ablkcipher(struct crypto_tfm *tfm)
	crypto_free_ablkcipher(*ctx);
}

-int crypto_init_skcipher_ops_ablkcipher(struct crypto_tfm *tfm)
+static int crypto_init_skcipher_ops_ablkcipher(struct crypto_tfm *tfm)
{
	struct crypto_alg *calg = tfm->__crt_alg;
	struct crypto_skcipher *skcipher = __crypto_skcipher_cast(tfm);
......
...@@ -48,6 +48,8 @@
#define ENCRYPT 1
#define DECRYPT 0

+#define MAX_DIGEST_SIZE		64
+
/*
 * return a string with the driver name
 */
...@@ -950,7 +952,7 @@ static void test_ahash_speed(const char *algo, unsigned int secs,
	struct tcrypt_result tresult;
	struct ahash_request *req;
	struct crypto_ahash *tfm;
-	static char output[1024];
+	char *output;
	int i, ret;

	tfm = crypto_alloc_ahash(algo, 0, 0);
...@@ -963,9 +965,9 @@ static void test_ahash_speed(const char *algo, unsigned int secs,
	printk(KERN_INFO "\ntesting speed of async %s (%s)\n", algo,
			get_driver_name(crypto_ahash, tfm));

-	if (crypto_ahash_digestsize(tfm) > sizeof(output)) {
-		pr_err("digestsize(%u) > outputbuffer(%zu)\n",
-		       crypto_ahash_digestsize(tfm), sizeof(output));
+	if (crypto_ahash_digestsize(tfm) > MAX_DIGEST_SIZE) {
+		pr_err("digestsize(%u) > %d\n", crypto_ahash_digestsize(tfm),
+		       MAX_DIGEST_SIZE);
		goto out;
	}
...@@ -980,6 +982,10 @@ static void test_ahash_speed(const char *algo, unsigned int secs,
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   tcrypt_complete, &tresult);

+	output = kmalloc(MAX_DIGEST_SIZE, GFP_KERNEL);
+	if (!output)
+		goto out_nomem;
+
	for (i = 0; speed[i].blen != 0; i++) {
		if (speed[i].blen > TVMEMSIZE * PAGE_SIZE) {
			pr_err("template (%u) too big for tvmem (%lu)\n",
...@@ -1006,6 +1012,9 @@ static void test_ahash_speed(const char *algo, unsigned int secs,
		}
	}

+	kfree(output);
+
+out_nomem:
	ahash_request_free(req);

out:
......
...@@ -1034,12 +1034,22 @@ static int __test_skcipher(struct crypto_skcipher *tfm, int enc,
		q = data;
		if (memcmp(q, template[i].result, template[i].rlen)) {
-			pr_err("alg: skcipher%s: Test %d failed on %s for %s\n",
+			pr_err("alg: skcipher%s: Test %d failed (invalid result) on %s for %s\n",
			       d, j, e, algo);
			hexdump(q, template[i].rlen);
			ret = -EINVAL;
			goto out;
		}
+
+		if (template[i].iv_out &&
+		    memcmp(iv, template[i].iv_out,
+			   crypto_skcipher_ivsize(tfm))) {
+			pr_err("alg: skcipher%s: Test %d failed (invalid output IV) on %s for %s\n",
+			       d, j, e, algo);
+			hexdump(iv, crypto_skcipher_ivsize(tfm));
+			ret = -EINVAL;
+			goto out;
+		}
	}

	j = 0;
...@@ -1845,34 +1855,34 @@ static int do_test_rsa(struct crypto_akcipher *tfm, ...@@ -1845,34 +1855,34 @@ static int do_test_rsa(struct crypto_akcipher *tfm,
struct tcrypt_result result; struct tcrypt_result result;
unsigned int out_len_max, out_len = 0; unsigned int out_len_max, out_len = 0;
int err = -ENOMEM; int err = -ENOMEM;
struct scatterlist src, dst, src_tab[2];
req = akcipher_request_alloc(tfm, GFP_KERNEL); req = akcipher_request_alloc(tfm, GFP_KERNEL);
if (!req) if (!req)
return err; return err;
init_completion(&result.completion); init_completion(&result.completion);
err = crypto_akcipher_setkey(tfm, vecs->key, vecs->key_len);
if (err)
goto free_req;
akcipher_request_set_crypt(req, vecs->m, outbuf_enc, vecs->m_size, if (vecs->public_key_vec)
out_len); err = crypto_akcipher_set_pub_key(tfm, vecs->key,
/* expect this to fail, and update the required buf len */ vecs->key_len);
crypto_akcipher_encrypt(req); else
out_len = req->dst_len; err = crypto_akcipher_set_priv_key(tfm, vecs->key,
if (!out_len) { vecs->key_len);
err = -EINVAL; if (err)
goto free_req; goto free_req;
}
out_len_max = out_len; out_len_max = crypto_akcipher_maxsize(tfm);
err = -ENOMEM;
outbuf_enc = kzalloc(out_len_max, GFP_KERNEL); outbuf_enc = kzalloc(out_len_max, GFP_KERNEL);
if (!outbuf_enc) if (!outbuf_enc)
goto free_req; goto free_req;
akcipher_request_set_crypt(req, vecs->m, outbuf_enc, vecs->m_size, sg_init_table(src_tab, 2);
out_len); sg_set_buf(&src_tab[0], vecs->m, 8);
sg_set_buf(&src_tab[1], vecs->m + 8, vecs->m_size - 8);
sg_init_one(&dst, outbuf_enc, out_len_max);
akcipher_request_set_crypt(req, src_tab, &dst, vecs->m_size,
out_len_max);
akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG, akcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
tcrypt_complete, &result); tcrypt_complete, &result);
...@@ -1882,13 +1892,13 @@ static int do_test_rsa(struct crypto_akcipher *tfm, ...@@ -1882,13 +1892,13 @@ static int do_test_rsa(struct crypto_akcipher *tfm,
pr_err("alg: rsa: encrypt test failed. err %d\n", err); pr_err("alg: rsa: encrypt test failed. err %d\n", err);
goto free_all; goto free_all;
} }
if (out_len != vecs->c_size) { if (req->dst_len != vecs->c_size) {
pr_err("alg: rsa: encrypt test failed. Invalid output len\n"); pr_err("alg: rsa: encrypt test failed. Invalid output len\n");
err = -EINVAL; err = -EINVAL;
goto free_all; goto free_all;
} }
/* verify that encrypted message is equal to expected */ /* verify that encrypted message is equal to expected */
if (memcmp(vecs->c, outbuf_enc, vecs->c_size)) { if (memcmp(vecs->c, sg_virt(req->dst), vecs->c_size)) {
pr_err("alg: rsa: encrypt test failed. Invalid output\n"); pr_err("alg: rsa: encrypt test failed. Invalid output\n");
err = -EINVAL; err = -EINVAL;
goto free_all; goto free_all;
...@@ -1903,9 +1913,10 @@ static int do_test_rsa(struct crypto_akcipher *tfm, ...@@ -1903,9 +1913,10 @@ static int do_test_rsa(struct crypto_akcipher *tfm,
err = -ENOMEM; err = -ENOMEM;
goto free_all; goto free_all;
} }
sg_init_one(&src, vecs->c, vecs->c_size);
sg_init_one(&dst, outbuf_dec, out_len_max);
init_completion(&result.completion); init_completion(&result.completion);
akcipher_request_set_crypt(req, outbuf_enc, outbuf_dec, vecs->c_size, akcipher_request_set_crypt(req, &src, &dst, vecs->c_size, out_len_max);
out_len);
/* Run RSA decrypt - m = c^d mod n;*/ /* Run RSA decrypt - m = c^d mod n;*/
err = wait_async_op(&result, crypto_akcipher_decrypt(req)); err = wait_async_op(&result, crypto_akcipher_decrypt(req));
...@@ -2080,7 +2091,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2080,7 +2091,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(md5),ecb(cipher_null))", .alg = "authenc(hmac(md5),ecb(cipher_null))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2096,7 +2106,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2096,7 +2106,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha1),cbc(aes))", .alg = "authenc(hmac(sha1),cbc(aes))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2110,7 +2119,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2110,7 +2119,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha1),cbc(des))", .alg = "authenc(hmac(sha1),cbc(des))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2124,7 +2132,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2124,7 +2132,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha1),cbc(des3_ede))", .alg = "authenc(hmac(sha1),cbc(des3_ede))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2138,7 +2145,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2138,7 +2145,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha1),ecb(cipher_null))", .alg = "authenc(hmac(sha1),ecb(cipher_null))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2158,7 +2164,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2158,7 +2164,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha224),cbc(des))", .alg = "authenc(hmac(sha224),cbc(des))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2172,7 +2177,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2172,7 +2177,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha224),cbc(des3_ede))", .alg = "authenc(hmac(sha224),cbc(des3_ede))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2186,7 +2190,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2186,7 +2190,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha256),cbc(aes))", .alg = "authenc(hmac(sha256),cbc(aes))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2200,7 +2203,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2200,7 +2203,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha256),cbc(des))", .alg = "authenc(hmac(sha256),cbc(des))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2214,7 +2216,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2214,7 +2216,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha256),cbc(des3_ede))", .alg = "authenc(hmac(sha256),cbc(des3_ede))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2228,7 +2229,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2228,7 +2229,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha384),cbc(des))", .alg = "authenc(hmac(sha384),cbc(des))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2242,7 +2242,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2242,7 +2242,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha384),cbc(des3_ede))", .alg = "authenc(hmac(sha384),cbc(des3_ede))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2256,7 +2255,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2256,7 +2255,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha512),cbc(aes))", .alg = "authenc(hmac(sha512),cbc(aes))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2270,7 +2268,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2270,7 +2268,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha512),cbc(des))", .alg = "authenc(hmac(sha512),cbc(des))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -2284,7 +2281,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -2284,7 +2281,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "authenc(hmac(sha512),cbc(des3_ede))", .alg = "authenc(hmac(sha512),cbc(des3_ede))",
.test = alg_test_aead, .test = alg_test_aead,
.fips_allowed = 1,
.suite = { .suite = {
.aead = { .aead = {
.enc = { .enc = {
...@@ -3011,7 +3007,6 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -3011,7 +3007,6 @@ static const struct alg_test_desc alg_test_descs[] = {
}, { }, {
.alg = "ecb(des)", .alg = "ecb(des)",
.test = alg_test_skcipher, .test = alg_test_skcipher,
.fips_allowed = 1,
.suite = { .suite = {
.cipher = { .cipher = {
.enc = { .enc = {
...@@ -3291,6 +3286,22 @@ static const struct alg_test_desc alg_test_descs[] = { ...@@ -3291,6 +3286,22 @@ static const struct alg_test_desc alg_test_descs[] = {
.alg = "jitterentropy_rng", .alg = "jitterentropy_rng",
.fips_allowed = 1, .fips_allowed = 1,
.test = alg_test_null, .test = alg_test_null,
}, {
.alg = "kw(aes)",
.test = alg_test_skcipher,
.fips_allowed = 1,
.suite = {
.cipher = {
.enc = {
.vecs = aes_kw_enc_tv_template,
.count = ARRAY_SIZE(aes_kw_enc_tv_template)
},
.dec = {
.vecs = aes_kw_dec_tv_template,
.count = ARRAY_SIZE(aes_kw_dec_tv_template)
}
}
}
}, { }, {
.alg = "lrw(aes)", .alg = "lrw(aes)",
.test = alg_test_skcipher, .test = alg_test_skcipher,
......
...@@ -67,6 +67,7 @@ struct hash_testvec { ...@@ -67,6 +67,7 @@ struct hash_testvec {
struct cipher_testvec { struct cipher_testvec {
char *key; char *key;
char *iv; char *iv;
char *iv_out; /* expected IV contents after encryption, if checked */
char *input; char *input;
char *result; char *result;
unsigned short tap[MAX_TAP]; unsigned short tap[MAX_TAP];
...@@ -149,7 +150,8 @@ static struct akcipher_testvec rsa_tv_template[] = { ...@@ -149,7 +150,8 @@ static struct akcipher_testvec rsa_tv_template[] = {
{ {
#ifndef CONFIG_CRYPTO_FIPS #ifndef CONFIG_CRYPTO_FIPS
.key = .key =
"\x30\x81\x88" /* sequence of 136 bytes */ "\x30\x81\x9A" /* sequence of 154 bytes */
"\x02\x01\x01" /* version - integer of 1 byte */
"\x02\x41" /* modulus - integer of 65 bytes */ "\x02\x41" /* modulus - integer of 65 bytes */
"\x00\xAA\x36\xAB\xCE\x88\xAC\xFD\xFF\x55\x52\x3C\x7F\xC4\x52\x3F" "\x00\xAA\x36\xAB\xCE\x88\xAC\xFD\xFF\x55\x52\x3C\x7F\xC4\x52\x3F"
"\x90\xEF\xA0\x0D\xF3\x77\x4A\x25\x9F\x2E\x62\xB4\xC5\xD9\x9C\xB5" "\x90\xEF\xA0\x0D\xF3\x77\x4A\x25\x9F\x2E\x62\xB4\xC5\xD9\x9C\xB5"
...@@ -161,19 +163,25 @@ static struct akcipher_testvec rsa_tv_template[] = { ...@@ -161,19 +163,25 @@ static struct akcipher_testvec rsa_tv_template[] = {
"\x0A\x03\x37\x48\x62\x64\x87\x69\x5F\x5F\x30\xBC\x38\xB9\x8B\x44" "\x0A\x03\x37\x48\x62\x64\x87\x69\x5F\x5F\x30\xBC\x38\xB9\x8B\x44"
"\xC2\xCD\x2D\xFF\x43\x40\x98\xCD\x20\xD8\xA1\x38\xD0\x90\xBF\x64" "\xC2\xCD\x2D\xFF\x43\x40\x98\xCD\x20\xD8\xA1\x38\xD0\x90\xBF\x64"
"\x79\x7C\x3F\xA7\xA2\xCD\xCB\x3C\xD1\xE0\xBD\xBA\x26\x54\xB4\xF9" "\x79\x7C\x3F\xA7\xA2\xCD\xCB\x3C\xD1\xE0\xBD\xBA\x26\x54\xB4\xF9"
"\xDF\x8E\x8A\xE5\x9D\x73\x3D\x9F\x33\xB3\x01\x62\x4A\xFD\x1D\x51", "\xDF\x8E\x8A\xE5\x9D\x73\x3D\x9F\x33\xB3\x01\x62\x4A\xFD\x1D\x51"
"\x02\x01\x00" /* prime1 - integer of 1 byte */
"\x02\x01\x00" /* prime2 - integer of 1 byte */
"\x02\x01\x00" /* exponent1 - integer of 1 byte */
"\x02\x01\x00" /* exponent2 - integer of 1 byte */
"\x02\x01\x00", /* coefficient - integer of 1 byte */
.m = "\x54\x85\x9b\x34\x2c\x49\xea\x2a", .m = "\x54\x85\x9b\x34\x2c\x49\xea\x2a",
.c = .c =
"\x63\x1c\xcd\x7b\xe1\x7e\xe4\xde\xc9\xa8\x89\xa1\x74\xcb\x3c\x63" "\x63\x1c\xcd\x7b\xe1\x7e\xe4\xde\xc9\xa8\x89\xa1\x74\xcb\x3c\x63"
"\x7d\x24\xec\x83\xc3\x15\xe4\x7f\x73\x05\x34\xd1\xec\x22\xbb\x8a" "\x7d\x24\xec\x83\xc3\x15\xe4\x7f\x73\x05\x34\xd1\xec\x22\xbb\x8a"
"\x5e\x32\x39\x6d\xc1\x1d\x7d\x50\x3b\x9f\x7a\xad\xf0\x2e\x25\x53" "\x5e\x32\x39\x6d\xc1\x1d\x7d\x50\x3b\x9f\x7a\xad\xf0\x2e\x25\x53"
"\x9f\x6e\xbd\x4c\x55\x84\x0c\x9b\xcf\x1a\x4b\x51\x1e\x9e\x0c\x06", "\x9f\x6e\xbd\x4c\x55\x84\x0c\x9b\xcf\x1a\x4b\x51\x1e\x9e\x0c\x06",
.key_len = 139, .key_len = 157,
.m_size = 8, .m_size = 8,
.c_size = 64, .c_size = 64,
}, { }, {
.key = .key =
"\x30\x82\x01\x0B" /* sequence of 267 bytes */ "\x30\x82\x01\x1D" /* sequence of 285 bytes */
"\x02\x01\x01" /* version - integer of 1 byte */
"\x02\x81\x81" /* modulus - integer of 129 bytes */ "\x02\x81\x81" /* modulus - integer of 129 bytes */
"\x00\xBB\xF8\x2F\x09\x06\x82\xCE\x9C\x23\x38\xAC\x2B\x9D\xA8\x71" "\x00\xBB\xF8\x2F\x09\x06\x82\xCE\x9C\x23\x38\xAC\x2B\x9D\xA8\x71"
"\xF7\x36\x8D\x07\xEE\xD4\x10\x43\xA4\x40\xD6\xB6\xF0\x74\x54\xF5" "\xF7\x36\x8D\x07\xEE\xD4\x10\x43\xA4\x40\xD6\xB6\xF0\x74\x54\xF5"
...@@ -194,8 +202,13 @@ static struct akcipher_testvec rsa_tv_template[] = { ...@@ -194,8 +202,13 @@ static struct akcipher_testvec rsa_tv_template[] = {
"\x44\xE5\x6A\xAF\x68\xC5\x6C\x09\x2C\xD3\x8D\xC3\xBE\xF5\xD2\x0A" "\x44\xE5\x6A\xAF\x68\xC5\x6C\x09\x2C\xD3\x8D\xC3\xBE\xF5\xD2\x0A"
"\x93\x99\x26\xED\x4F\x74\xA1\x3E\xDD\xFB\xE1\xA1\xCE\xCC\x48\x94" "\x93\x99\x26\xED\x4F\x74\xA1\x3E\xDD\xFB\xE1\xA1\xCE\xCC\x48\x94"
"\xAF\x94\x28\xC2\xB7\xB8\x88\x3F\xE4\x46\x3A\x4B\xC8\x5B\x1C\xB3" "\xAF\x94\x28\xC2\xB7\xB8\x88\x3F\xE4\x46\x3A\x4B\xC8\x5B\x1C\xB3"
"\xC1", "\xC1"
.key_len = 271, "\x02\x01\x00" /* prime1 - integer of 1 byte */
"\x02\x01\x00" /* prime2 - integer of 1 byte */
"\x02\x01\x00" /* exponent1 - integer of 1 byte */
"\x02\x01\x00" /* exponent2 - integer of 1 byte */
"\x02\x01\x00", /* coefficient - integer of 1 byte */
.key_len = 289,
.m = "\x54\x85\x9b\x34\x2c\x49\xea\x2a", .m = "\x54\x85\x9b\x34\x2c\x49\xea\x2a",
.c = .c =
"\x74\x1b\x55\xac\x47\xb5\x08\x0a\x6e\x2b\x2d\xf7\x94\xb8\x8a\x95" "\x74\x1b\x55\xac\x47\xb5\x08\x0a\x6e\x2b\x2d\xf7\x94\xb8\x8a\x95"
...@@ -211,7 +224,8 @@ static struct akcipher_testvec rsa_tv_template[] = { ...@@ -211,7 +224,8 @@ static struct akcipher_testvec rsa_tv_template[] = {
}, { }, {
#endif #endif
.key = .key =
"\x30\x82\x02\x0D" /* sequence of 525 bytes */ "\x30\x82\x02\x1F" /* sequence of 543 bytes */
"\x02\x01\x01" /* version - integer of 1 byte */
"\x02\x82\x01\x00" /* modulus - integer of 256 bytes */ "\x02\x82\x01\x00" /* modulus - integer of 256 bytes */
"\xDB\x10\x1A\xC2\xA3\xF1\xDC\xFF\x13\x6B\xED\x44\xDF\xF0\x02\x6D" "\xDB\x10\x1A\xC2\xA3\xF1\xDC\xFF\x13\x6B\xED\x44\xDF\xF0\x02\x6D"
"\x13\xC7\x88\xDA\x70\x6B\x54\xF1\xE8\x27\xDC\xC3\x0F\x99\x6A\xFA" "\x13\xC7\x88\xDA\x70\x6B\x54\xF1\xE8\x27\xDC\xC3\x0F\x99\x6A\xFA"
...@@ -246,8 +260,13 @@ static struct akcipher_testvec rsa_tv_template[] = { ...@@ -246,8 +260,13 @@ static struct akcipher_testvec rsa_tv_template[] = {
"\x77\xAF\x51\x27\x5B\x5E\x69\xB8\x81\xE6\x11\xC5\x43\x23\x81\x04" "\x77\xAF\x51\x27\x5B\x5E\x69\xB8\x81\xE6\x11\xC5\x43\x23\x81\x04"
"\x62\xFF\xE9\x46\xB8\xD8\x44\xDB\xA5\xCC\x31\x54\x34\xCE\x3E\x82" "\x62\xFF\xE9\x46\xB8\xD8\x44\xDB\xA5\xCC\x31\x54\x34\xCE\x3E\x82"
"\xD6\xBF\x7A\x0B\x64\x21\x6D\x88\x7E\x5B\x45\x12\x1E\x63\x8D\x49" "\xD6\xBF\x7A\x0B\x64\x21\x6D\x88\x7E\x5B\x45\x12\x1E\x63\x8D\x49"
"\xA7\x1D\xD9\x1E\x06\xCD\xE8\xBA\x2C\x8C\x69\x32\xEA\xBE\x60\x71", "\xA7\x1D\xD9\x1E\x06\xCD\xE8\xBA\x2C\x8C\x69\x32\xEA\xBE\x60\x71"
.key_len = 529, "\x02\x01\x00" /* prime1 - integer of 1 byte */
"\x02\x01\x00" /* prime2 - integer of 1 byte */
"\x02\x01\x00" /* exponent1 - integer of 1 byte */
"\x02\x01\x00" /* exponent2 - integer of 1 byte */
"\x02\x01\x00", /* coefficient - integer of 1 byte */
.key_len = 547,
.m = "\x54\x85\x9b\x34\x2c\x49\xea\x2a", .m = "\x54\x85\x9b\x34\x2c\x49\xea\x2a",
.c = .c =
"\xb2\x97\x76\xb4\xae\x3e\x38\x3c\x7e\x64\x1f\xcc\xa2\x7f\xf6\xbe" "\xb2\x97\x76\xb4\xae\x3e\x38\x3c\x7e\x64\x1f\xcc\xa2\x7f\xf6\xbe"
...@@ -23813,6 +23832,46 @@ static struct aead_testvec rfc7539esp_dec_tv_template[] = { ...@@ -23813,6 +23832,46 @@ static struct aead_testvec rfc7539esp_dec_tv_template[] = {
}, },
}; };
/*
* All key wrapping test vectors taken from
* http://csrc.nist.gov/groups/STM/cavp/documents/mac/kwtestvectors.zip
*
 * Note: as documented in keywrap.c, the iv_out for encryption is the first
 * semiblock of the ciphertext from the test vector. For decryption, the iv
 * is the first semiblock of the ciphertext.
*/
static struct cipher_testvec aes_kw_enc_tv_template[] = {
{
.key = "\x75\x75\xda\x3a\x93\x60\x7c\xc2"
"\xbf\xd8\xce\xc7\xaa\xdf\xd9\xa6",
.klen = 16,
.input = "\x42\x13\x6d\x3c\x38\x4a\x3e\xea"
"\xc9\x5a\x06\x6f\xd2\x8f\xed\x3f",
.ilen = 16,
.result = "\xf6\x85\x94\x81\x6f\x64\xca\xa3"
"\xf5\x6f\xab\xea\x25\x48\xf5\xfb",
.rlen = 16,
.iv_out = "\x03\x1f\x6b\xd7\xe6\x1e\x64\x3d",
},
};
static struct cipher_testvec aes_kw_dec_tv_template[] = {
{
.key = "\x80\xaa\x99\x73\x27\xa4\x80\x6b"
"\x6a\x7a\x41\xa5\x2b\x86\xc3\x71"
"\x03\x86\xf9\x32\x78\x6e\xf7\x96"
"\x76\xfa\xfb\x90\xb8\x26\x3c\x5f",
.klen = 32,
.input = "\xd3\x3d\x3d\x97\x7b\xf0\xa9\x15"
"\x59\xf9\x9c\x8a\xcd\x29\x3d\x43",
.ilen = 16,
.result = "\x0a\x25\x6b\xa7\x5c\xfa\x03\xaa"
"\xa0\x2b\xa9\x42\x03\xf1\x5b\xaa",
.rlen = 16,
.iv = "\x42\x3c\x96\x0d\x8a\x2a\xc4\xc1",
},
};
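/*
 * Illustrative note (derived purely from the vectors above, not part of the
 * NIST data itself): for the encryption vector, the complete RFC 3394
 * wrapped key is simply iv_out || result, i.e.
 *
 *   03 1f 6b d7 e6 1e 64 3d
 *   f6 85 94 81 6f 64 ca a3
 *   f5 6f ab ea 25 48 f5 fb    (24 bytes in total)
 *
 * For the decryption vector the roles are mirrored: .iv carries the first
 * semiblock of the wrapped key, .input the remaining two semiblocks, and
 * .result the recovered 16-byte key.
 */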
/* /*
* ANSI X9.31 Continuous Pseudo-Random Number Generator (AES mode) * ANSI X9.31 Continuous Pseudo-Random Number Generator (AES mode)
* test vectors, taken from Appendix B.2.9 and B.2.10: * test vectors, taken from Appendix B.2.9 and B.2.10:
...@@ -10,7 +10,7 @@ menuconfig HW_RANDOM ...@@ -10,7 +10,7 @@ menuconfig HW_RANDOM
To compile this driver as a module, choose M here: the To compile this driver as a module, choose M here: the
module will be called rng-core. This provides a device module will be called rng-core. This provides a device
that's usually called /dev/hw_random, and which exposes one that's usually called /dev/hwrng, and which exposes one
of possibly several hardware random number generators. of possibly several hardware random number generators.
These hardware random number generators do not feed directly These hardware random number generators do not feed directly
...@@ -346,6 +346,16 @@ config HW_RANDOM_MSM ...@@ -346,6 +346,16 @@ config HW_RANDOM_MSM
If unsure, say Y. If unsure, say Y.
config HW_RANDOM_ST
tristate "ST Microelectronics HW Random Number Generator support"
depends on HW_RANDOM && ARCH_STI
---help---
This driver provides kernel-side support for the Random Number
Generator hardware found on the STi series of SoCs.
To compile this driver as a module, choose M here: the
module will be called st-rng.
config HW_RANDOM_XGENE config HW_RANDOM_XGENE
tristate "APM X-Gene True Random Number Generator (TRNG) support" tristate "APM X-Gene True Random Number Generator (TRNG) support"
depends on HW_RANDOM && ARCH_XGENE depends on HW_RANDOM && ARCH_XGENE
...@@ -359,6 +369,18 @@ config HW_RANDOM_XGENE ...@@ -359,6 +369,18 @@ config HW_RANDOM_XGENE
If unsure, say Y. If unsure, say Y.
config HW_RANDOM_STM32
tristate "STMicroelectronics STM32 random number generator"
depends on HW_RANDOM && (ARCH_STM32 || COMPILE_TEST)
help
This driver provides kernel-side support for the Random Number
Generator hardware found on STM32 microcontrollers.
To compile this driver as a module, choose M here: the
module will be called stm32-rng.
If unsure, say N.
endif # HW_RANDOM endif # HW_RANDOM
config UML_RANDOM config UML_RANDOM
......
...@@ -30,4 +30,6 @@ obj-$(CONFIG_HW_RANDOM_TPM) += tpm-rng.o ...@@ -30,4 +30,6 @@ obj-$(CONFIG_HW_RANDOM_TPM) += tpm-rng.o
obj-$(CONFIG_HW_RANDOM_BCM2835) += bcm2835-rng.o obj-$(CONFIG_HW_RANDOM_BCM2835) += bcm2835-rng.o
obj-$(CONFIG_HW_RANDOM_IPROC_RNG200) += iproc-rng200.o obj-$(CONFIG_HW_RANDOM_IPROC_RNG200) += iproc-rng200.o
obj-$(CONFIG_HW_RANDOM_MSM) += msm-rng.o obj-$(CONFIG_HW_RANDOM_MSM) += msm-rng.o
obj-$(CONFIG_HW_RANDOM_ST) += st-rng.o
obj-$(CONFIG_HW_RANDOM_XGENE) += xgene-rng.o obj-$(CONFIG_HW_RANDOM_XGENE) += xgene-rng.o
obj-$(CONFIG_HW_RANDOM_STM32) += stm32-rng.o
...@@ -323,7 +323,7 @@ static ssize_t hwrng_attr_current_store(struct device *dev, ...@@ -323,7 +323,7 @@ static ssize_t hwrng_attr_current_store(struct device *dev,
return -ERESTARTSYS; return -ERESTARTSYS;
err = -ENODEV; err = -ENODEV;
list_for_each_entry(rng, &rng_list, list) { list_for_each_entry(rng, &rng_list, list) {
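/*
 * Writes from user space typically carry a trailing newline (e.g. from
 * echo); sysfs_streq() matches regardless, where strcmp() would not.
 */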
if (strcmp(rng->name, buf) == 0) { if (sysfs_streq(rng->name, buf)) {
err = 0; err = 0;
if (rng != current_rng) if (rng != current_rng)
err = set_current_rng(rng); err = set_current_rng(rng);
......
...@@ -53,15 +53,11 @@ static void exynos_rng_writel(struct exynos_rng *rng, u32 val, u32 offset) ...@@ -53,15 +53,11 @@ static void exynos_rng_writel(struct exynos_rng *rng, u32 val, u32 offset)
__raw_writel(val, rng->mem + offset); __raw_writel(val, rng->mem + offset);
} }
static int exynos_init(struct hwrng *rng) static int exynos_rng_configure(struct exynos_rng *exynos_rng)
{ {
struct exynos_rng *exynos_rng = container_of(rng,
struct exynos_rng, rng);
int i; int i;
int ret = 0; int ret = 0;
pm_runtime_get_sync(exynos_rng->dev);
for (i = 0 ; i < 5 ; i++) for (i = 0 ; i < 5 ; i++)
exynos_rng_writel(exynos_rng, jiffies, exynos_rng_writel(exynos_rng, jiffies,
EXYNOS_PRNG_SEED_OFFSET + 4*i); EXYNOS_PRNG_SEED_OFFSET + 4*i);
...@@ -70,6 +66,17 @@ static int exynos_init(struct hwrng *rng) ...@@ -70,6 +66,17 @@ static int exynos_init(struct hwrng *rng)
& SEED_SETTING_DONE)) & SEED_SETTING_DONE))
ret = -EIO; ret = -EIO;
return ret;
}
static int exynos_init(struct hwrng *rng)
{
struct exynos_rng *exynos_rng = container_of(rng,
struct exynos_rng, rng);
int ret = 0;
pm_runtime_get_sync(exynos_rng->dev);
ret = exynos_rng_configure(exynos_rng);
pm_runtime_put_noidle(exynos_rng->dev); pm_runtime_put_noidle(exynos_rng->dev);
return ret; return ret;
...@@ -81,21 +88,24 @@ static int exynos_read(struct hwrng *rng, void *buf, ...@@ -81,21 +88,24 @@ static int exynos_read(struct hwrng *rng, void *buf,
struct exynos_rng *exynos_rng = container_of(rng, struct exynos_rng *exynos_rng = container_of(rng,
struct exynos_rng, rng); struct exynos_rng, rng);
u32 *data = buf; u32 *data = buf;
int retry = 100;
pm_runtime_get_sync(exynos_rng->dev); pm_runtime_get_sync(exynos_rng->dev);
exynos_rng_writel(exynos_rng, PRNG_START, 0); exynos_rng_writel(exynos_rng, PRNG_START, 0);
while (!(exynos_rng_readl(exynos_rng, while (!(exynos_rng_readl(exynos_rng,
EXYNOS_PRNG_STATUS_OFFSET) & PRNG_DONE)) EXYNOS_PRNG_STATUS_OFFSET) & PRNG_DONE) && --retry)
cpu_relax(); cpu_relax();
if (!retry)
return -ETIMEDOUT;
exynos_rng_writel(exynos_rng, PRNG_DONE, EXYNOS_PRNG_STATUS_OFFSET); exynos_rng_writel(exynos_rng, PRNG_DONE, EXYNOS_PRNG_STATUS_OFFSET);
*data = exynos_rng_readl(exynos_rng, EXYNOS_PRNG_OUT1_OFFSET); *data = exynos_rng_readl(exynos_rng, EXYNOS_PRNG_OUT1_OFFSET);
pm_runtime_mark_last_busy(exynos_rng->dev); pm_runtime_mark_last_busy(exynos_rng->dev);
pm_runtime_autosuspend(exynos_rng->dev); pm_runtime_put_sync_autosuspend(exynos_rng->dev);
return 4; return 4;
} }
...@@ -152,15 +162,45 @@ static int exynos_rng_runtime_resume(struct device *dev) ...@@ -152,15 +162,45 @@ static int exynos_rng_runtime_resume(struct device *dev)
return clk_prepare_enable(exynos_rng->clk); return clk_prepare_enable(exynos_rng->clk);
} }
static int exynos_rng_suspend(struct device *dev)
{
return pm_runtime_force_suspend(dev);
}
static int exynos_rng_resume(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
struct exynos_rng *exynos_rng = platform_get_drvdata(pdev);
int ret;
ret = pm_runtime_force_resume(dev);
if (ret)
return ret;
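/* The seed registers are lost across suspend-to-RAM, so reseed the PRNG. */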
return exynos_rng_configure(exynos_rng);
}
#endif #endif
static UNIVERSAL_DEV_PM_OPS(exynos_rng_pm_ops, exynos_rng_runtime_suspend, static const struct dev_pm_ops exynos_rng_pm_ops = {
exynos_rng_runtime_resume, NULL); SET_SYSTEM_SLEEP_PM_OPS(exynos_rng_suspend, exynos_rng_resume)
SET_RUNTIME_PM_OPS(exynos_rng_runtime_suspend,
exynos_rng_runtime_resume, NULL)
};
static const struct of_device_id exynos_rng_dt_match[] = {
{
.compatible = "samsung,exynos4-rng",
},
{ },
};
MODULE_DEVICE_TABLE(of, exynos_rng_dt_match);
static struct platform_driver exynos_rng_driver = { static struct platform_driver exynos_rng_driver = {
.driver = { .driver = {
.name = "exynos-rng", .name = "exynos-rng",
.pm = &exynos_rng_pm_ops, .pm = &exynos_rng_pm_ops,
.of_match_table = exynos_rng_dt_match,
}, },
.probe = exynos_rng_probe, .probe = exynos_rng_probe,
}; };
......
...@@ -141,12 +141,11 @@ static void mxc_rnga_cleanup(struct hwrng *rng) ...@@ -141,12 +141,11 @@ static void mxc_rnga_cleanup(struct hwrng *rng)
static int __init mxc_rnga_probe(struct platform_device *pdev) static int __init mxc_rnga_probe(struct platform_device *pdev)
{ {
int err = -ENODEV; int err;
struct resource *res; struct resource *res;
struct mxc_rng *mxc_rng; struct mxc_rng *mxc_rng;
mxc_rng = devm_kzalloc(&pdev->dev, sizeof(struct mxc_rng), mxc_rng = devm_kzalloc(&pdev->dev, sizeof(*mxc_rng), GFP_KERNEL);
GFP_KERNEL);
if (!mxc_rng) if (!mxc_rng)
return -ENOMEM; return -ENOMEM;
...@@ -160,13 +159,12 @@ static int __init mxc_rnga_probe(struct platform_device *pdev) ...@@ -160,13 +159,12 @@ static int __init mxc_rnga_probe(struct platform_device *pdev)
mxc_rng->clk = devm_clk_get(&pdev->dev, NULL); mxc_rng->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(mxc_rng->clk)) { if (IS_ERR(mxc_rng->clk)) {
dev_err(&pdev->dev, "Could not get rng_clk!\n"); dev_err(&pdev->dev, "Could not get rng_clk!\n");
err = PTR_ERR(mxc_rng->clk); return PTR_ERR(mxc_rng->clk);
goto out;
} }
err = clk_prepare_enable(mxc_rng->clk); err = clk_prepare_enable(mxc_rng->clk);
if (err) if (err)
goto out; return err;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0); res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
mxc_rng->mem = devm_ioremap_resource(&pdev->dev, res); mxc_rng->mem = devm_ioremap_resource(&pdev->dev, res);
...@@ -181,14 +179,10 @@ static int __init mxc_rnga_probe(struct platform_device *pdev) ...@@ -181,14 +179,10 @@ static int __init mxc_rnga_probe(struct platform_device *pdev)
goto err_ioremap; goto err_ioremap;
} }
dev_info(&pdev->dev, "MXC RNGA Registered.\n");
return 0; return 0;
err_ioremap: err_ioremap:
clk_disable_unprepare(mxc_rng->clk); clk_disable_unprepare(mxc_rng->clk);
out:
return err; return err;
} }
......
...@@ -96,7 +96,7 @@ static int octeon_rng_probe(struct platform_device *pdev) ...@@ -96,7 +96,7 @@ static int octeon_rng_probe(struct platform_device *pdev)
rng->ops = ops; rng->ops = ops;
platform_set_drvdata(pdev, &rng->ops); platform_set_drvdata(pdev, &rng->ops);
ret = hwrng_register(&rng->ops); ret = devm_hwrng_register(&pdev->dev, &rng->ops);
if (ret) if (ret)
return -ENOENT; return -ENOENT;
...@@ -105,21 +105,11 @@ static int octeon_rng_probe(struct platform_device *pdev) ...@@ -105,21 +105,11 @@ static int octeon_rng_probe(struct platform_device *pdev)
return 0; return 0;
} }
static int octeon_rng_remove(struct platform_device *pdev)
{
struct hwrng *rng = platform_get_drvdata(pdev);
hwrng_unregister(rng);
return 0;
}
static struct platform_driver octeon_rng_driver = { static struct platform_driver octeon_rng_driver = {
.driver = { .driver = {
.name = "octeon_rng", .name = "octeon_rng",
}, },
.probe = octeon_rng_probe, .probe = octeon_rng_probe,
.remove = octeon_rng_remove,
}; };
module_platform_driver(octeon_rng_driver); module_platform_driver(octeon_rng_driver);
......
...@@ -138,6 +138,7 @@ static const struct of_device_id rng_match[] = { ...@@ -138,6 +138,7 @@ static const struct of_device_id rng_match[] = {
{ .compatible = "pasemi,pwrficient-rng", }, { .compatible = "pasemi,pwrficient-rng", },
{ }, { },
}; };
MODULE_DEVICE_TABLE(of, rng_match);
static struct platform_driver rng_driver = { static struct platform_driver rng_driver = {
.driver = { .driver = {
......
...@@ -129,6 +129,7 @@ static const struct of_device_id ppc4xx_rng_match[] = { ...@@ -129,6 +129,7 @@ static const struct of_device_id ppc4xx_rng_match[] = {
{ .compatible = "amcc,ppc440epx-rng", }, { .compatible = "amcc,ppc440epx-rng", },
{}, {},
}; };
MODULE_DEVICE_TABLE(of, ppc4xx_rng_match);
static struct platform_driver ppc4xx_rng_driver = { static struct platform_driver ppc4xx_rng_driver = {
.driver = { .driver = {
......
/*
* ST Random Number Generator Driver for ST's Platforms
*
* Author: Pankaj Dev <pankaj.dev@st.com>
* Lee Jones <lee.jones@linaro.org>
*
* Copyright (C) 2015 STMicroelectronics (R&D) Limited
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/hw_random.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
/* Registers */
#define ST_RNG_STATUS_REG 0x20
#define ST_RNG_DATA_REG 0x24
/* Registers fields */
#define ST_RNG_STATUS_BAD_SEQUENCE BIT(0)
#define ST_RNG_STATUS_BAD_ALTERNANCE BIT(1)
#define ST_RNG_STATUS_FIFO_FULL BIT(5)
#define ST_RNG_SAMPLE_SIZE 2 /* 2 Byte (16bit) samples */
#define ST_RNG_FIFO_DEPTH 4
#define ST_RNG_FIFO_SIZE (ST_RNG_FIFO_DEPTH * ST_RNG_SAMPLE_SIZE)
/*
* Samples are documented to be available every 0.667us, so in theory
* the 4 sample deep FIFO should take 2.668us to fill. However, during
* thorough testing, it became apparent that filling the FIFO actually
* takes closer to 12us. We then multiply by 2 to account for the limited
* accuracy of udelay(), as suggested by Russell King.
*/
#define ST_RNG_FILL_FIFO_TIMEOUT (12 * 2)
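/* i.e. up to 24 polls of udelay(1): ~2.7us theoretical fill, ~12us observed, doubled for margin */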
struct st_rng_data {
void __iomem *base;
struct clk *clk;
struct hwrng ops;
};
static int st_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
{
struct st_rng_data *ddata = (struct st_rng_data *)rng->priv;
u32 status;
int i;
if (max < sizeof(u16))
return -EINVAL;
/* Wait until the FIFO is full - at most ST_RNG_FILL_FIFO_TIMEOUT us */
for (i = 0; i < ST_RNG_FILL_FIFO_TIMEOUT; i++) {
status = readl_relaxed(ddata->base + ST_RNG_STATUS_REG);
if (status & ST_RNG_STATUS_FIFO_FULL)
break;
udelay(1);
}
if (i == ST_RNG_FILL_FIFO_TIMEOUT)
return 0;
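/* Only the low 16 bits of each 32-bit data register read carry a sample. */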
for (i = 0; i < ST_RNG_FIFO_SIZE && i < max; i += 2)
*(u16 *)(data + i) =
readl_relaxed(ddata->base + ST_RNG_DATA_REG);
return i; /* No of bytes read */
}
static int st_rng_probe(struct platform_device *pdev)
{
struct st_rng_data *ddata;
struct resource *res;
struct clk *clk;
void __iomem *base;
int ret;
ddata = devm_kzalloc(&pdev->dev, sizeof(*ddata), GFP_KERNEL);
if (!ddata)
return -ENOMEM;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
base = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(base))
return PTR_ERR(base);
clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(clk))
return PTR_ERR(clk);
ret = clk_prepare_enable(clk);
if (ret)
return ret;
ddata->ops.priv = (unsigned long)ddata;
ddata->ops.read = st_rng_read;
ddata->ops.name = pdev->name;
ddata->base = base;
ddata->clk = clk;
dev_set_drvdata(&pdev->dev, ddata);
ret = hwrng_register(&ddata->ops);
if (ret) {
dev_err(&pdev->dev, "Failed to register HW RNG\n");
return ret;
}
dev_info(&pdev->dev, "Successfully registered HW RNG\n");
return 0;
}
static int st_rng_remove(struct platform_device *pdev)
{
struct st_rng_data *ddata = dev_get_drvdata(&pdev->dev);
hwrng_unregister(&ddata->ops);
clk_disable_unprepare(ddata->clk);
return 0;
}
static const struct of_device_id st_rng_match[] = {
{ .compatible = "st,rng" },
{},
};
MODULE_DEVICE_TABLE(of, st_rng_match);
static struct platform_driver st_rng_driver = {
.driver = {
.name = "st-hwrandom",
.of_match_table = of_match_ptr(st_rng_match),
},
.probe = st_rng_probe,
.remove = st_rng_remove
};
module_platform_driver(st_rng_driver);
MODULE_AUTHOR("Pankaj Dev <pankaj.dev@st.com>");
MODULE_LICENSE("GPL v2");
/*
* Copyright (c) 2015, Daniel Thompson
*
* This file is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This file is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/hw_random.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#define RNG_CR 0x00
#define RNG_CR_RNGEN BIT(2)
#define RNG_SR 0x04
#define RNG_SR_SEIS BIT(6)
#define RNG_SR_CEIS BIT(5)
#define RNG_SR_DRDY BIT(0)
#define RNG_DR 0x08
/*
* It takes 40 cycles @ 48MHz to generate each random number (i.e. <1us).
* At the time of writing STM32 parts max out at ~200MHz meaning a timeout
* of 500 leaves us a very comfortable margin for error. The loop to which
* the timeout applies takes at least 4 instructions per iteration so the
* timeout is enough to take us up to multi-GHz parts!
*/
#define RNG_TIMEOUT 500
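/*
 * Roughly: 40 cycles / 48MHz ~= 0.83us per word, while 500 iterations of a
 * >=4 instruction polling loop amount to >=2000 instructions, i.e. about
 * 1us even on a core clocked at 2GHz.
 */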
struct stm32_rng_private {
struct hwrng rng;
void __iomem *base;
struct clk *clk;
};
static int stm32_rng_read(struct hwrng *rng, void *data, size_t max, bool wait)
{
struct stm32_rng_private *priv =
container_of(rng, struct stm32_rng_private, rng);
u32 sr;
int retval = 0;
pm_runtime_get_sync((struct device *) priv->rng.priv);
while (max > sizeof(u32)) {
sr = readl_relaxed(priv->base + RNG_SR);
if (!sr && wait) {
unsigned int timeout = RNG_TIMEOUT;
do {
cpu_relax();
sr = readl_relaxed(priv->base + RNG_SR);
} while (!sr && --timeout);
}
/* If error detected or data not ready... */
if (sr != RNG_SR_DRDY)
break;
*(u32 *)data = readl_relaxed(priv->base + RNG_DR);
retval += sizeof(u32);
data += sizeof(u32);
max -= sizeof(u32);
}
if (WARN_ONCE(sr & (RNG_SR_SEIS | RNG_SR_CEIS),
"bad RNG status - %x\n", sr))
writel_relaxed(0, priv->base + RNG_SR);
pm_runtime_mark_last_busy((struct device *) priv->rng.priv);
pm_runtime_put_sync_autosuspend((struct device *) priv->rng.priv);
return retval || !wait ? retval : -EIO;
}
static int stm32_rng_init(struct hwrng *rng)
{
struct stm32_rng_private *priv =
container_of(rng, struct stm32_rng_private, rng);
int err;
err = clk_prepare_enable(priv->clk);
if (err)
return err;
writel_relaxed(RNG_CR_RNGEN, priv->base + RNG_CR);
/* clear error indicators */
writel_relaxed(0, priv->base + RNG_SR);
return 0;
}
static void stm32_rng_cleanup(struct hwrng *rng)
{
struct stm32_rng_private *priv =
container_of(rng, struct stm32_rng_private, rng);
writel_relaxed(0, priv->base + RNG_CR);
clk_disable_unprepare(priv->clk);
}
static int stm32_rng_probe(struct platform_device *ofdev)
{
struct device *dev = &ofdev->dev;
struct device_node *np = ofdev->dev.of_node;
struct stm32_rng_private *priv;
struct resource res;
int err;
priv = devm_kzalloc(dev, sizeof(struct stm32_rng_private), GFP_KERNEL);
if (!priv)
return -ENOMEM;
err = of_address_to_resource(np, 0, &res);
if (err)
return err;
priv->base = devm_ioremap_resource(dev, &res);
if (IS_ERR(priv->base))
return PTR_ERR(priv->base);
priv->clk = devm_clk_get(&ofdev->dev, NULL);
if (IS_ERR(priv->clk))
return PTR_ERR(priv->clk);
dev_set_drvdata(dev, priv);
priv->rng.name = dev_driver_string(dev),
#ifndef CONFIG_PM
priv->rng.init = stm32_rng_init,
priv->rng.cleanup = stm32_rng_cleanup,
#endif
priv->rng.read = stm32_rng_read,
priv->rng.priv = (unsigned long) dev;
pm_runtime_set_autosuspend_delay(dev, 100);
pm_runtime_use_autosuspend(dev);
pm_runtime_enable(dev);
return devm_hwrng_register(dev, &priv->rng);
}
#ifdef CONFIG_PM
static int stm32_rng_runtime_suspend(struct device *dev)
{
struct stm32_rng_private *priv = dev_get_drvdata(dev);
stm32_rng_cleanup(&priv->rng);
return 0;
}
static int stm32_rng_runtime_resume(struct device *dev)
{
struct stm32_rng_private *priv = dev_get_drvdata(dev);
return stm32_rng_init(&priv->rng);
}
#endif
static UNIVERSAL_DEV_PM_OPS(stm32_rng_pm_ops, stm32_rng_runtime_suspend,
stm32_rng_runtime_resume, NULL);
static const struct of_device_id stm32_rng_match[] = {
{
.compatible = "st,stm32-rng",
},
{},
};
MODULE_DEVICE_TABLE(of, stm32_rng_match);
static struct platform_driver stm32_rng_driver = {
.driver = {
.name = "stm32-rng",
.pm = &stm32_rng_pm_ops,
.of_match_table = stm32_rng_match,
},
.probe = stm32_rng_probe,
};
module_platform_driver(stm32_rng_driver);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Daniel Thompson <daniel.thompson@linaro.org>");
MODULE_DESCRIPTION("STMicroelectronics STM32 RNG device driver");
...@@ -420,7 +420,7 @@ config CRYPTO_DEV_CCP ...@@ -420,7 +420,7 @@ config CRYPTO_DEV_CCP
bool "Support for AMD Cryptographic Coprocessor" bool "Support for AMD Cryptographic Coprocessor"
depends on ((X86 && PCI) || (ARM64 && (OF_ADDRESS || ACPI))) && HAS_IOMEM depends on ((X86 && PCI) || (ARM64 && (OF_ADDRESS || ACPI))) && HAS_IOMEM
help help
The AMD Cryptographic Coprocessor provides hardware support The AMD Cryptographic Coprocessor provides hardware offload support
for encryption, hashing and related operations. for encryption, hashing and related operations.
if CRYPTO_DEV_CCP if CRYPTO_DEV_CCP
...@@ -429,7 +429,8 @@ endif ...@@ -429,7 +429,8 @@ endif
config CRYPTO_DEV_MXS_DCP config CRYPTO_DEV_MXS_DCP
tristate "Support for Freescale MXS DCP" tristate "Support for Freescale MXS DCP"
depends on ARCH_MXS depends on (ARCH_MXS || ARCH_MXC)
select STMP_DEVICE
select CRYPTO_CBC select CRYPTO_CBC
select CRYPTO_ECB select CRYPTO_ECB
select CRYPTO_AES select CRYPTO_AES
......
...@@ -740,26 +740,6 @@ void crypto4xx_return_pd(struct crypto4xx_device *dev, ...@@ -740,26 +740,6 @@ void crypto4xx_return_pd(struct crypto4xx_device *dev,
pd_uinfo->state = PD_ENTRY_FREE; pd_uinfo->state = PD_ENTRY_FREE;
} }
/*
* derive number of elements in scatterlist
* Shamlessly copy from talitos.c
*/
static int get_sg_count(struct scatterlist *sg_list, int nbytes)
{
struct scatterlist *sg = sg_list;
int sg_nents = 0;
while (nbytes) {
sg_nents++;
if (sg->length > nbytes)
break;
nbytes -= sg->length;
sg = sg_next(sg);
}
return sg_nents;
}
static u32 get_next_gd(u32 current) static u32 get_next_gd(u32 current)
{ {
if (current != PPC4XX_LAST_GD) if (current != PPC4XX_LAST_GD)
...@@ -800,7 +780,7 @@ u32 crypto4xx_build_pd(struct crypto_async_request *req, ...@@ -800,7 +780,7 @@ u32 crypto4xx_build_pd(struct crypto_async_request *req,
u32 gd_idx = 0; u32 gd_idx = 0;
/* figure how many gd is needed */ /* figure how many gd is needed */
num_gd = get_sg_count(src, datalen); num_gd = sg_nents_for_len(src, datalen);
if (num_gd == 1) if (num_gd == 1)
num_gd = 0; num_gd = 0;
...@@ -1284,6 +1264,7 @@ static const struct of_device_id crypto4xx_match[] = { ...@@ -1284,6 +1264,7 @@ static const struct of_device_id crypto4xx_match[] = {
{ .compatible = "amcc,ppc4xx-crypto",}, { .compatible = "amcc,ppc4xx-crypto",},
{ }, { },
}; };
MODULE_DEVICE_TABLE(of, crypto4xx_match);
static struct platform_driver crypto4xx_driver = { static struct platform_driver crypto4xx_driver = {
.driver = { .driver = {
......
...@@ -260,7 +260,11 @@ static struct atmel_aes_dev *atmel_aes_find_dev(struct atmel_aes_ctx *ctx) ...@@ -260,7 +260,11 @@ static struct atmel_aes_dev *atmel_aes_find_dev(struct atmel_aes_ctx *ctx)
static int atmel_aes_hw_init(struct atmel_aes_dev *dd) static int atmel_aes_hw_init(struct atmel_aes_dev *dd)
{ {
clk_prepare_enable(dd->iclk); int err;
err = clk_prepare_enable(dd->iclk);
if (err)
return err;
if (!(dd->flags & AES_FLAGS_INIT)) { if (!(dd->flags & AES_FLAGS_INIT)) {
atmel_aes_write(dd, AES_CR, AES_CR_SWRST); atmel_aes_write(dd, AES_CR, AES_CR_SWRST);
...@@ -1320,7 +1324,6 @@ static int atmel_aes_probe(struct platform_device *pdev) ...@@ -1320,7 +1324,6 @@ static int atmel_aes_probe(struct platform_device *pdev)
struct crypto_platform_data *pdata; struct crypto_platform_data *pdata;
struct device *dev = &pdev->dev; struct device *dev = &pdev->dev;
struct resource *aes_res; struct resource *aes_res;
unsigned long aes_phys_size;
int err; int err;
pdata = pdev->dev.platform_data; pdata = pdev->dev.platform_data;
...@@ -1337,7 +1340,7 @@ static int atmel_aes_probe(struct platform_device *pdev) ...@@ -1337,7 +1340,7 @@ static int atmel_aes_probe(struct platform_device *pdev)
goto aes_dd_err; goto aes_dd_err;
} }
aes_dd = kzalloc(sizeof(struct atmel_aes_dev), GFP_KERNEL); aes_dd = devm_kzalloc(&pdev->dev, sizeof(*aes_dd), GFP_KERNEL);
if (aes_dd == NULL) { if (aes_dd == NULL) {
dev_err(dev, "unable to alloc data struct.\n"); dev_err(dev, "unable to alloc data struct.\n");
err = -ENOMEM; err = -ENOMEM;
...@@ -1368,36 +1371,35 @@ static int atmel_aes_probe(struct platform_device *pdev) ...@@ -1368,36 +1371,35 @@ static int atmel_aes_probe(struct platform_device *pdev)
goto res_err; goto res_err;
} }
aes_dd->phys_base = aes_res->start; aes_dd->phys_base = aes_res->start;
aes_phys_size = resource_size(aes_res);
/* Get the IRQ */ /* Get the IRQ */
aes_dd->irq = platform_get_irq(pdev, 0); aes_dd->irq = platform_get_irq(pdev, 0);
if (aes_dd->irq < 0) { if (aes_dd->irq < 0) {
dev_err(dev, "no IRQ resource info\n"); dev_err(dev, "no IRQ resource info\n");
err = aes_dd->irq; err = aes_dd->irq;
goto aes_irq_err; goto res_err;
} }
err = request_irq(aes_dd->irq, atmel_aes_irq, IRQF_SHARED, "atmel-aes", err = devm_request_irq(&pdev->dev, aes_dd->irq, atmel_aes_irq,
aes_dd); IRQF_SHARED, "atmel-aes", aes_dd);
if (err) { if (err) {
dev_err(dev, "unable to request aes irq.\n"); dev_err(dev, "unable to request aes irq.\n");
goto aes_irq_err; goto res_err;
} }
/* Initializing the clock */ /* Initializing the clock */
aes_dd->iclk = clk_get(&pdev->dev, "aes_clk"); aes_dd->iclk = devm_clk_get(&pdev->dev, "aes_clk");
if (IS_ERR(aes_dd->iclk)) { if (IS_ERR(aes_dd->iclk)) {
dev_err(dev, "clock initialization failed.\n"); dev_err(dev, "clock initialization failed.\n");
err = PTR_ERR(aes_dd->iclk); err = PTR_ERR(aes_dd->iclk);
goto clk_err; goto res_err;
} }
aes_dd->io_base = ioremap(aes_dd->phys_base, aes_phys_size); aes_dd->io_base = devm_ioremap_resource(&pdev->dev, aes_res);
if (!aes_dd->io_base) { if (!aes_dd->io_base) {
dev_err(dev, "can't ioremap\n"); dev_err(dev, "can't ioremap\n");
err = -ENOMEM; err = -ENOMEM;
goto aes_io_err; goto res_err;
} }
atmel_aes_hw_version_init(aes_dd); atmel_aes_hw_version_init(aes_dd);
...@@ -1434,17 +1436,9 @@ static int atmel_aes_probe(struct platform_device *pdev) ...@@ -1434,17 +1436,9 @@ static int atmel_aes_probe(struct platform_device *pdev)
err_aes_dma: err_aes_dma:
atmel_aes_buff_cleanup(aes_dd); atmel_aes_buff_cleanup(aes_dd);
err_aes_buff: err_aes_buff:
iounmap(aes_dd->io_base);
aes_io_err:
clk_put(aes_dd->iclk);
clk_err:
free_irq(aes_dd->irq, aes_dd);
aes_irq_err:
res_err: res_err:
tasklet_kill(&aes_dd->done_task); tasklet_kill(&aes_dd->done_task);
tasklet_kill(&aes_dd->queue_task); tasklet_kill(&aes_dd->queue_task);
kfree(aes_dd);
aes_dd = NULL;
aes_dd_err: aes_dd_err:
dev_err(dev, "initialization failed.\n"); dev_err(dev, "initialization failed.\n");
...@@ -1469,16 +1463,6 @@ static int atmel_aes_remove(struct platform_device *pdev) ...@@ -1469,16 +1463,6 @@ static int atmel_aes_remove(struct platform_device *pdev)
atmel_aes_dma_cleanup(aes_dd); atmel_aes_dma_cleanup(aes_dd);
iounmap(aes_dd->io_base);
clk_put(aes_dd->iclk);
if (aes_dd->irq > 0)
free_irq(aes_dd->irq, aes_dd);
kfree(aes_dd);
aes_dd = NULL;
return 0; return 0;
} }
......
...@@ -794,7 +794,11 @@ static void atmel_sha_finish_req(struct ahash_request *req, int err) ...@@ -794,7 +794,11 @@ static void atmel_sha_finish_req(struct ahash_request *req, int err)
static int atmel_sha_hw_init(struct atmel_sha_dev *dd) static int atmel_sha_hw_init(struct atmel_sha_dev *dd)
{ {
clk_prepare_enable(dd->iclk); int err;
err = clk_prepare_enable(dd->iclk);
if (err)
return err;
if (!(SHA_FLAGS_INIT & dd->flags)) { if (!(SHA_FLAGS_INIT & dd->flags)) {
atmel_sha_write(dd, SHA_CR, SHA_CR_SWRST); atmel_sha_write(dd, SHA_CR, SHA_CR_SWRST);
...@@ -1345,11 +1349,9 @@ static int atmel_sha_probe(struct platform_device *pdev) ...@@ -1345,11 +1349,9 @@ static int atmel_sha_probe(struct platform_device *pdev)
struct crypto_platform_data *pdata; struct crypto_platform_data *pdata;
struct device *dev = &pdev->dev; struct device *dev = &pdev->dev;
struct resource *sha_res; struct resource *sha_res;
unsigned long sha_phys_size;
int err; int err;
sha_dd = devm_kzalloc(&pdev->dev, sizeof(struct atmel_sha_dev), sha_dd = devm_kzalloc(&pdev->dev, sizeof(*sha_dd), GFP_KERNEL);
GFP_KERNEL);
if (sha_dd == NULL) { if (sha_dd == NULL) {
dev_err(dev, "unable to alloc data struct.\n"); dev_err(dev, "unable to alloc data struct.\n");
err = -ENOMEM; err = -ENOMEM;
...@@ -1378,7 +1380,6 @@ static int atmel_sha_probe(struct platform_device *pdev) ...@@ -1378,7 +1380,6 @@ static int atmel_sha_probe(struct platform_device *pdev)
goto res_err; goto res_err;
} }
sha_dd->phys_base = sha_res->start; sha_dd->phys_base = sha_res->start;
sha_phys_size = resource_size(sha_res);
/* Get the IRQ */ /* Get the IRQ */
sha_dd->irq = platform_get_irq(pdev, 0); sha_dd->irq = platform_get_irq(pdev, 0);
...@@ -1388,26 +1389,26 @@ static int atmel_sha_probe(struct platform_device *pdev) ...@@ -1388,26 +1389,26 @@ static int atmel_sha_probe(struct platform_device *pdev)
goto res_err; goto res_err;
} }
err = request_irq(sha_dd->irq, atmel_sha_irq, IRQF_SHARED, "atmel-sha", err = devm_request_irq(&pdev->dev, sha_dd->irq, atmel_sha_irq,
sha_dd); IRQF_SHARED, "atmel-sha", sha_dd);
if (err) { if (err) {
dev_err(dev, "unable to request sha irq.\n"); dev_err(dev, "unable to request sha irq.\n");
goto res_err; goto res_err;
} }
/* Initializing the clock */ /* Initializing the clock */
sha_dd->iclk = clk_get(&pdev->dev, "sha_clk"); sha_dd->iclk = devm_clk_get(&pdev->dev, "sha_clk");
if (IS_ERR(sha_dd->iclk)) { if (IS_ERR(sha_dd->iclk)) {
dev_err(dev, "clock initialization failed.\n"); dev_err(dev, "clock initialization failed.\n");
err = PTR_ERR(sha_dd->iclk); err = PTR_ERR(sha_dd->iclk);
goto clk_err; goto res_err;
} }
sha_dd->io_base = ioremap(sha_dd->phys_base, sha_phys_size); sha_dd->io_base = devm_ioremap_resource(&pdev->dev, sha_res);
if (!sha_dd->io_base) { if (!sha_dd->io_base) {
dev_err(dev, "can't ioremap\n"); dev_err(dev, "can't ioremap\n");
err = -ENOMEM; err = -ENOMEM;
goto sha_io_err; goto res_err;
} }
atmel_sha_hw_version_init(sha_dd); atmel_sha_hw_version_init(sha_dd);
...@@ -1421,12 +1422,12 @@ static int atmel_sha_probe(struct platform_device *pdev) ...@@ -1421,12 +1422,12 @@ static int atmel_sha_probe(struct platform_device *pdev)
if (IS_ERR(pdata)) { if (IS_ERR(pdata)) {
dev_err(&pdev->dev, "platform data not available\n"); dev_err(&pdev->dev, "platform data not available\n");
err = PTR_ERR(pdata); err = PTR_ERR(pdata);
goto err_pdata; goto res_err;
} }
} }
if (!pdata->dma_slave) { if (!pdata->dma_slave) {
err = -ENXIO; err = -ENXIO;
goto err_pdata; goto res_err;
} }
err = atmel_sha_dma_init(sha_dd, pdata); err = atmel_sha_dma_init(sha_dd, pdata);
if (err) if (err)
...@@ -1457,12 +1458,6 @@ static int atmel_sha_probe(struct platform_device *pdev) ...@@ -1457,12 +1458,6 @@ static int atmel_sha_probe(struct platform_device *pdev)
if (sha_dd->caps.has_dma) if (sha_dd->caps.has_dma)
atmel_sha_dma_cleanup(sha_dd); atmel_sha_dma_cleanup(sha_dd);
err_sha_dma: err_sha_dma:
err_pdata:
iounmap(sha_dd->io_base);
sha_io_err:
clk_put(sha_dd->iclk);
clk_err:
free_irq(sha_dd->irq, sha_dd);
res_err: res_err:
tasklet_kill(&sha_dd->done_task); tasklet_kill(&sha_dd->done_task);
sha_dd_err: sha_dd_err:
......
...@@ -218,7 +218,11 @@ static struct atmel_tdes_dev *atmel_tdes_find_dev(struct atmel_tdes_ctx *ctx) ...@@ -218,7 +218,11 @@ static struct atmel_tdes_dev *atmel_tdes_find_dev(struct atmel_tdes_ctx *ctx)
static int atmel_tdes_hw_init(struct atmel_tdes_dev *dd) static int atmel_tdes_hw_init(struct atmel_tdes_dev *dd)
{ {
clk_prepare_enable(dd->iclk); int err;
err = clk_prepare_enable(dd->iclk);
if (err)
return err;
if (!(dd->flags & TDES_FLAGS_INIT)) { if (!(dd->flags & TDES_FLAGS_INIT)) {
atmel_tdes_write(dd, TDES_CR, TDES_CR_SWRST); atmel_tdes_write(dd, TDES_CR, TDES_CR_SWRST);
...@@ -1355,7 +1359,6 @@ static int atmel_tdes_probe(struct platform_device *pdev) ...@@ -1355,7 +1359,6 @@ static int atmel_tdes_probe(struct platform_device *pdev)
struct crypto_platform_data *pdata; struct crypto_platform_data *pdata;
struct device *dev = &pdev->dev; struct device *dev = &pdev->dev;
struct resource *tdes_res; struct resource *tdes_res;
unsigned long tdes_phys_size;
int err; int err;
tdes_dd = devm_kmalloc(&pdev->dev, sizeof(*tdes_dd), GFP_KERNEL); tdes_dd = devm_kmalloc(&pdev->dev, sizeof(*tdes_dd), GFP_KERNEL);
...@@ -1389,7 +1392,6 @@ static int atmel_tdes_probe(struct platform_device *pdev) ...@@ -1389,7 +1392,6 @@ static int atmel_tdes_probe(struct platform_device *pdev)
goto res_err; goto res_err;
} }
tdes_dd->phys_base = tdes_res->start; tdes_dd->phys_base = tdes_res->start;
tdes_phys_size = resource_size(tdes_res);
/* Get the IRQ */ /* Get the IRQ */
tdes_dd->irq = platform_get_irq(pdev, 0); tdes_dd->irq = platform_get_irq(pdev, 0);
...@@ -1399,26 +1401,26 @@ static int atmel_tdes_probe(struct platform_device *pdev) ...@@ -1399,26 +1401,26 @@ static int atmel_tdes_probe(struct platform_device *pdev)
goto res_err; goto res_err;
} }
err = request_irq(tdes_dd->irq, atmel_tdes_irq, IRQF_SHARED, err = devm_request_irq(&pdev->dev, tdes_dd->irq, atmel_tdes_irq,
"atmel-tdes", tdes_dd); IRQF_SHARED, "atmel-tdes", tdes_dd);
if (err) { if (err) {
dev_err(dev, "unable to request tdes irq.\n"); dev_err(dev, "unable to request tdes irq.\n");
goto tdes_irq_err; goto res_err;
} }
/* Initializing the clock */ /* Initializing the clock */
tdes_dd->iclk = clk_get(&pdev->dev, "tdes_clk"); tdes_dd->iclk = devm_clk_get(&pdev->dev, "tdes_clk");
if (IS_ERR(tdes_dd->iclk)) { if (IS_ERR(tdes_dd->iclk)) {
dev_err(dev, "clock initialization failed.\n"); dev_err(dev, "clock initialization failed.\n");
err = PTR_ERR(tdes_dd->iclk); err = PTR_ERR(tdes_dd->iclk);
goto clk_err; goto res_err;
} }
tdes_dd->io_base = ioremap(tdes_dd->phys_base, tdes_phys_size); tdes_dd->io_base = devm_ioremap_resource(&pdev->dev, tdes_res);
if (!tdes_dd->io_base) { if (!tdes_dd->io_base) {
dev_err(dev, "can't ioremap\n"); dev_err(dev, "can't ioremap\n");
err = -ENOMEM; err = -ENOMEM;
goto tdes_io_err; goto res_err;
} }
atmel_tdes_hw_version_init(tdes_dd); atmel_tdes_hw_version_init(tdes_dd);
...@@ -1474,12 +1476,6 @@ static int atmel_tdes_probe(struct platform_device *pdev) ...@@ -1474,12 +1476,6 @@ static int atmel_tdes_probe(struct platform_device *pdev)
err_pdata: err_pdata:
atmel_tdes_buff_cleanup(tdes_dd); atmel_tdes_buff_cleanup(tdes_dd);
err_tdes_buff: err_tdes_buff:
iounmap(tdes_dd->io_base);
tdes_io_err:
clk_put(tdes_dd->iclk);
clk_err:
free_irq(tdes_dd->irq, tdes_dd);
tdes_irq_err:
res_err: res_err:
tasklet_kill(&tdes_dd->done_task); tasklet_kill(&tdes_dd->done_task);
tasklet_kill(&tdes_dd->queue_task); tasklet_kill(&tdes_dd->queue_task);
...@@ -1510,13 +1506,6 @@ static int atmel_tdes_remove(struct platform_device *pdev) ...@@ -1510,13 +1506,6 @@ static int atmel_tdes_remove(struct platform_device *pdev)
atmel_tdes_buff_cleanup(tdes_dd); atmel_tdes_buff_cleanup(tdes_dd);
iounmap(tdes_dd->io_base);
clk_put(tdes_dd->iclk);
if (tdes_dd->irq >= 0)
free_irq(tdes_dd->irq, tdes_dd);
return 0; return 0;
} }
......
...@@ -96,26 +96,6 @@ struct bfin_crypto_crc_ctx { ...@@ -96,26 +96,6 @@ struct bfin_crypto_crc_ctx {
u32 key; u32 key;
}; };
/*
* derive number of elements in scatterlist
*/
static int sg_count(struct scatterlist *sg_list)
{
struct scatterlist *sg = sg_list;
int sg_nents = 1;
if (sg_list == NULL)
return 0;
while (!sg_is_last(sg)) {
sg_nents++;
sg = sg_next(sg);
}
return sg_nents;
}
/* /*
* get element in scatter list by given index * get element in scatter list by given index
*/ */
...@@ -160,7 +140,7 @@ static int bfin_crypto_crc_init(struct ahash_request *req) ...@@ -160,7 +140,7 @@ static int bfin_crypto_crc_init(struct ahash_request *req)
} }
spin_unlock_bh(&crc_list.lock); spin_unlock_bh(&crc_list.lock);
if (sg_count(req->src) > CRC_MAX_DMA_DESC) { if (sg_nents(req->src) > CRC_MAX_DMA_DESC) {
dev_dbg(ctx->crc->dev, "init: requested sg list is too big > %d\n", dev_dbg(ctx->crc->dev, "init: requested sg list is too big > %d\n",
CRC_MAX_DMA_DESC); CRC_MAX_DMA_DESC);
return -EINVAL; return -EINVAL;
...@@ -376,7 +356,8 @@ static int bfin_crypto_crc_handle_queue(struct bfin_crypto_crc *crc, ...@@ -376,7 +356,8 @@ static int bfin_crypto_crc_handle_queue(struct bfin_crypto_crc *crc,
ctx->sg = req->src; ctx->sg = req->src;
/* Chop crc buffer size to multiple of 32 bit */ /* Chop crc buffer size to multiple of 32 bit */
nsg = ctx->sg_nents = sg_count(ctx->sg); nsg = sg_nents(ctx->sg);
ctx->sg_nents = nsg;
ctx->sg_buflen = ctx->buflast_len + req->nbytes; ctx->sg_buflen = ctx->buflast_len + req->nbytes;
ctx->bufnext_len = ctx->sg_buflen % 4; ctx->bufnext_len = ctx->sg_buflen % 4;
ctx->sg_buflen &= ~0x3; ctx->sg_buflen &= ~0x3;
......
...@@ -1492,7 +1492,6 @@ struct sec4_sg_entry { ...@@ -1492,7 +1492,6 @@ struct sec4_sg_entry {
#define JUMP_JSL (1 << JUMP_JSL_SHIFT) #define JUMP_JSL (1 << JUMP_JSL_SHIFT)
#define JUMP_TYPE_SHIFT 22 #define JUMP_TYPE_SHIFT 22
#define JUMP_TYPE_MASK (0x03 << JUMP_TYPE_SHIFT)
#define JUMP_TYPE_LOCAL (0x00 << JUMP_TYPE_SHIFT) #define JUMP_TYPE_LOCAL (0x00 << JUMP_TYPE_SHIFT)
#define JUMP_TYPE_NONLOCAL (0x01 << JUMP_TYPE_SHIFT) #define JUMP_TYPE_NONLOCAL (0x01 << JUMP_TYPE_SHIFT)
#define JUMP_TYPE_HALT (0x02 << JUMP_TYPE_SHIFT) #define JUMP_TYPE_HALT (0x02 << JUMP_TYPE_SHIFT)
......
...@@ -69,81 +69,13 @@ static inline struct sec4_sg_entry *sg_to_sec4_sg_len( ...@@ -69,81 +69,13 @@ static inline struct sec4_sg_entry *sg_to_sec4_sg_len(
return sec4_sg_ptr - 1; return sec4_sg_ptr - 1;
} }
/* count number of elements in scatterlist */
static inline int __sg_count(struct scatterlist *sg_list, int nbytes,
bool *chained)
{
struct scatterlist *sg = sg_list;
int sg_nents = 0;
while (nbytes > 0) {
sg_nents++;
nbytes -= sg->length;
if (!sg_is_last(sg) && (sg + 1)->length == 0)
*chained = true;
sg = sg_next(sg);
}
return sg_nents;
}
/* derive number of elements in scatterlist, but return 0 for 1 */ /* derive number of elements in scatterlist, but return 0 for 1 */
static inline int sg_count(struct scatterlist *sg_list, int nbytes, static inline int sg_count(struct scatterlist *sg_list, int nbytes)
bool *chained)
{ {
int sg_nents = __sg_count(sg_list, nbytes, chained); int sg_nents = sg_nents_for_len(sg_list, nbytes);
if (likely(sg_nents == 1)) if (likely(sg_nents == 1))
return 0; return 0;
return sg_nents; return sg_nents;
} }
static inline void dma_unmap_sg_chained(
struct device *dev, struct scatterlist *sg, unsigned int nents,
enum dma_data_direction dir, bool chained)
{
if (unlikely(chained)) {
int i;
struct scatterlist *tsg = sg;
/*
* Use a local copy of the sg pointer to avoid moving the
* head of the list pointed to by sg as we walk the list.
*/
for (i = 0; i < nents; i++) {
dma_unmap_sg(dev, tsg, 1, dir);
tsg = sg_next(tsg);
}
} else if (nents) {
dma_unmap_sg(dev, sg, nents, dir);
}
}
static inline int dma_map_sg_chained(
struct device *dev, struct scatterlist *sg, unsigned int nents,
enum dma_data_direction dir, bool chained)
{
if (unlikely(chained)) {
int i;
struct scatterlist *tsg = sg;
/*
* Use a local copy of the sg pointer to avoid moving the
* head of the list pointed to by sg as we walk the list.
*/
for (i = 0; i < nents; i++) {
if (!dma_map_sg(dev, tsg, 1, dir)) {
dma_unmap_sg_chained(dev, sg, i, dir,
chained);
nents = 0;
break;
}
tsg = sg_next(tsg);
}
} else
nents = dma_map_sg(dev, sg, nents, dir);
return nents;
}
...@@ -5,12 +5,12 @@ config CRYPTO_DEV_CCP_DD ...@@ -5,12 +5,12 @@ config CRYPTO_DEV_CCP_DD
select HW_RANDOM select HW_RANDOM
help help
Provides the interface to use the AMD Cryptographic Coprocessor Provides the interface to use the AMD Cryptographic Coprocessor
which can be used to accelerate or offload encryption operations which can be used to offload encryption operations such as SHA,
such as SHA, AES and more. If you choose 'M' here, this module AES and more. If you choose 'M' here, this module will be called
will be called ccp. ccp.
config CRYPTO_DEV_CCP_CRYPTO config CRYPTO_DEV_CCP_CRYPTO
tristate "Encryption and hashing acceleration support" tristate "Encryption and hashing offload support"
depends on CRYPTO_DEV_CCP_DD depends on CRYPTO_DEV_CCP_DD
default m default m
select CRYPTO_HASH select CRYPTO_HASH
...@@ -18,6 +18,5 @@ config CRYPTO_DEV_CCP_CRYPTO ...@@ -18,6 +18,5 @@ config CRYPTO_DEV_CCP_CRYPTO
select CRYPTO_AUTHENC select CRYPTO_AUTHENC
help help
Support for using the cryptographic API with the AMD Cryptographic Support for using the cryptographic API with the AMD Cryptographic
Coprocessor. This module supports acceleration and offload of SHA Coprocessor. This module supports offload of SHA and AES algorithms.
and AES algorithms. If you choose 'M' here, this module will be If you choose 'M' here, this module will be called ccp_crypto.
called ccp_crypto.
...@@ -305,14 +305,16 @@ struct scatterlist *ccp_crypto_sg_table_add(struct sg_table *table, ...@@ -305,14 +305,16 @@ struct scatterlist *ccp_crypto_sg_table_add(struct sg_table *table,
for (sg = table->sgl; sg; sg = sg_next(sg)) for (sg = table->sgl; sg; sg = sg_next(sg))
if (!sg_page(sg)) if (!sg_page(sg))
break; break;
BUG_ON(!sg); if (WARN_ON(!sg))
return NULL;
for (; sg && sg_add; sg = sg_next(sg), sg_add = sg_next(sg_add)) { for (; sg && sg_add; sg = sg_next(sg), sg_add = sg_next(sg_add)) {
sg_set_page(sg, sg_page(sg_add), sg_add->length, sg_set_page(sg, sg_page(sg_add), sg_add->length,
sg_add->offset); sg_add->offset);
sg_last = sg; sg_last = sg;
} }
BUG_ON(sg_add); if (WARN_ON(sg_add))
return NULL;
return sg_last; return sg_last;
} }
......
...@@ -319,7 +319,7 @@ static const struct pci_device_id ccp_pci_table[] = { ...@@ -319,7 +319,7 @@ static const struct pci_device_id ccp_pci_table[] = {
MODULE_DEVICE_TABLE(pci, ccp_pci_table); MODULE_DEVICE_TABLE(pci, ccp_pci_table);
static struct pci_driver ccp_pci_driver = { static struct pci_driver ccp_pci_driver = {
.name = "AMD Cryptographic Coprocessor", .name = "ccp",
.id_table = ccp_pci_table, .id_table = ccp_pci_table,
.probe = ccp_pci_probe, .probe = ccp_pci_probe,
.remove = ccp_pci_remove, .remove = ccp_pci_remove,
......
...@@ -34,7 +34,7 @@ ...@@ -34,7 +34,7 @@
#define DRV_MODULE_VERSION "0.2" #define DRV_MODULE_VERSION "0.2"
#define DRV_MODULE_RELDATE "July 28, 2011" #define DRV_MODULE_RELDATE "July 28, 2011"
static char version[] = static const char version[] =
DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n"; DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
MODULE_AUTHOR("David S. Miller (davem@davemloft.net)"); MODULE_AUTHOR("David S. Miller (davem@davemloft.net)");
......