Commit eba0e319 authored by Linus Torvalds

Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (125 commits)
  [CRYPTO] twofish: Merge common glue code
  [CRYPTO] hifn_795x: Fixup container_of() usage
  [CRYPTO] cast6: inline bloat--
  [CRYPTO] api: Set default CRYPTO_MINALIGN to unsigned long long
  [CRYPTO] tcrypt: Make xcbc available as a standalone test
  [CRYPTO] xcbc: Remove bogus hash/cipher test
  [CRYPTO] xcbc: Fix algorithm leak when block size check fails
  [CRYPTO] tcrypt: Zero axbuf in the right function
  [CRYPTO] padlock: Only reset the key once for each CBC and ECB operation
  [CRYPTO] api: Include sched.h for cond_resched in scatterwalk.h
  [CRYPTO] salsa20-asm: Remove unnecessary dependency on CRYPTO_SALSA20
  [CRYPTO] tcrypt: Add select of AEAD
  [CRYPTO] salsa20: Add x86-64 assembly version
  [CRYPTO] salsa20_i586: Salsa20 stream cipher algorithm (i586 version)
  [CRYPTO] gcm: Introduce rfc4106
  [CRYPTO] api: Show async type
  [CRYPTO] chainiv: Avoid lock spinning where possible
  [CRYPTO] seqiv: Add select AEAD in Kconfig
  [CRYPTO] scatterwalk: Handle zero nbytes in scatterwalk_map_and_copy
  [CRYPTO] null: Allow setkey on digest_null 
  ...
parents df8dc74e 15e7b445
@@ -33,9 +33,16 @@ The idea is to make the user interface and algorithm registration API
 very simple, while hiding the core logic from both.  Many good ideas
 from existing APIs such as Cryptoapi and Nettle have been adapted for this.
 
-The API currently supports three types of transforms: Ciphers, Digests and
-Compressors.  The compression algorithms especially seem to be performing
-very well so far.
+The API currently supports five main types of transforms: AEAD (Authenticated
+Encryption with Associated Data), Block Ciphers, Ciphers, Compressors and
+Hashes.
+
+Please note that Block Ciphers is somewhat of a misnomer.  It is in fact
+meant to support all ciphers including stream ciphers.  The difference
+between Block Ciphers and Ciphers is that the latter operates on exactly
+one block while the former can operate on an arbitrary amount of data,
+subject to block size requirements (i.e., non-stream ciphers can only
+process multiples of blocks).
 
 Support for hardware crypto devices via an asynchronous interface is
 under development.
@@ -69,29 +76,12 @@ Here's an example of how to use the API:
 
 Many real examples are available in the regression test module (tcrypt.c).
 
-CONFIGURATION NOTES
-
-As Triple DES is part of the DES module, for those using modular builds,
-add the following line to /etc/modprobe.conf:
-
-	alias des3_ede des
-
-The Null algorithms reside in the crypto_null module, so these lines
-should also be added:
-
-	alias cipher_null crypto_null
-	alias digest_null crypto_null
-	alias compress_null crypto_null
-
-The SHA384 algorithm shares code within the SHA512 module, so you'll
-also need:
-
-	alias sha384 sha512
-
 DEVELOPER NOTES
 
 Transforms may only be allocated in user context, and cryptographic
-methods may only be called from softirq and user contexts.
+methods may only be called from softirq and user contexts.  For
+transforms with a setkey method it too should only be called from
+user context.
 
 When using the API for ciphers, performance will be optimal if each
 scatterlist contains data which is a multiple of the cipher's block
@@ -130,8 +120,9 @@ might already be working on.
 BUGS
 
 Send bug reports to:
-Herbert Xu <herbert@gondor.apana.org.au>
-Cc: David S. Miller <davem@redhat.com>
+linux-crypto@vger.kernel.org
+Cc: Herbert Xu <herbert@gondor.apana.org.au>,
+    David S. Miller <davem@redhat.com>
 
 FURTHER INFORMATION
...
This diff is collapsed.
@@ -4,12 +4,16 @@
 
 obj-$(CONFIG_CRYPTO_AES_586) += aes-i586.o
 obj-$(CONFIG_CRYPTO_TWOFISH_586) += twofish-i586.o
+obj-$(CONFIG_CRYPTO_SALSA20_586) += salsa20-i586.o
 
 obj-$(CONFIG_CRYPTO_AES_X86_64) += aes-x86_64.o
 obj-$(CONFIG_CRYPTO_TWOFISH_X86_64) += twofish-x86_64.o
+obj-$(CONFIG_CRYPTO_SALSA20_X86_64) += salsa20-x86_64.o
 
-aes-i586-y := aes-i586-asm_32.o aes_32.o
-twofish-i586-y := twofish-i586-asm_32.o twofish_32.o
+aes-i586-y := aes-i586-asm_32.o aes_glue.o
+twofish-i586-y := twofish-i586-asm_32.o twofish_glue.o
+salsa20-i586-y := salsa20-i586-asm_32.o salsa20_glue.o
 
-aes-x86_64-y := aes-x86_64-asm_64.o aes_64.o
-twofish-x86_64-y := twofish-x86_64-asm_64.o twofish_64.o
+aes-x86_64-y := aes-x86_64-asm_64.o aes_glue.o
+twofish-x86_64-y := twofish-x86_64-asm_64.o twofish_glue.o
+salsa20-x86_64-y := salsa20-x86_64-asm_64.o salsa20_glue.o
@@ -46,9 +46,9 @@
 #define in_blk 16
 
 /* offsets in crypto_tfm structure */
-#define ekey (crypto_tfm_ctx_offset + 0)
-#define nrnd (crypto_tfm_ctx_offset + 256)
-#define dkey (crypto_tfm_ctx_offset + 260)
+#define klen (crypto_tfm_ctx_offset + 0)
+#define ekey (crypto_tfm_ctx_offset + 4)
+#define dkey (crypto_tfm_ctx_offset + 244)
 
 // register mapping for encrypt and decrypt subroutines
@@ -221,8 +221,8 @@
 
 .global  aes_enc_blk
 
-.extern  ft_tab
-.extern  fl_tab
+.extern  crypto_ft_tab
+.extern  crypto_fl_tab
 
 .align 4
@@ -236,7 +236,7 @@ aes_enc_blk:
 1:	push    %ebx
 	mov     in_blk+4(%esp),%r2
 	push    %esi
-	mov     nrnd(%ebp),%r3   // number of rounds
+	mov     klen(%ebp),%r3   // key size
 	push    %edi
 #if ekey != 0
 	lea     ekey(%ebp),%ebp  // key pointer
@@ -255,26 +255,26 @@ aes_enc_blk:
 	sub     $8,%esp		// space for register saves on stack
 	add     $16,%ebp	// increment to next round key
-	cmp     $12,%r3
+	cmp     $24,%r3
 	jb      4f		// 10 rounds for 128-bit key
 	lea     32(%ebp),%ebp
 	je      3f		// 12 rounds for 192-bit key
 	lea     32(%ebp),%ebp
 
-2:	fwd_rnd1( -64(%ebp) ,ft_tab)	// 14 rounds for 256-bit key
-	fwd_rnd2( -48(%ebp) ,ft_tab)
-3:	fwd_rnd1( -32(%ebp) ,ft_tab)	// 12 rounds for 192-bit key
-	fwd_rnd2( -16(%ebp) ,ft_tab)
-4:	fwd_rnd1(    (%ebp) ,ft_tab)	// 10 rounds for 128-bit key
-	fwd_rnd2( +16(%ebp) ,ft_tab)
-	fwd_rnd1( +32(%ebp) ,ft_tab)
-	fwd_rnd2( +48(%ebp) ,ft_tab)
-	fwd_rnd1( +64(%ebp) ,ft_tab)
-	fwd_rnd2( +80(%ebp) ,ft_tab)
-	fwd_rnd1( +96(%ebp) ,ft_tab)
-	fwd_rnd2(+112(%ebp) ,ft_tab)
-	fwd_rnd1(+128(%ebp) ,ft_tab)
-	fwd_rnd2(+144(%ebp) ,fl_tab)	// last round uses a different table
+2:	fwd_rnd1( -64(%ebp), crypto_ft_tab)	// 14 rounds for 256-bit key
+	fwd_rnd2( -48(%ebp), crypto_ft_tab)
+3:	fwd_rnd1( -32(%ebp), crypto_ft_tab)	// 12 rounds for 192-bit key
+	fwd_rnd2( -16(%ebp), crypto_ft_tab)
+4:	fwd_rnd1(    (%ebp), crypto_ft_tab)	// 10 rounds for 128-bit key
+	fwd_rnd2( +16(%ebp), crypto_ft_tab)
+	fwd_rnd1( +32(%ebp), crypto_ft_tab)
+	fwd_rnd2( +48(%ebp), crypto_ft_tab)
+	fwd_rnd1( +64(%ebp), crypto_ft_tab)
+	fwd_rnd2( +80(%ebp), crypto_ft_tab)
+	fwd_rnd1( +96(%ebp), crypto_ft_tab)
+	fwd_rnd2(+112(%ebp), crypto_ft_tab)
+	fwd_rnd1(+128(%ebp), crypto_ft_tab)
+	fwd_rnd2(+144(%ebp), crypto_fl_tab)	// last round uses a different table
 
 	// move final values to the output array.  CAUTION: the
 	// order of these assigns rely on the register mappings
@@ -297,8 +297,8 @@ aes_enc_blk:
 
 .global  aes_dec_blk
 
-.extern  it_tab
-.extern  il_tab
+.extern  crypto_it_tab
+.extern  crypto_il_tab
 
 .align 4
@@ -312,14 +312,11 @@ aes_dec_blk:
 1:	push    %ebx
 	mov     in_blk+4(%esp),%r2
 	push    %esi
-	mov     nrnd(%ebp),%r3   // number of rounds
+	mov     klen(%ebp),%r3   // key size
 	push    %edi
 #if dkey != 0
 	lea     dkey(%ebp),%ebp  // key pointer
 #endif
-	mov     %r3,%r0
-	shl     $4,%r0
-	add     %r0,%ebp
 
 	// input four columns and xor in first round key
@@ -333,27 +330,27 @@ aes_dec_blk:
 	xor     12(%ebp),%r5
 
 	sub     $8,%esp		// space for register saves on stack
-	sub     $16,%ebp	// increment to next round key
+	add     $16,%ebp	// increment to next round key
-	cmp     $12,%r3
+	cmp     $24,%r3
 	jb      4f		// 10 rounds for 128-bit key
-	lea     -32(%ebp),%ebp
+	lea     32(%ebp),%ebp
 	je      3f		// 12 rounds for 192-bit key
-	lea     -32(%ebp),%ebp
+	lea     32(%ebp),%ebp
 
-2:	inv_rnd1( +64(%ebp), it_tab)	// 14 rounds for 256-bit key
-	inv_rnd2( +48(%ebp), it_tab)
-3:	inv_rnd1( +32(%ebp), it_tab)	// 12 rounds for 192-bit key
-	inv_rnd2( +16(%ebp), it_tab)
-4:	inv_rnd1(    (%ebp), it_tab)	// 10 rounds for 128-bit key
-	inv_rnd2( -16(%ebp), it_tab)
-	inv_rnd1( -32(%ebp), it_tab)
-	inv_rnd2( -48(%ebp), it_tab)
-	inv_rnd1( -64(%ebp), it_tab)
-	inv_rnd2( -80(%ebp), it_tab)
-	inv_rnd1( -96(%ebp), it_tab)
-	inv_rnd2(-112(%ebp), it_tab)
-	inv_rnd1(-128(%ebp), it_tab)
-	inv_rnd2(-144(%ebp), il_tab)	// last round uses a different table
+2:	inv_rnd1( -64(%ebp), crypto_it_tab)	// 14 rounds for 256-bit key
+	inv_rnd2( -48(%ebp), crypto_it_tab)
+3:	inv_rnd1( -32(%ebp), crypto_it_tab)	// 12 rounds for 192-bit key
+	inv_rnd2( -16(%ebp), crypto_it_tab)
+4:	inv_rnd1(    (%ebp), crypto_it_tab)	// 10 rounds for 128-bit key
+	inv_rnd2( +16(%ebp), crypto_it_tab)
+	inv_rnd1( +32(%ebp), crypto_it_tab)
+	inv_rnd2( +48(%ebp), crypto_it_tab)
+	inv_rnd1( +64(%ebp), crypto_it_tab)
+	inv_rnd2( +80(%ebp), crypto_it_tab)
+	inv_rnd1( +96(%ebp), crypto_it_tab)
+	inv_rnd2(+112(%ebp), crypto_it_tab)
+	inv_rnd1(+128(%ebp), crypto_it_tab)
+	inv_rnd2(+144(%ebp), crypto_il_tab)	// last round uses a different table
 
 	// move final values to the output array.  CAUTION: the
 	// order of these assigns rely on the register mappings
...
@@ -8,10 +8,10 @@
  * including this sentence is retained in full.
  */
 
-.extern aes_ft_tab
-.extern aes_it_tab
-.extern aes_fl_tab
-.extern aes_il_tab
+.extern crypto_ft_tab
+.extern crypto_it_tab
+.extern crypto_fl_tab
+.extern crypto_il_tab
 
 .text
@@ -56,13 +56,13 @@
 	.align	8;			\
 FUNC:	movq	r1,r2;			\
 	movq	r3,r4;			\
-	leaq	BASE+KEY+52(r8),r9;	\
+	leaq	BASE+KEY+48+4(r8),r9;	\
 	movq	r10,r11;		\
 	movl	(r7),r5 ## E;		\
 	movl	4(r7),r1 ## E;		\
 	movl	8(r7),r6 ## E;		\
 	movl	12(r7),r7 ## E;		\
-	movl	BASE(r8),r10 ## E;	\
+	movl	BASE+0(r8),r10 ## E;	\
 	xorl	-48(r9),r5 ## E;	\
 	xorl	-44(r9),r1 ## E;	\
 	xorl	-40(r9),r6 ## E;	\
@@ -154,37 +154,37 @@ FUNC:	movq	r1,r2;			\
 
 /* void aes_enc_blk(stuct crypto_tfm *tfm, u8 *out, const u8 *in) */
 
 	entry(aes_enc_blk,0,enc128,enc192)
-	encrypt_round(aes_ft_tab,-96)
-	encrypt_round(aes_ft_tab,-80)
-enc192:	encrypt_round(aes_ft_tab,-64)
-	encrypt_round(aes_ft_tab,-48)
-enc128:	encrypt_round(aes_ft_tab,-32)
-	encrypt_round(aes_ft_tab,-16)
-	encrypt_round(aes_ft_tab,  0)
-	encrypt_round(aes_ft_tab, 16)
-	encrypt_round(aes_ft_tab, 32)
-	encrypt_round(aes_ft_tab, 48)
-	encrypt_round(aes_ft_tab, 64)
-	encrypt_round(aes_ft_tab, 80)
-	encrypt_round(aes_ft_tab, 96)
-	encrypt_final(aes_fl_tab,112)
+	encrypt_round(crypto_ft_tab,-96)
+	encrypt_round(crypto_ft_tab,-80)
+enc192:	encrypt_round(crypto_ft_tab,-64)
+	encrypt_round(crypto_ft_tab,-48)
+enc128:	encrypt_round(crypto_ft_tab,-32)
+	encrypt_round(crypto_ft_tab,-16)
+	encrypt_round(crypto_ft_tab,  0)
+	encrypt_round(crypto_ft_tab, 16)
+	encrypt_round(crypto_ft_tab, 32)
+	encrypt_round(crypto_ft_tab, 48)
+	encrypt_round(crypto_ft_tab, 64)
+	encrypt_round(crypto_ft_tab, 80)
+	encrypt_round(crypto_ft_tab, 96)
+	encrypt_final(crypto_fl_tab,112)
 	return
 
 /* void aes_dec_blk(struct crypto_tfm *tfm, u8 *out, const u8 *in) */
 
 	entry(aes_dec_blk,240,dec128,dec192)
-	decrypt_round(aes_it_tab,-96)
-	decrypt_round(aes_it_tab,-80)
-dec192:	decrypt_round(aes_it_tab,-64)
-	decrypt_round(aes_it_tab,-48)
-dec128:	decrypt_round(aes_it_tab,-32)
-	decrypt_round(aes_it_tab,-16)
-	decrypt_round(aes_it_tab,  0)
-	decrypt_round(aes_it_tab, 16)
-	decrypt_round(aes_it_tab, 32)
-	decrypt_round(aes_it_tab, 48)
-	decrypt_round(aes_it_tab, 64)
-	decrypt_round(aes_it_tab, 80)
-	decrypt_round(aes_it_tab, 96)
-	decrypt_final(aes_il_tab,112)
+	decrypt_round(crypto_it_tab,-96)
+	decrypt_round(crypto_it_tab,-80)
+dec192:	decrypt_round(crypto_it_tab,-64)
+	decrypt_round(crypto_it_tab,-48)
+dec128:	decrypt_round(crypto_it_tab,-32)
+	decrypt_round(crypto_it_tab,-16)
+	decrypt_round(crypto_it_tab,  0)
+	decrypt_round(crypto_it_tab, 16)
+	decrypt_round(crypto_it_tab, 32)
+	decrypt_round(crypto_it_tab, 48)
+	decrypt_round(crypto_it_tab, 64)
+	decrypt_round(crypto_it_tab, 80)
+	decrypt_round(crypto_it_tab, 96)
+	decrypt_final(crypto_il_tab,112)
 	return
This diff is collapsed.
/*
* Cryptographic API.
*
* AES Cipher Algorithm.
*
* Based on Brian Gladman's code.
*
* Linux developers:
* Alexander Kjeldaas <astor@fast.no>
* Herbert Valerio Riedel <hvr@hvrlab.org>
* Kyle McMartin <kyle@debian.org>
* Adam J. Richter <adam@yggdrasil.com> (conversion to 2.5 API).
* Andreas Steinmetz <ast@domdv.de> (adapted to x86_64 assembler)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* ---------------------------------------------------------------------------
* Copyright (c) 2002, Dr Brian Gladman <brg@gladman.me.uk>, Worcester, UK.
* All rights reserved.
*
* LICENSE TERMS
*
* The free distribution and use of this software in both source and binary
* form is allowed (with or without changes) provided that:
*
* 1. distributions of this source code include the above copyright
* notice, this list of conditions and the following disclaimer;
*
* 2. distributions in binary form include the above copyright
* notice, this list of conditions and the following disclaimer
* in the documentation and/or other associated materials;
*
* 3. the copyright holder's name is not used to endorse products
* built using this software without specific written permission.
*
* ALTERNATIVELY, provided that this notice is retained in full, this product
* may be distributed under the terms of the GNU General Public License (GPL),
* in which case the provisions of the GPL apply INSTEAD OF those given above.
*
* DISCLAIMER
*
* This software is provided 'as is' with no explicit or implied warranties
* in respect of its properties, including, but not limited to, correctness
* and/or fitness for purpose.
* ---------------------------------------------------------------------------
*/
/* Some changes from the Gladman version:
s/RIJNDAEL(e_key)/E_KEY/g
s/RIJNDAEL(d_key)/D_KEY/g
*/
#include <asm/byteorder.h>
#include <linux/bitops.h>
#include <linux/crypto.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/types.h>
#define AES_MIN_KEY_SIZE 16
#define AES_MAX_KEY_SIZE 32
#define AES_BLOCK_SIZE 16
/*
* #define byte(x, nr) ((unsigned char)((x) >> (nr*8)))
*/
static inline u8 byte(const u32 x, const unsigned n)
{
return x >> (n << 3);
}
struct aes_ctx
{
u32 key_length;
u32 buf[120];
};
#define E_KEY (&ctx->buf[0])
#define D_KEY (&ctx->buf[60])
static u8 pow_tab[256] __initdata;
static u8 log_tab[256] __initdata;
static u8 sbx_tab[256] __initdata;
static u8 isb_tab[256] __initdata;
static u32 rco_tab[10];
u32 aes_ft_tab[4][256];
u32 aes_it_tab[4][256];
u32 aes_fl_tab[4][256];
u32 aes_il_tab[4][256];
static inline u8 f_mult(u8 a, u8 b)
{
u8 aa = log_tab[a], cc = aa + log_tab[b];
return pow_tab[cc + (cc < aa ? 1 : 0)];
}
#define ff_mult(a, b) (a && b ? f_mult(a, b) : 0)
#define ls_box(x) \
(aes_fl_tab[0][byte(x, 0)] ^ \
aes_fl_tab[1][byte(x, 1)] ^ \
aes_fl_tab[2][byte(x, 2)] ^ \
aes_fl_tab[3][byte(x, 3)])
static void __init gen_tabs(void)
{
u32 i, t;
u8 p, q;
/* log and power tables for GF(2**8) finite field with
0x011b as modular polynomial - the simplest primitive
root is 0x03, used here to generate the tables */
for (i = 0, p = 1; i < 256; ++i) {
pow_tab[i] = (u8)p;
log_tab[p] = (u8)i;
p ^= (p << 1) ^ (p & 0x80 ? 0x01b : 0);
}
log_tab[1] = 0;
for (i = 0, p = 1; i < 10; ++i) {
rco_tab[i] = p;
p = (p << 1) ^ (p & 0x80 ? 0x01b : 0);
}
for (i = 0; i < 256; ++i) {
p = (i ? pow_tab[255 - log_tab[i]] : 0);
q = ((p >> 7) | (p << 1)) ^ ((p >> 6) | (p << 2));
p ^= 0x63 ^ q ^ ((q >> 6) | (q << 2));
sbx_tab[i] = p;
isb_tab[p] = (u8)i;
}
for (i = 0; i < 256; ++i) {
p = sbx_tab[i];
t = p;
aes_fl_tab[0][i] = t;
aes_fl_tab[1][i] = rol32(t, 8);
aes_fl_tab[2][i] = rol32(t, 16);
aes_fl_tab[3][i] = rol32(t, 24);
t = ((u32)ff_mult(2, p)) |
((u32)p << 8) |
((u32)p << 16) | ((u32)ff_mult(3, p) << 24);
aes_ft_tab[0][i] = t;
aes_ft_tab[1][i] = rol32(t, 8);
aes_ft_tab[2][i] = rol32(t, 16);
aes_ft_tab[3][i] = rol32(t, 24);
p = isb_tab[i];
t = p;
aes_il_tab[0][i] = t;
aes_il_tab[1][i] = rol32(t, 8);
aes_il_tab[2][i] = rol32(t, 16);
aes_il_tab[3][i] = rol32(t, 24);
t = ((u32)ff_mult(14, p)) |
((u32)ff_mult(9, p) << 8) |
((u32)ff_mult(13, p) << 16) |
((u32)ff_mult(11, p) << 24);
aes_it_tab[0][i] = t;
aes_it_tab[1][i] = rol32(t, 8);
aes_it_tab[2][i] = rol32(t, 16);
aes_it_tab[3][i] = rol32(t, 24);
}
}
#define star_x(x) (((x) & 0x7f7f7f7f) << 1) ^ ((((x) & 0x80808080) >> 7) * 0x1b)
#define imix_col(y, x) \
u = star_x(x); \
v = star_x(u); \
w = star_x(v); \
t = w ^ (x); \
(y) = u ^ v ^ w; \
(y) ^= ror32(u ^ t, 8) ^ \
ror32(v ^ t, 16) ^ \
ror32(t, 24)
/* initialise the key schedule from the user supplied key */
#define loop4(i) \
{ \
t = ror32(t, 8); t = ls_box(t) ^ rco_tab[i]; \
t ^= E_KEY[4 * i]; E_KEY[4 * i + 4] = t; \
t ^= E_KEY[4 * i + 1]; E_KEY[4 * i + 5] = t; \
t ^= E_KEY[4 * i + 2]; E_KEY[4 * i + 6] = t; \
t ^= E_KEY[4 * i + 3]; E_KEY[4 * i + 7] = t; \
}
#define loop6(i) \
{ \
t = ror32(t, 8); t = ls_box(t) ^ rco_tab[i]; \
t ^= E_KEY[6 * i]; E_KEY[6 * i + 6] = t; \
t ^= E_KEY[6 * i + 1]; E_KEY[6 * i + 7] = t; \
t ^= E_KEY[6 * i + 2]; E_KEY[6 * i + 8] = t; \
t ^= E_KEY[6 * i + 3]; E_KEY[6 * i + 9] = t; \
t ^= E_KEY[6 * i + 4]; E_KEY[6 * i + 10] = t; \
t ^= E_KEY[6 * i + 5]; E_KEY[6 * i + 11] = t; \
}
#define loop8(i) \
{ \
t = ror32(t, 8); ; t = ls_box(t) ^ rco_tab[i]; \
t ^= E_KEY[8 * i]; E_KEY[8 * i + 8] = t; \
t ^= E_KEY[8 * i + 1]; E_KEY[8 * i + 9] = t; \
t ^= E_KEY[8 * i + 2]; E_KEY[8 * i + 10] = t; \
t ^= E_KEY[8 * i + 3]; E_KEY[8 * i + 11] = t; \
t = E_KEY[8 * i + 4] ^ ls_box(t); \
E_KEY[8 * i + 12] = t; \
t ^= E_KEY[8 * i + 5]; E_KEY[8 * i + 13] = t; \
t ^= E_KEY[8 * i + 6]; E_KEY[8 * i + 14] = t; \
t ^= E_KEY[8 * i + 7]; E_KEY[8 * i + 15] = t; \
}
static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
unsigned int key_len)
{
struct aes_ctx *ctx = crypto_tfm_ctx(tfm);
const __le32 *key = (const __le32 *)in_key;
u32 *flags = &tfm->crt_flags;
u32 i, j, t, u, v, w;
if (key_len % 8) {
*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
}
ctx->key_length = key_len;
D_KEY[key_len + 24] = E_KEY[0] = le32_to_cpu(key[0]);
D_KEY[key_len + 25] = E_KEY[1] = le32_to_cpu(key[1]);
D_KEY[key_len + 26] = E_KEY[2] = le32_to_cpu(key[2]);
D_KEY[key_len + 27] = E_KEY[3] = le32_to_cpu(key[3]);
switch (key_len) {
case 16:
t = E_KEY[3];
for (i = 0; i < 10; ++i)
loop4(i);
break;
case 24:
E_KEY[4] = le32_to_cpu(key[4]);
t = E_KEY[5] = le32_to_cpu(key[5]);
for (i = 0; i < 8; ++i)
loop6 (i);
break;
case 32:
E_KEY[4] = le32_to_cpu(key[4]);
E_KEY[5] = le32_to_cpu(key[5]);
E_KEY[6] = le32_to_cpu(key[6]);
t = E_KEY[7] = le32_to_cpu(key[7]);
for (i = 0; i < 7; ++i)
loop8(i);
break;
}
D_KEY[0] = E_KEY[key_len + 24];
D_KEY[1] = E_KEY[key_len + 25];
D_KEY[2] = E_KEY[key_len + 26];
D_KEY[3] = E_KEY[key_len + 27];
for (i = 4; i < key_len + 24; ++i) {
j = key_len + 24 - (i & ~3) + (i & 3);
imix_col(D_KEY[j], E_KEY[i]);
}
return 0;
}
asmlinkage void aes_enc_blk(struct crypto_tfm *tfm, u8 *out, const u8 *in);
asmlinkage void aes_dec_blk(struct crypto_tfm *tfm, u8 *out, const u8 *in);
static void aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
aes_enc_blk(tfm, dst, src);
}
static void aes_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
aes_dec_blk(tfm, dst, src);
}
static struct crypto_alg aes_alg = {
.cra_name = "aes",
.cra_driver_name = "aes-x86_64",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct aes_ctx),
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(aes_alg.cra_list),
.cra_u = {
.cipher = {
.cia_min_keysize = AES_MIN_KEY_SIZE,
.cia_max_keysize = AES_MAX_KEY_SIZE,
.cia_setkey = aes_set_key,
.cia_encrypt = aes_encrypt,
.cia_decrypt = aes_decrypt
}
}
};
static int __init aes_init(void)
{
gen_tabs();
return crypto_register_alg(&aes_alg);
}
static void __exit aes_fini(void)
{
crypto_unregister_alg(&aes_alg);
}
module_init(aes_init);
module_exit(aes_fini);
MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm");
MODULE_LICENSE("GPL");
MODULE_ALIAS("aes");
/*
* Glue Code for the asm optimized version of the AES Cipher Algorithm
*
*/
#include <crypto/aes.h>
asmlinkage void aes_enc_blk(struct crypto_tfm *tfm, u8 *out, const u8 *in);
asmlinkage void aes_dec_blk(struct crypto_tfm *tfm, u8 *out, const u8 *in);
static void aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
aes_enc_blk(tfm, dst, src);
}
static void aes_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
aes_dec_blk(tfm, dst, src);
}
static struct crypto_alg aes_alg = {
.cra_name = "aes",
.cra_driver_name = "aes-asm",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypto_aes_ctx),
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(aes_alg.cra_list),
.cra_u = {
.cipher = {
.cia_min_keysize = AES_MIN_KEY_SIZE,
.cia_max_keysize = AES_MAX_KEY_SIZE,
.cia_setkey = crypto_aes_set_key,
.cia_encrypt = aes_encrypt,
.cia_decrypt = aes_decrypt
}
}
};
static int __init aes_init(void)
{
return crypto_register_alg(&aes_alg);
}
static void __exit aes_fini(void)
{
crypto_unregister_alg(&aes_alg);
}
module_init(aes_init);
module_exit(aes_fini);
MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm, asm optimized");
MODULE_LICENSE("GPL");
MODULE_ALIAS("aes");
MODULE_ALIAS("aes-asm");
This diff is collapsed.
This diff is collapsed.
/*
* Glue code for optimized assembly version of Salsa20.
*
* Copyright (c) 2007 Tan Swee Heng <thesweeheng@gmail.com>
*
* The assembly codes are public domain assembly codes written by Daniel J.
* Bernstein <djb@cr.yp.to>. The codes are modified to include indentation
* and to remove extraneous comments and functions that are not needed.
* - i586 version, renamed as salsa20-i586-asm_32.S
* available from <http://cr.yp.to/snuffle/salsa20/x86-pm/salsa20.s>
* - x86-64 version, renamed as salsa20-x86_64-asm_64.S
* available from <http://cr.yp.to/snuffle/salsa20/amd64-3/salsa20.s>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
*/
#include <crypto/algapi.h>
#include <linux/module.h>
#include <linux/crypto.h>
#define SALSA20_IV_SIZE 8U
#define SALSA20_MIN_KEY_SIZE 16U
#define SALSA20_MAX_KEY_SIZE 32U
// use the ECRYPT_* function names
#define salsa20_keysetup ECRYPT_keysetup
#define salsa20_ivsetup ECRYPT_ivsetup
#define salsa20_encrypt_bytes ECRYPT_encrypt_bytes
struct salsa20_ctx
{
u32 input[16];
};
asmlinkage void salsa20_keysetup(struct salsa20_ctx *ctx, const u8 *k,
u32 keysize, u32 ivsize);
asmlinkage void salsa20_ivsetup(struct salsa20_ctx *ctx, const u8 *iv);
asmlinkage void salsa20_encrypt_bytes(struct salsa20_ctx *ctx,
const u8 *src, u8 *dst, u32 bytes);
static int setkey(struct crypto_tfm *tfm, const u8 *key,
unsigned int keysize)
{
struct salsa20_ctx *ctx = crypto_tfm_ctx(tfm);
salsa20_keysetup(ctx, key, keysize*8, SALSA20_IV_SIZE*8);
return 0;
}
static int encrypt(struct blkcipher_desc *desc,
struct scatterlist *dst, struct scatterlist *src,
unsigned int nbytes)
{
struct blkcipher_walk walk;
struct crypto_blkcipher *tfm = desc->tfm;
struct salsa20_ctx *ctx = crypto_blkcipher_ctx(tfm);
int err;
blkcipher_walk_init(&walk, dst, src, nbytes);
err = blkcipher_walk_virt_block(desc, &walk, 64);
salsa20_ivsetup(ctx, walk.iv);
if (likely(walk.nbytes == nbytes))
{
salsa20_encrypt_bytes(ctx, walk.src.virt.addr,
walk.dst.virt.addr, nbytes);
return blkcipher_walk_done(desc, &walk, 0);
}
while (walk.nbytes >= 64) {
salsa20_encrypt_bytes(ctx, walk.src.virt.addr,
walk.dst.virt.addr,
walk.nbytes - (walk.nbytes % 64));
err = blkcipher_walk_done(desc, &walk, walk.nbytes % 64);
}
if (walk.nbytes) {
salsa20_encrypt_bytes(ctx, walk.src.virt.addr,
walk.dst.virt.addr, walk.nbytes);
err = blkcipher_walk_done(desc, &walk, 0);
}
return err;
}
static struct crypto_alg alg = {
.cra_name = "salsa20",
.cra_driver_name = "salsa20-asm",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
.cra_type = &crypto_blkcipher_type,
.cra_blocksize = 1,
.cra_ctxsize = sizeof(struct salsa20_ctx),
.cra_alignmask = 3,
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(alg.cra_list),
.cra_u = {
.blkcipher = {
.setkey = setkey,
.encrypt = encrypt,
.decrypt = encrypt,
.min_keysize = SALSA20_MIN_KEY_SIZE,
.max_keysize = SALSA20_MAX_KEY_SIZE,
.ivsize = SALSA20_IV_SIZE,
}
}
};
static int __init init(void)
{
return crypto_register_alg(&alg);
}
static void __exit fini(void)
{
crypto_unregister_alg(&alg);
}
module_init(init);
module_exit(fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION ("Salsa20 stream cipher algorithm (optimized assembly version)");
MODULE_ALIAS("salsa20");
MODULE_ALIAS("salsa20-asm");
/*
* Glue Code for optimized x86_64 assembler version of TWOFISH
*
* Originally Twofish for GPG
* By Matthew Skala <mskala@ansuz.sooke.bc.ca>, July 26, 1998
* 256-bit key length added March 20, 1999
* Some modifications to reduce the text size by Werner Koch, April, 1998
* Ported to the kerneli patch by Marc Mutz <Marc@Mutz.com>
* Ported to CryptoAPI by Colin Slater <hoho@tacomeat.net>
*
* The original author has disclaimed all copyright interest in this
* code and thus put it in the public domain. The subsequent authors
* have put this under the GNU General Public License.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
* USA
*
* This code is a "clean room" implementation, written from the paper
* _Twofish: A 128-Bit Block Cipher_ by Bruce Schneier, John Kelsey,
* Doug Whiting, David Wagner, Chris Hall, and Niels Ferguson, available
* through http://www.counterpane.com/twofish.html
*
* For background information on multiplication in finite fields, used for
* the matrix operations in the key schedule, see the book _Contemporary
* Abstract Algebra_ by Joseph A. Gallian, especially chapter 22 in the
* Third Edition.
*/
#include <crypto/twofish.h>
#include <linux/crypto.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/types.h>
asmlinkage void twofish_enc_blk(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
asmlinkage void twofish_dec_blk(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
static void twofish_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
twofish_enc_blk(tfm, dst, src);
}
static void twofish_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
twofish_dec_blk(tfm, dst, src);
}
static struct crypto_alg alg = {
.cra_name = "twofish",
.cra_driver_name = "twofish-x86_64",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = TF_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct twofish_ctx),
.cra_alignmask = 3,
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(alg.cra_list),
.cra_u = {
.cipher = {
.cia_min_keysize = TF_MIN_KEY_SIZE,
.cia_max_keysize = TF_MAX_KEY_SIZE,
.cia_setkey = twofish_setkey,
.cia_encrypt = twofish_encrypt,
.cia_decrypt = twofish_decrypt
}
}
};
static int __init init(void)
{
return crypto_register_alg(&alg);
}
static void __exit fini(void)
{
crypto_unregister_alg(&alg);
}
module_init(init);
module_exit(fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION ("Twofish Cipher Algorithm, x86_64 asm optimized");
MODULE_ALIAS("twofish");
 /*
- * Glue Code for optimized 586 assembler version of TWOFISH
+ * Glue Code for assembler optimized version of TWOFISH
  *
  * Originally Twofish for GPG
  * By Matthew Skala <mskala@ansuz.sooke.bc.ca>, July 26, 1998
@@ -44,7 +44,6 @@
 #include <linux/module.h>
 #include <linux/types.h>
 
-
 asmlinkage void twofish_enc_blk(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
 asmlinkage void twofish_dec_blk(struct crypto_tfm *tfm, u8 *dst, const u8 *src);
@@ -60,7 +59,7 @@ static void twofish_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 
 static struct crypto_alg alg = {
 	.cra_name		=	"twofish",
-	.cra_driver_name	=	"twofish-i586",
+	.cra_driver_name	=	"twofish-asm",
 	.cra_priority		=	200,
 	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER,
 	.cra_blocksize		=	TF_BLOCK_SIZE,
@@ -93,5 +92,6 @@ module_init(init);
 module_exit(fini);
 
 MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION ("Twofish Cipher Algorithm, i586 asm optimized");
+MODULE_DESCRIPTION ("Twofish Cipher Algorithm, asm optimized");
 MODULE_ALIAS("twofish");
+MODULE_ALIAS("twofish-asm");
@@ -24,10 +24,6 @@ config CRYPTO_ALGAPI
	help
	  This option provides the API for cryptographic algorithms.

-config CRYPTO_ABLKCIPHER
-	tristate
-	select CRYPTO_BLKCIPHER

config CRYPTO_AEAD
	tristate
	select CRYPTO_ALGAPI

@@ -36,6 +32,15 @@ config CRYPTO_BLKCIPHER
	tristate
	select CRYPTO_ALGAPI
config CRYPTO_SEQIV
tristate "Sequence Number IV Generator"
select CRYPTO_AEAD
select CRYPTO_BLKCIPHER
help
This IV generator generates an IV based on a sequence number by
xoring it with a salt. This algorithm is mainly useful for CTR
and similar modes.
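The CRYPTO_SEQIV help text above describes IV generation by XORing a sequence number into a per-key salt. A minimal sketch of that idea (not the kernel's seqiv implementation; the function name and the 8-byte width are illustrative):

```python
def seqiv(salt: bytes, seq: int) -> bytes:
    """Derive an IV by XORing a big-endian sequence number into a salt."""
    ctr = seq.to_bytes(len(salt), "big")
    return bytes(s ^ c for s, c in zip(salt, ctr))

# Successive sequence numbers give distinct IVs under the same salt.
iv1 = seqiv(b"\x10" * 8, 1)
iv2 = seqiv(b"\x10" * 8, 2)
```

Since the receiver already knows the sequence number, such IVs need not be transmitted explicitly, which is why this generator suits CTR-like modes.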
config CRYPTO_HASH
	tristate
	select CRYPTO_ALGAPI

@@ -91,7 +96,7 @@ config CRYPTO_SHA1
	  SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).

config CRYPTO_SHA256
-	tristate "SHA256 digest algorithm"
+	tristate "SHA224 and SHA256 digest algorithm"
	select CRYPTO_ALGAPI
	help
	  SHA256 secure hash standard (DFIPS 180-2).

@@ -99,6 +104,9 @@ config CRYPTO_SHA256
	  This version of SHA implements a 256 bit hash with 128 bits of
	  security against collision attacks.

+	  This code also includes SHA-224, a 224 bit hash with 112 bits
+	  of security against collision attacks.

config CRYPTO_SHA512
	tristate "SHA384 and SHA512 digest algorithms"
	select CRYPTO_ALGAPI

@@ -195,9 +203,34 @@ config CRYPTO_XTS
	  key size 256, 384 or 512 bits. This implementation currently
	  can't handle a sectorsize which is not a multiple of 16 bytes.
config CRYPTO_CTR
tristate "CTR support"
select CRYPTO_BLKCIPHER
select CRYPTO_SEQIV
select CRYPTO_MANAGER
help
CTR: Counter mode
This block cipher algorithm is required for IPSec.
config CRYPTO_GCM
tristate "GCM/GMAC support"
select CRYPTO_CTR
select CRYPTO_AEAD
select CRYPTO_GF128MUL
help
Support for Galois/Counter Mode (GCM) and Galois Message
Authentication Code (GMAC). Required for IPSec.
config CRYPTO_CCM
tristate "CCM support"
select CRYPTO_CTR
select CRYPTO_AEAD
help
Support for Counter with CBC MAC. Required for IPsec.
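CTR turns a block cipher into a stream cipher: it encrypts an incrementing counter block and XORs the result with the data, so encryption and decryption are the same operation. A toy sketch of the mode, using SHA-256 as a stand-in keystream block function purely for illustration (the kernel's ctr.o wraps a real block cipher, and GCM/CCM build authentication on top of this):

```python
import hashlib

def ctr_xcrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy CTR mode: keystream block i = SHA-256(key || nonce || i)."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(
            key + nonce + (i // 32).to_bytes(4, "big")).digest()
        chunk = data[i:i + 32]
        # XOR the data against the keystream block (zip truncates the tail).
        out += bytes(d ^ k for d, k in zip(chunk, block))
    return bytes(out)

ct = ctr_xcrypt(b"k" * 16, b"n" * 8, b"counter mode demo")
pt = ctr_xcrypt(b"k" * 16, b"n" * 8, ct)   # the same call decrypts
```

Because only the counter is ever fed through the block function, CTR never needs the cipher's decryption direction, which is one reason it composes well with AEAD constructions like GCM and CCM.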
config CRYPTO_CRYPTD
	tristate "Software async crypto daemon"
-	select CRYPTO_ABLKCIPHER
+	select CRYPTO_BLKCIPHER
	select CRYPTO_MANAGER
	help
	  This is a generic software asynchronous crypto daemon that

@@ -320,6 +353,7 @@ config CRYPTO_AES_586
	tristate "AES cipher algorithms (i586)"
	depends on (X86 || UML_X86) && !64BIT
	select CRYPTO_ALGAPI
+	select CRYPTO_AES
	help
	  AES cipher algorithms (FIPS-197). AES uses the Rijndael
	  algorithm.

@@ -341,6 +375,7 @@ config CRYPTO_AES_X86_64
	tristate "AES cipher algorithms (x86_64)"
	depends on (X86 || UML_X86) && 64BIT
	select CRYPTO_ALGAPI
+	select CRYPTO_AES
	help
	  AES cipher algorithms (FIPS-197). AES uses the Rijndael
	  algorithm.

@@ -441,6 +476,46 @@ config CRYPTO_SEED
	  See also:
	  <http://www.kisa.or.kr/kisa/seed/jsp/seed_eng.jsp>
config CRYPTO_SALSA20
tristate "Salsa20 stream cipher algorithm (EXPERIMENTAL)"
depends on EXPERIMENTAL
select CRYPTO_BLKCIPHER
help
Salsa20 stream cipher algorithm.
Salsa20 is a stream cipher submitted to eSTREAM, the ECRYPT
Stream Cipher Project. See <http://www.ecrypt.eu.org/stream/>
The Salsa20 stream cipher algorithm is designed by Daniel J.
Bernstein <djb@cr.yp.to>. See <http://cr.yp.to/snuffle.html>
config CRYPTO_SALSA20_586
tristate "Salsa20 stream cipher algorithm (i586) (EXPERIMENTAL)"
depends on (X86 || UML_X86) && !64BIT
depends on EXPERIMENTAL
select CRYPTO_BLKCIPHER
help
Salsa20 stream cipher algorithm.
Salsa20 is a stream cipher submitted to eSTREAM, the ECRYPT
Stream Cipher Project. See <http://www.ecrypt.eu.org/stream/>
The Salsa20 stream cipher algorithm is designed by Daniel J.
Bernstein <djb@cr.yp.to>. See <http://cr.yp.to/snuffle.html>
config CRYPTO_SALSA20_X86_64
tristate "Salsa20 stream cipher algorithm (x86_64) (EXPERIMENTAL)"
depends on (X86 || UML_X86) && 64BIT
depends on EXPERIMENTAL
select CRYPTO_BLKCIPHER
help
Salsa20 stream cipher algorithm.
Salsa20 is a stream cipher submitted to eSTREAM, the ECRYPT
Stream Cipher Project. See <http://www.ecrypt.eu.org/stream/>
The Salsa20 stream cipher algorithm is designed by Daniel J.
Bernstein <djb@cr.yp.to>. See <http://cr.yp.to/snuffle.html>
config CRYPTO_DEFLATE
	tristate "Deflate compression algorithm"

@@ -491,6 +566,7 @@ config CRYPTO_TEST
	tristate "Testing module"
	depends on m
	select CRYPTO_ALGAPI
+	select CRYPTO_AEAD
	help
	  Quick & dirty crypto test module.

@@ -498,10 +574,19 @@ config CRYPTO_AUTHENC
	tristate "Authenc support"
	select CRYPTO_AEAD
	select CRYPTO_MANAGER
+	select CRYPTO_HASH
	help
	  Authenc: Combined mode wrapper for IPsec.
	  This is required for IPSec.
config CRYPTO_LZO
tristate "LZO compression algorithm"
select CRYPTO_ALGAPI
select LZO_COMPRESS
select LZO_DECOMPRESS
help
This is the LZO algorithm.
source "drivers/crypto/Kconfig"

endif	# if CRYPTO
@@ -8,9 +8,14 @@ crypto_algapi-$(CONFIG_PROC_FS) += proc.o
crypto_algapi-objs := algapi.o scatterwalk.o $(crypto_algapi-y)
obj-$(CONFIG_CRYPTO_ALGAPI) += crypto_algapi.o

-obj-$(CONFIG_CRYPTO_ABLKCIPHER) += ablkcipher.o
obj-$(CONFIG_CRYPTO_AEAD) += aead.o
-obj-$(CONFIG_CRYPTO_BLKCIPHER) += blkcipher.o
+crypto_blkcipher-objs := ablkcipher.o
+crypto_blkcipher-objs += blkcipher.o
+obj-$(CONFIG_CRYPTO_BLKCIPHER) += crypto_blkcipher.o
+obj-$(CONFIG_CRYPTO_BLKCIPHER) += chainiv.o
+obj-$(CONFIG_CRYPTO_BLKCIPHER) += eseqiv.o
+obj-$(CONFIG_CRYPTO_SEQIV) += seqiv.o

crypto_hash-objs := hash.o
obj-$(CONFIG_CRYPTO_HASH) += crypto_hash.o

@@ -32,6 +37,9 @@ obj-$(CONFIG_CRYPTO_CBC) += cbc.o
obj-$(CONFIG_CRYPTO_PCBC) += pcbc.o
obj-$(CONFIG_CRYPTO_LRW) += lrw.o
obj-$(CONFIG_CRYPTO_XTS) += xts.o
+obj-$(CONFIG_CRYPTO_CTR) += ctr.o
+obj-$(CONFIG_CRYPTO_GCM) += gcm.o
+obj-$(CONFIG_CRYPTO_CCM) += ccm.o
obj-$(CONFIG_CRYPTO_CRYPTD) += cryptd.o
obj-$(CONFIG_CRYPTO_DES) += des_generic.o
obj-$(CONFIG_CRYPTO_FCRYPT) += fcrypt.o

@@ -48,10 +56,12 @@ obj-$(CONFIG_CRYPTO_TEA) += tea.o
obj-$(CONFIG_CRYPTO_KHAZAD) += khazad.o
obj-$(CONFIG_CRYPTO_ANUBIS) += anubis.o
obj-$(CONFIG_CRYPTO_SEED) += seed.o
+obj-$(CONFIG_CRYPTO_SALSA20) += salsa20_generic.o
obj-$(CONFIG_CRYPTO_DEFLATE) += deflate.o
obj-$(CONFIG_CRYPTO_MICHAEL_MIC) += michael_mic.o
obj-$(CONFIG_CRYPTO_CRC32C) += crc32c.o
obj-$(CONFIG_CRYPTO_AUTHENC) += authenc.o
+obj-$(CONFIG_CRYPTO_LZO) += lzo.o
obj-$(CONFIG_CRYPTO_TEST) += tcrypt.o
......
@@ -13,14 +13,18 @@
 *
 */

-#include <crypto/algapi.h>
+#include <crypto/internal/skcipher.h>
-#include <linux/errno.h>
+#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
+#include <linux/rtnetlink.h>
+#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/seq_file.h>

+#include "internal.h"

static int setkey_unaligned(struct crypto_ablkcipher *tfm, const u8 *key,
			    unsigned int keylen)
{

@@ -66,6 +70,16 @@ static unsigned int crypto_ablkcipher_ctxsize(struct crypto_alg *alg, u32 type,
	return alg->cra_ctxsize;
}

+int skcipher_null_givencrypt(struct skcipher_givcrypt_request *req)
+{
+	return crypto_ablkcipher_encrypt(&req->creq);
+}
+
+int skcipher_null_givdecrypt(struct skcipher_givcrypt_request *req)
+{
+	return crypto_ablkcipher_decrypt(&req->creq);
+}

static int crypto_init_ablkcipher_ops(struct crypto_tfm *tfm, u32 type,
				      u32 mask)
{

@@ -78,6 +92,11 @@ static int crypto_init_ablkcipher_ops(struct crypto_tfm *tfm, u32 type,
	crt->setkey = setkey;
	crt->encrypt = alg->encrypt;
	crt->decrypt = alg->decrypt;
+	if (!alg->ivsize) {
+		crt->givencrypt = skcipher_null_givencrypt;
+		crt->givdecrypt = skcipher_null_givdecrypt;
+	}
+	crt->base = __crypto_ablkcipher_cast(tfm);
	crt->ivsize = alg->ivsize;

	return 0;

@@ -90,10 +109,13 @@ static void crypto_ablkcipher_show(struct seq_file *m, struct crypto_alg *alg)
	struct ablkcipher_alg *ablkcipher = &alg->cra_ablkcipher;

	seq_printf(m, "type         : ablkcipher\n");
+	seq_printf(m, "async        : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
+					     "yes" : "no");
	seq_printf(m, "blocksize    : %u\n", alg->cra_blocksize);
	seq_printf(m, "min keysize  : %u\n", ablkcipher->min_keysize);
	seq_printf(m, "max keysize  : %u\n", ablkcipher->max_keysize);
	seq_printf(m, "ivsize       : %u\n", ablkcipher->ivsize);
+	seq_printf(m, "geniv        : %s\n", ablkcipher->geniv ?: "<default>");
}

const struct crypto_type crypto_ablkcipher_type = {

@@ -105,5 +127,220 @@ const struct crypto_type crypto_ablkcipher_type = {
};
EXPORT_SYMBOL_GPL(crypto_ablkcipher_type);
static int no_givdecrypt(struct skcipher_givcrypt_request *req)
{
return -ENOSYS;
}
static int crypto_init_givcipher_ops(struct crypto_tfm *tfm, u32 type,
u32 mask)
{
struct ablkcipher_alg *alg = &tfm->__crt_alg->cra_ablkcipher;
struct ablkcipher_tfm *crt = &tfm->crt_ablkcipher;
if (alg->ivsize > PAGE_SIZE / 8)
return -EINVAL;
crt->setkey = tfm->__crt_alg->cra_flags & CRYPTO_ALG_GENIV ?
alg->setkey : setkey;
crt->encrypt = alg->encrypt;
crt->decrypt = alg->decrypt;
crt->givencrypt = alg->givencrypt;
crt->givdecrypt = alg->givdecrypt ?: no_givdecrypt;
crt->base = __crypto_ablkcipher_cast(tfm);
crt->ivsize = alg->ivsize;
return 0;
}
static void crypto_givcipher_show(struct seq_file *m, struct crypto_alg *alg)
__attribute__ ((unused));
static void crypto_givcipher_show(struct seq_file *m, struct crypto_alg *alg)
{
struct ablkcipher_alg *ablkcipher = &alg->cra_ablkcipher;
seq_printf(m, "type : givcipher\n");
seq_printf(m, "async : %s\n", alg->cra_flags & CRYPTO_ALG_ASYNC ?
"yes" : "no");
seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
seq_printf(m, "min keysize : %u\n", ablkcipher->min_keysize);
seq_printf(m, "max keysize : %u\n", ablkcipher->max_keysize);
seq_printf(m, "ivsize : %u\n", ablkcipher->ivsize);
seq_printf(m, "geniv : %s\n", ablkcipher->geniv ?: "<built-in>");
}
const struct crypto_type crypto_givcipher_type = {
.ctxsize = crypto_ablkcipher_ctxsize,
.init = crypto_init_givcipher_ops,
#ifdef CONFIG_PROC_FS
.show = crypto_givcipher_show,
#endif
};
EXPORT_SYMBOL_GPL(crypto_givcipher_type);
const char *crypto_default_geniv(const struct crypto_alg *alg)
{
return alg->cra_flags & CRYPTO_ALG_ASYNC ? "eseqiv" : "chainiv";
}
static int crypto_givcipher_default(struct crypto_alg *alg, u32 type, u32 mask)
{
struct rtattr *tb[3];
struct {
struct rtattr attr;
struct crypto_attr_type data;
} ptype;
struct {
struct rtattr attr;
struct crypto_attr_alg data;
} palg;
struct crypto_template *tmpl;
struct crypto_instance *inst;
struct crypto_alg *larval;
const char *geniv;
int err;
larval = crypto_larval_lookup(alg->cra_driver_name,
CRYPTO_ALG_TYPE_GIVCIPHER,
CRYPTO_ALG_TYPE_MASK);
err = PTR_ERR(larval);
if (IS_ERR(larval))
goto out;
err = -EAGAIN;
if (!crypto_is_larval(larval))
goto drop_larval;
ptype.attr.rta_len = sizeof(ptype);
ptype.attr.rta_type = CRYPTOA_TYPE;
ptype.data.type = type | CRYPTO_ALG_GENIV;
/* GENIV tells the template that we're making a default geniv. */
ptype.data.mask = mask | CRYPTO_ALG_GENIV;
tb[0] = &ptype.attr;
palg.attr.rta_len = sizeof(palg);
palg.attr.rta_type = CRYPTOA_ALG;
/* Must use the exact name to locate ourselves. */
memcpy(palg.data.name, alg->cra_driver_name, CRYPTO_MAX_ALG_NAME);
tb[1] = &palg.attr;
tb[2] = NULL;
if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
CRYPTO_ALG_TYPE_BLKCIPHER)
geniv = alg->cra_blkcipher.geniv;
else
geniv = alg->cra_ablkcipher.geniv;
if (!geniv)
geniv = crypto_default_geniv(alg);
tmpl = crypto_lookup_template(geniv);
err = -ENOENT;
if (!tmpl)
goto kill_larval;
inst = tmpl->alloc(tb);
err = PTR_ERR(inst);
if (IS_ERR(inst))
goto put_tmpl;
if ((err = crypto_register_instance(tmpl, inst))) {
tmpl->free(inst);
goto put_tmpl;
}
/* Redo the lookup to use the instance we just registered. */
err = -EAGAIN;
put_tmpl:
crypto_tmpl_put(tmpl);
kill_larval:
crypto_larval_kill(larval);
drop_larval:
crypto_mod_put(larval);
out:
crypto_mod_put(alg);
return err;
}
static struct crypto_alg *crypto_lookup_skcipher(const char *name, u32 type,
u32 mask)
{
struct crypto_alg *alg;
alg = crypto_alg_mod_lookup(name, type, mask);
if (IS_ERR(alg))
return alg;
if ((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
CRYPTO_ALG_TYPE_GIVCIPHER)
return alg;
if (!((alg->cra_flags & CRYPTO_ALG_TYPE_MASK) ==
CRYPTO_ALG_TYPE_BLKCIPHER ? alg->cra_blkcipher.ivsize :
alg->cra_ablkcipher.ivsize))
return alg;
return ERR_PTR(crypto_givcipher_default(alg, type, mask));
}
int crypto_grab_skcipher(struct crypto_skcipher_spawn *spawn, const char *name,
u32 type, u32 mask)
{
struct crypto_alg *alg;
int err;
type = crypto_skcipher_type(type);
mask = crypto_skcipher_mask(mask);
alg = crypto_lookup_skcipher(name, type, mask);
if (IS_ERR(alg))
return PTR_ERR(alg);
err = crypto_init_spawn(&spawn->base, alg, spawn->base.inst, mask);
crypto_mod_put(alg);
return err;
}
EXPORT_SYMBOL_GPL(crypto_grab_skcipher);
struct crypto_ablkcipher *crypto_alloc_ablkcipher(const char *alg_name,
u32 type, u32 mask)
{
struct crypto_tfm *tfm;
int err;
type = crypto_skcipher_type(type);
mask = crypto_skcipher_mask(mask);
for (;;) {
struct crypto_alg *alg;
alg = crypto_lookup_skcipher(alg_name, type, mask);
if (IS_ERR(alg)) {
err = PTR_ERR(alg);
goto err;
}
tfm = __crypto_alloc_tfm(alg, type, mask);
if (!IS_ERR(tfm))
return __crypto_ablkcipher_cast(tfm);
crypto_mod_put(alg);
err = PTR_ERR(tfm);
err:
if (err != -EAGAIN)
break;
if (signal_pending(current)) {
err = -EINTR;
break;
}
}
return ERR_PTR(err);
}
EXPORT_SYMBOL_GPL(crypto_alloc_ablkcipher);
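The `for (;;)` loop in crypto_alloc_ablkcipher above exists because a lookup can come back with -EAGAIN after crypto_givcipher_default has registered a fresh default-geniv instance, asking the caller to redo the lookup against that new instance; a pending signal aborts the retry with -EINTR. A schematic of that retry pattern (illustrative only, not a kernel API; the names are made up):

```python
def alloc_with_retry(lookup, interrupted=lambda: False):
    """Retry a lookup that may report 'try again' after a side effect
    (here: registering a new algorithm instance to be found next time)."""
    while True:
        result, err = lookup()
        if err is None:
            return result
        if err != "EAGAIN":       # hard failure: give up immediately
            raise RuntimeError(err)
        if interrupted():         # mirrors the signal_pending() check
            raise RuntimeError("EINTR")

attempts = []
def fake_lookup():
    """Succeeds on the third try, like a lookup racing instance creation."""
    attempts.append(1)
    return ("tfm", None) if len(attempts) > 2 else (None, "EAGAIN")
```

The interrupt check matters: without it, a lookup that keeps losing the race would spin forever with no way for the caller to bail out.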
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Asynchronous block chaining cipher type");
@@ -472,7 +472,7 @@ int crypto_check_attr_type(struct rtattr **tb, u32 type)
}
EXPORT_SYMBOL_GPL(crypto_check_attr_type);

-struct crypto_alg *crypto_attr_alg(struct rtattr *rta, u32 type, u32 mask)
+const char *crypto_attr_alg_name(struct rtattr *rta)
{
	struct crypto_attr_alg *alga;

@@ -486,7 +486,21 @@ struct crypto_alg *crypto_attr_alg(struct rtattr *rta, u32 type, u32 mask)
	alga = RTA_DATA(rta);
	alga->name[CRYPTO_MAX_ALG_NAME - 1] = 0;

-	return crypto_alg_mod_lookup(alga->name, type, mask);
+	return alga->name;
+}
+EXPORT_SYMBOL_GPL(crypto_attr_alg_name);
+
+struct crypto_alg *crypto_attr_alg(struct rtattr *rta, u32 type, u32 mask)
+{
+	const char *name;
+	int err;
+
+	name = crypto_attr_alg_name(rta);
+	err = PTR_ERR(name);
+	if (IS_ERR(name))
+		return ERR_PTR(err);
+
+	return crypto_alg_mod_lookup(name, type, mask);
}
EXPORT_SYMBOL_GPL(crypto_attr_alg);

@@ -605,6 +619,53 @@ int crypto_tfm_in_queue(struct crypto_queue *queue, struct crypto_tfm *tfm)
}
EXPORT_SYMBOL_GPL(crypto_tfm_in_queue);
static inline void crypto_inc_byte(u8 *a, unsigned int size)
{
u8 *b = (a + size);
u8 c;
for (; size; size--) {
c = *--b + 1;
*b = c;
if (c)
break;
}
}
void crypto_inc(u8 *a, unsigned int size)
{
__be32 *b = (__be32 *)(a + size);
u32 c;
for (; size >= 4; size -= 4) {
c = be32_to_cpu(*--b) + 1;
*b = cpu_to_be32(c);
if (c)
return;
}
crypto_inc_byte(a, size);
}
EXPORT_SYMBOL_GPL(crypto_inc);
static inline void crypto_xor_byte(u8 *a, const u8 *b, unsigned int size)
{
for (; size; size--)
*a++ ^= *b++;
}
void crypto_xor(u8 *dst, const u8 *src, unsigned int size)
{
u32 *a = (u32 *)dst;
u32 *b = (u32 *)src;
for (; size >= 4; size -= 4)
*a++ ^= *b++;
crypto_xor_byte((u8 *)a, (u8 *)b, size);
}
EXPORT_SYMBOL_GPL(crypto_xor);
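The two helpers above give the crypto core a big-endian counter increment (crypto_inc) and a buffer XOR (crypto_xor); the 32-bit-word loops are just a fast path over the byte-at-a-time fallbacks. A byte-level Python equivalent of their semantics:

```python
def crypto_inc(buf: bytearray) -> None:
    """Increment buf as one big-endian integer, in place, wrapping on overflow."""
    for i in reversed(range(len(buf))):
        buf[i] = (buf[i] + 1) & 0xFF
        if buf[i]:          # no carry left to propagate, stop early
            return

def crypto_xor(dst: bytearray, src: bytes) -> None:
    """XOR src into dst, byte by byte."""
    for i in range(min(len(dst), len(src))):
        dst[i] ^= src[i]

ctr = bytearray(b"\x00\x00\xff")
crypto_inc(ctr)             # the carry propagates into the middle byte
```

These are exactly the primitives CTR-style modes need: increment the counter block, then XOR the encrypted counter into the data.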
static int __init crypto_algapi_init(void)
{
	crypto_init_proc();
......
@@ -137,7 +137,7 @@ static struct crypto_alg *crypto_larval_alloc(const char *name, u32 type,
	return alg;
}

-static void crypto_larval_kill(struct crypto_alg *alg)
+void crypto_larval_kill(struct crypto_alg *alg)
{
	struct crypto_larval *larval = (void *)alg;

@@ -147,6 +147,7 @@ static void crypto_larval_kill(struct crypto_alg *alg)
	complete_all(&larval->completion);
	crypto_alg_put(alg);
}
+EXPORT_SYMBOL_GPL(crypto_larval_kill);

static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg)
{

@@ -176,11 +177,9 @@ static struct crypto_alg *crypto_alg_lookup(const char *name, u32 type,
	return alg;
}

-struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
+struct crypto_alg *crypto_larval_lookup(const char *name, u32 type, u32 mask)
{
	struct crypto_alg *alg;
-	struct crypto_alg *larval;
-	int ok;

	if (!name)
		return ERR_PTR(-ENOENT);

@@ -193,7 +192,17 @@ struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
	if (alg)
		return crypto_is_larval(alg) ? crypto_larval_wait(alg) : alg;

-	larval = crypto_larval_alloc(name, type, mask);
+	return crypto_larval_alloc(name, type, mask);
+}
+EXPORT_SYMBOL_GPL(crypto_larval_lookup);
+
+struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
+{
+	struct crypto_alg *alg;
+	struct crypto_alg *larval;
+	int ok;
+
+	larval = crypto_larval_lookup(name, type, mask);

	if (IS_ERR(larval) || !crypto_is_larval(larval))
		return larval;
......
@@ -369,7 +369,7 @@ static const u8 Tr[4][8] = {
};

/* forward octave */
-static inline void W(u32 *key, unsigned int i) {
+static void W(u32 *key, unsigned int i) {
	u32 I;
	key[6] ^= F1(key[7], Tr[i % 4][0], Tm[i][0]);
	key[5] ^= F2(key[6], Tr[i % 4][1], Tm[i][1]);

@@ -428,7 +428,7 @@ static int cast6_setkey(struct crypto_tfm *tfm, const u8 *in_key,
}

/*forward quad round*/
-static inline void Q (u32 * block, u8 * Kr, u32 * Km) {
+static void Q (u32 * block, u8 * Kr, u32 * Km) {
	u32 I;
	block[2] ^= F1(block[3], Kr[0], Km[0]);
	block[1] ^= F2(block[2], Kr[1], Km[1]);

@@ -437,7 +437,7 @@ static inline void Q (u32 * block, u8 * Kr, u32 * Km) {
}

/*reverse quad round*/
-static inline void QBAR (u32 * block, u8 * Kr, u32 * Km) {
+static void QBAR (u32 * block, u8 * Kr, u32 * Km) {
	u32 I;
	block[3] ^= F1(block[0], Kr[3], Km[3]);
	block[0] ^= F3(block[1], Kr[2], Km[2]);
......
@@ -228,7 +228,7 @@ static struct crypto_instance *cryptd_alloc_blkcipher(
	struct crypto_alg *alg;

	alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_BLKCIPHER,
-				  CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
+				  CRYPTO_ALG_TYPE_MASK);
	if (IS_ERR(alg))
		return ERR_PTR(PTR_ERR(alg));

@@ -236,13 +236,15 @@ static struct crypto_instance *cryptd_alloc_blkcipher(
	if (IS_ERR(inst))
		goto out_put_alg;

-	inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER | CRYPTO_ALG_ASYNC;
+	inst->alg.cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_ASYNC;
	inst->alg.cra_type = &crypto_ablkcipher_type;

	inst->alg.cra_ablkcipher.ivsize = alg->cra_blkcipher.ivsize;
	inst->alg.cra_ablkcipher.min_keysize = alg->cra_blkcipher.min_keysize;
	inst->alg.cra_ablkcipher.max_keysize = alg->cra_blkcipher.max_keysize;
+	inst->alg.cra_ablkcipher.geniv = alg->cra_blkcipher.geniv;

	inst->alg.cra_ctxsize = sizeof(struct cryptd_blkcipher_ctx);

	inst->alg.cra_init = cryptd_blkcipher_init_tfm;
......
@@ -12,6 +12,7 @@
 *
 */

+#include <crypto/scatterwalk.h>
#include <linux/mm.h>
#include <linux/errno.h>
#include <linux/hardirq.h>

@@ -20,9 +21,6 @@
#include <linux/module.h>
#include <linux/scatterlist.h>

-#include "internal.h"
-#include "scatterwalk.h"

static int init(struct hash_desc *desc)
{
	struct crypto_tfm *tfm = crypto_hash_tfm(desc->tfm);
......
obj-$(CONFIG_CRYPTO_DEV_PADLOCK_AES) += padlock-aes.o
obj-$(CONFIG_CRYPTO_DEV_PADLOCK_SHA) += padlock-sha.o
obj-$(CONFIG_CRYPTO_DEV_GEODE) += geode-aes.o
obj-$(CONFIG_CRYPTO_DEV_HIFN_795X) += hifn_795x.o