- 10 Jan, 2008 40 commits
-
Herbert Xu authored
This patch changes setkey to use RTA_OK to check the validity of the setkey request. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
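For illustration, a minimal sketch of this kind of validation, assuming the key material is prefixed by a struct rtattr header; the function and variable names are hypothetical, not the actual authenc code:

    #include <linux/errno.h>
    #include <linux/rtnetlink.h>
    #include <linux/types.h>

    /* Hypothetical setkey fragment: refuse the request unless the rtattr
     * header at the start of the key is present and internally consistent. */
    static int example_setkey_check(const u8 *key, unsigned int keylen)
    {
            struct rtattr *rta = (void *)key;

            if (!RTA_OK(rta, keylen))
                    return -EINVAL;

            /* safe to look at RTA_DATA(rta) / RTA_PAYLOAD(rta) from here on */
            return 0;
    }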
-
Herbert Xu authored
The ivsize should be fetched from ablkcipher, not blkcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Sebastian Siewior authored
Using crypto_blkcipher_decrypt here is wrong because it does not take the IV into account. Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Sebastian Siewior authored
Using crypto_blkcipher_decrypt here is wrong because it does not take the IV into account. Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tan Swee Heng authored
This patch adds a simple speed test for salsa20. Usage: modprobe tcrypt mode=206 Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Zoltan Sogor authored
Add LZO compression algorithm support. Signed-off-by: Zoltan Sogor <weth@inf.u-szeged.hu> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
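As a hedged usage sketch (not part of the patch itself), the new algorithm would be reachable through the generic compression interface under the name "lzo"; the caller below is hypothetical:

    #include <linux/crypto.h>
    #include <linux/err.h>
    #include <linux/types.h>

    /* Hypothetical caller compressing a buffer with the new "lzo" algorithm. */
    static int lzo_compress_example(const u8 *src, unsigned int slen,
                                    u8 *dst, unsigned int *dlen)
    {
            struct crypto_comp *tfm;
            int err;

            tfm = crypto_alloc_comp("lzo", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            err = crypto_comp_compress(tfm, src, slen, dst, dlen);
            crypto_free_comp(tfm);
            return err;
    }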
-
Zoltan Sogor authored
Add a common compression tester function. Modify the deflate test case to use the common compressor test function. Signed-off-by: Zoltan Sogor <weth@inf.u-szeged.hu> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tan Swee Heng authored
This is a large test vector for Salsa20 that crosses the 4096-byte page boundary. Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tan Swee Heng authored
This patch fixes the multi-page processing bug that affects large test vectors (the same bug that previously affected ctr.c). There is an optimization for the case walk.nbytes == nbytes. Also, we now use crypto_xor() instead of ad-hoc XOR routines. Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The abreq structure is currently allocated on the stack. This is broken if the underlying algorithm is asynchronous. This patch changes it so that it is taken from the private context instead, which has been enlarged accordingly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Unfortunately the generic chaining hasn't been ported to all architectures yet, and notably not s390. So this patch restores the chaining that we've been using previously, which does work everywhere. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The scatterwalk infrastructure is used by algorithms so it needs to move out of crypto for future users that may live in drivers/crypto or asm/*/crypto. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch changes gcm/authenc to return EBADMSG instead of EINVAL for ICV mismatches. This convention has already been adopted by IPsec. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
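A minimal sketch of the convention; the variable names are illustrative, not the actual gcm/authenc code:

    /* Hypothetical tail of an AEAD decrypt: a failed ICV comparison is an
     * authentication failure, so report -EBADMSG rather than -EINVAL. */
    if (memcmp(computed_icv, received_icv, authsize))
            return -EBADMSG;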
-
Herbert Xu authored
The crypto_aead convention for ICVs is to include it directly in the output. If we decided to change this in future then we would make the ICV (if the algorithm has an explicit one) available in the request itself. For now no algorithm needs this so this patch changes gcm to conform to this convention. It also adjusts the tcrypt aead tests to take this into account. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Currently the gcm(aes) tests have to be taken together with all other ciphers. This patch makes it available by itself at number 35. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The previous code incorrectly included the hash in the verification which also meant that we'd crash and burn when it comes to actually verifying the hash since we'd go past the end of the SG list. This patch fixes that by subtracting authsize from cryptlen at the start. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Having enckeylen as a template parameter makes it a pain for hardware devices that implement ciphers with many key sizes since each one would have to be registered separately. Since the authenc algorithm is mainly used for legacy purposes where its key is going to be constructed out of two separate keys, we can in fact embed this value into the key itself. This patch does this by prepending an rtnetlink header to the key that contains the encryption key length. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
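For illustration only, a sketch of the key layout this describes; the attribute type value and the parameter struct name are placeholders, since the exact definitions live in the patch itself:

    #include <asm/byteorder.h>
    #include <linux/rtnetlink.h>
    #include <linux/string.h>
    #include <linux/types.h>

    /* Placeholder for the parameter carried in the rtattr payload. */
    struct example_authenc_key_param {
            __be32 enckeylen;
    };

    /* Hypothetical helper building such a key:
     *   [ rtattr header | enckeylen | authentication key | encryption key ]
     */
    static unsigned int example_build_key(u8 *buf,
                                          const u8 *authkey, unsigned int authkeylen,
                                          const u8 *enckey, unsigned int enckeylen)
    {
            struct rtattr *rta = (struct rtattr *)buf;
            struct example_authenc_key_param *param = RTA_DATA(rta);
            unsigned int off = RTA_SPACE(sizeof(*param));

            rta->rta_type = 1;      /* "key parameters" attribute; value is illustrative */
            rta->rta_len = RTA_LENGTH(sizeof(*param));
            param->enckeylen = cpu_to_be32(enckeylen);

            memcpy(buf + off, authkey, authkeylen);
            memcpy(buf + off + authkeylen, enckey, enckeylen);

            return off + authkeylen + enckeylen;
    }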
-
Herbert Xu authored
As it is, authsize is an algorithm parameter which cannot be changed at run-time. This is inconvenient because hardware that implements such algorithms would have to register each authsize that they support separately. Since authsize is a property common to all AEAD algorithms, we can add a function setauthsize that sets it at run-time, just like setkey. This patch does exactly that and also changes authenc so that authsize is no longer a parameter of its template. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
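A hedged usage sketch of the run-time interface this introduces; the algorithm name and ICV length are chosen purely as an example:

    #include <linux/crypto.h>
    #include <linux/err.h>

    /* Hypothetical caller: pick the ICV length at run time instead of
     * encoding it in the template name. */
    static struct crypto_aead *example_alloc_aead(void)
    {
            struct crypto_aead *tfm;
            int err;

            tfm = crypto_alloc_aead("authenc(hmac(sha1),cbc(aes))", 0, 0);
            if (IS_ERR(tfm))
                    return tfm;

            err = crypto_aead_setauthsize(tfm, 12);   /* e.g. 96-bit ICV as used by IPsec */
            if (err) {
                    crypto_free_aead(tfm);
                    return ERR_PTR(err);
            }

            return tfm;
    }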
-
Herbert Xu authored
Since alignment masks are always one less than a power of two, we can use binary or to find their maximum. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
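A one-line illustration, where mask_a and mask_b are hypothetical alignment masks:

    /* Alignment masks are 2^n - 1 (0x3, 0x7, 0xf, ...), so OR-ing them
     * yields the larger mask without a comparison, e.g. 0x3 | 0xf == 0xf. */
    unsigned int combined = mask_a | mask_b;   /* == max(mask_a, mask_b) */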
-
Kamalesh Babulal authored
drivers/char/hw_random/pasemi-rng.c: In function `pasemi_rng_data_present':
drivers/char/hw_random/pasemi-rng.c:53: error: `wait' undeclared (first use in this function)
drivers/char/hw_random/pasemi-rng.c:53: error: (Each undeclared identifier is reported only once
drivers/char/hw_random/pasemi-rng.c:53: error: for each function it appears in.)
drivers/char/hw_random/pasemi-rng.c: At top level:
drivers/char/hw_random/pasemi-rng.c:93: warning: initialization from incompatible pointer type
Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Sebastian Siewior authored
Some CPUs support only 128-bit keys in HW. This patch adds SW fallback support for the other key sizes which may be required. The generic algorithm (and the block mode) must be available in case of a fallback. Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
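A hedged sketch of how such a fallback is typically requested from the crypto API; the helper name is illustrative and the real driver code differs:

    #include <linux/crypto.h>

    /* Hypothetical fallback setup: ask for a synchronous software
     * implementation and exclude drivers that themselves need a fallback
     * (such as the hardware driver doing the asking). */
    static struct crypto_cipher *example_alloc_fallback(const char *name)
    {
            return crypto_alloc_cipher(name, 0,
                                       CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
    }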
-
Denis Cheng authored
The utilities implemented in lib/hexdump.c are more convenient; please use them instead. Signed-off-by: Denis Cheng <crquan@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
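For example, a hand-rolled hex printer can be replaced by a single call; the prefix string, buffer and length below are hypothetical:

    #include <linux/kernel.h>

    /* 16 bytes per line, offsets on the left, no ASCII column. */
    print_hex_dump(KERN_DEBUG, "tcrypt: ", DUMP_PREFIX_OFFSET,
                   16, 1, buf, len, false);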
-
Sebastian Siewior authored
There is no reason to keep the IV in the private structure. Instead keep just a pointer, to make the patch smaller :) This also removes a few memcpy()s. Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Jan Glauber authored
Add test vectors to tcrypt for AES in CBC mode for key sizes 192 and 256. The test vectors are copied from NIST SP800-38A. Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tan Swee Heng authored
This patch adds a large AES CTR mode test vector. The test vector is 4100 bytes in size. It was generated using a C++ program that called Crypto++. Note that this patch increases considerably the size of "struct cipher_testvec" and hence the size of tcrypt.ko. Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tan Swee Heng authored
Currently the number of entries in a cipher test vector template is limited by TVMEMSIZE/sizeof(struct cipher_testvec). This patch circumvents the problem by pointing cipher_tv to each entry in the template, rather than the template itself. Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
When the data spans across a page boundary, CTR may incorrectly process a partial block in the middle because the blkcipher walking code may supply partial blocks in the middle as long as the total length of the supplied data is more than a block. CTR is supposed to return any unused partial block in that case to the walker. This patch fixes this by doing exactly that, returning partial blocks to the walker unless we received less than a block-worth of data to start with. This also allows us to optimise the bulk of the processing since we no longer have to worry about partial blocks until the very end. Thanks to Tan Swee Heng for fixes and actually testing this :) Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
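A hedged sketch of the resulting control flow (simplified, not the actual ctr.c code; the per-block keystream processing is elided into comments):

    #include <crypto/algapi.h>

    static int example_ctr_crypt(struct blkcipher_desc *desc,
                                 struct scatterlist *dst, struct scatterlist *src,
                                 unsigned int nbytes, unsigned int bsize)
    {
            struct blkcipher_walk walk;
            int err;

            blkcipher_walk_init(&walk, dst, src, nbytes);
            err = blkcipher_walk_virt_block(desc, &walk, bsize);

            while (walk.nbytes >= bsize) {
                    /* process whole blocks only ... */

                    /* hand any mid-stream leftover back to the walker so it is
                     * re-presented together with the next chunk of data */
                    err = blkcipher_walk_done(desc, &walk, walk.nbytes % bsize);
            }

            if (walk.nbytes) {
                    /* genuinely final partial block: process it here ... */
                    err = blkcipher_walk_done(desc, &walk, 0);
            }

            return err;
    }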
-
Sebastian Siewior authored
The 32-bit and 64-bit glue code now uses the same piece of code; this patch unifies them. Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Mikko Herranen authored
Add GCM/GMAC support to cryptoapi. GCM (Galois/Counter Mode) is an AEAD mode of operation for any block cipher with a block size of 16 bytes. The typical example is AES-GCM. Signed-off-by: Mikko Herranen <mh1@iki.fi> Reviewed-by: Mika Kukkonen <mika.kukkonen@nsn.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Mikko Herranen authored
Add AEAD support to tcrypt, needed by GCM. Signed-off-by: Mikko Herranen <mh1@iki.fi> Reviewed-by: Mika Kukkonen <mika.kukkonen@nsn.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Denys Vlasenko authored
Analogously to the camellia7 patch, move the "absorb kw2 to other subkeys" and "absorb kw4 to other subkeys" code parts into camellia_setup_tail(). This further reduces source and object code size at the cost of two branches in the key setup code. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Denys Vlasenko authored
Move "key XOR is end of F-function" code part into camellia_setup_tail(), it is sufficiently similar between camellia_setup128 and camellia_setup256. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Denys Vlasenko authored
Unify the encrypt/decrypt routines for different key lengths. This reduces module size by ~25%, with a tiny (less than 1%) speed impact. Also collapse encrypt/decrypt into a more readable (visually shorter) form using macros. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Denys Vlasenko authored
Remove unused macro params. Use (u8)(expr) instead of (expr) & 0xff, which helps gcc see that it can use simpler instructions. Move the CAMELLIA_FLS macro closer to the encrypt/decrypt routines. Signed-off-by: Denys Vlasenko <vda.linux@googlemail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
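A tiny illustration of the cast change; the helper and its input are hypothetical:

    #include <linux/types.h>

    /* Hypothetical demonstration of the two equivalent forms. */
    static u8 low_byte_example(u32 x)
    {
            u8 old_way = (x >> 8) & 0xff;   /* explicit mask */
            u8 new_way = (u8)(x >> 8);      /* same value; lets gcc emit a plain byte move */

            return old_way == new_way;      /* always true */
    }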
-
Herbert Xu authored
This patch replaces the custom inc/xor in CTR with the generic functions. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch replaces the custom xor in CBC with the generic crypto_xor. It changes the operations for in-place encryption slightly to avoid calling crypto_xor with tmpbuf since it is not necessarily aligned. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
All common block ciphers have a block size that's a power of 2. In fact, all of our block ciphers obey this rule. If we require this then CBC can be optimised to avoid an expensive divide on in-place decryption. I've also changed the saving of the first IV in the in-place decryption case to the last IV because that lets us use walk->iv (which is already aligned) for the xor operation where alignment is required. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
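The optimisation this enables, in one line (nbytes and bsize are hypothetical values):

    /* With bsize a power of two, the remainder needs no division: */
    unsigned int tail = nbytes & (bsize - 1);   /* == nbytes % bsize */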
-
Herbert Xu authored
This patch replaces the custom xor in CBC with the generic crypto_xor. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
With the addition of more stream ciphers we need to curb the proliferation of ad-hoc xor functions. This patch creates a generic pair of functions, crypto_inc and crypto_xor, which do big-endian increment and exclusive-or respectively. For optimum performance, they both use u32 operations, so the arguments must be aligned to u32 even though they are of type u8 *. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
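A hedged usage sketch of the two helpers; the buffer names are hypothetical and both buffers are assumed to be u32-aligned, as noted above:

    #include <crypto/algapi.h>

    /* Treat ctrblk as a big-endian counter and bump it by one, then XOR
     * the generated keystream into the data block. */
    crypto_inc(ctrblk, bsize);
    crypto_xor(dst, keystream, bsize);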
-
Patrick McHardy authored
Signed-off-by: Patrick McHardy <kaber@trash.net> Acked-by: Evgeniy Polyakov <johnpol@2ka.mipt.ru> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-