Commit 5bcbe22c authored by Linus Torvalds

Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6

Pull crypto update from Herbert Xu:
 "API:
   - Try to catch hash output overrun in testmgr
   - Introduce walksize attribute for batched walking
   - Make crypto_xor() and crypto_inc() alignment agnostic

  Algorithms:
   - Add time-invariant AES algorithm
   - Add standalone CBCMAC algorithm

  Drivers:
   - Add NEON accelerated chacha20 on ARM/ARM64
   - Expose AES-CTR as synchronous skcipher on ARM64
   - Add scalar AES implementation on ARM64
   - Improve scalar AES implementation on ARM
   - Improve NEON AES implementation on ARM/ARM64
   - Merge CRC32 and PMULL instruction based drivers on ARM64
   - Add NEON accelerated CBCMAC/CMAC/XCBC AES on ARM64
   - Add IPsec AUTHENC implementation in atmel
   - Add Support for Octeon-tx CPT Engine
   - Add Broadcom SPU driver
   - Add MediaTek driver"
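The crypto_xor()/crypto_inc() item in the API list refers to the helpers declared in <crypto/algapi.h>. The following is a minimal sketch of how a mode implementation calls them once they are alignment agnostic; the function and buffer names are illustrative only, not taken from this merge:

#include <crypto/algapi.h>

/* illustrative only: XOR a keystream into the output and bump a CTR IV */
static void xor_keystream_example(u8 *dst, const u8 *keystream,
                                  unsigned int len, u8 ctr[16])
{
        /* dst and keystream may now be arbitrarily aligned */
        crypto_xor(dst, keystream, len);

        /* increment a big-endian counter of any width, here 16 bytes */
        crypto_inc(ctr, 16);
}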

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (142 commits)
  crypto: xts - Add ECB dependency
  crypto: cavium - switch to pci_alloc_irq_vectors
  crypto: cavium - switch to pci_alloc_irq_vectors
  crypto: cavium - remove dead MSI-X related define
  crypto: brcm - Avoid double free in ahash_finup()
  crypto: cavium - fix Kconfig dependencies
  crypto: cavium - cpt_bind_vq_to_grp could return an error code
  crypto: doc - fix typo
  hwrng: omap - update Kconfig help description
  crypto: ccm - drop unnecessary minimum 32-bit alignment
  crypto: ccm - honour alignmask of subordinate MAC cipher
  crypto: caam - fix state buffer DMA (un)mapping
  crypto: caam - abstract ahash request double buffering
  crypto: caam - fix error path for ctx_dma mapping failure
  crypto: caam - fix DMA API leaks for multiple setkey() calls
  crypto: caam - don't dma_map key for hash algorithms
  crypto: caam - use dma_map_sg() return code
  crypto: caam - replace sg_count() with sg_nents_for_len()
  crypto: caam - check sg_count() return value
  crypto: caam - fix HW S/G in ablkcipher_giv_edesc_alloc()
  ..
parents 1db934a5 12cb3a1c
...@@ -14,7 +14,7 @@ Asynchronous Message Digest API
 :doc: Asynchronous Message Digest API
 .. kernel-doc:: include/crypto/hash.h
-   :functions: crypto_alloc_ahash crypto_free_ahash crypto_ahash_init crypto_ahash_digestsize crypto_ahash_reqtfm crypto_ahash_reqsize crypto_ahash_setkey crypto_ahash_finup crypto_ahash_final crypto_ahash_digest crypto_ahash_export crypto_ahash_import
+   :functions: crypto_alloc_ahash crypto_free_ahash crypto_ahash_init crypto_ahash_digestsize crypto_ahash_reqtfm crypto_ahash_reqsize crypto_ahash_statesize crypto_ahash_setkey crypto_ahash_finup crypto_ahash_final crypto_ahash_digest crypto_ahash_export crypto_ahash_import
 Asynchronous Hash Request Handle
 --------------------------------
......
...@@ -59,4 +59,4 @@ Synchronous Block Cipher API - Deprecated
 :doc: Synchronous Block Cipher API
 .. kernel-doc:: include/linux/crypto.h
-   :functions: crypto_alloc_blkcipher rypto_free_blkcipher crypto_has_blkcipher crypto_blkcipher_name crypto_blkcipher_ivsize crypto_blkcipher_blocksize crypto_blkcipher_setkey crypto_blkcipher_encrypt crypto_blkcipher_encrypt_iv crypto_blkcipher_decrypt crypto_blkcipher_decrypt_iv crypto_blkcipher_set_iv crypto_blkcipher_get_iv
+   :functions: crypto_alloc_blkcipher crypto_free_blkcipher crypto_has_blkcipher crypto_blkcipher_name crypto_blkcipher_ivsize crypto_blkcipher_blocksize crypto_blkcipher_setkey crypto_blkcipher_encrypt crypto_blkcipher_encrypt_iv crypto_blkcipher_decrypt crypto_blkcipher_decrypt_iv crypto_blkcipher_set_iv crypto_blkcipher_get_iv
The Broadcom Secure Processing Unit (SPU) hardware supports symmetric
cryptographic offload for Broadcom SoCs. A SoC may have multiple SPU hardware
blocks.
Required properties:
- compatible: Should be one of the following:
brcm,spum-crypto - for devices with SPU-M hardware
brcm,spu2-crypto - for devices with SPU2 hardware
brcm,spu2-v2-crypto - for devices with enhanced SPU2 hardware features like SHA3
and Rabin Fingerprint support
brcm,spum-nsp-crypto - for the Northstar Plus variant of the SPU-M hardware
- reg: Should contain SPU registers location and length.
- mboxes: The mailbox channel to be used to communicate with the SPU.
Mailbox channels correspond to DMA rings on the device.
Example:
crypto@612d0000 {
compatible = "brcm,spum-crypto";
reg = <0 0x612d0000 0 0x900>;
mboxes = <&pdc0 0>;
};
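A platform driver binds to the compatible strings listed above through a standard of_device_id table. The following is a minimal sketch using hypothetical spu_example_* names, not the actual Broadcom SPU driver code:

#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>

/* hypothetical sketch: match table for the compatibles listed above */
static const struct of_device_id spu_example_of_match[] = {
	{ .compatible = "brcm,spum-crypto" },
	{ .compatible = "brcm,spu2-crypto" },
	{ .compatible = "brcm,spu2-v2-crypto" },
	{ .compatible = "brcm,spum-nsp-crypto" },
	{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, spu_example_of_match);

static int spu_example_probe(struct platform_device *pdev)
{
	/* the reg and mboxes properties would be parsed here */
	return 0;
}

static struct platform_driver spu_example_driver = {
	.probe	= spu_example_probe,
	.driver	= {
		.name		= "spu-example",
		.of_match_table	= spu_example_of_match,
	},
};
module_platform_driver(spu_example_driver);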
MediaTek cryptographic accelerators
Required properties:
- compatible: Should be "mediatek,eip97-crypto"
- reg: Address and length of the register set for the device
- interrupts: Should contain the five crypto engines interrupts in numeric
order. These are global system and four descriptor rings.
- clocks: the clock used by the core
- clock-names: the names of the clock listed in the clocks property. These are
"ethif", "cryp"
- power-domains: Must contain a reference to the PM domain.
Example:
crypto: crypto@1b240000 {
compatible = "mediatek,eip97-crypto";
reg = <0 0x1b240000 0 0x20000>;
interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_LOW>,
<GIC_SPI 83 IRQ_TYPE_LEVEL_LOW>,
<GIC_SPI 84 IRQ_TYPE_LEVEL_LOW>,
<GIC_SPI 91 IRQ_TYPE_LEVEL_LOW>,
<GIC_SPI 97 IRQ_TYPE_LEVEL_LOW>;
clocks = <&topckgen CLK_TOP_ETHIF_SEL>,
<&ethsys CLK_ETHSYS_CRYPTO>;
clock-names = "ethif","cryp";
power-domains = <&scpsys MT2701_POWER_DOMAIN_ETH>;
};
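On the driver side, the two named clocks and the five interrupts described by this binding would typically be looked up in probe roughly as below. This is a hedged sketch with hypothetical names, not the MediaTek EIP97 driver itself:

#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>

/* hypothetical sketch: fetch the resources described by the binding */
static int eip97_example_probe(struct platform_device *pdev)
{
	struct clk *clk_ethif, *clk_cryp;
	int i, irq[5];

	clk_ethif = devm_clk_get(&pdev->dev, "ethif");
	if (IS_ERR(clk_ethif))
		return PTR_ERR(clk_ethif);

	clk_cryp = devm_clk_get(&pdev->dev, "cryp");
	if (IS_ERR(clk_cryp))
		return PTR_ERR(clk_cryp);

	/* one global interrupt plus four descriptor-ring interrupts */
	for (i = 0; i < 5; i++) {
		irq[i] = platform_get_irq(pdev, i);
		if (irq[i] < 0)
			return irq[i];
	}

	return 0;
}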
...@@ -3031,6 +3031,13 @@ W: http://www.cavium.com ...@@ -3031,6 +3031,13 @@ W: http://www.cavium.com
S: Supported S: Supported
F: drivers/net/ethernet/cavium/liquidio/ F: drivers/net/ethernet/cavium/liquidio/
CAVIUM OCTEON-TX CRYPTO DRIVER
M: George Cherian <george.cherian@cavium.com>
L: linux-crypto@vger.kernel.org
W: http://www.cavium.com
S: Supported
F: drivers/crypto/cavium/cpt/
CC2520 IEEE-802.15.4 RADIO DRIVER CC2520 IEEE-802.15.4 RADIO DRIVER
M: Varka Bhadram <varkabhadram@gmail.com> M: Varka Bhadram <varkabhadram@gmail.com>
L: linux-wpan@vger.kernel.org L: linux-wpan@vger.kernel.org
......
...@@ -62,35 +62,18 @@ config CRYPTO_SHA512_ARM ...@@ -62,35 +62,18 @@ config CRYPTO_SHA512_ARM
using optimized ARM assembler and NEON, when available. using optimized ARM assembler and NEON, when available.
config CRYPTO_AES_ARM config CRYPTO_AES_ARM
tristate "AES cipher algorithms (ARM-asm)" tristate "Scalar AES cipher for ARM"
depends on ARM
select CRYPTO_ALGAPI select CRYPTO_ALGAPI
select CRYPTO_AES select CRYPTO_AES
help help
Use optimized AES assembler routines for ARM platforms. Use optimized AES assembler routines for ARM platforms.
AES cipher algorithms (FIPS-197). AES uses the Rijndael
algorithm.
Rijndael appears to be consistently a very good performer in
both hardware and software across a wide range of computing
environments regardless of its use in feedback or non-feedback
modes. Its key setup time is excellent, and its key agility is
good. Rijndael's very low memory requirements make it very well
suited for restricted-space environments, in which it also
demonstrates excellent performance. Rijndael's operations are
among the easiest to defend against power and timing attacks.
The AES specifies three key sizes: 128, 192 and 256 bits
See <http://csrc.nist.gov/encryption/aes/> for more information.
config CRYPTO_AES_ARM_BS config CRYPTO_AES_ARM_BS
tristate "Bit sliced AES using NEON instructions" tristate "Bit sliced AES using NEON instructions"
depends on KERNEL_MODE_NEON depends on KERNEL_MODE_NEON
select CRYPTO_AES_ARM
select CRYPTO_BLKCIPHER select CRYPTO_BLKCIPHER
select CRYPTO_SIMD select CRYPTO_SIMD
select CRYPTO_AES_ARM
help help
Use a faster and more secure NEON based implementation of AES in CBC, Use a faster and more secure NEON based implementation of AES in CBC,
CTR and XTS modes CTR and XTS modes
...@@ -130,4 +113,10 @@ config CRYPTO_CRC32_ARM_CE ...@@ -130,4 +113,10 @@ config CRYPTO_CRC32_ARM_CE
depends on KERNEL_MODE_NEON && CRC32 depends on KERNEL_MODE_NEON && CRC32
select CRYPTO_HASH select CRYPTO_HASH
config CRYPTO_CHACHA20_NEON
tristate "NEON accelerated ChaCha20 symmetric cipher"
depends on KERNEL_MODE_NEON
select CRYPTO_BLKCIPHER
select CRYPTO_CHACHA20
endif endif
...@@ -8,6 +8,7 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM) += sha1-arm.o ...@@ -8,6 +8,7 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM) += sha1-arm.o
obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o
obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o
obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o
obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
ce-obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o ce-obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o
ce-obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o ce-obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o
...@@ -26,8 +27,8 @@ $(warning $(ce-obj-y) $(ce-obj-m)) ...@@ -26,8 +27,8 @@ $(warning $(ce-obj-y) $(ce-obj-m))
endif endif
endif endif
aes-arm-y := aes-armv4.o aes_glue.o aes-arm-y := aes-cipher-core.o aes-cipher-glue.o
aes-arm-bs-y := aesbs-core.o aesbs-glue.o aes-arm-bs-y := aes-neonbs-core.o aes-neonbs-glue.o
sha1-arm-y := sha1-armv4-large.o sha1_glue.o sha1-arm-y := sha1-armv4-large.o sha1_glue.o
sha1-arm-neon-y := sha1-armv7-neon.o sha1_neon_glue.o sha1-arm-neon-y := sha1-armv7-neon.o sha1_neon_glue.o
sha256-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha256_neon_glue.o sha256-arm-neon-$(CONFIG_KERNEL_MODE_NEON) := sha256_neon_glue.o
...@@ -40,17 +41,15 @@ aes-arm-ce-y := aes-ce-core.o aes-ce-glue.o ...@@ -40,17 +41,15 @@ aes-arm-ce-y := aes-ce-core.o aes-ce-glue.o
ghash-arm-ce-y := ghash-ce-core.o ghash-ce-glue.o ghash-arm-ce-y := ghash-ce-core.o ghash-ce-glue.o
crct10dif-arm-ce-y := crct10dif-ce-core.o crct10dif-ce-glue.o crct10dif-arm-ce-y := crct10dif-ce-core.o crct10dif-ce-glue.o
crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o
chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
quiet_cmd_perl = PERL $@ quiet_cmd_perl = PERL $@
cmd_perl = $(PERL) $(<) > $(@) cmd_perl = $(PERL) $(<) > $(@)
$(src)/aesbs-core.S_shipped: $(src)/bsaes-armv7.pl
$(call cmd,perl)
$(src)/sha256-core.S_shipped: $(src)/sha256-armv4.pl $(src)/sha256-core.S_shipped: $(src)/sha256-armv4.pl
$(call cmd,perl) $(call cmd,perl)
$(src)/sha512-core.S_shipped: $(src)/sha512-armv4.pl $(src)/sha512-core.S_shipped: $(src)/sha512-armv4.pl
$(call cmd,perl) $(call cmd,perl)
.PRECIOUS: $(obj)/aesbs-core.S $(obj)/sha256-core.S $(obj)/sha512-core.S .PRECIOUS: $(obj)/sha256-core.S $(obj)/sha512-core.S
...@@ -169,19 +169,19 @@ ENTRY(ce_aes_ecb_encrypt) ...@@ -169,19 +169,19 @@ ENTRY(ce_aes_ecb_encrypt)
.Lecbencloop3x: .Lecbencloop3x:
subs r4, r4, #3 subs r4, r4, #3
bmi .Lecbenc1x bmi .Lecbenc1x
vld1.8 {q0-q1}, [r1, :64]! vld1.8 {q0-q1}, [r1]!
vld1.8 {q2}, [r1, :64]! vld1.8 {q2}, [r1]!
bl aes_encrypt_3x bl aes_encrypt_3x
vst1.8 {q0-q1}, [r0, :64]! vst1.8 {q0-q1}, [r0]!
vst1.8 {q2}, [r0, :64]! vst1.8 {q2}, [r0]!
b .Lecbencloop3x b .Lecbencloop3x
.Lecbenc1x: .Lecbenc1x:
adds r4, r4, #3 adds r4, r4, #3
beq .Lecbencout beq .Lecbencout
.Lecbencloop: .Lecbencloop:
vld1.8 {q0}, [r1, :64]! vld1.8 {q0}, [r1]!
bl aes_encrypt bl aes_encrypt
vst1.8 {q0}, [r0, :64]! vst1.8 {q0}, [r0]!
subs r4, r4, #1 subs r4, r4, #1
bne .Lecbencloop bne .Lecbencloop
.Lecbencout: .Lecbencout:
...@@ -195,19 +195,19 @@ ENTRY(ce_aes_ecb_decrypt) ...@@ -195,19 +195,19 @@ ENTRY(ce_aes_ecb_decrypt)
.Lecbdecloop3x: .Lecbdecloop3x:
subs r4, r4, #3 subs r4, r4, #3
bmi .Lecbdec1x bmi .Lecbdec1x
vld1.8 {q0-q1}, [r1, :64]! vld1.8 {q0-q1}, [r1]!
vld1.8 {q2}, [r1, :64]! vld1.8 {q2}, [r1]!
bl aes_decrypt_3x bl aes_decrypt_3x
vst1.8 {q0-q1}, [r0, :64]! vst1.8 {q0-q1}, [r0]!
vst1.8 {q2}, [r0, :64]! vst1.8 {q2}, [r0]!
b .Lecbdecloop3x b .Lecbdecloop3x
.Lecbdec1x: .Lecbdec1x:
adds r4, r4, #3 adds r4, r4, #3
beq .Lecbdecout beq .Lecbdecout
.Lecbdecloop: .Lecbdecloop:
vld1.8 {q0}, [r1, :64]! vld1.8 {q0}, [r1]!
bl aes_decrypt bl aes_decrypt
vst1.8 {q0}, [r0, :64]! vst1.8 {q0}, [r0]!
subs r4, r4, #1 subs r4, r4, #1
bne .Lecbdecloop bne .Lecbdecloop
.Lecbdecout: .Lecbdecout:
...@@ -226,10 +226,10 @@ ENTRY(ce_aes_cbc_encrypt) ...@@ -226,10 +226,10 @@ ENTRY(ce_aes_cbc_encrypt)
vld1.8 {q0}, [r5] vld1.8 {q0}, [r5]
prepare_key r2, r3 prepare_key r2, r3
.Lcbcencloop: .Lcbcencloop:
vld1.8 {q1}, [r1, :64]! @ get next pt block vld1.8 {q1}, [r1]! @ get next pt block
veor q0, q0, q1 @ ..and xor with iv veor q0, q0, q1 @ ..and xor with iv
bl aes_encrypt bl aes_encrypt
vst1.8 {q0}, [r0, :64]! vst1.8 {q0}, [r0]!
subs r4, r4, #1 subs r4, r4, #1
bne .Lcbcencloop bne .Lcbcencloop
vst1.8 {q0}, [r5] vst1.8 {q0}, [r5]
...@@ -244,8 +244,8 @@ ENTRY(ce_aes_cbc_decrypt) ...@@ -244,8 +244,8 @@ ENTRY(ce_aes_cbc_decrypt)
.Lcbcdecloop3x: .Lcbcdecloop3x:
subs r4, r4, #3 subs r4, r4, #3
bmi .Lcbcdec1x bmi .Lcbcdec1x
vld1.8 {q0-q1}, [r1, :64]! vld1.8 {q0-q1}, [r1]!
vld1.8 {q2}, [r1, :64]! vld1.8 {q2}, [r1]!
vmov q3, q0 vmov q3, q0
vmov q4, q1 vmov q4, q1
vmov q5, q2 vmov q5, q2
...@@ -254,19 +254,19 @@ ENTRY(ce_aes_cbc_decrypt) ...@@ -254,19 +254,19 @@ ENTRY(ce_aes_cbc_decrypt)
veor q1, q1, q3 veor q1, q1, q3
veor q2, q2, q4 veor q2, q2, q4
vmov q6, q5 vmov q6, q5
vst1.8 {q0-q1}, [r0, :64]! vst1.8 {q0-q1}, [r0]!
vst1.8 {q2}, [r0, :64]! vst1.8 {q2}, [r0]!
b .Lcbcdecloop3x b .Lcbcdecloop3x
.Lcbcdec1x: .Lcbcdec1x:
adds r4, r4, #3 adds r4, r4, #3
beq .Lcbcdecout beq .Lcbcdecout
vmov q15, q14 @ preserve last round key vmov q15, q14 @ preserve last round key
.Lcbcdecloop: .Lcbcdecloop:
vld1.8 {q0}, [r1, :64]! @ get next ct block vld1.8 {q0}, [r1]! @ get next ct block
veor q14, q15, q6 @ combine prev ct with last key veor q14, q15, q6 @ combine prev ct with last key
vmov q6, q0 vmov q6, q0
bl aes_decrypt bl aes_decrypt
vst1.8 {q0}, [r0, :64]! vst1.8 {q0}, [r0]!
subs r4, r4, #1 subs r4, r4, #1
bne .Lcbcdecloop bne .Lcbcdecloop
.Lcbcdecout: .Lcbcdecout:
...@@ -300,15 +300,15 @@ ENTRY(ce_aes_ctr_encrypt) ...@@ -300,15 +300,15 @@ ENTRY(ce_aes_ctr_encrypt)
rev ip, r6 rev ip, r6
add r6, r6, #1 add r6, r6, #1
vmov s11, ip vmov s11, ip
vld1.8 {q3-q4}, [r1, :64]! vld1.8 {q3-q4}, [r1]!
vld1.8 {q5}, [r1, :64]! vld1.8 {q5}, [r1]!
bl aes_encrypt_3x bl aes_encrypt_3x
veor q0, q0, q3 veor q0, q0, q3
veor q1, q1, q4 veor q1, q1, q4
veor q2, q2, q5 veor q2, q2, q5
rev ip, r6 rev ip, r6
vst1.8 {q0-q1}, [r0, :64]! vst1.8 {q0-q1}, [r0]!
vst1.8 {q2}, [r0, :64]! vst1.8 {q2}, [r0]!
vmov s27, ip vmov s27, ip
b .Lctrloop3x b .Lctrloop3x
.Lctr1x: .Lctr1x:
...@@ -318,10 +318,10 @@ ENTRY(ce_aes_ctr_encrypt) ...@@ -318,10 +318,10 @@ ENTRY(ce_aes_ctr_encrypt)
vmov q0, q6 vmov q0, q6
bl aes_encrypt bl aes_encrypt
subs r4, r4, #1 subs r4, r4, #1
bmi .Lctrhalfblock @ blocks < 0 means 1/2 block bmi .Lctrtailblock @ blocks < 0 means tail block
vld1.8 {q3}, [r1, :64]! vld1.8 {q3}, [r1]!
veor q3, q0, q3 veor q3, q0, q3
vst1.8 {q3}, [r0, :64]! vst1.8 {q3}, [r0]!
adds r6, r6, #1 @ increment BE ctr adds r6, r6, #1 @ increment BE ctr
rev ip, r6 rev ip, r6
...@@ -333,10 +333,8 @@ ENTRY(ce_aes_ctr_encrypt) ...@@ -333,10 +333,8 @@ ENTRY(ce_aes_ctr_encrypt)
vst1.8 {q6}, [r5] vst1.8 {q6}, [r5]
pop {r4-r6, pc} pop {r4-r6, pc}
.Lctrhalfblock: .Lctrtailblock:
vld1.8 {d1}, [r1, :64] vst1.8 {q0}, [r0, :64] @ return just the key stream
veor d0, d0, d1
vst1.8 {d0}, [r0, :64]
pop {r4-r6, pc} pop {r4-r6, pc}
.Lctrcarry: .Lctrcarry:
...@@ -405,8 +403,8 @@ ENTRY(ce_aes_xts_encrypt) ...@@ -405,8 +403,8 @@ ENTRY(ce_aes_xts_encrypt)
.Lxtsenc3x: .Lxtsenc3x:
subs r4, r4, #3 subs r4, r4, #3
bmi .Lxtsenc1x bmi .Lxtsenc1x
vld1.8 {q0-q1}, [r1, :64]! @ get 3 pt blocks vld1.8 {q0-q1}, [r1]! @ get 3 pt blocks
vld1.8 {q2}, [r1, :64]! vld1.8 {q2}, [r1]!
next_tweak q4, q3, q7, q6 next_tweak q4, q3, q7, q6
veor q0, q0, q3 veor q0, q0, q3
next_tweak q5, q4, q7, q6 next_tweak q5, q4, q7, q6
...@@ -416,8 +414,8 @@ ENTRY(ce_aes_xts_encrypt) ...@@ -416,8 +414,8 @@ ENTRY(ce_aes_xts_encrypt)
veor q0, q0, q3 veor q0, q0, q3
veor q1, q1, q4 veor q1, q1, q4
veor q2, q2, q5 veor q2, q2, q5
vst1.8 {q0-q1}, [r0, :64]! @ write 3 ct blocks vst1.8 {q0-q1}, [r0]! @ write 3 ct blocks
vst1.8 {q2}, [r0, :64]! vst1.8 {q2}, [r0]!
vmov q3, q5 vmov q3, q5
teq r4, #0 teq r4, #0
beq .Lxtsencout beq .Lxtsencout
...@@ -426,11 +424,11 @@ ENTRY(ce_aes_xts_encrypt) ...@@ -426,11 +424,11 @@ ENTRY(ce_aes_xts_encrypt)
adds r4, r4, #3 adds r4, r4, #3
beq .Lxtsencout beq .Lxtsencout
.Lxtsencloop: .Lxtsencloop:
vld1.8 {q0}, [r1, :64]! vld1.8 {q0}, [r1]!
veor q0, q0, q3 veor q0, q0, q3
bl aes_encrypt bl aes_encrypt
veor q0, q0, q3 veor q0, q0, q3
vst1.8 {q0}, [r0, :64]! vst1.8 {q0}, [r0]!
subs r4, r4, #1 subs r4, r4, #1
beq .Lxtsencout beq .Lxtsencout
next_tweak q3, q3, q7, q6 next_tweak q3, q3, q7, q6
...@@ -456,8 +454,8 @@ ENTRY(ce_aes_xts_decrypt) ...@@ -456,8 +454,8 @@ ENTRY(ce_aes_xts_decrypt)
.Lxtsdec3x: .Lxtsdec3x:
subs r4, r4, #3 subs r4, r4, #3
bmi .Lxtsdec1x bmi .Lxtsdec1x
vld1.8 {q0-q1}, [r1, :64]! @ get 3 ct blocks vld1.8 {q0-q1}, [r1]! @ get 3 ct blocks
vld1.8 {q2}, [r1, :64]! vld1.8 {q2}, [r1]!
next_tweak q4, q3, q7, q6 next_tweak q4, q3, q7, q6
veor q0, q0, q3 veor q0, q0, q3
next_tweak q5, q4, q7, q6 next_tweak q5, q4, q7, q6
...@@ -467,8 +465,8 @@ ENTRY(ce_aes_xts_decrypt) ...@@ -467,8 +465,8 @@ ENTRY(ce_aes_xts_decrypt)
veor q0, q0, q3 veor q0, q0, q3
veor q1, q1, q4 veor q1, q1, q4
veor q2, q2, q5 veor q2, q2, q5
vst1.8 {q0-q1}, [r0, :64]! @ write 3 pt blocks vst1.8 {q0-q1}, [r0]! @ write 3 pt blocks
vst1.8 {q2}, [r0, :64]! vst1.8 {q2}, [r0]!
vmov q3, q5 vmov q3, q5
teq r4, #0 teq r4, #0
beq .Lxtsdecout beq .Lxtsdecout
...@@ -477,12 +475,12 @@ ENTRY(ce_aes_xts_decrypt) ...@@ -477,12 +475,12 @@ ENTRY(ce_aes_xts_decrypt)
adds r4, r4, #3 adds r4, r4, #3
beq .Lxtsdecout beq .Lxtsdecout
.Lxtsdecloop: .Lxtsdecloop:
vld1.8 {q0}, [r1, :64]! vld1.8 {q0}, [r1]!
veor q0, q0, q3 veor q0, q0, q3
add ip, r2, #32 @ 3rd round key add ip, r2, #32 @ 3rd round key
bl aes_decrypt bl aes_decrypt
veor q0, q0, q3 veor q0, q0, q3
vst1.8 {q0}, [r0, :64]! vst1.8 {q0}, [r0]!
subs r4, r4, #1 subs r4, r4, #1
beq .Lxtsdecout beq .Lxtsdecout
next_tweak q3, q3, q7, q6 next_tweak q3, q3, q7, q6
......
...@@ -278,14 +278,15 @@ static int ctr_encrypt(struct skcipher_request *req) ...@@ -278,14 +278,15 @@ static int ctr_encrypt(struct skcipher_request *req)
u8 *tsrc = walk.src.virt.addr; u8 *tsrc = walk.src.virt.addr;
/* /*
* Minimum alignment is 8 bytes, so if nbytes is <= 8, we need * Tell aes_ctr_encrypt() to process a tail block.
* to tell aes_ctr_encrypt() to only read half a block.
*/ */
blocks = (nbytes <= 8) ? -1 : 1; blocks = -1;
ce_aes_ctr_encrypt(tail, tsrc, (u8 *)ctx->key_enc, ce_aes_ctr_encrypt(tail, NULL, (u8 *)ctx->key_enc,
num_rounds(ctx), blocks, walk.iv); num_rounds(ctx), blocks, walk.iv);
memcpy(tdst, tail, nbytes); if (tdst != tsrc)
memcpy(tdst, tsrc, nbytes);
crypto_xor(tdst, tail, nbytes);
err = skcipher_walk_done(&walk, 0); err = skcipher_walk_done(&walk, 0);
} }
kernel_neon_end(); kernel_neon_end();
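The rewritten tail handling above is the usual CTR pattern: ask the core routine for one block of raw keystream, copy the remaining input into place if it is not already there, and XOR only the bytes that exist. A standalone sketch of that pattern, using a hypothetical keystream_block() helper in place of ce_aes_ctr_encrypt():

#include <crypto/algapi.h>	/* crypto_xor() */
#include <linux/string.h>

/* hypothetical helper: writes one block of raw keystream for the current IV */
void keystream_block(u8 ks[16], const u32 *key_enc, int rounds, u8 *iv);

static void ctr_tail(u8 *dst, const u8 *src, unsigned int nbytes,
		     const u32 *key_enc, int rounds, u8 *iv)
{
	u8 ks[16];

	keystream_block(ks, key_enc, rounds, iv);	/* keystream only, no data read */

	if (dst != src)
		memcpy(dst, src, nbytes);	/* bring the tail bytes in place */

	crypto_xor(dst, ks, nbytes);		/* XOR just the bytes we have */
}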
...@@ -345,7 +346,6 @@ static struct skcipher_alg aes_algs[] = { { ...@@ -345,7 +346,6 @@ static struct skcipher_alg aes_algs[] = { {
.cra_flags = CRYPTO_ALG_INTERNAL, .cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = AES_BLOCK_SIZE, .cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypto_aes_ctx), .cra_ctxsize = sizeof(struct crypto_aes_ctx),
.cra_alignmask = 7,
.cra_module = THIS_MODULE, .cra_module = THIS_MODULE,
}, },
.min_keysize = AES_MIN_KEY_SIZE, .min_keysize = AES_MIN_KEY_SIZE,
...@@ -361,7 +361,6 @@ static struct skcipher_alg aes_algs[] = { { ...@@ -361,7 +361,6 @@ static struct skcipher_alg aes_algs[] = { {
.cra_flags = CRYPTO_ALG_INTERNAL, .cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = AES_BLOCK_SIZE, .cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypto_aes_ctx), .cra_ctxsize = sizeof(struct crypto_aes_ctx),
.cra_alignmask = 7,
.cra_module = THIS_MODULE, .cra_module = THIS_MODULE,
}, },
.min_keysize = AES_MIN_KEY_SIZE, .min_keysize = AES_MIN_KEY_SIZE,
...@@ -378,7 +377,6 @@ static struct skcipher_alg aes_algs[] = { { ...@@ -378,7 +377,6 @@ static struct skcipher_alg aes_algs[] = { {
.cra_flags = CRYPTO_ALG_INTERNAL, .cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = 1, .cra_blocksize = 1,
.cra_ctxsize = sizeof(struct crypto_aes_ctx), .cra_ctxsize = sizeof(struct crypto_aes_ctx),
.cra_alignmask = 7,
.cra_module = THIS_MODULE, .cra_module = THIS_MODULE,
}, },
.min_keysize = AES_MIN_KEY_SIZE, .min_keysize = AES_MIN_KEY_SIZE,
...@@ -396,7 +394,6 @@ static struct skcipher_alg aes_algs[] = { { ...@@ -396,7 +394,6 @@ static struct skcipher_alg aes_algs[] = { {
.cra_flags = CRYPTO_ALG_INTERNAL, .cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = AES_BLOCK_SIZE, .cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypto_aes_xts_ctx), .cra_ctxsize = sizeof(struct crypto_aes_xts_ctx),
.cra_alignmask = 7,
.cra_module = THIS_MODULE, .cra_module = THIS_MODULE,
}, },
.min_keysize = 2 * AES_MIN_KEY_SIZE, .min_keysize = 2 * AES_MIN_KEY_SIZE,
......
/*
* Scalar AES core transform
*
* Copyright (C) 2017 Linaro Ltd.
* Author: Ard Biesheuvel <ard.biesheuvel@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/linkage.h>
.text
.align 5
rk .req r0
rounds .req r1
in .req r2
out .req r3
ttab .req ip
t0 .req lr
t1 .req r2
t2 .req r3
.macro __select, out, in, idx
.if __LINUX_ARM_ARCH__ < 7
and \out, \in, #0xff << (8 * \idx)
.else
ubfx \out, \in, #(8 * \idx), #8
.endif
.endm
.macro __load, out, in, idx
.if __LINUX_ARM_ARCH__ < 7 && \idx > 0
ldr \out, [ttab, \in, lsr #(8 * \idx) - 2]
.else
ldr \out, [ttab, \in, lsl #2]
.endif
.endm
.macro __hround, out0, out1, in0, in1, in2, in3, t3, t4, enc
__select \out0, \in0, 0
__select t0, \in1, 1
__load \out0, \out0, 0
__load t0, t0, 1
.if \enc
__select \out1, \in1, 0
__select t1, \in2, 1
.else
__select \out1, \in3, 0
__select t1, \in0, 1
.endif
__load \out1, \out1, 0
__select t2, \in2, 2
__load t1, t1, 1
__load t2, t2, 2
eor \out0, \out0, t0, ror #24
__select t0, \in3, 3
.if \enc
__select \t3, \in3, 2
__select \t4, \in0, 3
.else
__select \t3, \in1, 2
__select \t4, \in2, 3
.endif
__load \t3, \t3, 2
__load t0, t0, 3
__load \t4, \t4, 3
eor \out1, \out1, t1, ror #24
eor \out0, \out0, t2, ror #16
ldm rk!, {t1, t2}
eor \out1, \out1, \t3, ror #16
eor \out0, \out0, t0, ror #8
eor \out1, \out1, \t4, ror #8
eor \out0, \out0, t1
eor \out1, \out1, t2
.endm
.macro fround, out0, out1, out2, out3, in0, in1, in2, in3
__hround \out0, \out1, \in0, \in1, \in2, \in3, \out2, \out3, 1
__hround \out2, \out3, \in2, \in3, \in0, \in1, \in1, \in2, 1
.endm
.macro iround, out0, out1, out2, out3, in0, in1, in2, in3
__hround \out0, \out1, \in0, \in3, \in2, \in1, \out2, \out3, 0
__hround \out2, \out3, \in2, \in1, \in0, \in3, \in1, \in0, 0
.endm
.macro __rev, out, in
.if __LINUX_ARM_ARCH__ < 6
lsl t0, \in, #24
and t1, \in, #0xff00
and t2, \in, #0xff0000
orr \out, t0, \in, lsr #24
orr \out, \out, t1, lsl #8
orr \out, \out, t2, lsr #8
.else
rev \out, \in
.endif
.endm
.macro __adrl, out, sym, c
.if __LINUX_ARM_ARCH__ < 7
ldr\c \out, =\sym
.else
movw\c \out, #:lower16:\sym
movt\c \out, #:upper16:\sym
.endif
.endm
.macro do_crypt, round, ttab, ltab
push {r3-r11, lr}
ldr r4, [in]
ldr r5, [in, #4]
ldr r6, [in, #8]
ldr r7, [in, #12]
ldm rk!, {r8-r11}
#ifdef CONFIG_CPU_BIG_ENDIAN
__rev r4, r4
__rev r5, r5
__rev r6, r6
__rev r7, r7
#endif
eor r4, r4, r8
eor r5, r5, r9
eor r6, r6, r10
eor r7, r7, r11
__adrl ttab, \ttab
tst rounds, #2
bne 1f
0: \round r8, r9, r10, r11, r4, r5, r6, r7
\round r4, r5, r6, r7, r8, r9, r10, r11
1: subs rounds, rounds, #4
\round r8, r9, r10, r11, r4, r5, r6, r7
__adrl ttab, \ltab, ls
\round r4, r5, r6, r7, r8, r9, r10, r11
bhi 0b
#ifdef CONFIG_CPU_BIG_ENDIAN
__rev r4, r4
__rev r5, r5
__rev r6, r6
__rev r7, r7
#endif
ldr out, [sp]
str r4, [out]
str r5, [out, #4]
str r6, [out, #8]
str r7, [out, #12]
pop {r3-r11, pc}
.align 3
.ltorg
.endm
ENTRY(__aes_arm_encrypt)
do_crypt fround, crypto_ft_tab, crypto_fl_tab
ENDPROC(__aes_arm_encrypt)
ENTRY(__aes_arm_decrypt)
do_crypt iround, crypto_it_tab, crypto_il_tab
ENDPROC(__aes_arm_decrypt)
/*
* Scalar AES core transform
*
* Copyright (C) 2017 Linaro Ltd.
* Author: Ard Biesheuvel <ard.biesheuvel@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <crypto/aes.h>
#include <linux/crypto.h>
#include <linux/module.h>
asmlinkage void __aes_arm_encrypt(u32 *rk, int rounds, const u8 *in, u8 *out);
EXPORT_SYMBOL(__aes_arm_encrypt);
asmlinkage void __aes_arm_decrypt(u32 *rk, int rounds, const u8 *in, u8 *out);
EXPORT_SYMBOL(__aes_arm_decrypt);
static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
int rounds = 6 + ctx->key_length / 4;
__aes_arm_encrypt(ctx->key_enc, rounds, in, out);
}
static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
int rounds = 6 + ctx->key_length / 4;
__aes_arm_decrypt(ctx->key_dec, rounds, in, out);
}
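The rounds value computed above is the standard AES schedule length for the three key sizes. A standalone arithmetic check of the 6 + key_length / 4 formula, with key_length in bytes:

#include <assert.h>

int main(void)
{
	assert(6 + 16 / 4 == 10);	/* AES-128: 10 rounds */
	assert(6 + 24 / 4 == 12);	/* AES-192: 12 rounds */
	assert(6 + 32 / 4 == 14);	/* AES-256: 14 rounds */
	return 0;
}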
static struct crypto_alg aes_alg = {
.cra_name = "aes",
.cra_driver_name = "aes-arm",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypto_aes_ctx),
.cra_module = THIS_MODULE,
.cra_cipher.cia_min_keysize = AES_MIN_KEY_SIZE,
.cra_cipher.cia_max_keysize = AES_MAX_KEY_SIZE,
.cra_cipher.cia_setkey = crypto_aes_set_key,
.cra_cipher.cia_encrypt = aes_encrypt,
.cra_cipher.cia_decrypt = aes_decrypt,
#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
.cra_alignmask = 3,
#endif
};
static int __init aes_init(void)
{
return crypto_register_alg(&aes_alg);
}
static void __exit aes_fini(void)
{
crypto_unregister_alg(&aes_alg);
}
module_init(aes_init);
module_exit(aes_fini);
MODULE_DESCRIPTION("Scalar AES cipher for ARM");
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_CRYPTO("aes");
/*
* Glue Code for the asm optimized version of the AES Cipher Algorithm
*/
#include <linux/module.h>
#include <linux/crypto.h>
#include <crypto/aes.h>
#include "aes_glue.h"
EXPORT_SYMBOL(AES_encrypt);
EXPORT_SYMBOL(AES_decrypt);
EXPORT_SYMBOL(private_AES_set_encrypt_key);
EXPORT_SYMBOL(private_AES_set_decrypt_key);
static void aes_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
struct AES_CTX *ctx = crypto_tfm_ctx(tfm);
AES_encrypt(src, dst, &ctx->enc_key);
}
static void aes_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
struct AES_CTX *ctx = crypto_tfm_ctx(tfm);
AES_decrypt(src, dst, &ctx->dec_key);
}
static int aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
unsigned int key_len)
{
struct AES_CTX *ctx = crypto_tfm_ctx(tfm);
switch (key_len) {
case AES_KEYSIZE_128:
key_len = 128;
break;
case AES_KEYSIZE_192:
key_len = 192;
break;
case AES_KEYSIZE_256:
key_len = 256;
break;
default:
tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
}
if (private_AES_set_encrypt_key(in_key, key_len, &ctx->enc_key) == -1) {
tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
}
/* private_AES_set_decrypt_key expects an encryption key as input */
ctx->dec_key = ctx->enc_key;
if (private_AES_set_decrypt_key(in_key, key_len, &ctx->dec_key) == -1) {
tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
return -EINVAL;
}
return 0;
}
static struct crypto_alg aes_alg = {
.cra_name = "aes",
.cra_driver_name = "aes-asm",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct AES_CTX),
.cra_module = THIS_MODULE,
.cra_list = LIST_HEAD_INIT(aes_alg.cra_list),
.cra_u = {
.cipher = {
.cia_min_keysize = AES_MIN_KEY_SIZE,
.cia_max_keysize = AES_MAX_KEY_SIZE,
.cia_setkey = aes_set_key,
.cia_encrypt = aes_encrypt,
.cia_decrypt = aes_decrypt
}
}
};
static int __init aes_init(void)
{
return crypto_register_alg(&aes_alg);
}
static void __exit aes_fini(void)
{
crypto_unregister_alg(&aes_alg);
}
module_init(aes_init);
module_exit(aes_fini);
MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm (ASM)");
MODULE_LICENSE("GPL");
MODULE_ALIAS_CRYPTO("aes");
MODULE_ALIAS_CRYPTO("aes-asm");
MODULE_AUTHOR("David McCullough <ucdevel@gmail.com>");
#define AES_MAXNR 14
struct AES_KEY {
unsigned int rd_key[4 * (AES_MAXNR + 1)];
int rounds;
};
struct AES_CTX {
struct AES_KEY enc_key;
struct AES_KEY dec_key;
};
asmlinkage void AES_encrypt(const u8 *in, u8 *out, struct AES_KEY *ctx);
asmlinkage void AES_decrypt(const u8 *in, u8 *out, struct AES_KEY *ctx);
asmlinkage int private_AES_set_decrypt_key(const unsigned char *userKey,
const int bits, struct AES_KEY *key);
asmlinkage int private_AES_set_encrypt_key(const unsigned char *userKey,
const int bits, struct AES_KEY *key);
/*
* linux/arch/arm/crypto/aesbs-glue.c - glue code for NEON bit sliced AES
*
* Copyright (C) 2013 Linaro Ltd <ard.biesheuvel@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <asm/neon.h>
#include <crypto/aes.h>
#include <crypto/cbc.h>
#include <crypto/internal/simd.h>
#include <crypto/internal/skcipher.h>
#include <linux/module.h>
#include <crypto/xts.h>
#include "aes_glue.h"
#define BIT_SLICED_KEY_MAXSIZE (128 * (AES_MAXNR - 1) + 2 * AES_BLOCK_SIZE)
struct BS_KEY {
struct AES_KEY rk;
int converted;
u8 __aligned(8) bs[BIT_SLICED_KEY_MAXSIZE];
} __aligned(8);
asmlinkage void bsaes_enc_key_convert(u8 out[], struct AES_KEY const *in);
asmlinkage void bsaes_dec_key_convert(u8 out[], struct AES_KEY const *in);
asmlinkage void bsaes_cbc_encrypt(u8 const in[], u8 out[], u32 bytes,
struct BS_KEY *key, u8 iv[]);
asmlinkage void bsaes_ctr32_encrypt_blocks(u8 const in[], u8 out[], u32 blocks,
struct BS_KEY *key, u8 const iv[]);
asmlinkage void bsaes_xts_encrypt(u8 const in[], u8 out[], u32 bytes,
struct BS_KEY *key, u8 tweak[]);
asmlinkage void bsaes_xts_decrypt(u8 const in[], u8 out[], u32 bytes,
struct BS_KEY *key, u8 tweak[]);
struct aesbs_cbc_ctx {
struct AES_KEY enc;
struct BS_KEY dec;
};
struct aesbs_ctr_ctx {
struct BS_KEY enc;
};
struct aesbs_xts_ctx {
struct BS_KEY enc;
struct BS_KEY dec;
struct AES_KEY twkey;
};
static int aesbs_cbc_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
int bits = key_len * 8;
if (private_AES_set_encrypt_key(in_key, bits, &ctx->enc)) {
crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
return -EINVAL;
}
ctx->dec.rk = ctx->enc;
private_AES_set_decrypt_key(in_key, bits, &ctx->dec.rk);
ctx->dec.converted = 0;
return 0;
}
static int aesbs_ctr_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
int bits = key_len * 8;
if (private_AES_set_encrypt_key(in_key, bits, &ctx->enc.rk)) {
crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
return -EINVAL;
}
ctx->enc.converted = 0;
return 0;
}
static int aesbs_xts_set_key(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
struct aesbs_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
int bits = key_len * 4;
int err;
err = xts_verify_key(tfm, in_key, key_len);
if (err)
return err;
if (private_AES_set_encrypt_key(in_key, bits, &ctx->enc.rk)) {
crypto_skcipher_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
return -EINVAL;
}
ctx->dec.rk = ctx->enc.rk;
private_AES_set_decrypt_key(in_key, bits, &ctx->dec.rk);
private_AES_set_encrypt_key(in_key + key_len / 2, bits, &ctx->twkey);
ctx->enc.converted = ctx->dec.converted = 0;
return 0;
}
static inline void aesbs_encrypt_one(struct crypto_skcipher *tfm,
const u8 *src, u8 *dst)
{
struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
AES_encrypt(src, dst, &ctx->enc);
}
static int aesbs_cbc_encrypt(struct skcipher_request *req)
{
return crypto_cbc_encrypt_walk(req, aesbs_encrypt_one);
}
static inline void aesbs_decrypt_one(struct crypto_skcipher *tfm,
const u8 *src, u8 *dst)
{
struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
AES_decrypt(src, dst, &ctx->dec.rk);
}
static int aesbs_cbc_decrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct aesbs_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
struct skcipher_walk walk;
unsigned int nbytes;
int err;
for (err = skcipher_walk_virt(&walk, req, false);
(nbytes = walk.nbytes); err = skcipher_walk_done(&walk, nbytes)) {
u32 blocks = nbytes / AES_BLOCK_SIZE;
u8 *dst = walk.dst.virt.addr;
u8 *src = walk.src.virt.addr;
u8 *iv = walk.iv;
if (blocks >= 8) {
kernel_neon_begin();
bsaes_cbc_encrypt(src, dst, nbytes, &ctx->dec, iv);
kernel_neon_end();
nbytes %= AES_BLOCK_SIZE;
continue;
}
nbytes = crypto_cbc_decrypt_blocks(&walk, tfm,
aesbs_decrypt_one);
}
return err;
}
static void inc_be128_ctr(__be32 ctr[], u32 addend)
{
int i;
for (i = 3; i >= 0; i--, addend = 1) {
u32 n = be32_to_cpu(ctr[i]) + addend;
ctr[i] = cpu_to_be32(n);
if (n >= addend)
break;
}
}
static int aesbs_ctr_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct aesbs_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
struct skcipher_walk walk;
u32 blocks;
int err;
err = skcipher_walk_virt(&walk, req, false);
while ((blocks = walk.nbytes / AES_BLOCK_SIZE)) {
u32 tail = walk.nbytes % AES_BLOCK_SIZE;
__be32 *ctr = (__be32 *)walk.iv;
u32 headroom = UINT_MAX - be32_to_cpu(ctr[3]);
/* avoid 32 bit counter overflow in the NEON code */
if (unlikely(headroom < blocks)) {
blocks = headroom + 1;
tail = walk.nbytes - blocks * AES_BLOCK_SIZE;
}
kernel_neon_begin();
bsaes_ctr32_encrypt_blocks(walk.src.virt.addr,
walk.dst.virt.addr, blocks,
&ctx->enc, walk.iv);
kernel_neon_end();
inc_be128_ctr(ctr, blocks);
err = skcipher_walk_done(&walk, tail);
}
if (walk.nbytes) {
u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;
u8 ks[AES_BLOCK_SIZE];
AES_encrypt(walk.iv, ks, &ctx->enc.rk);
if (tdst != tsrc)
memcpy(tdst, tsrc, walk.nbytes);
crypto_xor(tdst, ks, walk.nbytes);
err = skcipher_walk_done(&walk, 0);
}
return err;
}
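The headroom check in aesbs_ctr_encrypt() clamps each NEON call so that the low 32-bit word of the big-endian counter never wraps inside the assembly; any carry into the upper words is then applied in C by inc_be128_ctr(). A standalone worked example of that clamping, with made-up numbers:

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint32_t ctr3 = 0xfffffffe;	/* low 32 bits of the BE counter */
	uint32_t blocks = 5;		/* blocks the walker handed us */
	uint32_t nbytes = blocks * 16;
	uint32_t tail = 0;

	/* the NEON code uses counter values ctr3 .. ctr3 + blocks - 1 */
	uint32_t headroom = UINT32_MAX - ctr3;	/* increments left before wrap */
	if (headroom < blocks) {
		blocks = headroom + 1;		/* last block uses exactly 0xffffffff */
		tail = nbytes - blocks * 16;	/* leftover bytes for the next pass */
	}

	assert(blocks == 2 && tail == 48);
	return 0;
}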
static int aesbs_xts_encrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct aesbs_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
struct skcipher_walk walk;
int err;
err = skcipher_walk_virt(&walk, req, false);
/* generate the initial tweak */
AES_encrypt(walk.iv, walk.iv, &ctx->twkey);
while (walk.nbytes) {
kernel_neon_begin();
bsaes_xts_encrypt(walk.src.virt.addr, walk.dst.virt.addr,
walk.nbytes, &ctx->enc, walk.iv);
kernel_neon_end();
err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
}
return err;
}
static int aesbs_xts_decrypt(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct aesbs_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
struct skcipher_walk walk;
int err;
err = skcipher_walk_virt(&walk, req, false);
/* generate the initial tweak */
AES_encrypt(walk.iv, walk.iv, &ctx->twkey);
while (walk.nbytes) {
kernel_neon_begin();
bsaes_xts_decrypt(walk.src.virt.addr, walk.dst.virt.addr,
walk.nbytes, &ctx->dec, walk.iv);
kernel_neon_end();
err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
}
return err;
}
static struct skcipher_alg aesbs_algs[] = { {
.base = {
.cra_name = "__cbc(aes)",
.cra_driver_name = "__cbc-aes-neonbs",
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct aesbs_cbc_ctx),
.cra_alignmask = 7,
.cra_module = THIS_MODULE,
},
.min_keysize = AES_MIN_KEY_SIZE,
.max_keysize = AES_MAX_KEY_SIZE,
.ivsize = AES_BLOCK_SIZE,
.setkey = aesbs_cbc_set_key,
.encrypt = aesbs_cbc_encrypt,
.decrypt = aesbs_cbc_decrypt,
}, {
.base = {
.cra_name = "__ctr(aes)",
.cra_driver_name = "__ctr-aes-neonbs",
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = 1,
.cra_ctxsize = sizeof(struct aesbs_ctr_ctx),
.cra_alignmask = 7,
.cra_module = THIS_MODULE,
},
.min_keysize = AES_MIN_KEY_SIZE,
.max_keysize = AES_MAX_KEY_SIZE,
.ivsize = AES_BLOCK_SIZE,
.chunksize = AES_BLOCK_SIZE,
.setkey = aesbs_ctr_set_key,
.encrypt = aesbs_ctr_encrypt,
.decrypt = aesbs_ctr_encrypt,
}, {
.base = {
.cra_name = "__xts(aes)",
.cra_driver_name = "__xts-aes-neonbs",
.cra_priority = 300,
.cra_flags = CRYPTO_ALG_INTERNAL,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct aesbs_xts_ctx),
.cra_alignmask = 7,
.cra_module = THIS_MODULE,
},
.min_keysize = 2 * AES_MIN_KEY_SIZE,
.max_keysize = 2 * AES_MAX_KEY_SIZE,
.ivsize = AES_BLOCK_SIZE,
.setkey = aesbs_xts_set_key,
.encrypt = aesbs_xts_encrypt,
.decrypt = aesbs_xts_decrypt,
} };
struct simd_skcipher_alg *aesbs_simd_algs[ARRAY_SIZE(aesbs_algs)];
static void aesbs_mod_exit(void)
{
int i;
for (i = 0; i < ARRAY_SIZE(aesbs_simd_algs) && aesbs_simd_algs[i]; i++)
simd_skcipher_free(aesbs_simd_algs[i]);
crypto_unregister_skciphers(aesbs_algs, ARRAY_SIZE(aesbs_algs));
}
static int __init aesbs_mod_init(void)
{
struct simd_skcipher_alg *simd;
const char *basename;
const char *algname;
const char *drvname;
int err;
int i;
if (!cpu_has_neon())
return -ENODEV;
err = crypto_register_skciphers(aesbs_algs, ARRAY_SIZE(aesbs_algs));
if (err)
return err;
for (i = 0; i < ARRAY_SIZE(aesbs_algs); i++) {
algname = aesbs_algs[i].base.cra_name + 2;
drvname = aesbs_algs[i].base.cra_driver_name + 2;
basename = aesbs_algs[i].base.cra_driver_name;
simd = simd_skcipher_create_compat(algname, drvname, basename);
err = PTR_ERR(simd);
if (IS_ERR(simd))
goto unregister_simds;
aesbs_simd_algs[i] = simd;
}
return 0;
unregister_simds:
aesbs_mod_exit();
return err;
}
module_init(aesbs_mod_init);
module_exit(aesbs_mod_exit);
MODULE_DESCRIPTION("Bit sliced AES in CBC/CTR/XTS modes using NEON");
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
MODULE_LICENSE("GPL");
/*
* ChaCha20 256-bit cipher algorithm, RFC7539, ARM NEON functions
*
* Copyright (C) 2016 Linaro, Ltd. <ard.biesheuvel@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Based on:
* ChaCha20 256-bit cipher algorithm, RFC7539, SIMD glue code
*
* Copyright (C) 2015 Martin Willi
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
#include <crypto/algapi.h>
#include <crypto/chacha20.h>
#include <crypto/internal/skcipher.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <asm/hwcap.h>
#include <asm/neon.h>
#include <asm/simd.h>
asmlinkage void chacha20_block_xor_neon(u32 *state, u8 *dst, const u8 *src);
asmlinkage void chacha20_4block_xor_neon(u32 *state, u8 *dst, const u8 *src);
static void chacha20_doneon(u32 *state, u8 *dst, const u8 *src,
unsigned int bytes)
{
u8 buf[CHACHA20_BLOCK_SIZE];
while (bytes >= CHACHA20_BLOCK_SIZE * 4) {
chacha20_4block_xor_neon(state, dst, src);
bytes -= CHACHA20_BLOCK_SIZE * 4;
src += CHACHA20_BLOCK_SIZE * 4;
dst += CHACHA20_BLOCK_SIZE * 4;
state[12] += 4;
}
while (bytes >= CHACHA20_BLOCK_SIZE) {
chacha20_block_xor_neon(state, dst, src);
bytes -= CHACHA20_BLOCK_SIZE;
src += CHACHA20_BLOCK_SIZE;
dst += CHACHA20_BLOCK_SIZE;
state[12]++;
}
if (bytes) {
memcpy(buf, src, bytes);
chacha20_block_xor_neon(state, buf, buf);
memcpy(dst, buf, bytes);
}
}
static int chacha20_neon(struct skcipher_request *req)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
struct chacha20_ctx *ctx = crypto_skcipher_ctx(tfm);
struct skcipher_walk walk;
u32 state[16];
int err;
if (req->cryptlen <= CHACHA20_BLOCK_SIZE || !may_use_simd())
return crypto_chacha20_crypt(req);
err = skcipher_walk_virt(&walk, req, true);
crypto_chacha20_init(state, ctx, walk.iv);
kernel_neon_begin();
while (walk.nbytes > 0) {
unsigned int nbytes = walk.nbytes;
if (nbytes < walk.total)
nbytes = round_down(nbytes, walk.stride);
chacha20_doneon(state, walk.dst.virt.addr, walk.src.virt.addr,
nbytes);
err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
}
kernel_neon_end();
return err;
}
static struct skcipher_alg alg = {
.base.cra_name = "chacha20",
.base.cra_driver_name = "chacha20-neon",
.base.cra_priority = 300,
.base.cra_blocksize = 1,
.base.cra_ctxsize = sizeof(struct chacha20_ctx),
.base.cra_module = THIS_MODULE,
.min_keysize = CHACHA20_KEY_SIZE,
.max_keysize = CHACHA20_KEY_SIZE,
.ivsize = CHACHA20_IV_SIZE,
.chunksize = CHACHA20_BLOCK_SIZE,
.walksize = 4 * CHACHA20_BLOCK_SIZE,
.setkey = crypto_chacha20_setkey,
.encrypt = chacha20_neon,
.decrypt = chacha20_neon,
};
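The .walksize field above corresponds to the "walksize attribute for batched walking" mentioned in the pull message: as I read the new API, it asks the skcipher walk to present spans of at least four ChaCha20 blocks while more data remains, so chacha20_doneon() can stay on its 4-way NEON path. A standalone sketch of how that routine carves up a 1000-byte span (plain arithmetic, not kernel code):

#include <assert.h>

#define BLK 64			/* CHACHA20_BLOCK_SIZE */

int main(void)
{
	unsigned int bytes = 1000, four = 0, one = 0, partial = 0;

	while (bytes >= 4 * BLK) {	/* chacha20_4block_xor_neon() */
		four++;
		bytes -= 4 * BLK;
	}
	while (bytes >= BLK) {		/* chacha20_block_xor_neon() */
		one++;
		bytes -= BLK;
	}
	if (bytes)			/* bounce buffer for the final partial block */
		partial = bytes;

	/* 1000 = 3*256 + 3*64 + 40 */
	assert(four == 3 && one == 3 && partial == 40);
	return 0;
}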
static int __init chacha20_simd_mod_init(void)
{
if (!(elf_hwcap & HWCAP_NEON))
return -ENODEV;
return crypto_register_skcipher(&alg);
}
static void __exit chacha20_simd_mod_fini(void)
{
crypto_unregister_skcipher(&alg);
}
module_init(chacha20_simd_mod_init);
module_exit(chacha20_simd_mod_fini);
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_CRYPTO("chacha20");
...@@ -516,4 +516,3 @@ CONFIG_CRYPTO_GHASH_ARM64_CE=y
 CONFIG_CRYPTO_AES_ARM64_CE_CCM=y
 CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
 # CONFIG_CRYPTO_AES_ARM64_NEON_BLK is not set
-CONFIG_CRYPTO_CRC32_ARM64=y
...@@ -37,10 +37,14 @@ config CRYPTO_CRCT10DIF_ARM64_CE ...@@ -37,10 +37,14 @@ config CRYPTO_CRCT10DIF_ARM64_CE
select CRYPTO_HASH select CRYPTO_HASH
config CRYPTO_CRC32_ARM64_CE config CRYPTO_CRC32_ARM64_CE
tristate "CRC32 and CRC32C digest algorithms using PMULL instructions" tristate "CRC32 and CRC32C digest algorithms using ARMv8 extensions"
depends on KERNEL_MODE_NEON && CRC32 depends on CRC32
select CRYPTO_HASH select CRYPTO_HASH
config CRYPTO_AES_ARM64
tristate "AES core cipher using scalar instructions"
select CRYPTO_AES
config CRYPTO_AES_ARM64_CE config CRYPTO_AES_ARM64_CE
tristate "AES core cipher using ARMv8 Crypto Extensions" tristate "AES core cipher using ARMv8 Crypto Extensions"
depends on ARM64 && KERNEL_MODE_NEON depends on ARM64 && KERNEL_MODE_NEON
...@@ -67,9 +71,17 @@ config CRYPTO_AES_ARM64_NEON_BLK ...@@ -67,9 +71,17 @@ config CRYPTO_AES_ARM64_NEON_BLK
select CRYPTO_AES select CRYPTO_AES
select CRYPTO_SIMD select CRYPTO_SIMD
config CRYPTO_CRC32_ARM64 config CRYPTO_CHACHA20_NEON
tristate "CRC32 and CRC32C using optional ARMv8 instructions" tristate "NEON accelerated ChaCha20 symmetric cipher"
depends on ARM64 depends on KERNEL_MODE_NEON
select CRYPTO_HASH select CRYPTO_BLKCIPHER
select CRYPTO_CHACHA20
config CRYPTO_AES_ARM64_BS
tristate "AES in ECB/CBC/CTR/XTS modes using bit-sliced NEON algorithm"
depends on KERNEL_MODE_NEON
select CRYPTO_BLKCIPHER
select CRYPTO_AES_ARM64_NEON_BLK
select CRYPTO_SIMD
endif endif
...@@ -41,15 +41,20 @@ sha256-arm64-y := sha256-glue.o sha256-core.o ...@@ -41,15 +41,20 @@ sha256-arm64-y := sha256-glue.o sha256-core.o
obj-$(CONFIG_CRYPTO_SHA512_ARM64) += sha512-arm64.o obj-$(CONFIG_CRYPTO_SHA512_ARM64) += sha512-arm64.o
sha512-arm64-y := sha512-glue.o sha512-core.o sha512-arm64-y := sha512-glue.o sha512-core.o
obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o
chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
obj-$(CONFIG_CRYPTO_AES_ARM64) += aes-arm64.o
aes-arm64-y := aes-cipher-core.o aes-cipher-glue.o
obj-$(CONFIG_CRYPTO_AES_ARM64_BS) += aes-neon-bs.o
aes-neon-bs-y := aes-neonbs-core.o aes-neonbs-glue.o
AFLAGS_aes-ce.o := -DINTERLEAVE=4 AFLAGS_aes-ce.o := -DINTERLEAVE=4
AFLAGS_aes-neon.o := -DINTERLEAVE=4 AFLAGS_aes-neon.o := -DINTERLEAVE=4
CFLAGS_aes-glue-ce.o := -DUSE_V8_CRYPTO_EXTENSIONS CFLAGS_aes-glue-ce.o := -DUSE_V8_CRYPTO_EXTENSIONS
obj-$(CONFIG_CRYPTO_CRC32_ARM64) += crc32-arm64.o
CFLAGS_crc32-arm64.o := -mcpu=generic+crc
$(obj)/aes-glue-%.o: $(src)/aes-glue.c FORCE $(obj)/aes-glue-%.o: $(src)/aes-glue.c FORCE
$(call if_changed_rule,cc_o_c) $(call if_changed_rule,cc_o_c)
......
...@@ -258,7 +258,6 @@ static struct aead_alg ccm_aes_alg = { ...@@ -258,7 +258,6 @@ static struct aead_alg ccm_aes_alg = {
.cra_priority = 300, .cra_priority = 300,
.cra_blocksize = 1, .cra_blocksize = 1,
.cra_ctxsize = sizeof(struct crypto_aes_ctx), .cra_ctxsize = sizeof(struct crypto_aes_ctx),
.cra_alignmask = 7,
.cra_module = THIS_MODULE, .cra_module = THIS_MODULE,
}, },
.ivsize = AES_BLOCK_SIZE, .ivsize = AES_BLOCK_SIZE,
......
/*
* Scalar AES core transform
*
* Copyright (C) 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/linkage.h>
#include <asm/assembler.h>
.text
rk .req x0
out .req x1
in .req x2
rounds .req x3
tt .req x4
lt .req x2
.macro __pair, enc, reg0, reg1, in0, in1e, in1d, shift
ubfx \reg0, \in0, #\shift, #8
.if \enc
ubfx \reg1, \in1e, #\shift, #8
.else
ubfx \reg1, \in1d, #\shift, #8
.endif
ldr \reg0, [tt, \reg0, uxtw #2]
ldr \reg1, [tt, \reg1, uxtw #2]
.endm
.macro __hround, out0, out1, in0, in1, in2, in3, t0, t1, enc
ldp \out0, \out1, [rk], #8
__pair \enc, w13, w14, \in0, \in1, \in3, 0
__pair \enc, w15, w16, \in1, \in2, \in0, 8
__pair \enc, w17, w18, \in2, \in3, \in1, 16
__pair \enc, \t0, \t1, \in3, \in0, \in2, 24
eor \out0, \out0, w13
eor \out1, \out1, w14
eor \out0, \out0, w15, ror #24
eor \out1, \out1, w16, ror #24
eor \out0, \out0, w17, ror #16
eor \out1, \out1, w18, ror #16
eor \out0, \out0, \t0, ror #8
eor \out1, \out1, \t1, ror #8
.endm
.macro fround, out0, out1, out2, out3, in0, in1, in2, in3
__hround \out0, \out1, \in0, \in1, \in2, \in3, \out2, \out3, 1
__hround \out2, \out3, \in2, \in3, \in0, \in1, \in1, \in2, 1
.endm
.macro iround, out0, out1, out2, out3, in0, in1, in2, in3
__hround \out0, \out1, \in0, \in3, \in2, \in1, \out2, \out3, 0
__hround \out2, \out3, \in2, \in1, \in0, \in3, \in1, \in0, 0
.endm
.macro do_crypt, round, ttab, ltab
ldp w5, w6, [in]
ldp w7, w8, [in, #8]
ldp w9, w10, [rk], #16
ldp w11, w12, [rk, #-8]
CPU_BE( rev w5, w5 )
CPU_BE( rev w6, w6 )
CPU_BE( rev w7, w7 )
CPU_BE( rev w8, w8 )
eor w5, w5, w9
eor w6, w6, w10
eor w7, w7, w11
eor w8, w8, w12
adr_l tt, \ttab
adr_l lt, \ltab
tbnz rounds, #1, 1f
0: \round w9, w10, w11, w12, w5, w6, w7, w8
\round w5, w6, w7, w8, w9, w10, w11, w12
1: subs rounds, rounds, #4
\round w9, w10, w11, w12, w5, w6, w7, w8
csel tt, tt, lt, hi
\round w5, w6, w7, w8, w9, w10, w11, w12
b.hi 0b
CPU_BE( rev w5, w5 )
CPU_BE( rev w6, w6 )
CPU_BE( rev w7, w7 )
CPU_BE( rev w8, w8 )
stp w5, w6, [out]
stp w7, w8, [out, #8]
ret
.endm
.align 5
ENTRY(__aes_arm64_encrypt)
do_crypt fround, crypto_ft_tab, crypto_fl_tab
ENDPROC(__aes_arm64_encrypt)
.align 5
ENTRY(__aes_arm64_decrypt)
do_crypt iround, crypto_it_tab, crypto_il_tab
ENDPROC(__aes_arm64_decrypt)
/*
* Scalar AES core transform
*
* Copyright (C) 2017 Linaro Ltd <ard.biesheuvel@linaro.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <crypto/aes.h>
#include <linux/crypto.h>
#include <linux/module.h>
asmlinkage void __aes_arm64_encrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
EXPORT_SYMBOL(__aes_arm64_encrypt);
asmlinkage void __aes_arm64_decrypt(u32 *rk, u8 *out, const u8 *in, int rounds);
EXPORT_SYMBOL(__aes_arm64_decrypt);
static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
int rounds = 6 + ctx->key_length / 4;
__aes_arm64_encrypt(ctx->key_enc, out, in, rounds);
}
static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
struct crypto_aes_ctx *ctx = crypto_tfm_ctx(tfm);
int rounds = 6 + ctx->key_length / 4;
__aes_arm64_decrypt(ctx->key_dec, out, in, rounds);
}
static struct crypto_alg aes_alg = {
.cra_name = "aes",
.cra_driver_name = "aes-arm64",
.cra_priority = 200,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = AES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct crypto_aes_ctx),
.cra_module = THIS_MODULE,
.cra_cipher.cia_min_keysize = AES_MIN_KEY_SIZE,
.cra_cipher.cia_max_keysize = AES_MAX_KEY_SIZE,
.cra_cipher.cia_setkey = crypto_aes_set_key,
.cra_cipher.cia_encrypt = aes_encrypt,
.cra_cipher.cia_decrypt = aes_decrypt
};
static int __init aes_init(void)
{
return crypto_register_alg(&aes_alg);
}
static void __exit aes_fini(void)
{
crypto_unregister_alg(&aes_alg);
}
module_init(aes_init);
module_exit(aes_fini);
MODULE_DESCRIPTION("Scalar AES cipher for arm64");
MODULE_AUTHOR("Ard Biesheuvel <ard.biesheuvel@linaro.org>");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS_CRYPTO("aes");
...@@ -122,23 +122,39 @@ ...@@ -122,23 +122,39 @@
#include <linux/linkage.h> #include <linux/linkage.h>
#include <asm/inst.h> #include <asm/inst.h>
.data # constants in mergeable sections, linker can reorder and merge
.section .rodata.cst16.POLY, "aM", @progbits, 16
.align 16 .align 16
POLY: .octa 0xC2000000000000000000000000000001 POLY: .octa 0xC2000000000000000000000000000001
.section .rodata.cst16.POLY2, "aM", @progbits, 16
.align 16
POLY2: .octa 0xC20000000000000000000001C2000000 POLY2: .octa 0xC20000000000000000000001C2000000
TWOONE: .octa 0x00000001000000000000000000000001
# order of these constants should not change. .section .rodata.cst16.TWOONE, "aM", @progbits, 16
# more specifically, ALL_F should follow SHIFT_MASK, and ZERO should follow ALL_F .align 16
TWOONE: .octa 0x00000001000000000000000000000001
.section .rodata.cst16.SHUF_MASK, "aM", @progbits, 16
.align 16
SHUF_MASK: .octa 0x000102030405060708090A0B0C0D0E0F SHUF_MASK: .octa 0x000102030405060708090A0B0C0D0E0F
SHIFT_MASK: .octa 0x0f0e0d0c0b0a09080706050403020100
ALL_F: .octa 0xffffffffffffffffffffffffffffffff .section .rodata.cst16.ONE, "aM", @progbits, 16
ZERO: .octa 0x00000000000000000000000000000000 .align 16
ONE: .octa 0x00000000000000000000000000000001 ONE: .octa 0x00000000000000000000000000000001
.section .rodata.cst16.ONEf, "aM", @progbits, 16
.align 16
ONEf: .octa 0x01000000000000000000000000000000 ONEf: .octa 0x01000000000000000000000000000000
# order of these constants should not change.
# more specifically, ALL_F should follow SHIFT_MASK, and zero should follow ALL_F
.section .rodata, "a", @progbits
.align 16
SHIFT_MASK: .octa 0x0f0e0d0c0b0a09080706050403020100
ALL_F: .octa 0xffffffffffffffffffffffffffffffff
.octa 0x00000000000000000000000000000000
.text .text
......