1. 15 Feb, 2019 3 commits
  2. 08 Feb, 2019 23 commits
    • crypto: qat - Remove unused goto label · 920d7f72
      Herbert Xu authored
      This patch removes an unused label.
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Fixes: f0fcf9ad ("crypto: qat - no need to check return...")
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • MAINTAINERS: crypto: ccree: remove co-maintainer · 532a50fd
      Gilad Ben-Yossef authored
      The best-laid plans of mice and men often go awry.
      Remove Yael C. as co-maintainer as she moved on to other endeavours.
      Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - check for aead_request corruption · a6e5ef9b
      Eric Biggers authored
      Check that algorithms do not change the aead_request structure, as users
      may rely on submitting the request again (e.g. after copying new data
      into the same source buffer) without reinitializing everything.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - check for skcipher_request corruption · fa353c99
      Eric Biggers authored
      Check that algorithms do not change the skcipher_request structure, as
      users may rely on submitting the request again (e.g. after copying new
      data into the same source buffer) without reinitializing everything.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
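      The two testmgr commits above verify that an implementation leaves the
      request object intact across an operation.  A minimal sketch of such a
      check for the skcipher case is below; the helper name and the exact set
      of fields compared are illustrative, not the actual crypto/testmgr.c code.

          #include <linux/errno.h>
          #include <linux/types.h>
          #include <crypto/skcipher.h>

          /* Snapshot the caller-owned request fields, run the operation, then
           * verify the implementation did not modify them. */
          static int run_and_check_skcipher_request(struct skcipher_request *req,
                                                    int (*op)(struct skcipher_request *))
          {
                  struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
                  unsigned int cryptlen = req->cryptlen;
                  u8 *iv = req->iv;
                  struct scatterlist *src = req->src;
                  struct scatterlist *dst = req->dst;
                  int err = op(req);

                  if (err)
                          return err;

                  if (req->cryptlen != cryptlen || req->iv != iv ||
                      req->src != src || req->dst != dst ||
                      crypto_skcipher_reqtfm(req) != tfm)
                          return -EINVAL;  /* the algorithm corrupted the request */

                  return 0;
          }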
    • crypto: testmgr - convert hash testing to use testvec_configs · 4cc2dcf9
      Eric Biggers authored
      Convert alg_test_hash() to use the new test framework, adding a list of
      testvec_configs to test by default.  When the extra self-tests are
      enabled, randomly generated testvec_configs are tested as well.
      
      This improves hash test coverage mainly because now all algorithms have
      a variety of data layouts tested, whereas before each algorithm was
      responsible for declaring its own chunked test cases which were often
      missing or provided poor test coverage.  The new code also tests both
      the MAY_SLEEP and !MAY_SLEEP cases and buffers that cross pages.
      
      This already found bugs in the hash walk code and in the arm32 and arm64
      implementations of crct10dif.
      
      I removed the hash chunked test vectors that were the same as
      non-chunked ones, but left the ones that were unique.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - convert aead testing to use testvec_configs · ed96804f
      Eric Biggers authored
      Convert alg_test_aead() to use the new test framework, using the same
      list of testvec_configs that skcipher testing uses.
      
      This significantly improves AEAD test coverage mainly because previously
      there was only very limited test coverage of the possible data layouts.
      Now the data layouts to test are listed in one place for all algorithms
      and optionally are also randomly generated.  In fact, only one AEAD
      algorithm (AES-GCM) even had a chunked test case before.
      
      This already found bugs in all the AEGIS and MORUS implementations, the
      x86 AES-GCM implementation, and the arm64 AES-CCM implementation.
      
      I removed the AEAD chunked test vectors that were the same as
      non-chunked ones, but left the ones that were unique.
      
      Note: the rewritten test code allocates an aead_request just once per
      algorithm rather than once per encryption/decryption, but some AEAD
      algorithms incorrectly change the tfm pointer in the request.  It's
      nontrivial to fix these, so to move forward I'm temporarily working
      around it by resetting the tfm pointer.  But they'll need to be fixed.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - convert skcipher testing to use testvec_configs · 4e7babba
      Eric Biggers authored
      Convert alg_test_skcipher() to use the new test framework, adding a list
      of testvec_configs to test by default.  When the extra self-tests are
      enabled, randomly generated testvec_configs are tested as well.
      
      This improves skcipher test coverage mainly because now all algorithms
      have a variety of data layouts tested, whereas before each algorithm was
      responsible for declaring its own chunked test cases which were often
      missing or provided poor test coverage.  The new code also tests both
      the MAY_SLEEP and !MAY_SLEEP cases, different IV alignments, and buffers
      that cross pages.
      
      This has already found a bug in the arm64 ctr-aes-neonbs algorithm.
      It would have easily found many past bugs.
      
      I removed the skcipher chunked test vectors that were the same as
      non-chunked ones, but left the ones that were unique.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - implement random testvec_config generation · 25f9dddb
      Eric Biggers authored
      Add functions that generate a random testvec_config, in preparation for
      using it for randomized fuzz tests.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - introduce CONFIG_CRYPTO_MANAGER_EXTRA_TESTS · 5b2706a4
      Eric Biggers authored
      To achieve more comprehensive crypto test coverage, I'd like to add fuzz
      tests that use random data layouts and request flags.
      
      To be most effective these tests should be part of testmgr, so they
      automatically run on every algorithm registered with the crypto API.
      However, they will take much longer to run than the current tests and
      therefore will only really be intended to be run by developers, whereas
      the current tests have a wider audience.
      
      Therefore, add a new kconfig option CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
      that can be set by developers to enable these extra, expensive tests.
      
      Similar to the regular tests, also add a module parameter
      cryptomgr.noextratests to support disabling the tests.
      
      Finally, another module parameter cryptomgr.fuzz_iterations is added to
      control how many iterations the fuzz tests do.  Note: for now setting
      this to 0 will be equivalent to cryptomgr.noextratests=1.  But I opted
      for separate parameters to provide more flexibility to add other types
      of tests under the "extra tests" category in the future.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
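      For illustration, the knobs described above would be declared roughly as
      below; the parameter names follow the commit message, while the default
      value and permissions are assumptions.

          #include <linux/module.h>
          #include <linux/moduleparam.h>

          /* testmgr is linked into cryptomgr, so on the kernel command line these
           * appear as cryptomgr.noextratests=1 and cryptomgr.fuzz_iterations=N. */
          static bool noextratests;
          module_param(noextratests, bool, 0644);
          MODULE_PARM_DESC(noextratests, "disable expensive crypto self-tests");

          static unsigned int fuzz_iterations = 100;
          module_param(fuzz_iterations, uint, 0644);
          MODULE_PARM_DESC(fuzz_iterations, "number of fuzz test iterations");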
    • crypto: testmgr - add testvec_config struct and helper functions · 3f47a03d
      Eric Biggers authored
      Crypto algorithms must produce the same output for the same input
      regardless of data layout, i.e. how the src and dst scatterlists are
      divided into chunks and how each chunk is aligned.  Request flags such
      as CRYPTO_TFM_REQ_MAY_SLEEP must not affect the result either.
      
      However, testing of this currently has many gaps.  For example,
      individual algorithms are responsible for providing their own chunked
      test vectors.  But many don't bother to do this or test only one or two
      cases, providing poor test coverage.  Also, other things such as
      misaligned IVs and CRYPTO_TFM_REQ_MAY_SLEEP are never tested at all.
      
      Test code is also duplicated between the chunked and non-chunked cases,
      making it difficult to make other improvements.
      
      To improve the situation, this patch series basically moves the chunk
      descriptions into the testmgr itself so that they are shared by all
      algorithms.  However, it's done in an extensible way via a new struct
      'testvec_config', which describes not just the scaled chunk lengths but
      also all other aspects of the crypto operation besides the data itself
      such as the buffer alignments, the request flags, whether the operation
      is in-place or not, the IV alignment, and for hash algorithms when to
      do each update() and when to use finup() vs. final() vs. digest().
      
      Then, this patch series makes skcipher, aead, and hash algorithms be
      tested against a list of default testvec_configs, replacing the current
      test code.  This improves overall test coverage, without reducing test
      performance too much.  Note that the test vectors themselves are not
      changed, except for removing the chunk lists.
      
      This series also adds randomized fuzz tests, enabled by a new kconfig
      option intended for developer use only, where skcipher, aead, and hash
      algorithms are tested against many randomly generated testvec_configs.
      This provides much more comprehensive test coverage.
      
      These improved tests have already exposed many bugs.
      
      To start it off, this initial patch adds the testvec_config and various
      helper functions that will be used by the skcipher, aead, and hash test
      code that will be converted to use the new testvec_config framework.
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
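      To make the idea concrete, a simplified sketch of such a configuration
      description follows.  Field names and types here are illustrative; the
      actual struct testvec_config in crypto/testmgr.c differs in detail.

          #include <linux/types.h>

          /* One piece of a data layout: a chunk that takes some proportion of the
           * total length and may be deliberately misaligned. */
          struct sg_division_sketch {
                  unsigned int proportion_of_total;   /* out of 10000 */
                  unsigned int offset;                /* extra misalignment */
          };

          /* Everything about a crypto operation except the data itself. */
          struct testvec_config_sketch {
                  const char *name;
                  bool inplace;                       /* operate with src == dst? */
                  u32 req_flags;                      /* e.g. CRYPTO_TFM_REQ_MAY_SLEEP */
                  struct sg_division_sketch src_divs[8];
                  struct sg_division_sketch dst_divs[8];
                  unsigned int iv_offset;             /* misalign the IV buffer */
                  bool use_finup;                     /* hash: finup() vs final() */
                  bool use_digest;                    /* hash: one-shot digest() */
          };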
    • crypto: arm64/aes-neonbs - fix returning final keystream block · 12455e32
      Eric Biggers authored
      The arm64 NEON bit-sliced implementation of AES-CTR fails the improved
      skcipher tests because it sometimes produces the wrong ciphertext.  The
      bug is that the final keystream block isn't returned from the assembly
      code when the number of non-final blocks is zero.  This can happen if
      the input data ends a few bytes after a page boundary.  In this case the
      last bytes get "encrypted" by XOR'ing them with uninitialized memory.
      
      Fix the assembly code to return the final keystream block when needed.
      
      Fixes: 88a3f582 ("crypto: arm64/aes - don't use IV buffer to return final keystream block")
      Cc: <stable@vger.kernel.org> # v4.11+
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ahash - fix another early termination in hash walk · 77568e53
      Eric Biggers authored
      Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and
      "michael_mic", fail the improved hash tests because they sometimes
      produce the wrong digest.  The bug is that in the case where a
      scatterlist element crosses pages, not all the data is actually hashed
      because the scatterlist walk terminates too early.  This happens because
      the 'nbytes' variable in crypto_hash_walk_done() is assigned the number
      of bytes remaining in the page, then later interpreted as the number of
      bytes remaining in the scatterlist element.  Fix it.
      
      Fixes: 900a081f ("crypto: ahash - Fix early termination in hash walk")
      Cc: stable@vger.kernel.org
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: x86/aesni-gcm - fix crash on empty plaintext · 3af34963
      Eric Biggers authored
      gcmaes_crypt_by_sg() dereferences the NULL pointer returned by
      scatterwalk_ffwd() when encrypting an empty plaintext and the source
      scatterlist ends immediately after the associated data.
      
      Fix it by only fast-forwarding to the src/dst data scatterlists if the
      data length is nonzero.
      
      This bug is reproduced by the "rfc4543(gcm(aes))" test vectors when run
      with the new AEAD test manager.
      
      Fixes: e8455207 ("crypto: aesni - Update aesni-intel_glue to use scatter/gather")
      Cc: <stable@vger.kernel.org> # v4.17+
      Cc: Dave Watson <davejwatson@fb.com>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
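      The guard described above can be pictured as follows; scatterwalk_ffwd()
      is the real helper, but the wrapper function and its parameters are
      illustrative rather than the actual aesni-intel_glue.c code.

          #include <crypto/aead.h>
          #include <crypto/scatterwalk.h>

          /* Only fast-forward past the associated data when there is a payload;
           * with an empty plaintext the source scatterlist can end right after
           * the AAD, in which case scatterwalk_ffwd() can return NULL. */
          static void setup_data_sg(struct aead_request *req, unsigned int data_len,
                                    struct scatterlist src_start[2],
                                    struct scatterlist dst_start[2],
                                    struct scatterlist **src_sg,
                                    struct scatterlist **dst_sg)
          {
                  *src_sg = NULL;
                  *dst_sg = NULL;

                  if (!data_len)
                          return;

                  *src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
                  *dst_sg = (req->src == req->dst) ? *src_sg :
                            scatterwalk_ffwd(dst_start, req->dst, req->assoclen);
          }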
    • crypto: x86/morus - fix handling chunked inputs and MAY_SLEEP · 2060e284
      Eric Biggers authored
      The x86 MORUS implementations all fail the improved AEAD tests because
      they produce the wrong result with some data layouts.  The issue is that
      they assume that if the skcipher_walk API gives 'nbytes' not aligned to
      the walksize (a.k.a. walk.stride), then it is the end of the data.  In
      fact, this can happen before the end.
      
      Also, when the CRYPTO_TFM_REQ_MAY_SLEEP flag is given, they can
      incorrectly sleep in the skcipher_walk_*() functions while preemption
      has been disabled by kernel_fpu_begin().
      
      Fix these bugs.
      
      Fixes: 56e8e57f ("crypto: morus - Add common SIMD glue code for MORUS")
      Cc: <stable@vger.kernel.org> # v4.18+
      Cc: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: x86/aegis - fix handling chunked inputs and MAY_SLEEP · ba6771c0
      Eric Biggers authored
      The x86 AEGIS implementations all fail the improved AEAD tests because
      they produce the wrong result with some data layouts.  The issue is that
      they assume that if the skcipher_walk API gives 'nbytes' not aligned to
      the walksize (a.k.a. walk.stride), then it is the end of the data.  In
      fact, this can happen before the end.
      
      Also, when the CRYPTO_TFM_REQ_MAY_SLEEP flag is given, they can
      incorrectly sleep in the skcipher_walk_*() functions while preemption
      has been disabled by kernel_fpu_begin().
      
      Fix these bugs.
      
      Fixes: 1d373d4e ("crypto: x86 - Add optimized AEGIS implementations")
      Cc: <stable@vger.kernel.org> # v4.18+
      Cc: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: morus - fix handling chunked inputs · d644f1c8
      Eric Biggers authored
      The generic MORUS implementations all fail the improved AEAD tests
      because they produce the wrong result with some data layouts.  The issue
      is that they assume that if the skcipher_walk API gives 'nbytes' not
      aligned to the walksize (a.k.a. walk.stride), then it is the end of the
      data.  In fact, this can happen before the end.  Fix them.
      
      Fixes: 396be41f ("crypto: morus - Add generic MORUS AEAD implementations")
      Cc: <stable@vger.kernel.org> # v4.18+
      Cc: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: aegis - fix handling chunked inputs · 0f533e67
      Eric Biggers authored
      The generic AEGIS implementations all fail the improved AEAD tests
      because they produce the wrong result with some data layouts.  The issue
      is that they assume that if the skcipher_walk API gives 'nbytes' not
      aligned to the walksize (a.k.a. walk.stride), then it is the end of the
      data.  In fact, this can happen before the end.  Fix them.
      
      Fixes: f606a88e ("crypto: aegis - Add generic AEGIS AEAD implementations")
      Cc: <stable@vger.kernel.org> # v4.18+
      Cc: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
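      The four commits above share one fix pattern: only the last chunk of the
      whole request may be shorter than the walk stride, so intermediate chunks
      have to be rounded down instead of being treated as the end of the data.
      A generic sketch of that loop (process_chunk() stands in for the
      algorithm-specific code; this is not the exact diff):

          #include <linux/kernel.h>
          #include <crypto/internal/skcipher.h>

          static int walk_and_crypt(struct skcipher_walk *walk,
                                    void (*process_chunk)(void *dst, const void *src,
                                                          unsigned int nbytes))
          {
                  int err = 0;

                  while (walk->nbytes) {
                          unsigned int nbytes = walk->nbytes;

                          /* Only the final chunk of the request may be partial;
                           * before the end, round down to the stride. */
                          if (nbytes < walk->total)
                                  nbytes = round_down(nbytes, walk->stride);

                          process_chunk(walk->dst.virt.addr, walk->src.virt.addr,
                                        nbytes);

                          err = skcipher_walk_done(walk, walk->nbytes - nbytes);
                  }

                  return err;
          }

      The MAY_SLEEP half of the x86 fixes is separate: the walk has to be asked
      not to sleep while kernel_fpu_begin() is in effect.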
    • crypto: caam - fixed handling of sg list · 42e95d1f
      Pankaj Gupta authored
      When the source sg list contains more than one fragment and the
      destination sg list contains a single fragment, the caam driver
      mishandles the buffers to be sent to caam.
      
      Fixes: f2147b88 ("crypto: caam - Convert GCM to new AEAD interface")
      Cc: <stable@vger.kernel.org> # 4.2+
      Signed-off-by: Pankaj Gupta <pankaj.gupta@nxp.com>
      Signed-off-by: Arun Pathak <arun.pathak@nxp.com>
      Reviewed-by: Horia Geanta <horia.geanta@nxp.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: arm64/crct10dif-ce - cleanup and optimizations · 6227cd12
      Eric Biggers authored
      The x86, arm, and arm64 asm implementations of crct10dif are very
      difficult to understand partly because many of the comments, labels, and
      macros are named incorrectly: the lengths mentioned are usually off by a
      factor of two from the actual code.  Many other things are unnecessarily
      convoluted as well, e.g. there are many more fold constants than
      actually needed and some aren't fully reduced.
      
      This series therefore cleans up all these implementations to be much
      more maintainable.  I also made some small optimizations where I saw
      opportunities, resulting in slightly better performance.
      
      This patch cleans up the arm64 version.
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: arm/crct10dif-ce - cleanup and optimizations · e7b3ed33
      Eric Biggers authored
      The x86, arm, and arm64 asm implementations of crct10dif are very
      difficult to understand partly because many of the comments, labels, and
      macros are named incorrectly: the lengths mentioned are usually off by a
      factor of two from the actual code.  Many other things are unnecessarily
      convoluted as well, e.g. there are many more fold constants than
      actually needed and some aren't fully reduced.
      
      This series therefore cleans up all these implementations to be much
      more maintainable.  I also made some small optimizations where I saw
      opportunities, resulting in slightly better performance.
      
      This patch cleans up the arm version.
      
      (Also moved the constants to .rodata as suggested by Ard Biesheuvel.)
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: x86/crct10dif-pcl - cleanup and optimizations · 0974037f
      Eric Biggers authored
      The x86, arm, and arm64 asm implementations of crct10dif are very
      difficult to understand partly because many of the comments, labels, and
      macros are named incorrectly: the lengths mentioned are usually off by a
      factor of two from the actual code.  Many other things are unnecessarily
      convoluted as well, e.g. there are many more fold constants than
      actually needed and some aren't fully reduced.
      
      This series therefore cleans up all these implementations to be much
      more maintainable.  I also made some small optimizations where I saw
      opportunities, resulting in slightly better performance.
      
      This patch cleans up the x86 version.
      
      As part of this, I removed support for len < 16 from the x86 assembly;
      now the glue code falls back to the generic table-based implementation
      in this case.  Due to the overhead of kernel_fpu_begin(), this actually
      significantly improves performance on these lengths.  (And even if
      kernel_fpu_begin() were free, the generic code is still faster for about
      len < 11.)  This removal also eliminates error-prone special cases and
      makes the x86, arm32, and arm64 ports of the code match more closely.
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: ccp - fix the SEV probe in kexec boot path · f8903b3e
      Singh, Brijesh authored
      A kexec reboot may leave the firmware in INIT or WORKING state.
      Currently, we issue the PLATFORM_INIT command during the probe without
      checking the current state. The PLATFORM_INIT command fails if the
      FW is already in the INIT state. Let's check the current state: if the
      FW is not in the UNINIT state, transition it to UNINIT before
      initializing or upgrading the FW.
      Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
      Cc: Tom Lendacky <thomas.lendacky@amd.com>
      Cc: Gary Hook <gary.hook@amd.com>
      Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - use kmemdup · e3d90e52
      Christopher Diaz Riveros authored
      Fixes coccinelle alerts:
      
      /crypto/testmgr.c:2112:13-20: WARNING opportunity for kmemdup
      /crypto/testmgr.c:2130:13-20: WARNING opportunity for kmemdup
      /crypto/testmgr.c:2152:9-16: WARNING opportunity for kmemdup
      Signed-off-by: Christopher Diaz Riveros <chrisadr@gentoo.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
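      The coccinelle warning flags the usual kmalloc()-plus-memcpy() pattern,
      which collapses into a single kmemdup() call.  A generic before/after
      sketch (function and variable names are illustrative):

          #include <linux/slab.h>
          #include <linux/string.h>

          /* Before: allocate, then copy in a separate step. */
          static void *dup_buf_before(const void *src, size_t len)
          {
                  void *p = kmalloc(len, GFP_KERNEL);

                  if (!p)
                          return NULL;
                  memcpy(p, src, len);
                  return p;
          }

          /* After: kmemdup() does both. */
          static void *dup_buf_after(const void *src, size_t len)
          {
                  return kmemdup(src, len, GFP_KERNEL);
          }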
  3. 01 Feb, 2019 14 commits
    • crypto: arm64/crct10dif - register PMULL variants as separate algos · 8336bdf1
      Ard Biesheuvel authored
      The arm64 CRC-T10DIF implementation either uses 8-bit or 64-bit
      polynomial multiplication instructions, since the latter are
      faster but not mandatory in the architecture.
      
      Since that prevents us from testing both implementations on the
      same system, let's expose both implementations to the crypto API,
      with the priorities reflecting that the P64 version is the
      preferred one if available.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
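      The registration scheme described above can be sketched as follows: the
      8-bit polynomial (NEON) variant is always registered, and the 64-bit
      polynomial (PMULL) variant is registered in addition, with a higher
      priority, when the CPU supports it.  The array layout, priorities and
      feature check below are assumptions, not the actual module code.

          #include <linux/init.h>
          #include <linux/kernel.h>
          #include <crypto/internal/hash.h>

          /* algs[0]: 8-bit polynomial (NEON) variant, lower .cra_priority;
           * algs[1]: 64-bit polynomial (PMULL) variant, higher .cra_priority.
           * The shash descriptors themselves are elided here. */
          static struct shash_alg algs[2];

          static bool have_pmull(void)
          {
                  return false;   /* placeholder for the PMULL hwcap/cpufeature test */
          }

          static int __init crct10dif_mod_init(void)
          {
                  if (have_pmull())
                          return crypto_register_shashes(algs, ARRAY_SIZE(algs));

                  /* No 64-bit polynomial multiply: expose only the NEON variant. */
                  return crypto_register_shash(&algs[0]);
          }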
    • crypto: arm64/crct10dif - remove dead code · 1b2ca568
      Ard Biesheuvel authored
      Remove some code that is no longer called now that we make sure never
      to invoke the SIMD routine with less than 16 bytes of input.
      Reviewed-by: Eric Biggers <ebiggers@kernel.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: arm/crct10dif - remove dead code · c03f3cb4
      Ard Biesheuvel authored
      Remove some code that is no longer called now that we make sure never
      to invoke the SIMD routine with less than 16 bytes of input.
      Reviewed-by: Eric Biggers <ebiggers@kernel.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: arm64/crct10dif - revert to C code for short inputs · d72b9d4a
      Ard Biesheuvel authored
      The SIMD routine ported from x86 used to have a special code path
      for inputs < 16 bytes, which got lost somewhere along the way.
      Instead, the current glue code aligns the input pointer to 16 bytes,
      which is not really necessary on this architecture (although it
      could be beneficial to performance to expose aligned data to the
      NEON routine), but this could result in inputs of less than
      16 bytes being passed in. This not only fails the new extended
      tests that Eric has implemented, it also results in the code
      reading past the end of the input, which could potentially result
      in crashes when dealing with less than 16 bytes of input at the
      end of a page which is followed by an unmapped page.
      
      So update the glue code to only invoke the NEON routine if the
      input is at least 16 bytes.
      Reported-by: Eric Biggers <ebiggers@kernel.org>
      Reviewed-by: Eric Biggers <ebiggers@kernel.org>
      Fixes: 6ef5737f ("crypto: arm64/crct10dif - port x86 SSE implementation to arm64")
      Cc: <stable@vger.kernel.org> # v4.10+
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: arm/crct10dif - revert to C code for short inputs · 62fecf29
      Ard Biesheuvel authored
      The SIMD routine ported from x86 used to have a special code path
      for inputs < 16 bytes, which got lost somewhere along the way.
      Instead, the current glue code aligns the input pointer to permit
      the NEON routine to use special versions of the vld1 instructions
      that assume 16 byte alignment, but this could result in inputs of
      less than 16 bytes being passed in. This not only fails the new
      extended tests that Eric has implemented, it also results in the
      code reading past the end of the input, which could potentially
      result in crashes when dealing with less than 16 bytes of input
      at the end of a page which is followed by an unmapped page.
      
      So update the glue code to only invoke the NEON routine if the
      input is at least 16 bytes.
      Reported-by: Eric Biggers <ebiggers@kernel.org>
      Reviewed-by: Eric Biggers <ebiggers@kernel.org>
      Fixes: 1d481f1c ("crypto: arm/crct10dif - port x86 SSE implementation to ARM")
      Cc: <stable@vger.kernel.org> # v4.10+
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
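      Both fixes above amount to the same guard in the glue code.  A sketch of
      its shape follows; the fallback to crc_t10dif_generic() is taken from the
      commit text, while the NEON routine's name and the wrapper are assumptions.

          #include <linux/crc-t10dif.h>
          #include <linux/linkage.h>
          #include <linux/types.h>
          #include <asm/neon.h>
          #include <asm/simd.h>

          /* assumed name for the NEON/PMULL assembly routine */
          asmlinkage u16 crc_t10dif_pmull(u16 init_crc, const u8 *buf, size_t len);

          static u16 crc_t10dif_neon(u16 crc, const u8 *data, size_t len)
          {
                  /* The SIMD routine needs at least one full 16-byte block; shorter
                   * inputs (and non-SIMD contexts) use the generic C implementation. */
                  if (len < 16 || !may_use_simd())
                          return crc_t10dif_generic(crc, data, len);

                  kernel_neon_begin();
                  crc = crc_t10dif_pmull(crc, data, len);
                  kernel_neon_end();
                  return crc;
          }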
    • crypto: caam - fix DMA mapping of stack memory · c19650d6
      Horia Geantă authored
      Roland reports the following issue and provides a root cause analysis:
      
      "On a v4.19 i.MX6 system with IMA and CONFIG_DMA_API_DEBUG enabled, a
      warning is generated when accessing files on a filesystem for which IMA
      measurement is enabled:
      
          ------------[ cut here ]------------
          WARNING: CPU: 0 PID: 1 at kernel/dma/debug.c:1181 check_for_stack.part.9+0xd0/0x120
          caam_jr 2101000.jr0: DMA-API: device driver maps memory from stack [addr=b668049e]
          Modules linked in:
          CPU: 0 PID: 1 Comm: switch_root Not tainted 4.19.0-20181214-1 #2
          Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
          Backtrace:
          [<c010efb8>] (dump_backtrace) from [<c010f2d0>] (show_stack+0x20/0x24)
          [<c010f2b0>] (show_stack) from [<c08b04f4>] (dump_stack+0xa0/0xcc)
          [<c08b0454>] (dump_stack) from [<c012b610>] (__warn+0xf0/0x108)
          [<c012b520>] (__warn) from [<c012b680>] (warn_slowpath_fmt+0x58/0x74)
          [<c012b62c>] (warn_slowpath_fmt) from [<c0199acc>] (check_for_stack.part.9+0xd0/0x120)
          [<c01999fc>] (check_for_stack.part.9) from [<c019a040>] (debug_dma_map_page+0x144/0x174)
          [<c0199efc>] (debug_dma_map_page) from [<c065f7f4>] (ahash_final_ctx+0x5b4/0xcf0)
          [<c065f240>] (ahash_final_ctx) from [<c065b3c4>] (ahash_final+0x1c/0x20)
          [<c065b3a8>] (ahash_final) from [<c03fe278>] (crypto_ahash_op+0x38/0x80)
          [<c03fe240>] (crypto_ahash_op) from [<c03fe2e0>] (crypto_ahash_final+0x20/0x24)
          [<c03fe2c0>] (crypto_ahash_final) from [<c03f19a8>] (ima_calc_file_hash+0x29c/0xa40)
          [<c03f170c>] (ima_calc_file_hash) from [<c03f2b24>] (ima_collect_measurement+0x1dc/0x240)
          [<c03f2948>] (ima_collect_measurement) from [<c03f0a60>] (process_measurement+0x4c4/0x6b8)
          [<c03f059c>] (process_measurement) from [<c03f0cdc>] (ima_file_check+0x88/0xa4)
          [<c03f0c54>] (ima_file_check) from [<c02d8adc>] (path_openat+0x5d8/0x1364)
          [<c02d8504>] (path_openat) from [<c02dad24>] (do_filp_open+0x84/0xf0)
          [<c02daca0>] (do_filp_open) from [<c02cf50c>] (do_open_execat+0x84/0x1b0)
          [<c02cf488>] (do_open_execat) from [<c02d1058>] (__do_execve_file+0x43c/0x890)
          [<c02d0c1c>] (__do_execve_file) from [<c02d1770>] (sys_execve+0x44/0x4c)
          [<c02d172c>] (sys_execve) from [<c0101000>] (ret_fast_syscall+0x0/0x28)
          ---[ end trace 3455789a10e3aefd ]---
      
      The cause is that the struct ahash_request *req is created as a
      stack-local variable up in the stack (presumably somewhere in the IMA
      implementation), then passed down into the CAAM driver, which tries to
      dma_single_map the req->result (indirectly via map_seq_out_ptr_result)
      in order to make that buffer available for the CAAM to store the result
      of the following hash operation.
      
      The calling code doesn't know how req will be used by the CAAM driver,
      and there could be other such occurrences where stack memory is passed
      down to the CAAM driver. Therefore we should rather fix this issue in
      the CAAM driver where the requirements are known."
      
      Fix this problem by:
      - instructing the crypto engine to write the final hash in state->caam_ctx
      - subsequently memcpy-ing the final hash into req->result
      
      Cc: <stable@vger.kernel.org> # v4.19+
      Reported-by: Roland Hieber <rhi@pengutronix.de>
      Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
      Tested-by: Roland Hieber <rhi@pengutronix.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
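      In outline, the approach described above looks like the sketch below: the
      CAAM writes the digest into a driver-owned, DMA-mapped context buffer, and
      the completion path copies it into the caller's (possibly stack-allocated)
      req->result.  The state layout is a simplification, not the actual diff.

          #include <linux/string.h>
          #include <crypto/hash.h>

          /* Simplified stand-in for the driver's per-request state; the real
           * caam_hash_state holds considerably more than this. */
          struct caam_hash_state_sketch {
                  u8 caam_ctx[64];        /* DMA-able buffer the CAAM writes into */
          };

          static void ahash_done_copy_result(struct ahash_request *req)
          {
                  struct caam_hash_state_sketch *state = ahash_request_ctx(req);
                  unsigned int digestsize =
                          crypto_ahash_digestsize(crypto_ahash_reqtfm(req));

                  /* req->result may live on the caller's stack and must never be
                   * DMA-mapped directly; copy the digest out of the DMA buffer. */
                  memcpy(req->result, state->caam_ctx, digestsize);
          }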
    • crypto: arm64/ghash - register PMULL variants as separate algos · 5a22b198
      Ard Biesheuvel authored
      The arm64 GHASH implementation either uses 8-bit or 64-bit
      polynomial multiplication instructions, since the latter are
      faster but not mandatory in the architecture.
      
      Since that prevents us from testing both implementations on the
      same system, let's expose both implementations to the crypto API,
      with the priorities reflecting that the P64 version is the
      preferred one if available.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - mark crc32 checksum as FIPS allowed · a8a34416
      Milan Broz authored
      The CRC32 is not a cryptographic hash algorithm,
      so the FIPS restrictions should not apply to it.
      (The CRC32C variant is already allowed.)
      
      This CRC32 variant is used in the dm-crypt legacy TrueCrypt
      IV implementation (tcw); this was detected by a cryptsetup test suite
      failure in FIPS mode.
      Signed-off-by: Milan Broz <gmazyland@gmail.com>
      Reviewed-by: Stephan Mueller <smueller@chronox.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
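      The change itself is a one-line flag in testmgr's algorithm table; a
      simplified picture of the entry (surrounding entries elided, details may
      differ from the actual crypto/testmgr.c table):

          /* crypto/testmgr.c, alg_test_descs[] (simplified sketch) */
          {
                  .alg = "crc32",
                  .test = alg_test_hash,
                  .fips_allowed = 1,  /* a checksum, not a cryptographic hash */
                  .suite = {
                          .hash = __VECS(crc32_tv_template)
                  }
          },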
    • crypto: bcm - remove -I. header search path and unused macro define · 87fec010
      Masahiro Yamada authored
      The header search path -I. in kernel Makefiles is very suspicious;
      it allows the compiler to search for headers in the top of $(srctree),
      where obviously no header file exists.
      
      'git grep BCMDRIVER' has no hits, so this macro is not referenced.
      
      I was able to build this driver without the extra compiler options.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: prefix header search paths with $(srctree)/ · 320ca3e5
      Masahiro Yamada authored
      Currently, the Kbuild core manipulates header search paths in a crazy
      way [1].
      
      To fix this mess, I want all Makefiles to add explicit $(srctree)/ to
      the search paths in the srctree. Some Makefiles are already written in
      that way, but not all. The goal of this work is to make the notation
      consistent, and finally get rid of the gross hacks.
      
      Having whitespace after -I does not matter since commit 48f6e3cf
      ("kbuild: do not drop -I without parameter").
      
      [1]: https://patchwork.kernel.org/patch/9632347/
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: arm64/aes-ccm - don't use an atomic walk needlessly · f9352900
      Ard Biesheuvel authored
      When the AES-CCM code was first added, the NEON registers were saved
      and restored eagerly, and so the code avoided doing so, and executed
      the scatterwalk in atomic context inside the kernel_neon_begin/end
      section.
      
      This has been changed in the meantime, so switch to non-atomic
      scatterwalks.
      
      Fixes: bd2ad885 ("crypto: arm64/aes-ce-ccm - move kernel mode neon ...")
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: arm64/aes-ccm - fix bugs in non-NEON fallback routine · 969e2f59
      Ard Biesheuvel authored
      Commit 5092fcf3 ("crypto: arm64/aes-ce-ccm: add non-SIMD generic
      fallback") introduced C fallback code to replace the NEON routines
      when invoked from a context where the NEON is not available (i.e.,
      from the context of a softirq taken while the NEON is already being
      used in kernel process context).
      
      Fix two logical flaws in the MAC calculation of the associated data.
      Reported-by: Eric Biggers <ebiggers@kernel.org>
      Fixes: 5092fcf3 ("crypto: arm64/aes-ce-ccm: add non-SIMD generic fallback")
      Cc: stable@vger.kernel.org
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: arm64/aes-ccm - fix logical bug in AAD MAC handling · eaf46edf
      Ard Biesheuvel authored
      The NEON MAC calculation routine fails to correctly handle the case
      where there is some data in the buffer, and the input fills it up
      exactly. In this case, we enter the loop at the end with w8 == 0,
      while a negative value is assumed, and so the loop carries on until
      the increment of the 32-bit counter wraps around, which is quite
      obviously wrong.
      
      So omit the loop altogether in this case, and exit right away.
      Reported-by: Eric Biggers <ebiggers@kernel.org>
      Fixes: a3fd8210 ("arm64/crypto: AES in CCM mode using ARMv8 Crypto ...")
      Cc: stable@vger.kernel.org
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
    • crypto: testmgr - skip crc32c context test for ahash algorithms · eb5e6730
      Eric Biggers authored
      Instantiating "cryptd(crc32c)" causes a crypto self-test failure because
      the crypto_alloc_shash() in alg_test_crc32c() fails.  This is because
      cryptd(crc32c) is an ahash algorithm, not a shash algorithm; so it can
      only be accessed through the ahash API, unlike shash algorithms which
      can be accessed through both the ahash and shash APIs.
      
      As the test is testing the shash descriptor format which is only
      applicable to shash algorithms, skip it for ahash algorithms.
      
      (Note that it's still important to fix crypto self-test failures even
       for weird algorithm instantiations like cryptd(crc32c) that no one
       would really use; in fips_enabled mode unprivileged users can use them
       to panic the kernel, and also they prevent treating a crypto self-test
       failure as a bug when fuzzing the kernel.)
      
      Fixes: 8e3ee85e ("crypto: crc32c - Test descriptor context format")
      Cc: stable@vger.kernel.org
      Signed-off-by: Eric Biggers <ebiggers@google.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
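      A sketch of the skip described above; it mirrors the shape of the check in
      alg_test_crc32c() but the wrapper function and error handling here are
      illustrative rather than the exact diff.

          #include <linux/err.h>
          #include <linux/types.h>
          #include <crypto/hash.h>

          /* Run the shash-only part of the crc32c test, skipping it when the
           * implementation can only be instantiated through the ahash API. */
          static int crc32c_shash_part(const char *driver, u32 type, u32 mask)
          {
                  struct crypto_shash *tfm;

                  tfm = crypto_alloc_shash(driver, type, mask);
                  if (IS_ERR(tfm)) {
                          if (PTR_ERR(tfm) == -ENOENT) {
                                  /* ahash-only, e.g. cryptd(crc32c): nothing to test */
                                  return 0;
                          }
                          return PTR_ERR(tfm);
                  }

                  /* ... the existing descriptor-format check would go here ... */

                  crypto_free_shash(tfm);
                  return 0;
          }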