- 22 May, 2019 40 commits
-
Raul E Rangel authored
commit 43d8dabb upstream. The tag set is allocated in mmc_init_queue but never freed. This results in a memory leak. This change makes sure we free the tag set when the queue is also freed. Signed-off-by: Raul E Rangel <rrangel@chromium.org> Reviewed-by: Jens Axboe <axboe@kernel.dk> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Fixes: 81196976 ("mmc: block: Add blk-mq support") Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
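Conceptually, the fix pairs the allocation with a release in the teardown path. A minimal sketch of that shape, with illustrative names (the real mmc block queue code differs in detail):

```c
#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/*
 * Sketch only: the tag set allocated alongside the queue in mmc_init_queue()
 * must be freed again when the queue is torn down, otherwise the
 * blk_mq_alloc_tag_set() allocation is leaked.
 */
static void mmc_queue_teardown_sketch(struct request_queue *q,
				      struct blk_mq_tag_set *set)
{
	blk_cleanup_queue(q);		/* release the request queue */
	blk_mq_free_tag_set(set);	/* previously missing: free the tag set */
}
```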
-
Sowjanya Komatineni authored
commit 92cd1667 upstream. ddr_signaling is set to true for DDR50 and DDR52 modes but is not set back to false for other modes. This programs an incorrect host clock when the mode changes from DDR52/DDR50 to another SDR or HS mode, as in the case of mmc_retune, where it switches from HS400 to HS DDR, then from HS DDR to HS mode, and then to HS200. This patch fixes ddr_signaling so it is set properly for non-DDR modes. Tested-by: Jon Hunter <jonathanh@nvidia.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Sowjanya Komatineni <skomatineni@nvidia.com> Cc: stable@vger.kernel.org # v4.20 + Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
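As a hedged sketch of the idea (callback shape and struct names approximate the sdhci-tegra driver, they are not quoted from the patch), the flag has to be cleared on every timing change and re-set only for the DDR timings:

```c
#include <linux/mmc/host.h>
#include "sdhci-pltfm.h"	/* driver-local header, provides sdhci_pltfm_priv() */

/* Sketch: reset ddr_signaling unconditionally, then set it only for DDR
 * timings, so a switch away from DDR52/DDR50 programs the right host clock. */
static void sdhci_tegra_set_uhs_signaling_sketch(struct sdhci_host *host,
						 unsigned int timing)
{
	struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
	struct sdhci_tegra *tegra_host = sdhci_pltfm_priv(pltfm_host);

	tegra_host->ddr_signaling = false;	/* previously left stale */

	if (timing == MMC_TIMING_UHS_DDR50 ||
	    timing == MMC_TIMING_MMC_DDR52)
		tegra_host->ddr_signaling = true;

	sdhci_set_uhs_signaling(host, timing);
}
```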
-
Christoph Muellner authored
commit 28f22fb7 upstream. Add disable-cqe-dcmd as an optional property for MMC hosts. This property allows the direct command (DCMD) feature of the command queue engine to be disabled or left unenabled. Signed-off-by: Christoph Muellner <christoph.muellner@theobroma-systems.com> Signed-off-by: Philipp Tomsich <philipp.tomsich@theobroma-systems.com> Fixes: 84362d79 ("mmc: sdhci-of-arasan: Add CQHCI support for arasan,sdhci-5.1") Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Aneesh Kumar K.V authored
commit 67476656 upstream. This moves the dependency to DEV_DAX_PMEM_COMPAT such that compat support is only allowed when DEV_DAX_PMEM is built as a module. This makes it easy to test the new code in an emulation setup where we often build things without module support. Cc: <stable@vger.kernel.org> Fixes: 730926c3 ("device-dax: Add /sys/class/dax backwards compatibility") Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Biggers authored
commit 4a8108b7 upstream. If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free. xts-aes-neonbs doesn't set an alignmask, so currently it isn't affected by this despite unconditionally accessing walk.iv. However this is more subtle than desired, and unconditionally accessing walk.iv has caused a real problem in other algorithms. Thus, update xts-aes-neonbs to start checking the return value of skcipher_walk_virt(). Fixes: 1abee99e ("crypto: arm64/aes - reimplement bit-sliced ARM/NEON implementation for arm64") Cc: <stable@vger.kernel.org> # v4.11+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
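The same walk.iv pattern recurs in several commits below (the arm32 xts-aes-neonbs, LRW and salsa20 fixes). A hedged sketch of the safe calling convention, not the actual driver code:

```c
#include <crypto/internal/skcipher.h>

/* Sketch: check the return value of skcipher_walk_virt() before touching
 * anything in 'walk' -- in particular walk.iv, which may point at freed
 * memory if the walk setup failed. */
static int xts_encrypt_sketch(struct skcipher_request *req)
{
	struct skcipher_walk walk;
	int err;

	err = skcipher_walk_virt(&walk, req, false);
	if (err)
		return err;	/* do NOT read walk.iv on failure */

	/* ... tweak derivation using walk.iv, then the per-chunk processing
	 *     and skcipher_walk_done() calls go here ... */

	return err;
}
```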
-
Eric Biggers authored
commit 767f015e upstream. If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free. arm32 xts-aes-neonbs doesn't set an alignmask, so currently it isn't affected by this despite unconditionally accessing walk.iv. However this is more subtle than desired, and it was actually broken prior to the alignmask being removed by commit cc477bf6 ("crypto: arm/aes - replace bit-sliced OpenSSL NEON code"). Thus, update xts-aes-neonbs to start checking the return value of skcipher_walk_virt(). Fixes: e4e7f10b ("ARM: add support for bit sliced AES using NEON instructions") Cc: <stable@vger.kernel.org> # v3.13+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Horia Geantă authored
commit 418cd20e upstream. Commit 30724445 ("crypto: caam - generate hash keys in-place") fixed ahash implementation in caam/jr driver such that user-provided key buffer is not DMA mapped, since it's not guaranteed to be DMAable. Apply a similar fix for caam/qi2 driver. Cc: <stable@vger.kernel.org> # v4.20+ Fixes: 3f16f6c9 ("crypto: caam/qi2 - add support for ahash algorithms") Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Horia Geantă authored
commit 5965dc74 upstream. Commits c19650d6 ("crypto: caam - fix DMA mapping of stack memory") and 65055e21 ("crypto: caam - fix hash context DMA unmap size") fixed the ahash implementation in caam/jr driver such that req->result is not DMA-mapped (since it's not guaranteed to be DMA-able). Apply a similar fix for ahash implementation in caam/qi2 driver. Cc: <stable@vger.kernel.org> # v4.20+ Fixes: 3f16f6c9 ("crypto: caam/qi2 - add support for ahash algorithms") Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Horia Geantă authored
commit 07586d3d upstream. Commit 04e6d25c ("crypto: caam - fix zero-length buffer DMA mapping") fixed an issue in caam/jr driver where ahash implementation was DMA mapping a zero-length buffer. Current commit applies a similar fix for caam/qi2 driver. Cc: <stable@vger.kernel.org> # v4.20+ Fixes: 3f16f6c9 ("crypto: caam/qi2 - add support for ahash algorithms") Signed-off-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Zhang Zhijie authored
commit f0cfd57b upstream. For CBC, the kernel crypto API expects the request to output the next IV to the IV buffer. So the last block of ciphertext should be copied into the assigned IV buffer. Reported-by: Eric Biggers <ebiggers@google.com> Fixes: 433cd2c6 ("crypto: rockchip - add crypto driver for rk3288") Cc: <stable@vger.kernel.org> # v4.5+ Signed-off-by: Zhang Zhijie <zhangzj@rock-chips.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
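Conceptually (a sketch that assumes a linear ciphertext buffer purely for illustration; the driver itself works on scatterlists), the driver has to hand the last ciphertext block back as the chained IV:

```c
#include <crypto/aes.h>
#include <linux/string.h>
#include <linux/types.h>

/* Sketch: for CBC encryption the next IV is the last ciphertext block, so it
 * must be copied into the request's IV buffer once the operation completes. */
static void cbc_copy_next_iv_sketch(u8 *iv_out, const u8 *ciphertext, size_t len)
{
	if (len >= AES_BLOCK_SIZE)
		memcpy(iv_out, ciphertext + len - AES_BLOCK_SIZE,
		       AES_BLOCK_SIZE);
}
```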
-
Eric Biggers authored
commit f699594d upstream. GCM instances can be created by either the "gcm" template, which only allows choosing the block cipher, e.g. "gcm(aes)"; or by "gcm_base", which allows choosing the ctr and ghash implementations, e.g. "gcm_base(ctr(aes-generic),ghash-generic)". However, a "gcm_base" instance prevents a "gcm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "gcm". This can be used as a denial of service. Moreover, "gcm_base" instances are never tested by the crypto self-tests, even if there are compatible "gcm" tests. The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "gcm_base" instances set the same cra_name as "gcm" instances, e.g. "gcm(aes)" instead of "gcm_base(ctr(aes-generic),ghash-generic)". This requires extracting the block cipher name from the name of the ctr algorithm. It also requires starting to verify that the algorithms are really ctr and ghash, not something else entirely. But it would be bizarre if anyone were actually using non-gcm-compatible algorithms with gcm_base, so this shouldn't break anyone in practice. Fixes: d00aa19b ("[CRYPTO] gcm: Allow block cipher parameter") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
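A hedged sketch of the naming idea (not the template code itself): derive the instance cra_name from the block cipher embedded in the ctr algorithm's cra_name, so that e.g. "gcm_base(ctr(aes-generic),ghash-generic)" registers under "gcm(aes)".

```c
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/string.h>

/* Sketch: build "gcm(<cipher>)" from a ctr algorithm whose cra_name is
 * "ctr(<cipher>)". The ghash sanity check and error paths are elided. */
static int gcm_set_cra_name_sketch(char *cra_name, size_t size,
				   const char *ctr_cra_name)
{
	const char *cipher;
	size_t len;

	if (strncmp(ctr_cra_name, "ctr(", 4) != 0)
		return -EINVAL;			/* not really ctr */

	cipher = ctr_cra_name + 4;		/* e.g. "aes)" */
	len = strlen(cipher);
	if (len < 2 || cipher[len - 1] != ')')
		return -EINVAL;

	/* "ctr(aes)" -> "gcm(aes)" */
	if (snprintf(cra_name, size, "gcm(%.*s)",
		     (int)(len - 1), cipher) >= size)
		return -ENAMETOOLONG;

	return 0;
}
```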
-
Eric Biggers authored
commit 580e2951 upstream. The arm64 gcm-aes-ce algorithm is failing the extra crypto self-tests following my patches to test the !may_use_simd() code paths, which previously were untested. The problem is that in the !may_use_simd() case, an odd number of AES blocks can be processed within each step of the skcipher_walk. However, the skcipher_walk is being done with a "stride" of 2 blocks and is advanced by an even number of blocks after each step. This causes the encryption to produce the wrong ciphertext and authentication tag, and causes the decryption to incorrectly fail. Fix it by only processing an even number of blocks per step. Fixes: c2b24c36 ("crypto: arm64/aes-gcm-ce - fix scatterwalk API violation") Fixes: 71e52c27 ("crypto: arm64/aes-ce-gcm - operate on two input blocks at a time") Cc: <stable@vger.kernel.org> # v4.19+ Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Biggers authored
commit dec3d0b1 upstream. The ->digest() method of crct10dif-pclmul reads the current CRC value from the shash_desc context. But this value is uninitialized, causing crypto_shash_digest() to compute the wrong result. Fix it. Probably this wasn't noticed before because lib/crc-t10dif.c only uses crypto_shash_update(), not crypto_shash_digest(). Likewise, crypto_shash_digest() is not yet tested by the crypto self-tests because those only test the ahash API which only uses shash init/update/final. Fixes: 0b95a7f8 ("crypto: crct10dif - Glue code to cast accelerated CRCT10DIF assembly as a crypto transform") Cc: <stable@vger.kernel.org> # v3.11+ Cc: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Biggers authored
commit 307508d1 upstream. The ->digest() method of crct10dif-generic reads the current CRC value from the shash_desc context. But this value is uninitialized, causing crypto_shash_digest() to compute the wrong result. Fix it. Probably this wasn't noticed before because lib/crc-t10dif.c only uses crypto_shash_update(), not crypto_shash_digest(). Likewise, crypto_shash_digest() is not yet tested by the crypto self-tests because those only test the ahash API which only uses shash init/update/final. This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. Fixes: 2d31e518 ("crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework") Cc: <stable@vger.kernel.org> # v3.11+ Cc: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
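A hedged sketch of what a correct one-shot ->digest looks like here: start from the CRC's defined initial value instead of whatever happens to be in the uninitialized descriptor context.

```c
#include <crypto/internal/hash.h>
#include <linux/crc-t10dif.h>
#include <linux/types.h>

/* Sketch only: a ->digest implementation must not read the per-request
 * context (which crypto_shash_digest() never initializes); it should start
 * from the algorithm's initial CRC value (0 for CRC-T10DIF) and finish in
 * one pass. */
static int chksum_digest_sketch(struct shash_desc *desc, const u8 *data,
				unsigned int length, u8 *out)
{
	*(__u16 *)out = crc_t10dif_generic(0, data, length);
	return 0;
}
```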
-
Eric Biggers authored
commit dcaca01a upstream. skcipher_walk_done() assumes it's a bug if, after the "slow" path is executed where the next chunk of data is processed via a bounce buffer, the algorithm says it didn't process all bytes. Thus it WARNs on this. However, this can happen legitimately when the message needs to be evenly divisible into "blocks" but isn't, and the algorithm has a 'walksize' greater than the block size. For example, ecb-aes-neonbs sets 'walksize' to 128 bytes and only supports messages evenly divisible into 16-byte blocks. If, say, 17 message bytes remain but they straddle scatterlist elements, the skcipher_walk code will take the "slow" path and pass the algorithm all 17 bytes in the bounce buffer. But the algorithm will only be able to process 16 bytes, triggering the WARN. Fix this by just removing the WARN_ON(). Returning -EINVAL, as the code already does, is the right behavior. This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. Fixes: b286d8b1 ("crypto: skcipher - Add skcipher walk interface") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Daniel Axtens authored
commit dcf7b482 upstream. The original assembly imported from OpenSSL has two copy-paste errors in handling CTR mode. When dealing with a 2 or 3 block tail, the code branches to the CBC decryption exit path, rather than to the CTR exit path. This leads to corruption of the IV, which leads to subsequent blocks being corrupted. This can be detected with the libkcapi test suite, which is available at https://github.com/smuellerDD/libkcapi Reported-by: Ondrej Mosnáček <omosnacek@gmail.com> Fixes: 5c380d62 ("crypto: vmx - Add support for VMS instructions by ASM") Cc: stable@vger.kernel.org Signed-off-by: Daniel Axtens <dja@axtens.net> Tested-by: Michael Ellerman <mpe@ellerman.id.au> Tested-by: Ondrej Mosnacek <omosnacek@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Singh, Brijesh authored
commit f5a2aeb8 upstream. Currently, we free the psp_master if PLATFORM_INIT fails during the SEV FW probe. If psp_master is freed, then the driver cannot invoke the PSP FW. As per the SEV FW spec, there are several commands (PLATFORM_RESET, PLATFORM_STATUS, GET_ID, etc.) which can be executed in the UNINIT state. We should not free the psp_master when PLATFORM_INIT fails. Fixes: 200664d5 ("crypto: ccp: Add SEV support") Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: Gary Hook <gary.hook@amd.com> Cc: stable@vger.kernel.org # 4.19.y Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Biggers authored
commit 6a1faa4a upstream. CCM instances can be created by either the "ccm" template, which only allows choosing the block cipher, e.g. "ccm(aes)"; or by "ccm_base", which allows choosing the ctr and cbcmac implementations, e.g. "ccm_base(ctr(aes-generic),cbcmac(aes-generic))". However, a "ccm_base" instance prevents a "ccm" instance from being registered using the same implementations. Nor will the instance be found by lookups of "ccm". This can be used as a denial of service. Moreover, "ccm_base" instances are never tested by the crypto self-tests, even if there are compatible "ccm" tests. The root cause of these problems is that instances of the two templates use different cra_names. Therefore, fix these problems by making "ccm_base" instances set the same cra_name as "ccm" instances, e.g. "ccm(aes)" instead of "ccm_base(ctr(aes-generic),cbcmac(aes-generic))". This requires extracting the block cipher name from the name of the ctr and cbcmac algorithms. It also requires starting to verify that the algorithms are really ctr and cbcmac using the same block cipher, not something else entirely. But it would be bizarre if anyone were actually using non-ccm-compatible algorithms with ccm_base, so this shouldn't break anyone in practice. Fixes: 4a49b499 ("[CRYPTO] ccm: Added CCM mode") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Biggers authored
commit 5e27f38f upstream. If the rfc7539 template is instantiated with specific implementations, e.g. "rfc7539(chacha20-generic,poly1305-generic)" rather than "rfc7539(chacha20,poly1305)", then the implementation names end up included in the instance's cra_name. This is incorrect because it then prevents all users from allocating "rfc7539(chacha20,poly1305)", if the highest priority implementations of chacha20 and poly1305 were selected. Also, the self-tests aren't run on an instance allocated in this way. Fix it by setting the instance's cra_name from the underlying algorithms' actual cra_names, rather than from the requested names. This matches what other templates do. Fixes: 71ebc4d1 ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539") Cc: <stable@vger.kernel.org> # v4.2+ Cc: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Martin Willi <martin@strongswan.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Biggers authored
commit 7aceaaef upstream. The arm64 implementations of ChaCha and XChaCha are failing the extra crypto self-tests following my patches to test the !may_use_simd() code paths, which previously were untested. The problem is as follows: When !may_use_simd(), the arm64 NEON implementations fall back to the generic implementation, which uses the skcipher_walk API to iterate through the src/dst scatterlists. Due to how the skcipher_walk API works, walk.stride is set from the skcipher_alg actually being used, which in this case is the arm64 NEON algorithm. Thus walk.stride is 5*CHACHA_BLOCK_SIZE, not CHACHA_BLOCK_SIZE. This unnecessarily large stride shouldn't cause an actual problem. However, the generic implementation computes round_down(nbytes, walk.stride). round_down() assumes the round amount is a power of 2, which 5*CHACHA_BLOCK_SIZE is not, so it gives the wrong result. This causes the following case in skcipher_walk_done() to be hit, causing a WARN() and failing the encryption operation: if (WARN_ON(err)) { /* unexpected case; didn't process all bytes */ err = -EINVAL; goto finish; } Fix it by rounding down to CHACHA_BLOCK_SIZE instead of walk.stride. (Or we could replace round_down() with rounddown(), but that would add a slow division operation every time, which I think we should avoid.) Fixes: 2fe55987 ("crypto: arm64/chacha - use combined SIMD/ALU routine for more speed") Cc: <stable@vger.kernel.org> # v5.0+ Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
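To make the power-of-two pitfall concrete (CHACHA_BLOCK_SIZE is 64, so the NEON stride is 320; the macro bodies below mirror the kernel definitions for demonstration):

```c
#include <stdio.h>

/* round_down() is a mask operation and silently assumes a power-of-two
 * alignment; rounddown() does a real modulo. With the 320-byte (5 * 64)
 * stride the mask form produces a bogus value. */
#define round_down(x, y)  ((x) & ~((y) - 1))	/* power-of-two only */
#define rounddown(x, y)   ((x) - ((x) % (y)))	/* any divisor */

int main(void)
{
	unsigned int nbytes = 448, stride = 5 * 64, block = 64;

	printf("round_down(%u, %u) = %u\n", nbytes, stride,
	       round_down(nbytes, stride));	/* 192 -- wrong */
	printf("rounddown(%u, %u)  = %u\n", nbytes, stride,
	       rounddown(nbytes, stride));	/* 320 */
	printf("round_down(%u, %u)  = %u\n", nbytes, block,
	       round_down(nbytes, block));	/* 448 -- what the fix rounds to */
	return 0;
}
```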
-
Eric Biggers authored
commit aec286cd upstream. If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free. Fix this in the LRW template by checking the return value of skcipher_walk_virt(). This bug was detected by my patches that improve testmgr to fuzz algorithms against their generic implementation. When the extra self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free splat occurred during lrw(aes) testing. Fixes: c778f96b ("crypto: lrw - Optimize tweak computation") Cc: <stable@vger.kernel.org> # v4.20+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Eric Biggers authored
commit edaf28e9 upstream. If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free. salsa20-generic doesn't set an alignmask, so currently it isn't affected by this despite unconditionally accessing walk.iv. However this is more subtle than desired, and it was actually broken prior to the alignmask being removed by commit b62b3db7 ("crypto: salsa20-generic - cleanup and convert to skcipher API"). Since salsa20-generic does not update the IV and does not need any IV alignment, update it to use req->iv instead of walk.iv. Fixes: 2407d608 ("[CRYPTO] salsa20: Salsa20 stream cipher") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christian Lamparter authored
commit 7e92e171 upstream. Currently, crypto4xx CFB and OFB AES ciphers are failing testmgr's test vectors. |cfb-aes-ppc4xx encryption overran dst buffer on test vector 3, cfg="in-place" |ofb-aes-ppc4xx encryption overran dst buffer on test vector 1, cfg="in-place" This is because of a very subtle "bug" in the hardware that gets indirectly mentioned in 18.1.3.5 Encryption/Decryption of the hardware spec: the OFB and CFB modes for AES are listed there as operation modes for >>> "Block ciphers" <<<. Which kind of makes sense, but we would like them to be considered as stream ciphers just like the CTR mode. To work around this issue and stop the hardware from causing "overran dst buffer" on ciphertexts that are not a multiple of 16 (AES_BLOCK_SIZE), we force the driver to use the scatter buffers as the go-between. As a bonus this patch also kills redundant pd_uinfo->num_gd and pd_uinfo->num_sd setters since the value has already been set before. Cc: stable@vger.kernel.org Fixes: f2a13e7c ("crypto: crypto4xx - enable AES RFC3686, ECB, CFB and OFB offloads") Signed-off-by: Christian Lamparter <chunkeey@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christian Lamparter authored
commit 25baaf8e upstream. Commit 8efd972e ("crypto: testmgr - support checking skcipher output IV") caused the crypto4xx driver to produce the following error: | ctr-aes-ppc4xx encryption test failed (wrong output IV) | on test vector 0, cfg="in-place" This patch fixes this by reworking the crypto4xx_setkey_aes() function to: - not save the iv for ECB (as per 18.2.38 CRYP0_SA_CMD_0: "This bit must be cleared for DES ECB mode or AES ECB mode, when no IV is used.") - instruct the hardware to save the generated IV for all other modes of operation that have an IV and then supply it back to the caller in pretty much the same way as we do it for cbc-aes already. - make it clear that the DIR_(IN|OUT)BOUND is the important bit that tells the hardware to encrypt or decrypt the data. (this is cosmetic - but it hopefully prevents me from getting confused again). - don't load any bogus hash when we don't use any hash operation to begin with. Cc: stable@vger.kernel.org Fixes: f2a13e7c ("crypto: crypto4xx - enable AES RFC3686, ECB, CFB and OFB offloads") Signed-off-by: Christian Lamparter <chunkeey@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Yazen Ghannam authored
commit 71a84402 upstream. AMD family 17h Models 10h-2Fh may report a high number of L1 BTB MCA errors under certain conditions. The errors are benign and can safely be ignored. However, the high error rate may cause the MCA threshold counter to overflow causing a high rate of thresholding interrupts. In addition, users may see the errors reported through the AMD MCE decoder module, even with the interrupt disabled, due to MCA polling. Clear the "Counter Present" bit in the Instruction Fetch bank's MCA_MISC0 register. This will prevent enabling MCA thresholding on this bank which will prevent the high interrupt rate due to this error. Define an AMD-specific function to filter these errors from the MCE event pool so that they don't get reported during early boot. Rename filter function in EDAC/mce_amd to avoid a naming conflict, while at it. [ bp: Move function prototype to the internal header and massage/cleanup, fix typos. ] Reported-by: Rafał Miłecki <rafal@milecki.pl> Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: "clemej@gmail.com" <clemej@gmail.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: James Morse <james.morse@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Mauro Carvalho Chehab <mchehab@kernel.org> Cc: Pu Wen <puwen@hygon.cn> Cc: Qiuxu Zhuo <qiuxu.zhuo@intel.com> Cc: Shirish S <Shirish.S@amd.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: linux-edac <linux-edac@vger.kernel.org> Cc: x86-ml <x86@kernel.org> Cc: <stable@vger.kernel.org> # 5.0.x: c95b323d: x86/MCE/AMD: Turn off MC4_MISC thresholding on all family 0x15 models Cc: <stable@vger.kernel.org> # 5.0.x: 30aa3d26: x86/MCE/AMD: Carve out the MC4_MISC thresholding quirk Cc: <stable@vger.kernel.org> # 5.0.x: 9308fd40: x86/MCE: Group AMD function prototypes in <asm/mce.h> Cc: <stable@vger.kernel.org> # 5.0.x Link: https://lkml.kernel.org/r/20190325163410.171021-2-Yazen.Ghannam@amd.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Yazen Ghannam authored
commit 45d4b7b9 upstream. Some systems may report spurious MCA errors. In general, spurious MCA errors may be disabled by clearing a particular bit in MCA_CTL. However, clearing a bit in MCA_CTL may not be recommended for some errors, so the only option is to ignore them. An MCA error is printed and handled after it has been added to the MCE event pool. So an MCA error can be ignored by not adding it to that pool in the first place. Add such a filtering function. [ bp: Move function prototype to the internal header and massage. ] Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Arnd Bergmann <arnd@arndb.de> Cc: "clemej@gmail.com" <clemej@gmail.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Pu Wen <puwen@hygon.cn> Cc: Qiuxu Zhuo <qiuxu.zhuo@intel.com> Cc: "rafal@milecki.pl" <rafal@milecki.pl> Cc: Shirish S <Shirish.S@amd.com> Cc: <stable@vger.kernel.org> # 5.0.x Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190325163410.171021-1-Yazen.Ghannam@amd.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
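A hedged sketch of the mechanism (helper names are illustrative, not quoted from the patch): the logging path consults a filter before a record reaches the event pool, and a vendor hook decides whether a record is one of the known-spurious errors.

```c
#include <linux/types.h>
#include <asm/mce.h>
#include <asm/processor.h>

int mce_gen_pool_add(struct mce *mce);	 /* internal helper that queues a record */
bool amd_keep_mce_sketch(struct mce *m); /* hypothetical vendor-specific hook */

/* Sketch: decide per record whether it should ever be seen by consumers. */
static bool mce_should_log_sketch(struct mce *m)
{
	if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD)
		return amd_keep_mce_sketch(m);
	return true;				/* log everything by default */
}

static void mce_log_sketch(struct mce *m)
{
	if (!mce_should_log_sketch(m))
		return;				/* ignored: never reaches the pool */

	mce_gen_pool_add(m);			/* normal path: queue for reporting */
}
```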
-
Peter Zijlstra authored
commit 6690e86b upstream. Effectively reverts commit: 2c7577a7 ("sched/x86_64: Don't save flags on context switch") Specifically because SMAP uses FLAGS.AC which invalidates the claim that the kernel has clean flags. In particular, while preemption from interrupt return is fine (the IRET frame on the exception stack contains FLAGS) it breaks any code that does synchronous scheduling, including preempt_enable(). This has become a significant issue ever since commit: 5b24a7a2 ("Add 'unsafe' user access functions for batched accesses") provided a means of having 'normal' C code between STAC / CLAC, exposing the FLAGS.AC state. So far this hasn't led to trouble, however fix it before it comes apart. Reported-by: Julien Thierry <julien.thierry@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: stable@kernel.org Fixes: 5b24a7a2 ("Add 'unsafe' user access functions for batched accesses") Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jean-Philippe Brucker authored
commit 827a108e upstream. When the CPU comes out of suspend, the firmware may have modified the OS Double Lock Register. Save it in an unused slot of cpu_suspend_ctx, and restore it on resume. Cc: <stable@vger.kernel.org> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Jean-Philippe Brucker authored
commit 6fda41bf upstream. Some firmwares may reboot CPUs with OS Double Lock set. Make sure that it is unlocked, in order to use debug exceptions. Cc: <stable@vger.kernel.org> Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Vincenzo Frascino authored
commit d2631193 upstream. Currently, compat tasks running on arm64 can allocate memory up to TASK_SIZE_32 (UL(0x100000000)). This means that mmap() allocations, if we treat them as returning an array, are not compliant with section 6.5.8 of the C standard (C99), which states that: "If the expression P points to an element of an array object and the expression Q points to the last element of the same array object, the pointer expression Q+1 compares greater than P". Redefine TASK_SIZE_32 to address the issue. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Jann Horn <jannh@google.com> Cc: <stable@vger.kernel.org> Reported-by: Jann Horn <jannh@google.com> Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com> [will: fixed typo in comment] Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Will Deacon authored
commit 75a19a02 upstream. When executing clock_gettime(), either in the vDSO or via a system call, we need to ensure that the read of the counter register occurs within the seqlock reader critical section. This ensures that updates to the clocksource parameters (e.g. the multiplier) are consistent with the counter value and therefore avoids the situation where time appears to go backwards across multiple reads. Extend the vDSO logic so that the seqlock critical section covers the read of the counter register as well as accesses to the data page. Since reads of the counter system registers are not ordered by memory barrier instructions, introduce dependency ordering from the counter read to a subsequent memory access so that the seqlock memory barriers apply to the counter access in both the vDSO and the system call paths. Cc: <stable@vger.kernel.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Tested-by: Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/linux-arm-kernel/alpine.DEB.2.21.1902081950260.1662@nanos.tec.linutronix.de/ Reported-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
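A hedged and much simplified sketch of the reader side in plain C (the real vDSO code is assembly and also needs the dependency-ordering trick described above): the raw counter must be sampled inside the retry loop so it is always paired with the matching mult/shift values.

```c
#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/barrier.h>

/* Field and helper names are illustrative of a vDSO data page, not literal. */
struct vdso_data_sketch {
	unsigned int seq;	/* odd while an update is in progress */
	u64 cycle_last;
	u32 mult;
	u32 shift;
	u64 ns_base;
};

u64 read_counter_sketch(void);	/* hypothetical raw counter read (e.g. cntvct) */

static u64 vdso_get_ns_sketch(const struct vdso_data_sketch *vd)
{
	unsigned int seq;
	u64 cycles, ns;

	do {
		seq = READ_ONCE(vd->seq);
		smp_rmb();
		/* The counter read now sits inside the critical section. */
		cycles = read_counter_sketch();
		ns = vd->ns_base +
		     (((cycles - vd->cycle_last) * vd->mult) >> vd->shift);
		smp_rmb();
	} while ((seq & 1) || seq != READ_ONCE(vd->seq));

	return ns;
}
```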
-
Boyang Zhou authored
commit f08cae2f upstream. The file offset argument to the arm64 sys_mmap() implementation is scaled from bytes to pages by shifting right by PAGE_SHIFT. Unfortunately, the offset is passed in as a signed 'off_t' type and therefore large offsets (i.e. with the top bit set) are incorrectly sign-extended by the shift. This has been observed to cause false mmap() failures when mapping GPU doorbells on an arm64 server part. Change the type of the file offset argument to sys_mmap() from 'off_t' to 'unsigned long' so that the shifting scales the value as expected. Cc: <stable@vger.kernel.org> Signed-off-by: Boyang Zhou <zhouby_cn@126.com> [will: rewrote commit message] Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
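The sign-extension is easy to demonstrate in plain C (the offset below is only an example of a value with the top bit set, similar to a high doorbell offset):

```c
#include <stdio.h>

int main(void)
{
	/* Example offset with bit 63 set. The conversion to a signed type and
	 * the arithmetic right shift are what Linux targets do in practice
	 * (both are implementation-defined by the C standard). */
	long long off = (long long)0x8000000000001000ULL;	/* like off_t */
	unsigned long long uoff = 0x8000000000001000ULL;	/* like unsigned long */
	int page_shift = 12;

	/* Signed shift sign-extends: the page offset comes out negative and
	 * the mmap() is rejected. */
	printf("signed   off >> PAGE_SHIFT = %lld\n", off >> page_shift);
	/* Unsigned shift scales the value as intended. */
	printf("unsigned off >> PAGE_SHIFT = %llu\n", uoff >> page_shift);
	return 0;
}
```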
-
Hans de Goede authored
commit 9274c783 upstream. The ACEPC T8 and T11 Cherry Trail Z8350 mini PCs use an AXP288 and, as PCs rather than portables, they do not have a battery. Still, for some reason the AXP288 not only thinks there is a battery, it actually thinks it is discharging while the PC is running, slowly going to 0% full, causing userspace to shut down the system due to the battery being critically low after a while. This commit adds the ACEPC T8 and T11 to the axp288 fuel-gauge driver blacklist, so that we stop reporting bogus battery readings on these devices. BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1690852 Cc: stable@vger.kernel.org Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
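A hedged sketch of what such a quirk entry looks like (the DMI match strings are placeholders, not the ones used in the patch):

```c
#include <linux/dmi.h>
#include <linux/errno.h>
#include <linux/platform_device.h>

/* Sketch: quirk table consulted at probe time; if the running board matches,
 * the fuel-gauge driver bails out instead of reporting a phantom battery. */
static const struct dmi_system_id axp288_quirks_sketch[] = {
	{
		/* ACEPC T8 mini PC -- match strings are illustrative only */
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "ACEPC"),
			DMI_MATCH(DMI_PRODUCT_NAME, "T8"),
		},
	},
	{ }	/* terminator */
};

static int axp288_fg_probe_sketch(struct platform_device *pdev)
{
	if (dmi_check_system(axp288_quirks_sketch))
		return -ENODEV;		/* no battery on this board */

	/* ... normal probe continues here ... */
	return 0;
}
```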
-
Gustavo A. R. Silva authored
commit c3422ad5 upstream. Currently there is no check on platform_get_irq() return value in case it fails, hence never actually reporting any errors and causing unexpected behavior when using such value as argument for function regmap_irq_get_virq(). Fix this by adding a proper check, a message reporting any errors and returning *pirq* Addresses-Coverity-ID: 1443940 ("Improper use of negative value") Fixes: 843735b7 ("power: axp288_charger: axp288 charger driver") Cc: stable@vger.kernel.org Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Reviewed-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
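A hedged sketch of the fixed pattern (the surrounding driver structures are approximated; only the check itself is the point):

```c
#include <linux/device.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>

/* Sketch: platform_get_irq() returns a negative errno on failure, so the
 * value must be checked before being handed to regmap_irq_get_virq(). */
static int axp288_charger_request_irqs_sketch(struct platform_device *pdev,
					      struct regmap_irq_chip_data *irqc,
					      int *virqs, int nr_irqs)
{
	int i, pirq;

	for (i = 0; i < nr_irqs; i++) {
		pirq = platform_get_irq(pdev, i);
		if (pirq < 0) {
			dev_err(&pdev->dev, "Failed to get IRQ %d: %d\n",
				i, pirq);
			return pirq;	/* propagate the errno */
		}
		virqs[i] = regmap_irq_get_virq(irqc, pirq);
	}

	return 0;
}
```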
-
Wen Yang authored
commit 629266bf upstream. The call to of_get_next_child returns a node pointer with refcount incremented thus it must be explicitly decremented after the last usage. Detected by coccinelle with warnings like: arch/arm/mach-exynos/firmware.c:201:2-8: ERROR: missing of_node_put; acquired a node pointer with refcount incremented on line 193, but without a corresponding object release within this function. Cc: stable@vger.kernel.org Signed-off-by: Wen Yang <wen.yang99@zte.com.cn> Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
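A hedged sketch of the refcount pattern flagged by coccinelle (function context approximated, not the literal patch):

```c
#include <linux/errno.h>
#include <linux/of.h>

/* Sketch: of_get_next_child() returns a node with its refcount raised, so
 * every exit path after the lookup must drop it with of_node_put(). */
static int exynos_probe_child_sketch(struct device_node *parent)
{
	struct device_node *np;
	int ret = 0;

	np = of_get_next_child(parent, NULL);
	if (!np)
		return -ENODEV;

	/* ... read properties of 'np' here, setting 'ret' on failure ... */

	of_node_put(np);	/* previously missing on this path */
	return ret;
}
```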
-
Christoph Muellner authored
commit 7bda9482 upstream. Direct commands (DCMDs) are an optional feature of eMMC 5.1's command queue engine (CQE). The Arasan eMMC 5.1 controller uses the CQHCI, which exposes a control register bit to enable the feature. The current implementation sets this bit unconditionally. This patch allows the feature activation to be suppressed by specifying the disable-cqe-dcmd property. Signed-off-by: Christoph Muellner <christoph.muellner@theobroma-systems.com> Signed-off-by: Philipp Tomsich <philipp.tomsich@theobroma-systems.com> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Fixes: 84362d79 ("mmc: sdhci-of-arasan: Add CQHCI support for arasan,sdhci-5.1") Cc: stable@vger.kernel.org Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
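A hedged sketch of the driver side (the property name is from the commit; the probe context and exact flag handling are approximated):

```c
#include <linux/mmc/host.h>
#include <linux/of.h>
#include "sdhci.h"	/* driver-local header providing struct sdhci_host */

/* Sketch: DCMDs stay enabled by default; the new DT property opts out, so the
 * capability is only advertised to the CQHCI layer when it is absent. */
static void sdhci_arasan_update_dcmd_cap_sketch(struct sdhci_host *host,
						struct device_node *np)
{
	if (!of_property_read_bool(np, "disable-cqe-dcmd"))
		host->mmc->caps2 |= MMC_CAP2_CQE_DCMD;
}
```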
-
Sylwester Nawrocki authored
commit 9b23e1a3 upstream. The name of CODEC input widget to which microphone is connected through the "Headphone" jack is "IN12" not "IN1". This fixes microphone support on Odroid XU3. Cc: <stable@vger.kernel.org> # v4.14+ Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com> Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Sylwester Nawrocki authored
commit 34dc8225 upstream. Add missing audio routing entry for the capture stream, this change is required to fix audio recording on Odroid XU3/XU3-Lite. Fixes: 885b005d ("ARM: dts: exynos: Add support for secondary DAI to Odroid XU3") Cc: stable@vger.kernel.org Signed-off-by: Sylwester Nawrocki <s.nawrocki@samsung.com> Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Stuart Menefy authored
commit b7ed69d6 upstream. Fix the interrupt information for the GPIO lines with a shared EINT interrupt. Fixes: 16d7ff26 ("ARM: dts: add dts files for exynos5260 SoC") Cc: stable@vger.kernel.org Signed-off-by: Stuart Menefy <stuart.menefy@mathembedded.com> Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christian Lamparter authored
commit f3e35357 upstream. David Bauer reported that the VDSL modem (attached via PCIe) on his AVM Fritz!Box 7530 was complaining about not having enough space in the BAR. A closer inspection of the old qcom-ipq40xx.dtsi pulled from the GL-iNet repository listed: | qcom,pcie@80000 { | compatible = "qcom,msm_pcie"; | reg = <0x80000 0x2000>, | <0x99000 0x800>, | <0x40000000 0xf1d>, | <0x40000f20 0xa8>, | <0x40100000 0x1000>, | <0x40200000 0x100000>, | <0x40300000 0xd00000>; | reg-names = "parf", "phy", "dm_core", "elbi", | "conf", "io", "bars"; Matching the reg-names with the listed reg leads to <0xd00000> as the size for the "bars". Cc: stable@vger.kernel.org BugLink: https://www.mail-archive.com/openwrt-devel@lists.openwrt.org/msg45212.html Reported-by: David Bauer <mail@david-bauer.net> Signed-off-by: Christian Lamparter <chunkeey@gmail.com> Signed-off-by: Andy Gross <agross@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-