crypto: drbg - reseed 'nopr' drbgs periodically from get_random_bytes()
In contrast to the fully prediction resistant 'pr' DRBGs, the 'nopr' variants get seeded once at boot and reseeded only rarely thereafter, namely only after 2^20 requests have been served each. AFAICT, this reseeding based on the number of requests served is primarily motivated by information theoretic considerations, cf. NIST SP800-90Ar1, sec. 8.6.8 ("Reseeding").

However, given the relatively large seed lifetime of 2^20 requests, the 'nopr' DRBGs can hardly be considered to provide any prediction resistance whatsoever, i.e. to protect against threats like side channel leaks of the internal DRBG state (think e.g. leaked VM snapshots). This is expected and completely in line with the 'nopr' naming, but as e.g. the "drbg_nopr_hmac_sha512" implementation is potentially being used for providing the "stdrng" and thus the crypto_default_rng serving the in-kernel crypto, it would certainly be desirable to achieve at least the same level of prediction resistance as get_random_bytes() does. Note that the chacha20 rngs underlying get_random_bytes() get reseeded every CRNG_RESEED_INTERVAL == 5min: the secondary, per-NUMA-node rngs from the primary one and the primary rng in turn from the entropy pool, provided sufficient entropy is available.

The 'nopr' DRBGs already draw randomness from get_random_bytes() for their initial seed, so making them reseed themselves periodically from get_random_bytes() in order to let them benefit from the latter's prediction resistance is not such a big change conceptually.

In principle, it would also have been possible to have the 'nopr' DRBGs periodically invoke a full reseeding operation, i.e. to also consider the jitterentropy source (if enabled) in addition to get_random_bytes() for the seed value. However, get_random_bytes() is relatively lightweight compared to the jitterentropy generation process, and thus, even though the 'nopr' reseeding is supposed to get invoked infrequently, it's IMO still worthwhile to avoid occasional latency spikes for drbg_generate() and stick to get_random_bytes() only.

As an additional remark, note that drawing randomness from the non-SP800-90B-conforming get_random_bytes() only won't adversely affect SP800-90A conformance either: the very same is already being done during boot via drbg_seed_from_random() once rng_is_initialized() flips to true, and it follows that if the DRBG implementation conforms to SP800-90A now, it will continue to do so.

Make the 'nopr' DRBGs reseed themselves periodically from get_random_bytes() every CRNG_RESEED_INTERVAL == 5min.

More specifically, introduce a new member ->last_seed_time to struct drbg_state for recording, in units of jiffies, when the last seeding operation took place. Make __drbg_seed() maintain it and let drbg_generate() invoke a reseed from get_random_bytes() via drbg_seed_from_random() if more than 5min have passed since the last seeding operation. Be careful not to reseed in testing mode though, as otherwise the drbg related tests in crypto/testmgr.c would fail to reproduce the expected output.

In order to keep the formatting clean in drbg_generate(), wrap the logic for deciding whether or not a reseed is due in a new helper, drbg_nopr_reseed_interval_elapsed().

Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Stephan Müller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
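[Editor's note] For illustration, a minimal sketch of how the pieces described above could fit together in crypto/drbg.c (which already pulls in the needed jiffies/time helpers). This is not a reproduction of the patch: the test-mode check via drbg->test_data.buf, the open-coded 5min interval (CRNG_RESEED_INTERVAL is private to drivers/char/random.c), the ignored return value of drbg_seed_from_random() and the exact placement of the calls are assumptions of the sketch.

    /*
     * Sketch of the new helper: a reseed from get_random_bytes() is due
     * once more than roughly 5min worth of jiffies have elapsed since the
     * last seeding operation recorded in ->last_seed_time.
     */
    static bool drbg_nopr_reseed_interval_elapsed(struct drbg_state *drbg)
    {
            unsigned long next_reseed;

            /*
             * Never reseed while test entropy has been injected via
             * crypto/testmgr.c; the actual test-mode check is an internal
             * detail of drbg.c and merely assumed to look like this.
             */
            if (drbg->test_data.buf)
                    return false;

            /* Open-coded 5min in jiffies, standing in for CRNG_RESEED_INTERVAL. */
            next_reseed = drbg->last_seed_time + 5 * 60 * HZ;
            return time_after(jiffies, next_reseed);
    }

    /* In __drbg_seed(), record the time of any successful (re)seed: */
            drbg->last_seed_time = jiffies;

    /* In drbg_generate(), before producing output (placement assumed): */
            if (drbg_nopr_reseed_interval_elapsed(drbg))
                    drbg_seed_from_random(drbg);

Keeping the interval check in a dedicated helper, as the commit message notes, keeps drbg_generate() itself readable and makes the "never in test mode" rule live in exactly one place.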