
Commit a033aed5 authored by Eric Biggers, committed by Herbert Xu

crypto: x86/chacha - yield the FPU occasionally



To improve responsiveness, yield the FPU (temporarily re-enabling
preemption) every 4 KiB encrypted/decrypted, rather than keeping
preemption disabled during the entire encryption/decryption operation.

Alternatively we could do this for every skcipher_walk step, but steps
may be small in some cases, and yielding the FPU is expensive on x86.
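The yield accounting can be sketched in plain, user-space C (a minimal sketch, not the kernel code: kernel_fpu_begin()/kernel_fpu_end() are stubbed out as no-ops, the skcipher_walk loop is simulated with fixed-size steps, and the process() helper and yield counter are illustrative names not in the patch):

	#include <stdio.h>

	/* Stand-ins for the kernel primitives that disable/enable
	 * preemption around SIMD use (no-ops in this sketch). */
	static void kernel_fpu_begin(void) { }
	static void kernel_fpu_end(void) { }

	/* Simulate encrypting `total` bytes in `step`-byte walk chunks,
	 * yielding the FPU after every 4096 bytes processed.  Returns
	 * how many times the FPU was yielded mid-operation. */
	static int process(int total, int step)
	{
		int next_yield = 4096; /* bytes until next FPU yield */
		int yields = 0;

		kernel_fpu_begin();
		while (total > 0) {
			int nbytes = total < step ? total : step;

			/* As in the patch, only non-final steps count toward
			 * the next yield; there is no point yielding after
			 * the last chunk. */
			if (nbytes < total)
				next_yield -= nbytes;

			/* ... encrypt/decrypt nbytes here ... */
			total -= nbytes;

			if (next_yield <= 0) {
				/* temporarily allow preemption */
				kernel_fpu_end();
				kernel_fpu_begin();
				next_yield = 4096;
				yields++;
			}
		}
		kernel_fpu_end();
		return yields;
	}

	int main(void)
	{
		/* 16 KiB in 1 KiB steps: yields at 4, 8, and 12 KiB;
		 * the final 4 KiB needs no yield.  Prints 3. */
		printf("%d\n", process(16384, 1024));
		return 0;
	}

With a 4096-byte request covered by a single step, no yield happens at all, which is the common fast path the patch preserves.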

Suggested-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
parent 7a507d62
+11 −1
@@ -132,6 +132,7 @@ static int chacha_simd_stream_xor(struct skcipher_request *req,
 {
 	u32 *state, state_buf[16 + 2] __aligned(8);
 	struct skcipher_walk walk;
+	int next_yield = 4096; /* bytes until next FPU yield */
 	int err;
 
 	BUILD_BUG_ON(CHACHA_STATE_ALIGN != 16);
@@ -144,12 +145,21 @@ static int chacha_simd_stream_xor(struct skcipher_request *req,
 	while (walk.nbytes > 0) {
 		unsigned int nbytes = walk.nbytes;
 
-		if (nbytes < walk.total)
+		if (nbytes < walk.total) {
 			nbytes = round_down(nbytes, walk.stride);
+			next_yield -= nbytes;
+		}
 
 		chacha_dosimd(state, walk.dst.virt.addr, walk.src.virt.addr,
 			      nbytes, ctx->nrounds);
 
+		if (next_yield <= 0) {
+			/* temporarily allow preemption */
+			kernel_fpu_end();
+			kernel_fpu_begin();
+			next_yield = 4096;
+		}
+
 		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
 	}