
Commit bb110137 authored by Jens Axboe, committed by Sridhar Arra

block: don't use blocking queue entered for recursive bio submits



If we end up splitting a bio and the queue goes away between
the initial submission and the later split submission, then we
can block forever in blk_queue_enter() waiting for the reference
to drop to zero. This will never happen, since we already hold
a reference.

Mark a split bio as already having entered the queue, so we can
just use the live non-blocking queue enter variant.

Thanks to Tetsuo Handa for the analysis.

Change-Id: Ifb03a192324bcb53c79f6d249422cce37cae2776
Reported-by: syzbot+c4f9cebf9d651f6e54de@syzkaller.appspotmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Git-commit: cd4a4ae4
Git-repo: https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block


Signed-off-by: Sridhar Arra <sarra@codeaurora.org>
parent 0265a24b
block/blk-core.c (+3 −1)

@@ -2227,7 +2227,9 @@ blk_qc_t generic_make_request(struct bio *bio)

 	if (bio->bi_opf & REQ_NOWAIT)
 		flags = BLK_MQ_REQ_NOWAIT;
-	if (blk_queue_enter(q, flags) < 0) {
+	if (bio_flagged(bio, BIO_QUEUE_ENTERED))
+		blk_queue_enter_live(q);
+	else if (blk_queue_enter(q, flags) < 0) {
 		if (!blk_queue_dying(q) && (bio->bi_opf & REQ_NOWAIT))
 			bio_wouldblock_error(bio);
 		else
block/blk-merge.c (+10 −0)

@@ -211,6 +211,16 @@ void blk_queue_split(struct request_queue *q, struct bio **bio)
 		/* there isn't chance to merge the splitted bio */
 		split->bi_opf |= REQ_NOMERGE;

+		/*
+		 * Since we're recursing into make_request here, ensure
+		 * that we mark this bio as already having entered the queue.
+		 * If not, and the queue is going away, we can get stuck
+		 * forever on waiting for the queue reference to drop. But
+		 * that will never happen, as we're already holding a
+		 * reference to it.
+		 */
+		bio_set_flag(*bio, BIO_QUEUE_ENTERED);
+
 		bio_chain(split, *bio);
 		trace_block_split(q, split, (*bio)->bi_iter.bi_sector);
 		generic_make_request(*bio);
include/linux/blk_types.h (+2 −0)

@@ -149,6 +149,8 @@ struct bio {
 				 * throttling rules. Don't do it again. */
 #define BIO_TRACE_COMPLETION 10	/* bio_endio() should trace the final completion
 				 * of this bio. */
+#define BIO_QUEUE_ENTERED 11	/* can use blk_queue_enter_live() */
+
 /* See BVEC_POOL_OFFSET below before adding new flags */

 /*