
Commit 7d9595d8 authored by Mike Snitzer

dm rq: fix the starting and stopping of blk-mq queues



Improve dm_stop_queue() to cancel any requeue_work.  Also, have
dm_start_queue() and dm_stop_queue() clear/set the QUEUE_FLAG_STOPPED
for the blk-mq request_queue.

On suspend dm_stop_queue() handles stopping the blk-mq request_queue
BUT: even though the hw_queues are marked BLK_MQ_S_STOPPED at that point
there is still a race that is allowing block/blk-mq.c to call ->queue_rq
against a hctx that it really shouldn't.  Add a check to
dm_mq_queue_rq() that guards against this rarity (albeit _not_
race-free).

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org # must patch dm.c on < 4.8 kernels
parent 1814f2e3
+19 −1
@@ -78,6 +78,7 @@ void dm_start_queue(struct request_queue *q)
 	if (!q->mq_ops)
 		dm_old_start_queue(q);
 	else {
+		queue_flag_clear_unlocked(QUEUE_FLAG_STOPPED, q);
 		blk_mq_start_stopped_hw_queues(q, true);
 		blk_mq_kick_requeue_list(q);
 	}
@@ -101,9 +102,15 @@ void dm_stop_queue(struct request_queue *q)
 {
 	if (!q->mq_ops)
 		dm_old_stop_queue(q);
-	else
+	else {
+		spin_lock_irq(q->queue_lock);
+		queue_flag_set(QUEUE_FLAG_STOPPED, q);
+		spin_unlock_irq(q->queue_lock);
+
+		blk_mq_cancel_requeue_work(q);
 		blk_mq_stop_hw_queues(q);
+	}
 }
 
 static struct dm_rq_target_io *alloc_old_rq_tio(struct mapped_device *md,
 						gfp_t gfp_mask)
@@ -864,6 +871,17 @@ static int dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 		dm_put_live_table(md, srcu_idx);
 	}
 
+	/*
+	 * On suspend dm_stop_queue() handles stopping the blk-mq
+	 * request_queue BUT: even though the hw_queues are marked
+	 * BLK_MQ_S_STOPPED at that point there is still a race that
+	 * is allowing block/blk-mq.c to call ->queue_rq against a
+	 * hctx that it really shouldn't.  The following check guards
+	 * against this rarity (albeit _not_ race-free).
+	 */
+	if (unlikely(test_bit(BLK_MQ_S_STOPPED, &hctx->state)))
+		return BLK_MQ_RQ_QUEUE_BUSY;
+
 	if (ti->type->busy && ti->type->busy(ti))
 		return BLK_MQ_RQ_QUEUE_BUSY;