
Commit 2e633095 authored by Joe Thornber, committed by Mike Snitzer

dm cache policy smq: don't do any writebacks unless IDLE

If there are no clean blocks to be demoted, the writeback will be
triggered at that point.  Preemptively writing back can hurt high IO
load scenarios.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
parent 49b7f768
+4 −5
@@ -1120,8 +1120,6 @@ static bool clean_target_met(struct smq_policy *mq, bool idle)
 	 * Cache entries may not be populated.  So we cannot rely on the
 	 * size of the clean queue.
 	 */
-	unsigned nr_clean;
-
 	if (idle) {
 		/*
 		 * We'd like to clean everything.
@@ -1129,9 +1127,10 @@ static bool clean_target_met(struct smq_policy *mq, bool idle)
 		return q_size(&mq->dirty) == 0u;
 	}
 
-	nr_clean = from_cblock(mq->cache_size) - q_size(&mq->dirty);
-	return (nr_clean + btracker_nr_writebacks_queued(mq->bg_work)) >=
-		percent_to_target(mq, CLEAN_TARGET);
+	/*
+	 * If we're busy we don't worry about cleaning at all.
+	 */
+	return true;
 }
 
 static bool free_target_met(struct smq_policy *mq)