
Commit 78605d7d authored by Patrick Bellasi

ANDROID: sched/fair: schedtune: update before schedutil



When a task is enqueued, its boosted value must be accounted on that CPU
to better support the selection of the required frequency.
However, schedutil is (implicitly) updated by update_load_avg(), which
always happens before schedtune_{en,de}queue_task(), thus potentially
introducing a latency between boost value updates and frequency
selections.

Let's update schedtune at the beginning of enqueue_task_fair(),
which ensures that all schedutil updates see the most
up-to-date boost value for a CPU.

Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Change-Id: I1038f00600dd43ca38b76b2c5681b4f438ae4036
parent cb22d915
+26 −28
@@ -5172,6 +5172,24 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
	 */
	util_est_enqueue(&rq->cfs, p);

	/*
	 * The code below (indirectly) updates schedutil which looks at
	 * the cfs_rq utilization to select a frequency.
	 * Let's update schedtune here to ensure the boost value of the
	 * current task is accounted for in the selection of the OPP.
	 *
	 * We do it also in the case where we enqueue a throttled task;
	 * we could argue that a throttled task should not boost a CPU,
	 * however:
	 * a) properly implementing CPU boosting considering throttled
	 *    tasks will increase a lot the complexity of the solution
	 * b) it's not easy to quantify the benefits introduced by
	 *    such a more complex solution.
	 * Thus, for the time being we go for the simple solution and boost
	 * also for throttled RQs.
	 */
	schedtune_enqueue_task(p, cpu_of(rq));

	/*
	 * If in_iowait is set, the code below may not trigger any cpufreq
	 * utilization updates, so do it here explicitly with the IOWAIT flag
@@ -5212,25 +5230,6 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
		update_cfs_shares(se);
	}

	/*
	 * Update SchedTune accounting.
	 *
	 * We do it before updating the CPU capacity to ensure the
	 * boost value of the current task is accounted for in the
	 * selection of the OPP.
	 *
	 * We do it also in the case where we enqueue a throttled task;
	 * we could argue that a throttled task should not boost a CPU,
	 * however:
	 * a) properly implementing CPU boosting considering throttled
	 *    tasks will increase a lot the complexity of the solution
	 * b) it's not easy to quantify the benefits introduced by
	 *    such a more complex solution.
	 * Thus, for the time being we go for the simple solution and boost
	 * also for throttled RQs.
	 */
	schedtune_enqueue_task(p, cpu_of(rq));

	if (!se) {
		add_nr_running(rq, 1);
		if (!task_new)
@@ -5254,6 +5253,14 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
	struct sched_entity *se = &p->se;
	int task_sleep = flags & DEQUEUE_SLEEP;

	/*
	 * The code below (indirectly) updates schedutil which looks at
	 * the cfs_rq utilization to select a frequency.
	 * Let's update schedtune here to ensure the boost value of the
	 * current task is no longer accounted for in the selection of the OPP.
	 */
	schedtune_dequeue_task(p, cpu_of(rq));

	for_each_sched_entity(se) {
		cfs_rq = cfs_rq_of(se);
		dequeue_entity(cfs_rq, se, flags);
@@ -5296,15 +5303,6 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
		update_cfs_shares(se);
	}

	/*
	 * Update SchedTune accounting
	 *
	 * We do it before updating the CPU capacity to ensure the
	 * boost value of the current task is accounted for in the
	 * selection of the OPP.
	 */
	schedtune_dequeue_task(p, cpu_of(rq));

	if (!se) {
		sub_nr_running(rq, 1);
		walt_dec_cumulative_runnable_avg(rq, p);