
Commit 58f9f0c7 authored by Morten Rasmussen, committed by Chris Redpath

ANDROID: sched: Add over-utilization/tipping point indicator



Energy-aware scheduling is only meant to be active while the system is
_not_ over-utilized. That is, there are spare cycles available to shift
tasks around based on their actual utilization to get a more
energy-efficient task distribution without depriving any tasks. Above
the tipping point, task placement is done the traditional way based on
load_avg, spreading the tasks across as many cpus as possible based on
priority-scaled load to preserve smp_nice. Below the tipping point we
want to use util_avg instead. We need to define a criterion for when we
make the switch.

The util_avg for each cpu converges towards 100% (1024) regardless of
how many additional tasks we may put on it. If we define
over-utilized as:

sum_{cpus}(rq.cfs.avg.util_avg) + margin > sum_{cpus}(rq.capacity)

some individual cpus may be over-utilized running multiple tasks even
when the above condition is false. That should be okay as long as we try
to spread the tasks out to avoid per-cpu over-utilization as much as
possible and if all tasks have the _same_ priority. If the latter isn't
true, we have to consider priority to preserve smp_nice.
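As a stand-alone sketch of why the system-wide sum can be misleading:
the total can stay comfortably below total capacity while a single cpu
is saturated. The helper and the values in the test are illustrative
only, not taken from the patch:

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Sketch of the rejected system-wide criterion:
 *   sum(util_avg) + margin > sum(capacity)
 * In the kernel these values would come from each cpu's rq; here they
 * are passed in as plain arrays for illustration.
 */
static bool system_overutilized(const unsigned long *util_avg,
				const unsigned long *capacity,
				size_t n_cpus, unsigned long margin)
{
	unsigned long util_sum = 0, cap_sum = 0;
	size_t i;

	for (i = 0; i < n_cpus; i++) {
		util_sum += util_avg[i];
		cap_sum += capacity[i];
	}
	return util_sum + margin > cap_sum;
}
```

With util_avg = {1024, 100, 100, 100} and capacity = {1024, 1024,
1024, 1024}, the sum condition stays false for any margin below 2772,
even though cpu 0 is fully saturated.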

For example, we could have n_cpus nice=-10 util_avg=55% tasks and
n_cpus/2 nice=0 util_avg=60% tasks. Balancing based on util_avg we are
likely to end up with nice=-10 tasks sharing cpus and nice=0 tasks
getting their own as we have 1.5*n_cpus tasks in total and 55%+55% is less
over-utilized than 55%+60% for those cpus that have to be shared. The
system utilization is only 85% of the system capacity, but we are
breaking smp_nice.
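The 85% figure can be checked with a concrete instance of the example;
n_cpus = 4 is an assumed value for illustration:

```c
/*
 * Worked instance of the smp_nice example: n_cpus tasks at 55%
 * (nice=-10) plus n_cpus/2 tasks at 60% (nice=0). Returns total task
 * utilization as a fraction of total cpu capacity.
 */
static double system_utilization(int n_cpus)
{
	const double heavy = 0.55;	/* n_cpus nice=-10 tasks */
	const double light = 0.60;	/* n_cpus/2 nice=0 tasks */

	/* e.g. n_cpus = 4: 4*0.55 + 2*0.60 = 3.4 of 4.0 cpus = 85% */
	return (n_cpus * heavy + (n_cpus / 2) * light) / n_cpus;
}
```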

To be sure not to break smp_nice, we have defined over-utilization
conservatively as when any cpu in the system is fully utilized at its
highest frequency instead:

cpu_rq(any).cfs.avg.util_avg + margin > cpu_rq(any).capacity

IOW, as soon as one cpu is (nearly) 100% utilized, we switch to load_avg
to factor in priority to preserve smp_nice.
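That per-cpu check can be sketched as plain C. The margin below
(treating a cpu as over-utilized once util exceeds roughly 80% of its
capacity, i.e. a 1280/1024 ratio) is an assumed illustrative value, and
in the kernel both inputs would be read from cpu_rq(cpu) rather than
passed in:

```c
#include <stdbool.h>

#define SCHED_CAPACITY_SCALE	1024UL
/* Assumed margin for illustration: over-utilized above ~80% of capacity. */
#define CAPACITY_MARGIN		1280UL

/*
 * Sketch of a per-cpu over-utilization test. Expressing the margin as
 * a ratio (capacity * 1024 < util * 1280) avoids picking an absolute
 * headroom value and scales with the cpu's capacity.
 */
static bool cpu_overutilized(unsigned long util_avg, unsigned long capacity)
{
	return capacity * SCHED_CAPACITY_SCALE < util_avg * CAPACITY_MARGIN;
}
```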

With this definition, we can skip periodic load-balance as no cpu has an
always-running task when the system is not over-utilized. All tasks will
be periodic and we can balance them at wake-up. This conservative
condition does however mean that some scenarios that could benefit from
energy-aware decisions even if one cpu is fully utilized would not get
those benefits.

For systems where some cpus might have reduced capacity (RT pressure
and/or big.LITTLE), we want periodic load-balance checks as soon as
just a single cpu is fully utilized, as it might be one of those with
reduced capacity, and in that case we want to migrate tasks off it.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Change-Id: I09caeeb1151b5c02d67ed738e25728d1771eb45f
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
parent 15d78f22
+25 −5
@@ -4870,6 +4870,8 @@ static inline void hrtick_update(struct rq *rq)
}
#endif

static bool cpu_overutilized(int cpu);

/*
 * The enqueue_task method is called before nr_running is
 * increased. Here we update the fair scheduling stats and
@@ -4880,6 +4882,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
{
	struct cfs_rq *cfs_rq;
	struct sched_entity *se = &p->se;
	int task_new = !(flags & ENQUEUE_WAKEUP);

	/*
	 * If in_iowait is set, the code below may not trigger any cpufreq
@@ -4919,9 +4922,12 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
		update_cfs_shares(se);
	}

-	if (!se)
+	if (!se) {
		add_nr_running(rq, 1);

		if (!task_new && !rq->rd->overutilized &&
		    cpu_overutilized(rq->cpu))
			rq->rd->overutilized = true;
	}
	hrtick_update(rq);
}

@@ -7742,11 +7748,12 @@ group_type group_classify(struct sched_group *group,
 * @local_group: Does group contain this_cpu.
 * @sgs: variable to hold the statistics for this group.
 * @overload: Indicate more than one runnable task for any CPU.
 * @overutilized: Indicate overutilization for any CPU.
 */
static inline void update_sg_lb_stats(struct lb_env *env,
			struct sched_group *group, int load_idx,
			int local_group, struct sg_lb_stats *sgs,
-			bool *overload)
+			bool *overload, bool *overutilized)
{
	unsigned long load;
	int i, nr_running;
@@ -7784,6 +7791,9 @@ static inline void update_sg_lb_stats(struct lb_env *env,
		if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
		    !sgs->group_misfit_task && rq_has_misfit(rq))
			sgs->group_misfit_task = capacity_of(i);

		if (cpu_overutilized(i))
			*overutilized = true;
	}

	/* Adjust by relative CPU capacity of the group */
@@ -7930,7 +7940,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
	struct sg_lb_stats *local = &sds->local_stat;
	struct sg_lb_stats tmp_sgs;
	int load_idx, prefer_sibling = 0;
-	bool overload = false;
+	bool overload = false, overutilized = false;

	if (child && child->flags & SD_PREFER_SIBLING)
		prefer_sibling = 1;
@@ -7952,7 +7962,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
		}

		update_sg_lb_stats(env, sg, load_idx, local_group, sgs,
-						&overload);
+						&overload, &overutilized);

		if (local_group)
			goto next_group;
@@ -7997,6 +8007,13 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
		/* update overload indicator if we are at root domain */
		if (env->dst_rq->rd->overload != overload)
			env->dst_rq->rd->overload = overload;

		/* Update over-utilization (tipping point, U >= 0) indicator */
		if (env->dst_rq->rd->overutilized != overutilized)
			env->dst_rq->rd->overutilized = overutilized;
	} else {
		if (!env->dst_rq->rd->overutilized && overutilized)
			env->dst_rq->rd->overutilized = true;
	}
}

@@ -9446,6 +9463,9 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
		task_tick_numa(rq, curr);

	update_misfit_task(rq, curr);

	if (!rq->rd->overutilized && cpu_overutilized(task_cpu(curr)))
		rq->rd->overutilized = true;
}

/*
+3 −0
@@ -629,6 +629,9 @@ struct root_domain {
	/* Indicate more than one runnable task for any CPU */
	bool overload;

	/* Indicate one or more cpus over-utilized (tipping point) */
	bool overutilized;

	/*
	 * The bit corresponding to a CPU gets set here if such CPU has more
	 * than one runnable -deadline task (as it is below for RT tasks).