
Commit 40a44d2b authored by Pavankumar Kondeti, committed by Gerrit - the friendly Code Review server

core_ctl: un-isolate BIG CPUs more aggressively



The current algorithm for bringing additional BIG CPUs online is
very conservative. It works well when only BIG tasks run on the
BIG cluster. When the co-location and scheduler boost features
are activated, small/medium tasks also run on the BIG cluster.
We don't want these tasks to downmigrate when BIG CPUs are
available but isolated. The following changes are made to
un-isolate CPUs more aggressively.

(1) Round up big_avg. When big_avg indicates that there were
1.5 tasks on average in the last window, we need 2 BIG CPUs,
not 1.

(2) Track the maximum number of runnable tasks in the last
window on each CPU. If any CPU in a cluster had more than 4
runnable tasks in the last window, bring an additional CPU
online to help out.

Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
[markivx: Retain core_ctl changes alone]
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>

Change-Id: I39ca6aa52cb7e0d3798d4c69f4285e3ed9a8bac9
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
parent 5afb026c
@@ -39,6 +39,7 @@ struct cluster_data {
 	cpumask_t cpu_mask;
 	unsigned int need_cpus;
 	unsigned int task_thres;
+	unsigned int max_nr;
 	s64 need_ts;
 	struct list_head lru;
 	bool pending;
@@ -458,47 +459,25 @@ static struct kobj_type ktype_core_ctl = {
 
 /* ==================== runqueue based core count =================== */
 
-#define NR_RUNNING_TOLERANCE 5
-
 static void update_running_avg(void)
 {
 	int avg, iowait_avg, big_avg;
+	int max_nr, big_max_nr;
 	struct cluster_data *cluster;
 	unsigned int index = 0;
 
-	sched_get_nr_running_avg(&avg, &iowait_avg, &big_avg);
-
-	/*
-	 * Round up to the next integer if the average nr running tasks
-	 * is within NR_RUNNING_TOLERANCE/100 of the next integer.
-	 * If normal rounding up is used, it will allow a transient task
-	 * to trigger online event. By the time core is onlined, the task
-	 * has finished.
-	 * Rounding to closest suffers same problem because scheduler
-	 * might only provide running stats per jiffy, and a transient
-	 * task could skew the number for one jiffy. If core control
-	 * samples every 2 jiffies, it will observe 0.5 additional running
-	 * average which rounds up to 1 task.
-	 */
-	avg = (avg + NR_RUNNING_TOLERANCE) / 100;
-	big_avg = (big_avg + NR_RUNNING_TOLERANCE) / 100;
+	sched_get_nr_running_avg(&avg, &iowait_avg, &big_avg,
+				 &max_nr, &big_max_nr);
 
 	for_each_cluster(cluster, index) {
 		if (!cluster->inited)
 			continue;
-		/*
-		 * Big cluster only need to take care of big tasks, but if
-		 * there are not enough big cores, big tasks need to be run
-		 * on little as well. Thus for little's runqueue stat, it
-		 * has to use overall runqueue average, or derive what big
-		 * tasks would have to be run on little. The latter approach
-		 * is not easy to get given core control reacts much slower
-		 * than scheduler, and can't predict scheduler's behavior.
-		 */
 		cluster->nrrun = cluster->is_big_cluster ? big_avg : avg;
+		cluster->max_nr = cluster->is_big_cluster ? big_max_nr : max_nr;
 	}
 }
 
+#define MAX_NR_THRESHOLD	4
 /* adjust needed CPUs based on current runqueue information */
 static unsigned int apply_task_need(const struct cluster_data *cluster,
 				    unsigned int new_need)
@@ -509,7 +488,15 @@ static unsigned int apply_task_need(const struct cluster_data *cluster,
 
 	/* only unisolate more cores if there are tasks to run */
 	if (cluster->nrrun > new_need)
-		return new_need + 1;
+		new_need = new_need + 1;
 
+	/*
+	 * We don't want tasks to be overcrowded in a cluster.
+	 * If any CPU has more than MAX_NR_THRESHOLD in the last
+	 * window, bring another CPU to help out.
+	 */
+	if (cluster->max_nr > MAX_NR_THRESHOLD)
+		new_need = new_need + 1;
 
 	return new_need;
 }