
Commit 3b14c88e authored by Pavankumar Kondeti

sched: Provide a facility to restrict RT tasks to lower power cluster



The current CPU selection algorithm for RT tasks looks for the
least loaded CPU in all clusters. Stop the search at the lowest
possible power cluster based on "sched_restrict_cluster_spill"
sysctl tunable.
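The selection policy described above can be sketched in user-space C. This is a hedged illustration only: `find_lowest_cpu`, `NR_CLUSTERS`, and the `load` array are hypothetical stand-ins (the real kernel code iterates runqueues and uses `cumulative_runnable_avg`), but the control flow mirrors the patch: clusters are walked in ascending power order, and with the tunable set the search stops at the first cluster that yields a candidate.

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical sketch of cluster-restricted RT CPU selection.
 * Clusters are assumed sorted by ascending power cost; load[][]
 * stands in for per-CPU cumulative runnable load. */
#define NR_CLUSTERS      2
#define CPUS_PER_CLUSTER 4

static int find_lowest_cpu(const uint64_t load[NR_CLUSTERS][CPUS_PER_CLUSTER],
			   int restrict_cluster)
{
	uint64_t min_load = UINT64_MAX;
	int best_cpu = -1;

	for (int c = 0; c < NR_CLUSTERS; c++) {
		for (int i = 0; i < CPUS_PER_CLUSTER; i++) {
			if (load[c][i] < min_load) {
				min_load = load[c][i];
				best_cpu = c * CPUS_PER_CLUSTER + i;
			}
		}
		/* With the tunable set, stop once the lowest-power
		 * cluster has produced a candidate CPU. */
		if (restrict_cluster && best_cpu != -1)
			break;
	}
	return best_cpu;
}
```

With restriction enabled, an RT task lands on the least-loaded CPU of the low-power cluster even when a big-cluster CPU is idler; with it disabled, the globally least-loaded CPU wins.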

Change-Id: I34fdaefea56e0d1b7e7178d800f1bb86aa0ec01c
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
parent 060d79c2
@@ -1255,11 +1255,18 @@ Default value: 0
 
 Appears at /proc/sys/kernel/sched_restrict_cluster_spill
 
-This tunable can be used to restrict the higher capacity cluster pulling tasks
-from the lower capacity cluster in the load balance path. The restriction is
-lifted if all of the CPUS in the lower capacity cluster are above spill.
-The power cost is used to break the ties if the capacity of clusters are same
-for applying this restriction.
+This tunable can be used to restrict tasks spilling to the higher capacity
+(higher power) cluster. When this tunable is enabled,
+
+- Restrict the higher capacity cluster pulling tasks from the lower capacity
+cluster in the load balance path. The restriction is lifted if all of the CPUS
+in the lower capacity cluster are above spill. The power cost is used to break
+the ties if the capacity of clusters are same for applying this restriction.
+
+- The current CPU selection algorithm for RT tasks looks for the least loaded
+CPU across all clusters. When this tunable is enabled, the RT tasks are
+restricted to the lowest possible power cluster.
 
 
 =========================
 8. HMP SCHEDULER TRACE POINTS
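The load-balance half of the documented behavior (restriction lifted once every CPU in the lower-capacity cluster is above spill) can be sketched as a small predicate. This is an illustrative user-space sketch, not kernel code: `allow_pull_from_lower`, the load array, and the threshold are hypothetical names, assuming a single spill threshold per cluster.

```c
#include <stdbool.h>
#include <stddef.h>
#include <assert.h>

/* Hypothetical sketch: the higher-capacity cluster may pull tasks from
 * the lower-capacity cluster only once every CPU there has exceeded a
 * spill threshold (names and threshold are illustrative). */
static bool allow_pull_from_lower(const unsigned int load[], size_t n,
				  unsigned int spill_threshold)
{
	for (size_t i = 0; i < n; i++)
		if (load[i] <= spill_threshold)
			return false;	/* at least one CPU still has room */
	return true;			/* all above spill: pulling allowed */
}
```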
@@ -1655,6 +1655,7 @@ static int find_lowest_rq_hmp(struct task_struct *task)
 	int prev_cpu = task_cpu(task);
 	u64 cpu_load, min_load = ULLONG_MAX;
 	int i;
+	int restrict_cluster = sysctl_sched_restrict_cluster_spill;
 
 	/* Make sure the mask is initialized first */
 	if (unlikely(!lowest_mask))
@@ -1682,8 +1683,9 @@ static int find_lowest_rq_hmp(struct task_struct *task)
 			if (sched_cpu_high_irqload(i))
 				continue;
 
-			cpu_load = scale_load_to_cpu(
-			  cpu_rq(i)->hmp_stats.cumulative_runnable_avg, i);
+			cpu_load = cpu_rq(i)->hmp_stats.cumulative_runnable_avg;
+			if (!restrict_cluster)
+				cpu_load = scale_load_to_cpu(cpu_load, i);
 
 			if (cpu_load < min_load ||
 				(cpu_load == min_load &&
@@ -1693,6 +1695,8 @@ static int find_lowest_rq_hmp(struct task_struct *task)
 				best_cpu = i;
 			}
 		}
+		if (restrict_cluster && best_cpu != -1)
+			break;
 	}
 
 	return best_cpu;
@@ -954,6 +954,7 @@ extern unsigned int sched_init_task_load_pelt;
 extern unsigned int sched_init_task_load_windows;
 extern unsigned int sched_heavy_task;
 extern unsigned int up_down_migrate_scale_factor;
+extern unsigned int sysctl_sched_restrict_cluster_spill;
 extern void reset_cpu_hmp_stats(int cpu, int reset_cra);
 extern unsigned int max_task_load(void);
 extern void sched_account_irqtime(int cpu, struct task_struct *curr,