
Commit c0d07e76 authored by Pavankumar Kondeti

sched/fair: Skip energy_diff() for need_idle tasks



need_idle eligible tasks are placed on the least loaded CPUs,
including idle CPUs. energy_diff() can overturn this decision when
utilizing the idle CPU results in higher power consumption. Since
scheduling latency is critical to the performance of need_idle
eligible tasks, skip energy_diff() for them.
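The placement policy this commit introduces can be illustrated with a small standalone sketch. This is not the kernel's actual code path: `place_task`, its parameters, and the CPU numbers below are hypothetical stand-ins for the decision made inside energy_aware_wake_cpu(), where a need_idle task keeps its nominated CPU unconditionally while an ordinary task can be bounced back when the energy delta is unfavorable.

```c
#include <stdbool.h>

/*
 * Hypothetical, simplified model of the decision described in the commit
 * message. For need_idle tasks, scheduling latency outweighs energy, so the
 * energy check is skipped entirely; for other tasks, a positive energy
 * delta (candidate costs more power) overturns the nomination.
 */
int place_task(int candidate_cpu, int prev_cpu, int energy_delta,
	       bool need_idle)
{
	/* need_idle: keep the least-loaded/idle candidate, skip energy_diff() */
	if (need_idle)
		return candidate_cpu;

	/* Ordinary task: fall back to the previous CPU if the candidate
	 * placement would consume more energy. */
	if (energy_delta > 0)
		return prev_cpu;

	return candidate_cpu;
}
```

With this model, a need_idle task stays on the nominated idle CPU even when the energy delta is positive, which is exactly the early return the diff below adds before the energy_diff() evaluation.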

Change-Id: Ic72c0ed9461a5f91a2f0de055765fd9b2457cdde
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
parent 79830a25
+5 −0
@@ -786,6 +786,11 @@ DEFINE_EVENT(sched_task_util, sched_task_util_imbalance,
 	TP_PROTO(struct task_struct *p, int task_cpu, unsigned long task_util, int nominated_cpu, int target_cpu, int ediff, bool need_idle),
 	TP_ARGS(p, task_cpu, task_util, nominated_cpu, target_cpu, ediff, need_idle)
 );
+
+DEFINE_EVENT(sched_task_util, sched_task_util_need_idle,
+	TP_PROTO(struct task_struct *p, int task_cpu, unsigned long task_util, int nominated_cpu, int target_cpu, int ediff, bool need_idle),
+	TP_ARGS(p, task_cpu, task_util, nominated_cpu, target_cpu, ediff, need_idle)
+);
 #endif
 
 /*
+8 −0
@@ -7144,6 +7144,14 @@ static int energy_aware_wake_cpu(struct task_struct *p, int target, int sync)
 			return target_cpu;
 		}
 
+		if (need_idle) {
+			trace_sched_task_util_need_idle(p, task_cpu(p),
+						task_util(p),
+						target_cpu, target_cpu,
+						0, need_idle);
+			return target_cpu;
+		}
+
 		/*
 		 * We always want to migrate the task to the best CPU when
 		 * placement boost is active.