
Commit 939013b8 authored by Pavankumar Kondeti's avatar Pavankumar Kondeti Committed by Lingutla Chandrasekhar

sched/fair: Optimize the tick path active migration



When a task is upmigrated via the tick path, the lower capacity CPU
that is running the task wakes up the migration task to carry out
the migration to the higher capacity CPU. The migration task dequeues
the task from the lower capacity CPU and enqueues it on the higher
capacity CPU, and only then is a reschedule IPI sent to the higher
capacity CPU. If the higher capacity CPU was in a deep sleep state,
this adds to the time the task waits to be upmigrated. This can be
optimized by waking up the higher capacity CPU at the same time the
migration task is woken on the lower capacity CPU. Since we reserve
the higher capacity CPU, the is_reserved() API can be used to prevent
the CPU from entering idle again.

Change-Id: I7bda9a905a66a9326c1dc74e50fa94eb58e6b705
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
[clingutla@codeaurora.org: Resolved minor merge conflicts]
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
parent 0fe1d128
+6 −1
@@ -12531,6 +12531,7 @@ void check_for_migration(struct rq *rq, struct task_struct *p)
 	int active_balance;
 	int new_cpu = -1;
 	int prev_cpu = task_cpu(p);
+	int ret;
 
 	if (rq->misfit_task_load) {
 		if (rq->curr->state != TASK_RUNNING ||
@@ -12550,9 +12551,13 @@ void check_for_migration(struct rq *rq, struct task_struct *p)
 			if (active_balance) {
 				mark_reserved(new_cpu);
 				raw_spin_unlock(&migration_lock);
-				stop_one_cpu_nowait(prev_cpu,
+				ret = stop_one_cpu_nowait(prev_cpu,
 					active_load_balance_cpu_stop, rq,
 					&rq->active_balance_work);
+				if (!ret)
+					clear_reserved(new_cpu);
+				else
+					wake_up_if_idle(new_cpu);
 				return;
 			}
 		} else {
+4 −2
@@ -60,7 +60,8 @@ static noinline int __cpuidle cpu_idle_poll(void)
 	stop_critical_timings();
 
 	while (!tif_need_resched() &&
-		(cpu_idle_force_poll || tick_check_broadcast_expired()))
+		(cpu_idle_force_poll || tick_check_broadcast_expired() ||
+		is_reserved(smp_processor_id())))
 		cpu_relax();
 	start_critical_timings();
 	trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, smp_processor_id());
@@ -256,7 +257,8 @@ static void do_idle(void)
 		 * broadcast device expired for us, we don't want to go deep
 		 * idle as we know that the IPI is going to arrive right away.
 		 */
-		if (cpu_idle_force_poll || tick_check_broadcast_expired()) {
+		if (cpu_idle_force_poll || tick_check_broadcast_expired() ||
+				is_reserved(smp_processor_id())) {
			tick_nohz_idle_restart_tick();
			cpu_idle_poll();
		} else {