
Commit ec888d46 authored by Olav Haugan, committed by Joel Fernandes

sched: Update task->on_rq when tasks are moving between runqueues



Task->on_rq has three states:
	0 - Task is not on runqueue (rq)
	1 (TASK_ON_RQ_QUEUED) - Task is on rq
	2 (TASK_ON_RQ_MIGRATING) - Task is on rq but in the
	process of being migrated to another rq

When a task is moving between rqs, task->on_rq should be
TASK_ON_RQ_MIGRATING so that WALT accounts the rq's cumulative
runnable average correctly.  Without this state marking in all the
scheduling classes, WALT's update_history() would try to fix up a
task's demand that was never contributed to any CPU during migration.

Change-Id: Iced3428f3924fe8ab5d0075698273ead04f12d5b
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
[joonwoop: Reinforced changelog to explain why this is needed by WALT.
           Fixed conflicts in deadline.c]
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
parent 8526e9f0
kernel/sched/core.c  +2 −0
@@ -1331,7 +1331,9 @@ static void __migrate_swap_task(struct task_struct *p, int cpu)
 		dst_rq = cpu_rq(cpu);
 
 		deactivate_task(src_rq, p, 0);
+		p->on_rq = TASK_ON_RQ_MIGRATING;
 		set_task_cpu(p, cpu);
+		p->on_rq = TASK_ON_RQ_QUEUED;
 		activate_task(dst_rq, p, 0);
 		check_preempt_curr(dst_rq, p, 0);
 	} else {
kernel/sched/deadline.c  +4 −0
@@ -1587,7 +1587,9 @@ retry:
 
 	deactivate_task(rq, next_task, 0);
 	clear_average_bw(&next_task->dl, &rq->dl);
+	next_task->on_rq = TASK_ON_RQ_MIGRATING;
 	set_task_cpu(next_task, later_rq->cpu);
+	next_task->on_rq = TASK_ON_RQ_QUEUED;
 	add_average_bw(&next_task->dl, &later_rq->dl);
 	activate_task(later_rq, next_task, 0);
 	ret = 1;
@@ -1677,7 +1679,9 @@ static void pull_dl_task(struct rq *this_rq)
 
 			deactivate_task(src_rq, p, 0);
 			clear_average_bw(&p->dl, &src_rq->dl);
+			p->on_rq = TASK_ON_RQ_MIGRATING;
 			set_task_cpu(p, this_cpu);
+			p->on_rq = TASK_ON_RQ_QUEUED;
 			add_average_bw(&p->dl, &this_rq->dl);
 			activate_task(this_rq, p, 0);
 			dmin = p->dl.deadline;
kernel/sched/rt.c  +4 −0
@@ -1882,7 +1882,9 @@ retry:
 	}
 
 	deactivate_task(rq, next_task, 0);
+	next_task->on_rq = TASK_ON_RQ_MIGRATING;
 	set_task_cpu(next_task, lowest_rq->cpu);
+	next_task->on_rq = TASK_ON_RQ_QUEUED;
 	activate_task(lowest_rq, next_task, 0);
 	ret = 1;
 
@@ -2136,7 +2138,9 @@ static void pull_rt_task(struct rq *this_rq)
 			resched = true;
 
 			deactivate_task(src_rq, p, 0);
+			p->on_rq = TASK_ON_RQ_MIGRATING;
 			set_task_cpu(p, this_cpu);
+			p->on_rq = TASK_ON_RQ_QUEUED;
 			activate_task(this_rq, p, 0);
 			/*
 			 * We continue with the search, just in