
Commit 8b1a1ce1 authored by Joonwoo Park, committed by Todd Kjos

sched: WALT: fix broken cumulative runnable average accounting



When a running task's ravg.demand changes, update_history() adjusts
rq->cumulative_runnable_avg to reflect the change in CPU load.  Currently
this fixup is broken: it accumulates the task's new demand without
subtracting the task's old demand.

Fix the fixup logic to subtract the task's old demand.

Change-Id: I61beb32a4850879ccb39b733f5564251e465bfeb
Signed-off-by: Joonwoo Park <joonwoop@codeaurora.org>
(cherry picked from commit 48f67ea85de468a9b3e47e723e7681cf7771dea6)
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
parent 241a319a
+3 −1
@@ -112,8 +112,10 @@ walt_dec_cumulative_runnable_avg(struct rq *rq,
 
 static void
 fixup_cumulative_runnable_avg(struct rq *rq,
-			      struct task_struct *p, s64 task_load_delta)
+			      struct task_struct *p, u64 new_task_load)
 {
+	s64 task_load_delta = (s64)new_task_load - task_load(p);
+
 	rq->cumulative_runnable_avg += task_load_delta;
 	if ((s64)rq->cumulative_runnable_avg < 0)
 		panic("cra less than zero: tld: %lld, task_load(p) = %u\n",