
Commit 85b088e9 authored by Michal Nazarewicz, committed by Ingo Molnar

sched/fair: Avoid integer overflow



sa->runnable_avg_sum is of type u32, so shifting it left by NICE_0_SHIFT
bits is performed in 32-bit arithmetic; only afterwards is the result
promoted to u64 for div_u64().  By then, any bits shifted past bit 31 are
already lost, so the promoted value can never be more than 32 bits wide.
Casting sa->runnable_avg_sum to u64 before it is shifted fixes this
overflow.

Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1384112521-25177-1-git-send-email-mpn@google.com


Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 911b2898
+1 −1
@@ -2178,7 +2178,7 @@ static inline void __update_tg_runnable_avg(struct sched_avg *sa,
 	long contrib;
 
 	/* The fraction of a cpu used by this cfs_rq */
-	contrib = div_u64(sa->runnable_avg_sum << NICE_0_SHIFT,
+	contrib = div_u64((u64)sa->runnable_avg_sum << NICE_0_SHIFT,
 			  sa->runnable_avg_period + 1);
 	contrib -= cfs_rq->tg_runnable_contrib;