sched/fair: Tighten prefer_spread feature
This patch tightens the prefer_spread feature as follows.
(1) While picking the busiest group in update_sd_pick_busiest(),
if the current group and the busiest group have the same
classification, use the number of runnable tasks to break the
tie, with group load as the next tie breaker. Otherwise we may
end up selecting a group with more utilization but just 1 task.
(2) Ignore average load checks when the load balancing CPU is
idle and prefer_spread is set.
(3) Allow nohz idle balancing CPUs to pull tasks when the
sched domain is not over-utilized but prefer_spread is set.
(4) There are cases in calculate_imbalance() that skip the imbalance
override check, due to which tasks are not getting pulled. Move this
check outside of calculate_imbalance() and set the imbalance to
half of the group load.
(5) When the weighted CPU load is 0, find_busiest_queue() cannot
find the busiest rq. Fix this as well.
Change-Id: I93d1a62cbd4be34af993ae664a398aa868d29a0c
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>