Commit d1aff798 authored by Satya Durga Srinivasu Prabhala, committed by Pavankumar Kondeti

sched/fair: improve placement logic



When all the cpus in the biggest cluster are isolated, a task that
should have run there, i.e. one with very high demand, ends up finding
no cluster able to accept it. find_best_target() returns -1 and,
since the prev cpu is isolated, the code relies on the fallback scheme
to find a cpu for it. The fallback scheme doesn't select the best
capacity cpu, and this leads to bad performance.

There is also another related problem where we could have suboptimal
placement when the cluster on which the task should be placed is busy.
For example, when a high demand task wakes up and all the cores in
its fitting cluster are busy, it gets placed on its prev cpu. There
might be a better cpu, one with more spare capacity than prev_cpu.

Both the above issues stem from the way the groups are traversed. We
always start with the smallest capacity group and go towards the highest
capacity group, while we skip groups that don't meet the demand or
preferred criteria. We lose the opportunity to note the best cpus
in the skipped groups, which could have been used if the eligible groups
are busy.

So to address this issue, we need to curb skipping the groups and also
introduce tracking the cpu with the most spare capacity. Not skipping a
smaller capacity group (a group that doesn't meet the demand or preferred
criteria) may end up setting the placement target to one of those. Hence
we need to start with the group that best accommodates the task demand
and its preferred group, and visit the smaller capacity groups only after
the starting group and higher capacity groups couldn't fit the task. If
a cpu is found, indicated by target_capacity != ULONG_MAX, we skip
visiting any more groups.

Now, if a task couldn't be placed on any of the cpus in any of the
groups, indicated by target_cpu == -1, we place it on the cpu with the
most spare capacity. So in the above problems, where the best fitting
group or higher couldn't accommodate it, we would end up using
the cpu with the most spare capacity.

Note that, if no cpu with any spare capacity is found, we could still
end up using prev_cpu, and if prev_cpu is isolated we could end up
employing the fallback mechanism.

There are a few more quirks that need to be addressed with the above
scheme:
* We reset target_capacity to ULONG_MAX while under placement
  boost. This was done in order to search higher capacity clusters even
  when a target was found, with the assumption that we always traverse
  towards higher capacity clusters. This is no longer the case, since
  we could traverse low capacity clusters now, so reset
  target_capacity only as long as we haven't reached the highest
  capacity cpus.
* While we are in active migration, i.e. p->state == TASK_RUNNING, we
  limit the target to idle cpus. The cpu with the most spare capacity
  may not necessarily be idle, so ensure that we don't put an actively
  migrated task there unless it's idle.

Change-Id: I8f67411e2a8015ef1a996875dac95a5a43c8e1a7
Signed-off-by: Abhijeet Dharmapurikar <adharmap@codeaurora.org>
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
[clingutla@codeaurora.org : Resolved trivial merge conflicts]
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
parent a32e0660