
Commit 7fd0d2dd authored by Suresh Siddha, committed by Ingo Molnar

sched: fix MC/HT scheduler optimization, without breaking the FUZZ logic.



First, fix the check
	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task)
by replacing it with
	if (*imbalance < busiest_load_per_task)

as the current check is always false for nice-0 tasks
(SCHED_LOAD_SCALE_FUZZ is the same as busiest_load_per_task for
nice-0 tasks).
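
To see why the old check can never fire, here is a minimal sketch
(user-space, not kernel code), assuming SCHED_LOAD_SCALE_FUZZ equals
SCHED_LOAD_SCALE (1024) and that busiest_load_per_task is also 1024
for nice-0 tasks, as stated above:

	#include <stdio.h>

	/* Assumed values, mirroring the 2.6.23-era constants. */
	#define SCHED_LOAD_SCALE      1024UL
	#define SCHED_LOAD_SCALE_FUZZ SCHED_LOAD_SCALE

	int main(void)
	{
		unsigned long busiest_load_per_task = SCHED_LOAD_SCALE;
		unsigned long imbalance;

		for (imbalance = 0; imbalance < 3; imbalance++) {
			/* Old: imbalance + 1024 < 1024 never holds for an
			 * unsigned imbalance, so the fix-up path is dead
			 * code for nice-0 tasks. */
			int old_check = imbalance + SCHED_LOAD_SCALE_FUZZ <
					busiest_load_per_task;
			/* New: fires whenever the computed imbalance is
			 * smaller than one task's load. */
			int new_check = imbalance < busiest_load_per_task;
			printf("imbalance=%lu old=%d new=%d\n",
			       imbalance, old_check, new_check);
		}
		return 0;
	}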

With the above change, imbalance was getting reset to 0 in the
corner-case condition, making the FUZZ logic fail. Fix this by not
corrupting the imbalance: change it only when the HT/MC optimization
is found to be needed.
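
A sketch of the control-flow change (the helper functions are
invented for illustration; pwr_now and pwr_move are the estimated
CPU power before and after a task move, as computed in
find_busiest_group()):

	/* Before: bailing out through out_balanced zeroed *imbalance,
	 * wiping the value the FUZZ logic had computed. */
	static void fixup_old(unsigned long *imbalance,
			      unsigned long pwr_now, unsigned long pwr_move,
			      unsigned long busiest_load_per_task)
	{
		if (pwr_move <= pwr_now) {
			*imbalance = 0;		/* out_balanced */
			return;
		}
		*imbalance = busiest_load_per_task;
	}

	/* After: *imbalance is written only when moving a task gains
	 * throughput; otherwise the earlier value survives intact. */
	static void fixup_new(unsigned long *imbalance,
			      unsigned long pwr_now, unsigned long pwr_move,
			      unsigned long busiest_load_per_task)
	{
		if (pwr_move > pwr_now)
			*imbalance = busiest_load_per_task;
	}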

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent b21010ed
kernel/sched.c +3 −5
@@ -2512,7 +2512,7 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
	 * a think about bumping its value to force at least one task to be
	 * moved
	 */
-	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {
+	if (*imbalance < busiest_load_per_task) {
		unsigned long tmp, pwr_now, pwr_move;
		unsigned int imbn;

@@ -2564,9 +2564,7 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
		pwr_move /= SCHED_LOAD_SCALE;

		/* Move if we gain throughput */
-		if (pwr_move <= pwr_now)
-			goto out_balanced;
-
-		*imbalance = busiest_load_per_task;
+		if (pwr_move > pwr_now)
+			*imbalance = busiest_load_per_task;
	}