
Commit f8700df7 authored by Suresh Siddha, committed by Ingo Molnar

sched: fix broken SMT/MC optimizations



On a four-package system with HT, the HT load-balancing optimizations were
broken.  For example, if two tasks end up running on the two logical threads
of one of the packages, the scheduler is unable to pull one of the tasks
over to a completely idle package.

In this scenario, for nice-0 tasks, the imbalance calculated by the scheduler
will be 512 and find_busiest_queue() will return 0 (as each cpu's load of
1024 exceeds the imbalance, and each cpu has only one task running).

Similarly, the MC scheduler optimizations are also fixed by this patch.

[ mingo@elte.hu: restored fair balancing by increasing the fuzz and
                 adding it back to the power decision, without the /2
                 factor. ]

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent efe567fc
@@ -681,7 +681,7 @@ enum cpu_idle_type {
 #define SCHED_LOAD_SHIFT	10
 #define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)
 
-#define SCHED_LOAD_SCALE_FUZZ	(SCHED_LOAD_SCALE >> 1)
+#define SCHED_LOAD_SCALE_FUZZ	SCHED_LOAD_SCALE
 
 #ifdef CONFIG_SMP
 #define SD_LOAD_BALANCE		1	/* Do load balancing on this domain. */
@@ -2517,7 +2517,7 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 	 * a think about bumping its value to force at least one task to be
 	 * moved
 	 */
-	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task/2) {
+	if (*imbalance + SCHED_LOAD_SCALE_FUZZ < busiest_load_per_task) {
 		unsigned long tmp, pwr_now, pwr_move;
 		unsigned int imbn;