
Commit 870a0bb5 authored by Peter Zijlstra, committed by Ingo Molnar

sched/numa: Don't scale the imbalance



It's far too easy to get a ridiculously large imbalance pct when you
scale it like that. Use a fixed 125% for now.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-zsriaft1dv7hhboyrpvqjy6s@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 04f733b4
+1 −6
@@ -6261,11 +6261,6 @@ static int *sched_domains_numa_distance;
 static struct cpumask ***sched_domains_numa_masks;
 static int sched_domains_curr_level;
 
-static inline unsigned long numa_scale(unsigned long x, int level)
-{
-	return x * sched_domains_numa_distance[level] / sched_domains_numa_scale;
-}
-
 static inline int sd_local_flags(int level)
 {
 	if (sched_domains_numa_distance[level] > REMOTE_DISTANCE)
@@ -6286,7 +6281,7 @@ sd_numa_init(struct sched_domain_topology_level *tl, int cpu)
 		.min_interval		= sd_weight,
 		.max_interval		= 2*sd_weight,
 		.busy_factor		= 32,
-		.imbalance_pct		= 100 + numa_scale(25, level),
+		.imbalance_pct		= 125,
 		.cache_nice_tries	= 2,
 		.busy_idx		= 3,
 		.idle_idx		= 2,