
Commit 8e7fbcbc authored by Peter Zijlstra, committed by Ingo Molnar

sched: Remove stale power aware scheduling remnants and dysfunctional knobs



It's been broken forever (i.e. it's not scheduling in a power
aware fashion), as reported by Suresh and others sending
patches, and nobody cares enough to fix it properly ...
so remove it to free up space for something better.

There are various problems with the code as it stands today,
first and foremost the user interface, which is bound to
topology levels and has multiple values per level. This results
in a state explosion which the administrator or distro needs to
master, and almost nobody does.

Furthermore, large configuration state spaces aren't good: they
mean the thing doesn't just work right, because either it's
under so many impossible-to-meet constraints, or, even if an
achievable state exists, workloads have to know it precisely
and dynamic workloads can never meet it.

So pushing this kind of decision to user-space was a bad idea
even with a single knob - it's exponentially worse with knobs
on every node of the topology.

There is a proposal to replace the user interface with a single
3 state knob:

 sched_balance_policy := { performance, power, auto }

where 'auto' would be the preferred default, which looks at
things like battery/AC mode and possible cpufreq state, or
whatever the hw exposes to show us power use expectations; but
there's been no progress on it in many months.

Aside from that, the actual implementation of the various knobs
is known to be broken. There have been sporadic attempts at
fixing things, but these always stop short of reaching a
mergeable state.

Therefore, remove it wholesale, in the hope of spurring people
who care to come forward once again and work on a coherent
replacement.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1326104915.2442.53.camel@twins


Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent fac536f7
+0 −25
@@ -9,31 +9,6 @@ Description:

		/sys/devices/system/cpu/cpu#/

What:		/sys/devices/system/cpu/sched_mc_power_savings
		/sys/devices/system/cpu/sched_smt_power_savings
Date:		June 2006
Contact:	Linux kernel mailing list <linux-kernel@vger.kernel.org>
Description:	Discover and adjust the kernel's multi-core scheduler support.

		Possible values are:

		0 - No power saving load balance (default value)
		1 - Fill one thread/core/package first for long running threads
		2 - Also bias task wakeups to semi-idle cpu package for power
		    savings

		sched_mc_power_savings is dependent upon SCHED_MC, which is
		itself architecture dependent.

		sched_smt_power_savings is dependent upon SCHED_SMT, which
		is itself architecture dependent.

		The two files are independent of each other. It is possible
		that one file may be present without the other.

		Introduced by git commit 5c45bf27.


What:		/sys/devices/system/cpu/kernel_max
		/sys/devices/system/cpu/offline
		/sys/devices/system/cpu/online
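The value semantics documented in the removed ABI entry above can be summarized in a small sketch; the `describe_power_savings` helper is a made-up name for illustration, and the sysfs files themselves no longer exist after this commit:

```c
#include <assert.h>
#include <string.h>

/* Illustrative mapping of the removed sched_mc_power_savings /
 * sched_smt_power_savings values to their documented meanings.
 * describe_power_savings() is an invented helper, not kernel code. */
static const char *describe_power_savings(int val)
{
	switch (val) {
	case 0: return "no power-saving load balance (default)";
	case 1: return "fill one thread/core/package first";
	case 2: return "also bias wakeups to a semi-idle package";
	default: return "invalid";
	}
}
```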
+0 −4
@@ -61,10 +61,6 @@ The implementor should read comments in include/linux/sched.h:
struct sched_domain fields, SD_FLAG_*, SD_*_INIT to get an idea of
the specifics and what to tune.

For SMT, the architecture must define CONFIG_SCHED_SMT and provide a
cpumask_t cpu_sibling_map[NR_CPUS], where cpu_sibling_map[i] is the mask of
all "i"'s siblings as well as "i" itself.

Architectures may retain the regular override the default SD_*_INIT flags
while using the generic domain builder in kernel/sched.c if they wish to
retain the traditional SMT->SMP->NUMA topology (or some subset of that). This
+1 −2
@@ -429,8 +429,7 @@ const struct cpumask *cpu_coregroup_mask(int cpu)
	 * For perf, we return last level cache shared map.
	 * And for power savings, we return cpu_core_map
	 */
	if ((sched_mc_power_savings || sched_smt_power_savings) &&
	    !(cpu_has(c, X86_FEATURE_AMD_DCM)))
	if (!(cpu_has(c, X86_FEATURE_AMD_DCM)))
		return cpu_core_mask(cpu);
	else
		return cpu_llc_shared_mask(cpu);
+0 −4
@@ -330,8 +330,4 @@ void __init cpu_dev_init(void)
		panic("Failed to register CPU subsystem");

	cpu_dev_register_generic();

#if defined(CONFIG_SCHED_MC) || defined(CONFIG_SCHED_SMT)
	sched_create_sysfs_power_savings_entries(cpu_subsys.dev_root);
#endif
}
+0 −2
@@ -36,8 +36,6 @@ extern void cpu_remove_dev_attr(struct device_attribute *attr);
extern int cpu_add_dev_attr_group(struct attribute_group *attrs);
extern void cpu_remove_dev_attr_group(struct attribute_group *attrs);

extern int sched_create_sysfs_power_savings_entries(struct device *dev);

#ifdef CONFIG_HOTPLUG_CPU
extern void unregister_cpu(struct cpu *cpu);
extern ssize_t arch_cpu_probe(const char *, size_t);