
Commit e221d028 authored by Mike Galbraith, committed by Thomas Gleixner

sched,rt: fix isolated CPUs leaving root_task_group indefinitely throttled



Root task group bandwidth replenishment must service all CPUs, regardless of
where the timer was last started, and regardless of the isolation mechanism,
lest 'Quoth the Raven, "Nevermore"' become rt scheduling policy.

Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1344326558.6968.25.camel@marge.simpson.net


Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
parent 35cf4e50
+13 −0
@@ -788,6 +788,19 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
 	const struct cpumask *span;
 
 	span = sched_rt_period_mask();
+#ifdef CONFIG_RT_GROUP_SCHED
+	/*
+	 * FIXME: isolated CPUs should really leave the root task group,
+	 * whether they are isolcpus or were isolated via cpusets, lest
+	 * the timer run on a CPU which does not service all runqueues,
+	 * potentially leaving other CPUs indefinitely throttled.  If
+	 * isolation is really required, the user will turn the throttle
+	 * off to kill the perturbations it causes anyway.  Meanwhile,
+	 * this maintains functionality for boot and/or troubleshooting.
+	 */
+	if (rt_b == &root_task_group.rt_bandwidth)
+		span = cpu_online_mask;
+#endif
 	for_each_cpu(i, span) {
 		int enqueue = 0;
 		struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
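
For context: with CONFIG_RT_GROUP_SCHED enabled, sched_rt_period_mask() returns only the root-domain span of the CPU the replenishment timer happens to fire on, so runqueues of CPUs isolated into another (or no) root domain are never replenished and can stay throttled forever. The following is a minimal sketch of the pre-patch helper, paraphrased from kernel/sched/rt.c of this era, not part of the diff above:

/*
 * Sketch of the helper whose behavior this patch works around;
 * the real definitions live in kernel/sched/rt.c.
 */
#ifdef CONFIG_RT_GROUP_SCHED
static inline const struct cpumask *sched_rt_period_mask(void)
{
	/*
	 * Only the root domain of the CPU running the timer: isolated
	 * CPUs sit outside this span, so their rt_rqs miss bandwidth
	 * replenishment and remain throttled indefinitely.
	 */
	return cpu_rq(smp_processor_id())->rd->span;
}
#else
static inline const struct cpumask *sched_rt_period_mask(void)
{
	/* Without group scheduling, every online CPU is serviced. */
	return cpu_online_mask;
}
#endif

The "turn the throttle off" escape hatch mentioned in the FIXME refers to setting the sysctl kernel.sched_rt_runtime_us to -1, which disables RT bandwidth enforcement entirely.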