
Commit 1c3a4e61 authored by Paul E. McKenney, committed by Greg Kroah-Hartman

sched: Make resched_cpu() unconditional

commit 7c2102e56a3f7d85b5d8f33efbd7aecc1f36fdd8 upstream.

The current implementation of synchronize_sched_expedited() incorrectly
assumes that resched_cpu() is unconditional, which it is not.  This means
that synchronize_sched_expedited() can hang when resched_cpu()'s trylock
fails, as follows (analysis by Neeraj Upadhyay; a simplified sketch of the
failure mode follows the list):

o	CPU1 is waiting for the expedited grace period to complete:

	sync_rcu_exp_select_cpus
	     rdp->exp_dynticks_snap & 0x1   // returns 1 for CPU5
	     IPI sent to CPU5

	synchronize_sched_expedited_wait
		 ret = swait_event_timeout(rsp->expedited_wq,
					   sync_rcu_preempt_exp_done(rnp_root),
					   jiffies_stall);

	expmask = 0x20, CPU 5 in idle path (in cpuidle_enter())

o	CPU5 handles IPI and fails to acquire rq lock.

	Handles IPI
	     sync_sched_exp_handler
		 resched_cpu
		     returns after failing to acquire rq->lock via trylock
		 need_resched is not set

o	CPU5 calls rcu_idle_enter() and, since need_resched is not set, goes
	idle (schedule() is not called).

o	CPU1 reports an RCU stall.
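
The failure mode above is a lost wakeup: a trylock that fails under
contention silently turns the reschedule request into a no-op.  A minimal
userspace sketch of the two behaviors (a hypothetical, simplified model,
not kernel code: pthread_mutex_t stands in for rq->lock and a plain flag
for the need_resched bit):

	/* Simplified model of resched_cpu() before and after the fix.
	 * Hypothetical sketch: the real kernel code uses a raw spinlock
	 * with IRQs disabled and resched_curr(), not pthreads. */
	#include <pthread.h>
	#include <stdbool.h>

	struct rq {
		pthread_mutex_t lock;	/* stands in for rq->lock */
		bool need_resched;	/* stands in for TIF_NEED_RESCHED */
	};

	/* Pre-fix: if another CPU holds rq->lock, the trylock fails and
	 * need_resched is never set, so the target CPU can go idle. */
	static void resched_cpu_trylock(struct rq *rq)
	{
		if (pthread_mutex_trylock(&rq->lock) != 0)
			return;			/* silent no-op under contention */
		rq->need_resched = true;	/* resched_curr(rq) equivalent */
		pthread_mutex_unlock(&rq->lock);
	}

	/* Post-fix: block until the lock is available, so the reschedule
	 * request cannot be lost. */
	static void resched_cpu_unconditional(struct rq *rq)
	{
		pthread_mutex_lock(&rq->lock);
		rq->need_resched = true;
		pthread_mutex_unlock(&rq->lock);
	}

With the trylock variant, the window in the "CPU5 handles IPI" step above
is exactly a contended lock; the blocking variant closes that window at
the cost of briefly waiting for rq->lock.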

Given that resched_cpu() is now used only by RCU, this commit fixes the
assumption by making resched_cpu() unconditional.
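
For reference, this is resched_cpu() after the change, reconstructed
directly from the one-hunk diff below (the function lives in
kernel/sched/core.c):

	void resched_cpu(int cpu)
	{
		struct rq *rq = cpu_rq(cpu);
		unsigned long flags;

		raw_spin_lock_irqsave(&rq->lock, flags);
		resched_curr(rq);
		raw_spin_unlock_irqrestore(&rq->lock, flags);
	}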

Reported-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Suggested-by: Neeraj Upadhyay <neeraju@codeaurora.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 7982b5ed
kernel/sched/core.c: +1 −2
@@ -632,8 +632,7 @@ void resched_cpu(int cpu)
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long flags;
 
-	if (!raw_spin_trylock_irqsave(&rq->lock, flags))
-		return;
+	raw_spin_lock_irqsave(&rq->lock, flags);
 	resched_curr(rq);
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }