
Commit b22366cd authored by Frederic Weisbecker

context_tracking: Restore preempted context state after preempt_schedule_irq()



From the context tracking POV, preempt_schedule_irq() behaves pretty much
like an exception: it can be called at any time and schedule another task.

But it currently doesn't restore the context tracking state of the preempted
code when preempt_schedule_irq() returns.

As a result, if preempt_schedule_irq() is called in the tiny frame between
user_enter() and the actual return to userspace, we resume userspace with
the wrong context tracking state.

Fix this by using exception_enter()/exception_exit(), which are a perfect fit
for this kind of issue.
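
For reference, exception_enter()/exception_exit() come from the parent commit (6c1e0256). Roughly paraphrased (a simplified sketch, not a verbatim copy of include/linux/context_tracking.h), they save the preempted context's tracking state and restore exactly that state on the way out:

static inline enum ctx_state exception_enter(void)
{
	enum ctx_state prev_ctx;

	/* Remember whether the interrupted code was tracked as user or kernel. */
	prev_ctx = this_cpu_read(context_tracking.state);
	user_exit();

	return prev_ctx;
}

static inline void exception_exit(enum ctx_state prev_ctx)
{
	/* Re-enter user context only if that is what was preempted. */
	if (prev_ctx == IN_USER)
		user_enter();
}

Unlike the unconditional user_exit() this patch removes, the pair hands control back with exactly the context tracking state it found, whether the IRQ landed on kernel code or in the pre-return-to-user window described above.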

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Li Zhong <zhong@linux.vnet.ibm.com>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Mats Liljegren <mats.liljegren@enea.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Namhyung Kim <namhyung.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
parent 6c1e0256
+5 −1
@@ -3082,11 +3082,13 @@ EXPORT_SYMBOL(preempt_schedule);
 asmlinkage void __sched preempt_schedule_irq(void)
 {
 	struct thread_info *ti = current_thread_info();
+	enum ctx_state prev_state;
 
 	/* Catch callers which need to be fixed */
 	BUG_ON(ti->preempt_count || !irqs_disabled());
 
-	user_exit();
+	prev_state = exception_enter();
+
 	do {
 		add_preempt_count(PREEMPT_ACTIVE);
 		local_irq_enable();
@@ -3100,6 +3102,8 @@ asmlinkage void __sched preempt_schedule_irq(void)
 		 */
 		barrier();
 	} while (need_resched());
+
+	exception_exit(prev_state);
 }
 
 #endif /* CONFIG_PREEMPT */
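
To see why restoring the preempted state matters, here is a stand-alone user-space sketch of the scenario from the changelog. The names (ctx_state, user_enter(), exception_enter(), ...) only mimic the kernel's context tracking API and the "IRQ" is a plain function call, so treat it as an illustration rather than kernel code:

#include <stdio.h>

enum ctx_state { IN_KERNEL, IN_USER };

/* Stand-in for the per-CPU context tracking state. */
static enum ctx_state cur_state = IN_KERNEL;

static void user_enter(void) { cur_state = IN_USER; }
static void user_exit(void)  { cur_state = IN_KERNEL; }

/* Save the preempted state, then account the CPU as running kernel code. */
static enum ctx_state exception_enter(void)
{
	enum ctx_state prev = cur_state;

	user_exit();
	return prev;
}

/* Restore exactly the state that was active before the interruption. */
static void exception_exit(enum ctx_state prev)
{
	if (prev == IN_USER)
		user_enter();
}

/* Mimics preempt_schedule_irq() after this patch. */
static void preempt_schedule_irq_sim(void)
{
	enum ctx_state prev_state = exception_enter();

	/* ... the scheduler would pick and run another task here ... */

	exception_exit(prev_state);
}

int main(void)
{
	user_enter();               /* context tracking already flipped to user */
	preempt_schedule_irq_sim(); /* "IRQ" preempts us before the actual return */

	/*
	 * With the save/restore pair we end up back in IN_USER.  With the old
	 * unconditional user_exit() we would resume user space while still
	 * accounted as IN_KERNEL.
	 */
	printf("state on return to user space: %s\n",
	       cur_state == IN_USER ? "IN_USER" : "IN_KERNEL");
	return 0;
}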