
Commit 2e636d5e authored by Peter Zijlstra, committed by Ingo Molnar

sched/preempt: Fix preempt_count manipulations



Vikram reported that his ARM64 compiler managed to 'optimize' away the
preempt_count manipulations in code like:

	preempt_enable_no_resched();
	put_user();
	preempt_disable();

Irrespective of the fact that this is horrible code that should be
fixed for many reasons, it does highlight a deficiency in the generic
preempt_count manipulators: it is never right to combine/elide
preempt_count manipulations like this.
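
As a standalone illustration (hypothetical names, not kernel code): with
a plain, non-volatile counter nothing tells the compiler that the count
is observed asynchronously, so it may cancel the decrement/increment
pair and the code in between never actually runs with the count lowered.

	/* Hypothetical sketch, not kernel code: 'count' is a plain int, so the
	 * compiler is allowed to fold the -1/+1 pair away entirely.
	 */
	static int count = 1;

	static inline void count_lower(void) { count -= 1; }	/* cf. preempt_enable_no_resched() */
	static inline void count_raise(void) { count += 1; }	/* cf. preempt_disable() */

	void window(void)
	{
		count_lower();
		/* work that must see the lowered count, e.g. put_user() */
		count_raise();	/* both updates may be elided */
	}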

Therefore sprinkle some volatile in the two generic accessors to
ensure the compiler is aware of the fact that the preempt_count is
observed outside of the regular program-order view and thus cannot be
optimized away like this.
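
For context, the generic increment/decrement helpers go through the same
accessor, so once preempt_count_ptr() returns a volatile pointer (and the
read side uses READ_ONCE()) every access has to be emitted in program
order; roughly, paraphrasing the generic preempt header:

	static __always_inline void __preempt_count_add(int val)
	{
		*preempt_count_ptr() += val;	/* volatile access, cannot be merged or dropped */
	}

	static __always_inline void __preempt_count_sub(int val)
	{
		*preempt_count_ptr() -= val;	/* likewise emitted as written */
	}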

x86, the only arch not using the generic code, is not affected, as we
do all this in asm in order to use the segment base per-cpu stuff.
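
(For reference, a rough sketch of the x86 variant, paraphrased from its
asm/preempt.h: the count sits in a per-cpu variable and is modified
through the percpu asm accessors, so the compiler never sees a plain C
read-modify-write it could fold.)

	/* Rough sketch of the x86 flavour (paraphrased, details may differ): */
	static __always_inline void __preempt_count_add(int val)
	{
		raw_cpu_add_4(__preempt_count, val);	/* %gs-relative addl, done in asm */
	}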

Reported-by: Vikram Mulukutla <markivx@codeaurora.org>
Tested-by: Vikram Mulukutla <markivx@codeaurora.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: a7878709 ("sched, arch: Create asm/preempt.h")
Link: http://lkml.kernel.org/r/20160516131751.GH3205@twins.programming.kicks-ass.net


Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent bc231d9e
@@ -7,10 +7,10 @@
 
 static __always_inline int preempt_count(void)
 {
-	return current_thread_info()->preempt_count;
+	return READ_ONCE(current_thread_info()->preempt_count);
 }
 
-static __always_inline int *preempt_count_ptr(void)
+static __always_inline volatile int *preempt_count_ptr(void)
 {
 	return &current_thread_info()->preempt_count;
 }