
Commit 0e9867b7 authored by Heiko Carstens, committed by Greg Kroah-Hartman

s390/ctl_reg: make __ctl_load a full memory barrier




[ Upstream commit e991c24d68b8c0ba297eeb7af80b1e398e98c33f ]

We have quite a lot of code that depends on the order of the
__ctl_load inline assembly and subsequent memory accesses, like
e.g. disabling lowcore protection and the writing to lowcore.

Since the __ctl_load macro does not have memory barrier semantics, nor
any other dependencies, the compiler is, theoretically, free to shuffle
code around. Or in other words: storing to lowcore could happen before
lowcore protection is disabled.

In order to avoid this class of potential bugs simply add a full
memory barrier to the __ctl_load macro.
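
For background, here is a minimal sketch (not taken from the kernel tree;
the variable and function names are made up) of why the "memory" clobber
matters: GCC treats an asm statement without it as not touching memory, so
surrounding loads and stores may be scheduled across it, whereas the
clobber turns the asm into a compiler barrier. The code is s390-specific
and privileged, so it only illustrates the compile-time ordering issue.

/* Hypothetical illustration of the hazard described above. */
static unsigned long cr0_image;		/* hypothetical CR0 image to load */
static unsigned long lowcore_field;	/* hypothetical lowcore slot */

static void without_barrier(void)
{
	/*
	 * No "memory" clobber: the compiler may move the store below
	 * ahead of the lctlg, i.e. write lowcore before protection is
	 * disabled.
	 */
	asm volatile("lctlg 0,0,%0" : : "Q" (cr0_image));
	lowcore_field = 1;
}

static void with_barrier(void)
{
	/*
	 * "memory" clobber: memory accesses must stay on their side of
	 * the asm statement, so the store happens after the lctlg.
	 */
	asm volatile("lctlg 0,0,%0" : : "Q" (cr0_image) : "memory");
	lowcore_field = 1;
}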

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent 9d00195b
arch/s390/include/asm/ctl_reg.h  +3 −1
@@ -15,7 +15,9 @@
 	BUILD_BUG_ON(sizeof(addrtype) != (high - low + 1) * sizeof(long));\
 	asm volatile(							\
 		"	lctlg	%1,%2,%0\n"				\
-		: : "Q" (*(addrtype *)(&array)), "i" (low), "i" (high));\
+		:							\
+		: "Q" (*(addrtype *)(&array)), "i" (low), "i" (high)	\
+		: "memory");						\
 }
 
 #define __ctl_store(array, low, high) {					\
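
For completeness, a caller-side sketch of the pattern the commit message
refers to: save control register 0, clear a protection bit, write to
lowcore, restore. The field name, the helper function, and the bit mask
below are hypothetical placeholders, not the real kernel code; the point
is only that, with the "memory" clobber added by this patch, the compiler
can no longer move the lowcore store ahead of the first __ctl_load.

#include <asm/ctl_reg.h>	/* __ctl_store() / __ctl_load() */

#define HYPOTHETICAL_PROT_BIT	(1UL << 28)	/* placeholder, not the real bit */

extern unsigned long some_lowcore_field;	/* hypothetical lowcore slot */

static void write_protected_lowcore(unsigned long value)
{
	unsigned long cr0_old, cr0_new;

	__ctl_store(cr0_old, 0, 0);		/* save control register 0 */
	cr0_new = cr0_old & ~HYPOTHETICAL_PROT_BIT;
	__ctl_load(cr0_new, 0, 0);		/* now also a compiler barrier */
	some_lowcore_field = value;		/* stays after protection is dropped */
	__ctl_load(cr0_old, 0, 0);		/* restore protection */
}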