
Commit 8d53fa19 authored by Peter Zijlstra, committed by Ingo Molnar

locking/qspinlock: Clarify xchg_tail() ordering

While going over the code I noticed that xchg_tail() is a RELEASE but
had no obvious pairing commented.

It pairs with a somewhat unique address dependency through
decode_tail().

So the store-release of xchg_tail() is paired with the address
dependency formed by xchg_tail()'s load, followed by the dereference
of the pointer computed from that load.

The @old -> @prev transformation itself is pure, and therefore does
not depend on external state, so that is immaterial wrt. ordering.
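
To illustrate the pairing in portable code, here is a minimal userspace
sketch using C11 atomics. All names are hypothetical; memory_order_consume
stands in for the address dependency (the kernel spells it READ_ONCE()
plus smp_read_barrier_depends()), and a plain release store stands in for
the release half of xchg_tail():

#include <stdatomic.h>

struct node { int data; };

static struct node pool[4];
static _Atomic unsigned int tail;	/* 0 == empty, else index + 1 */

/* Writer: initialize the node, then publish it with a RELEASE store. */
void publish(int idx, int data)
{
	pool[idx].data = data;			/* plain store to the node */
	atomic_store_explicit(&tail, (unsigned int)idx + 1,
			      memory_order_release);
}

/* Reader: the dereference is computed from the loaded value, so the
 * address dependency (consume ordering) guarantees the writer's store
 * to pool[t - 1].data is visible before the dependent load. */
int peek(void)
{
	unsigned int t = atomic_load_explicit(&tail, memory_order_consume);

	if (!t)
		return -1;			/* nothing published yet */
	return pool[t - 1].data;		/* address-dependent load */
}

In practice compilers implement memory_order_consume as the strictly
stronger memory_order_acquire; the kernel instead relies on the raw
address dependency, which is free on every architecture except Alpha,
where smp_read_barrier_depends() emits a real barrier.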

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <waiman.long@hpe.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent ae0b5c2f
+13 −2
@@ -90,7 +90,7 @@ static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[MAX_NODES]);
  * therefore increment the cpu number by one.
  */
 
-static inline u32 encode_tail(int cpu, int idx)
+static inline __pure u32 encode_tail(int cpu, int idx)
 {
 	u32 tail;
 
@@ -103,7 +103,7 @@ static inline u32 encode_tail(int cpu, int idx)
 	return tail;
 }
 
-static inline struct mcs_spinlock *decode_tail(u32 tail)
+static inline __pure struct mcs_spinlock *decode_tail(u32 tail)
 {
 	int cpu = (tail >> _Q_TAIL_CPU_OFFSET) - 1;
 	int idx = (tail &  _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET;
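
An aside on the __pure annotations added above: in the kernel, __pure
expands to GCC's __attribute__((__pure__)), which promises that the
return value depends only on the arguments and on global memory the
function does not modify, so the compiler may fold repeated calls.
This is also why the commit message can dismiss the @old -> @prev
transformation as immaterial to ordering. A minimal standalone sketch,
with a hypothetical encode() standing in for encode_tail():

#define __pure __attribute__((__pure__))

/* Pure: no side effects, result determined by (cpu, idx) alone, so
 * two calls with the same arguments may be merged by the compiler.
 * The real bit layout lives in qspinlock_types.h; the shift here is
 * made up for illustration. */
static inline __pure unsigned int encode(int cpu, int idx)
{
	return ((unsigned int)(cpu + 1) << 2) | (unsigned int)idx;
}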
@@ -455,6 +455,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * pending stuff.
 	 *
 	 * p,*,* -> n,*,*
+	 *
+	 * RELEASE, such that the stores to @node must be complete.
 	 */
 	old = xchg_tail(lock, tail);
 	next = NULL;
@@ -465,6 +467,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 */
 	if (old & _Q_TAIL_MASK) {
 		prev = decode_tail(old);
+		/*
+		 * The above xchg_tail() is also a load of @lock which generates,
+		 * through decode_tail(), a pointer.
+		 *
+		 * The address dependency matches the RELEASE of xchg_tail()
+		 * such that the access to @prev must happen after.
+		 */
+		smp_read_barrier_depends();
 
 		WRITE_ONCE(prev->next, node);
 
 		pv_wait_node(node, prev);
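
Putting the two new comments together, here is a sketch of the tail
handoff itself, again as hypothetical portable C11. C11 cannot express
a dependency-ordered read half of a read-modify-write, so the exchange
below uses memory_order_acq_rel, which over-approximates the kernel's
release-xchg plus address dependency:

#include <stdatomic.h>
#include <stddef.h>

#define NR_NODES 64

struct mcs_node {
	struct mcs_node *_Atomic next;
	_Atomic int locked;
};

static struct mcs_node nodes[NR_NODES];
static _Atomic unsigned int queue_tail;	/* 0 == empty, else index + 1 */

void enqueue(unsigned int idx)
{
	struct mcs_node *node = &nodes[idx];
	unsigned int old;

	/* Stores that must be visible before anyone can reach us ... */
	atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
	atomic_store_explicit(&node->locked, 0, memory_order_relaxed);

	/*
	 * ... hence the release half of the exchange ("RELEASE, such
	 * that the stores to @node must be complete").  The acquire
	 * half stands in for the address dependency that orders the
	 * dereference of the previous tail below.
	 */
	old = atomic_exchange_explicit(&queue_tail, idx + 1,
				       memory_order_acq_rel);
	if (old) {
		struct mcs_node *prev = &nodes[old - 1];  /* decode_tail() */

		/*
		 * Link behind the previous owner.  Conservatively a
		 * release here; the kernel uses WRITE_ONCE() and relies
		 * on barriers in the spin/hand-off path elided from
		 * this sketch.
		 */
		atomic_store_explicit(&prev->next, node,
				      memory_order_release);
		/* ... then spin on node->locked, as the slowpath does. */
	}
}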