
Commit b316ff78 authored by Peter Zijlstra, committed by Ingo Molnar

locking/spinlock, netfilter: Fix nf_conntrack_lock() barriers



Even with spin_unlock_wait() fixed, nf_conntrack_lock{,_all}() is
broken: it misses a number of memory barriers needed to order the
whole global vs. local locks scheme.

Even x86 (and other TSO archs) are affected.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[ Updated the comments. ]
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent be3e7844
+22 −1
@@ -83,6 +83,13 @@ void nf_conntrack_lock(spinlock_t *lock) __acquires(lock)
 	spin_lock(lock);
 	while (unlikely(nf_conntrack_locks_all)) {
 		spin_unlock(lock);
+
+		/*
+		 * Order the 'nf_conntrack_locks_all' load vs. the
+		 * spin_unlock_wait() loads below, to ensure
+		 * that 'nf_conntrack_locks_all_lock' is indeed held:
+		 */
+		smp_rmb(); /* spin_lock(&nf_conntrack_locks_all_lock) */
 		spin_unlock_wait(&nf_conntrack_locks_all_lock);
 		spin_lock(lock);
 	}
@@ -128,6 +135,14 @@ static void nf_conntrack_all_lock(void)
 	spin_lock(&nf_conntrack_locks_all_lock);
 	nf_conntrack_locks_all = true;
+
+	/*
+	 * Order the above store of 'nf_conntrack_locks_all' against
+	 * the spin_unlock_wait() loads below, such that if
+	 * nf_conntrack_lock() observes 'nf_conntrack_locks_all'
+	 * we must observe nf_conntrack_locks[] held:
+	 */
+	smp_mb(); /* spin_lock(&nf_conntrack_locks_all_lock) */
+
 	for (i = 0; i < CONNTRACK_LOCKS; i++) {
 		spin_unlock_wait(&nf_conntrack_locks[i]);
 	}
@@ -135,7 +150,13 @@ static void nf_conntrack_all_lock(void)
 
 static void nf_conntrack_all_unlock(void)
 {
-	nf_conntrack_locks_all = false;
+	/*
+	 * All prior stores must be complete before we clear
+	 * 'nf_conntrack_locks_all'. Otherwise nf_conntrack_lock()
+	 * might observe the false value but not the entire
+	 * critical section:
+	 */
+	smp_store_release(&nf_conntrack_locks_all, false);
 	spin_unlock(&nf_conntrack_locks_all_lock);
 }