
Commit 6016ffc3 authored by Paul E. McKenney

atomics: Add header comment for spin_unlock_wait()



There is material describing the ordering guarantees provided by
spin_unlock_wait(), but it is not necessarily easy to find.  This commit
therefore adds a docbook header comment to this function informally
describing its semantics.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
parent 79269ee3
+20 −0
@@ -369,6 +369,26 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
})

/**
 * spin_unlock_wait - Interpose between successive critical sections
 * @lock: the spinlock whose critical sections are to be interposed.
 *
 * Semantically this is equivalent to a spin_lock() immediately
 * followed by a spin_unlock().  However, most architectures have
 * more efficient implementations in which the spin_unlock_wait()
 * cannot block concurrent lock acquisition, and in some cases
 * where spin_unlock_wait() does not write to the lock variable.
 * Nevertheless, spin_unlock_wait() can have high overhead, so if
 * you feel the need to use it, please check to see if there is
 * a better way to get your job done.
 *
 * The ordering guarantees provided by spin_unlock_wait() are:
 *
 * 1.  All accesses preceding the spin_unlock_wait() happen before
 *     any accesses in later critical sections for this same lock.
 * 2.  All accesses following the spin_unlock_wait() happen after
 *     any accesses in earlier critical sections for this same lock.
 */
static __always_inline void spin_unlock_wait(spinlock_t *lock)
{
	raw_spin_unlock_wait(&lock->rlock);
}