
Commit e4f9bfb3 authored by Peter Zijlstra, committed by Ingo Molnar

ia64: Fix up smp_mb__{before,after}_clear_bit()



IA64 doesn't actually have acquire/release barriers; it's a lie!

Add a comment explaining this and fix up the bitop barriers.
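
For context, a minimal sketch (not part of this commit) of the generic caller pattern these macros exist for; the flag word, payload, and function names below are hypothetical:

#include <linux/bitops.h>	/* clear_bit(), smp_mb__before_clear_bit() */

static unsigned long flags;	/* hypothetical flag word polled by another CPU */
static int payload;		/* hypothetical shared data */

/* Publish data, then clear a flag bit another CPU is polling.
 * smp_mb__before_clear_bit() must order the payload store before the
 * bit clear; after this patch a plain barrier() is enough on ia64
 * because clear_bit() already acts as a full fence in practice. */
static void producer_done(void)
{
	payload = 42;			/* publish the data */
	smp_mb__before_clear_bit();	/* order the store before the clear */
	clear_bit(0, &flags);		/* consumer: bit clear => payload valid */
}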

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/n/tip-akevfh136um9dqvb1ohm55ca@git.kernel.org


Cc: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: linux-ia64@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent 81cef0fe
+2 −5
@@ -65,11 +65,8 @@ __set_bit (int nr, volatile void *addr)
 	*((__u32 *) addr + (nr >> 5)) |= (1 << (nr & 31));
 }
 
-/*
- * clear_bit() has "acquire" semantics.
- */
-#define smp_mb__before_clear_bit()	smp_mb()
-#define smp_mb__after_clear_bit()	do { /* skip */; } while (0)
+#define smp_mb__before_clear_bit()	barrier();
+#define smp_mb__after_clear_bit()	barrier();
 
 /**
  * clear_bit - Clears a bit in memory
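
Why a compiler barrier() is now enough on both sides: ia64's clear_bit() is built as a cmpxchg_acq() loop on the containing 32-bit word, and (per the comment added in the next hunk) the .acq/.rel forms behave as full fences on real hardware anyway. A simplified sketch of that construction, assuming the cmpxchg-loop form this header uses, with debug checks omitted:

static __inline__ void
clear_bit (int nr, volatile void *addr)
{
	__u32 mask, old, new;
	volatile __u32 *m;

	m = (volatile __u32 *) addr + (nr >> 5);	/* word containing the bit */
	mask = ~(1 << (nr & 31));			/* everything except bit nr */
	do {
		old = *m;
		new = old & mask;			/* clear the bit */
	} while (cmpxchg_acq(m, old, new) != old);	/* retry if the word changed */
}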
+9 −0
@@ -118,6 +118,15 @@ extern long ia64_cmpxchg_called_with_bad_pointer(void);
 #define cmpxchg_rel(ptr, o, n)	\
 	ia64_cmpxchg(rel, (ptr), (o), (n), sizeof(*(ptr)))
 
+/*
+ * Worse still - early processor implementations actually just ignored
+ * the acquire/release and did a full fence all the time.  Unfortunately
+ * this meant a lot of badly written code that used .acq when they really
+ * wanted .rel became legacy out in the wild - so when we made a cpu
+ * that strictly did the .acq or .rel ... all that code started breaking - so
+ * we had to back-pedal and keep the "legacy" behavior of a full fence :-(
+ */
+
 /* for compatibility with other platforms: */
 #define cmpxchg(ptr, o, n)	cmpxchg_acq((ptr), (o), (n))
 #define cmpxchg64(ptr, o, n)	cmpxchg_acq((ptr), (o), (n))
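
For illustration only (not part of the patch), a hedged sketch of how a generic caller lands on the .acq path through the compatibility macro; the state word and helper below are hypothetical:

static unsigned long state;	/* hypothetical shared word */

static int try_claim(void)
{
	/* cmpxchg() expands to cmpxchg_acq(), i.e. ia64_cmpxchg(acq, ...);
	 * per the comment above it is a full fence on real hardware. */
	return cmpxchg(&state, 0UL, 1UL) == 0UL;
}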