
Commit 535560d8 authored by Ingo Molnar

Merge commit '3cf2f34e' into sched/core, to fix build error



Fix this dependency on the locking tree's smp_mb*() API changes:

  kernel/sched/idle.c:247:3: error: implicit declaration of function ‘smp_mb__after_atomic’ [-Werror=implicit-function-declaration]

Signed-off-by: Ingo Molnar <mingo@kernel.org>
parents f602d063 3cf2f34e
+12 −19
@@ -285,15 +285,13 @@ If a caller requires memory barrier semantics around an atomic_t
 operation which does not return a value, a set of interfaces are
 defined which accomplish this:
 
-	void smp_mb__before_atomic_dec(void);
-	void smp_mb__after_atomic_dec(void);
-	void smp_mb__before_atomic_inc(void);
-	void smp_mb__after_atomic_inc(void);
+	void smp_mb__before_atomic(void);
+	void smp_mb__after_atomic(void);
 
-For example, smp_mb__before_atomic_dec() can be used like so:
+For example, smp_mb__before_atomic() can be used like so:
 
 	obj->dead = 1;
-	smp_mb__before_atomic_dec();
+	smp_mb__before_atomic();
 	atomic_dec(&obj->ref_count);
 
 It makes sure that all memory operations preceding the atomic_dec()
@@ -302,15 +300,10 @@ operation. In the above example, it guarantees that the assignment of
 "1" to obj->dead will be globally visible to other cpus before the
 atomic counter decrement.
 
-Without the explicit smp_mb__before_atomic_dec() call, the
+Without the explicit smp_mb__before_atomic() call, the
 implementation could legally allow the atomic counter update visible
 to other cpus before the "obj->dead = 1;" assignment.
 
-The other three interfaces listed are used to provide explicit
-ordering with respect to memory operations after an atomic_dec() call
-(smp_mb__after_atomic_dec()) and around atomic_inc() calls
-(smp_mb__{before,after}_atomic_inc()).
-
 A missing memory barrier in the cases where they are required by the
 atomic_t implementation above can have disastrous results.  Here is
 an example, which follows a pattern occurring frequently in the Linux
@@ -487,12 +480,12 @@ Finally there is the basic operation:
 Which returns a boolean indicating if bit "nr" is set in the bitmask
 pointed to by "addr".
 
-If explicit memory barriers are required around clear_bit() (which
-does not return a value, and thus does not need to provide memory
-barrier semantics), two interfaces are provided:
+If explicit memory barriers are required around {set,clear}_bit() (which do
+not return a value, and thus does not need to provide memory barrier
+semantics), two interfaces are provided:
 
-	void smp_mb__before_clear_bit(void);
-	void smp_mb__after_clear_bit(void);
+	void smp_mb__before_atomic(void);
+	void smp_mb__after_atomic(void);
 
 They are used as follows, and are akin to their atomic_t operation
 brothers:
@@ -500,13 +493,13 @@ brothers:
 	/* All memory operations before this call will
 	 * be globally visible before the clear_bit().
 	 */
-	smp_mb__before_clear_bit();
+	smp_mb__before_atomic();
 	clear_bit( ... );
 
 	/* The clear_bit() will be visible before all
 	 * subsequent memory operations.
 	 */
-	 smp_mb__after_clear_bit();
+	 smp_mb__after_atomic();
 
 There are two special bitops with lock barrier semantics (acquire/release,
 same as spinlocks). These operate in the same way as their non-_lock/unlock
+11 −31
@@ -1583,20 +1583,21 @@ There are some more advanced barrier functions:
     insert anything more than a compiler barrier in a UP compilation.
 
 
- (*) smp_mb__before_atomic_dec();
- (*) smp_mb__after_atomic_dec();
- (*) smp_mb__before_atomic_inc();
- (*) smp_mb__after_atomic_inc();
+ (*) smp_mb__before_atomic();
+ (*) smp_mb__after_atomic();
 
-     These are for use with atomic add, subtract, increment and decrement
-     functions that don't return a value, especially when used for reference
-     counting.  These functions do not imply memory barriers.
+     These are for use with atomic (such as add, subtract, increment and
+     decrement) functions that don't return a value, especially when used for
+     reference counting.  These functions do not imply memory barriers.
 
+     These are also used for atomic bitop functions that do not return a
+     value (such as set_bit and clear_bit).
+
      As an example, consider a piece of code that marks an object as being dead
      and then decrements the object's reference count:
 
 	obj->dead = 1;
-	smp_mb__before_atomic_dec();
+	smp_mb__before_atomic();
 	atomic_dec(&obj->ref_count);
 
      This makes sure that the death mark on the object is perceived to be set
@@ -1606,27 +1607,6 @@ There are some more advanced barrier functions:
     operations" subsection for information on where to use these.
 
 
- (*) smp_mb__before_clear_bit(void);
- (*) smp_mb__after_clear_bit(void);
-
-     These are for use similar to the atomic inc/dec barriers.  These are
-     typically used for bitwise unlocking operations, so care must be taken as
-     there are no implicit memory barriers here either.
-
-     Consider implementing an unlock operation of some nature by clearing a
-     locking bit.  The clear_bit() would then need to be barriered like this:
-
-	smp_mb__before_clear_bit();
-	clear_bit( ... );
-
-     This prevents memory operations before the clear leaking to after it.  See
-     the subsection on "Locking Functions" with reference to RELEASE operation
-     implications.
-
-     See Documentation/atomic_ops.txt for more information.  See the "Atomic
-     operations" subsection for information on where to use these.
-
-
 MMIO WRITE BARRIER
 ------------------
 
@@ -2283,11 +2263,11 @@ operations:
 	change_bit();
 
 With these the appropriate explicit memory barrier should be used if necessary
-(smp_mb__before_clear_bit() for instance).
+(smp_mb__before_atomic() for instance).
 
 
 The following also do _not_ imply memory barriers, and so may require explicit
-memory barriers under some circumstances (smp_mb__before_atomic_dec() for
+memory barriers under some circumstances (smp_mb__before_atomic() for
 instance):
 
 	atomic_add();
+0 −5
@@ -292,9 +292,4 @@ static inline long atomic64_dec_if_positive(atomic64_t *v)
 #define atomic_dec(v) atomic_sub(1,(v))
 #define atomic64_dec(v) atomic64_sub(1,(v))
 
-#define smp_mb__before_atomic_dec()	smp_mb()
-#define smp_mb__after_atomic_dec()	smp_mb()
-#define smp_mb__before_atomic_inc()	smp_mb()
-#define smp_mb__after_atomic_inc()	smp_mb()
-
 #endif /* _ALPHA_ATOMIC_H */
+0 −3
@@ -53,9 +53,6 @@ __set_bit(unsigned long nr, volatile void * addr)
 	*m |= 1 << (nr & 31);
 }
 
-#define smp_mb__before_clear_bit()	smp_mb()
-#define smp_mb__after_clear_bit()	smp_mb()
-
 static inline void
 clear_bit(unsigned long nr, volatile void * addr)
 {
+0 −5
@@ -190,11 +190,6 @@ static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr)
 
 #endif /* !CONFIG_ARC_HAS_LLSC */
 
-#define smp_mb__before_atomic_dec()	barrier()
-#define smp_mb__after_atomic_dec()	barrier()
-#define smp_mb__before_atomic_inc()	barrier()
-#define smp_mb__after_atomic_inc()	barrier()
-
 /**
  * __atomic_add_unless - add unless the number is a given value
  * @v: pointer of type atomic_t