
Commit b393e8b3 authored by Paul E. McKenney

percpu: READ_ONCE() now implies smp_read_barrier_depends()



Because READ_ONCE() now implies smp_read_barrier_depends(), this commit
removes the now-redundant smp_read_barrier_depends() following the
READ_ONCE() in __ref_is_percpu().

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
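
For readers unfamiliar with the pattern being removed, here is a minimal sketch of the reader-side idiom before and after READ_ONCE() started implying smp_read_barrier_depends(). It is an illustration only, not code from this tree; gp and struct foo are made-up names, and the primitives assume kernel context (<linux/compiler.h>, <asm/barrier.h>).

struct foo {
	int a;
};

struct foo *gp;

/* Before: the dependency barrier after the load had to be written out. */
static int reader_old(void)
{
	struct foo *p = READ_ONCE(gp);

	smp_read_barrier_depends();	/* order the dependent dereference below */
	return p ? p->a : 0;
}

/* After: READ_ONCE() alone orders the dependent dereference. */
static int reader_new(void)
{
	struct foo *p = READ_ONCE(gp);

	return p ? p->a : 0;
}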
parent 7088efa9
+3 −3
@@ -139,12 +139,12 @@ static inline bool __ref_is_percpu(struct percpu_ref *ref,
 	 * when using it as a pointer, __PERCPU_REF_ATOMIC may be set in
 	 * between contaminating the pointer value, meaning that
 	 * READ_ONCE() is required when fetching it.
+	 *
+	 * The smp_read_barrier_depends() implied by READ_ONCE() pairs
+	 * with smp_store_release() in __percpu_ref_switch_to_percpu().
 	 */
 	percpu_ptr = READ_ONCE(ref->percpu_count_ptr);
 
-	/* paired with smp_store_release() in __percpu_ref_switch_to_percpu() */
-	smp_read_barrier_depends();
-
 	/*
 	 * Theoretically, the following could test just ATOMIC; however,
 	 * then we'd have to mask off DEAD separately as DEAD may be
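
The comment added above documents the release/dependency pairing. As a rough sketch of the reader side of that pairing (state_and_ptr, MY_ATOMIC and my_is_percpu are hypothetical stand-in names, not the real percpu_ref fields; kernel context assumed for READ_ONCE()):

#define MY_ATOMIC	1UL	/* low bit marks "atomic mode" */

unsigned long state_and_ptr;	/* pointer to per-CPU counters, low bit is a flag */

static bool my_is_percpu(unsigned long **countp)
{
	unsigned long val = READ_ONCE(state_and_ptr);

	if (val & MY_ATOMIC)
		return false;

	/*
	 * The dependency ordering implied by READ_ONCE() ensures that
	 * accesses through this pointer observe the zeroing done before
	 * the writer's smp_store_release() cleared MY_ATOMIC.
	 */
	*countp = (unsigned long *)val;
	return true;
}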
+4 −4
@@ -197,10 +197,10 @@ static void __percpu_ref_switch_to_percpu(struct percpu_ref *ref)
 	atomic_long_add(PERCPU_COUNT_BIAS, &ref->count);
 
 	/*
-	 * Restore per-cpu operation.  smp_store_release() is paired with
-	 * smp_read_barrier_depends() in __ref_is_percpu() and guarantees
-	 * that the zeroing is visible to all percpu accesses which can see
-	 * the following __PERCPU_REF_ATOMIC clearing.
+	 * Restore per-cpu operation.  smp_store_release() is paired
+	 * with READ_ONCE() in __ref_is_percpu() and guarantees that the
+	 * zeroing is visible to all percpu accesses which can see the
+	 * following __PERCPU_REF_ATOMIC clearing.
 	 */
 	for_each_possible_cpu(cpu)
 		*per_cpu_ptr(percpu_count, cpu) = 0;
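
And the matching writer side, again a hypothetical sketch rather than the actual lib/ code (my_switch_to_percpu and its parameters are invented; MY_ATOMIC is as in the reader sketch above; kernel context assumed for smp_store_release(), for_each_possible_cpu() and per_cpu_ptr()):

static void my_switch_to_percpu(unsigned long __percpu *counters,
				unsigned long *state_and_ptr)
{
	int cpu;

	/* Zero the per-CPU counters before letting readers use them again. */
	for_each_possible_cpu(cpu)
		*per_cpu_ptr(counters, cpu) = 0;

	/*
	 * The release store orders the zeroing above before the flag
	 * clearing below.  Paired with READ_ONCE() in the reader sketch,
	 * any reader that sees MY_ATOMIC clear also sees zeroed counters.
	 */
	smp_store_release(state_and_ptr, *state_and_ptr & ~MY_ATOMIC);
}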