
Commit e6cd1e07 authored by Milton Miller, committed by Linus Torvalds

call_function_many: fix list delete vs add race



Peter pointed out there was nothing preventing the list_del_rcu in
smp_call_function_interrupt from running before the list_add_rcu in
smp_call_function_many.
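
Concretely, here is one interleaving that exhibits the race (a paraphrased
scenario rather than an observed trace; data is the caller's per-cpu
call_function_data, still linked into call_function.queue from the caller's
previous round):

	CPU 0 (smp_call_function_many)      CPU 1 (smp_call_function_interrupt)

	write func, info, cpumask
	smp_wmb();
	atomic_set(&data->refs, 1);
	                                    walks the still-queued entry, sees
	                                    its cpumask bit and refs != 0, runs
	                                    the function, drops the last ref,
	                                    takes call_function.lock, does
	                                    list_del_rcu(), releases the lock
	raw_spin_lock_irqsave(...);
	list_add_rcu(...);   /* re-queues the already-consumed entry */
	raw_spin_unlock_irqrestore(...);

The entry now sits on the queue with refs == 0 and an empty mask, so no cpu
will ever unlink it, and the caller's next round does a second list_add_rcu
on an element that is still queued, corrupting the list.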

Fix this by not setting refs until we have gotten the lock for the list.
Take advantage of the wmb in list_add_rcu to save an explicit additional
one.
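
The wmb being reused lives inside list_add_rcu(): the pointer publication
goes through rcu_assign_pointer(), which implies a write barrier. A minimal
sketch of the helper as it looked in that era (paraphrased from
include/linux/rculist.h; not part of this patch):

	static inline void __list_add_rcu(struct list_head *new,
			struct list_head *prev, struct list_head *next)
	{
		new->next = next;
		new->prev = prev;
		rcu_assign_pointer(prev->next, new);	/* implies smp_wmb() */
		next->prev = new;
	}

Since that barrier orders every store issued before list_add_rcu (func,
info, cpumask) ahead of every store issued after it, setting refs after
list_add_rcu makes the separate smp_wmb() redundant.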

I tried to force this race with a udelay before the lock & list_add and
by mixing all 64 online cpus with just 3 random cpus in the mask, but
was unsuccessful.  Still, inspection shows a valid race, and the fix is
an extension of the existing protection window in the current code.
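
For reference, a hypothetical reconstruction of that test instrumentation
(the udelay placement and duration are guesses; only the idea of widening
the window between the refs store and the list add comes from the paragraph
above):

	/* pre-fix ordering, with an artificial delay in the race window */
	atomic_set(&data->refs, cpumask_weight(data->cpumask));
	udelay(100);	/* hypothetical: give the interrupt handler time
			 * to consume and delete the still-queued entry */
	raw_spin_lock_irqsave(&call_function.lock, flags);
	list_add_rcu(&data->csd.list, &call_function.queue);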

Cc: stable@kernel.org (v2.6.32 and later)
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent ef2b4b95
kernel/smp.c +13 −7
@@ -491,14 +491,15 @@ void smp_call_function_many(const struct cpumask *mask,
 	cpumask_clear_cpu(this_cpu, data->cpumask);
 
 	/*
-	 * To ensure the interrupt handler gets a complete view
-	 * we order the cpumask and refs writes and order the read
-	 * of them in the interrupt handler.  In addition we may
-	 * only clear our own cpu bit from the mask.
+	 * We reuse the call function data without waiting for any grace
+	 * period after some other cpu removes it from the global queue.
+	 * This means a cpu might find our data block as it is written.
+	 * The interrupt handler waits until it sees refs filled out
+	 * while its cpu mask bit is set; here we may only clear our
+	 * own cpu mask bit, and must wait to set refs until we are sure
+	 * previous writes are complete and we have obtained the lock to
+	 * add the element to the queue.
 	 */
-	smp_wmb();
-
-	atomic_set(&data->refs, cpumask_weight(data->cpumask));
 
 	raw_spin_lock_irqsave(&call_function.lock, flags);
 	/*
@@ -507,6 +508,11 @@ void smp_call_function_many(const struct cpumask *mask,
 	 * will not miss any other list entries:
 	 */
 	list_add_rcu(&data->csd.list, &call_function.queue);
+	/*
+	 * We rely on the wmb() in list_add_rcu to order the writes
+	 * to func, data, and cpumask before this write to refs.
+	 */
+	atomic_set(&data->refs, cpumask_weight(data->cpumask));
 	raw_spin_unlock_irqrestore(&call_function.lock, flags);
 
 	/*
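
For context, the consumer that this ordering pairs with,
generic_smp_call_function_interrupt() in the same file, ends up looking
roughly like this once the whole series is applied (condensed and
paraphrased, not the verbatim kernel source):

	void generic_smp_call_function_interrupt(void)
	{
		struct call_function_data *data;
		int cpu = smp_processor_id();

		list_for_each_entry_rcu(data, &call_function.queue, csd.list) {
			/* check our mask bit first ... */
			if (!cpumask_test_cpu(cpu, data->cpumask))
				continue;

			smp_rmb();	/* ... then read refs, pairing with
					 * the producer's wmb in list_add_rcu */

			/* refs == 0: the entry is not (re)published yet */
			if (!atomic_read(&data->refs))
				continue;

			if (!cpumask_test_and_clear_cpu(cpu, data->cpumask))
				continue;

			data->csd.func(data->csd.info);

			if (atomic_dec_return(&data->refs))
				continue;

			/* last ref: unlink under the same lock the
			 * producer used for list_add_rcu */
			raw_spin_lock(&call_function.lock);
			list_del_rcu(&data->csd.list);
			raw_spin_unlock(&call_function.lock);

			csd_unlock(&data->csd);
		}
	}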