
Commit 93981800 authored by Tejun Heo, committed by Linus Torvalds

workqueue: fix race condition in schedule_on_each_cpu()



Commit 65a64464 ("HWPOISON: Allow schedule_on_each_cpu() from keventd"),
which allows schedule_on_each_cpu() to be called from keventd, added a
race condition: schedule_on_each_cpu() may race with cpu hotplug and end
up executing the function twice on a cpu.

Fix it by moving direct execution into the section protected with
get/put_online_cpus().  While at it, update the code so that direct
execution is done after works have been scheduled for all other cpus,
and drop the unnecessary cpu != orig test from the flush loop.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent e1319331
kernel/workqueue.c: 13 additions, 15 deletions
@@ -692,31 +692,29 @@ int schedule_on_each_cpu(work_func_t func)
 	if (!works)
 		return -ENOMEM;
 
+	get_online_cpus();
+
 	/*
-	 * when running in keventd don't schedule a work item on itself.
-	 * Can just call directly because the work queue is already bound.
-	 * This also is faster.
-	 * Make this a generic parameter for other workqueues?
+	 * When running in keventd don't schedule a work item on
+	 * itself.  Can just call directly because the work queue is
+	 * already bound.  This also is faster.
 	 */
-	if (current_is_keventd()) {
+	if (current_is_keventd())
 		orig = raw_smp_processor_id();
-		INIT_WORK(per_cpu_ptr(works, orig), func);
-		func(per_cpu_ptr(works, orig));
-	}
 
-	get_online_cpus();
 	for_each_online_cpu(cpu) {
 		struct work_struct *work = per_cpu_ptr(works, cpu);
 
-		if (cpu == orig)
-			continue;
 		INIT_WORK(work, func);
-		schedule_work_on(cpu, work);
+		if (cpu != orig)
+			schedule_work_on(cpu, work);
 	}
-	for_each_online_cpu(cpu) {
-		if (cpu != orig)
-			flush_work(per_cpu_ptr(works, cpu));
-	}
+	if (orig >= 0)
+		func(per_cpu_ptr(works, orig));
+
+	for_each_online_cpu(cpu)
+		flush_work(per_cpu_ptr(works, cpu));
+
 	put_online_cpus();
 	free_percpu(works);
 	return 0;