
Commit c2fda509 authored by Lai Jiangshan, committed by Tejun Heo

workqueue: allow work_on_cpu() to be called recursively



If @fn calls work_on_cpu() again, lockdep will complain:

> [ INFO: possible recursive locking detected ]
> 3.11.0-rc1-lockdep-fix-a #6 Not tainted
> ---------------------------------------------
> kworker/0:1/142 is trying to acquire lock:
>  ((&wfc.work)){+.+.+.}, at: [<ffffffff81077100>] flush_work+0x0/0xb0
>
> but task is already holding lock:
>  ((&wfc.work)){+.+.+.}, at: [<ffffffff81075dd9>] process_one_work+0x169/0x610
>
> other info that might help us debug this:
>  Possible unsafe locking scenario:
>
>        CPU0
>        ----
>   lock((&wfc.work));
>   lock((&wfc.work));
>
>  *** DEADLOCK ***

This is a false-positive lockdep report. In this situation, the two "wfc"s
of the two work_on_cpu() calls are different objects, and both live on the
stack, so flush_work() cannot actually deadlock.
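
For illustration, a minimal sketch of the nesting pattern that triggers the
report; the callback names (outer_fn, inner_fn) and the CPU numbers are
hypothetical, not taken from the trace above:

#include <linux/workqueue.h>

/* Hypothetical inner callback; it runs via its own on-stack work_for_cpu. */
static long inner_fn(void *arg)
{
	return 0;
}

/*
 * Hypothetical outer callback: it already executes from work_on_cpu()'s
 * on-stack work item and calls work_on_cpu() again, flushing a second,
 * distinct on-stack work item.
 */
static long outer_fn(void *arg)
{
	return work_on_cpu(1, inner_fn, NULL);
}

long nested_example(void)
{
	return work_on_cpu(0, outer_fn, NULL);
}

Both on-stack work items come from the same INIT_WORK_ONSTACK() site inside
work_on_cpu(), so lockdep places them in the same lock class and treats the
inner flush as a recursive acquisition, even though they are distinct objects.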

To fix this, we need to avoid the lockdep checking in this case; thus we
introduce an internal __flush_work() which skips the lockdep annotation.
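
Note that the exported flush_work() keeps its lock_map_acquire()/
lock_map_release() annotation and simply calls the new helper, so ordinary
callers lose no lockdep coverage; only the internal, static __flush_work()
used by work_on_cpu() for its on-stack work item skips the check.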

tj: Minor comment adjustment.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reported-by: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
Reported-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
parent ad81f054
+22 −10
@@ -2817,6 +2817,19 @@ already_gone:
 	return false;
 }
 
+static bool __flush_work(struct work_struct *work)
+{
+	struct wq_barrier barr;
+
+	if (start_flush_work(work, &barr)) {
+		wait_for_completion(&barr.done);
+		destroy_work_on_stack(&barr.work);
+		return true;
+	} else {
+		return false;
+	}
+}
+
 /**
  * flush_work - wait for a work to finish executing the last queueing instance
  * @work: the work to flush
@@ -2830,18 +2843,10 @@ already_gone:
  */
 bool flush_work(struct work_struct *work)
 {
-	struct wq_barrier barr;
-
 	lock_map_acquire(&work->lockdep_map);
 	lock_map_release(&work->lockdep_map);
 
-	if (start_flush_work(work, &barr)) {
-		wait_for_completion(&barr.done);
-		destroy_work_on_stack(&barr.work);
-		return true;
-	} else {
-		return false;
-	}
+	return __flush_work(work);
 }
 EXPORT_SYMBOL_GPL(flush_work);
 
@@ -4756,7 +4761,14 @@ long work_on_cpu(int cpu, long (*fn)(void *), void *arg)

 	INIT_WORK_ONSTACK(&wfc.work, work_for_cpu_fn);
 	schedule_work_on(cpu, &wfc.work);
-	flush_work(&wfc.work);
+
+	/*
+	 * The work item is on-stack and can't lead to deadlock through
+	 * flushing.  Use __flush_work() to avoid spurious lockdep warnings
+	 * when work_on_cpu()s are nested.
+	 */
+	__flush_work(&wfc.work);
+
 	return wfc.ret;
 }
 EXPORT_SYMBOL_GPL(work_on_cpu);