
Commit 6128fcbc authored by Lai Jiangshan, committed by Gerrit Code Review

workqueue/hotplug: simplify workqueue_offline_cpu()

Since the recent cpu/hotplug refactoring, workqueue_offline_cpu() is
guaranteed to run on the local cpu which is going offline.

This also fixes the following deadlock by removing work item
scheduling and flushing from the CPU hotplug path.

http://lkml.kernel.org/r/1504764252-29091-1-git-send-email-prsood@codeaurora.org



tj: Description update.

Change-Id: Ibc7095a0f74b477c85030512ec2e3b20b6ec1c32
Signed-off-by: Lai Jiangshan <jiangshanlai@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Git-commit: e8b3f8db7aad99fcc5234fc5b89984ff6620de3d
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git


Signed-off-by: Prateek Sood <prsood@codeaurora.org>
parent 3d139965
+6 −9
@@ -1641,7 +1641,7 @@ static void worker_enter_idle(struct worker *worker)
 		mod_timer(&pool->idle_timer, jiffies + IDLE_WORKER_TIMEOUT);
 
 	/*
-	 * Sanity check nr_running.  Because wq_unbind_fn() releases
+	 * Sanity check nr_running.  Because unbind_workers() releases
 	 * pool->lock between setting %WORKER_UNBOUND and zapping
 	 * nr_running, the warning may trigger spuriously.  Check iff
 	 * unbind is not in progress.
@@ -4478,9 +4478,8 @@ void show_workqueue_state(void)
  * cpu comes back online.
  */
 
-static void wq_unbind_fn(struct work_struct *work)
+static void unbind_workers(int cpu)
 {
-	int cpu = smp_processor_id();
 	struct worker_pool *pool;
 	struct worker *worker;
 
@@ -4677,12 +4676,13 @@ int workqueue_online_cpu(unsigned int cpu)
 
 int workqueue_offline_cpu(unsigned int cpu)
 {
-	struct work_struct unbind_work;
 	struct workqueue_struct *wq;
 
 	/* unbinding per-cpu workers should happen on the local CPU */
-	INIT_WORK_ONSTACK(&unbind_work, wq_unbind_fn);
-	queue_work_on(cpu, system_highpri_wq, &unbind_work);
+	if (WARN_ON(cpu != smp_processor_id()))
+		return -1;
+
+	unbind_workers(cpu);
 
 	/* update NUMA affinity of unbound workqueues */
 	mutex_lock(&wq_pool_mutex);
@@ -4690,9 +4690,6 @@ int workqueue_offline_cpu(unsigned int cpu)
 		wq_update_unbound_numa(wq, cpu, false);
 	mutex_unlock(&wq_pool_mutex);
 
-	/* wait for per-cpu unbinding to finish */
-	flush_work(&unbind_work);
-	destroy_work_on_stack(&unbind_work);
 	return 0;
 }