
Commit aa877175 authored by Boris Ostrovsky, committed by Ingo Molnar

cpu/hotplug: Prevent alloc/free of irq descriptors during CPU up/down (again)



Now that Xen no longer allocates irqs in _cpu_up() we can restore
commit:

  a8994181 ("hotplug: Prevent alloc/free of irq descriptors during cpu up/down")

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Anna-Maria Gleixner <anna-maria@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: david.vrabel@citrix.com
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1470244948-17674-3-git-send-email-boris.ostrovsky@oracle.com


Signed-off-by: Ingo Molnar <mingo@kernel.org>
parent a0cba217
arch/x86/kernel/smpboot.c  +0 −11
@@ -1108,17 +1108,8 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 
 	common_cpu_up(cpu, tidle);
 
-	/*
-	 * We have to walk the irq descriptors to setup the vector
-	 * space for the cpu which comes online.  Prevent irq
-	 * alloc/free across the bringup.
-	 */
-	irq_lock_sparse();
-
 	err = do_boot_cpu(apicid, cpu, tidle);
-
 	if (err) {
-		irq_unlock_sparse();
 		pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
 		return -EIO;
 	}
@@ -1136,8 +1127,6 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 		touch_nmi_watchdog();
 	}
 
-	irq_unlock_sparse();
-
 	return 0;
 }

kernel/cpu.c  +8 −0
@@ -349,8 +349,16 @@ static int bringup_cpu(unsigned int cpu)
 	struct task_struct *idle = idle_thread_get(cpu);
 	int ret;
 
+	/*
+	 * Some architectures have to walk the irq descriptors to
+	 * setup the vector space for the cpu which comes online.
+	 * Prevent irq alloc/free across the bringup.
+	 */
+	irq_lock_sparse();
+
 	/* Arch-specific enabling code. */
 	ret = __cpu_up(cpu, idle);
+	irq_unlock_sparse();
 	if (ret) {
 		cpu_notify(CPU_UP_CANCELED, cpu);
 		return ret;