
Commit a5cae7b8 authored by Chris Wilson

drm/i915/breadcrumbs: Disable interrupt bottom-half first on idling
Before walking the rbtree of waiters (marking them as complete and waking
them), decouple the interrupt handler. This prevents a race between the
missed waiter waking up and removing its intel_wait (which skips
checking the lock) and the interrupt handler dereferencing the
intel_wait. (Though we do not expect to encounter waiters during idle!)

Fixes: e1c0c91b ("drm/i915: Wake up all waiters before idling")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170315210726.12095-3-chris@chris-wilson.co.uk
parent 429732e8
+8 −7
@@ -179,7 +179,7 @@ void __intel_engine_disarm_breadcrumbs(struct intel_engine_cs *engine)
 void intel_engine_disarm_breadcrumbs(struct intel_engine_cs *engine)
 {
 	struct intel_breadcrumbs *b = &engine->breadcrumbs;
-	struct intel_wait *wait, *n;
+	struct intel_wait *wait, *n, *first;
 
 	if (!b->irq_armed)
 		return;
@@ -190,18 +190,19 @@ void intel_engine_disarm_breadcrumbs(struct intel_engine_cs *engine)
 	 */
 
 	spin_lock_irq(&b->rb_lock);
+
+	spin_lock(&b->irq_lock);
+	first = fetch_and_zero(&b->irq_wait);
+	__intel_engine_disarm_breadcrumbs(engine);
+	spin_unlock(&b->irq_lock);
+
 	rbtree_postorder_for_each_entry_safe(wait, n, &b->waiters, node) {
 		RB_CLEAR_NODE(&wait->node);
-		if (wake_up_process(wait->tsk) && wait == b->irq_wait)
+		if (wake_up_process(wait->tsk) && wait == first)
 			missed_breadcrumb(engine);
 	}
 	b->waiters = RB_ROOT;
 
-	spin_lock(&b->irq_lock);
-	b->irq_wait = NULL;
-	__intel_engine_disarm_breadcrumbs(engine);
-	spin_unlock(&b->irq_lock);
-
 	spin_unlock_irq(&b->rb_lock);
 }