
Commit 7bfd8e1f authored by Eric Biggers, committed by Greg Kroah-Hartman

binder: use wake_up_pollfree()



commit a880b28a71e39013e357fd3adccd1d8a31bc69a8 upstream.

wake_up_poll() uses nr_exclusive=1, so it's not guaranteed to wake up
all exclusive waiters.  Yet, POLLFREE *must* wake up all waiters.  epoll
and aio poll are fortunately not affected by this, but it's very
fragile.  Thus, the new function wake_up_pollfree() has been introduced.

Convert binder to use wake_up_pollfree().

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Fixes: f5cb779b ("ANDROID: binder: remove waitqueue when thread exits.")
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/20211209010455.42744-3-ebiggers@kernel.org


Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
parent a36e1978
+9 −12
@@ -4336,23 +4336,20 @@ static int binder_thread_release(struct binder_proc *proc,
 	}
 
 	/*
-	 * If this thread used poll, make sure we remove the waitqueue
-	 * from any epoll data structures holding it with POLLFREE.
-	 * waitqueue_active() is safe to use here because we're holding
-	 * the inner lock.
+	 * If this thread used poll, make sure we remove the waitqueue from any
+	 * poll data structures holding it.
 	 */
-	if ((thread->looper & BINDER_LOOPER_STATE_POLL) &&
-	    waitqueue_active(&thread->wait)) {
-		wake_up_poll(&thread->wait, POLLHUP | POLLFREE);
-	}
+	if (thread->looper & BINDER_LOOPER_STATE_POLL)
+		wake_up_pollfree(&thread->wait);
 
 	binder_inner_proc_unlock(thread->proc);
 
 	/*
-	 * This is needed to avoid races between wake_up_poll() above and
-	 * and ep_remove_waitqueue() called for other reasons (eg the epoll file
-	 * descriptor being closed); ep_remove_waitqueue() holds an RCU read
-	 * lock, so we can be sure it's done after calling synchronize_rcu().
+	 * This is needed to avoid races between wake_up_pollfree() above and
+	 * someone else removing the last entry from the queue for other reasons
+	 * (e.g. ep_remove_wait_queue() being called due to an epoll file
+	 * descriptor being closed).  Such other users hold an RCU read lock, so
+	 * we can be sure they're done after we call synchronize_rcu().
 	 */
 	if (thread->looper & BINDER_LOOPER_STATE_POLL)
 		synchronize_rcu();