
Commit 95503d29 authored by J. Bruce Fields

svcrpc: fix unlikely races preventing queueing of sockets



In the rpc server, when something happens that might be a reason to wake
up a thread to do something, what we do is

	- modify xpt_flags, sk_sock->flags, xpt_reserved, or
	  xpt_nr_rqsts to indicate the new situation
	- call svc_xprt_enqueue() to decide whether to wake up a thread.
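
For example (a simplified sketch of the current pattern, before this patch;
the real callers are in the hunks below):

	/* e.g. svc_xprt_release_slot(), sketched: */
	atomic_dec(&xprt->xpt_nr_rqsts);	/* (1) record the new state       */
	svc_xprt_enqueue(xprt);			/* (2) maybe wake a thread for it */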

svc_xprt_enqueue may require multiple conditions to be true before
queueing up a thread to handle the xprt.  In the SMP case, one of the
other CPUs may have set another required condition, and in that case,
although both CPUs run svc_xprt_enqueue(), it's possible that neither
call sees the writes done by the other CPU in time, and neither one
recognizes that all the required conditions have been set.  A socket
could therefore be ignored indefinitely.
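
Concretely, a hypothetical interleaving (the fields are the ones
svc_xprt_ready() checks; the schedule itself is made up for illustration):

	CPU 0					CPU 1
	-----					-----
	set_bit(XPT_DATA, &xprt->xpt_flags)	atomic_dec(&xprt->xpt_nr_rqsts)
	svc_xprt_enqueue()			svc_xprt_enqueue()
	  reads a stale xpt_nr_rqsts:		  reads stale xpt_flags:
	  no slot free, gives up		  no XPT_DATA set, gives up

Neither call queues the xprt, so no thread is woken for it.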

Add memory barriers to ensure that any svc_xprt_enqueue() call will
always see the conditions changed by other CPUs before deciding to
ignore a socket.
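
In outline, the pairing looks like this (a sketch of the two sides touched
by the hunks below; svc_xprt_ready() is the check svc_xprt_enqueue() makes
before queueing):

	/* writer side, e.g. svc_xprt_release_slot(): */
	atomic_dec(&xprt->xpt_nr_rqsts);	/* make a wake-up condition true */
	smp_wmb();				/* publish it before the check   */
	svc_xprt_enqueue(xprt);

	/* reader side, svc_xprt_ready(): */
	smp_rmb();				/* observe the writer's update   */
	xpt_flags = READ_ONCE(xprt->xpt_flags);

In the RDMA receive paths the set_bit() is instead moved inside
sc_rq_dto_lock, so the existing spin_unlock() provides the ordering that
pairs with the smp_rmb().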

I've never seen this race reported.  In the unlikely event it happens,
another event will usually come along and the problem will fix itself.
So I don't think this is worth backporting to stable.

Chuck tried this patch and said "I don't see any performance
regressions, but my server has only a single last-level CPU cache."

Tested-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
parent 66c898ca
+11 −1
@@ -357,6 +357,7 @@ static void svc_xprt_release_slot(struct svc_rqst *rqstp)
	struct svc_xprt	*xprt = rqstp->rq_xprt;
	if (test_and_clear_bit(RQ_DATA, &rqstp->rq_flags)) {
		atomic_dec(&xprt->xpt_nr_rqsts);
+		smp_wmb(); /* See smp_rmb() in svc_xprt_ready() */
		svc_xprt_enqueue(xprt);
	}
}
@@ -365,6 +366,15 @@ static bool svc_xprt_ready(struct svc_xprt *xprt)
{
	unsigned long xpt_flags;

+	/*
+	 * If another cpu has recently updated xpt_flags,
+	 * sk_sock->flags, xpt_reserved, or xpt_nr_rqsts, we need to
+	 * know about it; otherwise it's possible that both that cpu and
+	 * this one could call svc_xprt_enqueue() without either
+	 * svc_xprt_enqueue() recognizing that the conditions below
+	 * are satisfied, and we could stall indefinitely:
+	 */
+	smp_rmb();
	xpt_flags = READ_ONCE(xprt->xpt_flags);

	if (xpt_flags & (BIT(XPT_CONN) | BIT(XPT_CLOSE)))
@@ -479,7 +489,7 @@ void svc_reserve(struct svc_rqst *rqstp, int space)
	if (xprt && space < rqstp->rq_reserved) {
		atomic_sub((rqstp->rq_reserved - space), &xprt->xpt_reserved);
		rqstp->rq_reserved = space;
-
+		smp_wmb(); /* See smp_rmb() in svc_xprt_ready() */
		svc_xprt_enqueue(xprt);
	}
}
+2 −1
@@ -314,8 +314,9 @@ static void svc_rdma_wc_receive(struct ib_cq *cq, struct ib_wc *wc)

	spin_lock(&rdma->sc_rq_dto_lock);
	list_add_tail(&ctxt->rc_list, &rdma->sc_rq_dto_q);
-	spin_unlock(&rdma->sc_rq_dto_lock);
+	/* Note the unlock pairs with the smp_rmb in svc_xprt_ready: */
	set_bit(XPT_DATA, &rdma->sc_xprt.xpt_flags);
+	spin_unlock(&rdma->sc_rq_dto_lock);
	if (!test_bit(RDMAXPRT_CONN_PENDING, &rdma->sc_flags))
		svc_xprt_enqueue(&rdma->sc_xprt);
	goto out;
+2 −1
@@ -287,9 +287,10 @@ static void svc_rdma_wc_read_done(struct ib_cq *cq, struct ib_wc *wc)
		spin_lock(&rdma->sc_rq_dto_lock);
		list_add_tail(&info->ri_readctxt->rc_list,
			      &rdma->sc_read_complete_q);
+		/* Note the unlock pairs with the smp_rmb in svc_xprt_ready: */
+		set_bit(XPT_DATA, &rdma->sc_xprt.xpt_flags);
		spin_unlock(&rdma->sc_rq_dto_lock);

-		set_bit(XPT_DATA, &rdma->sc_xprt.xpt_flags);
		svc_xprt_enqueue(&rdma->sc_xprt);
	}