
Commit 6b229cf7 authored by Eric Dumazet, committed by David S. Miller

udp: add batching to udp_rmem_release()



If udp_recvmsg() releases sk_rmem_alloc for every
packet it reads, producers get an immediate
opportunity to grab the spinlock and desperately
try to add another packet, causing false sharing.

Add a simple heuristic: give the signal in
batches of ~25% of the queue capacity.

This patch increases performance under flood by
about 50%, since the thread draining the queue is
no longer slowed by false sharing.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent c84d9490
+3 −0
@@ -79,6 +79,9 @@ struct udp_sock {
	int			(*gro_complete)(struct sock *sk,
						struct sk_buff *skb,
						int nhoff);
+
+	/* This field is dirtied by udp_recvmsg() */
+	int		forward_deficit;
};

static inline struct udp_sock *udp_sk(const struct sock *sk)
+12 −0
@@ -1177,8 +1177,20 @@ int udp_sendpage(struct sock *sk, struct page *page, int offset,
/* fully reclaim rmem/fwd memory allocated for skb */
static void udp_rmem_release(struct sock *sk, int size, int partial)
{
+	struct udp_sock *up = udp_sk(sk);
	int amt;

+	if (likely(partial)) {
+		up->forward_deficit += size;
+		size = up->forward_deficit;
+		if (size < (sk->sk_rcvbuf >> 2) &&
+		    !skb_queue_empty(&sk->sk_receive_queue))
+			return;
+	} else {
+		size += up->forward_deficit;
+	}
+	up->forward_deficit = 0;
+
	atomic_sub(size, &sk->sk_rmem_alloc);
	sk->sk_forward_alloc += size;
	amt = (sk->sk_forward_alloc - partial) & ~(SK_MEM_QUANTUM - 1);