
Commit ac64da0b authored by Eric Dumazet, committed by David S. Miller

net: rps: fix cpu unplug



softnet_data.input_pkt_queue is protected by a spinlock that
we must hold when transferring packets from the victim queue to an
active one, because other cpus could still be trying to enqueue packets
into the victim queue.

A second problem is that when we transfer the NAPI poll_list from the
victim to the current cpu, we absolutely need to special-case the percpu
backlog, because we do not want to add complex locking to protect
process_queue: only the owner cpu is allowed to manipulate it, unless
the cpu is offline.

Based on initial patch from Prasad Sodagudi & Subash Abhinov
Kasiviswanathan.

This version is better because we do not slow down packet processing,
only make migration safer.

Reported-by: Prasad Sodagudi <psodagud@codeaurora.org>
Reported-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <therbert@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 57d737c5
+15 −5
@@ -7072,10 +7072,20 @@ static int dev_cpu_callback(struct notifier_block *nfb,
 		oldsd->output_queue = NULL;
 		oldsd->output_queue_tailp = &oldsd->output_queue;
 	}
-	/* Append NAPI poll list from offline CPU. */
-	if (!list_empty(&oldsd->poll_list)) {
-		list_splice_init(&oldsd->poll_list, &sd->poll_list);
-		raise_softirq_irqoff(NET_RX_SOFTIRQ);
-	}
+	/* Append NAPI poll list from offline CPU, with one exception :
+	 * process_backlog() must be called by cpu owning percpu backlog.
+	 * We properly handle process_queue & input_pkt_queue later.
+	 */
+	while (!list_empty(&oldsd->poll_list)) {
+		struct napi_struct *napi = list_first_entry(&oldsd->poll_list,
+							    struct napi_struct,
+							    poll_list);
+
+		list_del_init(&napi->poll_list);
+		if (napi->poll == process_backlog)
+			napi->state = 0;
+		else
+			____napi_schedule(sd, napi);
+	}
 
 	raise_softirq_irqoff(NET_TX_SOFTIRQ);
@@ -7086,7 +7096,7 @@ static int dev_cpu_callback(struct notifier_block *nfb,
 		netif_rx_internal(skb);
 		input_queue_head_incr(oldsd);
 	}
-	while ((skb = __skb_dequeue(&oldsd->input_pkt_queue))) {
+	while ((skb = skb_dequeue(&oldsd->input_pkt_queue))) {
 		netif_rx_internal(skb);
 		input_queue_head_incr(oldsd);
 	}