
Commit 0f5c6c30 authored by Jisheng Zhang, committed by David S. Miller

net: mvneta: fix mvneta_config_rss on armada 3700



The mvneta Ethernet driver is used on several different Marvell SoCs.
Some SoCs have per-CPU interrupts for Ethernet events; for these, the
driver uses a per-CPU napi structure. Other SoCs, such as the Armada
3700, have a single interrupt for Ethernet events; for these, the
driver uses a global napi structure.

mvneta_config_rss() currently always operates on the per-CPU napi
structures. Fix it to operate on the global napi structure in the
single-interrupt case, and on the per-CPU napi structures otherwise.

Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Fixes: 2636ac3c ("net: mvneta: Add network support for Armada 3700 SoC")
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 0d86caff
+20 −11
@@ -4107,6 +4107,7 @@ static int mvneta_config_rss(struct mvneta_port *pp)

	on_each_cpu(mvneta_percpu_mask_interrupt, pp, true);

	if (!pp->neta_armada3700) {
		/* We have to synchronise on the napi of each CPU */
		for_each_online_cpu(cpu) {
			struct mvneta_pcpu_port *pcpu_port =
@@ -4115,6 +4116,10 @@ static int mvneta_config_rss(struct mvneta_port *pp)
			napi_synchronize(&pcpu_port->napi);
			napi_disable(&pcpu_port->napi);
		}
	} else {
		napi_synchronize(&pp->napi);
		napi_disable(&pp->napi);
	}

	pp->rxq_def = pp->indir[0];

@@ -4130,6 +4135,7 @@ static int mvneta_config_rss(struct mvneta_port *pp)
	mvneta_percpu_elect(pp);
	spin_unlock(&pp->lock);

	if (!pp->neta_armada3700) {
		/* We have to synchronise on the napi of each CPU */
		for_each_online_cpu(cpu) {
			struct mvneta_pcpu_port *pcpu_port =
@@ -4137,6 +4143,9 @@ static int mvneta_config_rss(struct mvneta_port *pp)

			napi_enable(&pcpu_port->napi);
		}
	} else {
		napi_enable(&pp->napi);
	}

	netif_tx_start_all_queues(pp->dev);