
Commit bc14786a authored by Eric Dumazet, committed by David S. Miller

bnx2x: fix panic when TX ring is full



There is an off-by-one error in the minimal number of BDs required in
bnx2x_start_xmit() and bnx2x_tx_int() before stopping/resuming the tx queue.

A full size GSO packet, with data included in skb->head, really needs
(MAX_SKB_FRAGS + 4) BDs because of bnx2x_tx_split().
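
As a rough sketch of where those BDs go, inferred from the driver rather
than spelled out in this message (the macro name below is hypothetical,
and the extra slack BD is an assumption):

	/* Worst-case BD usage for one GSO packet with payload in skb->head:
	 *   1 start BD                      headers, from skb->head
	 *   1 BD added by bnx2x_tx_split()  remaining skb->head data once
	 *                                   the headers are split off
	 *   1 parsing BD                    TSO/checksum metadata
	 *   1 slack BD                      assumption: ring bookkeeping,
	 *                                   e.g. a "next page" entry
	 *   MAX_SKB_FRAGS frag BDs          one per page fragment
	 */
	#define BNX2X_WORST_CASE_TX_BDS	(MAX_SKB_FRAGS + 4)	/* hypothetical */

With the old threshold of MAX_SKB_FRAGS + 3, the queue could be left awake
with one BD too few, letting a later bnx2x_start_xmit() run into a full ring.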

This error triggers if BQL is disabled and heavy TCP transmit traffic
occurs: with Byte Queue Limits active, the number of bytes queued on the
ring is capped, so the TX ring rarely fills far enough to hit the bad
threshold.

bnx2x_tx_split() definitely can be called, so remove the wrong comment
claiming it had only been observed in other operating systems.

Reported-by: Tomas Hruby <thruby@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Eilon Greenstein <eilong@broadcom.com>
Cc: Yaniv Rosner <yanivr@broadcom.com>
Cc: Merav Sicron <meravs@broadcom.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Robert Evans <evansr@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent d9cb9bd6
+3 −5
@@ -190,7 +190,7 @@ int bnx2x_tx_int(struct bnx2x *bp, struct bnx2x_fp_txdata *txdata)
 
 		if ((netif_tx_queue_stopped(txq)) &&
 		    (bp->state == BNX2X_STATE_OPEN) &&
-		    (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 3))
+		    (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 4))
 			netif_tx_wake_queue(txq);
 
 		__netif_tx_unlock(txq);
@@ -2516,8 +2516,6 @@ int bnx2x_poll(struct napi_struct *napi, int budget)
 /* we split the first BD into headers and data BDs
  * to ease the pain of our fellow microcode engineers
  * we use one mapping for both BDs
- * So far this has only been observed to happen
- * in Other Operating Systems(TM)
  */
 static noinline u16 bnx2x_tx_split(struct bnx2x *bp,
 				   struct bnx2x_fp_txdata *txdata,
@@ -3171,7 +3169,7 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
 
 	txdata->tx_bd_prod += nbd;
 
-	if (unlikely(bnx2x_tx_avail(bp, txdata) < MAX_SKB_FRAGS + 3)) {
+	if (unlikely(bnx2x_tx_avail(bp, txdata) < MAX_SKB_FRAGS + 4)) {
 		netif_tx_stop_queue(txq);
 
 		/* paired memory barrier is in bnx2x_tx_int(), we have to keep
@@ -3180,7 +3178,7 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		smp_mb();
 
 		fp->eth_q_stats.driver_xoff++;
-		if (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 3)
+		if (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 4)
 			netif_tx_wake_queue(txq);
 	}
 	txdata->tx_pkt++;