
Commit 925068dc authored by David S. Miller

Merge branch 'davem-next' of master.kernel.org:/pub/scm/linux/kernel/git/jgarzik/netdev-2.6

parents 83aa2e96 67fbbe15
+320 −99
Linux* Base Driver for the Intel(R) PRO/10GbE Family of Adapters
================================================================
Linux Base Driver for 10 Gigabit Intel(R) Network Connection
=============================================================

November 17, 2004
October 9, 2007


Contents
@@ -9,65 +9,122 @@ Contents

- In This Release
- Identifying Your Adapter
- Building and Installation
- Command Line Parameters
- Improving Performance
- Additional Configurations
- Known Issues/Troubleshooting
- Support



In This Release
===============

This file describes the Linux* Base Driver for the Intel(R) PRO/10GbE Family 
of Adapters, version 1.0.x.  
This file describes the ixgb Linux Base Driver for the 10 Gigabit Intel(R)
Network Connection.  This driver includes support for Itanium(R)2-based
systems.

For questions related to hardware requirements, refer to the documentation
supplied with your Intel PRO/10GbE adapter. All hardware requirements listed 
apply to use with Linux.
supplied with your 10 Gigabit adapter.  All hardware requirements listed apply
to use with Linux.

The following features are available in this kernel:
 - Native VLANs
 - Channel Bonding (teaming)
 - SNMP

Channel Bonding documentation can be found in the Linux kernel source:
/Documentation/networking/bonding.txt
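
As an illustration of the Native VLAN support listed above (the interface
name eth1 and VLAN ID 10 are arbitrary examples, not requirements), a VLAN
can be configured on top of the ixgb interface with the standard vconfig
tool:

     vconfig add eth1 10
     ifconfig eth1.10 <IP_address> up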

The driver information previously displayed in the /proc filesystem is not
supported in this release.  Alternatively, you can use ethtool (version 1.6
or later), lspci, and ifconfig to obtain the same information.
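
For example (eth1 is used below only as a placeholder for your ixgb
interface name), the following standard commands report the driver, device,
and interface information:

     ethtool -i eth1           # driver name, version and bus info
     lspci | grep -i Ethernet  # list PCI Ethernet devices
     ifconfig eth1             # interface address and statistics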

Instructions on updating ethtool can be found in the section "Additional
Configurations" later in this document.


Identifying Your Adapter
========================

To verify your Intel adapter is supported, find the board ID number on the 
adapter. Look for a label that has a barcode and a number in the format  
A12345-001. 
The following Intel network adapters are compatible with the drivers in this
release:

Controller  Adapter Name                 Physical Layer
----------  ------------                 --------------
82597EX     Intel(R) PRO/10GbE LR/SR/CX4 10G Base-LR (1310 nm optical fiber)
            Server Adapters              10G Base-SR (850 nm optical fiber)
                                         10G Base-CX4(twin-axial copper cabling)

For more information on how to identify your adapter, go to the Adapter &
Driver ID Guide at:

    http://support.intel.com/support/network/sb/CS-012904.htm


Building and Installation
=========================

select m for "Intel(R) PRO/10GbE support" located at:
      Location:
        -> Device Drivers
          -> Network device support (NETDEVICES [=y])
            -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
1. make modules && make modules_install

2. Load the module:

    modprobe ixgb <parameter>=<value>

   The insmod command can be used if the full
   path to the driver module is specified.  For example:

     insmod /lib/modules/<KERNEL VERSION>/kernel/drivers/net/ixgb/ixgb.ko

   With 2.6-based kernels, also make sure that older ixgb drivers are
   removed from the kernel before loading the new module:

Use the above information and the Adapter & Driver ID Guide at:
     rmmod ixgb; modprobe ixgb

  http://support.intel.com/support/network/adapter/pro100/21397.htm
3. Assign an IP address to the interface by entering the following, where
   x is the interface number:

For the latest Intel network drivers for Linux, go to:
     ifconfig ethx <IP_address>

4. Verify that the interface works. Enter the following, where <IP_address>
   is the IP address for another machine on the same subnet as the interface
   that is being tested:

     ping  <IP_address>

    http://downloadfinder.intel.com/scripts-df/support_intel.asp

Command Line Parameters
=======================

If the driver is built as a module, the  following optional parameters are
used by entering them on the command line with the modprobe or insmod command
using this syntax:
used by entering them on the command line with the modprobe command using
this syntax:

     modprobe ixgb [<option>=<VAL1>,<VAL2>,...]

     insmod ixgb [<option>=<VAL1>,<VAL2>,...]
For example, with two 10GbE PCI adapters, entering:

For example, with two PRO/10GbE PCI adapters, entering:

    insmod ixgb TxDescriptors=80,128
     modprobe ixgb TxDescriptors=80,128

loads the ixgb driver with 80 TX resources for the first adapter and 128 TX
resources for the second adapter.

The default value for each parameter is generally the recommended setting,
unless otherwise noted. Also, if the driver is statically built into the
kernel, the driver is loaded with the default values for all the parameters.
Ethtool can be used to change some of the parameters at runtime.
unless otherwise noted.

FlowControl
Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx)
Default: Read from the EEPROM
         If EEPROM is not detected, default is 3
         If EEPROM is not detected, default is 1
    This parameter controls the automatic generation(Tx) and response(Rx) to
    Ethernet PAUSE frames.
    Ethernet PAUSE frames.  There are hardware bugs associated with enabling
    Tx flow control so beware.

RxDescriptors
Valid Range: 64-512
@@ -83,7 +140,7 @@ Default Value: 512

RxIntDelay
Valid Range: 0-65535 (0=off)
Default Value: 6
Default Value: 72
    This value delays the generation of receive interrupts in units of
    0.8192 microseconds.  Receive interrupt reduction can improve CPU
    efficiency if properly tuned for specific network traffic.  Increasing
@@ -105,22 +162,16 @@ Default Value: 1
    A value of '1' indicates that the driver should enable IP checksum
    offload for received packets (both UDP and TCP) to the adapter hardware.

XsumTX
Valid Range: 0-1
Default Value: 1
    A value of '1' indicates that the driver should enable IP checksum
    offload for transmitted packets (both UDP and TCP) to the adapter 
    hardware.
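
As a combined illustration of these parameters (the values shown are
arbitrary examples, not recommendations), the driver could be loaded with:

     modprobe ixgb FlowControl=1 RxIntDelay=72 XsumTX=1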

Improving Performance
=====================

With the Intel PRO/10 GbE adapter, the default Linux configuration will very 
likely limit the total available throughput artificially.  There is a set of 
things that when applied together increase the ability of Linux to transmit 
and receive data.  The following enhancements were originally acquired from
settings published at http://www.spec.org/web99 for various submitted results 
using Linux.
With the 10 Gigabit server adapters, the default Linux configuration will
very likely limit the total available throughput artificially.  There is a set
of configuration changes that, when applied together, will increase the ability
of Linux to transmit and receive data.  The following enhancements were
originally acquired from settings published at http://www.spec.org/web99/ for
various submitted results using Linux.

NOTE: These changes are only suggestions, and serve as a starting point for
      tuning your network performance.
@@ -134,17 +185,21 @@ The changes are made in three major ways, listed in order of greatest effect:

NOTE: setpci modifies the adapter's configuration registers to allow it to read
up to 4k bytes at a time (for transmits).  However, for some systems the
behavior after modifying this register may be undefined (possibly errors of some 
kind). A power-cycle, hard reset or explicitly setting the e6 register back to 
22 (setpci -d 8086:1048 e6.b=22) may be required to get back to a stable 
configuration.
behavior after modifying this register may be undefined (possibly errors of
some kind).  A power-cycle, hard reset or explicitly setting the e6 register
back to 22 (setpci -d 8086:1a48 e6.b=22) may be required to get back to a
stable configuration.

- COPY these lines and paste them into ixgb_perf.sh:
#!/bin/bash
echo "configuring network performance , edit this file to change the interface"
echo "configuring network performance , edit this file to change the interface
or device ID of 10GbE card"
# set mmrbc to 4k reads, modify only Intel 10GbE device IDs
setpci -d 8086:1048 e6.b=2e
# set the MTU (max transmission unit) - it requires your switch and clients to change too!
# replace 1a48 with appropriate 10GbE device's ID installed on the system,
# if needed.
setpci -d 8086:1a48 e6.b=2e
# set the MTU (max transmission unit) - it requires your switch and clients
# to change as well.
# set the txqueuelen
# your ixgb adapter should be loaded as eth1 for this to work, change if needed
ifconfig eth1 mtu 9000 txqueuelen 1000 up
@@ -159,24 +214,36 @@ sysctl -p ./sysctl_ixgb.conf
# several network benchmark tests, your mileage may vary

### IPV4 specific settings
net.ipv4.tcp_timestamps = 0 # turns TCP timestamp support off, default 1, reduces CPU use
net.ipv4.tcp_sack = 0 # turn SACK support off, default on
# turn TCP timestamp support off, default 1, reduces CPU use
net.ipv4.tcp_timestamps = 0
# turn SACK support off, default on
# on systems with a VERY fast bus -> memory interface this is the big gainer
net.ipv4.tcp_rmem = 10000000 10000000 10000000 # sets min/default/max TCP read buffer, default 4096 87380 174760
net.ipv4.tcp_wmem = 10000000 10000000 10000000 # sets min/pressure/max TCP write buffer, default 4096 16384 131072
net.ipv4.tcp_mem = 10000000 10000000 10000000 # sets min/pressure/max TCP buffer space, default 31744 32256 32768
net.ipv4.tcp_sack = 0
# set min/default/max TCP read buffer, default 4096 87380 174760
net.ipv4.tcp_rmem = 10000000 10000000 10000000
# set min/pressure/max TCP write buffer, default 4096 16384 131072
net.ipv4.tcp_wmem = 10000000 10000000 10000000
# set min/pressure/max TCP buffer space, default 31744 32256 32768
net.ipv4.tcp_mem = 10000000 10000000 10000000

### CORE settings (mostly for socket and UDP effect)
net.core.rmem_max = 524287 # maximum receive socket buffer size, default 131071
net.core.wmem_max = 524287 # maximum send socket buffer size, default 131071
net.core.rmem_default = 524287 # default receive socket buffer size, default 65535
net.core.wmem_default = 524287 # default send socket buffer size, default 65535
net.core.optmem_max = 524287 # maximum amount of option memory buffers, default 10240
net.core.netdev_max_backlog = 300000 # number of unprocessed input packets before kernel starts dropping them, default 300
# set maximum receive socket buffer size, default 131071
net.core.rmem_max = 524287
# set maximum send socket buffer size, default 131071
net.core.wmem_max = 524287
# set default receive socket buffer size, default 65535
net.core.rmem_default = 524287
# set default send socket buffer size, default 65535
net.core.wmem_default = 524287
# set maximum amount of option memory buffers, default 10240
net.core.optmem_max = 524287
# set number of unprocessed input packets before kernel starts dropping them; default 300
net.core.netdev_max_backlog = 300000
- END sysctl_ixgb.conf

Edit the ixgb_perf.sh script if necessary to change eth1 to whatever interface
your ixgb driver is using.
your ixgb driver is using and/or replace '1a48' with appropriate 10GbE device's
ID installed on the system.

NOTE: Unless these scripts are added to the boot process, these changes will
      only last until the next system reboot.
@@ -184,7 +251,6 @@ only last only until the next system reboot.

Resolving Slow UDP Traffic
--------------------------

If your server does not seem to be able to receive UDP traffic as fast as it
can receive TCP traffic, it could be because Linux, by default, does not set
the network stack buffers as large as they need to be to support high UDP
@@ -200,13 +266,168 @@ defaults of max=131071 (128k - 1) and default=65535 (64k - 1). These variables
will increase the amount of memory used by the network stack for receives, and
can be increased significantly more if necessary for your application.
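
For example (the values below are purely illustrative; size the buffers for
your application and available memory), the receive buffer limits can be
raised at runtime with:

     sysctl -w net.core.rmem_max=262143
     sysctl -w net.core.rmem_default=262143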


Additional Configurations
=========================

  Configuring the Driver on Different Distributions
  -------------------------------------------------
  Configuring a network driver to load properly when the system is started is
  distribution dependent. Typically, the configuration process involves adding
  an alias line to /etc/modprobe.conf as well as editing other system startup
  scripts and/or configuration files.  Many popular Linux distributions ship
  with tools to make these changes for you.  To learn the proper way to
  configure a network device for your system, refer to your distribution
  documentation.  If during this process you are asked for the driver or module
  name, the name for the Linux Base Driver for the Intel 10GbE Family of
  Adapters is ixgb.
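
  As an illustration, such an alias line typically looks like the following
  (eth1 is only an example; use the interface name assigned on your system):

       alias eth1 ixgb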

  Viewing Link Messages
  ---------------------
  Link messages will not be displayed to the console if the distribution is
  restricting system messages. In order to see network driver link messages on
  your console, set dmesg to eight by entering the following:

       dmesg -n 8

  NOTE: This setting is not saved across reboots.


  Jumbo Frames
  ------------
  The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
  enabled by changing the MTU to a value larger than the default of 1500.
  The maximum value for the MTU is 16114.  Use the ifconfig command to
  increase the MTU size.  For example:

        ifconfig ethx mtu 9000 up

  The maximum MTU setting for Jumbo Frames is 16114.  This value coincides
  with the maximum Jumbo Frames size of 16128.


  Ethtool
  -------
  The driver utilizes the ethtool interface for driver configuration and
  diagnostics, as well as displaying statistical information.  Ethtool
  version 1.6 or later is required for this functionality.

  The latest release of ethtool can be found from
  http://sourceforge.net/projects/gkernel

  NOTE: Ethtool 1.6 only supports a limited set of ethtool options. Support
        for a more complete ethtool feature set can be enabled by upgrading
        to the latest version.
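
  Typical uses with the ixgb driver (eth1 below is only a placeholder for
  your interface name) include:

       ethtool eth1        # display link settings
       ethtool -S eth1     # display driver statistics
       ethtool -i eth1     # display driver and version information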


  NAPI
  ----

  NAPI (Rx polling mode) is supported in the ixgb driver.  NAPI is enabled
  or disabled based on the configuration of the kernel.  See CONFIG_IXGB_NAPI.
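
  To check whether the running kernel was built with this option (the config
  file path below is only an example; your distribution may keep it
  elsewhere), enter:

       grep CONFIG_IXGB_NAPI /boot/config-`uname -r`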

  See www.cyberus.ca/~hadi/usenix-paper.tgz for more information on NAPI.


Known Issues/Troubleshooting
============================

  NOTE: After installing the driver, if your Intel Network Connection is not
  working, verify in the "In This Release" section of the readme that you have
  installed the correct driver.

  Intel(R) PRO/10GbE CX4 Server Adapter Cable Interoperability Issue with
  Fujitsu XENPAK Module in SmartBits Chassis
  ---------------------------------------------------------------------
  Excessive CRC errors may be observed if the Intel(R) PRO/10GbE CX4
  Server adapter is connected to a Fujitsu XENPAK CX4 module in a SmartBits
  chassis using 15 m/24AWG cable assemblies manufactured by Fujitsu or Leoni.
  The CRC errors may be received either by the Intel(R) PRO/10GbE CX4
  Server adapter or the SmartBits. If this situation occurs using a different
  cable assembly may resolve the issue.

  CX4 Server Adapter Cable Interoperability Issues with HP Procurve 3400cl
  Switch Port
  ------------------------------------------------------------------------
  Excessive CRC errors may be observed if the Intel(R) PRO/10GbE CX4 Server
  adapter is connected to an HP Procurve 3400cl switch port using short cables
  (1 m or shorter). If this situation occurs, using a longer cable may resolve
  the issue.

  Excessive CRC errors may be observed using Fujitsu 24AWG cable assemblies that
  are 10 m or longer, or when using a Leoni 15 m/24AWG cable assembly.  The CRC
  errors may be received either by the CX4 Server adapter or at the switch. If
  this situation occurs, using a different cable assembly may resolve the issue.


  Jumbo Frames System Requirement
  -------------------------------
  Memory allocation failures have been observed on Linux systems with 64 MB
  of RAM or less that are running Jumbo Frames.  If you are using Jumbo
  Frames, your system may require more than the advertised minimum
  requirement of 64 MB of system memory.


  Performance Degradation with Jumbo Frames
  -----------------------------------------
  Degradation in throughput performance may be observed in some Jumbo frames
  environments.  If this is observed, increasing the application's socket buffer
  size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help.
  See the specific application manual and /usr/src/linux*/Documentation/
  networking/ip-sysctl.txt for more details.
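
  For example (the values are illustrative only), the TCP read buffer limits
  can be raised with:

       sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"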


  Allocating Rx Buffers when Using Jumbo Frames
  ---------------------------------------------
  Allocating Rx buffers when using Jumbo Frames on 2.6.x kernels may fail if
  the available memory is heavily fragmented. This issue may be seen with PCI-X
  adapters or with packet split disabled. This can be reduced or eliminated
  by changing the amount of available memory for receive buffer allocation, by
  increasing /proc/sys/vm/min_free_kbytes.
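
  For example (the value below is only an illustration; choose one suited to
  your system's total memory):

       echo 65536 > /proc/sys/vm/min_free_kbytes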


  Multiple Interfaces on Same Ethernet Broadcast Network
  ------------------------------------------------------
  Due to the default ARP behavior on Linux, it is not possible to have
  one system on two IP networks in the same Ethernet broadcast domain
  (non-partitioned switch) behave as expected.  All Ethernet interfaces
  will respond to IP traffic for any IP address assigned to the system.
  This results in unbalanced receive traffic.

  If you have multiple interfaces in a server, do either of the following:

  - Turn on ARP filtering by entering:
      echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter

  - Install the interfaces in separate broadcast domains - either in
    different switches or in a switch partitioned to VLANs.


  UDP Stress Test Dropped Packet Issue
  --------------------------------------
  Under a small-packet UDP stress test with the 10GbE driver, the Linux system
  may drop UDP packets when the socket buffers become full.  You may want
  to change the driver's Flow Control variables to the minimum value for
  controlling packet reception.
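
  One possible way to do this (shown only as an illustration) is to reload
  the driver with flow control disabled:

       rmmod ixgb; modprobe ixgb FlowControl=0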


  Tx Hangs Possible Under Stress
  ------------------------------
  Under stress conditions, if TX hangs occur, turning off TSO
  "ethtool -K eth0 tso off" may resolve the problem.


Support
=======

For general information and support, go to the Intel support website at:
For general information, go to the Intel support website at:

    http://support.intel.com

or the Intel Wired Networking project hosted by Sourceforge at:

    http://sourceforge.net/projects/e1000

If an issue is identified with the released source code on the supported
kernel with a supported adapter, email the specific information related to 
the issue to linux.nics@intel.com.
kernel with a supported adapter, email the specific information related
to the issue to e1000-devel@lists.sf.net
+9 −50
@@ -1694,26 +1694,6 @@ config VIA_RHINE_MMIO

	  If unsure, say Y.

config VIA_RHINE_NAPI
	bool "Use Rx Polling (NAPI)"
	depends on VIA_RHINE
	help
	  NAPI is a new driver API designed to reduce CPU and interrupt load
	  when the driver is receiving lots of packets from the card.

	  If your estimated Rx load is 10kpps or more, or if the card will be
	  deployed on potentially unfriendly networks (e.g. in a firewall),
	  then say Y here.

config LAN_SAA9730
	bool "Philips SAA9730 Ethernet support"
	depends on NET_PCI && PCI && MIPS_ATLAS
	help
	  The SAA9730 is a combined multimedia and peripheral controller used
	  in thin clients, Internet access terminals, and diskless
	  workstations.
	  See <http://www.semiconductors.philips.com/pip/SAA9730_flyer_1>.

config SC92031
	tristate "Silan SC92031 PCI Fast Ethernet Adapter driver (EXPERIMENTAL)"
	depends on NET_PCI && PCI && EXPERIMENTAL
@@ -2029,6 +2009,15 @@ config IGB
         To compile this driver as a module, choose M here. The module
         will be called igb.

config IGB_LRO 
	bool "Use software LRO"
	depends on IGB && INET
	select INET_LRO
	---help---
	  Say Y here if you want to use large receive offload. 

	  If in doubt, say N.

source "drivers/net/ixp2000/Kconfig"

config MYRI_SBUS
@@ -2273,10 +2262,6 @@ config GIANFAR
	  This driver supports the Gigabit TSEC on the MPC83xx, MPC85xx,
	  and MPC86xx family of chips, and the FEC on the 8540.

config GFAR_NAPI
	bool "Use Rx Polling (NAPI)"
	depends on GIANFAR

config UCC_GETH
	tristate "Freescale QE Gigabit Ethernet"
	depends on QUICC_ENGINE
@@ -2285,10 +2270,6 @@ config UCC_GETH
	  This driver supports the Gigabit Ethernet mode of the QUICC Engine,
	  which is available on some Freescale SOCs.

config UGETH_NAPI
	bool "Use Rx Polling (NAPI)"
	depends on UCC_GETH

config UGETH_MAGIC_PACKET
	bool "Magic Packet detection support"
	depends on UCC_GETH
@@ -2378,14 +2359,6 @@ config CHELSIO_T1_1G
          Enables support for Chelsio's gigabit Ethernet PCI cards.  If you
          are using only 10G cards say 'N' here.

config CHELSIO_T1_NAPI
	bool "Use Rx Polling (NAPI)"
	depends on CHELSIO_T1
	default y
	help
	  NAPI is a driver API designed to reduce CPU and interrupt load
	  when the driver is receiving lots of packets from the card.

config CHELSIO_T3
	tristate "Chelsio Communications T3 10Gb Ethernet support"
	depends on PCI && INET
@@ -2457,20 +2430,6 @@ config IXGB
	  To compile this driver as a module, choose M here. The module
	  will be called ixgb.

config IXGB_NAPI
	bool "Use Rx Polling (NAPI) (EXPERIMENTAL)"
	depends on IXGB && EXPERIMENTAL
	help
	  NAPI is a new driver API designed to reduce CPU and interrupt load
	  when the driver is receiving lots of packets from the card. It is
	  still somewhat experimental and thus not yet enabled by default.

	  If your estimated Rx load is 10kpps or more, or if the card will be
	  deployed on potentially unfriendly networks (e.g. in a firewall),
	  then say Y here.

	  If in doubt, say N.

config S2IO
	tristate "S2IO 10Gbe XFrame NIC"
	depends on PCI
+0 −1
@@ -166,7 +166,6 @@ obj-$(CONFIG_EEXPRESS_PRO) += eepro.o
obj-$(CONFIG_8139CP) += 8139cp.o
obj-$(CONFIG_8139TOO) += 8139too.o
obj-$(CONFIG_ZNET) += znet.o
obj-$(CONFIG_LAN_SAA9730) += saa9730.o
obj-$(CONFIG_CPMAC) += cpmac.o
obj-$(CONFIG_DEPCA) += depca.o
obj-$(CONFIG_EWRK3) += ewrk3.o
+0 −2
@@ -1153,9 +1153,7 @@ static int __devinit init_one(struct pci_dev *pdev,
#ifdef CONFIG_NET_POLL_CONTROLLER
		netdev->poll_controller = t1_netpoll;
#endif
#ifdef CONFIG_CHELSIO_T1_NAPI
		netif_napi_add(netdev, &adapter->napi, t1_poll, 64);
#endif

		SET_ETHTOOL_OPS(netdev, &t1_ethtool_ops);
	}
+5 −65
@@ -1396,20 +1396,10 @@ static void sge_rx(struct sge *sge, struct freelQ *fl, unsigned int len)

	if (unlikely(adapter->vlan_grp && p->vlan_valid)) {
		st->vlan_xtract++;
#ifdef CONFIG_CHELSIO_T1_NAPI
		vlan_hwaccel_receive_skb(skb, adapter->vlan_grp,
					 ntohs(p->vlan));
#else
			vlan_hwaccel_rx(skb, adapter->vlan_grp,
					ntohs(p->vlan));
#endif
	} else {
#ifdef CONFIG_CHELSIO_T1_NAPI
	} else
		netif_receive_skb(skb);
#else
		netif_rx(skb);
#endif
	}
}

/*
@@ -1568,7 +1558,6 @@ static inline int responses_pending(const struct adapter *adapter)
	return (e->GenerationBit == Q->genbit);
}

#ifdef CONFIG_CHELSIO_T1_NAPI
/*
 * A simpler version of process_responses() that handles only pure (i.e.,
 * non data-carrying) responses.  Such responses are too light-weight to justify
@@ -1636,9 +1625,6 @@ int t1_poll(struct napi_struct *napi, int budget)
	return work_done;
}

/*
 * NAPI version of the main interrupt handler.
 */
irqreturn_t t1_interrupt(int irq, void *data)
{
	struct adapter *adapter = data;
@@ -1656,7 +1642,8 @@ irqreturn_t t1_interrupt(int irq, void *data)
			else {
				/* no data, no NAPI needed */
				writel(sge->respQ.cidx, adapter->regs + A_SG_SLEEPING);
				napi_enable(&adapter->napi);	/* undo schedule_prep */
				/* undo schedule_prep */
				napi_enable(&adapter->napi);
			}
		}
		return IRQ_HANDLED;
@@ -1672,53 +1659,6 @@ irqreturn_t t1_interrupt(int irq, void *data)
	return IRQ_RETVAL(handled != 0);
}

#else
/*
 * Main interrupt handler, optimized assuming that we took a 'DATA'
 * interrupt.
 *
 * 1. Clear the interrupt
 * 2. Loop while we find valid descriptors and process them; accumulate
 *      information that can be processed after the loop
 * 3. Tell the SGE at which index we stopped processing descriptors
 * 4. Bookkeeping; free TX buffers, ring doorbell if there are any
 *      outstanding TX buffers waiting, replenish RX buffers, potentially
 *      reenable upper layers if they were turned off due to lack of TX
 *      resources which are available again.
 * 5. If we took an interrupt, but no valid respQ descriptors was found we
 *      let the slow_intr_handler run and do error handling.
 */
irqreturn_t t1_interrupt(int irq, void *cookie)
{
	int work_done;
	struct adapter *adapter = cookie;
	struct respQ *Q = &adapter->sge->respQ;

	spin_lock(&adapter->async_lock);

	writel(F_PL_INTR_SGE_DATA, adapter->regs + A_PL_CAUSE);

	if (likely(responses_pending(adapter)))
		work_done = process_responses(adapter, -1);
	else
		work_done = t1_slow_intr_handler(adapter);

	/*
	 * The unconditional clearing of the PL_CAUSE above may have raced
	 * with DMA completion and the corresponding generation of a response
	 * to cause us to miss the resulting data interrupt.  The next write
	 * is also unconditional to recover the missed interrupt and render
	 * this race harmless.
	 */
	writel(Q->cidx, adapter->regs + A_SG_SLEEPING);

	if (!work_done)
		adapter->sge->stats.unhandled_irqs++;
	spin_unlock(&adapter->async_lock);
	return IRQ_RETVAL(work_done != 0);
}
#endif

/*
 * Enqueues the sk_buff onto the cmdQ[qid] and has hardware fetch it.
 *