Linux* Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Family of
Adapters
=============================================================================

Intel 10 Gigabit Linux driver.
Copyright(c) 1999 - 2013 Intel Corporation.

Contents
========

- Identifying Your Adapter
- Additional Configurations
- Performance Tuning
- Known Issues
- Support

Identifying Your Adapter
========================

The driver in this release is compatible with 82598, 82599 and X540-based
Intel Network Connections.

For more information on how to identify your adapter, go to the Adapter &
Driver ID Guide at:

http://support.intel.com/support/network/sb/CS-012904.htm

SFP+ Devices with Pluggable Optics
----------------------------------

82599-BASED ADAPTERS

NOTE: If your 82599-based Intel(R) Network Adapter came with Intel optics, or
is an Intel(R) Ethernet Server Adapter X520-2, then it only supports Intel
optics and/or the direct attach cables listed below.

When 82599-based SFP+ devices are connected back to back, they should be set
to the same speed setting via ethtool. Results may vary if you mix speed
settings.
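
When forcing a matching speed on both ends of such a link, the standard
ethtool speed controls can be used. This is only a minimal sketch (ethX is a
placeholder for your interface name; confirm the speeds your media actually
supports before forcing one):

    # force 10Gb/s full duplex with autonegotiation disabled on each end
    ethtool -s ethX speed 10000 duplex full autoneg off
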
82598-based adapters support all passive direct attach cables that comply
with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
cables are not supported.

Supplier    Type                                          Part Numbers

SR Modules
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)             FTLX8571D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)             AFBR-703SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)             AFBR-703SDZ-IN2
LR Modules
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)             FTLX1471D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)             AFCT-701SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)             AFCT-701SDZ-IN2

The following is a list of 3rd party SFP+ modules and direct attach cables
that have received some testing. Not all modules are applicable to all
devices.

Supplier    Type                                          Part Numbers

Finisar     SFP+ SR bailed, 10g single rate               FTLX8571D3BCL
Avago       SFP+ SR bailed, 10g single rate               AFBR-700SDZ
Finisar     SFP+ LR bailed, 10g single rate               FTLX1471D3BCL

Finisar     DUAL RATE 1G/10G SFP+ SR (No Bail)            FTLX8571D3QCV-IT
Avago       DUAL RATE 1G/10G SFP+ SR (No Bail)            AFBR-703SDZ-IN1
Finisar     DUAL RATE 1G/10G SFP+ LR (No Bail)            FTLX1471D3QCV-IT
Avago       DUAL RATE 1G/10G SFP+ LR (No Bail)            AFCT-701SDZ-IN1
Finisar     1000BASE-T SFP                                FCLF8522P2BTL
Avago       1000BASE-T SFP                                ABCU-5710RZ

82599-based adapters support all passive and active limiting direct attach
cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.

Laser turns off for SFP+ when ifconfig down
-------------------------------------------
"ifconfig down" turns off the laser for 82599-based SFP+ fiber adapters.
"ifconfig up" turns on the laser.


82598-BASED ADAPTERS

NOTES for 82598-Based Adapters:
- Intel(R) Network Adapters that support removable optical modules only
  support their original module type (i.e., the Intel(R) 10 Gigabit SR Dual
  Port Express Module only supports SR optical modules). If you plug in a
  different type of module, the driver will not load.
- Hot swapping/hot plugging optical modules is not supported.
- Only single-speed, 10 gigabit modules are supported.
- LAN on Motherboard (LOM) connections may support DA, SR, or LR modules.
  Other module types are not supported. Please see your system documentation
  for details.

The following is a list of 3rd party SFP+ modules and direct attach cables
that have received some testing. Not all modules are applicable to all
devices.

Supplier    Type                                          Part Numbers

Finisar     SFP+ SR bailed, 10g single rate               FTLX8571D3BCL
Avago       SFP+ SR bailed, 10g single rate               AFBR-700SDZ
Finisar     SFP+ LR bailed, 10g single rate               FTLX1471D3BCL

82598-based adapters support all passive direct attach cables that comply
with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
cables are not supported.


Flow Control
------------
Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
receiving and transmitting pause frames for ixgbe. When Tx is enabled, PAUSE
frames are generated when the receive packet buffer crosses a predefined
threshold. When Rx is enabled, the transmit unit will halt for the time delay
specified when a PAUSE frame is received.

Flow Control is enabled by default. To disable it toward a flow control
capable link partner, use ethtool:

    ethtool -A ethX autoneg off rx off tx off

NOTE: For 82598 backplane cards entering 1 gigabit mode, the flow control
default behavior is changed to off. Flow control in 1 gigabit mode on these
devices can lead to Tx hangs.
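
To inspect the pause parameters currently in effect, or to turn flow control
back on after disabling it, the standard ethtool pause options can be used.
A minimal sketch (ethX is a placeholder for your interface name):

    # show the configured and negotiated pause parameters
    ethtool -a ethX

    # re-enable flow control autonegotiation in both directions
    ethtool -A ethX autoneg on rx on tx on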

Intel(R) Ethernet Flow Director
-------------------------------
Flow Director supports advanced filters that direct receive packets by their
flows to different queues, enabling tight control over routing a flow in the
platform. It matches flows and CPU cores for flow affinity and supports
multiple parameters for flexible flow classification and load balancing.

Flow Director is enabled only if the kernel is multiple TX queue capable.

An included script (set_irq_affinity.sh) automates setting the IRQ to CPU
affinity.

You can verify that the driver is using Flow Director by looking at the
fdir_miss and fdir_match counters in ethtool.
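
For example, the Flow Director counters can be read from the driver
statistics (ethX is a placeholder; the exact counters shown depend on your
driver version):

    # dump all driver statistics and pick out the Flow Director counters
    ethtool -S ethX | grep fdir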

Other ethtool Commands:

To enable Flow Director:
    ethtool -K ethX ntuple on

To add a filter, use the -U switch, e.g.:
    ethtool -U ethX flow-type tcp4 src-ip 0x178000a action 1

To see the list of filters currently present:
    ethtool -u ethX

Perfect Filter: Perfect filter is an interface to load the filter table that
funnels all flows into queue_0 unless an alternative queue is specified using
"action". In that case, any flow that matches the filter criteria will be
directed to the appropriate queue.

If the queue is defined as -1, the filter will drop matching packets.
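
As a hedged illustration of the "action" behavior described above, the
following sketch adds one filter that steers a flow to a specific queue and
one that drops a flow (ethX, the port, the address and the rule ID are
placeholders; accepted filter syntax varies between ethtool versions):

    # steer TCP traffic destined to port 80 to Rx queue 2
    ethtool -U ethX flow-type tcp4 dst-port 80 action 2

    # drop all TCP traffic coming from 192.168.10.2
    ethtool -U ethX flow-type tcp4 src-ip 192.168.10.2 action -1

    # list the configured filters, then delete one by its rule ID
    ethtool -u ethX
    ethtool -U ethX delete 15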

To account for filter matches and misses, there are two stats in ethtool:
fdir_match and fdir_miss. In addition, rx_queue_N_packets shows the number of
packets processed by the Nth queue.

NOTE: Receive Packet Steering (RPS) and Receive Flow Steering (RFS) are not
compatible with Flow Director. If Flow Director is enabled, these will be
disabled.

The following three parameters impact Flow Director.

FdirMode
--------
Valid Range: 0-2 (0=off, 1=ATR, 2=Perfect filter mode)
Default Value: 1

Flow Director filtering modes.

FdirPballoc
-----------
Valid Range: 0-2 (0=64k, 1=128k, 2=256k)
Default Value: 0

Flow Director allocated packet buffer size.

AtrSampleRate
-------------
Valid Range: 1-100
Default Value: 20

Software ATR Tx packet sample rate. For example, when set to 20, every 20th
packet is examined to see whether it will create a new flow.
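
Assuming your build of the ixgbe driver exposes these as module parameters
(out-of-tree builds typically do; "modinfo ixgbe" lists what your build
accepts), they are set when the module is loaded, for example:

    # reload the driver in perfect filter mode with a 256k Flow Director
    # packet buffer and the default ATR sample rate
    rmmod ixgbe
    modprobe ixgbe FdirMode=2 FdirPballoc=2 AtrSampleRate=20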

Node
----
Valid Range: 0-n
Default Value: -1 (off)

  0 - n: where n is the number of NUMA nodes (i.e. 0 - 3) currently online
  in your system
  -1: turns this option off

The Node parameter allows you to choose which NUMA node you want the adapter
to allocate memory on.

max_vfs
-------
Valid Range: 1-63
Default Value: 0

This parameter adds support for SR-IOV. It causes the driver to spawn up to
max_vfs worth of virtual functions.

If the value is greater than 0, it will also force the VMDq parameter to be
1 or more.
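
A minimal sketch of enabling SR-IOV at module load time and confirming that
the virtual functions appeared (the VF count and interface name are examples
only):

    # load the driver with 4 VFs per port
    modprobe ixgbe max_vfs=4

    # the new virtual functions show up as additional PCI devices
    lspci | grep -i ethernet

    # and as "vf" entries on the physical function's interface
    ip link show ethX
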
Additional Configurations
=========================

Jumbo Frames
------------
The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
enabled by changing the MTU to a value larger than the default of 1500.
The maximum value for the MTU is 16110. Use the ifconfig command to
increase the MTU size. For example:

    ifconfig ethX mtu 9000 up

The maximum MTU setting for Jumbo Frames is 16110. This value coincides
with the maximum Jumbo Frames size of 16128.
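
The same change can be made with the newer iproute2 tooling, if you prefer
it over ifconfig (ethX is a placeholder for your interface name):

    # raise the MTU and bring the interface up
    ip link set dev ethX mtu 9000 up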

Generic Receive Offload, aka GRO
--------------------------------
The driver supports the in-kernel software implementation of GRO. GRO has
been shown to significantly reduce CPU utilization under heavy Rx load by
coalescing Rx traffic into larger chunks of data. GRO is an evolution of the
previously-used LRO interface. GRO is able to coalesce other protocols
besides TCP. It's also safe to use with configurations that are problematic
for LRO, namely bridging and iSCSI.
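
GRO is controlled through the generic ethtool offload interface; a minimal
sketch of checking and toggling it (ethX is a placeholder):

    # show the current offload settings, including generic-receive-offload
    ethtool -k ethX

    # turn GRO on (or "off" to disable it)
    ethtool -K ethX gro on
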
Data Center Bridging, aka DCB
-----------------------------
DCB is a configurable Quality of Service implementation in hardware.
It uses the VLAN priority tag (802.1p) to filter traffic. That means
that there are 8 different priorities that traffic can be filtered into.
It also enables priority flow control, which can limit or eliminate the
number of dropped packets during network stress. Bandwidth can be
allocated to each of these priorities, which is enforced at the hardware
level.

To enable DCB support in ixgbe, you must enable the DCB netlink layer to
allow the userspace tools (see below) to communicate with the driver.
This can be found in the kernel configuration here:

    -> Networking support
          -> Networking options
                -> Data Center Bridging support

Once this is selected, DCB support must be selected for ixgbe. This can
be found here:

    -> Device Drivers
          -> Network device support (NETDEVICES [=y])
                -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
                      -> Intel(R) 10GbE PCI Express adapters support
                            -> Data Center Bridging (DCB) Support

After these options are selected, you must rebuild your kernel and your
modules.
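
A quick, hedged way to confirm that a running kernel was built with these
options (config names as used by the in-tree ixgbe driver; the config file
location may differ on your distribution):

    # both symbols should be "y" (or "m" for the driver itself)
    grep -E 'CONFIG_DCB=|CONFIG_IXGBE_DCB=' /boot/config-$(uname -r)
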
In order to use DCB, userspace tools must be downloaded and installed.
The dcbd tools can be found at:

http://e1000.sf.net

Ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest
ethtool version is required for this functionality.

The latest release of ethtool can be found at:

http://ftp.kernel.org/pub/software/network/ethtool/
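
For example, the driver name, version and firmware information for an
interface can be displayed with (ethX is a placeholder):

    # confirm that the interface is bound to the ixgbe driver
    ethtool -i ethX
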
FCoE
----
This release of the ixgbe driver contains code to enable users to use
Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB)
functionality that is supported by the 82598-based hardware. This code has
no default effect on the regular driver operation, and configuring DCB and
FCoE is outside the scope of this driver README. Refer to
http://www.open-fcoe.org/ for FCoE project information and contact
e1000-eedc@lists.sourceforge.net for DCB information.

MAC and VLAN anti-spoofing feature
----------------------------------
When a malicious driver attempts to send a spoofed packet, it is dropped by
the hardware and not transmitted. An interrupt is sent to the PF driver
notifying it of the spoof attempt.

When a spoofed packet is detected, the PF driver will send the following
message to the system log (displayed by the "dmesg" command):

    Spoof event(s) detected on VF (n)

where n = the VF that attempted to do the spoofing.
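
A hedged sketch of watching for these events and of toggling spoof checking
on a given VF (requires an iproute2 version with VF spoofchk support; ethX
and the VF number are placeholders):

    # look for spoof events reported by the PF driver
    dmesg | grep -i spoof

    # enable (or disable with "off") anti-spoof checking on VF 0
    ip link set dev ethX vf 0 spoofchk on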

Performance Tuning
==================

An excellent article on performance tuning can be found at:

http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf


Known Issues
============

Enabling SR-IOV in a 32-bit or 64-bit Microsoft* Windows* Server 2008/R2
Guest OS using Intel(R) 82576-based GbE or Intel(R) 82599-based 10GbE
controller under KVM
------------------------------------------------------------------------
The KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM.
This includes traditional PCIe devices, as well as SR-IOV-capable devices
using Intel 82576-based and 82599-based controllers.

While direct assignment of a PCIe device or an SR-IOV Virtual Function (VF)
to a Linux-based VM running a 2.6.32 or later kernel works fine, there is a
known issue with Microsoft Windows Server 2008 VMs that results in a "yellow
bang" error. The problem is not in the Intel driver or the SR-IOV logic of
the VMM, but within KVM itself: KVM emulates an older CPU model for the
guests, and that older CPU model does not support MSI-X interrupts, which
are a requirement for Intel SR-IOV.

If you wish to use the Intel 82576-based or 82599-based controllers in
SR-IOV mode with KVM and a Microsoft Windows Server 2008 guest, try the
following workaround: tell KVM to emulate a different model of CPU when
using qemu to create the KVM guest:

    "-cpu qemu64,model=13"
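
As a hedged illustration only (the disk image path and PCI address of the
assigned VF are placeholders, and the exact device-assignment options depend
on your QEMU/KVM version), a guest might be started along these lines:

    qemu-system-x86_64 -enable-kvm -cpu qemu64,model=13 -m 4096 \
        -drive file=/path/to/win2008.img,format=raw \
        -device vfio-pci,host=0000:01:10.0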


Support
=======

For general information, go to the Intel support website at:

http://support.intel.com

or the Intel Wired Networking project hosted by Sourceforge at:

http://e1000.sourceforge.net

If an issue is identified with the released source code on the supported
kernel with a supported adapter, email the specific information related
to the issue to e1000-devel@lists.sf.net.