Linux* Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Family of
Adapters
=============================================================================

Intel 10 Gigabit Linux driver.
Copyright(c) 1999 - 2013 Intel Corporation.

Contents
========

- Identifying Your Adapter
- Additional Configurations
- Performance Tuning
- Known Issues
- Support

Identifying Your Adapter
========================

The driver in this release is compatible with 82598, 82599 and X540-based
Intel Network Connections.

For more information on how to identify your adapter, go to the Adapter &
Driver ID Guide at:

    http://support.intel.com/support/network/sb/CS-012904.htm

SFP+ Devices with Pluggable Optics
----------------------------------

82599-BASED ADAPTERS

NOTES: If your 82599-based Intel(R) Network Adapter came with Intel optics, or
is an Intel(R) Ethernet Server Adapter X520-2, then it only supports Intel
optics and/or the direct attach cables listed below.

When 82599-based SFP+ devices are connected back to back, they should be set to
the same Speed setting via ethtool. Results may vary if you mix speed settings.

Supplier    Type                                            Part Numbers

SR Modules
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)               FTLX8571D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)               AFBR-703SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)               AFBR-703SDZ-IN2
LR Modules
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)               FTLX1471D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)               AFCT-701SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)               AFCT-701SDZ-IN2

The following is a list of 3rd party SFP+ modules and direct attach cables that
have received some testing. Not all modules are applicable to all devices.

Supplier    Type                                            Part Numbers

Finisar     SFP+ SR bailed, 10g single rate                 FTLX8571D3BCL
Avago       SFP+ SR bailed, 10g single rate                 AFBR-700SDZ
Finisar     SFP+ LR bailed, 10g single rate                 FTLX1471D3BCL

Finisar     DUAL RATE 1G/10G SFP+ SR (No Bail)              FTLX8571D3QCV-IT
Avago       DUAL RATE 1G/10G SFP+ SR (No Bail)              AFBR-703SDZ-IN1
Finisar     DUAL RATE 1G/10G SFP+ LR (No Bail)              FTLX1471D3QCV-IT
Avago       DUAL RATE 1G/10G SFP+ LR (No Bail)              AFCT-701SDZ-IN1
Finisar     1000BASE-T SFP                                  FCLF8522P2BTL
Avago       1000BASE-T SFP                                  ABCU-5710RZ

82599-based adapters support all passive and active limiting direct attach
cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.

Laser turns off for SFP+ when device is down
--------------------------------------------
"ip link set down" turns off the laser for 82599-based SFP+ fiber adapters.
"ip link set up" turns on the laser.


82598-BASED ADAPTERS

NOTES for 82598-Based Adapters:
- Intel(R) Network Adapters that support removable optical modules only support
  their original module type (i.e., the Intel(R) 10 Gigabit SR Dual Port
  Express Module only supports SR optical modules). If you plug in a different
  type of module, the driver will not load.
- Hot Swapping/hot plugging optical modules is not supported.
- Only single speed, 10 gigabit modules are supported.
- LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module
  types are not supported. Please see your system documentation for details.
PJ Waskiewicz09e1c062009-03-13 22:15:54 +000090
Jeff Kirsher872857a2010-12-09 23:55:47 -080091The following is a list of 3rd party SFP+ modules and direct attach cables that
92have received some testing. Not all modules are applicable to all devices.
PJ Waskiewicz09e1c062009-03-13 22:15:54 +000093
Jeff Kirsher872857a2010-12-09 23:55:47 -080094Supplier Type Part Numbers
PJ Waskiewicz09e1c062009-03-13 22:15:54 +000095
Jeff Kirsher872857a2010-12-09 23:55:47 -080096Finisar SFP+ SR bailed, 10g single rate FTLX8571D3BCL
97Avago SFP+ SR bailed, 10g single rate AFBR-700SDZ
98Finisar SFP+ LR bailed, 10g single rate FTLX1471D3BCL
PJ Waskiewicz09e1c062009-03-13 22:15:54 +000099
Jeff Kirsher872857a2010-12-09 23:55:47 -080010082598-based adapters support all passive direct attach cables that comply
101with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
102cables are not supported.


Flow Control
------------
Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
receiving and transmitting pause frames for ixgbe. When TX is enabled, PAUSE
frames are generated when the receive packet buffer crosses a predefined
threshold. When RX is enabled, the transmit unit will halt for the time delay
specified when a PAUSE frame is received.

Flow Control is enabled by default. To disable flow control when operating
with a flow control capable link partner, use ethtool:

    ethtool -A ethX autoneg off rx off tx off
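
To display the current flow control settings, and to re-enable flow control
later, ethtool can also be used (ethX is a placeholder for your interface
name):

    ethtool -a ethX
    ethtool -A ethX rx on tx on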

NOTE: For 82598 backplane cards entering 1 gig mode, flow control default
behavior is changed to off. Flow control in 1 gig mode on these devices can
lead to Tx hangs.

Intel(R) Ethernet Flow Director
-------------------------------
The Intel(R) Ethernet Flow Director supports advanced filters that direct
receive packets, by their flows, to different queues. It enables tight control
of flow routing in the platform, matches flows to CPU cores for flow affinity,
and supports multiple parameters for flexible flow classification and load
balancing.

Flow Director is enabled only if the kernel is capable of multiple TX queues.

An included script (set_irq_affinity.sh) automates setting the IRQ-to-CPU
affinity.

You can verify that the driver is using Flow Director by looking at the
counters in ethtool: fdir_miss and fdir_match.

Other ethtool Commands:
To enable Flow Director
    ethtool -K ethX ntuple on
To add a filter
    Use the -U switch, e.g., ethtool -U ethX flow-type tcp4 src-ip 10.0.128.23
    action 1
To see the list of filters currently present:
    ethtool -u ethX
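To delete a filter (N is the filter ID reported by ethtool -u; support for
this option depends on your ethtool version):
    ethtool -U ethX delete N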

Perfect Filter: Perfect filter is an interface to load the filter table that
funnels all flows into queue_0 unless an alternative queue is specified using
"action". In that case, any flow that matches the filter criteria will be
directed to the appropriate queue.

If the queue is defined as -1, the filter will drop matching packets.
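
For example, a drop filter might look like the following (a sketch only; the
flow type and port number are illustrative):

    ethtool -U ethX flow-type tcp4 dst-port 8080 action -1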

To account for filter matches and misses, there are two stats in ethtool:
fdir_match and fdir_miss. In addition, rx_queue_N_packets shows the number of
packets processed by the Nth queue.
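
These counters can be read from the driver statistics, for example (ethX is a
placeholder; exact counter names can vary between driver versions):

    ethtool -S ethX | grep fdir
    ethtool -S ethX | grep rx_queue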

NOTE: Receive Packet Steering (RPS) and Receive Flow Steering (RFS) are not
compatible with Flow Director. If Flow Director is enabled, these will be
disabled.

The following three parameters impact Flow Director.

FdirMode
--------
Valid Range: 0-2 (0=off, 1=ATR, 2=Perfect filter mode)
Default Value: 1

  Flow Director filtering modes.

FdirPballoc
-----------
Valid Range: 0-2 (0=64k, 1=128k, 2=256k)
Default Value: 0

  Flow Director allocated packet buffer size.

AtrSampleRate
-------------
Valid Range: 1-100
Default Value: 20

  Software ATR Tx packet sample rate. For example, when set to 20, every 20th
  packet is sampled to determine whether it will create a new flow.

Node
----
Valid Range: 0-n
Default Value: -1 (off)

  0 - n: where n is the number of NUMA nodes (i.e. 0 - 3) currently online in
  your system
  -1: turns this option off

  The Node parameter allows you to choose which NUMA node you want the adapter
  to allocate memory on.

max_vfs
-------
Valid Range: 1-63
Default Value: 0

  If the value is greater than 0 it will also force the VMDq parameter to be 1
  or more.

  This parameter adds support for SR-IOV. It causes the driver to spawn up to
  max_vfs virtual functions.
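
As a sketch of how these module parameters might be passed (which parameters
are available depends on whether you use the in-kernel or out-of-tree ixgbe
build; check modinfo first):

    modinfo ixgbe | grep parm
    modprobe ixgbe max_vfs=4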


Additional Configurations
=========================

  Jumbo Frames
  ------------
  The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
  enabled by changing the MTU to a value larger than the default of 1500.
  Use the ip command to increase the MTU size. For example:

        ip link set dev ethX mtu 9000

  The maximum MTU setting for Jumbo Frames is 9710. This value coincides
  with the maximum Jumbo Frames size of 9728 bytes.
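
  To verify the MTU currently in effect (ethX is a placeholder for your
  interface name):

        ip link show dev ethX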

  Generic Receive Offload, aka GRO
  --------------------------------
  The driver supports the in-kernel software implementation of GRO. GRO has
  been shown to significantly reduce CPU utilization under heavy Rx load by
  coalescing Rx traffic into larger chunks of data. GRO is an evolution of the
  previously-used LRO interface. GRO is able to coalesce other protocols
  besides TCP. It is also safe to use with configurations that are problematic
  for LRO, namely bridging and iSCSI.
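
  GRO is typically enabled by default; it can be checked and toggled per
  interface with ethtool (ethX is a placeholder):

        ethtool -k ethX | grep generic-receive-offload
        ethtool -K ethX gro on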

  Data Center Bridging, aka DCB
  -----------------------------
  DCB is a Quality of Service (QoS) implementation in hardware. It uses the
  VLAN priority tag (802.1p) to filter traffic, which means that traffic can
  be filtered into 8 different priorities. It also enables priority flow
  control, which can limit or eliminate the number of dropped packets during
  network stress. Bandwidth can be allocated to each of these priorities and
  is enforced at the hardware level.

  To enable DCB support in ixgbe, you must enable the DCB netlink layer to
  allow the userspace tools (see below) to communicate with the driver.
  This can be found in the kernel configuration here:

        -> Networking support
              -> Networking options
                    -> Data Center Bridging support

  Once this is selected, DCB support must be selected for ixgbe. This can
  be found here:

        -> Device Drivers
              -> Network device support (NETDEVICES [=y])
                    -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
                          -> Intel(R) 10GbE PCI Express adapters support
                                -> Data Center Bridging (DCB) Support

  After these options are selected, you must rebuild your kernel and your
  modules.
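
  On a running system, a quick way to check whether these options were built
  in is to inspect the kernel configuration, if it is available (the config
  file location varies by distribution):

        grep -E 'CONFIG_DCB|CONFIG_IXGBE_DCB' /boot/config-$(uname -r)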

  In order to use DCB, userspace tools must be downloaded and installed.
  The dcbd tools can be found at:

        http://e1000.sf.net

  Ethtool
  -------
  The driver utilizes the ethtool interface for driver configuration and
  diagnostics, as well as displaying statistical information. The latest
  ethtool version is required for this functionality.

  The latest release of ethtool can be found at:
  http://ftp.kernel.org/pub/software/network/ethtool/
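
  For example, to confirm which driver and firmware version an interface is
  using, and to query its current link settings (ethX is a placeholder):

        ethtool -i ethX
        ethtool ethX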

  FCoE
  ----
  This release of the ixgbe driver contains new code to enable users to use
  Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB)
  functionality that is supported by the 82598-based hardware. This code has
  no default effect on the regular driver operation, and configuring DCB and
  FCoE is outside the scope of this driver README. Refer to
  http://www.open-fcoe.org/ for FCoE project information and contact
  e1000-eedc@lists.sourceforge.net for DCB information.

  MAC and VLAN anti-spoofing feature
  ----------------------------------
  When a malicious driver attempts to send a spoofed packet, it is dropped by
  the hardware and not transmitted. An interrupt is sent to the PF driver
  notifying it of the spoof attempt.

  When a spoofed packet is detected, the PF driver will send the following
  message to the system log (displayed by the "dmesg" command):

        Spoof event(s) detected on VF (n)

  where n is the number of the VF that attempted the spoofing.
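
  Anti-spoof checking can also be controlled per VF from the host with the ip
  command, where your iproute2 version supports it (ethX and the VF index are
  placeholders):

        ip link set ethX vf 0 spoofchk off
        ip link set ethX vf 0 spoofchk on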


Performance Tuning
==================

An excellent article on performance tuning can be found at:

http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf


Known Issues
============

  Enabling SR-IOV in a 32-bit or 64-bit Microsoft* Windows* Server 2008/R2
  Guest OS using Intel(R) 82576-based GbE or Intel(R) 82599-based 10GbE
  controller under KVM
  ------------------------------------------------------------------------
  KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM. This
  includes traditional PCIe devices, as well as SR-IOV-capable devices using
  Intel 82576-based and 82599-based controllers.

  While direct assignment of a PCIe device or an SR-IOV Virtual Function (VF)
  to a Linux-based VM running a 2.6.32 or later kernel works fine, there is a
  known issue with a Microsoft Windows Server 2008 VM that results in a
  "yellow bang" error. The problem lies within the KVM VMM itself, not the
  Intel driver or the SR-IOV logic of the VMM: KVM emulates an older CPU model
  for the guests, and this older CPU model does not support MSI-X interrupts,
  which are a requirement for Intel SR-IOV.

  If you wish to use the Intel 82576 or 82599-based controllers in SR-IOV mode
  with KVM and a Microsoft Windows Server 2008 guest, try the following
  workaround: tell KVM to emulate a different model of CPU when using qemu to
  create the KVM guest:

        "-cpu qemu64,model=13"


Support
=======

For general information, go to the Intel support website at:

    http://support.intel.com

or the Intel Wired Networking project hosted by Sourceforge at:

    http://e1000.sourceforge.net

If an issue is identified with the released source code on the supported
kernel with a supported adapter, email the specific information related
to the issue to e1000-devel@lists.sf.net