Linux Base Driver for 10 Gigabit PCI Express Intel(R) Network Connection
========================================================================

March 10, 2009


Contents
========

- In This Release
- Identifying Your Adapter
- Building and Installation
- Additional Configurations
- Support



In This Release
===============

This file describes the ixgbe Linux Base Driver for the 10 Gigabit PCI
Express Intel(R) Network Connection. This driver includes support for
Itanium(R)2-based systems.

For questions related to hardware requirements, refer to the documentation
supplied with your 10 Gigabit adapter. All hardware requirements listed apply
to use with Linux.

The following features are available in this kernel:
 - Native VLANs
 - Channel Bonding (teaming)
 - SNMP
 - Generic Receive Offload
 - Data Center Bridging

Channel Bonding documentation can be found in the Linux kernel source:
/Documentation/networking/bonding.txt

Ethtool, lspci, and ifconfig can be used to display device and driver
specific information.

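
For example, where ethx is a placeholder for the interface name assigned to
your adapter, the driver name and version and the current interface
configuration can be displayed with:

       ethtool -i ethx
       ifconfig ethx
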

Identifying Your Adapter
========================

This driver supports devices based on the 82598 controller and the 82599
controller.

For specific information on identifying which adapter you have, please visit:

    http://support.intel.com/support/network/sb/CS-008441.htm

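A quick way to check which controller a system has, assuming the lspci
utility is installed, is to list the PCI Ethernet devices and look for
82598 or 82599 in the description:

       lspci | grep -i ethernet
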

Building and Installation
=========================

Select M for "Intel(R) 10GbE PCI Express adapters support" located at:
      Location:
        -> Device Drivers
          -> Network device support (NETDEVICES [=y])
            -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
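
A typical way to reach this menu is to run one of the kernel configuration
front ends from the top of your kernel source tree, for example (the source
tree path below is only an illustration; adjust it for your system):

       cd /usr/src/linux
       make menuconfig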

1. make modules && make modules_install

2. Load the module:

       # modprobe ixgbe

   The insmod command can be used if the full
   path to the driver module is specified. For example:

       insmod /lib/modules/<KERNEL VERSION>/kernel/drivers/net/ixgbe/ixgbe.ko

   With 2.6-based kernels, also make sure that older ixgbe drivers are
   removed from the kernel before loading the new module:

       rmmod ixgbe; modprobe ixgbe

3. Assign an IP address to the interface by entering the following, where
   x is the interface number:

       ifconfig ethx <IP_address>

4. Verify that the interface works. Enter the following, where <IP_address>
   is the IP address for another machine on the same subnet as the interface
   that is being tested:

       ping <IP_address>

   Steps 3 and 4 can also be performed with the iproute2 ip utility, as
   sketched below.
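
On newer distributions the ifconfig utility is replaced by the iproute2 ip
command. A minimal sketch of steps 3 and 4 using ip, assuming an interface
named ethx and example addresses on a 192.168.10.0/24 subnet (substitute
your own values):

       ip addr add 192.168.10.2/24 dev ethx
       ip link set ethx up
       ping 192.168.10.3
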

Additional Configurations
=========================

  Viewing Link Messages
  ---------------------
  Link messages will not be displayed to the console if the distribution is
  restricting system messages. In order to see network driver link messages on
  your console, set the console log level to eight by entering the following:

       dmesg -n 8

  NOTE: This setting is not saved across reboots.
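
  To see only the messages logged by this driver, the kernel log can also be
  filtered. For example (the exact wording of the link messages may vary
  between driver versions):

       dmesg | grep ixgbe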


  Jumbo Frames
  ------------
  The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
  enabled by changing the MTU to a value larger than the default of 1500.
  Use the ifconfig command to increase the MTU size. For example:

       ifconfig ethx mtu 9000 up

  The maximum MTU setting for Jumbo Frames is 16110. This value coincides
  with the maximum Jumbo Frames size of 16128.
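
  On systems that use the iproute2 tools instead of ifconfig, the MTU can be
  raised with an equivalent ip command (ethx is again a placeholder for your
  interface name):

       ip link set dev ethx mtu 9000 up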

  Generic Receive Offload, aka GRO
  --------------------------------
  The driver supports the in-kernel software implementation of GRO. By
  coalescing Rx traffic into larger chunks of data, GRO can significantly
  reduce CPU utilization under heavy Rx load. GRO is an evolution of the
  previously-used LRO interface, and it can coalesce protocols other than
  TCP. It is also safe to use with configurations that are problematic for
  LRO, namely bridging and iSCSI.

  GRO is enabled by default in the driver. Future versions of ethtool will
  support disabling and re-enabling GRO on the fly.
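
  With an ethtool release new enough to expose the generic offload settings,
  the GRO state can be inspected and toggled roughly as follows (a sketch;
  older ethtool releases do not accept these options, and ethx is a
  placeholder for your interface name):

       ethtool -k ethx
       ethtool -K ethx gro off
       ethtool -K ethx gro on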


  Data Center Bridging, aka DCB
  -----------------------------

  DCB is a Quality of Service implementation in hardware. It uses the VLAN
  priority tag (802.1p) to filter traffic, which means traffic can be
  filtered into eight different priorities. DCB also enables priority flow
  control, which can limit or eliminate the number of packets dropped during
  network stress. Bandwidth can be allocated to each of these priorities,
  and the allocation is enforced at the hardware level.

  To enable DCB support in ixgbe, you must enable the DCB netlink layer to
  allow the userspace tools (see below) to communicate with the driver.
  This can be found in the kernel configuration here:

        -> Networking support
          -> Networking options
            -> Data Center Bridging support

  Once this is selected, DCB support must be selected for ixgbe. This can
  be found here:

        -> Device Drivers
          -> Network device support (NETDEVICES [=y])
            -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
              -> Intel(R) 10GbE PCI Express adapters support
                -> Data Center Bridging (DCB) Support

  After these options are selected, you must rebuild your kernel and your
  modules.
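
  After rebuilding, you can confirm that both options took effect by checking
  the generated kernel configuration from the top of your kernel source tree.
  The config symbols below are assumed to correspond to the menu entries
  above:

       grep -E 'CONFIG_DCB|CONFIG_IXGBE_DCB' .config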

  In order to use DCB, userspace tools must be downloaded and installed.
  The dcbd tools can be found at:

        http://e1000.sf.net


  Ethtool
  -------
  The driver utilizes the ethtool interface for driver configuration and
  diagnostics, as well as displaying statistical information. Ethtool
  version 3.0 or later is required for this functionality.

  The latest release of ethtool can be found at:
  http://sourceforge.net/projects/gkernel.
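
  For example, driver statistics and diagnostics can be displayed with
  commands along these lines, where ethx is a placeholder for your interface
  name (ethtool -S prints the driver's statistics counters; ethtool -t runs
  the adapter self-test, if the driver supports it):

       ethtool -S ethx
       ethtool -t ethx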


  NAPI
  ----

  NAPI (Rx polling mode) is supported in the ixgbe driver. NAPI is enabled
  by default in the driver.

  See www.cyberus.ca/~hadi/usenix-paper.tgz for more information on NAPI.


Support
=======

For general information, go to the Intel support website at:

    http://support.intel.com

or the Intel Wired Networking project hosted by Sourceforge at:

    http://e1000.sourceforge.net

If an issue is identified with the released source code on the supported
kernel with a supported adapter, email the specific information related
to the issue to e1000-devel@lists.sf.net.