VFIO - "Virtual Function I/O"[1]
-------------------------------------------------------------------------------
Many modern systems now provide DMA and interrupt remapping facilities
to help ensure I/O devices behave within the boundaries they've been
allotted. This includes x86 hardware with AMD-Vi and Intel VT-d,
POWER systems with Partitionable Endpoints (PEs) and embedded PowerPC
systems such as Freescale PAMU. The VFIO driver is an IOMMU/device
agnostic framework for exposing direct device access to userspace, in
a secure, IOMMU protected environment. In other words, this allows
safe[2], non-privileged, userspace drivers.

Why do we want that? Virtual machines often make use of direct device
access ("device assignment") when configured for the highest possible
I/O performance. From a device and host perspective, this simply
turns the VM into a userspace driver, with the benefits of
significantly reduced latency, higher bandwidth, and direct use of
bare-metal device drivers[3].

Some applications, particularly in the high performance computing
field, also benefit from low-overhead, direct device access from
userspace. Examples include network adapters (often non-TCP/IP based)
and compute accelerators. Prior to VFIO, these drivers had to either
go through the full development cycle to become a proper upstream
driver, be maintained out of tree, or make use of the UIO framework,
which has no notion of IOMMU protection, limited interrupt support,
and requires root privileges to access things like PCI configuration
space.

The VFIO driver framework intends to unify these, replacing the
KVM PCI-specific device assignment code and providing a more
secure, more featureful userspace driver environment than UIO.

Groups, Devices, and IOMMUs
-------------------------------------------------------------------------------

Devices are the main target of any I/O driver. Devices typically
create a programming interface made up of I/O access, interrupts,
and DMA. Without going into the details of each of these, DMA is
by far the most critical aspect for maintaining a secure environment,
as allowing a device read-write access to system memory imposes the
greatest risk to the overall system integrity.

To help mitigate this risk, many modern IOMMUs now incorporate
isolation properties into what was, in many cases, an interface only
meant for translation (i.e. solving the addressing problems of devices
with limited address spaces). With this, devices can now be isolated
from each other and from arbitrary memory access, thus allowing
things like secure direct assignment of devices into virtual machines.

This isolation is not always at the granularity of a single device,
though. Even when an IOMMU is capable of this, properties of devices,
interconnects, and IOMMU topologies can each reduce this isolation.
For instance, an individual device may be part of a larger
multi-function enclosure. While the IOMMU may be able to distinguish
between devices within the enclosure, the enclosure may not require
transactions between devices to reach the IOMMU. Examples of this
could be anything from a multi-function PCI device with backdoors
between functions to a non-PCI-ACS (Access Control Services) capable
bridge allowing redirection without reaching the IOMMU. Topology
can also play a factor in terms of hiding devices. A PCIe-to-PCI
bridge masks the devices behind it, making transactions appear as if
they come from the bridge itself. Obviously, IOMMU design plays a
major role here as well.

Therefore, while for the most part an IOMMU may have device level
granularity, any system is susceptible to reduced granularity. The
IOMMU API therefore supports a notion of IOMMU groups. A group is
a set of devices which is isolatable from all other devices in the
system. Groups are therefore the unit of ownership used by VFIO.

While the group is the minimum granularity that must be used to
ensure secure user access, it's not necessarily the preferred
granularity. In IOMMUs which make use of page tables, it may be
possible to share a set of page tables between different groups,
reducing the overhead both to the platform (reduced TLB thrashing,
reduced duplicate page tables), and to the user (programming only
a single set of translations). For this reason, VFIO makes use of
a container class, which may hold one or more groups. A container
is created by simply opening the /dev/vfio/vfio character device.
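
A minimal sketch of this step (the complete flow appears in the usage
example further below):

        int container = open("/dev/vfio/vfio", O_RDWR);

        if (ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION)
                /* Unknown API version */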

On its own, the container provides little functionality, with all
but a couple of version and extension query interfaces locked away.
The user needs to add a group into the container for the next level
of functionality. To do this, the user first needs to identify the
group associated with the desired device. This can be done using
the sysfs links described in the example below. By unbinding the
device from the host driver and binding it to a VFIO driver, a new
VFIO group will appear for the group as /dev/vfio/$GROUP, where
$GROUP is the IOMMU group number of which the device is a member.
If the IOMMU group contains multiple devices, each will need to
be bound to a VFIO driver before operations on the VFIO group
are allowed (it's also sufficient to only unbind the device from
host drivers if a VFIO driver is unavailable; this will make the
group available, but not that particular device). TBD - interface
for disabling driver probing/locking a device.

Once the group is ready, it may be added to the container by opening
the VFIO group character device (/dev/vfio/$GROUP) and using the
VFIO_GROUP_SET_CONTAINER ioctl, passing the file descriptor of the
previously opened container file. If desired, and if the IOMMU driver
supports sharing the IOMMU context between groups, multiple groups may
be set to the same container. If a group fails to be set to a container
with existing groups, a new empty container will need to be used
instead.
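
For example, two groups might be attached to one shared container as
follows (a sketch; the group numbers are examples and error handling
is omitted):

        int container = open("/dev/vfio/vfio", O_RDWR);
        int group_a = open("/dev/vfio/26", O_RDWR);
        int group_b = open("/dev/vfio/27", O_RDWR);

        /* Both groups now share the same IOMMU context */
        ioctl(group_a, VFIO_GROUP_SET_CONTAINER, &container);

        if (ioctl(group_b, VFIO_GROUP_SET_CONTAINER, &container))
                /* IOMMU driver can't share context; use a new container */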

With a group (or groups) attached to a container, the remaining
ioctls become available, enabling access to the VFIO IOMMU interfaces.
Additionally, it now becomes possible to get file descriptors for each
device within a group using an ioctl on the VFIO group file descriptor.

The VFIO device API includes ioctls for describing the device, the I/O
regions and their read/write/mmap offsets on the device descriptor, as
well as mechanisms for describing and registering interrupt
notifications.
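
For example, a region that VFIO_DEVICE_GET_REGION_INFO reports as
mmap-capable can be mapped directly from the device file descriptor
(a sketch; "device" is an open VFIO device fd and "i" a region index,
as in the usage example below):

        struct vfio_region_info reg = { .argsz = sizeof(reg) };
        void *map;

        reg.index = i;
        ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg);

        if (reg.flags & VFIO_REGION_INFO_FLAG_MMAP)
                map = mmap(NULL, reg.size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, device, reg.offset);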

VFIO Usage Example
-------------------------------------------------------------------------------

Assume the user wants to access PCI device 0000:06:0d.0:

$ readlink /sys/bus/pci/devices/0000:06:0d.0/iommu_group
../../../../kernel/iommu_groups/26

This device is therefore in IOMMU group 26. This device is on the
PCI bus, therefore the user will make use of vfio-pci to manage the
group:

# modprobe vfio-pci

Binding this device to the vfio-pci driver creates the VFIO group
character devices for this group:

$ lspci -n -s 0000:06:0d.0
06:0d.0 0401: 1102:0002 (rev 08)
# echo 0000:06:0d.0 > /sys/bus/pci/devices/0000:06:0d.0/driver/unbind
# echo 1102 0002 > /sys/bus/pci/drivers/vfio-pci/new_id

Now we need to look at what other devices are in the group to free
it for use by VFIO:

$ ls -l /sys/bus/pci/devices/0000:06:0d.0/iommu_group/devices
total 0
lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:00:1e.0 ->
        ../../../../devices/pci0000:00/0000:00:1e.0
lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:06:0d.0 ->
        ../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.0
lrwxrwxrwx. 1 root root 0 Apr 23 16:13 0000:06:0d.1 ->
        ../../../../devices/pci0000:00/0000:00:1e.0/0000:06:0d.1

This device is behind a PCIe-to-PCI bridge[4], therefore we also
need to add device 0000:06:0d.1 to the group following the same
procedure as above. Device 0000:00:1e.0 is a bridge that does
not currently have a host driver, therefore it's not required to
bind this device to the vfio-pci driver (vfio-pci does not currently
support PCI bridges).

The final step is to provide the user with access to the group if
unprivileged operation is desired (note that /dev/vfio/vfio provides
no capabilities on its own and is therefore expected to be set to
mode 0666 by the system).

# chown user:user /dev/vfio/26

The user now has full access to all the devices and the IOMMU for this
group and can access them as follows:

        int container, group, device, i;
        struct vfio_group_status group_status =
                                { .argsz = sizeof(group_status) };
        struct vfio_iommu_type1_info iommu_info = { .argsz = sizeof(iommu_info) };
        struct vfio_iommu_type1_dma_map dma_map = { .argsz = sizeof(dma_map) };
        struct vfio_device_info device_info = { .argsz = sizeof(device_info) };

        /* Create a new container */
        container = open("/dev/vfio/vfio", O_RDWR);

        if (ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION)
                /* Unknown API version */

        if (!ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1_IOMMU))
                /* Doesn't support the IOMMU driver we want. */

        /* Open the group */
        group = open("/dev/vfio/26", O_RDWR);

        /* Test that the group is viable and available */
        ioctl(group, VFIO_GROUP_GET_STATUS, &group_status);

        if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE))
                /* Group is not viable (i.e., not all devices bound for vfio) */

        /* Add the group to the container */
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);

        /* Enable the IOMMU model we want */
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

        /* Get additional IOMMU info */
        ioctl(container, VFIO_IOMMU_GET_INFO, &iommu_info);
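
        /* iommu_info.iova_pgsizes (valid when VFIO_IOMMU_INFO_PGSIZES is
         * set in iommu_info.flags) is a bitmap of the IOMMU page sizes
         * that mappings such as the one below must respect */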

        /* Allocate some space and setup a DMA mapping */
        dma_map.vaddr = mmap(0, 1024 * 1024, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);
        dma_map.size = 1024 * 1024;
        dma_map.iova = 0; /* 1MB starting at 0x0 from device view */
        dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;

        ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);

        /* Get a file descriptor for the device */
        device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");

        /* Test and setup the device */
        ioctl(device, VFIO_DEVICE_GET_INFO, &device_info);

        for (i = 0; i < device_info.num_regions; i++) {
                struct vfio_region_info reg = { .argsz = sizeof(reg) };

                reg.index = i;

                ioctl(device, VFIO_DEVICE_GET_REGION_INFO, &reg);

                /* Setup mappings... read/write offsets, mmaps
                 * For PCI devices, config space is a region */
        }

        for (i = 0; i < device_info.num_irqs; i++) {
                struct vfio_irq_info irq = { .argsz = sizeof(irq) };

                irq.index = i;

                ioctl(device, VFIO_DEVICE_GET_IRQ_INFO, &irq);

                /* Setup IRQs... eventfds, VFIO_DEVICE_SET_IRQS */
        }

        /* Gratuitous device reset and go... */
        ioctl(device, VFIO_DEVICE_RESET);

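The "Setup IRQs" comment above is typically where an eventfd is
registered with VFIO_DEVICE_SET_IRQS so that the kernel signals a file
descriptor whenever the device interrupt fires. Below is a minimal
sketch of that step; the index used is an assumption (for vfio-pci,
for instance, VFIO_PCI_INTX_IRQ_INDEX or VFIO_PCI_MSI_IRQ_INDEX would
be chosen) and the vector count should come from the matching
vfio_irq_info:

        struct vfio_irq_set *irq_set;
        char buf[sizeof(struct vfio_irq_set) + sizeof(int32_t)];
        int irqfd = eventfd(0, 0);      /* from <sys/eventfd.h> */

        irq_set = (struct vfio_irq_set *)buf;
        irq_set->argsz = sizeof(buf);
        irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
                         VFIO_IRQ_SET_ACTION_TRIGGER;
        irq_set->index = 0;             /* index from vfio_irq_info above */
        irq_set->start = 0;             /* first vector */
        irq_set->count = 1;             /* one eventfd in data[] */
        memcpy(irq_set->data, &irqfd, sizeof(int32_t));

        ioctl(device, VFIO_DEVICE_SET_IRQS, irq_set);

        /* read() or poll() irqfd to consume interrupts */
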
VFIO User API
-------------------------------------------------------------------------------

Please see include/linux/vfio.h for complete API documentation.

VFIO bus driver API
-------------------------------------------------------------------------------

VFIO bus drivers, such as vfio-pci, make use of only a few interfaces
into VFIO core. When devices are bound to and unbound from the driver,
the driver should call vfio_add_group_dev() and vfio_del_group_dev()
respectively:

        extern int vfio_add_group_dev(struct iommu_group *iommu_group,
                                      struct device *dev,
                                      const struct vfio_device_ops *ops,
                                      void *device_data);

        extern void *vfio_del_group_dev(struct device *dev);

vfio_add_group_dev() tells the core to begin tracking the
specified iommu_group and to register the specified dev as owned by
a VFIO bus driver. The driver provides an ops structure for callbacks
similar to a file operations structure:

        struct vfio_device_ops {
                int (*open)(void *device_data);
                void (*release)(void *device_data);
                ssize_t (*read)(void *device_data, char __user *buf,
                                size_t count, loff_t *ppos);
                ssize_t (*write)(void *device_data, const char __user *buf,
                                 size_t size, loff_t *ppos);
                long (*ioctl)(void *device_data, unsigned int cmd,
                              unsigned long arg);
                int (*mmap)(void *device_data, struct vm_area_struct *vma);
        };

Each function is passed the device_data that was originally registered
in the vfio_add_group_dev() call above. This allows the bus driver
an easy place to store its opaque, private data. The open/release
callbacks are issued when a new file descriptor is created for a
device (via VFIO_GROUP_GET_DEVICE_FD). The ioctl interface provides
a direct pass through for VFIO_DEVICE_* ioctls. The read/write/mmap
interfaces implement the device region access defined by the device's
own VFIO_DEVICE_GET_REGION_INFO ioctl.
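
As a sketch, a bus driver's probe path might register a device like
this (my_probe(), my_vfio_ops and the "data" pointer are hypothetical
names; the IOMMU and VFIO calls are the real interfaces):

        static int my_probe(struct device *dev, void *data)
        {
                struct iommu_group *group = iommu_group_get(dev);

                if (!group)
                        return -EINVAL; /* device is not in an IOMMU group */

                /* "data" comes back as device_data in the ops callbacks;
                 * drop the group reference with iommu_group_put() on error
                 * and after vfio_del_group_dev() at remove time */
                return vfio_add_group_dev(group, dev, &my_vfio_ops, data);
        }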


PPC64 sPAPR implementation note
-------------------------------------------------------------------------------

This implementation has some specifics:

1) Only one IOMMU group per container is supported, as an IOMMU group
represents the minimal entity for which isolation can be guaranteed,
and groups are allocated statically, one per Partitionable Endpoint (PE)
(a PE is often a PCI domain, but not always).

2) The hardware supports so-called DMA windows - the PCI address range
within which DMA transfers are allowed; any attempt to access address
space outside the window leads to isolation of the whole PE.

3) PPC64 guests are paravirtualized but not fully emulated. There is an
API to map/unmap pages for DMA, which normally maps 1..32 pages per call,
and currently there is no way to reduce the number of calls. In order to
make things faster, the map/unmap handling has been implemented in real
mode, which provides excellent performance but has limitations such as
the inability to do locked page accounting in real time.

4) According to the sPAPR specification, a Partitionable Endpoint (PE) is
an I/O subtree that can be treated as a unit for the purposes of
partitioning and error recovery. A PE may be a single or multi-function
IOA (IO Adapter), a function of a multi-function IOA, or multiple IOAs
(possibly including switch and bridge structures above the multiple IOAs).
PPC64 guests detect PCI errors and recover from them via EEH RTAS
services, which work on the basis of additional ioctl commands.

So 4 additional ioctls have been added:

        VFIO_IOMMU_SPAPR_TCE_GET_INFO - returns the size and the start
                of the DMA window on the PCI bus.

        VFIO_IOMMU_ENABLE - enables the container. The locked pages
                accounting is done at this point. This lets the user first
                know what the DMA window is and adjust the rlimit before
                doing any real work.

        VFIO_IOMMU_DISABLE - disables the container.

        VFIO_EEH_PE_OP - provides an API for EEH setup, error detection
                and recovery.

The code flow from the example above should be slightly changed:

        struct vfio_eeh_pe_op pe_op = { .argsz = sizeof(pe_op), .flags = 0 };

        .....

        /* Add the group to the container */
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);

        /* Enable the IOMMU model we want */
        ioctl(container, VFIO_SET_IOMMU, VFIO_SPAPR_TCE_IOMMU);

        /* Get additional sPAPR IOMMU info */
        struct vfio_iommu_spapr_tce_info spapr_iommu_info =
                                { .argsz = sizeof(spapr_iommu_info) };
        ioctl(container, VFIO_IOMMU_SPAPR_TCE_GET_INFO, &spapr_iommu_info);

        if (ioctl(container, VFIO_IOMMU_ENABLE))
                /* Cannot enable container, may be a low rlimit */

        /* Allocate some space and setup a DMA mapping */
        dma_map.vaddr = mmap(0, 1024 * 1024, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, 0, 0);

        dma_map.size = 1024 * 1024;
        dma_map.iova = 0; /* 1MB starting at 0x0 from device view */
        dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;

        /* Check here that .iova/.size are within the DMA window from
         * spapr_iommu_info */
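        /* For example (one possible check; the fields are from
         * struct vfio_iommu_spapr_tce_info): */
        if (dma_map.iova < spapr_iommu_info.dma32_window_start ||
            dma_map.iova + dma_map.size >
            spapr_iommu_info.dma32_window_start +
            spapr_iommu_info.dma32_window_size)
                /* Mapping is outside the DMA window; adjust or give up */
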
        ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);

        /* Get a file descriptor for the device */
        device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:06:0d.0");

        ....

        /* Gratuitous device reset and go... */
        ioctl(device, VFIO_DEVICE_RESET);

        /* Make sure EEH is supported */
        ioctl(container, VFIO_CHECK_EXTENSION, VFIO_EEH);

        /* Enable the EEH functionality on the device */
        pe_op.op = VFIO_EEH_PE_ENABLE;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* It is suggested that you create an additional data struct to
         * represent the PE, and put the child devices belonging to the
         * same IOMMU group into that PE instance for later reference. */

        /* Check the PE's state and make sure it's in a functional state */
        pe_op.op = VFIO_EEH_PE_GET_STATE;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* Save the device state using pci_save_state().
         * EEH should be enabled on the specified device. */

        ....

        /* Inject an EEH error, which is expected to be caused by a
         * 32-bit config load. */
        pe_op.op = VFIO_EEH_PE_INJECT_ERR;
        pe_op.err.type = EEH_ERR_TYPE_32;
        pe_op.err.func = EEH_ERR_FUNC_LD_CFG_ADDR;
        pe_op.err.addr = 0ul;
        pe_op.err.mask = 0ul;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        ....

        /* When 0xFF's are returned from reading the PCI config space or
         * IO BARs of the PCI device, check the PE's state to see whether
         * it has been frozen. */
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* Wait for pending PCI transactions to complete and don't
         * produce any more PCI traffic from/to the affected PE until
         * recovery is finished. */

        /* Enable IO for the affected PE and collect logs. Usually, the
         * standard part of PCI config space and the AER registers are
         * dumped as logs for further analysis. */
        pe_op.op = VFIO_EEH_PE_UNFREEZE_IO;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /*
         * Issue a PE reset: hot or fundamental reset. Usually, hot reset
         * is enough. However, the firmware of some PCI adapters would
         * require a fundamental reset.
         */
        pe_op.op = VFIO_EEH_PE_RESET_HOT;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);
        pe_op.op = VFIO_EEH_PE_RESET_DEACTIVATE;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* Configure the PCI bridges for the affected PE */
        pe_op.op = VFIO_EEH_PE_CONFIGURE;
        ioctl(container, VFIO_EEH_PE_OP, &pe_op);

        /* Restore the state we saved at initialization time.
         * pci_restore_state() is good enough as an example. */

        /* Hopefully, the error has been recovered successfully. Now you
         * can resume PCI traffic to/from the affected PE. */

        ....

-------------------------------------------------------------------------------

[1] VFIO was originally an acronym for "Virtual Function I/O" in its
initial implementation by Tom Lyon while at Cisco. We've since
outgrown the acronym, but it's catchy.

[2] "safe" also depends upon a device being "well behaved". It's
possible for multi-function devices to have backdoors between
functions and even for single function devices to have alternative
access to things like PCI config space through MMIO registers. To
guard against the former we can include additional precautions in the
IOMMU driver to group multi-function PCI devices together
(iommu=group_mf). The latter we can't prevent, but the IOMMU should
still provide isolation. For PCI, SR-IOV Virtual Functions are the
best indicator of "well behaved", as these are designed for
virtualization usage models.

[3] As always there are trade-offs to virtual machine device
assignment that are beyond the scope of VFIO. It's expected that
future IOMMU technologies will reduce some, but maybe not all, of
these trade-offs.

[4] In this case the device is below a PCI bridge, so transactions
from either function of the device are indistinguishable to the IOMMU:

-[0000:00]-+-1e.0-[06]--+-0d.0
                        \-0d.1

00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)