drm/i915: Do not unmap object unless no other VMAs reference it
When using an IOMMU, GEM objects are mapped by their DMA address, as the
physical address is unknown. This relies on the underlying IOMMU driver
(intel_iommu.c) to map and unmap the physical pages properly.
The current code will tell the IOMMU to unmap the GEM BO's pages on the
destruction of the first VMA that "maps" that BO. This is clearly wrong
as there may be other VMAs "mapping" that BO (using flink). The scanout
is one such example.
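
To illustrate the broken lifetime rule, here is a minimal, self-contained C
model (all names such as toy_bo, toy_dma_unmap and toy_vma_unbind_buggy are
hypothetical, not the driver's actual types or functions): the object's pages
have a single shared DMA (IOMMU) mapping, yet the teardown path drops that
mapping as soon as the first VMA goes away.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical toy model: one GEM-like object with a single shared
 * DMA (IOMMU) mapping and a count of VMAs that reference it. */
struct toy_bo {
	int vma_count;    /* VMAs currently bound to this object */
	bool dma_mapped;  /* are the pages mapped through the IOMMU? */
};

static void toy_dma_unmap(struct toy_bo *bo)
{
	bo->dma_mapped = false;
	printf("IOMMU mapping torn down\n");
}

/* Broken rule: the shared DMA mapping is dropped whenever *any* VMA is
 * destroyed, even though other VMAs (e.g. a flink'ed scanout mapping)
 * still reference the object's pages. */
static void toy_vma_unbind_buggy(struct toy_bo *bo)
{
	bo->vma_count--;
	toy_dma_unmap(bo);  /* BUG: other VMAs may still remain */
}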
The patch fixes this issue by only unmapping the DMA mappings when there are
no more VMAs mapping that object. This is equivalent to the point at which an
object is considered unbound, as can be seen in the code. When the first VMA
again becomes bound, we will remap.
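
Continuing the hypothetical toy model above, the fixed rule looks roughly
like this (a sketch only; the driver checks whether any VMAs remain on the
object rather than keeping a counter, but the invariant is the same): map
when the first VMA binds the object, and only unmap once the last VMA is
gone, i.e. once the object is unbound.

static void toy_vma_bind(struct toy_bo *bo)
{
	if (bo->vma_count++ == 0)
		bo->dma_mapped = true;  /* (re)map on the first bind */
}

static void toy_vma_unbind_fixed(struct toy_bo *bo)
{
	if (--bo->vma_count == 0)
		toy_dma_unmap(bo);      /* object is now fully unbound */
}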
An alternate solution would be to move the DMA mapping to object creation
and destruction. I am not sure if this is considered an unfriendly thing to
do.
Some notes to backporters trying to backport full PPGTT:
The bug can never be hit without enabling the IOMMU. The existing code
will also do the right thing when the object is shared via dmabuf. The
failure should be demonstrable with flink. When not using
intel_iommu_strict, current workloads are likely (likely, as defined by: off
the top of my head) to *not* hit this bug, since we often tear down all VMAs
for an object shared across multiple VMs and finish access to that object
before the first DMA unmapping. intel_iommu_strict with flink'ed buffers is
likely to hit this issue.
Signed-off-by: Armin Reese <armin.c.reese@intel.com>
[danvet: Add the excellent commit message provided by Ben.]
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>