drm/radeon: GPU virtual memory support v22

Virtual address spaces are per drm client (opener of /dev/drm).
Clients are in charge of their virtual address space; they need to
map bos into it by calling the DRM_RADEON_GEM_VA ioctl.

The first 16M of the virtual address space is reserved by the kernel.
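
As an illustration only, a minimal userspace sketch of mapping a GEM
bo into the client's address space with this ioctl; the struct, flag
and ioctl names follow the radeon UAPI header (drm/radeon_drm.h), but
treat the include path and the helper itself as assumptions rather
than part of this patch:

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <drm/radeon_drm.h> /* assumed install path of the UAPI header */

    /* Map an existing GEM bo (gem_handle) at virtual address va inside
     * the address space owned by this drm file descriptor. */
    static int map_bo_va(int fd, uint32_t gem_handle, uint64_t va)
    {
        struct drm_radeon_gem_va args;

        memset(&args, 0, sizeof(args));
        args.handle = gem_handle;
        args.operation = RADEON_VA_MAP;
        args.flags = RADEON_VM_PAGE_READABLE |
                     RADEON_VM_PAGE_WRITEABLE |
                     RADEON_VM_PAGE_SNOOPED;
        args.offset = va; /* must lie above the kernel reserved first 16M */

        if (ioctl(fd, DRM_IOCTL_RADEON_GEM_VA, &args))
            return -1;
        /* the kernel reports the outcome back through args.operation */
        return args.operation == RADEON_VA_RESULT_OK ? 0 : -1;
    }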

Once we use a 2-level page table we should be able to have a small
vram memory footprint for each pt (there would be one pt for all of
gart, one for all of vram and then one first level for each virtual
address space).

The plan includes using the sub allocator for a common vm page table
area and using memcpy to copy vm page tables in & out, or using
a gart object and copying things in & out using dma.

v2: agd5f fixes:
- Add vram base offset for vram pages.  The GPU physical address of a
vram page is FB_OFFSET + page offset.  FB_OFFSET is 0 on discrete
cards and the physical bus address of the stolen memory on
integrated chips.
- VM_CONTEXT1_PROTECTION_FAULT_DEFAULT_ADDR covers all vmids >= 1

v3: agd5f:
- integrate with the semaphore/multi-ring stuff

v4:
- rebase on top of ttm dma & multi-ring stuff
- userspace is now in charge of the address space
- no more specific cs vm ioctl; instead the cs ioctl has a new
  chunk

v5:
- properly handle mem == NULL case from move_notify callback
- fix the vm cleanup path

v6:
- fix update of page table to only happen on valid mem placement

v7:
- add tlb flush for each vm context
- add flags to define mapping property (readable, writeable, snooped)
- make ring id implicit from ib->fence->ring; it is up to each asic
  callback to then do ring specific scheduling in its vm ib scheduling
  function

v8:
- add query for ib limit and kernel reserved virtual space (see the
  sketch after this list)
- rename vm->size to max_pfn (maximum number of pages)
- update gem_va ioctl to also allow unmap operation
- bump kernel version to allow userspace to query for vm support
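
A rough sketch of those queries from userspace, assuming the
RADEON_INFO_VA_START and RADEON_INFO_IB_VM_MAX_SIZE requests and the
drm_radeon_info layout from the radeon UAPI header (the helper name
is made up, and the same includes as the sketch above apply):

    /* Query a single 32-bit value through the radeon info ioctl. */
    static int radeon_info_u32(int fd, uint32_t request, uint32_t *out)
    {
        struct drm_radeon_info info;

        memset(&info, 0, sizeof(info));
        info.request = request;
        info.value = (uintptr_t)out; /* kernel writes the result here */

        return ioctl(fd, DRM_IOCTL_RADEON_INFO, &info);
    }

    /* usage: where does the usable va range start, and how large may
     * an ib submitted in vm mode be? */
    uint32_t va_start, ib_vm_max;
    radeon_info_u32(fd, RADEON_INFO_VA_START, &va_start);
    radeon_info_u32(fd, RADEON_INFO_IB_VM_MAX_SIZE, &ib_vm_max);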

v9:
- rebuild page table only on bind and incrementally, depending
  on which bos referenced by the cs have been moved
- allow virtual address space to grow
- use sa allocator for vram page table
- return invalid when querying vm limit on non-cayman GPUs
- dump vm fault register on lockup

v10: agd5f:
- Move the vm schedule_ib callback to a standalone function, remove
  the callback and use the existing ib_execute callback for VM IBs.

v11:
- rebase on top of latest Linus

v12: agd5f:
- remove spurious backslash
- set IB vm_id to 0 in radeon_ib_get()

v13: agd5f:
- fix handling of RADEON_CHUNK_ID_FLAGS

v14:
- fix va destruction
- fix suspend resume
- forbid a bo from having several different vas in the same vm

v15:
- rebase

v16:
- cleanup left over of vm init/fini

v17: agd5f:
- cs checker

v18: agd5f:
- rework the CS ioctl to better support multiple rings and
VM.  Rather than adding a new chunk id for VM, just re-use the
IB chunk id and add a new flag for VM mode.  Also define additional
dwords for the flags chunk id to define what ring we want to use
(gfx, compute, uvd, etc.) and the priority.
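
For illustration, a sketch of how the flags chunk of a VM submission
might be filled in, assuming the RADEON_CHUNK_ID_FLAGS, RADEON_CS_USE_VM
and RADEON_CS_RING_GFX defines and the drm_radeon_cs_chunk layout from
the radeon UAPI header; the IB and reloc chunks and the drm_radeon_cs
wrapper passed to DRM_IOCTL_RADEON_CS are omitted:

    /* The flags chunk carries up to three dwords: flags, ring, priority. */
    uint32_t flags_data[3];
    struct drm_radeon_cs_chunk flags_chunk;

    flags_data[0] = RADEON_CS_USE_VM;   /* execute the IB through the client VM */
    flags_data[1] = RADEON_CS_RING_GFX; /* ring selection (gfx, compute, ...) */
    flags_data[2] = 0;                  /* priority, 0 = default */

    memset(&flags_chunk, 0, sizeof(flags_chunk));
    flags_chunk.chunk_id = RADEON_CHUNK_ID_FLAGS;
    flags_chunk.length_dw = 3;
    flags_chunk.chunk_data = (uintptr_t)flags_data;
    /* flags_chunk then goes into the chunk pointer array referenced by
     * the drm_radeon_cs struct, next to the RADEON_CHUNK_ID_IB chunk. */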

v19:
- fix cs fini in weird case of no ib
- semi-working flush fix for ni
- rebase on top of sa allocator changes

v20: agd5f:
- further CS ioctl cleanups from Christian's comments

v21: agd5f:
- integrate CS checker improvements

v22: agd5f:
- final cleanups for release, only allow VM CS on cayman

Signed-off-by: Jerome Glisse <jglisse@redhat.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>