drm/i915: Pipelined fencing [infrastructure]

With this change, every batchbuffer can use all available fence registers
(save those held by pinned and scanout objects, of course) without ever
stalling the GPU!

In theory, at least. The actual pipelined update of the register is
currently disabled due to some stability issues; however, the deferred
update alone is a significant win.

Based on a series of patches by Daniel Vetter.

The premise is that before every access to a buffer through the GTT we
have to declare whether we need a fence register or not. If the access
is by the GPU, a pipelined update to the register is made via the
ringbuffer, and we track the seqno of the last batch to access it. If
the access is by the CPU, we wait for the last GPU access to complete
and then update the register (either clearing it or setting it up for
the current buffer).
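
As a rough sketch of the calling convention (only the prototypes in the
hunk below are part of this header change; the call sites here are
illustrative, not the exact code of this patch):

    int ret;

    /* GPU access: pass the ring doing the access so the fence update
     * is emitted into the ringbuffer ahead of the batch;
     * reg->setup_seqno records the seqno of that pipelined write.
     */
    ret = i915_gem_object_get_fence(obj, ring, true);
    if (ret)
            return ret;

    /* CPU access through the GTT: no pipelining possible, so pass
     * NULL; the call waits for the last GPU access before rewriting
     * the register synchronously.
     */
    ret = i915_gem_object_get_fence(obj, NULL, true);
    if (ret)
            return ret;

    /* Giving up the register (e.g. when it is stolen for another
     * object) likewise waits for any outstanding GPU access before
     * clearing it.
     */
    ret = i915_gem_object_put_fence(obj);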

One advantage of being able to pipeline changes is that we can defer the
actual update of the fence register until we first need to access the
object through the GTT, i.e. we can eliminate the stall on set_tiling.
This is important as the userspace bo cache does not track the tiling
status of active buffers, which leads to frequent stalls on gen3 when
enabling tiling for an already bound buffer.
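
A minimal sketch of that deferral, using the new tiling_changed bit
(the surrounding set_tiling and GTT-access paths are paraphrased here,
not taken from this patch):

    /* set_tiling no longer stalls on an active buffer: record the new
     * mode and note that the fence state is now stale.
     */
    obj->tiling_mode = args->tiling_mode;
    obj->tiling_changed = 1;

    /* ...later, on the first access through the GTT (fault or
     * execbuffer), the stale fence is (re)established, pipelined when
     * a ring is available, synchronously (NULL) otherwise:
     */
    if (obj->tiling_changed) {
            ret = i915_gem_object_get_fence(obj, pipelined, true);
            if (ret)
                    return ret;
            obj->tiling_changed = 0;
    }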

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 7b37c19..af9ff40 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -126,6 +126,7 @@
 struct drm_i915_fence_reg {
 	struct list_head lru_list;
 	struct drm_i915_gem_object *obj;
+	uint32_t setup_seqno;
 };
 
 struct sdvo_device_mapping {
@@ -752,6 +753,7 @@
 	 * Current tiling mode for the object.
 	 */
 	unsigned int tiling_mode : 2;
+	unsigned int tiling_changed : 1;
 
 	/** How many users have pinned this object in GTT space. The following
 	 * users can each hold at most one reference: pwrite/pread, pin_ioctl
@@ -1121,10 +1123,10 @@
 	return ring->outstanding_lazy_request = dev_priv->next_seqno;
 }
 
-int __must_check i915_gem_object_get_fence_reg(struct drm_i915_gem_object *obj,
-					       bool interruptible);
-int __must_check i915_gem_object_put_fence_reg(struct drm_i915_gem_object *obj,
-					       bool interruptible);
+int __must_check i915_gem_object_get_fence(struct drm_i915_gem_object *obj,
+					   struct intel_ring_buffer *pipelined,
+					   bool interruptible);
+int __must_check i915_gem_object_put_fence(struct drm_i915_gem_object *obj);
 
 void i915_gem_retire_requests(struct drm_device *dev);
 void i915_gem_reset(struct drm_device *dev);