/*
 * Copyright © 2014 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *    Ben Widawsky <ben@bwidawsk.net>
 *    Michel Thierry <michel.thierry@intel.com>
 *    Thomas Daniel <thomas.daniel@intel.com>
 *    Oscar Mateo <oscar.mateo@intel.com>
 *
 */

/**
 * DOC: Logical Rings, Logical Ring Contexts and Execlists
 *
 * Motivation:
 * GEN8 brings an expansion of the HW contexts: "Logical Ring Contexts".
 * These expanded contexts enable a number of new abilities, especially
 * "Execlists" (also implemented in this file).
 *
 * One of the main differences with the legacy HW contexts is that logical
 * ring contexts incorporate many more things in the context's state, like
 * PDPs or ringbuffer control registers:
 *
 * The reason why PDPs are included in the context is straightforward: as
 * PPGTTs (per-process GTTs) are actually per-context, having the PDPs
 * contained there means you don't need to do a ppgtt->switch_mm yourself;
 * instead, the GPU will do it for you on the context switch.
 *
 * But, what about the ringbuffer control registers (head, tail, etc..)?
 * Shouldn't we just need a set of those per engine command streamer? This is
 * where the name "Logical Rings" starts to make sense: by virtualizing the
 * rings, the engine cs shifts to a new "ring buffer" with every context
 * switch. When you want to submit a workload to the GPU you: A) choose your
 * context, B) find its appropriate virtualized ring, C) write commands to it
 * and then, finally, D) tell the GPU to switch to that context.
 *
 * Instead of the legacy MI_SET_CONTEXT, the way you tell the GPU to switch
 * to a context is via a context execution list, ergo "Execlists".
 *
 * LRC implementation:
 * Regarding the creation of contexts, we have:
 *
 * - One global default context.
 * - One local default context for each opened fd.
 * - One local extra context for each context create ioctl call.
 *
 * Now that ringbuffers belong per-context (and not per-engine, like before)
 * and that contexts are uniquely tied to a given engine (and not reusable,
 * like before) we need:
 *
 * - One ringbuffer per-engine inside each context.
 * - One backing object per-engine inside each context.
 *
 * The global default context starts its life with these new objects fully
 * allocated and populated. The local default context for each opened fd is
 * more complex, because we don't know at creation time which engine is going
 * to use it. To handle this, we have implemented a deferred creation of LR
 * contexts:
 *
 * The local context starts its life as a hollow or blank holder, that only
 * gets populated for a given engine once we receive an execbuffer. If later
 * on we receive another execbuffer ioctl for the same context but a different
 * engine, we allocate/populate a new ringbuffer and context backing object and
 * so on.
 *
 * Finally, regarding local contexts created using the ioctl call: as they are
 * only allowed with the render ring, we can allocate & populate them right
 * away (no need to defer anything, at least for now).
 *
 * Execlists implementation:
 * Execlists are the new method by which, on gen8+ hardware, workloads are
 * submitted for execution (as opposed to the legacy, ringbuffer-based, method).
 * This method works as follows:
 *
 * When a request is committed, its commands (the BB start and any leading or
 * trailing commands, like the seqno breadcrumbs) are placed in the ringbuffer
 * for the appropriate context. The tail pointer in the hardware context is not
 * updated at this time, but instead, kept by the driver in the ringbuffer
 * structure. A structure representing this request is added to a request queue
 * for the appropriate engine: this structure contains a copy of the context's
 * tail after the request was written to the ring buffer and a pointer to the
 * context itself.
 *
 * If the engine's request queue was empty before the request was added, the
 * queue is processed immediately. Otherwise the queue will be processed during
 * a context switch interrupt. In any case, elements on the queue will get sent
 * (in pairs) to the GPU's ExecLists Submit Port (ELSP, for short) with a
 * globally unique 20-bit submission ID.
 *
 * When execution of a request completes, the GPU updates the context status
 * buffer with a context complete event and generates a context switch interrupt.
 * During the interrupt handling, the driver examines the events in the buffer:
 * for each context complete event, if the announced ID matches that on the head
 * of the request queue, then that request is retired and removed from the queue.
 *
 * After processing, if any requests were retired and the queue is not empty
 * then a new execution list can be submitted. The two requests at the front of
 * the queue are next to be submitted but since a context may not occur twice in
 * an execution list, if subsequent requests have the same ID as the first then
 * the two requests must be combined. This is done simply by discarding requests
 * at the head of the queue until either only one request is left (in which case
 * we use a NULL second context) or the first two requests have unique IDs.
 *
 * By always executing the first two requests in the queue the driver ensures
 * that the GPU is kept as busy as possible. In the case where a single context
 * completes but a second context is still executing, the request for this second
 * context will be at the head of the queue when we remove the first one. This
 * request will then be resubmitted along with a new request for a different context,
 * which will cause the hardware to continue executing the second request and queue
 * the new request (the GPU detects the condition of a context getting preempted
 * with the same context and optimizes the context switch flow by not doing
 * preemption, but just sampling the new tail pointer).
 *
 */
#include <linux/interrupt.h>

#include <drm/drmP.h>
#include <drm/i915_drm.h>
#include "i915_drv.h"
#include "intel_mocs.h"

#define GEN9_LR_CONTEXT_RENDER_SIZE (22 * PAGE_SIZE)
#define GEN8_LR_CONTEXT_RENDER_SIZE (20 * PAGE_SIZE)
#define GEN8_LR_CONTEXT_OTHER_SIZE (2 * PAGE_SIZE)

#define RING_EXECLIST_QFULL (1 << 0x2)
#define RING_EXECLIST1_VALID (1 << 0x3)
#define RING_EXECLIST0_VALID (1 << 0x4)
#define RING_EXECLIST_ACTIVE_STATUS (3 << 0xE)
#define RING_EXECLIST1_ACTIVE (1 << 0x11)
#define RING_EXECLIST0_ACTIVE (1 << 0x12)

#define GEN8_CTX_STATUS_IDLE_ACTIVE (1 << 0)
#define GEN8_CTX_STATUS_PREEMPTED (1 << 1)
#define GEN8_CTX_STATUS_ELEMENT_SWITCH (1 << 2)
#define GEN8_CTX_STATUS_ACTIVE_IDLE (1 << 3)
#define GEN8_CTX_STATUS_COMPLETE (1 << 4)
#define GEN8_CTX_STATUS_LITE_RESTORE (1 << 15)

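/*
 * Dword offsets into the register state area of the logical ring context
 * image. These index the reg_state[]/lrc_reg_state[] arrays used below when
 * the context image is populated and when the ring tail and PDP entries are
 * updated at submission time.
 */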
#define CTX_LRI_HEADER_0 0x01
#define CTX_CONTEXT_CONTROL 0x02
#define CTX_RING_HEAD 0x04
#define CTX_RING_TAIL 0x06
#define CTX_RING_BUFFER_START 0x08
#define CTX_RING_BUFFER_CONTROL 0x0a
#define CTX_BB_HEAD_U 0x0c
#define CTX_BB_HEAD_L 0x0e
#define CTX_BB_STATE 0x10
#define CTX_SECOND_BB_HEAD_U 0x12
#define CTX_SECOND_BB_HEAD_L 0x14
#define CTX_SECOND_BB_STATE 0x16
#define CTX_BB_PER_CTX_PTR 0x18
#define CTX_RCS_INDIRECT_CTX 0x1a
#define CTX_RCS_INDIRECT_CTX_OFFSET 0x1c
#define CTX_LRI_HEADER_1 0x21
#define CTX_CTX_TIMESTAMP 0x22
#define CTX_PDP3_UDW 0x24
#define CTX_PDP3_LDW 0x26
#define CTX_PDP2_UDW 0x28
#define CTX_PDP2_LDW 0x2a
#define CTX_PDP1_UDW 0x2c
#define CTX_PDP1_LDW 0x2e
#define CTX_PDP0_UDW 0x30
#define CTX_PDP0_LDW 0x32
#define CTX_LRI_HEADER_2 0x41
#define CTX_R_PWR_CLK_STATE 0x42
#define CTX_GPGPU_CSR_BASE_ADDRESS 0x44

#define GEN8_CTX_VALID (1<<0)
#define GEN8_CTX_FORCE_PD_RESTORE (1<<1)
#define GEN8_CTX_FORCE_RESTORE (1<<2)
#define GEN8_CTX_L3LLC_COHERENT (1<<5)
#define GEN8_CTX_PRIVILEGE (1<<8)

#define ASSIGN_CTX_REG(reg_state, pos, reg, val) do { \
        (reg_state)[(pos)+0] = i915_mmio_reg_offset(reg); \
        (reg_state)[(pos)+1] = (val); \
} while (0)

#define ASSIGN_CTX_PDP(ppgtt, reg_state, n) do { \
        const u64 _addr = i915_page_dir_dma_addr((ppgtt), (n)); \
        reg_state[CTX_PDP ## n ## _UDW+1] = upper_32_bits(_addr); \
        reg_state[CTX_PDP ## n ## _LDW+1] = lower_32_bits(_addr); \
} while (0)

#define ASSIGN_CTX_PML4(ppgtt, reg_state) do { \
        reg_state[CTX_PDP0_UDW + 1] = upper_32_bits(px_dma(&ppgtt->pml4)); \
        reg_state[CTX_PDP0_LDW + 1] = lower_32_bits(px_dma(&ppgtt->pml4)); \
} while (0)

enum {
        FAULT_AND_HANG = 0,
        FAULT_AND_HALT, /* Debug only */
        FAULT_AND_STREAM,
        FAULT_AND_CONTINUE /* Unsupported */
};
#define GEN8_CTX_ID_SHIFT 32
#define GEN8_CTX_ID_WIDTH 21
#define GEN8_CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT 0x17
#define GEN9_CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT 0x26

/* Typical size of the average request (2 pipecontrols and a MI_BB) */
#define EXECLISTS_REQUEST_SIZE 64 /* bytes */

static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
                                            struct intel_engine_cs *engine);
static int intel_lr_context_pin(struct i915_gem_context *ctx,
                                struct intel_engine_cs *engine);

/**
 * intel_sanitize_enable_execlists() - sanitize i915.enable_execlists
 * @dev_priv: i915 device private
 * @enable_execlists: value of i915.enable_execlists module parameter.
 *
 * Only certain platforms support Execlists (the prerequisites being
 * support for Logical Ring Contexts and Aliasing PPGTT or better).
 *
 * Return: 1 if Execlists is supported and has to be enabled.
 */
int intel_sanitize_enable_execlists(struct drm_i915_private *dev_priv, int enable_execlists)
{
        /* On platforms with execlist available, vGPU will only
         * support execlist mode, no ring buffer mode.
         */
        if (HAS_LOGICAL_RING_CONTEXTS(dev_priv) && intel_vgpu_active(dev_priv))
                return 1;

        if (INTEL_GEN(dev_priv) >= 9)
                return 1;

        if (enable_execlists == 0)
                return 0;

        if (HAS_LOGICAL_RING_CONTEXTS(dev_priv) &&
            USES_PPGTT(dev_priv) &&
            i915.use_mmio_flip >= 0)
                return 1;

        return 0;
}

static void
logical_ring_init_platform_invariants(struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv = engine->i915;

        if (IS_GEN8(dev_priv) || IS_GEN9(dev_priv))
                engine->idle_lite_restore_wa = ~0;

        engine->disable_lite_restore_wa = (IS_SKL_REVID(dev_priv, 0, SKL_REVID_B0) ||
                                           IS_BXT_REVID(dev_priv, 0, BXT_REVID_A1)) &&
                                          (engine->id == VCS || engine->id == VCS2);

        engine->ctx_desc_template = GEN8_CTX_VALID;
        if (IS_GEN8(dev_priv))
                engine->ctx_desc_template |= GEN8_CTX_L3LLC_COHERENT;
        engine->ctx_desc_template |= GEN8_CTX_PRIVILEGE;

        /* TODO: WaDisableLiteRestore when we start using semaphore
         * signalling between Command Streamers */
        /* ring->ctx_desc_template |= GEN8_CTX_FORCE_RESTORE; */

        /* WaEnableForceRestoreInCtxtDescForVCS:skl */
        /* WaEnableForceRestoreInCtxtDescForVCS:bxt */
        if (engine->disable_lite_restore_wa)
                engine->ctx_desc_template |= GEN8_CTX_FORCE_RESTORE;
}

/**
 * intel_lr_context_descriptor_update() - calculate & cache the descriptor
 * for a pinned context
 * @ctx: Context to work on
 * @engine: Engine the descriptor will be used with
 *
 * The context descriptor encodes various attributes of a context,
 * including its GTT address and some flags. Because it's fairly
 * expensive to calculate, we'll just do it once and cache the result,
 * which remains valid until the context is unpinned.
 *
 * This is what a descriptor looks like, from LSB to MSB::
 *
 *      bits 0-11:  flags, GEN8_CTX_* (cached in ctx_desc_template)
 *      bits 12-31: LRCA, GTT address of (the HWSP of) this context
 *      bits 32-52: ctx ID, a globally unique tag
 *      bits 53-54: mbz, reserved for use by hardware
 *      bits 55-63: group ID, currently unused and set to 0
 */
static void
intel_lr_context_descriptor_update(struct i915_gem_context *ctx,
                                   struct intel_engine_cs *engine)
{
        struct intel_context *ce = &ctx->engine[engine->id];
        u64 desc;

        BUILD_BUG_ON(MAX_CONTEXT_HW_ID > (1<<GEN8_CTX_ID_WIDTH));

        desc = ctx->desc_template;                      /* bits 3-4 */
        desc |= engine->ctx_desc_template;              /* bits 0-11 */
        desc |= ce->lrc_vma->node.start + LRC_PPHWSP_PN * PAGE_SIZE;
                                                        /* bits 12-31 */
        desc |= (u64)ctx->hw_id << GEN8_CTX_ID_SHIFT;   /* bits 32-52 */

        ce->lrc_desc = desc;
}

uint64_t intel_lr_context_descriptor(struct i915_gem_context *ctx,
                                     struct intel_engine_cs *engine)
{
        return ctx->engine[engine->id].lrc_desc;
}

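/*
 * Write a pair of context descriptors to the ExecLists Submit Port. All four
 * dwords are always written: the second element of an execlist may be empty,
 * which is signalled with a zero descriptor in desc[1].
 */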
static void execlists_elsp_write(struct drm_i915_gem_request *rq0,
                                 struct drm_i915_gem_request *rq1)
{
        struct intel_engine_cs *engine = rq0->engine;
        struct drm_i915_private *dev_priv = rq0->i915;
        uint64_t desc[2];

        if (rq1) {
                desc[1] = intel_lr_context_descriptor(rq1->ctx, rq1->engine);
                rq1->elsp_submitted++;
        } else {
                desc[1] = 0;
        }

        desc[0] = intel_lr_context_descriptor(rq0->ctx, rq0->engine);
        rq0->elsp_submitted++;

        /* You must always write both descriptors in the order below. */
        I915_WRITE_FW(RING_ELSP(engine), upper_32_bits(desc[1]));
        I915_WRITE_FW(RING_ELSP(engine), lower_32_bits(desc[1]));

        I915_WRITE_FW(RING_ELSP(engine), upper_32_bits(desc[0]));
        /* The context is automatically loaded after the following */
        I915_WRITE_FW(RING_ELSP(engine), lower_32_bits(desc[0]));

        /* ELSP is a wo register, use another nearby reg for posting */
        POSTING_READ_FW(RING_EXECLIST_STATUS_LO(engine));
}

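/* Write the PDP registers for a (non-48bit) PPGTT into the context image. */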
static void
execlists_update_context_pdps(struct i915_hw_ppgtt *ppgtt, u32 *reg_state)
{
        ASSIGN_CTX_PDP(ppgtt, reg_state, 3);
        ASSIGN_CTX_PDP(ppgtt, reg_state, 2);
        ASSIGN_CTX_PDP(ppgtt, reg_state, 1);
        ASSIGN_CTX_PDP(ppgtt, reg_state, 0);
}

static void execlists_update_context(struct drm_i915_gem_request *rq)
{
        struct intel_engine_cs *engine = rq->engine;
        struct i915_hw_ppgtt *ppgtt = rq->ctx->ppgtt;
        uint32_t *reg_state = rq->ctx->engine[engine->id].lrc_reg_state;

        reg_state[CTX_RING_TAIL+1] = intel_ring_offset(rq->ring, rq->tail);

        /* True 32b PPGTT with dynamic page allocation: update PDP
         * registers and point the unallocated PDPs to scratch page.
         * PML4 is allocated during ppgtt init, so this is not needed
         * in 48-bit mode.
         */
        if (ppgtt && !USES_FULL_48BIT_PPGTT(ppgtt->base.dev))
                execlists_update_context_pdps(ppgtt, reg_state);
}

static void execlists_elsp_submit_contexts(struct drm_i915_gem_request *rq0,
                                           struct drm_i915_gem_request *rq1)
{
        struct drm_i915_private *dev_priv = rq0->i915;
        unsigned int fw_domains = rq0->engine->fw_domains;

        execlists_update_context(rq0);

        if (rq1)
                execlists_update_context(rq1);

        spin_lock_irq(&dev_priv->uncore.lock);
        intel_uncore_forcewake_get__locked(dev_priv, fw_domains);

        execlists_elsp_write(rq0, rq1);

        intel_uncore_forcewake_put__locked(dev_priv, fw_domains);
        spin_unlock_irq(&dev_priv->uncore.lock);
}

static inline void execlists_context_status_change(
                struct drm_i915_gem_request *rq,
                unsigned long status)
{
        /*
         * Currently only used when GVT-g is enabled. When GVT-g is disabled,
         * the compiler should eliminate this function as dead-code.
         */
        if (!IS_ENABLED(CONFIG_DRM_I915_GVT))
                return;

        atomic_notifier_call_chain(&rq->ctx->status_notifier, status, rq);
}

static void execlists_unqueue(struct intel_engine_cs *engine)
{
        struct drm_i915_gem_request *req0 = NULL, *req1 = NULL;
        struct drm_i915_gem_request *cursor, *tmp;

        assert_spin_locked(&engine->execlist_lock);

        /*
         * If irqs are not active generate a warning as batches that finish
         * without the irqs may get lost and a GPU Hang may occur.
         */
        WARN_ON(!intel_irqs_enabled(engine->i915));

        /* Try to read in pairs */
        list_for_each_entry_safe(cursor, tmp, &engine->execlist_queue,
                                 execlist_link) {
                if (!req0) {
                        req0 = cursor;
                } else if (req0->ctx == cursor->ctx) {
                        /* Same ctx: ignore first request, as second request
                         * will update tail past first request's workload */
                        cursor->elsp_submitted = req0->elsp_submitted;
                        list_del(&req0->execlist_link);
                        i915_gem_request_put(req0);
                        req0 = cursor;
                } else {
                        if (IS_ENABLED(CONFIG_DRM_I915_GVT)) {
                                /*
                                 * req0 (after merged) ctx requires single
                                 * submission, stop picking
                                 */
                                if (req0->ctx->execlists_force_single_submission)
                                        break;
                                /*
                                 * req0 ctx doesn't require single submission,
                                 * but next req ctx requires, stop picking
                                 */
                                if (cursor->ctx->execlists_force_single_submission)
                                        break;
                        }
                        req1 = cursor;
                        WARN_ON(req1->elsp_submitted);
                        break;
                }
        }

        if (unlikely(!req0))
                return;

        execlists_context_status_change(req0, INTEL_CONTEXT_SCHEDULE_IN);

        if (req1)
                execlists_context_status_change(req1,
                                                INTEL_CONTEXT_SCHEDULE_IN);

        if (req0->elsp_submitted & engine->idle_lite_restore_wa) {
                /*
                 * WaIdleLiteRestore: make sure we never cause a lite restore
                 * with HEAD==TAIL.
                 *
                 * Apply the wa NOOPS to prevent ring:HEAD == req:TAIL as we
                 * resubmit the request. See gen8_emit_request() for where we
                 * prepare the padding after the end of the request.
                 */
                req0->tail += 8;
                req0->tail &= req0->ring->size - 1;
        }

        execlists_elsp_submit_contexts(req0, req1);
}

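/*
 * Check whether the context-status event for @ctx_id retires the request at
 * the head of the execlist queue. Returns 1 if the head request was removed,
 * 0 otherwise (mismatch, or the request still has an outstanding ELSP
 * submission from a lite restore).
 */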
static unsigned int
execlists_check_remove_request(struct intel_engine_cs *engine, u32 ctx_id)
{
        struct drm_i915_gem_request *head_req;

        assert_spin_locked(&engine->execlist_lock);

        head_req = list_first_entry_or_null(&engine->execlist_queue,
                                            struct drm_i915_gem_request,
                                            execlist_link);

        if (WARN_ON(!head_req || (head_req->ctx_hw_id != ctx_id)))
                return 0;

        WARN(head_req->elsp_submitted == 0, "Never submitted head request\n");

        if (--head_req->elsp_submitted > 0)
                return 0;

        execlists_context_status_change(head_req, INTEL_CONTEXT_SCHEDULE_OUT);

        list_del(&head_req->execlist_link);
        i915_gem_request_put(head_req);

        return 1;
}

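/*
 * Read one context status buffer entry. Returns the status dword (or 0 for
 * an idle-to-active event, which never retires a request) and stores the
 * corresponding context ID through @context_id.
 */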
static u32
get_context_status(struct intel_engine_cs *engine, unsigned int read_pointer,
                   u32 *context_id)
{
        struct drm_i915_private *dev_priv = engine->i915;
        u32 status;

        read_pointer %= GEN8_CSB_ENTRIES;

        status = I915_READ_FW(RING_CONTEXT_STATUS_BUF_LO(engine, read_pointer));

        if (status & GEN8_CTX_STATUS_IDLE_ACTIVE)
                return 0;

        *context_id = I915_READ_FW(RING_CONTEXT_STATUS_BUF_HI(engine,
                                                              read_pointer));

        return status;
}

/*
 * Check the unread Context Status Buffers and manage the submission of new
 * contexts to the ELSP accordingly.
 */
static void intel_lrc_irq_handler(unsigned long data)
{
        struct intel_engine_cs *engine = (struct intel_engine_cs *)data;
        struct drm_i915_private *dev_priv = engine->i915;
        u32 status_pointer;
        unsigned int read_pointer, write_pointer;
        u32 csb[GEN8_CSB_ENTRIES][2];
        unsigned int csb_read = 0, i;
        unsigned int submit_contexts = 0;

        intel_uncore_forcewake_get(dev_priv, engine->fw_domains);

        status_pointer = I915_READ_FW(RING_CONTEXT_STATUS_PTR(engine));

        read_pointer = engine->next_context_status_buffer;
        write_pointer = GEN8_CSB_WRITE_PTR(status_pointer);
        if (read_pointer > write_pointer)
                write_pointer += GEN8_CSB_ENTRIES;

        while (read_pointer < write_pointer) {
                if (WARN_ON_ONCE(csb_read == GEN8_CSB_ENTRIES))
                        break;
                csb[csb_read][0] = get_context_status(engine, ++read_pointer,
                                                      &csb[csb_read][1]);
                csb_read++;
        }

        engine->next_context_status_buffer = write_pointer % GEN8_CSB_ENTRIES;

        /* Update the read pointer to the old write pointer. Manual ringbuffer
         * management ftw </sarcasm> */
        I915_WRITE_FW(RING_CONTEXT_STATUS_PTR(engine),
                      _MASKED_FIELD(GEN8_CSB_READ_PTR_MASK,
                                    engine->next_context_status_buffer << 8));

        intel_uncore_forcewake_put(dev_priv, engine->fw_domains);

        spin_lock(&engine->execlist_lock);

        for (i = 0; i < csb_read; i++) {
                if (unlikely(csb[i][0] & GEN8_CTX_STATUS_PREEMPTED)) {
                        if (csb[i][0] & GEN8_CTX_STATUS_LITE_RESTORE) {
                                if (execlists_check_remove_request(engine, csb[i][1]))
                                        WARN(1, "Lite Restored request removed from queue\n");
                        } else
                                WARN(1, "Preemption without Lite Restore\n");
                }

                if (csb[i][0] & (GEN8_CTX_STATUS_ACTIVE_IDLE |
                                 GEN8_CTX_STATUS_ELEMENT_SWITCH))
                        submit_contexts +=
                                execlists_check_remove_request(engine, csb[i][1]);
        }

        if (submit_contexts) {
                if (!engine->disable_lite_restore_wa ||
                    (csb[i][0] & GEN8_CTX_STATUS_ACTIVE_IDLE))
                        execlists_unqueue(engine);
        }

        spin_unlock(&engine->execlist_lock);

        if (unlikely(submit_contexts > 2))
                DRM_ERROR("More than two context complete events?\n");
}

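/*
 * Queue a request for execution: add it to the engine's execlist queue and,
 * if the queue was empty, submit to the ELSP immediately. When the queue
 * already holds more than two elements and the last one belongs to the same
 * context, that older tail request is dropped since this request's tail
 * supersedes it.
 */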
static void execlists_submit_request(struct drm_i915_gem_request *request)
{
        struct intel_engine_cs *engine = request->engine;
        struct drm_i915_gem_request *cursor;
        int num_elements = 0;

        spin_lock_bh(&engine->execlist_lock);

        list_for_each_entry(cursor, &engine->execlist_queue, execlist_link)
                if (++num_elements > 2)
                        break;

        if (num_elements > 2) {
                struct drm_i915_gem_request *tail_req;

                tail_req = list_last_entry(&engine->execlist_queue,
                                           struct drm_i915_gem_request,
                                           execlist_link);

                if (request->ctx == tail_req->ctx) {
                        WARN(tail_req->elsp_submitted != 0,
                             "More than 2 already-submitted reqs queued\n");
                        list_del(&tail_req->execlist_link);
                        i915_gem_request_put(tail_req);
                }
        }

        i915_gem_request_get(request);
        list_add_tail(&request->execlist_link, &engine->execlist_queue);
        request->ctx_hw_id = request->ctx->hw_id;
        if (num_elements == 0)
                execlists_unqueue(engine);

        spin_unlock_bh(&engine->execlist_lock);
}

int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request)
{
        struct intel_engine_cs *engine = request->engine;
        struct intel_context *ce = &request->ctx->engine[engine->id];
        int ret;

        /* Flush enough space to reduce the likelihood of waiting after
         * we start building the request - in which case we will just
         * have to repeat work.
         */
        request->reserved_space += EXECLISTS_REQUEST_SIZE;

        if (!ce->state) {
                ret = execlists_context_deferred_alloc(request->ctx, engine);
                if (ret)
                        return ret;
        }

        request->ring = ce->ring;

        if (i915.enable_guc_submission) {
                /*
                 * Check that the GuC has space for the request before
                 * going any further, as the i915_add_request() call
                 * later on mustn't fail ...
                 */
                ret = i915_guc_wq_check_space(request);
                if (ret)
                        return ret;
        }

        ret = intel_lr_context_pin(request->ctx, engine);
        if (ret)
                return ret;

        ret = intel_ring_begin(request, 0);
        if (ret)
                goto err_unpin;

        if (!ce->initialised) {
                ret = engine->init_context(request);
                if (ret)
                        goto err_unpin;

                ce->initialised = true;
        }

        /* Note that after this point, we have committed to using
         * this request as it is being used to both track the
         * state of engine initialisation and liveness of the
         * golden renderstate above. Think twice before you try
         * to cancel/unwind this request now.
         */

        request->reserved_space -= EXECLISTS_REQUEST_SIZE;
        return 0;

err_unpin:
        intel_lr_context_unpin(request->ctx, engine);
        return ret;
}

/*
 * intel_logical_ring_advance() - advance the tail and prepare for submission
 * @request: Request to advance the logical ringbuffer of.
 *
 * The tail is updated in our logical ringbuffer struct, not in the actual context. What
 * really happens during submission is that the context and current tail will be placed
 * on a queue waiting for the ELSP to be ready to accept a new context submission. At that
 * point, the tail *inside* the context is updated and the ELSP written to.
 */
static int
intel_logical_ring_advance(struct drm_i915_gem_request *request)
{
        struct intel_ring *ring = request->ring;
        struct intel_engine_cs *engine = request->engine;

        intel_ring_advance(ring);
        request->tail = ring->tail;

        /*
         * Here we add two extra NOOPs as padding to avoid
         * lite restore of a context with HEAD==TAIL.
         *
         * Caller must reserve WA_TAIL_DWORDS for us!
         */
        intel_ring_emit(ring, MI_NOOP);
        intel_ring_emit(ring, MI_NOOP);
        intel_ring_advance(ring);

        /* We keep the previous context alive until we retire the following
         * request. This ensures that the context object is still pinned
         * for any residual writes the HW makes into it on the context switch
         * into the next object following the breadcrumb. Otherwise, we may
         * retire the context too early.
         */
        request->previous_context = engine->last_context;
        engine->last_context = request->ctx;
        return 0;
}

void intel_execlists_cancel_requests(struct intel_engine_cs *engine)
{
        struct drm_i915_gem_request *req, *tmp;
        LIST_HEAD(cancel_list);

        WARN_ON(!mutex_is_locked(&engine->i915->drm.struct_mutex));

        spin_lock_bh(&engine->execlist_lock);
        list_replace_init(&engine->execlist_queue, &cancel_list);
        spin_unlock_bh(&engine->execlist_lock);

        list_for_each_entry_safe(req, tmp, &cancel_list, execlist_link) {
                list_del(&req->execlist_link);
                i915_gem_request_put(req);
        }
}

void intel_logical_ring_stop(struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv = engine->i915;
        int ret;

        if (!intel_engine_initialized(engine))
                return;

        ret = intel_engine_idle(engine);
        if (ret)
                DRM_ERROR("failed to quiesce %s whilst cleaning up: %d\n",
                          engine->name, ret);

        /* TODO: Is this correct with Execlists enabled? */
        I915_WRITE_MODE(engine, _MASKED_BIT_ENABLE(STOP_RING));
        if (intel_wait_for_register(dev_priv,
                                    RING_MI_MODE(engine->mmio_base),
                                    MODE_IDLE, MODE_IDLE,
                                    1000)) {
                DRM_ERROR("%s :timed out trying to stop ring\n", engine->name);
                return;
        }
        I915_WRITE_MODE(engine, _MASKED_BIT_DISABLE(STOP_RING));
}

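/*
 * Pin the context backing object and its ringbuffer for @engine, map the
 * register state page and cache the context descriptor. Pinning is
 * refcounted per context/engine pair, so only the first pin does the work.
 */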
static int intel_lr_context_pin(struct i915_gem_context *ctx,
                                struct intel_engine_cs *engine)
{
        struct drm_i915_private *dev_priv = ctx->i915;
        struct intel_context *ce = &ctx->engine[engine->id];
        void *vaddr;
        u32 *lrc_reg_state;
        int ret;

        lockdep_assert_held(&ctx->i915->drm.struct_mutex);

        if (ce->pin_count++)
                return 0;

        ret = i915_gem_obj_ggtt_pin(ce->state, GEN8_LR_CONTEXT_ALIGN,
                                    PIN_OFFSET_BIAS | GUC_WOPCM_TOP);
        if (ret)
                goto err;

        vaddr = i915_gem_object_pin_map(ce->state);
        if (IS_ERR(vaddr)) {
                ret = PTR_ERR(vaddr);
                goto unpin_ctx_obj;
        }

        lrc_reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;

        ret = intel_ring_pin(ce->ring);
        if (ret)
                goto unpin_map;

        ce->lrc_vma = i915_gem_obj_to_ggtt(ce->state);
        intel_lr_context_descriptor_update(ctx, engine);

        lrc_reg_state[CTX_RING_BUFFER_START+1] = ce->ring->vma->node.start;
        ce->lrc_reg_state = lrc_reg_state;
        ce->state->dirty = true;

        /* Invalidate GuC TLB. */
        if (i915.enable_guc_submission)
                I915_WRITE(GEN8_GTCR, GEN8_GTCR_INVALIDATE);

        i915_gem_context_get(ctx);
        return 0;

unpin_map:
        i915_gem_object_unpin_map(ce->state);
unpin_ctx_obj:
        i915_gem_object_ggtt_unpin(ce->state);
err:
        ce->pin_count = 0;
        return ret;
}

void intel_lr_context_unpin(struct i915_gem_context *ctx,
                            struct intel_engine_cs *engine)
{
        struct intel_context *ce = &ctx->engine[engine->id];

        lockdep_assert_held(&ctx->i915->drm.struct_mutex);
        GEM_BUG_ON(ce->pin_count == 0);

        if (--ce->pin_count)
                return;

        intel_ring_unpin(ce->ring);

        i915_gem_object_unpin_map(ce->state);
        i915_gem_object_ggtt_unpin(ce->state);

        ce->lrc_vma = NULL;
        ce->lrc_desc = 0;
        ce->lrc_reg_state = NULL;

        i915_gem_context_put(ctx);
}

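/*
 * Emit the accumulated per-context workaround register writes as a single
 * MI_LOAD_REGISTER_IMM, bracketed by flushes so the writes land before any
 * user payload in the request.
 */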
static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
{
        int ret, i;
        struct intel_ring *ring = req->ring;
        struct i915_workarounds *w = &req->i915->workarounds;

        if (w->count == 0)
                return 0;

        ret = req->engine->emit_flush(req, EMIT_BARRIER);
        if (ret)
                return ret;

        ret = intel_ring_begin(req, w->count * 2 + 2);
        if (ret)
                return ret;

        intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(w->count));
        for (i = 0; i < w->count; i++) {
                intel_ring_emit_reg(ring, w->reg[i].addr);
                intel_ring_emit(ring, w->reg[i].value);
        }
        intel_ring_emit(ring, MI_NOOP);

        intel_ring_advance(ring);

        ret = req->engine->emit_flush(req, EMIT_BARRIER);
        if (ret)
                return ret;

        return 0;
}

#define wa_ctx_emit(batch, index, cmd) \
        do { \
                int __index = (index)++; \
                if (WARN_ON(__index >= (PAGE_SIZE / sizeof(uint32_t)))) { \
                        return -ENOSPC; \
                } \
                batch[__index] = (cmd); \
        } while (0)

#define wa_ctx_emit_reg(batch, index, reg) \
        wa_ctx_emit((batch), (index), i915_mmio_reg_offset(reg))

/*
 * In this WA we need to set GEN8_L3SQCREG4[21:21] and reset it after
 * PIPE_CONTROL instruction. This is required for the flush to happen correctly
 * but there is a slight complication as this is applied in WA batch where the
 * values are only initialized once so we cannot take register value at the
 * beginning and reuse it further; hence we save its value to memory, upload a
 * constant value with bit21 set and then we restore it back with the saved value.
 * To simplify the WA, a constant value is formed by using the default value
 * of this register. This shouldn't be a problem because we are only modifying
 * it for a short period and this batch is non-preemptible. We can of course
 * use additional instructions that read the actual value of the register
 * at that time and set our bit of interest but it makes the WA complicated.
 *
 * This WA is also required for Gen9 so extracting as a function avoids
 * code duplication.
 */
static inline int gen8_emit_flush_coherentl3_wa(struct intel_engine_cs *engine,
                                                uint32_t *batch,
                                                uint32_t index)
{
        uint32_t l3sqc4_flush = (0x40400000 | GEN8_LQSC_FLUSH_COHERENT_LINES);

        /*
         * WaDisableLSQCROPERFforOCL:skl,kbl
         * This WA is implemented in skl_init_clock_gating() but since
         * this batch updates GEN8_L3SQCREG4 with default value we need to
         * set this bit here to retain the WA during flush.
         */
        if (IS_SKL_REVID(engine->i915, 0, SKL_REVID_E0) ||
            IS_KBL_REVID(engine->i915, 0, KBL_REVID_E0))
                l3sqc4_flush |= GEN8_LQSC_RO_PERF_DIS;

        wa_ctx_emit(batch, index, (MI_STORE_REGISTER_MEM_GEN8 |
                                   MI_SRM_LRM_GLOBAL_GTT));
        wa_ctx_emit_reg(batch, index, GEN8_L3SQCREG4);
        wa_ctx_emit(batch, index, engine->scratch.gtt_offset + 256);
        wa_ctx_emit(batch, index, 0);

        wa_ctx_emit(batch, index, MI_LOAD_REGISTER_IMM(1));
        wa_ctx_emit_reg(batch, index, GEN8_L3SQCREG4);
        wa_ctx_emit(batch, index, l3sqc4_flush);

        wa_ctx_emit(batch, index, GFX_OP_PIPE_CONTROL(6));
        wa_ctx_emit(batch, index, (PIPE_CONTROL_CS_STALL |
                                   PIPE_CONTROL_DC_FLUSH_ENABLE));
        wa_ctx_emit(batch, index, 0);
        wa_ctx_emit(batch, index, 0);
        wa_ctx_emit(batch, index, 0);
        wa_ctx_emit(batch, index, 0);

        wa_ctx_emit(batch, index, (MI_LOAD_REGISTER_MEM_GEN8 |
                                   MI_SRM_LRM_GLOBAL_GTT));
        wa_ctx_emit_reg(batch, index, GEN8_L3SQCREG4);
        wa_ctx_emit(batch, index, engine->scratch.gtt_offset + 256);
        wa_ctx_emit(batch, index, 0);

        return index;
}

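/*
 * Helpers delimiting one workaround batch inside the shared wa_ctx page:
 * wa_ctx_start() records the aligned start offset, wa_ctx_end() records the
 * resulting size and sanity-checks its alignment.
 */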
static inline uint32_t wa_ctx_start(struct i915_wa_ctx_bb *wa_ctx,
                                    uint32_t offset,
                                    uint32_t start_alignment)
{
        return wa_ctx->offset = ALIGN(offset, start_alignment);
}

static inline int wa_ctx_end(struct i915_wa_ctx_bb *wa_ctx,
                             uint32_t offset,
                             uint32_t size_alignment)
{
        wa_ctx->size = offset - wa_ctx->offset;

        WARN(wa_ctx->size % size_alignment,
             "wa_ctx_bb failed sanity checks: size %d is not aligned to %d\n",
             wa_ctx->size, size_alignment);
        return 0;
}

Daniel Vetter6e5248b2016-07-15 21:48:06 +0200988/*
989 * Typically we only have one indirect_ctx and per_ctx batch buffer which are
990 * initialized at the beginning and shared across all contexts but this field
991 * helps us to have multiple batches at different offsets and select them based
992 * on a criteria. At the moment this batch always start at the beginning of the page
993 * and at this point we don't have multiple wa_ctx batch buffers.
Arun Siluvery17ee9502015-06-19 19:07:01 +0100994 *
Daniel Vetter6e5248b2016-07-15 21:48:06 +0200995 * The number of WA applied are not known at the beginning; we use this field
996 * to return the no of DWORDS written.
Arun Siluvery17ee9502015-06-19 19:07:01 +0100997 *
Daniel Vetter6e5248b2016-07-15 21:48:06 +0200998 * It is to be noted that this batch does not contain MI_BATCH_BUFFER_END
999 * so it adds NOOPs as padding to make it cacheline aligned.
1000 * MI_BATCH_BUFFER_END will be added to perctx batch and both of them together
1001 * makes a complete batch buffer.
Arun Siluvery17ee9502015-06-19 19:07:01 +01001002 */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001003static int gen8_init_indirectctx_bb(struct intel_engine_cs *engine,
Arun Siluvery17ee9502015-06-19 19:07:01 +01001004 struct i915_wa_ctx_bb *wa_ctx,
Daniel Vetter6e5248b2016-07-15 21:48:06 +02001005 uint32_t *batch,
Arun Siluvery17ee9502015-06-19 19:07:01 +01001006 uint32_t *offset)
1007{
Arun Siluvery0160f052015-06-23 15:46:57 +01001008 uint32_t scratch_addr;
Arun Siluvery17ee9502015-06-19 19:07:01 +01001009 uint32_t index = wa_ctx_start(wa_ctx, *offset, CACHELINE_DWORDS);
1010
Arun Siluvery7ad00d12015-06-19 18:37:12 +01001011 /* WaDisableCtxRestoreArbitration:bdw,chv */
Arun Siluvery83b8a982015-07-08 10:27:05 +01001012 wa_ctx_emit(batch, index, MI_ARB_ON_OFF | MI_ARB_DISABLE);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001013
Arun Siluveryc82435b2015-06-19 18:37:13 +01001014 /* WaFlushCoherentL3CacheLinesAtContextSwitch:bdw */
Chris Wilsonc0336662016-05-06 15:40:21 +01001015 if (IS_BROADWELL(engine->i915)) {
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001016 int rc = gen8_emit_flush_coherentl3_wa(engine, batch, index);
Andrzej Hajda604ef732015-09-21 15:33:35 +02001017 if (rc < 0)
1018 return rc;
1019 index = rc;
Arun Siluveryc82435b2015-06-19 18:37:13 +01001020 }
1021
Arun Siluvery0160f052015-06-23 15:46:57 +01001022 /* WaClearSlmSpaceAtContextSwitch:bdw,chv */
1023 /* The actual scratch location is at a 128-byte offset */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001024 scratch_addr = engine->scratch.gtt_offset + 2*CACHELINE_BYTES;
Arun Siluvery0160f052015-06-23 15:46:57 +01001025
Arun Siluvery83b8a982015-07-08 10:27:05 +01001026 wa_ctx_emit(batch, index, GFX_OP_PIPE_CONTROL(6));
1027 wa_ctx_emit(batch, index, (PIPE_CONTROL_FLUSH_L3 |
1028 PIPE_CONTROL_GLOBAL_GTT_IVB |
1029 PIPE_CONTROL_CS_STALL |
1030 PIPE_CONTROL_QW_WRITE));
1031 wa_ctx_emit(batch, index, scratch_addr);
1032 wa_ctx_emit(batch, index, 0);
1033 wa_ctx_emit(batch, index, 0);
1034 wa_ctx_emit(batch, index, 0);
Arun Siluvery0160f052015-06-23 15:46:57 +01001035
Arun Siluvery17ee9502015-06-19 19:07:01 +01001036 /* Pad to end of cacheline */
1037 while (index % CACHELINE_DWORDS)
Arun Siluvery83b8a982015-07-08 10:27:05 +01001038 wa_ctx_emit(batch, index, MI_NOOP);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001039
1040 /*
1041 * MI_BATCH_BUFFER_END is not required in Indirect ctx BB because
1042 * execution depends on the length specified in terms of cache lines
1043 * in the register CTX_RCS_INDIRECT_CTX
1044 */
1045
1046 return wa_ctx_end(wa_ctx, *offset = index, CACHELINE_DWORDS);
1047}
1048
Daniel Vetter6e5248b2016-07-15 21:48:06 +02001049/*
1050 * This batch is started immediately after indirect_ctx batch. Since we ensure
1051 * that indirect_ctx ends on a cacheline this batch is aligned automatically.
Arun Siluvery17ee9502015-06-19 19:07:01 +01001052 *
Daniel Vetter6e5248b2016-07-15 21:48:06 +02001053 * The number of DWORDS written is returned using this field.
Arun Siluvery17ee9502015-06-19 19:07:01 +01001054 *
1055 * This batch is terminated with MI_BATCH_BUFFER_END and so we need not add padding
1056 * to align it to a cacheline, as padding after MI_BATCH_BUFFER_END is redundant.
1057 */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001058static int gen8_init_perctx_bb(struct intel_engine_cs *engine,
Arun Siluvery17ee9502015-06-19 19:07:01 +01001059 struct i915_wa_ctx_bb *wa_ctx,
Daniel Vetter6e5248b2016-07-15 21:48:06 +02001060 uint32_t *batch,
Arun Siluvery17ee9502015-06-19 19:07:01 +01001061 uint32_t *offset)
1062{
1063 uint32_t index = wa_ctx_start(wa_ctx, *offset, CACHELINE_DWORDS);
1064
Arun Siluvery7ad00d12015-06-19 18:37:12 +01001065 /* WaDisableCtxRestoreArbitration:bdw,chv */
Arun Siluvery83b8a982015-07-08 10:27:05 +01001066 wa_ctx_emit(batch, index, MI_ARB_ON_OFF | MI_ARB_ENABLE);
Arun Siluvery7ad00d12015-06-19 18:37:12 +01001067
Arun Siluvery83b8a982015-07-08 10:27:05 +01001068 wa_ctx_emit(batch, index, MI_BATCH_BUFFER_END);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001069
1070 return wa_ctx_end(wa_ctx, *offset = index, 1);
1071}
1072
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001073static int gen9_init_indirectctx_bb(struct intel_engine_cs *engine,
Arun Siluvery0504cff2015-07-14 15:01:27 +01001074 struct i915_wa_ctx_bb *wa_ctx,
Daniel Vetter6e5248b2016-07-15 21:48:06 +02001075 uint32_t *batch,
Arun Siluvery0504cff2015-07-14 15:01:27 +01001076 uint32_t *offset)
1077{
Arun Siluverya4106a72015-07-14 15:01:29 +01001078 int ret;
Arun Siluvery0504cff2015-07-14 15:01:27 +01001079 uint32_t index = wa_ctx_start(wa_ctx, *offset, CACHELINE_DWORDS);
1080
Arun Siluvery0907c8f2015-07-14 15:01:28 +01001081 /* WaDisableCtxRestoreArbitration:skl,bxt */
Chris Wilsonc0336662016-05-06 15:40:21 +01001082 if (IS_SKL_REVID(engine->i915, 0, SKL_REVID_D0) ||
1083 IS_BXT_REVID(engine->i915, 0, BXT_REVID_A1))
Arun Siluvery0907c8f2015-07-14 15:01:28 +01001084 wa_ctx_emit(batch, index, MI_ARB_ON_OFF | MI_ARB_DISABLE);
Arun Siluvery0504cff2015-07-14 15:01:27 +01001085
Arun Siluverya4106a72015-07-14 15:01:29 +01001086 /* WaFlushCoherentL3CacheLinesAtContextSwitch:skl,bxt */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001087 ret = gen8_emit_flush_coherentl3_wa(engine, batch, index);
Arun Siluverya4106a72015-07-14 15:01:29 +01001088 if (ret < 0)
1089 return ret;
1090 index = ret;
1091
Mika Kuoppala873e8172016-07-20 14:26:13 +03001092 /* WaDisableGatherAtSetShaderCommonSlice:skl,bxt,kbl */
1093 wa_ctx_emit(batch, index, MI_LOAD_REGISTER_IMM(1));
1094 wa_ctx_emit_reg(batch, index, COMMON_SLICE_CHICKEN2);
1095 wa_ctx_emit(batch, index, _MASKED_BIT_DISABLE(
1096 GEN9_DISABLE_GATHER_AT_SET_SHADER_COMMON_SLICE));
1097 wa_ctx_emit(batch, index, MI_NOOP);
1098
Mika Kuoppala066d4622016-06-07 17:19:15 +03001099 /* WaClearSlmSpaceAtContextSwitch:kbl */
1100 /* The actual scratch location is at a 128-byte offset */
1101 if (IS_KBL_REVID(engine->i915, 0, KBL_REVID_A0)) {
1102 uint32_t scratch_addr
1103 = engine->scratch.gtt_offset + 2*CACHELINE_BYTES;
1104
1105 wa_ctx_emit(batch, index, GFX_OP_PIPE_CONTROL(6));
1106 wa_ctx_emit(batch, index, (PIPE_CONTROL_FLUSH_L3 |
1107 PIPE_CONTROL_GLOBAL_GTT_IVB |
1108 PIPE_CONTROL_CS_STALL |
1109 PIPE_CONTROL_QW_WRITE));
1110 wa_ctx_emit(batch, index, scratch_addr);
1111 wa_ctx_emit(batch, index, 0);
1112 wa_ctx_emit(batch, index, 0);
1113 wa_ctx_emit(batch, index, 0);
1114 }
Tim Gore3485d992016-07-05 10:01:30 +01001115
1116 /* WaMediaPoolStateCmdInWABB:bxt */
1117 if (HAS_POOLED_EU(engine->i915)) {
1118 /*
1119 * EU pool configuration is set up along with the golden context
1120 * during context initialization. The value depends on the
1121 * device type (2x6 or 3x6) and needs to be updated based
1122 * on which subslice is disabled, especially for 2x6
1123 * devices. However, it is safe to load the default 3x6
1124 * configuration instead of masking off the corresponding
1125 * bits, because the HW ignores the bits of a disabled
1126 * subslice and drops down to the appropriate config. Please
1127 * see render_state_setup() in i915_gem_render_state.c for
1128 * possible configurations, to avoid duplication they are
1129 * not shown here again.
1130 */
1131 u32 eu_pool_config = 0x00777000;
1132 wa_ctx_emit(batch, index, GEN9_MEDIA_POOL_STATE);
1133 wa_ctx_emit(batch, index, GEN9_MEDIA_POOL_ENABLE);
1134 wa_ctx_emit(batch, index, eu_pool_config);
1135 wa_ctx_emit(batch, index, 0);
1136 wa_ctx_emit(batch, index, 0);
1137 wa_ctx_emit(batch, index, 0);
1138 }
1139
Arun Siluvery0504cff2015-07-14 15:01:27 +01001140 /* Pad to end of cacheline */
1141 while (index % CACHELINE_DWORDS)
1142 wa_ctx_emit(batch, index, MI_NOOP);
1143
1144 return wa_ctx_end(wa_ctx, *offset = index, CACHELINE_DWORDS);
1145}
1146
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001147static int gen9_init_perctx_bb(struct intel_engine_cs *engine,
Arun Siluvery0504cff2015-07-14 15:01:27 +01001148 struct i915_wa_ctx_bb *wa_ctx,
Daniel Vetter6e5248b2016-07-15 21:48:06 +02001149 uint32_t *batch,
Arun Siluvery0504cff2015-07-14 15:01:27 +01001150 uint32_t *offset)
1151{
1152 uint32_t index = wa_ctx_start(wa_ctx, *offset, CACHELINE_DWORDS);
1153
Arun Siluvery9b014352015-07-14 15:01:30 +01001154 /* WaSetDisablePixMaskCammingAndRhwoInCommonSliceChicken:skl,bxt */
Chris Wilsonc0336662016-05-06 15:40:21 +01001155 if (IS_SKL_REVID(engine->i915, 0, SKL_REVID_B0) ||
1156 IS_BXT_REVID(engine->i915, 0, BXT_REVID_A1)) {
Arun Siluvery9b014352015-07-14 15:01:30 +01001157 wa_ctx_emit(batch, index, MI_LOAD_REGISTER_IMM(1));
Ville Syrjälä8f40db72015-11-04 23:20:08 +02001158 wa_ctx_emit_reg(batch, index, GEN9_SLICE_COMMON_ECO_CHICKEN0);
Arun Siluvery9b014352015-07-14 15:01:30 +01001159 wa_ctx_emit(batch, index,
1160 _MASKED_BIT_ENABLE(DISABLE_PIXEL_MASK_CAMMING));
1161 wa_ctx_emit(batch, index, MI_NOOP);
1162 }
1163
Tim Goreb1e429f2016-03-21 14:37:29 +00001164 /* WaClearTdlStateAckDirtyBits:bxt */
Chris Wilsonc0336662016-05-06 15:40:21 +01001165 if (IS_BXT_REVID(engine->i915, 0, BXT_REVID_B0)) {
Tim Goreb1e429f2016-03-21 14:37:29 +00001166 wa_ctx_emit(batch, index, MI_LOAD_REGISTER_IMM(4));
1167
1168 wa_ctx_emit_reg(batch, index, GEN8_STATE_ACK);
1169 wa_ctx_emit(batch, index, _MASKED_BIT_DISABLE(GEN9_SUBSLICE_TDL_ACK_BITS));
1170
1171 wa_ctx_emit_reg(batch, index, GEN9_STATE_ACK_SLICE1);
1172 wa_ctx_emit(batch, index, _MASKED_BIT_DISABLE(GEN9_SUBSLICE_TDL_ACK_BITS));
1173
1174 wa_ctx_emit_reg(batch, index, GEN9_STATE_ACK_SLICE2);
1175 wa_ctx_emit(batch, index, _MASKED_BIT_DISABLE(GEN9_SUBSLICE_TDL_ACK_BITS));
1176
1177 wa_ctx_emit_reg(batch, index, GEN7_ROW_CHICKEN2);
1178 /* dummy write to CS, mask bits are 0 to ensure the register is not modified */
1179 wa_ctx_emit(batch, index, 0x0);
1180 wa_ctx_emit(batch, index, MI_NOOP);
1181 }
1182
Arun Siluvery0907c8f2015-07-14 15:01:28 +01001183 /* WaDisableCtxRestoreArbitration:skl,bxt */
Chris Wilsonc0336662016-05-06 15:40:21 +01001184 if (IS_SKL_REVID(engine->i915, 0, SKL_REVID_D0) ||
1185 IS_BXT_REVID(engine->i915, 0, BXT_REVID_A1))
Arun Siluvery0907c8f2015-07-14 15:01:28 +01001186 wa_ctx_emit(batch, index, MI_ARB_ON_OFF | MI_ARB_ENABLE);
1187
Arun Siluvery0504cff2015-07-14 15:01:27 +01001188 wa_ctx_emit(batch, index, MI_BATCH_BUFFER_END);
1189
1190 return wa_ctx_end(wa_ctx, *offset = index, 1);
1191}
1192
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001193static int lrc_setup_wa_ctx_obj(struct intel_engine_cs *engine, u32 size)
Arun Siluvery17ee9502015-06-19 19:07:01 +01001194{
1195 int ret;
1196
Chris Wilson91c8a322016-07-05 10:40:23 +01001197 engine->wa_ctx.obj = i915_gem_object_create(&engine->i915->drm,
1198 PAGE_ALIGN(size));
Chris Wilsonfe3db792016-04-25 13:32:13 +01001199 if (IS_ERR(engine->wa_ctx.obj)) {
Arun Siluvery17ee9502015-06-19 19:07:01 +01001200 DRM_DEBUG_DRIVER("alloc LRC WA ctx backing obj failed.\n");
Chris Wilsonfe3db792016-04-25 13:32:13 +01001201 ret = PTR_ERR(engine->wa_ctx.obj);
1202 engine->wa_ctx.obj = NULL;
1203 return ret;
Arun Siluvery17ee9502015-06-19 19:07:01 +01001204 }
1205
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001206 ret = i915_gem_obj_ggtt_pin(engine->wa_ctx.obj, PAGE_SIZE, 0);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001207 if (ret) {
1208 DRM_DEBUG_DRIVER("pin LRC WA ctx backing obj failed: %d\n",
1209 ret);
Chris Wilsonf8c417c2016-07-20 13:31:53 +01001210 i915_gem_object_put(engine->wa_ctx.obj);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001211 return ret;
1212 }
1213
1214 return 0;
1215}
1216
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001217static void lrc_destroy_wa_ctx_obj(struct intel_engine_cs *engine)
Arun Siluvery17ee9502015-06-19 19:07:01 +01001218{
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001219 if (engine->wa_ctx.obj) {
1220 i915_gem_object_ggtt_unpin(engine->wa_ctx.obj);
Chris Wilsonf8c417c2016-07-20 13:31:53 +01001221 i915_gem_object_put(engine->wa_ctx.obj);
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001222 engine->wa_ctx.obj = NULL;
Arun Siluvery17ee9502015-06-19 19:07:01 +01001223 }
1224}
1225
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001226static int intel_init_workaround_bb(struct intel_engine_cs *engine)
Arun Siluvery17ee9502015-06-19 19:07:01 +01001227{
1228 int ret;
1229 uint32_t *batch;
1230 uint32_t offset;
1231 struct page *page;
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001232 struct i915_ctx_workarounds *wa_ctx = &engine->wa_ctx;
Arun Siluvery17ee9502015-06-19 19:07:01 +01001233
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001234 WARN_ON(engine->id != RCS);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001235
Arun Siluvery5e60d792015-06-23 15:50:44 +01001236 /* update this when WAs for higher Gens are added */
Chris Wilsonc0336662016-05-06 15:40:21 +01001237 if (INTEL_GEN(engine->i915) > 9) {
Arun Siluvery0504cff2015-07-14 15:01:27 +01001238 DRM_ERROR("WA batch buffer is not initialized for Gen%d\n",
Chris Wilsonc0336662016-05-06 15:40:21 +01001239 INTEL_GEN(engine->i915));
Arun Siluvery5e60d792015-06-23 15:50:44 +01001240 return 0;
Arun Siluvery0504cff2015-07-14 15:01:27 +01001241 }
Arun Siluvery5e60d792015-06-23 15:50:44 +01001242
Arun Siluveryc4db7592015-06-19 18:37:11 +01001243 /* some WAs perform writes to the scratch page, ensure it is valid */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001244 if (engine->scratch.obj == NULL) {
1245 DRM_ERROR("scratch page not allocated for %s\n", engine->name);
Arun Siluveryc4db7592015-06-19 18:37:11 +01001246 return -EINVAL;
1247 }
1248
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001249 ret = lrc_setup_wa_ctx_obj(engine, PAGE_SIZE);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001250 if (ret) {
1251 DRM_DEBUG_DRIVER("Failed to setup context WA page: %d\n", ret);
1252 return ret;
1253 }
1254
Dave Gordon033908a2015-12-10 18:51:23 +00001255 page = i915_gem_object_get_dirty_page(wa_ctx->obj, 0);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001256 batch = kmap_atomic(page);
1257 offset = 0;
1258
Chris Wilsonc0336662016-05-06 15:40:21 +01001259 if (IS_GEN8(engine->i915)) {
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001260 ret = gen8_init_indirectctx_bb(engine,
Arun Siluvery17ee9502015-06-19 19:07:01 +01001261 &wa_ctx->indirect_ctx,
1262 batch,
1263 &offset);
1264 if (ret)
1265 goto out;
1266
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001267 ret = gen8_init_perctx_bb(engine,
Arun Siluvery17ee9502015-06-19 19:07:01 +01001268 &wa_ctx->per_ctx,
1269 batch,
1270 &offset);
1271 if (ret)
1272 goto out;
Chris Wilsonc0336662016-05-06 15:40:21 +01001273 } else if (IS_GEN9(engine->i915)) {
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001274 ret = gen9_init_indirectctx_bb(engine,
Arun Siluvery0504cff2015-07-14 15:01:27 +01001275 &wa_ctx->indirect_ctx,
1276 batch,
1277 &offset);
1278 if (ret)
1279 goto out;
1280
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001281 ret = gen9_init_perctx_bb(engine,
Arun Siluvery0504cff2015-07-14 15:01:27 +01001282 &wa_ctx->per_ctx,
1283 batch,
1284 &offset);
1285 if (ret)
1286 goto out;
Arun Siluvery17ee9502015-06-19 19:07:01 +01001287 }
1288
1289out:
1290 kunmap_atomic(batch);
1291 if (ret)
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001292 lrc_destroy_wa_ctx_obj(engine);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001293
1294 return ret;
1295}
1296
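/*
 * The offsets and sizes recorded in engine->wa_ctx by the functions above are
 * consumed later by populate_lr_context(), which programs them into the
 * context image via CTX_RCS_INDIRECT_CTX, CTX_RCS_INDIRECT_CTX_OFFSET and
 * CTX_BB_PER_CTX_PTR so that the hardware executes the workaround batches
 * automatically around context switches.
 */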
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001297static void lrc_init_hws(struct intel_engine_cs *engine)
1298{
Chris Wilsonc0336662016-05-06 15:40:21 +01001299 struct drm_i915_private *dev_priv = engine->i915;
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001300
1301 I915_WRITE(RING_HWS_PGA(engine->mmio_base),
1302 (u32)engine->status_page.gfx_addr);
1303 POSTING_READ(RING_HWS_PGA(engine->mmio_base));
1304}
1305
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001306static int gen8_init_common_ring(struct intel_engine_cs *engine)
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001307{
Chris Wilsonc0336662016-05-06 15:40:21 +01001308 struct drm_i915_private *dev_priv = engine->i915;
Tvrtko Ursulinc6a2ac72016-02-26 16:58:32 +00001309 unsigned int next_context_status_buffer_hw;
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001310
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001311 lrc_init_hws(engine);
Nick Hoathe84fe802015-09-11 12:53:46 +01001312
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001313 I915_WRITE_IMR(engine,
1314 ~(engine->irq_enable_mask | engine->irq_keep_mask));
1315 I915_WRITE(RING_HWSTAM(engine->mmio_base), 0xffffffff);
Oscar Mateo73d477f2014-07-24 17:04:31 +01001316
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001317 I915_WRITE(RING_MODE_GEN7(engine),
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001318 _MASKED_BIT_DISABLE(GFX_REPLAY_MODE) |
1319 _MASKED_BIT_ENABLE(GFX_RUN_LIST_ENABLE));
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001320 POSTING_READ(RING_MODE_GEN7(engine));
Michel Thierrydfc53c52015-09-28 13:25:12 +01001321
1322 /*
1323 * Instead of resetting the Context Status Buffer (CSB) read pointer to
1324 * zero, we need to read the write pointer from hardware and use its
1325 * value because "this register is power context save restored".
1326 * Effectively, these states have been observed:
1327 *
1328 * | Suspend-to-idle (freeze) | Suspend-to-RAM (mem) |
1329 * BDW | CSB regs not reset | CSB regs reset |
1330 * CHT | CSB regs not reset | CSB regs not reset |
Ben Widawsky5590a5f2016-01-05 10:30:05 -08001331 * SKL | ? | ? |
1332 * BXT | ? | ? |
Michel Thierrydfc53c52015-09-28 13:25:12 +01001333 */
Ben Widawsky5590a5f2016-01-05 10:30:05 -08001334 next_context_status_buffer_hw =
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001335 GEN8_CSB_WRITE_PTR(I915_READ(RING_CONTEXT_STATUS_PTR(engine)));
Michel Thierrydfc53c52015-09-28 13:25:12 +01001336
1337 /*
1338 * When the CSB registers are reset (also after power-up / gpu reset),
1339 * the CSB write pointer is set to all 1's, which is not valid. Use '5'
1340 * (GEN8_CSB_ENTRIES - 1) in this special case, so the first element read is CSB[0].
1341 */
1342 if (next_context_status_buffer_hw == GEN8_CSB_PTR_MASK)
1343 next_context_status_buffer_hw = (GEN8_CSB_ENTRIES - 1);
1344
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001345 engine->next_context_status_buffer = next_context_status_buffer_hw;
1346 DRM_DEBUG_DRIVER("Execlists enabled for %s\n", engine->name);
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001347
Tomas Elffc0768c2016-03-21 16:26:59 +00001348 intel_engine_init_hangcheck(engine);
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001349
Peter Antoine0ccdacf2016-04-13 15:03:25 +01001350 return intel_mocs_init_engine(engine);
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001351}
1352
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001353static int gen8_init_render_ring(struct intel_engine_cs *engine)
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001354{
Chris Wilsonc0336662016-05-06 15:40:21 +01001355 struct drm_i915_private *dev_priv = engine->i915;
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001356 int ret;
1357
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001358 ret = gen8_init_common_ring(engine);
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001359 if (ret)
1360 return ret;
1361
1362 /* We need to disable the AsyncFlip performance optimisations in order
1363 * to use MI_WAIT_FOR_EVENT within the CS. It should already be
1364 * programmed to '1' on all products.
1365 *
1366 * WaDisableAsyncFlipPerfMode:snb,ivb,hsw,vlv,bdw,chv
1367 */
1368 I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(ASYNC_FLIP_PERF_DISABLE));
1369
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001370 I915_WRITE(INSTPM, _MASKED_BIT_ENABLE(INSTPM_FORCE_ORDERING));
1371
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001372 return init_workarounds_ring(engine);
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001373}
1374
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001375static int gen9_init_render_ring(struct intel_engine_cs *engine)
Damien Lespiau82ef8222015-02-09 19:33:08 +00001376{
1377 int ret;
1378
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001379 ret = gen8_init_common_ring(engine);
Damien Lespiau82ef8222015-02-09 19:33:08 +00001380 if (ret)
1381 return ret;
1382
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001383 return init_workarounds_ring(engine);
Damien Lespiau82ef8222015-02-09 19:33:08 +00001384}
1385
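/*
 * Emit one MI_LOAD_REGISTER_IMM block that loads the upper and lower dword of
 * every legacy PDP register, i.e. GEN8_LEGACY_PDPES * 2 (reg, value) writes.
 * Together with the LRI header and the trailing MI_NOOP this accounts for the
 * num_lri_cmds * 2 + 2 dwords reserved from the ring below.
 */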
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001386static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
1387{
1388 struct i915_hw_ppgtt *ppgtt = req->ctx->ppgtt;
Chris Wilson7e37f882016-08-02 22:50:21 +01001389 struct intel_ring *ring = req->ring;
Tvrtko Ursulin4a570db2016-03-16 11:00:38 +00001390 struct intel_engine_cs *engine = req->engine;
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001391 const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
1392 int i, ret;
1393
Chris Wilson987046a2016-04-28 09:56:46 +01001394 ret = intel_ring_begin(req, num_lri_cmds * 2 + 2);
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001395 if (ret)
1396 return ret;
1397
Chris Wilsonb5321f32016-08-02 22:50:18 +01001398 intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_lri_cmds));
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001399 for (i = GEN8_LEGACY_PDPES - 1; i >= 0; i--) {
1400 const dma_addr_t pd_daddr = i915_page_dir_dma_addr(ppgtt, i);
1401
Chris Wilsonb5321f32016-08-02 22:50:18 +01001402 intel_ring_emit_reg(ring, GEN8_RING_PDP_UDW(engine, i));
1403 intel_ring_emit(ring, upper_32_bits(pd_daddr));
1404 intel_ring_emit_reg(ring, GEN8_RING_PDP_LDW(engine, i));
1405 intel_ring_emit(ring, lower_32_bits(pd_daddr));
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001406 }
1407
Chris Wilsonb5321f32016-08-02 22:50:18 +01001408 intel_ring_emit(ring, MI_NOOP);
1409 intel_ring_advance(ring);
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001410
1411 return 0;
1412}
1413
John Harrisonbe795fc2015-05-29 17:44:03 +01001414static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
Chris Wilson803688b2016-08-02 22:50:27 +01001415 u64 offset, u32 len,
1416 unsigned int dispatch_flags)
Oscar Mateo15648582014-07-24 17:04:32 +01001417{
Chris Wilson7e37f882016-08-02 22:50:21 +01001418 struct intel_ring *ring = req->ring;
John Harrison8e004ef2015-02-13 11:48:10 +00001419 bool ppgtt = !(dispatch_flags & I915_DISPATCH_SECURE);
Oscar Mateo15648582014-07-24 17:04:32 +01001420 int ret;
1421
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001422 /* Don't rely on the hw updating the PDPs, especially in lite-restore.
1423 * Ideally, we should set Force PD Restore in ctx descriptor,
1424 * but we can't. Force Restore would be a second option, but
1425 * it is unsafe in case of lite-restore (because the ctx is
Michel Thierry2dba3232015-07-30 11:06:23 +01001426 * not idle). PML4 is allocated during ppgtt init so this is
1427 * not needed in 48-bit. */
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001428 if (req->ctx->ppgtt &&
Tvrtko Ursulin666796d2016-03-16 11:00:39 +00001429 (intel_engine_flag(req->engine) & req->ctx->ppgtt->pd_dirty_rings)) {
Zhiyuan Lv331f38e2015-08-28 15:41:14 +08001430 if (!USES_FULL_48BIT_PPGTT(req->i915) &&
Chris Wilsonc0336662016-05-06 15:40:21 +01001431 !intel_vgpu_active(req->i915)) {
Michel Thierry2dba3232015-07-30 11:06:23 +01001432 ret = intel_logical_ring_emit_pdps(req);
1433 if (ret)
1434 return ret;
1435 }
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001436
Tvrtko Ursulin666796d2016-03-16 11:00:39 +00001437 req->ctx->ppgtt->pd_dirty_rings &= ~intel_engine_flag(req->engine);
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001438 }
1439
Chris Wilson987046a2016-04-28 09:56:46 +01001440 ret = intel_ring_begin(req, 4);
Oscar Mateo15648582014-07-24 17:04:32 +01001441 if (ret)
1442 return ret;
1443
1444 /* FIXME(BDW): Address space and security selectors. */
Chris Wilsonb5321f32016-08-02 22:50:18 +01001445 intel_ring_emit(ring, MI_BATCH_BUFFER_START_GEN8 |
1446 (ppgtt<<8) |
1447 (dispatch_flags & I915_DISPATCH_RS ?
1448 MI_BATCH_RESOURCE_STREAMER : 0));
1449 intel_ring_emit(ring, lower_32_bits(offset));
1450 intel_ring_emit(ring, upper_32_bits(offset));
1451 intel_ring_emit(ring, MI_NOOP);
1452 intel_ring_advance(ring);
Oscar Mateo15648582014-07-24 17:04:32 +01001453
1454 return 0;
1455}
1456
Chris Wilson31bb59c2016-07-01 17:23:27 +01001457static void gen8_logical_ring_enable_irq(struct intel_engine_cs *engine)
Oscar Mateo73d477f2014-07-24 17:04:31 +01001458{
Chris Wilsonc0336662016-05-06 15:40:21 +01001459 struct drm_i915_private *dev_priv = engine->i915;
Chris Wilson31bb59c2016-07-01 17:23:27 +01001460 I915_WRITE_IMR(engine,
1461 ~(engine->irq_enable_mask | engine->irq_keep_mask));
1462 POSTING_READ_FW(RING_IMR(engine->mmio_base));
Oscar Mateo73d477f2014-07-24 17:04:31 +01001463}
1464
Chris Wilson31bb59c2016-07-01 17:23:27 +01001465static void gen8_logical_ring_disable_irq(struct intel_engine_cs *engine)
Oscar Mateo73d477f2014-07-24 17:04:31 +01001466{
Chris Wilsonc0336662016-05-06 15:40:21 +01001467 struct drm_i915_private *dev_priv = engine->i915;
Chris Wilson31bb59c2016-07-01 17:23:27 +01001468 I915_WRITE_IMR(engine, ~engine->irq_keep_mask);
Oscar Mateo73d477f2014-07-24 17:04:31 +01001469}
1470
Chris Wilson7c9cf4e2016-08-02 22:50:25 +01001471static int gen8_emit_flush(struct drm_i915_gem_request *request, u32 mode)
Oscar Mateo47122742014-07-24 17:04:28 +01001472{
Chris Wilson7e37f882016-08-02 22:50:21 +01001473 struct intel_ring *ring = request->ring;
1474 u32 cmd;
Oscar Mateo47122742014-07-24 17:04:28 +01001475 int ret;
1476
Chris Wilson987046a2016-04-28 09:56:46 +01001477 ret = intel_ring_begin(request, 4);
Oscar Mateo47122742014-07-24 17:04:28 +01001478 if (ret)
1479 return ret;
1480
1481 cmd = MI_FLUSH_DW + 1;
1482
Chris Wilsonf0a1fb12015-01-22 13:42:00 +00001483 /* We always require a command barrier so that subsequent
1484 * commands, such as breadcrumb interrupts, are strictly ordered
1485 * wrt the contents of the write cache being flushed to memory
1486 * (and thus being coherent from the CPU).
1487 */
1488 cmd |= MI_FLUSH_DW_STORE_INDEX | MI_FLUSH_DW_OP_STOREDW;
1489
Chris Wilson7c9cf4e2016-08-02 22:50:25 +01001490 if (mode & EMIT_INVALIDATE) {
Chris Wilsonf0a1fb12015-01-22 13:42:00 +00001491 cmd |= MI_INVALIDATE_TLB;
Chris Wilson1dae2df2016-08-02 22:50:19 +01001492 if (request->engine->id == VCS)
Chris Wilsonf0a1fb12015-01-22 13:42:00 +00001493 cmd |= MI_INVALIDATE_BSD;
Oscar Mateo47122742014-07-24 17:04:28 +01001494 }
1495
Chris Wilsonb5321f32016-08-02 22:50:18 +01001496 intel_ring_emit(ring, cmd);
1497 intel_ring_emit(ring,
1498 I915_GEM_HWS_SCRATCH_ADDR |
1499 MI_FLUSH_DW_USE_GTT);
1500 intel_ring_emit(ring, 0); /* upper addr */
1501 intel_ring_emit(ring, 0); /* value */
1502 intel_ring_advance(ring);
Oscar Mateo47122742014-07-24 17:04:28 +01001503
1504 return 0;
1505}
1506
John Harrison7deb4d32015-05-29 17:43:59 +01001507static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
Chris Wilson7c9cf4e2016-08-02 22:50:25 +01001508 u32 mode)
Oscar Mateo47122742014-07-24 17:04:28 +01001509{
Chris Wilson7e37f882016-08-02 22:50:21 +01001510 struct intel_ring *ring = request->ring;
Chris Wilsonb5321f32016-08-02 22:50:18 +01001511 struct intel_engine_cs *engine = request->engine;
Tvrtko Ursuline2f80392016-03-16 11:00:36 +00001512 u32 scratch_addr = engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001513 bool vf_flush_wa = false, dc_flush_wa = false;
Oscar Mateo47122742014-07-24 17:04:28 +01001514 u32 flags = 0;
1515 int ret;
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001516 int len;
Oscar Mateo47122742014-07-24 17:04:28 +01001517
1518 flags |= PIPE_CONTROL_CS_STALL;
1519
Chris Wilson7c9cf4e2016-08-02 22:50:25 +01001520 if (mode & EMIT_FLUSH) {
Oscar Mateo47122742014-07-24 17:04:28 +01001521 flags |= PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH;
1522 flags |= PIPE_CONTROL_DEPTH_CACHE_FLUSH;
Francisco Jerez965fd602016-01-13 18:59:39 -08001523 flags |= PIPE_CONTROL_DC_FLUSH_ENABLE;
Chris Wilson40a24482015-08-21 16:08:41 +01001524 flags |= PIPE_CONTROL_FLUSH_ENABLE;
Oscar Mateo47122742014-07-24 17:04:28 +01001525 }
1526
Chris Wilson7c9cf4e2016-08-02 22:50:25 +01001527 if (mode & EMIT_INVALIDATE) {
Oscar Mateo47122742014-07-24 17:04:28 +01001528 flags |= PIPE_CONTROL_TLB_INVALIDATE;
1529 flags |= PIPE_CONTROL_INSTRUCTION_CACHE_INVALIDATE;
1530 flags |= PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE;
1531 flags |= PIPE_CONTROL_VF_CACHE_INVALIDATE;
1532 flags |= PIPE_CONTROL_CONST_CACHE_INVALIDATE;
1533 flags |= PIPE_CONTROL_STATE_CACHE_INVALIDATE;
1534 flags |= PIPE_CONTROL_QW_WRITE;
1535 flags |= PIPE_CONTROL_GLOBAL_GTT_IVB;
Oscar Mateo47122742014-07-24 17:04:28 +01001536
Ben Widawsky1a5a9ce2015-12-17 09:49:57 -08001537 /*
1538 * On GEN9: before VF_CACHE_INVALIDATE we need to emit a NULL
1539 * pipe control.
1540 */
Chris Wilsonc0336662016-05-06 15:40:21 +01001541 if (IS_GEN9(request->i915))
Ben Widawsky1a5a9ce2015-12-17 09:49:57 -08001542 vf_flush_wa = true;
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001543
1544 /* WaForGAMHang:kbl */
1545 if (IS_KBL_REVID(request->i915, 0, KBL_REVID_B0))
1546 dc_flush_wa = true;
Ben Widawsky1a5a9ce2015-12-17 09:49:57 -08001547 }
Imre Deak9647ff32015-01-25 13:27:11 -08001548
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001549 len = 6;
1550
1551 if (vf_flush_wa)
1552 len += 6;
1553
1554 if (dc_flush_wa)
1555 len += 12;
1556
1557 ret = intel_ring_begin(request, len);
Oscar Mateo47122742014-07-24 17:04:28 +01001558 if (ret)
1559 return ret;
1560
Imre Deak9647ff32015-01-25 13:27:11 -08001561 if (vf_flush_wa) {
Chris Wilsonb5321f32016-08-02 22:50:18 +01001562 intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
1563 intel_ring_emit(ring, 0);
1564 intel_ring_emit(ring, 0);
1565 intel_ring_emit(ring, 0);
1566 intel_ring_emit(ring, 0);
1567 intel_ring_emit(ring, 0);
Imre Deak9647ff32015-01-25 13:27:11 -08001568 }
1569
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001570 if (dc_flush_wa) {
Chris Wilsonb5321f32016-08-02 22:50:18 +01001571 intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
1572 intel_ring_emit(ring, PIPE_CONTROL_DC_FLUSH_ENABLE);
1573 intel_ring_emit(ring, 0);
1574 intel_ring_emit(ring, 0);
1575 intel_ring_emit(ring, 0);
1576 intel_ring_emit(ring, 0);
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001577 }
1578
Chris Wilsonb5321f32016-08-02 22:50:18 +01001579 intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
1580 intel_ring_emit(ring, flags);
1581 intel_ring_emit(ring, scratch_addr);
1582 intel_ring_emit(ring, 0);
1583 intel_ring_emit(ring, 0);
1584 intel_ring_emit(ring, 0);
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001585
1586 if (dc_flush_wa) {
Chris Wilsonb5321f32016-08-02 22:50:18 +01001587 intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
1588 intel_ring_emit(ring, PIPE_CONTROL_CS_STALL);
1589 intel_ring_emit(ring, 0);
1590 intel_ring_emit(ring, 0);
1591 intel_ring_emit(ring, 0);
1592 intel_ring_emit(ring, 0);
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001593 }
1594
Chris Wilsonb5321f32016-08-02 22:50:18 +01001595 intel_ring_advance(ring);
Oscar Mateo47122742014-07-24 17:04:28 +01001596
1597 return 0;
1598}
1599
Chris Wilsonc04e0f32016-04-09 10:57:54 +01001600static void bxt_a_seqno_barrier(struct intel_engine_cs *engine)
Imre Deak319404d2015-08-14 18:35:27 +03001601{
Imre Deak319404d2015-08-14 18:35:27 +03001602 /*
1603 * On BXT A steppings there is a HW coherency issue whereby the
1604 * MI_STORE_DATA_IMM storing the completed request's seqno
1605 * occasionally doesn't invalidate the CPU cache. Work around this by
1606 * clflushing the corresponding cacheline whenever the caller wants
1607 * the coherency to be guaranteed. Note that this cacheline is known
1608 * to be clean at this point, since we only write it in
1609 * bxt_a_set_seqno(), where we also do a clflush after the write. So
1610 * this clflush in practice becomes an invalidate operation.
1611 */
Chris Wilsonc04e0f32016-04-09 10:57:54 +01001612 intel_flush_status_page(engine, I915_GEM_HWS_INDEX);
Imre Deak319404d2015-08-14 18:35:27 +03001613}
1614
Chris Wilson7c17d372016-01-20 15:43:35 +02001615/*
1616 * Reserve space for 2 NOOPs at the end of each request to be
1617 * used as a workaround for not being allowed to do lite
1618 * restore with HEAD==TAIL (WaIdleLiteRestore).
1619 */
1620#define WA_TAIL_DWORDS 2
1621
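/*
 * Both emitters below reserve WA_TAIL_DWORDS on top of their own payload
 * (6 + WA_TAIL_DWORDS and 8 + WA_TAIL_DWORDS respectively);
 * intel_logical_ring_advance() is then expected to pad the reserved tail
 * with MI_NOOPs so that a lite-restore never observes HEAD == TAIL.
 */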
John Harrisonc4e76632015-05-29 17:44:01 +01001622static int gen8_emit_request(struct drm_i915_gem_request *request)
Oscar Mateo4da46e12014-07-24 17:04:27 +01001623{
Chris Wilson7e37f882016-08-02 22:50:21 +01001624 struct intel_ring *ring = request->ring;
Oscar Mateo4da46e12014-07-24 17:04:27 +01001625 int ret;
1626
Chris Wilson987046a2016-04-28 09:56:46 +01001627 ret = intel_ring_begin(request, 6 + WA_TAIL_DWORDS);
Oscar Mateo4da46e12014-07-24 17:04:27 +01001628 if (ret)
1629 return ret;
1630
Chris Wilson7c17d372016-01-20 15:43:35 +02001631 /* w/a: bit 5 needs to be zero for MI_FLUSH_DW address. */
1632 BUILD_BUG_ON(I915_GEM_HWS_INDEX_ADDR & (1 << 5));
Oscar Mateo4da46e12014-07-24 17:04:27 +01001633
Chris Wilsonb5321f32016-08-02 22:50:18 +01001634 intel_ring_emit(ring, (MI_FLUSH_DW + 1) | MI_FLUSH_DW_OP_STOREDW);
1635 intel_ring_emit(ring,
1636 intel_hws_seqno_address(request->engine) |
1637 MI_FLUSH_DW_USE_GTT);
1638 intel_ring_emit(ring, 0);
1639 intel_ring_emit(ring, request->fence.seqno);
1640 intel_ring_emit(ring, MI_USER_INTERRUPT);
1641 intel_ring_emit(ring, MI_NOOP);
Chris Wilsonddd66c52016-08-02 22:50:31 +01001642 return intel_logical_ring_advance(request);
Chris Wilson7c17d372016-01-20 15:43:35 +02001643}
Oscar Mateo4da46e12014-07-24 17:04:27 +01001644
Chris Wilson7c17d372016-01-20 15:43:35 +02001645static int gen8_emit_request_render(struct drm_i915_gem_request *request)
1646{
Chris Wilson7e37f882016-08-02 22:50:21 +01001647 struct intel_ring *ring = request->ring;
Chris Wilson7c17d372016-01-20 15:43:35 +02001648 int ret;
1649
Chris Wilson987046a2016-04-28 09:56:46 +01001650 ret = intel_ring_begin(request, 8 + WA_TAIL_DWORDS);
Chris Wilson7c17d372016-01-20 15:43:35 +02001651 if (ret)
1652 return ret;
1653
Michał Winiarskice81a652016-04-12 15:51:55 +02001654 /* We're using qword write, seqno should be aligned to 8 bytes. */
1655 BUILD_BUG_ON(I915_GEM_HWS_INDEX & 1);
1656
Chris Wilson7c17d372016-01-20 15:43:35 +02001657 /* w/a for post sync ops following a GPGPU operation we
1658 * need a prior CS_STALL, which is emitted by the flush
1659 * following the batch.
Michel Thierry53292cd2015-04-15 18:11:33 +01001660 */
Chris Wilsonb5321f32016-08-02 22:50:18 +01001661 intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
1662 intel_ring_emit(ring,
1663 (PIPE_CONTROL_GLOBAL_GTT_IVB |
1664 PIPE_CONTROL_CS_STALL |
1665 PIPE_CONTROL_QW_WRITE));
1666 intel_ring_emit(ring, intel_hws_seqno_address(request->engine));
1667 intel_ring_emit(ring, 0);
1668 intel_ring_emit(ring, i915_gem_request_get_seqno(request));
Michał Winiarskice81a652016-04-12 15:51:55 +02001669 /* We're thrashing one dword of HWS. */
Chris Wilsonb5321f32016-08-02 22:50:18 +01001670 intel_ring_emit(ring, 0);
1671 intel_ring_emit(ring, MI_USER_INTERRUPT);
1672 intel_ring_emit(ring, MI_NOOP);
Chris Wilsonddd66c52016-08-02 22:50:31 +01001673 return intel_logical_ring_advance(request);
Oscar Mateo4da46e12014-07-24 17:04:27 +01001674}
1675
John Harrison87531812015-05-29 17:43:44 +01001676static int gen8_init_rcs_context(struct drm_i915_gem_request *req)
Thomas Daniele7778be2014-12-02 12:50:48 +00001677{
1678 int ret;
1679
John Harrisone2be4fa2015-05-29 17:43:54 +01001680 ret = intel_logical_ring_workarounds_emit(req);
Thomas Daniele7778be2014-12-02 12:50:48 +00001681 if (ret)
1682 return ret;
1683
Peter Antoine3bbaba02015-07-10 20:13:11 +03001684 ret = intel_rcs_context_init_mocs(req);
1685 /*
1686 * Failing to program the MOCS is non-fatal. The system will not
1687 * run at peak performance. So generate an error and carry on.
1688 */
1689 if (ret)
1690 DRM_ERROR("MOCS failed to program: expect performance issues.\n");
1691
Chris Wilsone40f9ee2016-08-02 22:50:36 +01001692 return i915_gem_render_state_init(req);
Thomas Daniele7778be2014-12-02 12:50:48 +00001693}
1694
Oscar Mateo73e4d072014-07-24 17:04:48 +01001695/**
1696 * intel_logical_ring_cleanup() - deallocate the Engine Command Streamer
Tvrtko Ursulin14bb2c12016-06-03 14:02:17 +01001697 * @engine: Engine Command Streamer.
Oscar Mateo73e4d072014-07-24 17:04:48 +01001698 */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001699void intel_logical_ring_cleanup(struct intel_engine_cs *engine)
Oscar Mateo454afeb2014-07-24 17:04:22 +01001700{
John Harrison6402c332014-10-31 12:00:26 +00001701 struct drm_i915_private *dev_priv;
Oscar Mateo9832b9d2014-07-24 17:04:30 +01001702
Tvrtko Ursulin117897f2016-03-16 11:00:40 +00001703 if (!intel_engine_initialized(engine))
Oscar Mateo48d82382014-07-24 17:04:23 +01001704 return;
1705
Tvrtko Ursulin27af5ee2016-04-04 12:11:56 +01001706 /*
1707 * Tasklet cannot be active at this point due to intel_mark_active/idle,
1708 * so this is just for documentation.
1709 */
1710 if (WARN_ON(test_bit(TASKLET_STATE_SCHED, &engine->irq_tasklet.state)))
1711 tasklet_kill(&engine->irq_tasklet);
1712
Chris Wilsonc0336662016-05-06 15:40:21 +01001713 dev_priv = engine->i915;
John Harrison6402c332014-10-31 12:00:26 +00001714
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001715 if (engine->buffer) {
1716 intel_logical_ring_stop(engine);
1717 WARN_ON((I915_READ_MODE(engine) & MODE_IDLE) == 0);
Dave Gordonb0366a52015-12-08 15:02:36 +00001718 }
Oscar Mateo48d82382014-07-24 17:04:23 +01001719
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001720 if (engine->cleanup)
1721 engine->cleanup(engine);
Oscar Mateo48d82382014-07-24 17:04:23 +01001722
Chris Wilson96a945a2016-08-03 13:19:16 +01001723 intel_engine_cleanup_common(engine);
Chris Wilson688e6c72016-07-01 17:23:15 +01001724
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001725 if (engine->status_page.obj) {
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001726 i915_gem_object_unpin_map(engine->status_page.obj);
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001727 engine->status_page.obj = NULL;
Oscar Mateo48d82382014-07-24 17:04:23 +01001728 }
Chris Wilson24f1d3c2016-04-28 09:56:53 +01001729 intel_lr_context_unpin(dev_priv->kernel_context, engine);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001730
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001731 engine->idle_lite_restore_wa = 0;
1732 engine->disable_lite_restore_wa = false;
1733 engine->ctx_desc_template = 0;
Tvrtko Ursulinca825802016-01-15 15:10:27 +00001734
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001735 lrc_destroy_wa_ctx_obj(engine);
Chris Wilsonc0336662016-05-06 15:40:21 +01001736 engine->i915 = NULL;
Oscar Mateo454afeb2014-07-24 17:04:22 +01001737}
1738
Chris Wilsonddd66c52016-08-02 22:50:31 +01001739void intel_execlists_enable_submission(struct drm_i915_private *dev_priv)
1740{
1741 struct intel_engine_cs *engine;
1742
1743 for_each_engine(engine, dev_priv)
Chris Wilsonf4ea6bd2016-08-02 22:50:32 +01001744 engine->submit_request = execlists_submit_request;
Chris Wilsonddd66c52016-08-02 22:50:31 +01001745}
1746
Tvrtko Ursulinc9cacf92016-01-12 17:32:34 +00001747static void
Chris Wilsone1382ef2016-05-06 15:40:20 +01001748logical_ring_default_vfuncs(struct intel_engine_cs *engine)
Tvrtko Ursulinc9cacf92016-01-12 17:32:34 +00001749{
1750 /* Default vfuncs which can be overridden by each engine. */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001751 engine->init_hw = gen8_init_common_ring;
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001752 engine->emit_flush = gen8_emit_flush;
Chris Wilsonddd66c52016-08-02 22:50:31 +01001753 engine->emit_request = gen8_emit_request;
Chris Wilsonf4ea6bd2016-08-02 22:50:32 +01001754 engine->submit_request = execlists_submit_request;
Chris Wilsonddd66c52016-08-02 22:50:31 +01001755
Chris Wilson31bb59c2016-07-01 17:23:27 +01001756 engine->irq_enable = gen8_logical_ring_enable_irq;
1757 engine->irq_disable = gen8_logical_ring_disable_irq;
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001758 engine->emit_bb_start = gen8_emit_bb_start;
Chris Wilson1b7744e2016-07-01 17:23:17 +01001759 if (IS_BXT_REVID(engine->i915, 0, BXT_REVID_A1))
Chris Wilsonc04e0f32016-04-09 10:57:54 +01001760 engine->irq_seqno_barrier = bxt_a_seqno_barrier;
Tvrtko Ursulinc9cacf92016-01-12 17:32:34 +00001761}
1762
Tvrtko Ursulind9f3af92016-01-12 17:32:35 +00001763static inline void
Dave Gordonc2c7f242016-07-13 16:03:35 +01001764logical_ring_default_irqs(struct intel_engine_cs *engine)
Tvrtko Ursulind9f3af92016-01-12 17:32:35 +00001765{
Dave Gordonc2c7f242016-07-13 16:03:35 +01001766 unsigned shift = engine->irq_shift;
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001767 engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT << shift;
1768 engine->irq_keep_mask = GT_CONTEXT_SWITCH_INTERRUPT << shift;
Tvrtko Ursulind9f3af92016-01-12 17:32:35 +00001769}
1770
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001771static int
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001772lrc_setup_hws(struct intel_engine_cs *engine,
1773 struct drm_i915_gem_object *dctx_obj)
1774{
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001775 void *hws;
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001776
1777 /* The HWSP is part of the default context object in LRC mode. */
1778 engine->status_page.gfx_addr = i915_gem_obj_ggtt_offset(dctx_obj) +
1779 LRC_PPHWSP_PN * PAGE_SIZE;
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001780 hws = i915_gem_object_pin_map(dctx_obj);
1781 if (IS_ERR(hws))
1782 return PTR_ERR(hws);
1783 engine->status_page.page_addr = hws + LRC_PPHWSP_PN * PAGE_SIZE;
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001784 engine->status_page.obj = dctx_obj;
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001785
1786 return 0;
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001787}
1788
Tvrtko Ursulinbb454382016-07-13 16:03:36 +01001789static void
1790logical_ring_setup(struct intel_engine_cs *engine)
1791{
1792 struct drm_i915_private *dev_priv = engine->i915;
1793 enum forcewake_domains fw_domains;
1794
Tvrtko Ursulin019bf272016-07-13 16:03:41 +01001795 intel_engine_setup_common(engine);
1796
Tvrtko Ursulinbb454382016-07-13 16:03:36 +01001797 /* Intentionally left blank. */
1798 engine->buffer = NULL;
1799
1800 fw_domains = intel_uncore_forcewake_for_reg(dev_priv,
1801 RING_ELSP(engine),
1802 FW_REG_WRITE);
1803
1804 fw_domains |= intel_uncore_forcewake_for_reg(dev_priv,
1805 RING_CONTEXT_STATUS_PTR(engine),
1806 FW_REG_READ | FW_REG_WRITE);
1807
1808 fw_domains |= intel_uncore_forcewake_for_reg(dev_priv,
1809 RING_CONTEXT_STATUS_BUF_BASE(engine),
1810 FW_REG_READ);
1811
1812 engine->fw_domains = fw_domains;
1813
Tvrtko Ursulinbb454382016-07-13 16:03:36 +01001814 tasklet_init(&engine->irq_tasklet,
1815 intel_lrc_irq_handler, (unsigned long)engine);
1816
1817 logical_ring_init_platform_invariants(engine);
1818 logical_ring_default_vfuncs(engine);
1819 logical_ring_default_irqs(engine);
Tvrtko Ursulinbb454382016-07-13 16:03:36 +01001820}
1821
Tvrtko Ursulina19d6ff2016-06-23 14:52:41 +01001822static int
1823logical_ring_init(struct intel_engine_cs *engine)
1824{
1825 struct i915_gem_context *dctx = engine->i915->kernel_context;
1826 int ret;
1827
Tvrtko Ursulin019bf272016-07-13 16:03:41 +01001828 ret = intel_engine_init_common(engine);
Tvrtko Ursulina19d6ff2016-06-23 14:52:41 +01001829 if (ret)
1830 goto error;
1831
1832 ret = execlists_context_deferred_alloc(dctx, engine);
1833 if (ret)
1834 goto error;
1835
1836 /* As this is the default context, always pin it */
1837 ret = intel_lr_context_pin(dctx, engine);
1838 if (ret) {
1839 DRM_ERROR("Failed to pin context for %s: %d\n",
1840 engine->name, ret);
1841 goto error;
1842 }
1843
1844 /* And setup the hardware status page. */
1845 ret = lrc_setup_hws(engine, dctx->engine[engine->id].state);
1846 if (ret) {
1847 DRM_ERROR("Failed to set up hws %s: %d\n", engine->name, ret);
1848 goto error;
1849 }
1850
1851 return 0;
1852
1853error:
1854 intel_logical_ring_cleanup(engine);
1855 return ret;
1856}
1857
Tvrtko Ursulin88d2ba22016-07-13 16:03:40 +01001858int logical_render_ring_init(struct intel_engine_cs *engine)
Tvrtko Ursulina19d6ff2016-06-23 14:52:41 +01001859{
1860 struct drm_i915_private *dev_priv = engine->i915;
1861 int ret;
1862
Tvrtko Ursulinbb454382016-07-13 16:03:36 +01001863 logical_ring_setup(engine);
1864
Tvrtko Ursulina19d6ff2016-06-23 14:52:41 +01001865 if (HAS_L3_DPF(dev_priv))
1866 engine->irq_keep_mask |= GT_RENDER_L3_PARITY_ERROR_INTERRUPT;
1867
1868 /* Override some for render ring. */
1869 if (INTEL_GEN(dev_priv) >= 9)
1870 engine->init_hw = gen9_init_render_ring;
1871 else
1872 engine->init_hw = gen8_init_render_ring;
1873 engine->init_context = gen8_init_rcs_context;
1874 engine->cleanup = intel_fini_pipe_control;
1875 engine->emit_flush = gen8_emit_flush_render;
1876 engine->emit_request = gen8_emit_request_render;
1877
Chris Wilson7d5ea802016-07-01 17:23:20 +01001878 ret = intel_init_pipe_control(engine, 4096);
Tvrtko Ursulina19d6ff2016-06-23 14:52:41 +01001879 if (ret)
1880 return ret;
1881
1882 ret = intel_init_workaround_bb(engine);
1883 if (ret) {
1884 /*
1885 * We continue even if we fail to initialize WA batch
1886 * because we only expect rare glitches but nothing
1887 * critical enough to prevent us from using the GPU.
1888 */
1889 DRM_ERROR("WA batch buffer initialization failed: %d\n",
1890 ret);
1891 }
1892
1893 ret = logical_ring_init(engine);
1894 if (ret) {
1895 lrc_destroy_wa_ctx_obj(engine);
1896 }
1897
1898 return ret;
1899}
1900
Tvrtko Ursulin88d2ba22016-07-13 16:03:40 +01001901int logical_xcs_ring_init(struct intel_engine_cs *engine)
Tvrtko Ursulinbb454382016-07-13 16:03:36 +01001902{
1903 logical_ring_setup(engine);
1904
1905 return logical_ring_init(engine);
1906}
1907
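/*
 * Build the value written to GEN8_R_PWR_CLK_STATE in the context image (see
 * the CTX_R_PWR_CLK_STATE entry in populate_lr_context() below): on parts
 * with slice/subslice/EU power gating, request full enablement using the
 * counts from the device info.
 */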
Jeff McGee0cea6502015-02-13 10:27:56 -06001908static u32
Chris Wilsonc0336662016-05-06 15:40:21 +01001909make_rpcs(struct drm_i915_private *dev_priv)
Jeff McGee0cea6502015-02-13 10:27:56 -06001910{
1911 u32 rpcs = 0;
1912
1913 /*
1914 * No explicit RPCS request is needed to ensure full
1915 * slice/subslice/EU enablement prior to Gen9.
1916 */
Chris Wilsonc0336662016-05-06 15:40:21 +01001917 if (INTEL_GEN(dev_priv) < 9)
Jeff McGee0cea6502015-02-13 10:27:56 -06001918 return 0;
1919
1920 /*
1921 * Starting in Gen9, render power gating can leave
1922 * slice/subslice/EU in a partially enabled state. We
1923 * must make an explicit request through RPCS for full
1924 * enablement.
1925 */
Chris Wilsonc0336662016-05-06 15:40:21 +01001926 if (INTEL_INFO(dev_priv)->has_slice_pg) {
Jeff McGee0cea6502015-02-13 10:27:56 -06001927 rpcs |= GEN8_RPCS_S_CNT_ENABLE;
Chris Wilsonc0336662016-05-06 15:40:21 +01001928 rpcs |= INTEL_INFO(dev_priv)->slice_total <<
Jeff McGee0cea6502015-02-13 10:27:56 -06001929 GEN8_RPCS_S_CNT_SHIFT;
1930 rpcs |= GEN8_RPCS_ENABLE;
1931 }
1932
Chris Wilsonc0336662016-05-06 15:40:21 +01001933 if (INTEL_INFO(dev_priv)->has_subslice_pg) {
Jeff McGee0cea6502015-02-13 10:27:56 -06001934 rpcs |= GEN8_RPCS_SS_CNT_ENABLE;
Chris Wilsonc0336662016-05-06 15:40:21 +01001935 rpcs |= INTEL_INFO(dev_priv)->subslice_per_slice <<
Jeff McGee0cea6502015-02-13 10:27:56 -06001936 GEN8_RPCS_SS_CNT_SHIFT;
1937 rpcs |= GEN8_RPCS_ENABLE;
1938 }
1939
Chris Wilsonc0336662016-05-06 15:40:21 +01001940 if (INTEL_INFO(dev_priv)->has_eu_pg) {
1941 rpcs |= INTEL_INFO(dev_priv)->eu_per_subslice <<
Jeff McGee0cea6502015-02-13 10:27:56 -06001942 GEN8_RPCS_EU_MIN_SHIFT;
Chris Wilsonc0336662016-05-06 15:40:21 +01001943 rpcs |= INTEL_INFO(dev_priv)->eu_per_subslice <<
Jeff McGee0cea6502015-02-13 10:27:56 -06001944 GEN8_RPCS_EU_MAX_SHIFT;
1945 rpcs |= GEN8_RPCS_ENABLE;
1946 }
1947
1948 return rpcs;
1949}
1950
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001951static u32 intel_lr_indirect_ctx_offset(struct intel_engine_cs *engine)
Michel Thierry71562912016-02-23 10:31:49 +00001952{
1953 u32 indirect_ctx_offset;
1954
Chris Wilsonc0336662016-05-06 15:40:21 +01001955 switch (INTEL_GEN(engine->i915)) {
Michel Thierry71562912016-02-23 10:31:49 +00001956 default:
Chris Wilsonc0336662016-05-06 15:40:21 +01001957 MISSING_CASE(INTEL_GEN(engine->i915));
Michel Thierry71562912016-02-23 10:31:49 +00001958 /* fall through */
1959 case 9:
1960 indirect_ctx_offset =
1961 GEN9_CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT;
1962 break;
1963 case 8:
1964 indirect_ctx_offset =
1965 GEN8_CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT;
1966 break;
1967 }
1968
1969 return indirect_ctx_offset;
1970}
1971
Oscar Mateo8670d6f2014-07-24 17:04:17 +01001972static int
Chris Wilsone2efd132016-05-24 14:53:34 +01001973populate_lr_context(struct i915_gem_context *ctx,
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001974 struct drm_i915_gem_object *ctx_obj,
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001975 struct intel_engine_cs *engine,
Chris Wilson7e37f882016-08-02 22:50:21 +01001976 struct intel_ring *ring)
Oscar Mateo8670d6f2014-07-24 17:04:17 +01001977{
Chris Wilsonc0336662016-05-06 15:40:21 +01001978 struct drm_i915_private *dev_priv = ctx->i915;
Daniel Vetterae6c4802014-08-06 15:04:53 +02001979 struct i915_hw_ppgtt *ppgtt = ctx->ppgtt;
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001980 void *vaddr;
1981 u32 *reg_state;
Oscar Mateo8670d6f2014-07-24 17:04:17 +01001982 int ret;
1983
Thomas Daniel2d965532014-08-19 10:13:36 +01001984 if (!ppgtt)
1985 ppgtt = dev_priv->mm.aliasing_ppgtt;
1986
Oscar Mateo8670d6f2014-07-24 17:04:17 +01001987 ret = i915_gem_object_set_to_cpu_domain(ctx_obj, true);
1988 if (ret) {
1989 DRM_DEBUG_DRIVER("Could not set to CPU domain\n");
1990 return ret;
1991 }
1992
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001993 vaddr = i915_gem_object_pin_map(ctx_obj);
1994 if (IS_ERR(vaddr)) {
1995 ret = PTR_ERR(vaddr);
1996 DRM_DEBUG_DRIVER("Could not map object pages! (%d)\n", ret);
Oscar Mateo8670d6f2014-07-24 17:04:17 +01001997 return ret;
1998 }
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001999 ctx_obj->dirty = true;
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002000
2001 /* The second page of the context object contains some fields which must
2002 * be set up prior to the first execution. */
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01002003 reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002004
2005 /* A context is actually a big batch buffer with several MI_LOAD_REGISTER_IMM
2006 * commands followed by (reg, value) pairs. The values we are setting here are
2007 * only for the first context restore: on a subsequent save, the GPU will
2008 * recreate this batchbuffer with new values (including all the missing
2009 * MI_LOAD_REGISTER_IMM commands that we are not initializing here). */
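/*
 * Abridged, illustrative shape of the image written below (values as
 * initialized here; the GPU rewrites them on each context save):
 *
 *   MI_LOAD_REGISTER_IMM(14)              ; 14 regs for RCS, 11 otherwise
 *     CONTEXT_CONTROL  = inhibit restore / inhibit syn ctx switch bits
 *     RING_HEAD        = 0
 *     RING_TAIL        = 0
 *     RING_START       = 0                ; written in execlists_update_context()
 *     RING_CTL         = (size - PAGE_SIZE) | RING_VALID
 *     BB / SBB head and state regs
 *     [RCS only: per-ctx and indirect-ctx batch pointers]
 *   MI_LOAD_REGISTER_IMM(9)
 *     CTX_TIMESTAMP, PDP3..PDP0 UDW/LDW
 *   [RCS only: MI_LOAD_REGISTER_IMM(1), R_PWR_CLK_STATE]
 */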
Ville Syrjälä0d925ea2015-11-04 23:20:11 +02002010 reg_state[CTX_LRI_HEADER_0] =
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002011 MI_LOAD_REGISTER_IMM(engine->id == RCS ? 14 : 11) | MI_LRI_FORCE_POSTED;
2012 ASSIGN_CTX_REG(reg_state, CTX_CONTEXT_CONTROL,
2013 RING_CONTEXT_CONTROL(engine),
Ville Syrjälä0d925ea2015-11-04 23:20:11 +02002014 _MASKED_BIT_ENABLE(CTX_CTRL_INHIBIT_SYN_CTX_SWITCH |
2015 CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT |
Chris Wilsonc0336662016-05-06 15:40:21 +01002016 (HAS_RESOURCE_STREAMER(dev_priv) ?
Michel Thierry99cf8ea2016-02-25 09:48:58 +00002017 CTX_CTRL_RS_CTX_ENABLE : 0)));
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002018 ASSIGN_CTX_REG(reg_state, CTX_RING_HEAD, RING_HEAD(engine->mmio_base),
2019 0);
2020 ASSIGN_CTX_REG(reg_state, CTX_RING_TAIL, RING_TAIL(engine->mmio_base),
2021 0);
Thomas Daniel7ba717c2014-11-13 10:28:56 +00002022 /* Ring buffer start address is not known until the buffer is pinned.
2023 * It is written to the context image in execlists_update_context()
2024 */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002025 ASSIGN_CTX_REG(reg_state, CTX_RING_BUFFER_START,
2026 RING_START(engine->mmio_base), 0);
2027 ASSIGN_CTX_REG(reg_state, CTX_RING_BUFFER_CONTROL,
2028 RING_CTL(engine->mmio_base),
Chris Wilson7e37f882016-08-02 22:50:21 +01002029 ((ring->size - PAGE_SIZE) & RING_NR_PAGES) | RING_VALID);
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002030 ASSIGN_CTX_REG(reg_state, CTX_BB_HEAD_U,
2031 RING_BBADDR_UDW(engine->mmio_base), 0);
2032 ASSIGN_CTX_REG(reg_state, CTX_BB_HEAD_L,
2033 RING_BBADDR(engine->mmio_base), 0);
2034 ASSIGN_CTX_REG(reg_state, CTX_BB_STATE,
2035 RING_BBSTATE(engine->mmio_base),
Ville Syrjälä0d925ea2015-11-04 23:20:11 +02002036 RING_BB_PPGTT);
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002037 ASSIGN_CTX_REG(reg_state, CTX_SECOND_BB_HEAD_U,
2038 RING_SBBADDR_UDW(engine->mmio_base), 0);
2039 ASSIGN_CTX_REG(reg_state, CTX_SECOND_BB_HEAD_L,
2040 RING_SBBADDR(engine->mmio_base), 0);
2041 ASSIGN_CTX_REG(reg_state, CTX_SECOND_BB_STATE,
2042 RING_SBBSTATE(engine->mmio_base), 0);
2043 if (engine->id == RCS) {
2044 ASSIGN_CTX_REG(reg_state, CTX_BB_PER_CTX_PTR,
2045 RING_BB_PER_CTX_PTR(engine->mmio_base), 0);
2046 ASSIGN_CTX_REG(reg_state, CTX_RCS_INDIRECT_CTX,
2047 RING_INDIRECT_CTX(engine->mmio_base), 0);
2048 ASSIGN_CTX_REG(reg_state, CTX_RCS_INDIRECT_CTX_OFFSET,
2049 RING_INDIRECT_CTX_OFFSET(engine->mmio_base), 0);
2050 if (engine->wa_ctx.obj) {
2051 struct i915_ctx_workarounds *wa_ctx = &engine->wa_ctx;
Arun Siluvery17ee9502015-06-19 19:07:01 +01002052 uint32_t ggtt_offset = i915_gem_obj_ggtt_offset(wa_ctx->obj);
2053
2054 reg_state[CTX_RCS_INDIRECT_CTX+1] =
2055 (ggtt_offset + wa_ctx->indirect_ctx.offset * sizeof(uint32_t)) |
2056 (wa_ctx->indirect_ctx.size / CACHELINE_DWORDS);
2057
2058 reg_state[CTX_RCS_INDIRECT_CTX_OFFSET+1] =
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002059 intel_lr_indirect_ctx_offset(engine) << 6;
Arun Siluvery17ee9502015-06-19 19:07:01 +01002060
2061 reg_state[CTX_BB_PER_CTX_PTR+1] =
2062 (ggtt_offset + wa_ctx->per_ctx.offset * sizeof(uint32_t)) |
2063 0x01;
2064 }
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002065 }
Ville Syrjälä0d925ea2015-11-04 23:20:11 +02002066 reg_state[CTX_LRI_HEADER_1] = MI_LOAD_REGISTER_IMM(9) | MI_LRI_FORCE_POSTED;
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002067 ASSIGN_CTX_REG(reg_state, CTX_CTX_TIMESTAMP,
2068 RING_CTX_TIMESTAMP(engine->mmio_base), 0);
Ville Syrjälä0d925ea2015-11-04 23:20:11 +02002069 /* PDP values will be assigned later if needed */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002070 ASSIGN_CTX_REG(reg_state, CTX_PDP3_UDW, GEN8_RING_PDP_UDW(engine, 3),
2071 0);
2072 ASSIGN_CTX_REG(reg_state, CTX_PDP3_LDW, GEN8_RING_PDP_LDW(engine, 3),
2073 0);
2074 ASSIGN_CTX_REG(reg_state, CTX_PDP2_UDW, GEN8_RING_PDP_UDW(engine, 2),
2075 0);
2076 ASSIGN_CTX_REG(reg_state, CTX_PDP2_LDW, GEN8_RING_PDP_LDW(engine, 2),
2077 0);
2078 ASSIGN_CTX_REG(reg_state, CTX_PDP1_UDW, GEN8_RING_PDP_UDW(engine, 1),
2079 0);
2080 ASSIGN_CTX_REG(reg_state, CTX_PDP1_LDW, GEN8_RING_PDP_LDW(engine, 1),
2081 0);
2082 ASSIGN_CTX_REG(reg_state, CTX_PDP0_UDW, GEN8_RING_PDP_UDW(engine, 0),
2083 0);
2084 ASSIGN_CTX_REG(reg_state, CTX_PDP0_LDW, GEN8_RING_PDP_LDW(engine, 0),
2085 0);
Michel Thierryd7b26332015-04-08 12:13:34 +01002086
Michel Thierry2dba3232015-07-30 11:06:23 +01002087 if (USES_FULL_48BIT_PPGTT(ppgtt->base.dev)) {
2088 /* 64b PPGTT (48bit canonical)
2089 * PDP0_DESCRIPTOR contains the base address of the PML4 and
2090 * other PDP Descriptors are ignored.
2091 */
2092 ASSIGN_CTX_PML4(ppgtt, reg_state);
2093 } else {
2094 /* 32b PPGTT
2095 * PDP*_DESCRIPTOR contains the base address of space supported.
2096 * With dynamic page allocation, PDPs may not be allocated at
2097 * this point. Point the unallocated PDPs to the scratch page
2098 */
Tvrtko Ursulinc6a2ac72016-02-26 16:58:32 +00002099 execlists_update_context_pdps(ppgtt, reg_state);
Michel Thierry2dba3232015-07-30 11:06:23 +01002100 }
2101
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002102 if (engine->id == RCS) {
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002103 reg_state[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
Ville Syrjälä0d925ea2015-11-04 23:20:11 +02002104 ASSIGN_CTX_REG(reg_state, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
Chris Wilsonc0336662016-05-06 15:40:21 +01002105 make_rpcs(dev_priv));
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002106 }
2107
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01002108 i915_gem_object_unpin_map(ctx_obj);
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002109
2110 return 0;
2111}

/**
 * intel_lr_context_size() - return the size of the context for an engine
 * @engine: which engine to find the context size for
 *
 * Each engine may require a different amount of space for a context image,
 * so when allocating (or copying) an image, this function can be used to
 * find the right size for the specific engine.
 *
 * Return: size (in bytes) of an engine-specific context image
 *
 * Note: this size includes the HWSP, which is part of the context image
 * in LRC mode, but does not include the "shared data page" used with
 * GuC submission. The caller should account for this if using the GuC.
 */
uint32_t intel_lr_context_size(struct intel_engine_cs *engine)
{
	int ret = 0;

	WARN_ON(INTEL_GEN(engine->i915) < 8);

	switch (engine->id) {
	case RCS:
		if (INTEL_GEN(engine->i915) >= 9)
			ret = GEN9_LR_CONTEXT_RENDER_SIZE;
		else
			ret = GEN8_LR_CONTEXT_RENDER_SIZE;
		break;
	case VCS:
	case BCS:
	case VECS:
	case VCS2:
		ret = GEN8_LR_CONTEXT_OTHER_SIZE;
		break;
	}

	return ret;
}
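
/*
 * A minimal usage sketch of intel_lr_context_size(): sizing the backing
 * object for a context image the way execlists_context_deferred_alloc()
 * below does, i.e. rounding the engine-specific size up to a whole page and
 * then reserving the extra space shared with the GuC
 * (PAGE_SIZE * LRC_PPHWSP_PN). The helper name
 * "example_lr_context_obj_size" is hypothetical and not part of i915.
 *
 *	static u32 example_lr_context_obj_size(struct intel_engine_cs *engine)
 *	{
 *		u32 size = round_up(intel_lr_context_size(engine), 4096);
 *
 *		return size + PAGE_SIZE * LRC_PPHWSP_PN;
 *	}
 */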

static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
					    struct intel_engine_cs *engine)
{
	struct drm_i915_gem_object *ctx_obj;
	struct intel_context *ce = &ctx->engine[engine->id];
	uint32_t context_size;
	struct intel_ring *ring;
	int ret;

	WARN_ON(ce->state);

	context_size = round_up(intel_lr_context_size(engine), 4096);

	/* One extra page for the data shared between the driver and the GuC */
	context_size += PAGE_SIZE * LRC_PPHWSP_PN;

	ctx_obj = i915_gem_object_create(&ctx->i915->drm, context_size);
	if (IS_ERR(ctx_obj)) {
		DRM_DEBUG_DRIVER("Alloc LRC backing obj failed.\n");
		return PTR_ERR(ctx_obj);
	}

	ring = intel_engine_create_ring(engine, ctx->ring_size);
	if (IS_ERR(ring)) {
		ret = PTR_ERR(ring);
		goto error_deref_obj;
	}

	ret = populate_lr_context(ctx, ctx_obj, engine, ring);
	if (ret) {
		DRM_DEBUG_DRIVER("Failed to populate LRC: %d\n", ret);
		goto error_ring_free;
	}

	ce->ring = ring;
	ce->state = ctx_obj;
	ce->initialised = engine->init_context == NULL;

	return 0;

error_ring_free:
	intel_ring_free(ring);
error_deref_obj:
	i915_gem_object_put(ctx_obj);
	ce->ring = NULL;
	ce->state = NULL;
	return ret;
}
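
/*
 * Allocation of the context image and ring is deferred until a context is
 * first needed on an engine, so callers are expected to follow a pattern
 * roughly like the sketch below before relying on ce->state or ce->ring.
 * This is illustrative only; "example_prepare_context" is a made-up name,
 * not the actual call site in the submission path.
 *
 *	static int example_prepare_context(struct i915_gem_context *ctx,
 *					   struct intel_engine_cs *engine)
 *	{
 *		struct intel_context *ce = &ctx->engine[engine->id];
 *
 *		if (!ce->state)
 *			return execlists_context_deferred_alloc(ctx, engine);
 *
 *		return 0;
 *	}
 */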

void intel_lr_context_reset(struct drm_i915_private *dev_priv,
			    struct i915_gem_context *ctx)
{
	struct intel_engine_cs *engine;

	for_each_engine(engine, dev_priv) {
		struct intel_context *ce = &ctx->engine[engine->id];
		struct drm_i915_gem_object *ctx_obj = ce->state;
		void *vaddr;
		uint32_t *reg_state;

		if (!ctx_obj)
			continue;

		vaddr = i915_gem_object_pin_map(ctx_obj);
		if (WARN_ON(IS_ERR(vaddr)))
			continue;

		reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
		ctx_obj->dirty = true;

		/* Zero the saved ring HEAD/TAIL in the context image so the
		 * engine restarts this context's ring from the beginning...
		 */
		reg_state[CTX_RING_HEAD+1] = 0;
		reg_state[CTX_RING_TAIL+1] = 0;

		i915_gem_object_unpin_map(ctx_obj);

		/* ...and keep the driver's software bookkeeping in sync. */
		ce->ring->head = 0;
		ce->ring->tail = 0;
	}
}