/*
 * Copyright © 2014 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *    Ben Widawsky <ben@bwidawsk.net>
 *    Michel Thierry <michel.thierry@intel.com>
 *    Thomas Daniel <thomas.daniel@intel.com>
 *    Oscar Mateo <oscar.mateo@intel.com>
 *
 */

/**
 * DOC: Logical Rings, Logical Ring Contexts and Execlists
 *
 * Motivation:
 * GEN8 brings an expansion of the HW contexts: "Logical Ring Contexts".
 * These expanded contexts enable a number of new abilities, especially
 * "Execlists" (also implemented in this file).
 *
 * One of the main differences with the legacy HW contexts is that logical
 * ring contexts incorporate many more things in the context's state, like
 * PDPs or ringbuffer control registers:
 *
 * The reason why PDPs are included in the context is straightforward: as
 * PPGTTs (per-process GTTs) are actually per-context, having the PDPs
 * contained there means you don't need to do a ppgtt->switch_mm yourself;
 * instead, the GPU will do it for you on the context switch.
 *
 * But, what about the ringbuffer control registers (head, tail, etc..)?
 * Shouldn't we just need a set of those per engine command streamer? This is
 * where the name "Logical Rings" starts to make sense: by virtualizing the
 * rings, the engine cs shifts to a new "ring buffer" with every context
 * switch. When you want to submit a workload to the GPU you: A) choose your
 * context, B) find its appropriate virtualized ring, C) write commands to it
 * and then, finally, D) tell the GPU to switch to that context.
 *
 * Instead of the legacy MI_SET_CONTEXT, the way you tell the GPU to switch
 * to a context is via a context execution list, ergo "Execlists".
 *
 * LRC implementation:
 * Regarding the creation of contexts, we have:
 *
 * - One global default context.
 * - One local default context for each opened fd.
 * - One local extra context for each context create ioctl call.
 *
 * Now that ringbuffers belong per-context (and not per-engine, like before)
 * and that contexts are uniquely tied to a given engine (and not reusable,
 * like before) we need:
 *
 * - One ringbuffer per-engine inside each context.
 * - One backing object per-engine inside each context.
 *
 * The global default context starts its life with these new objects fully
 * allocated and populated. The local default context for each opened fd is
 * more complex, because we don't know at creation time which engine is going
 * to use them. To handle this, we have implemented a deferred creation of LR
 * contexts:
 *
 * The local context starts its life as a hollow or blank holder, that only
 * gets populated for a given engine once we receive an execbuffer. If later
 * on we receive another execbuffer ioctl for the same context but a different
 * engine, we allocate/populate a new ringbuffer and context backing object and
 * so on.
 *
 * Finally, regarding local contexts created using the ioctl call: as they are
 * only allowed with the render ring, we can allocate & populate them right
 * away (no need to defer anything, at least for now).
 *
 * Execlists implementation:
 * Execlists are the new method by which, on gen8+ hardware, workloads are
 * submitted for execution (as opposed to the legacy, ringbuffer-based, method).
 * This method works as follows:
 *
 * When a request is committed, its commands (the BB start and any leading or
 * trailing commands, like the seqno breadcrumbs) are placed in the ringbuffer
 * for the appropriate context. The tail pointer in the hardware context is not
 * updated at this time, but instead, kept by the driver in the ringbuffer
 * structure. A structure representing this request is added to a request queue
 * for the appropriate engine: this structure contains a copy of the context's
 * tail after the request was written to the ring buffer and a pointer to the
 * context itself.
 *
 * If the engine's request queue was empty before the request was added, the
 * queue is processed immediately. Otherwise the queue will be processed during
 * a context switch interrupt. In any case, elements on the queue will get sent
 * (in pairs) to the GPU's ExecLists Submit Port (ELSP, for short) with a
 * globally unique 20-bit submission ID.
 *
 * When execution of a request completes, the GPU updates the context status
 * buffer with a context complete event and generates a context switch interrupt.
 * During the interrupt handling, the driver examines the events in the buffer:
 * for each context complete event, if the announced ID matches that on the head
 * of the request queue, then that request is retired and removed from the queue.
 *
 * After processing, if any requests were retired and the queue is not empty
 * then a new execution list can be submitted. The two requests at the front of
 * the queue are next to be submitted but since a context may not occur twice in
 * an execution list, if subsequent requests have the same ID as the first then
 * the two requests must be combined. This is done simply by discarding requests
 * at the head of the queue until either only one request is left (in which case
 * we use a NULL second context) or the first two requests have unique IDs.
 *
 * By always executing the first two requests in the queue the driver ensures
 * that the GPU is kept as busy as possible. In the case where a single context
 * completes but a second context is still executing, the request for this second
 * context will be at the head of the queue when we remove the first one. This
 * request will then be resubmitted along with a new request for a different context,
 * which will cause the hardware to continue executing the second request and queue
 * the new request (the GPU detects the condition of a context getting preempted
 * with the same context and optimizes the context switch flow by not doing
 * preemption, but just sampling the new tail pointer).
 *
 */
#include <linux/interrupt.h>

#include <drm/drmP.h>
#include <drm/i915_drm.h>
#include "i915_drv.h"
#include "intel_mocs.h"

#define GEN9_LR_CONTEXT_RENDER_SIZE (22 * PAGE_SIZE)
#define GEN8_LR_CONTEXT_RENDER_SIZE (20 * PAGE_SIZE)
#define GEN8_LR_CONTEXT_OTHER_SIZE (2 * PAGE_SIZE)

#define RING_EXECLIST_QFULL		(1 << 0x2)
#define RING_EXECLIST1_VALID		(1 << 0x3)
#define RING_EXECLIST0_VALID		(1 << 0x4)
#define RING_EXECLIST_ACTIVE_STATUS	(3 << 0xE)
#define RING_EXECLIST1_ACTIVE		(1 << 0x11)
#define RING_EXECLIST0_ACTIVE		(1 << 0x12)

#define GEN8_CTX_STATUS_IDLE_ACTIVE	(1 << 0)
#define GEN8_CTX_STATUS_PREEMPTED	(1 << 1)
#define GEN8_CTX_STATUS_ELEMENT_SWITCH	(1 << 2)
#define GEN8_CTX_STATUS_ACTIVE_IDLE	(1 << 3)
#define GEN8_CTX_STATUS_COMPLETE	(1 << 4)
#define GEN8_CTX_STATUS_LITE_RESTORE	(1 << 15)

#define CTX_LRI_HEADER_0		0x01
#define CTX_CONTEXT_CONTROL		0x02
#define CTX_RING_HEAD			0x04
#define CTX_RING_TAIL			0x06
#define CTX_RING_BUFFER_START		0x08
#define CTX_RING_BUFFER_CONTROL		0x0a
#define CTX_BB_HEAD_U			0x0c
#define CTX_BB_HEAD_L			0x0e
#define CTX_BB_STATE			0x10
#define CTX_SECOND_BB_HEAD_U		0x12
#define CTX_SECOND_BB_HEAD_L		0x14
#define CTX_SECOND_BB_STATE		0x16
#define CTX_BB_PER_CTX_PTR		0x18
#define CTX_RCS_INDIRECT_CTX		0x1a
#define CTX_RCS_INDIRECT_CTX_OFFSET	0x1c
#define CTX_LRI_HEADER_1		0x21
#define CTX_CTX_TIMESTAMP		0x22
#define CTX_PDP3_UDW			0x24
#define CTX_PDP3_LDW			0x26
#define CTX_PDP2_UDW			0x28
#define CTX_PDP2_LDW			0x2a
#define CTX_PDP1_UDW			0x2c
#define CTX_PDP1_LDW			0x2e
#define CTX_PDP0_UDW			0x30
#define CTX_PDP0_LDW			0x32
#define CTX_LRI_HEADER_2		0x41
#define CTX_R_PWR_CLK_STATE		0x42
#define CTX_GPGPU_CSR_BASE_ADDRESS	0x44

#define GEN8_CTX_VALID (1<<0)
#define GEN8_CTX_FORCE_PD_RESTORE (1<<1)
#define GEN8_CTX_FORCE_RESTORE (1<<2)
#define GEN8_CTX_L3LLC_COHERENT (1<<5)
#define GEN8_CTX_PRIVILEGE (1<<8)

#define ASSIGN_CTX_REG(reg_state, pos, reg, val) do { \
	(reg_state)[(pos)+0] = i915_mmio_reg_offset(reg); \
	(reg_state)[(pos)+1] = (val); \
} while (0)

#define ASSIGN_CTX_PDP(ppgtt, reg_state, n) do { \
	const u64 _addr = i915_page_dir_dma_addr((ppgtt), (n)); \
	reg_state[CTX_PDP ## n ## _UDW+1] = upper_32_bits(_addr); \
	reg_state[CTX_PDP ## n ## _LDW+1] = lower_32_bits(_addr); \
} while (0)

#define ASSIGN_CTX_PML4(ppgtt, reg_state) do { \
	reg_state[CTX_PDP0_UDW + 1] = upper_32_bits(px_dma(&ppgtt->pml4)); \
	reg_state[CTX_PDP0_LDW + 1] = lower_32_bits(px_dma(&ppgtt->pml4)); \
} while (0)

enum {
	FAULT_AND_HANG = 0,
	FAULT_AND_HALT, /* Debug only */
	FAULT_AND_STREAM,
	FAULT_AND_CONTINUE /* Unsupported */
};
#define GEN8_CTX_ID_SHIFT 32
#define GEN8_CTX_ID_WIDTH 21
#define GEN8_CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT	0x17
#define GEN9_CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT	0x26

/* Typical size of the average request (2 pipecontrols and a MI_BB) */
#define EXECLISTS_REQUEST_SIZE 64 /* bytes */

static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
					    struct intel_engine_cs *engine);
static int intel_lr_context_pin(struct i915_gem_context *ctx,
				struct intel_engine_cs *engine);

/**
 * intel_sanitize_enable_execlists() - sanitize i915.enable_execlists
 * @dev_priv: i915 device private
 * @enable_execlists: value of i915.enable_execlists module parameter.
 *
 * Only certain platforms support Execlists (the prerequisites being
 * support for Logical Ring Contexts and Aliasing PPGTT or better).
 *
 * Return: 1 if Execlists is supported and has to be enabled.
 */
int intel_sanitize_enable_execlists(struct drm_i915_private *dev_priv, int enable_execlists)
{
	/* On platforms with execlist available, vGPU will only
	 * support execlist mode, no ring buffer mode.
	 */
	if (HAS_LOGICAL_RING_CONTEXTS(dev_priv) && intel_vgpu_active(dev_priv))
		return 1;

	if (INTEL_GEN(dev_priv) >= 9)
		return 1;

	if (enable_execlists == 0)
		return 0;

	if (HAS_LOGICAL_RING_CONTEXTS(dev_priv) &&
	    USES_PPGTT(dev_priv) &&
	    i915.use_mmio_flip >= 0)
		return 1;

	return 0;
}

static void
logical_ring_init_platform_invariants(struct intel_engine_cs *engine)
{
	struct drm_i915_private *dev_priv = engine->i915;

	if (IS_GEN8(dev_priv) || IS_GEN9(dev_priv))
		engine->idle_lite_restore_wa = ~0;

	engine->disable_lite_restore_wa = (IS_SKL_REVID(dev_priv, 0, SKL_REVID_B0) ||
					   IS_BXT_REVID(dev_priv, 0, BXT_REVID_A1)) &&
					  (engine->id == VCS || engine->id == VCS2);

	engine->ctx_desc_template = GEN8_CTX_VALID;
	if (IS_GEN8(dev_priv))
		engine->ctx_desc_template |= GEN8_CTX_L3LLC_COHERENT;
	engine->ctx_desc_template |= GEN8_CTX_PRIVILEGE;

	/* TODO: WaDisableLiteRestore when we start using semaphore
	 * signalling between Command Streamers */
	/* ring->ctx_desc_template |= GEN8_CTX_FORCE_RESTORE; */

	/* WaEnableForceRestoreInCtxtDescForVCS:skl */
	/* WaEnableForceRestoreInCtxtDescForVCS:bxt */
	if (engine->disable_lite_restore_wa)
		engine->ctx_desc_template |= GEN8_CTX_FORCE_RESTORE;
}

/**
 * intel_lr_context_descriptor_update() - calculate & cache the descriptor
 *					  for a pinned context
 * @ctx: Context to work on
 * @engine: Engine the descriptor will be used with
 *
 * The context descriptor encodes various attributes of a context,
 * including its GTT address and some flags. Because it's fairly
 * expensive to calculate, we'll just do it once and cache the result,
 * which remains valid until the context is unpinned.
 *
 * This is what a descriptor looks like, from LSB to MSB::
 *
 *	bits  0-11: flags, GEN8_CTX_* (cached in ctx_desc_template)
 *	bits 12-31: LRCA, GTT address of (the HWSP of) this context
 *	bits 32-52: ctx ID, a globally unique tag
 *	bits 53-54: mbz, reserved for use by hardware
 *	bits 55-63: group ID, currently unused and set to 0
 */
static void
intel_lr_context_descriptor_update(struct i915_gem_context *ctx,
				   struct intel_engine_cs *engine)
{
	struct intel_context *ce = &ctx->engine[engine->id];
	u64 desc;

	BUILD_BUG_ON(MAX_CONTEXT_HW_ID > (1<<GEN8_CTX_ID_WIDTH));

	desc = ctx->desc_template;			/* bits  3-4  */
	desc |= engine->ctx_desc_template;		/* bits  0-11 */
	desc |= ce->lrc_vma->node.start + LRC_PPHWSP_PN * PAGE_SIZE;
							/* bits 12-31 */
	desc |= (u64)ctx->hw_id << GEN8_CTX_ID_SHIFT;	/* bits 32-52 */

	ce->lrc_desc = desc;
}

uint64_t intel_lr_context_descriptor(struct i915_gem_context *ctx,
				     struct intel_engine_cs *engine)
{
	return ctx->engine[engine->id].lrc_desc;
}

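/*
 * Write a pair of context descriptors to the ExecList Submit Port. The
 * hardware expects element 1 (upper then lower dword) followed by element 0;
 * the write of the lower dword of element 0 triggers the actual submission.
 */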
static void execlists_elsp_write(struct drm_i915_gem_request *rq0,
				 struct drm_i915_gem_request *rq1)
{
	struct intel_engine_cs *engine = rq0->engine;
	struct drm_i915_private *dev_priv = rq0->i915;
	uint64_t desc[2];

	if (rq1) {
		desc[1] = intel_lr_context_descriptor(rq1->ctx, rq1->engine);
		rq1->elsp_submitted++;
	} else {
		desc[1] = 0;
	}

	desc[0] = intel_lr_context_descriptor(rq0->ctx, rq0->engine);
	rq0->elsp_submitted++;

	/* You must always write both descriptors in the order below. */
	I915_WRITE_FW(RING_ELSP(engine), upper_32_bits(desc[1]));
	I915_WRITE_FW(RING_ELSP(engine), lower_32_bits(desc[1]));

	I915_WRITE_FW(RING_ELSP(engine), upper_32_bits(desc[0]));
	/* The context is automatically loaded after the following */
	I915_WRITE_FW(RING_ELSP(engine), lower_32_bits(desc[0]));

	/* ELSP is a wo register, use another nearby reg for posting */
	POSTING_READ_FW(RING_EXECLIST_STATUS_LO(engine));
}

static void
execlists_update_context_pdps(struct i915_hw_ppgtt *ppgtt, u32 *reg_state)
{
	ASSIGN_CTX_PDP(ppgtt, reg_state, 3);
	ASSIGN_CTX_PDP(ppgtt, reg_state, 2);
	ASSIGN_CTX_PDP(ppgtt, reg_state, 1);
	ASSIGN_CTX_PDP(ppgtt, reg_state, 0);
}

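/*
 * Refresh the context image prior to submission: write the new ring tail
 * and, for true 32b PPGTT, the PDP registers, so the hardware picks them
 * up on the next context restore.
 */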
static void execlists_update_context(struct drm_i915_gem_request *rq)
{
	struct intel_engine_cs *engine = rq->engine;
	struct i915_hw_ppgtt *ppgtt = rq->ctx->ppgtt;
	uint32_t *reg_state = rq->ctx->engine[engine->id].lrc_reg_state;

	reg_state[CTX_RING_TAIL+1] = intel_ring_offset(rq->ring, rq->tail);

	/* True 32b PPGTT with dynamic page allocation: update PDP
	 * registers and point the unallocated PDPs to scratch page.
	 * PML4 is allocated during ppgtt init, so this is not needed
	 * in 48-bit mode.
	 */
	if (ppgtt && !USES_FULL_48BIT_PPGTT(ppgtt->base.dev))
		execlists_update_context_pdps(ppgtt, reg_state);
}

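/*
 * Update the context image(s) for the request(s) and write the pair to the
 * ELSP. The ELSP writes are done with forcewake held and under the uncore
 * lock.
 */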
static void execlists_elsp_submit_contexts(struct drm_i915_gem_request *rq0,
					   struct drm_i915_gem_request *rq1)
{
	struct drm_i915_private *dev_priv = rq0->i915;
	unsigned int fw_domains = rq0->engine->fw_domains;

	execlists_update_context(rq0);

	if (rq1)
		execlists_update_context(rq1);

	spin_lock_irq(&dev_priv->uncore.lock);
	intel_uncore_forcewake_get__locked(dev_priv, fw_domains);

	execlists_elsp_write(rq0, rq1);

	intel_uncore_forcewake_put__locked(dev_priv, fw_domains);
	spin_unlock_irq(&dev_priv->uncore.lock);
}

static inline void execlists_context_status_change(
		struct drm_i915_gem_request *rq,
		unsigned long status)
{
	/*
	 * Only used when GVT-g is enabled now. When GVT-g is disabled,
	 * the compiler should eliminate this function as dead-code.
	 */
	if (!IS_ENABLED(CONFIG_DRM_I915_GVT))
		return;

	atomic_notifier_call_chain(&rq->ctx->status_notifier, status, rq);
}

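/*
 * Take the first two requests from the head of the execlist queue and
 * submit them to the ELSP. Consecutive requests for the same context are
 * coalesced into one submission, since a context may not appear twice in
 * an execution list.
 */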
static void execlists_unqueue(struct intel_engine_cs *engine)
{
	struct drm_i915_gem_request *req0 = NULL, *req1 = NULL;
	struct drm_i915_gem_request *cursor, *tmp;

	assert_spin_locked(&engine->execlist_lock);

	/*
	 * If irqs are not active generate a warning as batches that finish
	 * without the irqs may get lost and a GPU Hang may occur.
	 */
	WARN_ON(!intel_irqs_enabled(engine->i915));

	/* Try to read in pairs */
	list_for_each_entry_safe(cursor, tmp, &engine->execlist_queue,
				 execlist_link) {
		if (!req0) {
			req0 = cursor;
		} else if (req0->ctx == cursor->ctx) {
			/* Same ctx: ignore first request, as second request
			 * will update tail past first request's workload */
			cursor->elsp_submitted = req0->elsp_submitted;
			list_del(&req0->execlist_link);
			i915_gem_request_put(req0);
			req0 = cursor;
		} else {
			if (IS_ENABLED(CONFIG_DRM_I915_GVT)) {
				/*
				 * req0 (after merged) ctx requires single
				 * submission, stop picking
				 */
				if (req0->ctx->execlists_force_single_submission)
					break;
				/*
				 * req0 ctx doesn't require single submission,
				 * but next req ctx requires, stop picking
				 */
				if (cursor->ctx->execlists_force_single_submission)
					break;
			}
			req1 = cursor;
			WARN_ON(req1->elsp_submitted);
			break;
		}
	}

	if (unlikely(!req0))
		return;

	execlists_context_status_change(req0, INTEL_CONTEXT_SCHEDULE_IN);

	if (req1)
		execlists_context_status_change(req1,
						INTEL_CONTEXT_SCHEDULE_IN);

	if (req0->elsp_submitted & engine->idle_lite_restore_wa) {
		/*
		 * WaIdleLiteRestore: make sure we never cause a lite restore
		 * with HEAD==TAIL.
		 *
		 * Apply the wa NOOPs to prevent ring:HEAD == req:TAIL as we
		 * resubmit the request. See gen8_emit_request() for where we
		 * prepare the padding after the end of the request.
		 */
		req0->tail += 8;
		req0->tail &= req0->ring->size - 1;
	}

	execlists_elsp_submit_contexts(req0, req1);
}

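/*
 * Retire the request at the head of the execlist queue if its context
 * matches the completed ctx_id and all of its ELSP submissions have
 * completed. Returns the number of requests removed (0 or 1).
 */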
static unsigned int
execlists_check_remove_request(struct intel_engine_cs *engine, u32 ctx_id)
{
	struct drm_i915_gem_request *head_req;

	assert_spin_locked(&engine->execlist_lock);

	head_req = list_first_entry_or_null(&engine->execlist_queue,
					    struct drm_i915_gem_request,
					    execlist_link);

	if (WARN_ON(!head_req || (head_req->ctx_hw_id != ctx_id)))
		return 0;

	WARN(head_req->elsp_submitted == 0, "Never submitted head request\n");

	if (--head_req->elsp_submitted > 0)
		return 0;

	execlists_context_status_change(head_req, INTEL_CONTEXT_SCHEDULE_OUT);

	list_del(&head_req->execlist_link);
	i915_gem_request_put(head_req);

	return 1;
}

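/*
 * Read one Context Status Buffer entry: returns 0 for an idle-to-active
 * event, otherwise the status dword, with the submission ID stored in
 * @context_id.
 */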
static u32
get_context_status(struct intel_engine_cs *engine, unsigned int read_pointer,
		   u32 *context_id)
{
	struct drm_i915_private *dev_priv = engine->i915;
	u32 status;

	read_pointer %= GEN8_CSB_ENTRIES;

	status = I915_READ_FW(RING_CONTEXT_STATUS_BUF_LO(engine, read_pointer));

	if (status & GEN8_CTX_STATUS_IDLE_ACTIVE)
		return 0;

	*context_id = I915_READ_FW(RING_CONTEXT_STATUS_BUF_HI(engine,
							      read_pointer));

	return status;
}

/*
 * Check the unread Context Status Buffers and manage the submission of new
 * contexts to the ELSP accordingly.
 */
static void intel_lrc_irq_handler(unsigned long data)
{
	struct intel_engine_cs *engine = (struct intel_engine_cs *)data;
	struct drm_i915_private *dev_priv = engine->i915;
	u32 status_pointer;
	unsigned int read_pointer, write_pointer;
	u32 csb[GEN8_CSB_ENTRIES][2];
	unsigned int csb_read = 0, i;
	unsigned int submit_contexts = 0;

	intel_uncore_forcewake_get(dev_priv, engine->fw_domains);

	status_pointer = I915_READ_FW(RING_CONTEXT_STATUS_PTR(engine));

	read_pointer = engine->next_context_status_buffer;
	write_pointer = GEN8_CSB_WRITE_PTR(status_pointer);
	if (read_pointer > write_pointer)
		write_pointer += GEN8_CSB_ENTRIES;

	while (read_pointer < write_pointer) {
		if (WARN_ON_ONCE(csb_read == GEN8_CSB_ENTRIES))
			break;
		csb[csb_read][0] = get_context_status(engine, ++read_pointer,
						      &csb[csb_read][1]);
		csb_read++;
	}

	engine->next_context_status_buffer = write_pointer % GEN8_CSB_ENTRIES;

	/* Update the read pointer to the old write pointer. Manual ringbuffer
	 * management ftw </sarcasm> */
	I915_WRITE_FW(RING_CONTEXT_STATUS_PTR(engine),
		      _MASKED_FIELD(GEN8_CSB_READ_PTR_MASK,
				    engine->next_context_status_buffer << 8));

	intel_uncore_forcewake_put(dev_priv, engine->fw_domains);

	spin_lock(&engine->execlist_lock);

	for (i = 0; i < csb_read; i++) {
		if (unlikely(csb[i][0] & GEN8_CTX_STATUS_PREEMPTED)) {
			if (csb[i][0] & GEN8_CTX_STATUS_LITE_RESTORE) {
				if (execlists_check_remove_request(engine, csb[i][1]))
					WARN(1, "Lite Restored request removed from queue\n");
			} else
				WARN(1, "Preemption without Lite Restore\n");
		}

		if (csb[i][0] & (GEN8_CTX_STATUS_ACTIVE_IDLE |
		    GEN8_CTX_STATUS_ELEMENT_SWITCH))
			submit_contexts +=
				execlists_check_remove_request(engine, csb[i][1]);
	}

	if (submit_contexts) {
		if (!engine->disable_lite_restore_wa ||
		    (csb[i][0] & GEN8_CTX_STATUS_ACTIVE_IDLE))
			execlists_unqueue(engine);
	}

	spin_unlock(&engine->execlist_lock);

	if (unlikely(submit_contexts > 2))
		DRM_ERROR("More than two context complete events?\n");
}

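/*
 * Add a request to the tail of the engine's execlist queue. If the queue
 * was empty the ELSP is written immediately; otherwise submission happens
 * from the context-switch interrupt. When more than two requests are
 * already queued and the last one uses the same context as this one, that
 * older request is dropped since the new tail supersedes its workload.
 */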
static void execlists_submit_request(struct drm_i915_gem_request *request)
{
	struct intel_engine_cs *engine = request->engine;
	struct drm_i915_gem_request *cursor;
	int num_elements = 0;

	spin_lock_bh(&engine->execlist_lock);

	list_for_each_entry(cursor, &engine->execlist_queue, execlist_link)
		if (++num_elements > 2)
			break;

	if (num_elements > 2) {
		struct drm_i915_gem_request *tail_req;

		tail_req = list_last_entry(&engine->execlist_queue,
					   struct drm_i915_gem_request,
					   execlist_link);

		if (request->ctx == tail_req->ctx) {
			WARN(tail_req->elsp_submitted != 0,
			     "More than 2 already-submitted reqs queued\n");
			list_del(&tail_req->execlist_link);
			i915_gem_request_put(tail_req);
		}
	}

	i915_gem_request_get(request);
	list_add_tail(&request->execlist_link, &engine->execlist_queue);
	request->ctx_hw_id = request->ctx->hw_id;
	if (num_elements == 0)
		execlists_unqueue(engine);

	spin_unlock_bh(&engine->execlist_lock);
}

int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request)
{
	struct intel_engine_cs *engine = request->engine;
	struct intel_context *ce = &request->ctx->engine[engine->id];
	int ret;

	/* Flush enough space to reduce the likelihood of waiting after
	 * we start building the request - in which case we will just
	 * have to repeat work.
	 */
	request->reserved_space += EXECLISTS_REQUEST_SIZE;

	if (!ce->state) {
		ret = execlists_context_deferred_alloc(request->ctx, engine);
		if (ret)
			return ret;
	}

	request->ring = ce->ring;

	if (i915.enable_guc_submission) {
		/*
		 * Check that the GuC has space for the request before
		 * going any further, as the i915_add_request() call
		 * later on mustn't fail ...
		 */
		ret = i915_guc_wq_check_space(request);
		if (ret)
			return ret;
	}

	ret = intel_lr_context_pin(request->ctx, engine);
	if (ret)
		return ret;

	ret = intel_ring_begin(request, 0);
	if (ret)
		goto err_unpin;

	if (!ce->initialised) {
		ret = engine->init_context(request);
		if (ret)
			goto err_unpin;

		ce->initialised = true;
	}

	/* Note that after this point, we have committed to using
	 * this request as it is being used to both track the
	 * state of engine initialisation and liveness of the
	 * golden renderstate above. Think twice before you try
	 * to cancel/unwind this request now.
	 */

	request->reserved_space -= EXECLISTS_REQUEST_SIZE;
	return 0;

err_unpin:
	intel_lr_context_unpin(request->ctx, engine);
	return ret;
}

/*
 * intel_logical_ring_advance() - advance the tail and prepare for submission
 * @request: Request to advance the logical ringbuffer of.
 *
 * The tail is updated in our logical ringbuffer struct, not in the actual context. What
 * really happens during submission is that the context and current tail will be placed
 * on a queue waiting for the ELSP to be ready to accept a new context submission. At that
 * point, the tail *inside* the context is updated and the ELSP written to.
 */
static int
intel_logical_ring_advance(struct drm_i915_gem_request *request)
{
	struct intel_ring *ring = request->ring;
	struct intel_engine_cs *engine = request->engine;

	intel_ring_advance(ring);
	request->tail = ring->tail;

	/*
	 * Here we add two extra NOOPs as padding to avoid
	 * lite restore of a context with HEAD==TAIL.
	 *
	 * Caller must reserve WA_TAIL_DWORDS for us!
	 */
	intel_ring_emit(ring, MI_NOOP);
	intel_ring_emit(ring, MI_NOOP);
	intel_ring_advance(ring);

	/* We keep the previous context alive until we retire the following
	 * request. This ensures that the context object is still pinned
	 * for any residual writes the HW makes into it on the context switch
	 * into the next object following the breadcrumb. Otherwise, we may
	 * retire the context too early.
	 */
	request->previous_context = engine->last_context;
	engine->last_context = request->ctx;
	return 0;
}

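/*
 * Drop every request still sitting on the engine's execlist queue,
 * releasing the references taken at submission time.
 */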
void intel_execlists_cancel_requests(struct intel_engine_cs *engine)
{
	struct drm_i915_gem_request *req, *tmp;
	LIST_HEAD(cancel_list);

	WARN_ON(!mutex_is_locked(&engine->i915->drm.struct_mutex));

	spin_lock_bh(&engine->execlist_lock);
	list_replace_init(&engine->execlist_queue, &cancel_list);
	spin_unlock_bh(&engine->execlist_lock);

	list_for_each_entry_safe(req, tmp, &cancel_list, execlist_link) {
		list_del(&req->execlist_link);
		i915_gem_request_put(req);
	}
}

void intel_logical_ring_stop(struct intel_engine_cs *engine)
{
	struct drm_i915_private *dev_priv = engine->i915;
	int ret;

	if (!intel_engine_initialized(engine))
		return;

	ret = intel_engine_idle(engine);
	if (ret)
		DRM_ERROR("failed to quiesce %s whilst cleaning up: %d\n",
			  engine->name, ret);

	/* TODO: Is this correct with Execlists enabled? */
	I915_WRITE_MODE(engine, _MASKED_BIT_ENABLE(STOP_RING));
	if (intel_wait_for_register(dev_priv,
				    RING_MI_MODE(engine->mmio_base),
				    MODE_IDLE, MODE_IDLE,
				    1000)) {
		DRM_ERROR("%s :timed out trying to stop ring\n", engine->name);
		return;
	}
	I915_WRITE_MODE(engine, _MASKED_BIT_DISABLE(STOP_RING));
}

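/*
 * Pin the context state object and its ringbuffer into the GGTT, map the
 * register state page and cache the context descriptor. Pinning is
 * reference counted; only the first pin does the actual work.
 */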
static int intel_lr_context_pin(struct i915_gem_context *ctx,
				struct intel_engine_cs *engine)
{
	struct drm_i915_private *dev_priv = ctx->i915;
	struct intel_context *ce = &ctx->engine[engine->id];
	void *vaddr;
	u32 *lrc_reg_state;
	int ret;

	lockdep_assert_held(&ctx->i915->drm.struct_mutex);

	if (ce->pin_count++)
		return 0;

	ret = i915_gem_object_ggtt_pin(ce->state, NULL,
				       0, GEN8_LR_CONTEXT_ALIGN,
				       PIN_OFFSET_BIAS | GUC_WOPCM_TOP);
	if (ret)
		goto err;

	vaddr = i915_gem_object_pin_map(ce->state);
	if (IS_ERR(vaddr)) {
		ret = PTR_ERR(vaddr);
		goto unpin_ctx_obj;
	}

	lrc_reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;

	ret = intel_ring_pin(ce->ring);
	if (ret)
		goto unpin_map;

	ce->lrc_vma = i915_gem_obj_to_ggtt(ce->state);
	intel_lr_context_descriptor_update(ctx, engine);

	lrc_reg_state[CTX_RING_BUFFER_START+1] = ce->ring->vma->node.start;
	ce->lrc_reg_state = lrc_reg_state;
	ce->state->dirty = true;

	/* Invalidate GuC TLB. */
	if (i915.enable_guc_submission)
		I915_WRITE(GEN8_GTCR, GEN8_GTCR_INVALIDATE);

	i915_gem_context_get(ctx);
	return 0;

unpin_map:
	i915_gem_object_unpin_map(ce->state);
unpin_ctx_obj:
	i915_gem_object_ggtt_unpin(ce->state);
err:
	ce->pin_count = 0;
	return ret;
}

void intel_lr_context_unpin(struct i915_gem_context *ctx,
			    struct intel_engine_cs *engine)
{
	struct intel_context *ce = &ctx->engine[engine->id];

	lockdep_assert_held(&ctx->i915->drm.struct_mutex);
	GEM_BUG_ON(ce->pin_count == 0);

	if (--ce->pin_count)
		return;

	intel_ring_unpin(ce->ring);

	i915_gem_object_unpin_map(ce->state);
	i915_gem_object_ggtt_unpin(ce->state);

	ce->lrc_vma = NULL;
	ce->lrc_desc = 0;
	ce->lrc_reg_state = NULL;

	i915_gem_context_put(ctx);
}

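/*
 * Emit the register writes recorded in req->i915->workarounds into the ring
 * as a single MI_LOAD_REGISTER_IMM block, bracketed by flushes.
 */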
static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
{
	int ret, i;
	struct intel_ring *ring = req->ring;
	struct i915_workarounds *w = &req->i915->workarounds;

	if (w->count == 0)
		return 0;

	ret = req->engine->emit_flush(req, EMIT_BARRIER);
	if (ret)
		return ret;

	ret = intel_ring_begin(req, w->count * 2 + 2);
	if (ret)
		return ret;

	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(w->count));
	for (i = 0; i < w->count; i++) {
		intel_ring_emit_reg(ring, w->reg[i].addr);
		intel_ring_emit(ring, w->reg[i].value);
	}
	intel_ring_emit(ring, MI_NOOP);

	intel_ring_advance(ring);

	ret = req->engine->emit_flush(req, EMIT_BARRIER);
	if (ret)
		return ret;

	return 0;
}

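/*
 * Append one dword to the indirect/per-context workaround batch, returning
 * -ENOSPC from the calling function if the batch would overflow a page.
 */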
#define wa_ctx_emit(batch, index, cmd)					\
	do {								\
		int __index = (index)++;				\
		if (WARN_ON(__index >= (PAGE_SIZE / sizeof(uint32_t)))) { \
			return -ENOSPC;					\
		}							\
		batch[__index] = (cmd);					\
	} while (0)

#define wa_ctx_emit_reg(batch, index, reg) \
	wa_ctx_emit((batch), (index), i915_mmio_reg_offset(reg))

/*
 * In this WA we need to set GEN8_L3SQCREG4[21:21] and reset it after
 * PIPE_CONTROL instruction. This is required for the flush to happen correctly
 * but there is a slight complication as this is applied in WA batch where the
 * values are only initialized once so we cannot take register value at the
 * beginning and reuse it further; hence we save its value to memory, upload a
 * constant value with bit21 set and then we restore it back with the saved value.
 * To simplify the WA, a constant value is formed by using the default value
 * of this register. This shouldn't be a problem because we are only modifying
 * it for a short period and this batch is non-preemptible. We can of course
 * use additional instructions that read the actual value of the register
 * at that time and set our bit of interest but it makes the WA complicated.
 *
 * This WA is also required for Gen9 so extracting as a function avoids
 * code duplication.
 */
static inline int gen8_emit_flush_coherentl3_wa(struct intel_engine_cs *engine,
						uint32_t *batch,
						uint32_t index)
{
	uint32_t l3sqc4_flush = (0x40400000 | GEN8_LQSC_FLUSH_COHERENT_LINES);

	/*
	 * WaDisableLSQCROPERFforOCL:skl,kbl
	 * This WA is implemented in skl_init_clock_gating() but since
	 * this batch updates GEN8_L3SQCREG4 with default value we need to
	 * set this bit here to retain the WA during flush.
	 */
	if (IS_SKL_REVID(engine->i915, 0, SKL_REVID_E0) ||
	    IS_KBL_REVID(engine->i915, 0, KBL_REVID_E0))
		l3sqc4_flush |= GEN8_LQSC_RO_PERF_DIS;

	wa_ctx_emit(batch, index, (MI_STORE_REGISTER_MEM_GEN8 |
				   MI_SRM_LRM_GLOBAL_GTT));
	wa_ctx_emit_reg(batch, index, GEN8_L3SQCREG4);
	wa_ctx_emit(batch, index, engine->scratch.gtt_offset + 256);
	wa_ctx_emit(batch, index, 0);

	wa_ctx_emit(batch, index, MI_LOAD_REGISTER_IMM(1));
	wa_ctx_emit_reg(batch, index, GEN8_L3SQCREG4);
	wa_ctx_emit(batch, index, l3sqc4_flush);

	wa_ctx_emit(batch, index, GFX_OP_PIPE_CONTROL(6));
	wa_ctx_emit(batch, index, (PIPE_CONTROL_CS_STALL |
				   PIPE_CONTROL_DC_FLUSH_ENABLE));
	wa_ctx_emit(batch, index, 0);
	wa_ctx_emit(batch, index, 0);
	wa_ctx_emit(batch, index, 0);
	wa_ctx_emit(batch, index, 0);

	wa_ctx_emit(batch, index, (MI_LOAD_REGISTER_MEM_GEN8 |
				   MI_SRM_LRM_GLOBAL_GTT));
	wa_ctx_emit_reg(batch, index, GEN8_L3SQCREG4);
	wa_ctx_emit(batch, index, engine->scratch.gtt_offset + 256);
	wa_ctx_emit(batch, index, 0);

	return index;
}

static inline uint32_t wa_ctx_start(struct i915_wa_ctx_bb *wa_ctx,
				    uint32_t offset,
				    uint32_t start_alignment)
{
	return wa_ctx->offset = ALIGN(offset, start_alignment);
}

static inline int wa_ctx_end(struct i915_wa_ctx_bb *wa_ctx,
			     uint32_t offset,
			     uint32_t size_alignment)
{
	wa_ctx->size = offset - wa_ctx->offset;

	WARN(wa_ctx->size % size_alignment,
	     "wa_ctx_bb failed sanity checks: size %d is not aligned to %d\n",
	     wa_ctx->size, size_alignment);
	return 0;
}

/*
 * Typically we only have one indirect_ctx and per_ctx batch buffer which are
 * initialized at the beginning and shared across all contexts but this field
 * helps us to have multiple batches at different offsets and select them based
 * on a criteria. At the moment this batch always starts at the beginning of the
 * page and at this point we don't have multiple wa_ctx batch buffers.
 *
 * The number of WA applied is not known at the beginning; we use this field
 * to return the number of DWORDS written.
 *
 * It is to be noted that this batch does not contain MI_BATCH_BUFFER_END
 * so it adds NOOPs as padding to make it cacheline aligned.
 * MI_BATCH_BUFFER_END will be added to perctx batch and both of them together
 * makes a complete batch buffer.
 */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001004static int gen8_init_indirectctx_bb(struct intel_engine_cs *engine,
Arun Siluvery17ee9502015-06-19 19:07:01 +01001005 struct i915_wa_ctx_bb *wa_ctx,
Daniel Vetter6e5248b2016-07-15 21:48:06 +02001006 uint32_t *batch,
Arun Siluvery17ee9502015-06-19 19:07:01 +01001007 uint32_t *offset)
1008{
Arun Siluvery0160f052015-06-23 15:46:57 +01001009 uint32_t scratch_addr;
Arun Siluvery17ee9502015-06-19 19:07:01 +01001010 uint32_t index = wa_ctx_start(wa_ctx, *offset, CACHELINE_DWORDS);
1011
Arun Siluvery7ad00d12015-06-19 18:37:12 +01001012 /* WaDisableCtxRestoreArbitration:bdw,chv */
Arun Siluvery83b8a982015-07-08 10:27:05 +01001013 wa_ctx_emit(batch, index, MI_ARB_ON_OFF | MI_ARB_DISABLE);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001014
Arun Siluveryc82435b2015-06-19 18:37:13 +01001015 /* WaFlushCoherentL3CacheLinesAtContextSwitch:bdw */
Chris Wilsonc0336662016-05-06 15:40:21 +01001016 if (IS_BROADWELL(engine->i915)) {
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001017 int rc = gen8_emit_flush_coherentl3_wa(engine, batch, index);
Andrzej Hajda604ef732015-09-21 15:33:35 +02001018 if (rc < 0)
1019 return rc;
1020 index = rc;
Arun Siluveryc82435b2015-06-19 18:37:13 +01001021 }
1022
Arun Siluvery0160f052015-06-23 15:46:57 +01001023 /* WaClearSlmSpaceAtContextSwitch:bdw,chv */
1024 /* Actual scratch location is at 128 bytes offset */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001025 scratch_addr = engine->scratch.gtt_offset + 2*CACHELINE_BYTES;
Arun Siluvery0160f052015-06-23 15:46:57 +01001026
Arun Siluvery83b8a982015-07-08 10:27:05 +01001027 wa_ctx_emit(batch, index, GFX_OP_PIPE_CONTROL(6));
1028 wa_ctx_emit(batch, index, (PIPE_CONTROL_FLUSH_L3 |
1029 PIPE_CONTROL_GLOBAL_GTT_IVB |
1030 PIPE_CONTROL_CS_STALL |
1031 PIPE_CONTROL_QW_WRITE));
1032 wa_ctx_emit(batch, index, scratch_addr);
1033 wa_ctx_emit(batch, index, 0);
1034 wa_ctx_emit(batch, index, 0);
1035 wa_ctx_emit(batch, index, 0);
Arun Siluvery0160f052015-06-23 15:46:57 +01001036
Arun Siluvery17ee9502015-06-19 19:07:01 +01001037 /* Pad to end of cacheline */
1038 while (index % CACHELINE_DWORDS)
Arun Siluvery83b8a982015-07-08 10:27:05 +01001039 wa_ctx_emit(batch, index, MI_NOOP);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001040
1041 /*
1042 * MI_BATCH_BUFFER_END is not required in Indirect ctx BB because
1043 * execution depends on the length specified in terms of cache lines
1044 * in the register CTX_RCS_INDIRECT_CTX
1045 */
1046
1047 return wa_ctx_end(wa_ctx, *offset = index, CACHELINE_DWORDS);
1048}
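/*
 * Editor's sketch (illustration only, not driver code): the note above about
 * execution length is why wa_ctx_end() insists on cacheline alignment. The
 * register value programmed later in populate_lr_context() packs the GGTT
 * address of the batch with its length in cachelines, roughly as below. The
 * helper name is hypothetical and exists purely to show the encoding.
 */
static inline u32 indirect_ctx_reg_value_sketch(u32 ggtt_offset,
						const struct i915_wa_ctx_bb *bb)
{
	/* low bits carry the batch length in cachelines */
	return (ggtt_offset + bb->offset * sizeof(uint32_t)) |
	       (bb->size / CACHELINE_DWORDS);
}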
1049
Daniel Vetter6e5248b2016-07-15 21:48:06 +02001050/*
 1051 * This batch is started immediately after the indirect_ctx batch. Since we ensure
 1052 * that indirect_ctx ends on a cacheline, this batch is aligned automatically.
Arun Siluvery17ee9502015-06-19 19:07:01 +01001053 *
Daniel Vetter6e5248b2016-07-15 21:48:06 +02001054 * The number of DWORDs written is returned via this field.
Arun Siluvery17ee9502015-06-19 19:07:01 +01001055 *
 1056 * This batch is terminated with MI_BATCH_BUFFER_END, so we need not add padding
 1057 * to align it to a cacheline; padding after MI_BATCH_BUFFER_END is redundant.
1058 */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001059static int gen8_init_perctx_bb(struct intel_engine_cs *engine,
Arun Siluvery17ee9502015-06-19 19:07:01 +01001060 struct i915_wa_ctx_bb *wa_ctx,
Daniel Vetter6e5248b2016-07-15 21:48:06 +02001061 uint32_t *batch,
Arun Siluvery17ee9502015-06-19 19:07:01 +01001062 uint32_t *offset)
1063{
1064 uint32_t index = wa_ctx_start(wa_ctx, *offset, CACHELINE_DWORDS);
1065
Arun Siluvery7ad00d12015-06-19 18:37:12 +01001066 /* WaDisableCtxRestoreArbitration:bdw,chv */
Arun Siluvery83b8a982015-07-08 10:27:05 +01001067 wa_ctx_emit(batch, index, MI_ARB_ON_OFF | MI_ARB_ENABLE);
Arun Siluvery7ad00d12015-06-19 18:37:12 +01001068
Arun Siluvery83b8a982015-07-08 10:27:05 +01001069 wa_ctx_emit(batch, index, MI_BATCH_BUFFER_END);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001070
1071 return wa_ctx_end(wa_ctx, *offset = index, 1);
1072}
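/*
 * Editor's illustrative sketch (not part of the driver): given the two
 * i915_wa_ctx_bb descriptors filled in by the functions above, the per_ctx
 * batch is laid out directly after the cacheline-aligned indirect_ctx batch
 * inside the single shared page, so the combined footprint in dwords can be
 * read off the second descriptor. The helper name is hypothetical.
 */
static inline uint32_t wa_ctx_total_dwords_sketch(const struct i915_ctx_workarounds *wa_ctx)
{
	/* per_ctx.offset already accounts for the indirect_ctx batch before it */
	return wa_ctx->per_ctx.offset + wa_ctx->per_ctx.size;
}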
1073
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001074static int gen9_init_indirectctx_bb(struct intel_engine_cs *engine,
Arun Siluvery0504cff2015-07-14 15:01:27 +01001075 struct i915_wa_ctx_bb *wa_ctx,
Daniel Vetter6e5248b2016-07-15 21:48:06 +02001076 uint32_t *batch,
Arun Siluvery0504cff2015-07-14 15:01:27 +01001077 uint32_t *offset)
1078{
Arun Siluverya4106a72015-07-14 15:01:29 +01001079 int ret;
Arun Siluvery0504cff2015-07-14 15:01:27 +01001080 uint32_t index = wa_ctx_start(wa_ctx, *offset, CACHELINE_DWORDS);
1081
Arun Siluvery0907c8f2015-07-14 15:01:28 +01001082 /* WaDisableCtxRestoreArbitration:skl,bxt */
Chris Wilsonc0336662016-05-06 15:40:21 +01001083 if (IS_SKL_REVID(engine->i915, 0, SKL_REVID_D0) ||
1084 IS_BXT_REVID(engine->i915, 0, BXT_REVID_A1))
Arun Siluvery0907c8f2015-07-14 15:01:28 +01001085 wa_ctx_emit(batch, index, MI_ARB_ON_OFF | MI_ARB_DISABLE);
Arun Siluvery0504cff2015-07-14 15:01:27 +01001086
Arun Siluverya4106a72015-07-14 15:01:29 +01001087 /* WaFlushCoherentL3CacheLinesAtContextSwitch:skl,bxt */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001088 ret = gen8_emit_flush_coherentl3_wa(engine, batch, index);
Arun Siluverya4106a72015-07-14 15:01:29 +01001089 if (ret < 0)
1090 return ret;
1091 index = ret;
1092
Mika Kuoppala873e8172016-07-20 14:26:13 +03001093 /* WaDisableGatherAtSetShaderCommonSlice:skl,bxt,kbl */
1094 wa_ctx_emit(batch, index, MI_LOAD_REGISTER_IMM(1));
1095 wa_ctx_emit_reg(batch, index, COMMON_SLICE_CHICKEN2);
1096 wa_ctx_emit(batch, index, _MASKED_BIT_DISABLE(
1097 GEN9_DISABLE_GATHER_AT_SET_SHADER_COMMON_SLICE));
1098 wa_ctx_emit(batch, index, MI_NOOP);
1099
Mika Kuoppala066d4622016-06-07 17:19:15 +03001100 /* WaClearSlmSpaceAtContextSwitch:kbl */
1101 /* Actual scratch location is at 128 bytes offset */
1102 if (IS_KBL_REVID(engine->i915, 0, KBL_REVID_A0)) {
1103 uint32_t scratch_addr
1104 = engine->scratch.gtt_offset + 2*CACHELINE_BYTES;
1105
1106 wa_ctx_emit(batch, index, GFX_OP_PIPE_CONTROL(6));
1107 wa_ctx_emit(batch, index, (PIPE_CONTROL_FLUSH_L3 |
1108 PIPE_CONTROL_GLOBAL_GTT_IVB |
1109 PIPE_CONTROL_CS_STALL |
1110 PIPE_CONTROL_QW_WRITE));
1111 wa_ctx_emit(batch, index, scratch_addr);
1112 wa_ctx_emit(batch, index, 0);
1113 wa_ctx_emit(batch, index, 0);
1114 wa_ctx_emit(batch, index, 0);
1115 }
Tim Gore3485d992016-07-05 10:01:30 +01001116
1117 /* WaMediaPoolStateCmdInWABB:bxt */
1118 if (HAS_POOLED_EU(engine->i915)) {
1119 /*
 1120 * EU pool configuration is set up along with the golden context
 1121 * during context initialization. This value depends on the
 1122 * device type (2x6 or 3x6) and needs to be updated based
 1123 * on which subslice is disabled, especially for 2x6
 1124 * devices. However, it is safe to load the default
 1125 * configuration of a 3x6 device instead of masking off the
 1126 * corresponding bits, because the HW ignores bits of a disabled
 1127 * subslice and drops down to the appropriate config. Please
 1128 * see render_state_setup() in i915_gem_render_state.c for
 1129 * possible configurations; to avoid duplication they are
 1130 * not repeated here.
1131 */
1132 u32 eu_pool_config = 0x00777000;
1133 wa_ctx_emit(batch, index, GEN9_MEDIA_POOL_STATE);
1134 wa_ctx_emit(batch, index, GEN9_MEDIA_POOL_ENABLE);
1135 wa_ctx_emit(batch, index, eu_pool_config);
1136 wa_ctx_emit(batch, index, 0);
1137 wa_ctx_emit(batch, index, 0);
1138 wa_ctx_emit(batch, index, 0);
1139 }
1140
Arun Siluvery0504cff2015-07-14 15:01:27 +01001141 /* Pad to end of cacheline */
1142 while (index % CACHELINE_DWORDS)
1143 wa_ctx_emit(batch, index, MI_NOOP);
1144
1145 return wa_ctx_end(wa_ctx, *offset = index, CACHELINE_DWORDS);
1146}
1147
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001148static int gen9_init_perctx_bb(struct intel_engine_cs *engine,
Arun Siluvery0504cff2015-07-14 15:01:27 +01001149 struct i915_wa_ctx_bb *wa_ctx,
Daniel Vetter6e5248b2016-07-15 21:48:06 +02001150 uint32_t *batch,
Arun Siluvery0504cff2015-07-14 15:01:27 +01001151 uint32_t *offset)
1152{
1153 uint32_t index = wa_ctx_start(wa_ctx, *offset, CACHELINE_DWORDS);
1154
Arun Siluvery9b014352015-07-14 15:01:30 +01001155 /* WaSetDisablePixMaskCammingAndRhwoInCommonSliceChicken:skl,bxt */
Chris Wilsonc0336662016-05-06 15:40:21 +01001156 if (IS_SKL_REVID(engine->i915, 0, SKL_REVID_B0) ||
1157 IS_BXT_REVID(engine->i915, 0, BXT_REVID_A1)) {
Arun Siluvery9b014352015-07-14 15:01:30 +01001158 wa_ctx_emit(batch, index, MI_LOAD_REGISTER_IMM(1));
Ville Syrjälä8f40db72015-11-04 23:20:08 +02001159 wa_ctx_emit_reg(batch, index, GEN9_SLICE_COMMON_ECO_CHICKEN0);
Arun Siluvery9b014352015-07-14 15:01:30 +01001160 wa_ctx_emit(batch, index,
1161 _MASKED_BIT_ENABLE(DISABLE_PIXEL_MASK_CAMMING));
1162 wa_ctx_emit(batch, index, MI_NOOP);
1163 }
1164
Tim Goreb1e429f2016-03-21 14:37:29 +00001165 /* WaClearTdlStateAckDirtyBits:bxt */
Chris Wilsonc0336662016-05-06 15:40:21 +01001166 if (IS_BXT_REVID(engine->i915, 0, BXT_REVID_B0)) {
Tim Goreb1e429f2016-03-21 14:37:29 +00001167 wa_ctx_emit(batch, index, MI_LOAD_REGISTER_IMM(4));
1168
1169 wa_ctx_emit_reg(batch, index, GEN8_STATE_ACK);
1170 wa_ctx_emit(batch, index, _MASKED_BIT_DISABLE(GEN9_SUBSLICE_TDL_ACK_BITS));
1171
1172 wa_ctx_emit_reg(batch, index, GEN9_STATE_ACK_SLICE1);
1173 wa_ctx_emit(batch, index, _MASKED_BIT_DISABLE(GEN9_SUBSLICE_TDL_ACK_BITS));
1174
1175 wa_ctx_emit_reg(batch, index, GEN9_STATE_ACK_SLICE2);
1176 wa_ctx_emit(batch, index, _MASKED_BIT_DISABLE(GEN9_SUBSLICE_TDL_ACK_BITS));
1177
1178 wa_ctx_emit_reg(batch, index, GEN7_ROW_CHICKEN2);
1179 /* dummy write to CS, mask bits are 0 to ensure the register is not modified */
1180 wa_ctx_emit(batch, index, 0x0);
1181 wa_ctx_emit(batch, index, MI_NOOP);
1182 }
1183
Arun Siluvery0907c8f2015-07-14 15:01:28 +01001184 /* WaDisableCtxRestoreArbitration:skl,bxt */
Chris Wilsonc0336662016-05-06 15:40:21 +01001185 if (IS_SKL_REVID(engine->i915, 0, SKL_REVID_D0) ||
1186 IS_BXT_REVID(engine->i915, 0, BXT_REVID_A1))
Arun Siluvery0907c8f2015-07-14 15:01:28 +01001187 wa_ctx_emit(batch, index, MI_ARB_ON_OFF | MI_ARB_ENABLE);
1188
Arun Siluvery0504cff2015-07-14 15:01:27 +01001189 wa_ctx_emit(batch, index, MI_BATCH_BUFFER_END);
1190
1191 return wa_ctx_end(wa_ctx, *offset = index, 1);
1192}
1193
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001194static int lrc_setup_wa_ctx_obj(struct intel_engine_cs *engine, u32 size)
Arun Siluvery17ee9502015-06-19 19:07:01 +01001195{
1196 int ret;
1197
Chris Wilson91c8a322016-07-05 10:40:23 +01001198 engine->wa_ctx.obj = i915_gem_object_create(&engine->i915->drm,
1199 PAGE_ALIGN(size));
Chris Wilsonfe3db792016-04-25 13:32:13 +01001200 if (IS_ERR(engine->wa_ctx.obj)) {
Arun Siluvery17ee9502015-06-19 19:07:01 +01001201 DRM_DEBUG_DRIVER("alloc LRC WA ctx backing obj failed.\n");
Chris Wilsonfe3db792016-04-25 13:32:13 +01001202 ret = PTR_ERR(engine->wa_ctx.obj);
1203 engine->wa_ctx.obj = NULL;
1204 return ret;
Arun Siluvery17ee9502015-06-19 19:07:01 +01001205 }
1206
Chris Wilsonde895082016-08-04 16:32:34 +01001207 ret = i915_gem_object_ggtt_pin(engine->wa_ctx.obj, NULL,
1208 0, PAGE_SIZE, 0);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001209 if (ret) {
1210 DRM_DEBUG_DRIVER("pin LRC WA ctx backing obj failed: %d\n",
1211 ret);
Chris Wilsonf8c417c2016-07-20 13:31:53 +01001212 i915_gem_object_put(engine->wa_ctx.obj);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001213 return ret;
1214 }
1215
1216 return 0;
1217}
1218
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001219static void lrc_destroy_wa_ctx_obj(struct intel_engine_cs *engine)
Arun Siluvery17ee9502015-06-19 19:07:01 +01001220{
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001221 if (engine->wa_ctx.obj) {
1222 i915_gem_object_ggtt_unpin(engine->wa_ctx.obj);
Chris Wilsonf8c417c2016-07-20 13:31:53 +01001223 i915_gem_object_put(engine->wa_ctx.obj);
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001224 engine->wa_ctx.obj = NULL;
Arun Siluvery17ee9502015-06-19 19:07:01 +01001225 }
1226}
1227
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001228static int intel_init_workaround_bb(struct intel_engine_cs *engine)
Arun Siluvery17ee9502015-06-19 19:07:01 +01001229{
1230 int ret;
1231 uint32_t *batch;
1232 uint32_t offset;
1233 struct page *page;
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001234 struct i915_ctx_workarounds *wa_ctx = &engine->wa_ctx;
Arun Siluvery17ee9502015-06-19 19:07:01 +01001235
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001236 WARN_ON(engine->id != RCS);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001237
Arun Siluvery5e60d792015-06-23 15:50:44 +01001238	/* update this when WAs for higher Gens are added */
Chris Wilsonc0336662016-05-06 15:40:21 +01001239 if (INTEL_GEN(engine->i915) > 9) {
Arun Siluvery0504cff2015-07-14 15:01:27 +01001240 DRM_ERROR("WA batch buffer is not initialized for Gen%d\n",
Chris Wilsonc0336662016-05-06 15:40:21 +01001241 INTEL_GEN(engine->i915));
Arun Siluvery5e60d792015-06-23 15:50:44 +01001242 return 0;
Arun Siluvery0504cff2015-07-14 15:01:27 +01001243 }
Arun Siluvery5e60d792015-06-23 15:50:44 +01001244
Arun Siluveryc4db7592015-06-19 18:37:11 +01001245	/* some WAs perform writes to the scratch page; ensure it is valid */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001246 if (engine->scratch.obj == NULL) {
1247 DRM_ERROR("scratch page not allocated for %s\n", engine->name);
Arun Siluveryc4db7592015-06-19 18:37:11 +01001248 return -EINVAL;
1249 }
1250
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001251 ret = lrc_setup_wa_ctx_obj(engine, PAGE_SIZE);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001252 if (ret) {
1253 DRM_DEBUG_DRIVER("Failed to setup context WA page: %d\n", ret);
1254 return ret;
1255 }
1256
Dave Gordon033908a2015-12-10 18:51:23 +00001257 page = i915_gem_object_get_dirty_page(wa_ctx->obj, 0);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001258 batch = kmap_atomic(page);
1259 offset = 0;
1260
Chris Wilsonc0336662016-05-06 15:40:21 +01001261 if (IS_GEN8(engine->i915)) {
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001262 ret = gen8_init_indirectctx_bb(engine,
Arun Siluvery17ee9502015-06-19 19:07:01 +01001263 &wa_ctx->indirect_ctx,
1264 batch,
1265 &offset);
1266 if (ret)
1267 goto out;
1268
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001269 ret = gen8_init_perctx_bb(engine,
Arun Siluvery17ee9502015-06-19 19:07:01 +01001270 &wa_ctx->per_ctx,
1271 batch,
1272 &offset);
1273 if (ret)
1274 goto out;
Chris Wilsonc0336662016-05-06 15:40:21 +01001275 } else if (IS_GEN9(engine->i915)) {
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001276 ret = gen9_init_indirectctx_bb(engine,
Arun Siluvery0504cff2015-07-14 15:01:27 +01001277 &wa_ctx->indirect_ctx,
1278 batch,
1279 &offset);
1280 if (ret)
1281 goto out;
1282
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001283 ret = gen9_init_perctx_bb(engine,
Arun Siluvery0504cff2015-07-14 15:01:27 +01001284 &wa_ctx->per_ctx,
1285 batch,
1286 &offset);
1287 if (ret)
1288 goto out;
Arun Siluvery17ee9502015-06-19 19:07:01 +01001289 }
1290
1291out:
1292 kunmap_atomic(batch);
1293 if (ret)
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001294 lrc_destroy_wa_ctx_obj(engine);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001295
1296 return ret;
1297}
1298
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001299static void lrc_init_hws(struct intel_engine_cs *engine)
1300{
Chris Wilsonc0336662016-05-06 15:40:21 +01001301 struct drm_i915_private *dev_priv = engine->i915;
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001302
1303 I915_WRITE(RING_HWS_PGA(engine->mmio_base),
1304 (u32)engine->status_page.gfx_addr);
1305 POSTING_READ(RING_HWS_PGA(engine->mmio_base));
1306}
1307
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001308static int gen8_init_common_ring(struct intel_engine_cs *engine)
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001309{
Chris Wilsonc0336662016-05-06 15:40:21 +01001310 struct drm_i915_private *dev_priv = engine->i915;
Tvrtko Ursulinc6a2ac72016-02-26 16:58:32 +00001311 unsigned int next_context_status_buffer_hw;
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001312
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001313 lrc_init_hws(engine);
Nick Hoathe84fe802015-09-11 12:53:46 +01001314
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001315 I915_WRITE_IMR(engine,
1316 ~(engine->irq_enable_mask | engine->irq_keep_mask));
1317 I915_WRITE(RING_HWSTAM(engine->mmio_base), 0xffffffff);
Oscar Mateo73d477f2014-07-24 17:04:31 +01001318
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001319 I915_WRITE(RING_MODE_GEN7(engine),
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001320 _MASKED_BIT_DISABLE(GFX_REPLAY_MODE) |
1321 _MASKED_BIT_ENABLE(GFX_RUN_LIST_ENABLE));
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001322 POSTING_READ(RING_MODE_GEN7(engine));
Michel Thierrydfc53c52015-09-28 13:25:12 +01001323
1324 /*
1325 * Instead of resetting the Context Status Buffer (CSB) read pointer to
1326 * zero, we need to read the write pointer from hardware and use its
1327 * value because "this register is power context save restored".
1328 * Effectively, these states have been observed:
1329 *
1330 * | Suspend-to-idle (freeze) | Suspend-to-RAM (mem) |
1331 * BDW | CSB regs not reset | CSB regs reset |
1332 * CHT | CSB regs not reset | CSB regs not reset |
Ben Widawsky5590a5f2016-01-05 10:30:05 -08001333 * SKL | ? | ? |
1334 * BXT | ? | ? |
Michel Thierrydfc53c52015-09-28 13:25:12 +01001335 */
Ben Widawsky5590a5f2016-01-05 10:30:05 -08001336 next_context_status_buffer_hw =
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001337 GEN8_CSB_WRITE_PTR(I915_READ(RING_CONTEXT_STATUS_PTR(engine)));
Michel Thierrydfc53c52015-09-28 13:25:12 +01001338
1339 /*
 1340 * When the CSB registers are reset (also after power-up / gpu reset),
 1341 * the CSB write pointer is set to all 1's, which is not valid; use
 1342 * GEN8_CSB_ENTRIES - 1 in this special case, so the first element read is CSB[0].
1343 */
1344 if (next_context_status_buffer_hw == GEN8_CSB_PTR_MASK)
1345 next_context_status_buffer_hw = (GEN8_CSB_ENTRIES - 1);
1346
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001347 engine->next_context_status_buffer = next_context_status_buffer_hw;
1348 DRM_DEBUG_DRIVER("Execlists enabled for %s\n", engine->name);
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001349
Tomas Elffc0768c2016-03-21 16:26:59 +00001350 intel_engine_init_hangcheck(engine);
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001351
Peter Antoine0ccdacf2016-04-13 15:03:25 +01001352 return intel_mocs_init_engine(engine);
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001353}
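/*
 * Editor's sketch (assumption, not driver code): the CSB is a small ring of
 * GEN8_CSB_ENTRIES slots, so a consumer advances its read pointer with a
 * wrap, which is why the reset value chosen above is GEN8_CSB_ENTRIES - 1:
 * the first increment then lands on CSB[0]. Hypothetical helper shown only
 * to illustrate the wrap arithmetic.
 */
static inline unsigned int csb_advance_read_ptr_sketch(unsigned int read_ptr)
{
	return (read_ptr + 1) % GEN8_CSB_ENTRIES;
}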
1354
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001355static int gen8_init_render_ring(struct intel_engine_cs *engine)
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001356{
Chris Wilsonc0336662016-05-06 15:40:21 +01001357 struct drm_i915_private *dev_priv = engine->i915;
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001358 int ret;
1359
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001360 ret = gen8_init_common_ring(engine);
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001361 if (ret)
1362 return ret;
1363
1364 /* We need to disable the AsyncFlip performance optimisations in order
1365 * to use MI_WAIT_FOR_EVENT within the CS. It should already be
1366 * programmed to '1' on all products.
1367 *
1368 * WaDisableAsyncFlipPerfMode:snb,ivb,hsw,vlv,bdw,chv
1369 */
1370 I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(ASYNC_FLIP_PERF_DISABLE));
1371
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001372 I915_WRITE(INSTPM, _MASKED_BIT_ENABLE(INSTPM_FORCE_ORDERING));
1373
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001374 return init_workarounds_ring(engine);
Oscar Mateo9b1136d2014-07-24 17:04:24 +01001375}
1376
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001377static int gen9_init_render_ring(struct intel_engine_cs *engine)
Damien Lespiau82ef8222015-02-09 19:33:08 +00001378{
1379 int ret;
1380
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001381 ret = gen8_init_common_ring(engine);
Damien Lespiau82ef8222015-02-09 19:33:08 +00001382 if (ret)
1383 return ret;
1384
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001385 return init_workarounds_ring(engine);
Damien Lespiau82ef8222015-02-09 19:33:08 +00001386}
1387
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001388static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
1389{
1390 struct i915_hw_ppgtt *ppgtt = req->ctx->ppgtt;
Chris Wilson7e37f882016-08-02 22:50:21 +01001391 struct intel_ring *ring = req->ring;
Tvrtko Ursulin4a570db2016-03-16 11:00:38 +00001392 struct intel_engine_cs *engine = req->engine;
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001393 const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
1394 int i, ret;
1395
Chris Wilson987046a2016-04-28 09:56:46 +01001396 ret = intel_ring_begin(req, num_lri_cmds * 2 + 2);
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001397 if (ret)
1398 return ret;
1399
Chris Wilsonb5321f32016-08-02 22:50:18 +01001400 intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_lri_cmds));
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001401 for (i = GEN8_LEGACY_PDPES - 1; i >= 0; i--) {
1402 const dma_addr_t pd_daddr = i915_page_dir_dma_addr(ppgtt, i);
1403
Chris Wilsonb5321f32016-08-02 22:50:18 +01001404 intel_ring_emit_reg(ring, GEN8_RING_PDP_UDW(engine, i));
1405 intel_ring_emit(ring, upper_32_bits(pd_daddr));
1406 intel_ring_emit_reg(ring, GEN8_RING_PDP_LDW(engine, i));
1407 intel_ring_emit(ring, lower_32_bits(pd_daddr));
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001408 }
1409
Chris Wilsonb5321f32016-08-02 22:50:18 +01001410 intel_ring_emit(ring, MI_NOOP);
1411 intel_ring_advance(ring);
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001412
1413 return 0;
1414}
1415
John Harrisonbe795fc2015-05-29 17:44:03 +01001416static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
Chris Wilson803688b2016-08-02 22:50:27 +01001417 u64 offset, u32 len,
1418 unsigned int dispatch_flags)
Oscar Mateo15648582014-07-24 17:04:32 +01001419{
Chris Wilson7e37f882016-08-02 22:50:21 +01001420 struct intel_ring *ring = req->ring;
John Harrison8e004ef2015-02-13 11:48:10 +00001421 bool ppgtt = !(dispatch_flags & I915_DISPATCH_SECURE);
Oscar Mateo15648582014-07-24 17:04:32 +01001422 int ret;
1423
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001424	/* Don't rely on hw updating PDPs, especially in lite-restore.
 1425	 * Ideally, we should set Force PD Restore in the ctx descriptor,
 1426	 * but we can't. Force Restore would be a second option, but
 1427	 * it is unsafe in case of lite-restore (because the ctx is
Michel Thierry2dba3232015-07-30 11:06:23 +01001428	 * not idle). PML4 is allocated during ppgtt init, so this is
 1429	 * not needed in 48-bit. */
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001430 if (req->ctx->ppgtt &&
Tvrtko Ursulin666796d2016-03-16 11:00:39 +00001431 (intel_engine_flag(req->engine) & req->ctx->ppgtt->pd_dirty_rings)) {
Zhiyuan Lv331f38e2015-08-28 15:41:14 +08001432 if (!USES_FULL_48BIT_PPGTT(req->i915) &&
Chris Wilsonc0336662016-05-06 15:40:21 +01001433 !intel_vgpu_active(req->i915)) {
Michel Thierry2dba3232015-07-30 11:06:23 +01001434 ret = intel_logical_ring_emit_pdps(req);
1435 if (ret)
1436 return ret;
1437 }
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001438
Tvrtko Ursulin666796d2016-03-16 11:00:39 +00001439 req->ctx->ppgtt->pd_dirty_rings &= ~intel_engine_flag(req->engine);
Michel Thierry7a01a0a2015-06-26 13:46:14 +01001440 }
1441
Chris Wilson987046a2016-04-28 09:56:46 +01001442 ret = intel_ring_begin(req, 4);
Oscar Mateo15648582014-07-24 17:04:32 +01001443 if (ret)
1444 return ret;
1445
1446 /* FIXME(BDW): Address space and security selectors. */
Chris Wilsonb5321f32016-08-02 22:50:18 +01001447 intel_ring_emit(ring, MI_BATCH_BUFFER_START_GEN8 |
1448 (ppgtt<<8) |
1449 (dispatch_flags & I915_DISPATCH_RS ?
1450 MI_BATCH_RESOURCE_STREAMER : 0));
1451 intel_ring_emit(ring, lower_32_bits(offset));
1452 intel_ring_emit(ring, upper_32_bits(offset));
1453 intel_ring_emit(ring, MI_NOOP);
1454 intel_ring_advance(ring);
Oscar Mateo15648582014-07-24 17:04:32 +01001455
1456 return 0;
1457}
1458
Chris Wilson31bb59c2016-07-01 17:23:27 +01001459static void gen8_logical_ring_enable_irq(struct intel_engine_cs *engine)
Oscar Mateo73d477f2014-07-24 17:04:31 +01001460{
Chris Wilsonc0336662016-05-06 15:40:21 +01001461 struct drm_i915_private *dev_priv = engine->i915;
Chris Wilson31bb59c2016-07-01 17:23:27 +01001462 I915_WRITE_IMR(engine,
1463 ~(engine->irq_enable_mask | engine->irq_keep_mask));
1464 POSTING_READ_FW(RING_IMR(engine->mmio_base));
Oscar Mateo73d477f2014-07-24 17:04:31 +01001465}
1466
Chris Wilson31bb59c2016-07-01 17:23:27 +01001467static void gen8_logical_ring_disable_irq(struct intel_engine_cs *engine)
Oscar Mateo73d477f2014-07-24 17:04:31 +01001468{
Chris Wilsonc0336662016-05-06 15:40:21 +01001469 struct drm_i915_private *dev_priv = engine->i915;
Chris Wilson31bb59c2016-07-01 17:23:27 +01001470 I915_WRITE_IMR(engine, ~engine->irq_keep_mask);
Oscar Mateo73d477f2014-07-24 17:04:31 +01001471}
1472
Chris Wilson7c9cf4e2016-08-02 22:50:25 +01001473static int gen8_emit_flush(struct drm_i915_gem_request *request, u32 mode)
Oscar Mateo47122742014-07-24 17:04:28 +01001474{
Chris Wilson7e37f882016-08-02 22:50:21 +01001475 struct intel_ring *ring = request->ring;
1476 u32 cmd;
Oscar Mateo47122742014-07-24 17:04:28 +01001477 int ret;
1478
Chris Wilson987046a2016-04-28 09:56:46 +01001479 ret = intel_ring_begin(request, 4);
Oscar Mateo47122742014-07-24 17:04:28 +01001480 if (ret)
1481 return ret;
1482
1483 cmd = MI_FLUSH_DW + 1;
1484
Chris Wilsonf0a1fb12015-01-22 13:42:00 +00001485 /* We always require a command barrier so that subsequent
1486 * commands, such as breadcrumb interrupts, are strictly ordered
1487 * wrt the contents of the write cache being flushed to memory
1488 * (and thus being coherent from the CPU).
1489 */
1490 cmd |= MI_FLUSH_DW_STORE_INDEX | MI_FLUSH_DW_OP_STOREDW;
1491
Chris Wilson7c9cf4e2016-08-02 22:50:25 +01001492 if (mode & EMIT_INVALIDATE) {
Chris Wilsonf0a1fb12015-01-22 13:42:00 +00001493 cmd |= MI_INVALIDATE_TLB;
Chris Wilson1dae2df2016-08-02 22:50:19 +01001494 if (request->engine->id == VCS)
Chris Wilsonf0a1fb12015-01-22 13:42:00 +00001495 cmd |= MI_INVALIDATE_BSD;
Oscar Mateo47122742014-07-24 17:04:28 +01001496 }
1497
Chris Wilsonb5321f32016-08-02 22:50:18 +01001498 intel_ring_emit(ring, cmd);
1499 intel_ring_emit(ring,
1500 I915_GEM_HWS_SCRATCH_ADDR |
1501 MI_FLUSH_DW_USE_GTT);
1502 intel_ring_emit(ring, 0); /* upper addr */
1503 intel_ring_emit(ring, 0); /* value */
1504 intel_ring_advance(ring);
Oscar Mateo47122742014-07-24 17:04:28 +01001505
1506 return 0;
1507}
1508
John Harrison7deb4d32015-05-29 17:43:59 +01001509static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
Chris Wilson7c9cf4e2016-08-02 22:50:25 +01001510 u32 mode)
Oscar Mateo47122742014-07-24 17:04:28 +01001511{
Chris Wilson7e37f882016-08-02 22:50:21 +01001512 struct intel_ring *ring = request->ring;
Chris Wilsonb5321f32016-08-02 22:50:18 +01001513 struct intel_engine_cs *engine = request->engine;
Tvrtko Ursuline2f80392016-03-16 11:00:36 +00001514 u32 scratch_addr = engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001515 bool vf_flush_wa = false, dc_flush_wa = false;
Oscar Mateo47122742014-07-24 17:04:28 +01001516 u32 flags = 0;
1517 int ret;
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001518 int len;
Oscar Mateo47122742014-07-24 17:04:28 +01001519
1520 flags |= PIPE_CONTROL_CS_STALL;
1521
Chris Wilson7c9cf4e2016-08-02 22:50:25 +01001522 if (mode & EMIT_FLUSH) {
Oscar Mateo47122742014-07-24 17:04:28 +01001523 flags |= PIPE_CONTROL_RENDER_TARGET_CACHE_FLUSH;
1524 flags |= PIPE_CONTROL_DEPTH_CACHE_FLUSH;
Francisco Jerez965fd602016-01-13 18:59:39 -08001525 flags |= PIPE_CONTROL_DC_FLUSH_ENABLE;
Chris Wilson40a24482015-08-21 16:08:41 +01001526 flags |= PIPE_CONTROL_FLUSH_ENABLE;
Oscar Mateo47122742014-07-24 17:04:28 +01001527 }
1528
Chris Wilson7c9cf4e2016-08-02 22:50:25 +01001529 if (mode & EMIT_INVALIDATE) {
Oscar Mateo47122742014-07-24 17:04:28 +01001530 flags |= PIPE_CONTROL_TLB_INVALIDATE;
1531 flags |= PIPE_CONTROL_INSTRUCTION_CACHE_INVALIDATE;
1532 flags |= PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE;
1533 flags |= PIPE_CONTROL_VF_CACHE_INVALIDATE;
1534 flags |= PIPE_CONTROL_CONST_CACHE_INVALIDATE;
1535 flags |= PIPE_CONTROL_STATE_CACHE_INVALIDATE;
1536 flags |= PIPE_CONTROL_QW_WRITE;
1537 flags |= PIPE_CONTROL_GLOBAL_GTT_IVB;
Oscar Mateo47122742014-07-24 17:04:28 +01001538
Ben Widawsky1a5a9ce2015-12-17 09:49:57 -08001539 /*
1540 * On GEN9: before VF_CACHE_INVALIDATE we need to emit a NULL
1541 * pipe control.
1542 */
Chris Wilsonc0336662016-05-06 15:40:21 +01001543 if (IS_GEN9(request->i915))
Ben Widawsky1a5a9ce2015-12-17 09:49:57 -08001544 vf_flush_wa = true;
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001545
1546 /* WaForGAMHang:kbl */
1547 if (IS_KBL_REVID(request->i915, 0, KBL_REVID_B0))
1548 dc_flush_wa = true;
Ben Widawsky1a5a9ce2015-12-17 09:49:57 -08001549 }
Imre Deak9647ff32015-01-25 13:27:11 -08001550
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001551 len = 6;
1552
1553 if (vf_flush_wa)
1554 len += 6;
1555
1556 if (dc_flush_wa)
1557 len += 12;
1558
1559 ret = intel_ring_begin(request, len);
Oscar Mateo47122742014-07-24 17:04:28 +01001560 if (ret)
1561 return ret;
1562
Imre Deak9647ff32015-01-25 13:27:11 -08001563 if (vf_flush_wa) {
Chris Wilsonb5321f32016-08-02 22:50:18 +01001564 intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
1565 intel_ring_emit(ring, 0);
1566 intel_ring_emit(ring, 0);
1567 intel_ring_emit(ring, 0);
1568 intel_ring_emit(ring, 0);
1569 intel_ring_emit(ring, 0);
Imre Deak9647ff32015-01-25 13:27:11 -08001570 }
1571
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001572 if (dc_flush_wa) {
Chris Wilsonb5321f32016-08-02 22:50:18 +01001573 intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
1574 intel_ring_emit(ring, PIPE_CONTROL_DC_FLUSH_ENABLE);
1575 intel_ring_emit(ring, 0);
1576 intel_ring_emit(ring, 0);
1577 intel_ring_emit(ring, 0);
1578 intel_ring_emit(ring, 0);
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001579 }
1580
Chris Wilsonb5321f32016-08-02 22:50:18 +01001581 intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
1582 intel_ring_emit(ring, flags);
1583 intel_ring_emit(ring, scratch_addr);
1584 intel_ring_emit(ring, 0);
1585 intel_ring_emit(ring, 0);
1586 intel_ring_emit(ring, 0);
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001587
1588 if (dc_flush_wa) {
Chris Wilsonb5321f32016-08-02 22:50:18 +01001589 intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
1590 intel_ring_emit(ring, PIPE_CONTROL_CS_STALL);
1591 intel_ring_emit(ring, 0);
1592 intel_ring_emit(ring, 0);
1593 intel_ring_emit(ring, 0);
1594 intel_ring_emit(ring, 0);
Mika Kuoppala0b2d0932016-06-07 17:19:10 +03001595 }
1596
Chris Wilsonb5321f32016-08-02 22:50:18 +01001597 intel_ring_advance(ring);
Oscar Mateo47122742014-07-24 17:04:28 +01001598
1599 return 0;
1600}
1601
Chris Wilsonc04e0f32016-04-09 10:57:54 +01001602static void bxt_a_seqno_barrier(struct intel_engine_cs *engine)
Imre Deak319404d2015-08-14 18:35:27 +03001603{
Imre Deak319404d2015-08-14 18:35:27 +03001604 /*
1605 * On BXT A steppings there is a HW coherency issue whereby the
1606 * MI_STORE_DATA_IMM storing the completed request's seqno
1607 * occasionally doesn't invalidate the CPU cache. Work around this by
1608 * clflushing the corresponding cacheline whenever the caller wants
1609 * the coherency to be guaranteed. Note that this cacheline is known
1610 * to be clean at this point, since we only write it in
1611 * bxt_a_set_seqno(), where we also do a clflush after the write. So
1612 * this clflush in practice becomes an invalidate operation.
1613 */
Chris Wilsonc04e0f32016-04-09 10:57:54 +01001614 intel_flush_status_page(engine, I915_GEM_HWS_INDEX);
Imre Deak319404d2015-08-14 18:35:27 +03001615}
1616
Chris Wilson7c17d372016-01-20 15:43:35 +02001617/*
1618 * Reserve space for 2 NOOPs at the end of each request to be
1619 * used as a workaround for not being allowed to do lite
1620 * restore with HEAD==TAIL (WaIdleLiteRestore).
1621 */
1622#define WA_TAIL_DWORDS 2
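/*
 * Editor's sketch of how those reserved dwords could be consumed: after the
 * breadcrumb is emitted, the tail can be padded with NOOPs so that a lite
 * restore never observes HEAD==TAIL. This assumes the real padding lives in
 * intel_logical_ring_advance() elsewhere in this file; the helper below is
 * hypothetical and only illustrates the idea.
 */
static inline void wa_tail_emit_padding_sketch(struct intel_ring *ring)
{
	intel_ring_emit(ring, MI_NOOP);
	intel_ring_emit(ring, MI_NOOP);
	intel_ring_advance(ring);
}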
1623
John Harrisonc4e76632015-05-29 17:44:01 +01001624static int gen8_emit_request(struct drm_i915_gem_request *request)
Oscar Mateo4da46e12014-07-24 17:04:27 +01001625{
Chris Wilson7e37f882016-08-02 22:50:21 +01001626 struct intel_ring *ring = request->ring;
Oscar Mateo4da46e12014-07-24 17:04:27 +01001627 int ret;
1628
Chris Wilson987046a2016-04-28 09:56:46 +01001629 ret = intel_ring_begin(request, 6 + WA_TAIL_DWORDS);
Oscar Mateo4da46e12014-07-24 17:04:27 +01001630 if (ret)
1631 return ret;
1632
Chris Wilson7c17d372016-01-20 15:43:35 +02001633 /* w/a: bit 5 needs to be zero for MI_FLUSH_DW address. */
1634 BUILD_BUG_ON(I915_GEM_HWS_INDEX_ADDR & (1 << 5));
Oscar Mateo4da46e12014-07-24 17:04:27 +01001635
Chris Wilsonb5321f32016-08-02 22:50:18 +01001636 intel_ring_emit(ring, (MI_FLUSH_DW + 1) | MI_FLUSH_DW_OP_STOREDW);
1637 intel_ring_emit(ring,
1638 intel_hws_seqno_address(request->engine) |
1639 MI_FLUSH_DW_USE_GTT);
1640 intel_ring_emit(ring, 0);
1641 intel_ring_emit(ring, request->fence.seqno);
1642 intel_ring_emit(ring, MI_USER_INTERRUPT);
1643 intel_ring_emit(ring, MI_NOOP);
Chris Wilsonddd66c52016-08-02 22:50:31 +01001644 return intel_logical_ring_advance(request);
Chris Wilson7c17d372016-01-20 15:43:35 +02001645}
Oscar Mateo4da46e12014-07-24 17:04:27 +01001646
Chris Wilson7c17d372016-01-20 15:43:35 +02001647static int gen8_emit_request_render(struct drm_i915_gem_request *request)
1648{
Chris Wilson7e37f882016-08-02 22:50:21 +01001649 struct intel_ring *ring = request->ring;
Chris Wilson7c17d372016-01-20 15:43:35 +02001650 int ret;
1651
Chris Wilson987046a2016-04-28 09:56:46 +01001652 ret = intel_ring_begin(request, 8 + WA_TAIL_DWORDS);
Chris Wilson7c17d372016-01-20 15:43:35 +02001653 if (ret)
1654 return ret;
1655
Michał Winiarskice81a652016-04-12 15:51:55 +02001656	/* We're using a qword write; the seqno should be aligned to 8 bytes. */
1657 BUILD_BUG_ON(I915_GEM_HWS_INDEX & 1);
1658
Chris Wilson7c17d372016-01-20 15:43:35 +02001659	/* w/a: for post-sync ops following a GPGPU operation we
1660 * need a prior CS_STALL, which is emitted by the flush
1661 * following the batch.
Michel Thierry53292cd2015-04-15 18:11:33 +01001662 */
Chris Wilsonb5321f32016-08-02 22:50:18 +01001663 intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
1664 intel_ring_emit(ring,
1665 (PIPE_CONTROL_GLOBAL_GTT_IVB |
1666 PIPE_CONTROL_CS_STALL |
1667 PIPE_CONTROL_QW_WRITE));
1668 intel_ring_emit(ring, intel_hws_seqno_address(request->engine));
1669 intel_ring_emit(ring, 0);
1670 intel_ring_emit(ring, i915_gem_request_get_seqno(request));
Michał Winiarskice81a652016-04-12 15:51:55 +02001671 /* We're thrashing one dword of HWS. */
Chris Wilsonb5321f32016-08-02 22:50:18 +01001672 intel_ring_emit(ring, 0);
1673 intel_ring_emit(ring, MI_USER_INTERRUPT);
1674 intel_ring_emit(ring, MI_NOOP);
Chris Wilsonddd66c52016-08-02 22:50:31 +01001675 return intel_logical_ring_advance(request);
Oscar Mateo4da46e12014-07-24 17:04:27 +01001676}
1677
John Harrison87531812015-05-29 17:43:44 +01001678static int gen8_init_rcs_context(struct drm_i915_gem_request *req)
Thomas Daniele7778be2014-12-02 12:50:48 +00001679{
1680 int ret;
1681
John Harrisone2be4fa2015-05-29 17:43:54 +01001682 ret = intel_logical_ring_workarounds_emit(req);
Thomas Daniele7778be2014-12-02 12:50:48 +00001683 if (ret)
1684 return ret;
1685
Peter Antoine3bbaba02015-07-10 20:13:11 +03001686 ret = intel_rcs_context_init_mocs(req);
1687 /*
 1688	 * Failing to program the MOCS is non-fatal. The system will not
1689 * run at peak performance. So generate an error and carry on.
1690 */
1691 if (ret)
1692 DRM_ERROR("MOCS failed to program: expect performance issues.\n");
1693
Chris Wilsone40f9ee2016-08-02 22:50:36 +01001694 return i915_gem_render_state_init(req);
Thomas Daniele7778be2014-12-02 12:50:48 +00001695}
1696
Oscar Mateo73e4d072014-07-24 17:04:48 +01001697/**
1698 * intel_logical_ring_cleanup() - deallocate the Engine Command Streamer
Tvrtko Ursulin14bb2c12016-06-03 14:02:17 +01001699 * @engine: Engine Command Streamer.
Oscar Mateo73e4d072014-07-24 17:04:48 +01001700 */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001701void intel_logical_ring_cleanup(struct intel_engine_cs *engine)
Oscar Mateo454afeb2014-07-24 17:04:22 +01001702{
John Harrison6402c332014-10-31 12:00:26 +00001703 struct drm_i915_private *dev_priv;
Oscar Mateo9832b9d2014-07-24 17:04:30 +01001704
Tvrtko Ursulin117897f2016-03-16 11:00:40 +00001705 if (!intel_engine_initialized(engine))
Oscar Mateo48d82382014-07-24 17:04:23 +01001706 return;
1707
Tvrtko Ursulin27af5ee2016-04-04 12:11:56 +01001708 /*
 1709	 * Tasklet cannot be active at this point due to intel_mark_active/idle,
1710 * so this is just for documentation.
1711 */
1712 if (WARN_ON(test_bit(TASKLET_STATE_SCHED, &engine->irq_tasklet.state)))
1713 tasklet_kill(&engine->irq_tasklet);
1714
Chris Wilsonc0336662016-05-06 15:40:21 +01001715 dev_priv = engine->i915;
John Harrison6402c332014-10-31 12:00:26 +00001716
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001717 if (engine->buffer) {
1718 intel_logical_ring_stop(engine);
1719 WARN_ON((I915_READ_MODE(engine) & MODE_IDLE) == 0);
Dave Gordonb0366a52015-12-08 15:02:36 +00001720 }
Oscar Mateo48d82382014-07-24 17:04:23 +01001721
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001722 if (engine->cleanup)
1723 engine->cleanup(engine);
Oscar Mateo48d82382014-07-24 17:04:23 +01001724
Chris Wilson96a945a2016-08-03 13:19:16 +01001725 intel_engine_cleanup_common(engine);
Chris Wilson688e6c72016-07-01 17:23:15 +01001726
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001727 if (engine->status_page.obj) {
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001728 i915_gem_object_unpin_map(engine->status_page.obj);
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001729 engine->status_page.obj = NULL;
Oscar Mateo48d82382014-07-24 17:04:23 +01001730 }
Chris Wilson24f1d3c2016-04-28 09:56:53 +01001731 intel_lr_context_unpin(dev_priv->kernel_context, engine);
Arun Siluvery17ee9502015-06-19 19:07:01 +01001732
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001733 engine->idle_lite_restore_wa = 0;
1734 engine->disable_lite_restore_wa = false;
1735 engine->ctx_desc_template = 0;
Tvrtko Ursulinca825802016-01-15 15:10:27 +00001736
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001737 lrc_destroy_wa_ctx_obj(engine);
Chris Wilsonc0336662016-05-06 15:40:21 +01001738 engine->i915 = NULL;
Oscar Mateo454afeb2014-07-24 17:04:22 +01001739}
1740
Chris Wilsonddd66c52016-08-02 22:50:31 +01001741void intel_execlists_enable_submission(struct drm_i915_private *dev_priv)
1742{
1743 struct intel_engine_cs *engine;
1744
1745 for_each_engine(engine, dev_priv)
Chris Wilsonf4ea6bd2016-08-02 22:50:32 +01001746 engine->submit_request = execlists_submit_request;
Chris Wilsonddd66c52016-08-02 22:50:31 +01001747}
1748
Tvrtko Ursulinc9cacf92016-01-12 17:32:34 +00001749static void
Chris Wilsone1382ef2016-05-06 15:40:20 +01001750logical_ring_default_vfuncs(struct intel_engine_cs *engine)
Tvrtko Ursulinc9cacf92016-01-12 17:32:34 +00001751{
 1752	/* Default vfuncs which can be overridden by each engine. */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001753 engine->init_hw = gen8_init_common_ring;
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001754 engine->emit_flush = gen8_emit_flush;
Chris Wilsonddd66c52016-08-02 22:50:31 +01001755 engine->emit_request = gen8_emit_request;
Chris Wilsonf4ea6bd2016-08-02 22:50:32 +01001756 engine->submit_request = execlists_submit_request;
Chris Wilsonddd66c52016-08-02 22:50:31 +01001757
Chris Wilson31bb59c2016-07-01 17:23:27 +01001758 engine->irq_enable = gen8_logical_ring_enable_irq;
1759 engine->irq_disable = gen8_logical_ring_disable_irq;
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001760 engine->emit_bb_start = gen8_emit_bb_start;
Chris Wilson1b7744e2016-07-01 17:23:17 +01001761 if (IS_BXT_REVID(engine->i915, 0, BXT_REVID_A1))
Chris Wilsonc04e0f32016-04-09 10:57:54 +01001762 engine->irq_seqno_barrier = bxt_a_seqno_barrier;
Tvrtko Ursulinc9cacf92016-01-12 17:32:34 +00001763}
1764
Tvrtko Ursulind9f3af92016-01-12 17:32:35 +00001765static inline void
Dave Gordonc2c7f242016-07-13 16:03:35 +01001766logical_ring_default_irqs(struct intel_engine_cs *engine)
Tvrtko Ursulind9f3af92016-01-12 17:32:35 +00001767{
Dave Gordonc2c7f242016-07-13 16:03:35 +01001768 unsigned shift = engine->irq_shift;
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001769 engine->irq_enable_mask = GT_RENDER_USER_INTERRUPT << shift;
1770 engine->irq_keep_mask = GT_CONTEXT_SWITCH_INTERRUPT << shift;
Tvrtko Ursulind9f3af92016-01-12 17:32:35 +00001771}
1772
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001773static int
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001774lrc_setup_hws(struct intel_engine_cs *engine,
1775 struct drm_i915_gem_object *dctx_obj)
1776{
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001777 void *hws;
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001778
1779 /* The HWSP is part of the default context object in LRC mode. */
1780 engine->status_page.gfx_addr = i915_gem_obj_ggtt_offset(dctx_obj) +
1781 LRC_PPHWSP_PN * PAGE_SIZE;
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001782 hws = i915_gem_object_pin_map(dctx_obj);
1783 if (IS_ERR(hws))
1784 return PTR_ERR(hws);
1785 engine->status_page.page_addr = hws + LRC_PPHWSP_PN * PAGE_SIZE;
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001786 engine->status_page.obj = dctx_obj;
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001787
1788 return 0;
Tvrtko Ursulin04794ad2016-04-12 15:40:41 +01001789}
1790
Tvrtko Ursulinbb454382016-07-13 16:03:36 +01001791static void
1792logical_ring_setup(struct intel_engine_cs *engine)
1793{
1794 struct drm_i915_private *dev_priv = engine->i915;
1795 enum forcewake_domains fw_domains;
1796
Tvrtko Ursulin019bf272016-07-13 16:03:41 +01001797 intel_engine_setup_common(engine);
1798
Tvrtko Ursulinbb454382016-07-13 16:03:36 +01001799 /* Intentionally left blank. */
1800 engine->buffer = NULL;
1801
1802 fw_domains = intel_uncore_forcewake_for_reg(dev_priv,
1803 RING_ELSP(engine),
1804 FW_REG_WRITE);
1805
1806 fw_domains |= intel_uncore_forcewake_for_reg(dev_priv,
1807 RING_CONTEXT_STATUS_PTR(engine),
1808 FW_REG_READ | FW_REG_WRITE);
1809
1810 fw_domains |= intel_uncore_forcewake_for_reg(dev_priv,
1811 RING_CONTEXT_STATUS_BUF_BASE(engine),
1812 FW_REG_READ);
1813
1814 engine->fw_domains = fw_domains;
1815
Tvrtko Ursulinbb454382016-07-13 16:03:36 +01001816 tasklet_init(&engine->irq_tasklet,
1817 intel_lrc_irq_handler, (unsigned long)engine);
1818
1819 logical_ring_init_platform_invariants(engine);
1820 logical_ring_default_vfuncs(engine);
1821 logical_ring_default_irqs(engine);
Tvrtko Ursulinbb454382016-07-13 16:03:36 +01001822}
1823
Tvrtko Ursulina19d6ff2016-06-23 14:52:41 +01001824static int
1825logical_ring_init(struct intel_engine_cs *engine)
1826{
1827 struct i915_gem_context *dctx = engine->i915->kernel_context;
1828 int ret;
1829
Tvrtko Ursulin019bf272016-07-13 16:03:41 +01001830 ret = intel_engine_init_common(engine);
Tvrtko Ursulina19d6ff2016-06-23 14:52:41 +01001831 if (ret)
1832 goto error;
1833
1834 ret = execlists_context_deferred_alloc(dctx, engine);
1835 if (ret)
1836 goto error;
1837
1838 /* As this is the default context, always pin it */
1839 ret = intel_lr_context_pin(dctx, engine);
1840 if (ret) {
1841 DRM_ERROR("Failed to pin context for %s: %d\n",
1842 engine->name, ret);
1843 goto error;
1844 }
1845
1846 /* And setup the hardware status page. */
1847 ret = lrc_setup_hws(engine, dctx->engine[engine->id].state);
1848 if (ret) {
1849 DRM_ERROR("Failed to set up hws %s: %d\n", engine->name, ret);
1850 goto error;
1851 }
1852
1853 return 0;
1854
1855error:
1856 intel_logical_ring_cleanup(engine);
1857 return ret;
1858}
1859
Tvrtko Ursulin88d2ba22016-07-13 16:03:40 +01001860int logical_render_ring_init(struct intel_engine_cs *engine)
Tvrtko Ursulina19d6ff2016-06-23 14:52:41 +01001861{
1862 struct drm_i915_private *dev_priv = engine->i915;
1863 int ret;
1864
Tvrtko Ursulinbb454382016-07-13 16:03:36 +01001865 logical_ring_setup(engine);
1866
Tvrtko Ursulina19d6ff2016-06-23 14:52:41 +01001867 if (HAS_L3_DPF(dev_priv))
1868 engine->irq_keep_mask |= GT_RENDER_L3_PARITY_ERROR_INTERRUPT;
1869
1870 /* Override some for render ring. */
1871 if (INTEL_GEN(dev_priv) >= 9)
1872 engine->init_hw = gen9_init_render_ring;
1873 else
1874 engine->init_hw = gen8_init_render_ring;
1875 engine->init_context = gen8_init_rcs_context;
1876 engine->cleanup = intel_fini_pipe_control;
1877 engine->emit_flush = gen8_emit_flush_render;
1878 engine->emit_request = gen8_emit_request_render;
1879
Chris Wilson7d5ea802016-07-01 17:23:20 +01001880 ret = intel_init_pipe_control(engine, 4096);
Tvrtko Ursulina19d6ff2016-06-23 14:52:41 +01001881 if (ret)
1882 return ret;
1883
1884 ret = intel_init_workaround_bb(engine);
1885 if (ret) {
1886 /*
 1887		 * We continue even if we fail to initialize the WA batch
 1888		 * because we only expect rare glitches, nothing
 1889		 * critical enough to prevent us from using the GPU.
1890 */
1891 DRM_ERROR("WA batch buffer initialization failed: %d\n",
1892 ret);
1893 }
1894
1895 ret = logical_ring_init(engine);
 1896	if (ret)
 1897		lrc_destroy_wa_ctx_obj(engine);
1899
1900 return ret;
1901}
1902
Tvrtko Ursulin88d2ba22016-07-13 16:03:40 +01001903int logical_xcs_ring_init(struct intel_engine_cs *engine)
Tvrtko Ursulinbb454382016-07-13 16:03:36 +01001904{
1905 logical_ring_setup(engine);
1906
1907 return logical_ring_init(engine);
1908}
1909
Jeff McGee0cea6502015-02-13 10:27:56 -06001910static u32
Chris Wilsonc0336662016-05-06 15:40:21 +01001911make_rpcs(struct drm_i915_private *dev_priv)
Jeff McGee0cea6502015-02-13 10:27:56 -06001912{
1913 u32 rpcs = 0;
1914
1915 /*
1916 * No explicit RPCS request is needed to ensure full
1917 * slice/subslice/EU enablement prior to Gen9.
1918 */
Chris Wilsonc0336662016-05-06 15:40:21 +01001919 if (INTEL_GEN(dev_priv) < 9)
Jeff McGee0cea6502015-02-13 10:27:56 -06001920 return 0;
1921
1922 /*
1923 * Starting in Gen9, render power gating can leave
1924 * slice/subslice/EU in a partially enabled state. We
1925 * must make an explicit request through RPCS for full
1926 * enablement.
1927 */
Chris Wilsonc0336662016-05-06 15:40:21 +01001928 if (INTEL_INFO(dev_priv)->has_slice_pg) {
Jeff McGee0cea6502015-02-13 10:27:56 -06001929 rpcs |= GEN8_RPCS_S_CNT_ENABLE;
Chris Wilsonc0336662016-05-06 15:40:21 +01001930 rpcs |= INTEL_INFO(dev_priv)->slice_total <<
Jeff McGee0cea6502015-02-13 10:27:56 -06001931 GEN8_RPCS_S_CNT_SHIFT;
1932 rpcs |= GEN8_RPCS_ENABLE;
1933 }
1934
Chris Wilsonc0336662016-05-06 15:40:21 +01001935 if (INTEL_INFO(dev_priv)->has_subslice_pg) {
Jeff McGee0cea6502015-02-13 10:27:56 -06001936 rpcs |= GEN8_RPCS_SS_CNT_ENABLE;
Chris Wilsonc0336662016-05-06 15:40:21 +01001937 rpcs |= INTEL_INFO(dev_priv)->subslice_per_slice <<
Jeff McGee0cea6502015-02-13 10:27:56 -06001938 GEN8_RPCS_SS_CNT_SHIFT;
1939 rpcs |= GEN8_RPCS_ENABLE;
1940 }
1941
Chris Wilsonc0336662016-05-06 15:40:21 +01001942 if (INTEL_INFO(dev_priv)->has_eu_pg) {
1943 rpcs |= INTEL_INFO(dev_priv)->eu_per_subslice <<
Jeff McGee0cea6502015-02-13 10:27:56 -06001944 GEN8_RPCS_EU_MIN_SHIFT;
Chris Wilsonc0336662016-05-06 15:40:21 +01001945 rpcs |= INTEL_INFO(dev_priv)->eu_per_subslice <<
Jeff McGee0cea6502015-02-13 10:27:56 -06001946 GEN8_RPCS_EU_MAX_SHIFT;
1947 rpcs |= GEN8_RPCS_ENABLE;
1948 }
1949
1950 return rpcs;
1951}
1952
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001953static u32 intel_lr_indirect_ctx_offset(struct intel_engine_cs *engine)
Michel Thierry71562912016-02-23 10:31:49 +00001954{
1955 u32 indirect_ctx_offset;
1956
Chris Wilsonc0336662016-05-06 15:40:21 +01001957 switch (INTEL_GEN(engine->i915)) {
Michel Thierry71562912016-02-23 10:31:49 +00001958 default:
Chris Wilsonc0336662016-05-06 15:40:21 +01001959 MISSING_CASE(INTEL_GEN(engine->i915));
Michel Thierry71562912016-02-23 10:31:49 +00001960 /* fall through */
1961 case 9:
1962 indirect_ctx_offset =
1963 GEN9_CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT;
1964 break;
1965 case 8:
1966 indirect_ctx_offset =
1967 GEN8_CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT;
1968 break;
1969 }
1970
1971 return indirect_ctx_offset;
1972}
1973
Oscar Mateo8670d6f2014-07-24 17:04:17 +01001974static int
Chris Wilsone2efd132016-05-24 14:53:34 +01001975populate_lr_context(struct i915_gem_context *ctx,
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001976 struct drm_i915_gem_object *ctx_obj,
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00001977 struct intel_engine_cs *engine,
Chris Wilson7e37f882016-08-02 22:50:21 +01001978 struct intel_ring *ring)
Oscar Mateo8670d6f2014-07-24 17:04:17 +01001979{
Chris Wilsonc0336662016-05-06 15:40:21 +01001980 struct drm_i915_private *dev_priv = ctx->i915;
Daniel Vetterae6c4802014-08-06 15:04:53 +02001981 struct i915_hw_ppgtt *ppgtt = ctx->ppgtt;
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001982 void *vaddr;
1983 u32 *reg_state;
Oscar Mateo8670d6f2014-07-24 17:04:17 +01001984 int ret;
1985
Thomas Daniel2d965532014-08-19 10:13:36 +01001986 if (!ppgtt)
1987 ppgtt = dev_priv->mm.aliasing_ppgtt;
1988
Oscar Mateo8670d6f2014-07-24 17:04:17 +01001989 ret = i915_gem_object_set_to_cpu_domain(ctx_obj, true);
1990 if (ret) {
1991 DRM_DEBUG_DRIVER("Could not set to CPU domain\n");
1992 return ret;
1993 }
1994
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01001995 vaddr = i915_gem_object_pin_map(ctx_obj);
1996 if (IS_ERR(vaddr)) {
1997 ret = PTR_ERR(vaddr);
1998 DRM_DEBUG_DRIVER("Could not map object pages! (%d)\n", ret);
Oscar Mateo8670d6f2014-07-24 17:04:17 +01001999 return ret;
2000 }
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01002001 ctx_obj->dirty = true;
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002002
2003 /* The second page of the context object contains some fields which must
2004 * be set up prior to the first execution. */
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01002005 reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002006
2007 /* A context is actually a big batch buffer with several MI_LOAD_REGISTER_IMM
2008 * commands followed by (reg, value) pairs. The values we are setting here are
2009 * only for the first context restore: on a subsequent save, the GPU will
2010 * recreate this batchbuffer with new values (including all the missing
2011 * MI_LOAD_REGISTER_IMM commands that we are not initializing here). */
Ville Syrjälä0d925ea2015-11-04 23:20:11 +02002012 reg_state[CTX_LRI_HEADER_0] =
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002013 MI_LOAD_REGISTER_IMM(engine->id == RCS ? 14 : 11) | MI_LRI_FORCE_POSTED;
2014 ASSIGN_CTX_REG(reg_state, CTX_CONTEXT_CONTROL,
2015 RING_CONTEXT_CONTROL(engine),
Ville Syrjälä0d925ea2015-11-04 23:20:11 +02002016 _MASKED_BIT_ENABLE(CTX_CTRL_INHIBIT_SYN_CTX_SWITCH |
2017 CTX_CTRL_ENGINE_CTX_RESTORE_INHIBIT |
Chris Wilsonc0336662016-05-06 15:40:21 +01002018 (HAS_RESOURCE_STREAMER(dev_priv) ?
Michel Thierry99cf8ea2016-02-25 09:48:58 +00002019 CTX_CTRL_RS_CTX_ENABLE : 0)));
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002020 ASSIGN_CTX_REG(reg_state, CTX_RING_HEAD, RING_HEAD(engine->mmio_base),
2021 0);
2022 ASSIGN_CTX_REG(reg_state, CTX_RING_TAIL, RING_TAIL(engine->mmio_base),
2023 0);
Thomas Daniel7ba717c2014-11-13 10:28:56 +00002024 /* Ring buffer start address is not known until the buffer is pinned.
2025 * It is written to the context image in execlists_update_context()
2026 */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002027 ASSIGN_CTX_REG(reg_state, CTX_RING_BUFFER_START,
2028 RING_START(engine->mmio_base), 0);
2029 ASSIGN_CTX_REG(reg_state, CTX_RING_BUFFER_CONTROL,
2030 RING_CTL(engine->mmio_base),
Chris Wilson7e37f882016-08-02 22:50:21 +01002031 ((ring->size - PAGE_SIZE) & RING_NR_PAGES) | RING_VALID);
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002032 ASSIGN_CTX_REG(reg_state, CTX_BB_HEAD_U,
2033 RING_BBADDR_UDW(engine->mmio_base), 0);
2034 ASSIGN_CTX_REG(reg_state, CTX_BB_HEAD_L,
2035 RING_BBADDR(engine->mmio_base), 0);
2036 ASSIGN_CTX_REG(reg_state, CTX_BB_STATE,
2037 RING_BBSTATE(engine->mmio_base),
Ville Syrjälä0d925ea2015-11-04 23:20:11 +02002038 RING_BB_PPGTT);
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002039 ASSIGN_CTX_REG(reg_state, CTX_SECOND_BB_HEAD_U,
2040 RING_SBBADDR_UDW(engine->mmio_base), 0);
2041 ASSIGN_CTX_REG(reg_state, CTX_SECOND_BB_HEAD_L,
2042 RING_SBBADDR(engine->mmio_base), 0);
2043 ASSIGN_CTX_REG(reg_state, CTX_SECOND_BB_STATE,
2044 RING_SBBSTATE(engine->mmio_base), 0);
2045 if (engine->id == RCS) {
2046 ASSIGN_CTX_REG(reg_state, CTX_BB_PER_CTX_PTR,
2047 RING_BB_PER_CTX_PTR(engine->mmio_base), 0);
2048 ASSIGN_CTX_REG(reg_state, CTX_RCS_INDIRECT_CTX,
2049 RING_INDIRECT_CTX(engine->mmio_base), 0);
2050 ASSIGN_CTX_REG(reg_state, CTX_RCS_INDIRECT_CTX_OFFSET,
2051 RING_INDIRECT_CTX_OFFSET(engine->mmio_base), 0);
2052 if (engine->wa_ctx.obj) {
2053 struct i915_ctx_workarounds *wa_ctx = &engine->wa_ctx;
Arun Siluvery17ee9502015-06-19 19:07:01 +01002054 uint32_t ggtt_offset = i915_gem_obj_ggtt_offset(wa_ctx->obj);
2055
2056 reg_state[CTX_RCS_INDIRECT_CTX+1] =
2057 (ggtt_offset + wa_ctx->indirect_ctx.offset * sizeof(uint32_t)) |
2058 (wa_ctx->indirect_ctx.size / CACHELINE_DWORDS);
2059
2060 reg_state[CTX_RCS_INDIRECT_CTX_OFFSET+1] =
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002061 intel_lr_indirect_ctx_offset(engine) << 6;
Arun Siluvery17ee9502015-06-19 19:07:01 +01002062
2063 reg_state[CTX_BB_PER_CTX_PTR+1] =
2064 (ggtt_offset + wa_ctx->per_ctx.offset * sizeof(uint32_t)) |
2065 0x01;
2066 }
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002067 }
Ville Syrjälä0d925ea2015-11-04 23:20:11 +02002068 reg_state[CTX_LRI_HEADER_1] = MI_LOAD_REGISTER_IMM(9) | MI_LRI_FORCE_POSTED;
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002069 ASSIGN_CTX_REG(reg_state, CTX_CTX_TIMESTAMP,
2070 RING_CTX_TIMESTAMP(engine->mmio_base), 0);
Ville Syrjälä0d925ea2015-11-04 23:20:11 +02002071	/* PDP values will be assigned later if needed */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002072 ASSIGN_CTX_REG(reg_state, CTX_PDP3_UDW, GEN8_RING_PDP_UDW(engine, 3),
2073 0);
2074 ASSIGN_CTX_REG(reg_state, CTX_PDP3_LDW, GEN8_RING_PDP_LDW(engine, 3),
2075 0);
2076 ASSIGN_CTX_REG(reg_state, CTX_PDP2_UDW, GEN8_RING_PDP_UDW(engine, 2),
2077 0);
2078 ASSIGN_CTX_REG(reg_state, CTX_PDP2_LDW, GEN8_RING_PDP_LDW(engine, 2),
2079 0);
2080 ASSIGN_CTX_REG(reg_state, CTX_PDP1_UDW, GEN8_RING_PDP_UDW(engine, 1),
2081 0);
2082 ASSIGN_CTX_REG(reg_state, CTX_PDP1_LDW, GEN8_RING_PDP_LDW(engine, 1),
2083 0);
2084 ASSIGN_CTX_REG(reg_state, CTX_PDP0_UDW, GEN8_RING_PDP_UDW(engine, 0),
2085 0);
2086 ASSIGN_CTX_REG(reg_state, CTX_PDP0_LDW, GEN8_RING_PDP_LDW(engine, 0),
2087 0);
Michel Thierryd7b26332015-04-08 12:13:34 +01002088
Michel Thierry2dba3232015-07-30 11:06:23 +01002089 if (USES_FULL_48BIT_PPGTT(ppgtt->base.dev)) {
2090 /* 64b PPGTT (48bit canonical)
 2091		 * PDP0_DESCRIPTOR contains the base address of the PML4; the
 2092		 * other PDP descriptors are ignored.
2093 */
2094 ASSIGN_CTX_PML4(ppgtt, reg_state);
2095 } else {
2096 /* 32b PPGTT
2097 * PDP*_DESCRIPTOR contains the base address of space supported.
2098 * With dynamic page allocation, PDPs may not be allocated at
2099 * this point. Point the unallocated PDPs to the scratch page
2100 */
Tvrtko Ursulinc6a2ac72016-02-26 16:58:32 +00002101 execlists_update_context_pdps(ppgtt, reg_state);
Michel Thierry2dba3232015-07-30 11:06:23 +01002102 }
2103
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002104 if (engine->id == RCS) {
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002105 reg_state[CTX_LRI_HEADER_2] = MI_LOAD_REGISTER_IMM(1);
Ville Syrjälä0d925ea2015-11-04 23:20:11 +02002106 ASSIGN_CTX_REG(reg_state, CTX_R_PWR_CLK_STATE, GEN8_R_PWR_CLK_STATE,
Chris Wilsonc0336662016-05-06 15:40:21 +01002107 make_rpcs(dev_priv));
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002108 }
2109
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01002110 i915_gem_object_unpin_map(ctx_obj);
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002111
2112 return 0;
2113}
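/*
 * Editor's illustration of the (reg, value) pair layout described in the
 * comment near the top of populate_lr_context(): each register entry in the
 * context image occupies two consecutive dwords, the register offset followed
 * by its value. Hypothetical helper, shown only as a sketch of what the
 * ASSIGN_CTX_REG() invocations above amount to; it is not used by the driver.
 */
static inline void ctx_reg_pair_sketch(u32 *reg_state, int pos,
				       i915_reg_t reg, u32 val)
{
	reg_state[pos + 0] = i915_mmio_reg_offset(reg);
	reg_state[pos + 1] = val;
}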
2114
Oscar Mateo73e4d072014-07-24 17:04:48 +01002115/**
Dave Gordonc5d46ee2016-01-05 12:21:33 +00002116 * intel_lr_context_size() - return the size of the context for an engine
Tvrtko Ursulin14bb2c12016-06-03 14:02:17 +01002117 * @engine: which engine to find the context size for
Dave Gordonc5d46ee2016-01-05 12:21:33 +00002118 *
2119 * Each engine may require a different amount of space for a context image,
2120 * so when allocating (or copying) an image, this function can be used to
2121 * find the right size for the specific engine.
2122 *
2123 * Return: size (in bytes) of an engine-specific context image
2124 *
2125 * Note: this size includes the HWSP, which is part of the context image
2126 * in LRC mode, but does not include the "shared data page" used with
2127 * GuC submission. The caller should account for this if using the GuC.
2128 */
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002129uint32_t intel_lr_context_size(struct intel_engine_cs *engine)
Oscar Mateo8c8579172014-07-24 17:04:14 +01002130{
2131 int ret = 0;
2132
Chris Wilsonc0336662016-05-06 15:40:21 +01002133 WARN_ON(INTEL_GEN(engine->i915) < 8);
Oscar Mateo8c8579172014-07-24 17:04:14 +01002134
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002135 switch (engine->id) {
Oscar Mateo8c8579172014-07-24 17:04:14 +01002136 case RCS:
Chris Wilsonc0336662016-05-06 15:40:21 +01002137 if (INTEL_GEN(engine->i915) >= 9)
Michael H. Nguyen468c6812014-11-13 17:51:49 +00002138 ret = GEN9_LR_CONTEXT_RENDER_SIZE;
2139 else
2140 ret = GEN8_LR_CONTEXT_RENDER_SIZE;
Oscar Mateo8c8579172014-07-24 17:04:14 +01002141 break;
2142 case VCS:
2143 case BCS:
2144 case VECS:
2145 case VCS2:
2146 ret = GEN8_LR_CONTEXT_OTHER_SIZE;
2147 break;
2148 }
2149
2150 return ret;
Oscar Mateoede7d422014-07-24 17:04:12 +01002151}
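
/*
 * Usage sketch (editor's illustration, not part of the original driver; the
 * helper name is hypothetical): callers such as
 * execlists_context_deferred_alloc() below round this size up to page
 * granularity and add one page of data shared with the GuC.
 */
static inline uint32_t example_lr_backing_object_size(struct intel_engine_cs *engine)
{
	uint32_t size = round_up(intel_lr_context_size(engine), 4096);

	/* One extra page for data shared between the driver and the GuC */
	size += PAGE_SIZE * LRC_PPHWSP_PN;

	return size;
}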
2152
Chris Wilsone2efd132016-05-24 14:53:34 +01002153static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
Chris Wilson978f1e02016-04-28 09:56:54 +01002154 struct intel_engine_cs *engine)
Oscar Mateoede7d422014-07-24 17:04:12 +01002155{
Oscar Mateo8c8579172014-07-24 17:04:14 +01002156 struct drm_i915_gem_object *ctx_obj;
Chris Wilson9021ad02016-05-24 14:53:37 +01002157 struct intel_context *ce = &ctx->engine[engine->id];
Oscar Mateo8c8579172014-07-24 17:04:14 +01002158 uint32_t context_size;
Chris Wilson7e37f882016-08-02 22:50:21 +01002159 struct intel_ring *ring;
Oscar Mateo8c8579172014-07-24 17:04:14 +01002160 int ret;
2161
Chris Wilson9021ad02016-05-24 14:53:37 +01002162 WARN_ON(ce->state);
Oscar Mateoede7d422014-07-24 17:04:12 +01002163
Tvrtko Ursulin0bc40be2016-03-16 11:00:37 +00002164 context_size = round_up(intel_lr_context_size(engine), 4096);
Oscar Mateo8c8579172014-07-24 17:04:14 +01002165
Alex Daid1675192015-08-12 15:43:43 +01002166	/* One extra page for data shared between the driver and the GuC */
2167 context_size += PAGE_SIZE * LRC_PPHWSP_PN;
2168
Chris Wilson91c8a322016-07-05 10:40:23 +01002169 ctx_obj = i915_gem_object_create(&ctx->i915->drm, context_size);
Chris Wilsonfe3db792016-04-25 13:32:13 +01002170 if (IS_ERR(ctx_obj)) {
Dan Carpenter3126a662015-04-30 17:30:50 +03002171 DRM_DEBUG_DRIVER("Alloc LRC backing obj failed.\n");
Chris Wilsonfe3db792016-04-25 13:32:13 +01002172 return PTR_ERR(ctx_obj);
Oscar Mateo8c8579172014-07-24 17:04:14 +01002173 }
2174
Chris Wilson7e37f882016-08-02 22:50:21 +01002175 ring = intel_engine_create_ring(engine, ctx->ring_size);
Chris Wilsondca33ec2016-08-02 22:50:20 +01002176 if (IS_ERR(ring)) {
2177 ret = PTR_ERR(ring);
Nick Hoathe84fe802015-09-11 12:53:46 +01002178 goto error_deref_obj;
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002179 }
2180
Chris Wilsondca33ec2016-08-02 22:50:20 +01002181 ret = populate_lr_context(ctx, ctx_obj, engine, ring);
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002182 if (ret) {
2183 DRM_DEBUG_DRIVER("Failed to populate LRC: %d\n", ret);
Chris Wilsondca33ec2016-08-02 22:50:20 +01002184 goto error_ring_free;
Oscar Mateo84c23772014-07-24 17:04:15 +01002185 }
2186
Chris Wilsondca33ec2016-08-02 22:50:20 +01002187 ce->ring = ring;
Chris Wilson9021ad02016-05-24 14:53:37 +01002188 ce->state = ctx_obj;
2189 ce->initialised = engine->init_context == NULL;
Oscar Mateoede7d422014-07-24 17:04:12 +01002190
2191 return 0;
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002192
Chris Wilsondca33ec2016-08-02 22:50:20 +01002193error_ring_free:
Chris Wilson7e37f882016-08-02 22:50:21 +01002194 intel_ring_free(ring);
Nick Hoathe84fe802015-09-11 12:53:46 +01002195error_deref_obj:
Chris Wilsonf8c417c2016-07-20 13:31:53 +01002196 i915_gem_object_put(ctx_obj);
Chris Wilsondca33ec2016-08-02 22:50:20 +01002197 ce->ring = NULL;
Chris Wilson9021ad02016-05-24 14:53:37 +01002198 ce->state = NULL;
Oscar Mateo8670d6f2014-07-24 17:04:17 +01002199 return ret;
Oscar Mateoede7d422014-07-24 17:04:12 +01002200}
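
/*
 * Editor's sketch of the expected calling pattern (the wrapper name is
 * hypothetical): the WARN_ON(ce->state) above suggests that callers only
 * perform the deferred allocation when the per-engine context state has not
 * been created yet.
 */
static int example_ensure_lr_context(struct i915_gem_context *ctx,
				     struct intel_engine_cs *engine)
{
	struct intel_context *ce = &ctx->engine[engine->id];

	if (ce->state)
		return 0;

	return execlists_context_deferred_alloc(ctx, engine);
}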
Thomas Daniel3e5b6f02015-02-16 16:12:53 +00002201
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01002202void intel_lr_context_reset(struct drm_i915_private *dev_priv,
Chris Wilsone2efd132016-05-24 14:53:34 +01002203 struct i915_gem_context *ctx)
Thomas Daniel3e5b6f02015-02-16 16:12:53 +00002204{
Tvrtko Ursuline2f80392016-03-16 11:00:36 +00002205 struct intel_engine_cs *engine;
Thomas Daniel3e5b6f02015-02-16 16:12:53 +00002206
Dave Gordonb4ac5af2016-03-24 11:20:38 +00002207 for_each_engine(engine, dev_priv) {
Chris Wilson9021ad02016-05-24 14:53:37 +01002208 struct intel_context *ce = &ctx->engine[engine->id];
2209 struct drm_i915_gem_object *ctx_obj = ce->state;
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01002210 void *vaddr;
Thomas Daniel3e5b6f02015-02-16 16:12:53 +00002211 uint32_t *reg_state;
Thomas Daniel3e5b6f02015-02-16 16:12:53 +00002212
2213 if (!ctx_obj)
2214 continue;
2215
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01002216 vaddr = i915_gem_object_pin_map(ctx_obj);
2217 if (WARN_ON(IS_ERR(vaddr)))
Thomas Daniel3e5b6f02015-02-16 16:12:53 +00002218 continue;
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01002219
2220 reg_state = vaddr + LRC_STATE_PN * PAGE_SIZE;
2221 ctx_obj->dirty = true;
Thomas Daniel3e5b6f02015-02-16 16:12:53 +00002222
2223 reg_state[CTX_RING_HEAD+1] = 0;
2224 reg_state[CTX_RING_TAIL+1] = 0;
2225
Tvrtko Ursulin7d774ca2016-04-12 15:40:42 +01002226 i915_gem_object_unpin_map(ctx_obj);
Thomas Daniel3e5b6f02015-02-16 16:12:53 +00002227
Chris Wilsondca33ec2016-08-02 22:50:20 +01002228 ce->ring->head = 0;
2229 ce->ring->tail = 0;
Thomas Daniel3e5b6f02015-02-16 16:12:53 +00002230 }
2231}
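
/*
 * Note (editor's comment): intel_lr_context_reset() only rewinds the
 * software view of each ring, zeroing RING_HEAD/RING_TAIL in the saved
 * register state and resetting the intel_ring bookkeeping; it does not touch
 * the hardware itself, so it is presumably invoked from a wider GPU reset path.
 */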