drm/amdgpu: use a job pointer to replace the voluntary flag
That way we know which job caused the hang and can do a
per-scheduler reset/recovery instead of resetting all
schedulers.
Signed-off-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
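For context, a minimal sketch of the two call sites the new signature serves;
the handler names and bodies below are assumptions modeled on the
guest/hypervisor split described in the kernel-doc change, not part of this
patch:

/* Guest-initiated reset: the scheduler timeout handler knows which
 * job hung, so it passes that job (a non-NULL job marks the reset
 * as guest-requested, i.e. the old "voluntary" case). */
static void amdgpu_job_timedout(struct amd_sched_job *s_job)
{
	struct amdgpu_job *job = to_amdgpu_job(s_job);

	if (amdgpu_sriov_vf(job->adev))
		amdgpu_sriov_gpu_reset(job->adev, job);
}

/* Hypervisor-initiated reset (FLR notification): no hung job is
 * known, so NULL is passed and full GPU access is re-requested. */
static void xgpu_flr_work_func(struct work_struct *work)
{
	struct amdgpu_virt *virt = container_of(work, struct amdgpu_virt, flr_work);
	struct amdgpu_device *adev = container_of(virt, struct amdgpu_device, virt);

	amdgpu_sriov_gpu_reset(adev, NULL);
}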
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 5b8f7e5..41c1870 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2609,14 +2609,13 @@ static int amdgpu_recover_vram_from_shadow(struct amdgpu_device *adev,
* amdgpu_sriov_gpu_reset - reset the asic
*
* @adev: amdgpu device pointer
- * @voluntary: if this reset is requested by guest.
- * (true means by guest and false means by HYPERVISOR )
+ * @job: the job that triggered the hang, or NULL when the reset is requested by the hypervisor
*
* Attempt the reset the GPU if it has hung (all asics).
* for SRIOV case.
* Returns 0 for success or an error on failure.
*/
-int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, bool voluntary)
+int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job *job)
{
int i, r = 0;
int resched;
@@ -2646,7 +2645,7 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, bool voluntary)
amdgpu_fence_driver_force_completion(adev);
/* request to take full control of GPU before re-initialization */
- if (voluntary)
+ if (job)
amdgpu_virt_reset_gpu(adev);
else
amdgpu_virt_request_full_gpu(adev, true);
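
With the job pointer in hand, the per-scheduler recovery that the commit
message anticipates could look roughly like the hypothetical sketch below;
this patch does not implement it, and the ring/scheduler field names are
illustrative assumptions:

	/* Hypothetical follow-up inside the reset path: recover only the
	 * scheduler that owns the hung job instead of all schedulers. */
	if (job) {
		struct amdgpu_ring *ring = job->ring;

		amd_sched_hw_job_reset(&ring->sched);
		/* ... re-initialize the hung engine here ... */
		amd_sched_job_recovery(&ring->sched);
	} else {
		for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
			struct amdgpu_ring *ring = adev->rings[i];

			if (ring && ring->sched.thread)
				amd_sched_job_recovery(&ring->sched);
		}
	}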