drm/nouveau/pmu: be more strict about locking
When we start communicating with the PMU a bit more, the locking in the
current code becomes a real issue: the wait for a free FIFO slot runs
without the mutex held, and the mutex is only taken at all when the
caller waits for a synchronous reply. I encountered a deadlock here
while testing my dynamic reclocking code.
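To make the hazard concrete, here is a minimal userspace analogue of the
old ordering (illustrative only, not driver code; the pthread mutex and
the slot_owners counter are stand-ins for subdev->mutex and the PMU
command FIFO): the free-slot check runs before any locking, so two
senders can both see the slot as free.

  /* Sketch under the assumptions above; build with: cc -pthread race.c */
  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
  static int slot_owners;   /* stand-in for the single PMU FIFO slot */

  static void *send_old(void *arg)
  {
          (void)arg;
          /* Old ordering: the slot check happens with no lock held, so
           * both threads can pass it before either claims the slot. */
          if (slot_owners == 0) {
                  pthread_mutex_lock(&mtx);   /* taken too late, and only
                                               * on the reply path in the
                                               * real code */
                  slot_owners++;
                  pthread_mutex_unlock(&mtx);
          }
          return NULL;
  }

  int main(void)
  {
          pthread_t a, b;
          pthread_create(&a, NULL, send_old, NULL);
          pthread_create(&b, NULL, send_old, NULL);
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          /* With unlucky scheduling this prints 2: both senders believed
           * they owned the slot. */
          printf("slot owners: %d\n", slot_owners);
          return 0;
  }
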
Signed-off-by: Karol Herbst <nouveau@karolherbst.de>
Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
index d95eb86..6e6d2ef 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/base.c
@@ -40,21 +40,23 @@
 	struct nvkm_device *device = subdev->device;
 	u32 addr;
 
+	mutex_lock(&subdev->mutex);
 	/* wait for a free slot in the fifo */
 	addr = nvkm_rd32(device, 0x10a4a0);
 	if (nvkm_msec(device, 2000,
 		u32 tmp = nvkm_rd32(device, 0x10a4b0);
 		if (tmp != (addr ^ 8))
 			break;
-	) < 0)
+	) < 0) {
+		mutex_unlock(&subdev->mutex);
 		return -EBUSY;
+	}
 
 	/* we currently only support a single process at a time waiting
 	 * on a synchronous reply, take the PMU mutex and tell the
 	 * receive handler what we're waiting for
 	 */
 	if (reply) {
-		mutex_lock(&subdev->mutex);
 		pmu->recv.message = message;
 		pmu->recv.process = process;
 	}
@@ -81,9 +83,9 @@
 		wait_event(pmu->recv.wait, (pmu->recv.process == 0));
 		reply[0] = pmu->recv.data[0];
 		reply[1] = pmu->recv.data[1];
-		mutex_unlock(&subdev->mutex);
 	}
 
+	mutex_unlock(&subdev->mutex);
 	return 0;
 }
 
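For reference, the send path after this patch has the following shape:
lock before probing the FIFO, drop the lock on the timeout path, and
otherwise hold it until any synchronous reply has been consumed. Again
an illustrative userspace sketch, not the driver code; wait_for_slot()
stands in for the nvkm_msec() poll above.

  /* Sketch under the assumptions above; build with: cc -pthread shape.c */
  #include <errno.h>
  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
  static int slot_free = 1;
  static int recv_data[2] = { 1, 2 };

  /* Stand-in for the nvkm_msec() wait on a free FIFO slot. */
  static int wait_for_slot(void)
  {
          return slot_free ? 0 : -EBUSY;
  }

  static int send_new(int reply[2])
  {
          pthread_mutex_lock(&mtx);           /* before any FIFO access */

          if (wait_for_slot() < 0) {
                  pthread_mutex_unlock(&mtx); /* timeout path drops the lock */
                  return -EBUSY;
          }

          /* ... queue the message ... */

          if (reply) {
                  /* the real code sleeps in wait_event() until the
                   * receive handler fills in the data */
                  reply[0] = recv_data[0];
                  reply[1] = recv_data[1];
          }

          pthread_mutex_unlock(&mtx);         /* single unlock once the
                                               * reply, if any, has been
                                               * consumed */
          return 0;
  }

  int main(void)
  {
          int reply[2] = { 0, 0 };
          int ret = send_new(reply);
          printf("send: %d (reply %d %d)\n", ret, reply[0], reply[1]);
          return 0;
  }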