iopoll: Use ktime_get() instead of jiffies for timeout calculations
Presently, small timeout values will round up to 1 jiffy which,
on CONFIG_HZ=100 systems, corresponds to a full 10 milliseconds.
This is undesirable for drivers specifying timeouts of only a
few tens of microseconds (which is common), since they may end
up tight-loop spinning for hundreds or thousands of times longer
than expected before reporting a timeout.
Additionally, jiffies cannot be reliably used with time_after()
when the value of jiffies is small (like 1). In rare but real
scenarios, jiffies may actually increment by more than 1 at a
time. Specifically, this will occur if interrupts are disabled
for more than 1 jiffy (10 milliseconds for CONFIG_HZ=100) on
the CPU responsible for incrementing jiffies.
If interrupts are re-enabled on that CPU between the time the
iopoll code (on another CPU) calculates the timeout value of
jiffies and when the time_after() comparison is made, then the
iopoll APIs may return -ETIMEDOUT prematurely, even though the
specified timeout has not actually expired.
Using ktime_get() avoids this problem (which arguably also needs
a generic fix to avoid similar problems for other code that uses
jiffies to calculate timeouts).
CRs-Fixed: 587801
Change-Id: I19150f41965b918c59c3fb98e29a8bd2e2c9609f
Signed-off-by: Matt Wagantall <mattw@codeaurora.org>